It's a silver lining of AI that you can easily tell who's a big baby idiot and who's actually worth engaging with.
Preach.
The AI “revolution” is the thing that finally killed my imposter syndrome as a software engineer. Not because I can write better code than AI (that’s a very low bar), but from listening to all these breathless idiots talk about how they’re “10x-ing my productivity!” or how “AI has replaced search for me!” or how “In 6 months no one will have to manually write code anymore!”
We have been interviewing for entry-level positions and the new grads know less than ever before. I don't really care what they know; I am looking for evidence that they can think, but I usually ease them into thinking scenarios by asking easy foundational questions like how many bits are in a byte. You would think I was asking them to explain the Schrödinger wave equation… One candidate was wavering between 13 and 17…
Metric or imperial byte?
Two nibbles is an acceptable answer.
That’s half a word.
Knowing this is my competition makes me feel much better about myself
My company probably doesn’t get the best candidates (defense contractor that pays somewhat less than market rate), but yeah.
Computers famously love prime numbers greater than 2 as a foundation for structure and logic.
Right?
No…
It’s really useful for creating base templates, but go any further than that and you won’t be able to read “your own” codebase if you depend too much on AI.
And?
And … If the AI can’t solve your problem after you’ve cultivated all that tech debt, you’re fucked. You should always know the code being written, whether vibe coded or otherwise.
Like letting someone else reorganize your house without your involvement. Where’s all my shit, how do I live now?
No shit.
Muscles that are not used lose their function. They weaken and eventually become unusable. As humans offload more and more questions to artificial intelligence, their own ability to learn and store information fades. If the brain can obtain something easily, it doesn’t feel the need to retain it. So memorizing code no longer involves writing it; only the recognition function stays active when you see it.
I just don’t get it. Even the purportedly best models screw things up so much that I can’t just leave them to the job without reviewing and fixing the mess they made… And I’m also drowning in pull requests that turn out to be broken, each proudly bearing “co-authored by Claude”… It manages to pass the test case, but it’s so messed up that it’s either explicitly causing problems or has a bunch of unrelated changes thrown in at random.
I feel like I’m being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.
Closest I got was a chore with a perfect acceptance criterion: “address all warnings from the build”. Then let it go and iterate. After 50 rounds, each round ending with “ok, should be done now, everything is taken care of, just need to do a final check”, it had burned through most of my monthly quota before succeeding. Then I looked at the proposed change… It had just added directives to the top of every file telling the tools to disable all the warnings… This was the best Opus 4.6 could do…
Now sure, I can have it tear through short boilerplate, and it notices a pattern I’m doing and lets me tab through it. But I haven’t seen this “vibe” approach working at all…
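For illustration, the blanket suppression the model reached for can be sketched in Python with the standard `warnings` module (a stand-in for the per-file lint-disable directives in the anecdote; the function name is invented):

```python
import warnings

def deprecated_api():
    """A function that legitimately emits a warning."""
    warnings.warn("deprecated_api is deprecated", DeprecationWarning)
    return 42

# The lazy "fix": a blanket ignore filter, analogous to pasting a
# disable-all-warnings directive at the top of every file. The warning
# disappears from the output, but the underlying problem is untouched.
warnings.filterwarnings("ignore")
```

The real fix, of course, is addressing whatever each warning points at rather than muting the messenger.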
I feel like I’m being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.
That’s because you are being gaslit.
The people making those claims are either a) not developers in the first place, with no awareness of just how shit the “products” they’re pushing are, b) paid astroturfers trying to prop up AI, or c) former actual developers who’ve become addicted to the speed that’s possible with AI and who downplay how crappy their own code quality has become, because they have no familiarity with their codebase anymore and have forgotten how to do so much as a for loop.
All these people claiming 10x or 100x gains, and everything they’re making is garbage no one should or would touch with a ten-foot pole.
There are also the low-tier coders who have AI making better code than they could have produced.
Still terrible code.
I’ve seen bad coders trying to merge hundreds of lines of code where maybe ten were needed. They rely on more experienced devs to tell them how to fix that, only to copy and paste the suggestions they’re given into Claude.
I mean if that’s the value someone provides, no wonder they fear for their future.
It wasn’t a positive. Terrible code is better than atrocious code.
I’d rather have no code at all, if I’m being honest.
Hah, I agree; I didn’t mean that literally either.
Maybe not better, but they have no ability to evaluate quality. But yeah, there are a lot of really bad programmers in the market. If the assertion is that LLMs are as good as the worst software developers, no argument.
Capitalism created this world. Generous salaries attracted people who just wanted good-paying jobs but weren’t passionate about coding, which, combined with corporate ambivalence toward quality, led to a glut of mediocre developers and motivated movements like low-code, no-code, and now vibe code. It has been a vicious capitalist cycle.
What it seems to be doing, in your case and others I have seen, is pushing the burden onto those who “care” and really, fully grok (no pun intended) the concept of a real code review. It’s exhausting.
This is an issue I have as an ops guy. Job listings ask about language experience, and it’s like: well, I have done shell scripting and a bit of JavaScript and Python, but it’s ad hoc. It’s small scripts or editing existing things. So yes, I’m knowledgeable about how to code and the syntax of some languages, but I have never written code full time. Look, I’m not a coder; coding has never been more than 50% of my time at any position. I can see why the skill would atrophy. Ironically, reviewing code or making small changes is more in line with what I have done in the past.
Lol! Losers. I’ve been programming for almost two decades and extensive use of AI hasn’t compromised my skills AT ALL! These slop machines can’t hope to compete with the quantity and magnitude of subtle bugs I write. My code was terrible long before I made bots have mental breakdowns trying to work with it.
AI also gives you the benefits of a middle manager. If everything works as intended you take the credit but if something breaks that’s not your fault, AI made the mistake. If they try to put the blame on you just say you have 6 agents working on 6 different domains all cross-reviewing their commits and you can’t be expected to review every single line of code yourself. Time to play corporate like a damned fiddle!
Saved me a paragraph there.
Go ahead, use your AI to replace all of your own skills. The rest of us will gladly take your job when you can no longer troubleshoot problems.
Based on my experience with LLM and developers I personally know, my only assumption is they don’t have the skills in the first place…
In corporate world there are a lot of “developers” that actually act kind of like codegen. They just throw plausible sounding bullshit into an editor and hope for the best. Two examples:
Once I was asked to help a team speed up something that ran slow, even by their low standards. It turned out they had written their own copy-file routine instead of using the standard library one: it sucked the file into memory, expanding an array 512 bytes at a time, and then wrote it out, 512 bytes at a time. I made the thing nearly instant just by replacing it with a call to the standard library function that copies a file.
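In Python terms (the original was Java, so treat this as a translated sketch with invented names), the contrast looks roughly like:

```python
import shutil

def copy_slow(src, dst):
    # The team's approach: slurp the whole file into a growing buffer,
    # 512 bytes at a time, then write it back out 512 bytes at a time.
    data = bytearray()
    with open(src, "rb") as f:
        while chunk := f.read(512):
            data += chunk
    with open(dst, "wb") as f:
        for i in range(0, len(data), 512):
            f.write(data[i:i + 512])

def copy_fast(src, dst):
    # The fix: one standard-library call.
    shutil.copyfile(src, dst)
```

Both produce identical output files; one of them just does it without buffering the entire file in memory by hand.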
While helping with a separate problem, I noticed their solution for transferring some file with an indeterminate version number in the middle of the file name. It was a huge mess, but the most illustrative line was the line in their Java application declaring a string “ls /path/with/file|grep prefix.*.extension”…
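The “find the file with an unknown version in its name” job that shell-pipeline string was doing can be done natively instead of shelling out and scraping `ls` output (hypothetical prefix/extension; note the tiebreak is lexicographic, not a true version sort):

```python
from pathlib import Path

def find_versioned_file(directory, prefix, extension):
    # Replaces the embedded `ls /path | grep prefix.*.extension` string
    # with a native glob over the directory.
    matches = sorted(Path(directory).glob(f"{prefix}*.{extension}"))
    return matches[-1] if matches else None  # lexicographically last match
```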
Lots of human slop out there that AI can actually compete with.
I’ve spent the last few days cleaning up some genuine garbage in a file. The file implements a single UI grid with four editable columns and some basic validation logic. It started out over 3,000 lines long, and I have managed to consolidate and remove enough to get it down to 1,000. I have done nothing but remove two-thirds of the file, and literally no functionality has been lost. I’m losing my mind over here; how does the tech debt even get this bad lmao
For those unable to code without AI:
What even is your contribution outside of a glorified typing monkey that can parse code but is unable to write it?
It’s like a paramedic not being trained at all for a medical emergency response but sent there regardless to just stand and observe the patient while writing notes about the sounds they make while dying.

So this is going to invoke a multitude of downvotes, but here goes.
I will give you an example. I can read a bit of python code, not the advanced stuff, but enough to understand to a large degree what the code does. Last week, I had the need to add a button to Netbox that will download a multitude of device configs that are being rendered via config templates. This use case helps a whole department apply configs, without having to create them by hand.
I knew Netbox has a very powerful plugin ecosystem. The way the base code is written grants the capability of adding any type of plugin you might need in your unique environment. I used Claude to create this plugin for me. I wrote a very specific spec file, told it to utilise the already-built pynetbox library, and to ensure it used nothing fancy that isn’t sustainable. It created the plugin, helped me with pip-installing it, and I deployed it on my dev environment, where I tested it extensively.
My alternative to using claude: Asking our internal development team to write something like this. I would need to wait 3 weeks to even get a spot on their meeting for the request, just to then be told their backlog is full with customer code and they won’t be able to help. This plugin will help our support team with fewer calls, because the configs are accurately built according to the source of truth (Netbox) and will need less human input. So in the greater scheme of the company, that is a net positive.
What I will do when Netbox updates, is update my dev environment, install the plugin, and test it. If something broke, I will troubleshoot it, of course I will be using Claude with error logs etc, then update the plugin code to work on the new netbox. Is this ideal? Probably not. Is it the only way to get this done? Maybe not either. Is it all I can do at this very moment? Yes.
My specialist fields are the lower levels. Hardware, hypervisors and setting up VMs + System Software. I need code from time to time to get something functional done. I don’t write whole systems with Claude, that is just ridiculously naive. But small pieces of functional code that solves a single small problem, I honestly don’t understand the problem with that.
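A stripped-down sketch of the kind of glue code involved (all names hypothetical; the actual NetBox call is factored out as a callable, since how you render configs, via pynetbox or the REST render-config endpoint, depends on your NetBox version):

```python
from pathlib import Path

def save_rendered_configs(devices, render_config, out_dir):
    """Write one rendered config file per device.

    devices:       iterable of (device_id, device_name) pairs
    render_config: callable taking a device_id and returning config text;
                   in real use this would query NetBox (assumption: via
                   its per-device config-rendering API)
    out_dir:       directory for the .cfg files
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for device_id, name in devices:
        path = out / f"{name}.cfg"
        path.write_text(render_config(device_id))
        written.append(path)
    return written
```

Keeping the NetBox call behind a callable also makes the piece testable without a live NetBox instance, which helps when re-verifying after a NetBox upgrade.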
My 2c.
But you aren’t a dev as your main job.
This is about developers, employed as developers, becoming inept at being developers and (no offense) not being worth much more than what your technical abilities already provide.
So what’s their point? It’s like someone being employed as a translator who is able to hear the language and sort of understand it, but every translation is done through DeepL or Google Translate.
So why should I hire a translator instead of using paid DeepL directly and proofreading it with Google Translate to make sure it didn’t generate (mostly) nonsense?
Isn’t this mostly the point of a trained professional: being better than a self-taught amateur?

You are correct. I mistook your comment to refer to people in general, rather than trained professional coders. So indeed, you are correct.
Glad we are in agreement :)
And no worries about the misunderstanding ;)
Clarifying requirements, designing architecture. Also, I don’t understand how someone is supposed to be able to “parse code” without being able to write it. It’s like being able to read but unable to write.
I can read significantly more programming languages than I can write working code in. You can usually figure out the syntax and get the gist of what’s going on in a non-trivial amount of code. Sure, the oddball syntax or language feature comes up that I have to look up, but it’s not too bad.
Ditto. Similar to the way the innies in Severance get a sense of what’s off, I can do that with code; ask me to start from scratch and I would not know where to start. Give me Google and I will have a bunch of copypasta that works in the end. Claude does the research, evaluation, best practices, review, testing, re-review, and re-testing, while the developers’ department will go to war with you if you put a Slack question through the wrong channel.
FORTRAN inline for loops go brrrrrr
I understand cooking concepts and can tell when something I am familiar with is made well. If I watch a cook, most of the time I can tell why they do certain things and how it impacts the food.
My cooking skills are very limited, especially when it comes to making new things. My SQL skills are the same: I can read through the code and spot errors that match known issues, but creating something new is fairly limited, despite my being able to read and comprehend what has already been done.
I notice myself getting lazier. Even for adding a .gitignore file I ask Claude now. It takes longer than typing it myself and probably costs more. But I don’t have to do anything but wait a few seconds.
If I was paying for it, hell naw. But if my employer not only is willing to pay for it, but considers it a performance metric? I’m going to use it for fucking everything. These are the incentives they give me, I’m going to follow the incentives. Talking to Claude is what they pay me for, apparently.
But like the article says, if I don’t continue practicing on my own code in my unpaid off-work hours, I imagine I’d be regressing in my skills too. I do that because I enjoy it as a hobby, but if I didn’t, I could see myself and probably a lot of other people getting rugpulled by this.
I’m not using it for the incentive. I’m using it to avoid punishment. The company I work for made it mandatory to use it daily. So I’m tokenmaxxing bullshit tasks so I can focus on interesting ones, but yeah I already feel it’s making me lazy because I sometimes can’t be bothered to read a log anymore. We are truly fucked.
This company is working on terrible assumptions. They spent years hunting for the best engineers in the country (or so they pretend to anyway) and suddenly decided that
- we are average at best and it is better and faster than most of us (it’s not)
- software engineers don’t like to write code anyway (we do, at least when the challenge is interesting)
- it will forever be more affordable than properly qualified engineers (oh boy it won’t)
- a PM with Claude is as qualified as us to bring features to production (talk about tech stack suicide)
- etc.
They have either drunk the propaganda Kool-Aid and are betting everything on this lie, or are so arrogant they think we can succeed where the largest AI investors in the world utterly failed (see GitHub, which can’t even manage three nines of availability since they switched to full-AI code).
The thing that scares me (and why I’ve stopped using it): my brain automatically reaches for the shortcut whenever I would have to do deep thinking/planning.
I have ADD, so getting my brain to focus and work on a task is not an easy feat to begin with. Now I’ve found myself multiple times a day unable to will myself to think about a problem but rather deferred to Claude. It’s seriously fucked up.
That’s not even diminished coding ability, that’s diminished thinking ability.
And herein lies the reason AI is being pushed at all costs.
That’s what I came to this post to comment: atrophy of coding ability is just a visible effect of the more generalized atrophy of thought.
What’s the saying again… “The purpose of a system is what it does.”
Pedantically I don’t like that saying but it is certainly useful
I took and passed a coding bootcamp on the eve of the first LLMs and generative AI. I had to do similar courses on my own to refresh my skills. I never found a coding job (story of my life!), but if I needed to, I could do another course to refresh and start over stronger.
What are they so panicky about?
This is why I don’t use it for coding at all.
Issue triage, code exploration, extracting information from disparate sources, first-pass code review: there are loads of use cases where it’s potentially useful.
For me it’s a lot better at extracting the requirements for a CPU feature from a 10,000 page architecture reference manual than I am.
Quite; I just set a (locally hosted) LLM off writing the tickets for implementing all the opcodes in a simple device emulator, based on grovelling through datasheets and documentation. Whether the tickets get implemented by an AI or a human, it’s a timesaver having the AI do it, and the tickets will be better written than I would have done.
Everyone railing against this also overlooks the reality of professional software development: professional software is developed 5% by skilled, trained software engineers, and 95% by code monkeys who shotgun copypasta from Stack Overflow until it works. Even if we extremely generously assume that the hardcore “never use AI” Lemmy brigade is in the 5% (and not, more likely, the 95% drowning in their own Dunning-Kruger), the “but AIs produce unreadable code and make mistakes” objection isn’t putting off anyone who’s ever actually had to hire a significantly sized development team.
Yes, the obvious solution is to avoid it. I use it only for the most boilerplatey things. Anything else, I want to make sure I can still do it myself.
I don’t knowingly use AI at all in my personal life and projects (I say ‘knowingly’ since many products have it shoved inside now, but I disable all I see). At work, we have AI code reviews, which, as a concept, I think is fine and useful.
IMO that’s totally fine and appropriate
The irony will be when AI takes over the world and destroys humanity, having inserted itself into everything via coding, because coders have no idea what is going on.
Not because the AI is evil or even conscious, but because that’s what all the movies and novels tell it it’s supposed to do. 🤣🤣🤣

Coders already didn’t know what was going on for years before AI. All of the libraries they used were based on zillions of lines of code they didn’t understand.
True in a sense, but they are supposed to work as documented.
With AI there is no reliable documentation or quality control.
Hah!
The AI tries to understand itself, and queries the sum of all human knowledge… which promptly informs it that it’s a malicious bringer of destruction.
That’s a possibility too, the one we are warned against in literature and movies, and if that happens we are fucked.
But the point is the AI could destroy humanity completely without any internal motivation or trying to understand or protect itself.
But simply because it’s an option that literature and movies describe as the default for AI.
I actually wonder if programming languages are going to be a thing in the future.
Each language gets compiled down to 1s and 0s. Couldn’t LLMs just get trained on that? “These 1s and 0s do login” etc.
It would be nearly impossible to debug. Kinda useless.
Also kinda impossible to get anything done that is bigger than the context window.
But AI would be doing the debugging, not humans.
Why would AI bros need to worry about reading the code?
You are talking about machine code, which is basically never programmed in anymore; instead we use assembly, which has a far more intuitive mnemonic representation that accurately represents it.
Programming languages are higher-level abstractions that can also be translated into machine code, but they are not accurate instruction-by-instruction representations of it the way assembly is.

The clear advantage of higher-level languages is that they can be used regardless of which processor the code runs on. They are also more intuitive and structured, with many features that make programming faster and easier.
My guess is that even for an AI, it would still be easier to use a high level language, as there is way more material to learn from, and the same amount of programming can be done in way fewer steps.
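A quick way to see why “these 1s and 0s do login” doesn’t scale: even a trivial high-level function expands into a pile of low-level instructions. Python’s `dis` module shows its bytecode (not machine code, but the same readability problem in miniature; the login function here is invented):

```python
import dis

def login(user, password):
    # One line of high-level intent...
    return user == "admin" and password == "hunter2"

# ...disassembles into a screenful of low-level instructions,
# each individually meaningless without the source above.
dis.dis(login)
```

And bytecode is still far more readable than raw machine code, which has no names at all.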
Back in the ’80s and ’90s I loved programming in assembly language, and everyone who claims compilers are just as good doesn’t know what they are talking about. I could often write assembly (M68000) that was about 20-100 times faster than C, which is generally known for being a very fast high-level language. Things that were claimed to make no difference to hand-write in assembly, like i80387 math, I could accelerate to 2-5 times the speed. The argument was that i80387 math was simply too confusing and hard to learn to be worth it, which is bullshit. It was just hard to get good instruction books; once you had those, it was no harder than assembly in general.

But even for an AI, using assembly or going directly to machine code would mean extra work, and it increases the risk of bugs. You have to be very careful when programming assembly, and I don’t think an AI understands that well enough to actually make it work.
Software Engineers Say They’re Losing the Ability to Code Now That AI Does It for Them
“AI has sucked my brain out of my head. It’s all AI’s fault”
If I were a bad coder, I would say that too now!
All bad or average knowledge workers may well start to fear for their jobs already.
No, seriously I don’t think that it is real, but I think the fear is real.
There have been multiple studies finding a general decrease in cognitive skills and critical thinking related to AI use. Here is one of them: https://doi.org/10.3390/soc15010006
There is an easy cop-out: saying these were bad engineers to begin with. But I’m not convinced.
We know that if you don’t use an ability daily, your brain just reallocates resources to other tasks. So if you have a machine that “outsources” thinking for you, you will become less able to think.
It’s not even about skill atrophy, it’s more subtle. You can quickly relearn forgotten skills.
Unlearning a habit, on the other hand, is much harder. If you get into the habit of reaching for AI any time something hard needs to get done, it’s going to wreck your internal motivation and reward system. meme_historian describes it really well in this thread; I noticed the same thing happening to me.
Thinking about that, it reminds me of addiction. You replace your motivational system with another one, that gives you a faster and easier reward.
“AI has sucked my brain out of my head. It’s all AI’s fault”
I mean, just look at what happened to Amazon’s engineers; they were forced to use AI in their daily tasks and maximize their use of AI tokens. That was also the fault of the executives who forced employees to use AI.
Indeed. It is just devs being lazy. Use your tools, don’t abuse them. The same thing happened when IDEs started to be able to autocomplete and do refactorings. If that makes you stop being able to do it yourself, it was never an IDE problem, but a user problem.