I appreciate Simon’s balanced take on how LLMs can enhance a project when used responsibly.
I’m curious, though—what are this community’s opinions on the use of LLMs in programming?
The problem with this article is that he stresses you need to check the code and step in when needed, yet relying heavily on LLMs will invariably erode your ability to tell what's wrong, and eventually even to read the code at all, since it will produce code using libraries you've never experimented with (the LLM can just write it for you).
Also, "vibe-coding" is stupid af. It takes the human element out altogether: you accept all changes without reading them, then paste errors back in without any context.
The issue isn't whether you can get good results or not. The issue is the skills you are outsourcing to a proprietary tool: skills you will either never learn or will eventually forget. Getting information out of documentation, designing an architecture, understanding and replicating an algorithm, and so on.
You will eventually start struggling with critical thinking; there are already studies about that.
Of course, if you use it in moderation and don’t rely on LLMs too much, you should be ok.
But how did that work out for everyone with short-form content and social networks over the last ten years? How is your attention span doing? Surely we all managed to consume short-form content in moderation, since we knew the risks to our attention span, right?
It's funny: I've had an LLM give me the wrong answer to my questions every single time.
The first time, I couldn't remember how to read a file as a string in Python, and it got me most of the way there. I trusted the answer, thinking "yeah, that looks right", but it was wrong: I had just got the io object back and never called the read() function.
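For illustration, a minimal sketch of that failure mode (I'm reconstructing the snippet from memory, so the exact code the LLM produced, and the file name, are placeholders):

```python
from pathlib import Path

# What the suggested answer amounted to: open() returns a file object
# (an io.TextIOWrapper), not the file's contents.
wrong = open("notes.txt")  # "notes.txt" is a placeholder path; this is NOT a str

# What was actually needed: call read() to get the contents as a string.
with open("notes.txt", encoding="utf-8") as f:
    contents = f.read()

# Equivalent pathlib one-liner:
contents = Path("notes.txt").read_text(encoding="utf-8")
```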
The other time it was an out-of-date answer. I asked it how to do something in Bevy and it gave me an answer that was deprecated. I can sort of understand that one, though; Bevy is new and not amazingly documented.
On a different note, my senior, who is all PHP (no Python, no Bash), has used LLMs to help him write Python and Bash. It's not the best code; I've had to optimise his Bash to keep it from taking 25 minutes on CI. But it has definitely been useful to him, given he was hired as a PHP dev.
I've almost completely stopped using them, unless I'm stuck at a dead end. In the end, all they have done is slow me down and make me unable to think properly anymore. They usually write way too much code, especially the tab-complete stuff, so I end up deleting code after hitting tab (what's the point, even? IntelliSense has always been really good, and now it's somehow worse). They're usually wrong unless prompted multiple times. People say you can use them to generate boilerplate, but just use a language with less boilerplate, or none, like Kotlin. And they tend to introduce very subtle bugs, or they solve a problem that's already documented on Stack Overflow; but I wouldn't be reaching for an LLM if I could just Kagi it, so they end up solving the wrong thing.
One thing it’s decent for, if you don’t care about code quality, is converting code to a language you do not know. You’re not going to end up with good idiomatic code at the end, but it will probably function.
None of this is to say that LLMs aren't amazing, but if you start to depend on them, you very quickly realize that your ability to solve more complex problems atrophies. Then, when you hit a difficult problem, you waste far more time solving something that might have been simple for past you.
Had a subscription, unsubscribed 6 months ago. Simplistically:
- They create bad code.
- You stop learning. You want to program? Learn.