By Andrew Zuo
There’s a new IDE out called Cursor. Although as I said before:
Or perhaps ‘IDE’ is not the correct term. [Cursor] is part of a new line of developer environments that are much faster because they are modular. They consist of a text editor portion and a ‘language server’ portion. The language server is, as the name suggests, like a server, meaning it runs asynchronously. It does not block the text editor, so the text editor can run lightning quick. So perhaps these should be called ‘modular development environments’ or MDEs.
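To make that split concrete, here’s a minimal sketch (mine, not from the quoted post) of how the ‘editor’ half talks to the ‘language server’ half: it launches the server as a separate process, sends it a JSON-RPC request, and reads replies asynchronously so it never blocks. It assumes gopls, Go’s language server, is on your PATH; any LSP server works the same way.

```go
// Minimal editor/language-server sketch. Assumes gopls is installed.
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
	"time"
)

func main() {
	// The language server is just another process speaking LSP over stdio.
	srv := exec.Command("gopls")
	stdin, err := srv.StdinPipe()
	if err != nil {
		panic(err)
	}
	stdout, err := srv.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := srv.Start(); err != nil {
		panic(err)
	}

	// Stream the server's replies in a goroutine so the "editor" never blocks on them.
	go io.Copy(os.Stdout, stdout)

	// A bare-bones LSP initialize request: a Content-Length header plus a JSON-RPC body.
	body := `{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"processId":null,"rootUri":null,"capabilities":{}}}`
	fmt.Fprintf(stdin, "Content-Length: %d\r\n\r\n%s", len(body), body)

	// The editor stays free to handle keystrokes here while the server does the heavy lifting.
	time.Sleep(2 * time.Second)
}
```

The server answers the initialize request on its own schedule; the main goroutine, standing in for the editor, never waits on it. That asynchrony is the whole point of the design.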
Or perhaps not that new. The IDE itself is just a fork of Visual Studio Code, the OG Modular Development Environment. And, in fact, Cursor doesn’t add much, if anything, to VS Code. The only notable feature it adds is AI. Also, when researching this post, Google told me this:
No, Cursor is not just Visual Studio Code (VS Code) plus AI, but rather a standalone fork of VS Code that’s built around AI.
Sounds like VS Code + AI to me. It’s even compatible with VS Code extensions. Which brings me to my core question: Why is Cursor its own program? Why not just make it a VS Code extension? I mean, GitHub Copilot does it.
Oh, Copilot is using hidden APIs.
So that’s why Cursor did it. Now, if Cursor and GitHub Copilot just continued development as normal, that wouldn’t be much of a story. But there seems to be a growing trend to make VS Code more and more about Copilot.
Now, in all fairness, there is more new stuff in the VS Code update that is not Copilot. Mostly icons. But it is concerning how much Copilot has infiltrated VS Code. Especially as VS Code is supposed to be modular: all the heavy-duty IDE stuff is done using language servers, which are separate processes; that’s why it’s so fast. To put all this dedicated Copilot code in VS Code? It feels wrong.
Especially as I’m not a big fan of AI coding. I mean, I do code with AI, but that’s for stuff I’m too lazy to write or not very familiar with. As you increase the amount and complexity of the code you write with an LLM, you also increase the risk of introducing subtle bugs.
And as I argued previously, these hidden bugs that LLMs can introduce can cost you a lot of time; in that post I talked about how they could make you a -10× developer.
LLMs are also limited in their ability to debug:
This is exemplified whenever I try to debug code with an LLM. It’s not thinking for itself, it’s just regurgitating information it found somewhere else and is like, “Did you try this?” even if the ‘this’ is not really related to what I’m currently doing and unlikely to help. It has no understanding of why the problem is happening, it just tries to apply solutions that have historically worked on related problems. And ultimately it just leads you in circles.
Remember Devin? The ‘AI software engineer’ that was supposed to shake up the world? Turns out that no, it didn’t shake up the world; it was a total scam. Or maybe scam is too strong a word. It was overhyped and it underdelivered. Just like so many other ‘AI programming features’. Like OpenAI’s Canvas. Like Copilot:
Many developers say AI coding assistants make them more productive, but a recent study set out to measure their output and found no significant gains. Use of GitHub Copilot also introduced 41% more bugs, according to the study from Uplevel, a company providing insights from coding and collaboration data.
And yet people continue to invest in AI programming features. Now, before this, I was like, “Who cares about AI tools? Just don’t use them.” This is something I also discussed in a recent post on Flutter:
I originally did not see what the big deal was with Foundations. Who cares if your company is operated by a foundation or a massive company? No one… as long as the market is doing well. But we’ve seen a lot of turmoil lately in the software development field. People are being laid off left and right. The Flutter team also had some trouble recently. Although I’m not sure what the extent of the damage is.
When you are operating inside another company, as in the case of GitHub, which is itself inside Microsoft, you are at the whims of management. This can be very bad, as in the case of Flutter, or very good, as in the case of Copilot. The problem is that the attention Copilot is getting is also sucking all the air out of other coding tools, as seen in the recent Visual Studio Code update.
I wish this Visual Studio Code update were an isolated event, but it doesn’t look like that’s the case. It looks like Copilot will continue to steal resources away from other areas of programming.
And it’s not just Visual Studio Code. Have you noticed how much Gemini is being integrated into Google’s tools? It’s pretty ridiculous. I suspect that Gemini is starting to suck the air out of other Google projects like Firebase. Maybe that’s why Flutter isn’t doing as well as it could be: they’re wasting time on AI.
And on top of this, programmers’ skills are atrophying. This is something I’ve feared for quite a while, and I’m starting to think it’s coming true. If you’re not actively coding, or are coding less, as many ‘senior devs’ say they are, your skills are deteriorating.
Over-reliance on AI coding tools is dangerous. And it’s also insidious. Why do people continue to rely on these things despite so many studies saying they hurt? Because people hate work. So it should come as no surprise that the area AI coding hurts the most is not tool development, and it’s not current programmers; it’s education.
There have been multiple pieces written about how AI tools have made students lazy and kept them from learning anything. Like this one:
In my most recent job, I taught academic writing to doctoral students at a technical college. My graduate students, many of whom were computer scientists, understood the mechanisms of generative AI better than I do. They recognized LLMs as unreliable research tools that hallucinate and invent citations. They acknowledged the environmental impact and ethical problems of the technology. They knew that models are trained on existing data and therefore cannot produce novel research. However, that knowledge did not stop my students from relying heavily on generative AI. Several students admitted to drafting their research in note form and asking ChatGPT to write their articles.
As an experienced teacher, I am familiar with pedagogical best practices. I scaffolded assignments. I researched ways to incorporate generative AI in my lesson plans, and I designed activities to draw attention to its limitations. I reminded students that ChatGPT may alter the meaning of a text when prompted to revise, that it can yield biased and inaccurate information, that it does not generate stylistically strong writing and, for those grade-oriented students, that it does not result in A-level work. It did not matter. The students still used it.
And this post:
- The crutch is a dangerous approach because if we use a crutch, we stop thinking. Students who use AI as a crutch don’t learn anything. It prevents them from thinking. Instead, using AI as co-intelligence is important because it increases your capabilities and also keeps you in the loop.
Easier said than done. As we’ve seen, no amount of warnings will change how people use AI. They’re going to over-rely on it, and it is going to cheat people out of an education.
Now, I’m not all negative on AI. I think that AI coding can improve some things, and I’ve learned a lot from AI: how to use different packages, for instance. The other day ChatGPT taught me how to use Go’s GoQuery to parse HTML just like Dart’s html package. It’s a massive maintainability win for me because now I can make my Go and Dart code almost line-for-line identical in their function despite looking nothing like each other.
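For what it’s worth, here’s the flavor of thing I mean (a toy example written for this post, not my actual code): GoQuery gives Go a CSS-selector API that maps closely onto the querySelectorAll-style traversal in Dart’s html package.

```go
// Toy GoQuery example: parse an HTML snippet and iterate elements by CSS selector.
package main

import (
	"fmt"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	page := `<ul><li class="item">Go</li><li class="item">Dart</li></ul>`

	// Parse the HTML from any io.Reader.
	doc, err := goquery.NewDocumentFromReader(strings.NewReader(page))
	if err != nil {
		panic(err)
	}

	// Select and iterate with a CSS selector, much like querySelectorAll in Dart.
	doc.Find("li.item").Each(func(i int, s *goquery.Selection) {
		fmt.Println(i, s.Text())
	})
}
```

The Dart version of the same loop is a querySelectorAll over the parsed document, which is what makes the two codebases so easy to keep in step.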
The problem is when you try to shoehorn AI into places where it doesn’t belong, when you try to overuse it. In one of the Harry Potter movies, Dumbledore says everyone has to choose between what is right and what is easy. And he’s right. AI coding is easy. But is it right? Sometimes. But in the vast majority of cases, AI coding hurts more than it helps.
And it seems to me that these situations where it hurts more than it helps are popping up more and more often. We have a chronic over-reliance on AI tools, and at the same time companies are funneling all of their development into creating AI tools that make things easy at the cost of correctness.
The way I see it, more than ever now, AI is killing coding.