AI coding assistants like GitHub Copilot promise to increase developer productivity. But recent studies show a decline in software quality, security, and reusability — and senior developers confirm it.
The part about forgetting how to code is the crux of this. This also means that junior coders using Copilot won’t learn how to code properly in the first place.
Nevertheless, given the shortage of developers, Copilot will allow even more bodgy coders to enter the market, but they will not be the ones working on innovative, large-scale software.
Low-code visual development may win out against Copilot-generated code, because bug-free solutions are easier to build with those tools, and code reviews become a thing of the past.
Innovative, large-scale solutions will continue to be dominated by pro coders, with Copilot turned off.
Maybe AI can be used to do code reviews, to highlight areas that need refactoring or correction.
Matthew, I agree with you. AI coding assistants are just one tool in the toolbox. They are not a silver bullet.
Excellent article. I think the key thing is that these LLMs don't know anything except the symbol patterns they use to derive answers. Here is an interesting article in Apple Insider that talks about how including additional, irrelevant details changes the output from the model when it really shouldn't. The bottom line is the models are just statistical engines without any way to tell correct from incorrect.
https://appleinsider.com/articles/24/10/12/apples-study-proves-that-llm-based-ai-models-are-flawed-because-they-cannot-reason
I wonder if we might be doing it wrong: maybe instead of code creation, we should look to AI for code documentation and code review. Having run engineering teams for longer than I want to admit, I find those are always underappreciated tasks, and ones I would love us to collectively push AI to do better, faster, and more consistently. Imagine a "security by design" process that included consistent code review. That would be useful.
Thanks Meg. I agree, and it would not surprise me if some development teams are already going in this direction.