Discussion about this post

Matthew King:

The part about forgetting how to code is the crux of this. This also means that junior coders using Copilot won’t learn how to code properly in the first place.

Nevertheless, given the shortage of developers, Copilot will allow even more bodgy coders to enter the market, though they will not be the ones working on innovative, large-scale software.

Low-code visual development may win out over Copilot-generated code, because bug-free solutions are easier to build with such tools and code reviews become a thing of the past.

Innovative and large-scale solutions will continue to be dominated by pro coders, with Copilot turned off.

Maybe AI can be used to do code reviews, highlighting areas that need refactoring or correction.

Andy Schlei:

Excellent article. I think the key point is that these LLMs don't know anything except the symbol patterns they use to derive answers. Here is an interesting article in AppleInsider about how including additional, irrelevant details changes the output of the model when it really shouldn't. The bottom line is that the models are just statistical engines with no way to tell correct from incorrect.

https://appleinsider.com/articles/24/10/12/apples-study-proves-that-llm-based-ai-models-are-flawed-because-they-cannot-reason
