Mind you, ChatGPT has written some pretty good code for me when I wanted to use an Xbox 360 USB controller to change the lights on an RGB matrix on a Pi - being able to describe what I want in plain English and have it spit out the code… well, that’s remarkable!
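For anyone curious, the kind of glue code being described might look something like the sketch below: mapping Xbox 360 button codes to colours for an LED matrix. This is a hypothetical illustration, not the actual code ChatGPT produced - on a real Pi you'd read events with a library such as evdev and drive the panel with something like rpi-rgb-led-matrix; here only the standalone mapping logic is shown.

```python
# Hypothetical sketch: map Xbox 360 controller button codes to RGB
# colours for a small LED matrix. The button codes below are the
# standard Linux evdev codes for this pad (BTN_A=304, BTN_B=305,
# BTN_X=307, BTN_Y=308) -- verify against your own hardware.

BUTTON_COLOURS = {
    304: (0, 255, 0),    # A -> green
    305: (255, 0, 0),    # B -> red
    307: (0, 0, 255),    # X -> blue
    308: (255, 255, 0),  # Y -> yellow
}

def colour_for_button(code, default=(0, 0, 0)):
    """Return the (R, G, B) triple for a button code, or `default`."""
    return BUTTON_COLOURS.get(code, default)

def fill_frame(width, height, colour):
    """Build a full-matrix frame (list of rows) filled with one colour."""
    return [[colour] * width for _ in range(height)]

if __name__ == "__main__":
    # Pressing B (code 305) would fill a 32x16 matrix red.
    frame = fill_frame(32, 16, colour_for_button(305))
    print(frame[0][0])  # (255, 0, 0)
```

On real hardware the `fill_frame` output would be pushed to the panel inside an event loop reading the controller; the mapping step itself is exactly this simple.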
Yes, remarkable. And deeply depressing. Won't be long before writing code will be like they do repairs now: swap PCBs until it works and then it's job done. No-one actually understands what they're doing, and no-one has a clue how to fault-find to component level. So there will be lots of code that the authors don't understand (if, indeed, they even understand the language it's written in).
Of course, there will be a few exceptions, since someone has to build the really low-level shoulders that the AIs stand upon. But they won't be making the implementation decisions. Decisions such as using some complex home-grown protocol over HTTP (because the high-level implementer knows how to create websites) rather than an existing file-transfer protocol like FTP or TFTP (yes, that happened to a project of mine once).
It’s not ALL remarkable; it takes many, many attempts, with me telling it off and asking it to try again, before working code comes out. Yeah, I’d rather learn how the code works, but as a quick solution for a pointless-but-fun toy experiment, it worked.
Never in history has human ego been SO gigantic and full of itself; ChatGPT is an example of this, and the notion that it can “replace” us or is “better” than a human mind is lunacy in the minds of those who develop it. It’s OKAY, and answers a few questions for me, but since I have seen it make up stories MANY times, it’s more of a farce than a useful tool.
How sad that so many companies revere this garbage SO much. Another Silicon Valley fad and a fantasy they want to FORCE to come to pass. Lunatics.