| ChatGPT and the world of programming |
| Picuino:
OpenAI has recently released a new version of GPT (GPT-3.5) called ChatGPT. https://chat.openai.com/chat I am testing it, and so far it has helped me write some paragraphs of technical documentation and reformat information from a table into HTML. But it has many more possibilities, including generating code automatically and finding bugs. I think this tool opens up a whole new world of possibilities, and as it improves it may reduce the demand for programmers, since it will make them more productive. I also think it could take away Google's position, because it is able to respond better to all kinds of questions. Who else has tried it? What do you think of this new ChatGPT? |
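The table-to-HTML task Picuino describes is exactly the kind of boilerplate transformation these tools handle well. As a point of comparison, here is a minimal Python sketch of doing the same thing by hand; the row data is invented purely for illustration and is not from the original post.

Code:
# Minimal sketch: turn tabular data into an HTML table.
# The 'rows' content below is made up for illustration.
rows = [
    ("Part", "Value", "Qty"),
    ("R1", "10k", 4),
    ("C2", "100n", 2),
]

def to_html(rows):
    lines = ["<table>"]
    for i, row in enumerate(rows):
        tag = "th" if i == 0 else "td"   # render the first row as a header
        cells = "".join(f"<{tag}>{c}</{tag}>" for c in row)
        lines.append(f"  <tr>{cells}</tr>")
    lines.append("</table>")
    return "\n".join(lines)

print(to_html(rows))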
| tom66:
I have played with similar code-generation AIs (though I haven't yet had a go with GitHub Copilot, which is the one most people are talking about). My overall impression is that if you give it a good description of the task it can do the job well, but it clearly doesn't understand anything beyond how to connect fragments of information together. What surprised me is that it can handle esoteric requests: for instance, you can ask it to write some Verilog to blink an LED at a certain fraction of some clock, and it will mostly get it right, but then sometimes include a synchronous or asynchronous reset. That wasn't part of the request, but some code examples include those, so occasionally the output will too. Also, if you ask it to produce code and then run it, about 50% of the output will have a syntax error or a nonsensical one, like a for statement with a condition that is never reached.

It also surprised me that it can write GStreamer pipelines (anyone who has dealt with embedded video is likely to know how to do this, but there isn't much about it on Google, which suggests it has read a lot of documentation and other source code to extract samples). Many of these pipelines will at least initialise correctly, but only about 25% of them actually work.

There's definitely a place for the lower-level programming these tools do, and I could see, for instance, writing test benches for software becoming a bit obsolete eventually, because it can be automated. But at the higher level, much of what programmers do is translate a request and requirements written in English into source code, which covers everything from selecting the tools and language to use, to the methods and data structures, to the lower-level implementation. I think it's unlikely this will be automated any time soon, as it would require a true generalised intelligence, which is sort of the holy grail of AI, and I don't think we're getting close (it would very likely require data-centre-sized clusters to do what the human brain does, in any case, and raise significant moral and ethical concerns about conscious AIs). |
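For readers who haven't met GStreamer, the pipelines tom66 is describing are text strings of chained elements. A minimal sketch of launching one from Python via the standard PyGObject bindings is shown below; the videotestsrc pipeline is just an illustrative example, not something generated by the AI.

Code:
# Minimal sketch: parse and run a GStreamer pipeline string from Python.
# Assumes GStreamer 1.x and the PyGObject bindings are installed;
# the pipeline string is an illustrative example, not AI output.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=100 ! videoconvert ! autovideosink")
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or an element reports an error.
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)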
| hans:
It looks like an impressive AI, but I'm not convinced it's anything besides a tech demo of the future. Anyone can confidently write BS, and that is what the bot seems to do. Only if you give it precise instructions on what to do can it perform well, in the classic "computers are good at what humans are bad at" way. Obviously it can digest large amounts of assembly and code in a fraction of a second [compared to hours for a human], and in future probably with even more precision. But that's similar to how you can generate 1000 prime numbers in a fraction of a second on even an 8-bit AVR, while finding them manually takes hours.

The problem I have with putting any kind of trust in these tools is how they were created. If they were trained on human-generated datasets, e.g. data from the WWW, then I would consider that really low quality. Again, anyone can write BS or even partially true statements, and it happens a lot. Moreover, we humans are severely cognitively biased at all times, so we are a poor case study to learn from. For example, research isn't revolutionised by one paper; it typically takes several papers and half a dozen years before some finding is accepted as common fact. If you only look at the Dunning-Kruger effect, you'll start to see why: someone can be confident about something because they don't know what they don't know, so they don't know they might be wrong, as they cannot oversee everything at once. We go through daily life like this, and correcting ourselves requires internal reflection and critical thinking (e.g. hindsight).

You could hope for an AI to eventually "know everything" and so not be affected by this phenomenon. But in order to reach that, it also needs to be critical and reflective. And how much is this AI being critical and reflective, and how much is it just interpolating things together? Given the amount of BS, sarcasm, irony and fake news that goes around, I wouldn't trust any essay or documentation written by an AI for the coming years. |
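hans's prime-number comparison is easy to reproduce. A minimal sketch of generating the first 1000 primes follows, written in Python on a PC rather than C on an AVR, purely to illustrate how little work the computation is for a machine compared to doing it by hand.

Code:
# Minimal sketch: first 1000 primes by trial division.
# Runs in well under a second; the point is only that the computation
# is trivial for a machine and tedious for a human.
def first_primes(n):
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(1000)[-1])   # 7919, the 1000th prime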
| ataradov:
All I've seen those "AI" things do is spit out boilerplate code. The reality of programming is not doing trivial new things like this, but maintaining an existing code base. What are you going to do if you need to add a button into an existing mess of code generated by previous AIs? What if you need to fix a very specific bug that happens once in a blue moon? |
| RoGeorge:
It's uncanny how many things a GPT can do offline, with an ordinary desktop and a 10 GB generic trained model (the model was from EleutherAI). It can talk, chat, translate, speak various languages, and generate code, all with the same GPT and the same model. It was very easy to install and then use from Python. I posted a few examples here: https://www.eevblog.com/forum/programming/ai-hello-world/ I expect a bigger model running on an entire datacenter to do much more. |
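RoGeorge doesn't say exactly which toolchain or checkpoint he used. A minimal sketch of the usual route for running an EleutherAI model locally from Python is shown below, assuming the Hugging Face transformers library and the GPT-Neo 2.7B checkpoint (roughly a 10 GB download); both are assumptions for illustration, not details from the post.

Code:
# Minimal sketch: run an EleutherAI model locally with Hugging Face transformers.
# The library and the gpt-neo-2.7B checkpoint are assumptions; the original
# post does not say which model or toolchain was actually used.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neo-2.7B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "def hello_world():"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))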