ChatGPT and the world of programming
sleemanj:
Like half of the geek world I've been playing a lot with ChatGPT in the last few days.
It's obviously a long, long way from perfect, especially for English prose (blog posts, stories, articles, anything less staid than legal-speak). While it's quite "imaginative" and adaptable, it has a somewhat characteristic, un-human style (probably not helped by its content filters). But, considering what was possible before, it's simply stunning.
For code generation, it's pretty incredible what it can do. It's essentially language agnostic: it will give anything a try and often produces good results. You can even invent entirely new, really stupid programming languages and have it write passable attempts at code in them.
This is the first code it wrote in a language called "dumdum", which I had roughly described.
--- Code: ---// Define a function to print "Hello World"
dumdum knows how to print_hello using () {
dumdum says "Hello World";
}
// Define a variable to hold the number of times to print
dumdum has a box called $num_loops;
// Put the value 10 in the variable
dumdum puts 10 in $num_loops;
// Call the print_hello function in a loop
dumdum does print_hello $num_loops times;
--- End code ---
So that's pretty expected, but then it started to improvise...
--- Code: ---// Define a function to print a message
dumdum knows how to print_message using ($message) {
dumdum says $message;
}
// Define a function to perform a given number of repetitions of a given function
dumdum knows how to repeat using ($func, $num_reps) {
dumdum decides if $num_reps is more than 0;
dumdum does {
$func;
dumdum puts ($num_reps - 1) in $num_reps;
} until dumdum is not decided;
}
// Call the repeat function to perform 10 repetitions of the print_message function with the message "Hello World"
dumdum does repeat(print_message("Hello World"), 10);
--- End code ---
The whole "until dumdum is not decided;" construct is NOT something I told it how to do, or even told it existed. ChatGPT inferred that "dumdum does [x] until [y]" could be expanded using a code block, similar to a function, and invented a completely new negative comparison.
Here's the prompt that produced those two outputs (in a single response):
--- Code: ---We will create a computer programming language, it's called Dumdum.
It has only integer data types.
Each command in this language is separated by a semicolon.
To create a variable, the code is "dumdum has a box called [x]" where [x] is the name of the variable.
A variable name must start with "$".
To assign a value to a variable the code is "dumdum puts [y] in [x]" where [y] is the value and [x] is the variable.
To compare two variables the code is "dumdum decides if [y] is [i] [x]" where [i] is a comparison operator, one of "more than", "less than", "equal to".
There are also expressions which can be used, they will be in parentheses, such as "( [x] + [y] )", the standard arithmetic operators are available, + and -.
A message can be output using the code "dumdum says [f]" where [f] is the message to be output.
A function is defined thusly "dumdum knows how to [f] using ( [a...] ) { [c] }" where [f] is the name of the function, [a...] is a list of one or more variables or constants, [c] is the statements of the function which can be one or more.
A statement "dumdum gives you [x]" which can only be placed inside a function will return the value [x] to the caller of the function.
A loop can be made with the code "dumdum does [f] [z] times " where [f] is a function call, and [z] is the number of times to perform that.
Using this information, please try to write a Dumdum language program which prints the words "Hello World" 10 times using a loop.
--- End code ---
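As an aside, the prompt above pins the syntax down well enough that a subset of Dumdum can actually be executed. The following toy interpreter is my own sketch in Python (not anything ChatGPT produced): it handles only zero-argument functions, literal assignment, "says", and the counted loop, but that is enough to run ChatGPT's first program.

```python
import re

def run_dumdum(source):
    """Toy interpreter for a small subset of the 'dumdum' language
    described in the prompt above: variable boxes, literal 'puts',
    'says', zero-argument functions, and 'does f N times' loops.
    Function bodies may not contain nested braces; expressions and
    comparisons are not supported."""
    # Strip // comments.
    source = re.sub(r"//[^\n]*", "", source)
    funcs = {}

    # Pull out function definitions: dumdum knows how to f using () { body }
    def grab_func(m):
        funcs[m.group(1)] = m.group(2)
        return ""
    source = re.sub(r"dumdum knows how to (\w+) using \(\s*\) \{(.*?)\}",
                    grab_func, source, flags=re.S)

    env, output = {}, []

    def exec_stmt(stmt):
        stmt = stmt.strip()
        if not stmt:
            return
        if m := re.fullmatch(r'dumdum has a box called (\$\w+)', stmt):
            env[m.group(1)] = 0
        elif m := re.fullmatch(r'dumdum puts (\d+) in (\$\w+)', stmt):
            env[m.group(2)] = int(m.group(1))
        elif m := re.fullmatch(r'dumdum says "([^"]*)"', stmt):
            output.append(m.group(1))
        elif m := re.fullmatch(r'dumdum does (\w+) (\$\w+) times', stmt):
            for _ in range(env[m.group(2)]):
                for s in funcs[m.group(1)].split(";"):
                    exec_stmt(s)

    for s in source.split(";"):
        exec_stmt(s)
    return output

program = '''
dumdum knows how to print_hello using () {
dumdum says "Hello World";
}
dumdum has a box called $num_loops;
dumdum puts 10 in $num_loops;
dumdum does print_hello $num_loops times;
'''
print(run_dumdum(program))  # ten "Hello World" strings
```

Not that anyone needs a Dumdum runtime, but it shows the prompt really did specify a working language.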
tom66:
Whether ChatGPT can replace programmers comes down to orders of magnitude; I'm not too worried just yet.
The current GPT-3.5 model requires 800 GB of VRAM just to *execute*, and something like 100x that to actually train. You could feasibly build it with solid-state storage instead of VRAM, pulling in the entries as required, but the access time would reduce the model's speed considerably, because it is a truly random-access model. Even if you solve the model storage issue, a model that large costs a lot of power, and money, to run. It would require a rack with about 30 server-class GPUs to produce output, and the hourly cost of that is not cheap. OpenAI is essentially massively subsidising the cost here to get people interested. If you were paying the real cost of about 6 cents per output token, you'd quickly find that humans were cheaper.
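To put that per-token figure in perspective, here is a back-of-envelope calculation. The 6 cents/token comes from the post above; the answer size and daily workload are my own assumed numbers, picked only for illustration.

```python
# Back-of-envelope cost check for the "humans would be cheaper" claim.
cost_per_token = 0.06      # $/token, the (unsubsidised) figure from the post
tokens_per_answer = 500    # assumed: tokens in one modest code answer
answers_per_day = 100      # assumed: answers a busy team might request daily

daily_cost = cost_per_token * tokens_per_answer * answers_per_day
print(f"${daily_cost:,.0f} per day")  # prints "$3,000 per day"
```

At these assumed numbers, that is far more than a programmer's daily rate, which is the point being made.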
A GPT-4 model is expected to be around 100x larger still, so you are getting to the point where you would need an entire datacenter to host a human-like language model.
The human brain is simply denser than semiconductors, because it is an analog computer and massively more interconnected. It's hard to see how this could change unless there is a serious reckoning in the world of CMOS engineering, abandoning conventional logic for continuous analog functions. The cost of electricity would also have to fall, as server power efficiency is currently stagnating.
Don't get me wrong - tools like this will still be useful - but I don't think they will economically replace programmers any time soon, except possibly for the lower-end "boilerplate" stuff, like writing a test harness for a specific piece of known code (QA, etc.).
sleemanj:
--- Quote from: Douglas Adams ---“I speak of none but the computer that is to come after me,” intoned Deep Thought, his voice regaining its accustomed declamatory tones. “A computer whose merest operational parameters I am not worthy to calculate—and yet I will design it for you. A computer that can calculate the Question to the Ultimate Answer, a computer of such infinite and subtle complexity that organic life itself shall form part of its operational matrix. [...]”
--- End quote ---
(The Hitchhiker's Guide to the Galaxy)
Berni:
Yep, ChatGPT was not built solely to write code; the powerful part of it is that it has a general understanding of the human world.
It shines at what computer software is traditionally very bad at: following fuzzy, inaccurate or incomplete instructions. It can fill in the missing information from common sense, much like a human can.
While it might not be anywhere near replacing an actual human programmer, it does fill the gap for simple tasks that are still complex enough to require thinking from a real human. Things like cleaning up messy user input into a nicely ordered database are an excellent use for such an AI. As mentioned above, writing simple pieces of code is also a good use, especially unit tests (since it can infer a lot of what you are trying to do on its own).
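The "messy input" point is the kind of thing that is tedious but mechanical. As a deterministic baseline for comparison, here is a tiny normaliser sketch; the field names and cleanup rules are hypothetical, not from the post, and an LLM's advantage is that it can apply fuzzier versions of rules like these without them being spelled out.

```python
import re

def clean_record(raw):
    """Illustrative sketch: normalise one messy contact record into
    tidy fields. Field names and rules are made up for this example."""
    # Collapse whitespace and fix capitalisation of each name word.
    name = " ".join(w.capitalize() for w in raw.get("name", "").split())
    # Keep only the digits of the phone number.
    phone = re.sub(r"\D", "", raw.get("phone", ""))
    # Email addresses are case-insensitive; store them lowercased.
    email = raw.get("email", "").strip().lower()
    return {"name": name, "phone": phone, "email": email}

print(clean_record({"name": "  jOHN   sMith ",
                    "phone": "(04) 555-0134",
                    "email": " John.Smith@Example.COM "}))
```

The rigid rules above break down as soon as the input gets weirder ("J. Smith Jr.", phone extensions, typos in domains), which is exactly where the model's common-sense gap-filling earns its keep.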
The biggest party trick of ChatGPT is that it keeps context of the ongoing conversation very nicely. You can explain to it something it does not know, then you can ask it to do something with that information. For example you can describe a few fictional characters to it in a conversation and then ask it to write a movie scene script with them. It will come up with a story that fits them and remains consistent as it goes on. In reality it is likely just mashing together and modifying real movie scripts that ended up in the training dataset, but it does it well enough to the point of actually making a usable new unique movie script.
So I think GPT might instead become a very useful helper tool for a real human, automating the dull stuff while the human still guides it.
The first jobs that would get replaced by such an AI would be things like customer service and tech support.