
Is ChatGPT any good at writing code?


cfbsoftware:
Who else would be better than ChatGPT at answering this question?


--- Quote ---Is ChatGPT any good at writing code?
--- End quote ---

This is how ChatGPT replied:


--- Quote ---As of my last update in September 2021, ChatGPT is more suited for generating human-like text based on the prompts given to it. While it can provide some basic code snippets, it is not specifically designed or trained to write complex or functional code. For coding tasks, you would be better off using specialized tools like code editors or IDEs.

If there have been updates or improvements to ChatGPT's capabilities since my last update, I recommend checking the OpenAI website or official documentation for the most up-to-date information.
--- End quote ---

Phew! I might get to retire before I'm made redundant ;)

Psi:
LLMs will become an integrated tool in IDEs to make coding easier and quicker.
You will have the LLM add code blocks or functions, then check them yourself as well as with the LLM. This will simply make coding easier and faster for humans. Over time, as LLMs get better, more and more of the work will be done by the LLM rather than the human, but it's hard to say where that will ultimately end up.

There are LLM tools for coding available already, but we probably need locally run LLMs for the concept to fully take off and be secure.

Maybe in 5 years a PCIe card will be available at a reasonable price that can slot into a PC or laptop and accelerate locally run LLMs. You can run local LLMs now on something like an RTX 4090, but only the smaller models, due to VRAM limitations, and the cost of the card is huge.

amwales:
When I have played with ChatGPT, I found that it would make up things that didn't exist. The trick is to keep asking it to iterate on the result, telling it about the bits you think are wrong. I have used it for help with Excel macros, having never written one before; like I said, it had a habit of making things up, but I got what I needed in the end.

If you are interested in offline LLMs, you can run Ollama locally on Windows (Linux and Mac too) with GPU hardware acceleration:

https://ollama.com/blog/windows-preview

Depending on your system RAM and GPU, your mileage will vary.
I have a 24-core Ryzen with 32 GB and an RTX 2060 GPU, which is an older card, and it ran pretty fast with a 7B model. There are a number of models you can download and try out; it's certainly a step up from ELIZA.
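For anyone who wants to script against a local model rather than use the interactive prompt, here is a minimal sketch that talks to Ollama's documented local HTTP endpoint (`/api/generate` on port 11434). It assumes the Ollama server is running and that you have already pulled a model (e.g. `ollama pull mistral`); the model name is just an example.

```python
import json
import urllib.request

# Ollama's default local endpoint (see the Ollama API docs).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON reply instead of a stream of chunks.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

def ask_local_llm(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires the Ollama server to be running with the model pulled.
    print(ask_local_llm("mistral", "Write a C function that reverses a string."))
```

Nothing here is Ollama-specific beyond the URL and payload shape, so the same pattern works for any model you have pulled; swap the model name to compare outputs.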

amwales:
You blink and you miss things; the AI landscape is moving that fast. Ollama now runs on a Raspberry Pi 5 under Linux.

SiliconWizard:

--- Quote from: cfbsoftware on April 01, 2024, 10:17:26 pm ---Who else would be better than ChatGPT at answering this question?


--- Quote ---Is ChatGPT any good at writing code?
--- End quote ---

This is how ChatGPT replied:


--- Quote ---As of my last update in September 2021, ChatGPT is more suited for generating human-like text based on the prompts given to it. While it can provide some basic code snippets, it is not specifically designed or trained to write complex or functional code. For coding tasks, you would be better off using specialized tools like code editors or IDEs.

If there have been updates or improvements to ChatGPT's capabilities since my last update, I recommend checking the OpenAI website or official documentation for the most up-to-date information.
--- End quote ---

Phew! I might get to retire before I'm made redundant ;)

--- End quote ---

OpenAI themselves have always stated this clearly. That doesn't stop people from pretending they haven't heard it.

Beyond the "correctness" of the models' output, which may over time be made arbitrarily better, one interesting question is less about AI than about languages in general: it looks like we're yet again trying to reinvent the language wheel. Here's why.

Natural languages are well known to be very poor, in general and by themselves, at expressing technical things accurately. That's why we invented mathematical notation and the various programming languages.
The idea that one can accurately define a complete system with natural language is a fallacy.
And, as I often say, in programming (and engineering in general), the hard part is usually specifying things clearly and unambiguously, not the implementation. But implementing has the benefit of making us think about aspects of the spec that are incomplete, possibly incorrect, or inconsistent. Without that step, we're just flying blind.

So, yeah, people are getting all excited because they can ask for relatively trivial stuff and get a (sometimes) working piece of code, but just try something more elaborate and you'll quickly find that both the LLM and you will fail miserably.

One key starting point IMO would be to improve the way we express ourselves in "natural language" in a completely consistent and unambiguous way. Good luck with that.

So, no, you won't be made redundant. But you're likely to get frustrated as more and more people expect these LLMs to do your job, and then come back to you to help them get out of the mess they've put themselves in. Prepare for that to get over the frustration (the "I told you so" syndrome), and you'll be fine, with probably no shortage of work for years to come.
