Author Topic: ChatGPT fails EE 101  (Read 15368 times)


Offline tooki

  • Super Contributor
  • ***
  • Posts: 11501
  • Country: ch
Re: ChatGPT fails EE 101
« Reply #25 on: March 17, 2023, 08:09:00 pm »
It doesn't need to understand something
You’re still not getting it, and despite saying “it doesn’t need to understand something”, the rest of your reply is still operating from the premise that it does understand to some extent.

but it should never "make things up".
Making things up is literally all ChatGPT can do. It is, at its core, a glorified version of the predictive text engine in your smartphone (the word suggestions above the keyboard). Your input becomes a seed for said predictive text engine. But don’t think for one second that it knows anything about the subject matter. All it truly knows is patterns of words that appear near each other. It doesn’t know what those words mean, nor what they mean in context.
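To make "patterns of words that appear near each other" concrete, here is a toy bigram predictor in Python. It is nothing remotely like GPT's scale or architecture, just the same basic spirit: predict the next word purely from which words have followed which, with zero understanding of any of them.

Code: [Select]
import random
from collections import defaultdict

corpus = ("the voltage divider lowers the voltage "
          "the divider uses two resistors "
          "the resistors set the ratio").split()

# The model's entire "knowledge": which words have followed which.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

# Generate text by repeatedly picking a word that has followed the current one.
word, out = "the", ["the"]
for _ in range(8):
    candidates = following[word]
    if not candidates:          # dead end: no word ever followed this one
        break
    word = random.choice(candidates)
    out.append(word)

print(" ".join(out))  # fluent-looking fragments, yet nothing is "understood"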

How could it possibly come up with using a divider to get a higher voltage? That just doesn't pass the sniff test.
Because it didn't "come up with" using a voltage divider to get a higher voltage at all. To say so would suggest it has even the vaguest idea of what voltage is, what "higher" means, etc. It doesn't. It just starts stringing words together and testing them against patterns.

I don’t know whether ChatGPT works like this, but as I understand it, image generation AI actually works by starting with random pixels (i.e. noise), and iteratively changing some at random, each time comparing it to the input and seeing if it matches the input pattern more or less than it did before. If it does, the change is kept; if it doesn’t, it tries a different change instead.

So in reality, generative AI starts with noise and refines it until it seems to be a plausible match for the input prompt, the exact opposite of what I suspect most people would assume.
 
The following users thanked this post: SiliconWizard, thinkfat

Offline alm

  • Super Contributor
  • ***
  • Posts: 2881
  • Country: 00
Re: ChatGPT fails EE 101
« Reply #26 on: March 17, 2023, 08:29:08 pm »
Those are boilerplate transformations; the fact it can do those is impressive. What does it do, though, when you ask for an algorithm (or a synthesis thereof)? The infamous example is the sparse matrix transpose.
It has learned a lot of common transformations and patterns, and based on subtle context clues it can combine these techniques to suggest how to continue. For example, if I define a variable foo_by_id and type foo_by_id = , it will suggest transforming my list foos into a mapping with id as the key. How it finds the id of a foo, it may pick up from somewhere else, like the input data.
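For illustration, the described completion amounts to something like this in Python (the Foo type and sample data are made up for the example):

Code: [Select]
from dataclasses import dataclass

@dataclass
class Foo:
    id: int
    name: str

foos = [Foo(1, "resistor"), Foo(2, "capacitor")]

# After typing "foo_by_id = ", the suggested completion is a dict keyed by id:
foo_by_id = {foo.id: foo for foo in foos}
print(foo_by_id[2].name)  # "capacitor"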

Think code completion on steroids. I don't need it to write an algorithm for me, I have libraries for that. But throughout the day it helps with all kinds of small things, like writing a simple function or few lines of code, suggesting sensible mock values for tests, etc.

Perhaps an interesting side note: given that AI derives its answers from its input, how many of its answers will violate copyright and/or patents?  Maybe there will come a day when an AI lawyer sues an AI engineer for patent infringement.  Maybe it's not that far off.
Were the textbooks you studied open source? Did you discuss the licenses of the books with your employer before working there? You may write a few lines that are similar to what's in a textbook, but not large passages. Why would a machine learning model be different? I'm sure lawyers will argue over it. Here's a (US) lawyer discussing arguments on both sides.

Github Copilot actually has an option where it will filter out suggestions that are too close to existing open-source code.

It doesn't need to understand something but it should never "make things up". How could it possibly come up with using a divider to get a higher voltage? That just doesn't pass the sniff test.
Note that this thread is about two different versions of the model. There is Github Copilot, which is designed to be a professional tool with a very limited scope, and ChatGPT, which is a technology demo with surprisingly broad knowledge. Both are based on GPT3, a very large language model, and they show what such a model can achieve by learning from a very large volume of data. I imagine you could do something similar by training a model on electronics textbooks, app notes and data sheets, but so far no one has done this. Even so, the general-purpose ChatGPT can do an okay job at it.

I don’t know whether ChatGPT works like this, but as I understand it, image generation AI actually works by starting with random pixels (i.e. noise), and iteratively changing some at random, each time comparing it to the input and seeing if it matches the input pattern more or less than it did before. If it does, the change is kept; if it doesn’t, it tries a different change instead.

So in reality, generative AI starts with noise and refines it until it seems to be a plausible match for the input prompt, the exact opposite of what I suspect most people would assume.
No, GPT3 is not an adversarial model, where two models play cat and mouse: one trying to spot generated output, and the other trying to trick it into accepting that output as made by a human. GPT3 learned a mapping of words into a many-dimensional vector space (word embeddings), which gives it a degree of semantic understanding: it knows, for example, that capacitor is more similar to resistor than to suitcase. With that mapping, it has been trained to predict the next word in a huge number of sentences. This video gives a decent overview of how GPT3 works.
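A toy illustration of the embedding idea, with made-up 4-dimensional vectors (real models use hundreds of dimensions; these numbers are invented purely for the example):

Code: [Select]
import numpy as np

# Made-up word embeddings; similar words point in similar directions.
emb = {
    "capacitor": np.array([0.9, 0.8, 0.1, 0.0]),
    "resistor":  np.array([0.8, 0.9, 0.2, 0.1]),
    "suitcase":  np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: near 1.0 for similar directions, near 0 for unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["capacitor"], emb["resistor"]))  # ~0.99: similar
print(cosine(emb["capacitor"], emb["suitcase"]))  # ~0.12: dissimilar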

Offline phil from seattleTopic starter

  • Super Contributor
  • ***
  • Posts: 1029
  • Country: us
Re: ChatGPT fails EE 101
« Reply #27 on: March 17, 2023, 08:51:54 pm »
Perhaps an interesting side note: given that AI derives its answers from its input, how many of its answers will violate copyright and/or patents?
Definitely interesting.

Microsoft's GitHub Copilot is similarly problematic, because it has been trained on copyright-protected material (under various open source licenses), but without tracking the license requirements at all.  In my opinion, if Microsoft wants to provide that kind of a service, they should be obligated to include all their own proprietary source code (Windows, Office, etc.) in the training material also.  If they believe Copilot does not violate copyrights, they should not have any objection.  If they refuse, they are currently enabling copyright violation for profit by suggesting exact sequences of copyrighted code to unknowing users (hiding the copyright of the suggested code), and should be penalized by the exact same amount they themselves demand per copyright violation.

Unless, of course, they're back to their good ol' selves of a decade ago, claiming that open source is not copyrightable, only proprietary source code is.

And, if code is lifted from some place with some sort of FOSS license that raises yet another set of issues.  Many companies are pretty strict about not including such code in their products. Will that lead to them banning AI software assistance? It gets murkier and murkier...
 

Offline alm

  • Super Contributor
  • ***
  • Posts: 2881
  • Country: 00
Re: ChatGPT fails EE 101
« Reply #28 on: March 17, 2023, 09:06:55 pm »
And, if code is lifted from some place with some sort of FOSS license that raises yet another set of issues.  Many companies are pretty strict about not including such code in their products. Will that lead to them banning AI software assistance? It gets murkier and murkier...
Nope, they will just enable the setting to block suggestions matching public code: https://dev.to/transient-thoughts/avoiding-accidental-open-source-laundering-with-github-copilot-g1d. And yes, you can easily enable this organization-wide.

Offline phil from seattleTopic starter

  • Super Contributor
  • ***
  • Posts: 1029
  • Country: us
Re: ChatGPT fails EE 101
« Reply #29 on: March 17, 2023, 11:56:11 pm »
Makes sense.  Though as a developer, I think it worthwhile to at least see how others solved a given problem, even if I don't directly copy the source code.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6255
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #30 on: March 18, 2023, 01:20:21 am »
And, if code is lifted from some place with some sort of FOSS license that raises yet another set of issues.  Many companies are pretty strict about not including such code in their products. Will that lead to them banning AI software assistance? It gets murkier and murkier...
Nope, they will just enable the setting to block suggestions matching public code: https://dev.to/transient-thoughts/avoiding-accidental-open-source-laundering-with-github-copilot-g1d. And yes, you can easily enable this organization-wide.
Interesting.  So they are well aware that GitHub Copilot can suggest copyrighted code.  I wonder what legal theory they believe permits them to "launder open source code" while ignoring the copyright licenses associated with said source code.

I really hope it isn't "Because they don't ask for money, and don't have expensive lawyers on retainers, we really don't need to follow their license requirements, do we?  Open source is free-for-all, isn't it?"

I for one am very glad I never ended up putting any of my projects on GitHub.  It would be rather annoying to be exploited like that for someone's proprietary paid product, by someone who is extremely aggressive about protecting their own copyrights.
« Last Edit: March 18, 2023, 01:25:05 am by Nominal Animal »
 
The following users thanked this post: SiliconWizard

Offline EEVblog

  • Administrator
  • *****
  • Posts: 37738
  • Country: au
    • EEVblog
Re: ChatGPT fails EE 101
« Reply #31 on: March 18, 2023, 01:28:02 am »
Thread moved to the new AI forum section.
 

Offline xrunner

  • Super Contributor
  • ***
  • Posts: 7517
  • Country: us
  • hp>Agilent>Keysight>???
Re: ChatGPT fails EE 101
« Reply #32 on: March 18, 2023, 01:32:12 am »
Oh wow we got a new AI board!

I'm sure we'll use it quite a bit in the coming years, that is until we are all taken over by our bot-replacements.  :-DD
I told my friends I could teach them to be funny, but they all just laughed at me.
 
The following users thanked this post: Ed.Kloonk

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6255
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #33 on: March 18, 2023, 01:34:53 am »
I wonder if the first bot-spawning bot has already been created.  :popcorn:
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14466
  • Country: fr
Re: ChatGPT fails EE 101
« Reply #34 on: March 18, 2023, 02:37:14 am »
I suspect that people configuring ChatGPT bots will make them cautiously avoid forum threads with 'ChatGPT' in them.
Just a thought. :popcorn:
 

Offline Microdoser

  • Frequent Contributor
  • **
  • Posts: 423
  • Country: gb
Re: ChatGPT fails EE 101
« Reply #35 on: April 01, 2023, 11:38:47 am »
I plugged '8^x+2^x=130, solve for x' into ChatGPT as a test. After a minute it gave three methods to find a solution and a rough answer of 2.42. I asked it to use one of the other methods, which it said would be more accurate, and it told me that using the bisection method we get 2.321928094.

It then said that 8^2.321928094 + 2^2.321928094 = 128.000003218 + 1.999996782 = 130.000000

The first thing that jumped out at me is that 2^2.321928094 is actually close to 5, and in any case cannot be less than 2. I told ChatGPT this, and it then told me that the sum it gave me actually worked out to 133 (also incorrect), and so the original equation must have been wrong and should have been 8^x+2^x=133. Hilarious; you couldn't make it up.

It turns out that when calculated properly:

8^2.321928094=124.99999976934773412728988276068
2^2.321928094=4.9999999969246364531389595814579

and their sum is 129.99999976627237058042884234214

So the answer it gave me, 2.321928094, was actually fairly accurate, but it completely failed when explaining why. It then doubled down on its ignorance and incorrect maths, and decided that since the answer it gave must have been right, the original question must have been wrong...
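Incidentally, the answer has a closed form: substituting t = 2^x turns the equation into t^3 + t = 130, which t = 5 satisfies, so x = log2(5) ≈ 2.321928095. A minimal bisection sketch in Python reproduces the digits ChatGPT named but could not explain:

Code: [Select]
import math

def f(x):
    return 8**x + 2**x - 130

# f(2) = -62 and f(3) = 390, so a root lies between 2 and 3.
lo, hi = 2.0, 3.0
for _ in range(60):            # each step halves the bracketing interval
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid

print(lo)             # 2.321928094887362...
print(math.log2(5))   # exact solution: the same digits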

I have seen ChatGPT do this with code: it makes a basic error and, instead of correcting it, decides the whole world must be incorrect instead.

 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14466
  • Country: fr
Re: ChatGPT fails EE 101
« Reply #36 on: April 01, 2023, 09:55:35 pm »
it makes a basic error and instead of correcting it, it decides the whole world must be incorrect instead.

Sounds like a basic politician to me. :-DD
 

Offline RJSV

  • Super Contributor
  • ***
  • Posts: 2121
  • Country: us
Re: ChatGPT fails EE 101
« Reply #37 on: April 30, 2023, 04:03:23 am »
Hey Phil from:    (probably not the Phil I knew?)
   This IS, whether you realize it or not, actually a quite interesting issue, and the answer, with its subtleties, reveals the difference between a real-world interpretation and the designer's attitude. The same goes for EEs and their interpretation of the question and the problem!

   You see, I think the chat-bot is not yet fully trained, as an EE would interpret the question differently. The BOT takes a shortcut by (invalidly) leaving out a requirement that any/most circuits have: not only to create the voltages, but to provide outputs in the typical 'black box' manner. That is, the question requires both creating the required voltage(s) and producing an 'output from an input'.
The BOT thinks it took things in order, as per the question, but that's not the case.
The BOT instead grabbed an example of a two-resistor voltage divider and, while not understanding it, effectively said "Look! It has the required lower voltage, at 2.06 V, and there's the 5 V, also present."
So, in its mind, the whole question was answered adequately.
   The problem is (I think) that the BOT didn't realize the difference between 'having 5 V' and 'creating 5 V'.
The error lies in initially presenting that resistor divider as 'already' having 5 V to divide, and then going back later and declaring 'here it is'.
   It's also akin to plagiarism, as the BOT never really built any 5 V; it just stated that it had it, as part of the divider. So that's almost, maybe, two pseudo-errors.
   If that makes sense, in a twisted way.
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
Re: ChatGPT fails EE 101
« Reply #38 on: April 30, 2023, 12:33:20 pm »
I plugged '8^x+2^x=130, solve for x' into ChatGPT as a test
It's pretty impressive that it's able to see it's a root-finding problem, but how is it actually solving it?

Does ChatGPT already use multiple steps behind the scenes while hiding its output?
 

Offline Microdoser

  • Frequent Contributor
  • **
  • Posts: 423
  • Country: gb
Re: ChatGPT fails EE 101
« Reply #39 on: April 30, 2023, 02:43:22 pm »
I plugged '8^x+2^x=130, solve for x' into ChatGPT as a test
It's pretty impressive that it's able to see it's a root-finding problem, but how is it actually solving it?

Does ChatGPT already use multiple steps behind the scenes while hiding its output?

It used its "what word comes next" matrix, and the next 'words' pumped out by the matrix were the right answer. But just as we humans don't know exactly how it works (just that it does), it didn't know how it got the right answer (even though it knew the correct name for the method it used), and this became plain when it was asked to explain its working: it completely failed at basic maths. Similarly, electricity can find the shortest path through a maze from start to end without even knowing what a maze is.
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
Re: ChatGPT fails EE 101
« Reply #40 on: May 01, 2023, 02:12:29 am »
I doubt it has enough examples to produce a meaningful output approximating bisection of an arbitrary function with straight-through prediction.

I think it's doing a chain-of-thought solution behind the scenes, which normally requires some handholding. That way it can divide and conquer the problem and fill values into the formulas so that it only has to do simple arithmetic, which is easier for the network to handle directly.
« Last Edit: May 01, 2023, 02:15:28 am by Marco »
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1890
  • Country: us
    • KE5FX.COM
Re: ChatGPT fails EE 101
« Reply #41 on: May 01, 2023, 02:28:10 am »
Whatever you tried with GPT3 or 3.5, you should try with GPT4.

Never mind the current state of the art.  Nothing matters except the first couple of derivatives.
 
The following users thanked this post: bookaboo

Online Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
Re: ChatGPT fails EE 101
« Reply #42 on: May 01, 2023, 02:54:21 am »
Nothing matters except the first couple of derivatives.

Not for bisection. But let's assume Newton-Raphson: with chain of thought it can write out the derivative, fill it into the NR formula, fill in values and repeat... it's symbolic manipulation plus simple arithmetic.
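As a sketch of what such a chain would compute for the equation earlier in the thread, here is Newton-Raphson in Python, with the derivative written out by hand the way a chain-of-thought transcript would:

Code: [Select]
import math

def f(x):
    return 8**x + 2**x - 130

def f_prime(x):
    # d/dx a**x = ln(a) * a**x
    return math.log(8) * 8**x + math.log(2) * 2**x

x = 2.0                          # initial guess
for _ in range(8):
    x -= f(x) / f_prime(x)       # one Newton-Raphson step

print(x)  # converges to log2(5) = 2.321928094887362...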

With straight prediction, the NN has to learn how to do it directly from a very sparse training set. That seems an incredibly tough task for backprop; calling the problem non-linear for arbitrary functions doesn't do it justice.

PS. For Bing, the prompt injectors have uncovered at least some of the ways it uses hidden output; Microsoft calls it an inner monologue. I assume they got the trick from OpenAI.
« Last Edit: May 02, 2023, 10:28:45 am by Marco »
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14466
  • Country: fr
Re: ChatGPT fails EE 101
« Reply #43 on: May 01, 2023, 07:47:18 pm »
GPT-5 will be the one.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1890
  • Country: us
    • KE5FX.COM
Re: ChatGPT fails EE 101
« Reply #44 on: May 01, 2023, 09:28:32 pm »
GPT-5 will be the one.

Keep in mind that most people here haven't played with GPT-4 yet, given that it's not free.  ChatGPT-4 is a pretty big bump over the GPT-3.5 version.

ML has always evolved through punctuated equilibrium, though... it could be that all of the low-hanging fruit has been picked for the next couple of decades.
 

Offline EEVblog

  • Administrator
  • *****
  • Posts: 37738
  • Country: au
    • EEVblog
Re: ChatGPT fails EE 101
« Reply #45 on: May 02, 2023, 04:25:52 am »
I dunno, it should be able to at least summarize existing knowledge.  It does surprisingly well with programming.
You do realize that the majority of code produced is pretty crap?  The equivalent of blocks glued together with hot snot?  With the reliability of soap bubbles?

It was pretty impressive with my Macgyver project.
 

Offline Microdoser

  • Frequent Contributor
  • **
  • Posts: 423
  • Country: gb
Re: ChatGPT fails EE 101
« Reply #46 on: May 03, 2023, 09:18:03 am »
I dunno, it should be able to at least summarize existing knowledge.  It does surprisingly well with programming.
You do realize that the majority of code produced is pretty crap?  The equivalent of blocks glued together with hot snot?  With the reliability of soap bubbles?

It was pretty impressive with my Macgyver project.


It's been doing fine making relatively small functions in my latest project too. It's also good for taking the drudge work out of coding.

For example, I had a bunch of layers in a Photoshop document that I wanted combined with the background and saved as individually named files, to be buttons on a display. Googling how to do that didn't turn up anything useful straight away (not without wading through 'ad' results, suggested results and so on), so I asked ChatGPT how to export them, combined with the background, with the filename taken from the layer name. It just told me, I did that, and I got the result I wanted.

I then had a folder full of images, some new, some I had already used. I asked GPT to write a function that would look at all the files, strip the extensions, make a list of the names, parse my existing code and remove the names already set up as 'button instances', then add any missing buttons (using the same button-instance format) in alphabetical order, and write the code out again. I noticed some names were characters like '=' and '%', so I told it to name control characters as the word instead of the symbol. I put the existing code in a .txt file, and it just worked. I now have about 30 individually named instances added to my code in far less time than it would have taken to type them out. The instance has fields for the x and y size of the button, and it got those directly from the images.
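The second task is essentially a set difference between image names and names already in the code. A minimal sketch of that housekeeping step in Python (the folder, file names and Button(...) line format are all made up, since the actual code wasn't posted):

Code: [Select]
import re
from pathlib import Path

image_dir = Path("buttons")          # hypothetical folder of exported images
code_file = Path("ui_buttons.txt")   # hypothetical dump of the existing code

# Control characters get spelled out instead of used as instance names.
rename = {"=": "equals", "%": "percent", "+": "plus"}

# Image names with extensions stripped and symbols renamed.
names = {rename.get(p.stem, p.stem) for p in image_dir.iterdir() if p.is_file()}

# Button instances already present, assuming lines like: Button("name", x, y)
code = code_file.read_text()
existing = set(re.findall(r'Button\("([^"]+)"', code))

# Append the missing instances in alphabetical order.
for name in sorted(names - existing):
    code += f'Button("{name}", 0, 0)\n'  # placeholder x/y sizes

code_file.write_text(code)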

Incidentally, I tried doing this with Bard, and it just insisted on doing 'other things', not what I wanted at all (it made a screen and placed all the images on it using tkinter, after making a list of the images).
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6255
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #47 on: May 04, 2023, 04:53:36 am »
I dunno, it should be able to at least summarize existing knowledge.  It does surprisingly well with programming.
You do realize that the majority of code produced is pretty crap?  The equivalent of blocks glued together with hot snot?  With the reliability of soap bubbles?
It was pretty impressive with my Macgyver project.
I was not referring to only GPT engines, but to human code writers as well.

Sure, it looks impressive, because that is exactly what gets copied and used the most.

Popular ≠ high quality.  Remember, GPT engines base their 'knowledge' on what humans have already produced.  They have no facility to determine 'quality', so the GPT trainers use some kind of weighting scheme based on links; a qualified popularity, if you will.  This yields "impressive" results, but there is no real guarantee of the underlying quality at all.  My problem is that most code written by humans or generated by various toolkits is at most "pretty", not of good quality (reliability, robustness against unexpected inputs, efficiency, etc.).

Think of shiny gadgets from China with lots of blinky leds, but with their actual innards being the Cheapest-Backalley-Mart quality.
Lots of people think that stuff is impressive, too.

The root of the issue is best exemplified by my pet peeve.  Pick up any C book, and one of the exercises will be to read all files in a directory.  The "correct" solution it shows will use opendir(), readdir(), and closedir().  This is utter crap, because 99% of such examples fail to handle the case where the directory contents change during the scan.  Another exercise will extend that to scan all files in a directory tree.  That will be similar, with string operations to handle the path name construction.  That is even worse, because it utterly fails if subtrees are moved or renamed during the scan.  The only reason it is shown this way is that this was how it was done in 1989.  (I'll omit a rant here, involving BSD, the Single Unix Specification, POSIX, the C standard, and what happened with the C11 standard and Microsoft.  Let's just say that I'm quite happy that C2x/C23 seems to treat C11 mostly as a fork (including removing Annex K), and to return to the approach used in the standards developed from C89 to C99: adding features 'end users'/programmers need and have asked for from compiler developers, instead of having the committee dictate new features.)

The correct solution on all systems except Microsoft Windows is to use scandir(), scandirat(), glob(), wordexp(), or nftw() (all built into POSIXy standard C libraries and most BSDs), and/or the fts_..() family of functions available on all BSD variants and Linux.  Because these are part of the base C library on all operating systems except Windows, they can be expected to handle the aforementioned issues correctly (but if you test, remember that time-based race windows will still exist; the moment a specific function returns, measured on a wall clock, does not mean the actual filesystem change has propagated to be visible to all processes).

On Windows, you also use the above, but with custom implementations that fulfill your synchronicity requirements.  One can write one's own, or use any of the freely licensed implementations, noting that the straightforward opendir()/readdir()/closedir() ones are heuristic, not deterministic, when directory tree operations occur during scanning.

I often see extremely experienced C programmers describe a neat "solution" using straightforward opendir()/readdir()/closedir() without any additional logic to handle concurrent modifications as "impressive", too.   :'(

Now, I do realize I am in a very tiny minority here, because almost all humans are used to the fact that the software they use is pretty crappy in absolute terms.  Against that baseline, something that seems to work just fine is certainly impressive.  This is sufficient even for commercial software, and anything above it is, in my personal experience, declared excessive perfectionism.  I like to write code and build systems that work without issues for years on end, and that, when an issue does occur (even just a hardware one that is normally ignored, say a delayed write error at file close()/fclose() time), let me know, so I can decide for myself what to do.
« Last Edit: May 04, 2023, 04:58:13 am by Nominal Animal »
 

Offline Microdoser

  • Frequent Contributor
  • **
  • Posts: 423
  • Country: gb
Re: ChatGPT fails EE 101
« Reply #48 on: May 04, 2023, 09:28:36 am »
I dunno, it should be able to at least summarize existing knowledge.  It does surprisingly well with programming.
You do realize that the majority of code produced is pretty crap?  The equivalent of blocks glued together with hot snot?  With the reliability of soap bubbles?
It was pretty impressive with my Macgyver project.
I was not referring to only GPT engines, but to human code writers as well.
[...]
Now, I do realize I am in a very tiny minority here, because almost all humans are used to the fact that the software they use is pretty crappy in absolute terms.

So to boil down what you are saying: ChatGPT writes software about as good as most commercially released software, but that isn't the best possible code.
 
Well, that does suggest that it provides a good commercial solution, and quicker than a human can do it.
 
Similarly, self-driving cars still have accidents, but if they have fewer accidents than the average human, then it makes sense not to have humans drive and to let the car drive itself. There will be fewer accidents and fewer people will die. Of course, the accidents the cars do have will be seen as completely avoidable and stupid, because a human would not have done whatever caused them; conversely, the accidents that humans have could be seen as completely avoidable and stupid, because the car would not have done that either.
 
At the end of the day, the AI just has to provide more value than it costs for it to be worth using. Sure, there will be times when a really competent human would do a better job, but in reality, how many really competent humans are writing code anyway? If you are a human who can write better code than the AI, and in less time, then you'll still have a job. For the rest, the AI will do a better job, cheaper and in less time.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6255
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #49 on: May 04, 2023, 10:31:27 am »
So to boil down what you are saying: ChatGPT writes software about as good as most commercially released software, but that isn't the best possible code.
I'm not sure I agree with "as good as"; that remains to be seen (insufficient data thus far). Plus, it gleefully regenerates copyrighted code with no respect for the licensing (which is the reason they don't use their own proprietary code in the training data, only others' open-source code). But along those lines, yes.

Thing is, most commercially released software is crap.  Analogous to the cheapest Chinese electronics pap you can find.

What you are saying about software development applies equally to EE design.  Unless you're willing to cut corners and make gadgets, tools, and test equipment at the minimum possible price, expect your job to be cut within the next decade or so.

I do not like this race towards the bottom, to the least common denominator.  Not at all, not in any field.
 

