Author Topic: ChatGPT fails EE 101  (Read 14806 times)


Offline phil from seattle (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1029
  • Country: us
ChatGPT fails EE 101
« on: March 16, 2023, 02:27:16 am »
Playing around with ChatGPT, I asked it for the cheapest way to translate 3.3V logic to 5V.  It initially suggested a 74HC4050, which is a fail (an HC input on a 5V supply wants about 0.7 x Vcc = 3.5V for a guaranteed high, so a 3.3V signal is marginal at best). I explained why that wouldn't work reliably.  Then it suggested a voltage divider.  So, I asked it to show me.


Me: Can you show me how a voltage divider can translate to a higher voltage?

ChatGPT: A voltage divider is a simple circuit that can be used to divide a voltage into a lower value. However, it can also be used to scale up a lower voltage to a higher voltage, though it is not an ideal solution for voltage level translation between digital logic levels. Here is an example of how a voltage divider can be used to scale up a lower voltage to a higher voltage:

Let's assume you have a 3.3V signal that you want to translate to a 5V signal. A voltage divider can be constructed using two resistors connected in series between the 5V power supply and ground, with the output voltage taken from the connection between the two resistors. By selecting the proper values of the resistors, the voltage divider can scale up the 3.3V signal to a 5V signal.

For example, using a 10kΩ resistor between the 5V supply and the output and a 6.8kΩ resistor between the output and ground, the output voltage can be calculated as follows:

Vout = Vin x (R2 / (R1 + R2))
= 3.3V x (6.8kΩ / (10kΩ + 6.8kΩ))
= 2.06V

This output voltage is still lower than the desired 5V level, but it can be amplified using an op-amp or transistor circuit to achieve the desired voltage level. However, this method is not ideal for voltage level translation in digital circuits, as it can introduce additional noise and may not provide reliable results in all situations.


So, I should use an op amp after the voltage divider? I didn't expect much, but it must be reading some real garbage to build up its knowledge base. What I found odd is that it never pointed me towards HCT or LV logic. I'd say EEs' jobs are safe for now.
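
For the record, a quick sanity check of ChatGPT's own numbers, in C (just the arithmetic, nothing clever):

#include <stdio.h>

int main(void)
{
    /* ChatGPT's example: R1 = 10k to the 5V rail, R2 = 6.8k to ground,
       with the 3.3V input plugged into the divider formula.  Note it
       mixed up the 5V topology and the 3.3V input from the start. */
    double vin = 3.3, r1 = 10e3, r2 = 6.8e3;

    /* Vout = Vin * R2 / (R1 + R2) <= Vin for any positive R1, R2,
       so a purely resistive divider can only attenuate. */
    double vout = vin * r2 / (r1 + r2);

    /* Prints "Vout = 1.34 V" - below Vin, and not even the 2.06V it claimed. */
    printf("Vout = %.2f V\n", vout);
    return 0;
}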


 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21501
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: ChatGPT fails EE 101
« Reply #1 on: March 16, 2023, 03:27:44 am »
You're expecting the wrong thing from it.  It's a language model, not a knowledge / inference engine.  It knows the language of a wide variety of subjects, but it doesn't know how to work with them beyond what it can quote and massage.  And probability being what it is, you're most likely to get answers from the most common phrases, topics, subjects, etc. on the internet.

So, take the average quality of internet posts, then remove what little brain was behind them (to put it rather uncharitably..). :P

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: EEVblog, tooki, pdenisowski

Offline phil from seattle (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1029
  • Country: us
Re: ChatGPT fails EE 101
« Reply #2 on: March 16, 2023, 04:00:07 am »
I dunno, it should be able to at least summarize existing knowledge.  It does surprisingly well with programming.
 

Offline beenosam

  • Regular Contributor
  • *
  • Posts: 65
  • Country: ca
Re: ChatGPT fails EE 101
« Reply #3 on: March 16, 2023, 06:28:48 am »
I'm not surprised. ChatGPT seems to be hyped up by a lot of laypeople. Whenever I dug into it for anything beyond superficial things, it just kind of fell flat.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 5964
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #4 on: March 16, 2023, 07:15:01 am »
Quote from: phil from seattle
I dunno, it should be able to at least summarize existing knowledge.  It does surprisingly well with programming.
You do realize that the majority of code produced is pretty crap?  The equivalent of blocks glued together with hot snot?  With the reliability of soap bubbles?

ChatGPT does not summarize existing knowledge.  It summarizes existing writing, without understanding the difference between correct and incorrect.

When the majority of existing writing is quite crappy (structurally sound, but horrible content-quality-wise), even a crude summary of it looks "surprisingly good" on the surface.

The old adage is apt here: "A billion flies cannot be wrong.  Shit tastes good."
 
The following users thanked this post: tooki, woody, rhodges

Offline Ice-Tea

  • Super Contributor
  • ***
  • Posts: 3052
  • Country: be
    • Freelance Hardware Engineer
Re: ChatGPT fails EE 101
« Reply #5 on: March 16, 2023, 07:35:04 am »
Quote from: phil from seattle
I dunno, it should be able to at least summarize existing knowledge.  It does surprisingly well with programming.

Why? That's not what it does. It searches for words that have a reasonable chance of "working" together with other words. That's it.

Perfect for generating plausible-sounding Star Trek technobabble or a sales pitch for some audiofoolery product. Full stop.

Offline phil from seattle (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1029
  • Country: us
Re: ChatGPT fails EE 101
« Reply #6 on: March 16, 2023, 07:35:40 am »
Have you watched any of the videos about ChatGPT and programming?  Dave Plummer's are pretty good ones (Dave's Garage). I happen to know him, and he is a world-class programmer. His dives into ChatGPT are pretty eye-opening.  There's also a guy out there using it for Pico programming and getting surprisingly good results.  It doesn't write production-ready code, but it gets pretty close. And as a "coding assistant", it seems to be worthwhile.

But with circuit design it is pretty bad.  I asked it for the microSD pinout - how hard can that be?  It gave me the wrong answer and it wasn't even close.
 
The following users thanked this post: tooki

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 5964
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #7 on: March 16, 2023, 07:50:55 am »
My opinion of Dave Plummer's work product is somewhat less than yours.

Let me give you some real-world examples, in C.  Hopefully they illustrate the problem.

Basically all Linux/Unix/BSD programming guides have an exercise or example, where the program needs to scan a directory or directory tree.  Then they go into detail on how opendir(), readdir(), and closedir() work, and how you solve the problem using them.

Except it is wrong at a fundamental level.  That is the wrong C library interface to use for this.  It is one of those ancient relics that just don't cut it today, especially if the directory tree is modified concurrently with the scan.  Naïve code can unexpectedly skip entries (because the directory entries get reordered), or list entries twice.

The proper POSIX interface is nftw() for directory tree walks, scandir() for single directories, and glob() for specific pathname patterns.  Most BSDs have the same interfaces, but their native interface (also supported in e.g. Linux) is fts – fts_open(), fts_read(), fts_children(), fts_set(), fts_close().
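
For illustration, a minimal nftw() tree walk goes something like this (a rough sketch, not drop-in production code):

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>

/* Called once per entry; nftw() handles all the traversal bookkeeping. */
static int print_entry(const char *fpath, const struct stat *sb,
                       int typeflag, struct FTW *ftwbuf)
{
    (void)sb; (void)ftwbuf;
    printf("%s%s\n", fpath, (typeflag == FTW_D) ? "/" : "");
    return 0;  /* nonzero would abort the walk */
}

int main(int argc, char *argv[])
{
    /* FTW_PHYS: do not follow symlinks; 20 = max file descriptors used. */
    if (nftw(argc > 1 ? argv[1] : ".", print_entry, 20, FTW_PHYS) == -1) {
        perror("nftw");
        return 1;
    }
    return 0;
}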

The exact same issue arises with code reading text files.  The proper interface to use is not fgets(), but getline() or getdelim().
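
The getline() version of the canonical read-all-lines loop, as a sketch:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char   *line = NULL;   /* getline() allocates and grows the buffer */
    size_t  size = 0;
    ssize_t len;

    while ((len = getline(&line, &size, stdin)) != -1) {
        /* len is the true length, so arbitrarily long lines and
           embedded NUL bytes are handled correctly. */
        fwrite(line, 1, (size_t)len, stdout);
    }

    free(line);
    return 0;
}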

And sorting.  All standard C libraries have a qsort() implementation, and most even have a re-entrant/thread-safe qsort_r(), which can sort any array.  You only need to provide a comparison function, and it handles the rest.
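
For example, sorting an int array; the comparison function is the only part you write (sketch):

#include <stdio.h>
#include <stdlib.h>

/* strcmp-style result: negative, zero, or positive. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);  /* avoids the overflow of plain x - y */
}

int main(void)
{
    int v[] = { 42, 7, 19, -3, 0 };
    size_t n = sizeof v / sizeof v[0];

    qsort(v, n, sizeof v[0], cmp_int);

    for (size_t i = 0; i < n; i++)
        printf("%d ", v[i]);   /* -3 0 7 19 42 */
    printf("\n");
    return 0;
}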

See?  Even though the code generated and shown was "bugless", in that it seems to work without errors, there are fundamental problems with it, due to inherent assumptions and horrible choices rooted in the long history of the C language.  It is like treating a Tesla as if it were a 1975 Datsun 100A, and writing an owner's handbook and examples for the former using the latter.  Hell, it might even work to some degree, but it is still fundamentally shit.

Just because some code compiles and produces the expected results for a few test cases, does not mean it is any good.
 

Online alm

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: 00
Re: ChatGPT fails EE 101
« Reply #8 on: March 16, 2023, 09:33:40 am »
We use GitHub Copilot a lot at work, which is a similar GPT-3 model, but trained purely on code. It will frequently suggest very helpful solutions where it combines knowledge from different files to, for example, suggest unit tests for a function (I'm not talking about functions copied from Stack Overflow here), or the implementation of a function based on its name and the context. Not all suggestions are correct (that's why they're suggestions), but we are regularly surprised by how it comes up with suggestions that require a quite broad understanding of the problem, beyond what's available in the code. The model has clearly gained an understanding of coding concepts on some level.

But clearly they didn't develop such a model dedicated to EE. EE would probably be a more difficult domain, because schematics aren't really language, and GPT is a language model. But the technology will only improve from here, so give it time, and Altium might start suggesting a level translator when you connect a 3.3V STM32 output to a 5V sensor.
« Last Edit: March 16, 2023, 09:52:02 am by alm »
 
The following users thanked this post: thm_w

Offline jpanhalt

  • Super Contributor
  • ***
  • Posts: 3297
  • Country: us
Re: ChatGPT fails EE 101
« Reply #9 on: March 16, 2023, 10:50:54 am »
@phil
That original reply has a remarkable resemblance to Wikipedia fog.  Makes one wonder. ;)
 

Offline thinkfat

  • Supporter
  • ****
  • Posts: 2149
  • Country: de
  • This is just a hobby I spend too much time on.
    • Matthias' Hackerstübchen
Re: ChatGPT fails EE 101
« Reply #10 on: March 16, 2023, 11:22:07 am »
I recently asked it to explain a certain piece of open-source software to me. The answers showed that it knew the software and had ingested the source code, but they were all bogus. It even hallucinated code to demonstrate its explanation, and provided a git hash for the revision of the code it based its answer on. Nothing matched reality. It had read the code but not made any sense of it. And it "pretended" to have given a valid answer, even when I pointed out how things didn't add up.

A dangerous tool in the hands of a fool.
Everybody likes gadgets. Until they try to make them.
 
The following users thanked this post: Nominal Animal

Offline vk6zgo

  • Super Contributor
  • ***
  • Posts: 7523
  • Country: au
Re: ChatGPT fails EE 101
« Reply #11 on: March 16, 2023, 01:27:21 pm »
Quote from: jpanhalt
@phil
That original reply has a remarkable resemblance to Wikipedia fog.  Makes one wonder. ;)

It is not uncommon to find errors on Wikipedia.
There was an "Amateur Radio Wiki" for a while, which has now become an advertising scam.
When it still really was an attempt at a "wiki", it had some reasonably good basic information on conventional linear power supply circuitry.

Unfortunately, amongst this was a quite ordinary bridge rectifier, which they stated to be a "Kratz" rectifier.
Despite never having heard it referred to by that name, I Googled it & found no mention of any "Mr Kratz", so I edited the article to read "bridge rectifier".
Later, I got to thinking about the radio pioneer Mr Graetz, & concluded that was who it was meant to be.

If ChatGPT had dutifully absorbed that wiki, it would no doubt have repeated the mistake.
"Garbage in, garbage out!"
 

Offline phil from seattle (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1029
  • Country: us
Re: ChatGPT fails EE 101
« Reply #12 on: March 16, 2023, 04:01:44 pm »
Quote from: jpanhalt
@phil
That original reply has a remarkable resemblance to Wikipedia fog.  Makes one wonder. ;)

I wouldn't be surprised if Wikipedia is a big source of GPT's input.  Though, in theory, it should draw from a wide range of sources, and its builders supposedly do things like assess the veracity of each source and compare various sources for consistency.  This is why the social media side of it is pernicious: it's hard to distinguish a widely repeated lie from the truth.

One thing about Wikipedia errors - they usually get corrected.  But using a voltage divider to translate up?  I doubt that is written anywhere.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6671
  • Country: nl
Re: ChatGPT fails EE 101
« Reply #13 on: March 16, 2023, 04:23:42 pm »
Quote from: alm
We use GitHub Copilot a lot at work, which is a similar GPT-3 model, but trained purely on code.

I suspect that, much like Bing AI, they use traditional search to put good code in the context (with "good" being determined by metrics which are hard to incorporate during training, such as the ranking of a Stack Overflow answer). I also suspect it's fine-tuned to make subtle changes to variable names and comments.

They just have to keep this a secret for obvious reasons.
 

Online alm

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: 00
Re: ChatGPT fails EE 101
« Reply #14 on: March 16, 2023, 04:46:38 pm »
Quote from: Marco
I suspect that, much like Bing AI, they use traditional search to put good code in the context (with "good" being determined by metrics which are hard to incorporate during training, such as the ranking of a Stack Overflow answer). I also suspect it's fine-tuned to make subtle changes to variable names and comments.

They just have to keep this a secret for obvious reasons.
Your suspicions are wrong. The suggestions it gives are much shorter, and are very much integrated with the code you're working on. Like refactoring a bunch of functions into a class. Or inferring the relationship between a JSON file and an object representing that JSON data, which is not from any kind of standard library.

It will also recognize similar patterns in other parts of the code, but it will figure out which parts need to change based on its understanding of language (parameter names, but also English strings). It can summarize what code is doing in a comment. It's certainly not perfect, but it's much smarter than other AI coding assistants.

Offline phil from seattle (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1029
  • Country: us
Re: ChatGPT fails EE 101
« Reply #15 on: March 16, 2023, 06:34:17 pm »
You are complaining about the code not being perfect, but that's not the point.  It gets close and can be made correct without a huge amount of effort.  The level-shifting example wasn't even in the right ballpark.  I assume you didn't watch Dave's videos on ChatGPT.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8001
  • Country: fi
Re: ChatGPT fails EE 101
« Reply #16 on: March 16, 2023, 06:36:30 pm »
Quote from: phil from seattle
You are complaining about the code not being perfect, but that's not the point.  It gets close and can be made correct without a huge amount of effort.

Total and utter 100% bullshit in any non-trivial project.

And trivial demonstrations are uninteresting.
 

Offline phil from seattle (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1029
  • Country: us
Re: ChatGPT fails EE 101
« Reply #17 on: March 16, 2023, 06:59:11 pm »
Quote from: Siwastaja
Quote from: phil from seattle
You are complaining about the code not being perfect, but that's not the point.  It gets close and can be made correct without a huge amount of effort.

Total and utter 100% bullshit in any non-trivial project.

And trivial demonstrations are uninteresting.
Tough crowd.  Did you look at the videos?  I was a skeptic at one point too.  You won't get full apps out of it, but as an assistant, it is proving its value.
 

Online alm

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: 00
Re: ChatGPT fails EE 101
« Reply #18 on: March 16, 2023, 07:03:56 pm »
Quote from: Siwastaja
Total and utter 100% bullshit in any non-trivial project.

And trivial demonstrations are uninteresting.
Strong words. Have you tried it in a non-trivial project? We have, and we're impressed. It's a great help and it certainly improves productivity of developers. It just won't replace one anytime soon.
 
The following users thanked this post: thm_w, tooki, george.b, phil from seattle

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6671
  • Country: nl
Re: ChatGPT fails EE 101
« Reply #19 on: March 17, 2023, 03:16:38 am »
Quote from: alm
Your suspicions are wrong. The suggestions it gives are much shorter, and are very much integrated with the code you're working on. Like refactoring a bunch of functions into a class. Or inferring the relationship between a JSON file and an object representing that JSON data, which is not from any kind of standard library.

It will also recognize similar patterns in other parts of the code, but it will figure out which parts need to change based on its understanding of language (parameter names, but also English strings). It can summarize what code is doing in a comment. It's certainly not perfect, but it's much smarter than other AI coding assistants.

Those are boilerplate transformations; the fact it can do those is impressive. But what does it do when you ask for an algorithm (or a synthesis of one)? The infamous example is the sparse matrix transpose.
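
(For context, the textbook counting-sort formulation, roughly sketched here and untested; this is the kind of answer you hope to get back:)

#include <stdlib.h>

/* Transpose an n_rows x n_cols CSR matrix (row_ptr, col_idx, val) into
   the CSR form of its transpose.  Two passes, O(nnz + n_cols).
   Caller provides t_row_ptr[n_cols+1], t_col_idx[nnz], t_val[nnz];
   error handling omitted. */
void csr_transpose(int n_rows, int n_cols,
                   const int *row_ptr, const int *col_idx, const double *val,
                   int *t_row_ptr, int *t_col_idx, double *t_val)
{
    int nnz = row_ptr[n_rows];

    /* 1. Count how many entries land in each row of the transpose. */
    for (int c = 0; c <= n_cols; c++)
        t_row_ptr[c] = 0;
    for (int k = 0; k < nnz; k++)
        t_row_ptr[col_idx[k] + 1]++;

    /* 2. Prefix sum turns the counts into row start offsets. */
    for (int c = 0; c < n_cols; c++)
        t_row_ptr[c + 1] += t_row_ptr[c];

    /* 3. Scatter each entry; next[] tracks the fill position per row. */
    int *next = malloc((size_t)n_cols * sizeof *next);
    for (int c = 0; c < n_cols; c++)
        next[c] = t_row_ptr[c];
    for (int r = 0; r < n_rows; r++)
        for (int k = row_ptr[r]; k < row_ptr[r + 1]; k++) {
            int dst = next[col_idx[k]]++;
            t_col_idx[dst] = r;
            t_val[dst]     = val[k];
        }
    free(next);
}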
 

Offline phil from seattle (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1029
  • Country: us
Re: ChatGPT fails EE 101
« Reply #20 on: March 17, 2023, 05:46:27 pm »
Quote from: alm
Your suspicions are wrong. The suggestions it gives are much shorter, and are very much integrated with the code you're working on. Like refactoring a bunch of functions into a class. Or inferring the relationship between a JSON file and an object representing that JSON data, which is not from any kind of standard library.

It will also recognize similar patterns in other parts of the code, but it will figure out which parts need to change based on its understanding of language (parameter names, but also English strings). It can summarize what code is doing in a comment. It's certainly not perfect, but it's much smarter than other AI coding assistants.

Quote from: Marco
Those are boilerplate transformations; the fact it can do those is impressive. But what does it do when you ask for an algorithm (or a synthesis of one)? The infamous example is the sparse matrix transpose.
No one said it was the equivalent of a well-trained and experienced developer. But the AI is getting better. And currently, it is capable of understanding context, at least in software.  FWIW, GPT-4 is able to pass the bar exam at around the 90th percentile (the US test to become a licensed attorney).

Perhaps an interesting side note: given that the AI derives its answers from its input, how many of its answers will violate copyrights and/or patents?  Maybe there will come a day when an AI lawyer sues an AI engineer for patent infringement.  Maybe it's not that far off.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6671
  • Country: nl
Re: ChatGPT fails EE 101
« Reply #21 on: March 17, 2023, 06:05:52 pm »
Wait a minute, I never said it wasn't extremely useful. I'm saying that there are probably some things happening behind the curtain which make the "it's not copying" argument harder to maintain.

Bing AI is an advance on even GPT-4, because it can use up-to-date search results, with human-optimized quality metrics, as part of the context for a query. Doing the same thing for code, however, is hard to justify.
« Last Edit: March 17, 2023, 06:08:31 pm by Marco »
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8001
  • Country: fi
Re: ChatGPT fails EE 101
« Reply #22 on: March 17, 2023, 06:20:04 pm »
Quote from: phil from seattle
One thing about Wikipedia errors - they usually get corrected.  But using a voltage divider to translate up?  I doubt that is written anywhere.

You are STILL making the same wrong assumption: that ChatGPT "consumes information" and understands it at some level. That is not the case. It just combines language. Even if no one EVER claimed a voltage divider could translate voltage up, ChatGPT could STILL come up with this claim. It could then claim in the next sentence that a voltage divider can't translate voltage up. That's perfectly normal operation for ChatGPT.

This is also why the correctness of the training material does not matter much. Even when trained with "perfect" material, the output will have colossal mistakes.
 
The following users thanked this post: tooki

Offline phil from seattle (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1029
  • Country: us
Re: ChatGPT fails EE 101
« Reply #23 on: March 17, 2023, 06:56:39 pm »
Quote from: Siwastaja
Quote from: phil from seattle
One thing about Wikipedia errors - they usually get corrected.  But using a voltage divider to translate up?  I doubt that is written anywhere.

You are STILL making the same wrong assumption: that ChatGPT "consumes information" and understands it at some level. That is not the case. It just combines language. Even if no one EVER claimed a voltage divider could translate voltage up, ChatGPT could STILL come up with this claim. It could then claim in the next sentence that a voltage divider can't translate voltage up. That's perfectly normal operation for ChatGPT.

This is also why the correctness of the training material does not matter much. Even when trained with "perfect" material, the output will have colossal mistakes.

It doesn't need to understand something, but it should never "make things up". How could it possibly come up with using a divider to get a higher voltage? That just doesn't pass the sniff test.

And, I beg to differ, but the correctness of the training material (i.e. the input) is key.  When you ask it for today's train schedule, the answer had better be based on something in its input stream. A good AI would not only give you the published timetable, but also any social media posts noting that the train is running late.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 5964
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #24 on: March 17, 2023, 07:39:05 pm »
Quote from: phil from seattle
Perhaps an interesting side note: given that the AI derives its answers from its input, how many of its answers will violate copyrights and/or patents?
Definitely interesting.

Microsoft's GitHub Copilot is similarly problematic, because it has been trained on copyright-protected material (under various open source licenses) without tracking the license requirements at all.  In my opinion, if Microsoft wants to provide that kind of a service, they should be obligated to include all of their own proprietary source code (Windows, Office, etc.) in the training material as well.  If they believe Copilot does not violate copyrights, they should have no objection.

If they refuse, they are currently enabling copyright violation for profit, by suggesting exact sequences of copyrighted code to unknowing users (hiding the copyright of the suggested code), and should be penalized the exact same amount they themselves demand per copyright violation.

Unless, of course, they're back to their good ol' selves of a decade ago, claiming that open source is not copyrightable, only proprietary source code is.
 

