ChatGPT fails EE 101

phil from seattle:
Playing around with ChatGPT I asked it the cheapest way to translate 3.3V logic to 5V.  It initially suggested a 74HC4050 which is a fail. I explained why that wouldn't work reliably.  Then, it suggested a voltage divider.  So, I asked it to show me.

Me: Can you show me how a voltage divider can translate to a higher voltage?

ChatGPT: A voltage divider is a simple circuit that can be used to divide a voltage into a lower value. However, it can also be used to scale up a lower voltage to a higher voltage, though it is not an ideal solution for voltage level translation between digital logic levels. Here is an example of how a voltage divider can be used to scale up a lower voltage to a higher voltage:

Let's assume you have a 3.3V signal that you want to translate to a 5V signal. A voltage divider can be constructed using two resistors connected in series between the 5V power supply and ground, with the output voltage taken from the connection between the two resistors. By selecting the proper values of the resistors, the voltage divider can scale up the 3.3V signal to a 5V signal.

For example, using a 10kΩ resistor between the 5V supply and the output and a 6.8kΩ resistor between the output and ground, the output voltage can be calculated as follows:

Vout = Vin x (R2 / (R1 + R2))
= 3.3V x (6.8kΩ / (10kΩ + 6.8kΩ))
= 2.06V

This output voltage is still lower than the desired 5V level, but it can be amplified using an op-amp or transistor circuit to achieve the desired voltage level. However, this method is not ideal for voltage level translation in digital circuits, as it can introduce additional noise and may not provide reliable results in all situations.
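A quick sanity check of ChatGPT's own formula, done here in Python with the 10kΩ and 6.8kΩ values it picked, shows even its arithmetic is off:

```python
# Voltage divider output: Vout = Vin * R2 / (R1 + R2)
# Using ChatGPT's own numbers: Vin = 3.3 V, R1 = 10 kΩ, R2 = 6.8 kΩ
def divider_vout(vin, r1, r2):
    """Output of a resistive divider; always <= vin for positive resistors."""
    return vin * r2 / (r1 + r2)

vout = divider_vout(3.3, 10e3, 6.8e3)
print(f"Vout = {vout:.2f} V")  # ~1.34 V, not the 2.06 V ChatGPT claims

# A passive divider can only attenuate: Vout/Vin = R2/(R1+R2) < 1,
# so no choice of resistors can turn a 3.3 V signal into 5 V.
```

So the claimed 2.06V doesn't even follow from its own equation, and the ratio R2/(R1+R2) is below 1 no matter what, which is the whole reason a divider can't translate up.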

So, I should use an op amp after the voltage divider. I didn't expect much, but it must be reading some real garbage to build up its knowledge base. What I found odd is that it never pointed me towards HCT or LV logic. I'd say EEs' jobs are safe for now.

T3sl4co1l:
You're expecting the wrong thing from it.  It's a language model, not a knowledge / inference engine.  It knows the language of a wide variety of subjects, but it doesn't know how to work with them beyond what it can quote and massage.  And probability being what it is, you're most likely to get answers from the most common phrases, topics, subjects, etc. on the internet.

So, take the average quality of internet posts, then remove what little brain was behind them (to put it rather uncharitably...). :P

Tim

phil from seattle:
I dunno, it should be able to at least summarize existing knowledge.  It does surprisingly well with programming.

beenosam:
I'm not surprised. ChatGPT seems to be hyped by a lot of laypeople. Whenever I dug into it for anything beyond superficial things, it just kind of fell flat.

Nominal Animal:

--- Quote from: phil from seattle on March 16, 2023, 04:00:07 am ---I dunno, it should be able to at least summarize existing knowledge.  It does surprisingly well with programming.

--- End quote ---
You do realize that the majority of code produced is pretty crap?  The equivalent of blocks glued together with hot snot?  With the reliability of soap bubbles?

ChatGPT does not summarize existing knowledge.  It summarizes existing writing, without understanding the difference between correct and incorrect.

When the majority of existing writing is quite crappy (structurally sound, but horrible content-quality-wise), even a crude summary of it looks "surprisingly good" on the surface.

The old adage is apt here: "A billion flies cannot be wrong.  Shit tastes good."