Author Topic: ChatGPT fails EE 101  (Read 19005 times)


Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2096
  • Country: us
    • KE5FX.COM
Re: ChatGPT fails EE 101
« Reply #50 on: May 04, 2023, 06:38:03 pm »
  Unless you're willing to cut corners and make the gadgets and tools and test equipment at minimum possible price, expect your job to be cut within the next decade or so.

Those are the people who will be replaced first.

Which is fine.
 

Offline RJSV

  • Super Contributor
  • ***
  • Posts: 2768
  • Country: us
Re: ChatGPT fails EE 101
« Reply #51 on: May 04, 2023, 08:06:16 pm »
Nominal Animal:
   I had to smile again as I read through the trade-offs in the last couple of posts.  Now, I know we are friends, so please don't think I'm being solicitous here, but I was just thinking the same words:
   "Lowest common denominator".

   That dynamic is dissatisfying from a professional standpoint, even if it offers a short-term apparent advantage.  So I'm taking the Nominal Animal side on the issue of which safety mechanism to adopt.  I guess it's just not a simple, binary dynamic (to use an over-used term, lol).

   I mean, the whole software-driven automobile dynamic is still way young.  (My local town has hardly even established safe sidewalks or bike lanes.)  In theory I'd certainly agree that unsafe cell phone use is a big cause of injury accidents, but taking humans out of all driving tasks is not a simple, binary choice...
   I need more time to ponder and to respond coherently.
 

Offline RJSV

  • Super Contributor
  • ***
  • Posts: 2768
  • Country: us
Re: ChatGPT fails EE 101
« Reply #52 on: May 04, 2023, 08:17:06 pm »
...but the heart of my response up there is Microdoser taking the statement 'seemingly better' and running with it as a 'conclusion' to be carried forward.
 

Offline Microdoser

  • Frequent Contributor
  • **
  • Posts: 423
  • Country: gb
Re: ChatGPT fails EE 101
« Reply #53 on: May 05, 2023, 12:14:21 am »
I wasn't disagreeing with Nominal; quite the opposite, in fact. ChatGPT doesn't write amazing code, it ignores copyright, and all the rest of it. I was more exploring how companies, authorities, and the average person in the real world will end up applying this new tool, notwithstanding all of that.

Many companies look at the wider profit and loss that arises from adopting a system or practice. If the cost of adopting a system (bugs, patches, reputation damage, recalls, returns) is less than the benefit of adopting it (lower production costs, faster deployment, lower wages), then it simply makes plain business sense to use it and maximise profit. These days a lot of companies have a policy of 'fuck it, release it now, and we'll fix the bugs with patches'; Intel did it recently with their graphics card drivers, for example. Of course there will always be companies in specialised areas with mission-critical systems that have to work the first time and be as reliable and error-free as possible (aerospace, for one), and they will want the best human coders, but the majority will want the speed and cost savings over 'good code'.

I'm not saying it's a desirable outcome, but I reckon it's the one that will happen.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7209
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #54 on: May 05, 2023, 05:14:49 am »
I'm not saying it's a desirable outcome, but I reckon it's the one that will happen.
Yes, I do agree.

As I've mentioned elsewhere, I find the technology behind GPT wonderful.  But, just like with GMO (another technology I find wonderful), it is the practical use cases it is being and will be used for that I object to, because they don't make the world a better place; they only make it easier for some Owners to extract rent, and produce more "pretty crappy" but popular things at lower cost, making quality things even more expensive, relatively speaking.  I seriously dislike the race to the bottom.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 5050
  • Country: si
Re: ChatGPT fails EE 101
« Reply #55 on: May 05, 2023, 06:11:21 am »
These GPT AIs are a tool for humans, not a replacement for humans.

All of these large language model AIs appear very smart on the surface, but they are just glorified statistics over a giant pile of human-written text.

They are useful in a similar sense to Wikipedia, when you need a condensed piece of information quickly without having to do actual research and dig through a pile of sources. The speed at which GPT models can retrieve and assemble the desired piece of information out of the collective human knowledge is far beyond what an actual human could manage. You just type in a short question and you get an answer tailor-made for your specific use case.

It is similar to asking a question on this forum, except that the answer comes instantly and there is no drama to read through before you actually get to the answer. The answer could also be wrong (as it might be on this forum, too), but it at least points you in the right direction; you can still double-check the answer and dig deeper afterwards.

For programming, ChatGPT works rather well too. Just don't be in the mindset of making it do ALL the work for you. Ask it for snippets of code, like algorithms or library usage examples, then work those snippets into your code yourself. It is still better than copy-pasting from StackOverflow.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7209
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #56 on: May 05, 2023, 07:41:33 am »
It is similar to asking a question on this forum, except that the answer comes instantly and there is no drama to read through before you actually get to the answer.
No, I believe it is similar to asking a question on the StackOverflow/StackExchange network, not here.

Here, at least in the threads I have participated in, it is common for the first few responses to ask for clarification or to completely miss the mark.  The ensuing discussion clarifies the actual problem the OP is having, and only then are solutions suggested.  Usually it takes at least a dozen posts (counting only those with information) to reach a truly "good" solution.

The difference is interaction.

Like in this very post, I'm giving you pushback on that assertion.  This will (hopefully!) refine both of our understandings, by adding the other's reasoning to our own.  (This is my intent, at least.  I do not care whether any of you come to agree with me; I only want us to share the reasons for our opinions, because it is those reasons that matter, not the opinions themselves.  I am very serious about this: the people I appreciate most are those who disagree with me but describe why.  Including in real life.)

In some cases, when I read an interesting question, I do not know the answer immediately myself: I only know a few methods I can employ to find the solution.  One reason for the verbosity of my posts is that I do not simply post "the answer"; I describe how I arrived at it, and what its benefits and limitations are.

You get nothing of this with GPT-based tools, or at Q&A-type sites like StackOverflow, StackExchange, Quora, etc.
It takes interaction, discussion, to get to a really good solution.  I hope that is not the "drama" you are referring to?

In C programming, a particularly complex question is whether one should check pointers for NULL or not.  There is no single answer, because the correct answer depends on the context.  For a couple of examples: realloc(NULL, N) is equivalent to malloc(N) and perfectly okay for N>0, and free(NULL) is completely safe, but strlen(NULL) may crash the process.
It is exactly such context details regarding a solution that make discussion more useful than just knowing the answer.
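
To make that concrete, here is a minimal sketch; the grow() helper and its name are mine, purely for illustration:

Code: [Select]
#include <stdlib.h>
#include <string.h>

/* Grow *buf to at least want bytes.  Returns 0 on success, -1 on failure. */
static int grow(char **buf, size_t want)
{
    /* realloc(NULL, want) is equivalent to malloc(want), so no NULL
       check is needed before the call, only on its result. */
    char *tmp = realloc(*buf, want);
    if (!tmp)
        return -1;             /* *buf is still valid; the caller frees it */
    *buf = tmp;
    return 0;
}

int main(void)
{
    char *buf = NULL;              /* no initial malloc() needed */
    if (grow(&buf, 16) == 0)
        strcpy(buf, "hello");      /* but strlen(NULL) here would crash */
    free(buf);                     /* free(NULL) is safe, so no check needed */
    return 0;
}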

Switching to EE: flicker irritates me, so I once decided to find out the difference between PDM and PWM.  (Actually, it was research into how to do better than PWM.)  I discovered that with PDM, by limiting the dynamic range to a symmetric section around the middle, I could trivially ensure the switching noise is pushed to much higher frequencies, and is thus much easier to filter out.

(For example, limiting an 8-bit PDM to 16..240 of 256, using 87.5% of the dynamic range (so "7.8-bit PDM", really), ensures that the maximum interval between transitions is 16 clock cycles.  Only harmonics of those higher frequencies appear at the lower frequencies.  This is why PDM is so often used in e.g. high-end audio: the quantization noise is easy to filter out.)

It took a long discussion on this forum for me to realize why PWM is more useful in practice than PDM for many purposes.  Simply put, PDM has many more transitions than PWM, and in circuits where e.g. MOSFETs generate the signal, that means a larger fraction of the time is spent in the transition state, during which the MOSFETs act as resistors, wasting power and generating waste heat.  This means that PDM is only useful when its minimum period is much longer than the transition time between the MOSFET's conducting and non-conducting states.
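
For the curious, here is a minimal first-order PDM (sigma-delta) sketch of my own illustrating the clamping trick: with the level clamped to 16..240, the 8-bit accumulator is guaranteed to overflow at least once every 16 additions, bounding the interval between transitions exactly as described above.

Code: [Select]
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t level = 16;    /* target level; clamp to 16..240 out of 256 */
    if (level < 16)  level = 16;
    if (level > 240) level = 240;

    uint16_t acc = 0;       /* first-order sigma-delta accumulator */
    for (int i = 0; i < 64; i++) {
        acc += level;                 /* accumulate the target level */
        int out = (acc >= 256);       /* output a 1 on accumulator overflow */
        acc &= 0xFF;                  /* keep only the remainder */
        putchar(out ? '1' : '0');     /* level=16 gives a 1 every 16 cycles */
    }
    putchar('\n');
    return 0;
}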
 _ _ _

It is exactly the type who just copies solutions from the web into their work without understanding the solutions' limitations, requirements, and assumptions who will find GPT tools the most useful.  That just happens to be the class of programmers I wish did not exist: they just produce more crap.  I don't want more crap.

In the late 1990s, Daniel J. Bernstein wrote DNS services, service supervisors (daemontools), mail services (qmail and related utilities, including the Maildir format), and other stuff.

His work is of high quality.  The code is not exactly bug-free, but the frequency of bugs is radically lower than in e.g. Sendmail or BIND.  At the time, I maintained a couple of servers (file serving for classic Macs and Windows, printing, and email and web) for a small university department.  I often had to explain to other admins why I was using DJB's tools instead of the "industry standard" Sendmail and BIND, even after the security fiascoes showed my approach was technically "better".  Even today, many developers express disgust at his code, even while admitting its structure and quality are way above average.

This is also the reason I do not publish my own projects.  At best, I would be ignored; at worst, I'd be barraged by questions from idiots who don't like the look of my code because it differs from what others show.  I have never been good at advertising or selling myself, so it just isn't worth it.  The very few who appreciate my output can just as well email me directly and ask for help; then we can discuss and adapt some code to best suit their needs, while I learn even more at the same time.  (I do not currently do paid work.  I either help for free, or ask people to pay it forward.  I have some work-related mental health issues, like an utter inability to handle any stress, that I'm still working out.)

If you look at the CVEs related to Sendmail or BIND, you'll hopefully understand why I think the current code quality situation is utter crap.  DJB's example shows it could be much, much better.

I do believe security will be severely negatively impacted by increased GPT use, in exactly the same way it would be if developers simply copied code from online sources even more often.  A good example of this is the fundamental security flaw in all currently used online forum packages, including SimpleMachines (this one currently being SMF 2.0.19): because they run under a single user account (as enforced by various web hosting control panels like cPanel, Plesk, Virtualmin, etc.), the code can always modify itself.  (Admittedly, I haven't asked gnif whether SMF here is configured thus.  It is usually possible to avoid this by installing the software in a completely different way to what the vendor suggests, and then fixing the few cases where the code expects to be able to modify its own code.)

Compare this to the Unix model, where applications run by multiple users are owned by a dedicated account and are only executable by those users.  Having the code able to modify itself means that any failure to filter inputs potentially gives an attacker the opportunity to penetrate the system.  Most attacks are based on "script drops": uploading script files through an unprotected upload form or misconfigured upload settings, then using a cross-site scripting flaw (a failure to filter inputs, so that a client can choose which script ends up being executed) to execute said script.

That fundamental security flaw can only be solved by changing the web hosting model into one that allows at least two users and three groups per hosted web site.  Most operators (both individuals and companies) I've talked to about this are hostile towards such a change, even though it would trivially make a significant difference in security if the script interpreter also supported an option to refuse to execute scripts server-side when they are writable by the current user (which is trivial to implement; see the sketch below).  For one, it would immediately disable all script-drop attacks.
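
A minimal sketch of that interpreter-side check, assuming a POSIX environment; this is my own illustration of how trivial it is, not any existing interpreter's code:

Code: [Select]
#include <unistd.h>
#include <sys/stat.h>

/* Refuse to execute any server-side script the current user could modify.
   Returns 1 if the script may be executed, 0 if it must be refused. */
static int safe_to_execute(const char *path)
{
    struct stat st;

    if (stat(path, &st) == -1)
        return 0;              /* cannot even stat it: refuse */
    if (st.st_mode & S_IWOTH)
        return 0;              /* world-writable: refuse outright */
    if (access(path, W_OK) == 0)
        return 0;              /* writable by us (owner, group, or ACL):
                                  a dropped script would be caught here */
    return 1;
}

With this in place, a script uploaded by the web application itself is necessarily writable by the account the interpreter runs as, so a script drop can never be executed.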
 _ _ _

I apologize for this wall of text.  I did not write it to get anyone to adopt my opinion.  I wrote it so that anyone reading this thread can consider the basis behind my opinions, and decide for themselves; and I believe that basis is relevant to this thread.  Do note, for example, that I like the technology behind GPT, but dislike the typical use cases it is and will be put to.  Whether you end up agreeing with me or not is irrelevant: that is affected by personal factors like life experience, and will naturally vary from person to person.  It is the reasoning behind opinions that is important, not the opinions themselves.
« Last Edit: May 05, 2023, 07:48:32 am by Nominal Animal »
 
The following users thanked this post: MK14, Microdoser

Offline Berni

  • Super Contributor
  • ***
  • Posts: 5050
  • Country: si
Re: ChatGPT fails EE 101
« Reply #57 on: May 05, 2023, 10:04:39 am »
You are certainly not going to have an in-depth technical discussion with ChatGPT; that is what forums like this one are good for.

But in most cases, someone is trying to solve an already-solved problem; they just haven't seen the solution yet. This is where ChatGPT works well. It can distill an answer in the relevant context of what the person is trying to do, and it does so in 10 seconds, rather than making you wait for a response from someone on a forum or dig through a lot of search results. These sorts of beginner questions also take a lot of effort from the experts on forums to answer.

There are a lot of things I have no clue about, yet ChatGPT is a nice tool for quickly getting pointed in the right direction so I know what to look into. Or sometimes I am just lazy and don't feel like reading the horribly put-together documentation for a given Python library, so I ask it for an example program that I can use as a starting point, then go from there. It saves me time. Then, once the library does something and I like it, I can dig deeper and properly figure it out. This keeps me from wasting time on a solution that turns out not to work well anyway.

Like every tool, ChatGPT has its limits too. Just don't use it for the things it is not good at; maybe the next versions of it will improve in those regards.

As for software quality going downhill, that's mostly just the fault of people not even bothering to understand the thing they are coding. These days it is all a tangled mess of multiple layers of libraries and frameworks, so it is hard to even understand what is really going on under your code. To the point where new software developers are comfortable with everything under their code being a mysterious black box.
 

Offline alm

  • Super Contributor
  • ***
  • Posts: 2903
  • Country: 00
Re: ChatGPT fails EE 101
« Reply #58 on: May 05, 2023, 11:35:23 am »
As for software quality going downhill, that's mostly just the fault of people not even bothering to understand the thing they are coding. These days it is all a tangled mess of multiple layers of libraries and frameworks, so it is hard to even understand what is really going on under your code. To the point where new software developers are comfortable with everything under their code being a mysterious black box.
By that logic, things have been going downhill ever since people started using ICs for logic gates instead of building them from discrete components. Very few people have an understanding that runs from their programming language through operating systems and modern processor architecture all the way down to semiconductor physics, so at some point everyone abstracts what they are working on and considers it a fairly well-behaved black box that performs according to its specifications. Abstraction is how we have been able to build increasingly complicated systems. Of course abstractions break down in edge cases, so for some types of work you need to look beyond the abstraction. Someone writing an operating system or boot loader is going to need to be more familiar with the underlying hardware than someone writing a Windows application. Some systems (software or hardware) may be a bad fit for the abstraction: for example, early Intel Pentium CPUs and their FDIV bug.

Is the problem with abstraction as a principle, or just a mismatch between that particular abstraction and that particular system?
 
The following users thanked this post: tooki

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7209
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #59 on: May 05, 2023, 01:33:08 pm »
As for software quality going downhill, that's mostly just the fault of people not even bothering to understand the thing they are coding. These days it is all a tangled mess of multiple layers of libraries and frameworks, so it is hard to even understand what is really going on under your code. To the point where new software developers are comfortable with everything under their code being a mysterious black box.
By that logic, things have been going downhill ever since people started using ICs for logic gates instead of building them from discrete components. Very few people have an understanding that runs from their programming language through operating systems and modern processor architecture all the way down to semiconductor physics, so at some point everyone abstracts what they are working on and considers it a fairly well-behaved black box that performs according to its specifications.
No.  The problem is that they abstract things into oracles, assuming they give the answer the programmer wants instead of the answer the code asks for, completely disregarding the specifications.

Various serial libraries are good examples of this.  They typically assume no write ever fails and short writes never occur, and don't even bother checking the return values of the low-level I/O functions.  That's absolutely not what the specifications say.  (It's also why I prefer to use the termios layer directly on all non-Windows OSes, even in higher-level languages like Python.)
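
For comparison, here is a minimal sketch of the kind of loop the specifications actually demand around POSIX write(); the helper and its name are mine:

Code: [Select]
#include <errno.h>
#include <unistd.h>

/* Write all len bytes of buf, handling short writes and interruptions.
   Returns 0 on success, -1 with errno set on a real error. */
static int write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;

    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n == -1) {
            if (errno == EINTR)
                continue;       /* interrupted by a signal: retry */
            return -1;          /* real error: report it */
        }
        p += n;                 /* short write: advance past what went out */
        len -= (size_t)n;       /* and retry with the remainder */
    }
    return 0;
}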
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 5050
  • Country: si
Re: ChatGPT fails EE 101
« Reply #60 on: May 05, 2023, 10:28:55 pm »
As for software quality going downhill, that's mostly just the fault of people not even bothering to understand the thing they are coding. These days it is all a tangled mess of multiple layers of libraries and frameworks, so it is hard to even understand what is really going on under your code. To the point where new software developers are comfortable with everything under their code being a mysterious black box.
By that logic, things have been going downhill ever since people started using ICs for logic gates instead of building them from discrete components. Very few people have an understanding that runs from their programming language through operating systems and modern processor architecture all the way down to semiconductor physics, so at some point everyone abstracts what they are working on and considers it a fairly well-behaved black box that performs according to its specifications. Abstraction is how we have been able to build increasingly complicated systems. Of course abstractions break down in edge cases, so for some types of work you need to look beyond the abstraction. Someone writing an operating system or boot loader is going to need to be more familiar with the underlying hardware than someone writing a Windows application. Some systems (software or hardware) may be a bad fit for the abstraction: for example, early Intel Pentium CPUs and their FDIV bug.

Is the problem with abstraction as a principle, or just a mismatch between that particular abstraction and that particular system?

Not saying you have to know how the electrons move trough the energy levels of transistors inside a CPU to be able to write python code.

Just that having some basic familiarity with the lower layers goes a long way in knowing how to approach the task in a way that works well all the way down the stack of layers, rather than doing whatever looks easiest in the single layer beneath you.

For example, to write software you don't even need to know what assembly language is, what memory is, or what a compiler does. You can approach it simply as "I write this line of code and the text shows up on screen". That might work for simple things, but once you start building bigger things and need them to be performant, you will benefit greatly from knowing things like why floating-point division is slow, what CPU cache memory is, how the compiler turns your class into code, how memory is allocated or garbage-collected, how the OS schedules tasks, how a graphics card renders things, etc. If you don't, then something as simple as string concatenation might make your application run 10000x slower than expected (see the sketch below). Copying code from the internet is not inherently a bad thing either; there is lots of very nice code out there that efficiently solves a given task. What is bad is blindly copy-pasting code snippets without even trying to understand what they actually do.
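
To make the string-concatenation trap concrete, here is a quick sketch of my own: strcat() has to rescan the destination from the start on every call, so building a string piecewise with it costs O(n^2) rather than O(n).

Code: [Select]
#include <stdio.h>
#include <string.h>

#define PIECES  10000
#define PIECE   "0123456789"

static char buf[PIECES * (sizeof PIECE - 1) + 1];

int main(void)
{
    /* Quadratic: strcat() walks the whole string to find its end, every time. */
    buf[0] = '\0';
    for (int i = 0; i < PIECES; i++)
        strcat(buf, PIECE);

    /* Linear: remember where the end is, and append there. */
    size_t end = 0;
    buf[0] = '\0';
    for (int i = 0; i < PIECES; i++) {
        memcpy(buf + end, PIECE, sizeof PIECE - 1);
        end += sizeof PIECE - 1;
    }
    buf[end] = '\0';

    printf("%zu bytes built\n", end);
    return 0;
}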

There will always be bad code out there since there will always be a significant proportion of bad programmers out of all the programmers out there.

Heck, I have written my own fair share of bad code when I just needed to quickly throw something together for a one-off task.
 

Offline Microdoser

  • Frequent Contributor
  • **
  • Posts: 423
  • Country: gb
Re: ChatGPT fails EE 101
« Reply #61 on: May 06, 2023, 12:39:28 pm »

Not saying you have to know how the electrons move trough the energy levels of transistors inside a CPU to be able to write python code.

Do they move through a trough?
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7209
  • Country: fi
    • My home page and email address
Re: ChatGPT fails EE 101
« Reply #62 on: May 06, 2023, 01:13:16 pm »
Not saying you have to know how the electrons move trough the energy levels of transistors inside a CPU to be able to write python code.
Do they move through a trough?
You failed the joke. (Me fail English, but Berni just had a typo, missing an h.)

It was supposed to be
Q: Do they move through a trough, though?
A: Well, it depends on the transistor energy level two.
Q: You mean...?
A: Yes, I am.  I am an ent, and I meant transistor type: pee-and-pee, or ent-pee-ent.

Edit: Ah, forget about this.  This has no potential whatsoever.
« Last Edit: May 06, 2023, 01:18:41 pm by Nominal Animal »
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2096
  • Country: us
    • KE5FX.COM
Re: ChatGPT fails EE 101
« Reply #63 on: May 27, 2023, 07:34:04 pm »
Thought this was an interesting interaction:

Quote
Q: Can you perform a dimensional analysis of the skin depth equation?

Sure, I can help with that. The skin depth (δ) in a conductor is given by the equation:$$
\delta = \sqrt{\frac{2\rho}{\omega \mu}}
$$where:
- \(\delta\) is the skin depth
- \(\rho\) is the resistivity of the conductor
- \(\omega\) is the angular frequency
- \(\mu\) is the permeability of the conductor

The units of these quantities are:

- Skin depth (\(\delta\)): meters (m)
- Resistivity (\(\rho\)): ohm meters (Ω·m)
- Angular frequency (\(\omega\)): radians per second (rad/s)
- Permeability (\(\mu\)): henries per meter (H/m)

Let's perform the dimensional analysis:

The right-hand side of the equation is the square root of the quantity \(\frac{2\rho}{\omega \mu}\). The units inside the square root are:

- Numerator: \(\rho\) is in Ω·m
- Denominator: \(\omega \mu\) is in rad/s * H/m = H·rad/(s·m)

So, the units inside the square root are:$$
\frac{Ω·m}{H·rad/(s·m)} = \frac{Ω·m^2·s}{H·rad}
$$Since 1 Ω = 1 H/s, we can substitute H/s for Ω in the numerator:
$$
\frac{H·m^2·s^2}{H·rad} = \frac{m^2·s}{rad}
$$Taking the square root of these units gives:$$
\sqrt{\frac{m^2·s}{rad}} = \frac{m·\sqrt{s}}{\sqrt{rad}}
$$However, radian is a dimensionless unit, so we can ignore it in the dimensional analysis. This leaves us with:$$
m·\sqrt{s}
$$This is not the same as the units of skin depth, which is meters (m). Therefore, it seems there is a discrepancy in the units.

However, this discrepancy is resolved when we consider that the square root of seconds is not a physical unit. In the context of the skin depth equation, the square root of the resistivity divided by the product of angular frequency and permeability has the effect of producing a length, which is the skin depth in meters.

So, while the dimensional analysis might seem to not work out at first glance, the equation is indeed dimensionally consistent when interpreted correctly.
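
For the record (my own check, not part of the ChatGPT transcript): the units do work out.  The substitution 1 Ω = 1 H/s is correct, but the model then failed to cancel the seconds; carrying the substitution through gives$$
\frac{Ω·m^2·s}{H·rad} = \frac{(H/s)·m^2·s}{H·rad} = \frac{m^2}{rad}
$$and since the radian is dimensionless, the square root is simply meters, exactly the unit of skin depth.  No stray \(\sqrt{s}\) appears; the equation is dimensionally consistent, and that hand-waving closing paragraph is covering for an arithmetic slip.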
 

