ChatGPT fails EE 101

Berni:
These GPT AI are a tool for humans, not a replacement for humans.

All of these large language model AIs appear very smart on the surface, but they are just glorified statistics over a giant pile of human-written text.

They are useful in a similar sense as Wikipedia: when you need a condensed piece of information quickly, without having to do actual research and dig through a pile of sources. The speed at which GPT models can retrieve and assemble the desired piece of information out of the collective human knowledge is far beyond what an actual human could manage. You just need to type in a short question and you get a tailor-made answer for your specific use case.

It is similar to asking a question on this forum, except that the answer comes instantly and there is no drama to read through before you actually get to the answer. The answer could also be wrong (as it might be on this forum too), but it at least points you in the right direction; you can still double-check the answer and dig deeper afterwards.

For programming, ChatGPT works rather well too. Just don't be in the mindset of making it do ALL the work for you. Ask it for snippets of code, like algorithms or library examples, then work those snippets into your code yourself. Still better than copy-pasting from StackOverflow.

Nominal Animal:

--- Quote from: Berni on May 05, 2023, 06:11:21 am ---It is similar to asking a question on this forum, except that the answer comes instantly and there is no drama to read through before you actually get to the answer.
--- End quote ---
No, I believe it is similar to asking a question at StackOverflow/StackExchange network, not here.

Here, at least in the threads I have participated in, it is common that the first few responses ask for clarification or completely miss the mark.  The ensuing discussion clarifies the actual problem the OP is having, and then suggested solutions are provided.  It usually takes at least a dozen posts (counting only those with information) to reach a truly "good" solution.

The difference is interaction.

Like in this very post, I'm giving you pushback on that assertion.  This will (hopefully!) refine both of our understandings, by adding the other's reasoning to our own.  (That is my intent, at least.  I do not care whether any of you come to agree with me; I only want us to share the reasons for our opinions, because it is those reasons that matter, not the opinions themselves.  I am very serious about this: the people I appreciate most are those who disagree with me but describe why.  Including in real life.)

In some cases, when I read an interesting question, I myself do not know the answer immediately: I only know a few methods that I can employ to find the solution.  One reason for the verbosity of my posts is that I do not simply post "the answer", I describe how I arrived at it, and what its upsides and downsides or benefits and limitations are.

You get nothing of this with GPT-based tools, or at Q&A-type sites like StackOverflow, StackExchange, Quora, etc.
It takes interaction, discussion, to get to a really good solution.  I hope that is not the "drama" you are referring to?

In C programming, a particularly complex question is whether one should check pointers for NULL or not.  There is no single answer, because the correct answer depends on the context.  For a couple of examples, realloc(NULL, N) is equivalent to malloc(N) and perfectly okay for N>0, free(NULL) is completely safe, but strlen(NULL) may crash the process.
It is exactly such context details regarding a solution that makes discussion more useful than just knowing the answer.
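All three of those cases can show up in one small helper.  This is a sketch of my own (the `append` helper is purely illustrative, not from any particular codebase), showing which NULL checks the C standard makes redundant and which one it does not:

```c
#include <stdlib.h>
#include <string.h>

/* Append 'src' to a heap-allocated string 'dst', which may be NULL.
   realloc(NULL, n) behaves like malloc(n), so no NULL check is needed
   before the call, and free(NULL) is a safe no-op.  strlen(NULL) is
   undefined behaviour, however, so that is the one we must guard. */
char *append(char *dst, const char *src)
{
    if (!src)
        return dst;                          /* guard strlen(NULL) */

    size_t dlen = dst ? strlen(dst) : 0;     /* guard strlen(NULL) again */
    size_t slen = strlen(src);

    char *tmp = realloc(dst, dlen + slen + 1);  /* dst == NULL is fine */
    if (!tmp) {
        free(dst);      /* free(NULL) would also have been fine */
        return NULL;
    }

    memcpy(tmp + dlen, src, slen + 1);       /* copy including '\0' */
    return tmp;
}
```

Usage is simply `s = append(s, "...")` starting from `s = NULL`; the same call works for the first and every subsequent append precisely because of those guarantees.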

Switching to EE: Flicker irritates me, so I once decided to find the difference between PDM and PWM.  (Actually it was some research to find out how to do better than PWM.)  I discovered that with PDM, by limiting the dynamic range (to a symmetric section around the middle), I could trivially ensure the switching noise is pushed to much higher frequencies, and thus is much easier to filter out.

(For example, limiting to 16..240 out of 256 (8-bit PDM with 87.5% of the dynamic range used, so "7.8-bit PDM", really) ensures that the maximum interval between transitions is 16 clock cycles.  The only content left at the lower frequencies is then harmonics of those higher switching frequencies.  This is why PDM is so often used in e.g. high-end audio stuff: the quantization noise is easy to filter out.)
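A first-order PDM (delta-sigma) modulator with that clamp takes only a few lines.  This is a hypothetical sketch of the idea as I described it above, not code from any product; the clamp bounds the run length so that a transition occurs at least every 16 clock cycles:

```c
#include <stdint.h>

/* First-order PDM modulator state: a simple phase accumulator. */
typedef struct { uint16_t acc; } pdm_t;

/* One clock cycle of 8-bit PDM with the input clamped to 16..240.
   Returns the output bit.  With the clamp, even the extreme levels
   produce a transition at least every 16 cycles, so all switching
   energy sits at or above f_clk/16 and is easy to filter out. */
static int pdm_step(pdm_t *p, uint8_t level)
{
    if (level < 16)  level = 16;    /* limit dynamic range:      */
    if (level > 240) level = 240;   /* "7.8-bit PDM", as above   */

    p->acc += level;
    if (p->acc >= 256) {            /* accumulator overflow: emit 1 */
        p->acc -= 256;
        return 1;
    }
    return 0;                       /* no overflow: emit 0 */
}
```

Averaged over time, the output duty cycle equals `level/256`; for level 16 that is one pulse every 16 cycles, which is exactly the worst-case transition interval.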

It took a long discussion in this forum for me to realize why PWM is more useful in practice than PDM for many purposes.  Simply put, PDM has many more transitions than PWM, and in circuits where e.g. MOSFETs are used to generate the signal, that means a larger fraction of time is spent in the transition state, during which MOSFETs act as resistors, wasting power and generating waste heat.  This means that PDM is only useful when its minimum period is much longer than the transition period between MOSFET conducting and non-conducting states.
 _ _ _

It is exactly the type of programmer who just copies solutions from the web into their work without understanding their limitations/requirements/assumptions that will find GPT tools most useful.  That just happens to be the class of programmers I wish did not exist: they just produce more crap.  I don't want more crap.

In the late 1990s, Daniel J. Bernstein wrote some DNS services, service supervisors (daemontools), mail services (qmail and related utilities, including the Maildir format), and other stuff.

His work is of high quality.  The code is not exactly bug-free, but the frequency of bugs is radically lower than in e.g. Sendmail or BIND.  At the time, I maintained a couple of servers (file serving for classic Macs and Windows, printing, email, and web) for a small university department.  I often had to explain to other admins why I was using those instead of the "industry standard" Sendmail and BIND, even after the security fiascoes showed my approach was technically "better".  Even today, many developers express disgust at his code, even while admitting its structure and quality are way above average.

This is also the reason I do not publish my own projects.  At best, I would be ignored; at worst, I'd be barraged by questions from idiots who don't like the look of my code because it differs from what others show.  I have never been good at advertising or selling myself, so it just isn't worth it.  The very few who appreciate my output can just as well email me directly and ask for help; then we can discuss and adapt some code to best suit their needs, while I learn even more myself at the same time.  (I do not currently do paid work.  I either help for free, or ask people to pay it forward.  I still have some work-related mental health issues I'm working out, like an utter inability to handle stress.)

If you look at the CVEs related to Sendmail or BIND, you'll hopefully understand why I think the current code quality situation is utter crap.  DJB's example shows it could be much, much better.

I do believe security will be severely negatively impacted by increased GPT use, in the exact same way it would be if developers simply copied code from online sources even more often.

A good example of this is the fundamental security flaw in all currently used online forum packages, including SimpleMachines (this one currently being SMF 2.0.19).  Because they run under a single user account (as enforced by various web hosting control panels like cPanel, Plesk, Virtualmin, etc.), the code can always modify itself.  (Actually, I haven't asked gnif whether SMF here is configured thus.  It is usually possible to avoid by installing the software in a completely different way to what the vendor suggests, and then fixing the few cases where the code expects to be able to modify itself.)

Compare this to the Unix model, where applications run by multiple users are owned by a dedicated account, and only executable by those users.  Having the code be able to modify itself means that any failure to filter inputs potentially gives an attacker the opportunity to penetrate the system.  Most attacks are based on "script drops": uploading script files via an unprotected upload form or misconfigured upload settings, then using a cross-site scripting flaw (a failure to filter inputs, so that a client can choose which script ends up being executed) to execute said script.

That fundamental security flaw can only be solved by changing the web hosting model into one that allows at least two users and three groups per hosted web site.  Most operators (both individuals and companies) I've talked to about this are hostile towards such a change, even though it would trivially make a significant difference in security if the script interpreter also supported an option to refuse to execute scripts server-side when they are writable by the current user (which is trivial to implement).  For one, it would immediately disable all script-drop-type attacks.
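The check itself really is trivial.  This is a hypothetical helper (no real interpreter implements exactly this, and a real deployment would also compare the owner against a dedicated code-owner account rather than just the permission bits) showing the idea: refuse to run any server-side script the current user could have written or modified:

```c
#include <stdbool.h>
#include <stdio.h>      /* only for the usage example */
#include <sys/stat.h>
#include <unistd.h>

/* Refuse to execute a server-side script if the effective user could
   modify it.  A dropped script is necessarily writable by the account
   that wrote it, so this single check defeats script-drop attacks. */
static bool script_is_safe_to_run(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return false;                  /* cannot inspect it: refuse */

    if (st.st_uid == geteuid() && (st.st_mode & S_IWUSR))
        return false;                  /* owned and writable by us  */

    if (st.st_mode & (S_IWGRP | S_IWOTH))
        return false;                  /* group/world writable      */

    return true;
}
```

An interpreter would call this just before executing any script file; hosting with separate code-owner and runtime accounts makes the first condition impossible to satisfy for attacker-dropped files.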
 _ _ _

I apologize for this wall of text.  I did not write this in order to get anyone to adopt my opinion.  I wrote this so that anyone reading this thread can consider the basis behind my opinions, and decide for themselves; and I believe that basis is relevant to this thread.  Do note, for example, that I like the technology behind GPT, but dislike the typical use cases it is/will be put to.  Whether you end up agreeing with me or not is irrelevant: that is affected by personal factors like life experience, and will naturally vary from person to person.  It is the reasoning behind opinions that is important, not the opinions themselves.

Berni:
You are certainly not going to have an in-depth technical discussion with ChatGPT; that is what forums like this are good at.

But in most cases someone is trying to solve an already-solved problem; they just haven't seen the solution yet. This is where ChatGPT works well. It can distill an answer in the relevant context of what the person is trying to do. It also does it in 10 seconds, rather than waiting for a response from someone on a forum or digging through a lot of search results. Those sorts of beginner questions also take a lot of effort from the experts on forums to answer.

There are a lot of things that I have no clue about, yet ChatGPT is a nice tool for quickly getting pointed in the right direction so I know what to look into. Or sometimes I am just lazy and don't feel like reading the horribly put-together documentation for a given Python library for doing the thing I want, so I ask it for an example program that I can use as a starting point, then go from there. It saves me time. Then once the library does something and I like it, I can go deeper in and properly figure it out. This keeps me from wasting time on a solution that turns out not to work well anyway.

Like every tool, ChatGPT has its limits. Just don't use it for the things it is not good at; maybe future versions will improve in those regards.

As for software quality going downhill, that's mostly just the fault of people not even bothering to understand the thing they are coding. These days it is all a tangled mess of multiple layers of libraries and frameworks, so it is hard to even understand what is really going on under your code. To the point where new software developers are comfortable with everything under their code being a mysterious black box.

alm:

--- Quote from: Berni on May 05, 2023, 10:04:39 am ---As for software quality going downhill, that's mostly just the fault of people not even bothering to understand the thing they are coding. These days it is all a tangled mess of multiple layers of libraries and frameworks, so it is hard to even understand what is really going on under your code. To the point where new software developers are comfortable with everything under their code being a mysterious black box.

--- End quote ---
By that logic, things have been going downhill ever since people started using ICs for logic gates instead of building them from discrete components. Very few people have an understanding spanning from their programming language, to operating systems, to modern processor architecture, all the way down to semiconductor physics, so at some point everyone abstracts what they are working on and considers it a fairly well-behaved black box that performs according to its specifications. Abstraction is how we were able to build increasingly complicated systems. Of course abstractions break down in edge cases, so for some types of work you need to look beyond the abstraction. Someone writing an operating system or boot loader is going to need to be more familiar with the underlying hardware than someone writing a Windows application. Some systems (software or hardware) may be a bad fit for the abstraction: for example, early Intel Pentium CPUs and their FDIV bug.

Is the problem with abstraction as a principle, or just a mismatch between that particular abstraction and that particular system?

Nominal Animal:

--- Quote from: alm on May 05, 2023, 11:35:23 am ---
--- Quote from: Berni on May 05, 2023, 10:04:39 am ---As for software quality going downhill, that's mostly just the fault of people not even bothering to understand the thing they are coding. These days it is all a tangled mess of multiple layers of libraries and frameworks, so it is hard to even understand what is really going on under your code. To the point where new software developers are comfortable with everything under their code being a mysterious black box.

--- End quote ---
By that logic, things have been going downhill ever since people started using ICs for logic gates instead of building them from discrete components. Very few people have an understanding spanning from their programming language, to operating systems, to modern processor architecture, all the way down to semiconductor physics, so at some point everyone abstracts what they are working on and considers it a fairly well-behaved black box that performs according to its specifications.
--- End quote ---
No.  The problem is they abstract the things into oracles, assuming they give the answer the programmer wants, instead of the answer the code asks for; completely disregarding the specifications.

Various serial libraries are good examples of this.  They typically assume no write ever fails and that short writes never occur, and don't even bother checking the return values from the low-level I/O functions.  That is absolutely not what the specifications say.  (It's also why I prefer to use the termios layer directly on all non-Windows OSes, even in higher-level languages like Python.)
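What those libraries skip is only a few lines of code.  A minimal sketch of a correct sender, using nothing beyond POSIX write(2): it loops on short writes, retries on signal interruption, and reports real errors to the caller instead of swallowing them:

```c
#include <errno.h>
#include <unistd.h>

/* Write exactly 'len' bytes to 'fd', or fail with -1 (errno set).
   write(2) is specified to possibly transfer fewer bytes than asked
   (a "short write") and to possibly fail with EINTR, so a correct
   sender must loop; assuming one call suffices is the oracle error. */
static ssize_t write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;

    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted by a signal: retry */
            return -1;             /* real error: report to caller   */
        }
        p += n;                    /* short write: advance and retry */
        left -= (size_t)n;
    }
    return (ssize_t)len;
}
```

The same loop structure applies to any byte-stream descriptor (serial port, pipe, socket); only the error set differs per the specification of the underlying file type.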
