Can ChatGPT generate all the code for my Macgyver Project?
Does ChatGPT understand what 7 segment displays and shift registers are, and can it integrate that knowledge with Arduino programming? Let's find out!
This is Part 3 of the Macgyver project.
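For anyone following along, the core of what ChatGPT has to produce for a shift-register-driven 7-segment display is a digit-to-segment lookup table. Here's a minimal sketch in plain C of what that table looks like, assuming a common-cathode display and GFEDCBA bit order (the 74HC595/`shiftOut()` plumbing is deliberately omitted so the table itself can be checked on a PC):

```c
#include <stdint.h>

/* Segment patterns for digits 0-9 on a common-cathode 7-segment display.
 * Bit order assumed here: bit 0 = segment A ... bit 6 = segment G.
 * On an Arduino these bytes would be clocked out to a 74HC595; here we
 * just build and verify the lookup table itself. */
static const uint8_t DIGIT_SEGMENTS[10] = {
    0x3F, /* 0: A B C D E F   */
    0x06, /* 1:   B C         */
    0x5B, /* 2: A B   D E   G */
    0x4F, /* 3: A B C D     G */
    0x66, /* 4:   B C     F G */
    0x6D, /* 5: A   C D   F G */
    0x7D, /* 6: A   C D E F G */
    0x07, /* 7: A B C         */
    0x7F, /* 8: all segments  */
    0x6F, /* 9: A B C D   F G */
};

/* Returns the segment byte for a digit, or 0 (blank) if out of range. */
uint8_t segments_for_digit(int d) {
    return (d >= 0 && d <= 9) ? DIGIT_SEGMENTS[d] : 0;
}
```

For common-anode wiring the bytes would simply be inverted, and a different bit order just means permuting the bits; the interesting question in the video is whether ChatGPT gets these conventions right.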
How many human coders would get it exactly right from scratch the first time it's compiled and run? It seems to be a good starting point, which isn't a bad thing. It also depends on how well the human describes the problem.
Also, remember that ChatGPT is not even a finished product at this time, and it will be better and better as time goes on.
It's 80% there .. and there's 80% left to do. Just like real software engineering.
Yeah, expecting an AI to be 100% perfect the first time is unrealistic, just like expecting a human to be 100% right the first time.
But it got the vast majority of it correct, and considering AI chat is a relatively new tech, it's amazing that it's this good this soon.
And AI is only as good as the amount of training data it has. Every day humans write more and more stuff on the internet, and so AI can be trained on a larger and larger dataset. It can only get better and better. Probably quite a lot better.
Definitely going to change the face of programming, as well as other industries.
You can actually see how coding in the future might be done by speaking to your computer and getting it to write and tweak functions for you. You'll still have to understand how to code and type out some 'glue logic' but the time to write stuff should be greatly reduced.
It's 80% there .. and there's 80% left to do. Just like real software engineering.
The last 10% takes 90% of the time.
Yeah, but that is assuming the AI will only generate you the 'easy' stuff.
I'm not sure if that's a true statement.
AI is not bound by easy vs hard; it's bound by the pool of training data.
When you train it on something like stack overflow you are training it on the bits that people find difficult and ask for help on.
Not saying you're wrong, just that I don't think it's as clear cut as that.
How many human coders would get it exactly right from scratch the first time it's compiled and run? It seems to be a good starting point, which isn't a bad thing. It also depends on how well the human describes the problem.
I pulled this off just once. Though I was showing off to a friend and I was being ultra careful.
I don't think I could ever do it again...
Yea I maybe did it to flash an LED.
I've done quite a few microcontroller projects but there's no way I could have done any of them perfectly the very first time. I think breaking down the project for ChatGPT is the best way to do it (like we teach students ... Hmmm).
To take a ridiculously simple example, tell it you want a function to add two variables passed to it and return the result. It'll get that correct the first time. At some point a function will become prone to errors due to its complexity, the explanation the human provides to ChatGPT, or ChatGPT's misunderstanding of it. So, if you break the programming project down into functions for ChatGPT to work on, it will probably get you further towards the correct, fully-functioning program.
So ChatGPT is (maybe not yet) an abstraction layer?
Follow-up:
Me: can you write for me a c function. I want to pass it two variables and the function needs to add the two variables and return the result
ChatGPT: Certainly! Here's an example of a C function that adds two variables and returns the result:
int add(int a, int b) {
    int sum = a + b;
    return sum;
}
No errors ... But continue to make the function more complex and it will make an error eventually.
So will a human, but in this case the AI is only a few years old.
Ya, you have to think of an AI or neural net as an imperfect entity at a fundamental level.
When you train it on something like stack overflow you are training it on the bits that people find difficult and ask for help on.
At least it tries to be polite, unlike the downright unhelpful Stack Overflow mods.
Bing is getting a reputation for the opposite. Maybe they trained ChatGPT on GitHub and Bing on Stack Overflow.
Yea I maybe did it to flash an LED.
No.
Mine was a home-made state machine processor for a PIC microcontroller which generated a set of output controls based on the sequence and timing of door sensors for an alarm system. Yes, pulling it off surprised us both, as we were both software engineers. Made a simple $1.5k cash on that bet for 1.5 hours' work.
Does this mean you will not be doing the 74-series hardware solution?
There are legal/PR limitations on how good they can make Copilot-type systems (in the absence of AGI).
Bing improves on GPT by putting it in a hybrid expert system with some guided search results as context/citations, but it has the law and PR on its side. Copilot probably uses guided search results as context, same as Bing, but it can't really provide citations and lift the curtain ... whether the law is on its side remains to be seen; PR-wise it would certainly be a disaster.
Mine was a home-made state machine processor for a PIC microcontroller which generated a set of output controls based on the sequence and timing of door sensors for an alarm system. Yes, pulling it off surprised us both, as we were both software engineers. Made a simple $1.5k cash on that bet for 1.5 hours' work.
That's pretty good! In my career I wrote utility stuff in C but that wasn't my main job so I have an excuse.
I guess now I could leverage ChatGPT and perhaps get it right the first time.
On the theme of Dave trying out AI tools for electronics engineering, what about the DeepPCB AI autorouter? (Although some people were claiming it was fake and actually human-in-the-loop, given the 24-hour turnaround time.)
I've just finished routing a moderately complicated mixed-signal board. There sure is a lot of engineering know-how that goes into routing for signal integrity, so that the end result has good performance and low noise, and you use completely different techniques depending on whether you're routing two-layer or four-plus-layer. The end result is both art and science. I do get the feeling, though, that if AI trains on enough existing human-routed boards it will eventually do a half-decent job.
Anyway, might make for an interesting idea for another video in the Dave-does-AI series...
For the record, I only came across DeepPCB today in a Google search and have never used it, so this isn't an endorsement, but curiosity.
https://www.eevblog.com/forum/projects/deeppcb-beta-tensorflow-based-machine-learning-pcb-routing-online-free-trial/
thinkfat is saying it's fake. Probably worth trying, but remember that all it does is route. It does not claim to provide good signal integrity, etc.
If we are just talking routing signals from A to B, a human can easily be beaten with enough CPU horsepower.
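To illustrate the brute-force point, here's a toy Lee-style wavefront router in plain C: a BFS that finds a shortest rectilinear path from A to B on a small hypothetical grid. Real autorouters layer track widths, clearances, layers and vias on top of exactly this kind of exhaustive search, which is why raw CPU horsepower goes a long way:

```c
#include <string.h>

/* Lee-algorithm sketch: BFS wavefront on an 8x8 grid.
 * blocked[r][c] == 1 marks an obstacle, 0 a free cell.
 * Returns the length (in steps) of a shortest path from
 * (sr,sc) to (tr,tc), or -1 if no route exists. */
#define N 8
int route_len(int blocked[N][N], int sr, int sc, int tr, int tc) {
    int dist[N][N];
    memset(dist, -1, sizeof dist);          /* -1 = unvisited */
    int qr[N * N], qc[N * N], head = 0, tail = 0;
    dist[sr][sc] = 0;
    qr[tail] = sr; qc[tail] = sc; tail++;
    const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
    while (head < tail) {
        int r = qr[head], c = qc[head]; head++;
        if (r == tr && c == tc) return dist[r][c];
        for (int k = 0; k < 4; k++) {       /* expand the wavefront */
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < N && nc >= 0 && nc < N &&
                !blocked[nr][nc] && dist[nr][nc] < 0) {
                dist[nr][nc] = dist[r][c] + 1;
                qr[tail] = nr; qc[tail] = nc; tail++;
            }
        }
    }
    return -1;                              /* no route */
}
```

The hard part isn't finding *a* path; it's the constraints and the signal-integrity judgment calls layered on top, which is where the humans still earn their keep.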
That's kinda lame if it can't finish a known previously routed placement.
That would be the best way to test it, I think: get an existing design with all the constraints and rip up all the traces.
Maybe as an idea for a followup video: Try doing the exact same test with GitHub Copilot. It's supposedly significantly better at generating code.