but ... let's see ... it compiles in tool A, but the binary doesn't work. It doesn't compile in tool B at all. It emulates in tool C, but crashes randomly. Tool C has a debugger, but its user interface is a box of hammers and I'm trying to peel a banana.
Tools A, B and C all pretend to be toolchains for the same thing.
Other people suggest trying to load it onto a pig and seeing if it can walk there.
I can't speak for the tools you're using (or trying to); I would suggest looking at the backend. Likely the GUI is just a layer on top, using something else in the background. Arduino? GCC in a hidden process. Tinkercad? No idea -- maybe it's GCC actually emulating a CPU; more likely (from what others have said here) it's something much higher level, in which case beware of shortcuts and inconsistencies. Find out what it really is and see what its capabilities are.
Nothing worse than an underspecified tool. You don't go rummaging around someone's toolbox and pick up the swiss-army-knife-looking grinder that's also a drill, sander, exfoliator and cocktail mixer, and start using it without understanding what each of those functions does.
Well... you wouldn't ordinarily, but such is the "adventure" of using these "artist" tools, as you would say...
1) fast (small payloads to transport)
2) the comms channel never leaves the board.
3) the functions do not return any values.
Well -- there are probably only a few dozen, maybe a few hundred, pointers; a byte index would suffice. Compression!
And if there really are approx. 64k options to choose from -- well, for one thing your pointers won't fit in 64k anymore; you'll have to use a >128k AVR with long pointers to hold even a fraction of that many functions. But more importantly, you're probably better off mixing and matching those operations in a much more free-form manner, for example by transmitting the bytecode of a scripting language. Maybe it's just a sequence of ops, maybe it's a proper conditional language, maybe it's a whole-ass VM.
Put another way: it goes against the principle of least data. Why do you need 64k options when there will only ever be a few hundred, or a few thousand even? It's not fail-safe over that set. Use only what you need. Same as bandwidth in analog circuits: filter off what you don't need, and you buy yourself some noise immunity, etc.
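A minimal sketch of the byte-index idea (handler names and table contents invented for illustration; on a real AVR the table would likely live in flash via PROGMEM):

#include <stdint.h>

/* Hypothetical command handlers -- stand-ins for whatever the node does. */
static void led_on(uint8_t arg)   { (void)arg; /* ... */ }
static void led_off(uint8_t arg)  { (void)arg; /* ... */ }
static void led_fade(uint8_t arg) { (void)arg; /* ... */ }

/* The wire carries a one-byte index into this table, never a raw address. */
typedef void (*handler_t)(uint8_t);
static const handler_t handlers[] = { led_on, led_off, led_fade };
#define NUM_HANDLERS (sizeof handlers / sizeof handlers[0])

void dispatch(uint8_t opcode, uint8_t arg)
{
    /* A corrupted opcode can only pick a wrong-but-valid entry,
       or be rejected -- it can never jump into the weeds. */
    if (opcode < NUM_HANDLERS)
        handlers[opcode](arg);
}

Run that in a loop over a received byte stream and you already have the "sequence of ops" flavor of bytecode.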
And what about when a bit error does finally happen, and the call suddenly does return a lot of values? One missing bit and it jumps into the middle of a function prologue, and now the stack is garbage. (avr-gcc normally emits nearly bare functions when they're simple, but when more than a few registers are required, it pushes the call-saved ones onto the stack; or if a lot is needed, a whole block is allocated on the stack -- including anything that needs pointer access, like locally scoped arrays.) Impossible to debug: you'll never catch the error in action, it'll just randomly crash some day. Quality design!
Alternately, you could add some verification -- checking against a known list, for example (reminiscent of "COMEFROM" in certain languages*), or using ECC or CRC or a hash or what have you. That's a good idea on a serial link anyway, even one that never leaves the board.
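For example, a bare-bones CRC-8 over each frame (bitwise and table-free; the 0x07 polynomial is picked arbitrarily here -- any standard one does):

#include <stdint.h>
#include <stddef.h>

/* CRC-8, polynomial x^8 + x^2 + x + 1 (0x07), computed bit by bit. */
static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0;
    while (len--) {
        crc ^= *data++;
        for (uint8_t i = 0; i < 8; i++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

/* Sender appends crc8(payload); receiver drops anything that mismatches. */
int packet_ok(const uint8_t *pkt, size_t len)  /* len includes CRC byte */
{
    return len >= 2 && crc8(pkt, len - 1) == pkt[len - 1];
}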
Or make sure that every operation can fail, with no consequence, and be restarted with relative ease. Enable the watchdog so it always restarts into a known state. Blinking lights are fail-safe -- who cares. Running machinery, though? I sincerely hope you use a different approach!
*Not...
that one. Not sure what else has it, but the example I remember reading about is Ethereum smart contracts: the EVM has branch instructions, but they will only branch to specially marked destination tokens (JUMPDEST) -- COMEFROM-ish. It's a VM, which has to be secure, of course.
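And on the watchdog point above, a minimal AVR sketch (assuming avr-libc's <avr/wdt.h>; the loop body is a placeholder):

#include <avr/wdt.h>

int main(void)
{
    wdt_enable(WDTO_250MS);  /* hardware reset if the loop stalls >~250 ms */

    for (;;) {
        /* ... receive frame, verify CRC, dispatch ... */
        wdt_reset();         /* only pet the dog after a full healthy pass */
    }
}

One caveat: on many AVRs the watchdog remains enabled after the reset it causes, so clearing WDRF and taming the watchdog early in startup is part of the usual ritual.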
Think about those art installations that British guy does -- Mike, Mike, Mike... Mike's Electric Stuff? -- with all the LEDs. That sort of thing, but cheaper: one cheap processor per LED, distributed system and all that jazz.
Transport packet could be 2 bytes target, 2 bytes opcode (the address to execute) and 2 bytes of optional arguments. If target is 0x0000, that's a broadcast; if the MSB is set, it's a group number instead -- stuff along those lines. The transport code is extremely small.
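Something like this, as a sketch (field names, MY_ADDR and MY_GROUP are invented; the layout is just the scheme described above):

#include <stdint.h>
#include <stdbool.h>

#define MY_ADDR   0x0042u   /* this node's unique address (example value) */
#define MY_GROUP  0x0003u   /* group this node belongs to (example value) */

/* Six-byte frame: target, opcode, optional argument. */
typedef struct {
    uint16_t target;        /* 0x0000 = broadcast; MSB set = group number */
    uint16_t opcode;        /* address or table index to execute */
    uint16_t arg;
} frame_t;

static bool frame_is_for_me(const frame_t *f)
{
    if (f->target == 0x0000) return true;                    /* broadcast */
    if (f->target & 0x8000)  return (f->target & 0x7FFF) == MY_GROUP;
    return f->target == MY_ADDR;
}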
I think Mike does more networking-oriented solutions -- passing command or data packets, that sort of thing. Though I don't recall if he's gone into much detail about the comms he likes to use (understandable -- that sort of stuff may be proprietary, and it's very boring as video content goes), so if he's watching, maybe he can offer some ideas here...
The higher-level solution really does get useful, though. Once everything is enumerated with unique addresses, just send a packet with a header saying "addressed to mr. 2304" and pass it along until it's received. Best of all, numerous solutions already exist -- stick it on Modbus, for example.
Or hook up an Ethernet PHY and run Profinet or EtherNet/IP, or just plain old TCP/IP. Leverage cheap cables and switches to extend your network as far as you like! It's a heavyweight protocol, to be sure, but it's popular for good reason!
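Back on the simple addressed-packet idea: the per-node logic is tiny (a daisy-chained serial link is assumed here, and uart_send/handle_payload are hypothetical helpers):

#include <stdint.h>

#define MY_ADDR 2304u

extern void uart_send(const uint8_t *buf, uint8_t len);    /* hypothetical */
extern void handle_payload(const uint8_t *p, uint8_t len); /* hypothetical */

/* Daisy chain: consume frames addressed to us, repeat everything else. */
void on_frame(const uint8_t *frm, uint8_t len)
{
    uint16_t dest = (uint16_t)frm[0] | ((uint16_t)frm[1] << 8);
    if (dest == MY_ADDR)
        handle_payload(frm + 2, len - 2);
    else
        uart_send(frm, len);          /* not ours -- pass it down the line */
}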
I come from a time when microcontrollers had 4k of ROM and 64 bytes of RAM. Simple does it. You can't spend 2k of code overhead on a transport protocol.
Ah, self-flagellation is a hard habit to break!
You're more than welcome to plop assembly into your .ino sketch, if you must... I'm sure Tinkercad would savor every character of that...
Today they run a Python interpreter on a virtual processor architecture, in a virtual machine, on a processor with a totally different architecture, in an emulated environment, under a hypervisor. Communication between boards uses HTTP POSTs over Ethernet broadcasts, passing JSON-packaged payloads back and forth... to turn on an LED. And you wonder why it takes 3 seconds between pushing the button and the light turning on... if it turns on at all. Sometimes it turns on all by itself. Sometimes someone in China futzes with a server and your house becomes a Christmas tree, or worse: they switch off the server because they've milked all the money they could, and now you can throw away all that hardware. (Sonos, anyone? Roku SoundBridge, anyone? Bueller? Bueller?)
You say that like it's a bad thing!
rPi's are popular for good reason -- they have everything you need; they're a whole PC on a tiny board, plus they've got the low-level crap like GPIO, I2C and SPI. You can access all of it from the shell, no programming needed!
You can pipe it over Ethernet if you like, and control it from anywhere in the world. If you're smart, you'll even turn on TLS, so you don't have China flicking your light switches.
Delays? Unreliable communications? These aren't problems with the system per se, they're problems with the fact that you're opening up such powerful connections. Sometimes the internet is slow, sometimes packets get dropped. A basic PoC might not include mitigations for those events, so it looks janky as hell. Especially just for blinking lights. But that's not the limit of what you can do, it's the simplest possible beginning.
Python is heavyweight, sure, but it's unimaginably powerful if all you know is tiny constrained machines. You can effortlessly log and trace whole comm channels, or program executions, or emulate a zillion other systems and do the same with all of them. You can manage a zillion connected LED-blinkers and raise an alert when any one of them drifts off frequency.
And best of all, we can learn lessons from developing on such systems -- by having such tools, and such computational excess, available, we can literally throw everything at the wall and see what sticks. Then decide what parts we want to keep, strip the project down, and optimize it for a cheaper platform -- say an STM32 or something. And even then, the bigger STM32s (F4 and up) can run Linux, so you still have a lot of power available on such platforms. Maybe you can even bring some of those services down to an AVR.
Like, there's a graphical canvas for Arduino -- it needs more than your basic AVR, but it does indeed fit into just a few k of RAM (8k+, I think?). That's a very powerful graphical environment, something we take for granted on PCs but can still harness even on such simple systems. Is it really the best tool for building an HMI or whatever? Depends. It's going to be relatively slow. But then, no graphics are going to be real-time on an AVR; it just doesn't have that much bandwidth. Would you sacrifice smooth animations for rapid development? A lot of people do!
Over the years I find myself growing more and more cranky at all these things. Stuff is too complicated for simple things, and most of the stuff out there is only half-baked. Nobody understands how anything works anymore.
I used to feel that way, but I've gotten used to the fact that, despite all the jankiness, the leaky abstractions, the inscrutability -- not only is it still possible to do work in such environments, but it's possible to push the envelope of what used to be possible, to bootstrap new and better things into existence.
And if you want to backport those inventions to earlier systems, that's the incredible value of highly portable languages (like C: "if it exists, it has a C compiler", just about) -- you can do that. And sure, portability is a lie, but it's a soft lie. Well-written code abstracts core functionality away from hardware interfaces, so you only need to patch the gaps around the edges. Poorly written code -- tightly integrated with the hardware, or highly optimized to it -- is poorly portable. Which is one of the pitfalls of constrained systems: they're simply harder to build upon. We'd get nowhere if we had to integrate every new piece of hardware with every single thing we write; that's why we have operating systems, drivers and multiple levels of abstraction. It's slower in strict terms, but it's far, far more productive. Transistors are essentially free; don't worry about the computational burden.
There are better things than DOS (or any other shell/CLI environment). There are better things than mouse or touch GUIs. And most of all, none of those things loses its usefulness in light of each latest advance -- we aren't required to use just one, all the time. "Better" works both ways, forwards and backwards across the history of user interfaces; there is no one true perfect interface mode. Being able to select the interface appropriate to the task is the true best meta-interface, as it were.
That's the thing about technology: the more we move forward, the more we leave behind in our wake -- sometimes yes, in the standard meaning of "left behind", discarded to history -- vacuum tubes are nearly such an example, horses for transportation another. But also just more plainly, left there in our wake, sitting there, available for use -- and indeed more often than not, continuing to be useful. Like, I don't think CD4000 logic is ever going away, nor 8-bit micros (or even 4-bit for that matter). 32-bit micros are easily as cheap and plentiful (heh, well, this year excepted..), but there will always be a place for both. Those places may become more obscure and special over time (e.g. on-chip state machines for specific applications), but they aren't going away completely. (Tubes haven't gone away, though that doesn't happen to be for any technical reason -- we can quite fully emulate their characteristics of course. It's more of a...human thing. And hey, relay computers haven't gone away either, entirely for human reasons again, AFAIK.)
And don't give me that open source sauce. What was the latest big open-sauce stink we had, a couple of months ago? Some bug that went undetected for five years. Doesn't anyone read the code? It's there for all the world to see, after all.
Maybe that's the problem. We suffer from bookcase syndrome: I have a bookcase full of books I've never read... it looks impressive to an outsider, and I can show off how knowledgeable I am by the amount of paper I possess... like that diploma on the wall.
I will, however, give a more cynical opinion on FOSS... (compared to my preceding paragraph, I mean). Small projects are take-it-or-leave-it -- the author made it to scratch some pet problem, and that's that. Bug fixes, expanded features, just not being a pile of janky interfaces -- what's that?
It's only in the big projects -- huge numbers of users, strong donations, lots of volunteers -- that you get to a scale where things work. Linux, Firefox, Chromium, etc. might be held up as shining jewels of the FOSS ecosystem, but literally millions of other projects languish somewhere between those two extremes. You get what you pay for, basically; and the same is true of commercial software just as well, and more broadly, of products in general.
Everything has bugs; "software" is a verb: it simply changes over time as bug fixes are incorporated and new (buggy) features are added in turn. Only trivially simple projects are done, top to bottom, fully considered, with no bugs whatsoever -- or they're done by prodigies of the art. (Knuth's projects come to mind. Mind, TeX for example is janky as hell, but at least it's not buggy. It's also set up so that, when he passes away, anything remaining is no longer a bug but a feature, just not necessarily one written into the documentation. So there's that.)
We are, slowly, pushing the envelope of what is provable, what is computable -- correctness proofs have expanded into a number of areas, and we have a few languages and hardware designs actually formally proven. Among informal tools, every time a new exploit or pattern is found, its analysis can be applied to existing codebases or binaries to find new bugs (or rather, the same bugs in new places). Software keeps getting piled on top, true, but the foundations are also growing stronger and stronger. Nobody's broken hard crypto (...that we know of, and other than plain brute force, which will always be an option); all the exploits have been in the tooling written around the core functionality. (Or in poorly written cores that failed the spec in various, often subtle ways -- sometimes hardware-dependent ways, like with caching...)
Apple has a great one in their M1 processor... oh, it's super safe, with physically isolated cores. Except somebody figured out there are two status bits that are shared between the cores (all cores can read and write these bits; they're basically semaphores used for bus access or something). Some smartass implemented something similar to the I2C protocol over those two bits -- one used as a clock, the other as data -- wrote a small driver, and now they can transport data between processes that should be isolated. The entire system security sank over two fucking flip-flops. And it can't be resolved, because the two bits are needed for housekeeping.
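For flavor, roughly what that two-bit clock/data trick looks like (this is the "M1RACLES" finding; the register name s3_5_c15_c10_1 is from the public writeup, and the bit assignments here are just this sketch's convention -- the real PoC's framing differs):

#include <stdint.h>

/* The shared register: two implemented bits, readable and writable from
 * userspace on every core. Bit 0 = data, bit 1 = clock (in this sketch). */
static inline uint64_t ch_read(void)
{
    uint64_t v;
    __asm__ volatile("mrs %0, s3_5_c15_c10_1" : "=r"(v));
    return v;
}

static inline void ch_write(uint64_t v)
{
    __asm__ volatile("msr s3_5_c15_c10_1, %0" :: "r"(v));
}

/* Sender: put the data bit on the line and toggle the clock bit. */
void send_bit(int bit)
{
    static uint64_t clk = 0;
    clk ^= 2;                         /* bit 1 = clock */
    ch_write(clk | (uint64_t)(bit & 1)); /* bit 0 = data */
}

/* Receiver: spin until the clock bit changes, then sample the data bit. */
int recv_bit(void)
{
    uint64_t old = ch_read() & 2;
    while ((ch_read() & 2) == old)
        ;                             /* wait for a clock edge */
    return (int)(ch_read() & 1);
}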
Yeah, that's a neat one...
Tim