Author Topic: storing function pointer and calling  (Read 2500 times)


Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: fi
Re: storing function pointer and calling
« Reply #50 on: June 04, 2021, 10:03:45 am »
gcc -Wall

Also, you look and sound like a professional, and you're doing beyond-artist-beginner stuff like a simple scheduler. Don't use any "for artists" tools. Just say no to Tinkercad and Arduino. If you stick to C and basic GNU tools, you'll be able to learn most of the language and the tools in a matter of a few intensive months (or a few less intensive years).

Also, do what nctnico said: when unsure, try things out on a PC. I have this habit of running

nano t.c; gcc -Wall t.c; ./a.out

and just writing a few lines to test how uint16_t wraps, what sizeof(something) is, or how that function pointer syntax went again.

Or you can test your complete "logic" by printing things you want to output, and inputting things from command line arguments, and so on. The rest, what you can't simulate like this, leave it for testing on actual hardware.

If you want to see an actual LED blink, why not skip all unnecessary layers and test it on real hardware? Different methods go hand-in-hand, you can still test other parts on your PC much faster.
« Last Edit: June 04, 2021, 10:10:11 am by Siwastaja »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 21241
  • Country: nl
    • NCT Developments
Re: storing function pointer and calling
« Reply #51 on: June 04, 2021, 10:07:59 am »
hindsight is always 20/20 ... no warnings or errors were thrown.
this is why i hate these kinds of systems. you spend hours on a tiny little detail and the IDE is not very forthcoming with what is wrong.
That is why you develop on a PC first and then move to an embedded platform. On a PC you could have single-stepped through the code and spotted the problem right away. You would also likely have gotten a warning during compilation which hinted at the problem. Instead you tried to run before you could walk.
no warnings during compilation. the arduino IDE said: compilation complete, and spat out a binary.

I don't like your approach of having to emulate the i/o code. i want to see the led turn on and the display show what it needs. How do you emulate things like rotary encoders, or hardware timers and interrupts? or a control loop that interacts with the analog world?
Model it and simulate. The hardware layer should be very thin so it is not prone to errors, and when you put known-good code on top of it you know a problem is in the hardware. Divide and conquer, otherwise you'll be driving blind. I'd never start developing something like your code on an embedded platform. I want to test the sh#t out of it before committing it to an embedded platform. In some cases you can't even test with real hardware before having a certain degree of certainty the software doesn't wreck the hardware. A while ago I made a controller for a system with extreme pressures. I modelled the hardware and used simulations to get the control software right. As a bonus I can test & debug software without having the actual hardware available.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: free_electron

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 7653
  • Country: us
    • SiliconValleyGarage
Re: storing function pointer and calling
« Reply #52 on: June 04, 2021, 01:22:39 pm »
- C isn't BASIC.  Why are you trying to use it as something it's not?
true, but that is not my point.

the logic principle remains the same, just a different language. i have coded this in several machine languages (6802, 6805, 6809, 8051) and higher level languages (PL/M, BASIC, Pascal).
I'm not so much frustrated at C as at the issues i ran into with the environment i was using.

Trying to debug code in an unfamiliar language, based on well-meaning suggestions from people on a forum. (thank the universe for at least that)

but ... let's see ... it compiles in tool A, but the binary doesn't work. It does not compile in tool B. It emulates in tool C but crashes randomly. Tool C has a debugger, but its user interface is a box of hammers and i'm trying to peel a banana.
Tools A, B and C all pretend to be toolchains for the same thing.
Other people suggest loading it on a pig and seeing if it can walk there.

So what problem am i solving? is it my understanding of the syntax of the language that is flawed? is it a logic error in the program? or is it the toolchain that is fucking up? what am i trying to debug here?
Then i get very cranky... like when using a scope probe with a bad contact in it... or worse, a scope that lies at random, showing data that doesn't exist or is randomly time-skewed between channels (hello LeCrap? fixed your Z7 logic analyzer module yet?).

Quote
An architectural point: why make something code-dependent, like passing an actual (physical) pointer over serial?
1) fast (small payloads to transport)
2) the comms channel never leaves the board.
3) the functions do not return any values.

A simple (but silly) example: 1000 processors, all running the same binary. one broadcasts: execute address 0xdeadbeef. why do you need a lookup table? instead of saying "go look in the phone book for John Smith's number and call him", i simply give you the number.
That function performs startdaemon(1) and exits. the attached handler blinks an led at a 1-second interval.
My packets can all be a standard fixed-size format. Example: 1 start byte, 2 bytes of address to execute, and 1 byte for an optional argument. The receiver routine is very simple. No need for lookup tables or any other processing. Put the argument in a global variable and perform a hard call to the given address. It is also easy for the sender: he knows what will be called, fetches the address of the function, sends that same address to his siblings, and they all do the same.
such a system can run on very memory-restricted machines, like small processors with only 1K of ROM and 32 bytes of RAM. you don't have much wiggle room to start creating a high-level api; you are going to bust the code memory.

Think about those art installations that British guy does... Mike, Mike, Mike... Mike's Electric Stuff?, with all the leds. that sort of thing, but cheaper :) one cheap processor per led, distributed system and all that jazz.
The transport packet could be 2 bytes target, 2 bytes opcode (address to execute) and 2 bytes optional arguments. if target is 00 that is a broadcast. if the MSB is set, it is a group number. stuff along those lines. the transport code is extremely small.

i come from a time when microcontrollers had 4K of ROM and 64 bytes of RAM. simple does it. you can't spend 2K of overhead code on a transport protocol.

Today they run a python interpreter on a virtual processor architecture, in a virtual machine, on a processor with a totally different architecture, in an emulated environment, under a hypervisor. Communication between boards uses HTTP POSTs over ethernet broadcasts, passing json-packaged payloads to each other... to turn on an led. And you wonder why it takes 3 seconds between pushing the button and the light turning on... if it turns on at all. Sometimes it turns on all by itself. Sometimes someone in china futzes with a server and your house becomes a christmas tree, or worse: they switch off the server because they milked all the money they could, and now you can throw away all that hardware. (sonos anyone? roku soundbridge anyone? bueller? bueller?)

Over the years i find myself growing more and more cranky at all these things. Stuff is too complicated for simple things. And most of the stuff out there is only half baked. Nobody understands how anything works anymore.
And don't give me that open-source sauce. What was the latest big open-sauce stink we had a couple of months ago? Some bug that went undetected for 5 years. Doesn't anyone read the code? It's there, after all, for the world to see.
Maybe that's the problem. We suffer from bookcase syndrome: i have a bookcase full of books. i've never read them... it looks impressive to an outsider, i can just show off how knowledgeable i am by the amount of paper i possess... like that diploma on the wall.

Apple has a great one in their M1 processor... oh, it's super safe with physically isolated cores. Except somebody figured out there are two status bits that are shared between the cores (all cores can read and write these bits; they are basically semaphores used for bus access or something). Some smartass implemented something similar to the i2c protocol over those two bits: one is used as a clock, the other as data. They wrote a small driver and they can transport data between processes that should be isolated. The entire system security sank over 2 fucking "flip-flops". And it can't be resolved, because the two bits are needed for housekeeping.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 7653
  • Country: us
    • SiliconValleyGarage
Re: storing function pointer and calling
« Reply #53 on: June 04, 2021, 01:50:11 pm »
hindsight is always 20/20 ... no warnings or errors were thrown.
this is why i hate these kinds of systems. you spend hours on a tiny little detail and the IDE is not very forthcoming with what is wrong.
That is why you develop on a PC first and then move to an embedded platform. On a PC you could have single-stepped through the code and spotted the problem right away. You would also likely have gotten a warning during compilation which hinted at the problem. Instead you tried to run before you could walk.
no warnings during compilation. the arduino IDE said: compilation complete, and spat out a binary.

I don't like your approach of having to emulate the i/o code. i want to see the led turn on and the display show what it needs. How do you emulate things like rotary encoders, or hardware timers and interrupts? or a control loop that interacts with the analog world?
Model it and simulate. The hardware layer should be very thin so it is not prone to errors, and when you put known-good code on top of it you know a problem is in the hardware. Divide and conquer, otherwise you'll be driving blind. I'd never start developing something like your code on an embedded platform. I want to test the sh#t out of it before committing it to an embedded platform. In some cases you can't even test with real hardware before having a certain degree of certainty the software doesn't wreck the hardware. A while ago I made a controller for a system with extreme pressures. I modelled the hardware and used simulations to get the control software right. As a bonus I can test & debug software without having the actual hardware available.
That's a different use case. i would do that too, like a processor controlling a 20-kilowatt inverter. There is no way you debug that on the board. You don't go "let's put a breakpoint right after we switched on the mosfets in the H-bridge and see what happens."

This case? show a student how a simple scheduler can be created quickly using simple and cheap stuff she has lying around. It's not easy in india to get your hands on real hardware. There's no digikey (there is, but it costs an arm and a leg).
So i thought, hey, i've done this many times, let me write it for arduino. It's only a few lines of code... how hard can it be? i've done it on many platforms in many languages. And arduino is so simple it can be used by artists, so this will be a doddle. After all, it's nothing but a simple 8-bit machine with an ide and framework used by millions. Should be easy.

Apparently it's much harder, because i was trying to do too many things at the same time:

- code it in an unfamiliar language
- struggle with different compilers (some compile it but the result does nothing; some do not compile it, giving errors; some cannot handle the language)
- struggle with simulators (some simulate it wrong; some simulate it right but it doesn't work because of a logic error)
- not having a real board in front of me, and being stuck with the above.

So what was i trying to debug? a syntax error? a logic error? a compiler error? or a simulator error? or a combination of all...
Now i understand why most arduino enthusiasts never make it past blinking an led.

It's like learning French from a badly cross-translated dictionary, trying to talk to a guy who only speaks patois. His neighbour speaks creole, so he's useless as well. They are both "basically" French, but different enough that it doesn't work. Meanwhile some Canadian is trying to be helpful, telling you to add "eh" to the end of the sentence.
We're not moving forward very fast here. it's not "allez vite vite, sur votre bicyclette et roulez !" (quickly, get on your bike and go. that idiom is not translatable.)

Next time someone mentions arduino i will just walk away. I'll give them one of my working sources (available for many processors and languages, just not C) and they can port it. Let them figure out whether it is the arduino compiler/simulator/debugger that does not cooperate, or their code.

But then ... they are family... you can't really do that.

Anyway, thanks to all for the help. even though i sound cranky, it is appreciated. got there in the end, but not an exercise i am looking to do ever again. (unless i can't weasel out of it, for, like... family..)
« Last Edit: June 04, 2021, 01:52:55 pm by free_electron »
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 7653
  • Country: us
    • SiliconValleyGarage
Re: storing function pointer and calling
« Reply #54 on: June 04, 2021, 01:58:52 pm »
For the record: THANKS TO ALL! it is appreciated! Got there in the end.

i may sound cranky, and that's because i am. Cranky because there is nothing worse than trying to get something working with tools you can't trust to work right.
Same crankiness as when you have a bad contact in your scope probe. that sort of thing.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline agehall

  • Frequent Contributor
  • **
  • Posts: 306
  • Country: se
Re: storing function pointer and calling
« Reply #55 on: June 04, 2021, 04:41:00 pm »
it is supposed to be c++
neither typedef nor using works.

And this is why I stay far away from Arduino if at all possible, at least when we are talking about anything more complicated than blinking an LED on a breadboard.

PlatformIO + VSCode and bare metal is so much easier, since you have control over what is going on. And if you need to, use libraries that are not tied to Arduino to speed up development.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 17390
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: storing function pointer and calling
« Reply #56 on: June 04, 2021, 08:52:53 pm »
but ... let's see ... it compiles in tool A, but the binary doesn't work. It does not compile in tool B. It emulates in tool C but crashes randomly. Tool C has a debugger, but its user interface is a box of hammers and i'm trying to peel a banana.
Tools A, B and C all pretend to be toolchains for the same thing.
Other people suggest loading it on a pig and seeing if it can walk there.

I can't speak for the tools you're using (or trying to); I would suggest looking at the backend.  Likely the GUI is just a layer on top, and it's using something else in the background.  Arduino?  GCC in a hidden process.  Tinkercad?  No idea; maybe it's GCC actually emulating a CPU, but more likely (from what others have said here) it's something much higher level, and therefore you should beware of shortcuts and inconsistencies.  Find out what it really is and see what its capabilities are.

Nothing worse than an underspecified tool.  You don't go rummaging around someone's toolbox and pick up the swiss-army-knife-looking grinder that's also a drill, sander, exfoliator and cocktail mixer, and start using it without understanding what each of those functions does.

Well.. you wouldn't ordinarily, but such is the "adventure" of using these, "artist" tools as you would say... :D


Quote
1) fast (small payloads to transport)
2) the comms channel never leaves the board.
3) the functions do not return any values.

Well-- there's probably only a few dozen, even a few hundred, pointers; a byte index would suffice.  Compression! ;D

And if there really are approx. 64k options to choose from, well for one thing your pointers won't be 64k anymore, you'll have to use a >128k AVR with long pointers to hold even a fraction of that many functions; but more importantly, you're probably better off mixing and matching those operations in a much more free-form manner, for example transmitting bytecode of a scripting language.  Maybe it's just a sequence of ops, maybe it's a proper conditional language, maybe it's a whole-ass VM.

Put another way: it goes against the principle of least data.  Why do you need 64k options when there will only be a few hundred, or thousand even?  It's not fail-safe over that set.  Use only what you need.  Same as bandwidth in analog circuits, filter off what you don't need, and you're saved some noise, immunity, etc.

And what about when a bit error does finally happen, and the call does suddenly return a lot of values? -- a missing bit, it jumps into the preamble, and now the stack is garbage.  (avr-gcc normally makes nearly bare functions when they are simple, but when more than some registers are required, it will push them onto the stack.  Or if a lot is needed, a block is allocated on the stack -- including anything that needs pointer access, like local scoped arrays.)  Impossible to debug, you'll never catch the error in action, it'll just randomly crash some day.  Quality design! :P

Alternately, could add some verification -- checking against a known list for example (reminiscent of "COMEFROM" in certain languages*), or using ECC or CRC or hash or what have you; which is a good idea on a serial link anyway, even on board.

Or make sure that every operation can fail, with no consequence, and be restarted with relative ease.  Enable watchdog so it always restarts in a known state.  Blinking lights is fail safe, who cares.  Running machinery, though, I sincerely hope you use a different approach!


*Not... that one.  Not sure what else has it, but the example I remember reading about is Ethereum smart contracts; it's a VM, which has to be secure of course.  It has branch instructions, but will only branch to COMEFROM tokens.


Quote
Think about those art installations that british guy does Mike Mike mike ..... mikes electric stuff ? , with all the leds. that sort of thing. but then cheaper :) one cheap processor per led, distributed system and all that jazz.
transport packet could be 2 bytes target , 2 bytes opcode (address to execute) and 2 bytes optional arguments. if target is 00 that is a broadcast. if MSB is set : that is now a group number stuff along those lines. the transport code is extremely small.

I think Mike does more networking oriented solutions, passing command or data packets sort of thing.  Though I don't recall if he's gone into much detail over the comms he likes to use (understandable, that sort of stuff may be proprietary, and is very boring as video content goes) so if he's watching maybe he can offer some ideas here...

The higher level solution really does get useful though.  Once everything is enumerated with unique addresses, just send a packet with a header "addressed to mr. 2304" and pass it along until received.  Best of all, numerous solutions already exist -- stick it on Modbus for example.

Or hook up an Ethernet PHY and do like Profinet or Ethernet IP, or just plain old TCP/IP.  Leverage cheap cables and switches to extend your network as far as you like!  It's a heavy weight protocol to be sure, but it's popular for good reason!


Quote
i come from a time where microcontrollers had 4k rom and 64 bytes ram. simple does it. you can't spend 2k of overhead code on a transport protocol.

Ah, self-flagellation is a hard habit to break!

You're more than welcome to plop assembly into your .ino sketch, if you must... I'm sure Tinkercad would savor every character of that... :-DD


Quote
Today they run a python interpreter on a virtual processor architecture, in a virtual machine, on a processor with a totally different architecture, in an emulated environment, under a hypervisor. Communication between boards uses HTTP POSTs over ethernet broadcasts, passing json-packaged payloads to each other... to turn on an led. And you wonder why it takes 3 seconds between pushing the button and the light turning on... if it turns on at all. Sometimes it turns on all by itself. Sometimes someone in china futzes with a server and your house becomes a christmas tree, or worse: they switch off the server because they milked all the money they could, and now you can throw away all that hardware. (sonos anyone? roku soundbridge anyone? bueller? bueller?)

You say that like it's a bad thing!

rPi's are popular for good reason -- they have everything you need, they're a whole PC on a tiny board, plus it's got the low level crap like GPIO, I2C and SPI.  You can access all of it from the shell, no programming needed!

You can pipe it over Ethernet if you like, and control it anywhere in the world.  If you're smart, you'll even turn on TLS so you don't have China flicking your light switches. :)

Delays? Unreliable communications?  These aren't problems with the system per se, they're problems with the fact that you're opening up such powerful connections.  Sometimes the internet is slow, sometimes packets get dropped.  A basic PoC might not include mitigations for those events, so it looks janky as hell.  Especially just for blinking lights.  But that's not the limit of what you can do, it's the simplest possible beginning.

Python is heavyweight, sure, but it's unimaginably powerful, if all you know is tiny constrained machines.  You can effortlessly log and trace whole comm channels, or program executions, or emulate a zillion other systems and do the same with all of them.  You can manage a zillion connected LED-blinkers and alert when any one has a frequency variance.

And best of all, we can learn lessons from developing on such systems -- by having such tools, and such computational excess, available, we can literally throw everything at the wall and see what sticks.  Then decide what parts we want to include in some project and strip it down and optimize it for a cheaper platform, like say an STM32 or something.  Which even then, STM32F4+ can run Linux, so you still have a lot of power available on such platforms.  Maybe you can even bring some such services down to an AVR.

Like there's graphical canvas for Arduino, needs more than your basic AVR but it does indeed fit into just a few k of RAM (8k+ I think?).  That's a very powerful graphical environment, something we take for granted on PCs but can still harness on even such simple systems.  Is it really the best tool to use for building an HMI or whatever?  Depends.  It's going to be relatively slow.  But then, no graphics are going to be real-time on AVR, it just doesn't have that much bandwidth.  Would you sacrifice smooth animations for rapid development?  A lot of people do!


Quote
Over the years i find myself growing more and more cranky at all these things. Stuff is too complicated for simple things. And most the stuff out there is only half baked. Nobody understands how anything works anymore.

I used to feel that way, but I've gotten used to the fact that, despite all the jankiness, the leaky abstractions, the inscrutability -- not only is it still possible to do work in such environments, but it's possible to push the envelope of what used to be possible, to bootstrap new and better things into existence.

And if you want to backport those inventions to earlier systems, that's the incredible value of highly portable languages (like C, "if it exists it has a C compiler" just about), you can do that.  And sure, portability is a lie, but it's a soft lie.  Well written code abstracts core functionality away from hardware interfaces, so you only need to patch the gaps around the edges.  Poorly written code, that's tightly integrated with the hardware, or highly optimized to it, is poorly portable.  Which is one of the pitfalls of constrained systems, they're simply harder to build upon.  We'd get nowhere if we had to integrate every new piece of hardware with every single thing we write, that's why we have operating systems, drivers and multiple levels of abstraction.  It's slower in strict terms, but it's far, far more productive.  Transistors are essentially free, don't worry about the computational burden.

There are better things than DOS (or other shell / CLI environment).  There are better things than mouse or touch GUIs.  And most of all, none of those things lose their usefulness in light of each latest advance -- we aren't required to use just one, all the time.  "Better" works both ways, forwards and backwards across the history of user interfaces; there is no true perfect interface mode.  Being able to select interfaces appropriate for tasks, is the true best meta-interface as it were.

That's the thing about technology: the more we move forward, the more we leave behind in our wake -- sometimes yes, in the standard meaning of "left behind", discarded to history -- vacuum tubes are nearly such an example, horses for transportation another.  But also just more plainly, left there in our wake, sitting there, available for use -- and indeed more often than not, continuing to be useful.  Like, I don't think CD4000 logic is ever going away, nor 8-bit micros (or even 4-bit for that matter).  32-bit micros are easily as cheap and plentiful (heh, well, this year excepted..), but there will always be a place for both.  Those places may become more obscure and special over time (e.g. on-chip state machines for specific applications), but they aren't going away completely.  (Tubes haven't gone away, though that doesn't happen to be for any technical reason -- we can quite fully emulate their characteristics of course.  It's more of a...human thing.  And hey, relay computers haven't gone away either, entirely for human reasons again, AFAIK.)


Quote
And don't give me that open-source sauce. What was the latest big open-sauce stink we had a couple of months ago? Some bug that went undetected for 5 years. Doesn't anyone read the code? It's there, after all, for the world to see.
Maybe that's the problem. We suffer from bookcase syndrome: i have a bookcase full of books. i've never read them... it looks impressive to an outsider, i can just show off how knowledgeable i am by the amount of paper i possess... like that diploma on the wall.

I will however give a more cynical opinion on FOSS... (Compared to my preceding paragraph, I mean.)  Small projects are take-it-or-leave-it -- the author made it to satisfy some pet problem, and that's that.  Bug fixes, expanded features, just not being a pile of janky interfaces -- what's that?

It's only in the big projects, with huge numbers of users, strong donations, lots of volunteers -- that you get to a scale where things work.  Linux, Firefox, Chromium, etc. might be held up as shining jewels of the FOSS ecosystem, but literally millions of other projects languish in some state between these two extremes.  You get what you pay for, basically, and the same is true of commercial software just as well, and more broadly, products in general.

Everything has bugs; "software" is a verb: it simply changes over time as bug fixes are incorporated, and new (buggy) features are added in turn.  Only trivially simple projects are done, top to bottom, fully considered, with no bugs whatsoever; or they're done by prodigies of the art.  (Knuth's projects come to mind.  Mind, TeX for example is janky as hell, but at least it's not buggy.  Though it's also set up so that, when he passes away, any remaining bug is no longer a bug but a feature, just not necessarily one written into the documentation.  So there's that.)

We are, slowly, pushing the envelope of what is provable, what is computable -- correctness proofs are something that's expanded into a number of areas, and we have a few languages and hardware designs actually formally proven.  Among informal tools, every time a new exploit or pattern is found, its analysis can be applied to existing codebases or binaries to find new bugs (or rather, the same bugs in new places).  Software keeps getting piled on top, true, but the foundations are also becoming stronger and stronger.  Nobody's broken hard crypto (...that we know of, and other than using plain brute force, which will always be an option), all the exploits have been in the tools written around the core functionality.  (Or poorly written cores, that failed the spec in various and often subtle ways.  Sometimes hardware-dependent ways, like with caching...)


Quote
Apple has a great one in their M1 processor... oh, it's super safe with physically isolated cores. Except somebody figured out there are two status bits that are shared between the cores (all cores can read and write these bits; they are basically semaphores used for bus access or something). Some smartass implemented something similar to the i2c protocol over those two bits: one is used as a clock, the other as data. They wrote a small driver and they can transport data between processes that should be isolated. The entire system security sank over 2 fucking "flip-flops". And it can't be resolved, because the two bits are needed for housekeeping.

Yeah, that's a neat one... :-DD

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline IanB

  • Super Contributor
  • ***
  • Posts: 9939
  • Country: us
Re: storing function pointer and calling
« Reply #57 on: June 04, 2021, 09:03:17 pm »
I think Mike does more networking oriented solutions, passing command or data packets sort of thing.  Though I don't recall if he's gone into much detail over the comms he likes to use (understandable, that sort of stuff may be proprietary, and is very boring as video content goes) so if he's watching maybe he can offer some ideas here...

If I recall correctly, I think I have seen videos where Mike talks about using RS-485 to network things together.
I'm not an EE--what am I doing here?
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 7653
  • Country: us
    • SiliconValleyGarage
Re: storing function pointer and calling
« Reply #58 on: June 04, 2021, 09:17:59 pm »
I think Mike does more networking oriented solutions, passing command or data packets sort of thing.  Though I don't recall if he's gone into much detail over the comms he likes to use (understandable, that sort of stuff may be proprietary, and is very boring as video content goes) so if he's watching maybe he can offer some ideas here...

If I recall correctly, I think I have seen videos where Mike talks about using RS-485 to network things together.
which is.. basically a serial port over a differential pair. possibly modbus on top.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 17390
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: storing function pointer and calling
« Reply #59 on: June 04, 2021, 09:33:44 pm »
Yeah I recall he's mentioned RS-485 before.  The comm hardware doesn't matter much... note that RS-485 doesn't even say how you use bits, just what the levels of those bits shall be.  Let alone how you should manage multi-mastering.  It's the protocol that's "draw the rest of the fucking owl" here... ;D

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 7653
  • Country: us
    • SiliconValleyGarage
Re: storing function pointer and calling
« Reply #60 on: June 04, 2021, 09:39:32 pm »
I can't speak for the tools you're using (or, trying to); I would suggest looking at the backend


yes and no. This was supposed to be a half-hour project, and I can't be arsed digging into the underpinnings of these tools. I never dig into how Keil works, or the firmware of my oscilloscope.
I don't have the time nor the desire for that. It should work. Period. If I buy a microwave, it works; I don't need to check if the magnetron is compatible with the transformer used. Same thing for dev tools. My screwdriver works. My scope works. My IDE should also work.

Quote
Well-- there's probably only a few dozen, even a few hundred, pointers; a byte index would suffice.  Compression! ;D

LOL, yes, compression, but that's 40 lines of code to compress and decompress, plus some RAM.

Quote
Quality design! :P
my board layouts do not suffer from signal integrity problems  >:D There is no such thing as noise, ESD or cosmic radiation events.  ;D Just like that coronavirus: doesn't exist.

Quote
Ah, self-flagellation is a hard habit to break!
nope, a matter of KISS: keep it simple. No need for billions of lines of overhead nobody understands.

Quote
You say that like it's a bad thing!
it is! It's full of holes, trouble, and untraceable behaviour. Turning on an LED needs wading through 50 billion lines of code, and even then it doesn't always work.

Quote
rPi's are popular for good reason -- they have everything you need,

They are also power hungry, overheat frequently, and have piss-poorly designed PoE shields... I do not want to see a house with every lightbulb and power socket having an rPi attached to it... the vampire drain would be astronomical.

Quote
And best of all, we can learn lessons from developing on such systems
let's first learn how to crawl before we sit down in the cockpit of a hypersonic jet... I too frequently encounter people who don't know you can make an AND gate with only NORs... they need a simulator to check that, and will write one if needed.

Quote
- philosophical paragraphs -

Sometimes all you need is a nail. And sometimes all you have is a nail. And that should be fine too.
Forcing people to buy an expensive, read-the-manual-first screw-driving machine that uses tape-fed screws is not right. Not everyone can afford that, or build one from open-source plans, or has time for that. Picture, meet nail. Nail, meet wall. -BAM-

On the other hand, I expect the hammer to work... If you are going to give me a hammer where the head flies off the handle at the first strike... prepare for a load of vitriol. If I miss the nail and strike my thumb, that's on me. If the head flies off: someone will get tarred and feathered over that. I'm too old to deal with such crap.

I needed to turn an Arduino screw. Unfortunately the quality of the Arduino-compatible screwdrivers is such that they break, bend, or have such tolerances that they plainly don't fit, and you spend hours mucking about to find one, or modify one, that can sort of get the job done.
« Last Edit: June 04, 2021, 09:47:06 pm by free_electron »
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Online westfw

  • Super Contributor
  • ***
  • Posts: 3457
  • Country: us
Re: storing function pointer and calling
« Reply #61 on: June 04, 2021, 11:12:42 pm »
Quote
no warnings during compilation.
Did you turn on warnings in the Arduino "Preferences" panel?  You can...

(not that the Arduino IDE will make it clear that warnings have occurred - they're certain to have scrolled out of the ~3-line window that they give you into the compiler output, and you might have to hunt for them, and then copy&paste them into something else so that you can actually read what they say..)

 

Offline cv007

  • Frequent Contributor
  • **
  • Posts: 610
Re: storing function pointer and calling
« Reply #62 on: June 05, 2021, 08:59:16 am »
Quote
Did you turn on warnings in the Arduino "Preferences" panel?  You can..
Would help if one paid attention to the output, but warnings scrolling by in your little 10-line-high output window are probably not a big help when the last 10 lines show the compiler is all happy.

The 'Default' setting would already be showing the warnings he was producing (not sure if 'Default' is the default; sounds like it should be). What would be helpful is to get -Werror into the options, but there does not appear to be a way other than finding and changing a settings file. I guess the Arduino creators are happy if Arduino users make it to the end of the compile process, warnings included. That works if the Arduino user is aware that warnings are not a good thing to ignore, and seeks them out in the output window.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 7653
  • Country: us
    • SiliconValleyGarage
Re: storing function pointer and calling
« Reply #63 on: June 05, 2021, 02:36:09 pm »
Quote
no warnings during compilation.
Did you turn on warnings in the Arduino "Preferences" panel?  You can...

(not that the Arduino IDE will make it clear that warnings have occurred - they're certain to have scrolled out of the ~3-line window that they give you into the compiler output, and you might have to hunt for them, and then copy&paste them into something else so that you can actually read what they say..)
the compiler report showed compilation complete, code size, RAM size, and it spat out a binary. I am (was) using the web-based IDE.

I am using Wokwi right now, as it has GDB integration and lets me graphically simulate what the thing is doing.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

