Embedded software development. Best practices.
AaronD:

--- Quote from: nctnico on August 15, 2021, 09:21:18 am ---
--- Quote from: AaronD on August 15, 2021, 03:05:15 am ---Likewise, don't do anything more than what is absolutely necessary in an interrupt handler.  Any time that you spend there is time that you can't be doing something else, including lower-priority interrupts.

--- End quote ---
No, no, no, no. This is the worst advice ever. For one thing: you will need to spend the time processing the data coming from the interrupt one way or another, so the total amount of processing time stays the same. If you add the overhead of buffering, then you actually make things worse, because suddenly the slow main loop looking at buttons or blinking an LED becomes time-sensitive.

The only proper way is to plan how much time is spent in each interrupt (including processing) and determine which interrupt should have the highest priority. From there it becomes clear whether there are conflicting interrupts. You may need buffering, but more likely there is a better way out of such situations (like combining interrupts into one). For example: if you are doing digital signal processing, you get input samples and output samples. If you write the output samples from the ADC interrupt, then the output sample rate is automatically equal to the input sample rate; you don't need an extra output-sample timer interrupt.
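
In code, that pattern is just this (a sketch; adc_read, dac_write and filter_step are stand-ins for your part's actual HAL and your per-sample DSP):

--- Code: ---
#include <stdint.h>

/* Placeholder HAL -- substitute your part's actual ADC/DAC access. */
extern uint16_t adc_read(void);           /* sample that raised the IRQ */
extern void     dac_write(uint16_t v);    /* one output sample          */
extern int16_t  filter_step(int16_t in);  /* your per-sample DSP        */

/* ADC conversion-complete ISR: do the per-sample processing here and
   write the DAC from the same handler, so the output sample rate is
   locked to the input sample rate with no extra timer interrupt. */
void ADC_IRQHandler(void)
{
    int16_t in  = (int16_t)adc_read() - 2048;  /* 12-bit assumed; remove mid-scale offset */
    int16_t out = filter_step(in);
    dac_write((uint16_t)(out + 2048));
}
--- End code ---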

Some of my embedded firmware projects spend 90% of the time in interrupts doing signal processing.

All in all, a better way to look at the interrupt controller is to regard it as a process scheduler, with each interrupt routine as a separate process. By setting lower/higher priorities and using interrupt nesting, you can have several concurrent processes without using an OS.
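
On a Cortex-M part with CMSIS, assigning the "process" priorities might look like this (a sketch; the IRQn names are placeholders, use the ones from your device header, which also pulls in the NVIC_* functions):

--- Code: ---
/* #include your device header, e.g. "LPC17xx.h"; it provides CMSIS. */

/* Lower number = higher priority on Cortex-M. */
void irq_setup(void)
{
    NVIC_SetPriority(ADC_IRQn,    1);  /* hard real-time "process"        */
    NVIC_SetPriority(UART0_IRQn,  2);  /* byte-rate critical              */
    NVIC_SetPriority(TIMER0_IRQn, 3);  /* housekeeping, can be preempted  */
    NVIC_EnableIRQ(ADC_IRQn);
    NVIC_EnableIRQ(UART0_IRQn);
    NVIC_EnableIRQ(TIMER0_IRQn);
}
--- End code ---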


--- Quote ---Floating-point is even worse than that (floats and doubles), unless you have a floating-point accelerator, and then it only helps you for the size that it's designed for.

--- End quote ---
OMG  :palm: Really? More nonsense. It all depends on how much processing time you have available, and you have plenty nowadays on a typical ARM Cortex CPU. I used soft floating point for audio signal processing on a 70MHz ARM microcontroller about a decade ago.

Floating point makes working with numbers much easier (still keep an eye out for accumulating drift/errors), so you can write software quicker and keep the resulting code more readable and easier to maintain. The first mistake to make when writing software is to start optimising before determining that speed is actually a problem.

For example: if you need to read a temperature sensor input every 10 seconds, then using soft floating point has zero impact on performance. You probably can't even measure the extra time it takes.
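
Something like this is all it takes (a sketch, assuming a hypothetical 12-bit ADC with a 3.3V reference and an analog sensor with a 500mV offset and 10mV/degC slope; adjust the constants to your hardware):

--- Code: ---
#include <stdint.h>

/* Convert a raw ADC count to degrees Celsius using soft float.
   Called once every 10 seconds, the cost is unmeasurable. */
float adc_to_celsius(uint16_t raw)
{
    float volts = (float)raw * (3.3f / 4095.0f);
    return (volts - 0.5f) * 100.0f;
}
--- End code ---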

I was brought up with soft floating point being a big no-no in embedded firmware, but now I realise the people who told me that were very wrong.

Edit: and not using printf? Really  :palm:  Please use printf and don't go around re-inventing the wheel. If printf is too big or uses global memory, then implement your own version with a smaller footprint. The MSP430 GCC compiler, for example, comes with a very small vuprintf, and otherwise it is not difficult to find examples of even smaller micro-printf implementations. The worst thing to do by far is to invent your own string printing routines. I've seen those many times and they all sucked so badly that in the end even the original author started using printf. In the end the 'problem' (non-re-entrancy or code size) is in the vuprintf function, so just fix the problem there. In a professional environment you need to stick to standards as much as possible, and the standard C library is such a standard. Don't go doing non-standard stuff, because it will confuse and annoy the hell out of the person who has to maintain the code after you.
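
If code size or global buffers are the worry, one common middle ground is to keep the standard formatting engine and only supply your own output path. A sketch, with uart_putc standing in for your own TX routine:

--- Code: ---
#include <stdarg.h>
#include <stdio.h>

extern void uart_putc(char c);   /* stand-in for your own TX routine */

/* Standard printf formatting, custom output path, bounded RAM use
   (the buffer lives on the stack, so size it to your worst case). */
void dbg_printf(const char *fmt, ...)
{
    char buf[96];
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(buf, sizeof buf, fmt, ap);
    va_end(ap);

    for (const char *p = buf; *p; ++p)
        uart_putc(*p);
}
--- End code ---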

--- End quote ---

What are you running on?  Embedded Linux?  In that case you'd be right, and that's arguably the correct way to do it, for the reasons you stated.  But on an 8-bit 10MIPS machine with 300 bytes of RAM (yes, bytes; not even kbytes) that needs a "humanly-instant" response time - the sort of thing that I usually do - your approach could barely do anything at all!

Also note that I never said to not do *anything* in interrupts.  I have an interrupt-driven ADC module, for example, that runs a lowpass filter from a 10-bit SAR converter to an array of 16-bit "output" variables for everything else to use.  The only reason it's interrupt-driven is to keep the sample rate up across all channels so I can have a decently functional filter.  And that oversampled filter is indeed done in the ISR because the data that it works on is that ephemeral.  Drop the results in the global output array, and then the rest of the code can pick up the "magically existing ADC readings" whenever it gets around to it.
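
In skeleton form, that module looks something like this (a sketch; adc_result and adc_start are placeholders for the real register accesses, and the filter is a simple fixed-point exponential average):

--- Code: ---
#include <stdint.h>

#define NUM_CH 4
volatile uint16_t adc_out[NUM_CH];      /* the "magically existing" readings  */
static   int32_t  acc[NUM_CH];          /* filter state, touched only in ISR  */

extern uint16_t adc_result(void);       /* placeholder: fetch last conversion */
extern void     adc_start(uint8_t ch);  /* placeholder: start next channel    */

/* Conversion-complete ISR: one cheap fixed-point lowpass step per sample.
   The 10-bit result is scaled up to 16 bits, then y += (x - y)/16. */
void adc_isr(void)
{
    static uint8_t ch;

    int32_t x = (int32_t)adc_result() << 6;   /* 10-bit -> 16-bit scale */
    acc[ch] += (x - acc[ch]) >> 4;            /* EMA, alpha = 1/16      */
    adc_out[ch] = (uint16_t)acc[ch];

    ch = (uint8_t)((ch + 1u) % NUM_CH);
    adc_start(ch);                            /* round-robin channels   */
}
--- End code ---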

When every instruction cycle and every byte of memory is important, you do things differently.  Still keep it sensible, but in the sense that someone can see how your no-extras-at-all debug spew works (which could just as easily be a PWM-driven or bit-banged "pin-wiggler" for an oscilloscope to look at) and get on with the actual function of things, instead of going  |O  over not getting printf to fit, or over how to output an ASCII stream anyway when your only UART is already tied up with DMX or whatever.
(and yes, that includes myself after a few months :))
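
For the record, a bit-banged ASCII output on a spare pin can be as simple as this blocking 8N1 transmitter (a sketch; pin_high, pin_low, and bit_delay are stand-ins for your own I/O and timing primitives):

--- Code: ---
#include <stdint.h>

/* Stand-ins for your own GPIO and timing primitives. */
extern void pin_high(void);
extern void pin_low(void);
extern void bit_delay(void);    /* one bit time, e.g. 104 us for 9600 baud */

/* Blocking 8N1 software-UART transmit on a spare pin: start bit, eight
   data bits LSB-first, stop bit.  Guard against ISR jitter if needed. */
void bitbang_putc(uint8_t c)
{
    pin_low();                   /* start bit */
    bit_delay();
    for (uint8_t i = 0; i < 8; i++) {
        if (c & 1u) pin_high(); else pin_low();
        bit_delay();
        c >>= 1;                 /* LSB first */
    }
    pin_high();                  /* stop bit (line idles high) */
    bit_delay();
}
--- End code ---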

---

If the main loop takes longer than the shortest required non-interrupt response time, then there's something else wrong.  If you can't reduce or redistribute the workload (so you don't end up triggering *everything* on the same pass, often by accident or naivete), then you might poll the more critical things more than once throughout the main loop code.

That's one place where a bit-banged "pin-wiggler" comes in really handy (if you have a spare output pin)!  Toggle it once per loop, or once per poll of the critical thing, and you can see on the 'scope or logic analyzer exactly how your timing works out.
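
In skeleton form (toggle_debug_pin and the task functions are placeholders):

--- Code: ---
/* Placeholders for your own I/O and tasks. */
extern void toggle_debug_pin(void);     /* the scope-visible pin-wiggler   */
extern void poll_critical_input(void);  /* the thing with a tight deadline */
extern void handle_buttons(void);
extern void update_display(void);

int main(void)
{
    for (;;) {
        toggle_debug_pin();        /* one edge per pass: the scope shows a */
                                   /* square wave, half-period = one pass  */
        poll_critical_input();
        handle_buttons();
        poll_critical_input();     /* re-poll mid-loop so the worst-case   */
        update_display();          /* latency is a fraction of the loop    */
        poll_critical_input();
    }
}
--- End code ---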
nctnico:

--- Quote from: AaronD on August 16, 2021, 05:02:08 pm ---
[...]

What are you running on?  Embedded Linux?  In that case you'd be right, and that's arguably the correct way to do it, for the reasons you stated.

--- End quote ---
No. Regular microcontrollers like NXP's LPC ARM series or TI's MSP430 for example.


--- Quote ---But on an 8-bit 10MIPS machine with 300 bytes of RAM (yes, bytes; not even kbytes) that needs a "humanly-instant" response time - the sort of thing that I usually do - your approach could barely do anything at all!

--- End quote ---
But who would use such a constricted microcontroller nowadays? Maybe if you are making high-volume products, but in that case you'd likely start with a bigger microcontroller for test & design verification anyway. For anything produced in fewer than 10k units, the NRE costs will be a huge part of the cost of the product. So if you can reduce the NRE cost by using a microcontroller that doesn't need balancing on one foot while touching your nose with the other, then you are already ahead. Plus there is likely room for future extensions as well.
AaronD:

--- Quote from: nctnico on August 16, 2021, 06:06:35 pm ---But who would use such a constricted microcontroller nowadays? Maybe if you are making high-volume products, but in that case you'd likely start with a bigger microcontroller for test & design verification anyway. For anything produced in fewer than 10k units, the NRE costs will be a huge part of the cost of the product. So if you can reduce the NRE cost by using a microcontroller that doesn't need balancing on one foot while touching your nose with the other, then you are already ahead. Plus there is likely room for future extensions as well.

--- End quote ---

I used to work for a niche company whose mass-produced products used something about the size I've been talking about as their jelly-bean standard.  At their volume, they could get pretty much the entire family of that chip for almost nothing.  And by the time I started with them, most of the "cramming it all in" work was already done, and they had a somewhat comfortable set of linker commands and in-house libraries to replace the compiler's bloated ones.  ("Don't use the '*' symbol!  Call our mult(a, b) function instead, which one of our guys wrote by hand in assembly.")

There was one project, though, where the hardware designer(s) grossly underestimated the amount of processing it was going to take.  Fortunately, I had only spent the first week or so of my allotted 2 months (or whatever it was) figuring out how to do it and then writing the code that almost did it.  That was enough to see that it would indeed work that way if I could cram it into the available code space.  (RAM and CPU time were not a problem in this case, only code space.)  So I spent the next month and a half refactoring, hand-optimizing, reading and criticizing what the compiler thought was good assembly ("I can do that myself in 2 fewer instructions!"), tweaking the linker to fill some holes, and sometimes combining unrelated functions just because they had similar parts, so there would be only one copy of those parts, even though nothing ever used the complete result of that combined function.

The result was something that was finished on time and worked beautifully on the outside, but was a little weird on the inside (commented profusely!) and barely fit.  The next version was definitely going to have a bigger chip!  But then the project was cancelled for unrelated reasons.

So yes, you make a very valid point about using a bigger chip than what you think you need.  But there are lots of places, even today, where the rules and skills that I mentioned are valid too.
indeterminatus:

--- Quote from: Miyuki on August 15, 2021, 04:41:57 pm ---The only thing that matters is to write clean code.

--- End quote ---

Stressing that point.