Author Topic: Embedded software development. Best practices.  (Read 4093 times)


Offline RemarkTopic starter

  • Contributor
  • Posts: 31
  • Country: lt
Embedded software development. Best practices.
« on: August 14, 2021, 11:50:33 pm »
Hello,

Maybe I can ask those who work with embedded systems: what design practices or patterns do you apply in programming? Do you apply state machines, object-oriented programming with C++, or data structures in embedded software design? I have seen a lot of code in projects that was written inside a while loop, performing several of the same operations, for example scanning sensors and outputting information to the LCD screen. Is that good practice, and is it widely applied? Or are better, more proven practices used? I am currently a beginner in this field, so I would like to read answers from people more experienced than me. Can you recommend books I should read to learn more about embedded software design and its patterns?

Thank you very much for your answers
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Embedded software development. Best practices.
« Reply #1 on: August 15, 2021, 03:05:15 am »
The fewer things that you handle with interrupts, the more responsive each interrupt can be.  You generally learn polling before interrupts so that you get a good handle on the basics of program flow in the first place, and it's often the best way to do things even when you do understand everything there is to know about interrupts.
Not everything needs microsecond-immediate attention.  As long as you never busy-wait for anything (see the next section), the main-loop polling rate is usually more than sufficient.

Likewise, don't do anything more than what is absolutely necessary in an interrupt handler.  Any time that you spend there is time that you can't be doing something else, including lower-priority interrupts.  (priority as you've told the chip, which is not necessarily what you intended :palm:)  So do the absolute bare minimum that really must be done RIGHT NOW and get out.  If you can pare it all the way down to just setting a flag and leaving, then you can eliminate that interrupt altogether and just poll for the condition itself instead of the flag that you would have set.  That's a big plus!
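
For illustration, here's a minimal sketch of that grab-and-flag pattern (the register name is a placeholder, and the actual ISR hookup syntax depends on your compiler):

Code: [Select]
#include <stdint.h>

extern volatile uint8_t UART_RXDATA;    //placeholder for the real RX data register
void handle_byte(uint8_t b);            //the real work, done at main-loop priority

static volatile uint8_t rx_flag;
static volatile uint8_t rx_byte;

//Hook this up to the UART RX vector with your compiler's ISR syntax.
void uart_rx_isr(void)
{
    rx_byte = UART_RXDATA;    //grab the ephemeral data...
    rx_flag = 1;              //...set a flag, and get out
}

//Called from the main loop, as often as it comes around.
void poll_uart(void)
{
    if (rx_flag)
    {
        rx_flag = 0;
        handle_byte(rx_byte);
    }
}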

The entire point of this section is to keep the interrupts reserved for when you really do need a drop-everything-immediate response.  You don't want to have to wait for printf to finish because you standardized on interrupts for everything, including your debug spew.
(Why are you using printf anyway?  It's HUGE!  In fact, most of the desktop-standard functions don't appear very often here.  The environment is just too small to make them practical.)

---

Multitasking is easy once you understand state machines.  Instead of busy-waiting for one thing at a time, you can poll-wait for lots of things at the same time.  Your main function might then look something like this:

Code: [Select]
void main()
{
    //clock and other chip-wide set up

    init_module_A();
    init_module_B();
    init_module_C();
    init_module_D();

    while(1)
    {
        run_module_A();
        run_module_B();
        if (run_module_C())    //returns non-zero if some C-related event happened, this allows other modules to synchronize to it
        {
            trigger_module_D();
            module_A_input.foo = module_B_get_next_output();    //module A might have a simple static behavior, whereas module B has a sequence that must be kept
        }
        run_module_D();
    }

}

and a module file might include:

Code: [Select]
#include <stdint.h>

//inputs to this module, set elsewhere (e.g. by an ISR or another module)
volatile uint8_t ready_to_start;
volatile uint8_t more_bytes_to_come;

static enum
{
    STARTING,
    BYTE_ODD,
    BYTE_EVEN,
    DONE
} state_machine;

void init_module_C()
{
    //set up ONLY what is needed to run this module

    state_machine = STARTING;
}

uint8_t run_module_C()
{
    switch(state_machine)
    {
    case STARTING:
        if (ready_to_start)
        {
            //start code here
            state_machine = BYTE_ODD;
        }
        break;

    case BYTE_ODD:
        //do something with the odd-numbered bytes
        state_machine = BYTE_EVEN;
        break;

    case BYTE_EVEN:
        //do something with the even-numbered bytes
        if (more_bytes_to_come)
        {
            state_machine = BYTE_ODD;
        }
        else
        {
            state_machine = DONE;
        }
        break;

    default:
        //error-correction and normal reset
        state_machine = STARTING;
        break;
    }

    return (state_machine == DONE);
}

Now you can copy a module file to a different project that has a different use for that same concept, or even make a central library out of them, without rewriting anything.

Also notice the poll-wait for case STARTING.  This is how you wait for things without blocking everything else.

And it's usually faster to check for zero than for any other value.  Just load, and check the Z flag; instead of load, subtract, and check the Z flag.  It's not much, but if you're short on code space or processing time, it might help to arrange things like that.
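
For example, counting a loop down to zero instead of up lets the decrement itself set the Z flag, so there's no compare instruction per pass (send_byte is assumed to exist elsewhere):

Code: [Select]
#include <stdint.h>

void send_byte(uint8_t b);    //assumed to exist elsewhere

void send_bytes(const uint8_t *buf, uint8_t len)
{
    while (len != 0)    //load and check Z; no compare against a limit
    {
        send_byte(*buf++);
        len--;          //the decrement itself updates the Z flag
    }
}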

---

Global variables are okay, if used SPARINGLY!  Intentional inputs and outputs of a module, for example, but nothing more.  Declare those in the module's header file; everything else in the source file.  Global variables in the source file are global to that module (below the declaration), but not to the entire program.  That can be useful too, but again, only when needed.

Like desktop programming, try to keep everything as local as you can get away with, except to the point of using trivial access functions.  In that case, just make the variable itself accessible; you often don't have much of a stack to work with.  (Recursion is right out!)

Likewise for function prototypes, custom datatypes, etc.  Only what the rest of the project needs to see goes in the header; everything else goes in the source.

---

There are some good embedded C++ compilers, but most of the time you don't need C++.  When you do, it's nice to have, and it's not really that bloated when you do it right, but most of the time you just don't need those tools at all.  C is perfectly fine.

---

Math is interesting in a lot of cases.  If you're on an 8-bit architecture (0-255 or -128 to 127), then you have a time penalty for using anything bigger.  Sometimes that's okay, sometimes not.  Likewise for using 32-bit numbers on a 16-bit architecture, etc.  The (u)intN_t datatypes tell you exactly how big it is: uint16_t is 16 bits unsigned (0 to 65535).

You might also be restricted to adding, subtracting, and shifting, as the only native operations.  (shifting is essentially multiplying or dividing by powers of 2)  Multiplication by a non-power of 2 can give you a significant time penalty if you don't have the on-chip hardware for it (some compilers are smart enough to convert a constant multiplier into a combination of shifts and adds; others just pull in their standard block of longhand library code), and division by a variable is a nightmare by comparison!  It's literally doing explicit long division in that block of library code.  So try to avoid it if at all possible.
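
As a sketch of what those smarter compilers do with a constant multiplier, x*10 turns into two shifts and an add:

Code: [Select]
#include <stdint.h>

//x * 10 = x*8 + x*2 = (x << 3) + (x << 1)
static inline uint16_t times10(uint16_t x)
{
    return (uint16_t)(((uint32_t)x << 3) + ((uint32_t)x << 1));
}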

Floating-point is even worse than that (floats and doubles), unless you have a floating-point accelerator, and then it only helps you for the size that it's designed for.  (a 32-bit FP accelerator works for floats, but not doubles)  Fixed-point is guaranteed to work anywhere, which is simply you the programmer keeping track of what fraction you're counting by, and fixing it up (shifting) as needed to keep the answer straight without overflowing or underflowing in the middle somewhere.  Instead of wishing you had 8 times the resolution in your 0-to-15 counter, just count by 1/8ths!  That's fixed-point.  The compiler and the hardware still think you're working with integers, so you need to keep track of the fractional point yourself, but that's how you get fractions on an integer-only machine.
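
A minimal sketch of that count-by-1/8ths idea; only the comments know where the binary point is:

Code: [Select]
#include <stdint.h>

static uint8_t counter;    //value is in 1/8ths: real value = counter / 8

void tick(void)
{
    counter += 1;    //advance by 1/8
}

uint8_t whole_part(void)
{
    return counter >> 3;    //shift out the 3 fractional bits
}

uint8_t eighths_part(void)
{
    return counter & 0x07;    //the fractional bits themselves
}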

(For a real-world example, "integer" audio is actually fixed-point that is entirely fractional.  Instead of -32768, 16384, 8192, 4096, etc. for a signed number, these bit values are -1, 1/2, 1/4, 1/8, etc.  They're handled by the exact same circuitry that handles "true integers", so the hardware can't tell the difference, but that's how the audio industry and the software that it uses actually interpret it.  A different size, like 8-bit or 24-bit, either truncates the fractional bits or adds more of them so that the peak value is always +/-1.  Small DSP's often have a few bits above the fractional point to make them slightly more forgiving, and larger DSP's almost always use floating-point with FP 1.0 = "integer" 011111111111... at the points of conversion.)
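
And a sketch of that all-fractional convention in Q15 (the usual name for 1 sign bit plus 15 fractional bits): two fractions multiply into a 30-bit fraction, which gets shifted back to keep the binary point in place:

Code: [Select]
#include <stdint.h>

//Q15: 0x7FFF is just under +1.0, 0x8000 is -1.0
static inline int16_t q15_mul(int16_t a, int16_t b)
{
    //1.15 * 1.15 = 2.30; shift right 15 to put the binary point back
    //(beware the -1.0 * -1.0 corner case, which would overflow +1.0)
    return (int16_t)(((int32_t)a * b) >> 15);
}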
 
The following users thanked this post: Neomys Sapiens, Remark

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6586
  • Country: nl
Re: Embedded software development. Best practices.
« Reply #2 on: August 15, 2021, 08:26:11 am »
Quote from: Remark
I have seen a lot of code in projects that was written inside a while loop, performing several of the same operations,
This is the superloop vs interrupt driven vs RTOS discussion.
And the answer IMO is it all depends.
- Depends on platform (8 bit uC on 16MHz vs 32 bit arm uC on 100+MHz)
- product (code size, code complexity, price point, BOM targets, ROM size)

So in short, there is no black and white. Simply put, on an 8-bit uC you often have to struggle with code size, and superloops combined with some time-critical interrupt-driven events were (are?) pretty common up to 10 years ago.
For more complex products with a display, GUI and some fancy protocols, a 32-bit Arm with an RTOS is the default; for even larger, more complex products with security, Ethernet, etc., embedded Linux is the way to go, IMO.

To learn more, I would advise reading up on RTOS vs superloop and RMA (Rate Monotonic Analysis), and just playing around with a small 8-bit superloop uC: give it more to eat than it can handle and you will gain experience.
 
The following users thanked this post: harerod, Remark

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
Re: Embedded software development. Best practices.
« Reply #3 on: August 15, 2021, 09:21:18 am »
Quote from: AaronD
Likewise, don't do anything more than what is absolutely necessary in an interrupt handler.  Any time that you spend there is time that you can't be doing something else, including lower-priority interrupts.
No, no, no, no. This is the worst advice ever. For one thing: you will need to spend the time processing the data coming from the interrupt one way or another, so the total amount of processing time stays the same. If you add the overhead of buffering, you actually make things worse, because suddenly the slow main loop looking at buttons or blinking an LED becomes time-sensitive.

The only proper way is to plan how much time is spent in each interrupt (including processing) and determine which interrupt should have the highest priority. From there it becomes clear whether there are conflicting interrupts; you may need buffering, but more likely there is a better way out of such situations (like combining interrupts into one). For example: if you are doing digital signal processing, you get input samples and output samples. If you write the output samples from the ADC interrupt, then the output samplerate is automatically equal to the input samplerate; you don't need an extra output-sample timer interrupt.
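
A sketch of that combined-interrupt idea, with invented register names:

Code: [Select]
#include <stdint.h>

extern volatile int16_t ADC_RESULT;    //placeholder input register
extern volatile int16_t DAC_DATA;      //placeholder output register

int16_t process_sample(int16_t in);    //filter / gain / etc., defined elsewhere

//One interrupt does the whole signal path; no separate output timer needed,
//and the output samplerate is locked to the input samplerate by construction.
void adc_isr(void)
{
    DAC_DATA = process_sample(ADC_RESULT);
}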

Some of my embedded firmware projects spend 90% of the time in interrupts doing signal processing.

All in all, a better way to look at the interrupt controller is to regard it as a process scheduler, where each interrupt routine is a separate process. By setting lower/higher priorities and using interrupt nesting, you can have several concurrent processes without using an OS.

Quote
Floating-point is even worse than that (floats and doubles), unless you have a floating-point accelerator, and then it only helps you for the size that it's designed for.
OMG  :palm: Really? More nonsense again. It all depends on how much processing time you have available, and nowadays you have plenty on a typical ARM Cortex CPU. I used soft floating point for audio signal processing on a 70MHz ARM microcontroller about a decade ago.

Floating point makes working with numbers much easier (still keep an eye out for accumulating drift/errors), so you can write software quicker and keep the resulting code more readable and easier to maintain. The first mistake to make when writing software is to start optimising before determining that speed is actually a problem.

For example: if you need to read a temperature sensor input every 10 seconds then using soft-floating point has zero impact on performance. You probably can't even measure the extra time it takes.

I was brought up with soft floating point being a big no-no in embedded firmware, but now I realise the people who told me that were very wrong.

Edit: and not using printf? Really  :palm:  Please use printf and don't go around re-inventing the wheel. If printf is too big or uses global memory, then implement your own with a smaller footprint. The MSP430 GCC compiler, for example, comes with a very small vuprintf, and otherwise it is not difficult to find examples of even smaller micro-printf implementations. The worst thing to do by far is to invent your own string printing routines. I've seen those many times and they all sucked so badly that in the end even the original author started using printf. In the end the 'problem' (non-re-entrancy or code size) is in the vuprintf function, so just fix the problem there.  In a professional environment you need to keep to standards as much as possible, and the standard C library is such a standard. Don't go doing non-standard stuff, because it will confuse and annoy the hell out of the person who needs to maintain the code after you.
« Last Edit: August 15, 2021, 01:31:47 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Jacon, Remark

Offline Miyuki

  • Frequent Contributor
  • **
  • Posts: 908
  • Country: cz
    • Me on youtube
Re: Embedded software development. Best practices.
« Reply #4 on: August 15, 2021, 04:41:57 pm »
It all depends on the combination of Usage & MCU
Sometimes the best solution can be to use an Arduino with its libraries or even some scripting, sometimes you go bare metal with even some assembly routines, and sometimes it is best to use an RTOS.
When you want to do one calculation every once in a long while and put it on the display, just take the highest-level library with an easy-to-use interface.
For multitasking, an RTOS can really be a savior, even on a weak 8-bit AVR.
There is no single best solution.
Same with arithmetic: software floating point, fixed point, or even some magic constant when you need it really fast and accuracy is not relevant.

Only thing that matters is to write clean code. In clean code it is easy to find and fix bugs.
Do not copy-paste function blocks; call the function. If the overhead is too much, just make it inline. Avoid messes like goto.
And with interrupts, beware of the volatile-variable hazard.
 
The following users thanked this post: Remark

Offline RemarkTopic starter

  • Contributor
  • Posts: 31
  • Country: lt
Re: Embedded software development. Best practices.
« Reply #5 on: August 15, 2021, 05:47:49 pm »
Thank you so much to everyone for your answers.

And what about binary trees (nodes and leaves)? Are they used in embedded programming?
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
Re: Embedded software development. Best practices.
« Reply #6 on: August 15, 2021, 07:27:33 pm »
Quote from: Remark
Thank you so much to everyone for your answers.

And what about binary trees (nodes and leaves)? Are they used in embedded programming?
You can, but allocating/freeing memory using functions like malloc/free is a bit iffy, because microcontrollers typically don't have enough memory to avoid fragmentation. All in all you'll likely need to create some kind of memory manager and make sure you can deal with a shortage of memory in a graceful way.
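
A common way out is a pool of fixed-size blocks, which cannot fragment. A minimal sketch (sizes and names made up):

Code: [Select]
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE  32
#define NUM_BLOCKS  16

static uint8_t pool[NUM_BLOCKS][BLOCK_SIZE];
static uint8_t used[NUM_BLOCKS];

void *pool_alloc(void)
{
    for (size_t i = 0; i < NUM_BLOCKS; i++)
    {
        if (!used[i])
        {
            used[i] = 1;
            return pool[i];
        }
    }
    return NULL;    //out of blocks: the caller must handle this gracefully
}

void pool_free(void *p)
{
    size_t i = (size_t)((uint8_t (*)[BLOCK_SIZE])p - pool);
    used[i] = 0;
}
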
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Remark

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15797
  • Country: fr
Re: Embedded software development. Best practices.
« Reply #7 on: August 15, 2021, 10:03:11 pm »
Binary trees, like linked lists, can be implemented without dynamic allocation per se.
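
For instance, a sketch of a binary tree kept in a static node array, with indices instead of pointers (all names illustrative):

Code: [Select]
#include <stdint.h>

#define MAX_NODES 64
#define NIL       0xFF    //index value meaning "no child"

typedef struct
{
    int16_t key;
    uint8_t left;     //index into nodes[], or NIL
    uint8_t right;
} tree_node;

static tree_node nodes[MAX_NODES];
static uint8_t   node_count;    //next free slot; nothing is ever freed

//Returns the new node's index, or NIL if the pool is full.
uint8_t node_new(int16_t key)
{
    if (node_count >= MAX_NODES)
        return NIL;
    nodes[node_count].key   = key;
    nodes[node_count].left  = NIL;
    nodes[node_count].right = NIL;
    return node_count++;
}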

 
The following users thanked this post: Neomys Sapiens, Remark

Offline Neomys Sapiens

  • Super Contributor
  • ***
  • Posts: 3268
  • Country: de
Re: Embedded software development. Best practices.
« Reply #8 on: August 15, 2021, 10:52:23 pm »
Limiting interrupt processing to the ABSOLUTE MINIMUM NECESSARY is how PLCs achieve their deterministic reaction times and high-integrity program scheduling: read regular inputs, process the cyclical task, write regular outputs. For anything acquired during interrupt processing, you will have to analyse diligently at which point of the program's processing you should introduce this data. It would have a detrimental, if not damage-prone, effect in most RT applications if you were halfway through a calculation and logic processing and then performed the remaining part with a measurement value that does not belong to the same state of the real-world process as your binary inputs.
 
The following users thanked this post: harerod, Remark

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: Embedded software development. Best practices.
« Reply #9 on: August 16, 2021, 12:06:34 pm »
Binary trees (sometimes built as a pre-processing step on a real computer), skip lists (same thing, sort of), hashing: really, all of the algorithms stuff gets used. And unlike on something like a PC, you often find yourself writing these things yourself, because the generic library ones either do not fit your memory model or bring in unimportant features that cost you too much code space.

Doing things like using the low two bits of a pointer (on a 32-bit machine) for a couple of flags very much is a thing in embedded programming, and you just never need to do that shit on a PC.
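
A sketch of that low-bit trick, assuming nodes are at least 4-byte aligned (which a 32-bit machine normally guarantees):

Code: [Select]
#include <stdint.h>

typedef struct node
{
    struct node *next;
    int          payload;
} node;

#define TAG_MASK ((uintptr_t)0x3)    //two free bits thanks to 4-byte alignment

static inline node *untag(node *p)
{
    return (node *)((uintptr_t)p & ~TAG_MASK);
}

static inline unsigned get_tag(node *p)
{
    return (unsigned)((uintptr_t)p & TAG_MASK);
}

static inline node *with_tag(node *p, unsigned tag)
{
    return (node *)(((uintptr_t)p & ~TAG_MASK) | ((uintptr_t)tag & TAG_MASK));
}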

On the interrupt thing, it depends on context, I have DSP codes running on processors that spend 90% of their time in an ISR that actually does all the signal processing in a hard realtime situation, the main loop is literally while (1) {};
« Last Edit: August 16, 2021, 12:09:27 pm by dmills »
 
The following users thanked this post: Remark

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
Re: Embedded software development. Best practices.
« Reply #10 on: August 16, 2021, 12:31:06 pm »
Quote from: Neomys Sapiens
Limiting interrupt processing to the ABSOLUTE MINIMUM NECESSARY is how PLCs achieve their deterministic reaction times and high-integrity program scheduling: read regular inputs, process the cyclical task, write regular outputs. [...]
No. PLCs don't work that way. PLCs have a fixed (configurable) cycle time in which they process all inputs and calculate new output values. On more advanced PLCs it is possible to run several of these cycles sequentially as if they were parallel processes, but this is all done at a higher (OS) level, and the PLC's CPU idles when it is done processing. However, if your PLC program takes too long, the cycle time won't be met. In the end there is no magic bullet to add extra processing power.
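
A sketch of that fixed scan cycle (the tick and I/O functions are placeholders):

Code: [Select]
#include <stdint.h>

uint32_t millis(void);     //placeholder: free-running millisecond tick
void read_inputs(void);    //snapshot the real-world inputs
void run_logic(void);
void write_outputs(void);

#define CYCLE_MS 10

void plc_loop(void)
{
    for (;;)
    {
        uint32_t start = millis();

        read_inputs();     //one consistent snapshot per cycle
        run_logic();
        write_outputs();

        //idle until the next cycle starts; if the work above ran past
        //CYCLE_MS, the cycle time simply wasn't met
        while ((uint32_t)(millis() - start) < CYCLE_MS)
            ;
    }
}
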
« Last Edit: August 16, 2021, 12:33:24 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline PaulAm

  • Frequent Contributor
  • **
  • Posts: 939
  • Country: us
Re: Embedded software development. Best practices.
« Reply #11 on: August 16, 2021, 02:48:51 pm »
Be aware of what your target hardware is and code appropriately.

As an example, say your target doesn't support floating point in hardware.  Even one line of C involving a floating-point operation will cause the floating-point emulation libraries to be linked into the object.  If you have hardware with very limited stack or code space, that could be a disaster.

FSAs are very useful in many situations.  However, hand-managing FSA tables can be a nightmare, and even worse for maintenance.  The best way I've found to deal with FSAs over a 40-year career is an FSA generator called Libero, written by Pieter Hintjens back around 2000.  Pieter was an absolute genius.  The code is available on github (https://imatix-legacy.github.io/libero) and will compile without too much effort.  The FSA is described using a dialog, and the tool will generate the appropriate tables, stubs, etc.  Modifications become fairly trivial, since you work at a high level in the dialog and then fill in the short action stub routines.  I can't say the examples are all that great, but it's worth a couple of days' effort to master.  The really interesting thing about Libero is that it is not language-specific.  It uses templates to generate the target code, and it will happily turn the FSA into anything.  There are templates for Perl, shell, PHP, C, C++, PC assembly and yes, even Cobol, among others.  It's really a universal tool if you can get your head wrapped around it.
 
The following users thanked this post: pardo-bsso, Remark

Offline emece67

  • Frequent Contributor
  • **
  • !
  • Posts: 614
  • Country: 00
Re: Embedded software development. Best practices.
« Reply #12 on: August 16, 2021, 04:05:36 pm »
.
« Last Edit: August 19, 2022, 04:38:33 pm by emece67 »
 
The following users thanked this post: enz, Remark

Offline Neomys Sapiens

  • Super Contributor
  • ***
  • Posts: 3268
  • Country: de
Re: Embedded software development. Best practices.
« Reply #13 on: August 16, 2021, 04:16:22 pm »
Some useful resources (several are in German):

Real-Time Systems Development, A Practical Guide to; Sylvia Goldsmith, Prentice Hall, 1993, ISBN 0-13-718503-0

Entwurf komplexer Echtzeitsysteme - State of the Art; Hüsener, BI, 1994, ISBN 3-411-16441-7

Software Engineering Handbook; Jessica Keyes, Auerbach, 2003, ISBN 0-203-97278-3

Handbook of Real-Time and Embedded Systems; Insup Lee et al., Chapman&Hall/CRC, 2008, ISBN 1-58488-678-1

Design and Analysis of Reliable and Fault-Tolerant Computer Systems; Mostafa Abd-El-Barr, Imperial College Press, 2007, ISBN 1-86094-668-2

Programmable Controllers: Theory and Implementation; L.A. Bryan & E.A. Bryan, Industrial Text and Video, 1997, ISBN 0-944107-32-X

Control System Fundamentals (The Control Handbook, 2nd ed.); William S. Levine (Ed.), CRC Press, 2011, ISBN 978-1-4200-7363-8

Computing Handbook - Computer Science and Software Engineering; Allen B. Tucker (Ed.), Chapman&Hall/CRC, 2014, eISBN 978-1-4398-9853-6

Embedded Systems Handbook; Richard Zurawski (Ed.), CRC Press, 2006, ISBN tbd

High Performance Embedded Computing Handbook - A Systems Perspective; D.R. Martinez, R.A. Bond, M.M. Vai (Eds.), CRC Press, 2008, ISBN 978-0-8493-7197-4

Numerical Methods for Real-Time and Embedded Systems Programming; Don Morgan, M&T Books, 1992, ISBN 1-55851-232-2

Embedded Systems Building Blocks - Complete and Ready-to-Use Modules in C; Jean J. Labrosse, R&D Books, 2000, ISBN 0-87939-604-1

Anwendungsorientierte Mikroprozessoren, Mikrocontroller und Digitale Signalprozessoren; H. Baehring, Springer, 2010, ISBN 978-3-642-12291-0

Practical Aspects of Embedded System Design using Microcontrollers; Balakrishnan Selvan et al., Springer, 2008, e-ISBN 978-1-4020-8393-8

Software-Implemented Hardware Fault Tolerance; O. Goloubeva et al., Springer, 2006, ISBN 0-387-32937-4

Embedded Systems - Hardware, Design, and Implementation; Krzysztof Iniewski, Wiley, 2013, ISBN 978-1-118-35215-1

Real-Time Systems Development; Rob Williams, Butterworth-Heinemann, 2006, ISBN 978-0-7506-6471-4

Embedded Systems Design; Steve Heath, Newnes, 2003, ISBN 0-7506-5546-1

Embedded Systems and Computer Architecture; Graham Wilson, Newnes, 2002, ISBN 0-7506-5064-8

Mission-Critical and Safety-Critical Systems Handbook: Design and Development for Embedded Applications; K. Fowler (Ed.), Newnes, tbd, ISBN 978-0-7506-8567-2

Embedded Systems (World Class Designs); Jack Ganssle et al., Newnes, tbd, ISBN tbd

Embedded Software Know-It-All; Labrosse et al., Newnes, 2008, ISBN 978-0-7506-8583-2

Embedded Hardware Know-It-All; Jack Ganssle et al., Newnes, 2008, ISBN 978-0-7506-8584-9

 
The following users thanked this post: Remark

Offline RemarkTopic starter

  • Contributor
  • Posts: 31
  • Country: lt
Re: Embedded software development. Best practices.
« Reply #14 on: August 16, 2021, 04:27:40 pm »
Quote
Both approaches
  • keep ISR code at the bare minimum (ideally just setting flags) and do all tasks in the super-loop
  • do all your processing in ISRs, keep the super-loop task at a minimum (ideally a NOP)
are time-proven. So, what's up with them? Well, I think that each is better suited to different problems. I find myself using 2 when DSP is the main task of the system (also when bit banging), but 1 when it is control. IMHO neither of them substitutes for the other; they are complementary.

There is also another approach: use a tick to drive the whole system (this includes polling peripherals on such tick events). I do not use this approach much, though.

As I now do more control than DSP, I use 1 much more than 2; sometimes in bare-metal scenarios, other times using an RTOS. In both cases using FSMs.

Quote from: PaulAm
Be aware of what your target hardware is and code appropriately.

This is it! No matter what you code, you must understand exactly what each line you write means. Moving data to and fro and converting data between different formats can be a time hog, so you had better plan your application to keep data movement and format conversions to a minimum. Sometimes such movements and conversions are hidden by the language, so be aware.

C/C++? Well, after 30+ years doing all my embedded work in C and assembly, I'm now switching to C++ (I've also used C++ outside embedded for years, but my C++ experience is definitely smaller than my C experience). If I can decide, I will never use C again. No bloat at all, the code is more readable, same speed, better interfaces, and no more need for that clunky feature named "macros". Sure, it can all be done in C, and it has been done in C forever, but I prefer C++ now.

Regards.

And you probably apply OOP when you use the C++ language?
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Embedded software development. Best practices.
« Reply #15 on: August 16, 2021, 05:02:08 pm »
Quote from: nctnico
No, no, no, no. This is the worst advice ever. For one thing: you will need to spend the time processing the data coming from the interrupt one way or another, so the total amount of processing time stays the same. [...] Edit: and not using printf? Really  :palm:  Please use printf and don't go around re-inventing the wheel. [...]

What are you running on?  Embedded Linux?  In that case, you'd be right, and that's arguably the correct way to do it for the reasons that you stated.  But on an 8-bit 10MIPS machine with 300 bytes of RAM (yes, bytes; not even kbytes) that needs a "humanly-instant" response time - the sort of thing that I usually do - your approach could barely do anything at all!

Also note that I never said to not do *anything* in interrupts.  I have an interrupt-driven ADC module, for example, that runs a lowpass filter from a 10-bit SAR converter to an array of 16-bit "output" variables for everything else to use.  The only reason it's interrupt-driven is to keep the sample rate up across all channels so I can have a decently functional filter.  And that oversampled filter is indeed done in the ISR because the data that it works on is that ephemeral.  Drop the results in the global output array, and then the rest of the code can pick up the "magically existing ADC readings" whenever it gets around to it.
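
Something like this sketch of an ISR-side one-pole lowpass (channel handling, scaling, and register names invented for illustration):

Code: [Select]
#include <stdint.h>

#define N_CHANNELS 8

extern volatile uint16_t ADC_RESULT;    //placeholder: 10-bit result, 0..1023
uint8_t current_channel(void);          //placeholder channel tracker

volatile uint16_t adc_filtered[N_CHANNELS];    //the "magically existing" readings

void adc_isr(void)
{
    uint8_t  ch  = current_channel();
    uint16_t raw = (uint16_t)(ADC_RESULT << 6);    //scale 10-bit up to 16-bit

    //one-pole IIR lowpass: y += (x - y) / 8, with 6 extra bits of headroom
    int32_t diff = (int32_t)raw - adc_filtered[ch];
    adc_filtered[ch] = (uint16_t)(adc_filtered[ch] + diff / 8);
}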

When every instruction cycle and every byte of memory is important, you do things differently.  Still keep it sensible, but sensible in the sense that someone can see how your no-extras-at-all debug spew works (which could just as easily be a PWM-driven or bit-banged "pin-wiggler" for an oscilloscope to look at) and get on with the actual function of things, instead of tearing their hair out |O over not getting printf to fit, or over how to output an ASCII stream at all when your only UART is already tied up with DMX or whatever.
(and yes, that includes myself after a few months :))

---

If the main loop takes longer than the shortest required non-interrupt response time, then there's something else wrong.  If you can't reduce or redistribute the workload (so you don't end up triggering *everything* on the same pass, often by accident or naivete), then you might poll the more critical things more than once throughout the main loop code.

That's one place where a bit-banged "pin-wiggler" comes in really handy!  (if you have a spare output pin)  Toggle it once per loop, or once per poll of the critical thing, and you can see on the 'scope or logic analyzer, exactly how your timing works out.
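
A sketch, with a hypothetical port register:

Code: [Select]
extern volatile unsigned char PORTB;    //hypothetical GPIO register
#define DEBUG_PIN_TOGGLE()  (PORTB ^= (1u << 5))

void run_module_A(void);
void run_module_B(void);

void main_loop(void)
{
    while (1)
    {
        DEBUG_PIN_TOGGLE();    //one edge per pass: the pin's period is two loop times
        run_module_A();
        run_module_B();
    }
}
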
« Last Edit: August 16, 2021, 05:26:17 pm by AaronD »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
Re: Embedded software development. Best practices.
« Reply #16 on: August 16, 2021, 06:06:35 pm »
Quote from: AaronD
What are you running on?  Embedded Linux?  In that case, you'd be right, and that's arguably the correct way to do it for the reasons that you stated.
No. Regular microcontrollers like NXP's LPC ARM series or TI's MSP430 for example.

Quote
But on an 8-bit 10MIPS machine with 300 bytes of RAM (yes, bytes; not even kbytes) that needs a "humanly-instant" response time - the sort of thing that I usually do - your approach could barely do anything at all!
But who would use such a constricted microcontroller nowadays? If you are into high-volume products, maybe, but in such a case you'd likely start with a bigger microcontroller for test & design verification. For anything produced in fewer than 10k units, the NRE costs will be a huge part of the cost of the product. So if you can reduce the NRE cost by using a microcontroller that doesn't need balancing on one foot while touching your nose with the other, then you are already ahead. Plus there is likely room for future extensions as well.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Embedded software development. Best practices.
« Reply #17 on: August 16, 2021, 06:58:34 pm »
Quote from: nctnico
But who would use such a constricted microcontroller nowadays? If you are into high-volume products, maybe, but in such a case you'd likely start with a bigger microcontroller for test & design verification. For anything produced in fewer than 10k units, the NRE costs will be a huge part of the cost of the product. So if you can reduce the NRE cost by using a microcontroller that doesn't need balancing on one foot while touching your nose with the other, then you are already ahead. Plus there is likely room for future extensions as well.

I used to work for a mass-produced niche company that used something about the size that I've been talking about as their jelly-bean standard.  At their volume, they could get pretty much the entire family of that chip for almost nothing.  And by the time I started with them, most of the "cramming it all in" work was already done, and they had a somewhat comfortable set of linker commands and in-house libraries to replace the compiler's bloated libraries.  ("Don't use the '*' symbol!  Call our mult(a, b) function instead, that one of our guys wrote by hand in assembly.")

There was one project though, where the hardware designer(s) grossly underestimated the amount of processing it was going to take.  Fortunately, I had only spent the first week or so of my allotted 2 months or whatever it was, to figure out how to do it and then write the code that almost did it.  Enough to see that it would indeed work that way if I could cram it into the available code space.  (RAM and CPU time were not a problem in this case, only code space)  So I spent the next month and a half refactoring, hand-optimizing, reading and criticizing what the compiler thought was good assembly (I can do that myself in 2 fewer instructions!), tweaking the linker to fill some holes, and sometimes combining unrelated functions just because they had similar parts so there would only be one copy of those parts, and never using the complete results of that combined function.

The result was something that was finished on time and worked beautifully on the outside, but was a little weird on the inside (commented profusely!) and barely fit.  The next version was definitely going to have a bigger chip!  But then the project was cancelled for unrelated reasons.

So yes, you make a very valid point about using a bigger chip than what you think you need.  But there are lots of places, even today, where the rules and skills that I mentioned are valid too.
 

Offline indeterminatus

  • Contributor
  • Posts: 30
  • Country: at
Re: Embedded software development. Best practices.
« Reply #18 on: August 16, 2021, 08:16:55 pm »
Quote from: Miyuki
Only thing that matters is to write clean code.

Stressing that point.
 

Offline emece67

  • Frequent Contributor
  • **
  • !
  • Posts: 614
  • Country: 00
Re: Embedded software development. Best practices.
« Reply #19 on: August 16, 2021, 09:13:11 pm »
.
« Last Edit: August 19, 2022, 04:38:47 pm by emece67 »
 
The following users thanked this post: Remark

Offline PlainName

  • Super Contributor
  • ***
  • Posts: 7508
  • Country: va
Re: Embedded software development. Best practices.
« Reply #20 on: August 17, 2021, 02:45:39 am »
Quote
The code is available on github  (https://imatix-legacy.github.io/libero)


The website appears to be there, but there are no downloads. Hardly surprising since it's over 20 years old, but I was certainly tempted to take a look!
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28429
  • Country: nl
    • NCT Developments
Re: Embedded software development. Best practices.
« Reply #21 on: August 17, 2021, 07:55:44 am »
Quote from: Miyuki
Only thing that matters is to write clean code.
Not just that; also easy-to-debug code. One of the things that is standard in all my microcontroller projects is a serial port command line interface. This can be used to read the status of the device and attached devices, and to get some statistics as well (like the number of messages received, failed messages, etc). This is something I picked up at one of my employers, and it has proven extremely handy for field diagnostics / remote debugging. Either I go to the customer's site and look at the output, or the customer can hook up a serial port and send me the output. Either way, the problem usually becomes clear quickly.
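
A minimal sketch of such a command line interface, using a table-driven dispatch (the command set and statistics names are invented):

Code: [Select]
#include <stdio.h>
#include <string.h>

extern unsigned long msgs_received, msgs_failed;    //kept by the rest of the firmware

static void cmd_help(void);

static void cmd_stats(void)
{
    printf("rx=%lu failed=%lu\r\n", msgs_received, msgs_failed);
}

typedef struct
{
    const char *name;
    void (*handler)(void);
} cli_cmd;

static const cli_cmd commands[] =
{
    { "stats", cmd_stats },
    { "help",  cmd_help  },
};

static void cmd_help(void)
{
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++)
        printf("%s\r\n", commands[i].name);
}

//Feed one received line (stripped of CR/LF) into this from the UART code.
void cli_handle_line(const char *line)
{
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++)
    {
        if (strcmp(line, commands[i].name) == 0)
        {
            commands[i].handler();
            return;
        }
    }
    printf("unknown command: %s\r\n", line);
}
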
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline PaulAm

  • Frequent Contributor
  • **
  • Posts: 939
  • Country: us
Re: Embedded software development. Best practices.
« Reply #22 on: August 17, 2021, 02:26:56 pm »
Quote from: PlainName
The website appears to be there, but there are no downloads. Hardly surprising since it's over 20 years old, but I was certainly tempted to take a look!

I did a download a few months ago, let me track down the source link.  It's out there somewhere

OK, here it is:
    https://github.com/imatix-legacy/libero

They moved the source from the web page to its own github repository.
« Last Edit: August 17, 2021, 02:32:14 pm by PaulAm »
 

Offline PlainName

  • Super Contributor
  • ***
  • Posts: 7508
  • Country: va
Re: Embedded software development. Best practices.
« Reply #23 on: August 17, 2021, 03:23:35 pm »
Blinding, thanks  :-+
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15797
  • Country: fr
Re: Embedded software development. Best practices.
« Reply #24 on: August 17, 2021, 05:19:51 pm »
Quote from: Miyuki
Only thing that matters is to write clean code.

Quote from: indeterminatus
Stressing that point.

Oh yeah. I'm all for that too. The problem, though, is that pretty much everyone has a different view of what clean code is.
 

