
Embedded software development. Best practices.


Sal Ammoniac:

--- Quote from: nctnico on August 15, 2021, 09:21:18 am ---OMG  :palm: Really? More nonsense again.
--- End quote ---

You seem to like broad, sweeping generalizations (and to some extent so did the person you quoted). All embedded systems are different and they all need to be analyzed and their software designed for each specific case. Sure, there are embedded development best practices that should generally be followed, but sometimes it's also necessary to break the rules.

AaronD:

--- Quote from: Sal Ammoniac on August 17, 2021, 09:36:34 pm ---...but sometimes it's also necessary to break the rules.

--- End quote ---

Yep!  Like one of my grade-school teachers said in English class:

--- Quote ---You can break any rule you want, as long as you know what the rule is, why it's there, and why you're breaking it.
--- End quote ---
I think the same goes for Engineering, which Embedded Software certainly is.  Each level of complexity, from a 4-pin MCU to a Raspberry Pi, has a different set of rules, and there isn't necessarily a clear-cut distinction between adjacent levels.

When you're at a high enough level that you can afford the overhead to use someone else's generalized work, then it's usually a good idea to do that.  The wide use usually means that whatever bugs it used to have are gone now, so any problems are probably with your use of it and not the library itself.
Below that level, though, the constraints are often tight enough to forbid a general-purpose library (a company's hand-optimized in-house math library, for example, instead of what the compiler provides). And there's still a lot of good work both above and below that level...if you can figure out where the cutoff is; good luck with that!

---

And if you're just starting out, you WILL write bad code.  Do it anyway.  You'll make it work, and you'll be proud of it in the moment, but when you look back on it after a few years of using those tools, you'll cringe.  Yes, it works, but WOW that looks awful now!  There's really no way to avoid that.  Just do it and get through it.

laureng:
A few thoughts based on what I've done/am doing. Not claiming best practice.

I'm using C++ on PIC32 microcontrollers, and compiling with the XC32 compiler (from Microchip). Works well, but when you start writing larger programs, you really need to get a paid license to use the "-Os" optimisation, which drastically reduces code size. (There are other free hacks for size - see below.)


* I like the main "do it" loop, which calls various objects and lets them do their own work. I have a "Runnable" interface that just declares a void run(), and classes can inherit that if they have some work to do during the main loop.
* Yes, there's the "interrupts good" vs "interrupts bad" camps. Arguments against are that interrupts are non-deterministic. I use them sparingly, mainly for time-critical hardware I/O (e.g. reading from a hardware serial RX FIFO, where the FIFO has limited depth and can overrun quickly).
* With the "do it" loop approach and objects, you're effectively implementing cooperative multitasking, so, just as with interrupts, each individual object shouldn't block for ages when it gets control. In the past I've used non-blocking state machines, but currently I block for external I/O (e.g. between command and response) if I think I'll get a guaranteed, quick reply.
* From the above, you might find yourself writing code with a sort of "handle cranking" behaviour, where a higher-level object triggers a side effect in a lower-level object, avoiding the need for the lower-level object to explicitly receive control in the loop. An example would be a business logic class that calls a "write" function on a serial port object, where the serial port object then does some hardware I/O to send the data over the serial port immediately (rather than, e.g., queuing the data and waiting for explicit control in the main loop). This works fine if the action can happen immediately and doesn't need repeating. But if, say, you're sending a message over an RF link, the outbound message gets lost, and your higher-level object blocks forever waiting for a reply that never arrives, you're in trouble. Options are to have the higher-level object time out and retry, or to make both objects into non-blocking state machines (more robust, but greater complexity).
* There are also the "heap good" and "heap bad" camps. Arguments against heap are non-deterministic behaviour that can lead to heap fragmentation and/or exhaustion. I shamelessly use the heap, because I like collection classes, and it's sometimes easier to just new() and delete() rather than using the stack, but I'm writing code where if it crashes, the user is just pissed off and will reboot the device. If you're writing something safety critical, where blowing the heap will cause someone to die, you might exclusively use the stack. Consider profiling the heap. This can be done by writing a repeating pattern to the heap, with assembly code, before crt0, then later checking the extent of pattern remaining. I also do it by writing a test harness around my business logic that can be compiled under Linux (with hardware devices like LCD screens stubbed out - more on this below), then running a heap profiler like valgrind. You can also profile stack and check for memory errors with valgrind, which might let you track them down more easily than using the ICD.
* STL containers use the default allocator, and this throws exceptions. Exceptions create memory bloat. I use a custom allocator (based on this blog post), that doesn't throw exceptions, then use the STL containers (vector, deque, map...) with that allocator. You can then compile with -fno-exceptions.
* std::string also throws exceptions, and you can't make it use a different allocator like you can for the STL container classes. I create my own string class, which inherits std::basic_string, then have it use my nothrow allocator.
* printf adds a lot of bloat (like 20KB or something?). I use a tiny printf by Marco Paland.
* Further size hacks - with XC32 and other gcc derivatives, take a look at flags like "-Wl,--report-mem -ffunction-sections -fdata-sections -Wl,--gc-sections,--print-gc-sections -save-temps -Wl,-Map,mapfile.map" to (in order) get a memory report from the linker, move things into their own linker sections, garbage collect unused sections at link time (and show what was removed), save intermediate code (assembly) and produce a memory map. Take a look at the memory map to see what's using the most space, then dive into the assembly to see if you can work out why it's bloated. Getting the compiler to separate everything into sections for linker garbage collection runs the risk of having useful code removed by accident, but you'd be surprised how much stuff you don't need that can be culled.
* Streams are useful for debugging, but add a lot of bloat too! I write my own stream class that uses tiny snprintf for operator<< on ints, uints and doubles.
* The Dependency Injection Pattern is awesome IMHO. I have a builder (see Builder Pattern) that creates all necessary objects, injecting most dependencies into each via their constructors. This lets you really easily create different builders for simulators and different hardware configurations, without needing any new code beyond writing a new builder. For example, the builder for real hardware could inject an actual LCD object when creating objects that need to output to an LCD display, but the builder for a simulator could inject a stubbed or debug LCD object. Both would inherit some abstract interface like "LCDInterface", so the object being constructed doesn't know or care if it's getting a real or fake LCD display. For bonus points, use inheritance in builders to reduce code duplication, by putting the common build logic in a base class.
* Factory Pattern is another useful design pattern for loose-coupling between objects. Basically if you need to make things, you don't really want to know how they're made (SOLID - "depend upon abstractions, not concretions"), so you use a factory to make things for you, and the factory hides the construction detail. An example might be creating a UDP socket object (itself inheriting from some socket abstraction), where you could have different factories for hardware UDP sockets (using some hardware library) and for POSIX UDP sockets (for when you're in a simulated build running under Linux). Both factories inherit from a factory abstraction, so the thing making sockets just says "give me a socket" and it gets an object that works like a socket for both simulated and actual hardware builds. You can go too far with design patterns, but for any non-trivial code, this sort of decoupling makes the difference between pro code that's easily maintainable and expandable, and noob code that becomes a total mess. Specifically for embedded systems, loose coupling and abstraction will really help with simulation, porting and debugging.
* In the above spirit of loose coupling (see also dependency inversion, as part of the SOLID principle), when your embedded code is so modular that it's easy to compile desktop simulators with mock UIs (e.g. implemented using Qt), you realise you can do a lot of other awesome things too. Seeing the same business logic running embedded and on the web (e.g. C++ compiled to WebAssembly with Qt-based I/O devices) is cool.
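The "Runnable" interface and main "do it" loop described above could be sketched roughly like this (a minimal desktop-compilable sketch; the names Runnable, Blinker and mainLoopOnce are illustrative, not from any real library):

```cpp
#include <cassert>
#include <vector>

// Each cooperative task implements run(), called once per pass of the loop.
class Runnable {
public:
    virtual ~Runnable() = default;
    virtual void run() = 0;  // must return quickly; no long blocking
};

// Example task: counts its run() calls instead of toggling a real LED.
class Blinker : public Runnable {
public:
    void run() override { ++ticks; }
    unsigned ticks = 0;
};

// One pass of the main "do it" loop: every object gets control briefly.
void mainLoopOnce(std::vector<Runnable*>& tasks) {
    for (Runnable* t : tasks) t->run();
}
```

On target, `mainLoopOnce` would sit inside `while (1) { ... }` in `main()`, with each object doing a small non-blocking slice of work per pass.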
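The heap-painting technique mentioned above can be simulated on the desktop like this (on a real target the fill would happen in startup code before crt0, over the actual heap region; here a static buffer stands in for the heap, and the names are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Fill a memory region with a known pattern before any allocations happen.
constexpr std::uint8_t kPaint = 0xA5;

void paint(std::uint8_t* region, std::size_t len) {
    std::memset(region, kPaint, len);
}

// Later, count how much pattern survives (from the top of the region down)
// to estimate the heap's high-water mark.
std::size_t untouchedBytes(const std::uint8_t* region, std::size_t len) {
    std::size_t count = 0;
    while (count < len && region[len - 1 - count] == kPaint) ++count;
    return count;
}
```

The same trick works for stack high-water-mark measurement, painting from the other end.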
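A no-throw allocator in the spirit of the one described above might look like this (a sketch only; the original is based on a blog post not reproduced here, and the failure policy shown is just one option). Note that `std::basic_string` can also take the allocator via a type alias, which avoids inheriting from it:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <string>
#include <vector>

// Minimal allocator that never throws: on exhaustion it aborts, which on an
// embedded target you'd replace with a reset or error hook. Usable with
// -fno-exceptions.
template <typename T>
struct NoThrowAllocator {
    using value_type = T;
    NoThrowAllocator() = default;
    template <typename U> NoThrowAllocator(const NoThrowAllocator<U>&) {}
    T* allocate(std::size_t n) {
        void* p = std::malloc(n * sizeof(T));
        if (!p) std::abort();  // embedded: log/reset instead of throwing
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t) { std::free(p); }
};
template <typename T, typename U>
bool operator==(const NoThrowAllocator<T>&, const NoThrowAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const NoThrowAllocator<T>&, const NoThrowAllocator<U>&) { return false; }

// Containers and strings then use the allocator through aliases.
template <typename T>
using Vec = std::vector<T, NoThrowAllocator<T>>;
using String = std::basic_string<char, std::char_traits<char>, NoThrowAllocator<char>>;
```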
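The lightweight stream class mentioned above could be sketched like so: operator<< formats through snprintf into a small buffer instead of pulling in iostreams (names illustrative; on target the buffer would be flushed to a UART, here it just accumulates into a string for inspection):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Tiny stream: snprintf-based formatting, no iostream bloat.
class TinyStream {
public:
    TinyStream& operator<<(int v) {
        char buf[16];
        std::snprintf(buf, sizeof(buf), "%d", v);
        out += buf;
        return *this;
    }
    TinyStream& operator<<(const char* s) {
        out += s;
        return *this;
    }
    std::string out;  // stand-in for a UART transmit path
};
```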
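The LCD dependency-injection idea above might be sketched like this (LCDInterface, StubLCD and StatusScreen are illustrative names; a builder for real hardware would construct StatusScreen with a driver-backed LCD instead of the stub):

```cpp
#include <cassert>
#include <string>

// Abstraction the business logic depends on.
class LCDInterface {
public:
    virtual ~LCDInterface() = default;
    virtual void print(const std::string& text) = 0;
};

// Simulator/test build: record what would have been drawn.
class StubLCD : public LCDInterface {
public:
    void print(const std::string& text) override { lastText = text; }
    std::string lastText;
};

// Business logic only sees the abstraction, injected via the constructor.
class StatusScreen {
public:
    explicit StatusScreen(LCDInterface& lcd) : lcd_(lcd) {}
    void showVoltage(int millivolts) {
        lcd_.print(std::to_string(millivolts) + " mV");
    }
private:
    LCDInterface& lcd_;
};
```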
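And the socket-factory idea above, in the same sketchy spirit (names illustrative; the hardware build would provide a parallel factory returning sockets backed by the vendor's network library, with only the factory differing between builds):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Socket abstraction; kind() stands in for real send/receive methods.
class SocketInterface {
public:
    virtual ~SocketInterface() = default;
    virtual std::string kind() const = 0;
};

class PosixUdpSocket : public SocketInterface {
public:
    std::string kind() const override { return "posix-udp"; }
};

// Abstract factory: callers say "give me a socket" and don't know how
// it's made.
class SocketFactory {
public:
    virtual ~SocketFactory() = default;
    virtual std::unique_ptr<SocketInterface> makeUdpSocket() = 0;
};

// Factory used by the simulated (Linux) build.
class PosixSocketFactory : public SocketFactory {
public:
    std::unique_ptr<SocketInterface> makeUdpSocket() override {
        return std::make_unique<PosixUdpSocket>();
    }
};
```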
That's all I could think of off the top of my head. Again, I'm not claiming any of these are best practice or how anyone else should code (it depends on your chip and application).

I'd love to expand on some of this in a blog post some time. I'm not much for reading thick programming books (although some are excellent), and there really isn't a huge amount (that I've found) on embedded C++ that has all these tricks in one place. I really do think C++ can be great in embedded, but as above, some of the language and STL "features" (like exceptions being almost irremovably baked into things) can make it hard to use in practice.

EDIT: Added links and blurb on factory pattern.


miceuz:

--- Quote from: nctnico on August 17, 2021, 07:55:44 am ---One of the things that is standard in all my microcontroller projects is a serial port command line interface.

--- End quote ---

Another thing I've picked up: implement a command line interface via said serial port. This has proved again and again to be a priceless tool for debugging and test automation.
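The core of such a serial CLI is just a dispatch table from command names to handlers; a minimal sketch (transport, i.e. the actual UART read/write, left out, and all names illustrative):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Map command names to handlers; dispatch one received line and return
// the reply text to send back over the serial port.
class CommandLine {
public:
    void addCommand(const std::string& name,
                    std::function<std::string(const std::string&)> handler) {
        handlers_[name] = std::move(handler);
    }
    std::string dispatch(const std::string& line) {
        auto space = line.find(' ');
        std::string cmd = line.substr(0, space);
        std::string args =
            (space == std::string::npos) ? "" : line.substr(space + 1);
        auto it = handlers_.find(cmd);
        return (it != handlers_.end()) ? it->second(args) : "unknown command";
    }
private:
    std::map<std::string, std::function<std::string(const std::string&)>> handlers_;
};
```

On target you'd accumulate characters from the UART into a line buffer and call `dispatch` on each newline; the same class drops straight into a desktop test harness.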
