Author Topic: Embedded program design thinking  (Read 1891 times)


Offline BlogRahul

  • Regular Contributor
  • *
  • Posts: 59
  • Country: in
Embedded program design thinking
« on: August 28, 2021, 04:37:24 am »
Hi everyone
This is my first post and I hope I will get good advice and suggestions. I am looking for expert advice on designing embedded C programs. How do you design a program to solve a problem? What are the steps for designing a program to solve a problem?
 

Offline lapm

  • Frequent Contributor
  • **
  • Posts: 562
  • Country: fi
Re: Embedded program design thinking
« Reply #1 on: August 28, 2021, 05:36:24 am »
I'm not super experienced with embedded, but here's my 0.01 snt.

The major difference between embedded and normal programming is hardware limitations: usually no gigabytes of RAM, usually no gigahertz of CPU cycles to burn, often no OS to rely on for common tasks, etc...

Otherwise it goes much like any software development: chop the big problem into a series of small problems.


Another special requirement in embedded is the need to read and understand the datasheets for the subsystems you are using or hoping to use. Want to use a serial port? How do you initialize it into the known state you expect, etc.?
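As a sketch of what that datasheet-driven init looks like: the register names, bit positions, and divisor formula below are all made up for illustration (the real ones come from your MCU's datasheet), and the registers are mocked as plain variables so the snippet compiles anywhere.

```c
#include <stdint.h>

/* Hypothetical UART registers, mocked as plain variables so this sketch
 * compiles on any host.  On a real MCU these are fixed memory-mapped
 * addresses, and the names, bit positions and divisor formula all come
 * from the datasheet. */
static volatile uint32_t UART_BAUD, UART_CTRL;

#define UART_CTRL_ENABLE (1u << 0)   /* hypothetical enable bit */
#define UART_CTRL_8N1    (1u << 1)   /* hypothetical frame-format bit */

/* Bring the UART into a known state before first use. */
void uart_init(uint32_t periph_clk_hz, uint32_t baud)
{
    UART_CTRL = 0;                            /* disable while configuring */
    UART_BAUD = periph_clk_hz / (16u * baud); /* typical 16x-oversampling divisor */
    UART_CTRL = UART_CTRL_8N1 | UART_CTRL_ENABLE;
}
```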


Especially at the lower end of embedded controllers it's very much an optimization game: cram your desired functionality into as small an amount of memory as possible. If you can't, then something must be left out.
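One everyday instance of that cramming game, as a sketch: packing boolean flags into single bits of one byte instead of one int each. The flag names here are invented examples.

```c
#include <stdint.h>

/* Eight booleans in one byte instead of eight ints: a tiny but typical
 * example of trading a little code clarity for RAM. */
enum {
    FLAG_READY = 1u << 0,   /* example flag names, not from any real API */
    FLAG_ERROR = 1u << 1,
    FLAG_BUSY  = 1u << 2
};

static uint8_t flags;   /* all flags live in this single byte */

void set_flag(uint8_t f)   { flags |= f; }
void clear_flag(uint8_t f) { flags &= (uint8_t)~f; }
int  has_flag(uint8_t f)   { return (flags & f) != 0; }
```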



Elecia White wrote a nice book about designing embedded systems; I have read it and think highly of it.  https://www.oreilly.com/library/view/making-embedded-systems/9781449308889/
Electronics, Linux, Programming, Science... im interested all of it...
 

Online Berni

  • Super Contributor
  • ***
  • Posts: 4097
  • Country: si
Re: Embedded program design thinking
« Reply #2 on: August 28, 2021, 07:47:26 am »
It's the same as any other C programming, with the only differences being that you don't have massive amounts of RAM available or standard OS libraries, and that you need to include hardware drivers as part of your code.

The main benefit is that you can get your application to run on really cheap and slow hardware while at the same time having very tight control over the timing of things.

For very large and complex applications it's sometimes a good idea to take the middle ground of using an RTOS. This provides the multithreading support you would enjoy under an OS, but not drivers and all the nitty-gritty low-level stuff, so you still have the flexibility of tight control over everything while using only a few extra kilobytes of RAM.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 14986
  • Country: gb
    • Having fun doing more, with less
Re: Embedded program design thinking
« Reply #3 on: August 28, 2021, 08:10:02 am »
For an embedded system, before you consider a programming language you can benefit from defining your problem and solution in terms of:
  • events: inputs and messages between processes
  • states: as in a finite state machine (FSM)
  • actions: what happens when an event is received in each state
  • processes: an "engine" that implements one action at a time, from beginning to end
  • threads: independent processes that can execute simultaneously
  • algorithms: what processing is necessary to handle an event
  • timing
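The events/states/actions breakdown above maps directly onto a table-driven FSM in plain C. A minimal sketch (the states, events, and actions below are invented placeholders): every state/event combination is spelled out explicitly, so nothing can be forgotten.

```c
/* Table-driven FSM: action[state][event] holds one handler per
 * combination, and each handler returns the next state. */
typedef enum { ST_IDLE, ST_RUNNING, ST_COUNT } state_t;
typedef enum { EV_START, EV_STOP, EV_COUNT } event_t;

static int starts;   /* side-effect counter, standing in for a real action */

static state_t on_start(void)    { starts++; return ST_RUNNING; }
static state_t on_stop(void)     { return ST_IDLE; }
static state_t ignore_idle(void) { return ST_IDLE; }    /* stay put */
static state_t ignore_run(void)  { return ST_RUNNING; } /* stay put */

/* The whole machine in one table: every event handled in every state. */
static state_t (*const action[ST_COUNT][EV_COUNT])(void) = {
    [ST_IDLE]    = { [EV_START] = on_start,   [EV_STOP] = ignore_idle },
    [ST_RUNNING] = { [EV_START] = ignore_run, [EV_STOP] = on_stop     },
};

state_t dispatch(state_t s, event_t e) { return action[s][e](); }
```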

Then, if you use C, either use an RTOS or make very sure you understand what the various C keywords don't guarantee. Most people think they understand them; most don't.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 1950
Re: Embedded program design thinking
« Reply #4 on: August 28, 2021, 05:56:38 pm »
In particular, understand what 'volatile' does NOT mean when it comes to multi-threaded applications (and also to any type larger than the machine word size): it does not mean 'atomic'...
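A sketch of the distinction, using C11 atomics as one portable fix (on MCUs without C11 atomics, the classic alternative is briefly disabling interrupts around the access):

```c
#include <stdint.h>
#include <stdatomic.h>

/* 'volatile' only forces the compiler to actually perform each access;
 * it does NOT make the access atomic.  On an 8-bit MCU even a plain
 * read of this 32-bit counter takes several instructions, so an
 * interrupt can fire mid-read and the reader sees a torn value. */
volatile uint32_t naive_ticks;   /* shared with an ISR: unsafe to
                                    read-modify-write concurrently */

/* One portable fix (C11): make the shared object genuinely atomic. */
_Atomic uint32_t safe_ticks;

void tick_isr(void)              /* what the ISR side would do */
{
    atomic_fetch_add(&safe_ticks, 1);   /* atomic read-modify-write */
}

uint32_t read_ticks(void)        /* what the main loop would do */
{
    return atomic_load(&safe_ticks);    /* no torn read */
}
```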

 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9099
  • Country: us
Re: Embedded program design thinking
« Reply #5 on: August 29, 2021, 04:16:36 pm »
Select a chip based on the availability of a port of FreeRTOS - the web site lists which chips are supported: https://freertos.org/RTOS_ports.html

Get FreeRTOS running first.  Actually, second, right after serial IO for console messages.

The reason for using FreeRTOS isn't necessarily the real-time capabilities but rather that it helps partition the project.  Each significant block of code becomes a 'task' and it gets scheduled from time to time.  But you only need to think about the running task.  If task interaction is required (and it will be), things like semaphores and queues deal with that.  The advantage of the RTOS is that the interface between tasks is clean and well defined.

All of the above for a sizeable project.  For something on the order of a few hundred lines of code, it may be possible to use brute force and no RTOS.

Sometimes the factory provides a bunch of library code, and often it will include FreeRTOS.  Select a check box and the code generator installs FreeRTOS and all the related files into the project.  Cypress PSoC Creator does this.

The PSoC 6 device is very interesting and the Creator IDE runs well.  I like the fact that I can drag and drop hardware blocks and the IDE generates the bulk of the code.  The dual-core PSoC 6 is my favorite device, followed by the Teensy 4.1 if I feel the need for speed!

If I need a little project built right now, I almost always reach for an Arduino.  Then there is the mbed LPC1768 using the online toolchain: a very nice device to use, with a lot of library support including TCP/IP and Berkeley sockets.
« Last Edit: August 29, 2021, 04:20:13 pm by rstofer »
 

Offline dietert1

  • Super Contributor
  • ***
  • Posts: 1309
  • Country: de
    • CADT Homepage
Re: Embedded program design thinking
« Reply #6 on: August 29, 2021, 09:14:08 pm »
An important difference from application programming is the requirement for consistent error checking and recovery, as there may be no user as a backup to handle problems. Non-trivial. I have seen systems that hang after two or three weeks of continuous operation. Some embedded systems have an LED indicator to try and call a user for help.
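The standard recovery of last resort for exactly this "hangs after weeks" failure mode is a hardware watchdog. A sketch of the usual pattern; the register name and key value are hypothetical, and the register is mocked as a plain variable so the snippet compiles anywhere.

```c
#include <stdint.h>

/* Hypothetical watchdog feed register, mocked as a plain variable.
 * Real MCUs have a dedicated watchdog peripheral; the name and the
 * key value here are made up for illustration. */
static volatile uint32_t WDT_FEED;
#define WDT_FEED_KEY 0xA5u

static int subsystems_ok(void)
{
    return 1;   /* placeholder for real health checks on each subsystem */
}

/* Feed the watchdog only while every subsystem still responds.  If
 * anything hangs, the feeding stops and the watchdog resets the chip,
 * which is the recovery path when no user is around to help. */
void main_loop_iteration(void)
{
    if (subsystems_ok())
        WDT_FEED = WDT_FEED_KEY;
    /* deliberately no else branch: a hung subsystem must lead to a
       reset, not to silent continuation */
}
```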

Regards, Dieter
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6219
  • Country: nl
Re: Embedded program design thinking
« Reply #7 on: August 30, 2021, 05:25:41 am »
All of the above, but for me the most important thing is domain knowledge: you need to know EXACTLY what each pin of your micro is supposed to do and its effect on the hardware you are controlling.
This means EE hardware knowledge; you need experience with HSI designs, low-level programming, and protocols. Debugging also often requires basic skills and knowledge of EE test gear, like an oscilloscope and logic analyzer.

Embedded hardware can be unforgiving: if you program a GUI on a PC and make a mistake, you get incorrect behaviour but nothing is damaged. On many embedded systems, if you make a mistake, the hardware or the microcontroller itself might become damaged.
« Last Edit: August 30, 2021, 05:27:16 am by Kjelt »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 14986
  • Country: gb
    • Having fun doing more, with less
Re: Embedded program design thinking
« Reply #8 on: August 30, 2021, 08:22:46 am »
Embedded hardware can be unforgiving: if you program a GUI on a PC and make a mistake, you get incorrect behaviour but nothing is damaged. On many embedded systems, if you make a mistake, the hardware or the microcontroller itself might become damaged.

... plus in the case of life support applications or where lots of energy is being controlled, ...

I'll leave you to fill in the rest!
 

Offline snarkysparky

  • Frequent Contributor
  • **
  • Posts: 325
  • Country: us
Re: Embedded program design thinking
« Reply #9 on: August 31, 2021, 12:30:37 pm »
Start building tools you will need as isolated functions.  Think of the bricks you will need to build your embedded house.

Get those bricks running and well tested.  Then think about how to lay those bricks to achieve your overall objective.

This requires your thought process to jump from the top level to the bottom level and back many times during the design process.

 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 3584
  • Country: fi
    • My home page and email address
Re: Embedded program design thinking
« Reply #10 on: September 01, 2021, 08:14:18 am »
The very first thing you do, is make sure you understand the problem at hand.  I cannot emphasize this enough; this is absolutely critical.

I myself like to observe the people performing the actual tasks, to understand the fundamental nature of the problem.

When people describe the problem they are having, they almost never describe the actual, underlying task or problem they're trying to solve, but instead have already chosen a path and want help with a barrier along that path.

This same thing, or principle, works on all levels of development.  Even when you are writing actual code, and are worrying if some function or subsystem is reliable or efficient enough, the true solution is usually found on the algorithmic level.  That is, we humans tend to get "stuck" trying to optimize the approach we have already chosen; but really, we should take a step back, and examine the greater context, and consider whether the approach itself is correct.  (This often leads to premature optimization: spending time on a detail that does not matter, or should not even exist in the first place.)

I have found I get the best results with a mixed approach (similar to what snarkysparky mentioned above): on one hand, I sketch (often on paper, in text files, or in Inkscape/Dia diagrams) the overall structure, schematic diagrams, and subsystems I'd need.  On the other hand, I write small alternate unit test programs/firmware snippets/Arduino sketches, to determine how the "tricky" deep details of the hardware work; things like serial buses, DMA, timers and interrupts, PWM, et cetera.  (On the hardware side, I'm just a butterfinger hobbyist, but I've done a lot of software development, and the same applies; except that the "tricky" parts there are things like privilege separation (processes, privileges) and security details, interprocess communication, configuration methods, and so on.)

For applications and appliances with nontrivial user interfaces –– i.e., more than an on/off button! –– I have learned to always simulate the user interface first, before the actual development begins.  Some may disagree, but the fact is, this is the part through which humans will interact with the thingamabob, and in many ways it is the most critical part.

This UX part (user experience) can either be done using a higher-level language (say Python and Qt, on any OS), or in the target language with integral unit testing on how the interface is best implemented on the target hardware –– stuff like what kind of data structures you need to describe the UI elements efficiently.  I don't use Windows myself, and prefer working in Linux.  In Linux, creating an interface simulator in pure C or C++ or a mix of the two, is very easy.  I've also used a cheap dev board in the Arduino environment and a display module, with a trivial "slave" Arduino firmware, controlled via USB on the host computer, to test or simulate the actual look and feel of the user interface for my own gadget ideas.  For example, I created this Pro Micro clone (ATmega32U4) gamepad three years ago, that can use an 128x32 I²C OLED display to select the keypresses/events the buttons generate.  I still haven't ordered the board or built it yet, but I did do some OLED and HID tests, to satisfy myself it would work if I chose to do it.  Including finding out how to do it all so that all UI elements, including the menu structures, would be stored in Flash and not the meager RAM available.

(As I said, I'm just a butterfinger hobbyist, and not an EE or designer at all.  I am most interested in solving problems, and the problem that board (and the later, even cheaper CH551G board) solves, was to find out how hard it would be to create a Arduino- or similarly free/open-source reprogrammable gamepad with a small OLED display to show the currently active keymap, so that one could play not only native but also ubiquitous web games that often only support specific keyboard controls.  Existing boards I've seen tend to use the small 6x6mm tactile switches, but I much prefer the feel and robustness of the larger 12×12mm switches with 3mm stems and round or square hats.  On the CH551G, the OLED display would be vertical, by the way.)

In one's early days, one often starts designing UIs in the traditional imperative fashion, like executing a program which poses questions to the user and proceeds based on the answers; but human user interfaces are much better implemented using an event-driven approach, modeling the entire interface as a finite state machine.  (If you've ever created interactive web pages with JavaScript, you've already been introduced to the event-driven approach, since JavaScript is chock full of event handlers.  Instead of you having a loop you control, the web browser triggers specific event handlers, as functions, whenever those events occur.)
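The event-driven shape can be sketched in a few lines of C: interrupts (or input polling) post events into a small ring buffer, and the main loop dispatches them to handlers. The event names and handlers below are invented for illustration.

```c
typedef enum { EV_NONE, EV_BUTTON, EV_TIMER } event_t;

/* Tiny power-of-two ring buffer.  head/tail are free-running unsigned
 * counters, so head - tail is the fill level even across wraparound. */
static event_t queue[8];
static unsigned head, tail;

int post_event(event_t e)          /* called from ISRs or input polling */
{
    if (head - tail == 8) return 0;   /* queue full: drop the event */
    queue[head++ & 7] = e;
    return 1;
}

static int presses, ticks;         /* stand-ins for real UI actions */
static void on_button(void) { presses++; }
static void on_timer(void)  { ticks++;   }

void ui_step(void)                 /* one iteration of the main loop */
{
    event_t e = (head == tail) ? EV_NONE : queue[tail++ & 7];
    switch (e) {
        case EV_BUTTON: on_button(); break;
        case EV_TIMER:  on_timer();  break;
        default:        break;        /* nothing pending */
    }
}
```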
« Last Edit: September 01, 2021, 08:17:03 am by Nominal Animal »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 14986
  • Country: gb
    • Having fun doing more, with less
Re: Embedded program design thinking
« Reply #11 on: September 01, 2021, 08:23:17 am »
For applications and appliances with nontrivial user interfaces –– i.e., more than an on/off button! –– I have learned to always simulate the user interface first, before the actual development begins.  Some may disagree, but fact is, this is the part humans will interact with the thingamabob, and in many ways is the most critical part.

Especially true with software - but then you have to get the end user to understand there's nothing "behind" the UI. The best trick I saw to help that was a Java GUI "napkin" skin, that looked like a paper napkin with writing on it.

Quote
In ones early days, one often starts designing UIs in the traditional imperative fashion, like executing a program which poses questions to the user and proceeds based on the answers, but human user interfaces are much better implemented using an event-driven approach, modeling the entire interface as a finite state machine.  (If you've ever created interactive web pages with JavaScript, you've already been introduced to "event-driven" approach, since JavaScript is chock full of event handlers.  Instead of you having a loop you control, the web browser triggers specific event handlers, as functions, whenever those events occur.)

Not just the GUI, the whole application, as per my reply somewhere above!
« Last Edit: September 01, 2021, 08:25:19 am by tggzzz »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 3584
  • Country: fi
    • My home page and email address
Re: Embedded program design thinking
« Reply #12 on: September 01, 2021, 10:56:19 am »
For applications and appliances with nontrivial user interfaces –– i.e., more than an on/off button! –– I have learned to always simulate the user interface first, before the actual development begins.  Some may disagree, but fact is, this is the part humans will interact with the thingamabob, and in many ways is the most critical part.
Especially true with software - but then you have to get the end user to understand there's nothing "behind" the UI. The best trick I saw to help that was a Java GUI "napkin" skin, that looked like a paper napkin with writing on it.
I've combined the UI testing with observing user reactions (frustrations, issues) by basically being the simulator myself.  This works even if you use plain paper sketches for the UI "screens".

One important thing this reveals is the user's need for immediate feedback.  OSes and many appliances use an indicator when the device is busy doing something; that is not just "useful", it is crucial if users are not to get frustrated at the unresponsiveness of the device when it is busy.

Quote
Quote
In ones early days, one often starts designing UIs in the traditional imperative fashion, like executing a program which poses questions to the user and proceeds based on the answers, but human user interfaces are much better implemented using an event-driven approach, modeling the entire interface as a finite state machine.  (If you've ever created interactive web pages with JavaScript, you've already been introduced to "event-driven" approach, since JavaScript is chock full of event handlers.  Instead of you having a loop you control, the web browser triggers specific event handlers, as functions, whenever those events occur.)
Not just the GUI, the whole application, as per my reply somewhere above!
Ah yes; you tied it a bit too closely to RTOS use for my taste, as the underlying ideas definitely apply basically everywhere.

Even if one does a simple widget with a small OLED screen in the Arduino environment –– so there is no need for an RTOS! –– and the "core" of the widget is an imperative sequence, the UI still has to be event-driven, because us humans poke our fingers whenever we want; and the device must be prepared to handle that, or it will feel unresponsive or even broken.  Of course, modeling the entire gadget as a finite state machine usually yields a much better design, but it is not mandatory in the same sense as it is for the UI.

Consider, say, a DIY electronic safe lock with a time-evolving opening code.  The UI is basically just the input of the code.  Only when the full code is input does the core imperative sequence run.  It does not matter if it is a bit clunky and takes a second or so, because it occurs so rarely, and humans don't expect it to occur "without delay".  So the details of how the code is compared to the RTC clock, how the stepper is powered/controlled to pull the lock open, and so on, do not need to be written as a finite state machine; naïve, simple procedural code will work just fine (as long as it does not get confused if the user activates some of the UI in the middle of it!).

That said, I do warmly recommend experimenting and getting familiar with finite state machines.  I often use Graphviz, and its DOT language, to create a text description of what I think the state machine states are, and then let Graphviz draw it as a graph to me.  (The reason is similar to rubber duck debugging: writing the states and state transitions as text, somehow exercises different parts of my mind, and having Graphviz produce the visual representation of the image lets me more easily compare my understanding to my belief of what I understand; that is, detect the gaps and errors in my understanding or coverage.)  Those graphs (including the DOT source files) are an excellent part of design documentation, too.
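For instance, a toy lock FSM in DOT might look like the fragment below; running `dot -Tpng lock.dot -o lock.png` renders it as a diagram. The states and edge labels are invented for illustration.

```dot
// Toy two-state lock machine, states and labels invented for illustration.
digraph lock_fsm {
    rankdir=LR;               // left-to-right layout
    node [shape=ellipse];
    Locked   -> Unlocked [label="full code OK"];
    Locked   -> Locked   [label="bad digit: reset entry"];
    Unlocked -> Locked   [label="timeout"];
}
```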

In a huge number of software applications and physical gadgets and appliances, the finite state machine model approach seems to work best.  I suspect it has something to do with how we humans are fundamentally tool users, and how states map nicely to various tool use cases and states.  (For example: You swing the hammer back, make sure your fingies are not in the target zone, and then swing the hammer at the target.  Three states, with additional transitions if you do have your sensitive bits in the danger zone, and when the target is sufficiently hammered; otherwise repeat.  This kind of description just seems to fit, somehow.)
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 14986
  • Country: gb
    • Having fun doing more, with less
Re: Embedded program design thinking
« Reply #13 on: September 01, 2021, 12:10:59 pm »
In ones early days, one often starts designing UIs in the traditional imperative fashion, like executing a program which poses questions to the user and proceeds based on the answers, but human user interfaces are much better implemented using an event-driven approach, modeling the entire interface as a finite state machine.  (If you've ever created interactive web pages with JavaScript, you've already been introduced to "event-driven" approach, since JavaScript is chock full of event handlers.  Instead of you having a loop you control, the web browser triggers specific event handlers, as functions, whenever those events occur.)
Not just the GUI, the whole application, as per my reply somewhere above!
Ah yes; you tied it a bit too closely to RTOS use for my taste, as the underlying ideas definitely apply basically everywhere.

Oh, my escape route is that I have a very wide definition of what constitutes an RTOS - certainly not just a commercial product!

Personally I like the approach of cooperatively scheduled tasks, i.e. where a flow of control is uninterrupted until it does a yield() of some sort. Couple that with interrupts merely putting a magic number in an event queue, and you've got the basis of an FSM that can be decoupled neatly from the hardware during design and test. Not everything fits that model, but it goes a long way.

One thing I don't like is the fetish for doing everything in C - especially if that means making coding styles awkward in order to work around C not being able to switch stack pointers. A tiny amount of machine-dependent assembler is no big problem for an embedded application.

Quote
That said, I do warmly recommend experimenting and getting familiar with finite state machines.  I often use Graphviz, and its DOT language, to create a text description of what I think the state machine states are, and then let Graphviz draw it as a graph to me.  (The reason is similar to rubber duck debugging: writing the states and state transitions as text, somehow exercises different parts of my mind, and having Graphviz produce the visual representation of the image lets me more easily compare my understanding to my belief of what I understand; that is, detect the gaps and errors in my understanding or coverage.)  Those graphs (including the DOT source files) are an excellent part of design documentation, too.

There are several coding design patterns that I use, all of which are designed to force me to design and code all possible event/state combinations. The one design pattern I dislike is the switch() or if-then-else, but I will use it for trivially simple FSMs.

As for the correspondence between text and diagram, I'm completely in two minds: I like and dislike both.

If there was a good tool for doing a two-way round trip (as with schematic <-> layout), then I would look at it very carefully.
« Last Edit: September 01, 2021, 12:14:15 pm by tggzzz »
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 9135
  • Country: fr
Re: Embedded program design thinking
« Reply #14 on: September 01, 2021, 05:18:44 pm »
Personally I like the approach of cooperatively scheduled tasks, i.e. where a flow of control is uninterrupted until it does a yield() of some sort. Couple that with interrupts merely putting a magic number in an event queue, and you've got the basis of an FSM that can be decoupled neatly from the hardware during design and test. Not everything fits that model, but it goes a long way.

Agreed. Cooperative multitasking is much simpler, more efficient if done right, can be proven correct...

One thing in favor of preemptive multitasking is that it can accommodate tasks that were not there when the system was designed, so it naturally fits general-purpose OSes, for which the running "tasks" are unknown as far as the OS design is concerned.

But for firmware in which all tasks are predefined and well known, cooperative multitasking is absolutely usable, and often better. It takes a bit of thought for when to place "yields" inside each task, but avoids a lot of potential pitfalls of preemptive, with its synchronization issues, concurrent memory sharing, overhead of context switching, and so on.

Now, maintenance (modifying or adding new tasks) can be harder with cooperative multitasking - because each task has a higher probability of having an impact on others. So, those are things to consider when picking an approach.

One thing I don't like is the fetish for doing everything in C - especially if that means making coding styles awkward in order to avoid C not being able to switch stack pointers. A tiny amount of machine-dependent assembler is no big problem for an embedded application.

Well, if such parts in C consist of non-standard compiler extensions and/or nasty tricks, which would make the corresponding code non-portable anyway, I agree with you.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 14986
  • Country: gb
    • Having fun doing more, with less
Re: Embedded program design thinking
« Reply #15 on: September 01, 2021, 06:08:17 pm »
Personally I like the approach of cooperatively scheduled tasks, i.e. where a flow of control is uninterrupted until it does a yield() of some sort. Couple that with interrupts merely putting a magic number in an event queue, and you've got the basis of an FSM that can be decoupled neatly from the hardware during design and test. Not everything fits that model, but it goes a long way.

Agreed. Cooperative multitasking is much simpler, more efficient if done right, can be proven correct...

One thing in favor of preemptive multitasking is that it can accomodate tasks that were not there when the system was designed - so that naturally fits general-purpose OSs, for which the running "tasks" are unknown as far as the OS design is concerned.

But for firmware in which all tasks are predefined and well known, cooperative multitasking is absolutely usable, and often better. It takes a bit of thought for when to place "yields" inside each task, but avoids a lot of potential pitfalls of preemptive, with its synchronization issues, concurrent memory sharing, overhead of context switching, and so on.

I've found that often fits in with waiting for an "interesting" event to occur, then processing it, then rewaiting....

Quote
Now, maintenance (modifying or adding new tasks) can be harder with cooperative multitasking - because each task has a higher probability of having an impact on others. So, those are things to consider when picking an approach.

One thing I don't like is the fetish for doing everything in C - especially if that means making coding styles awkward in order to avoid C not being able to switch stack pointers. A tiny amount of machine-dependent assembler is no big problem for an embedded application.

Well, if such parts in C consist of non-standard compiler extensions and/or nasty tricks, which would make the corresponding code non-portable anyway, I agree with you.

IIRC I am thinking of the FreeRTOS style of co-routines, which all share a single stack, and therefore the yield() can only be at the top level of the co-routine. Yes, it works and is standard C, but it contorts the code - and I like code that looks (and is!) simple and clean.
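The contortion comes from the classic switch/`__LINE__` single-stack trick (the same device used by protothreads and, in spirit, FreeRTOS co-routines). A stripped-down sketch of it; the macro names are mine, not the FreeRTOS ones. Note how `step` must be static because ordinary stack locals do not survive a yield, which is exactly what forces the awkward style.

```c
/* Single-stack coroutine via the switch/__LINE__ device.  Each yield
 * saves the current line number into *state and returns; the next call
 * jumps straight back to the matching case label. */
#define CR_BEGIN(state)  switch (*(state)) { case 0:
#define CR_YIELD(state)  do { *(state) = __LINE__; return 0; \
                              case __LINE__:; } while (0)
#define CR_END(state)    } *(state) = 0; return 1

static int step;   /* must be static: stack locals don't survive a yield */

/* Returns 0 while still in progress, 1 when the task has completed. */
int three_phase_task(int *state)
{
    CR_BEGIN(state);
    step = 1;
    CR_YIELD(state);   /* control returns to the scheduler here */
    step = 2;
    CR_YIELD(state);
    step = 3;
    CR_END(state);
}
```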
 

Offline Infraviolet

  • Regular Contributor
  • *
  • Posts: 51
  • Country: gb
Re: Embedded program design thinking
« Reply #16 on: September 27, 2021, 11:29:27 pm »
Depending on what debugging methods you will and won't have available for your embedded device, often you'll also be trying to ensure that the bugs which do occur are of the sort that is simpler to solve from limited debugging information. So in some circumstances, say for a low-performance embedded system on a microcontroller, you might try to do things like minimising the use of pointers, so you don't get the kinds of stack crashes which can be harder to unpick than crashes caused by typos and failures in design and thinking.

If your embedded device is something running a full Linux OS, and can be accessed easily via ssh or something, though, then embedded development can be pretty similar to developing a program to run on an ordinary OS.
 

