Author Topic: How you usually start architecturing your firmware ?  (Read 4640 times)


Offline YTusernameTopic starter

  • Regular Contributor
  • *
  • Posts: 83
  • Country: tr
    • Atadiat
How you usually start architecturing your firmware ?
« on: December 20, 2023, 11:51:05 am »
I've been in the embedded systems industry for almost 10 years now. I usually join projects when the design stage is completed. Also, the teams and companies I worked for are relatively small. So basically, I have no true architecting experience for a complete product. When I code something as a hobby, I usually open the IDE and start coding, which is called code now, debug later (or refactor later). I know it would be better to follow a more systematic approach.

It is very similar to electronics, where one should start with a diagram on paper, then prototype, and only then open the CAD tool to draw the schematic and PCB. Yet a lot of experienced engineers jump into the PCB right away.

So I would like to ask what you usually do before starting to code firmware: state machines (UML)? Test-Driven Development? Behavior-Driven Development? Something else, or, like me, code now and debug later?

 
The following users thanked this post: Warhawk

Offline ataradov

  • Super Contributor
  • ***
  • Posts: 11269
  • Country: us
    • Personal site
Re: How you usually start architecturing your firmware ?
« Reply #1 on: December 20, 2023, 03:56:17 pm »
I'm not sure if simply opening an IDE and starting to write code is a bad approach. The key here is that with each new project, the code you start writing would be informed by the experience from the previous projects, so likely close to what you want in the end.
Alex
 
The following users thanked this post: neil555, Siwastaja, YTusername, 5U4GB

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: How you usually start architecturing your firmware ?
« Reply #2 on: December 20, 2023, 04:15:10 pm »
A long time ago, Top Down Design - Bottom Up Coding was quite popular.  I still use it...  Start designing with the big blocks and design down until you have gone as far as possible.  Now start coding from the bottom up.

My first task is to get the UART running and code up a bunch of formatting functions like itoa() and friends.  I generally won't be using a heap so I can't use printf() and many of the library string functions.  So, I write my own.  I suspect there are exceptions to the 'no heap' rule...
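A heap-free itoa() along the lines described above might look like the sketch below. This is an illustration, not rstofer's actual code; the function name and signature are made up:

```c
#include <assert.h>
#include <string.h>

/* Minimal heap-free itoa: writes a signed int as decimal into a
 * caller-supplied buffer and returns a pointer to the start of the
 * resulting string. No malloc, no library formatting functions. */
static char *itoa_dec(int value, char *buf, size_t buflen)
{
    char *p = buf + buflen - 1;   /* build the string backwards */
    /* negate via unsigned arithmetic so INT_MIN is handled safely */
    unsigned int v = (value < 0) ? 0u - (unsigned int)value
                                 : (unsigned int)value;

    *p = '\0';
    do {
        *--p = (char)('0' + v % 10);
        v /= 10;
    } while (v != 0 && p > buf);

    if (value < 0 && p > buf)
        *--p = '-';
    return p;
}
```

The same pattern (caller-supplied buffer, build backwards, return a pointer) extends to hex and fixed-point formatters for UART logging.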
 
The following users thanked this post: YTusername

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: How you usually start architecturing your firmware ?
« Reply #3 on: December 21, 2023, 11:57:57 am »
I'm not sure if simply opening an IDE and starting to write code is a bad approach. The key here is that with each new project, the code you start writing would be informed by the experience from the previous projects, so likely close to what you want in the end.

And also informed by the previous attempts within the current project; in other words, you should be eager to throw away code and not get too attached to it.

Some would say this is like starting to construct a building from randomly placed bricks without drawings, but the analogy is incorrect. Writing code is not comparable to building. What compilers and CPUs do when they implement the code is closer to constructing a building, and that is already automated. Code is a specification written in a formal language, designed exactly for specifying how software should work, for computers to compile.

Writing code is equivalent to drawing the plans for a building, and sure enough, a designer who does that does sketch, erase, start again, and so on until they and customers are satisfied with how it looks, and several calculations pass. In other words, "software architects" should totally be writing code.

Not understanding this is one of my pet peeves, as it causes the pattern of managers sitting in ivory towers writing insane specifications, in combinations of natural languages and stuff like UML, and expecting "code monkeys", possibly outsourced to India at the lowest possible wages, to "just implement" them. In the real world, projects like the Linux kernel, which is pretty much written ad hoc directly in code, are really successful and also relatively robust (even though not perfect).

So IMHO there's nothing wrong with starting from the code: test ideas and implement bottom-up, because that gives you some good functions related to the project which you can use even if the big picture changes.
« Last Edit: December 21, 2023, 03:35:17 pm by Siwastaja »
 
The following users thanked this post: YTusername, Nominal Animal, harerod

Offline abeyer

  • Frequent Contributor
  • **
  • Posts: 292
  • Country: us
Re: How you usually start architecturing your firmware ?
« Reply #4 on: December 21, 2023, 10:19:56 pm »
I usually open the IDE and start coding which is called code now debug later (or refactor later).

Both debugging and refactoring later are almost inevitable; no matter how much time you spend on up-front design, something will change later or not match the design. Plan ahead and expect them.

Both are also made much easier if you've designed your components and your system as a whole to support testing easily. This is fundamentally what TDD is about, if you strip out all the dogma and zealotry. While I would hesitate to suggest TDD as a solution as it's often promoted, understanding that aspect of it and building testability in from the beginning is a good lesson to internalize.
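One concrete way to build testability in from the beginning is to keep logic pure: a sketch (all names hypothetical) of switch-debounce logic written as a pure function of its inputs, so it can be unit-tested on a PC without ever touching a GPIO register:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical example of designing for testability: the debounce
 * state machine takes raw samples as arguments instead of reading a
 * pin directly, so the same code runs under host-side unit tests
 * and on target (where the caller samples the real GPIO). */
typedef struct {
    uint8_t count;      /* consecutive samples at the new level */
    bool    stable;     /* last debounced state */
} debounce_t;

#define DEBOUNCE_THRESHOLD 4   /* samples required to accept a change */

static bool debounce_step(debounce_t *d, bool raw_sample)
{
    if (raw_sample == d->stable) {
        d->count = 0;                       /* no change pending */
    } else if (++d->count >= DEBOUNCE_THRESHOLD) {
        d->stable = raw_sample;             /* change confirmed */
        d->count = 0;
    }
    return d->stable;
}
```

On target, the only hardware-specific line is the one that produces `raw_sample`; everything worth testing lives in the pure function.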
 
The following users thanked this post: YTusername

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6266
  • Country: fi
    • My home page and email address
Re: How you usually start architecturing your firmware ?
« Reply #5 on: December 21, 2023, 11:42:12 pm »
I don't do commercial embedded development at all, only hobby projects on my own (or in tiny teams of not more than three people); but I do have quite a bit of experience in programming in general in different environments and contexts.

My approach has evolved into a mixed bottom-up + top-down one.  I start by implementing and testing the key points I don't have enough experience in, as separate test programs or "test firmwares".  (The most common environment for me currently is Arduino+Teensyduino on a Teensy 4.x; be it a measurement, a new algorithm or data structure, data generation, throughput or latency testing, or even duplicating a USB device's behaviour for a device I don't have, using a USB data transfer dump.)  This is the bottom-up approach.

I spend quite a lot of time thinking about the overall structure (and sketching on paper, as a cache or temporary scratch "file") and even the interfaces ("library" or module interfaces, function call models, what kind of data to pass as a function parameter, what to keep statically allocated in RAM, and so on).  It is mostly about how to make it maintainable and robust, avoiding typical sources of glitches and errors.  This is the top-down approach.

I do the above in parallel, because they feed each other.  Sometimes my subconscious bubbles up an idea, often an algorithm or abstract data structure, that can neatly solve a crucial part of the task at hand, but may have hardware requirements or dependencies that need to be tested first.  Sometimes an overall idea may require the use of an algorithm or hardware control approach I'm not familiar with yet, so I need to test it.  After testing, I might find them lacking.

I do write a lot of code I don't end up using in the final version, because for me, the above two methods tend to refine each other.  (For example, the low-level tests on actual hardware may indicate a specific programming approach –– say, interrupt- or event-driven, as opposed to imperative/sequential –– is required at some point, so the overall design has to accommodate that.  Also, it is common for me to discover that my initial overall design can be simplified because many "optional features" I originally considered aren't really needed at all.)  I consider all of those "branches" useful information that I do not simply throw away, but document and keep in "related experiments and tests" folders.  (Except for the paper sketches.  If I ever sketch something that is truly useful, I end up redrawing and cleaning it up in Dia/Inkscape/Graphviz/a special script, saving the end result (and reworkable versions) in the project documentation.)
I do think of it as throwing code out of the project, and I do so often; I just archive the reasons and example code showing why.

In a Programming sub-forum thread the purpose of comments was recently discussed.  To me, the source code (including comments and documentation) should be kept in version control, providing history for the development of the project.  However, there should also be a parallel tree of "related experiments and tests" and "musings" that documents why the current approaches were chosen, why some approaches (that might seem better suited at first glance) were rejected, and their related test code, experimental firmwares, and scripting tools.  This secondary tree is not useful to the users; only to the project maintainers.  In commercial/proprietary development, this secondary tree is kinda-sorta important for the development teams, and it might be best to only summarize the salient points in the build tree changelog.  (Knowing what kind of alternatives the development team has considered and experimented with tells a lot about the development team, and would be very valuable information to anyone interested in employee poaching.  Conversely, in an open-source development project, this secondary tree can be exposed publicly as a blog, giving potential employers a close look at the authors' problem solving and design skills.)



Just before the turn of the century, I did some design and programming work (mostly integration and polish) for a CD-ROM project involving a university artwork archive (using Macromedia Director), with a few student teams working on separate aspects of the entire project.  Instead of telling them their ideas would not work in practice –– at that time, large animations ("sprites") would be clunky and low-FPS on most machines ––, I showed examples of what it actually would look like, and some examples of alternate ways that would work technically better.  It affected the end result a lot, and most (but not all) understood that the tools and computers and time we had available limited the implementation of their "vision", not me.

(Some nontechnical people are just fixated on the idea that the innovation, the core idea, is 99% of the project, and implementation is just typing it up on a computer, something a monkey could do for bananas.  Some also thought that because something was implemented in a game, it should be just as easy to implement in the Macromedia Director environment – all tools are equal, aren't they?  So it wasn't a conflict-free project, either; the conflicts were just generally resolved satisfactorily.)

Essentially, I countered many technically-problematic high-level ideas by showing low-level examples of how those would look in practice; and showed different low-level examples of things that would work well, to feed their high-level ideas and creativity.  The end result was of course a compromise –– practical solutions always are! –– but I think it worked well, both in the sense of the end result being pretty good, but also as an example of teamwork involving different levels of technical expertise.

This had a big impact on my attitude regarding development project teamwork.  This top-down-bottom-up mixed approach, with counter-suggestions and test examples during design phase, has just worked better than anything else for me.  I have not seen any development project produce a robust, effective, efficient design, without doing any practical testing first.  User interfaces need mock-ups, crucial hardware and algorithms need to be tested before relied upon, and complex internal interactions need simulation (or mathematical proof) to verify their working is understood and acceptable before implementation; it's that simple.

All that said, I personally have always focused on the quality of the end product, rather than making some arbitrary deadline or spending only a limited amount of resources (development time!) that guarantees a profit on the end product.  I just cannot do that.  So, if you look at the design process from a commercial point of view, with the intent of maximising profit, you probably shouldn't put too much weight on my opinions or experience in this.
« Last Edit: December 21, 2023, 11:45:29 pm by Nominal Animal »
 
The following users thanked this post: hans, thermistor-guy, YTusername

Offline thermistor-guy

  • Frequent Contributor
  • **
  • Posts: 375
  • Country: au
Re: How you usually start architecturing your firmware ?
« Reply #6 on: December 22, 2023, 01:39:30 am »
..
It is very similar to electronics, where one should start with a diagram on paper, then prototype, and only then open the CAD tool to draw the schematic and PCB. Yet a lot of experienced engineers jump into the PCB right away.

I'll comment on the OP's main question later. Just to address the quote above: as a h/w designer, I have never jumped into a PCB right away. It's definitely not my style.

How fast you go  in h/w development depends on:

(a) how big and how deep is the design problem?
(b) how much do you know about it?
(c) how much practical experience do you have with it?
(d) what is it about this problem that is new to you (if anything)?
(e) what is it about this problem that makes it look difficult (if anything): cost? time available? technical demands? manufacturing constraints?
(f) what development style works for you?
(g) what are the stakes in getting it wrong?
(h) what level of excellence do you want to apply to it?

At the lowest level of cognitive demand: if it's a small design problem that you've pretty much designed before,
and understand well, with nothing new about it, you can hold all the details in your head, it's low-stakes, and
"near enough is good enough" - yeah, go for it.

If you haven't designed it before, but others have and their work is available, that adds a little more cognitive
demand - you need to learn enough of what they know to apply it.

If no-one has solved it, because you're trying to advance the state of the art, and you're not sure if it's even
possible ...
 
The following users thanked this post: YTusername

Offline Smokey

  • Super Contributor
  • ***
  • Posts: 2597
  • Country: us
  • Not An Expert
Re: How you usually start architecturing your firmware ?
« Reply #7 on: December 22, 2023, 02:28:14 am »
I've been doing more RTOS projects recently because they require something like a Bluetooth stack that is vendor-supplied and only runs under the RTOS.  In that case, because of the complexity of operating within the vendor stack frameworks, I'll start with the closest vendor demo application, strip out everything I don't need, and customize from there.  It's amazing how many configuration parameters there are for something like BLE that, if you don't get them right, it just won't work.  I can't imagine starting something like that from scratch.
 
The following users thanked this post: YTusername

Online hans

  • Super Contributor
  • ***
  • Posts: 1641
  • Country: nl
Re: How you usually start architecturing your firmware ?
« Reply #8 on: December 22, 2023, 10:49:05 am »
I'm with Nominal Animal on going through a combined top-down and bottom-up scheme.

The classic waterfall design methodology is strictly a top-down one, which IMO does not suit software terrifically well because software tends to be more iterative. It works better for hardware, where you don't want to iterate through the waterfall as many times.

Thus for software I tend to stick more with TDD methodologies. It's nice to have code in an automated test environment anyway, as it prevents an innocent "fix" from cascading into a whole set of failed functionality. It also helps in design because it forces you to write the test first, which should describe how you want to use the code and what expected outcomes it should provide. It should also capture which dependencies a unit may have.
For example, if you want to write a LCD/OLED driver, then you need to consider what the underlying hardware representation is, and what the use cases will be. E.g. the driver should provide functions like DrawText, DrawLine, etc. and write its intermediate data into a framebuffer that is held by the display chipset driver.

Then, one of the key decisions to settle on is how you want to interface your blocks: is this function call small enough to be effectively inlined (e.g. a variable getter/setter)? Is it blocking? Under which conditions? Or is it asynchronous? How can I poll the status of this async transaction? Should it provide a callback mechanism? Is there some error condition/handling necessary?
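These interface questions can be made concrete with a small sketch. The names and the simulated "hardware" below are hypothetical, not a real vendor API; the point is the shape of a blocking call built on top of an asynchronous start/poll/callback interface:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sensor-transfer interface: asynchronous start, status
 * polling, optional completion callback, plus a blocking wrapper.
 * The "hardware" is a counter so the shape can be exercised on a PC. */

typedef enum { XFER_IDLE, XFER_BUSY, XFER_DONE, XFER_ERROR } xfer_status_t;
typedef void (*xfer_done_cb)(xfer_status_t status, void *ctx);

static xfer_status_t g_status = XFER_IDLE;
static int           g_ticks;              /* simulated transfer time */
static xfer_done_cb  g_cb;
static void         *g_ctx;

/* Asynchronous: start the transfer and return immediately. */
static void sensor_read_start(xfer_done_cb cb, void *ctx)
{
    g_status = XFER_BUSY;
    g_ticks  = 3;          /* pretend the DMA needs three polls */
    g_cb     = cb;
    g_ctx    = ctx;
}

/* Poll: advance the simulated hardware; fire the callback on completion. */
static xfer_status_t sensor_read_poll(void)
{
    if (g_status == XFER_BUSY && --g_ticks == 0) {
        g_status = XFER_DONE;
        if (g_cb)
            g_cb(g_status, g_ctx);
    }
    return g_status;
}

/* Blocking variant, built on the async one. */
static xfer_status_t sensor_read_blocking(void)
{
    sensor_read_start(NULL, NULL);
    while (sensor_read_poll() == XFER_BUSY)
        ;                  /* on real hardware: sleep or WFI here */
    return g_status;
}
```

Deciding which of these three entry points a driver exposes (and whether the callback runs in interrupt context) is exactly the kind of decision hans describes settling early.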

I find that approaching some functionality "top-down" like this helps in making the right decisions on how a certain piece of driver/code should be interacted with. It also comes down to some experience though, as writing code is anecdotal at best, and things like "design patterns" are merely best practices that people have collected over the years.

When designing individual blocks I typically do start with the most bottom blocks of a software design (bottom-up). That means BSP bring-up, converting raw data bytes into packets, connections/sessions and structures (configuration parameters, stored lists, etc.), and then finally the "application". I try to find a point at which I can cut off application code from hardware, e.g. by mocking BSP drivers by supplying simulated data. This way I can run behavioural tests on PC, and only need to port a small bit to actual hardware, test BSP drivers, and verify timing is OK.
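The "cut off application code from hardware" idea can be sketched as follows: a byte-stream packet parser that never touches a UART register, so it can be driven with simulated data on a PC. The frame format here (0x7E start byte, length, payload, 8-bit additive checksum) is invented purely for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Byte-at-a-time frame parser with no hardware dependency: on target
 * it is fed from the UART RX interrupt, in host tests from an array. */
typedef enum { WAIT_START, WAIT_LEN, WAIT_PAYLOAD, WAIT_SUM } pstate_t;

typedef struct {
    pstate_t state;
    uint8_t  len, got, sum;
    uint8_t  payload[32];
    int      frames_ok;        /* completed, checksum-valid frames */
} parser_t;

static void parser_feed(parser_t *p, uint8_t byte)
{
    switch (p->state) {
    case WAIT_START:
        if (byte == 0x7E)
            p->state = WAIT_LEN;
        break;
    case WAIT_LEN:
        if (byte == 0 || byte > sizeof p->payload) {
            p->state = WAIT_START;          /* invalid length: resync */
            break;
        }
        p->len = byte; p->got = 0; p->sum = 0;
        p->state = WAIT_PAYLOAD;
        break;
    case WAIT_PAYLOAD:
        p->payload[p->got++] = byte;
        p->sum += byte;
        if (p->got == p->len)
            p->state = WAIT_SUM;
        break;
    case WAIT_SUM:
        if (byte == p->sum)
            p->frames_ok++;                 /* frame accepted */
        p->state = WAIT_START;
        break;
    }
}
```

Error paths (bad length, bad checksum) that would take hours to provoke on real hardware take one line to exercise in a host test.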

This has led to numerous projects that, after the BSP code has been verified and stable, "just work" when I upload them to the target. All the application code has been tested on PC, including (error) conditions that take a lot of time to reproduce. IMO the absolute worst is being the human test robot while having to work through a limited debug environment on your embedded MCU, where halting your code breaks the real-time aspects, and then trying to guess what's going wrong.
« Last Edit: December 22, 2023, 10:57:05 am by hans »
 
The following users thanked this post: YTusername, Nominal Animal, newtekuser

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3147
  • Country: ca
Re: How you usually start architecturing your firmware ?
« Reply #9 on: December 22, 2023, 03:53:30 pm »
I think you need to think it through first, create a design. The design determines what components you're going to have, data flows, data structures, hardware use (e.g. I need DMA for this, 3 timers for that etc.), some sort of verification where you write small programs to validate that the hardware can indeed do what you have designed. Once this is done, you implement what you have designed.

First, of course, you need to select a part which has enough "things" to implement your design and allows the pins to be assigned as you wish.

Then, you need to design a PCB which will include your MCU, other hardware devices you're going to use - like if you want to run a motor you'll need FETs, FET drivers etc.

Once you receive a board, you need to validate it. The PCB will never come out right from the first try, so you'll have to fix things. At this stage, you'll probably want to write (or most likely re-use from earlier projects) some pieces of code which will become parts of your firmware.

Once you get the board right, you start writing the software. Many things will be already written. That's the most boring part when you need to write lots of code to implement what you have designed.

I'm sure, if I started writing code before doing the design, I would later need to re-write or throw away most of what I have written. Or worse, I may want to butcher the design to accommodate what I have written.
 
The following users thanked this post: YTusername

Online shtirka

  • Contributor
  • Posts: 16
  • Country: se
Re: How you usually start architecturing your firmware ?
« Reply #10 on: December 22, 2023, 04:39:57 pm »
I've been in the embedded systems industry for almost 10 years now. I usually join projects when the design stage is completed. Also, the teams and companies I worked for are relatively small. So basically, I have no true architecting experience for a complete product. When I code something as a hobby, I usually open the IDE and start coding, which is called code now, debug later (or refactor later). I know it would be better to follow a more systematic approach.

It is very similar to electronics, where one should start with a diagram on paper, then prototype, and only then open the CAD tool to draw the schematic and PCB. Yet a lot of experienced engineers jump into the PCB right away.

So I would like to ask what you usually do before starting to code firmware: state machines (UML)? Test-Driven Development? Behavior-Driven Development? Something else, or, like me, code now and debug later?

Hello,

My personal method is by no means perfect, but I hope you will learn something from it and it will be an inspiration to you.
  • When I start a new firmware project, I try to assess the key objectives and results of what's intended (i.e., will it just be firmware for measuring and logging some specific parameters, will it be controlling something, etc.)
  • Once I roughly know which category the project falls into from the previous step, I can start choosing which microcontroller will fit the bill the closest (I will say though that with several years, a few evaluation kits, one development kit and a hardware debugger around, I have started to get a decent grip of what's what). Usually, based on my previous experience, I will choose a couple of potential fits for the project and have a look at their data sheets and block diagrams: when I have a block diagram in front of my eyes, I can clearly see the information flow in and out (i.e., what is coming in via ADC and other comms, and what is going out, such as DAC, SD memory and so on, so that I see where the bottlenecks are) and thus estimate whether that candidate will work for the specific project
  • That's where I actually write the basic firmware using both the various drivers and other components and test/debug it with any of my eval kits
  • Once I've tested my firmware on any of my dev or evaluation kits, I do my final board design, taking into account the results of testing and perhaps the need for a slightly different microcontroller - and while that's being done I can make any adjustments to the firmware


But that is a very general approach - as many may have pointed out, things do depend on the specific project, and sometimes a more complex solution may be needed, but in general those are the steps.

Ilya
 
The following users thanked this post: YTusername

Offline PDP-1

  • Contributor
  • Posts: 15
  • Country: us
  • Spacewar!
Re: How you usually start architecturing your firmware ?
« Reply #11 on: December 23, 2023, 03:04:41 am »
I like to start off with the system requirements and work out a top-level block diagram breaking it down into sub-systems (power supply, microcontroller, com port, sensors, etc.) and how they are connected. If we were building a temperature controlled oven for example, we would have blocks for the temperature sensor, a way to control power to a heating element, a microcontroller running a PID loop, and a USB port going to a PC for user interface.

Then break each sub-system down into reasonably detailed specs and component selection, e.g. the temperature sensor has to read a Type K thermocouple to within 0.1C over a range of -40C-100C, we can use chip XYZ to do that and it talks to the microcontroller over SPI with a 100kHz clock. Once all of your IO functions are detailed out you can start to choose a microcontroller that can speak to them all. Then I order a dev board for that microcontroller and all of the sensors/IO bits as well as any pin expander boards needed to glue them together. That stuff will get here in a week.

While waiting for our parts I begin designing the PCB as if it were a finished product rather than working on firmware. The cost of small run PCBA has dropped so much over the last decade or so that I'm no longer really worried about getting the design 100% right in one go, and prefer "fast" hardware iteration meaning we can get several copies of the assembled boards back in 2-3 weeks for $1000USD. Put test points everywhere you can in a way that will let you connect a scope or logic analyzer easily. If there is some debate over whether we should use sensor A or sensor B for a job just put them both on if you can. If there are analog control loops involved give yourself plenty of options to tune their feedback using part sizes that you won't hate hand soldering. Ship the design off for manufacture, it'll be back in three weeks. (Pro tip: make the solder mask on these test PCBs different than what you normally use so it's obvious what they are. I like to use red - these prototype boards are dangerous!)

In the meantime our dev board and test sensors should have arrived and it's time to start in on the firmware. Attach one external chip to the dev board at a time and start up a simple project file where the goal is to end up with some chipName.h/cpp files that form a reference driver. I usually start with a simple read/write function that blocks until the operation is complete to make sure I understand how the device works, then upgrade to something that runs it off of a DMA channel or interrupts depending on what makes sense. Work up these reference driver projects for however many external things the micro needs to talk to.

Now we get to the actual firmware architecture part, this can take many forms but I like to try to enforce a strong separation between "hardware" and "software" functions, often by having a header file that contains all of the hardware functions that the software can call, without the software having to know anything about  how they are implemented. So for the oven controller example the software side could be running a PID loop and calling through that header file into the hardware functions readTemp() and enableHeater(bool). Or the software side has a command parser with a callback function that the hardware side uses to let it know when the control PC has sent a command, and a write(message) function to respond.
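The hardware/software split above might be sketched like this, using the readTemp()/enableHeater() names from the post. The control logic is simplified to bang-bang rather than a full PID loop, and the stub implementation stands in for the real PCB during host-side testing:

```c
#include <assert.h>
#include <stdbool.h>

/* hal.h -- the only thing the "software" side ever sees; it knows
 * nothing about how these are implemented (SPI thermocouple chip,
 * GPIO-driven SSR, or a test stub like the one below). */
float readTemp(void);
void  enableHeater(bool on);

/* Software side: oven control, calling only through the HAL above.
 * (Bang-bang here for brevity; the post's PID loop fits the same way.) */
static void control_step(float setpoint)
{
    enableHeater(readTemp() < setpoint);
}

/* --- host-side stub implementation, standing in for the real PCB --- */
static float fake_temp;
static bool  heater_on;

float readTemp(void)        { return fake_temp; }
void  enableHeater(bool on) { heater_on = on; }
```

Because `control_step()` includes only `hal.h`, flipping GPIO pins around or swapping the thermocouple chip on the next PCB revision changes the implementation file, not the control code.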

When we get the custom PCBs back it should now be a pretty quick job to port over our sensor drivers and bring them up one by one. Then patch in the software side of the code. Squeeze as much info and performance out of the rev1 PCB as you can, fix what you don't like and get rev2 out. Polish the software side as much as possible while waiting for it to return. Having the software and hardware sides of the project cleanly separated should mean you can flip GPIO pins around or change external peripheral chips on the PCB without the software side of the project needing any modification. Repeat until satisfied.

 
The following users thanked this post: YTusername, Nominal Animal, imacgreg

Offline YTusernameTopic starter

  • Regular Contributor
  • *
  • Posts: 83
  • Country: tr
    • Atadiat
Re: How you usually start architecturing your firmware ?
« Reply #12 on: December 24, 2023, 06:17:30 am »
Just to make the discussion more focused: when firmware is being designed, I assume that the hardware department already has a draft version of the circuit. I know that sometimes firmware design requires a hardware modification, but I was looking at the firmware part only. I mean, the process mentioned in some of the replies indeed intersects with firmware, but it is not really the firmware design loop itself.

Some would say this is like starting to construct a building from randomly placed bricks without drawings, but the analogy is incorrect. Writing code is not comparable to building. What compilers and CPUs do when they implement the code is closer to constructing a building, and that is already automated. Code is a specification written in formal language, designed exactly for specifying how software should work, for computers to compile.

I like this analogy, it is descriptive. However:

Writing code is equivalent to drawing the plans for a building, and sure enough, a designer who does that does sketch, erase, start again, and so on until they and customers are satisfied with how it looks, and several calculations pass. In other words, "software architects" should totally be writing code.

I don't agree 100%, because when you are sketching, you should later start over with the final plan version. However, what really happens in that approach is that the trials become the released code, and we keep refactoring all the time (because we started without a clear plan).

I'm not sure if simply opening an IDE and starting to write code is a bad approach.

That is exactly what I'm looking to know/hear. Firmware and embedded systems engineers have less of a software background from their academic studies compared to software engineers. So I believe that is why software design methodologies and patterns are less important and less used.

I'm with Nominal Animal on going through a combined top-down and bottom-up scheme.

The classic waterfall design methodology is strictly a top-down one, which IMO does not suit software terrifically well because software tends to be more iterative. It works better for hardware, where you don't want to iterate through the waterfall as many times.

Thus for software I tend to stick more with TDD methodologies. It's nice to have code in an automated test environment anyway, as it prevents an innocent "fix" from cascading into a whole set of failed functionality.

Actually, the book "Test Driven Development for Embedded C" by James W. Grenning was one of the things that made me rethink my approach of just coding until all requirements are met. I agree with you; TDD seems to make me more strict about covering code with tests.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: How you usually start architecturing your firmware ?
« Reply #13 on: December 24, 2023, 07:32:09 am »
I don't agree 100%, because when you are sketching, you should later start over with the final plan version. However, what really happens in that approach is that the trials become the released code, and we keep refactoring all the time (because we started without a clear plan).

Yeah. Note though that we also refactor all the time because we can. Similarly, a modern office building designed with modifiable spaces is also under constant modification. One which is cast in concrete isn't, but that is because modifications are prohibitively expensive, not because they aren't needed; so people just cope and work around it. This happens sometimes in software, too; but refactoring and restructuring the foundations is often the better way, and why not, since in software it is possible.
 
The following users thanked this post: YTusername

Offline Karel

  • Super Contributor
  • ***
  • Posts: 2221
  • Country: 00
Re: How you usually start architecturing your firmware ?
« Reply #14 on: December 24, 2023, 08:25:18 am »
Bottom-to-top approach. I always start with the low-level drivers, especially if I have to use a new MCU.

1. Clock tree setup and GPIO.
2. I2C, USART, SPI, etc. can usually be copied from older projects if the same MCU is used.
3. Higher level drivers, e.g. sensors, EEPROM, A/D-converter, etc.
4. Lastly, I start on the high-level functionality.

And no, I don't use HAL...
 

Offline dobsonr741

  • Frequent Contributor
  • **
  • Posts: 674
  • Country: us
Re: How you usually start architecturing your firmware ?
« Reply #15 on: December 24, 2023, 04:12:37 pm »
Start in the middle: define a hardware abstraction layer with the functionality you will need in mind, not necessarily what the hardware provides.
Write bringup code up to the abstraction layer.
Start writing from top down to the abstraction layer, experimenting with user interaction/communication and algorithms.

The Arduino or the Raspberry Pi Pico follows the same model, by giving you the abstraction layer.
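One common way to express such an abstraction layer in C is a struct of function pointers, so the upper layers only see the interface and the backend can be a real peripheral or a host-side fake. All names below are invented for illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* The abstraction is defined by what the application needs,
 * not by what any particular MCU provides. */
typedef struct hal_uart hal_uart_t;
struct hal_uart {
    size_t (*write)(hal_uart_t *self, const uint8_t *buf, size_t len);
};

/* Host-side fake backend, handy for bring-up and unit tests. */
typedef struct {
    hal_uart_t base;         /* must be first: allows the downcast below */
    uint8_t    captured[64];
    size_t     used;
} fake_uart_t;

static size_t fake_write(hal_uart_t *self, const uint8_t *buf, size_t len)
{
    fake_uart_t *f = (fake_uart_t *)self;
    for (size_t i = 0; i < len && f->used < sizeof f->captured; i++)
        f->captured[f->used++] = buf[i];
    return len;
}

static void fake_uart_init(fake_uart_t *f)
{
    f->base.write = fake_write;
    f->used = 0;
}
```

The application code written against `hal_uart_t` never changes when the target backend is swapped in.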
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3147
  • Country: ca
Re: How you usually start architecturing your firmware ?
« Reply #16 on: December 25, 2023, 01:50:58 am »
Some would say this is like starting to construct a building from randomly placed bricks without drawings, but the analogy is incorrect. Writing code is not comparable to building. What compilers and CPUs do when they implement the code is closer to constructing a building, and that is already automated. Code is a specification written in a formal language, designed exactly for specifying how software should work, for computers to compile.

The compiler is simply a tool to simplify code writing. Say, as part of your plan, you want to perform a certain DMA transfer. For this you need to pass certain values to the registers. You use C to write code which will execute this task. Instead of using C you could write the code directly in assembler, or you could call a contractor and have him produce code that implements your idea. No matter which avenue you take, it doesn't change your plan, nor does it change the result - your end product will perform the DMA transfer just as you planned.
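That "pass certain values to the registers" step might look like the sketch below. The register block and field names are invented for illustration and don't match any real DMA controller:

```c
#include <stdint.h>

/* Hypothetical DMA channel registers (names and layout invented). */
typedef struct {
    volatile uint32_t SRC;   /* source address       */
    volatile uint32_t DST;   /* destination address  */
    volatile uint32_t COUNT; /* number of bytes      */
    volatile uint32_t CTRL;  /* bit 0 = channel enable */
} dma_ch_t;

/* The "plan": program addresses and length, then enable.  From that
 * point the hardware performs the copy without further CPU involvement. */
static void dma_start(dma_ch_t *ch, const void *src, void *dst, uint32_t n)
{
    ch->SRC   = (uint32_t)(uintptr_t)src;
    ch->DST   = (uint32_t)(uintptr_t)dst;
    ch->COUNT = n;
    ch->CTRL |= 1u;
}
```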
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16626
  • Country: us
  • DavidH
Re: How you usually start architecturing your firmware ?
« Reply #17 on: December 25, 2023, 02:24:12 am »
I like to think that I learned something from Knuth.

Experience has led me to design the programming starting from the data structures.  That gives me the memory requirements, and knowing how much data I have to read and write gives some idea of the processing requirements at least as far as feasibility.  If there is not going to be enough processing and cache or memory bandwidth, then the expectation of performance must be scaled back, or hardware resources increased.
 
The following users thanked this post: neil555

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: How you usually start architecturing your firmware ?
« Reply #18 on: December 25, 2023, 07:01:29 am »
The compiler is simply a tool to simplify code writing. Say, as part of your plan, you want to perform a certain DMA transfer. For this you need to pass certain values to the registers. You use C to write code which will execute this task. Instead of using C you could write the code directly in assembler, or you could call a contractor and have him produce code that implements your idea. No matter which avenue you take, it doesn't change your plan, nor does it change the result - your end product will perform the DMA transfer just as you planned.

You are describing a compiler for a very low-level language, say an assembler. This obviously depends on the language, but a compiler is usually more than just a simple tool which automates tedious tasks. With a high-level language such as C (there are even higher-level languages, but even C definitely is one), the compiler reads the code against the language specification and implements a program which produces the desired result, so that the code written by the programmer is actually a specification of the desired end result, not some kind of step-by-step instruction which just needs a little bit of help. (In C, the standard has the "abstract machine" concept for this.)

Your misconception is a pretty common one, especially in C, and it usually leads to crappy code which misses the opportunities to make the code more readable by making it high-level: a specification of the intended behavior of the program, written for humans, not for computers or compilers.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6266
  • Country: fi
    • My home page and email address
Re: How you usually start architecturing your firmware ?
« Reply #19 on: December 25, 2023, 07:57:30 am »
[...] misses the opportunities to make the code more readable by making it high level
This is even more important with later ISO C versions, and it is the reason why one must understand and correctly apply C qualifiers like restrict and volatile (and, to a lesser extent, const).  If ISO C23 indeed adds constexpr, that as well.

Correctly used, these not only tell a lot more about the author's intent, they also help C compilers generate better, more efficient machine code.

(For POSIX C, we add even more things on top, including concepts like asynchronous signal safety, process identifiers (man 7 credentials), and so on.)
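A small sketch of those qualifiers in a typical embedded context (the ISR flag and the scaling routine are invented examples):

```c
#include <stddef.h>
#include <stdint.h>

/* volatile: this flag is written from an ISR, so the compiler must
 * actually re-read it on every access instead of caching it. */
static volatile uint8_t data_ready;

/* restrict promises dst and src never alias, which permits better
 * code generation (e.g. vectorization); const documents that src is
 * only read.  Both state the author's intent to humans and compiler. */
static void scale(int32_t *restrict dst, const int32_t *restrict src,
                  size_t n, int32_t k)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}
```

If `dst` and `src` could overlap, passing them to `scale()` would be undefined behaviour: `restrict` moves that aliasing promise from comments into the type system.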

Experience has led me to design the programming starting from the data structures.
Data structures (and related algorithms) used are indeed absolutely crucial.  If you mean starting from discovering and testing the best suited data structures for the key aspects/problems/tasks, I fully agree!
I've found I often have an idea of using one, but either find a reason why it won't work, or via testing and experimentation end up developing it quite far from my original idea.  (Not always towards more complexity; I sometimes discover ways to prune it down to a tiny subset.)

I have quite a few practical examples of this.  Elsewhere on this forum, I've described my favourite example about sorting: the difference between offline sorting (i.e., read all data into an array, then sort that array) and online sorting using self-sorting data structures (trees and heaps, effectively sorting each input item as it is received, so that when all items have been received, the data is essentially sorted), and how that affects total CPU time used, real-time wall-clock time used, and latency.  Disjoint-set data structures have provided me with very efficient solutions in e.g. molecular dynamics' atom cluster and molecule detection.  Even something as simple as a binary min- or max-heap, often used for multiple concurrent timeout handling using a single timer, can be implemented in different ways: via pointers, or a linear array.
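The linear-array variant of that timeout min-heap is compact enough to sketch here (fixed capacity, deadlines in timer ticks; a real implementation would carry a callback or ID alongside each deadline):

```c
#include <stddef.h>
#include <stdint.h>

/* Linear-array binary min-heap of timeout deadlines (in ticks).
 * Children of node i live at 2i+1 and 2i+2; parent at (i-1)/2. */
#define HEAP_MAX 16
static uint32_t heap[HEAP_MAX];
static size_t   heap_n;

static void heap_push(uint32_t deadline)
{
    size_t i = heap_n++;
    heap[i] = deadline;
    while (i > 0 && heap[(i - 1) / 2] > heap[i]) {      /* sift up */
        uint32_t t = heap[i];
        heap[i] = heap[(i - 1) / 2];
        heap[(i - 1) / 2] = t;
        i = (i - 1) / 2;
    }
}

static uint32_t heap_pop(void)                           /* earliest deadline */
{
    uint32_t top = heap[0];
    heap[0] = heap[--heap_n];
    size_t i = 0;
    for (;;) {                                           /* sift down */
        size_t l = 2 * i + 1, r = l + 1, m = i;
        if (l < heap_n && heap[l] < heap[m]) m = l;
        if (r < heap_n && heap[r] < heap[m]) m = r;
        if (m == i) break;
        uint32_t t = heap[i]; heap[i] = heap[m]; heap[m] = t;
        i = m;
    }
    return top;
}
```

The single hardware timer is then always armed for `heap[0]`, the nearest deadline; push and pop are O(log n) with no dynamic allocation.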

Even something as simple as a circular buffer in an array has technical details one may not understand before using such a data structure to solve a real-life problem.  Indeed, the simplest way to handle one robustly means you cannot have the buffer absolutely full, as at least one entry is always technically unused.  When using lockless atomic indexes it gets even hairier, so to verify that your own embedded implementation works correctly, you really should test it on a multi-core machine using multiple concurrent threads; I like using POSIX threads aka pthreads (on Linux/BSD/MacOS), and gcc/clang/icc-provided atomic built-in accessor functions on x86/x86-64.
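The "one entry always unused" detail looks like this in a single-producer single-consumer sketch (no atomics shown; on a real target with an ISR producer, the index updates and their ordering need the care described above):

```c
#include <stdint.h>

#define RB_SIZE 8u   /* capacity is RB_SIZE - 1: one slot stays unused */
static uint8_t rb_data[RB_SIZE];
static volatile uint8_t rb_head;   /* written by producer only */
static volatile uint8_t rb_tail;   /* written by consumer only */

static int rb_put(uint8_t c)       /* returns 0 when "full" */
{
    uint8_t next = (uint8_t)((rb_head + 1u) % RB_SIZE);
    if (next == rb_tail)
        return 0;                  /* full: head would catch the tail */
    rb_data[rb_head] = c;
    rb_head = next;                /* publish only after the data write */
    return 1;
}

static int rb_get(uint8_t *out)    /* returns 0 when empty */
{
    if (rb_tail == rb_head)
        return 0;                  /* empty: indexes are equal */
    *out = rb_data[rb_tail];
    rb_tail = (uint8_t)((rb_tail + 1u) % RB_SIZE);
    return 1;
}
```

Because "empty" is `head == tail`, a completely full buffer would be indistinguishable from an empty one; sacrificing one slot resolves the ambiguity without a separate count variable.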

That is also why I suggest embedded programmers use open source toolchains: the same toolchain can be used to test the algorithms and data structures in a fully-hosted multi-core environment.  Yes, there will be architectural and hardware differences, but at the C implementation level they should not matter.  POSIX is extra nice, because signal delivery has very similar effects to microcontroller interrupts, so even those can be tested somewhat.
« Last Edit: December 25, 2023, 08:00:32 am by Nominal Animal »
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3701
  • Country: gb
  • Doing electronics since the 1960s...
Re: How you usually start architecturing your firmware ?
« Reply #20 on: December 25, 2023, 01:24:52 pm »
I re-use existing proven hardware+software as much as possible, and go straight to a PCB simply because old-style prototyping (e.g. wire-wrapping) isn't viable with modern chips.

For coding, I re-use an existing FreeRTOS setup, which saves a huge amount of time. In the old days, one first wrote some "factory test" code to check/exercise the hardware (and I still do that), then wrote a "main loop" to do the function, with time-critical stuff handled by a) a 1 kHz ISR and b) device-specific (e.g. UART) ISRs. Now, RTOS tasks (usually several) replace the main loop.
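The old-style structure is still a useful mental model; a host-testable sketch of it (the 1 kHz tick would come from a hardware timer interrupt on the target, and the 500 ms task is an invented example):

```c
#include <stdint.h>

/* "Main loop" style: a 1 kHz tick ISR advances time,
 * the loop dispatches work when it falls due. */
static volatile uint32_t tick_ms;

/* On the target this is the 1 kHz timer interrupt handler. */
static void systick_isr(void) { tick_ms++; }

static uint32_t led_toggles;   /* stand-in for toggling a real pin */

/* One pass of the main loop: run the 500 ms task when due. */
static void main_loop_step(void)
{
    static uint32_t next_led = 500;
    if (tick_ms >= next_led) {
        next_led += 500;
        led_toggles++;
    }
}
```

With an RTOS, each such dispatch branch typically becomes its own task blocking on a delay or queue, which removes the hand-rolled scheduling arithmetic.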
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline nigelwright7557

  • Frequent Contributor
  • **
  • Posts: 690
  • Country: gb
    • Electronic controls
Re: How you usually start architecturing your firmware ?
« Reply #21 on: December 25, 2023, 01:38:39 pm »
For small projects I just dive in and code it.
For larger projects I break the problem down into modules where possible.
Then flowchart them out.

As the old saying goes, if you fail to plan then you plan to fail.

My biggest program has been around 500,000 lines and that is very modular.
 
The following users thanked this post: YTusername

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26910
  • Country: nl
    • NCT Developments
Re: How you usually start architecturing your firmware ?
« Reply #22 on: December 25, 2023, 02:50:31 pm »
There are a lot of good suggestions in this thread already, but the one thing I'm missing is making an estimate of the amount of processor power needed. This is something I try to pin down when developing embedded firmware, so I can judge whether the code needs to be super efficient (with the extra time spent) or the processor is fast enough that execution speed doesn't need much care. Usually floating point (even software-emulated) is simpler, both to write and to read in the resulting code, than fixed-point calculations.
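The readability difference shows up even in something as small as a first-order low-pass filter; below is the same filter in floating point and in Q15 fixed point (the filter and its coefficient are invented examples):

```c
#include <stdint.h>

/* y += 0.25 * (x - y): trivial to read in floating point. */
static float lp_float(float y, float x)
{
    return y + 0.25f * (x - y);
}

/* Same filter in Q15 fixed point (values scaled by 2^15).
 * The 0.25 coefficient becomes an arithmetic shift by 2, and the
 * difference needs a 32-bit intermediate to avoid overflow. */
static int16_t lp_q15(int16_t y, int16_t x)
{
    int32_t diff = (int32_t)x - y;
    return (int16_t)(y + (diff >> 2));
}
```

The fixed-point version is faster on an FPU-less core, but every coefficient change drags scaling and overflow analysis along with it; that maintenance cost is part of the estimate too.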
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3701
  • Country: gb
  • Doing electronics since the 1960s...
Re: How you usually start architecturing your firmware ?
« Reply #23 on: December 25, 2023, 02:59:29 pm »
Yes, this is the big change in, say, the last 10 years, with ARM32 or similar CPUs. It would take a very unusual application to make one of these 180 MHz-480 MHz chips run out of steam, and at $5-$10 (volume qty) there is no point in using slower parts.

And people working in those unusual spheres probably know enough about what they are doing to work it out... sometimes going to FPGAs and such.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8180
  • Country: fi
Re: How you usually start architecturing your firmware ?
« Reply #24 on: December 25, 2023, 06:28:50 pm »
There are a lot of good suggestions in this thread already, but the one thing I'm missing is making an estimate of the amount of processor power needed.

This, plus, often even more importantly, which peripherals are needed, which are available, and whether the peripherals mentioned in the datasheet can actually be used (due to limited pin routing options, limited DMA mappings, and so on). This can have quite serious consequences for the choice of the whole product family, or even the manufacturer, which can then affect the code quite a bit, because sometimes you can't do layered abstraction and some of the hardware "leaks" into the higher-level code, no matter how inelegant that sounds to software scientists.
 

