Author Topic: RTOS theory help. Global variables vs Mailbox/Messages? Re-usable code?


Offline jnz (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 593
Building my first RTOS application. I'm using Keil RTX, which is CMSIS-RTOS, but this should apply to just about any RTOS. I have a fair understanding of the data-transfer and blocking options: signals, semaphores, mutexes, mailboxes, messages, yield, delay, etc.

I'm not sure when to use global variables vs directly sending messages/signals/mail between threads. The thing is, depending on who you ask, globals are either a fine option or the devil in code. On the other hand, if I want my threads to be re-usable I don't really want to make assumptions with messages/mail, since I won't know what application they are being plugged into.

For example:

I have an 8-channel SPI LED driver. I want to store this thread in LED_Driver.c so I can hopefully plug it into any application that later uses this chip. I just initialize it and call it when needed. The thread should take in data to set the LEDs to, and should be able to return or send back diagnostic data. So my immediate options are:

1. There is a global struct. It holds the requested LED states, diagnostic info, etc. The LED thread is run when called, or at an interval; it then calls the SPI thread (with a mutex on it), reads the struct, and stores the result back there. Any other thread can set requests and then call the LED driver thread, or can read the diagnostic results.

2. There is no global struct, but there is a local one in my main logic thread. I could pass its pointer over in a message/mailbox to the LED thread, set a signal or a semaphore to say I need the thread to do something, and it now has the data to do so. It calls the SPI thread once or many times, depending on how the part works. If I needed to send diagnostic data back, I could put it in a mailbox and anyone interested could read it. The issue I have with this is that I don't know who will be reading from the mailbox, when, how often, etc.

Which of these is the better option in the long run, considering that the real application will get much more complicated than this and I want an emphasis on re-usable code for threads, drivers, etc.?
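
To make option 1 concrete, here is a minimal sketch against the CMSIS-RTOS v1 API; every name in it (led_status, led_request, and so on) is made up for illustration rather than taken from a real driver:

Code: [Select]
#include <stdint.h>
#include "cmsis_os.h"

// Option 1 sketch: one global status struct, guarded by a mutex
typedef struct {
    uint8_t requested[8];   // requested LED states, written by any thread
    uint8_t diag[8];        // diagnostic read-back, written only by the LED driver thread
} led_status_t;

static led_status_t led_status;
static osMutexId    led_status_mutex;
osMutexDef(led_status_mutex_def);

void led_status_init(void) {
    led_status_mutex = osMutexCreate(osMutex(led_status_mutex_def));
}

// any thread can set a request; the LED driver thread later picks it up and talks SPI
void led_request(uint8_t channel, uint8_t state) {
    osMutexWait(led_status_mutex, osWaitForever);
    led_status.requested[channel] = state;
    osMutexRelease(led_status_mutex);
}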
 

Offline AutomationGuy

  • Contributor
  • Posts: 39
  • Country: de
If you pass just a pointer to a local variable to another thread you might get race conditions, or the memory is on the stack, which can result in an invalid pointer.
If two threads write to the same memory at the same time you might lose data.
If just one thread writes and the other reads, you might miss some data or your struct can be incomplete.
Inter-thread communication always needs some kind of locking.
Message boxes are an implementation of a locking mechanism. But don't put a pointer in a message box, because the memory the pointer points to is not protected.
 

Offline jnz (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 593
Yea, I'm all about blocking and protection, but I'm kicking around global structs vs local structs with the data transferred via mailboxes. I suppose I already have a few globals with some pre-made stacks, so it shouldn't be an issue to go that route for now. I'll probably switch between and combine them as I get more experience with RTOSes.

Race condition, not raise condition, but I got what you meant. :)
 

Offline AutomationGuy

  • Contributor
  • Posts: 39
  • Country: de
Variables local to a certain function reside on the stack, which is memory temporarily created for the execution of that function.
The memory will be freed and reused by other variables when you leave the function.
In C you can create static memory, which is actually global memory that is just visible in the current scope (function).
If you send a pointer to a local variable by a message, mailbox, or whatever inter-thread communication you use, the memory behind the pointer might not be valid anymore when the message is received or the mailbox is read.
You can either put your complete struct into one message or mailbox (copies involved), or you create a global variable and protect access to that variable.
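
A minimal sketch of the "copy the whole struct into a mail slot" option, assuming the CMSIS-RTOS v1 mail API; the type and queue names here are hypothetical:

Code: [Select]
#include <string.h>
#include <stdint.h>
#include "cmsis_os.h"

typedef struct { uint8_t led[8]; uint32_t tick; } led_report_t;

osMailQDef(led_mail, 4, led_report_t);      // pool of 4 led_report_t slots
static osMailQId led_mail_q;

void led_mail_init(void) {
    led_mail_q = osMailCreate(osMailQ(led_mail), NULL);
}

// producer: allocate a slot, copy the data in, post it
void led_publish(const led_report_t *src) {
    led_report_t *slot = (led_report_t *)osMailAlloc(led_mail_q, osWaitForever);
    memcpy(slot, src, sizeof(*slot));       // copied, so the sender's local struct may go out of scope
    osMailPut(led_mail_q, slot);
}

// consumer: the mail block stays valid until it is explicitly freed
void led_consume(void) {
    osEvent evt = osMailGet(led_mail_q, osWaitForever);
    if (evt.status == osEventMail) {
        led_report_t *rep = (led_report_t *)evt.value.p;
        // ... use rep ...
        osMailFree(led_mail_q, rep);
    }
}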
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19506
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Your primary consideration is what is constant or read-only, versus what mutates or is read/write.

If it is constant or read-only then, after initialisation it can be in a global.

Anything mutable or read-write can cause intermittent problems in a multitasking environment - whether or not it is multiprocessor.

There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline andyturk

  • Frequent Contributor
  • **
  • Posts: 895
  • Country: us
I have an 8channel SPI LED driver. I want to store this thread in LED_Driver.c so I can hopefully plug it into any application that later uses this chip. I just initialize and call it when needed. The thread should take in data to set the LEDs to, should be able to return or send back data about diagnostics.

You may not need/want threads for this. At least, your LED driver API sounds like something that probably doesn't need to be a thread all by itself, but may end up being called by some other code that *is* a thread. Let's make it interesting and assume that your driver needs to send a lot of data over a slow SPI bus. I.e., maybe it takes 100 msec to send all the info, and you can't afford to lock up the system for that much time just to update the LEDs.

The first thing is to write a thread-aware SPI driver. It looks the same as a normal SPI call (e.g., status_t spi_send(SPI *bus, void *data, size_t length)), but it uses the RTOS to block the calling thread until the SPI transaction completes. That allows all the other threads (when you write them) to keep chugging away while one thread (perhaps with a message queue) blasts out data to your LEDs. To do this, you'll probably put a semaphore inside the SPI bus structure that's shared between the SPI interrupt handler and the threaded side of your SPI driver. That code sets up the parameters for a SPI transaction, primes the SPI peripheral with the first byte to send, and then calls the wait() on the semaphore. Later on, the interrupt handler will signal() the semaphore when the transaction is complete and allow the thread to pick up where it left off. For extra credit, you'll want to wrap access to your SPI peripheral in a mutex to make sure that two threads don't try to send data simultaneously.
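
A rough sketch of that blocking SPI call, assuming the CMSIS-RTOS v1 API; spi_hw_start_transfer() and the interrupt handler name are placeholders for whatever your hardware layer actually provides:

Code: [Select]
#include <stddef.h>
#include "cmsis_os.h"

typedef int status_t;
void spi_hw_start_transfer(void *data, size_t length);   // placeholder: set up the transaction, send the first byte, enable the IRQ

typedef struct {
    osMutexId     lock;    // one transaction at a time
    osSemaphoreId done;    // created with an initial count of 0, signalled by the ISR
} SPI;

static SPI spi1;           // lock/done created elsewhere with osMutexCreate()/osSemaphoreCreate()

status_t spi_send(SPI *bus, void *data, size_t length) {
    osMutexWait(bus->lock, osWaitForever);       // keep other threads off the bus
    spi_hw_start_transfer(data, length);         // prime the peripheral
    osSemaphoreWait(bus->done, osWaitForever);   // block the calling thread until the ISR says "done"
    osMutexRelease(bus->lock);
    return 0;
}

void SPI1_IRQHandler(void) {
    // ... feed the next byte, or on the last one: clear flags and ...
    osSemaphoreRelease(spi1.done);               // wake the thread blocked in spi_send()
}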

Once you have that stuff in place, you could write a thread that listens on a message queue for a request, and then calls the aforementioned RTOS-aware SPI driver in an infinite loop. That loop would be really tiny, almost a one-liner.
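
And the "almost a one-liner" loop might look like this (again CMSIS-RTOS v1; led_request_q and led_render() are made-up names):

Code: [Select]
#include <stdint.h>
#include "cmsis_os.h"

extern osMessageQId led_request_q;     // filled by whoever wants the LEDs changed
void led_render(uint32_t request);     // builds the frame and calls the blocking spi_send()

void led_thread(void const *arg) {
    for (;;) {
        osEvent evt = osMessageGet(led_request_q, osWaitForever);  // sleep until a request arrives
        if (evt.status == osEventMessage)
            led_render(evt.value.v);
    }
}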
 

Offline jnz (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 593
The first thing is to write a thread-aware SPI driver. It looks the same as a normal SPI call (e.g., status_t spi_send(SPI *bus, void *data, size_t length)), but it uses the RTOS to block the calling thread until the SPI transaction completes. That allows all the other threads (when you write them) to keep chugging away while one thread (perhaps with a message queue) blasts out data to your LEDs. To do this, you'll probably put a semaphore inside the SPI bus structure that's shared between the SPI interrupt handler and the threaded side of your SPI driver. That code sets up the parameters for a SPI transaction, primes the SPI peripheral with the first byte to send, and then calls the wait() on the semaphore. Later on, the interrupt handler will signal() the semaphore when the transaction is complete and allow the thread to pick up where it left off. For extra credit, you'll want to wrap access to your SPI peripheral in a mutex to make sure that two threads don't try to send data simultaneously.

This is 100% exactly what Keil's SPI driver / middleware does. I looked it over yesterday, so we're on the same page right now! The difference I have here is that (and remember I'm just using the LED driver as an example, I also have some other chips) I want a thread because there is a bit of calculation I want the driver thread to do itself. That is, it would be nice if the LED driver updated its own states/status every 100 ms by reading from the chip over SPI and storing that data. It would also do a wait(0 ms) to see if a message/mail/signal came in giving it something special to do, like turn a light on. In this way, I have an always-updated account of statuses every 100 ms and an option to take in an argument and go do something. If I use a global struct for the statuses, I can write once and read from many different threads. For example, my main logic thread and supervisor threads can both monitor the status of this chip, auto-updated every 100 ms, and if I remove my supervisor thread, I don't need to adjust any mail or messaging in the driver itself.

I see the issues in re-usability of combining a driver and a thread. But I'm trying to plan for "what if I have three drivers and four unknown threads that need their data?" If I use a counting semaphore or mail etc., I can't be sure every consumer will get the data from every producer. But if I have a global struct that only the producers can write to, it's easy to consume.


Quote
Once you have that stuff in place, you could write a thread that listens on a message queue for a request, and then calls the aforementioned RTOS-aware SPI driver in an infinite loop. That loop would be really tiny, almost a one-liner.
Ok... So now I see.... Write the driver as a driver. Make it "blocking" to the calling thread with semaphores/mutexes where applicable, then write a very small thread that calls it. I'd be getting the same functionality as what I just wrote above, but with the option to use that code outside of the RTOS later (which will probably never happen).

Code: [Select]
void Small_Driver_Task(void const *arg) {
  for (;;) {
    osMutexWait(led_driver_mutex, osWaitForever);
    LED_Driver_Update(&led_status);            // already calls the thread-aware SPI driver
    osMutexRelease(led_driver_mutex);

    osEvent evt = osMessageGet(led_driver_request_q, 0);   // 0 = don't wait
    if (evt.status == osEventMessage /* and it's something I care about */) {
      // do it
    }
    osDelay(100);
  }
}

The issue is.... this doesn't answer my question exactly. If I do the above, which looks good to me, I need to get the struct data from the update into other threads. I can mail/msg it around, but I'd need to know who it's going to. I could leave it local and just feed it to anyone that asks, via a signal/msg request for data and a mail return. Or... I can make the struct global so everyone has read access to it. This would be riskier, and it also adds dependencies in any thread that will read this data, but it's also the easiest to implement, especially for an unknown configuration later.

Am I making sense?
« Last Edit: July 02, 2015, 05:40:17 pm by jnz »
 

Offline jnz (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 593
Your primary consideration is what is constant or read-only, versus what mutates or is read/write.

If it is constant or read-only then, after initialisation it can be in a global.

Anything mutable or read-write can cause intermittent problems in a multitasking environment - whether or not it is multiprocessor.

I get that that's the concern with globals. But how about this... I have one logic thread and 3 drivers in this application; the drivers are producers of info, and the logic thread is a consumer but will also send requests.... Now, what if my next configuration uses three logic threads and 8 drivers? I want to be able to scale the same-ish code over those applications. If I have each LED driver use globals, this is pretty easy.

How would I do it using the safe mail/msg approach and local structs?
 

Offline AutomationGuy

  • Contributor
  • Posts: 39
  • Country: de
If I get you right, you want to publish status data to an arbitrary number of subscribers.
The publisher is your subject (your LED driver). When someone subscribes for LED status data you need to remember whom to send the message to. When they unsubscribe from status information, the observer is removed from the observer list.
Each time you want to publish your LED status (every 100 ms) you send a notification to every observer via a message.
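
A minimal sketch of that subscribe/notify idea on top of CMSIS-RTOS v1 message queues; the array size and all the names are made up for illustration:

Code: [Select]
#include <stdint.h>
#include "cmsis_os.h"

#define MAX_OBSERVERS 4

static osMessageQId observers[MAX_OBSERVERS];
static int          observer_count;

// an observer hands over the queue it wants notifications posted to
void led_subscribe(osMessageQId q) {
    if (observer_count < MAX_OBSERVERS)
        observers[observer_count++] = q;
}
// (unsubscribe would remove the entry again; omitted here)

// called by the LED driver thread every 100 ms after it has refreshed its status
void led_publish(uint32_t status_word) {
    for (int i = 0; i < observer_count; i++)
        osMessagePut(observers[i], status_word, 0);   // don't block if a subscriber's queue is full
}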
 

Offline jnz (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 593
If I get you right you want to publish status data to an arbitrary number of subscriber.
The publisher is your subject (your LED driver). When you subcribe for LED status data you need to remember to whom to send the message. When you unsubscribe for status information the observer will be removed from the observer list.
Each time you want to publish your LED status (every 100ms) you can send a notification to all observer via message.

Got it. I was a little lost on the fact that this is a subscription model. Um, ok. I suppose at the extreme this is definitely not as memory-efficient as a single global, but at the cost of RAM I get the idea.

On the topic of efficiency, if I have 5 subscribers, I need to add a mail message to the queue for each one separately, which will be a latency of mail * subscribers every time I want to update them. Hmmm... I'll have to think about the resources this will take. There is also a coding issue of having to figure out who will subscribe to whom and when, then also a process for removing subscribers, and a leak if I fail to do that properly. I get this, it just seems to be outside of the scope of what I'm doing here. I'll keep reading tho.
 

Offline AutomationGuy

  • Contributor
  • Posts: 39
  • Country: de
It is just a pattern. You can choose how efficiently to implement it.
If you need to transport a huge amount of data (maybe 10 KB or 10 MB) you can keep the data in a global memory pool and just send a notification to each observer. You must be careful when changing data in the pool. On a real-time system you can make sure the data in the pool is complete before notifying the observers.
Btw, I often implement message queues myself (not using the OS's) because they are more efficient for certain use cases.
Reusability of source code and common interface (API) design are often subject to performance compromises.
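
A sketch of that "big data in a pool, small notification per observer" idea with the CMSIS-RTOS v1 pool API; the block layout, queue handling, and who eventually frees the block are all assumptions for illustration:

Code: [Select]
#include <stdint.h>
#include "cmsis_os.h"

typedef struct { uint8_t samples[1024]; uint32_t tick; } big_block_t;

osPoolDef(big_pool, 4, big_block_t);                  // room for 4 blocks in flight
static osPoolId big_pool_id;

void pool_init(void) {
    big_pool_id = osPoolCreate(osPool(big_pool));
}

void publish_block(const big_block_t *src, osMessageQId *observers, int n) {
    big_block_t *blk = (big_block_t *)osPoolAlloc(big_pool_id);
    if (blk == NULL) return;                          // pool exhausted: drop or handle as needed
    *blk = *src;                                      // fill the block completely *before* notifying anyone
    for (int i = 0; i < n; i++)
        osMessagePut(observers[i], (uint32_t)blk, 0); // each observer only gets a pointer-sized note
    // the last consumer (or a reference-count scheme, not shown) calls osPoolFree(big_pool_id, blk)
}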
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19506
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Your primary consideration is what is constant or read-only, versus what mutates or is read/write.

If it is constant or read-only then, after initialisation it can be in a global.

Anything mutable or read-write can cause intermittent problems in a multitasking environment - whether or not it is multiprocessor.

I get that's the concern with globals. But how about this... I have one logic thread and 3 drivers in this application the drivers are producers of info, the logic thread is a consumer but also will send requests.... Now, what if my next configuration uses three logic threads and 8 drivers? I want to be able to scale the same-ish code over those applications. If I have each led driver use globals, this is pretty easy.

How would I do it using the safe mail/msg approach and local structs?

Think in terms of threads communicating via FIFOs with a depth of N messages, N >= 1. Each message in a FIFO is a request or a response. Each thread either creates or consumes messages, or both.

For the threads that put something into a FIFO, work out what they should do if the FIFO is already full: should they continue, or should they block.

For threads that take something out of a FIFO, work out what they should do if the FIFO is empty: continue or block.

With those primitives you can compose many systems with properties that can be described at a higher level and analysed in terms of queueing theory. You can see where bottlenecks are by looking at the size of the queues.
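
In CMSIS-RTOS v1 terms, the "block or carry on" decision is just the timeout argument you pass; a minimal sketch with hypothetical names:

Code: [Select]
#include <stdint.h>
#include "cmsis_os.h"

void producer_step(osMessageQId fifo_q, uint32_t value) {
    if (osMessagePut(fifo_q, value, 0) != osOK) {      // 0 = don't block; an error here means the FIFO was full
        // decide: drop the value, count an overflow, or retry with osWaitForever to block instead
    }
}

void consumer_step(osMessageQId fifo_q) {
    osEvent evt = osMessageGet(fifo_q, osWaitForever); // block: sleep until a message arrives
    if (evt.status == osEventMessage) {
        // handle evt.value.v ...
    }
    // a polling consumer would pass 0 instead and simply carry on when nothing was returned
}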

There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline jnz (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 593
Need to bring this back up.... still having some issues with the theory.

Example:

- I have two main logic threads, responsible for the main functions of the device.
- I have between two and eight threads generating information. Perhaps one is a system watchdog, another is a traffic cop managing incoming digital data, one is a UART where data is coming in from other systems; user data, USB, CAN, SPI, lots of data streams.

Using globals, I had planned on having the threads responsible for "getting data" be the only ones to update their respective data structures. I could then read them (access-only) anywhere else. An example: over a digital bus I'm getting voltage, temperature, on/off states of other modules, their current input states, etc.; my logic thread can examine those structures and act on them without the producers/contributors ever knowing where their goods were going.

If I do that, use globals, in my logic threads I can just do something like

Code: [Select]
//inside logic thread 1
if (uart_data.voltage > 10)
    do_something();

//maybe inside logic thread 2
if (uart_data.voltage > 12 && usb_data.request == START &&
    can_data.module1.system_status == OFF && onboard_processor_2.temp < 105)
    do_something_more_complex();

This is what I'm getting at. In that second example... if I wanted to use signals, mail, or messages, I'd need to do so many of them to gather enough data before I could get anything done. Each request and each piece of storage has its own overhead. Then also, each data-generating thread (producer) needs to be "aware" of what it has to get to the logic threads (consumers), instead of being an agnostic / universal thread that is responsible for getting, combining, and sorting its own information on a regular basis.

In that second logic-thread example, I'm just not clearly seeing how to coordinate that without using globals. I could have another thread whose job it is to request and sort the data into localized mail for the logic threads... but another issue there is queues. If I'm trying to generate mail for each logic thread, at some point I'm going to GREATLY increase the risk that a logic thread will be acting on old data, and that my all_the_info_logic1_needs struct is queued while I don't have time to work on it yet. Imagine I stacked all this together and sent it as mail, then sent another mail before the first was acted upon. When it gets the chance, the logic thread would run twice in a row with different data: first with the old data, then with what should have been the only thing sent.

I guess in the case of multiple contributors, I'm not seeing a way to keep them unaware of the whole system and re-usable, while also making sure my complex logic threads can take in multiple sources.

Make sense to anyone?
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19506
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
If I wanted to use signals, mail, or messages, I'd need to do so many times to gather enough data before I can get anything done. Each request and storage has its own overhead.

The maximum amount of space is predefined: the maximum number of entries in the queue.

Quote
Then also, each data generating thread (producer) needs to be "aware" of what they need to get to the logic threads instead (consumers) instead of being agnostic / universal threads that are responsible for getting, combining, sorting their own information on a regular basis.

Each producer puts everything it generates into the queue. Each consumer only takes any notice of what it needs to know, and ignores the rest.

Quote
In that second logic thread example, I'm just not clearly seeing how to coordinate that not using globals.

You seem to think globals simplify communication and control. They don't. They hide interactions and distribute interactions across multiple changing threads.

As a very wise person (Tony Hoare) summed it up: "There are two ways of constructing a piece of software: One is to make it so simple that there are obviously no errors, and the other is to make it so complicated that there are no obvious errors." Queues are the former, globals the latter.

Quote
If I'm trying to generate mail for each logic thread, I'm going to at some point GREATLY increase the risk that I'll be acting in a logic thread old data.

Easily solved by the consumer sitting in a tight loop sucking data from the queue - when the queue is empty it knows it has the latest data.
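
As a sketch of that "drain the queue" trick in CMSIS-RTOS v1 (queue name hypothetical): block for the first message, then keep polling with a zero timeout until the FIFO comes up empty, at which point the last value read is the freshest one.

Code: [Select]
#include <stdint.h>
#include "cmsis_os.h"

uint32_t get_latest(osMessageQId status_q) {
    osEvent evt = osMessageGet(status_q, osWaitForever);  // wait for at least one message
    uint32_t latest = evt.value.v;
    for (;;) {
        evt = osMessageGet(status_q, 0);                  // poll, don't block
        if (evt.status != osEventMessage)
            break;                                        // queue empty: 'latest' is now the newest data
        latest = evt.value.v;
    }
    return latest;
}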

Quote
And that my all_the_info_logic1_needs_struct type is queued and I don't have time to work on it yet... Imagine I stacked all this together and sent it as mail, and I send mail another before the first is acted upon. When it can, the logic thread would run twice in a row with different data, the first with old data, the next with what should have been the only thing to send.

Globals don't simplify that; they bring different problems that are transient and impossible to nail down.

Quote
I guess in the case of multiple contributors, I'm not seeing a way to keep them unaware of the whole system and re-usable, while also making sure my complex logic threads that are taking in multiple sources.

During system initialisation you have a process that "connects" or "wires" each producer output to the relevant queue input(s), and each consumer input to the relevant queue output.

Think of it as a "patch panel" or backplane. The fancy modern pattern name for that old technique is "dependency inversion principle".
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline jnz (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 593
overhead.
The maximum amount of space is predefined: the maximum number of entries in the queue.

Overhead per reading and writing messages. No OS does this for "free". If I have a logic thread that's reading mail/messages, each time I check there is overhead. Now figure I have a fast logic thread, and there are 100 variables I need to know now, from 3-30 producers, before I can act.... That's a lot of message writing and reading. Consider that one of the threads might have a 1 ms cyclic rate.

Quote
Each producer puts everything it generates into the queue. Each consumer only takes any notice of what it needs to know, and ignores the rest.
Um... well, that's not really how any CMSIS RTOS works... There are no IDs for messages, just a FIFO queue. So you can send something to MessageQ_A or _B, but there is no method that I know of for "notice what it needs and ignore the rest". Just to be clear about that part quickly. Maybe I'm drastically missing something.

Quote
You seem to think globals simply communication and control. They don't. They hide interactions and distribute interactions across multiple changing threads.
Yea... I'm not following. When I say globals I'm talking about a struct called, say, USB_Data; inside it are all sorts of values that will come in over USB. The USB thread is the only one that can write to them. There might be 10 or 1000 variables in that struct. The USB thread is also responsible for keeping the info up to date. Any thread can then read from USB_Data.whatever. Writing that they hide interactions makes me think you aren't following what I'm talking about. In my example, USB_Thread doesn't have to know anything about anyone else, just how to do its own job. My logic thread doesn't even need to know everything about the USB struct, my USB_Thread doesn't need to know to send a specific subset of data, etc. By switching this to a mail/message scenario, I can't be sure that if I "sent" the entire struct I wouldn't be stacking it up in a queue. Again, I can't even be certain that any consumer threads are even running at X or Y time.

I'm not sure if we're on the same page, because it seems that you're just giving me cautionary-tale warnings about "globals". If we are on the same page, I'm definitely not seeing what you're talking about considering the subscriber model. Can you make a pseudo-code example of how you'd "wire" a consumer thread to a producer in your example?

Because, say, USB_Thread and Led_Driver_Thread both have 100 variables that are read and re-read every 100 ms. Before my logic thread / consumer can do anything I need 4 states from USB and 2 states from the LED driver. But I don't really want any special code that would prevent my work from being re-used in another application where the logic thread might be entirely different. Dig?

Because in your example, I'm seeing a lot of this
Code: [Select]

// in logic thread
usb_data_struct_type *usb_data;

// example limiting the request to 32 bits; USB_Thread now has to put together mail for USB_Data
osMessagePut(usb_request_q, USB_D1 | USB_D2 | USB_D3 | USB_D4, osWaitForever);

osEvent resp = osMailGet(USB_Data_mbox_for_Logic1, 100);   // 100 ms timeout
if (resp.status == osEventTimeout) {
    handle_resp_timeout();
} else {
    usb_data = (usb_data_struct_type *)resp.value.p;
}
// now use usb_data... which I cannot be assured is the most recent data, because I pulled it
// from a FIFO; newer, better data could be behind it, stacking up

Which isn't that bad... EXCEPT... USB_Thread is already running cyclically, which means my logic thread will have a wait of about 1-2x the cyclic rate of USB_Thread: up to one cycle for the send message, depending on where USB_Thread is in its wait, and up to one more for it to pack the mail and my logic thread to get it. This is overhead ASIDE from the OS actually handling messages and mail.
 
Worse is that in this pseudo-code example I have a mailbox that has to be dedicated directly to the logic thread, and while I can do this by passing a mailbox ID / pointer over to USB_Thread, it would be a lot easier to just hard-code the mailbox name. This seems fairly complicated. I'd be very, very wary of posting mail in a shared mailbox and just assuming the right thread will get it. Again, there don't seem to be mailbox IDs or a way to sort through mail. Each mailbox is a FIFO.

Quote
"dependency inversion principle".

I'm familiar with the idea. What I'm not seeing is how this applies to an RTOS in my example of a lot of data consumed from many different producers. In fact, so far with the patchboard analogy, all I'm seeing is an increase in immobility and bloat from having to name and give away specific mailboxes for each source the logic thread will get data from. Although, to be fair, as far as mobility goes, with globals I'd have to extern the struct for the logic threads so they knew what was available.

Any chance you could pseudo-code something?
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19506
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Any pseudo code would be either too trite and simple to be helpful, or so long and complicated that everybody would get bogged down in irrelevant details.

As for overhead, nothing comes for free.

If the FIFO had any knowledge of what's in it, then the abstraction would be badly broken. The application has to determine what to process and what to ignore. Having said that, some FIFOs can have filters attached to them, but in my applications that has always had to be done in the application code.

I have pointed you towards some high level abstractions that have been found useful in a wide range of applications; sometimes these are known as "design patterns". With any design pattern you have to work out whether its advantages match your system; if not then the pattern is not applicable. That's your choice.

Engineering is all about choosing the tradeoffs that will make your application practical, predictable, debuggable, and manageable. Good luck in documenting all the ways in which your globals can be and should not be accessed, as well as ensuring all code running in the processor does not over-optimise accesses to them. It is a nightmare debugging multiple threads simultaneously accessing and mutating a global. Ditto working out where the bottlenecks are in a running system. Those problems are very largely absent if standard RTOS facilities, particularly FIFOs, are used.

I suggest you google for "rtos design pattern" and "realtime design pattern" and variants thereof.
« Last Edit: July 07, 2015, 12:36:42 am by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline jnz (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 593
Any pseudo code would be either too trite and simple to be helpful, or so long and complicated that everybody would get bogged down in irrelevant details.

:\

Yea, I'm fairly certain we're not on the same page. I need data in from multiple sources, and I don't want those sources to be consumer-aware, as it'll kill mobility/portability. And nothing you've written is an example of what would work. I'll look into RTOS patterns more.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19506
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
I'll look into RTOS patterns more.

While you're at it, have a look at "enterprise" message passing design patterns. That mob are sufficiently narrow that they think they've invented everything, and don't realise that the same ways of thinking are applicable in very different domains. http://www.enterpriseintegrationpatterns.com/ is a reasonable starting point.

Obviously the core technology used is very different, so don't expect directly applicable answers. Nonetheless you are both dealing with getting discrete information from sources to sinks without either being aware of the other.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online Sal Ammoniac

  • Super Contributor
  • ***
  • Posts: 1671
  • Country: us
Complexity is the number-one enemy of high-quality code.
 

Offline jnz (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 593
http://koopman.us/bess/chap19_globals.pdf

See, I think this is one of those cases where people are told something as an absolute, like "globals are evil", which is meant as extreme hyperbole in order to encourage other approaches, when in reality globals CAN be fine if used correctly and sparingly.

"Globals are evil, and it’s better to avoid using them. As with any idea, avoiding globals can be carried to absurd extremes, but the usual pitfall is using too many globals instead of too few."

The title is meant as satire in the theme of a cautionary tale, not to be taken literally.


I have a method of passing all the data coming into my logic threads right now. But it's MASSIVELY more complex than my previous idea of using a write-once read-many global struct. In terms of re-using the code, it's slightly better, as now I have a logic thread that is almost entirely agnostic of what a USB / UART / CAN / SPI / etc. message is, and instead is only focused on signals. I still need to share a common signal structure among all the producers and responders, but it's not that bad. All the same, this is a LOT more work to do it correctly. Each producer has a request_data mailbox with a queue sized to twice the expected need, the same for a request_function mailbox if that producer is a driver, the same for logic, etc. etc. It's probably 10-20x the RAM to make what are effectively shadow copies of the objects I need. I'm being careful to keep that as low as possible, but it adds up when each signal now has a 32-bit tick variable associated with it, instead of effectively bundling the tick/timestamp among many signals. There is also a ton of processing overhead in taking in a request via mail, packing it, and sending it to the consumers. But whatever... I'll give it a shot, only because I can keep each module/thread slightly more separated and agnostic of the rest of the system.

I have the RAM and the processing time right now. If I find that I'm running low on either, I'll have to go to other "evil" options.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19506
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Out of curiosity, how much experience do you have of other people using your libraries and code without having direct access to you?

IMNSHO you find all sorts of unexpected behaviour (both human and machine) crawling out of the woodwork at that point.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online Sal Ammoniac

  • Super Contributor
  • ***
  • Posts: 1671
  • Country: us

"Globals are evil, and it’s better to avoid using them. As with any idea, avoiding globals can be carried to absurd extremes, but the usual pitfall is using too many globals instead of too few."

The title is meant to be satire in the theme of a cautionary tale and not a literal.

Yes, the title of that chapter is meant somewhat tongue-in-cheek, I'm sure, but the main point is still valid--globals can lead to code instability and should be avoided whenever possible.

The use of globals can be taken to extremes on both ends of the scale. A good example of one extreme is Toyota's use of 11,000 globals in their engine management code which was recently in the news. On the other extreme, I've seen people go so far out of their way to eliminate globals entirely that their code was inefficient, convoluted, and artificial.

To paraphrase George Washington, "globals, like fire, are a dangerous servant and a fearful master". We should carefully consider the use of each global, use them only where necessary, and attempt to eliminate them wherever possible.
Complexity is the number-one enemy of high-quality code.
 

Online Berni

  • Super Contributor
  • ***
  • Posts: 4955
  • Country: si
What you should be asking yourself is whether you need a separate thread for this.

If you only need something to happen when you call it, then it's best to simply do it like any other hardware driver routine. The only thing to watch out for is that you obtain a mutex before using the SPI registers, just in case you call it at the same time from two separate threads.

If you want the call to your driver function to return quickly, you can make use of old-school interrupts. For example, load up the SPI data to be sent out in an array and then use SPI interrupts to shift the data into the registers. In a lot of large MCUs you might have a big enough FIFO in your SPI peripheral that you only have to fire one quick interrupt to clear the CS line. For short and often-recurring tasks you save a lot of CPU cycles this way, because an interrupt handler can enter and exit a lot quicker than an RTOS context switch. You still need even more careful mutex locking to make it reliable, so I would advise only doing this if the driver is going to be called a lot and you want to reduce CPU load.

Another method for slow interfaces is what I like to call delay polling. With this you wait for your peripheral (say a UART with a decent buffer built in) by looping in a while loop, but you insert a short 1 to 10 ms RTOS sleep command in the loop. That way the thread wakes up every so often and checks if the peripheral is ready; if not, the RTOS will execute other things in that time to use the CPU cycles for something useful. This causes a lot of context switching and is only usable for slow polling in the millisecond range.
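
A tiny sketch of that delay-polling loop with CMSIS-RTOS v1; uart_rx_ready() stands in for whatever status check your peripheral actually offers:

Code: [Select]
#include "cmsis_os.h"

int uart_rx_ready(void);       // placeholder: e.g. test a status flag or FIFO level

void wait_for_uart(void) {
    while (!uart_rx_ready()) {
        osDelay(5);            // sleep 5 ms; the RTOS runs other threads in the meantime
    }
    // peripheral is ready, go read it
}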

As for a dedicated thread: that is for when the peripheral needs constant babysitting, for example running a SPI IO expander that you use to blink LEDs a lot. In that case it's most sensible to spawn a new thread that, for example, calculates the time until the next LED blink and tells the RTOS to sleep until then, while also waiting for any commands on how to blink the LEDs to come in over FIFOs.

By the way, there is no need to expose any of this in your driver's C file. You simply make a routine that, say, messages the LED blink pattern to where it needs to go. That way, in other parts of your code you just call that routine and don't worry about how it does it.
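
For instance, a minimal sketch of such a wrapper (hypothetical names, CMSIS-RTOS v1): callers set a blink pattern without ever seeing the thread or the queue behind it.

Code: [Select]
#include <stdint.h>
#include "cmsis_os.h"

extern osMessageQId led_cmd_q;             // owned and drained by the LED thread

void led_set_pattern(uint32_t pattern) {
    osMessagePut(led_cmd_q, pattern, 0);   // just posts the command and returns immediately
}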
 

