overhead.
The maximum amount of space is predefined: the maximum number of entries in the queue.
Overhead per message read and write. No OS does this for "free". If I have a logic thread that's reading mail/messages, each time I check there is overhead. Now figure I have a fast logic thread: there are 100 variables I need to know right now, from 3-30 producers, before I can act... That's a lot of message writing and reading. Consider that one of the threads might have a 1 ms cyclic rate.
Each producer puts everything it generates into the queue. Each consumer only takes any notice of what it needs to know, and ignores the rest.
Um... well, that's not really how any CMSIS RTOS works. There are no IDs for messages, just a FIFO queue. So you can send something to MessageQ_A or _B, but there is no method that I know of for "notice what it needs and ignores the rest". Just to be clear about that part quickly. Maybe I'm drastically missing something.
You seem to think globals simplify communication and control. They don't. They hide interactions and distribute interactions across multiple changing threads.
Yea... I'm not following. When I say globals I'm talking about a struct called, say, USB_Data; inside it are all sorts of values that come in over USB. The USB thread is the only one that can write to them. There might be 10 or 1000 variables in that struct. The USB thread is also responsible for keeping the info up to date. Any thread can then read from USB_Data.whatever. Writing that they "hide interactions" makes me think you aren't following what I'm talking about. In my example, USB_Thread doesn't have to know anything about anyone else, just how to do its own job. My logic thread doesn't even need to know everything about the USB struct, my USB_Thread doesn't need to know to send a specific subset of data, etc. By switching this to a mail/message scenario, I can't be sure that if I "sent" the entire struct I wouldn't be stacking it up in a queue. Again, I can't even be certain that any consumer threads are even running at X or Y time.
I'm not sure if we're on the same page, because it seems that you're just giving me cautionary-tale warnings about "globals". If we are on the same page, I'm definitely not seeing what you're talking about considering the subscriber model. Can you make a pseudo-code example of how you'd "wire" a consumer thread to a producer in your example?
Because say USB_thread and Led_Driver_thread both have 100 variables that are read and re-read every 100 ms. Before my logic thread / consumer can do anything, I need 4 states from USB and 2 states from the Led_Driver. But I don't really want any special code that would prevent my work from being re-used in another application where the logic thread might be entirely different. Dig?
Because in your example, I'm seeing a lot of this:
//in logic thread (CMSIS-RTOS v1 style)
usb_data_struct_type *usb_data;
osMessagePut(usb_request_q, USB_D1 | USB_D2 | USB_D3 | USB_D4, osWaitForever); //example limited to 32 bits; USB_Thread now has to put together mail for USB_Data
osEvent resp = osMailGet(USB_Data_mbox_for_Logic1, 100); //100 ms timeout
if (resp.status == osEventTimeout)
    handle_resp_timeout();
else
    usb_data = (usb_data_struct_type *)resp.value.p;
//now use usb_data... which I can't be sure is the most recent data, because I pulled it from a FIFO; newer, better data could be behind it, stacking up
Which isn't that bad... EXCEPT... USB_Thread is already running cyclically, which means my logic thread will see a latency of about 1-2x the cyclic rate of USB_Thread. Up to one cycle for the send message, depending on where USB_Thread is in its wait, and up to one more for it to pack the mail and my logic thread to get it. This is overhead ASIDE from the OS actually handling messages and mail.
Worse, in this pseudo-code example I have a mailbox that has to be dedicated directly to the logic thread, and while I can do this by passing a mailbox ID / pointer over to USB_Thread, it would be a lot easier to just hard-code the mailbox name. This seems fairly complicated. I'd be very, very wary of posting mail in a shared mailbox and just assuming the right thread will get it. Again, there don't seem to be mailbox IDs or a way to sort through mail. Each mailbox is a FIFO.
"dependency inversion principle".
I'm familiar with the idea. What I'm not seeing is how this applies to an RTOS in my example of a lot of data to consume from many different producers. In fact, so far with the patchboard analogy, all I'm seeing is an increase in immobility and bloat from having to name and give away specific mailboxes for each source the logic thread will get data from. Although, to be fair, as far as mobility goes, with globals I'd have to extern the struct for the logic threads so they knew what was available.
Any chance you could pseudo-code something?