In the case of a stream going to several threads, have the input handler put the data in the proper queue. The threads can just slurp the data from their end of the queue.
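A minimal sketch of that fan-out, assuming a simple fixed-size queue per consumer thread. The `Queue` type and the routing rule are invented for illustration; a real RTOS would use its message-queue API with proper locking.

```c
#include <assert.h>

#define QUEUE_LEN 8

typedef struct {
    int items[QUEUE_LEN];
    int head, tail, count;
} Queue;

static int queue_push(Queue *q, int v)
{
    if (q->count == QUEUE_LEN) return -1;     /* full: caller decides policy */
    q->items[q->tail] = v;
    q->tail = (q->tail + 1) % QUEUE_LEN;
    q->count++;
    return 0;
}

static int queue_pop(Queue *q, int *out)
{
    if (q->count == 0) return -1;             /* empty */
    *out = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_LEN;
    q->count--;
    return 0;
}

/* Input handler: route each datum to the queue of the thread that wants it.
 * The modulo rule is a stand-in for whatever dispatch logic you really need. */
static void input_handler(Queue *queues, int n_threads, int datum)
{
    queue_push(&queues[datum % n_threads], datum);
}
```

Each consumer thread then pops only from its own queue, so the threads never touch each other's data.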
There's really no such thing as 'never'. What it really means is 'never' until you just have to.
This is exactly why you should avoid having parallel threads which together do one job. It is always going to end in a mess with potential deadlocks. Better to rewrite the threads into a single thread which uses one or more state machines.
Things you share inter-thread are technically global, unless you're passing pointers to local objects with finite lifetimes. But you'd need to be even more cautious with that.
The two things you need to look out for are:
- when both threads write to the shared data object;
- when one thread's write is interrupted and the second thread reads. You could theoretically read a half-updated word.
So writing/reading shared data must always be exclusive!
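A minimal sketch of that exclusive access, using POSIX threads for illustration; an RTOS would substitute its own mutex API. Because every read and every write takes the lock, a reader can never observe a half-updated value.

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t g_shared;          /* the shared word */

void shared_write(uint32_t v)
{
    pthread_mutex_lock(&g_lock);   /* writer holds the lock for the update */
    g_shared = v;
    pthread_mutex_unlock(&g_lock);
}

uint32_t shared_read(void)
{
    pthread_mutex_lock(&g_lock);   /* reader can never see a torn value */
    uint32_t v = g_shared;
    pthread_mutex_unlock(&g_lock);
    return v;
}
```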
i.e., resource contention is a PITA, no matter how it's managed.
That won't work when you have a ton of incoming data and some threads run only once for every 40 runs of the data handler. Right now I have a request-and-respond system. Thread2 runs every 100ms; at the end of its run it sends a request for new data, while Thread1, which supplies that data, needs to refresh itself every 5ms. If Thread1 just poured all the data in, every 100ms Thread2 would have a ton of duplicate data to sort through, and I'd need a worst-case buffer size to deal with it. In the request model I know only one set of data is coming.
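The request/respond idea might be sketched like this. All the names are invented, and in real code the flags would themselves need atomic access or a mutex; this just shows the handshake.

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool requested;   /* set by consumer, cleared by producer */
    bool ready;       /* set by producer, cleared by consumer */
    int  snapshot;    /* exactly one set of data, never a backlog */
} Mailbox;

/* Producer side: refreshes its own data every 5 ms, but only publishes
 * a snapshot when a request is pending. */
void producer_poll(Mailbox *m, int latest)
{
    if (m->requested) {
        m->snapshot  = latest;
        m->requested = false;
        m->ready     = true;
    }
}

/* Consumer side: runs every 100 ms, consumes one snapshot, then asks
 * for the next one before going back to sleep. */
bool consumer_poll(Mailbox *m, int *out)
{
    if (!m->ready) return false;
    *out = m->snapshot;
    m->ready     = false;
    m->requested = true;
    return true;
}
```

The consumer only ever sees one fresh snapshot per cycle, so no worst-case buffer is needed.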
That wouldn't be better in this case. I have data coming in over UART in a thread that is designed to be portable between this and other applications. I have SPI devices that are unique to this application, and they are self-contained state machines - but their data may be needed somewhere else. With an RTOS you'll need to transfer data between threads; it's not really avoidable.
Not sure if it translates to your application, but in ours, which seems similar, the data is received using interrupts, i.e. all UART receive events are interrupt-driven, and the ISRs just push the received bytes onto a FIFO buffer (a separate one for each UART).
When the thread that needs the data wakes up, it pops from the FIFO(s) until they're empty and does its parsing job.
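A sketch of that ISR-fed FIFO, assuming a single-producer/single-consumer ring buffer - one writer (the ISR) and one reader (the thread), each owning its own index, which is why it can get away without a lock. The size and names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

#define FIFO_SIZE 64u                    /* must be a power of two */

typedef struct {
    volatile uint32_t head;              /* written only by the ISR    */
    volatile uint32_t tail;              /* written only by the thread */
    uint8_t buf[FIFO_SIZE];
} ByteFifo;

/* Called from the UART receive ISR: just store the byte and return. */
static void fifo_put(ByteFifo *f, uint8_t b)
{
    uint32_t next = (f->head + 1u) & (FIFO_SIZE - 1u);
    if (next != f->tail) {               /* drop the byte if full */
        f->buf[f->head] = b;
        f->head = next;
    }
}

/* Called from the parsing thread: returns 1 if a byte was popped. */
static int fifo_get(ByteFifo *f, uint8_t *out)
{
    if (f->tail == f->head)
        return 0;                        /* empty */
    *out = f->buf[f->tail];
    f->tail = (f->tail + 1u) & (FIFO_SIZE - 1u);
    return 1;
}
```

The parsing thread loops on `fifo_get` until it returns 0, exactly as described above.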
The issue isn't using globals per se; it's using globals from different threads without mutual exclusion. I expect that the better compilers have features to provide thread-local storage (TLS) and mutex-protected variables, since even Java has them.
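For example, C11's `_Thread_local` gives each thread a private copy of a variable, so that copy needs no mutex at all - a minimal sketch using POSIX threads:

```c
#include <assert.h>
#include <pthread.h>

/* Each thread gets its own independent copy of this counter. */
static _Thread_local long tls_counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++)
        tls_counter++;          /* no lock needed: this copy is private */
    return (void *)tls_counter; /* hand the final count back to main    */
}
```

Two workers incrementing "the same" variable never interfere, and the main thread's copy stays untouched.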
Now what if 5 threads need that data?
Or you just have one thread process the received data, store the various processed forms the others need somewhere, and use semaphores to signal availability to them.
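That single-parser-plus-semaphores scheme might look like this sketch. POSIX semaphores are used for illustration, where CMSIS-RTOS would use `osSemaphoreRelease`/`osSemaphoreAcquire`; the processing step and all names are stand-ins.

```c
#include <assert.h>
#include <semaphore.h>

#define N_CONSUMERS 2

typedef struct {
    int   processed[N_CONSUMERS];  /* one processed form per consumer  */
    sem_t ready[N_CONSUMERS];      /* one "data available" signal each */
} Fanout;

/* Producer: process the raw data once, publish to every consumer's slot,
 * then signal each consumer that its copy is ready. */
void publish(Fanout *f, int raw)
{
    for (int i = 0; i < N_CONSUMERS; i++) {
        f->processed[i] = raw * (i + 1);   /* stand-in for real processing */
        sem_post(&f->ready[i]);
    }
}

/* Consumer i: block until its slot is fresh, then read it. In real code
 * the slot itself would also need protection against being overwritten
 * while the consumer reads it. */
int consume(Fanout *f, int i)
{
    sem_wait(&f->ready[i]);
    return f->processed[i];
}
```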
Pretty good comments everyone...
Slightly off-topic: One of the interesting things I have observed about programmers is that most go through the following stages:
- Program everything as a single task
- Discover parallel threads, use them too much and realise data synchronisation between threads can turn into a world of pain quickly
- Go back to programming using a single task as much as possible
The main issue with globals seems to be reads and writes: how in Thread1 can I be sure that I haven't interrupted Thread2 during a write? But isn't this exactly what mutexes are for!?
Basically, I think I adhered so strictly to "no globals!" that I made the program massively more complex, which was a mistake. It's serious work now to take a struct, pass a pointer to it in a mailbox, copy the data to the stack, cast it to a compatible struct, copy the data to its new shadow struct, then keep all of that in sync. A lot can go wrong with pointer passing and casting.
struct Uart {
    int baud;
    int stop;
    int bits;
};
void main(void)
{
    struct Uart uart1;
    struct Uart uart2;
    mainApplicationEntry(&uart1, &uart2);
}
From this point on the structure should only be accessed using the reference pointer and never directly.

typedef struct {
    mutex_t lock;
    uint32_t RefCount;
    uint32_t DataCount;
    uint8_t data[BUFF_SIZE];
} poolBuffer_t;
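One way such a reference-counted buffer might be borrowed and returned - a sketch with invented `buffer_acquire`/`buffer_release` names, using a pthread mutex where the CMSIS version would use its `mutex_t`:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

#define BUFF_SIZE 128

typedef struct {
    pthread_mutex_t lock;   /* mutex_t in the CMSIS version */
    uint32_t RefCount;
    uint32_t DataCount;
    uint8_t data[BUFF_SIZE];
} poolBuffer_t;

/* Every borrower bumps RefCount under the lock before using the data. */
void buffer_acquire(poolBuffer_t *b)
{
    pthread_mutex_lock(&b->lock);
    b->RefCount++;
    pthread_mutex_unlock(&b->lock);
}

/* Returns 1 when this was the last reference, i.e. the buffer can be
 * recycled back into the pool. */
int buffer_release(poolBuffer_t *b)
{
    pthread_mutex_lock(&b->lock);
    uint32_t left = --b->RefCount;
    pthread_mutex_unlock(&b->lock);
    return left == 0;
}
```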
Technically if you create a mailbox in CMSIS, you are still defining a global object. But I think that is more acceptable, because any read/write action on that object is done via OS routines which should be thread safe, reentrant, etc.
The problem with global integers etc. is that once you start writing to them from multiple places, or have some thread-priority 'issue', other threads end up working on old or corrupted data.
I see this so many times... people break up their code into modules which have no business being separated, and then contort their code into knots trying to get the modules to play nicely with each other.
KISS; if you need to break code into threads you'll know it, because it will simply not make sense to have it any other way. Until then... one process, with interrupt handlers which do as little as absolutely possible. :-)
I think you are right that I have taken "no globals!" too literally. What this so-called rule is trying to avoid is the case where you have one variable for every feature of every peripheral, etc.
For your example, I hadn't really thought about defining the vars in main(), but I'm not sure that changes much; you get all the same issues as with globals. But I need to read that example more carefully, so I may be missing something. One thing to note: right now none of my threads shut down. They're all looping all the time - they "sleep" for periods - but for the most part I have yet to need to shut a thread down.
I also need to read your next example about the video capture more carefully - and I am using RTX - but as with threads shutting down, this is a fairly deterministic system; I'm not creating and allocating pools because I'd have no reason to ever shut them down.