Finally we need to see how to implement it in C with a bit of assembly (eventually) and we're done :-DD
Good idea for EP: a/synchronous calls between Producers and Consumers, which are blissfully unaware of one another and interact through the message queue only.
Producers are entities that generate events and send them to a message queue.
Consumers are entities that either subscribe to receive new events or poll periodically from the queue.
In my event-based sort command example in the other thread, I showed that things like records emitted from a datastore can be usefully treated as events as well ("datastore record (this) available"), but I am not actually certain if that is better considered an event source, or just a normal queue.
This is exactly the unknown region where real-world experimentation and research ought to yield useful results and information: exactly how we can express things like ...
I firmly believe that experimenting on scenarios like this ... with an imagined language, experimenting on the syntax while also roughly sketching out what kind of machine code it would compile to, is the only way to find the answers these kinds of discussion threads are looking for.
I do not believe such work has been done yet, but I have already used event-based libraries and the programming paradigm in several languages, so I know the underlying idea is sound: the question is how such concepts best map to human-written linear language forms (i.e. textual source code) –– and without involving abstractions that require a heavy runtime or more RAM than is available on small microcontrollers.
There are even conceptual things that are very important when looking at the low-level, machine-code implementation. One is, does a hardware/interrupt/signal event handler need to be re-entrant? I do not ever recall writing an interrupt or signal handler that would have been hindered by not being re-entrant. If one does not need to be re-entrant, it can run off a static context; essentially a tiny dedicated stack. In POSIX, one can even set up an alternate stack for (selected) signal handlers; see sigaltstack() (https://man7.org/linux/man-pages/man2/sigaltstack.2.html). In practice, if signals/interrupts with the same priority cannot interrupt each other, the maximum nesting in the alternate stack is defined by the number of unique priorities, and the exact maximum alternate stack size can be statically determined (sum of maximum stack sizes needed at each priority level).
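For reference, a minimal POSIX sketch of setting up an alternate stack for a signal handler; the stack size and the handler body are illustrative only:

#include <signal.h>
#include <unistd.h>

static void on_sigusr1(int signum)
{
    /* Runs on the dedicated alternate stack, not on the interrupted thread's stack. */
    (void)signum;
    const char msg[] = "handled on the alternate stack\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main(void)
{
    static char altstack[64 * 1024];   /* statically reserved; must be at least MINSIGSTKSZ */

    stack_t ss = { .ss_sp = altstack, .ss_size = sizeof altstack, .ss_flags = 0 };
    sigaltstack(&ss, NULL);            /* register the alternate stack */

    struct sigaction act = { 0 };
    act.sa_handler = on_sigusr1;
    act.sa_flags = SA_ONSTACK;         /* run this particular handler on the alternate stack */
    sigemptyset(&act.sa_mask);
    sigaction(SIGUSR1, &act, NULL);

    raise(SIGUSR1);
    return 0;
}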
Let's discuss here (if it's possible/reasonable) this interesting idea.
How should it be? :D
I think it's high time intelligent, thinking, engineering-minded people asked themselves why they are all obediently perpetuating the fashion of using "oriented" when discussing languages.
Like "language oriented language" or "computer oriented language" or "crash oriented language" or "reliability oriented languages" –– the term is just so overused!
Well sure, but it has the benefit of clearly stating what the focus was when designing a given language.
The problem may not be so much with the term "oriented" as with the fact that general-purpose languages can actually be too "oriented" one way or another, making the people using them design software that fits a single paradigm religiously. Which leads, for instance, to the disastrous, intricate piles of objects you get in most C++ designs.
But the same appears with any paradigm which is too opinionated. So likewise, writing everything as events in an "event-oriented" programming language would potentially lead to some spectacular piles of shit. IMHO. :popcorn:
Quote: "Which leads for instance to the disastrous intricate piles of objects you get in most C++ designs."
Isn't that what you get in Java? Or even Python? Piles of objects in their runtime/standard libraries, for being too "object" or "machine independent" oriented languages?
Anyone for node.js?
https://nodejs.org/en/
Node has an asynchronous, event-driven architecture: https://en.m.wikipedia.org/wiki/Event-driven_architecture
My t-shirt knows it ;)
"Use tools rationally"
Quote from https://en.wikipedia.org/wiki/Event-driven_programming, "Criticism" section (bold added by me):
The design of those programs which rely on event-action model has been criticised, and it has been suggested that the event-action model leads programmers to create error-prone, difficult to extend and excessively complex application code.[2] Table-driven state machines have been advocated as a viable alternative.[5] On the other hand, table-driven state machines themselves suffer from significant weaknesses including the state explosion phenomenon.[6] A solution for this is to use Petri nets.
State explosion can be limited using hierarchical state machines. They are somewhat more complex to implement, but far more flexible than the traditional flat, table-driven state machines.
They aren't that much more complex to implement, and if the FSM is complex then it is a good tradeoff. I've used the pattern of state behaviour = class, event = method, current state = singleton instance of the class, to very good effect.
It is easy to add logging with trivial performance impact in a production system, which was invaluable during commissioning and in (correctly) deflecting blame onto the other company's products. Great for avoiding lawyers :)
Ditto adding performance measurements.
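For what it's worth, a rough plain-C approximation of that pattern –– state behaviour as a per-state table of event functions, the current state as an index to the per-state singleton, logging in the dispatcher –– might look like this; all names are invented for illustration:

#include <stdio.h>

enum state_id { ST_IDLE, ST_RUNNING };
enum event    { EV_START, EV_STOP };

typedef int (*event_fn)(void);   /* each event handler returns the id of the next state */

static int idle_on_start(void) { return ST_RUNNING; }
static int idle_on_stop(void)  { return ST_IDLE; }
static int run_on_start(void)  { return ST_RUNNING; }
static int run_on_stop(void)   { return ST_IDLE; }

struct state { const char *name; event_fn on_start, on_stop; };

/* One singleton behaviour record per state. */
static const struct state states[] = {
    [ST_IDLE]    = { "idle",    idle_on_start, idle_on_stop },
    [ST_RUNNING] = { "running", run_on_start,  run_on_stop  },
};

static int current = ST_IDLE;

static void dispatch(enum event ev)
{
    int next = (ev == EV_START) ? states[current].on_start()
                                : states[current].on_stop();
    /* Cheap logging hook: one line per transition. */
    printf("%s --%s--> %s\n", states[current].name,
           ev == EV_START ? "start" : "stop", states[next].name);
    current = next;
}

int main(void)
{
    dispatch(EV_START);
    dispatch(EV_STOP);
    return 0;
}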
Implementing the state transitions in hierarchical state machines is a bit more involved than in simple state machines, because the HSM needs to support entry actions, the initial-state concept, and exit handlers, and do all that in the correct order: the states are first exited up to the common parent, and then the target state is entered while performing any entry actions and checking initial states.
Miro Samek's book "Practical UML Statecharts in C/C++, 2nd Ed Event-Driven Programming for Embedded Systems" has a good introduction and a reference implementation for all this.
I have created a set of C macros that are used to define and implement a simple DSL (domain specific language) for describing these HSMs in a compact way.
I have also included support for state timeout events, optional default timeouts for each state, which means that entering a state will start the state timer if the state has a default timeout time defined. Exiting the state will stop the state timer. The state timeout handler will be called automagically if the state timer expires.
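To make the exit-up/enter-down ordering described above concrete, here is a much-simplified sketch (not the actual macro implementation, and not Samek's reference code; the parent links, depth field and handler fields are assumptions for illustration):

#include <stddef.h>

struct hsm_state {
    const struct hsm_state *parent;
    void (*enter)(void);
    void (*exit)(void);
    int depth;                      /* distance from the root state */
};

static void hsm_transition(const struct hsm_state *src, const struct hsm_state *dst)
{
    const struct hsm_state *s = src, *d = dst;

    /* Bring both sides to the same depth, exiting source-side states on the way up. */
    while (s->depth > d->depth) { if (s->exit) s->exit(); s = s->parent; }
    while (d->depth > s->depth) { d = d->parent; }

    /* Walk up in lockstep until the lowest common ancestor is reached. */
    while (s != d) {
        if (s->exit) s->exit();
        s = s->parent;
        d = d->parent;
    }

    /* Enter target-side states from just below the common ancestor down to dst. */
    const struct hsm_state *path[16];   /* maximum nesting depth, illustrative */
    size_t n = 0;
    for (const struct hsm_state *p = dst; p != d; p = p->parent)
        path[n++] = p;
    while (n--)
        if (path[n]->enter) path[n]->enter();
}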
...
I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware.
...
I am leaning towards a different approach, one where there are one or more event queues –– abstract instances –– with the aforementioned dependencies and postponing defined in terms of which queue is "active" and which is "paused". The event queue itself is an abstraction: a first-class "object" in the language, without any limit as to what kind of events or what context those events have, used for managing event order, priority, and interdependence.
It might be very useful to not associate events themselves with any priority, with each queue being strictly a FIFO, and only define priority between event queues. (This would significantly simplify the event queue/dequeue operations.)
Then, the logical equivalent of a semaphore is an event queue: sem_wait equates to grabbing an object from the queue, and sem_post to putting an object into the queue. The equivalent of a mutex is a single-event queue, where mutex_lock equates to grabbing the token object from the queue and mutex_unlock to putting the token object back into the queue, with waiters blocking on the grab-token-object operation.
Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code.
(Just because the abstraction sounds nice, does not mean it is useful in practice. It must both be understandable to us human programmers, but also compile to effective and efficient machine code. Abstractions that fail one of them have no room in microcontroller and limited-resources embedded development!)
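As a concrete (if naive) sketch of that mutex-as-a-single-token-queue equivalence –– invented names, no real blocking, and no atomicity, which a real implementation would need as discussed later in the thread:

#include <stdbool.h>
#include <stdint.h>

struct event_queue {
    uint8_t buf[4];
    uint8_t head, tail, count;
};

static bool eq_put(struct event_queue *q, uint8_t ev)
{
    if (q->count == sizeof q->buf) return false;          /* queue full */
    q->buf[q->tail] = ev;
    q->tail = (uint8_t)((q->tail + 1) % sizeof q->buf);
    q->count++;
    return true;
}

static bool eq_get(struct event_queue *q, uint8_t *ev)
{
    if (q->count == 0) return false;                       /* queue empty */
    *ev = q->buf[q->head];
    q->head = (uint8_t)((q->head + 1) % sizeof q->buf);
    q->count--;
    return true;
}

/* "Mutex": a queue pre-loaded with a single token. */
static struct event_queue lock_queue = { .buf = { 0xA5 }, .tail = 1, .count = 1 };

static bool try_lock(void) { uint8_t tok; return eq_get(&lock_queue, &tok); }
static void unlock(void)   { (void)eq_put(&lock_queue, 0xA5); }

A counting semaphore is the same thing with more than one token pre-loaded into the queue.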
In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device. Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete. We could get more out of the same hardware if that wait time could be used for something more useful. To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes, and in the meantime you must not modify the buffer contents, as it is still in the process of being written out.
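A bare-bones sketch of that two-part write, with invented names and the hardware details abstracted away behind a completion flag that an interrupt or DMA-complete handler would clear:

#include <stdbool.h>
#include <stddef.h>

static volatile bool write_busy;

/* Assumed to be called from the DMA-complete or TX-empty interrupt handler. */
void write_complete_isr(void)
{
    write_busy = false;
}

bool write_start(const void *buf, size_t len)
{
    if (write_busy) return false;          /* previous write still in flight */
    write_busy = true;
    /* ...program the DMA channel / peripheral with buf and len here... */
    (void)buf; (void)len;
    return true;
}

bool write_done(void)
{
    return !write_busy;                    /* buffer may be reused only after this is true */
}

Usage is then: start the write, do other useful work, and only touch the buffer again once write_done() reports completion.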
Quote: "I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware." –– Similar to Erlang's pattern matching?
No, not really. I used 'suspect' because I don't have a clear picture of exactly what would work for me.
Quote: "Multiple levels of FIFO may be desirable. For example, in a telecoms server, there will be a single FIFO for all incoming events. There will also be a single FIFO associated with each call in progress, only containing events relevant to that call. Transferring an event from the "incoming" FIFO to one of the "call FIFOs" is done when convenient."
That is exactly the sort of pattern for which I think more than one "event queue" object would be useful.
No, but it is enticing to anyone developing a new language to think of an abstraction they love that turns out to be hellishly complicated to implement on currently available hardware, requiring lots of RAM and complex operations like stack unwinding.
Quote: "Then, the logical equivalent of a semaphore is an event queue: sem_wait equates to grabbing an object from the queue, and sem_post to putting an object into the queue. The equivalent of a mutex is a single-event queue, where mutex_lock equates to grabbing the token object from the queue and mutex_unlock to putting the token object back into the queue, with waiters blocking on the grab-token-object operation."
There's no fundamental reason why it would be any less efficient than other mechanisms that also take account of atomicity, volatility and parallelism.
Quote: "Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code."
Quote: "But I'll modify that to include concepts like "as simple as possible but no simpler" and "simple programs that obviously have no defects vs complex programs that have no obvious defects" and "visibility of deadlock/livelock properties"."
Very true.
Quote: "In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device. [...]"
Sure, but there is no actual technical or logical requirement for them. Even in C, one can implement a write as the two-part operation described above.
If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.
Quote: "Perhaps I am wrong, but I think [XPC860] is one of the best to experience event oriented programming :D"
The one downside I can see is that you'll need to find an old/NOS/used board, since the XPC860 (and similar ones like the NXP MPC860, https://www.nxp.com/products/processors-and-microcontrollers/legacy-mpu-mcus/powerquicc-processors/powerquicc-i-mpc8xx/mpc860-powerquicc-processor:MPC860) are no longer available at larger sellers like Mouser and Digikey. Otherwise, they definitely look well suited.
Quote: "If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops."
Using a co-operative scheduler and event-driven techniques will typically prevent this. Patterns like Observer and Subscribe/Publish are nice techniques to trigger processing only when something needs to be done. Combining these techniques with state machines and event queues, you can create systems that are responsive. Also, the devices will be quite deterministic in the sense that nothing is happening if there are no events generated, and the device is active only when processing events.
For time-critical systems one can use time-triggered scheduling, for example. This kind of scheduling will produce even more deterministic systems, if needed.
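A minimal sketch of the Subscribe/Publish pattern mentioned above –– fixed-size tables and invented names, purely to illustrate the shape of it:

#include <stddef.h>

#define MAX_SUBSCRIBERS 8

typedef void (*event_handler)(int event_id, const void *payload);

static event_handler subscribers[MAX_SUBSCRIBERS];

/* Register interest; returns 0 on success, -1 if the table is full. */
int subscribe(event_handler h)
{
    for (size_t i = 0; i < MAX_SUBSCRIBERS; i++) {
        if (!subscribers[i]) { subscribers[i] = h; return 0; }
    }
    return -1;
}

/* Publishing an event simply invokes every registered handler. */
void publish(int event_id, const void *payload)
{
    for (size_t i = 0; i < MAX_SUBSCRIBERS; i++) {
        if (subscribers[i]) subscribers[i](event_id, payload);
    }
}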
Continuing on the language-based filter assignment idea: for example, we could map hardware interrupts and such to event queues using
'Map' [ context-object '.' ] event-name 'to' [ queue-object '.' ] event-invocation(parameter-list) [ 'using' context-object ] ';'
One possible idea is to support explicit filters to a queue-object, for example
'Filter' queue-object [ ':' [ 'Allow' | 'Drop' | 'Postpone' ] 'if' filter-expression ]* ';'
that are implicitly evaluated whenever the queue-object is accessed for events (by the runtime).
Another is to support named 'blocks', like electrichickens use those multi-locks when shutting down systems they're working on, for example
'Block' queue-object 'with' block-identifier [ 'until' auto-unblock-condition ] ';'
'Unblock' queue-object 'with' block-identifier ';'
where the queue is blocked if it has one or more blocks placed on it.
The former way is more powerful, but also has more overhead (since the filter-expression is executed potentially for each queued event during each queue state check). The latter is simpler, potentially requiring just one bit per block per event queue.
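For illustration, the "one bit per block per event queue" bookkeeping could compile down to something as small as this (names invented):

#include <stdbool.h>
#include <stdint.h>

struct event_queue {
    uint32_t block_mask;            /* one bit per block identifier */
    /* ...queue storage omitted... */
};

static void block(struct event_queue *q, unsigned block_id)
{
    q->block_mask |= (uint32_t)1u << block_id;
}

static void unblock(struct event_queue *q, unsigned block_id)
{
    q->block_mask &= ~((uint32_t)1u << block_id);
}

static bool is_blocked(const struct event_queue *q)
{
    return q->block_mask != 0;      /* blocked while one or more blocks remain placed */
}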
Additional approaches are explicit filter expressions for forwarding events to other queues, and so on.
I just don't know enough to say what would work best for myself, yet.
Like I said, the exploration of this is something I'd love to contribute to, but is too much work for a single person to achieve.
I also need quite a bit of pushback whenever I don't see the downsides of my own suggestions or my own errors yet; that too happens too often. :P
Dealing with individual event priorities leads to all sorts of complex and chaotic situations (like priority inversion).
Dealing with multiple queues in the runtime (so that one can obtain events from a set of queues, ordered by queue priority for example, with optional filters applied per queue) seems a much more reasonable level of complexity to me.
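A sketch of that "priority between queues, FIFO within each queue" dispatch; the queue type and the eq_get() helper are assumed to exist, as in the earlier sketches:

#include <stdbool.h>
#include <stdint.h>

struct event_queue;                                       /* FIFO, as sketched earlier */
bool eq_get(struct event_queue *q, uint8_t *ev);          /* true if an event was taken */

/* Queues listed from highest to lowest priority. */
bool next_event(struct event_queue *const *queues, unsigned nqueues, uint8_t *ev)
{
    for (unsigned i = 0; i < nqueues; i++) {
        if (eq_get(queues[i], ev))
            return true;                                  /* FIFO order within each queue */
    }
    return false;                                         /* nothing pending anywhere */
}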
The trick is to consider the logical equivalents as having approximately the same level of abstraction and complexity. So, if you think of a way of implementing an event queue that requires the equivalents of mutexes and condition variables to implement, it is probably not suitable for real life implementation.
Indeed, in systems programming, I mostly use lockless techniques using atomic ops for these (GCC/Clang/ICC atomic built-ins (https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html) on x86-64 in particular), so I know it is/should be possible on most architectures.
On some, like AVR, you might need to disable interrupts for a few cycles (less than a dozen per critical section), but it should be doable.
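For example, a lockless single-producer/single-consumer event queue using those GCC/Clang __atomic built-ins might be sketched like this (power-of-two size, one writer and one reader; names invented):

#include <stdbool.h>
#include <stdint.h>

#define QSIZE 16u                        /* must be a power of two */

struct spsc_queue {
    uint8_t  buf[QSIZE];
    uint32_t head;                       /* written only by the consumer */
    uint32_t tail;                       /* written only by the producer */
};

static bool spsc_put(struct spsc_queue *q, uint8_t ev)      /* producer side */
{
    uint32_t tail = __atomic_load_n(&q->tail, __ATOMIC_RELAXED);
    uint32_t head = __atomic_load_n(&q->head, __ATOMIC_ACQUIRE);
    if (tail - head == QSIZE) return false;                  /* full */
    q->buf[tail % QSIZE] = ev;
    __atomic_store_n(&q->tail, tail + 1, __ATOMIC_RELEASE);  /* publish the slot */
    return true;
}

static bool spsc_get(struct spsc_queue *q, uint8_t *ev)      /* consumer side */
{
    uint32_t head = __atomic_load_n(&q->head, __ATOMIC_RELAXED);
    uint32_t tail = __atomic_load_n(&q->tail, __ATOMIC_ACQUIRE);
    if (head == tail) return false;                           /* empty */
    *ev = q->buf[head % QSIZE];
    __atomic_store_n(&q->head, head + 1, __ATOMIC_RELEASE);   /* free the slot */
    return true;
}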
Quote: "I have created a set of C macros that are used to define and implement a simple DSL (domain specific language) for describing these HSMs in a compact way."
Very interesting! Can you show some examples of them?
C macros always smell of possible language add-ons
(here, OMG, in my-c there is no cpp and #define is banned,
so I am really forced to add real language constructs.
A DSL for FSMs: sounds good!!!)
... <snip>
YHSM_DECLARE(yns_sm);
YHSM_STATE_DECLARE(yns_sm_top_state);
YHSM_STATE_DECLARE(yns_network_closed_state);
YHSM_STATE_DECLARE(yns_network_error_state);
YHSM_STATE_DECLARE(yns_network_fail_state);
YHSM_STATE_DECLARE(yns_network_opened_state);
YHSM_STATE_DECLARE(yns_network_reconnect_retry_check_state);
YHSM_STATE_DECLARE(yns_network_disconnected_state);
YHSM_STATE_DECLARE(yns_network_connected_state);
YHSM_STATE_DECLARE(yns_server_connected_state);
<snip> ...
#define YNS_SERVER_CONNECTED_STATE_TIMEOUT_ms 2000
static int yns_server_poll_retry_count;
YHSM_STATE_BEGIN(yns_server_connected_state, &yns_network_connected_state, YNS_SERVER_CONNECTED_STATE_TIMEOUT_ms);
YHSM_STATE_ENTER_ACTION(yns_server_connected_state, hsm)
{
yns_server_socket_open();
yns_led_enable();
// Notify the application observer that the connection to the server is established.
yns_notify(YNETWORK_EVENT_SERVER_CONNECTED);
yns_server_reconnect_retry_count =
yns_network_config.server_reconnect_retry_count;
yns_server_poll_retry_count = yns_network_config.server_poll_retry_count;
}
YHSM_STATE_INIT_ACTION(yns_server_connected_state, hsm)
{
YHSM_INIT(&yns_network_idle_state);
}
YHSM_STATE_TIMEOUT_ACTION(yns_server_connected_state, hsm)
{
YHSM_TRAN(&yns_server_reconnect_retry_check_state);
}
YHSM_STATE_EXIT_ACTION(yns_server_connected_state, hsm)
{
// Notify the application observer that the connection to the server is disconnected.
yns_notify(YNETWORK_EVENT_SERVER_DISCONNECTED);
yns_server_socket_close();
yns_led_disable();
}
YHSM_STATE_EVENT_HANDLER(yns_server_connected_state, hsm, ev)
{
if (ev->id == YNS_EVENT_SOCKET_ERROR || ev->id == YNS_EVENT_NETWORK_TIMEOUT)
{
YHSM_TRAN(&yns_server_reconnect_retry_check_state);
}
if (ev->id == YNS_EVENT_SOCKET_DO_CLOSE)
{
YHSM_TRAN(&yns_server_reconnect_delay_state);
}
YHSM_RAISE();
}
YHSM_STATE_END(yns_server_connected_state);
I do wonder if their (assumed!) lack of success is related to there not really being a language in which one could easily and effortlessly express the patterns this kind of hardware is well suited for?
Quote: "If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops."
It needn't be inefficient. All you need is one processor/core per event loop. Cores are cheap nowadays :)
Quote: "Using co-operative scheduler and event-driven techniques will typically prevent this. [...]"
RTOSs are merely a hack to multiplex several processes (i.e. event loops) onto a single execution engine. They are a useful hack when insufficient execution engines are available.
Quote: "For time-critical systems one can use time-triggered scheduling, for example. This kind of scheduling will produce even more deterministic systems, if needed."
There is absolutely nothing special about timeouts: they are merely an event equivalent to a message arriving, an input being available or an output completing. All such events should be treated identically at the language level and the runtime level.
Why 'language' as opposed to 'runtime' or 'library'?
It's easy to do event-driven programming in any language, so I don't see why you'd need a new one. What problem are you trying to solve?
Quote: "It needn't be inefficient. All you need is one processor/core per event loop. Cores are cheap nowadays :)"
You are so spoiled! I don't have the luxury of multiple cores :) I work in a single-core embedded environment, where the available amounts of Flash and RAM are typically very limited (cheap ARM Cortex-M3 devices), the device's energy consumption has to be minimized, and battery lifetime has to be maximized.
Quote: "RTOSs are merely a hack to multiplex several processes (i.e. event loops) onto a single execution engine. They are a useful hack when insufficient execution engines are available."
Luxury items again! :) RTOSes need to allocate RAM for each task. As I have only a very limited amount of RAM available, I prefer using / have to use a simple co-operative tasker/scheduler which requires only one global stack frame. In some special cases I may use a preemptive scheduler with two tasks: one task for the main application running a co-operative tasker, and the other task for the networking.
Quote: "There is absolutely nothing special about timeouts: they are merely an event equivalent to a message arriving, an input being available or an output completing. All such events should be treated identically at the language level and the runtime level."
By time-triggered scheduling I meant this: https://en.wikipedia.org/wiki/Time-triggered_architecture
Especially this one: "Use of TT systems was popularized by the publication of Patterns for Time-Triggered Embedded Systems (PTTES) in 2001[1] and the related introductory book Embedded C in 2002.[4]".
The book is freely available from here: https://www.safetty.net/publications/pttes
Here is a nice summary for Analysis Of Time Triggered Schedulers In Embedded System:
https://www.interscience.in/cgi/viewcontent.cgi?article=1014&context=ijcsi
That doesn't change my contention. All it means is that a process's mainloop is sitting idle until the tick/timeout event arrives.
Idle yes, but not necessarily running. The MCU may be held in a low-power sleep state consuming only 1 µA or less while waiting for the next timer tick to occur. This can be used for minimizing the device's energy consumption.
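The resulting main loop is roughly this shape; events_pending(), dispatch_one_event() and cpu_sleep_until_interrupt() are hypothetical hooks, not any specific vendor API:

#include <stdbool.h>

bool events_pending(void);
void dispatch_one_event(void);
void cpu_sleep_until_interrupt(void);    /* e.g. a WFI-style sleep on a Cortex-M */

void main_loop(void)
{
    for (;;) {
        while (events_pending())
            dispatch_one_event();        /* active only while there is work to do */
        /* A real implementation guards the race between the check and the sleep,
           e.g. by performing it with interrupts briefly masked. */
        cpu_sleep_until_interrupt();     /* otherwise drop into low-power sleep */
    }
}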
While you can do object oriented programming in C (and I did in the early-mid 80s), you can do it more easily and clearly in a (decent) object oriented language (C++ is excluded from that!).
Quote: "Classic imperative patterns are not good, this is probably why no hobbyist wants anything to do with those chips (which are also hard to find, and expensive)."
I do prefer free toolchains and relatively affordable development boards –– seeing as I can nowadays get very powerful Linux SBCs for ~ 40-60 € (Amlogic, Samsung, Rockchip SoC chips and chipsets having very good vanilla Linux kernel support, so one is not dependent on vendor forks) –– as a hobbyist myself.
Opensource is limited, and for example, you cannot have the same Ada experience with Gnat that you can have with GreenHills AdaMulti.
Here you need a serious job: AdaMULTI is a complete integrated development environment for embedded applications using Ada and C, with serious ICE support. Just the ICE's header costs 3000 euro; the full package costs 50000 euro.
What do you have with Opensource? A poor man's debugger? A gdb-stub? Umm?
Our my-c-ICE technology is not as great as GreenHills's, but it's several light years ahead of gdb!
And note: once again, it's not OpenSource!
So, my opinion here is clear: you need to find a job in avionics to enjoy the full C+Ada experience.
The same applies to Linux SBCs ... they are all the same, over and over, all the same story. Linux bugs, userland bugs ... nothing at all different from the same boring daily experience, just new toys.
The M683xx and MPC840 are great pieces of hardware like never seen before, and - once again - opensource has ZERO support for their hardware design, whereas the industry has some great stuff.
The MPC840 is used in AFDX switches, found everywhere from avionics to naval systems to high-speed railways.
The M683xx is used by Ford Racing for internal combustion engines.
Now, I'd love to find a job which exposes me to the Dyson Digital Motor technology.
I know their electric car is an epic business failure, but their technology, even on the software side, is great!
I admit I've been extremely interested in XMOS xCore ever since I first heard of it from tggzzz, but the single-vendor approach with non-open toolchain feels, well, "too risky" or something, as a hobbyist.
Quote: "The processors are not too expensive, sure, but I'd like to start with a development kit. The cheapest active (non-obsoleted) one I can get from Digikey here, XK-EVK-XU316, costs 266€ including VAT. That is quite a lot for me. [...]"
Umm, it's not expensive, so... if I were you, I'd give it a try.
I admit I've been extremely interested in XMOS xCore ever since I first heard of it from tggzzz, but the single-vendor approach with non-open toolchain feels, well, "too risky" or something. (I've been burned enough times by vendors already, you see, so maybe I'm paranoid.) And vendor toolchain support for Linux or BSDs tends to be second-tier, which further reduces the value of investment, even if for purely learning and experimentation.
Which nicely leads me to:
Quote: "Why 'language' as opposed to 'runtime' or 'library'?"
To do better.
Like I mentioned earlier in a reply to Kalvin, none of what I have suggested here leads to "new" machine code; everything stems from already existing patterns. The problem I'd like to solve is to express those patterns in a clearer, more efficient manner, and at the same time avoid the known pitfalls in existing low-level languages –– memory safety, and ease of static analysis.
No new abstractions, just easier ways for us humans to describe the patterns in a way compilers can statically check and generate efficient machine code for.
To me, the number of times I've had to argue for MPI_Isend()/MPI_Irecv() –– event-based I/O in MPI; the call initiates the transfer and provides an MPI_Request handle one can examine or wait on for completion or error –– against highly-paid "MPI Experts", indicates that whenever an imperative approach is possible, it will be used over an event-oriented one, because humans.
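For readers who haven't met them, this is roughly what those nonblocking MPI calls look like in use (the neighbour choice and payload are illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, sendbuf, recvbuf;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sendbuf = rank;
    int next = (rank + 1) % size, prev = (rank + size - 1) % size;

    /* Start the transfers; both calls return immediately with a request handle. */
    MPI_Isend(&sendbuf, 1, MPI_INT, next, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&recvbuf, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ...useful computation can overlap with the communication here... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);   /* completion (or error) point */
    printf("rank %d received %d from rank %d\n", rank, recvbuf, prev);

    MPI_Finalize();
    return 0;
}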
I myself do not have "libraries" for my event-oriented stuff, I just have patterns (with related unit test cases and examples) I adapt for each use case separately. I seriously dislike the idea of having one large library that provides such things, because it leads to framework ideation where you do things a certain way because it is already provided for you, instead of doing things the most efficient or sensible way. Many small libraries, on the other hand, easily lead to (inter-)dependency hell.
Do note I've consistently raised the idea of experimenting with how to express the various patterns, using an imagined language (but at the same time thinking hard about what kind of machine code the source should compile to). So, there is no specific single pattern I'm trying to recommend anyone to use, I'm pushing/recommending/discussing/musing about how to experimentally discover better-than-what-we-have-now ways of describing the patterns we already use, and build a new language based on that.
If you've ever taken a single course on programming language development, or any computer science courses related to programming languages really, this will absolutely look like climbing a tree ass first. Yet, I have practical reasons to believe it will work, and can/may/should lead to a programming language that is better suited to our current event-oriented needs on resource-constrained systems than what we have now.
(As to those practical reasons: I've mentioned that I've occasionally dabbled in optimizing work flows by spending significant time and effort beforehand to observe and analyse how humans perform the related tasks. This itself is a very well documented (including scientific papers, as well as practical approaches done in high-production development/factory environments) problem solving approach. My practical reasons are thus based on treating programming as a problem solving workflow. This has worked extremely well for myself (in software development in about a dozen programming languages), so I have practical reasons to expect that this approach, even if considered "weird" or "inappropriate" by true CS folks, will yield very useful results.)
The processors are not too expensive, sure, but I'd like to start with a development kit. The cheapest active (non-obsoleted) one I can get from Digikey here, XK-EVK-XU316, costs 266€ including VAT. That is quite a lot for me: remember, I'm a poor burned out husk of a man without steady income.
But anyway, it's more that I've sworn to myself to no longer give money to vendors who just use that to fuck me over later on. Burned before too many times, shame on me, you can't fool me
tggzzz said that the ecosystem works, so perhaps I should just give it a try, if I can find a cheap/used kit.. and I do like the background of the company.
But, I cannot download the XTC Tools (https://www.xmos.ai/software-tools/) without registering, and the 15.1.4 release notes say "Inclusion of the lib_xcore system library and headers marks a shift in emphasis towards programming the xcore in C, rather than XC", which, combined with the shift towards audio processing, makes me question the longevity of the platform.
you do realize even XMOS uses gdb
Quote: "As I point out to inexperienced softies, when building systems neither top-down nor bottom-up is sufficient. Both are necessary - and they should meet each other in the middle :)"
Quite! To me, it is analogous to the modular approach, where one constructs the solution from modular pieces, instead of trying to define the solution beforehand (top-down) or just throwing stuff together and seeing what comes out (bottom-up).
Quote: "(To see the industrial technology behind a "black-box voter and its redundant system" can be much worse, [...])"
I'm not actually disagreeing with you, just explaining the reasons for my current opinion/stance.
Quote: "considering the psychological cost of that training, it was a rather insanely bizarre experience which I'm not even allowed to talk about because of what I had to sign (worst negative)."
I do know how weird NDAs can get; that's why I included "(properly written)" when mentioning those.
Quote: "So even before I got home, I could quite emotionally and logically understand Cypher when, in the first Matrix movie, he betrayed his friends to re-enter the Matrix. Red Pill always has a price to pay."
Yup. Put more generally: there ain't no such thing as a free lunch (https://en.wikipedia.org/wiki/There_ain't_no_such_thing_as_a_free_lunch).
Quote: "you do realize even XMOS uses gdb"
Yup, because it must be cheap! gdb is not bad; it just doesn't give you the experience you have with high-end ICEs.
Different costs, different purposes, different experiences.
What am I missing?
We can mention Erlang too, although this one is a particular beast in itself. I like the concepts, but not so much how they have been translated. Certainly interesting to learn though.
At the moment I am trying to add support in my-c for transforming a function with multiple arguments into a sequence of single-argument functions; that means converting a function like f(a, b, c, ...) into one like f(a)(b)(c)...
I desperately need it to use the map method with a curried function, even though my-c is not a functional programming language like Erlang or Haskell.
Oh, from the docs (I am a complete nuuuub with JS), it seems that JavaScript can easily do it via a "function wrapper".
That's good!!! :o :o :o
So, I'll probably try to add a "function wrapper" mechanism to my-c(1); hopefully it fixes my problem.
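For what it's worth, in plain C (not my-c, whose syntax is still open) the f(a, b, c) to f(a)(b)(c) transformation boils down to carrying the already-supplied arguments in small context structs; all names here are invented for illustration:

#include <stdio.h>

static int f(int a, int b, int c) { return a * 100 + b * 10 + c; }

/* Partial applications as explicit context structs. */
struct f1 { int a; };
struct f2 { int a, b; };

static struct f1 f_curry (int a)               { return (struct f1){ a }; }
static struct f2 f_apply1(struct f1 p, int b)  { return (struct f2){ p.a, b }; }
static int       f_apply2(struct f2 p, int c)  { return f(p.a, p.b, c); }

int main(void)
{
    /* Equivalent of f(1)(2)(3) */
    int r = f_apply2(f_apply1(f_curry(1), 2), 3);
    printf("%d\n", r);   /* prints 123 */
    return 0;
}

A language front-end (or a JavaScript-style function wrapper) essentially automates the generation of these context structs and helper functions.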
Quote: "I remember having created a thread about message passing as a way of circumventing synchronization problems entirely. Of course, that was absolutely nothing new."
I do believe such a change in thinking suits the event-driven paradigm quite well, too.
It has been shown to scale much better and be more robust, but it's still used in niche applications only.
If you are processing a large amount of data, it may look less efficient. All this message passing looks pretty expensive at first sight. But it does require rethinking your data flows almost entirely.
Anyway, that talk made me realize one possible model for 'event handlers' is to model each handler as a process, with events and associated data passed as messages/events between them. Instead of global memory accessible to all event handlers, each event handler would have its own state/context object(s), plus possibly access to explicitly named read-only or 'slow-atomic' objects. (Do note that 'process' here is the concept; I'm not referring to OS processes.)
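A rough sketch of that model, with invented names: each "handler process" owns its private context and an inbox, and the dispatcher only ever hands a handler its own context plus one message, so there is no shared mutable global state between handlers:

#include <stdint.h>

struct message { int id; intptr_t data; };

struct handler {
    void  *context;                                        /* private state only */
    void (*handle)(void *context, const struct message *m);
    struct message inbox[8];                               /* per-handler mailbox */
    unsigned head, tail;
};

static void deliver(struct handler *h, const struct message *m)
{
    h->inbox[h->tail++ % 8] = *m;                          /* enqueue (overflow not checked here) */
}

static void run_one(struct handler *h)
{
    while (h->head != h->tail) {
        struct message m = h->inbox[h->head++ % 8];
        h->handle(h->context, &m);                         /* handler sees only its own context */
    }
}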
...
This is still quite vague in my mind, still forming, but I believe there might be something useful in here.
...
Synchronization and concurrent access in parallel computing are the bottleneck, and are notoriously difficult to get right, and even harder to prove correct.
I have written a small library with message queues and did some experiments with it. It turned out to make it very easy to get near 100% CPU use across all cores for multithreaded computation, compared to a more typical approach. I plan on using that more often.
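Not the library referred to above, but a minimal pthreads illustration of the same idea –– worker threads pulling jobs/messages from a shared queue so all cores stay busy while work remains:

#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define NJOBS    32

static int jobs[NJOBS];
static int next_job;                   /* index of the next unclaimed job */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    long id = (long)arg, done = 0;
    for (;;) {
        pthread_mutex_lock(&qlock);
        int i = next_job < NJOBS ? next_job++ : -1;   /* claim one job/message */
        pthread_mutex_unlock(&qlock);
        if (i < 0) break;
        jobs[i] *= 2;                                 /* stand-in for real work */
        done++;
    }
    printf("worker %ld processed %ld jobs\n", id, done);
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    for (int i = 0; i < NJOBS; i++) jobs[i] = i;
    for (long w = 0; w < NWORKERS; w++) pthread_create(&tid[w], NULL, worker, (void *)w);
    for (int w = 0; w < NWORKERS; w++) pthread_join(tid[w], NULL);
    return 0;
}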