EEVblog Electronics Community Forum

Products => Computers => Programming => Topic started by: DiTBho on January 02, 2023, 10:58:14 pm

Title: event-oriented programming language
Post by: DiTBho on January 02, 2023, 10:58:14 pm
Let's discuss here (if it's possible/reasonable) this interesting idea.

How should it be? :D
Title: Re: event-oriented programming language
Post by: DC1MC on January 02, 2023, 11:06:19 pm
IMHO, first we need a clear and consistent definition of an event and some taxonomy (types, classes, and properties of events).
Afterwards we need to see which programming problems the event paradigm applies to, and whether it is a general thing or a limited special case.
Finally we need to see how to implement it in C with a bit of assembly (eventually) and we're done  :-DD
Title: Re: event-oriented programming language
Post by: gf on January 02, 2023, 11:16:05 pm
Things that come into my mind are state machine, declarative paradigm, QML/QtQuick, GUI programming.
Title: Re: event-oriented programming language
Post by: DiTBho on January 03, 2023, 12:03:21 am
Finally we need to see how to implement it in C with a bit of assembly (eventually) and we're done  :-DD

Yup, great start, like it  :D

(if you find examples, please post here, and I will study them)
Title: Re: event-oriented programming language
Post by: DiTBho on January 03, 2023, 12:18:37 am
Good idea for EP: a/synchronous calls between Producers and Consumers, which are blissfully unaware of one another and interact only through the message queue.

Producers are entities that generate events and send them to a message queue.
Consumers are entities that either subscribe to receive new events or poll the queue periodically.

Title: Re: event-oriented programming language
Post by: Alex Eisenhut on January 03, 2023, 12:26:05 am
I've heard of object-oriented and event-driven, but not event-oriented.
Title: Re: event-oriented programming language
Post by: DiTBho on January 03, 2023, 12:37:00 am
Like this?  :D

Book: Practical UML Statecharts in C/C++: Event-Driven Programming for Embedded Systems
(it's already in my list)
Title: Re: event-oriented programming language
Post by: YurkshireLad on January 03, 2023, 01:26:16 am
Good idea for EP: a/synchronous calls between Producers and Consumers, which are blissfully unaware of one another and interact only through the message queue.

Producers are entities that generate events and send them to a message queue.
Consumers are entities that either subscribe to receive new events or poll the queue periodically.

Been there, done that, enjoyed it. 😁
Title: Re: event-oriented programming language
Post by: SiliconWizard on January 03, 2023, 02:02:40 am
Ever looked at Go? (Not that I like all of it, but just worth a mention.)
Title: Re: event-oriented programming language
Post by: Mechatrommer on January 03, 2023, 05:18:29 am
been using event oriented/driven for ages, it's VB6; had experience with Delphi (now from Embarcadero), built my own simple MFC-like class/structure.. currently still struggling with learning/implementing Qt.. read and exercise a book like Windows++ by Paul DiLascia and you'll get a better picture of how it works, esp. on a single CPU; the trick is the scheduler (if you have to build your own) and a lot of callbacks..
Title: Re: event-oriented programming language
Post by: Nominal Animal on January 03, 2023, 06:59:49 am
I work with two different kinds of events: explicitly queued, and asynchronous.

Interrupts and POSIX signal handlers are asynchronous events: their handlers can be called at any point in time.
Interrupts often do not carry any payload, but POSIX signal handlers can have one (of siginfo_t (https://man7.org/linux/man-pages/man2/sigaction.2.html) type, including an arbitrary user-defined pointer or corresponding integer value via sigqueue() (https://man7.org/linux/man-pages/man3/sigqueue.3.html)).

(Technically, asynchronous events can also be queued, in the sense that when e.g. a signal or interrupt is blocked, it is postponed until it is no longer blocked; and if a POSIX realtime signal arrives while the thread is already handling that signal, it won't be dropped but postponed until the signal handler returns.  I'm sure there are better terms for the two (queued vs. asynchronous); I just don't recall what they are as I write this.)

Queued events are what GUI toolkits like Qt, Gtk, etc. use, and what things like X11 (X Windows) are based on.  You have one or more queues, and a call that blocks until an event object is available from one of them, returning the first one (or the one with highest priority).  The object itself is polymorphic, or a tuple (type, object), so that new events can be added without recompiling the applications.

Imperative languages like C, C++, Python, Perl, Ruby, Rust, etc. implement the former using callbacks (or closures) and special requirements (see e.g. async-signal-safe functions in POSIX C (https://man7.org/linux/man-pages/man7/signal-safety.7.html): the C library interfaces that can be used in signal handlers), and the latter using explicit queue operations and a superloop.  (Javascript is also an imperative language used with event-based interfaces, but it does not have signal or interrupt support.  Its "superloop" is inherent in its runtime.)

In my event-based sort command example in the other thread, I showed that things like records emitted from a datastore can be usefully treated as events as well ("datastore record (this) available"), but I am not actually certain if that is better considered an event source, or just a normal queue.

This is exactly the unknown region where real-world experimentation and research ought to yield useful results and information: how can we express things like "let this datastore generate an event, emitting its contents one record at a time in order, until it is empty", in a way that requires minimal runtime support?  Imperative languages do this via polling, and that part is prone to bugs (especially when event priority is involved; see e.g. priority inversion (https://en.wikipedia.org/wiki/Priority_inversion)).

I firmly believe that experimenting on scenarios like this –– event-based sort utility that reads records into a min/max-heap and then emits them to standard output –– with an imagined language, experimenting on the syntax while also roughly sketching out what kind of machine code that would compile to, is the only way to find the answers this kind of discussion threads are looking for.  I do not believe such work has been done yet, but I have already used event-based libraries and programming paradigm in several languages, so that I know the underlying idea is sound: the question is, how such concepts best map to human-written linear language forms (i.e. textual source code) –– and without involving abstractions that require heavy runtime or more RAM than is available on small microcontrollers.

There are even conceptual things that are very important when looking at the low-level, machine-code implementation.  One is, does a hardware/interrupt/signal event handler need to be re-entrant?  I do not ever recall writing an interrupt or signal handler that would have been hindered by not being re-entrant.  If one does not need to be re-entrant, it can run off a static context; essentially a tiny dedicated stack.  In POSIX, one can even set up an alternate stack for (selected) signal handlers; see sigaltstack() (https://man7.org/linux/man-pages/man2/sigaltstack.2.html).  In practice, if signals/interrupts with the same priority cannot interrupt each other, the maximum nesting in the alternate stack is defined by the number of unique priorities, and the exact maximum alternate stack size can be statically determined (sum of maximum stack sizes needed at each priority level).
Title: Re: event-oriented programming language
Post by: tggzzz on January 03, 2023, 10:23:57 am
Good idea for EP: a/synchronous calls between Producers and Consumers, which are blissfully unaware of one another and interact only through the message queue.

Producers are entities that generate events and send them to a message queue.
Consumers are entities that either subscribe to receive new events or poll the queue periodically.

Welcome to Communicating Sequential Processes (CSP), Occam, Erlang, xC, and languages incorporating elements of CSP such as Rust and Go.

There's a lot of hard-won experience out there. Choose whether to stand on other people's shoulders or toes :)

In the language's whitepaper, define which concepts have been included and omitted, and why the included ones play well together. This excellent example (https://www.oracle.com/java/technologies/introduction-to-java.html) from an author I respected convinced me that the language was worth learning. I was right :)

Always remember "You know you've achieved perfection in design, not when you have nothing more to add, but when you have nothing more to take away." (Antoine de Saint-Exupéry)
Title: Re: event-oriented programming language
Post by: tggzzz on January 03, 2023, 10:43:37 am
In my event-based sort command example in the other thread, I showed that things like records emitted from a datastore can be usefully treated as events as well ("datastore record (this) available"), but I am not actually certain if that is better considered an event source, or just a normal queue.

That is exactly the kind of area where the half-sync half-async design pattern works superbly.

An incoming event is recorded in a queue, pulled from that queue by a worker thread, and the relevant action is processed to completion. If such processing would take "too long" (e.g. a database update), it can be forked off to a separate context for completion. If appropriate, a "completion event" can be placed in the queue for subsequent processing.


Quote
This is exactly the unknown region where real-world experimentation and research ought to yield useful results and information: how can we express things like ...

I firmly believe that experimenting on scenarios like this ... with an imagined language, experimenting on the syntax while also roughly sketching out what kind of machine code that would compile to, is the only way to find the answers this kind of discussion threads are looking for. 

Exactly. At this stage, use pseudo-code to express the concepts; ignore compilers, since they are a very well understood technology.

Quote
I do not believe such work has been done yet, but I have already used event-based libraries and programming paradigm in several languages, so that I know the underlying idea is sound: the question is, how such concepts best map to human-written linear language forms (i.e. textual source code) –– and without involving abstractions that require heavy runtime or more RAM than is available on small microcontrollers.

There are quite a few event based environments, usually involving a proprietary Domain Specific Language.

There are many realtime design patterns that encapsulate different aspects of event oriented programming.

Many specifications are written in terms of events, e.g. telecoms systems.

See what primitive concepts are in embedded RTOSes, ignoring the ones that are simple transliterations of C/UNIX/POSIX.

Quote
There are even conceptual things that are very important when looking at the low-level, machine-code implementation.  One is, does a hardware/interrupt/signal event handler need to be re-entrant?  I do not ever recall writing an interrupt or signal handler that would have been hindered by not being re-entrant.  If one does not need to be re-entrant, it can run off a static context; essentially a tiny dedicated stack.  In POSIX, one can even set up an alternate stack for (selected) signal handlers; see sigaltstack() (https://man7.org/linux/man-pages/man2/sigaltstack.2.html).  In practice, if signals/interrupts with the same priority cannot interrupt each other, the maximum nesting in the alternate stack is defined by the number of unique priorities, and the exact maximum alternate stack size can be statically determined (sum of maximum stack sizes needed at each priority level).

There are too many similarities between hardware interrupts, messages, exceptions, and events for them not to be treated identically. That implies they are all unified in a single language concept, and the runtime support treats them identically.

It is possible that different run-time support systems could be required for different environments, e.g. server side and MCU.
Title: Re: event-oriented programming language
Post by: Sherlock Holmes on January 04, 2023, 11:20:47 pm
Let's discuss here (if it's possible/reasonable) this interesting idea.

How should it be? :D

I think it's high time intelligent, thinking, engineering-minded people asked themselves why they are all obediently perpetuating the fashion of using "oriented" when discussing languages.

Like "language oriented language" or "computer oriented language" or "crash oriented language" or "reliability oriented languages": the term is just so overused!

Title: Re: event-oriented programming language
Post by: AndyBeez on January 04, 2023, 11:39:44 pm
Anyone for node.js?
https://nodejs.org/en/

Node uses an asynchronous event-driven architecture: https://en.m.wikipedia.org/wiki/Event-driven_architecture
Title: Re: event-oriented programming language
Post by: SiliconWizard on January 05, 2023, 12:00:27 am
Let's discuss here (if it's possible/reasonable) this interesting idea.

How should it be? :D

I think it's high time intelligent, thinking, engineering-minded people asked themselves why they are all obediently perpetuating the fashion of using "oriented" when discussing languages.

Like "language oriented language" or "computer oriented language" or "crash oriented language" or "reliability oriented languages": the term is just so overused!

Well sure, but it has the benefit of clearly stating what the focus was when designing the language.
The problem may not be so much with the term "oriented" as with the fact that general-purpose languages can actually be too "oriented" one way or another, making the people who use them design software that fits a single paradigm religiously. Which leads, for instance, to the disastrous intricate piles of objects you get in most C++ designs.
But the same happens with any paradigm that is too opinionated. So likewise, writing everything as events in an "event-oriented" programming language would potentially lead to some spectacular piles of shit. IMHO. :popcorn:
Title: Re: event-oriented programming language
Post by: Sherlock Holmes on January 05, 2023, 12:03:32 am
Let's discuss here (if it's possible/reasonable) this interesting idea.

How should it be? :D

I think it's high time intelligent, thinking, engineering-minded people asked themselves why they are all obediently perpetuating the fashion of using "oriented" when discussing languages.

Like "language oriented language" or "computer oriented language" or "crash oriented language" or "reliability oriented languages": the term is just so overused!

Well sure, but it has the benefit of clearly stating what the focus was when designing the language.
The problem may not be so much with the term "oriented" as with the fact that general-purpose languages can actually be too "oriented" one way or another, making the people who use them design software that fits a single paradigm religiously. Which leads, for instance, to the disastrous intricate piles of objects you get in most C++ designs.
But the same happens with any paradigm that is too opinionated. So likewise, writing everything as events in an "event-oriented" programming language would potentially lead to some spectacular piles of shit. IMHO. :popcorn:

You do have a point: "common business-oriented language". Seems I missed that one; it can all be traced back to COBOL!

Anyway, let's start drafting C++++; that'll get everything nice and clean again.



Title: Re: event-oriented programming language
Post by: Mechatrommer on January 05, 2023, 12:18:25 am
The problem may not be so much with the term "oriented" as with the fact that general-purpose languages can actually be too "oriented" one way or another, making the people who use them design software that fits a single paradigm religiously. Which leads, for instance, to the disastrous intricate piles of objects you get in most C++ designs.
isn't that what you get in Java? or even Python? piles of objects in its runtime/standard library? for being too "object" or "machine independent" oriented a language?
Title: Re: event-oriented programming language
Post by: DiTBho on January 05, 2023, 12:54:33 am
My t-shirt knows it  ;)
"Use tools rationally"
Title: Re: event-oriented programming language
Post by: DiTBho on January 05, 2023, 12:59:54 am
Anyone for node.js?
https://nodejs.org/en/

Node uses an asynchronous event-driven architecture: https://en.m.wikipedia.org/wiki/Event-driven_architecture

Yup, it's in my list because it offers several real life examples.
Title: Re: event-oriented programming language
Post by: SiliconWizard on January 05, 2023, 01:00:11 am
My t-shirt knows it  ;)
"Use tools rationally"

Good idea. ;D
Title: Re: event-oriented programming language
Post by: MK14 on January 05, 2023, 06:35:47 am
https://en.wikipedia.org/wiki/Event-driven_programming

Quote
Criticism
The design of those programs which rely on event-action model has been criticised, and it has been suggested that the event-action model leads programmers to create error-prone, difficult to extend and excessively complex application code.[2] Table-driven state machines have been advocated as a viable alternative.[5] On the other hand, table-driven state machines themselves suffer from significant weaknesses including the state explosion phenomenon.[6] A solution for this is to use Petri nets.
Bold added by me.
Title: Re: event-oriented programming language
Post by: Kalvin on January 05, 2023, 07:53:27 am
https://en.wikipedia.org/wiki/Event-driven_programming

Quote
Criticism
The design of those programs which rely on event-action model has been criticised, and it has been suggested that the event-action model leads programmers to create error-prone, difficult to extend and excessively complex application code.[2] Table-driven state machines have been advocated as a viable alternative.[5] On the other hand, table-driven state machines themselves suffer from significant weaknesses including the state explosion phenomenon.[6] A solution for this is to use Petri nets.
Bold added by me.

State explosion can be limited using hierarchical state machines. They are somewhat more complex to implement, but way more flexible than the traditional flat, table-driven state machines.
Title: Re: event-oriented programming language
Post by: tggzzz on January 05, 2023, 09:34:56 am
https://en.wikipedia.org/wiki/Event-driven_programming

Quote
Criticism
The design of those programs which rely on event-action model has been criticised, and it has been suggested that the event-action model leads programmers to create error-prone, difficult to extend and excessively complex application code.[2] Table-driven state machines have been advocated as a viable alternative.[5] On the other hand, table-driven state machines themselves suffer from significant weaknesses including the state explosion phenomenon.[6] A solution for this is to use Petri nets.
Bold added by me.

That statement is of very limited value.

There are several good and practical ways of implementing FSMs. Sometimes table-driven FSMs are a good solution (e.g. parsing sequences of characters to create the corresponding numbers), sometimes not.
Title: Re: event-oriented programming language
Post by: tggzzz on January 05, 2023, 10:20:36 am
A lot of reinventing the wheel would be avoided if people would read the literature.

I recommend
Title: Re: event-oriented programming language
Post by: tggzzz on January 05, 2023, 10:27:11 am
https://en.wikipedia.org/wiki/Event-driven_programming

Quote
Criticism
The design of those programs which rely on event-action model has been criticised, and it has been suggested that the event-action model leads programmers to create error-prone, difficult to extend and excessively complex application code.[2] Table-driven state machines have been advocated as a viable alternative.[5] On the other hand, table-driven state machines themselves suffer from significant weaknesses including the state explosion phenomenon.[6] A solution for this is to use Petri nets.
Bold added by me.

State explosion can be limited using hierarchical state machines. They are somewhat more complex to implement, but way more flexible, than the traditional flat, table-driven state machines.

They aren't that much more complex to implement, and if the FSM is complex then it is a good tradeoff. I've used the pattern state behaviour = class, event = method, current state = singleton instance of a class, to very good effect.

It is easy to add logging with trivial performance impact in a production system, which was invaluable during commissioning and in (correctly) deflecting blame onto the other company's products. Great for avoiding lawyers :)

Ditto adding performance measurements.
Title: Re: event-oriented programming language
Post by: Kalvin on January 05, 2023, 10:46:50 am
https://en.wikipedia.org/wiki/Event-driven_programming

Quote
Criticism
The design of those programs which rely on event-action model has been criticised, and it has been suggested that the event-action model leads programmers to create error-prone, difficult to extend and excessively complex application code.[2] Table-driven state machines have been advocated as a viable alternative.[5] On the other hand, table-driven state machines themselves suffer from significant weaknesses including the state explosion phenomenon.[6] A solution for this is to use Petri nets.
Bold added by me.

State explosion can be limited using hierarchical state machines. They are somewhat more complex to implement, but way more flexible, than the traditional flat, table-driven state machines.

They aren't that much more complex to implement, and if the FSM is complex then it is a good tradeoff. I've used the state behaviour=class, event=method, current state = singleton instance of class, to very good effect.

It is easy to add logging with trivial performance impact in a production system, which was invaluable during commissioning and in (correctly) deflecting blame onto the other company's products. Great for avoiding lawyers :)

Ditto adding performance measurements.

Implementing the state transitions in hierarchical state machines is a bit more involved compared to the simple state machines, because the HSM needs to be able to support entry actions, initial state concept, exit handlers, and do that in a correct order so that the states are first exited up to the common parent, and then entered to the target state while performing any enter actions and checking initial states. Miro Samek's book "Practical UML Statecharts in C/C++, 2nd Ed Event-Driven Programming for Embedded Systems" has a good introduction and a reference implementation for all this.

I have created a set of C macros that are used to define and implement a simple DSL (domain specific language) for describing these HSMs in a compact way.

I have also included support for state timeout events, with an optional default timeout for each state: entering a state will start the state timer if the state has a default timeout defined. Exiting the state will stop the state timer. The state timeout handler will be called automagically if the state timer expires.
Title: Re: event-oriented programming language
Post by: Nominal Animal on January 05, 2023, 11:41:04 am
From state machines we can easily slide into the next important design decision when considering an event-oriented language: (standard) library interfaces.

If we consider typical microcontroller applications, a lot of basic I/O is handled at least partially by peripheral subsystems, without constant supervision from the actual processor.  In particular, consider things like UART and SPI/QSPI transfers, especially slow block I/O to something like a microSD card (very cheap, very large storage capacity, easy to interface to via SPI/QSPI).  Let's examine such a write operation.

In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device.  On microcontrollers especially, much of that time is spent busy-waiting for the operation to complete.  We could get more out of the same hardware if that wait time could be used for something more useful.  To do so, a write must become a two-part operation: you start the write, and at some later time the write completes; in the meantime, you must not modify the buffer contents, as it is still being written out.

I've used this interface in MPI extensively in both Fortran and C (MPI_Isend (https://www.open-mpi.org/doc/v4.0/man3/MPI_Isend.3.php), MPI_Irecv (https://www.open-mpi.org/doc/v4.0/man3/MPI_Irecv.3.php)), and it has worked really, really well for me in various use cases.  Each ongoing I/O operation is associated with an MPI_Request object, which can be tested using MPI_Test* (https://www.open-mpi.org/doc/v4.0/man3/MPI_Test.3.php) functions and waited for completion using MPI_Wait* (https://www.open-mpi.org/doc/v4.0/man3/MPI_Wait.3.php) functions.

However, I've also had big arguments about this with "MPI experts", who claim such interfaces are "inherently unsafe", just because they do not understand the concept well enough to use it effectively.  So, conceptual clarity about how it works is absolutely crucial.

I consider such completion events a third type, perhaps 'pending': the event is known to occur some time in the future, but its payload (completion status, perhaps error) and exact time is unknown.

Instead of having to handle all possible orders of events, postponing events that have already been queued until after one or more such pending events have been received and handled can make the code much simpler.  (I personally use this all the time in MPI, by documenting the tags of pending communication events well, and carefully designing the order in which such events/messages are read.)
Thus, an important question is how this postponing is expressed in the language.

Those used to implementing event and state machines in imperative languages will immediately gravitate towards "you use a loop around the event queue, so just add statements to requeue the event if it can't be handled yet", ending up with an imperative-oriented event handling loop.
I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware.

This also relates to how the hardware-generated events, like interrupts, are mapped to handlers or event queues.
A language keyword or operator could be used to designate the hardware sources and the handler or event queue it is mapped to; this would also generate the necessary runtime code (interrupt handler assignment and trampoline or event-queueing), and allow things like "and include this object as context for the event".  (That way e.g. buttons could use the exact same event handling code, and just have a unique context object or event attribute per button.)

In Javascript and GUI toolkits like Qt and Gtk, events are essentially callbacks generated from basically any suitable object as the context.

I am leaning towards a different approach, one where there are one or more event queues, abstract instances, with the aforementioned dependencies and postponing defined in terms of which queue is "active" and which "paused".  The event queue itself is an abstraction; a first-class "object" in the language, without any limit as to what kind of events or which context those events have, used for the management of event order, priority, and interdependence.
It might be very useful to not associate events themselves with any priority, with each queue being strictly a FIFO, and only define priority between event queues. (This would significantly simplify the event queue/dequeue operations.)

Then, the logical equivalent of a semaphore is an event queue, with sem_wait equating to grab object from queue, and sem_post to putting an object to the queue; the equivalent of a mutex is a single-event queue where mutex_lock equates to grabbing the token object from the queue, and mutex_unlock to putting the token object back to the queue, with waiters blocking on the grab-token-object operation.
Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code.

(Just because the abstraction sounds nice, does not mean it is useful in practice.  It must both be understandable to us human programmers, but also compile to effective and efficient machine code.  Abstractions that fail one of them have no room in microcontroller and limited-resources embedded development!)
Title: Re: event-oriented programming language
Post by: tggzzz on January 05, 2023, 11:54:54 am
https://en.wikipedia.org/wiki/Event-driven_programming

Quote
Criticism
The design of those programs which rely on event-action model has been criticised, and it has been suggested that the event-action model leads programmers to create error-prone, difficult to extend and excessively complex application code.[2] Table-driven state machines have been advocated as a viable alternative.[5] On the other hand, table-driven state machines themselves suffer from significant weaknesses including the state explosion phenomenon.[6] A solution for this is to use Petri nets.
Bold added by me.

State explosion can be limited using hierarchical state machines. They are somewhat more complex to implement, but way more flexible, than the traditional flat, table-driven state machines.

They aren't that much more complex to implement, and if the FSM is complex then it is a good tradeoff. I've used the state behaviour=class, event=method, current state = singleton instance of class, to very good effect.

It is easy to add logging with trivial performance impact in a production system, which was invaluable during commissioning and in (correctly) deflecting blame onto the other company's products. Great for avoiding lawyers :)

Ditto adding performance measurements.

Implementing the state transitions in hierarchical state machines is a bit more involved compared to the simple state machines, because the HSM needs to be able to support entry actions, initial state concept, exit handlers, and do that in a correct order so that the states are first exited up to the common parent, and then entered to the target state while performing any enter actions and checking initial states.

That's only beneficial if you are attempting to implement one type of FSM specification: a Harel State Chart (i.e. the UML state machine diagram). You don't need it if you are implementing the conceptually simpler FSM patterns where an event only invokes an action that depends on the current state. That's equivalent to table-driven FSMs and if/then/else/case patterns.

I haven't yet needed the full Harel/UML form, although it can have benefits in some circumstances.

Quote
Miro Samek's book "Practical UML Statecharts in C/C++, 2nd Ed Event-Driven Programming for Embedded Systems" has a good introduction and a reference implementation for all this.

Great minds think alike, although I prefer the GoF Design Patterns book for its brevity, and for being language agnostic.

Quote
I have created a set of C macros that are used to define and implement a simple DSL (domain specific language) for describing these HSMs in a compact way.

I hate macros (they were beneficial in the 1970s), since they are a form of Domain Specific Language that cripples IDEs and other tooling, and requires special training in non-transferable skills.

Quote
I have also included support for state timeout events, optional default timeouts for each state, which means that entering a state will start the state timer if the state has a default timeout time defined. Exiting the state will stop the state timer. The state timeout handler will be called automagically if the state timer expires.

Yup.

Other additions are easy and possible, especially keeping a concise history of the state/event trajectory, which is useful when understanding "strange behaviour".
Title: Re: event-oriented programming language
Post by: tggzzz on January 05, 2023, 12:12:06 pm
...
I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware.

Similar to Erlang's pattern matching?

Quote
...
I am leaning towards a different approach, one where there are one or more event queues, abstract instances, with the aforementioned dependencies and postponing defined in terms of which queue is "active" and which "paused".  The event queue itself is an abstraction; a first-level "object" in the language, without any limit as to what kind of events or which context those events have, used for the management of event order, priority, and interdependence.
It might be very useful to not associate events themselves with any priority, with each queue being strictly a FIFO, and only define priority between event queues. (This would significantly simplify the event queue/dequeue operations.)

Good choices :)

Anytime priority is introduced for normal operations, sooner or later people will want to fiddle with priorities to avoid rare emergent problems. Such fiddling might avoid that problem materialising, but will introduce others where they didn't exist before.

Design principle:

Multiple levels of FIFO may be desirable. For example, in a telecoms server, there will be a single FIFO for all incoming events. There will also be a single FIFO associated with each call in progress,  only containing events relevant to that call. Transferring an event from the "incoming" FIFO to one of the "call FIFOs" is done when convenient.
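That arrangement can be sketched in a few lines of C (all names invented here, not from any particular runtime): each queue is strictly first-in first-out, priority exists only *between* queues, and a paused queue is simply skipped by the dispatcher.

```c
#include <assert.h>
#include <stddef.h>

#define QUEUE_CAP 8
#define N_QUEUES  3   /* index 0 = highest priority */

struct queue {
    int    events[QUEUE_CAP];
    size_t head, count;
    int    paused;    /* a paused queue is skipped by the dispatcher */
};

static struct queue queues[N_QUEUES];

/* Append an event; strictly FIFO within a queue. Returns -1 if full. */
int queue_put(struct queue *q, int ev)
{
    if (q->count == QUEUE_CAP) return -1;
    q->events[(q->head + q->count) % QUEUE_CAP] = ev;
    q->count++;
    return 0;
}

/* Dequeue from the highest-priority non-empty, non-paused queue.
   Returns 1 and stores the event, or 0 if nothing is pending. */
int next_event(int *ev)
{
    for (size_t i = 0; i < N_QUEUES; i++) {
        struct queue *q = &queues[i];
        if (q->paused || q->count == 0) continue;
        *ev = q->events[q->head];
        q->head = (q->head + 1) % QUEUE_CAP;
        q->count--;
        return 1;
    }
    return 0;
}
```

Note that no per-event priority field exists anywhere; ordering decisions live entirely in the dispatcher's scan order.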

Quote
Then, the logical equivalent of a semaphore is an event queue, with sem_wait equating to grab object from queue, and sem_post to putting an object to the queue; the equivalent of a mutex is a single-event queue where mutex_lock equates to grabbing the token object from the queue, and mutex_unlock to putting the token object back to the queue, with waiters blocking on the grab-token-object operation.
Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code.

There's no fundamental reason why it would be any less efficient than other mechanisms that also take account of atomicity, volatility, and parallelism.

Quote
(Just because the abstraction sounds nice, does not mean it is useful in practice.  It must both be understandable to us human programmers, but also compile to effective and efficient machine code.  Abstractions that fail one of them have no room in microcontroller and limited-resources embedded development!)

Yup.

But I'll modify that to include concepts like "as simple as possible but no simpler" and "simple programs that obviously have no defects vs complex programs that have no obvious defects" and "visibility of deadlock/livelock properties".
Title: Re: event-oriented programming language
Post by: Kalvin on January 05, 2023, 12:16:38 pm
In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device.  Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete.  We could get more out of the same hardware if that wait time could be used for something more useful.  To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes; in the meantime, you must not modify the buffer contents, as it is still in the process of being written out.

If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.

Using a co-operative scheduler and event-driven techniques will typically prevent this. Patterns like Observer and Publish/Subscribe are nice techniques to trigger processing only when something needs to be done. Combining these with state machines and event queues, you can create responsive systems. The devices will also be quite deterministic, in the sense that nothing happens if no events are generated, and the device is active only when processing events.
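For illustration, a minimal Observer/Publish-Subscribe sketch in C (names invented here, not from any particular framework): consumers register a callback, and the producer publishes without knowing who, if anyone, is listening.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_OBSERVERS 4

typedef void (*observer_fn)(int ev, void *ctx);

static struct { observer_fn fn; void *ctx; } observers[MAX_OBSERVERS];
static int n_observers;

/* Register a callback; returns -1 if the table is full. */
int subscribe(observer_fn fn, void *ctx)
{
    if (n_observers == MAX_OBSERVERS) return -1;
    observers[n_observers].fn  = fn;
    observers[n_observers].ctx = ctx;
    n_observers++;
    return 0;
}

/* Notify every registered observer; the producer knows none of them. */
void publish(int ev)
{
    for (int i = 0; i < n_observers; i++)
        observers[i].fn(ev, observers[i].ctx);
}

/* A sample observer used below. */
static int last_seen;
static void record(int ev, void *ctx) { (void)ctx; last_seen = ev; }
```

No processing happens until `publish` is called, which is exactly the "active only when processing events" property described above.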

For time-critical systems one can use time-triggered scheduling, for example. This kind of scheduling will produce even more deterministic systems, if needed.
Title: Re: event-oriented programming language
Post by: Nominal Animal on January 05, 2023, 01:11:52 pm
I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware.
Similar to Erlang's pattern matching?
No, not really.  I used 'suspect', because I don't have a clear picture of exactly what would work for me.

For example, we could map hardware interrupts and such to event queues using
    'Map' [ context-object '.' ] event-name 'to' [ queue-object '.' ] event-invocation(parameter-list) [ 'using' context-object ] ';'

One possible idea is to support explicit filters to a queue-object, for example
    'Filter' queue-object [ ':' [ 'Allow' | 'Drop' | 'Postpone' ] 'if' filter-expression ]* ';'
that are implicitly evaluated whenever the queue-object is accessed for events (by the runtime).

Another is to support named 'blocks', like electrichickens use those multi-locks when shutting down systems they're working on, for example
    'Block' queue-object 'with' block-identifier [ 'until' auto-unblock-condition ] ';'
    'Unblock' queue-object 'with' block-identifier ';'
where the queue is blocked if it has one or more blocks placed on it.

The former way is more powerful, but also has more overhead (since the filter-expression is executed potentially for each queued event during each queue state check).  The latter is simpler, potentially requiring just one bit per block per event queue.
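The one-bit-per-block idea can be sketched in C like this (all names invented for illustration): each block identifier maps to one bit in a per-queue mask, and the queue counts as blocked while any bit is set.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t block_mask;   /* up to 32 distinct block identifiers */

struct evqueue {
    block_mask blocks;
    /* ... event storage omitted ... */
};

/* Place or remove a named block; the queue is blocked while any remain. */
void block(struct evqueue *q, unsigned id)   { q->blocks |=  (block_mask)1 << id; }
void unblock(struct evqueue *q, unsigned id) { q->blocks &= ~((block_mask)1 << id); }
int  is_blocked(const struct evqueue *q)     { return q->blocks != 0; }
```

The check is a single compare against zero, so the dispatcher pays essentially nothing for queues that carry no blocks.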

Additional approaches are explicit filter expressions for forwarding events to other queues, and so on.
I just don't know enough to say what would work best for myself, yet.
Like I said, the exploration of this is something I'd love to contribute to, but is too much work for a single person to achieve.
I also need quite a bit of pushback whenever I don't see the downsides of my own suggestions or my own errors yet; that too happens too often.  :P

Multiple levels of FIFO may be desirable. For example, in a telecoms server, there will be a single FIFO for all incoming events. There will also be a single FIFO associated with each call in progress,  only containing events relevant to that call. Transferring an event from the "incoming" FIFO to one of the "call FIFOs" is done when convenient.
That is exactly the sort of pattern for which I think more than one "event queue" object would be useful.

Dealing with individual event priorities leads to all sorts of complex and chaotic situations (like priority inversion).
Dealing with multiple queues in the runtime (so that one can obtain events from a set of queues, ordered by queue priority for example, with optional filters applied per queue) seems a much more reasonable level of complexity to me.

Quote
Then, the logical equivalent of a semaphore is an event queue, with sem_wait equating to grab object from queue, and sem_post to putting an object to the queue; the equivalent of a mutex is a single-event queue where mutex_lock equates to grabbing the token object from the queue, and mutex_unlock to putting the token object back to the queue, with waiters blocking on the grab-token-object operation.
Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code.
There's no fundamental reason why it would be any less efficient than other mechanisms that also take account of atomicity, volatility, and parallelism.
No, but it is enticing to anyone developing a new language to think of an abstraction they love, that turns out to be hellishly complicated to implement on currently available hardware, requiring lots of RAM and complex operations like stack unwinding.

The trick is to consider the logical equivalents as having approximately the same level of abstraction and complexity.  So, if you think of a way of implementing an event queue that requires the equivalents of mutexes and condition variables to implement, it is probably not suitable for real life implementation.
Indeed, in systems programming, I mostly use lockless techniques using atomic ops for these (GCC/Clang/ICC atomic built-ins (https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html) on x86-64 in particular), so I know it is/should be possible on most architectures.
On some, like AVR, you might need to disable interrupts for a few cycles (less than a dozen per critical section), but it should be doable.
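For illustration, a single-producer single-consumer event ring using those atomic built-ins might look roughly like this. This is a sketch, not a hardened implementation: it assumes one ISR-side producer and one main-loop consumer, a power-of-two capacity, and on targets like AVR the atomics would instead become short interrupt-disabled sections.

```c
#include <assert.h>
#include <stdint.h>

#define RING_CAP 16u   /* must be a power of two */

struct ring {
    int      slots[RING_CAP];
    uint32_t head;     /* advanced only by the consumer */
    uint32_t tail;     /* advanced only by the producer */
};

/* Producer side: returns -1 if the ring is full. */
int ring_put(struct ring *r, int ev)
{
    uint32_t tail = __atomic_load_n(&r->tail, __ATOMIC_RELAXED);
    uint32_t head = __atomic_load_n(&r->head, __ATOMIC_ACQUIRE);
    if (tail - head == RING_CAP) return -1;
    r->slots[tail % RING_CAP] = ev;
    __atomic_store_n(&r->tail, tail + 1, __ATOMIC_RELEASE);  /* publish */
    return 0;
}

/* Consumer side: returns -1 if the ring is empty. */
int ring_get(struct ring *r, int *ev)
{
    uint32_t head = __atomic_load_n(&r->head, __ATOMIC_RELAXED);
    uint32_t tail = __atomic_load_n(&r->tail, __ATOMIC_ACQUIRE);
    if (head == tail) return -1;
    *ev = r->slots[head % RING_CAP];
    __atomic_store_n(&r->head, head + 1, __ATOMIC_RELEASE);  /* release slot */
    return 0;
}
```

Because each index is written by exactly one side, no compare-and-swap is needed; acquire/release pairs are enough to make the slot contents visible before the index that publishes them.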

But I'll modify that to include concepts like "as simple as possible but no simpler" and "simple programs that obviously have no defects vs complex programs that have no obvious defects" and "visibility of deadlock/livelock properties".
Very true.

The reason I don't mind having "lots" of reserved keywords is that explicit expressions which ease static analysis of such things (like which blocks may be placed on which event queues) are more desirable and important than the inconvenience of having to rename identifiers in user code to avoid conflicts.

Other features, like "arrays" (memory ranges) instead of pointers (memory points/singular addresses), if constructed so that static analysis can verify that all accesses are within the ranges, can fix the fundamental memory-safety issues we have with most C code right now.  But these, too, rely on ensuring static and compile-time analysis is well supported by the language features and definitions.

In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device.  Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete.  We could get more out of the same hardware if that wait time could be used for something more useful.  To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes; in the meantime, you must not modify the buffer contents, as it is still in the process of being written out.

If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.
Sure, but there is no actual technical or logical requirement for them.  Even in C, one can implement a write as
    int async_write(int fd, const void *buf, size_t len, void *ctx,
                    int (*completed)(int fd, const void *buf, size_t len, void *ctx, int status),
                    int (*failure)(int fd, const void *buf, size_t len, void *ctx, int status));
where the call returns immediately, but the buf is read sometime afterwards, and must stay unmodified, until one of the two callbacks is called.
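To illustrate the contract (not a real driver: this toy stub "completes" the write immediately, whereas a real implementation would start a DMA transfer, return at once, and fire the callback later from an ISR), the caller's obligation is the same either way: treat buf as borrowed until the completed or failure callback has run.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative only: tracks whether the "driver" currently owns the buffer. */
static int buffer_in_flight;

/* Sample completion callback releasing the buffer back to the caller. */
static int on_completed(int fd, const void *buf, size_t len, void *ctx, int status)
{
    (void)fd; (void)buf; (void)len; (void)ctx; (void)status;
    buffer_in_flight = 0;   /* buf may be modified/reused from now on */
    return 0;
}

int async_write(int fd, const void *buf, size_t len, void *ctx,
                int (*completed)(int fd, const void *buf, size_t len, void *ctx, int status),
                int (*failure)(int fd, const void *buf, size_t len, void *ctx, int status))
{
    (void)failure;
    buffer_in_flight = 1;   /* buf is now owned by the "driver" */
    /* Real code: start DMA here and return; the callback fires later. */
    return completed(fd, buf, len, ctx, 0);
}
```

The key point is that ownership of `buf` transfers on the call and only transfers back inside a callback; nothing in between may touch it.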

This is the difference between one's write operation being event-oriented or imperative –– although others use synchronous vs. asynchronous, and other terms...  (Which is why being hung up on specific terms, like 'event-oriented' vs. 'event-based', is simply utter bullshit: human languages are vague, so as long as we agree on what we mean by each term, the terms themselves don't matter; only the concepts the terms represent matter.  And as long as we convey the concepts to each other, all is good.)

Let me reiterate: I am personally not proposing anything new at the machine code level.  Everything I've described has already been done in various languages, and quite often in C.

What I am trying to achieve by describing how to discover what a true low-level event-based microcontroller language could be, is to discuss how to find a better way (than current C/C++) to express these patterns –– and hopefully avoid the deficiencies C has (like memory safety, difficulty of static analysis, no standard function/variable attribute declaration mechanism, and so on), arriving at a better programming language for microcontrollers and similar stuff, where the program or firmware is built on top of the concept of events.
Title: Re: event-oriented programming language
Post by: DiTBho on January 05, 2023, 01:17:53 pm
I have created a set of C macros that are used to define and implement a simple DSL (domain specific language) for describing these HSMs in a compact way.

very interesting! Can you show some examples of them?
C macros always smell of possible language add-ons


(here, OMG, in my-c there is no cpp and #define is banned,
so I am really forced to add real language constructs

a DSL for FSMs: sounds good!!!)
Title: Re: event-oriented programming language
Post by: DiTBho on January 05, 2023, 01:19:01 pm
I have also included support for state timeout events, optional default timeouts for each state, which means that entering a state will start the state timer if the state has a default timeout time defined. Exiting the state will stop the state timer. The state timeout handler will be called automagically if the state timer expires.

WOW!!! This sounds awesome for my XPC860 board: a PowerPC 32-bit core, no different from a PPC603 with 32-bit general-purpose registers (GPRs), but (great news!!!) with a Quad Integrated Communications Controller.

It's called "PowerQUICC", and it's a very versatile one-chip integrated microprocessor and peripheral combination, designed for a variety of controller applications but profiled as a networking/communications-oriented microprocessor.

In short, it's a classic 90s PowerPC with 32 GPRs (general-purpose registers, 32-bit), augmented with a RISC communications processor (the CPM), which makes it a lot of fun because it's stuffed with modules :o :o :o

The CPM is a weird dedicated RISC-ish core, enhanced with SPI and I2C channels and a real-time clock, support for continuous-mode transmission and reception on all 16 serial DMA channels, up to 8 Kbytes of dual-port RAM buffer, up to 2 (or even 4!!!) Fast Ethernet controllers fully compliant with the IEEE 802.3u standard (except when you want to use the UTOPIA module in ATM mode, which I will frankly ignore), and other stuff like HDLC/SDLC channels.

The memory controller has also been enhanced, enabling the MPC860 to support *any* type of memory, including high-performance memories and new types of DRAMs.

All of these pieces of hardware have timeouts and queues, and they produce and consume events.

As my friend Nominal Animal said above, you'd better experiment to find out what fits your needs and tastes best. Perhaps I am wrong, but I think this chip is one of the best to experience event oriented programming :D
Title: Re: event-oriented programming language
Post by: Sherlock Holmes on January 05, 2023, 02:17:05 pm
There are some interesting points being made here, something that I find extremely interesting is the fact that there are at least two models of computability. One being a Turing machine and the other being Lambda Calculus.

These are as different as chalk and cheese, yet have been shown to be logically equivalent in that they can each describe computation: there's no computable problem that one can solve that the other cannot.

Lambda calculus, however, does not involve state, loops, or mutability; it offers powerful benefits (as seen in functional languages) but seems ill-suited to MCU applications.
Title: Re: event-oriented programming language
Post by: Nominal Animal on January 05, 2023, 03:33:29 pm
Perhaps I am wrong, but I think [XPC860] is one of the best to experience event oriented programming :D
The one downside I can see is that you'll need to find an old/NOS/used board, since XPC860 (and similar ones like NXP MPC860 (https://www.nxp.com/products/processors-and-microcontrollers/legacy-mpu-mcus/powerquicc-processors/powerquicc-i-mpc8xx/mpc860-powerquicc-processor:MPC860)) are no longer available at larger sellers like Mouser and Digikey.  Otherwise, they definitely look well suited.

(I do wonder, if their (assumed!) lack of success is related to there not really being a language where one could easily and effortlessly express the patterns this kind of hardware is well suited for?  After all, human history is full of inventions where the technically lesser implementation has won due to human-related reasons.  Popularity does not correlate strongly with quality or price, even though some humans fervently believe so.)

To get a pretty good conceptual grip on the benefits and downsides, I believe client-side HTML+Javascript –– so zero tooling needed, only a plain text editor and a relatively recent browser, on any OS or architecture –– is a viable choice.  Bad choices, like a long calculation done directly within an event handler, exhibit the same problems as they tend to do on hardware, too: the UI events are queued, so nothing seems to work, until everything gets registered at once; current browsers will even interrupt such code and ask the user if they want to stop the "hung script"!
If you use WebSockets or HTTP/HTTPS queries, they're fully asynchronous (so that the call only initiates the I/O or request, with registered callbacks called when the I/O or request completes).
Thus, to make effective client-side HTML+Javascript stuff, you must understand and apply the paradigm/approach: translating a C/Python/VB application to JS just will not work in a browser (because the browser environment is inherently event-oriented).

There are two use cases where client-side HTML+Javascript is particularly useful in my opinion:
1. Simulating embedded user interfaces, especially menus
2. Simple tool pages, like my FIR analysis page (https://www.nominal-animal.net/answers/fir-analysis.html) (put the coefficients, like 0.2 0.4 0.6 0.8 1.0 0.8 0.6 0.4 0.2 to the top (right) input box, and press Enter, and it'll show the FIR frequency response), or the window function spectral response (https://nominal-animal.net/answers/window-spectrum.html) (put 0.2 0.4 0.6 0.8 1.0 0.8 0.6 0.4 0.2 0.0 in the upper red box, and 0.2 0.4 0.6 0.8 1.0 1.0 0.8 0.6 0.4 0.2 in the upper blue box, and click Recalculate, to see the difference in spectral response between "odd" and "even" triangular window functions)

The former is useful because that way the most user-visible part (interface) gets tested and simulated and worked out first, instead of implemented ad-hoc when the functionality is done.
The latter is useful because it is truly portable, and browsers' Javascript engines are nowadays ridiculously well optimized: even naïve code using lots of Math.sin() and Math.cos() like my examples above run at practically native code speed.
The examples above are standalone pages, single HTML files that contain the Javascript code, and do not require any access outside that file, not even Internet access.  One only needs a server if one wants to "save" data to or "load" data from external files: a script on the server takes the POST data and reformats it as the desired MIME type file.  For file upload, it takes the POST data containing the uploaded files, parses the data, and inserts the parsed data into the same HTML file (typically as a Javascript array).

So, I agree if we're talking about hardware to do real development and experiments and learning on.  I do claim HTML+Javascript is a better introduction to event-oriented programming in general, however!  ;)
(For no other real reason than that it requires no other investment except human time and effort: we all have the tools necessary already installed.)
Title: Re: event-oriented programming language
Post by: tggzzz on January 05, 2023, 03:46:00 pm
In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device.  Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete.  We could get more out of the same hardware if that wait time could be used for something more useful.  To do so, a write must be turned into a two-part operation: you start the write, and at some time later the write completes; in the meantime, you must not modify the buffer contents, as it is still in the process of being written out.

If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.

It needn't be inefficient. All you need is one processor/core per event loop. Cores are cheap nowadays :)

Quote
Using a co-operative scheduler and event-driven techniques will typically prevent this. Patterns like Observer and Publish/Subscribe are nice techniques to trigger processing only when something needs to be done. Combining these with state machines and event queues, you can create responsive systems. The devices will also be quite deterministic, in the sense that nothing happens if no events are generated, and the device is active only when processing events.

RTOSs are merely a hack to multiplex several processes (i.e. event loops) onto a single execution engine. They are a useful hack when insufficient execution engines are available.

Quote
For time-critical systems one can use time-triggered scheduling, for example. This kind of scheduling will produce even more deterministic systems, if needed.


There is absolutely nothing special about timeouts: they are merely events, equivalent to a message arriving, an input becoming available, or an output completing. All such events should be treated identically at the language level and the runtime level.
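In C, that uniformity amounts to nothing more than one event type with a kind tag, where a timeout is just one more kind (a trivial sketch, names invented here):

```c
#include <assert.h>

/* One event type covers messages, I/O completions, and timeouts alike. */
enum ev_kind { EV_MESSAGE, EV_IO_DONE, EV_TIMEOUT };

struct ev {
    enum ev_kind kind;
    int payload;
};

/* One handler signature for everything; nothing is special-cased.
   Return values are arbitrary markers for the example. */
int handle(const struct ev *e)
{
    switch (e->kind) {
    case EV_MESSAGE: return 1;
    case EV_IO_DONE: return 2;
    case EV_TIMEOUT: return 3;   /* treated exactly like any other event */
    }
    return 0;
}
```

The runtime's timer merely enqueues an `EV_TIMEOUT` event like any producer; the dispatcher never knows or cares where an event came from.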
Title: Re: event-oriented programming language
Post by: tggzzz on January 05, 2023, 04:04:34 pm
I suspect that a language-based filter assignment, implemented in the runtime, would yield more efficient and robust microcontroller firmware.
Similar to Erlang's pattern matching?
No, not really.  I used 'suspect', because I don't have a clear picture of exactly what would work for me.

For example, we could map hardware interrupts and such to event queues using
    'Map' [ context-object '.' ] event-name 'to' [ queue-object '.' ] event-invocation(parameter-list) [ 'using' context-object ] ';'

One possible idea is to support explicit filters to a queue-object, for example
    'Filter' queue-object [ ':' [ 'Allow' | 'Drop' | 'Postpone' ] 'if' filter-expression ]* ';'
that are implicitly evaluated whenever the queue-object is accessed for events (by the runtime).

Another is to support named 'blocks', like electrichickens use those multi-locks when shutting down systems they're working on, for example
    'Block' queue-object 'with' block-identifier [ 'until' auto-unblock-condition ] ';'
    'Unblock' queue-object 'with' block-identifier ';'
where the queue is blocked if it has one or more blocks placed on it.

The former way is more powerful, but also has more overhead (since the filter-expression is executed potentially for each queued event during each queue state check).  The latter is simpler, potentially requiring just one bit per block per event queue.

I have found a FIFO with two variants each of get/put to be sufficient for my purposes: a put that blocks until it succeeds, or one that immediately returns control to the calling process if the FIFO is full; similarly, a get that blocks until the FIFO isn't empty, or one that immediately returns control if the FIFO is empty.

In almost all cases the blocking variant is sufficient. If it isn't sufficient then it usually means the system is under-provisioned.

Quote
Additional approaches are explicit filter expressions for forwarding events to other queues, and so on.
I just don't know enough to say what would work best for myself, yet.
Like I said, the exploration of this is something I'd love to contribute to, but is too much work for a single person to achieve.
I also need quite a bit of pushback whenever I don't see the downsides of my own suggestions or my own errors yet; that too happens too often.  :P

I seek out people where I listen carefully to what they say - especially when they disagree with me :)

Quote
Multiple levels of FIFO may be desirable. For example, in a telecoms server, there will be a single FIFO for all incoming events. There will also be a single FIFO associated with each call in progress,  only containing events relevant to that call. Transferring an event from the "incoming" FIFO to one of the "call FIFOs" is done when convenient.
That is exactly the sort of pattern for which I think more than one "event queue" object would be useful.

Dealing with individual event priorities leads to all sorts of complex and chaotic situations (like priority inversion).
Dealing with multiple queues in the runtime (so that one can obtain events from a set of queues, ordered by queue priority for example, with optional filters applied per queue) seems a much more reasonable level of complexity to me.

It isn't clear to me whether it is better to have the filtering/matching as part of the runtime/language, or as part of your process. My gut feel is that being part of the runtime/language is best in a limited number of very important cases, e.g. high availability and hot-swapping applications. Otherwise attempting to use it for application specific filtering is likely to be a bad fit.

Quote
Quote
Then, the logical equivalent of a semaphore is an event queue, with sem_wait equating to grab object from queue, and sem_post to putting an object to the queue; the equivalent of a mutex is a single-event queue where mutex_lock equates to grabbing the token object from the queue, and mutex_unlock to putting the token object back to the queue, with waiters blocking on the grab-token-object operation.
Trick is, for this to be useful on microcontrollers, the operations must compile to efficient machine code.
There's no fundamental reason why it would be any less efficient than other mechanisms that also take account of atomicity, volatility, and parallelism.
No, but it is enticing to anyone developing a new language to think of an abstraction they love, that turns out to be hellishly complicated to implement on currently available hardware, requiring lots of RAM and complex operations like stack unwinding.

I have difficulty distinguishing between hardware and software. Those that think it is easy have major gaps in understanding not only the theoretical fundamentals but also what's implemented in real systems.

Quote
The trick is to consider the logical equivalents as having approximately the same level of abstraction and complexity.  So, if you think of a way of implementing an event queue that requires the equivalents of mutexes and condition variables to implement, it is probably not suitable for real life implementation.
Indeed, in systems programming, I mostly use lockless techniques using atomic ops for these (GCC/Clang/ICC atomic built-ins (https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html) on x86-64 in particular), so I know it is/should be possible on most architectures.
On some, like AVR, you might need to disable interrupts for a few cycles (less than a dozen per critical section), but it should be doable.

I'd like to see an analysis of which mechanisms are fundamentally necessary and sufficient, and of the implementation details that have led to other mechanisms being desirable.
Title: Re: event-oriented programming language
Post by: Kalvin on January 05, 2023, 04:05:53 pm
I have created a set of C macros that are used to define and implement a simple DSL (domain specific language) for describing these HSMs in a compact way.

very interesting! Can you show some examples of them?
C macros always smell of possible language add-ons


(here, OMG, in my-c there is no cpp and #define is banned,
so I am really forced to add real language constructs

a DSL for FSMs: sounds good!!!)

Here is a small snippet from my code, implementing some networking stuff using an HSM.

At the top of the source code file there are forward declarations for the state machine instance yns_sm, and its states.
The top level state is yns_sm_top_state, and its child states are listed below.

Code: [Select]
... <snip>
YHSM_DECLARE(yns_sm);
YHSM_STATE_DECLARE(yns_sm_top_state);

YHSM_STATE_DECLARE(yns_network_closed_state);
YHSM_STATE_DECLARE(yns_network_error_state);
YHSM_STATE_DECLARE(yns_network_fail_state);

YHSM_STATE_DECLARE(yns_network_opened_state);
YHSM_STATE_DECLARE(yns_network_reconnect_retry_check_state);
YHSM_STATE_DECLARE(yns_network_disconnected_state);
YHSM_STATE_DECLARE(yns_network_connected_state);

YHSM_STATE_DECLARE(yns_server_connected_state);
<snip> ...

Here is the implementation of the yns_server_connected_state.

The state has a default timeout value of 2000 milliseconds, and the state declares a local variable for counting the number of remaining retries.

Code: [Select]
#define YNS_SERVER_CONNECTED_STATE_TIMEOUT_ms 2000

static int yns_server_poll_retry_count;

Here we define a new state yns_server_connected_state, and declare its parent state to be yns_network_connected_state, and set the default state timeout time to be 2000 milliseconds.

Code: [Select]
YHSM_STATE_BEGIN(yns_server_connected_state, &yns_network_connected_state, YNS_SERVER_CONNECTED_STATE_TIMEOUT_ms);

When the state machine enters this state, and if state's default timeout value is larger than 0, state's timeout timer will be started automatically by the default timeout value. When the state machine makes a transition exiting the state, this state's timeout timer will be stopped automatically. If the state's timeout timer expires while the state machine is still in this state, an event handler YHSM_STATE_TIMEOUT_ACTION() will be called.

Here is state's enter action which will be executed whenever a transition into this particular state takes place:

Code: [Select]
YHSM_STATE_ENTER_ACTION(yns_server_connected_state, hsm)
{
    yns_server_socket_open();
    yns_led_enable();

    // Notify the application observer that the connection to the server is established.
    yns_notify(YNETWORK_EVENT_SERVER_CONNECTED);

    yns_server_reconnect_retry_count =
        yns_network_config.server_reconnect_retry_count;

    yns_server_poll_retry_count = yns_network_config.server_poll_retry_count;
}

After the state's enter action is executed, the state machine will transition to the given initial state yns_network_idle_state.
By definition the yns_network_idle_state shall be a sub-state of the yns_server_connected_state.

Code: [Select]
YHSM_STATE_INIT_ACTION(yns_server_connected_state, hsm)
{
    YHSM_INIT(&yns_network_idle_state);
}

Macro YHSM_INIT() performs the actual initial state transition. By definition, the sub-state's enter action will be executed during this initial transition.

Here is the state's timeout action, which will be executed if the state is still active after 2000 milliseconds:

Code: [Select]
YHSM_STATE_TIMEOUT_ACTION(yns_server_connected_state, hsm)
{
        YHSM_TRAN(&yns_server_reconnect_retry_check_state);
}

Macro YHSM_TRAN() is used to trigger a state transition.

Here is state's exit action which will be executed whenever a transition from this particular state takes place:

Code: [Select]
YHSM_STATE_EXIT_ACTION(yns_server_connected_state, hsm)
{
    // Notify the application observer that the connection to the server is disconnected.
    yns_notify(YNETWORK_EVENT_SERVER_DISCONNECTED);

    yns_server_socket_close();
    yns_led_disable();
}

Here is the state's actual event handler:

Code: [Select]
YHSM_STATE_EVENT_HANDLER(yns_server_connected_state, hsm, ev)
{
    if (ev->id == YNS_EVENT_SOCKET_ERROR || ev->id == YNS_EVENT_NETWORK_TIMEOUT)
    {
        YHSM_TRAN(&yns_server_reconnect_retry_check_state);
    }

    if (ev->id == YNS_EVENT_SOCKET_DO_CLOSE)
    {
        YHSM_TRAN(&yns_server_reconnect_delay_state);
    }

    YHSM_RAISE();
}

If the state doesn't handle an event itself, it delegates the unhandled event to its parent state using the macro YHSM_RAISE().

Here is the end of the state's definition.

Code: [Select]
YHSM_STATE_END(yns_server_connected_state);

Note particularly how the socket is always opened and the LED turned on when entering the state, and the socket is always closed and the LED turned off when exiting it. This guarantees by construction that the setup and cleanup actions are always performed in the correct order, every time a state transition takes place.
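
That pairing guarantee is easy to see in a plain-C sketch. The code below is illustrative only (state_t, transition() and dispatch() are made-up names, not the YHSM implementation) and hard-codes a two-level hierarchy to stay short; the point is that transition() runs every exit action on the way out and every enter action on the way in, while dispatch() bubbles unhandled events to the parent, just like YHSM_RAISE():

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A state has optional enter/exit actions, an event handler, and a parent.
 * handler() returns 1 if the event was consumed, 0 to delegate upward. */
typedef struct state {
    const char *name;
    const struct state *parent;
    void (*enter)(void);
    void (*exit)(void);
    int (*handler)(int event_id);
} state_t;

static char trace[256];          /* records enter/exit order for inspection */
static void log_step(const char *s) { strcat(trace, s); strcat(trace, ";"); }

static void connected_enter(void) { log_step("open_socket"); }
static void connected_exit(void)  { log_step("close_socket"); }
static int  connected_handler(int ev) { (void)ev; return 0; /* delegate */ }

static void idle_enter(void) { log_step("idle_enter"); }
static void idle_exit(void)  { log_step("idle_exit"); }
static int  idle_handler(int ev) { return ev == 1; /* consume event 1 only */ }

static const state_t connected = { "connected", NULL,
                                   connected_enter, connected_exit,
                                   connected_handler };
static const state_t idle = { "idle", &connected,
                              idle_enter, idle_exit, idle_handler };

static const state_t *current;   /* starts out NULL: no active state */

/* Exit from the current leaf up to the root, then enter the target chain:
 * exit and enter actions always pair up, by construction. */
static void transition(const state_t *target)
{
    for (const state_t *s = current; s != NULL; s = s->parent)
        if (s->exit) s->exit();
    /* enter ancestors first (two-level hierarchy here, kept simple) */
    if (target->parent && target->parent->enter) target->parent->enter();
    if (target->enter) target->enter();
    current = target;
}

/* Dispatch bubbles unhandled events to the parent state. */
static void dispatch(int event_id)
{
    for (const state_t *s = current; s != NULL; s = s->parent)
        if (s->handler && s->handler(event_id)) return;
    log_step("unhandled");
}
```

Re-entering the same state still runs the full exit chain followed by the full enter chain, so the socket/LED style cleanup can never be skipped.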
Title: Re: event-oriented programming language
Post by: artag on January 05, 2023, 04:08:52 pm
Why 'language' as opposed to 'runtime' or 'library' ?
It's easy to do event-driven programming with any language, so I don't see why you'd need a new one. What problem are you trying to solve ?
Title: Re: event-oriented programming language
Post by: DiTBho on January 05, 2023, 04:28:09 pm
I do wonder, if their (assumed!) lack of success is related to there not really being a language where one could easily and effortlessly express the patterns this kind of hardware is well suited for?

Well, umm, programming PowerPC is not that bad; you just have to care about more quirks (pipeline and cache) than with MIPS32. But programming the PowerQUICC CPM engine with classic imperative patterns is ... as terrible as programming the TPU engine of the old CPU32. Of that family, the 683xx has been massively used by Ford Racing and is - I think - still in production, thanks to their internal language support, whereas everyone else has to use TPU assembly and C tricks like #define macros to mimic the description of FSMs.

Classic imperative patterns are not good, this is probably why no hobbyist wants anything to do with those chips (which are also hard to find, and expensive).

But hey? That hardware is really as weird as awesome, I mean a true good challenge  :D :D :D
Title: Re: event-oriented programming language
Post by: Kalvin on January 05, 2023, 04:32:49 pm
In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device. Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete. We could get more out of the same hardware if that wait time could be used for something more useful. To do so, a write must be turned into a two-part operation: you start the write, and at some later time the write completes; in the meantime you must not modify the buffer contents, as it is still in the process of being written out.
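
A minimal sketch of that two-part write in C. All names here (start_write(), device_poll(), write_done()) are invented for illustration, and the "hardware" is simulated by a poll function that moves one byte per call, standing in for a DMA step or ISR:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static const char *dev_buf;   /* buffer handed to the "device" */
static size_t dev_left;       /* bytes still to transfer */
static char   dev_sink[64];   /* where the simulated device writes */
static size_t dev_written;

/* Phase 1: queue the buffer and return immediately. The caller must not
 * touch the buffer until write_done() reports completion. */
static int start_write(const char *buf, size_t len)
{
    if (dev_left != 0) return -1;   /* device busy */
    dev_buf = buf;
    dev_left = len;
    return 0;
}

/* Stands in for an ISR / DMA step: transfers one byte per call. */
static void device_poll(void)
{
    if (dev_left) {
        dev_sink[dev_written++] = *dev_buf++;
        dev_left--;
    }
}

/* Phase 2: completion check. */
static int write_done(void) { return dev_left == 0; }
```

Between start_write() and write_done() the CPU is free to do other work (or sleep) instead of busy-waiting.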

If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.

It needn't be inefficient. All you need is one processor/core per event loop. Cores are cheap nowadays :)

You are so spoiled! I don't have the luxury of multiple cores :) I work in a single-core embedded environment, where the available amounts of Flash and RAM are typically very limited (cheap ARM Cortex-M3 devices), the device's energy consumption has to be minimized, and battery lifetime has to be maximized.

Quote
Quote
Using a co-operative scheduler and event-driven techniques will typically prevent this. Patterns like Observer and Subscribe/Publish are nice techniques to trigger processing only when something needs to be done. Combining these techniques with state machines and event queues, you can create systems that are responsive. Also, the devices will be quite deterministic in the sense that nothing is happening if there are no events generated, and the device is active only when processing events.

RTOSs are merely a hack to multiplex several processes (i.e. event loops) onto a single execution engine. They are a useful hack when insufficient execution engines are available.

Luxury items again! :) RTOSes need to allocate RAM for each task. As I have only a very limited amount of RAM available, I prefer / have to use a simple co-operative tasker/scheduler which requires only one global stack frame. In some special cases I may use a preemptive scheduler with two tasks: one task for the main application running a co-operative tasker, and the other for the networking.
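
A co-operative run-to-completion tasker really can be that small. The sketch below (hypothetical names, not any particular product) shows why one global stack frame is enough: every task is a plain function that runs to completion and returns, so there are no per-task stacks as a preemptive RTOS would need:

```c
#include <assert.h>

#define MAX_TASKS 4

typedef void (*task_fn)(void);

static task_fn tasks[MAX_TASKS];
static unsigned ready;                 /* one ready bit per task */

static void post(unsigned id) { ready |= 1u << id; }   /* e.g. from an ISR */

static void scheduler_run_once(void)
{
    for (unsigned id = 0; id < MAX_TASKS; id++)
        if (ready & (1u << id)) {
            ready &= ~(1u << id);      /* clear before running: task may repost */
            tasks[id]();               /* runs to completion on the same stack */
        }
}

/* demo tasks */
static int blink_count, uart_count;
static void blink_task(void) { blink_count++; }
static void uart_task(void)  { uart_count++; }
```

A real main loop would call scheduler_run_once() forever and sleep when no ready bits are set.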

Quote
Quote
For time-critical systems one can use time-triggered scheduling, for example. This kind of scheduling will produce even more deterministic systems, if needed.


There is absolutely nothing special about timeouts: they are merely an event equivalent to a message arriving, an input being available or an output completing. All such events should be treated identically at the language level and the runtime level.

I meant by time-triggered scheduling like this: https://en.wikipedia.org/wiki/Time-triggered_architecture (https://en.wikipedia.org/wiki/Time-triggered_architecture)

Especially this one: "Use of TT systems was popularized by the publication of Patterns for Time-Triggered Embedded Systems (PTTES) in 2001[1] and the related introductory book Embedded C in 2002.[4]".

The book is freely available from here: https://www.safetty.net/publications/pttes (https://www.safetty.net/publications/pttes)

Here is a nice summary for Analysis Of Time Triggered Schedulers In Embedded System:
https://www.interscience.in/cgi/viewcontent.cgi?article=1014&context=ijcsi (https://www.interscience.in/cgi/viewcontent.cgi?article=1014&context=ijcsi)
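
The time-triggered idea can be sketched as a tiny cyclic executive in C, loosely in the PTTES style (all names invented for illustration): a tick counter driven by the timer interrupt, and a dispatch table of (task, period, offset) entries, where the offsets stagger tasks so they never run in the same tick:

```c
#include <assert.h>

#define TT_TASKS 2

typedef struct { void (*run)(void); unsigned period; unsigned offset; } tt_task;

static unsigned tick;                  /* incremented by the timer ISR */

static int fast_runs, slow_runs;
static void fast_task(void) { fast_runs++; }
static void slow_task(void) { slow_runs++; }

static tt_task table[TT_TASKS] = {
    { fast_task, 2, 0 },   /* every 2nd tick, starting at tick 0 */
    { slow_task, 4, 1 },   /* every 4th tick, starting at tick 1 */
};

static void timer_isr(void) { tick++; }   /* the only event source */

static void tt_dispatch(void)
{
    for (int i = 0; i < TT_TASKS; i++)
        if ((tick % table[i].period) == table[i].offset)
            table[i].run();
    /* a real system would enter a low-power sleep here until the next tick */
}
```

Because the schedule is fixed at compile time, worst-case timing is fully deterministic.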
Title: Re: event-oriented programming language
Post by: tggzzz on January 05, 2023, 04:56:01 pm
Why 'language' as opposed to 'runtime' or 'library' ?
It's easy to do event-driven programming with any language, so I don't see why you'd need a new one. What problem are you trying to solve ?

That is a good question.

There are multiple answers, none of which are completely compelling (Turing machines and all that!). A few that spring to mind, for example...

While you can do object oriented programming in C (and I did in the early-mid 80s), you can do it more easily and clearly in a (decent) object oriented language (C++ is excluded from that!). For example, xC's constructs strongly encourage event oriented architecture and implementation in the real-time embedded arena.

If a good set of abstractions and concepts are chosen and embodied in a language, then they will guide people to using them effectively. OTOH a library is easier to ignore and or use badly.

A language should enable automated tooling that cannot be achieved with a library. Examples are SPARK (proof of program properties), xC on xCORE enabling calculation of worst case execution times (none of that measure and hope crap!), Java introspection at runtime and in an IDE (think ctrl-space autocompletion) vs that unavailable in C++.

P.S. all those advantages presume the language embodies a good set of abstractions, they are well implemented, and the time and effort is available for the tools to be implemented. If any of those don't apply, then it will usually be preferable to implement the abstractions as a library. Basically it is a damn sight more practical to implement a Domain Specific Library than a Domain Specific Language.

P.P.S. implementing a library requires that the underlying language has suitable behavioural guarantees. Thus good luck implementing threads in C (except recent versions) or in Python or Ruby.
Title: Re: event-oriented programming language
Post by: tggzzz on January 05, 2023, 05:02:28 pm
In imperative languages, write operations are synchronous: the call returns when the data has been sent to the storage device. Especially on microcontrollers, much of that time is spent busy-waiting for the operation to complete. We could get more out of the same hardware if that wait time could be used for something more useful. To do so, a write must be turned into a two-part operation: you start the write, and at some later time the write completes; in the meantime you must not modify the buffer contents, as it is still in the process of being written out.

If you are using some sort of preemptive RTOS, this is typically the case, i.e. the program flow will contain these (inefficient) busy loops.

It needn't be inefficient. All you need is one processor/core per event loop. Cores are cheap nowadays :)

You are so spoiled! I don't have the luxury of multiple cores :) I work in a single-core embedded environment, where the available amounts of Flash and RAM are typically very limited (cheap ARM Cortex-M3 devices), the device's energy consumption has to be minimized, and battery lifetime has to be maximized.

I know which way history is headed. I want to be ahead of the curve :)

More importantly, the current mainstream languages are insufficient; we will need significant improvements. That means we need to start yesterday!

Quote
Quote
Quote
Using a co-operative scheduler and event-driven techniques will typically prevent this. Patterns like Observer and Subscribe/Publish are nice techniques to trigger processing only when something needs to be done. Combining these techniques with state machines and event queues, you can create systems that are responsive. Also, the devices will be quite deterministic in the sense that nothing is happening if there are no events generated, and the device is active only when processing events.

RTOSs are merely a hack to multiplex several processes (i.e. event loops) onto a single execution engine. They are a useful hack when insufficient execution engines are available.

Luxury items again! :) RTOSes need to allocate RAM for each task. As I have only a very limited amount of RAM available, I prefer / have to use a simple co-operative tasker/scheduler which requires only one global stack frame. In some special cases I may use a preemptive scheduler with two tasks: one task for the main application running a co-operative tasker, and the other for the networking.

See above.

See xCORE processors for embedded hard real-time systems. Currently up to 32cores/chip, and chips can be "paralleled".

Quote
Quote
Quote
For time-critical systems one can use time-triggered scheduling, for example. This kind of scheduling will produce even more deterministic systems, if needed.


There is absolutely nothing special about timeouts: they are merely an event equivalent to a message arriving, an input being available or an output completing. All such events should be treated identically at the language level and the runtime level.

I meant by time-triggered scheduling like this: https://en.wikipedia.org/wiki/Time-triggered_architecture (https://en.wikipedia.org/wiki/Time-triggered_architecture)

Especially this one: "Use of TT systems was popularized by the publication of Patterns for Time-Triggered Embedded Systems (PTTES) in 2001[1] and the related introductory book Embedded C in 2002.[4]".

The book is freely available from here: https://www.safetty.net/publications/pttes (https://www.safetty.net/publications/pttes)

Here is a nice summary for Analysis Of Time Triggered Schedulers In Embedded System:
https://www.interscience.in/cgi/viewcontent.cgi?article=1014&context=ijcsi (https://www.interscience.in/cgi/viewcontent.cgi?article=1014&context=ijcsi)

That doesn't change my contention. All it means is that a process's main loop is sitting idle until the tick/timeout event arrives.
Title: Re: event-oriented programming language
Post by: Kalvin on January 05, 2023, 05:21:32 pm
That doesn't change my contention. All it means is that a process's main loop is sitting idle until the tick/timeout event arrives.

Idle yes, but not necessarily running. The MCU may be held in a low-power sleep state consuming only 1 uA or less while waiting for the next timer tick to occur. This can be used to minimize the device's energy consumption.
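
A sketch of that sleep-when-idle event loop in C. Here sleep_until_event() stands in for the WFI instruction on a Cortex-M (it is simulated by a counter so the idle/active split is visible); all names are made up for illustration:

```c
#include <assert.h>

#define QLEN 8
static int queue[QLEN];
static unsigned head, tail;

static int events_handled, sleeps_taken;

static void post_event(int ev) { queue[tail++ % QLEN] = ev; }  /* from an ISR */
static int  queue_empty(void)  { return head == tail; }

/* On a real Cortex-M this would be __WFI(): the core stops until the next
 * interrupt, drawing ~1 uA instead of busy-waiting at full power. */
static void sleep_until_event(void) { sleeps_taken++; }

static void event_loop_step(void)
{
    if (queue_empty()) {
        sleep_until_event();
        return;
    }
    int ev = queue[head++ % QLEN];
    (void)ev;                  /* a real handler would dispatch on ev */
    events_handled++;
}
```

The device is thus active only while processing events, exactly as described above.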
Title: Re: event-oriented programming language
Post by: tggzzz on January 05, 2023, 06:12:26 pm
That doesn't change my contention. All it means is that a process's main loop is sitting idle until the tick/timeout event arrives.

Idle yes, but not necessarily running. The MCU may be held in a low-power sleep state consuming only 1 uA or less while waiting for the next timer tick to occur. This can be used to minimize the device's energy consumption.

Just so. The xCORE devices do that on a per-core basis, I believe. The equivalent processes in FPGA LUTs will have reduced consumption because no signals are changing, but the clock system will still be running at full throttle.
Title: Re: event-oriented programming language
Post by: DiTBho on January 05, 2023, 07:06:24 pm
While you can do object oriented programming in C (and I did in the early-mid 80s), you can do it more easily and clearly in a (decent) object oriented language (C++ is excluded from that!).

yup, even polymorphism is possible in C89, but the language doesn't help, so you have to spend more time writing the boilerplate. I know because I wrote a b+tree library that way.
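
For anyone curious what "polymorphism in C89 without language help" looks like, here is the usual hand-rolled vtable pattern (illustrative names only, not the b+tree library mentioned above): a struct of function pointers acts as the vtable, and each "subclass" embeds the base struct as its first member so a base pointer can be cast back safely:

```c
#include <assert.h>

struct shape;

/* The hand-rolled vtable: one function pointer per virtual method. */
struct shape_ops {
    int (*area)(const struct shape *self);
};

struct shape {
    const struct shape_ops *ops;
};

/* A "subclass": the base struct must come first, so &r.base and &r alias. */
struct rect {
    struct shape base;
    int w, h;
};

static int rect_area(const struct shape *self)
{
    const struct rect *r = (const struct rect *)self;
    return r->w * r->h;
}

static const struct shape_ops rect_ops = { rect_area };

/* Virtual dispatch: the caller never needs to know the concrete type. */
static int shape_area(const struct shape *s) { return s->ops->area(s); }
```

Every object carries its own ops pointer, which is exactly the bookkeeping an object-oriented language would generate for you.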

That's what "X-oriented" means: the language helps with the "X" set of features/needs.

For example, I can say that my-c is "ICE-testing-oriented" because it adds native support(1) for ICE-testing.

(1) actually, it adds new constructs which help, but I also decided to restrict the C89 grammar in a specific way so you don't have to "modify" your code later: if you write something and it compiles, it's already ready for ICE-testing.

This being "specifically written" (C89 grammar restriction) is my personal second meaning of "ICE-oriented".
Perhaps wrong, but it works insanely great with my colleagues  :D
Title: Re: event-oriented programming language
Post by: DiTBho on January 05, 2023, 11:51:27 pm
My Atlas MIPS board has a special circuit that disables/enables the clock and issues a "wake up the machine" interrupt (different from reset).

Both the uart card and the network card (DEC TULIP-based) can work autonomously, without the CPU, as regards packet acceptance/rejection, and fire an interrupt to the CPU when special packets require its attention.

It looks like a small but interesting working scheme for a simple event-driven skeleton  :D
Title: Re: event-oriented programming language
Post by: Nominal Animal on January 06, 2023, 01:32:36 pm
Classic imperative patterns are not good, this is probably why no hobbyist wants anything to do with those chips (which are also hard to find, and expensive).
I do prefer free toolchains and relatively affordable development boards –– seeing as I can nowadays get very powerful Linux SBCs for ~ 40-60 € (Amlogic, Samsung, Rockchip SoC chips and chipsets having very good vanilla Linux kernel support, so one is not dependent on vendor forks) –– as a hobbyist myself.

I admit I've been extremely interested in XMOS xCore ever since I first heard of it from tggzzz, but the single-vendor approach with non-open toolchain feels, well, "too risky" or something.  (I've been burned enough times by vendors already, you see, so maybe I'm paranoid.)  And vendor toolchain support for Linux or BSDs tends to be second-tier, which further reduces the value of investment, even if for purely learning and experimentation.

Which nicely leads me to:
Why 'language' as opposed to 'runtime' or 'library' ?
To do better.

Like I mentioned earlier in a reply to Kalvin, none of what I have suggested here leads to "new" machine code; everything stems from already existing patterns.  The problem I'd like to solve is to express those patterns in a clearer, more efficient manner, and at the same time avoid the known pitfalls in existing low-level languages –– memory safety, and ease of static analysis.

No new abstractions, just easier ways for us humans to describe the patterns in a way compilers can statically check and generate efficient machine code for.

To me, the number of times I've had to argue for MPI_Isend()/MPI_Irecv() –– event-based I/O in MPI; the call initiates the transfer and provides an MPI_Request handle one can examine or wait for completion or error –– against highly-paid "MPI Experts", indicates that whenever an imperative approach is possible, it will be used over an event-oriented one, because humans.

I myself do not have "libraries" for my event-oriented stuff, I just have patterns (with related unit test cases and examples) I adapt for each use case separately.  I seriously dislike the idea of having one large library that provides such things, because it leads to framework ideation where you do things a certain way because it is already provided for you, instead of doing things the most efficient or sensible way.  Many small libraries, on the other hand, easily lead to (inter-)dependency hell.

Do note I've consistently raised the idea of experimenting with how to express the various patterns, using an imagined language (but at the same time thinking hard about what kind of machine code the source should compile to).  So, there is no specific single pattern I'm trying to recommend anyone to use, I'm pushing/recommending/discussing/musing about how to experimentally discover better-than-what-we-have-now ways of describing the patterns we already use, and build a new language based on that.

If you've ever taken a single course on programming language development, or any computer science courses related to programming languages really, this will absolutely look like climbing a tree ass first.  Yet, I have practical reasons to believe it will work, and can/may/should lead to a programming language that is better suited to our current event-oriented needs on resource-constrained systems than what we have now.

(As to those practical reasons: I've mentioned that I've occasionally dabbled in optimizing work flows by spending significant time and effort beforehand to observe and analyse how humans perform the related tasks.  This itself is a very well documented (including scientific papers, as well as practical approaches done in high-production development/factory environments) problem solving approach.  My practical reasons are thus based on treating programming as a problem solving workflow.  This has worked extremely well for myself (in software development in about a dozen programming languages), so I have practical reasons to expect that this approach, even if considered "weird" or "inappropriate" by true CS folks, will yield very useful results.)
Title: Re: event-oriented programming language
Post by: DiTBho on January 06, 2023, 02:35:55 pm
I do prefer free toolchains and relatively affordable development boards –– seeing as I can nowadays get very powerful Linux SBCs for ~ 40-60 € (Amlogic, Samsung, Rockchip SoC chips and chipsets having very good vanilla Linux kernel support, so one is not dependent on vendor forks) –– as a hobbyist myself.

Open source is limited; for example, you cannot have the same Ada experience with GNAT that you can have with Green Hills AdaMULTI.

Here you need a serious job: AdaMULTI is a complete integrated development environment for embedded applications using Ada and C, with serious ICE support. Just the ICE's header costs 3000 euro; the full package costs 50000 euro.

What do you have with open source? A poor man's debugger? gdb-stub? umm?
Our my-c-ICE technology is not as great as Green Hills's, but it's light years ahead of gdb!

And note: once again, it's not OpenSource!
 
So, my opinion here is clear: you need to find a job in avionics to enjoy the full C+Ada experience.

The same applies to Linux SBCs ... they are all the same, over and over, the same story. Linux bugs, userland bugs ... nothing different from the same boring daily experience, just new toys.

The M683xx and MPC840 are great pieces of hardware like nothing else, and - once again - open source has ZERO support for their hardware design, whereas the industry has some great stuff.

The MPC840 is used in AFDX switches, deployed everywhere from avionics to naval systems to high-speed railways.
The M683xx is used by Ford Racing for internal combustion engines.

Now, I'd love to find a job which exposes me to the Dyson Digital Motor technology.
I know, their electric car is an epic business failure, but their technology, even on the software side, is great!
Title: Re: event-oriented programming language
Post by: Sherlock Holmes on January 06, 2023, 03:27:03 pm
I do prefer free toolchains and relatively affordable development boards –– seeing as I can nowadays get very powerful Linux SBCs for ~ 40-60 € (Amlogic, Samsung, Rockchip SoC chips and chipsets having very good vanilla Linux kernel support, so one is not dependent on vendor forks) –– as a hobbyist myself.

Open source is limited; for example, you cannot have the same Ada experience with GNAT that you can have with Green Hills AdaMULTI.

Here you need a serious job: AdaMULTI is a complete integrated development environment for embedded applications using Ada and C, with serious ICE support. Just the ICE's header costs 3000 euro; the full package costs 50000 euro.

What do you have with open source? A poor man's debugger? gdb-stub? umm?
Our my-c-ICE technology is not as great as Green Hills's, but it's light years ahead of gdb!

And note: once again, it's not OpenSource!

So, my opinion here is clear: you need to find a job in avionics to enjoy the full C+Ada experience.

The same applies to Linux SBCs ... they are all the same, over and over, the same story. Linux bugs, userland bugs ... nothing different from the same boring daily experience, just new toys.

The M683xx and MPC840 are great pieces of hardware like nothing else, and - once again - open source has ZERO support for their hardware design, whereas the industry has some great stuff.

The MPC840 is used in AFDX switches, deployed everywhere from avionics to naval systems to high-speed railways.
The M683xx is used by Ford Racing for internal combustion engines.

Now, I'd love to find a job which exposes me to the Dyson Digital Motor technology.
I know, their electric car is an epic business failure, but their technology, even on the software side, is great!

I agree, I'd love to see Dyson reenter electric cars, they'd crush Tesla, literally drive them out of business.
Title: Re: event-oriented programming language
Post by: DiTBho on January 06, 2023, 03:55:14 pm
I admit I've been extremely interested in XMOS xCore ever since I first heard of it from tggzzz, but the single-vendor approach with non-open toolchain feels, well, "too risky" or something

umm, it's not expensive, so... if I were you, I'd give it a try.
let's define x-experience as x: { worst ... best }
to me, it seems it's worth the attempt  :D
Title: Re: event-oriented programming language
Post by: DiTBho on January 06, 2023, 04:48:10 pm
(
More about my personal experience and thinking

Seeing the industrial technology behind a "black-box voter and its redundant system" can be much worse: seeing for yourself what's down that rabbit hole is like spending four months retrofitting a steam train (2) to modern rail just to finally see what's behind a high-speed train!

I did it 8 years ago, and after two months of training (1) I finally saw the one truth behind GreenHills (lucky coincidence) and its development ecosystem for ensuring code quality, similar to DO-178B/C compliance (1), and got a Level 4 personal card which allowed me to read all the technical documentation on both their software and hardware, including their "voter box", which is a true secret black box ....

WOW, the secret box has no more secrets for you, sure, but ... considering the psychological cost of that training, it was a rather insanely bizarre experience, one I'm not even allowed to talk about because of what I had to sign (worst negative).

So even before I got home, I could quite emotionally and logically understand Cypher when, in the first Matrix movie, he betrayed his friends to re-enter the Matrix.

Red Pill always has a price to pay.

Sure, with Xmos you have to fasten your seat belt, but it's a soft drop at the edge of the comfort zone.
It costs, but it's not a steep price  :D

(1) railway ~ avionics, I had to study and adapt my little knowledge and skills, and exercise with new tools.
(2) "crazy snob, no argument, just because I was paid for the job", I thought at the time... years later... well, that stuff made a lot of money, with up to seven serving platters served aboard a steam train!!!

)
Title: Re: event-oriented programming language
Post by: Nominal Animal on January 06, 2023, 05:00:26 pm
I do prefer free toolchains and relatively affordable development boards –– seeing as I can nowadays get very powerful Linux SBCs for ~ 40-60 € (Amlogic, Samsung, Rockchip SoC chips and chipsets having very good vanilla Linux kernel support, so one is not dependent on vendor forks) –– as a hobbyist myself.
Open source is limited; for example, you cannot have the same Ada experience with GNAT that you can have with Green Hills AdaMULTI.

Here you need a serious job: AdaMULTI is a complete integrated development environment for embedded applications using Ada and C, with serious ICE support. Just the ICE's header costs 3000 euro; the full package costs 50000 euro.

What do you have with open source? A poor man's debugger? gdb-stub? umm?
As a hobbyist.

When you start doing product development, things become very different.  (You know that I myself use very different licenses – from CC0-1.0 in my examples to fully proprietary, and I've no problem developing stuff under a (properly written) NDA.)

I've also mentioned how low quality Linux vendor systems integration work is; I do my own.  For a hobbyist interested in such things, a few hours now and then to reconfigure or tinker with the appliance is OK.  But, to do proper systems integration and maintenance, you actually need experience.  To provide those to customers, you need servers (to hold pre-vetted and tested repositories of packages) and a lot more infrastructure, including actual paid maintainers. Current vendors aren't even interested in paying for people who can do the integration, they just use whoever is cheap enough to do that.  And it shows: most appliances are put together with spit and bubblegum.

Also, you do realize even XMOS uses GDB (https://www.xmos.ai/file/tools-15-source-xgdb/?version=15.1.4) for debugging?
Open source isn't better or worse, it is just developed using a completely different 'payment'/'cost'/'benefit' model.

I admit I've been extremely interested in XMOS xCore ever since I first heard of it from tggzzz, but the single-vendor approach with non-open toolchain feels, well, "too risky" or something
umm, it's not expensive, so... if I were you, I'd give it a try.
The processors are not too expensive, sure, but I'd like to start with a development kit.  The cheapest active (non-obsoleted) one I can get from Digikey here, XK-EVK-XU316, costs 266€ including VAT.  That is quite a lot for me: remember, I'm a poor burned out husk of a man without steady income.

But anyway, it's more that I've sworn to myself to no longer give money to vendors who just use that to fuck me over later on.  Burned before too many times, shame on me, you can't fool me again, like Dubya said.

tggzzz said that the ecosystem works, so perhaps I should just give it a try, if I can find a cheap/used kit... and I do like the background of the company.

But, I cannot download the XTC Tools (https://www.xmos.ai/software-tools/) without registering, and the 15.1.4 release notes say "Inclusion of the lib_xcore system library and headers marks a shift in emphasis towards programming the xcore in C, rather than XC", which combined with the shift towards audio processing, makes me suspect the longevity of the platform.

Also, what exactly is the xcc compiler based on?  Its documentation says (https://www.xmos.ai/documentation/XM-014363-PC-6/html/prog-guide/quick-start/c-programming-guide/index.html) "The XCore compiler (xcc) supports targeting the XCore using GNU C or C++." but all I can find is an out of date clang (https://github.com/xmos/devops_clang) mirror.

I sure hope it is not a GCC fork in violation of the license.  I've had enough of vendors like Microchip and their shady shenanigans and marketing-speak, exploiting others' (gcc devs') work and trying to lie about end users' rights by planting misinformation in the discussions.  (Yes, end users are allowed to both modify the Microchip GCC-derived compiler and publish the changes and even binaries, so one does not need to pay Microchip to use an unencumbered open source compiler.  There have been several threads here at the EEVblog forums full of misconceptions about this, especially the idea that an EULA somehow overrides the copyright license that allowed Microchip to create a derivative in the first place ::) )

If one wants to do proprietary software, it's absolutely fine; I do so all the time myself.  But doing so in violation of the copyright license, forking an open source project to do so, is evil.  The licenses aren't hard to abide by, and nobody is forcing anyone to use the open source projects either.
If your business plan revolves around breaking copyright licenses and banking on nobody suing, you are no better than any other mass pirates online.

So yeah, I'm interested, but suspicious, too.
Title: Re: event-oriented programming language
Post by: tggzzz on January 06, 2023, 05:12:41 pm
Classic imperative patterns are not good, this is probably why no hobbyist wants anything to do with those chips (which are also hard to find, and expensive).
I do prefer free toolchains and relatively affordable development boards –– seeing as I can nowadays get very powerful Linux SBCs for ~ 40-60 € (Amlogic, Samsung, Rockchip SoC chips and chipsets having very good vanilla Linux kernel support, so one is not dependent on vendor forks) –– as a hobbyist myself.

I admit I've been extremely interested in XMOS xCore ever since I first heard of it from tggzzz, but the single-vendor approach with non-open toolchain feels, well, "too risky" or something.  (I've been burned enough times by vendors already, you see, so maybe I'm paranoid.)  And vendor toolchain support for Linux or BSDs tends to be second-tier, which further reduces the value of investment, even if for purely learning and experimentation.

It is definitely a risk, both that a vendor might disappear (they've been shipping product for 15 years) and in terms of spending effort on a niche that isn't interesting to the next employer (not a problem for me).

Quote
Which nicely leads me to:
Why 'language' as opposed to 'runtime' or 'library' ?
To do better.

Like I mentioned earlier in a reply to Kalvin, none of what I have suggested here leads to "new" machine code; everything stems from already existing patterns.  The problem I'd like to solve is to express those patterns in a clearer, more efficient manner, and at the same time avoid the known pitfalls in existing low-level languages –– memory safety, and ease of static analysis.

No new abstractions, just easier ways for us humans to describe the patterns in a way compilers can statically check and generate efficient machine code for.

Yes. The harmonious integration of existing knowledge/experience should be sufficient; it was for Java.

Quote
To me, the number of times I've had to argue for MPI_Isend()/MPI_Irecv() –– event-based I/O in MPI; the call initiates the transfer and provides an MPI_Request handle one can examine or wait for completion or error –– against highly-paid "MPI Experts", indicates that whenever an imperative approach is possible, it will be used over an event-oriented one, because humans.

I myself do not have "libraries" for my event-oriented stuff, I just have patterns (with related unit test cases and examples) I adapt for each use case separately.  I seriously dislike the idea of having one large library that provides such things, because it leads to framework ideation where you do things a certain way because it is already provided for you, instead of doing things the most efficient or sensible way.  Many small libraries, on the other hand, easily lead to (inter-)dependency hell.

Design patterns are a key concept, one which appears to be becoming less fashionable.

Quote
Do note I've consistently raised the idea of experimenting with how to express the various patterns, using an imagined language (but at the same time thinking hard about what kind of machine code the source should compile to).  So, there is no specific single pattern I'm trying to recommend anyone to use, I'm pushing/recommending/discussing/musing about how to experimentally discover better-than-what-we-have-now ways of describing the patterns we already use, and build a new language based on that.

If you've ever taken a single course on programming language development, or any computer science courses related to programming languages really, this will absolutely look like climbing a tree ass first.  Yet, I have practical reasons to believe it will work, and can/may/should lead to a programming language that is better suited to our current event-oriented needs on resource-constrained systems than what we have now.

(As to those practical reasons: I've mentioned that I've occasionally dabbled in optimizing work flows by spending significant time and effort beforehand to observe and analyse how humans perform the related tasks.  This itself is a very well documented (including scientific papers, as well as practical approaches done in high-production development/factory environments) problem solving approach.  My practical reasons are thus based on treating programming as a problem solving workflow.  This has worked extremely well for myself (in software development in about a dozen programming languages), so I have practical reasons to expect that this approach, even if considered "weird" or "inappropriate" by true CS folks, will yield very useful results.)

As I point out to inexperienced softies, when building systems neither top-down nor bottom-up is sufficient. Both are necessary - and they should meet each other in the middle :)
Title: Re: event-oriented programming language
Post by: tggzzz on January 06, 2023, 05:21:21 pm
The processors are not too expensive, sure, but I'd like to start with a development kit.  The cheapest active (non-obsoleted) one I can get from Digikey here, XK-EVK-XU316, costs 266€ including VAT.  That is quite a lot for me: remember, I'm a poor burned out husk of a man without steady income.

Yes, that's a bummer.

I have 5 SBCs that cost £15 each :) USB on one side and pins on the other, conceptually similar to an Arduino.

Quote
But anyway, it's more that I've sworn to myself to no longer give money to vendors who just use that to fuck me over later on.  Burned before too many times, shame on me, you can't fool me
tggzzz said that the ecosystem works, so perhaps I should just give it a try, if I can find a cheap/used kit.. and I do like the background of the company.

But, I cannot download the XTC Tools (https://www.xmos.ai/software-tools/) without registering, and the 15.1.4 release notes say "Inclusion of the lib_xcore system library and headers marks a shift in emphasis towards programming the xcore in C, rather than XC", which, combined with the shift towards audio processing, makes me worry about the longevity of the platform.

Oh; that's concerning.

They do appear to be drifting towards ML, but I hadn't spotted a potential drift away from xC. They've always had tight C/C++ interoperability.

If the underlying hardware architecture is the same, then it ought to be possible to achieve the same benefits in C. A sugary syntax is less important than the patterns.
Title: Re: event-oriented programming language
Post by: DiTBho on January 06, 2023, 06:25:09 pm
you do realize even XMOS uses gdb

Yup, because it must be cheap! gdb is not bad, it's just that it doesn't give you the experience you get with high-end ICEs.
Different costs, different purposes, different experiences.
Title: Re: event-oriented programming language
Post by: Nominal Animal on January 06, 2023, 07:01:27 pm
As I point out to inexperienced softies, when building systems neither top-down nor bottom-up is sufficient. Both are necessary - and they should meet each other in the middle :)
Quite!  To me, it is analogous to the modular approach, where one constructs the solution from modular pieces, instead of trying to define the solution beforehand (top-down) or just throw stuff together and see what comes out (bottom-up).

For me, the typical application design process starts with a lot of thought, and some unit tests to see what kind of modules I have for the lowest-level key parts.  Often it is an iteration, with my overall design changing and evolving, until I have a full plan.  Even then, I typically end up needing a full rewrite later on, when I have some experience using the application, and can find ways to optimize the workflows.

With such an approach, things like combining proprietary computation (in a dynamically linked library) with an open-source, user-modifiable user interface (written in e.g. Python) are no longer "strange": you look at the licenses and their intent (and possibly existing case law) to define the generally accepted division, which also defines the API between the two sides.  You then start developing each side modularly/piecewise, hopefully refining (or even redefining) the API as needed by both sides.  Web-based services are in a very similar situation, with server-side logic never revealed to end users, but client-side logic (run within the browser) always exposed (although JavaScript/WebAssembly obfuscation and minification try to hide it).

(To see the industrial technology behind a "black-box voter and its redundant system" can be much worse, [...])
I'm not actually disagreeing with you, just explaining the reasons for my current opinion/stance.

I'm taking the hobbyist approach here, because a thing like a new programming language will take something like a decade before it is ready to be widely used in industrial applications.  Much less, if you create a variant like you did with my-c, of course; but here, we're talking about an event-oriented language, designed basically from scratch.  Indeed, I've been talking more about how I'd like that design process to go, instead of what the resulting design should be!  ;D

The problem with vendor-created programming languages or toolchains is a three-edged sword.  On one hand, development is limited to the resources the company can afford to put into it.  On the other hand, for any for-profit company, such spending must be explained to the shareholders, and essentially has to generate income.  When it comes to language development –– things like ease of use and portability –– the company's interests and the client developers' interests do not align too well.  On the gripping hand (it's a reference to Moties), the vendor is in full control of the language, and if you are big enough, you can get the vendor to extend/modify the language so that solving a particularly intractable detail becomes easier; rather than having to convince basically unpaid volunteers that such a change is better for everybody (or at minimum, harms nobody), and do the hard work needed to implement the change as well.

Open source development is not cheap, it is just ... different.  The rules are different, I mean.

considering the psychological cost of that training, it was a rather insanely bizarre experience, which I'm not even allowed to talk about because of what I had to sign (worst negative).

So even before I got home, I could quite emotionally and logically understand Cypher when, in the first Matrix movie, he betrayed his friends to re-enter the Matrix.
I do know how weird NDAs can get; that's why I included "(properly written)" when mentioning those.

As to Cypher, I always perceived him as overly naïve; like the people who are ardent proponents of one political system or another, believing that it would make life all roses and butterflies, with everyone happy (and those who disagree silenced without messing up the clean world).
Yes, he was put into a hard position without asking him whether he wanted it or not.  Thing is, easy answers don't make people happy.  You might think you'd be happier if you didn't know or did not have a specific impactful experience –– they do say "with knowledge comes pain" –– but fact is, it is experience and knowledge that shapes us.  That unknowing person would be a different one than you are now, so it is making a decision for someone else.  There is no guarantee that other you wouldn't be even more miserable, for example because they never got to make the choice in the first place.

This relates very closely to programming languages and software development.  It seems that one truly has to work on difficult and painful projects before one understands the importance of maintainability and readability –– and even then, some just never grasp it.

In this analogy, one might wish they never had to encounter a specific project –– like me with a certain Perl project by authors I like, but who definitely were not suited for creating that project, back then at least; causing me to avoid Perl to this day! ––, but it is exactly those painful experiences that teach us the cost of doing things that way.

Red Pill always has a price to pay.
Yup.  Put more generally: There ain't no such thing as a free lunch. (https://en.wikipedia.org/wiki/There_ain't_no_such_thing_as_a_free_lunch)

(Definitely also applies to open source, as well.  The costs and payments are just measured using a completely different yard stick.)
Title: Re: event-oriented programming language
Post by: DiTBho on January 06, 2023, 07:20:38 pm
I just discovered an eBay feature that I didn't know about: you can save a search and ask eBay to notify you - by email - when it finds something. This way you can find a second-hand board for cheap without having to actively search every day.

Great stuff  :D

Title: Re: event-oriented programming language
Post by: tggzzz on January 06, 2023, 07:58:19 pm
you do realize even XMOS uses gdb

Yup, because it must be cheap! gdb is not bad, it's just that it doesn't give you the experience you get with high-end ICEs.
Different costs, different purposes, different experiences.

I always found the debugging experience excellent with the XMOS IDE, which is based on Eclipse. I expect it is the same as debugging C/C++ in Eclipse: certainly breakpoints, and single-stepping at both the xC and machine-code level (as fast as possible with highly optimised C). Plus it shows the exact number of cycles between here and there via all paths - without executing the code.

What am I missing?
Title: Re: event-oriented programming language
Post by: DiTBho on January 06, 2023, 10:33:22 pm
What am I missing?

Dynamic coverage, performance analysis, code and data injection, continuous built-in test stimulus for normal and abnormal behavior –– very useful for verifying CBIT and for simulating hardware failures ... etc.

Also, uploading is massively faster (fiber-optic link, up to 400 Mbyte/sec), and you also get automatic test reports through the ICE itself, which requires large built-in buffers and local AI intelligence for pattern matching.

Lot of powerful stuff :D
Title: Re: event-oriented programming language
Post by: Zipdox on January 11, 2023, 02:56:51 pm
JavaScript is a largely event-driven programming language. The JavaScript runtime runs an event loop. Other languages can have event loops too, like C with GLib, but JavaScript has by far the nicest syntax for event-driven programming (at least of those I have used).
Title: Re: event-oriented programming language
Post by: SiliconWizard on January 11, 2023, 07:43:56 pm
We can mention Erlang too, although this one is a particular beast in itself. I like the concepts, but not so much how they have been translated. Certainly interesting to learn though.
Title: Re: event-oriented programming language
Post by: DiTBho on January 11, 2023, 08:29:17 pm
We can mention Erlang too, although this one is a particular beast in itself. I like the concepts, but not so much how they have been translated. Certainly interesting to learn though.

Yup, I do program in Erlang, but for me it's dedicated stuff (mnesia-only), rather than general purpose.

At the moment I am trying to add support in my-c for transforming a function with multiple arguments into a sequence of single-argument functions; that means converting a function like f(a, b, c, ...) into one like f(a)(b)(c)...

I desperately need it to use the map method with a curried function, even though my-c is not a functional language like Erlang or Haskell.

Oh, from the docs (I am a complete nuuuub with JS), it seems that JavaScript can easily do it via a "function wrapper".

That's good!!!  :o :o :o

So, I'll probably try to add a "function wrapper" mechanism to my-c(1), hopefully it fixes my problem.


Meanwhile, let's {read, study, play} more about JS!



edit:
(1) functions in my-c always have a fixed number of arguments
"..." removed, so printf(...) can no longer be implemented, and I'm SO HAPPY about this, because nobody will ever try to resurrect it, nobody, never, no way!!!  ;D
Title: Re: event-oriented programming language
Post by: gf on January 12, 2023, 09:45:32 pm
At the moment I am trying to add support in my-c for transforming a function with multiple arguments into a sequence of single-argument functions; that means converting a function like f(a, b, c, ...) into one like f(a)(b)(c)...

I desperately need it to use the map method with a curried function, even though my-c is not a functional language like Erlang or Haskell.

Oh, from the docs (I am a complete nuuuub with JS), it seems that JavaScript can easily do it via a "function wrapper".

That's good!!!  :o :o :o

So, I'll probably try to add a "function wrapper" mechanism to my-c(1), hopefully it fixes my problem.

The key primitives which make such things possible are closures and first-class functions –– and JavaScript functions are both.
Title: Re: event-oriented programming language
Post by: Nominal Animal on January 14, 2023, 05:49:49 pm
SiliconWizard linked a Youtube video in another thread, GOTO 2018: Old is the New New, by Kevlin Henney (https://youtu.be/AbgsfeGvg3E), talking about how many of the things that are touted as New have been known for quite a long time.  I loved it, because I have had similar vague opinions for a long time, but without a clear basis I could explain beyond my own observations; having someone show the basis and refer to the historical publications involved is extremely useful.
(But now I have to go hunt down those papers and books, dammit.)

Anyway, that talk made me realize one possible model for 'event handlers' is to model each handler as a process, with events and associated data passed in messages/events between them.  Instead of global memory accessible to all event handlers, each event handler would have their own state/context object(s), plus possibly access to explicitly named read-only or 'slow-atomic' objects.  (Do note that 'process' here is the concept, I'm not referring to OS processes.)

For example, consider an event handler that receives a chunk of samples and calculates a windowed DFT with 50% overlap (so it always has a "previous chunk" and a "current chunk", and generates two DFTs per chunk received, with chunk and DFT window sizes the same).
If there is some kind of memory arena concept, then the samples and their DFTs could use a shared memory arena, with room for say four sample chunks and four DFTs.  Whenever the handler receives a new chunk, it would calculate the windowed DFT that crosses both chunks (into a newly allocated DFT message), then drop the old sample chunk and send the DFT message; then calculate the windowed DFT for the new chunk (into a newly allocated DFT message), remember the new chunk as the old chunk in its own context, send that DFT message, and be done.

This approach allows static analysis tools to verify all memory accesses within the handler are valid, whether incoming data objects as message payloads are correctly handled (remembered, passed forward, or completely freed/dropped), and so on.  Obviously, some kind of "auto-drop unaccessed data objects" mechanism is required, for the code to be maintainable.  Reference counting data objects in messages would provide a very easy mechanism for that; essentially, something akin to systematic garbage collection with collection done whenever the handler completes.

(The 'process' model also implies that each event handler is not re-entrant wrt. the same state/context object –– or rather, that each state/context object is single-threaded ––; this has significant implications for how e.g. the stack can be managed across many event handlers, using just one stack per concurrent thread.)

This is still quite vague in my mind, still forming, but I believe there might be something useful in here.  The Discrete Fourier Transform (DFT) example above also illustrates how its memory management requirements would differ quite a bit from the traditional C and C++ models, what with the concept of "owner" (tracked by the compiler or interpreter at compile time, not at run time).
A key facet I'm adamant on is that instead of pointers, objects are treated as memory ranges (or sets of memory ranges), so that we can finally get rid of silent buffer overrun bugs.
Title: Re: event-oriented programming language
Post by: SiliconWizard on January 14, 2023, 07:17:26 pm
I remember having created a thread about message passing as a way of circumventing synchronization problems entirely. Of course, that was absolutely nothing new.
It has been shown to scale much better and be more robust, but it's still used in niche applications only.

If you are processing a large amount of data, it may look less efficient. All this message passing looks pretty expensive at first sight. But it does require rethinking your data flows almost entirely.

Synchronization and concurrent access in parallel computing are the bottleneck, and are notoriously difficult to get right, and even harder to prove correct.

I have written a small library with message queues and did some experiments with it. It turned out to make it very easy to get near 100% CPU use across all cores for multithreaded computation, compared to a more typical approach. I plan on using that more often.
Title: Re: event-oriented programming language
Post by: Nominal Animal on January 14, 2023, 07:47:48 pm
I remember having created a thread about message passing as a way of circumventing synchronization problems entirely. Of course, that was absolutely nothing new.
It has been shown to scale much better and be more robust, but it's still used in niche applications only.

If you are processing a large amount of data, it may look less efficient. All this message passing looks pretty expensive at first sight. But it does require rethinking your data flows almost entirely.
I do believe such change in thinking suits the event-driven paradigm quite well, too.

And, as you said, the 'looks expensive' is just at first sight.

At the hardware level, passing data from one event handler to another is just a pointer, size, and object type (unless included in the passed data object itself).  Avoiding unnecessary copying of data ("zero-copy" approaches) tends to be quite important whenever you have lots of data flowing about.

The key is to consider these event handlers as if they were separate processes inside a superprocess, so that while there is no MMU to stop one event handler from accessing whatever memory it wants within the superprocess, the language itself tracks the accesses at build time: the language itself acts like the MMU.
The idea is to have all memory accesses statically verifiable at compile time, by encouraging (or requiring) developers only use patterns that allow that.

(This in turn implies that concepts like "allocate" and "free" are not sufficient; we need concepts something like "accept" or "take ownership for", and "send" or "release ownership for", so that attempts to access the data afterwards is detected as a violation.)

I mentioned in another thread that if C did not allow or automatically convert between arrays and pointers, and in function declarations variably-modified types were allowed to refer to later parameters in the same argument list (so that one could use (char buf[len], size_t len) and not just (size_t len, char buf[len])), the compiler could track basically all buffer accesses at compile time within each compilation unit, and detect most buffer over/underrun bugs.  (If you write such code today, and carefully avoid using pointers, GCC and Clang do that for you already.)
Title: Re: event-oriented programming language
Post by: tggzzz on January 14, 2023, 10:32:49 pm
Anyway, that talk made me realize one possible model for 'event handlers' is to model each handler as a process, with events and associated data passed in messages/events between them.  Instead of global memory accessible to all event handlers, each event handler would have their own state/context object(s), plus possibly access to explicitly named read-only or 'slow-atomic' objects.  (Do note that 'process' here is the concept, I'm not referring to OS processes.)
...
This is still quite vague in my mind, still forming, but I believe there might be something useful in here. 
...

There is indeed.

I'll generalise your points to include hardware, which also responds to events such as input changes, and which also doesn't have global memory.

If you aren't already familiar with them, do have a look at CSP (Communicating Sequential Processes) and Erlang. CSP makes no theoretical distinction between hardware and software, and has been practically embodied in xC and Occam. The design patterns in those correspond closely to your thoughts above :)
Title: Re: event-oriented programming language
Post by: tggzzz on January 14, 2023, 10:44:03 pm
I remember having created a thread about message passing as a way of circumventing synchronization problems entirely. Of course, that was absolutely nothing new.
It has been shown to scale much better and be more robust, but it's still used in niche applications only.

Important niches, and moving towards being mainstream with modern hardware capabilities/limitations and modern languages.

Quote
If you are processing a large amount of data, it may look less efficient. All this message passing looks pretty expensive at first sight. But it does require rethinking your data flows almost entirely.

There is a tendency that larger blobs imply less frequent messages, thus balancing out to some extent.

Message passing needn't be more expensive than the alternatives. If two processes share memory, then messages only need to pass pointers in a disciplined way. If they don't share memory, then you have to copy anyway.

xC supports both types :)

Quote
Synchronization and concurrent access in parallel computing are the bottleneck, and are notoriously difficult to get right, and even harder to prove correct.

There are two levels to consider:

By design, CSP has solid theoretical properties at the application level. In the general case, proving absence of deadlock/livelock will never be easy :)

Quote
I have written a small library with message queues and did some experiments with it. It turned out to make it very easy to get near 100% CPU use across all cores for multithreaded computation, compared to a more typical approach. I plan on using that more often.

I've used Doug Lea's concurrency classes and the half-sync/half-async pattern to similar effect in telecom server applications. Another company was so surprised that, unexpectedly, they bought access to the application so they could see how it was done :)