You can't test quality into a design. Tests can only prove the presence of faults, not their absence.
I'm sure you are perfectly aware of that, but it horrifies me every time someone is surprised by that new (to them) concept.
Sure you can. That's what they said about semiconductors. The yield might've been shite early on (like the <1% yields of early Japanese transistor lines, or certain Intel lines, etc.), but all they needed was a few parts that worked, and refinements to gradually bring that up.
Mind, this is a case where the design is essentially correct; there are just finitely many errors that occur in manufacturing (process impurities, dust, etc.), and they only need to get lucky enough to find one part free of defects.
Does that work in software? Maybe. But it's worth noting, if nothing else, the meaning behind that statement, and where it may or may not apply. It was applied erroneously to semiconductors, by those righteous hardware-reliability types. We must always be cognizant of the limitations of our knowledge, like this.
So, software. Well, fundamentally it's a design, not a production process. So, the above is out (and, to be clear, I'm not trying to force the above meaning into current context!).
And, I know where you're coming from. To be clear: software design is something that -- given adequately comprehensive specifications -- we can prove, perfectly, to work. Not just "beyond a shadow of a doubt", not anything that could be tested (complete test coverage is combinatorial in complexity; it can't be done in general!), but perfect proof.
Assuming the toolchain and everything underneath it is working correctly, I mean -- but a lot of work goes into those, along similar lines, when you're asking for something so reliable. Provable stacks exist, from transistor level to compiler.
Now, I'm not quite sure if you're talking about formally proven systems here, or more informally, but it's good to know in any case that it's out there, and doable.
AFAIK, provable computing is not very often used, even in high-rel circles, just because it's, I don't know, so much of a pain, so different from regular development processes?
And most of the time, it doesn't matter: if the thing does what it needs to, most of the time, and is reasonably tolerant of nonstandard inputs (as fuzzing can cover -- whether formally, or by the crude efforts of testers), who cares, ship it. Some customers will eventually hit the edge cases, and maybe you patch those things up on an as-needed basis. Maybe the thing is still chock full of disastrous bugs (like RCE), but who's ever going to activate them? And what does it matter if it's not a life-support function, or connected to billions of other nodes (as where viruses can spread)?
So, to be clear, it depends on the level of competency required. Provable computing is just another option in the toolbox.
Clearly, you're approaching things from a high-rel standpoint. That's an important responsibility. But it's also not something that can be applied in general. At least, not with developers and toolchains where they are right now.
And that's even assuming that every project was specified perfectly to begin with. Clients or managers come to engineers for solutions, not for mathematical proofs; it's up to the engineers to figure out if proofs are warranted, or if winging it will suffice. And for 99.9% of everything, the latter is true, and so things are.
And, I also mention testing for a couple reasons:
1. It's the most basic way to figure out how something works (or doesn't). It can be exceedingly inefficient (trivially, say, how do you test a 1000 year timer?), but to the extent anything can be learned by doing it, in any particular case -- that's at least some information rather than complete ignorance, or guesswork.
2. There's "test driven development". Which, I don't even have any good ways to do in most embedded projects; most of the tools I have don't come with test suites, so I can't even run tests to confirm they work on my platform. And most embedded platforms have no meaningful way of confirming results, other than what I've put into them (e.g. debug port). In relatively few cases, I can write a function in C, and test it on the PC -- exhaustively if need be (a lazy, and often infeasible, method, but when it is feasible, it's no less effective than direct proof).
TDD can be equivalent to proof, even without exhaustive testing, if all code paths can be interrogated and checked; granted, this is also, in general, not something you're often going to have (the code paths are invisible to the test harness, and highly nonlinear against the input, i.e. how the compiler decides to create branches may vary erratically with how the input is formulated or structured). Though, this hints at something which can: if we add flags into every code path, and fuzz until we find the extent of which inputs, given other inputs, activate those paths, we can attempt to solve for all of them -- and as a result, know how many we're yet missing.
TDD I think is mainly a level-up in responsibility, where the project is persistent enough to not only be worth writing tests for, but to accumulate tests over time as bugs are found (write a test for it, to prevent it popping up in later refactoring!), while evolving new features -- extending an API while keeping it backwards-compatible, say. It's far more agile than drawing up a comprehensive provable spec every time, and it's reliable enough for commercial application. (So, it would figure that I haven't been exposed to it; I simply don't work on a scale where that's useful, besides the practicability issue.)
(And maybe I'm overstating how much trouble it is to do provable computing, or something in the spirit of it, if not formal. I don't work with it either, and curious readers should read up on it instead.)
And fuzzing: while it's still not going to be exhaustive, anywhere that we can ensure, or at least expect, linearity between ranges (i.e., a contiguous range of inputs does nothing different with respect to execution), we are at least very unlikely to need test coverage there. (Insert Pentium FDIV bug here.)
Actually, heh, I wonder how that's affected by branch-free programming. One would want to include flags equivalent to program branches. So, it's not something that can be obviously discovered from the machine code, for example; the compiler may emit branchless style instructions instead of literally implementing a control statement. It might not even branch in the source, if similar [branch-free] techniques are used (like logical and bit operators, and bit or array vectorization tricks).
Frequently it is useful to have multiple chained FSMs. In your example, the first FSM would gather bits into a char, then generate a "char received" event. Another FSM would react to char-received events, and gather them into a "packet received" event. Another FSM would react to "packet received" events, and generate maybe "high level" events.
Some of those FSMs will be implemented in software, some in hardware; who cares: an FSM is an FSM.
That's nothing novel; the entire telecom system is specified in that way, where the "highest" level events might be "call connected", "call disconnected", or "money run out" events.
And, it's no accident that it's reminiscent of (or explicitly referencing..) the OSI model, which came out of telecom (more or less?)!
And, compare to the traditional JS VM: there's only ever one thread of execution*, and control is transferred to a received event after each function (or the global) finishes.
*Except Workers, but those are a newer feature, hence, "traditional JS"...
I've always run away from anything JavaScript based. I have zero interest in interactive web pages.
As a point of reference, for those more on the software or web dev side of things, you understand.
In my experience it is best to have only two levels of thread priority and/or interrupt priority: "normal" and "panic". You should be able to do anything with those, and if more are introduced then it is a sign the system needs radical refactoring before the technical debt becomes intractable.
I will listen to other cases, but will ignore claims of "convenience" and demand to be convinced by claims of "necessity".
Agreed.
Tim