My question is ...
What tools do you use to do unit testing etc when the development platform is not the same as the target platform? As in development is on ye olde x86 PC, and target is an MCU.
Compile for different target, and hope for the best? Compile for correct target and run in an emulator? Run on the actual MCU? If on the MCU what kind of communications? UART? ITM?
As for macro or not ... why not? I know macros have their drawbacks, but I've also seen some alternatives that are C++ templates galore. Doesn't that bring the risk that you get to feel all happy and purist about it, but don't get anything done? And I don't mean the STATIC-versus-static qualifier trick. I mean using macros for your asserts and such. Nothing wrong with that IMO inside the unit test code (not the production code). Or maybe I'm missing something...
...
C++ templates are the devil's invention, which have to be used as a sticking plaster over the gaps in C++
...
That's a bit strong. Like everything else in C++, knowing when and where to use (or not use) a feature is just as important as knowing the feature itself.
True. But while developing the template specification, the designers didn't comprehend what they were creating. In particular, they refused to believe that the specification was itself a Turing-complete language. Until, that is, someone presented a short and valid C++ program that caused the compiler to, very slowly, emit the sequence of prime numbers during compilation.
I just had to find out more about that ... the paper is available at http://ubietylab.net/ubigraph/content/Papers/pdf/CppTuring.pdf
I am just starting out with TDD in the embedded world.
I am a noob when it comes to TDD - embedded or otherwise.
The book Test Driven Development for Embedded C by James W. Grenning is a big help in understanding the importance of TDD, but it has its WTF moments where some of what he talks about goes over my head. Fortunately, that has more to do with the frameworks he has chosen than with the principles behind it.
My question is - how do you guys manage to do TDD, partially or completely? Or is it just a waste of time?
Why are you starting out with this? What draws you to it? Programming paradigms are 10 a penny. Why are you interested in this one?
As I've said, I really do not have enough experience or maturity to comment on it. I am just learning.

Correct, it is not really very useful when the point of following a paradigm is missed. I am a noob, as I've stated above. I would like to understand the what, where, why, and how correctly rather than blindly following them. I started this post to ask more experienced developers for their insights and experiences. It will certainly help me avoid some of the issues you've mentioned.

I do understand what you wrote about verifying an algorithm. I concur.

It is, after all, a tool. How we use it to achieve what we want is up to us. Someone might see the use in verifying some specific cases; someone else might want to run the entire range of acceptable and unacceptable values. It is purely dependent on the user.

I also think that, with some effort, TDD can be an extremely useful tool to map out code flow and code coverage.
Of course, exercising a path is only effective if the results of exercising it are visible to the test harness. And that can be extremely difficult to achieve for code that is designed to trap "should rarely occur" exceptional conditions. Which raises the question: should you break encapsulation to enable testing?

I am currently reading "Modern C++ Programming with Test-Driven Development" by Jeff Langr, and he takes the position that it is better to be able to test than to maintain some academic sense of purity of design, and that it usually isn't a problem. There are of course many ways to handle this. For example, we use macros that can make a static function global when building the tests.
Bugger "academic purity". His is the kind of sentiment I have previously heard from XP/Agile/TDD religious zealots. Unless you are only considering tiny toy applications, encapsulation is the key to large programs/systems which are reliable, maintainable, and high-performance.
I suggest you read books by someone that has been at the sharp end of writing commercially important code that has to work reliably and be extended for years.
Have a look at Jeff Langr's website, and see if you can find what programs he has written (as opposed to at which companies he was a mentor).
...
The unit testing has made a big difference (although an insufficient amount of good acceptance testing has cost us a bit--which says that, of course, unit testing is only a portion of what you need to do to deliver quality systems).
As far as the TDD is concerned, it's paid off more often than not, though it can be a bit challenging at times (and we punt sometimes). It's not magic and doesn't solve all problems--that's some of the take on TDD I try to present in the book, how to approach it from a pragmatic stance. I've done plenty of non-TDD, too, and while I can survive that way, it's simply not as effective--or fun--for me and my colleagues, *particularly* on larger systems. The goal is to build a system that's maintainable and keeps the cost of change to a minimum. The ability to know that you can make changes without unwittingly breaking stuff (too easy to do) matters a lot, and is the main reason I do TDD. (You can get there with test-after, but I find it to be harder and less effective.)
You're correct, encapsulation is important, and core design principles matter a lot. That too is a personal emphasis I have on development. Some of the TDD folks buy into a heavily mock-based approach; I think mocks need to be used carefully, otherwise you get into some nasty dependencies of the tests on private details, and that can really squash your ability to refactor your code when you must. My recommendation is to minimize and isolate any such exposures--but it's still more important to know that the code works. So I allow certain elements to be inspected and possibly overridden. It's yet to bite me.
Except it completely changes what the compiler does.
Many low-complexity functions still yield a complex piece of software, just split up more. That doesn't necessarily make it less prone to bugs.
Quote: Except it completely changes what the compiler does.

Changing a function's linkage does not alter its logic, which is what's being tested. Never mind that the test is building a tiny part of the system in isolation, using a different compiler, on a different computer architecture, running a different OS.
Quote: Many low-complexity functions still yield a complex piece of software, just split up more. That doesn't necessarily make it less prone to bugs.

But it makes the individual pieces easier to test.
Now consider compiler optimisations which make presumptions (that may or may not be correct) about multithreaded code, aliasing, and all the other poorly defined aspects of C. And then go on to consider the interaction with libraries supplied by other companies. And don't forget the myriad compiler errors that do occur.
But such tests are often very uninteresting and unilluminating - and therefore not a particularly good use of developer's time.
Quote: Now consider compiler optimisations which make presumptions (that may or may not be correct) about multithreaded code, aliasing, and all the other poorly defined aspects of C. And then go on to consider the interaction with libraries supplied by other companies. And don't forget the myriad compiler errors that do occur.

Again, since the focus of the tests is the business logic (in our setup anyway), that's kind of not relevant. And while compiler errors do happen, in practice they're rare enough that it's not much of a concern in day-to-day development.
Quote: But such tests are often very uninteresting and unilluminating - and therefore not a particularly good use of developer's time.

As Jeff Langr posted, one of the purposes of tests is to alert you to unintended changes of behaviour. From that perspective uninteresting tests are also useful.
/*@ ensures \result >= x && \result >= y;
ensures \result == x || \result == y;
*/
int max (int x, int y) { return (x > y) ? x : y; }
/*@ requires \valid(p) && \valid(q);
    ensures *p <= *q;
    ensures (*p == \old(*p) && *q == \old(*q)) ||
            (*p == \old(*q) && *q == \old(*p));
*/
void max_ptr(int* p, int* q);
/*@ requires \valid(root);
    assigns \nothing;
    ensures \forall list* l;
            \valid(l) && reachable(root,l) ==> \result >= l->element;
    ensures \exists list* l;
            \valid(l) && reachable(root,l) && \result == l->element;
*/
int max_list(list* root);
Have you guys ever tried or even researched formal methods and tools? I have worked in the software industry for 20+ years and I have seen so many fads like TDD, so my cynicism and skepticism about anything like that is around level 11.
Ever since their beginnings back in the 1970s and 1980s, formal methods were always somewhat impractical, an academic toy to play with. However, over the past 5-6 years I have seen steady adoption of some tools in industry. When I read an article by John Carmack about the application of static analysis tools in gaming, I said "It is coming, I can hear it!". Here is a little list of tools that I have seen used in the real world:
<snipped>
Those look like little more than Eiffel-style pre- and post-conditions. They are nice for toy examples in an academic context, but useless in more industrial contexts. For example, how would you express them for an FFT or inverse FFT function, or even for something as simple as calculating the cost of a phone call?
I'll start taking them seriously when they can be used for something interesting and useful, for example proving that a set of communicating FSMs are deadlock-free, or proving the liveness of some real-time code. Background: the last time I showed a pure mathematician a real-life FSM (from network protocols), he recoiled in horror at the complexity.
But yes, such techniques can be useful in limited circumstances, e.g. proving the correctness of floating point implementations.
Quote: For example, how would you express them for an FFT or inverse FFT function, or even something as simple as calculating the cost of a phone call.

Interesting question. How would you go about testing the correctness of an FFT implementation? And then the exact same question, but for testing an FFT implementation on another platform, where you have some room for a limited test environment but by no means as much as on a modern PC. Let's say the target is a Cortex-M3.
Quote: I'll start taking them seriously when they can be used for something interesting and useful, for example proving that a set of communicating FSMs are deadlock-free, or proving the liveness of some real-time code. Background: the last time I showed a pure mathematician a real-life FSM (from network protocols), he recoiled in horror at the complexity.

Again, an interesting problem. I'm not a big fan of following the latest acronym soup, but rather a fan of mix and match. TDD (or rather my limited understanding of TDD) seems to have some nice ideas, and it's not as if these ideas are unique to TDD. The way I look at it is as a collection of ideas from which you grab the ones you like and ditch the ones you don't. All that introductory waffle for the following point: one of the ideas of TDD is that you let the tests drive the design. And let's not get into a whole debate about good or bad on that. Short version: who the fuck thought it a good idea to let tests dictate the product? However, having your tests inform your design seems like a good idea to me.
You want a design that is testable.
So when you find out while writing your test plan that the way you designed your product makes it really, really difficult to test, then maybe it's time to rethink the design. Case in point: the tangled FSM web. If you find that testing that sucker gives your local friendly math dude a migraine, then maybe a less tangled redesign is in order? And I'm not saying you should have rewritten it, because what do I know. It's your design, and you know it way better than some random internet person. I'm just thinking out loud here. Isn't one of the (IMO useful) ideas of TDD to have your tests inform the design? Assuming the aim is to end up with a design that is testable, you want a design that is as test-friendly as possible. Maybe rethinking the FSM isn't the best example, because when implementing protocols the protocol is a given. But hopefully it gets the point across.
Incidentally, how do you prove the FSM doesn't get stuck in an unintended state?
I think if you split the FFT into small functions and then prove them one by one, you will arrive at your destination.
For large FSMs, deadlock checking etc., I had a very good experience using PROMELA + SPIN. Promela is a language and SPIN is a tool that checks Promela models. It is very easy to use, in my opinion. It does not, however, automatically generate C code. It is NOT an academic toy - it was designed at Bell Labs and used by them to check phone station/PBX software. It looks like C.
Being involved in infrastructure/environment/stack or embedded code, I haven't had that luxury.
Quote: Being involved in infrastructure/environment/stack or embedded code, I haven't had that luxury.

Are you really trying to claim that you run into so many compiler bugs that unit tests are useless? Because you're either overstating the problem or you need to switch compiler vendor post-haste.