EEVblog Electronics Community Forum

Electronics => Microcontrollers => Topic started by: krish2487 on November 20, 2014, 07:35:08 am

Title: Test Driven Development
Post by: krish2487 on November 20, 2014, 07:35:08 am
Hello.


I am just starting out with TDD in embedded world.
I am a noob when it comes to TDD - embedded or otherwise.




The book TDD in embedded C by James W Grenning is a big help in understanding the importance of TDD, but it has its WTF moments where most of the stuff he talks about goes over my head. Fortunately, that is more to do with the frameworks he has chosen than with the principle behind it.


My question is: how do you guys manage to do TDD, partially or completely? Or is it just a waste of time?


If yes, then what frameworks do you guys use? I ask because I am leaning towards CUnit (mostly because it is easy to follow along, with decent tutorials on the interweb).
Title: Re: Test Driven Development
Post by: tggzzz on November 20, 2014, 11:08:27 am
TDD is valuable providing you understand its limitations - which many religious zealots don't.

There's an old engineering adage: "you can't inspect/test quality into a product".

TDD is critically dependent on the quality and completeness of the tests. Writing good tests is often more difficult and time consuming than the design itself - and can be practically impossible. Consider, for example, trying to test that your design copes with metastability in synchronizers, or that the correct database ACID properties are being used.

TDD can help you to build the "product right", but cannot ensure that you are building the "right product". (Validation vs verification)

TDD only helps with the things you test; guess what happens if you don't test important aspects of your product!
Title: Re: Test Driven Development
Post by: andersm on November 20, 2014, 11:15:19 am
At my work we use Unity and CMock (I believe they hired Grenning to help set the system up, but that was before I joined). We rarely do TDD "by the book", but we are big users of unit tests. They are run both on workstations while developing, and on our CI servers after committing to version control.

The best advice I can give is to write your code with testing in mind. Our test suite has been retrofitted onto an existing codebase, and the dependency spaghetti has made the tests both more difficult to understand and slower to execute. Our complete unit test suite (currently about 5900 tests, for a family of products) takes almost fifteen minutes to build and run on pretty beefy quad-core machines. We have set up macros in our IDEs that can run a single unit test file, or all the tests for one project, which helps a lot.
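For anyone who hasn't seen Unity, a minimal sketch of what a test file looks like (the crc16 module and its API are hypothetical here, though 0x29B1 is the well-known CRC-16/CCITT-FALSE check value for "123456789"):

Code: [Select]
#include <stdint.h>
#include <stddef.h>
#include "unity.h"
#include "crc16.h"   /* hypothetical module under test */

void setUp(void)    {}   /* Unity runs this before every test */
void tearDown(void) {}   /* ...and this after every test */

/* Known-answer test against a published check value */
static void test_crc16_known_vector(void)
{
    TEST_ASSERT_EQUAL_HEX16(0x29B1, crc16((const uint8_t *)"123456789", 9));
}

/* Corner case: an empty buffer should return the initial seed */
static void test_crc16_empty_buffer_returns_seed(void)
{
    TEST_ASSERT_EQUAL_HEX16(0xFFFF, crc16(NULL, 0));
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_crc16_known_vector);
    RUN_TEST(test_crc16_empty_buffer_returns_seed);
    return UNITY_END();
}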
Title: Re: Test Driven Development
Post by: krish2487 on November 21, 2014, 03:48:16 am
Sound advice from both!
Also interesting to see that not many have actually replied to the question.


Thank you.

So for your typical run-of-the-mill small projects, as a percentage, how much time do you spend visualizing/arriving at the tests for the project?

I have a more specific question I wanted to ask. If you have a given module that has to be tested, what's the best compromise on the spectrum of test inputs that you typically run? I'll elaborate: if a particular function takes 1 argument as input and 1 as a return value, and the input can have 100 "correct" values, do you sweep the test across the 100 values and record the data, or is it more common practice to test, say, boundary values, a typical value and an exception value to see that the function and test pair work correctly? Thank you for your time!
Title: Re: Test Driven Development
Post by: andersm on November 21, 2014, 09:05:31 am
Exhaustive tests would be far too slow. A good guideline would be a representative case, plus any corner cases. While there is a bit of redundancy there, test cases also serve as documentation, which makes it worthwhile to explicitly include the "normal" case. Also keep in mind that the refactoring step is an integral part of the TDD loop, so while developing a feature your tests can be as complex as you like, as long as you clean them up before moving on.
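To make that concrete, a minimal sketch (the clamp function is a made-up module under test): one representative value plus the boundaries and their neighbours, rather than a sweep of the whole range.

Code: [Select]
#include <assert.h>

static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

static void test_clamp(void)
{
    assert(clamp( 50, 0, 100) ==  50);  /* representative "normal" case */
    assert(clamp(  0, 0, 100) ==   0);  /* lower boundary */
    assert(clamp(100, 0, 100) == 100);  /* upper boundary */
    assert(clamp( -1, 0, 100) ==   0);  /* just below the range */
    assert(clamp(101, 0, 100) == 100);  /* just above the range */
}

int main(void)
{
    test_clamp();
    return 0;
}

Five cases document the contract at least as clearly as a 100-value sweep would, and run in no time.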

We measure the code coverage of our unit tests, and while that says nothing about the quality of the tests, it does clearly show if there are some code paths that aren't exercised at all.
Title: Re: Test Driven Development
Post by: krish2487 on November 21, 2014, 09:34:40 am
Thank you!!!
That clears up a lot of doubts I've had for some time now.


Again, thank you for your time and patience explaining how it is done in the industry.
 :)
Title: Re: Test Driven Development
Post by: krish2487 on November 21, 2014, 11:51:49 am
You have to be very careful with TDD. There is a tendency to code to pass the tests, not to produce a robust system. The idea is that the tests are well designed enough to prevent that, but in practice they never are.


I can see what you are pointing at. I caught myself more than once trying to design a framework that would pass the test, not be inherently robust, as you said.


So how do you manage to avoid such issues while coding?


I am asking because I would like to learn the attitude behind coding safely. I am being deliberately vague and open-ended because I would like to know the members' experiences and coding processes. Some, if not all, will be helpful.
Title: Re: Test Driven Development
Post by: tggzzz on November 21, 2014, 05:02:53 pm
We measure the code coverage of our unit tests, and while that says nothing about the quality of the tests, it does clearly show if there are some code paths that aren't exercised at all.

I wish everybody understood that; I've worked on a project where people seemed not to understand/care about that point.

Of course, exercising a path is only effective if the results of exercising the path are visible to the test harness. And that can be extremely difficult to achieve for code that is designed to trap the "should rarely occur" exceptional conditions. Which raises the question: "should you break encapsulation to enable testing?".
Title: Re: Test Driven Development
Post by: tggzzz on November 21, 2014, 05:05:36 pm
You have to be very careful with TDD. There is a tendency to code to pass the tests, not to produce a robust system. The idea is that the tests are well designed enough to prevent that, but in practice they never are.

I've seen people seriously argue that the code works because the code passes the tests. Nonsense of course, but it can be difficult to persuade people otherwise!
Title: Re: Test Driven Development
Post by: Alexei.Polkhanov on November 21, 2014, 06:24:50 pm
I am a little famous for saying something like this in forums about 15 years ago:

"If tests fail, don't fix the software - fix the developers: add more inspections, code reviews, training, etc."

I had not heard of TDD when I said that. Nothing has changed since then, and in many cases the situation is much worse.

You cannot fix software after defects are found. The proof of that is not so obvious and lies in statistics. First of all, adding more tests only gives you benefits in terms of noise-to-signal ratio, NOT the number of defects found - that estimate may be just fine even from the first test. Let's say your tests have a yield of 10%, so you know you will find 1 defect out of 10. Knowing that is enough to estimate that in 1000 lines of code you have 10 bugs if your tests reveal one. Now if you add more tests you will find 2 - but why not simply multiply the number of defects from the previous test by 2?

The second point is easier to understand, because all developers intuitively know it already. When you attempt to fix defects in software after they were found by tests, you are making a "modification" as opposed to "writing new code". Every modification results in more defects than initial coding. At some point these two curves intersect and you are adding exactly the same number of defects as you are trying to remove. So you are heading towards replacing easy-to-find bugs with ones that your tests don't catch, but otherwise you are not improving anything.

Why is TDD accepted in some places? Well, first of all, many managers are familiar with this maths, because it is part of the curriculum. They often refer to it as the "clean room" approach or the Toyota method - but they always add that it is not practical HERE. The second reason is economics - software is made by people, not machines. If you invest in a new plasma etching machine that reduces your defects by 5%, you will get your money back in a year, for example. If you invest the same money into training your developers, improving the process, etc. - they will leave to work for another shop and your investment is wasted.




Title: Re: Test Driven Development
Post by: andersm on November 21, 2014, 07:40:15 pm
Of course, exercising a path is only effective if the results of exercising the path are visible to the test harness. And that can be extremely difficult to achieve for code that is designed to trap the "should rarely occur" exceptional conditions. Which raises the question: "should you break encapsulation to enable testing?".
I am currently reading "Modern C++ Programming with Test-Driven Development" by Jeff Langr, and he takes the position that it is better to be able to test than to maintain some academic sense of purity of design, and that it usually isn't a problem. There are of course many ways to handle this. For example, we use macros that can make a static function global when building the tests.
Title: Re: Test Driven Development
Post by: nctnico on November 21, 2014, 07:52:58 pm
Macros for function definitions.... yuk...

What I have learned in the past is to write code which is tolerant of getting wrong parameters from other parts of the program; IOW, trying to compartmentalize the software. Especially on boundaries between objects and APIs, I do checks on the parameters which are passed to a function. That way a wrong value causes the program to do nothing or produce an error, instead of doing something which is (slightly) wrong or causing something else/unrelated to fail because data gets overwritten. Either way it is easier to spot during testing, and because the failure doesn't cascade, the description of the failure is most likely to point to the offending piece of code. And even if the problem goes unnoticed, the bug doesn't cascade into unexpected behaviour or crashes.
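A minimal sketch of that style in C (the channel module and its names are invented for illustration):

Code: [Select]
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define NUM_CHANNELS 8

static uint16_t channel_value[NUM_CHANNELS];

/* Validate parameters at the module boundary: refuse to act and
   report an error instead of silently overwriting unrelated data. */
bool channel_set(size_t channel, uint16_t value)
{
    if (channel >= NUM_CHANNELS) {
        return false;   /* the caller's bug stops here, visibly */
    }
    channel_value[channel] = value;
    return true;
}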
Title: Re: Test Driven Development
Post by: tggzzz on November 21, 2014, 08:21:40 pm
Of course, exercising a path is only effective if the results of exercising the path are visible to the test harness. And that can be extremely difficult to achieve for code that is designed to trap the "should rarely occur" exceptional conditions. Which raises the question: "should you break encapsulation to enable testing?".
I am currently reading "Modern C++ Programming with Test-Driven Development" by Jeff Langr, and he takes the position that it is better to be able to test than maintaining some academic sense of purity of design, and that it usually isn't a problem. There are of course many ways to handle this. For example, we use macros that can make a static function global when building the tests.

In which case such tests will have little value, and should be scrapped before you lull yourself into a false sense of security. Why? You simply aren't testing the software product - you are testing  a different (albeit similar at one level) product.

It can be guaranteed that with C/C++ the compiler will produce significantly different code if you change the visibility. Root cause: the pessimisations the compiler must insert when it can't prove there is no aliasing.

I suggest you read books by someone that knows how C/C++ compilers generate code for real-world non-toy programs.

Bugger "academic purity". His is the kind of sentiment I have previously heard from XP/Agile/TDD religious zealots. Unless you are only considering tiny toy applications, encapsulation is the key to large programs/systems which are reliable, maintainable, and high-performance.

I suggest you read books by someone that has been at the sharp end of writing commercially important code that has to work reliably and be extended for years.

Have a look at Jeff Langr's website, and see if you can find what programs he has written (as opposed to at which companies he was a mentor).
Title: Re: Test Driven Development
Post by: rollatorwieltje on November 21, 2014, 08:26:58 pm
One fundamental issue I have with TDD is that the tests are usually written by the same person who does the implementation. The unit tests often are exact implementations of how the object / function should be used. This is acceptable if the complexity of the function is low, but usually bugs are not found in low-complexity functions. Bugs usually come from assumptions about the system, and the programmer would make the exact same assumptions when writing the test.
You could get rid of all assumptions by designing a solid software architecture, but at some point you have to make a trade-off between writing code without assumptions (causing lots of boilerplate code) and writing simple software that just does exactly what you want.

A small example:
Code: [Select]
void MyMachine::openDoor()
{
   hardwareLayer->openDoor();
}

/**
  Open the door!
  @pre pin 1 must be set to output.
**/
void HardwareLayer::openDoor()
{
  ASSERT(pinDirection(1) == OUTPUT);
  setPin(1, HIGH);
}

TEST(MyMachine, openDoor)
{
  EXPECT_CALL(hardwareLayerTest, openDoor());
  myMachine.openDoor();
}

TEST(HardwareLayer, openDoor)
{
  // Test when the pin is configured as input instead of output
  hardwareLayerTest.setPinDirection(1, INPUT);
  EXPECT_ASSERT(myMachine.openDoor());

  // Test when the pin is configured as output
  hardwareLayerTest.setPinDirection(1, OUTPUT);
  EXPECT_CALL(hardwareLayerTest, setPin(1, HIGH));
  myMachine.openDoor();
}


The above code yields 100% coverage and a 100% test pass, but it still isn't very robust. There won't be a bug in this code; the bug will manifest itself when somebody, during the execution of the program, erroneously calls setPinDirection(). Even then, each unit still works as intended.
You could design away the setPinDirection() function with some clever OO stuff, but how far do you want to go? Do you really want 100 lines of code to open a damn door? This case is relatively easy to make more robust, but things get more complex really fast... Also be aware that adding code to cover a potential bug may itself introduce new potential bugs.

I'm not saying unit tests are completely useless, because they aren't. You often write test functions anyway when writing software. But don't consider unit tests with code coverage the holy grail of writing bug-free software.
Title: Re: Test Driven Development
Post by: andersm on November 21, 2014, 09:07:56 pm
Macros for function definitions.... yuk...
Well, instead of "static" you write "STATIC". Hardly the end of the world.
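Presumably the pattern is something like the following sketch, where UNIT_TEST stands in for whatever build flag the test target defines:

Code: [Select]
#include <stdint.h>
#include <stddef.h>

/* Internal linkage in the release build; external linkage when the
   unit tests are being built, so the tests can call the function. */
#ifdef UNIT_TEST
#define STATIC
#else
#define STATIC static
#endif

STATIC int parse_header(const uint8_t *buf, size_t len);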

In which case such tests will have little value, and should be scrapped before you lull yourself into a false sense of security. Why? You simply aren't testing the software product - you are testing  a different (albeit similar at one level) product.
These are unit tests. By definition you are not testing "the software product." There are other tests for that.

This is acceptable if the complexity of the function is low, but usually bugs are not found in low-complexity functions.
The obvious solution to that is not to write very complex functions.
Title: Re: Test Driven Development
Post by: rollatorwieltje on November 21, 2014, 09:28:27 pm
Macros for function definitions.... yuk...
Well, instead of "static" you write "STATIC". Hardly the end of the world.

Except it completely changes what the compiler does.

Quote
In which case such tests will have little value, and should be scrapped before you lull yourself into a false sense of security. Why? You simply aren't testing the software product - you are testing  a different (albeit similar at one level) product.
These are unit tests. By definition you are not testing "the software product." There are other tests for that.
The usual thing to do is to build the software as a static library and link it either against the unit test executable or the release executable. Changing preprocessor options between your test and release just invalidates the whole concept of testing what you're going to release.

edit: This is obviously not really possible for platforms that cannot run a unit test framework. But still, avoid the use of preprocessor definitions. It just changes the program without making it obvious.

This is acceptable if the complexity of the function is low, but usually bugs are not found in low-complexity functions.
The obvious solution to that is not to write very complex functions.
Many low-complexity functions still yield a complex piece of software, just split up more. That doesn't necessarily make it less prone to bugs. As I said, adding boilerplate introduces new opportunities for making bugs. It's a delicate balance.
Title: Re: Test Driven Development
Post by: tggzzz on November 21, 2014, 11:15:38 pm
In which case such tests will have little value, and should be scrapped before you lull yourself into a false sense of security. Why? You simply aren't testing the software product - you are testing  a different (albeit similar at one level) product.
These are unit tests. By definition you are not testing "the software product." There are other tests for that.

Don't be silly.

Read what rollatorwieltje writes in his next post - he correctly expands what I merely hinted at, viz:
Quote
Except it completely changes what the compiler does.
Changing preprocessor options between your test and release just invalidates the whole concept of testing what you're going to release.
But still, avoid the use of preprocessor definitions. It just changes the program without making it obvious.
As I said, adding boilerplate introduces new opportunities for making bugs. It's a delicate balance.
Title: Re: Test Driven Development
Post by: IanB on November 21, 2014, 11:29:14 pm
is it more common practice to test, say, boundary values, a typical value and an exception value to see that the function and test pair work correctly

This. Choose the smallest number of tests that will exercise all of the code paths and all the permutations of logical tests in the code unit being tested. Expect this number to be fewer than ten.

If you have too many permutations to test it is likely your code is too complex and you should try to simplify it.

Which highlights an important aspect of TDD. TDD is to help you write better code. It makes you keep your eye on the ball and guards against creating the "stream of consciousness, big ball of yarn" that so often is the outcome of coding in the moment without a plan.
Title: Re: Test Driven Development
Post by: krish2487 on November 22, 2014, 04:52:56 am
Thank you Ian.


Interesting to see the discussion and experiences of others
Title: Re: Test Driven Development
Post by: vvanders on November 22, 2014, 05:10:24 am
TDD is great for building APIs that make sense and are approachable. As with everything it can be taken to extremes.

I'm a big fan of introducing a new unit test each time a regression is found that exercises that regression. It means your tests are coming from real-world, potentially complex scenarios and aren't "tests for test sake".
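As an illustration of that habit (the midpoint bug below is a classic, but the module and test names here are hypothetical), such a test simply re-creates the exact input that triggered the defect so it can never silently return:

Code: [Select]
#include <assert.h>
#include <limits.h>

/* Originally computed (lo + hi) / 2, which overflows for large inputs. */
static int midpoint(int lo, int hi)
{
    return lo + (hi - lo) / 2;   /* the fix the regression test pins down */
}

/* Regression test added when the overflow was reported */
static void test_regression_midpoint_overflow(void)
{
    assert(midpoint(INT_MAX - 1, INT_MAX) == INT_MAX - 1);
    assert(midpoint(0, 10) == 5);   /* the ordinary case still holds */
}

int main(void)
{
    test_regression_midpoint_overflow();
    return 0;
}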
Title: Re: Test Driven Development
Post by: mrflibble on November 22, 2014, 04:24:07 pm
What tools do you use to do unit testing etc when the development platform is not the same as the target platform? As in development is on ye olde x86 PC, and target is an MCU.

Compile for different target, and hope for the best? Compile for correct target and run in an emulator? Run on the actual MCU? If on the MCU what kind of communications? UART? ITM?

As for macro or not... why not? I know macros have their drawbacks, but I've also seen some alternatives that are C++ templates galore. Doesn't that bring the risk that you get to feel all happy and purist about it, but not get anything done? And I don't mean the STATIC versus static qualifier. I mean using macros for your asserts and such. Nothing wrong with that IMO inside the unit test code (not the production code). Or maybe I'm missing something...

Title: Re: Test Driven Development
Post by: tggzzz on November 22, 2014, 08:26:23 pm
While acknowledging that TDD can have some advantages, I disagree with your statements below.

If you have too many permutations to test it is likely your code is too complex and you should try to simplify it.

I don't think that's helpful, except in the limited cases where you test trivially simple segments of code. The consequent canonical example, which I have seen too often, is unit tests for getter and setter methods - which is especially dangerous/nonsensical if the getter/setter method's visibility has been widened simply to allow those unit tests!

In addition, some algorithms simply cannot be decomposed to trivial units - an FFT function, for example.

Quote
Which highlights an important aspect of TDD. TDD is to help you write better code. It makes you keep your eye on the ball and guards against creating the "stream of consciousness, big ball of yarn" that so often is the outcome of coding in the moment without a plan.

The downside is that it encourages focussing on tiny segments of code at the expense of seeing/understanding/testing larger cohesive units. Effectively throwing the "trees" into focus and the "wood" out of focus.
Title: Re: Test Driven Development
Post by: IanB on November 22, 2014, 09:40:33 pm
While acknowledging that TDD can have some advantages, I disagree with your statements below.

Well the world is too complicated to be described entirely by simple rules. Note the qualifiers "likely too complex" and "try to simplify". I have seen many cases where the code really is too complex and really does need simplifying. So it's a thought to keep in mind.

My personal area of specialism is with numerical code dealing with floating point and complex algorithms (not unlike FFT). It certainly is very challenging to provide adequate test coverage that will assure such code never fails. On the other hand, decomposition and modularization helps enormously towards that goal.

To your last comment, many would argue that TDD should help you to focus on the forest and achieve a good overall structure. But there is no substitute for experience. Even with TDD, lack of experience will show through.
Title: Re: Test Driven Development
Post by: mrflibble on November 23, 2014, 05:11:59 am
My personal area of specialism is with numerical code dealing with floating point and complex algorithms (not unlike FFT). It certainly is very challenging to provide adequate test coverage that will assure such code never fails. On the other hand, decomposition and modularization helps enormously towards that goal.

I was wondering about that. Suppose you have a function that spits out a non-LTI time series - how do you unit test that? Create a bunch of test cases, simulate them in Matlab so that you know the inputs and outputs, and then generate test C code using those inputs and outputs?
Title: Re: Test Driven Development
Post by: tggzzz on November 23, 2014, 09:55:28 am
It feels like we are in violent agreement.

I have encountered too many zealots who think and teach that following a set of incantations/processes will lead to a good result. Clearly you don't fall into that category.

Mature engineering judgement rules!

While acknowledging that TDD can have some advantages, I disagree with your statements below.

Well the world is too complicated to be described entirely by simple rules. Note the qualifiers "likely too complex" and "try to simplify". I have seen many cases where the code really is too complex and really does need simplifying. So it's a thought to keep in mind.

My personal area of specialism is with numerical code dealing with floating point and complex algorithms (not unlike FFT). It certainly is very challenging to provide adequate test coverage that will assure such code never fails. On the other hand, decomposition and modularization helps enormously towards that goal.

To your last comment, many would argue that TDD should help you to focus on the forest and achieve a good overall structure. But there is no substitute for experience. Even with TDD, lack of experience will show through.
Title: Re: Test Driven Development
Post by: rollatorwieltje on November 23, 2014, 11:19:24 am
What tools do you use to do unit testing etc when the development platform is not the same as the target platform? As in development is on ye olde x86 PC, and target is an MCU.

Compile for different target, and hope for the best? Compile for correct target and run in an emulator? Run on the actual MCU? If on the MCU what kind of communications? UART? ITM?

You can't do much when your target really cannot run a unit test framework. In those cases we run the unit tests on Windows. Of course, you have to write code that is as platform-independent as possible (mostly by using the fixed-width integer types and making sure external communications are done in the correct byte order).
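For the byte-order part, a minimal sketch of the usual approach - fixed-width types and explicit per-byte shifts, so the same code behaves identically on an x86 test host and on the target MCU:

Code: [Select]
#include <stdint.h>
#include <assert.h>

/* Serialize a 32-bit value in big-endian wire order, byte by byte,
   independent of the native endianness of whatever runs the code. */
static void put_u32_be(uint8_t *out, uint32_t v)
{
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)v;
}

static uint32_t get_u32_be(const uint8_t *in)
{
    return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
           ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
}

int main(void)
{
    uint8_t buf[4];
    put_u32_be(buf, 0xDEADBEEFu);
    assert(get_u32_be(buf) == 0xDEADBEEFu);   /* round-trips on any host */
    return 0;
}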

Quote
As for macro or not... why not? I know macros have their drawbacks, but I've also seen some alternatives that are C++ templates galore. Doesn't that bring the risk that you get to feel all happy and purist about it, but not get anything done? And I don't mean the STATIC versus static qualifier. I mean using macros for your asserts and such. Nothing wrong with that IMO inside the unit test code (not the production code). Or maybe I'm missing something...
When you need to get information from the preprocessor there's no way around it, like the file and line number for an ASSERT or your logging function.
When you don't need information from the preprocessor there's almost always a better way. Macros are often abused as functions or variables, but then you lose all the useful protections that functions and variables provide (scope, type safety).
One exception I can think of is generating an enum + matching string map, something like this: http://stackoverflow.com/a/147582 (http://stackoverflow.com/a/147582)
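For reference, a self-contained sketch of that X-macro trick (the error names are invented): the single list expands once into the enum and once into the string table, so the two can never drift apart.

Code: [Select]
#include <stdio.h>

#define ERROR_LIST \
    X(ERR_OK)      \
    X(ERR_TIMEOUT) \
    X(ERR_CRC)     \
    X(ERR_OVERRUN)

#define X(name) name,
enum error_code { ERROR_LIST };
#undef X

#define X(name) #name,
static const char *error_name[] = { ERROR_LIST };
#undef X

int main(void)
{
    printf("%s\n", error_name[ERR_CRC]);   /* prints "ERR_CRC" */
    return 0;
}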
Title: Re: Test Driven Development
Post by: dannyf on November 23, 2014, 12:27:11 pm
Quote
My question is ...

I thought the only reason you develop a piece of code is for customers. Writing a piece of code for testing seems odd. :)
Title: Re: Test Driven Development
Post by: tggzzz on November 23, 2014, 03:16:21 pm
What tools do you use to do unit testing etc when the development platform is not the same as the target platform? As in development is on ye olde x86 PC, and target is an MCU.

Compile for different target, and hope for the best? Compile for correct target and run in an emulator? Run on the actual MCU? If on the MCU what kind of communications? UART? ITM?

There are a whole range of different techniques for that.

The "high-level" machine independent code (e.g an FSM that accepts events and generates events/actions) can and should be tested in a separate environment, with the events/actions mocked out. Then hope that the cross-compiled code is equivalent.

The "low-level" machine dependent code (e.g. turning event/action to/from pin voltage changes etc) can partially be tested in a separate environment, but in reality has to be tested in the target environment with real pin voltage changes.

It is extremely useful to symbolically log each event/action plus the sequence of states:
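Something along these lines, say - a sketch only, with a made-up state machine - where the trace can be diffed against a golden sequence by a test harness, or simply read by a human:

Code: [Select]
#include <stdio.h>

enum state { IDLE, CONNECTING, UP, CLOSING, NUM_STATES };
enum event { EV_OPEN, EV_ACK, EV_CLOSE, EV_TIMEOUT, NUM_EVENTS };

static const char *state_name[] = { "IDLE", "CONNECTING", "UP", "CLOSING" };
static const char *event_name[] = { "EV_OPEN", "EV_ACK", "EV_CLOSE", "EV_TIMEOUT" };

/* Illustrative transition table: next state for each (state, event) pair */
static const enum state next_state[NUM_STATES][NUM_EVENTS] = {
    /* IDLE       */ { CONNECTING, IDLE, IDLE,    IDLE },
    /* CONNECTING */ { CONNECTING, UP,   CLOSING, IDLE },
    /* UP         */ { UP,         UP,   CLOSING, IDLE },
    /* CLOSING    */ { CLOSING,    IDLE, CLOSING, IDLE },
};

static enum state fsm_dispatch(enum state s, enum event e)
{
    enum state n = next_state[s][e];
    /* Symbolic, not numeric: the log reads as a sequence of transitions */
    printf("FSM: %-10s --%-10s--> %s\n",
           state_name[s], event_name[e], state_name[n]);
    return n;
}

int main(void)
{
    enum state s = IDLE;
    s = fsm_dispatch(s, EV_OPEN);    /* IDLE --EV_OPEN--> CONNECTING */
    s = fsm_dispatch(s, EV_ACK);     /* CONNECTING --EV_ACK--> UP */
    s = fsm_dispatch(s, EV_CLOSE);   /* UP --EV_CLOSE--> CLOSING */
    (void)s;
    return 0;
}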

Don't make a fetish out of the tests - if the code visibly works then does it actually need formal tests? A good example of that is a logging interface to a remote computer; if the correct messages are plainly getting to the other end, is exhaustive testing of the comms really beneficial? Can be argued either way, but don't lose sight of the objectives.

Quote
As for macro or not ... why not? I know macro's have their drawbacks, but I've also seen some alternatives that are C++ templates galore. Doesn't that bring the risk that you get to feel all happy and purist about it, but not get anything done? And I don't mean STATIC versus static qualifier. I mean use macro's for your asserts and such. Nothing wrong with that IMO inside the unit test code (not the production code). Or maybe I'm missing something...

C++ templates are the devil's invention which have to be used as a sticking plaster over the gaps in C++ :)
Title: Re: Test Driven Development
Post by: vvanders on November 23, 2014, 08:56:09 pm
...

C++ templates are the devil's invention which have to be used as a sticking plaster over the gaps in C++ :)

That's a bit strong; like everything else in C++, knowing when and where to use or not use a feature is just as important as knowing the feature itself.
Title: Re: Test Driven Development
Post by: tggzzz on November 23, 2014, 10:35:29 pm
...

C++ templates are the devil's invention which have to be used as a sticking plaster over the gaps in C++ :)

That's a bit strong; like everything else in C++, knowing when and where to use or not use a feature is just as important as knowing the feature itself.

True. But while developing the template specification, the designers didn't comprehend what they were creating. In particular they refused to believe that the specification was itself a Turing complete language. Until, that is, someone presented a short and valid C++ program that caused the compiler to, very slowly, emit the sequence of prime numbers during compilation.

That was, to me, a clear indication that C++ templates were becoming an uncontrolled and misunderstood monster. I've seen nothing to make me change my mind since then. Start by considering compilation and run-time error messages, then consider the foul interactions with exceptions.

Sure, they are helpful and arguably necessary for C++ programs - but that's because C++ is a bad base for an object-oriented language.

But this shouldn't degenerate into a C++ thread; it is about TDD.
Title: Re: Test Driven Development
Post by: hamster_nz on November 23, 2014, 11:06:12 pm
True. But while developing the template specification, the designers didn't comprehend what they were creating. In particular they refused to believe that the specification was itself a Turing complete language. Until, that is, someone presented a short and valid C++ program that caused the compiler to, very slowly, emit the sequence of prime numbers during compilation.

I just had to find more about that ... the paper is available at http://ubietylab.net/ubigraph/content/Papers/pdf/CppTuring.pdf (http://ubietylab.net/ubigraph/content/Papers/pdf/CppTuring.pdf)
Title: Re: Test Driven Development
Post by: tggzzz on November 24, 2014, 10:21:58 am
True. But while developing the template specification, the designers didn't comprehend what they were creating. In particular they refused to believe that the specification was itself a Turing complete language. Until, that is, someone presented a short and valid C++ program that caused the compiler to, very slowly, emit the sequence of prime numbers during compilation.

I just had to find more about that ... the paper is available at http://ubietylab.net/ubigraph/content/Papers/pdf/CppTuring.pdf (http://ubietylab.net/ubigraph/content/Papers/pdf/CppTuring.pdf)

Thanks for that reference, I hadn't seen it before. There are other less formal documents around that I haven't bookmarked. It does drive home the point that C++ templates are "undecidable"!

I wonder why they settled on the recommended recursion depth of 17, not 19? Far too magic for my comfort.
Title: Re: Test Driven Development
Post by: coppice on November 27, 2014, 03:34:02 am
I am just starting out with TDD in embedded world.
I am a noob when it comes to TDD - embedded or otherwise.
Why are you starting out with this? What draws you to it? Programming paradigms are 10 a penny. Why are you interested in this one?
The book TDD in embedded C by James W Grenning is a big help in understanding the importance of TDD, but it has its WTF moments where most of the stuff he talks about goes over my head. Fortunately, that is more to do with the frameworks he has chosen than with the principle behind it.
Isn't the TDD principle just a restatement of the infinite monkey principle?
Quote
My question is - How do you guys manage to do TDD - partially or completely. Or is it just a waste of time??
You missed "badly". When most people try most of the software development paradigms they do them badly, so they never really act as a reference for whether the paradigm has any merit. The main reasons for doing them badly are:
A lot of methodologies are bunk. A lot have some merit. Most have books and courses behind them, and are making people money, so they massively oversell their benefits. None are actually a panacea, and they need to be seen in that light. There are times when stumbling around with test cases and code is an excellent way to get things clear in your mind. This is usually at the algorithm development stage, rather than the implementation stage. It doesn't seem like a particularly sound general principle, though.
Title: Re: Test Driven Development
Post by: krish2487 on November 27, 2014, 04:28:52 am
Quote
Why are you starting out with this? What draws you to it? Programming paradigms are 10 a penny. Why are you interested in this one?


Because it is as good as any. I do not have any formal SW development training. My course of pursuit is purely an outcome of my reading on the net and what I choose to pursue. It is purely a lack of experience. The validity of a course of action is only apparent in hindsight. However, even for hindsight, I still need to pursue a course.


Quote
Isn't the TDD principle just a restatement of the infinite monkey principle?

As I've said, I really do not have enough experience nor maturity to comment on it. I am just learning.

Quote
You missed "badly". When most people try most of the software development paradigms they do them badly, so they never really act as a reference for whether the paradigm has any merit.
Title: Re: Test Driven Development
Post by: jlangr on November 27, 2014, 07:47:58 am
Of course, exercising a path is only effective if the results of exercising the path are visible to the test harness. And that can be extremely difficult to achieve for code that is designed to trap the "should rarely occur" exceptional conditions. Which raises the question: "should you break encapsulation to enable testing?".
I am currently reading "Modern C++ Programming with Test-Driven Development" by Jeff Langr, and he takes the position that it is better to be able to test than to maintain some academic sense of purity of design, and that it usually isn't a problem. There are of course many ways to handle this. For example, we use macros that can make a static function global when building the tests.

Bugger "academic purity". His is the kind of sentiment I have previously heard from XP/Agile/TDD religious zealots. Unless you are only considering tiny toy applications, encapsulation is the key to large programs/systems which are reliable, maintainable, and high-performance.

I suggest you read books by someone that has been at the sharp end of writing commercially important code that has to work reliably and be extended for years.

Have a look at Jeff Langr's website, and see if you can find what programs he has written (as opposed to at which companies he was a mentor).

I've gotten this sort of critique before, and I understand where it comes from. I know a number of my peers to whom I might even subject the same condemnation. Personally I wouldn't expect someone to believe me unless I'd been in the trenches--and so I develop for a couple of years, do consulting for a couple of years, go back to a dev team, etc. The stuff I write is based on practical application. See http://langrsoft.com/jeff/2012/09/the-consulting-legitimacy-cycle/ (http://langrsoft.com/jeff/2012/09/the-consulting-legitimacy-cycle/).

So yeah, about the last time that someone dinged me for spouting consulting BS, I knew it was time to move on. You'll notice no blog entries since August 2013; I've been too busy working full-time as a lead developer at Outpace Systems since. I'm writing Clojure code with TDD/unit testing (we're not quite so dogmatic as to insist it's all test-driven). Our product is largely invisible--it's an offer engine that processes 100,000s of events per day and crunches through that plus customer stats to produce better offers. It's been deployed to a very happy customer for about a year. The unit testing has made a big difference (although an insufficient amount of good acceptance testing has cost us a bit--which says that, of course, unit testing is only a portion of what you need to do to deliver quality systems).

As far as the TDD is concerned, it's paid off more often than not, though it can be a bit challenging at times (and we punt sometimes). It's not magic and doesn't solve all problems--that's some of the take on TDD I try to present in the book, how to approach it from a pragmatic stance. I've done plenty of non-TDD, too, and while I can survive that way, it's simply not as effective--or fun--for me and my colleagues, *particularly* on larger systems. The goal is to build a system that's maintainable and keeps the cost of change to a minimum. The ability to know that you can make changes without unwittingly breaking stuff (too easy to do) matters a lot, and is the main reason I do TDD. (You can get there with test-after, but I find it to be harder and less effective.)

You're correct, encapsulation is important, and core design principles matter a lot. That too is a personal emphasis I have on development. Some of the TDD folks buy into a heavily mock-based approach; I think mocks need to be used carefully, otherwise you get into some nasty dependencies of the tests on private details, and that can really squash your ability to refactor your code when you must. My recommendation is to minimize and isolate any such exposures--but it's still more important to know that the code works. So I allow certain elements to be inspected and possibly overridden. It's yet to bite me.

Regards,
Jeff

PS sorry about the delayed response, but I happened to stumble across this forum while doing a once-a-month-or-two search for my name. If you have a burning concern about any of this, feel free to email me in case I don't find my way back here soon. If you don't buy into any of what I'm claiming, that's ok too; we're happy with it, and it's not for everyone.
Title: Re: Test Driven Development
Post by: IanB on November 27, 2014, 08:36:11 am
...

Wow. I think this is what's known in the vernacular as "click to summon"  :)
Title: Re: Test Driven Development
Post by: tggzzz on November 27, 2014, 11:07:41 am
Much snipped, except where I've a comment...

The unit testing has made a big difference (although an insufficient amount of good acceptance testing has cost us a bit--which says that, of course, unit testing is only a portion of what you need to do to deliver quality systems).

Very true.

I've always been surprised that the "unit testing" zealots only have one, very limited, concept of what constitutes a "unit", and that they don't understand the value of integration and acceptance testing.

Their "unit" is only a method or class, not the interface to a group of closely coupled classes, nor a library, nor a complete subsystem. Neither do they understand that every non-trivial "unit" test is actually an "integration" test, e.g. where they are integrating their code with standard run-time libraries.

As for the concept of unit testing the interface to their complete product - well, there's no need since the individual unit tests "prove the whole thing works"! Yeah, right. Especially when you are in a finger-pointing argument with something that is connected to your product!

Quote
As far as the TDD is concerned, it's paid off more often than not, though it can be a bit challenging at times (and we punt sometimes). It's not magic and doesn't solve all problems--that's some of the take on TDD I try to present in the book, how to approach it from a pragmatic stance. I've done plenty of non-TDD, too, and while I can survive that way, it's simply not as effective--or fun--for me and my colleagues, *particularly* on larger systems. The goal is to build a system that's maintainable and keeps the cost of change to a minimum. The ability to know that you can make changes without unwittingly breaking stuff (too easy to do) matters a lot, and is the main reason I do TDD. (You can get there with test-after, but I find it to be harder and less effective.)

Yes, I completely agree with that.

TDD isn't sufficient, but is sufficiently beneficial that it is almost necessary.

TDD does, of course, presume that somebody on the team really does know everything that is required to produce a successful product. In most cases that's reasonable, but I've spent a fair part of my life in commercial R&D where the first objective is to determine the reasonable and useful objectives!

Quote
You're correct, encapsulation is important, and core design principles matter a lot. That too is a personal emphasis I have on development. Some of the TDD folks buy into a heavily mock-based approach; I think mocks need to be used carefully, otherwise you get into some nasty dependencies of the tests on private details, and that can really squash your ability to refactor your code when you must. My recommendation is to minimize and isolate any such exposures--but it's still more important to know that the code works. So I allow certain elements to be inspected and possibly overridden. It's yet to bite me.

I've seen stupidly applied unit tests unnecessarily ossify a product. One argument says the only necessary tests are those that confirm the external behaviour - pretty much acceptance/integration tests. Another argument says "internal" unit tests are only scaffolding that helps ensure that the external behaviour hasn't accidentally been affected.
Title: Re: Test Driven Development
Post by: andersm on December 01, 2014, 07:58:57 pm
Except it completely changes what the compiler does.
Changing a function's linkage does not alter its logic, which is what's being tested. Never mind that the test is building a tiny part of the system in isolation, using a different compiler, on a different computer architecture, running a different OS.

Quote
Many low-complexity functions still yield a complex piece of software, just split up more. That doesn't necessarily make it less prone to bugs.
But it makes the individual pieces easier to test.
Title: Re: Test Driven Development
Post by: tggzzz on December 01, 2014, 09:04:06 pm
Except it completely changes what the compiler does.
Changing a function's linkage does not alter its logic, which is what's being tested. Never mind that the test is building a tiny part of the system in isolation, using a different compiler, on a different computer architecture, running a different OS.

That's only true in an ideal world that does not exist.

Now consider compiler optimisations which make presumptions (that may or may not be correct) about multithreaded code, aliasing, and all the other poorly defined aspects of C. And then go on to consider the interaction with libraries supplied by other companies. And don't forget the myriad compiler errors that do occur.

Quote
Quote
Many low-complexity functions still yield a complex piece of software, just split up more. That doesn't necessarily make it less prone to bugs.
But it makes the individual pieces easier to test.

But such tests are often very uninteresting and unilluminating - and therefore not a particularly good use of developer's time.

Many, many problems arise from so-called "emergent behaviour" in complex systems. As a trivial example, consider testing all the properties of individual sand grains. Exactly which of those properties implies that piles of sand have a half-angle of 35 degrees?!

Used intelligently, TDD can be helpful - but too often its benefits are overstated.
Title: Re: Test Driven Development
Post by: andersm on December 01, 2014, 10:48:49 pm
Now consider compiler optimisations which make presumptions (that may or may not be correct) about multithreaded code, aliasing, and all the other poorly defined aspects of C. And then go on to consider the interaction with libraries supplied by other companies. And don't forget the myriad compiler errors that do occur.
Again, since the focus of the tests is the business logic (in our setup anyway), that's kind of not relevant. And while compiler errors do happen, in practice they're rare enough that it's not much of a concern in day-to-day development.

Quote
But such tests are often very uninteresting and unilluminating - and therefore not a particularly good use of developer's time.
As Jeff Langr posted, one of the purposes of tests is to alert you to unintended changes of behaviour. From that perspective uninteresting tests are also useful.
Title: Re: Test Driven Development
Post by: tggzzz on December 02, 2014, 12:26:48 am
Now consider compiler optimisations which make presumptions (that may or may not be correct) about multithreaded code, aliasing, and all the other poorly defined aspects of C. And then go on to consider the interaction with libraries supplied by other companies. And don't forget the myriad compiler errors that do occur.
Again, since the focus of the tests is the business logic (in our setup anyway), that's kind of not relevant. And while compiler errors do happen, in practice they're rare enough that it's not much of a concern in day-to-day development.

Lucky you.

Being involved in infrastructure/environment/stack or embedded code, I haven't had that luxury.

Quote
Quote
But such tests are often very uninteresting and unilluminating - and therefore not a particularly good use of developer's time.
As Jeff Langr posted, one of the purposes of tests is to alert you to unintended changes of behaviour. From that perspective uninteresting tests are also useful.

Just so, provided the tests are maintained so they accurately reflect the changing requirements.

And such maintenance is, IMNSHO, at least as difficult as the core logic - and if not done perceptively it can easily either fail or ossify the code.
Title: Re: Test Driven Development
Post by: Alexei.Polkhanov on December 02, 2014, 05:10:15 am
Have you guys ever tried or even researched formal methods and tools? I have worked in the software industry for 20+ years and I have seen so many fads like TDD that my cynicism and skepticism about anything like that is around level 11 :) (if you don't get the reference, here is the video - https://www.youtube.com/watch?v=KOO5S4vxi0o (https://www.youtube.com/watch?v=KOO5S4vxi0o)).

Ever since their beginnings back in the 1970s and 1980s, formal methods were always somewhat impractical - an academia toy to play with. However, over the past 5-6 years I have seen steady adoption of some tools in industry. When I read an article by John Carmack about the application of static analysis tools in gaming, I said "It is coming, I can hear it!". Here is a little list of tools that I have seen used in the real world:

- sal.h in the Windows SDK. It is a collection of macros for C that allow Windows driver developers to give hints to the compiler and to static analyzers like "PREFAST" about function contracts.
- Coverity
- JML - the Java Modelling Language, for Java
- and then ACSL (ANSI C Specification Language), which is used together with the Frama-C toolchain - designed SPECIFICALLY FOR EMBEDDED DEVELOPMENT in C

The last three are more formal than simple static checking tools. When you write your code you provide a spec for every function, like this:

Code: [Select]
/*@ ensures \result >= x && \result >= y;
    ensures \result == x || \result == y;
*/
int max (int x, int y) { return (x > y) ? x : y; }

or like this

Code: [Select]
/*@ requires \valid(p) && \valid(q);
    ensures *p <= *q;
    ensures (*p == \old(*p) && *q == \old(*q)) ||
            (*p == \old(*q) && *q == \old(*p));
*/
void max_ptr(int* p, int*q);

or

Code: [Select]
/*@
      requires \valid(root);
      assigns \nothing;
      ensures
       \forall list* l;
         \valid(l) && reachable(root,l) ==>
           \result >= l->element;
      ensures
       \exists list* l;
         \valid(l) && reachable(root,l) && \result == l->element;
  */
  int max_list(list* root);

and the tool generates statements in a functional language that are passed down to another tool which does not test but tries to PROVE that all pre- and post-conditions are satisfied. If it cannot, it says "I don't know". It is expected that you tweak the function until it can find a proof in reasonable time. I have heard of good success with this tool for projects in the range of 16-20K lines of C code.
Frama-C automatically inserts statements that are implied by the language - it knows when an int can overflow and that you cannot dereference NULL, and it requires you to avoid undefined behaviour, etc. Coverity does more of the latter for C and C++.
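As a small (and hypothetical) illustration of those implied obligations, in the same ACSL style as above: the prover is left with an unprovable signed-overflow obligation on a plain int addition until the contract rules it out. ACSL terms are evaluated over mathematical integers, so the requires clause below is well-defined:

Code: [Select]
#include <limits.h>

/*@ requires INT_MIN <= x + y <= INT_MAX;
    ensures \result == x + y;
*/
int add(int x, int y)
{
    return x + y;   /* provable only because overflow was ruled out */
}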



Title: Re: Test Driven Development
Post by: tggzzz on December 02, 2014, 09:39:07 am
Have you guys ever tried or even researched formal methods and tools? I have worked in the software industry for 20+ years and I have seen so many fads like TDD that my cynicism and skepticism about anything like that is around level 11 :)

Oh, yes indeed. Just so!

Quote

Ever since their beginnings back in the 1970s and 1980s, formal methods were always somewhat impractical - an academia toy to play with. However, over the past 5-6 years I have seen steady adoption of some tools in industry. When I read an article by John Carmack about the application of static analysis tools in gaming, I said "It is coming, I can hear it!". Here is a little list of tools that I have seen used in the real world:

<snipped>

Those look like little more than Eiffel-style pre and post conditions. They are nice for toy examples in an academic context, but useless in more industrial contexts. For example, how would you express them for an FFT or inverse FFT function, or even something as simple as calculating the cost of a phone call.

I'll start taking them seriously when they can be used for something interesting and useful, for example proving that a set of communicating FSMs are deadlock-free, or proving the liveness of some real-time code. Background: the last time I showed a pure mathematician a real-life FSM (from network protocols), he recoiled in horror at the complexity.

But yes, such techniques can be useful in limited circumstances, e.g. proving the correctness of floating point implementations.
Title: Re: Test Driven Development
Post by: mrflibble on December 02, 2014, 04:57:50 pm
For example, how would you express them for an FFT or inverse FFT function, or even something as simple as calculating the cost of a phone call.
Interesting question. How would you go about testing the correctness of an FFT implementation? And then the exact same question, but for testing an FFT implementation on another platform, where you have some room for a limited test environment but by no means as much as on a modern PC. Let's say the target is a Cortex-M3.

Quote
I'll start taking them seriously when they can be used for something interesting and useful, for example proving that a set of communicating FSMs are deadlock-free, or proving the liveness of some real-time code. Background: the last time I showed a pure mathematician a real-life FSM (from network protocols), he recoiled in horror at the complexity.
Again an interesting problem. I'm not a big fan of following the latest acronym soup, but rather a fan of mix & match. TDD (or rather my limited understanding of TDD) seems to have some nice ideas. And it's not as if these ideas are unique to TDD. The way I look at it is as a collection of ideas from which you grab the ones you like and ditch the ones you don't.

All that introductory waffle for the following point: one of the ideas of TDD is that you let the tests drive the design. And let's not get into a whole debate about good or bad on that. Short version: who the fuck thought it a good idea to let tests dictate the product? However, having your tests inform your design seems like a good idea to me. You want a design that is testable. So when you find out while writing your test plan that the way you designed your product makes it really, really difficult to test, then maybe it's time to rethink the design. Case in point: the tangled FSM web. If you find that testing that sucker gives your local friendly math dude a migraine, then maybe do a redesign that is less tangled? And I'm not saying you should have rewritten it, because what do I know - it's your design, and you know it way better than some random internet person. I'm just thinking out loud here. Isn't one of the (IMO useful) ideas of TDD to have your tests inform the design? Assuming the aim is to end up with a design that is testable, you want a design that is as test-friendly as possible. Maybe the FSM isn't the best example, because when implementing protocols the protocol is a given. But hopefully it gets the point across.

Incidentally, how do you prove the FSM doesn't get stuck in an unintended state?
Title: Re: Test Driven Development
Post by: Alexei.Polkhanov on December 02, 2014, 06:05:21 pm
Those look like little more than Eiffel-style pre and post conditions. They are nice for toy examples in an academic context, but useless in more industrial contexts. For example, how would you express them for an FFT or inverse FFT function, or even something as simple as calculating the cost of a phone call.

I'll start taking them seriously when they can be used for something interesting and useful, for example proving that a set of communicating FSMs are deadlock-free, or proving the liveness of some real-time code. Background: the last time I showed a pure mathematician a real-life FSM (from network protocols), he recoiled in horror at the complexity.

But yes, such techniques can be useful in limited circumstances, e.g. proving the correctness of floating point implementations.

Well, Eiffel has them as part of the language - something I do with assert() in C/C++. I have very little experience with Eiffel, but it does compile into C code, so it is not that impractical.

Speaking of Frama-C - the tool does support floating point, so it tries to check proof obligations for your IEEE 754 floating-point code on a bit-wise basis. I think if you split the FFT into small functions and then prove them one by one, you will arrive at your destination.

For large FSMs, deadlock checking, etc., I have had very good experience using PROMELA + SPIN. Promela is a language and SPIN is a tool that analyzes Promela models. It is very easy to use, in my opinion. It does not, however, automatically generate C code. It is NOT an academia toy - it was designed at Bell Labs and used by them to check phone station/PBX software. It looks like C.

Title: Re: Test Driven Development
Post by: tggzzz on December 02, 2014, 06:11:50 pm
For example, how would you express them for an FFT or inverse FFT function, or even something as simple as calculating the cost of a phone call.
Interesting question. How would you go about testing the correctness of an FFT implementation? And then the exact same question, but then for testing an FFT implementation of an other platform where you have some room for a limited test environment but by no means as much as on a modern PC. Lets say the target is a cortex M3.

Well, you can't fully test an FFT, of course. One technique would be to compare (a large number of) inputs and outputs with those from a "golden" implementation written by other people, e.g. in Matlab.
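A sketch of that comparison harness (names invented; the golden arrays would be dumped from the reference implementation): compare within a tolerance, since bit-exact equality across implementations is hopeless for floating point.

Code: [Select]
#include <math.h>
#include <stddef.h>
#include <assert.h>

/* Pass if every output bin is within tol of the golden reference */
static int matches_golden(const float *actual, const float *golden,
                          size_t n, float tol)
{
    for (size_t i = 0; i < n; i++) {
        if (fabsf(actual[i] - golden[i]) > tol) {
            return 0;   /* first mismatching bin fails the comparison */
        }
    }
    return 1;
}

int main(void)
{
    /* Stand-ins: in reality these come from the FFT under test and
       from Matlab/Octave output exported as C arrays. */
    const float actual[] = { 1.0000f, 0.5001f, 0.2500f };
    const float golden[] = { 1.0000f, 0.5000f, 0.2500f };
    assert(matches_golden(actual, golden, 3, 1e-3f));
    return 0;
}

A sensible tolerance would typically scale with the FFT length and input magnitude rather than being a fixed constant.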

As for the cross-compiled version, you would have to "inject" the inputs and "extract" the outputs from the target environment.

But neither of those techniques is TDD - which doesn't matter in the slightest, except that they are a good example of the limitations of TDD.

Quote
Quote
I'll start taking them seriously when they can be used for something interesting and useful, for example proving that a set of communicating FSMs are deadlock-free, or proving the liveness of some real-time code. Background: the last time I showed a pure mathematician a real-life FSM (from network protocols), he recoiled in horror at the complexity.
Again an interesting problem. I'm not a big fan of following the latest acronym soup, but rather a fan of mix & match. TDD (or rather my limited understanding of TDD) seems to have some nice ideas. And it's not as if these ideas are unique to TDD. The way I look at it is as a collection of ideas from which you grab the ones you like and ditch the ones you don't. All that introductory waffle for the following point: one of the ideas of TDD is that you let the tests drive the design. And let's not get into a whole debate about good or bad on that. Short version: who the fuck thought it a good idea to let tests dictate the product? However, having your tests inform your design seems like a good idea to me.

Agreed.

Quote
You want a design that is testable.

No, you want a design that works as intended and solves the problem for which it is designed. Don't forget the old engineering aphorism that "You can't inspect/test quality into a design/product". As you say, it isn't a good idea to let the tests dictate the design.

Quote
So when you find out while writing your test plan that the way you designed your product makes it really, really difficult to test, then maybe it's time to rethink the design. Case in point: the tangled FSM web. If you find that testing that sucker gives your local friendly math dude a migraine, then maybe a redesign that is less tangled is in order? And I'm not saying you should have rewritten it, because what do I know. It's your design, and you know it way better than some random internet person. I'm just thinking out loud here. Isn't one of the (IMO useful) ideas of TDD to have your tests inform the design? Assuming the aim is to end up with a design that is testable, you want a design that is as test-friendly as possible. Maybe the FSM isn't the best idea, because when implementing protocols, the protocol is a given. But hopefully it gets the point across.

I'm always in favour of simplification - indeed I often "joke" that I know things are going well when the number of lines of code reduces.  A beneficial side effect is that it drives naive managers and code counters up the wall :)

In the case in question, however, the FSM was the protocol, as defined in the standard. The FSM was designed and specified independently of any implementation, and ensured that different equipment from different manufacturers interoperated with each other.

Incidentally, that is normal practice for networking and telecom systems: they don't care how they are implemented, but they do care about the externally visible behaviour.

Quote
Incidentally, how do you prove the FSM doesn't get stuck in an unintended state?

By careful thought. :(
Title: Re: Test Driven Development
Post by: tggzzz on December 02, 2014, 06:18:08 pm
I think if you split an FFT into small functions and then prove them one by one, you will arrive at your destination.

I'm skeptical, and would require a demonstration that TDD can lead you to that nirvana. In other cases, I believe TDD can be very helpful; indeed I have helped introduce such disciplines into companies.

Quote
For large FSMs, deadlock checking etc., I had very good experience using PROMELA + SPIN. Promela is the modelling language and SPIN is the tool that checks Promela models. It is very easy to use, in my opinion. It does not, however, automatically generate C code. It is NOT an academic toy - it was designed at Bell Labs and used by them to check phone station/PBX software. It looks like C.

I'm not aware of those, but will look into them if I have suitable problems in future.

There is, of course, the standard question of how to ensure/demonstrate that the implementation matches the Promela/SPIN executable specification.
Title: Re: Test Driven Development
Post by: Alexei.Polkhanov on December 02, 2014, 07:44:18 pm
For code generation maybe you can try SPIN "Refinement" - I have never tried it, but it looks like a pretty mature project. https://github.com/codelion/SpinR (https://github.com/codelion/SpinR)

Title: Re: Test Driven Development
Post by: andersm on December 03, 2014, 12:12:02 am
Being involved in infrastructure/environment/stack or embedded code, I haven't had that luxury.
Are you really trying to claim that you run into so many compiler bugs that unit tests are useless? Because you're either overstating the problem or need to switch compiler vendor post-haste.
Title: Re: Test Driven Development
Post by: tggzzz on December 03, 2014, 12:31:54 am
Being involved in infrastructure/environment/stack or embedded code, I haven't had that luxury.
Are you really trying to claim that you run into so many compiler bugs that unit tests are useless? Because you're either overstating the problem or need to switch compiler vendor post-haste.

?!?

I suggest you go back and (re)read the context in which I made my statement. Even then I don't see how you could infer that was the case.

It would also help you (and therefore your audience) if you didn't delete the relevant context.
Title: Re: Test Driven Development
Post by: Smokey on January 19, 2019, 07:39:15 am
Bump for an interesting discussion that's been dormant for a while. 

What's the opinion on back writing unit tests for a code base that didn't use them from the beginning?
Title: Re: Test Driven Development
Post by: tggzzz on January 19, 2019, 09:33:24 am
Bump for an interesting discussion that's been dormant for a while. 

What's the opinion on back writing unit tests for a code base that didn't use them from the beginning?

How do you know what that code base ought to do?
How do you know whether some behaviour is or isn't a bug?
Do you have to preserve those bugs?
Title: Re: Test Driven Development
Post by: Smokey on January 19, 2019, 10:35:46 am
Bump for an interesting discussion that's been dormant for a while. 

What's the opinion on back writing unit tests for a code base that didn't use them from the beginning?

How do you know what that code base ought to do?
How do you know whether some behaviour is or isn't a bug?
Do you have to preserve those bugs?

Good points.  I guess the idea of back writing unit tests is more about being able to check that you aren't accidentally breaking things moving forward.
Title: Re: Test Driven Development
Post by: tggzzz on January 19, 2019, 10:58:55 am
Bump for an interesting discussion that's been dormant for a while. 

What's the opinion on back writing unit tests for a code base that didn't use them from the beginning?

How do you know what that code base ought to do?
How do you know whether some behaviour is or isn't a bug?
Do you have to preserve those bugs?

Good points.  I guess the idea of back writing unit tests is more about being able to check that you aren't accidentally breaking things moving forward.

If you don't know the correct behaviour, how do you know if something is broken?

If, OTOH, you mean "changed" rather than "broken", then that is equivalent to the Halting Problem. The next unwritten test might reveal a change.

Fundamentally, you can neither test correct behaviour into a product nor inspect quality into one. You have to design them in.

Youngsters aren't taught those engineering fundamentals anymore.
Title: Re: Test Driven Development
Post by: Smokey on January 19, 2019, 11:26:15 am
Bump for an interesting discussion that's been dormant for a while. 

What's the opinion on back writing unit tests for a code base that didn't use them from the beginning?

How do you know what that code base ought to do?
How do you know whether some behaviour is or isn't a bug?
Do you have to preserve those bugs?

Good points.  I guess the idea of back writing unit tests is more about being able to check that you aren't accidentally breaking things moving forward.

If you don't know the correct behaviour, how do you know if something is broken?

If, OTOH, you mean "changed" rather than "broken", then that is equivalent to the Halting Problem. The next unwritten test might reveal a change.

Fundamentally, you can neither test correct behaviour into a product nor inspect quality into one. You have to design them in.

Youngsters aren't taught those engineering fundamentals anymore.

Ok.. Ok.. I get we are playing the semantics game here, and I get that it's important to be precise (sometimes), but you probably know what I'm getting at. 

Yes, changed is what I'm considering as broken and that would hopefully get picked up by a unit test. 
Title: Re: Test Driven Development
Post by: tggzzz on January 19, 2019, 11:40:45 am
Bump for an interesting discussion that's been dormant for a while. 

What's the opinion on back writing unit tests for a code base that didn't use them from the beginning?

How do you know what that code base ought to do?
How do you know whether some behaviour is or isn't a bug?
Do you have to preserve those bugs?

Good points.  I guess the idea of back writing unit tests is more about being able to check that you aren't accidentally breaking things moving forward.

If you don't know the correct behaviour, how do you know if something is broken?

If, OTOH, you mean "changed" rather than "broken", then that is equivalent to the Halting Problem. The next unwritten test might reveal a change.

Fundamentally, you can neither test correct behaviour into a product nor inspect quality into one. You have to design them in.

Youngsters aren't taught those engineering fundamentals anymore.

Ok.. Ok.. I get we are playing the semantics game here, and I get that it's important to be precise (sometimes), but you probably know what I'm getting at. 

Yes, changed is what I'm considering as broken and that would hopefully get picked up by a unit test.

This is more than semantics; we can all live in hope.

You need to understand and state your objectives and non-objectives. You have to understand what you aren't doing, what the probability of failure is, and what the penalty of failure is. If the answer is "not much", then "crossing your fingers" or "touching wood" is probably sufficient.
Title: Re: Test Driven Development
Post by: andersm on January 19, 2019, 02:02:43 pm
What's the opinion on back writing unit tests for a code base that didn't use them from the beginning?
This is largely what the book "Working Effectively with Legacy Code" is all about. It seems someone has uploaded a copy to archive.org (https://archive.org/details/WorkingEffectivelyWithLegacyCode) (perfectly legally I'm sure). The examples have a heavy OOP bent, but most described techniques work equally well with non-OOP languages.
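
The central trick in that book is the "characterization test": record whatever the legacy code does now, right or wrong, so that future changes get flagged. A minimal sketch - legacy_tax() and its recorded values are invented here:

Code:
/* Characterization test sketch: legacy_tax() and the recorded values are
 * invented. The "golden" numbers come from running the current build, not
 * from any spec - the test only detects change, not correctness. */
#include <stdio.h>

extern double legacy_tax(double amount);   /* assumed legacy routine */

int main(void)
{
    static const struct { double in, recorded; } cases[] = {
        { 0.0,    0.0   },
        { 100.0,  8.25  },
        { 999.99, 82.49 },
    };
    int n = (int)(sizeof cases / sizeof cases[0]);
    for (int i = 0; i < n; i++) {
        double got = legacy_tax(cases[i].in);
        if (got != cases[i].recorded) {        /* exact compare: any drift counts */
            printf("CHANGED: legacy_tax(%g) = %g, recorded %g\n",
                   cases[i].in, got, cases[i].recorded);
            return 1;
        }
    }
    puts("unchanged");
    return 0;
}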
Title: Re: Test Driven Development
Post by: rhb on January 19, 2019, 04:41:46 pm
It was not embedded code, but I ported 500KLOC of VAX FORTRAN to Unix.  I wrote a regression test facility for the job.  Whenever we wrote or ported a piece of code we wrote basic unit tests for it.  We did not attempt to provide full coverage.  Just some basic obvious tests.

Every time the system was built we ran the regression test suite and all the results were checked.  If an error was found we fixed the bug and added an additional test to the test suite.  Sometimes changes that appeared to be completely unrelated caused a failure in some test.

After the first year I was preparing the release notes for the 2nd release.  We did not provide all the programs in the first release.  When I looked at the version control logs I found that we had fewer than a dozen user-submitted bug reports.  We caught many more during the port, especially when we ported to AIX, as the IBM compiler was much more strict about following the F77 standard.  Eventually the code was ported to 6 different Unix systems.  The last two, IRIX and Ultrix, were an afternoon lark.

The bug rate went down from the first release over the course of 6 years.  After a merger it continued in service for another 6-8 years without any support and was finally abandoned because the functionality was no longer needed.

As a consequence of that experience 27 years ago I perform unit tests at frequent intervals during development and save them under version control.  As historically I have been doing scientific codes for DSP and such, the first thing I write is the data I/O, and I verify that what I write out is exactly what I read in.

Prior to the experience cited above I worked in a largish development team with a separate testing group.  So the developers did *no* unit testing at all.  It was very common that as soon as the code was compiled and run on a big endian machine it crashed because of a long/short mismatch on a function call.  Basic unit tests should be required of the developers.  If they are not, a separate test group certainly cannot test quality into the code afterwards.

In all I have done support work on over 2 million lines of old code.  For FORTRAN I have a standard routine:

Invoke IMPLICIT NONE in all functions and subroutines.
Inspect the implicit names in the compiler error messages for transposed characters in names and fix them.

Place all COMMON blocks in a separate file and include it using the C preprocessor.  That has always turned up at least one or two bugs, as I check each block when I change to the #include.

Fix all compiler warnings.

In C I hunt down any invocations of gets(3c) and replace them.  I also test all pointers before a call to free(3c) or any other use where a null pointer would cause a crash.

While this does not fix all the bugs, it fixes a lot of bugs which have gone undetected for years.

Test driven development is just the SW version of design for test.

For one library I wrote, I had a section at the start of each function which tested for an environment variable that matched the function name.   If it was set, the function used the string as the name of a file and wrote all the input parameters to that file.  At the end of the file was a main() guarded by an #ifdef TEST that read that file and called the function.   This was very helpful dealing with GUI programs, as I could test a problem without having to duplicate all the mouse clicks to get to the suspect routine.   I used the same format for the unit tests I wrote during development.  This was invaluable in dealing with numerical codes which evaluated expressions with nearby singularities that could be encountered in the presence of noise in the data.
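
A rough sketch of that capture-and-replay idea (names and file format invented; the real thing presumably dumped richer data):

Code:
/* Capture-and-replay sketch; all names invented. If an environment variable
 * matching the function name is set, the inputs are appended to that file
 * for later replay by the TEST harness below. */
#include <stdio.h>
#include <stdlib.h>

double suspect_calc(double a, double b)
{
    const char *dump = getenv("suspect_calc");
    if (dump) {
        FILE *f = fopen(dump, "a");
        if (f) { fprintf(f, "%.17g %.17g\n", a, b); fclose(f); }
    }
    return a / (b + 1e-12);                 /* stand-in for the real computation */
}

#ifdef TEST
int main(int argc, char **argv)
{
    double a, b;
    FILE *f;
    if (argc < 2 || (f = fopen(argv[1], "r")) == NULL)
        return 1;
    while (fscanf(f, "%lf %lf", &a, &b) == 2)   /* replay each captured call */
        printf("suspect_calc(%.17g, %.17g) = %.17g\n", a, b, suspect_calc(a, b));
    fclose(f);
    return 0;
}
#endif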

Title: Re: Test Driven Development
Post by: tggzzz on January 19, 2019, 06:07:40 pm
As a consequence of that experience 27 years ago I perform unit tests at frequent intervals during development and save them under version control. 

It always astounds me that it took so long for mainstream software to catch onto that fundamental engineering concept.
It always astounds me that mainstream softies think it is a panacea and that "it works because it passed its unit tests".

Seriously; I've heard that more than once. The standard of the unit tests is usually crap compared to those required in hardware development.
Title: Re: Test Driven Development
Post by: coppice on January 19, 2019, 06:11:58 pm
As a consequence of that experience 27 years ago I perform unit tests at frequent intervals during development and save them under version control. 

It always astounds me that it took so long for mainstream software to catch onto that fundamental engineering concept.
It always astounds me that mainstream softies think it is a panacea and that "it works because it passed its unit tests".

Seriously; I've heard that more than once. The standard of the unit tests is usually crap compared to those required in hardware development.
There are far too many software people who think it's OK when the only people developing the tests are the people developing the code, because they haven't really figured out WHY you test.
Title: Re: Test Driven Development
Post by: rhb on January 19, 2019, 09:56:31 pm
As a consequence of that experience 27 years ago I perform unit tests at frequent intervals during development and save them under version control. 

It always astounds me that it took so long for mainstream software to catch onto that fundamental engineering concept.
It always astounds me that mainstream softies think it is a panacea and that "it works because it passed its unit tests".

Seriously; I've heard that more than once. The standard of the unit tests is usually crap compared to those required in hardware development.
There are far too many software people who think it's OK when the only people developing the tests are the people developing the code, because they haven't really figured out WHY you test.

I've never seen anything resembling a standard for unit tests.  I had one. It was very simple.  All *possible* inputs must be handled by either successful completion or an error code returned to the caller. And the unit test needed to supply *all* possible inputs.  I usually used awk scripts to generate the test files.
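
Where the input domain is small, that standard is perfectly practical. A minimal sketch, with saturating_add8() as a made-up example - all 65536 input pairs are swept:

Code:
/* Exhaustive-input sketch; saturating_add8() is a made-up example routine. */
#include <stdint.h>
#include <stdio.h>

static uint8_t saturating_add8(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + b;
    return (uint8_t)(sum > 255u ? 255u : sum);
}

int main(void)
{
    /* 2^16 cases: every (a, b) pair checked against the definition. */
    for (unsigned a = 0; a <= 255; a++)
        for (unsigned b = 0; b <= 255; b++) {
            unsigned want = (a + b > 255) ? 255 : a + b;
            if (saturating_add8((uint8_t)a, (uint8_t)b) != want) {
                printf("FAIL: %u + %u\n", a, b);
                return 1;
            }
        }
    puts("PASS: all 65536 inputs");
    return 0;
}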

I've only dealt with three managed software development efforts.  All oil company internal developments. The VAX FORTRAN port only had unit tests because the other contractor suggested we do it and I was so far ahead of schedule that I just did it.  The other projects only had "testing" as a check box on some Gantt chart.  The developers did none and the "testers" knew nothing about what the software was supposed to do, so they just went through the motions.  The majority of my time was spent cleaning up the messes that other people had created.

My observation is that very few people are competent programmers.  Most scientists can't program, but that doesn't stop them.  With the sole exception cited above, none of the contractors I worked with were worth a damn.

A character in Shakespeare's Henry VI proposes to kill all the lawyers.   As far as I'm concerned we should kill the programmers first and then kill the lawyers.

The sad part is that how to do things properly is well known and has been extensively written about.  It's just that most software developers can't be bothered with learning their craft, just the language or framework du jour. I found it amusing when a poll found that over half of "tech workers" felt they were frauds.

I always signed my work and included my acm.org email address so that a future maintainer could contact me if needed.  I started doing that on my first contract job, the VAX port, because I wrote a 15,000 line library using lex and yacc and I knew that it was highly unlikely that anyone working on the code in the future would know lex and yacc.  So I put a comment at the top of the file explaining how to get in touch with me and a standing offer to fix any problems they encountered without charge.  That and another 15,000 line library I wrote *never* had a single bug found in them.  I did not spend a lot of time developing tests, though I did spend a good bit of time in the evening contemplating how to create them over a glass of scotch or whatever I was drinking at the time.

I also spent a lot of my own time and money buying and reading books.  The computer science section of my library consumes 80 ft of shelving.  And that doesn't include subject-specific books like linear programming solvers.  I recently bought a couple of books on the implementation of the Method of Moments: one by Gibson and the classic work by Harrington, as well as the marvelous EM book by Jian-Ming Jin.  Why?  Well, I want to do some CEM work.  To be able to write test cases to verify NEC2 or OpenEMS I need to know the details of the calculations so I can craft test cases that are likely to fail.

Sorry about the rant, but I spent many years fixing idiotic errors.

To return to the OP's question, every function or subroutine should have a unit test which tests *all* possible cases.  If that cannot be done,  the code should be factored into pieces small enough that it can.  No work should be accepted as complete without that.  And the first milestone in development should be writing the test suite.  When a developer is given a task they should be required to show that they know how to test the requirements that they were given.  That will actually catch faulty requirements before the bugs are created. 

Most of the programmers I've met blame flaws in the requirements on the customer. "Oh, you want wheels on  your car?  Why didn't you say so?  Tires, too?  We can do that, but we'll have to slip the schedule a bit."  It's the programmer's job to verify the requirements by creating the test suite.

Les Hatton has done a lot of work on software QA and written an excellent book on the subject, "Safer C".  I highly recommend it, but I am prejudiced as I've known Les for over 30 years.
Title: Re: Test Driven Development
Post by: tggzzz on January 19, 2019, 10:35:09 pm
As a consequence of that experience 27 years ago I perform unit tests at frequent intervals during development and save them under version control. 

It always astounds me that it took so long for mainstream software to catch onto that fundamental engineering concept.
It always astounds me that mainstream softies think it is a panacea and that "it works because it passed its unit tests".

Seriously; I've heard that more than once. The standard of the unit tests is usually crap compared to those required in hardware development.
There are far too many software people who think it's OK when the only people developing the tests are the people developing the code, because they haven't really figured out WHY you test.

I've never seen anything resembling a standard for unit tests. I had one. It was very simple.  All *possible* inputs must be handled by either successful completion or an error code returned to the caller. And the unit test needed to supply *all* possible inputs.

I doubt that. Consider proving that you have the correct settings everywhere in the system such that a database's ACID properties are valid.

I once had to code something (in HiLo) that did x+y+640 in a single clock cycle. The nearest I got to a complete test was to have an LFSR driving a behavioural model and my structural model in parallel, and comparing the outputs.
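
The same trick is easy to replay in software. A sketch, with C stand-ins for the behavioural and structural models and a standard 16-bit Galois LFSR as the stimulus:

Code:
/* The LFSR-driven compare-two-models idea redone in C; the original was in
 * HiLo, and both "models" here are trivial stand-ins. */
#include <stdint.h>
#include <stdio.h>

static uint16_t lfsr = 0xACE1u;          /* 16-bit maximal-length Galois LFSR */
static uint16_t lfsr_next(void)
{
    lfsr = (lfsr >> 1) ^ (-(lfsr & 1u) & 0xB400u);
    return lfsr;
}

static uint32_t behavioural(uint16_t x, uint16_t y) { return (uint32_t)x + y + 640u; }
static uint32_t structural(uint16_t x, uint16_t y)  { return (uint32_t)x + y + 640u; /* DUT goes here */ }

int main(void)
{
    for (long i = 0; i < 1000000L; i++) {    /* pseudo-random stimulus, not exhaustive */
        uint16_t x = lfsr_next(), y = lfsr_next();
        if (behavioural(x, y) != structural(x, y)) {
            printf("MISMATCH at x=%u y=%u\n", (unsigned)x, (unsigned)y);
            return 1;
        }
    }
    puts("no mismatches seen");
    return 0;
}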

Quote
The sad part is that how to do things properly is well known and has been extensively written about.  It's just that most software developers can't be bothered with learning their craft, just the language or framework du jour.

It is worse than that :(

The ones that have developed competence over time are rejected by HR-droids as being "past their sell-by date".

The ones that use the framework du jour believe the salesmen when they are told that it does everything for you. "Oh, so you've found a solution to the Byzantine generals problem? Where did you publish it?".

Quote
Sorry about the rant, but I spent many years fixing idiotic errors.

You're not alone.

Quote
To return to the OP's question, every function or subroutine should have a unit test which tests *all* possible cases.  If that cannot be done,  the code should be factored into pieces small enough that it can.  No work should be accepted as complete without that.  And the first milestone in development should be writing the test suite.  When a developer is given a task they should be required to show that they know how to test the requirements that they were given.  That will actually catch faulty requirements before the bugs are created. 

Developing tests is more challenging than developing code. If they can't code, what hope do they have creating adequate tests?
Title: Re: Test Driven Development
Post by: coppice on January 19, 2019, 11:09:06 pm
Developing tests is more challenging than developing code. If they can't code, what hope do they have creating adequate tests?
Developing good code is much harder than developing tests. However, most people develop poor code. Developing poor code with problematic corner cases is relatively easy, and developing tests which foresee where those troublesome corner cases are likely to be is hard. It usually requires the input of multiple people, looking at the problem in different ways.
Title: Re: Test Driven Development
Post by: tggzzz on January 19, 2019, 11:57:34 pm
Developing tests is more challenging than developing code. If they can't code, what hope do they have creating adequate tests?
Developing good code is much harder than developing tests. However, most people develop poor code.

I stand by my assertion.

In general, I see far more adequate code than adequate tests.

Where guaranteed operation is required, e.g. high availability and/or safety critical,  more time and effort is spent on the testing than the coding.
Title: Re: Test Driven Development
Post by: rhb on January 20, 2019, 01:10:39 am
If you're writing numerical codes, the tests are the hardest thing to do.  You've got to find a case where you know the answer analytically.  I have literally spent days hunting for those, poring through old mathematical physics texts for relevant solutions.

I repeat my statement:

If you cannot test *all* possible inputs to a routine, it needs to be broken down into smaller pieces.  Testing a full system of significant size is NP-hard.  You simply cannot do it.

The function of comprehensive unit tests is to make sure that functions don't have compensating errors which then propagate through the system.  If that happens the result is unrepairable.  Game over.

In the case of the parser I wrote I generated a file with every permutation of invalid syntax and made sure that the parser caught the error and issued an understandable error message.  No "bailing out near line one" stuff.

The most difficult function I ever wrote evaluated what is called "Gassmann's equation".  It has 3 terms, all of which have a denominator which approaches zero.  I spent 3-5 days figuring out how to do that.  I used the test suite I created to test commercial codes.  It blew all of them up.
Title: Re: Test Driven Development
Post by: coppice on January 20, 2019, 01:23:30 am
Where guaranteed operation is required, e.g. high availability and/or safety critical,  more time and effort is spent on the testing than the coding.
High reliability developers usually put a LOT of resources into code review, because they know they will never test out all the corner case issues.
If you cannot test *all* possible inputs to a routine, it needs to be broken down into smaller pieces.  Testing a full system of significant size is NP-hard.  You simply cannot do it.
Nice idea, but when you go one layer up from those routines you still get back to things being untestable.
Title: Re: Test Driven Development
Post by: rhb on January 20, 2019, 03:42:17 am
Where guaranteed operation is required, e.g. high availability and/or safety critical,  more time and effort is spent on the testing than the coding.
High reliability developers usually put a LOT of resources into code review, because they know they will never test out all the corner case issues.
If you cannot test *all* possible inputs to a routine, it needs to be broken down into smaller pieces.  Testing a full system of significant size is NP-hard.  You simply cannot do it.
Nice idea, but when you go one layer up from those routines you still get back to things being untestable.

The purpose of tests is to verify that a change has not *broken* something.  They are not there to make sure it is correct except in the cases I cited of comparing a numerical solution to an analytic solution or of verifying that all possible inputs are handled properly.

As for my comment, I stated the limitations on what you *can* test and how you deal with it.  What you're arguing is there is no point in testing because the general case is NP-hard.  N layers up may or may not be NP-hard.  It depends entirely upon the problem and the structure of the solution.  Good programmers pay very close attention to complexity and control it as much as possible.

My experience is that with good unit tests and even a modest amount of system level test coverage not many bugs get by.  As for numerical problems, Forman Acton has written extensively on the subject and how to address it.  For problems such as A = B + C, where all three terms are fractions with denominators which approach zero, it is very difficult to get the computation correct. As this is describing a physically realizable problem, a solution exists, but implementing it in software in the presence of noise, quantization error and round-off can be very difficult.
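
A tiny illustration of why such terms are treacherous - not Gassmann's equation itself, just the classic (1 - cos x)/x^2, which tends to 1/2 as x approaches 0 but collapses in the naive form:

Code:
/* Not Gassmann's equation - just the classic cancellation hazard:
 * (1 - cos x)/x^2 -> 1/2 as x -> 0, but the naive form loses all
 * significance for small x, while the rearranged form stays accurate. */
#include <math.h>
#include <stdio.h>

static double naive(double x)  { return (1.0 - cos(x)) / (x * x); }
static double stable(double x) { double s = sin(x / 2.0); return 2.0 * s * s / (x * x); }

int main(void)
{
    for (double x = 1e-1; x > 1e-9; x /= 10.0)
        printf("x=%8.1e  naive=%.15f  stable=%.15f\n", x, naive(x), stable(x));
    return 0;
}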

After I have a clear statement of the requirements (Do you want wheels and tires?) the first thing I do is examine the computational complexity.  If a problem is NP-hard I then negotiate a satisfactory approximation.

By training I am a scientist.  My sole computer related course was 1 hr of WATFIV.  So I typically know what the best approximation is, but I always explain the issue to the client and let them decide from the available choices.

The fundamental problem with most testing is it presumes that a million monkeys with typewriters can produce a Shakespeare play.  And the entire test program is based on that fallacy.
Title: Re: Test Driven Development
Post by: coppice on January 20, 2019, 03:56:34 am
As for my comment, I stated the limitations on what you *can* test and how you deal with it.  What you're arguing is there is no point in testing because the general case is NP-hard.  N layers up may or may not be NP-hard.  It depends entirely upon the problem and the structure of the solution.  Good programmers pay very close attention to complexity and control it as much as possible.
I never said don't test. I said your approach of comprehensive testing at the lowest level of routines has limitations as soon as you move up the stack of routines. You still have to do the most testing you realistically can.
Title: Re: Test Driven Development
Post by: snarkysparky on January 21, 2019, 01:57:25 pm
Didn't I read somewhere that there is way to prove software correctness other than exhaustive test of every possible input sequence, deep enough to exercise the particular code through its maximum state depth?  I mean, how many iterations does a change in input take to fully propagate out of a system?

This makes complete testing impossible for most cases.  Consider a double precision FFT: not only every valid floating point value, but every combination of all possible floating point values.

Title: Re: Test Driven Development
Post by: rhb on January 21, 2019, 02:33:04 pm
In the general case, proving program correctness is equivalent to solving the halting problem, which Alan Turing proved undecidable.

https://www.cs.rochester.edu/~nelson/courses/csc_173/computability/undecidable.html (https://www.cs.rochester.edu/~nelson/courses/csc_173/computability/undecidable.html)

https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pd (https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pd)

The function of testing is not to prove something is correct.  It's to catch errors as early as possible so that the rest of the system doesn't develop dependencies on the error.  The BSD tape IOCTL code is the most notorious example of an error propagating into other programs.

Once a dependency on an error has propagated into a large amount of code it becomes very expensive to fix.
Title: Re: Test Driven Development
Post by: tggzzz on January 21, 2019, 02:36:57 pm
Didn't I read somewhere that there is way to prove software correctness other than exhaustive test of every possible input sequence, deep enough to exercise the particular code through its maximum state depth?  I mean, how many iterations does a change in input take to fully propagate out of a system?

This makes complete testing impossible for most cases.  Consider a double precision FFT: not only every valid floating point value, but every combination of all possible floating point values.

You may have read it somewhere, but that doesn't make it correct.

Complete testing is indeed impossible for all but trivial examples: "state space explosion". That does not diminish the utility of testing.

You should start by understanding the differences between "designing" and "testing".

Once you have that under your belt, you can move on to understanding the difference between "inspection" and "formal proof".
Title: Re: Test Driven Development
Post by: tggzzz on January 21, 2019, 02:38:55 pm
The function of testing is not to prove something is correct.

Many people believe that is the function of testing, and it isn't trivial to disabuse them of that notion. :(
Title: Re: Test Driven Development
Post by: snarkysparky on January 21, 2019, 03:10:54 pm


You should start by understanding the differences between "designing" and "testing".

Once you have that under your belt, you can move on to understanding the difference between "inspection" and "formal proof".

I googled "designing vs testing" and nothing came up.  Is it a common topic ?

I left out a "no" in original post.  I meant.
 somewhere that there is no way to prove software correctness


It's of interest to me now because I am coding for some people that demand 'proof' of a lack of errors.

Title: Re: Test Driven Development
Post by: rhb on January 21, 2019, 05:15:45 pm
The function of testing is not to prove something is correct.

Many people believe that is the function of testing, and it isn't trivial to disabuse them of that notion. :(

Yeah, every IT manager I ever dealt with.  Fortunately, I usually worked for a science manager and had the latitude to do whatever I thought appropriate. 

One time there was a new IT manager.  When I went around to introduce myself I told him, "I'm the guy from G&G (geology and geophysics) they send around with a baseball bat to bring IT into line." 

I got on very well with those people because I knew when the G&G types were making unreasonable demands and would call them out on it, as I was at least as well trained in G&G as they were *and* I also knew what was critical to IT.  My usual mode of operation was to meet with the manager's 3 senior staff.  We'd work out a solution and go to his office.  I'd explain the issue and the proposed solution.  He'd look at his staff, they'd nod their heads and we were done.

I used to tell IT types, "The CEO sits at the right hand of God.  But the exploration manager *is* God.  Never tell God no.  He's got a million dollar signature authority and if you tell him no and he wants it, he'll bring in a contractor and you won't know about it until there's a half million lines of code for you to support."

So you always say, "Yes.  We can do that.  This approach will cost this much and this other approach will cost that. I recommend choice A for the following reasons."
Title: Re: Test Driven Development
Post by: SiliconWizard on January 21, 2019, 05:30:11 pm
The function of testing is not to prove something is correct.

Many people believe that is the function of testing, and it isn't trivial to disabuse them of that notion. :(

A test can show that a bug exists. It can't prove there are no bugs. Proving something can happen just takes one example. No finite number of tries can prove something never happens. It's pretty much that simple. Testing is an experimental process by nature. That doesn't mean, of course, that we shouldn't do it.

Many people find it troublesome to think there is no real way of testing for correctness, so they either go into denial or, for the honest ones, they try to cling to formal proof.
There have been an astounding number of attempts to design formal proof systems for software/digital design development over the past few decades. A laudable effort, but I'm still not convinced they have really managed to make things any better in practice.
Title: Re: Test Driven Development
Post by: rhb on January 21, 2019, 06:17:13 pm
The function of testing is not to prove something is correct.

Many people believe that is the function of testing, and it isn't trivial to disabuse them of that notion. :(

A test can show that a bug exists.
[snip]

Most importantly it can show it *is* fixed and stays that way.   

Beyond rudimentary unit tests, that is what I consider the most important objective. Regression testing catches complementary errors - pairs that cancel each other out, so that fixing one but not the other produces a failure.  I've seen that happen quite a lot.  It's the main reason I am a big fan of regression testing.

It was a real surprise the first time a small change in module B resulted in a unit test failing for a seemingly unrelated function in module Y.  If you've only written a few thousand lines of code between system build & test  cycles it's not too hard to find.  I don't want to think what it would be like if you had 100K lines of changes and new code.

Despite knowing better, I bought a bunch of books on proving correctness.  None actually proved useful for anything other than an example of a delusion.
Title: Re: Test Driven Development
Post by: snarkysparky on January 21, 2019, 06:59:44 pm
So basically what can be done to test software is to check it for known issues like edge cases and invalid input, etc., but bug free operation cannot be guaranteed unless the software is so simple that an exhaustive test of all input sequences can be performed.

Lots of people don't like to hear that. 

Title: Re: Test Driven Development
Post by: tggzzz on January 21, 2019, 07:01:05 pm


You should start by understanding the differences between "designing" and "testing".

Once you have that under your belt, you can move on to understanding the difference between "inspection" and "formal proof".

I googled "designing vs testing" and nothing came up.  Is it a common topic ?

I left out a "no" in original post.  I meant.
 somewhere that there is no way to prove software correctness


It's of interest to me now because I am coding for some people that demand 'proof' of a lack of errors.

It sounds like your role is "project scapegoat".

Demand they prove there are no errors in their specification, or run away.
Title: Re: Test Driven Development
Post by: tggzzz on January 21, 2019, 07:05:15 pm
So basically what can be done to test software is to check it for known issues like edge cases and invalid input, etc., but bug free operation cannot be guaranteed unless the software is so simple that an exhaustive test of all input sequences can be performed.

Lots of people don't like to hear that.

Getting a specification that is bug free is at least as hard as getting code that implements the specification.

"Verification" is not the same as "validation". One aims to "do the right thing", the other aims to "do the thing right".
Title: Re: Test Driven Development
Post by: rhb on January 21, 2019, 09:09:51 pm
So basically what can be done to test software is to check it for known issues like edge cases and invalid input, etc., but bug free operation cannot be guaranteed unless the software is so simple that an exhaustive test of all input sequences can be performed.

Lots of people don't like to hear that.

Getting a specification that is bug free is at least as hard as getting code that implements the specification.


A substantial part of the programmer's job is to make sure the specification is correct.  I am really sick of programmers who blame the customer when the real problem was the programmer never bothered to find out what the actual problem was that the customer wanted solved.

Most of the times that people came in my office wanting a program written, after I asked them a few questions, they suddenly exclaimed, "I know how to do that!" and ran off. Typically I'd just solved a $30K problem for $20.
Title: Re: Test Driven Development
Post by: dmills on January 21, 2019, 09:26:04 pm
A substantial part of the programmer's job is to make sure the specification is correct.  I am really sick of programmers who blame the customer when the real problem was the programmer never bothered to find out what the actual problem was that the customer wanted solved.
What **REALLY** helps is having at least some of the team with some serious task-domain chops; it is MUCH easier to figure out what the customer actually needs (as well as where the problems in the spec are) if you know how to do the customer's job to at least a reasonable level.
There is however a somewhat popular view among the 'hard of thinking' end of management that programmers are interchangeable, which makes achieving this harder than it should be. 

Regards, Dan.
Title: Re: Test Driven Development
Post by: tggzzz on January 21, 2019, 09:31:15 pm
So basically what can be done to test software is to check it for known issues like edge cases and invalid input, etc., but bug free operation cannot be guaranteed unless the software is so simple that an exhaustive test of all input sequences can be performed.

Lots of people don't like to hear that.

Getting a specification that is bug free is at least as hard as getting code that implements the specification.


A substantial part of the programmer's job is to make sure the specification is correct.  I am really sick of programmers who blame the customer when the real problem was the programmer never bothered to find out what the actual problem was that the customer wanted solved.

Most of the times that people came in my office wanting a program written, after I asked them a few questions, they suddenly exclaimed, "I know how to do that!" and ran off. Typically I'd just solved a $30K problem for $20.

We are in violent agreement.

That attitude is valuable in many cases and in many ways. And that specifically includes this forum. There's a lot of "applied philosophy" in systems development ("what do you mean by 'fast'" and "does a vehicle include horses"?).

The only thing I'll note is that customers can become hostile when asked to explain themselves; such twats dislike their ignorance being exposed ("it is obvious; just do it") and/or having their wooly thinking exposed.
Title: Re: Test Driven Development
Post by: coppice on January 21, 2019, 10:37:40 pm
A substantial part of the programmer's job is to make sure the specification is correct.  I am really sick of programmers who blame the customer when the real problem was the programmer never bothered to find out what the actual problem was that the customer wanted solved.
Programmers aren't usually in a good position to know what the quality of a requirement specification really is. They just don't know enough about the details of what the customer would consider correct behaviour. In the late 80s there was a burst of activity with specification by simulation techniques. I saw these result in some of the best specified projects I have seen. It went out of fashion, though. It isn't always easy to get a customer to treat a simulation seriously, and put effort into playing with it until it does what they expect in every circumstance they check. However, once they start finding things they didn't expect they tend to warm to the idea, and it becomes productive.
Title: Re: Test Driven Development
Post by: rhb on January 22, 2019, 02:00:15 am

That attitude is valuable in many cases and in many ways. And that specifically includes this forum. There's a lot of "applied philosophy" in systems development ("what do you mean by 'fast'" and "does a vehicle include horses"?).

The only thing I'll note is that customers can become hostile when asked to explain themselves; such twats dislike their ignorance being exposed ("it is obvious; just do it") and/or having their wooly thinking exposed.

I had one instance where, per my standard practice, I wrote up a 1-page requirement and gave it to the requester.  I went back a couple of days later to discuss it.  "I don't have time for that paper work."  Not something I generally tolerate, but I let it pass.  I had estimated it as 2 weeks' work.  So I went ahead with it.  Meanwhile another guy across the hall from me was doing a very much simpler version of the same thing - probably 1/4 of the effort or less.  The math for what I'd been asked to do was complex and prone to instability.  The complexity is just an extra day or so, but unstable equations can take days to develop a reliable implementation.

So two weeks after his dismissive comment about the 1 page specification he wanted the program.  I told him, it's not finished.  He's all in a lather because his project will be late, so I told him about Wen Long's simplified version and suggested he might be able to get by with that.

A couple of days later we have a very tense meeting in his boss' office.  The two of them are all over my case.

"What's going on, You said two weeks work"   

"I said two weeks effort.  I didn't say it would be ready in 2 weeks.  I have a lot of other people I support."

"I don't know if we can continue to use you."

blah, blah , blah

When I got back to my office, there was a message from my boss' secretary that he and his boss wanted to see me in his office right away.  So I go down the hall expecting to be fired.

They wanted to see me to give me an award for the excellence of my work and how responsive I was to staff requests.  I was never able to find out what had initiated it. I just solved the problems I encountered and looked for problems that might arise.  I was a contractor, so the company award was rather unusual.  The first guy's project was never completed as he found Wen Long's simplified version adequate.

The first guy did not like me.  Previously he was buying a Linux cluster and I attended the meeting with the sales people and his boss.  During the discussion I raised the issue of power consumption.  This guy had a corner office with all glass walls in Houston.  The sales people said it's not a problem, it runs on a regular 20 A 120 V circuit.  The problem was it dissipated about 1800 watts.  They bought it and put it in his office.  In 2-3 days they moved it to the computer room.  After he'd been made to sweat for a few days :)

The most ironic part of the entire tale is that his boss, who told me to my face at my interview that he didn't want to hire an oil industry programmer and who had hired an Indian outfit to develop a system for running jobs on workstations evenings and weekends, became my primary client.  I attended his weekly staff meetings.  At one of these, during a vendor presentation, one of them remarked that they were considering hiring some Indians.  I was gobsmacked when he said, "Well, you get what you pay for." or similar.  All the work the Indians had done was discarded after burning through $100K or so at their cheap hourly rates.

I was returning to my office after visiting a senior seismic processor when two of the Indians called out to me.  They had a problem.  They needed to divert stderr from the programs they were running off hours to a file but couldn't figure out how to do it.  The two of them had been trying all morning. 

They showed me their code.  They were calling dup(2), dup2(2) and a long list of other exotic Unix system calls.  I was familiar with the calls and what they were used for but to this day have never encountered a reason to use them.  Once they explained the task, I wrote a 5 line (lots of white space) program that wrote to stderr and another 5 line program that reopened stderr on a file and ran the first program.  Compiled them, demonstrated it worked and left. They'd spent 8 man hours going nowhere.  The entire demonstration took me 5 minutes.  Even though I was being paid 3-4x what they were, I was still the lowest cost solution.
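
For the curious, the whole trick is essentially freopen(3). A reconstruction from memory of the shape of it, not the original code:

Code:
/* Reconstruction, not the original: reopen stderr on a file, then run the
 * job; the child inherits the redirected descriptor.
 * "the_offhours_job" is an invented name. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    if (freopen("errors.log", "w", stderr) == NULL)
        return 1;
    return system("./the_offhours_job");
}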

My career was unusual as I had been trained as a scientist, not as a programmer.  I did not get my PhD because of personal conflicts with my supervisor and after 4 years at Austin, it was not worth going to Stanford and spending another 6.  I'd also given myself a common law computer science PhD by buying and reading 80 ft of books.  So even without my doctorate I still commanded first rate PhD rates. I also survived a lot of layoffs.

When I quit work to move to Arkansas to look after Mom & Dad I'd expected to get a job where I did most of my work at home and flew to Houston for 3-4 days once a month.  The crash in 2008 killed that and I had a really tough time for several years until I stumbled across this forum.  I find living in cyber space more than a bit weird, but the company here matches the company anywhere I ever worked professionally.  This forum has completely changed my outlook on life.  The trolls are annoying, but the only thing that comes close is Usenet back in the late 80's and early 90's when mention of a book or paper in a post was likely to get a response from the author.
Title: Re: Test Driven Development
Post by: thermistor-guy on January 22, 2019, 02:28:47 am
...
Despite knowing better, I bought a bunch of books on proving correctness.  None actually proved useful for anything other than an example of a delusion.

To this day, I often use loop invariants and ghost variables, which I learned from David Gries' book:

https://www.springer.com/gp/book/9780387964805 (https://www.springer.com/gp/book/9780387964805)
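
For anyone who hasn't seen the technique, a toy example of a loop invariant checked against a "ghost" recomputation (my example, not one of Gries'):

Code:
/* Invented example, not one of Gries': the invariant "sum == a[0] + ... +
 * a[i-1]" is checked on every iteration by a ghost recomputation. The
 * debug build is quadratic, which is fine for a test. */
#include <assert.h>

long sum_array(const long *a, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++) {
#ifndef NDEBUG
        long ghost = 0;                 /* ghost variable: recompute from scratch */
        for (int j = 0; j < i; j++)
            ghost += a[j];
        assert(sum == ghost);           /* the loop invariant */
#endif
        sum += a[i];
    }
    return sum;
}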
Title: Re: Test Driven Development
Post by: rhb on January 22, 2019, 03:39:03 am
One of the few books with useful information.
Title: Re: Test Driven Development
Post by: andyturk on January 22, 2019, 06:42:25 am
So basically what can be done to test software is to check it for known issues like edge cases and invalid input, etc., but bug free operation cannot be guaranteed unless the software is so simple that an exhaustive test of all input sequences can be performed.

Lots of people don't like to hear that.
Of course there are no guarantees, even with (unit) testing. But if you devote some effort to test development, you'll end up with a better result than without it.

In my (perhaps limited) experience, tests are still code. Tests have bugs. Tests are incomplete. Tests don't necessarily keep up when the spec or the implementation changes. Even so, if you're able to write test fixtures from a perspective that's orthogonal to how the actual implementation works, you're ahead.

E.g., in addition to checking specific input values, you can generate a "random" collection of input values and look at all the results statistically. Or perform some numerical transformation in the test harness that can be used to measure the implementation.
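
For instance, a property-style sketch: hammer a routine with seeded-random inputs and check algebraic properties rather than fixed expected values. sort_under_test() is an invented stand-in:

Code:
/* Property-style sketch; sort_under_test() is an invented stand-in. The
 * checks are properties (ordering, weak conservation of contents), not
 * fixed expected outputs. */
#include <stdio.h>
#include <stdlib.h>

#define LEN 256

extern void sort_under_test(int *a, int n);

int main(void)
{
    int a[LEN];
    srand(12345);                           /* fixed seed so failures reproduce */
    for (int trial = 0; trial < 10000; trial++) {
        long before = 0, after = 0;
        for (int i = 0; i < LEN; i++) { a[i] = rand() % 1000; before += a[i]; }
        sort_under_test(a, LEN);
        for (int i = 0; i < LEN; i++) {
            after += a[i];
            if (i > 0 && a[i - 1] > a[i]) { puts("FAIL: not ordered"); return 1; }
        }
        if (before != after) { puts("FAIL: contents changed"); return 1; }
    }
    puts("PASS");
    return 0;
}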

That's one of the reasons I like using C++... I can use Googletest (https://github.com/google/googletest) and bring the full power of C++ to bear on the test harness even if the production code is straight C. And googlemock (https://github.com/google/googletest/tree/master/googlemock) is a beautiful thing.
Title: Re: Test Driven Development
Post by: Siwastaja on January 22, 2019, 06:58:03 am
Programmers aren't usually in a good position to know what the quality of a requirement specification really is. They just don't know enough about the details of what the customer would consider correct behaviour.

I think this is a management disaster. IMHO, programmers need to know the details of the customer case. Keeping programmers "out of the loop" never works, because the high-level design and specification are never perfect. If the programmer has something to say, they are able to produce huge savings of time and money. OTOH, if they are treated as "implementation machines", even in the best case they will implement the specification perfectly, resulting in an expensive product with multiple flaws, because I have never seen or heard of a specification that's both exactly and actually what the customer wants, and also technologically implementable in a sane/efficient way.

Programmers at least tend to have the best ground-level understanding of what can be easily done, and what is complex. They have the best chance of understanding which data structure is a PITA to use, and which data structure works.

The idea that programmers are comparable to the manual workers building a house or a bridge exactly according to blueprints, with the house/bridge separately designed by engineers, is a total fallacy. Programmers are part of the design team, want it or not. The equivalent of the house-building manual workers would be people who manually punch out cards to feed the program into a computer, exactly according to the listing. Since we have totally automated that manual part, some management people have become confused and tend to assume we still have them, and incorrectly take the programmers as such.
Title: Re: Test Driven Development
Post by: tggzzz on January 22, 2019, 09:17:44 am
So basically what can be done to test software is to check it for known issues like edge cases and invalid input, etc., but bug free operation cannot be guaranteed unless the software is so simple that an exhaustive test of all input sequences can be performed.

Lots of people don't like to hear that.
Of course there are no guarantees, even with (unit) testing. But if you devote some effort to test development, you'll end up with a better result than without it.

You'll end up with more confidence in the result, but whether it is a sufficiently good result depends on...

Quote
In my (perhaps limited) experience, tests are still code. Tests have bugs. Tests are incomplete. Tests don't necessarily keep up when the spec or the implementation changes. Even so, if you're able to write test fixtures from a perspective that's orthogonal to how the actual implementation works, you're ahead.

Regrettably you will find that limited experience repeated throughout your career :( I've seen some truly dreadful unit tests that were "OK because the green light is on".

Quote
E.g., in addition to checking specific input values, you can generate a "random" collection of input values and look at all the results statistically. Or perform some numerical transformation in the test harness that can be used to measure the implementation.

If you have a "golden" implementation that you can compare with your implementation, then it can increase confidence - but no more. Electronics: behavioural vs structural implementations. Software: executable specification vs your implementation.

Otherwise fuzzing can be useful for breaking black boxes.

Quote
That's one of the reasons I like using C++... I can use Googletest (https://github.com/google/googletest) and bring the full power of C++ to bear on the test harness even if the production code is straight C. And googlemock (https://github.com/google/googletest/tree/master/googlemock) is a beautiful thing.

The phrase "lipstick on a pig" springs to mind :)
Title: Re: Test Driven Development
Post by: coppice on January 22, 2019, 10:26:35 am
Programmers aren't usually in a good position to know what the quality of a requirement specification really is. They just don't know enough about the details of what the customer would consider correct behaviour.

I think this is a management disaster. IMHO, programmers need to know the details of the customer case. Keeping programmers "out of the loop" never works, because the high-level design and specification are never perfect. If the programmer has something to say, they are able to produce huge savings of time and money. OTOH, if they are treated as "implementation machines", even in the best case they will implement the specification perfectly, resulting in an expensive product with multiple flaws, because I have never seen or heard of a specification that's both exactly and actually what the customer wants, and also technologically implementable in a sane/efficient way.

Programmers at least tend to have the best ground-level understanding of what can be easily done, and what is complex. They have the best chance of understanding which data structure is a PITA to use, and which data structure works.

The idea that programmers are comparable to the manual workers building a house or a bridge exactly according to blueprints, with the house/bridge separately designed by engineers, is a total fallacy. Programmers are part of the design team, want it or not. The equivalent of the house-building manual workers would be people who manually punch out cards to feed the program into a computer, exactly according to the listing. Since we have totally automated that manual part, some management people have become confused and tend to assume we still have them, and incorrectly take the programmers as such.
You keep jumping between things which are requirements gathering, and things which are implementation related. I suggest you try some books on the topic of requirements gathering. There are a few good ones around, although it's some years since I consulted one, so I have no specific recommendation. There are some good short courses, too, where you can start to practice the art of flushing implied requirements out of customers.

Good requirements gathering is one of the toughest jobs in Engineering. You can never be sure you have gathered every requirement, even if you take the task really seriously. Any time you interview a customer, trying to flush out all their requirements, and they say something like "Isn't that obvious?", you've just hit something that was only obvious to them, and that they had no idea needed to be explicitly stated. Flushing out as many as possible of these job-knowledge-related requirements, which probably took years to become part of the customer's working knowledge set, is usually the difference between a satisfied customer and one litigating with you. Since it took the customer years to learn what needs to be done, how much of this do you think the programmers will pick up during a development cycle? This is one of the wonderful things about developing long-term relationships, as the implementers really can build up quite a detailed understanding of their customer's needs. Staff turnover in software development is usually quite high, though, so even long-term relationships are no panacea.
Title: Re: Test Driven Development
Post by: tggzzz on January 22, 2019, 10:42:57 am
You keep jumping between things which are requirements gathering, and things which are implementation related.

That's reasonable, since the implementation tests must include (but not be limited to) the points of value to the customer - i.e. their requirements.
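As a sketch of what "points of value as tests" can look like in practice (all names and numbers below are invented to illustrate the idea, not anyone's actual API): a gathered requirement written down as an executable check, so the requirement and its verification travel together.

Code: [Select]
// Requirement: the heater must shut off within 100 ms of an
// over-temperature reading. (Hypothetical requirement and interface;
// the shape of the test is the point.)
#include <gtest/gtest.h>

extern "C" {
#include "heater.h"   /* heater_reset(), heater_on(), heater_report_temp_c(),
                         heater_tick_ms(), heater_is_on() */
}

TEST(HeaterRequirements, ShutsOffWithin100msOfOverTemp) {
    heater_reset();
    heater_on();
    heater_report_temp_c(130);        // over the (assumed) 120 C limit
    for (int ms = 0; ms < 100; ++ms)  // advance a simulated millisecond clock
        heater_tick_ms();
    EXPECT_FALSE(heater_is_on());     // the customer's point of value
}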

I agree with your other points.
Title: Re: Test Driven Development
Post by: coppice on January 22, 2019, 11:24:33 am
You keep jumping between things which are requirements gathering, and things which are implementation related.
That's reasonable, since the implementation tests must include (but not be limited to) the points of value to the customer - i.e. their requirements.
I have found it to be very unreasonable. As you start to gather requirements, the implementation sometimes looks like it's going to be a nightmare. Then, as you continue to gather requirements, you find something that makes the complexity fall away. More often you start gathering requirements and the implementation looks like it will be a breeze. Then, as you continue, you suddenly flush out something obscure that will be a huge roadblock. The best course is usually to focus on gathering requirements with little thought about how a solution might be implemented. When the requirements are well fleshed out and you look at the implementation, you might find you have serious issues. At that point you might want/need to go back and negotiate on the requirements which cause problems. However, the first priority is to flush them out.
Title: Re: Test Driven Development
Post by: Siwastaja on January 22, 2019, 01:29:42 pm
I don't believe that the implementers should be decoupled from the requirement gathering. Well, maybe in a massive project, not every coder needs to participate in everything, but in general, I don't believe in the paradigm of "collect all the information" -> "top-level design without coding" -> "implementation" - all by different people.

The team is likely to get too big and too decoupled, and "very important" requirements that are very hard to implement sneak in, despite the fact that they might not be very important after all, or might even be completely wrong, yet eat up 90% of the schedule. Communication is key here, not "abstracting away" the communication.

Quote
As you start to gather requirements, the implementation sometimes looks like it's going to be a nightmare. Then, as you continue to gather requirements, you find something that makes the complexity fall away. More often you start gathering requirements and the implementation looks like it will be a breeze. Then, as you continue, you suddenly flush out something obscure that will be a huge roadblock.

This shows exactly why the implementation aspect needs to be present while gathering requirements: the requirements are not only software functions; requirements also include price and schedule (and whether the thing is feasible at all). If the customer gets feedback that a certain feature is almost impossible to implement and would blow the budget and schedule, they may want to compromise and think again. This is best found out early, before time is committed to the implementation. That is why the implementers should sit down together with the end customer.

And the coders are not idiots; quite the opposite: they may have bright ideas that help the customer solve the problem more easily and save money.

If it's done in a hierarchical waterfall model, the requirement gatherers and top-level designers need almost superhuman skills when it comes to understanding implementation; they would have to be able to implement it themselves. If they are more like marketing people, it's a disaster waiting to happen, especially if the customer is represented by their marketing people as well.
Title: Re: Test Driven Development
Post by: coppice on January 22, 2019, 01:41:30 pm
I don't believe that the implementers should be decoupled from the requirement gathering.
Has anyone suggested that should be the case?
Well, maybe in a massive project, not every coder needs to participate in everything, but in general, I don't believe in the paradigm of "collect all the information" -> "top-level design without coding" -> "implementation" - all by different people.
I agree that paradigm is stupid. Has anyone suggested it?
Title: Re: Test Driven Development
Post by: tggzzz on January 22, 2019, 02:25:53 pm
You keep jumping between things which are requirements gathering, and things which are implementation related.
That's reasonable, since the implementation tests must include (but not be limited to) the points of value to the customer - i.e. their requirements.
I have found it to be very unreasonable. As you start to gather requirements, the implementation sometimes looks like it's going to be a nightmare. Then, as you continue to gather requirements, you find something that makes the complexity fall away. More often you start gathering requirements and the implementation looks like it will be a breeze. Then, as you continue, you suddenly flush out something obscure that will be a huge roadblock. The best course is usually to focus on gathering requirements with little thought about how a solution might be implemented. When the requirements are well fleshed out and you look at the implementation, you might find you have serious issues. At that point you might want/need to go back and negotiate on the requirements which cause problems. However, the first priority is to flush them out.

Nobody has suggested anything that is in conflict with that.
Title: Re: Test Driven Development
Post by: Siwastaja on January 22, 2019, 02:28:59 pm
Has anyone suggested that should be the case?
I agree that paradigm is stupid. Has anyone suggested it?

I was probably reading "between the lines" a tad too much. Thanks for the clarification.