Author Topic: Test Driven Development  (Read 22125 times)


Offline Smokey

  • Super Contributor
  • ***
  • Posts: 2536
  • Country: us
  • Not An Expert
Re: Test Driven Development
« Reply #50 on: January 19, 2019, 07:39:15 am »
Bump for an interesting discussion that's been dormant for a while. 

What's the opinion on back writing unit tests for a code base that didn't use them from the beginning?
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19279
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Test Driven Development
« Reply #51 on: January 19, 2019, 09:33:24 am »
Bump for an interesting discussion that's been dormant for a while. 

What's the opinion on back writing unit tests for a code base that didn't use them from the beginning?

How do you know what that code base ought to do?
How do you know whether some behaviour is or isn't a bug?
Do you have to preserve those bugs?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Smokey

  • Super Contributor
  • ***
  • Posts: 2536
  • Country: us
  • Not An Expert
Re: Test Driven Development
« Reply #52 on: January 19, 2019, 10:35:46 am »

How do you know what that code base ought to do?
How do you know whether some behaviour is or isn't a bug?
Do you have to preserve those bugs?

Good points.  I guess the idea of back writing unit tests is more for the ability to check to keep from accidentally breaking things moving forward.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19279
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Test Driven Development
« Reply #53 on: January 19, 2019, 10:58:55 am »

Good points.  I guess the idea of back writing unit tests is more for the ability to check to keep from accidentally breaking things moving forward.

If you don't know the correct behaviour, how do you know if something is broken?

If, OTOH, you mean "changed" rather than "broken", then that is equivalent to the Halting Problem. The next unwritten test might reveal a change.

Fundamentally, you can neither test correct behaviour, nor inspect quality into a product. You have to design it into a product.

Youngsters aren't taught those engineering fundamentals anymore.
 

Offline Smokey

  • Super Contributor
  • ***
  • Posts: 2536
  • Country: us
  • Not An Expert
Re: Test Driven Development
« Reply #54 on: January 19, 2019, 11:26:15 am »

If you don't know the correct behaviour, how do you know if something is broken?

If, OTOH, you mean "changed" rather than "broken", then that is equivalent to the Halting Problem. The next unwritten test might reveal a change.

Fundamentally, you can neither test correct behaviour, nor inspect quality into a product. You have to design it into a product.

Youngsters aren't taught those engineering fundamentals anymore.

OK, OK. I get that we're playing a semantics game here, and I get that it's important to be precise (sometimes), but you probably know what I'm getting at.

Yes, "changed" is what I'm counting as "broken", and that would hopefully get picked up by a unit test.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19279
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Test Driven Development
« Reply #55 on: January 19, 2019, 11:40:45 am »

OK, OK. I get that we're playing a semantics game here, and I get that it's important to be precise (sometimes), but you probably know what I'm getting at.

Yes, "changed" is what I'm counting as "broken", and that would hopefully get picked up by a unit test.

This is more than semantics; we can all live in hope.

You need to understand and state your objectives and non-objectives. You have to understand what you aren't doing, what the probability of failure is, and what the penalty of failure is. If the answer is "not much", then "crossing your fingers" or "touching wood" is probably sufficient.
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1198
  • Country: fi
Re: Test Driven Development
« Reply #56 on: January 19, 2019, 02:02:43 pm »
What's the opinion on back writing unit tests for a code base that didn't use them from the beginning?
This is largely what the book "Working Effectively with Legacy Code" is all about. It seems someone has uploaded a copy to archive.org (perfectly legally I'm sure). The examples have a heavy OOP bent, but most described techniques work equally well with non-OOP languages.

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
Re: Test Driven Development
« Reply #57 on: January 19, 2019, 04:41:46 pm »
It was not embedded code, but I ported 500KLOC of VAX FORTRAN to Unix.  I wrote a regression test facility for the job.  Whenever we wrote or ported a piece of code we wrote basic unit tests for it.  We did not attempt to provide full coverage.  Just some basic obvious tests.

Every time the system was built we ran the regression test suite and all the results were checked.  If an error was found we fixed the bug and added an additional test to the test suite.  Sometimes changes that appeared to be completely unrelated caused a failure in some test.

After the first year I was preparing the release notes for the 2nd release.  We did not provide all the programs in the first release.  When I looked at the version control logs I found that we had fewer than a dozen user-submitted bug reports.  We caught many more during the port, especially when we ported to AIX, as the IBM compiler was much more strict about following the F77 standard.  Eventually the code was ported to 6 different Unix systems.  The last two, IRIX and Ultrix, were an afternoon lark.

The bug rate went down from the first release over the course of 6 years.  After a merger it continued in service for another 6-8 years without any support and was finally abandoned because the functionality was no longer needed.

As a consequence of that experience 27 years ago, I perform unit tests at frequent intervals during development and save them under version control.  As historically I have been doing scientific codes for DSP and such, the first thing I write is the data I/O, and I verify that what I write out is exactly what I read in.

Prior to the experience cited above I worked in a largish development team with a separate testing group.  So the developers did *no* unit testing at all.  It was very common that as soon as the code was compiled and run on a big endian machine it crashed because of a long/short mismatch on a function call.  Basic unit tests should be required of the developers.  If they are not, then quality cannot be tested into the code.

In all I have done support work on over 2 million lines of old code.  For FORTRAN I have a standard routine:

Invoke IMPLICIT NONE in all functions and subroutines.

Inspect the implicitly declared names in the compiler error messages for transposed characters in names, and fix them.

Place all COMMON blocks in a separate file and include it using the C preprocessor.  That has always turned up at least one or two bugs, as I check each block when I change to the #include.

Fix all compiler warnings.

In C I hunt down any invocations of gets(3c) and replace them.  I also test all pointers before a call to free(3c), or in any other instance where a null pointer would cause a crash.

While this does not fix all the bugs, it fixes a lot of them which have gone undetected for years.
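As a sketch of those two C fixes (the helper name and macro here are illustrative, not a standard API; note that modern C defines free(NULL) as a no-op, but some old libcs did crash on it):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* gets(3) cannot bound its write; fgets(3) can. A minimal drop-in: */
static char *read_line(char *buf, size_t n, FILE *fp)
{
    if (fgets(buf, (int)n, fp) == NULL)
        return NULL;
    buf[strcspn(buf, "\n")] = '\0';  /* strip the newline that gets() removed */
    return buf;
}

/* Guarded free: C89 onward defines free(NULL) as a no-op, but some old
 * libcs crashed on it, hence the explicit test. Nulling the pointer
 * afterwards also turns a use-after-free into a reproducible NULL deref. */
#define SAFE_FREE(p) do { if ((p) != NULL) { free(p); (p) = NULL; } } while (0)
```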

Test driven development is just the SW version of design for test.

For one library I wrote, I had a section at the start of each function which tested for an environment variable matching the function name.  If it was set, the string was used as the name of a file, and all the input parameters were written to that file.  At the end of the source file was a main(), guarded by an #ifdef TEST, that read that file and called the function.  This was very helpful dealing with GUI programs, as I could test a problem without having to duplicate all the mouse clicks needed to reach the suspect routine.  I used the same format for the unit tests I wrote during development.  This was invaluable in dealing with numerical codes which evaluated expressions with nearby singularities that could be encountered due to noise in the data.
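A stripped-down illustration of that capture-and-replay scheme in C; the function, its parameters, and the one-line-per-call file format are invented for the example:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical numerical routine instrumented in the style described
 * above: if an environment variable named after the function is set,
 * its value is used as a filename and the inputs are appended there. */
double filter_sample(double x, double gain)
{
    const char *dump = getenv("filter_sample");
    if (dump != NULL) {
        FILE *fp = fopen(dump, "a");
        if (fp != NULL) {
            fprintf(fp, "%.17g %.17g\n", x, gain);  /* full double precision */
            fclose(fp);
        }
    }
    return x * gain;   /* stand-in for the real computation */
}

#ifdef TEST
/* Replay harness: feed captured inputs back into the function without
 * repeating the GUI clicks that originally produced them. */
int main(int argc, char **argv)
{
    double x, gain;
    FILE *fp;
    if (argc < 2 || (fp = fopen(argv[1], "r")) == NULL)
        return 1;
    while (fscanf(fp, "%lf %lf", &x, &gain) == 2)
        printf("%.17g\n", filter_sample(x, gain));
    fclose(fp);
    return 0;
}
#endif
```

Compiling with -DTEST turns the source file into its own replay tool; without it, the file is just the library routine.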

 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19279
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Test Driven Development
« Reply #58 on: January 19, 2019, 06:07:40 pm »
As a consequence of that experience 27 years ago I perform unit tests at frequent intervals during development and save them under version control. 

It always astounds me that it took so long for mainstream software to catch onto that fundamental engineering concept.
It always astounds me that mainstream softies think it is a panacea and that "it works because it passed its unit tests".

Seriously; I've heard that more than once. The standard of the unit tests is usually crap compared to those required in hardware development.
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8605
  • Country: gb
Re: Test Driven Development
« Reply #59 on: January 19, 2019, 06:11:58 pm »

It always astounds me that it took so long for mainstream software to catch onto that fundamental engineering concept.
It always astounds me that mainstream softies think it is a panacea and that "it works because it passed its unit tests".

Seriously; I've heard that more than once. The standard of the unit tests is usually crap compared to those required in hardware development.
There are far too many software people who think it's OK when the only people developing the tests are the people developing the code, because they haven't really figured out WHY you test.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
Re: Test Driven Development
« Reply #60 on: January 19, 2019, 09:56:31 pm »
There are far too many software people who think its OK when the only people developing the tests are the people developing the code, because they haven't really figured out WHY you test.

I've never seen anything resembling a standard for unit tests.  I had one. It was very simple.  All *possible* inputs must be handled by either successful completion or an error code returned to the caller. And the unit test needed to supply *all* possible inputs.  I usually used awk scripts to generate the test files.

I've only dealt with three managed software development efforts, all internal oil-company developments.  The VAX FORTRAN port only had unit tests because the other contractor suggested we do it and I was so far ahead of schedule that I just did it.  The other projects only had "testing" as a check box on some Gantt chart.  The developers did none, and the "testers" knew nothing about what the software was supposed to do, so they just went through the motions.  The majority of my time was spent cleaning up the messes that other people had created.

My observation is that very few people are competent programmers.  Most scientists can't program, but that doesn't stop them.  With the sole exception cited above, none of the contractors I worked with were worth a damn.

A character in Shakespeare's Henry VI proposes to kill all the lawyers.   As far as I'm concerned we should kill the programmers first and then kill the lawyers.

The sad part is that how to do things properly is well known and has been extensively written about.  It's just that most software developers can't be bothered with learning their craft, just the language or framework du jour. I found it amusing when a poll found that over half of "tech workers" felt they were frauds.

I always signed my work and included my acm.org email address so that a future maintainer could contact me if needed.  I started doing that on my first contract job, the VAX port, because I wrote a 15,000-line library using lex and yacc, and I knew it was highly unlikely that anyone working on the code in the future would know lex and yacc.  So I put a comment at the top of the file explaining how to get in touch with me, with a standing offer to fix any problems they encountered without charge.  That library and another 15,000-line library I wrote *never* had a single bug found in them.  I did not spend a lot of time developing tests, though I did spend a good bit of time in the evenings contemplating how to create them over a glass of scotch or whatever I was drinking at the time.

I also spent a lot of my own time and money buying and reading books.  The computer science section of my library consumes 80 ft of shelving, and that doesn't include subject-specific books like linear-programming solvers.  I recently bought a couple of books on the implementation of the Method of Moments: one by Gibson and the classic work by Harrington, as well as the marvelous EM book by Jian-Ming Jin.  Why?  Well, I want to do some CEM work.  To be able to write test cases to verify NEC2 or OpenEMS, I need to know the details of the calculations so I can craft test cases that are likely to fail.

Sorry about the rant, but I spent many years fixing idiotic errors.

To return to the OP's question: every function or subroutine should have a unit test which tests *all* possible cases.  If that cannot be done, the code should be factored into pieces small enough that it can.  No work should be accepted as complete without that, and the first milestone in development should be writing the test suite.  When developers are given a task, they should be required to show that they know how to test the requirements they were given.  That will actually catch faulty requirements before the bugs are created.

Most of the programmers I've met blame flaws in the requirements on the customer. "Oh, you want wheels on  your car?  Why didn't you say so?  Tires, too?  We can do that, but we'll have to slip the schedule a bit."  It's the programmer's job to verify the requirements by creating the test suite.

Les Hatton has done a lot of work on software QA and written an excellent book on the subject, "Safer C".  I highly recommend it, but I am prejudiced, as I've known Les for over 30 years.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19279
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Test Driven Development
« Reply #61 on: January 19, 2019, 10:35:09 pm »

I've never seen anything resembling a standard for unit tests. I had one. It was very simple.  All *possible* inputs must be handled by either successful completion or an error code returned to the caller. And the unit test needed to supply *all* possible inputs.

I doubt that. Consider proving that you have the correct settings everywhere in the system such that a database's ACID properties are valid.

I once had to code something (in HiLo) that did x+y+640 in a single clock cycle. The nearest I got to a complete test was to have a LFSR driving a behavioural model and my structural model in parallel, and comparing the outputs.

Quote
The sad part is that how to do things properly is well known and has been extensively written about.  It's just that most software developers can't be bothered with learning their craft, just the language or framework du jour.

It is worse than that :(

The ones that have developed competence over time are rejected by HR droids as being "past their sell-by date".

The ones that use the framework du jour believe the salesmen when they are told that it does everything for you. "Oh, so you've found a solution to the Byzantine generals problem? Where did you publish it?"

Quote
Sorry about the rant, but I spent many years fixing idiotic errors.

You're not alone.

Quote
To return to the OP's question, every function or subroutine should have a unit test which tests *all* possible cases.  If that cannot be done,  the code should be factored into pieces small enough that it can.  No work should be accepted as complete without that.  And the first milestone in development should be writing the test suite.  When a developer is given a task they should be required to show that they know how to test the requirements that they were given.  That will actually catch faulty requirements before the bugs are created. 

Developing tests is more challenging than developing code. If they can't code, what hope do they have creating adequate tests?
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8605
  • Country: gb
Re: Test Driven Development
« Reply #62 on: January 19, 2019, 11:09:06 pm »
Developing tests is more challenging than developing code. If they can't code, what hope do they have creating adequate tests?
Developing good code is much harder than developing tests. However, most people develop poor code. Developing poor code with problematic corner cases is relatively easy, and developing tests which foresee where those troublesome corner cases are likely to be is hard. It usually requires the input of multiple people, looking at the problem in different ways.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19279
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Test Driven Development
« Reply #63 on: January 19, 2019, 11:57:34 pm »
Developing tests is more challenging than developing code. If they can't code, what hope do they have creating adequate tests?
Developing good code is much harder than developing tests. However, most people develop poor code.

I stand by my assertion.

In general, I see far more adequate code than adequate tests.

Where guaranteed operation is required, e.g. high availability and/or safety critical,  more time and effort is spent on the testing than the coding.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
Re: Test Driven Development
« Reply #64 on: January 20, 2019, 01:10:39 am »
If you're writing numerical codes, the tests are the hardest thing to do.  You've got to find a case where you know the answer analytically.  I have literally spent days hunting for those, poring over old mathematical physics texts for relevant solutions.

I repeat my statement:

If you cannot test *all* possible inputs to a routine, it needs to be broken down into smaller pieces.  Testing a full system of significant size is NP-hard.  You simply cannot do it.

The function of comprehensive unit tests is to make sure that other functions don't develop compensating errors that propagate through the system.  If that happens, the result is unrepairable.  Game over.

In the case of the parser I wrote I generated a file with every permutation of invalid syntax and made sure that the parser caught the error and issued an understandable error message.  No "bailing out near line one" stuff.

The most difficult function I ever wrote evaluated what is called "Gassmann's equation".  It has 3 terms, all of which have a denominator which approaches zero.  I spent 3-5 days figuring out how to do that.  I used the test suite I created to test commercial codes.  It blew all of them up.
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8605
  • Country: gb
Re: Test Driven Development
« Reply #65 on: January 20, 2019, 01:23:30 am »
Where guaranteed operation is required, e.g. high availability and/or safety critical,  more time and effort is spent on the testing than the coding.
High reliability developers usually put a LOT of resources into code review, because they know they will never test out all the corner case issues.
If you cannot test *all* possible inputs to a routine, it needs to be broken down into smaller pieces.  Testing a full system of significant size is NP-hard.  You simply cannot do it.
Nice idea, but when you go one layer up from those routines you still get back to things being untestable.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
Re: Test Driven Development
« Reply #66 on: January 20, 2019, 03:42:17 am »
If you cannot test *all* possible inputs to a routine, it needs to be broken down into smaller pieces.  Testing a full system of significant size is NP-hard.  You simply cannot do it.
Nice idea, but when you go one layer up from those routines you still get back to things being untestable.

The purpose of tests is to verify that a change has not *broken* something.  They are not there to prove correctness, except in the cases I cited: comparing a numerical solution to an analytic solution, or verifying that all possible inputs are handled properly.

As for my comment, I stated the limitations on what you *can* test and how you deal with them.  What you're arguing is that there is no point in testing because the general case is NP-hard.  N layers up may or may not be NP-hard; it depends entirely upon the problem and the structure of the solution.  Good programmers pay very close attention to complexity and control it as much as possible.

My experience is that with good unit tests and even a modest amount of system-level test coverage, not many bugs get by.  As for numerical problems, Forman Acton has written extensively on the subject and how to address it.  For problems such as A = B + C, where all three terms are fractions with denominators that approach zero, it is very difficult to get the computation correct.  As this describes a physically realizable problem, a solution exists, but implementing it in software in the presence of noise, quantization error, and round-off can be very difficult.

After I have a clear statement of the requirements (Do you want wheels and tires?) the first thing I do is examine the computational complexity.  If a problem is NP-hard I then negotiate a satisfactory approximation.

By training I am a scientist.  My sole computer related course was 1 hr of WATFIV.  So I typically know what the best approximation is, but I always explain the issue to the client and let them decide from the available choices.

The fundamental problem with most testing is that it presumes a million monkeys with typewriters can produce a Shakespeare play.  And the entire test program is based on that fallacy.
« Last Edit: January 20, 2019, 03:56:12 am by rhb »
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8605
  • Country: gb
Re: Test Driven Development
« Reply #67 on: January 20, 2019, 03:56:34 am »
As for my comment, I stated the limitations on what you *can* test and how you deal with it.  What you're arguing is there is no point in testing because the general case is NP-hard.  N layers up may or may not be NP-hard.  It depends entirely upon the problem and the structure of the solution.  Good programmers pay very close attention to complexity and control it as much as possible.
I never said don't test. I said your approach of comprehensive testing at the lowest level of routines has limitations as soon as you move up the stack of routines. You still have to do the most testing you realistically can.
 

Offline snarkysparky

  • Frequent Contributor
  • **
  • Posts: 414
  • Country: us
Re: Test Driven Development
« Reply #68 on: January 21, 2019, 01:57:25 pm »
Didn't I read somewhere that there is a way to prove software correctness other than exhaustive testing of every possible input sequence, deep enough to exercise the code through its maximum state depth?  I mean, how many iterations does a change in input take to fully propagate out of a system?

This makes complete testing impossible in most cases.  Consider a double-precision FFT: not just every valid floating-point value, but every combination of all possible floating-point values.

 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
Re: Test Driven Development
« Reply #69 on: January 21, 2019, 02:33:04 pm »
Program correctness is an instance of the halting problem, for which Alan Turing gave a proof.

https://www.cs.rochester.edu/~nelson/courses/csc_173/computability/undecidable.html

https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pd

The function of testing is not to prove something is correct.  It's to catch errors as early as possible so that the rest of the system doesn't develop dependencies on the error.  The BSD tape IOCTL code is the most notorious example of an error propagating into other programs.

Once a dependency on an error has propagated into a large amount of code it becomes very expensive to fix.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19279
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Test Driven Development
« Reply #70 on: January 21, 2019, 02:36:57 pm »
Didn't I read somewhere that there is a way to prove software correctness other than exhaustive testing of every possible input sequence, deep enough to exercise the code through its maximum state depth?  I mean, how many iterations does a change in input take to fully propagate out of a system?

This makes complete testing impossible in most cases.  Consider a double-precision FFT: not just every valid floating-point value, but every combination of all possible floating-point values.

You may have read it somewhere, but that doesn't make it correct.

Complete testing is indeed impossible for all but trivial examples: "state space explosion". That does not diminish the utility of testing.

You should start by understanding the differences between "designing" and "testing".

Once you have that under your belt, you can move on to understanding the difference between "inspection" and "formal proof".
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19279
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Test Driven Development
« Reply #71 on: January 21, 2019, 02:38:55 pm »
The function of testing is not to prove something is correct.

Many people believe that is the function of testing, and it isn't trivial to disabuse them of that notion. :(
 

Offline snarkysparky

  • Frequent Contributor
  • **
  • Posts: 414
  • Country: us
Re: Test Driven Development
« Reply #72 on: January 21, 2019, 03:10:54 pm »


You should start by understanding the differences between "designing" and "testing".

Once you have that under your belt, you can move on to understanding the difference between "inspection" and "formal proof".

I googled "designing vs testing" and nothing came up.  Is it a common topic?

I left out a "no" in my original post.  I meant:
"...somewhere that there is no way to prove software correctness."


It's of interest to me now because I am coding for some people who demand 'proof' of the absence of errors.

 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3476
  • Country: us
Re: Test Driven Development
« Reply #73 on: January 21, 2019, 05:15:45 pm »
The function of testing is not to prove something is correct.

Many people believe that is the function of testing, and it isn't trivial to disabuse them of that notion. :(

Yeah, every IT manager I ever dealt with.  Fortunately, I usually worked for a science manager and had the latitude to do whatever I thought appropriate. 

One time there was a new IT manager.  When I went around to introduce myself I told him, "I'm the guy from G&G (geology and geophysics) they send around with a baseball bat to bring IT into line." 

I got on very well with those people because I knew when the G&G types were making unreasonable demands and would call them out on it, as I was at least as well trained in G&G as they were *and* I also knew what was critical to IT.  My usual mode of operation was to meet with the manager's three senior staff.  We'd work out a solution and go to his office.  I'd explain the issue and the proposed solution.  He'd look at his staff, they'd nod their heads, and we were done.

I used to tell IT types, "The CEO sits at the right hand of God.  But the exploration manager *is* God.  Never tell God no.  He's got a million-dollar signature authority, and if you tell him no and he wants it, he'll bring in a contractor and you won't know about it until there's a half million lines of code for you to support.

"So you always say, 'Yes.  We can do that.  This approach will cost this much and this other approach will cost this.  I recommend choice A for the following reasons.'"
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14297
  • Country: fr
Re: Test Driven Development
« Reply #74 on: January 21, 2019, 05:30:11 pm »
The function of testing is not to prove something is correct.

Many people believe that is the function of testing, and it isn't trivial to disabuse them of that notion. :(

A test can show that a bug exists; it can't prove there are no bugs.  Proving that something can happen takes just one example, but no finite number of tries can prove that something never happens.  It's pretty much that simple.  Testing is an experimental process by nature.  That doesn't mean, of course, that we shouldn't do it.

Many people find it troublesome that there is no real way of testing for correctness, so they either go into denial or, the honest ones, try to cling to formal proof.
There have been an astounding number of attempts to design formal-proof systems for software and digital-design development over the past few decades.  A laudable effort, but I'm still not convinced they have really managed to make things any better in practice.
 

