I used the code collection method when programming for mini-computers and PCs. When I started with AI programming in the 90s I changed methodologies; it was all IP locked down, so everything I did was essentially original. Later, in the 00s, when programming uCs, I found I preferred the deconstruction methodology for those tasks as well.
I'm not totally clear on the distinction between "code collection" and "deconstruction", but it sounds reasonable. There's quite a lot of open "source" out there now, and it often makes sense to use someone else's code (if it works) rather than writing it yourself. It used to be build vs. buy, but now it's more like build vs. fork. Is that what you mean by "code collection"?
BTW, it sounds like we're contemporaries. My first job was writing Z80 assembly code and FORTRAN in the early 80s.
No, it doesn't imply borrowed code; all the code can be original. The distinction is in when, during the programming process, the functional blocks are implemented.
As I said, I may have the names wrong.
All programming problems are broken down into phases, starting at concept, ending at working code, and extended by revision.
The least used method is storyboarding, where the concept deconstruction diverts to code generation once the info and data flow charting is done. Fourth was the first attempt to bridge the gap; Delphi was a half step between this and the next methodology.
Code collection continues the process through process control and end-point identification. This methodology can achieve shorter development times through the use of existing libraries. The negative effect is that every architecture change can involve huge increases in conditional compile blocks. In a perfect scenario, the only code specifically created is the glue code to connect the parts together. Another restriction is that this method only gains its full benefit if the process components have already been generated by the previous methodology.
Deconstruction continues the process through scope charts, convention controls, and instance plans. At this stage, all that is left is to code each task block. Heavy use of code snippets increases programming speed; the alternative is to just type at over 180 wpm. I can only do 120 wpm, so I use code snippets; a lot of my peers are really fast typists. The advantages are no IP costs and no dependence on preexisting code. The disadvantage is longer pre-code planning.
I hope this explains what I was referring to, even if I got the methodology names wrong.
I was taught that keying in should be only 20% of the effort, and that if you had to recompile a third time, you had failed. Yes, this is old school; it no longer takes a week to compile a complete project. Obviously, code for a uC could take more than 30 min to compile. My last major computer-based project took a very fast PC 8 hours to compile. So this point was more significant then than it is now.
Almost all my coding these days follows the deconstruction methodology, not because it is better but because it is necessary. At work I get assigned the tasks others don't want to tackle: I get the project just before it goes back out the door as "unable to implement". The bean counters get the excuse to bid high, the job comes back, and I get the work.
It seems I find gluing libraries together boring, so I don't do it as a hobby. Five of my last seven DIY projects were done to prove that something could be done. It really annoys me when people say something can't be done when it can. I do it to prove to myself that they are wrong, not to prove it to them. The other projects came about because I could not find what I wanted for sale anywhere.