I don't do commercial embedded development at all, only hobby projects on my own (or in tiny teams of no more than three people); but I do have quite a bit of general programming experience across different environments and contexts.
My approach has evolved into a mixed bottom-up + top-down one. I start by implementing and testing the key points I don't have enough experience in, as separate test programs or "test firmwares". (The most common environment for these currently is Arduino+Teensyduino on a Teensy 4.x; the test might be a measurement, a new algorithm or data structure, data generation, throughput or latency testing, or even duplicating the behaviour of a USB device I don't have, based on a USB data transfer dump.) This is the bottom-up approach.
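(To give a concrete flavour of such a test firmware, a minimal throughput/latency measurement sketch in the Arduino/Teensyduino style might look roughly like the following; the buffer size and the compute_checksum() stand-in are hypothetical, just something to time.)

    // Minimal throughput/latency test sketch (Arduino/Teensyduino style).
    // compute_checksum() is a hypothetical stand-in for whatever operation
    // or algorithm is actually being evaluated.
    #include <Arduino.h>

    static uint8_t buffer[4096];   // statically allocated test data

    static uint32_t compute_checksum(const uint8_t *data, size_t len) {
      uint32_t sum = 0;
      for (size_t i = 0; i < len; i++) {
        sum += data[i];
      }
      return sum;
    }

    void setup() {
      Serial.begin(115200);
      while (!Serial) { }          // wait for the USB serial connection
      for (size_t i = 0; i < sizeof buffer; i++) {
        buffer[i] = (uint8_t)i;    // deterministic test data
      }
    }

    void loop() {
      const uint32_t iterations = 1000;
      volatile uint32_t sink = 0;  // keeps the compiler from optimizing the loop away

      uint32_t start = micros();
      for (uint32_t i = 0; i < iterations; i++) {
        sink += compute_checksum(buffer, sizeof buffer);
      }
      uint32_t elapsed = micros() - start;

      Serial.print("Average per call: ");
      Serial.print((float)elapsed / iterations);
      Serial.println(" us");
      delay(1000);
    }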
I spend quite a lot of time thinking about the overall structure (and sketching on paper, as a cache or temporary scratch "file") and even the interfaces ("library" or module interfaces, function call models, what kind of data to pass as a function parameter, what to keep statically allocated in RAM, and so on). It is mostly about how to make it maintainable and robust, avoiding typical sources of glitches and errors. This is the top-down approach.
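(As an example of what such an interface sketch looks like at this stage, before any real implementation exists: all the names below are hypothetical, and the point is deciding what crosses the module boundary as a parameter and what stays statically allocated inside the module.)

    // sampler.h -- hypothetical module interface sketch.
    // All internal buffers live in the module's own statically allocated RAM;
    // the caller only ever sees the small call surface below.
    #pragma once
    #include <stdint.h>
    #include <stddef.h>

    struct sampler_config {
      uint32_t sample_rate_hz;   // configuration is passed in once, at init time
      uint16_t channel_mask;
    };

    // Returns false if the configuration is not supported by the hardware.
    bool sampler_init(const struct sampler_config *cfg);

    // Copies up to max_count samples into the caller's buffer;
    // returns the number of samples actually copied.
    size_t sampler_read(uint16_t *dest, size_t max_count);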
I do the above in parallel, because they feed each other. Sometimes my subconscious bubbles up an idea, often an algorithm or abstract data structure, that can neatly solve a crucial part of the task at hand, but may have hardware requirements or dependencies that need to be tested first. Sometimes an overall idea may require the use of an algorithm or hardware control approach I'm not yet familiar with, so I need to test it first. After testing, I might find it lacking.
I do write a lot of code I don't end up using in the final version, because for me, the above two methods tend to refine each other. (For example, the low-level tests on actual hardware may indicate that a specific programming approach –– say, interrupt- or event-driven, as opposed to imperative/sequential –– is required at some point, so the overall design has to accommodate that. It is also common for me to discover that my initial overall design can be simplified, because many "optional features" I originally considered aren't really needed at all.) I consider all of those "branches" useful information that I do not simply throw away, but document and keep in "related experiments and tests" folders. (Except for the paper sketches. If I ever sketch something that is truly useful, I end up redrawing and cleaning it up in Dia/Inkscape/Graphviz/a special script, saving the end result (and reworkable versions) in the project documentation.)
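(The interrupt-/event-driven shape I mean typically ends up looking something like this on the Arduino side; the pin number and the work being done are placeholders, but the structure, a minimal ISR setting a flag with the actual work in the main loop, is what the overall design then has to accommodate.)

    // Hypothetical event-driven skeleton (Arduino/Teensyduino style).
    #include <Arduino.h>

    static const int TRIGGER_PIN = 2;            // placeholder pin number

    // State shared between the ISR and the main loop must be volatile
    // (and anything larger than a flag needs protection against races).
    static volatile bool event_pending = false;

    static void on_trigger() {
      event_pending = true;                      // keep the ISR minimal
    }

    void setup() {
      pinMode(TRIGGER_PIN, INPUT_PULLUP);
      attachInterrupt(digitalPinToInterrupt(TRIGGER_PIN), on_trigger, FALLING);
    }

    void loop() {
      if (event_pending) {
        event_pending = false;
        // Do the actual (possibly slow) work here, outside interrupt context.
      }
      // ... other non-blocking work ...
    }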
I do think of it as throwing code out of the project, and I do so often; I just archive the reasons, along with example code showing why.
In a Programming sub-forum thread, the purpose of comments was recently discussed. To me, the source code (including comments and documentation) should be kept in version control, providing a history of the development of the project. However, there should also be a parallel tree of "related experiments and tests" and "musings" that documents why the current approaches were chosen, why some approaches (that might seem better suited at first glance) were rejected, and the related test code, experimental firmwares, and scripting tools. This secondary tree is not useful to the users; only to the project maintainers. In commercial/proprietary development, this secondary tree is kinda-sorta important for the development teams, and it might be best to only summarize the salient points in the build tree changelog. (Knowing what kind of alternatives the development team has considered and experimented with tells a lot about the development team, and would be very valuable information to anyone interested in employee poaching. Conversely, in an open-source development project, this secondary tree can be exposed publicly as a blog, giving potential employers a close look at the authors' problem-solving and design skills.)
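(In practice, the secondary tree can be as simple as a sibling directory next to the main sources; the names below are only an illustration of the kind of layout I mean.)

    project/
      src/                  -- the actual firmware sources, in version control
      doc/                  -- user-facing documentation
      experiments/          -- the secondary tree, for maintainers only
        usb-latency-test/   -- test firmware plus notes on why an approach was rejected
        adc-dma-capture/    -- measurements that justified the chosen design
        musings/            -- design notes, cleaned-up versions of paper sketches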
Just before the turn of the century, I did some design and programming work (mostly integration and polish) for a CD-ROM project involving a university artwork archive (using Macromedia Director), with a few student teams working on separate aspects of the entire project. Instead of telling them their ideas would not work in practice –– at that time, large animations ("sprites") would be clunky and low-FPS on most machines –– I showed examples of what it would actually look like, and some examples of alternate approaches that would work better technically. It affected the end result a lot, and most (but not all) understood that it was the tools, computers, and time we had available that limited the implementation of their "vision", not me.
(Some nontechnical people are just fixated on the idea that the innovation, the core idea, is 99% of the project, and implementation is just typing it up on a computer, something a monkey could do for bananas. Some also thought that because something had been implemented in a game, it should be just as easy to implement in the Macromedia Director environment – all tools are equal, aren't they? So it wasn't a conflict-free project either; the conflicts were just generally resolved satisfactorily.)
Essentially, I countered many technically problematic high-level ideas by showing low-level examples of how they would look in practice, and showed different low-level examples of things that would work well, to feed their high-level ideas and creativity. The end result was of course a compromise –– practical solutions always are! –– but I think it worked well, both in the sense that the end result was pretty good, and as an example of teamwork involving different levels of technical expertise.
This had a big impact on my attitude regarding development project teamwork. This mixed top-down/bottom-up approach, with counter-suggestions and test examples during the design phase, has simply worked better than anything else for me. I have not seen any development project produce a robust, effective, efficient design without doing any practical testing first. User interfaces need mock-ups, crucial hardware and algorithms need to be tested before being relied upon, and complex internal interactions need simulation (or mathematical proof) to verify that their behaviour is understood and acceptable before implementation; it's that simple.
All that said, I personally have always focused on the quality of the end product, rather than on meeting some arbitrary deadline or capping the resources spent (development time!) so that the end product is guaranteed to turn a profit. I just cannot do that. So, if you look at the design process from a commercial point of view, with the intent of maximising profit, you probably shouldn't put too much weight on my opinions or experience here.