The very first thing you do is make sure you understand the problem at hand. I cannot emphasize this enough; it is absolutely critical.
I myself like to observe the people performing the actual tasks, to understand the fundamental nature of the problem.
When people describe the problem they are having, they almost never describe the actual, underlying task or problem they're trying to solve, but instead have already chosen a path and want help with a barrier along that path.
The same principle applies at all levels of development. Even when you are writing actual code, and are worrying whether some function or subsystem is reliable or efficient enough, the true solution is usually found at the algorithmic level. That is, we humans tend to get "stuck" trying to optimize the approach we have already chosen, when we should instead take a step back, examine the greater context, and consider whether the approach itself is correct. (Getting stuck like that often leads to premature optimization: spending time on a detail that does not matter, or that should not even exist in the first place.)
I have found I get the best results with a mixed approach (similar to what snarkysparky mentioned above): on one hand, I sketch (often on paper, in text files, or as Inkscape/Dia diagrams) the overall structure, schematic diagrams, and the subsystems I'd need. On the other hand, I write small, separate unit-test programs/firmware snippets/Arduino sketches to determine how the "tricky" deep details of the hardware work; things like serial buses, DMA, timers and interrupts, PWM, et cetera. (On the hardware side, I'm just a butterfinger hobbyist, but I've done a lot of software development, and the same approach applies there; except that the "tricky" parts are things like privilege separation (processes, privileges), security details, interprocess communication, configuration methods, and so on.)
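To give an idea of what one of those throwaway test snippets can look like: the following is a minimal Arduino-style sketch, assuming a classic AVR board (Uno/ATmega328P; the register names are AVR-specific and the 1 Hz rate is arbitrary), whose only job is to verify that a Timer1 compare-match interrupt fires at the rate I think it does:

```cpp
// Throwaway test: does Timer1 in CTC mode really give me a 1 Hz interrupt?
volatile unsigned long isr_count = 0;

ISR(TIMER1_COMPA_vect) {
  ++isr_count;
  PINB = _BV(PB5);                              // Toggle pin 13 (LED) directly
}

void setup() {
  Serial.begin(115200);
  pinMode(LED_BUILTIN, OUTPUT);

  noInterrupts();
  TCCR1A = 0;
  TCCR1B = _BV(WGM12) | _BV(CS12) | _BV(CS10);  // CTC mode, /1024 prescaler
  OCR1A  = 15624;                               // 16 MHz / 1024 / 15625 = 1 Hz
  TIMSK1 = _BV(OCIE1A);                         // Enable compare-match A interrupt
  interrupts();
}

void loop() {
  noInterrupts();
  unsigned long n = isr_count;                  // Atomic snapshot of the counter
  interrupts();

  static unsigned long last = 0;
  if (n != last) {
    last = n;
    Serial.println(n);                          // Should tick up once per second
  }
}
```

Once a snippet like this behaves as expected, that particular tricky detail is no longer a risk in the actual design.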
For applications and appliances with nontrivial user interfaces –– i.e., more than an on/off button! –– I have learned to always simulate the user interface first, before the actual development begins. Some may disagree, but the fact is that this is the part through which humans interact with the thingamabob, and in many ways it is the most critical part.
This UX (user experience) part can be done either in a higher-level language (say, Python and Qt, on any OS), or in the target language, with integral unit tests to find out how the interface is best implemented on the target hardware –– things like what kind of data structures you need to describe the UI elements efficiently. I don't use Windows myself, and prefer working in Linux; there, creating an interface simulator in pure C or C++, or a mix of the two, is very easy. I've also used a cheap dev board in the Arduino environment and a display module, running a trivial "slave" firmware controlled via USB from the host computer, to test or simulate the actual look and feel of the user interface for my own gadget ideas. For example, I created
this Pro Micro clone (ATmega32U4) gamepad three years ago, which can use a 128×32 I²C OLED display to select the keypresses/events the buttons generate. I still haven't ordered the board or built it, but I did do some OLED and HID tests to satisfy myself that it would work if I chose to build it. That included figuring out how to do it all so that all the UI elements, including the menu structures, would be stored in Flash and not in the meager RAM available.
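The Flash-versus-RAM part is less mysterious than it sounds: on these AVRs it boils down to marking both the label strings and the menu tables themselves PROGMEM, and reading entries back with pgm_read_byte()/memcpy_P(). Here is a minimal sketch of the idea only; the labels, fields, and menu contents are made up for illustration and are not from my actual firmware:

```cpp
#include <avr/pgmspace.h>

// Menu label strings live in Flash, not SRAM.
static const char label_keymap[] PROGMEM = "Keymap";
static const char label_repeat[] PROGMEM = "Key repeat";
static const char label_back[]   PROGMEM = "Back";

// One menu entry: a label (pointer into Flash) plus an action/submenu id.
struct MenuEntry {
  const char *label;
  uint8_t     action;
};

// The entry table itself is also stored in Flash.
static const MenuEntry main_menu[] PROGMEM = {
  { label_keymap, 1 },
  { label_repeat, 2 },
  { label_back,   0 },
};

// Print a string that lives in Flash, one byte at a time.
static void print_label_P(const char *flash_str) {
  char c;
  while ((c = (char)pgm_read_byte(flash_str++)) != '\0')
    Serial.write(c);
}

void setup() {
  Serial.begin(115200);
  for (uint8_t i = 0; i < sizeof main_menu / sizeof main_menu[0]; i++) {
    MenuEntry entry;
    memcpy_P(&entry, &main_menu[i], sizeof entry);  // Copy one entry from Flash to RAM
    print_label_P(entry.label);
    Serial.println();
  }
}

void loop() { }
```

The OLED rendering and the actual HID actions are obviously missing here; the point is only that neither the strings nor the tables occupy any of the 2.5 KB of SRAM on the ATmega32U4.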
(As I said, I'm just a butterfinger hobbyist, and not an EE or designer at all. I am most interested in solving problems, and the problem that board (and the later, even cheaper CH551G board) was meant to solve was finding out how hard it would be to create an Arduino- or similarly free/open-source-reprogrammable gamepad with a small OLED display showing the currently active keymap, so that one could play not only native games but also the ubiquitous web games that often support only specific keyboard controls. Existing boards I've seen tend to use the small 6×6 mm tactile switches, but I much prefer the feel and robustness of the larger 12×12 mm switches with 3 mm stems and round or square caps. On the CH551G, the OLED display would be vertical, by the way.)
In one's early days, one often starts designing UIs in the traditional imperative fashion, as a program that poses questions to the user and proceeds based on the answers; but human user interfaces are much better implemented using an event-driven approach, modeling the entire interface as a finite state machine. (If you've ever created interactive web pages with JavaScript, you've already been introduced to the event-driven approach, since JavaScript is chock full of event handlers. Instead of you having a loop you control, the web browser calls specific event handler functions whenever those events occur.)
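To make that concrete, here is a minimal, made-up example of the pattern in plain C++. It runs on a desktop, in the spirit of simulating the interface first: the "button events" are just keys read from standard input (n = next, s = select, b = back, q = quit) and the "display" is printed text. On the target hardware, the same handle() logic would instead be fed events from button and timer interrupts, and draw() would update the actual display.

```cpp
// Minimal event-driven UI: a finite state machine driven by button events.
#include <cstdio>

enum class State { Home, Menu, Brightness };
enum class Event { Next, Select, Back };

struct Ui {
  State state = State::Home;
  int   item  = 0;              // Highlighted menu item
  int   level = 5;              // Pretend "brightness" setting

  void handle(Event ev) {       // The whole UI reacts to events here
    switch (state) {
      case State::Home:
        if (ev == Event::Select) state = State::Menu;
        break;
      case State::Menu:
        if      (ev == Event::Next)   item = (item + 1) % 2;
        else if (ev == Event::Select) state = (item == 0) ? State::Brightness : State::Home;
        else if (ev == Event::Back)   state = State::Home;
        break;
      case State::Brightness:
        if (ev == Event::Next) level = (level + 1) % 11;
        else                   state = State::Menu;   // Select or Back returns to the menu
        break;
    }
    draw();
  }

  void draw() const {           // "Rendering" is just text on the host
    switch (state) {
      case State::Home:       std::printf("[Home]  (s = enter menu)\n");                          break;
      case State::Menu:       std::printf("[Menu]  %s\n", item == 0 ? "> Brightness" : "> Exit"); break;
      case State::Brightness: std::printf("[Brightness]  %d\n", level);                           break;
    }
  }
};

int main() {
  Ui ui;
  ui.draw();
  for (int c; (c = std::getchar()) != EOF && c != 'q'; ) {
    if      (c == 'n') ui.handle(Event::Next);
    else if (c == 's') ui.handle(Event::Select);
    else if (c == 'b') ui.handle(Event::Back);
  }
  return 0;
}
```

Notice there is no "ask a question, wait for the answer" flow anywhere: the current state plus the incoming event fully determine what happens next, which is exactly what makes this kind of interface easy to extend and to test.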