Edit: it must be CMSIS in combination with HAL... there is no Standard Peripheral Library for the L0 series, or is there?
I guess it is the usual case in such big companies that one department doesn't know or care what the other department is doing. Instead of doing the only sane thing, providing the SPL for every chip they make and then maybe building CubeMX on top of it, they develop the same things multiple times, because CubeMX comes with its own HAL mixed with the ugly code generator. That said, the code quality of STM32CubeIDE is improving with time, at least they fixed a lot of bugs, and the IDE is easy to install and use. If you throw enough money at something, even a bad architecture can be made to work.
I know you said you like STM32CubeIDE/Atollic TrueSTUDIO and ST's proprietary tools, but I think this post is an example of why people like David, who are already familiar with GCC+OpenOCD+CMake (possibly from different platforms), prefer rolling their own toolchain. I don't doubt your statement that STM32CubeIDE is improving with time and fixing a lot of bugs, and I definitely don't question that it's easier to install and get started with your first hello-world blinky, but it was Atollic TrueSTUDIO before, STM32CubeIDE today, and who knows what will come in the future. There's a lot of churn in vendor tools: MPLAB to the NetBeans-based MPLAB X, AVR Studio 4 to the .NET-based Atmel Studio 5/6/7, Kinetis Design Studio to MCUXpresso, etc. Not all of these changes are good, and not everyone wants to learn the quirks of a completely new set of tools for every platform they develop on. Luckily much of the embedded world is moving to Eclipse-based IDEs (although most of the ones I've mentioned so far aren't), but even then there is a *ton* of variation in how well they implemented/butchered their Eclipse plugin. Probably the worst implementation I can think of is old versions of TI's Code Composer Studio, which implemented their own, much worse version of CDT for Eclipse and basically didn't work for a lot of code completion and navigation. It also used to pop up a modal dialog on every compile that prevented you from doing anything, and compiles were single-threaded and extremely slow if you used their SYS/BIOS for their DSPs. I'm sure STM32CubeIDE is much better, but I'm sure if I ever used it there would still be quirks that annoyed me.
The GCC+OpenOCD+CMake combination has looked the same for the past couple of decades of ARM microcontroller development, and it's the same for the Cortex-M series, the Cortex-A series, the ARM7TDMI back in the day, and probably whatever future series ARM comes up with. The CMake part of it is probably the newest piece, but it has the advantage that you can use whatever IDE you want, including a fully stock version of Eclipse/CDT, Visual Studio Code as David used, or even no IDE at all. And you can use the same development tools across platforms. If you need to use radiation-hardened SPARC (LEON) and PowerPC for space applications, GCC+CMake are still the same; if you do desktop development (e.g. a GUI to interact with your embedded system, or even just running parts of your embedded code on the desktop for testing), GCC+CMake are still the same. Notably the warning and error messages from GCC are also still the same, and are generally much better than those from proprietary compilers -- especially proprietary embedded compilers. Proprietary compilers obviously do usually give better performance for many embedded architectures (e.g. accumulator-based architectures, or the 8051 or classic PIC), but at the expense of generally worse standards support and much worse warnings and error messages (more of an issue for C++ than C, but the worse warnings are an issue for C as well). I do not miss one bit having to use DIAB, Metrowerks, or even the Keil ARM compiler (v5).
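To make the "run parts of your embedded code on the desktop for testing" point a bit more concrete, here is a minimal sketch (purely hypothetical, not taken from the uSupply code): if hardware access goes through a thin interface, the same driver logic compiles with arm-none-eabi-gcc for the target and with plain desktop GCC for unit tests.

```cpp
// Hypothetical sketch (not from the uSupply code): the hardware-dependent part is
// hidden behind a tiny interface, so the same logic builds for target and host.
#include <cassert>

struct IGpioPin {                        // minimal hardware abstraction
    virtual void set(bool level) = 0;
    virtual ~IGpioPin() = default;
};

class HeartbeatLed {                     // shared logic, no hardware knowledge
public:
    explicit HeartbeatLed(IGpioPin& pin) : pin_(pin) {}
    void tick() { on_ = !on_; pin_.set(on_); }
private:
    IGpioPin& pin_;
    bool on_ = false;
};

// Desktop-only fake; the embedded build would instead provide an implementation
// that writes the real GPIO registers.
class FakePin : public IGpioPin {
public:
    void set(bool level) override { last = level; }
    bool last = false;
};

int main() {
    FakePin pin;
    HeartbeatLed led(pin);
    led.tick();
    assert(pin.last);                    // same logic, verified on the host
    return 0;
}
```

The embedded build links a register-backed implementation of the same interface instead of the fake, and both builds can come from the same CMake description.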
I think the firmware for the uSupply could have been developed multiple times faster with STM32CubeIDE and plain C.
I think what you mean to say is that you would be multiple times faster using STM32CubeIDE, since that's what you're familiar with. The C vs C++ debate for embedded is a separate issue and slightly religious. I haven't looked at any of the code (so maybe it's horrible, but probably David did a good job), but in general I think C++ is underused in embedded. I think drivers that consist of a bunch of functions operating on instance pointers to a struct (many vendor-supplied HAL libraries do this) would be better in C++. I think type-safe enum classes are almost always preferable to REALLY_LONG_DEFINED_CONSTANTS_WITH_HUNGARIAN_NAMING_BUT_VERY_SIMILAR_NAMES. I often prefer templates even when they aren't necessarily zero-cost (e.g. if the alternative involves a bunch of preprocessor macros, or a bunch of casts to and from void*). But the C vs C++ debate for embedded is somewhat religious, since there isn't technically anything you can do in C++ that you can't do in C... although by that same token the entire project (minus the USB parts) maybe could have fit in a few KB of flash on an 8-bit microcontroller if hand-coded in assembly to eke out every last bit of flash. And then you could also avoid the bloated vendor-provided HALs, although to be fair they do help with some of the more complicated peripherals and setup in modern ARM microcontrollers. But anyway, if you're using C++, you might as well use a modern compiler. There have been a lot of recent changes (mostly positive) to the standard, although I suppose the fact that there are changes at all is a disadvantage.
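To make the enum class point concrete, here is a rough sketch; all names are made up for illustration, this is not any vendor's actual HAL API and not David's code.

```cpp
// Rough illustration of enum class vs defined constants; names are hypothetical.
#include <cstdint>
#include <cstdio>

// The classic C pattern: only the naming convention tells these values apart.
#define UART_PARITY_NONE 0u
#define UART_STOPBITS_1  0u   // same numeric value, nothing stops you mixing them up

// The type-safe version: mixing them up is a compile error and the names stay short.
enum class Parity   : std::uint8_t { None, Even, Odd };
enum class StopBits : std::uint8_t { One, Two };

struct UartConfig {
    Parity   parity   = Parity::None;
    StopBits stopBits = StopBits::One;
};

// Stub standing in for a real driver init function.
void uart_init(const UartConfig& cfg) {
    std::printf("parity=%u stopbits=%u\n",
                static_cast<unsigned>(cfg.parity),
                static_cast<unsigned>(cfg.stopBits));
}

int main() {
    UartConfig cfg;
    cfg.parity = Parity::Even;
    // cfg.parity = StopBits::One;   // would not compile, unlike the #define version
    uart_init(cfg);
    return 0;
}
```

The scoped enums cost nothing at runtime, and the same idea extends to wrapping those struct-pointer-style C driver APIs in thin classes.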
With regard to westfw's comments about Ninja: he did also install Ninja to do the actual builds, but CMake can generate regular Makefiles as well. The main advantage of Ninja (especially with large codebases split across many folders, where "recursive" Makefiles would usually be used) is that it is much faster, especially in the "null" case where no files, or almost no files, have changed:
https://ninja-build.org/manual.html#_design_goals

The main advantages of CMake over hand-written Makefiles are being able to do multiple out-of-tree builds, correctness for parallel builds of large codebases (compared to hand-written recursive Makefiles, which are almost never correct), and again being able to use the same CMakeLists descriptions for parts of both the embedded build and, say, a desktop version. From a single code checkout you can have multiple out-of-tree builds, say a Debug and a Release version, plus the embedded and the desktop version. This is possible with hand-written Makefiles as well, but people rarely bother to support multiple out-of-tree builds. You can of course have multiple code checkouts with different build options for each, but there are advantages to having all the builds come from one checkout when you make changes. Also, hand-written Makefiles don't usually detect changes to the compiler options used in the Makefiles, whereas CMake automatically detects changes to the CMakeLists and rebuilds the files that need it.
Which is somewhat related to the next point: hand-written recursive Makefiles are almost never written correctly to support parallel builds. Sure, things might happen to work fine when initially building from a clean checkout, but things eventually go wrong when modifying multiple files and doing parallel builds. The usual solution in codebases I've seen is to just do a clean rebuild of everything occasionally. Ninja also usually builds faster.
Proprietary IDEs sometimes have their own build systems, but they are almost always worse in at least one of these areas: doing parallel builds, being able to easily revision-control the build configuration/project files, doing multiple out-of-tree builds with different defines, hard-to-find GUI dialogs for the various include/linker/etc. options, and general issues with trying to reinvent the wheel poorly.