I have a Ryzen 5600G (6 cores, 12 threads) and I installed a utility that constantly and quietly shows CPU usage on my desktop. There are some applications that make brief use of more than one thread, but for me those are quite rare. It is nice when a "make -j12" finishes in a few minutes instead of half an hour, but that sort of usage is rare for me.
I believe that is true for you, and it's probably true for most people.
However, it's not true of developers. Yeah, you might spend an hour editing code using a fraction of one CPU core, but then the difference between a few seconds and a few minutes to build and run your code is the difference between keeping your train of thought -- your "flow" -- and losing it. CI (continuous integration) can also use a whole bunch of cores, as can running thousands of independent unit tests.
And it's especially true of developers who are helping design an ISA. You think of a new instruction. You add it to the simulator and the compiler. You recompile the compiler to be able to generate code using the new instruction. Then you build all the libraries. Then the Linux kernel. Then all the basic packages for a distro (typically Buildroot or Yocto for speed, if you're doing this). It's a LOT of stuff to build to test how your new instruction works: how much difference it makes to code size, and how much difference it makes to speed (if your emulator is cycle-accurate, or if you build a hardware core for an FPGA).
Many of Apple's core customers are doing things that can use all the cores they can get: Photoshop, video and audio editing, compression.
Benchmarks show large speed increases for multi-core processors, but (unfortunately) single-thread performance is still very much the driving factor.
Right, which is why we have CPUs that might drop down close to 2 GHz with all 24 or 32 or 64 cores running flat out, and then burst to 6 GHz when just one or two cores are being used.
Even with applications that do support multithreading, it's rare to see CPU usage above 30% (which suggests only about 4 of the 12 threads are actually used). Hyperthreading started in 2002: https://en.wikipedia.org/wiki/Hyper-threading and now, 22 years later, we still have a long way to go before "mainstream" software developers start taking it seriously. I have, for example, a FreeCAD drawing that takes several minutes to load, and during that time CPU usage is a steady 8 percent (i.e. 100/12 ≈ 8, so one thread). It seems that programs are designed as a single thread, and some limited multithreading is attempted only after it is discovered that the application has become slow and sluggish.
Depends on the OS.
Apple introduced "Grand Central Dispatch" into Mac OS X, iOS, and its compilers in 2009 -- and incidentally it was added to FreeBSD the same year. Apple quite strongly encourages developers to use GCD.
GCD consists of breaking your program into individual processing steps, with inputs, outputs, and dependencies between them. The library then organises the processing steps and decides which ones can be run in parallel based on their dependencies and the number of CPU cores you have.
It is not unlike "make", but within a program.
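Here's a minimal sketch of that model using the C libdispatch API (the queue and group calls are real GCD functions; the "steps" are just placeholder puts() calls standing in for real work):

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void) {
        /* A concurrent queue: GCD decides how many submitted blocks
           actually run at once based on the CPU cores available. */
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* A group captures the dependency: the final step needs
           both independent steps to have finished first. */
        dispatch_group_t g = dispatch_group_create();

        dispatch_group_async(g, q, ^{ puts("step A"); }); /* independent */
        dispatch_group_async(g, q, ^{ puts("step B"); }); /* may run in parallel with A */

        /* Wait for A and B, then run the step that depends on them. */
        dispatch_group_wait(g, DISPATCH_TIME_FOREVER);
        puts("step C: combine A and B");

        return 0;
    }

On a Mac this builds with plain cc; on FreeBSD you'd install the libdispatch port and build with something like "cc -fblocks prog.c -ldispatch -lBlocksRuntime" (the exact flags depend on your setup).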
At the time GCD was introduced, most people had just 2 CPU cores (Core 2 Duo), but of course now 4, 8, 10, 12, or 16 are common.
This FreeBSD page has a simple explanation and example:
https://wiki.freebsd.org/GrandCentralDispatch