Why did 70s/80s/90s computers only have 1 CPU if CPUs were so slow?
tom66:
One of the biggest issues with multiple CPUs is maintaining coherency between different processors.

Cache coherency is probably the biggest headache.  If two threads are operating on the same or nearby areas of memory, each processor's cache can end up with a different view of that memory.  This matters because most operating systems depend on the fundamental idea of a processor-width data type being atomic; for instance, x86-64 guarantees that aligned 64-bit loads and stores are atomic.  So you need a mechanism to broadcast invalidation messages across the bus to ensure that the other caches know to flush stale lines.
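To make that concrete, here is a minimal C11 sketch (assuming a compiler with <stdatomic.h>; not taken from any particular system) of relying on a processor-width store being atomic:

--- Code: ---
#include <stdatomic.h>
#include <stdint.h>

/* 64-bit value shared between threads running on different cores.
 * The _Atomic qualifier guarantees readers see either the old or the
 * new value as a whole, never a half-written ("torn") mix. */
static _Atomic uint64_t shared_value;

void writer(void)
{
    atomic_store(&shared_value, 0x1122334455667788ULL);
}

uint64_t reader(void)
{
    return atomic_load(&shared_value);
}
--- End code ---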

The other issue is that in the '90s, CPU dies were quite big, built on 300 nm+ processes.  Once an SRAM cache was on the die (which was far more important than multi-core functionality), there wasn't really enough space to fit additional cores and the buses to link those cores to each other.  A multi-core CPU also needs multiple caches, as each core needs a local cache to give acceptable performance, whereas a single-core CPU at a lower speed might get away with one larger cache.  Size is an issue especially for architectures like x86, which are functionally very complicated, with variable-length instructions and long pipelines.  It was really process nodes shrinking to 90 nm and below that put dual- and quad-core CPUs in consumer hands, and once things fell below 20 nm, 8- and 16-core CPUs became commonplace.  Improved architectural design (simultaneous multithreading) also lets those CPUs execute more than one thread per core, with certain limitations.

Parallel computing also requires changes to the operating system and different approaches to programming.  Programmers need to be familiar with the traps of multi-threading and with things like mutexes, semaphores, thread-safe data structures etc., and operating systems need schedulers which can cope well with multithreaded programs (especially ones with a lot of I/O -- historically Linux and Windows both sucked at this, but improvements in the last decade have been noticeable, like Linux's CFS scheduler.)
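For example, a mutex (plain POSIX threads here, just as an illustration) is what stops two threads from corrupting a shared variable with interleaved read-modify-write sequences:

--- Code: ---
#include <pthread.h>

/* Shared state and the mutex that protects it. */
static long balance;
static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

/* Safe to call concurrently from many threads: the lock ensures the
 * read-modify-write of 'balance' can't interleave and lose updates. */
void deposit(long amount)
{
    pthread_mutex_lock(&balance_lock);
    balance += amount;
    pthread_mutex_unlock(&balance_lock);
}
--- End code ---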
dmills:
As a teenager I wirewrapped a dual 68000 system.  It was effectively a NUMA box: each processor had its own memory space, which could be read or written from the other processor at the cost of forcing a bus request/bus grant cycle on the target.

Clock speed was IIRC 8MHz.
It worked, but then teenage me found out that I REALLY needed some locking primitives.

Would have been late 1980s or so.
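(The classic fix is a lock built on an atomic test-and-set -- the 68000 even has a TAS instruction for exactly this.  A rough modern C11 equivalent, purely to illustrate the idea, not the actual code from that box:)

--- Code: ---
#include <stdatomic.h>

/* Spinlock built on atomic test-and-set, the same primitive the
 * 68000's TAS instruction provides in hardware. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void lock_acquire(void)
{
    /* Atomically set the flag; if it was already set, another
     * processor holds the lock, so spin until it is released. */
    while (atomic_flag_test_and_set(&lock))
        ;
}

void lock_release(void)
{
    atomic_flag_clear(&lock);
}
--- End code ---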

Codex:
Some did. The Z80 systems I worked on had multiple Z80 CPUs: one to handle all the communications, one to manage the disk drives, another for analogue I/O, and one for the general control processes.  You could use Z80 bus controls like ~BUSRQ to take over the address and data bus (blocking), so each CPU could have access to memory and I/O.  Alternatively, we used dual-port memory, I/O ports and interrupts to allow each of the CPUs to run with almost no blocking. This was in the early 80s.
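The dual-port memory approach boils down to a mailbox protocol.  A tiny C sketch of the idea -- the address, struct layout and flag handshake are made up for illustration, not the original design:

--- Code: ---
#include <stdint.h>

/* Hypothetical mailbox sitting in the shared dual-port RAM.
 * One CPU writes a command and sets 'ready'; the other CPU
 * (typically woken by an interrupt) consumes it and clears the flag. */
struct mailbox {
    volatile uint8_t ready;      /* 0 = empty, 1 = message waiting */
    volatile uint8_t command;
    volatile uint8_t data[14];
};

#define MBOX ((struct mailbox *)0x8000)   /* assumed dual-port RAM address */

void send_command(uint8_t cmd)
{
    while (MBOX->ready)          /* wait for the other CPU to drain it */
        ;
    MBOX->command = cmd;
    MBOX->ready = 1;             /* other side polls this or gets an interrupt */
}
--- End code ---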
ejeffrey:
The question is backwards: the real question is why we have multi-core CPUs now.  Paralleling CPUs has a cost in terms of complexity and performance, and a *huge* cost in developing parallel applications to actually use them.  1 fast CPU is almost always better than 2 slower CPUs.

The 80s and 90s were full of startups trying to do something really clever, but most of them failed because, no matter what you did, you were going to get shown up by a new micro-architecture on a new fab process in 12-18 months.  That leaves a very narrow window to make back your development costs before your design is obsolete.

Multi-processor systems did exist, as everyone else here has pointed out, but in the mainstream market, people were busy adding things like caches, pipelining, multi-issue, vector units, out-of-order execution, branch prediction, and other improvements that give orders-of-magnitude performance gains for roughly the same hardware cost, instead of adding multiple processors that got you less than 2x the performance for a substantially higher cost -- well over double the cost for the processors themselves, although you didn't necessarily need to double the memory and IO.

What happened in the late 2000s and early 2010s was that the huge rush of micro-architectural improvements had mostly played out.  There were still improvements to be made, but a good feature now is one that gives a single-digit percentage increase on a handful of important workloads.  At the same time, process node improvements stopped producing significantly higher clock speeds, hitting a practical wall around 4-5 GHz.  Process node improvements did, however, keep allowing more transistors on a single die, and so the practical way to use them became multi-core CPUs.  For mainstream systems, multi-core CPUs avoid a lot of the problems of traditional SMP with multiple sockets: the cores can share a memory controller and L3 cache, so fewer wide/fast buses have to run off chip, and cache coherence communication between cores is fast and low-latency.  You don't need multiple 1000+ pin sockets, and motherboards need no special design to support multi-core configurations.
tooki:

--- Quote from: ejeffrey on March 29, 2023, 05:14:08 am ---The question is backwards: the real question is why we have multi-core CPUs now.  Paralleling CPUs has a cost in terms of complexity and performance, and a *huge* cost in developing parallel applications to actually use them.  1 fast CPU is almost always better than 2 slower CPUs.


--- End quote ---
Great explanation.

The only thing I'd say is that I do think that going from 1 to 2 CPUs almost always helps overall system performance, since one CPU can handle system housekeeping and other OS stuff, background tasks, etc., keeping them out of the way of your application, even if your application is single-threaded. (Fewer context switches.)

But from there, the marginal benefits of each additional CPU drop asymptotically.
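A quick back-of-the-envelope way to see those diminishing returns is Amdahl's law: if a fraction p of the work can be parallelised, the best-case speedup on n CPUs is 1 / ((1 - p) + p/n).  This little sketch (assuming a 90% parallel workload, just for illustration) shows how quickly the returns flatten out:

--- Code: ---
#include <stdio.h>

int main(void)
{
    const double p = 0.90;   /* assumed parallelisable fraction of the work */

    /* Amdahl's law: speedup on n CPUs = 1 / ((1 - p) + p / n).
     * With p = 0.90 this caps at 10x no matter how many CPUs you add. */
    for (int n = 1; n <= 16; n *= 2) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        printf("%2d CPUs: %.2fx\n", n, speedup);
    }
    return 0;
}
--- End code ---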

I remember this very clearly in the early 2000s when multi-CPU Macs finally made sense, since Mac OS X has real SMP support. Few applications back then were meaningfully multithreaded, but adding that second CPU massively improved system responsiveness.