why did 70s/80s/90s computers only have 1 cpu if cpus were so slow?
pcprogrammer:
There was also the Basis 108 that had a 6502 and a Z80 and if I recall correctly also had a 6809 extension card made for it.
Also the BBC Micro could take a second-processor unit, and there were probably more machines that offered this kind of multi-CPU setup to allow running different operating systems.
Sure, nothing like what there is now, and not intended to speed up processing, but available nonetheless.
coppercone2:
they had dual-port memory or whatever in some stuff, but the thing is:
1) parallel programming problems were, and still are, being solved
2) personal computers were seen as the solution to a particular problem (mostly replacing paperwork), and people saw that a single core was basically capable of that, since it was essentially replacing a typewriter and calculators
I would say most people don't realize the complex engineering heritage behind buses/protocols/datapath designs/etc. They look like fairly obvious digital circuits, but all that stuff had to be figured out, along with good interconnects and PCB technology: reliable, cheap, many-layer boards with internal vias. IMO people imagined the wire-wrap version and thought 'there is no fucking way'. Even PCB trace miniaturization is pretty complicated. Sure, you just etch it smaller, but the reliability you need from such fine pin spacings to make these systems portable? You need advanced knowledge of adhesives, silk screens, etc. to make that kind of design decision for something you want to be durable. Tons of pioneers, IMO.
If you wanted to implement that kind of density in 1970 on a large scale, people would just think "manufacturing problems" and that they'd go out of business if they offered a warranty longer than two weeks. Low-volume production for the air force, maybe, but they would have a legion of technicians inspecting and checking everything, and the throughput would probably be so pathetically low that for anything but national defense the 'efficiency' of the business would be beyond dismal. When you look at the advanced military semiconductor tech of the past, 'scared stupid' (rightfully so) comes to mind, because what they got for what it cost is abysmal; thankfully we figured out how to use some of it.
Like, if you've never done it, you just think 'etching foils', duh. But if you have, you know you need the right photoresist. If the awesome high-reliability, low-cost, good-shelf-life Dow Corning blue whatever hadn't been around for etching fine geometry, you would have been stuck fiddling with the old process for a long time. Same for developments in PCB manufacturing and adhesives.
Like, look at how dismal BGA reliability was for maybe 15 years after they started using it in consumer electronics, lol. It's still a problem, and that's just some mechanical BS. I feel like there is a ton of invisible, esoteric technology that was coincidentally developed in different fields that allows this stuff to be made now. It's still common to hear that a design has 'too many parts' and is viewed as unreliable for that reason.
SiliconWizard:
I remember having read some 80's articles on multiprocessing and it all seemed like breakthrough stuff at the time and nothing really practical outside of research centers and extremely expensive hardware.
Usually the key issue was quickly diminishing returns due to the bottlenecks of having to share resources. Along with much more complex software design.
That made it all impractical for personal computers, and even for professional computers outside of maybe "supercomputers".
As I and some others have said, you still had small computers with several CPUs, but each was usually dedicated to its own thing, so while they were still "multiprocessor" systems, that was much, much simpler. They often didn't even share main memory.
But several identical CPUs sharing the same resources in a typical multi-CPU architecture? That was rarely worth it except maybe in niche applications.
Say with a typical "consumer" CPU of the time, you'd be lucky if you could get about 1.5x the computing power of a single-CPU system with a dual-CPU one, all this with much higher cost and much more complex software. While getting 2x the power just by clocking at twice the frequency quickly became possible, for a much lower cost and complexity.
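The diminishing returns described here are captured by Amdahl's law: speedup is capped by the serial fraction of the workload. A minimal sketch (the 2/3 parallel fraction is an illustrative assumption chosen to match the rough 1.5x figure above, not a number from this thread):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# fraction of the workload that can run in parallel on n CPUs.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# If ~2/3 of the work parallelizes, two CPUs give only ~1.5x,
# and the serial third caps even a 16-CPU machine below 3x.
print(round(amdahl_speedup(2 / 3, 2), 2))   # 1.5
print(round(amdahl_speedup(2 / 3, 16), 2))  # 2.67
```

Which is why doubling the clock, when possible, beat doubling the CPUs: it speeds up the serial part too.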
Interestingly, now that many-core computing is the norm, things are going a bit in circles: recent CPUs mix different types of cores instead of using all identical ones, and dedicated accelerators are coming back.
tooki:
--- Quote from: SiliconWizard on March 26, 2023, 08:01:45 pm ---Usually the key issue was quickly diminishing returns due to the bottlenecks of having to share resources.
--- End quote ---
This is a really good point that bears repeating. It’s easy to forget how back then, the CPU was just one of multiple potential bottlenecks all vying for the honor of limiting overall system performance. The CPU was one, expansion bus throughput and memory bandwidth were others, but IO (including storage) and IO latency were huge bottlenecks back then (and in some ways remained so until SSDs took over from hard disks).
Remember when memory access was so slow that we had to set hard disk interleave factors to artificially slow down the throughput of hard disks? :p
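Interleaving spaced logically consecutive sectors apart on the track so the controller and host had time to digest one sector before the next came under the head. A toy sketch of the layout (the 8-sector track is illustrative; real MFM drives typically had 17 sectors per track):

```python
# Lay out n logical sectors on a track with interleave factor k:
# each logically consecutive sector goes k physical slots ahead,
# giving the system k-1 sector times to process the previous read.
def interleave_order(n_sectors: int, k: int) -> list:
    track = [None] * n_sectors
    pos = 0
    for logical in range(n_sectors):
        while track[pos] is not None:      # slot taken: slide to next free one
            pos = (pos + 1) % n_sectors
        track[pos] = logical
        pos = (pos + k) % n_sectors
    return track

print(interleave_order(8, 1))  # [0, 1, 2, 3, 4, 5, 6, 7] -- no interleave
print(interleave_order(8, 2))  # [0, 4, 1, 5, 2, 6, 3, 7] -- 2:1 interleave
```

With k:1 interleave, a full-track read takes roughly k revolutions instead of one, which is exactly the deliberate throughput sacrifice mentioned above.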
Kleinstein:
In the early days, I would consider the lack of suitable software, and the complications (which still exist) of using more than one core for many problems, to be larger hurdles than sharing other resources. The other bottlenecks still existed and partly limited the useful clock speed, so the need for a much faster CPU was limited. If more performance was needed, there were often better alternatives than multiple CPUs to share the load.
The trend was more toward having a dedicated FPU and the first steps toward graphics acceleration.
We don't use PCs with multiple cores because it is a very efficient way, but more because most of the alternatives (higher clocks, wider words) have reached their limits. The SW side still limits how multiple CPU cores can be used. A 16-core CPU may not be any faster (maybe even slower) than a single-core CPU if the SW does not allow for parallel execution. The point is more that modern PCs go a bit overboard, adding more cores that are rarely used.