Why did 70s/80s/90s computers only have 1 CPU if CPUs were so slow?
alm:

--- Quote from: Kleinstein on March 26, 2023, 09:11:23 pm ---We don't use PCs with multiple cores because it is a very efficient way, but more because most of the alternatives (higher clocks, wider words) have reached their limits. The SW side is still limiting in how multiple CPU cores can be used. A 16 core CPU may not be any faster (maybe even slower) than a single core CPU if the SW does not allow for parallel execution. The point is more that modern PCs go a bit overboard with just adding more cores that are rarely used.

--- End quote ---
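The quoted point about a 16-core CPU not being any faster is essentially Amdahl's law. A quick sketch with made-up numbers (the formula is the standard one; the fractions are purely illustrative, not from the quoted post):

# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction and n is the core count.
# The 0.50 and 0.95 values below are made up for illustration.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(speedup(0.50, 16))   # 50% parallel, 16 cores -> ~1.9x, not 16x
print(speedup(0.95, 16))   # even 95% parallel only gets ~9.1x

So unless nearly all of the work can run in parallel, piling on cores buys surprisingly little.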
What helps is that on modern computers there are usually multiple things running at any one time, unlike back in the '80s, when multi-tasking was the exception. Now your average computer is running a bunch of browser tabs, likely a virus scanner, maybe playing a video, downloading a data sheet, etc., all at the same time, so there is at least a bunch of processes that can be spread out over cores. And the heavy loads like matrix computations, simulations, games, etc. are generally optimized to use as many cores as possible efficiently.
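A minimal sketch of that kind of data-parallel workload, using Python's standard multiprocessing module (the workload and chunk sizes are made up for illustration):

# Spread an embarrassingly parallel job over all available cores.
from multiprocessing import Pool, cpu_count

def busy_work(n):
    # Stand-in for one chunk of a heavy job (matrix block, sim step, ...)
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * cpu_count()   # one chunk per core
    with Pool() as pool:                 # Pool defaults to cpu_count() workers
        results = pool.map(busy_work, chunks)
    print(cpu_count(), "cores,", len(results), "chunks done")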
EPAIII:
IIRC, around the time of the Apple II, I was trying to find a way to get a single one into the budget of the company I worked for. I wondered if multiple users could use it at the same time from separate terminals, which would have been a great selling point.

Aren't dreams nice?
DrGeoff:

--- Quote from: aqarwaen on March 25, 2023, 09:47:35 pm ---So my question is: why did 70s/80s/90s computers only have 1 CPU if CPUs were so slow? If I understand correctly, most computers only had 1 single main CPU.
For example, why wasn't it possible to use multiple Intel 4004s
instead of a single main CPU to make faster computers?
If money was not an issue, what would prevent 4 of the same CPUs running in a single system?

--- End quote ---

The CPUs weren't considered "slow" at the time, any more than today's processors are considered "slow".

coppercone2:

--- Quote from: Kleinstein on March 26, 2023, 09:11:23 pm ---In the early days I would consider the lack of suitable software, and the complications (which still exist) in using more than 1 core for many problems, as larger hurdles than sharing other resources. The other bottlenecks still existed, and this in part limited the useful clock speed. So the need for a much faster CPU was limited. If needed, there were often better alternatives to multiple CPUs to share the load.
More common was having a dedicated FPU and the first steps toward graphics acceleration.

We don't use PCs with multiple cores because it is a very efficient way, but more because most of the alternatives (higher clocks, wider words) have reached their limits. The SW side is still limiting in how multiple CPU cores can be used. A 16 core CPU may not be any faster (maybe even slower) than a single core CPU if the SW does not allow for parallel execution. The point is more that modern PCs go a bit overboard with just adding more cores that are rarely used.

--- End quote ---

I would say, given that the driving force behind computer development is corporations, that "have reached their limits" means parallel computing was found to be slightly cheaper to develop in the short term, based on financial analysis by the major players ;)

Like how four screws are not commercially viable compared to one screw and a plastic clip in that $500 ECU.

Just going by what I know. And it might not have enough synergy with neural networks; I think we just decided it's time to squeeze the mathematicians a little bit instead of all that touchy chemistry and EM physics. I've noticed there is just too much heralding of parallel computation; it starts to feel like a sales pitch. And for the last, what, 15 years? there has been this massive push to teach people how to code and push them into CS, from grade school even. Well, they don't know about making the hardware faster, but they can write code, so they will work on logical problems rather than physics problems... they certainly mustered quite a big workforce that is trained at solving those types of problems. Compare the number of people encouraged to learn about optimization algorithms vs. transistor EM physics/engineering. Lots of piggybacking of skill sets too: teach logic and it is supposedly applicable to numerous fields (for now).

Not that I disagree too much, because it's frustrating as hell to wonder "well, why can't I just put two of these together...", but I do wonder what other things we miss out on; it could be something really nice. Maybe less downplaying of cascade failure scenarios would be one result?
tooki:

--- Quote from: coppercone2 on March 28, 2023, 07:43:10 am ---I would say, given that the driving force behind computer development is corporations, that "have reached their limits" means parallel computing was found to be slightly cheaper to develop in the short term, based on financial analysis by the major players ;)

Like how four screws are not commercially viable compared to one screw and a plastic clip in that $500 ECU.

--- End quote ---
LOL no.

If you look at the history of computing, technologies tend to first emerge in extremely high-performance computing (HPC, that is, supercomputers and mainframes) and then trickle down. HPC is not nearly as price-sensitive as desktop hardware.

Parallel computing was, and is, done because all the other measures are at their technical limits. Clock speeds don’t scale cleanly. Word size doesn’t help with many things (I don’t think we will see 128-bit CPUs any time soon). Etc etc etc.
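A back-of-the-envelope illustration of why the clock-speed route stalled: dynamic power goes roughly as C·V²·f, and higher clocks generally need higher voltage, so power blows up much faster than performance. All numbers below are made up for illustration:

# Rough dynamic-power model: P ~ C * V^2 * f. Numbers are illustrative only.
def dynamic_power(c_farads, volts, hz):
    return c_farads * volts ** 2 * hz

base = dynamic_power(1e-9, 1.0, 3e9)   # ~3 GHz part at 1.0 V
fast = dynamic_power(1e-9, 1.3, 5e9)   # ~5 GHz part needing ~1.3 V
print(round(fast / base, 1), "x the power for", round(5 / 3, 2), "x the clock")

That prints roughly 2.8x the power for 1.67x the clock, which is why two cores at a modest clock beat one core pushed to its limit.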