EEVblog Electronics Community Forum

Products => Computers => Topic started by: NiHaoMike on June 06, 2021, 08:00:38 pm

Title: The phasing out of 32 bit
Post by: NiHaoMike on June 06, 2021, 08:00:38 pm
Excluding embedded systems, that is.
https://www.youtube.com/watch?v=jlRAnO1GR0U
Although that video focuses on ARM, just how much extra complexity in a modern 64 bit x86 CPU goes towards making it compatible with 32 bit? How much would there be to gain by making it compatible with 32 bit only at the app level (as ARM did with some of their CPUs) and how much by removing 32 bit compatibility from the hardware and moving it to software emulation?
Title: Re: The phasing out of 32 bit
Post by: SilverSolder on June 07, 2021, 12:27:41 am

64 bit can double the memory requirements in some applications...   64 bit isn't a universal "good"...
Title: Re: The phasing out of 32 bit
Post by: NiHaoMike on June 07, 2021, 01:09:34 am
There's a mode called "x32" that solves that problem, while still retaining most of the advantages of 64 bit. I don't think it's popular because not that many apps get enough of a performance boost to make it worthwhile, especially with RAM being much cheaper than it was when software support for it was being developed.
https://en.wikipedia.org/wiki/X32_ABI
Title: Re: The phasing out of 32 bit
Post by: bson on June 07, 2021, 02:26:29 am
I don't think there's that much complexity to support legacy 32-bit in x64.  Register save and load at traps and faults, MMU lookups, and some arithmetic ops.  The big complexity is in software, for things like fetching system call parameters - same system call, but different size parameters depending on whether the caller is 32 or 64-bit (in Unix this made things like ioctl exceedingly painful to get right due to its arguments going to a driver), different trap and VM operations, different context in an interrupt, etc.  At Sun we made the kernel 64-bit with 32-bit process support.  We had zero support for 32-bit kernel components like drivers or file systems.  But even then making it work was not a small undertaking.  (This in the mid/late 90s.)  We made the kernel SMP and thread-hot around the same time.

Title: Re: The phasing out of 32 bit
Post by: David Hess on June 07, 2021, 02:34:37 am
Although that video focuses on ARM, just how much extra complexity in a modern 64 bit x86 CPU goes towards making it compatible with 32 bit? How much would there be to gain by making it compatible with 32 bit only at the app level (as ARM did with some of their CPUs) and how much by removing 32 bit compatibility from the hardware and moving it to software emulation?

One of the features of x86 which allowed it to succeed is backwards compatibility.

The extra complexity represents a significant verification issue but once that is crossed, the physical cost is low compared to other things which must be included.
Title: Re: The phasing out of 32 bit
Post by: NiHaoMike on June 07, 2021, 03:07:15 am
One of the features of x86 which allowed it to succeed is backwards compatibility.

The extra complexity represents a significant verification issue but once that is crossed, the physical cost is low compared to other things which must be included.
So move the legacy support into firmware, then it becomes a part of the firmware that only needs to be verified once per revision, which is far less often than the times the core is updated. Or move it into an Atom/Quark like core that could be repurposed for stuff like power management or audio DSP when the CPU is operating in 64 bit mode.

I wonder how long before some Spectre-like vulnerability is found that is only possible (or even merely made easier to exploit) because of the legacy support, thereby making the legacy support a security liability.
Title: Re: The phasing out of 32 bit
Post by: David Hess on June 07, 2021, 03:42:03 am
So move the legacy support into firmware, then it becomes a part of the firmware that only needs to be verified once per revision, which is far less often than the times the core is updated. Or move it into an Atom/Quark like core that could be repurposed for stuff like power management or audio DSP when the CPU is operating in 64 bit mode.

Intel, Transmeta, and DEC all tried  that and failed, and I think Apple's attempts hurt them more than they helped.  Maybe these were all implementation failures, but if every attempt has failed, then that argues that the concept is flawed.
Title: Re: The phasing out of 32 bit
Post by: NiHaoMike on June 07, 2021, 03:49:25 am
Intel, Transmeta, and DEC all tried  that and failed, and I think Apple's attempts hurt them more than they helped.  Maybe these were all implementation failures, but if every attempt has failed, then that argues that the concept is flawed.
Didn't Apple just cut out legacy support altogether, starting with removing support from software and not having it at all with newer internally designed hardware?
Title: Re: The phasing out of 32 bit
Post by: SilverSolder on June 07, 2021, 04:36:06 am

I guess backwards compatibility isn't important to a certain level of consumer, but it might matter for professional / business applications?
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 07, 2021, 04:55:34 am
So move the legacy support into firmware, then it becomes a part of the firmware that only needs to be verified once per revision, which is far less often than the times the core is updated. Or move it into an Atom/Quark like core that could be repurposed for stuff like power management or audio DSP when the CPU is operating in 64 bit mode.

Intel, Transmeta, and DEC all tried  that and failed, and I think Apple's attempts hurt them more than they helped.  Maybe these were all implementation failures, but if every attempt has failed, then that argues that the concept is flawed.

Apple has changed the CPU family in "Macintosh" computers three times now i.e. used four different ISAs: Motorola 68000, IBM PowerPC, Intel x86_64, and now ARM Aarch64. Actually, five, as there were a couple of 32 bit Core models at the start of the Intel era.

Each time they have for some years provided software emulators for the old ISA which have been pretty transparent.

For example, it was possible to run a PowerPC version of MacOS 9 on an Intel Mac up until Snow Leopard was replaced by Lion in July 2011 -- and you could run 68000 apps in that MacOS 9 using its own built-in emulator.
Title: Re: The phasing out of 32 bit
Post by: Berni on June 07, 2021, 05:51:49 am
I don't think supporting 32-bit really takes all that many extra transistors.

Even if you are a 64-bit chip you still need a way to move around 8-bit, 16-bit, and 32-bit words. You don't want an architecture that requires an 8-bit value in a memory structure to be padded with 56 extra zeroes to fit into 64 bits. Also, a 64-bit chip supporting 32-bit pointers makes sense, since 95% of processes running under a typical OS use significantly less than 4GB of memory while typically holding quite a few pointers, due to the prevalence of object-oriented programming that allocates most things dynamically and thus needs pointers to everything. So a well designed 64-bit architecture will end up including most of the abilities of a 32-bit architecture.

The x86 architecture, however, takes backwards compatibility to a point where it might be getting annoying for chip designers. Even a shiny new 12-core Intel i9, when coming out of reset, acts like a 16-bit 8086 processor. If it's fed 8086 machine code it will run it just fine, just like an IBM PC XT did in the 1980s. Only when the OS performs a special register dance does the chip switch to acting like a 32-bit Intel 386, at which point the CPU is fully binary compatible with all 386 machine code. Then, with another magic register dance ritual, the processor finally starts acting like a 64-bit CPU. I'm guessing at this point Intel just sticks a tiny 8086 into a few square microns of die space, uses it to run the startup baggage, then turns it off and forgets about it. But the 16-bit instructions from the 8086 are still valid on x64, since it still needs a way of moving 16-bit words; they simply added extra variations of those instructions that work on 32 and 64 bits (along with a truckload of other new instructions, as is usual for x86's confusingly giant mountain of instructions).

It's more about maintaining support for it: it costs extra engineering time to implement the backwards compatibility and even more engineering time to properly test that it works in all the weird cases old software might abuse. On-the-fly translation of 32-bit machine code to 64-bit machine code for the same architecture, on the other hand, is probably reasonably easy to do without a significant performance hit.
Title: Re: The phasing out of 32 bit
Post by: Kleinstein on June 07, 2021, 06:38:36 am
X86 backward compatibility even goes back to 8080 code for a large part. Another odd backward-compatibility point is the ominous A20 gate, which emulates an old hardware quirk of early PC implementations when the 20-bit address space overflows. At least it can be turned off - but it was needed to run old MS-DOS on the 386 and later.

How many extra transistors are needed depends on the ISA and how much the 64-bit ISA also supports subsets of the 64 bits. I would not worry so much about the extra transistors, as much of the CPU is FPU and cache anyway. The actual integer ALU is tiny. The problem is more that the extra support / decoding can add delays. A new, clean instruction set can also be more compact or faster to decode. 64-bit code also needs more memory and thus more memory bandwidth - so in some cases 32-bit code can be faster.

The main reason to go beyond 32 bits is that memory beyond 4 GB is getting practical. With word addressing, 32-bit addresses would raise the limit to 16 GB, which is still limited. For a while apps could still live with that, but it makes sense to plan ahead a little.
Title: Re: The phasing out of 32 bit
Post by: james_s on June 07, 2021, 07:23:16 am
Transistors are cheap, backward compatibility is paramount. It is the reason that "Wintel" PCs have absolutely dominated for 30+ years and continue to absolutely dominate the desktop/laptop market to this day. Something like 90% of all of the personal computers in the entire world are x86 running Windows, not because either x86 or Windows are particularly amazing but because they offer backward compatibility with an absolutely enormous library of software. Other innovative systems have come and gone, the BeBox was totally cool at the time but there was no software for it so it was a flop. Apple is the only other platform that is even a serious contender on the consumer desktop and they are a very, very distant second place.
Title: Re: The phasing out of 32 bit
Post by: PKTKS on June 07, 2021, 12:55:42 pm
I have read comments with quite interest...
But this discussion is oversimplified..

Given that since the 90s/00s all CPUs are not exactly 16, 32, or 64 bits -
they all have a messy combination of things (the 8086 A20 gate, 80x86 SIMD...)
and..

given those "extensions" instructions..

they just can now handle 128 bits.. 256 bits and even 512 bits w/latest AVX

This discussion is nothing but vapor, since anyone can write a 16-bit app
or a 32-bit app and it will run fine on a modern OS and hardware.

In particular AMD CPUs are quite well designed for that.
Seems mostly the ARM vaporware showing up  ::)

NV ARM takeover will certainly want things above the 128 lanes...
no competition on that raceway where they will have vertical IP.

Paul
Title: Re: The phasing out of 32 bit
Post by: NiHaoMike on June 07, 2021, 01:33:53 pm
I guess backwards compatibility isn't important to a certain level of consumer, but it might matter for professional / business applications?
If it's still supported through emulation, the old stuff will still work.
How many extra transistors are needed depends on the ISA and how much the 64-bit ISA also supports subsets of the 64 bits. I would not worry so much about the extra transistors, as much of the CPU is FPU and cache anyway. The actual integer ALU is tiny. The problem is more that the extra support / decoding can add delays. A new, clean instruction set can also be more compact or faster to decode. 64-bit code also needs more memory and thus more memory bandwidth - so in some cases 32-bit code can be faster.
My understanding is that "x32" is in fact the 64 bit instruction set with 32 bit pointers, thereby solving the memory use problem.
Title: Re: The phasing out of 32 bit
Post by: SilverSolder on June 07, 2021, 04:02:53 pm
I guess backwards compatibility isn't important to a certain level of consumer, but it might matter for professional / business applications?
If it's still supported through emulation, the old stuff will still work.
How many extra transistors are needed depends on the ISA and how much the 64-bit ISA also supports subsets of the 64 bits. I would not worry so much about the extra transistors, as much of the CPU is FPU and cache anyway. The actual integer ALU is tiny. The problem is more that the extra support / decoding can add delays. A new, clean instruction set can also be more compact or faster to decode. 64-bit code also needs more memory and thus more memory bandwidth - so in some cases 32-bit code can be faster.
My understanding is that "x32" is in fact the 64 bit instruction set with 32 bit pointers, thereby solving the memory use problem.

I guess you can do pretty much anything with a virtual machine...
Title: Re: The phasing out of 32 bit
Post by: nctnico on June 07, 2021, 05:27:37 pm
I don't think supporting 32-bit really takes all that many extra transistors.

Even if you are a 64-bit chip you still need a way to move around 8-bit, 16-bit, and 32-bit words. You don't want an architecture that requires an 8-bit value in a memory structure to be padded with 56 extra zeroes to fit into 64 bits.
If you look closer you'll see an integer (the most commonly used storage type in C software, which is the base of most software) is still 32 bits on most platforms. In the end the only difference between 32 bit and 64 bit is the memory space.
Title: Re: The phasing out of 32 bit
Post by: PKTKS on June 07, 2021, 05:57:27 pm
I don't think supporting 32-bit really takes all that many extra transistors.

Even if you are a 64-bit chip you still need a way to move around 8-bit, 16-bit, and 32-bit words. You don't want an architecture that requires an 8-bit value in a memory structure to be padded with 56 extra zeroes to fit into 64 bits.
If you look closer you'll see an integer (the most commonly used storage type in C software, which is the base of most software) is still 32 bits on most platforms. In the end the only difference between 32 bit and 64 bit is the memory space.

For that...  clever header #define and #include should be enough...
and are already enough

But mostly the IOMMU is the main piece of the puzzle..
and that goes to GPUs and these "modern" single-plug (like USB)
"fit all serve all" peripherals..

DMA is still the most complicated piece to bundle.

Paul
Title: Re: The phasing out of 32 bit
Post by: langwadt on June 07, 2021, 06:11:05 pm
I don't think supporting 32-bit really takes all that many extra transistors.

Even if you are a 64-bit chip you still need a way to move around 8-bit, 16-bit, and 32-bit words. You don't want an architecture that requires an 8-bit value in a memory structure to be padded with 56 extra zeroes to fit into 64 bits.
If you look closer you'll see an integer (the most commonly used storage type in C software, which is the base of most software) is still 32 bits on most platforms. In the end the only difference between 32 bit and 64 bit is the memory space.

and that's an issue for naughty code that assumes a pointer fits in an int

Title: Re: The phasing out of 32 bit
Post by: ejeffrey on June 07, 2021, 06:23:56 pm
So move the legacy support into firmware, then it becomes a part of the firmware that only needs to be verified once per revision, which is far less often than the times the core is updated. Or move it into an Atom/Quark like core that could be repurposed for stuff like power management or audio DSP when the CPU is operating in 64 bit mode.

Intel, Transmeta, and DEC all tried  that and failed, and I think Apple's attempts hurt them more than they helped.  Maybe these were all implementation failures, but if every attempt has failed, then that argues that the concept is flawed.

Transmeta was mostly sunk by emulation of ancient 16-bit code.  Their binary translation layer apparently worked well enough for 32-bit code, but Windows at the time had too much 16-bit code, which they had to emulate slowly in software.  They were also trying to virtualize the entire OS, including protected instructions and hardware access, which tend to be the things hardest to translate and require slow emulation.  Intel's x86 emulation on IA-64 was a problem because they are such radically different architectures.  x86 and amd64 are quite similar, so I expect binary translation would be much more successful.  Also, with modern OSes, if you get the kernel on board supporting the translation it should be even better.

I wouldn't really expect it to happen any time soon.  It is still a lot of work to replace something that isn't broken, but I don't think the performance or compatibility would be nearly the problems people had trying to do this 2 decades ago.
Title: Re: The phasing out of 32 bit
Post by: David Hess on June 07, 2021, 07:36:29 pm
I guess backwards compatibility isn't important to a certain level of consumer, but it might matter for professional / business applications?

It has less importance in embedded applications, which not coincidentally is where the various failed processors which dropped compatibility live on.

Arguably things have changed with managed applications and walled gardens like the iPhone for consumer use, which eventually leads back to a discussion about whether personal computers will survive at all.  The various companies selling walled platforms sure say they will not, but the RISC vendors said the same about Intel's x86 and Microsoft Windows, and look where they ended up.

ARM is now doing to Intel what Intel did to the RISC vendors with economy of scale pushing performance up from the low end, but I am not convinced that will be enough to displace x86 if ARM systems remain closed, or perhaps "curated".  And any advantage from a simpler ISA and simplicity from lack of support for backwards compatibility may not be enough.  That is a fancy way of asking, "where is my open ARM desktop replacement?"  And like the failed RISC vendors of the past, companies that do not make desktops, including Apple, say that I do not need one.  Well of course I do not need one if they are not making them; just ask them!

Apple has changed the CPU family in "Macintosh" computers three times now i.e. used four different ISAs: Motorola 68000, IBM PowerPC, Intel x86_64, and now ARM Aarch64. Actually, five, as there were a couple of 32 bit Core models at the start of the Intel era.

Each time they have for some years provided software emulators for the old ISA which have been pretty transparent.

For example, it was possible to run a PowerPC version of MacOS 9 on an Intel Mac up until Snow Leopard was replaced by Lion in July 2011 -- and you could run 68000 apps in that MacOS 9 using its own built-in emulator.

I think that explains why Apple is now primarily a maker of phones and consumer electronics, who just happens to also make some personal computers.

I wouldn't really expect it to happen any time soon.  It is still a lot of work to replace something that isn't broken, but I don't think the performance or compatibility would be nearly the problems people had trying to do this 2 decades ago.

The various processor manufacturers crushed by Intel's lower performance x86 always said, "just recompile!"  Of course you could run JAVA on anything, right?  RIGHT?
Title: Re: The phasing out of 32 bit
Post by: james_s on June 07, 2021, 10:38:27 pm
Arguably things have changed with managed applications and walled gardens like the iPhone for consumer use, which eventually leads back to a discussion about whether personal computers will survive at all.  The various companies selling walled platforms sure say they will not, but the RISC vendors said the same about Intel's x86 and Microsoft Windows, and look where they ended up.

Personal computers will survive into the foreseeable future. There are a lot of people out there whose needs are met by mobile devices, but those are the people who never really needed a PC in the first place; it was just the only way to get on the internet. Now they have other options that work for their use case, but millions of other people need a PC. Nobody is developing smartphone apps ON a smartphone, they use a PC. Mobile devices are fine for content consumption but somebody has to make all that content. The PC market is not growing like it once was, but that isn't because it's dead, it's because it has matured and there is far less reason to upgrade regularly than there once was; even a 10-year-old PC can run most modern software just fine. Imagine trying to use a 10-year-old PC in 1995 when multimedia was taking off.
Title: Re: The phasing out of 32 bit
Post by: NiHaoMike on June 08, 2021, 12:26:30 am
Transmeta was mostly sunk by emulation of ancient 16-bit code.  Their binary translation layer apparently worked well enough for 32-bit code, but Windows at the time had too much 16-bit code, which they had to emulate slowly in software.  They were also trying to virtualize the entire OS, including protected instructions and hardware access, which tend to be the things hardest to translate and require slow emulation.
So in other words, if they had come up with that when 2000 or XP was in mainstream use, things might have gone quite differently for them.
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 08, 2021, 01:53:18 am
I don't think supporting 32-bit really takes all that many extra transistors.

Even if you are a 64-bit chip you still need a way to move around 8-bit, 16-bit, and 32-bit words. You don't want an architecture that requires an 8-bit value in a memory structure to be padded with 56 extra zeroes to fit into 64 bits.
If you look closer you'll see an integer (the most commonly used storage type in C software, which is the base of most software) is still 32 bits on most platforms. In the end the only difference between 32 bit and 64 bit is the memory space.

and that's an issue for naughty code that assumes a pointer fits in an int

That already didn't work on either 8086 or 68000, 40+ years ago.

"long" is a better bet, and works pretty much everywhere except 64 bit Windows, where both int and long are 32 bit and for pointers you need "long long". Grrrr.

"intptr_t" or "uintptr_t" is the only correct way to do it. ("size_t" will usually work, but isn't strictly correct)
Title: Re: The phasing out of 32 bit
Post by: David Hess on June 08, 2021, 02:01:03 am
Personal computers will survive into the foreseeable future. There are a lot of people out there whose needs are met by mobile devices, but those are the people who never really needed a PC in the first place; it was just the only way to get on the internet. Now they have other options that work for their use case, but millions of other people need a PC. Nobody is developing smartphone apps ON a smartphone, they use a PC. Mobile devices are fine for content consumption but somebody has to make all that content. The PC market is not growing like it once was, but that isn't because it's dead, it's because it has matured and there is far less reason to upgrade regularly than there once was; even a 10-year-old PC can run most modern software just fine. Imagine trying to use a 10-year-old PC in 1995 when multimedia was taking off.

I think the PC will survive as well but for a different reason.  It existed before the advent of "consumer" personal computers, in the form of business computers and development systems for those who were interested.  Various CP/M systems come to mind.

What is less clear is if that market is large enough to support the development and production of the parts needed to build those systems, including CPUs, RAM, GPUs, etc.  These have been paid for by consumer demand for a long time now, leading to an economy of scale which will not exist in the future.  Microsoft will abandon it by that time but Linux will be well placed to take up the slack.

Transmeta was mostly sunk by emulation of ancient 16-bit code.  Their binary translation layer apparently worked well enough for 32-bit code, but Windows at the time had too much 16-bit code, which they had to emulate slowly in software.  They were also trying to virtualize the entire OS, including protected instructions and hardware access, which tend to be the things hardest to translate and require slow emulation.

So in other words, if they had come up with that when 2000 or XP was in mainstream use, things might have gone quite differently for them.

I do not remember the details but Linus Torvalds who worked there has quite a lot to say about the subject which you can find online through a search.
Title: Re: The phasing out of 32 bit
Post by: nigelwright7557 on June 08, 2021, 03:14:08 am

I think the PC will survive as well but for a different reason.  It existed before the advent of "consumer" personal computers, in the form of business computers and development systems for those who were interested.  Various CP/M systems come to mind.


PCs are very powerful but not everyone needs that power or cost.
Smaller, cheaper hardware is now available to do internet, email, and basic word processing at a much lower cost.

I am hooked on my 5 GHz desktop with a 32-inch monitor. I just hate the laptop touchpad, small screen, and slowness.

I remember the first PCs, which were terrible: little memory, little speed, amber monitor.
I have always craved a faster PC over the years but am now at a point where I am happy with my i7 8700.


Title: Re: The phasing out of 32 bit
Post by: David Hess on June 08, 2021, 04:03:16 am
I think the PC will survive as well but for a different reason.  It existed before the advent of "consumer" personal computers, in the form of business computers and development systems for those who were interested.  Various CP/M systems come to mind.

PCs are very powerful but not everyone needs that power or cost.
Smaller, cheaper hardware is now available to do internet, email, and basic word processing at a much lower cost.

I do not think it is processing power which primarily distinguishes the PC.  It is the human factors engineering and expandability, which are lacking in current "smaller cheaper hardware".

I could do internet, email, and word processing on a Raspberry Pi, at least if it had 16+ GB of RAM, but I could not attach enough monitors, storage, and peripherals.  And modern stylish laptops are right out because of the horrid human factors engineering which is used to distinguish them from traditional PCs, including PC-replacement laptops.

This is what convinces me that people are *not* doing serious work on these modern laptops by choice, especially the Apple ones.  I was in a computer store not too long ago testing various keyboards by touch typing on them.  I got the feeling that the salespeople had never seen someone touch type.  The keyboards were universally horrid.

I am hooked on my 5 GHz desktop with a 32-inch monitor.  I have always craved a faster PC over the years but am now at a point where I am happy with my i7 8700.

I was happy with my Phenom II 940 but had to upgrade to get more RAM.  8GB just did not cut it with Windows 10, or any browser.
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 08, 2021, 04:19:14 am
Apple has changed the CPU family in "Macintosh" computers three times now i.e. used four different ISAs: Motorola 68000, IBM PowerPC, Intel x86_64, and now ARM Aarch64. Actually, five, as there were a couple of 32 bit Core models at the start of the Intel era.

Each time they have for some years provided software emulators for the old ISA which have been pretty transparent.

For example, it was possible to run a PowerPC version of MacOS 9 on an Intel Mac up until Snow Leopard was replaced by Lion in July 2011 -- and you could run 68000 apps in that MacOS 9 using its own built-in emulator.

I think that explains why Apple is now primarily a maker of phones and consumer electronics, who just happens to also make some personal computers.

Last I checked, Apple is selling more Macs now than they ever have before.

iPhones are bigger for them, but Macintosh is not a small business.
Title: Re: The phasing out of 32 bit
Post by: NiHaoMike on June 08, 2021, 04:20:55 am
Personal computers will survive into the foreseeable future. There are a lot of people out there whose needs are met by mobile devices, but those are the people who never really needed a PC in the first place, it was just the only way to get on the internet. Now they have other options that work for their use case but millions of other people need a PC. Nobody is developing smartphone apps ON a smartphone, they use a PC. Mobile devices are fine for content consumption but somebody has to make all that content.
The latest high end smartphones and tablets support desktop mode when connected to an external display. They're more than capable of doing basic content creation tasks like text/image editing. Presumably, if there was demand for it, it could be possible to develop apps for the device using the device itself in that mode.
What is less clear is if that market is large enough to support the development and production of the parts needed to build those systems, including CPUs, RAM, GPUs, etc.  These have been paid for by consumer demand for a long time now, leading to an economy of scale which will not exist in the future.  Microsoft will abandon it by that time but Linux will be well placed to take up the slack.
GPUs are very much in demand. They might eventually be designed to connect to tablets rather than x86 PCs, or be installed in high-end smart TVs. CPUs and RAM are here to stay; what might change is the form factor of the box they go in.
I could do internet, email, and word processing on a Raspberry Pi, at least if it had 16+ GB of RAM, but I could not attach enough monitors, storage, and peripherals.
Not sure what kind of email and word processing you're doing that can't be done with a Pi Zero. Internet browsing I can understand given just how much RAM it takes to open up so many tabs.
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 08, 2021, 05:23:39 am
I think the PC will survive as well but for a different reason.  It existed before the advent of "consumer" personal computers, in the form of business computers and development systems for those who were interested.  Various CP/M systems come to mind.

PCs are very powerful but not everyone needs that power or cost.
Smaller, cheaper hardware is now available to do internet, email, and basic word processing at a much lower cost.

I do not think it is processing power which primarily distinguishes the PC.  It is the human factors engineering and expandability, which are lacking in current "smaller cheaper hardware".

I could do internet, email, and word processing on a Raspberry Pi, at least if it had 16+ GB of RAM, but I could not attach enough monitors, storage, and peripherals.

Funny you should say that, because my latest PC, which I'm putting into a case probably later today, is quad-core, 64-bit, and a little faster CPU-wise than a Raspberry Pi 3, but it has 16 GB of DDR4, a PCIe slot, and two M.2 slots. I've got a Samsung 500 GB 970 Evo Plus in the bigger M.2 slot and an old Sapphire R5 230 video card in the PCIe slot.

https://www.youtube.com/watch?v=3o411cQ7XG0 (https://www.youtube.com/watch?v=3o411cQ7XG0)
Title: Re: The phasing out of 32 bit
Post by: Berni on June 08, 2021, 05:34:29 am
The actual problem is that software requirements act like a gas: they expand to fill the available hardware of the time.

Sure, word processing is an easy task that any crappy PC from the last 20 years should handle with barely any effort, but go ahead and try to run Microsoft Word 2019 on a device with the processing power of a Raspberry Pi or an average smartphone. Even just starting up Word will likely take longer than a modern PC takes to boot into Windows; once you are in, expect to wait a few moments after every click whenever you use a more 'advanced' feature such as moving an embedded image a little bit to the right. If you insert a graph with many points, you'd better go get a coffee in the meantime, since even that takes a few seconds on the fastest x86 machines available today.

The same thing is happening with web browsers. The hardware running the browser has become significantly more powerful while browsers added many more features. Web developers take advantage of this and keep piling on stuff until the website just about still loads sort-of-quickly on the latest iPhone. The result is web pages that first contain 1 MB of HTML, which pulls in another 1 MB of CSS and multiple JavaScript files that can often total >5 MB; these scripts will often open additional connections, download >1 MB of JSON, and parse it in JavaScript to generate more HTML for the browser to parse... oh, and on top of that we probably also have to load 10 MB of images. All of this crap requires a lot of processing power just to render the web page. But then sometimes the scripts continue to run and do stuff in the background; for some websites this is so resource-intensive that opening about 20 to 50 tabs will overwhelm a modern PC (a powerful one even, not just a crappy little 10-year-old i3) to the point that other applications start feeling slow.

Smartphones have multiplied in processing power many times over. 30 years ago we would have needed a supercomputer consuming many kilowatts to equal the power of a single smartphone that runs for hours from a small battery. Yet as phones get even a bit older they start feeling too slow, driving people to keep using a full-on PC burning 100x the electrical power for serious work. If software hadn't become so inefficient, all that most people would need is a phone with a desktop docking station.
Title: Re: The phasing out of 32 bit
Post by: bson on June 08, 2021, 05:40:29 am
But the 16bit instructions from the 8086 are still valid on x64 since it still needs a way of moving 16bit words,
Well, it's not just about moving them around.  You will still have 16-bit integers, and will need to perform arithmetic on them, which means not impacting bits 16-31, setting the flags correctly according to the 16-bit result, and so on.  Arbitrarily aligned loads and stores on a wide bus. It does make the arithmetic units a bit more complex, but I'm not sure it really adds that much.  (Not sure if modern pipelined cores really have an ALU as such anymore since they're not microcoded.)  And since you have to do this you might as well support 8086.  The real pain with that era is the segment registers which frankly can go burn in a pit of flame somewhere.
Title: Re: The phasing out of 32 bit
Post by: Berni on June 08, 2021, 11:02:22 am
But the 16bit instructions from the 8086 are still valid on x64 since it still needs a way of moving 16bit words,
Well, it's not just about moving them around.  You will still have 16-bit integers, and will need to perform arithmetic on them, which means not impacting bits 16-31, setting the flags correctly according to the 16-bit result, and so on.  Arbitrarily aligned loads and stores on a wide bus. It does make the arithmetic units a bit more complex, but I'm not sure it really adds that much.  (Not sure if modern pipelined cores really have an ALU as such anymore since they're not microcoded.)  And since you have to do this you might as well support 8086.  The real pain with that era is the segment registers which frankly can go burn in a pit of flame somewhere.

Well, they already have a bus that feeds the appropriate register into the ALU; it's just a matter of cutting the bus up into 8-bit chunks and giving each chunk a few dedicated enable lines so that you can throw just 8 bits on there. Though the actual implementation is likely a lot more complex, since these things are highly pipelined and might execute more than one instruction per cycle using some duplicated logic.

My point is that just because a CPU is 32-bit does not mean it should only be able to move 32-bit words. Some of the early mainframes did operate with just one bit width (and that width was often something unusual like 36 bits), but all modern high-bit-count CPUs retain the ability to move smaller widths because not every variable needs 32 bits. For example, when working with ASCII text you only need 8 bits per character, so it makes sense to have an instruction that can pluck an 8-bit value from somewhere in memory, do some math on it in a register, and then write an 8-bit value back to memory. If you only had 32-bit instructions, this would mean some OR, AND, bit-shift, etc. operations to cut the 8-bit value out of a 32-bit one and then splice it back into a 32-bit value before writing back to memory. This is why modern CPUs will never throw away the ability to operate on smaller words, even if they are a 64-bit CPU built from scratch with no legacy baggage. In fact, the latest and greatest AVX-512 vector instructions on Intel CPUs have operations that can do math on 64 separate 8-bit values simultaneously. Supporting 8 bits is an advantage because you can fit more values into 512 bits; if they stuck with 32-bit only, they could fit just 16 values into those wide AVX registers. Same reason why ARM supports 8-bit and 16-bit operations even though the first ever ARM processor was already 32-bit (though it does have other 32-bit quirks, such as liking memory to be 32-bit aligned).
Title: Re: The phasing out of 32 bit
Post by: PKTKS on June 08, 2021, 11:47:48 am
But the 16bit instructions from the 8086 are still valid on x64 since it still needs a way of moving 16bit words,
Well, it's not just about moving them around.  You will still have 16-bit integers, and will need to perform arithmetic on them, which means not impacting bits 16-31, setting the flags correctly according to the 16-bit result, and so on.  Arbitrarily aligned loads and stores on a wide bus. It does make the arithmetic units a bit more complex, but I'm not sure it really adds that much.  (Not sure if modern pipelined cores really have an ALU as such anymore since they're not microcoded.)  And since you have to do this you might as well support 8086.  The real pain with that era is the segment registers which frankly can go burn in a pit of flame somewhere.

Well, they already have a bus that feeds the appropriate register into the ALU; it's just a matter of cutting the bus up into 8-bit chunks and giving each chunk a few dedicated enable lines so that you can throw just 8 bits on there. Though the actual implementation is likely a lot more complex, since these things are highly pipelined and might execute more than one instruction per cycle using some duplicated logic.

My point is that just because a CPU is 32-bit does not mean it should only be able to move 32-bit words. Some of the early mainframes did operate with just one bit width (and that width was often something unusual like 36 bits), but all modern high-bit-count CPUs retain the ability to move smaller widths because not every variable needs 32 bits. For example, when working with ASCII text you only need 8 bits per character, so it makes sense to have an instruction that can pluck an 8-bit value from somewhere in memory, do some math on it in a register, and then write an 8-bit value back to memory. If you only had 32-bit instructions, this would mean some OR, AND, bit-shift, etc. operations to cut the 8-bit value out of a 32-bit one and then splice it back into a 32-bit value before writing back to memory. This is why modern CPUs will never throw away the ability to operate on smaller words, even if they are a 64-bit CPU built from scratch with no legacy baggage. In fact, the latest and greatest AVX-512 vector instructions on Intel CPUs have operations that can do math on 64 separate 8-bit values simultaneously. Supporting 8 bits is an advantage because you can fit more values into 512 bits; if they stuck with 32-bit only, they could fit just 16 values into those wide AVX registers. Same reason why ARM supports 8-bit and 16-bit operations even though the first ever ARM processor was already 32-bit (though it does have other 32-bit quirks, such as liking memory to be 32-bit aligned).

That was very well put...

Bottom line: there is still the overriding problem
of feeding data into and out of the CPU.

Internally, CPUs today are free to handle chunks from 8 to 512 bits,
but the problem of how they are moved in and out persists.

It no longer matters whether CPU X, Y, or Z is 32-bit, 64-bit, or otherwise.
The ability to handle 8-bit chunks is a must, and
all CPUs now handle widths from 8 to 512 bits (AVX) just fine.

How that data is fed? Another question...

Paul
Title: Re: The phasing out of 32 bit
Post by: Berni on June 08, 2021, 01:19:52 pm
How that data is fed? Another question...

Paul

This brings in another can of worms, the inner workings of DRAM memory.

All modern computers use DRAM as main working memory because it's available in large capacities while still being fast. To make it even faster, modern DDR memory chips are heavily pipelined too: you send in a request to read some data, and about 10 clock cycles later the data starts coming out and keeps coming for as long as you want. This means random access to an area of DRAM is relatively slow, but once the data is flowing, it flows really fast. For this reason the CPU has a cache between itself and main memory. The L1/L2/L3 caches inside the CPU are designed to be incredibly fast for random accesses (being SRAM) while also having a wide data path to move data in and out quickly. The L1 cache in particular tends to be designed to be read/written at multiple locations in the same clock cycle. Not sure how the 8-bit transfer works; it might ignore the top 24 bits of a 32-bit bus, or there might be dedicated narrower buses feeding a narrow ALU, all depending on what the boffins at Intel found more appropriate. Later on, when the cache needs flushing into main memory, this happens as one big wide (64-, 128-, 256-bit...) transfer into DDR RAM.

This is also one of the reasons why optimizing x86 code to run faster is mostly about keeping RAM access confined to small areas at a time. This minimizes the movement of data between the lightning-fast cache and slower main memory. There is no point in having a fast CPU core if it's just waiting for data to arrive most of the time.
Title: Re: The phasing out of 32 bit
Post by: PKTKS on June 08, 2021, 01:30:02 pm
AFAIK they did not "solve" that either..

GPUs simply cannot access the CPU's internal caches (L1/L2/L3).

For that there are now two dedicated controllers:
- an MMU for the CPU
- an IOMMU for GPUs (and possibly other DMA-capable I/O)

IMHO that is why such wide paths no longer matter as much for the CPU
as they do for GPUs: the former
must deal with 8-bit chunks, the latter need not.

As a matter of fact, GPUs have become so demanding that
not only do they require all their own independent resources, but also
an enormous amount of power to handle it all,
and that power really should be separated from the CPU's supply;
they draw spikes orders of magnitude higher..  :-\
 

Paul
Title: Re: The phasing out of 32 bit
Post by: David Hess on June 08, 2021, 05:55:51 pm
My point is that because a CPU is 32bit does not mean that it should only be able to move 32bit words. Some of the early mainframes did have just operate with one bit width (as well as that bit width being something weird like 36 or 38 bit) but all of the modern high bit count CPUs retain the ability to move smaller bit widths because not every variable needs 32bit. For example when working with ASCII text you only need 8 bits per character, so it makes sense to have instruction that can pluck a 8bit value from somewhere in memory, do some math to it in a register and then write a 8bit value back to memory. If you only had 32bit instructions this would mean some OR,AND,Bitshift...etc operations to actually cut the 8bit value out of a 32bit one and then splice it back into a 32bit value before writing back to memory. This is why modern CPUs will never throw away the ability to operate with smaller words even if they are a 64bit CPU built from scratch with no legacy baggage. In fact the latest and greatest AVX512 vector instructions on Intel CPUs have operations that can do math on 64 separate 8bit values simultaniusly. Supporting 8bit is an advantage because you can fit more values into 512bits, if they stuck with 32bit only they could only fit 16 values into those wide AVX registers. Same reason why ARM supports 8bit and 16bit operations even tho the first ever ARM processor was already 32bit (but does have other 32bit quirks such as liking memory to be 32bit aligned)

One of the common features of the RISC upstarts which tried to compete with x86 was a limited set of operand widths.  They soon discovered that this was a poor tradeoff; for example, Alpha added instructions for 8- and 16-bit manipulations.

As a matter of fact, GPUs have become so demanding that
not only do they require all their own independent resources, but also
an enormous amount of power to handle it all,
and that power really should be separated from the CPU's supply;
they draw spikes orders of magnitude higher..  :-\

I think GPU power has gotten out of control.  I ended up with an RX570 on my newest workstation partially because anything newer required so much more power, and even so, I set the power limit in the driver to minimum, which is a nice feature.
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 08, 2021, 06:42:54 pm
This is why modern CPUs will never throw away the ability to operate on smaller words, even if they are a 64-bit CPU built from scratch with no legacy baggage.

You must have missed the DEC Alpha, the fastest CPU on earth in the early 90s and one of the first 64 bit microprocessors (the other was MIPS).

The 21064 (released 1992), 21064A, 21066, 21068, 21066A, 21068A, and 21164 did not have any byte instructions.

The 21164A (EV56) introduced in 1996 added instructions for 8 and 16 bit data types: LDBU, LDWU, SEXTB, SEXTW, STB, STW. Arithmetic is still only on 32 or 64 bit data, and 32 bit results are sign-extended into the 64 bit register (as RISC-V does today).

The addition of the 8 and 16 bit instructions were primarily for code size reasons, not performance.

Note that there was still no ability to load a signed byte directly. It had to be loaded as unsigned and then explicitly sign-extended afterwards.
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 08, 2021, 07:57:46 pm
Would you use a CPU that has
- 15 general purpose integer registers, 32bit
- 8 address registers, 64bit (they can only used in { load/store, stack, jump, branch } instructions)
- 8 fixed point registers, 32bit (internally the fixed-point engine uses more bits during computations)

Or would you throw it away?  :o

It doesn't exist as a chip. It's just a piece of paper, written 50 years ago to pass a university examination or something. It is the work of a senior engineer I met on a business trip. There is no pipeline, no branch prediction, no cache, etc., but I am seriously impressed. It describes a hypothetical CPU that nobody has ever implemented, and it looks so modern even though it was conceived 50 years ago, and I imagine the young student who wrote it didn't have access to any HDL tool. 50 years ago ... when people were dreaming about RISC and playing with 8-bit CPUs.

What have we learned since then? Let's check it out ...

... with MIPS-IV everything has been 64-bit since 1996(1), now the MIPS family has been merged into RISC-V, but yet you can find some recent toolchains where "long" still means 64 bit.

Really, why does sizeof(long) need to return 8 bytes?!? Why don't people fix things, and once fixed, not touch them anymore?  :-//


(1) 64bit addresses, 40-bit physical address and a 44-bit virtual address
(2) November 1996, several papers talking about 200-MHz superscalar MIPS 64bit microprocessor
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 08, 2021, 11:55:18 pm
What do you suggest sizeof(long) should be?

It's hard to know how impressed to be about this machine without seeing the actual instructions.

50 years ago is 10 years before the term "RISC" and the projects at Berkeley and Stanford. It's five years before IBM's Project 801. The only computer close to RISC principles was 1964's CDC6600, which had eight 60 bit data registers X0-X7 (for both integer and FP operations), eight 18 bit address registers A0-A7, and eight 18 bit increment registers B0-B7.

As it was word-addressed, 18 bit addresses were equivalent to about 21 bit addresses on a byte-addressed machine. That was perfectly adequate as not even government nuclear weapons agencies could afford a megabyte or more of fast RAM in those days.

64 bit addresses would just be totally ludicrous in 1971. What would you do with them? They would be a total waste of transistors and memory to store the pointers. People complain enough about them now, when they actually *have* more than 4 GB of RAM in the supercomputer in their pocket.
Title: Re: The phasing out of 32 bit
Post by: NiHaoMike on June 09, 2021, 04:30:19 am
I don't think the question is about removing the capability of the CPU to natively compute values less than 64 bits, but rather just removing or minimizing the ability to natively execute "legacy" code.

I'm under the impression that 32-bit ARM is a far simpler instruction set than 32-bit x86, yet ARM still thought it worthwhile to remove support to streamline the design. In that case, wouldn't there be an even bigger case for removing 32-bit instruction support from x86, or at least moving it aside into a small core or firmware so that it doesn't add complexity to the speed-critical parts?
Title: Re: The phasing out of 32 bit
Post by: Berni on June 09, 2021, 05:45:55 am
This is why modern CPUs will never throw away the ability to operate on smaller words, even if they are a 64-bit CPU built from scratch with no legacy baggage.

You must have missed the DEC Alpha, the fastest CPU on earth in the early 90s and one of the first 64 bit microprocessors (the other was MIPS).

The 21064 (released 1992), 21064A, 21066, 21068, 21066A, 21068A, and 21164 did not have any byte instructions.

The 21164A (EV56) introduced in 1996 added instructions for 8 and 16 bit data types: LDBU, LDWU, SEXTB, SEXTW, STB, STW. Arithmetic is still only on 32 or 64 bit data, and 32 bit results are sign-extended into the 64 bit register (as RISC-V does today).

The addition of the 8 and 16 bit instructions were primarily for code size reasons, not performance.

Note that there was still no ability to load a signed byte directly. It had to be loaded as unsigned and then explicitly sign-extended afterwards.

Yes, there are rare exceptions, but it did soon gain the ability to move smaller data types around, because it makes sense.

It is a fairly easy feature to add to a CPU, since you just need a few extra byte-enable signals to mask off the bytes you don't want to work with. The actual memory access is typically still full width, hence no I/O performance improvement, but it is convenient for software because it doesn't have to worry about picking up junk from the higher bits or padding everything appropriately. Saving some bit-masking and bit-shifting instructions also means fewer instruction cycles to do the job. There's no need to have smaller-width versions of every operation, as long as a small word can be easily plonked into a register with appropriate padding.

In the same way, 64-bit ARM processors retain all the ability to work with smaller data types, just as the step from x86 to x64 did. The registers get wider, wider versions of instructions get added, and the occasion tends to be used to add a few other CPU features here and there. Since the 32-bit set is still buried inside the new 64-bit set, running a 32-bit app on a 64-bit OS on a 64-bit ARM chip is possible by just handling a few edge cases rather than translating all of the machine code. What did get removed is the Thumb instruction set, and this is what breaks 32-bit compatibility. Those instructions do need to be translated before executing on a 64-bit chip, but Thumb is mostly just a more compact form of the regular A32 set, so it should mostly translate 1:1 onto A32 instructions. I would imagine having to support A32 and Thumb at the same time adds a good bit of fat to the CPU (especially since you can jump between them on the fly with zero latency or performance hit), so it is understandable that ARM would want to get rid of this legacy baggage, but it does mean breaking all 32-bit software that uses Thumb.
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 09, 2021, 07:46:06 am
What do you suggest sizeof(long) should be?


on my MIPS32R2 (32 bit CPU)
sizeof(char) 1 byte
sizeof(short) 2 byte
sizeof(int) 4 byte
sizeof(long) 4 byte
sizeof(long long) 8 byte

on my MIPS-IV (64 bit CPU)
sizeof(char) 1 byte
sizeof(short) 2 byte
sizeof(int) 4 byte
sizeof(long) 8 byte <------------------------ does it make sense?!? why is it equal to long long?!?
sizeof(long long) 8 byte
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 09, 2021, 08:15:05 am
50 years ago is 10 years before the term "RISC" and the projects at Berkeley and Stanford. It's five years before IBM's Project 801

Indeed! He was not talking about "RISC" even if he was *somehow* talking about a RISCish thing  :D

"RISC" is a term he had never even heard when he wrote his thesis in 1972, most of his colleagues were enthusiast for the "6800" released by Motorola in 1974; 2021 - 1972 = 49 years ago (~50) he was imagining a CPU with a "RISCish" register-set, and , although it was a primitive structure, he imagined even a structure where all the I/O was reserved for load / store instructions, that's the reason why he reserved  8 registers as "address-only-registers".

Not too bad, is it? He looked several years ahead, and that is even the same idea we then saw when Motorola released the 68K with D-registers and A-registers in 1979. Again ... not RISC, but I imagine his teachers and colleagues must have thought his "weird thesis" was just a useless toy.

How many times have I seen this repeating during my life?

"Computers will never fit a desk"
"nobody will ever buy a computer"
"RISC? what nonsense, it won't work"

What a shame IBM didn't hire that guy, and guys like him; it took more time to reach modern RISC, with people continually taking two steps forward and two steps back. That's the point of my previous post.
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 09, 2021, 08:23:03 am
64 bit addresses would just be totally ludicrous in 1971. What would you do with them? They would be a total waste of transistors and memory to store the pointers. People complain enough about them now, when they actually *have* more than 4 GB of RAM in the supercomputer in their pocket.

Sure, probably this was a weak point in his thesis. Some of us are visionary leaders; others are simply young people who plan the future with imagination rather than wisdom.

He said "why not 64bit? I don't have to implement anything, it's like a tale by Philip dick when he describes things happening on Proxima, he doesn't have to build any rocket to reach a far star out of the solar system"

Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 09, 2021, 10:09:45 am
Since the 32-bit set is still buried inside the new 64-bit set, running a 32-bit app on a 64-bit OS on a 64-bit ARM chip is possible by just handling a few edge cases rather than translating all of the machine code. What did get removed is the Thumb instruction set, and this is what breaks 32-bit compatibility.

No that's not correct.

Modern AArch64-only machines such as Apple's chips since the A11 (iPhone 8), Cavium ThunderX, Fujitsu A64FX, and now ARM's new Cortex-A510 and Cortex-X2 are all 64-bit ISA *only*.

The traditional fixed-length 32-bit ARM ISA is *not* contained within the (also fixed-length, 32-bit encoded) 64-bit ISA. They are completely different ISAs: completely different instruction encodings, completely different instructions. Several very important things in the 32-bit ISA do not exist at all in the 64-bit ISA. For example, every instruction in the 32-bit ISA has 4 bits for instruction-by-instruction predication; the 64-bit ISA does not have predication, and the only similar thing it has is conditional select. The 32-bit ISA has instructions to store and load (or push and pop) multiple registers: the block data transfer instructions have 16 bits specifying, for each of the 16 registers, whether to include it, and any subset or the whole lot can be transferred in one instruction. The 64-bit ISA has nothing like that; the most it has is storing and loading a pair of registers.


I do think it is inexplicable that ARM didn't include any analogue of Thumb2 in the 64-bit ISA. This omission gives it quite bad code density, very similar to x86_64, which was the known competition at the time it was being designed. In contrast, riscv64 does have shorter 16-bit opcode versions of the most common instructions and, as a result, has by quite a margin the most compact code of all 64-bit ISAs.
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 09, 2021, 10:14:05 am
What do you suggest sizeof(long) should be?


on my MIPS32R2 (32 bit CPU)
sizeof(char) 1 byte
sizeof(short) 2 byte
sizeof(int) 4 byte
sizeof(long) 4 byte
sizeof(long long) 8 byte

on my MIPS-IV (64 bit CPU)
sizeof(char) 1 byte
sizeof(short) 2 byte
sizeof(int) 4 byte
sizeof(long) 8 byte <------------------------ does it make sense?!? why is it equal to long long?!?
sizeof(long long) 8 byte

Because before the development and standardisation of intptr_t and size_t, "long" was the best and most portable approximation you had to an integer type that could contain a pointer.

The proper alternative before intptr_t was of course to make up your own typedef, and have a bunch of #ifdef to figure out what machine you were on and define it appropriately. But not all code that you might want to port was so carefully written.
Title: Re: The phasing out of 32 bit
Post by: Berni on June 09, 2021, 04:40:05 pm
Ah, sorry for the mistake then. I did see AArch64 being called a "64-bit extension" of the AArch32 instruction set, so I assumed it was much like the extensions in x86, where extra instructions are only added, not removed.

But as for Thumb, is it still relevant on modern chips? Its main purpose was to shrink instruction size to save memory and memory bandwidth back when RAM speed fell behind and presented a bottleneck to ever-faster CPUs. These days we have a lot of fast cache and a lot of memory to store code, so perhaps code density is no longer as big of an issue. Maybe a bit of extra memory bandwidth usage was worth the transistor-count saving from dropping Thumb.
Title: Re: The phasing out of 32 bit
Post by: magic on June 09, 2021, 09:46:41 pm
I'm under the impression that 32-bit ARM is a far simpler instruction set than 32-bit x86, yet ARM still thought it worthwhile to remove support to streamline the design. In that case, wouldn't there be an even bigger case for removing 32-bit instruction support from x86, or at least moving it aside into a small core or firmware so that it doesn't add complexity to the speed-critical parts?
It certainly wasn't the case when x86-64 was introduced 20 years ago. The absolutely first priority was to run existing 32b systems and existing 32b software under 64b systems with maximum performance. Nobody would buy those CPUs if they were slower than Intel.

As for die area savings, this annotated photograph of the VIA Nano CPU from some 10 years ago has been available for a while.
https://www.viagallery.com/isaiah-architecture/ (https://www.viagallery.com/isaiah-architecture/)
As you can see, almost half of it is cache to begin with. SIMD would be unaffected by dropping 32-bit support, load/store probably too. You would only be fighting for some simplification of the bottom-right corner, and perhaps not a great one, because a lot of those transistors are devoted to the complex task of tracking dependencies, scheduling, and reordering instructions.
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 09, 2021, 11:03:45 pm
Ah, sorry for the mistake then. I did see AArch64 being called a "64-bit extension" of the AArch32 instruction set, so I assumed it was much like the extensions in x86, where extra instructions are only added, not removed.

Aarch64 is a clean sheet design.

If you showed it to someone knowledgable in 2010 without telling them what it was, the only thing that would make them say "Is this from ARM?" would be the ability to shift or rotate one operand of arithmetic instructions. Otherwise it's much more similar to Alpha or PowerPC than it is to 32 bit ARM.

Quote
But as for Thumb, is it still relevant on modern chips? Its main purpose was to shrink instruction size to save memory and memory bandwidth back when RAM speed fell behind and presented a bottleneck to ever-faster CPUs. These days we have a lot of fast cache and a lot of memory to store code, so perhaps code density is no longer as big of an issue. Maybe a bit of extra memory bandwidth usage was worth the transistor-count saving from dropping Thumb.

People in the embedded world care. ROM or Flash is a fixed size and once you fill it going to the next larger size is expensive. 33% smaller code is 50% more features in the same size ROM.

Certain people are constantly dinging 32 bit RISC-V for having 5% or 10% bigger code size than Thumb2. Whether that is a genuine concern or just partisanship is hard to say.

Huawei are leading a RISC-V Task Group designing a new ISA extension with a few extra instructions that look like getting 32 bit RISC-V code actually smaller than Thumb2 code, at low cost. They're already using chips with a custom ISA extension for code size in their own equipment but it uses far too much of the remaining opcode space to be viable as a standard extension. The TG is paring it back to keep the best parts, and some new ideas too. So in a year or two that will be available to anyone who really really cares about minimum code size. And it will work with 64 bit RISC-V also.

On desktop / laptop / server / smartphone this is of course much less of a concern. Flash is big and so is RAM, and all the space in them is being taken up with music, images, and video anyway.

However it's not completely irrelevant, because L1 instruction cache is just as limited in size on big machines as ROM/RAM is on small ones. L1 instruction cache has been generally 32 KB for decades now. Occasionally someone will make a chip with 48k or 64k and then next generation -- back to 32k. The area, speed, power consumption benefits of keeping L1 small are just too large to ignore. I've seen figures of up to 30% of total energy going to fetching instructions from L1 cache. Even if 48k or 64k becomes standard for L1 instruction cache in future, it's not megabytes, and probably never will be.

So if you can get 50% more of your program code into L1 cache -- or get by with a smaller cache -- that's a big deal. Definitely on anything battery powered, or where cooling is a big expense such as server farms or supercomputers.
Title: Re: The phasing out of 32 bit
Post by: Ed.Kloonk on June 10, 2021, 01:53:31 am
Hey, confirm my suspicion.

I suspect that this move away from 32-bit is centered only around the Arm architecture and its ever-prevalent built-in crypto. Supporting 32-bit backwards compatibility is a ball and chain.

Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 10, 2021, 09:54:45 am
Linux fails to compile on mips with gcc v10.2.0.
I see the same issue too, but only when compiling natively on a mips64r2 machine.
Cross-compiling on a x86-64 box works nicely.

At first I thought it was a problem with setting the "cross_compiling" flag in ./Makefile.
But that's not sufficient.

mips-gcc and mips64-gcc are separate compilers.

I don't have a 64-bit mips userspace yet (just kernel).
- kernel 64bit
- userspace 32bit

This means that all builds on my mips machines are 32bit and do a cross-compilation to a mips64 kernel if requested in the .config.

Worse still, some files like asm-offsets.c are still preprocessed with the 32bit compiler, i.e. with gcc rather than mips-gcc,

which introduces a lot of flaws and extra work to double-check everything twice; basically it's prone to fail.


Crazy?


We have a lot of things to fix  :D
Title: Re: The phasing out of 32 bit
Post by: nigelwright7557 on June 10, 2021, 10:08:26 am
Microchip's MPLAB X has moved up to 64 bits and with that they threw out MPASM, which is a disaster for some people.
As for backwards compatibility, Windows 10 still uses the DOS-style command prompt!
It's still used as a shell for npm, Node.js, etc.



Title: Re: The phasing out of 32 bit
Post by: NiHaoMike on June 10, 2021, 10:39:40 pm
It certainly wasn't the case when x86-64 was introduced 20 years ago. The absolute first priority was to run existing 32b systems and existing 32b software under 64b systems with maximum performance. Nobody would buy those CPUs if they were slower than Intel's.

As for die area savings, this annotated photograph of VIA Nano CPU from some 10 years ago has been available for a while.
https://www.viagallery.com/isaiah-architecture/ (https://www.viagallery.com/isaiah-architecture/)
As you can see, almost half of it is cache to begin with. SIMD would be unaffected by dropping 32b support, load/store probably too. You would only be fighting for some simplifications of the bottom right corner, and perhaps not a great one, because a lot of those transistors are devoted to the complex task of tracking dependencies, scheduling and reordering instructions.
In the beginning, it certainly made sense to have the transition as smooth as possible. But the transition to 64 bit being mainstream was over a long time ago and the question now is if it's a good time to start transitioning to 64 bit only CPUs. The video mentioned that the x86 CPUs in modern game consoles are 64 bit only, not sure if they have any actual hardware differences.

The savings that matter isn't the silicon area but rather the complexity of the logic that needs to run as fast and efficiently as possible.
Microchip's MPLAB X has moved up to 64 bits and with that they threw out MPASM, which is a disaster for some people.
As for backwards compatibility, Windows 10 still uses the DOS-style command prompt!
It's still used as a shell for npm, Node.js, etc.
I only did a quick search on MPASM but it looks like they have a way to migrate projects to a new assembler?

I haven't used the "ghetto shell" in Windows for many years, Powershell is the modern replacement.
Title: Re: The phasing out of 32 bit
Post by: David Hess on June 11, 2021, 04:06:46 am
I suspect that this move away from 32-bit is centered only around the Arm architecture and its ever-prevalent built-in crypto. Supporting 32-bit backwards compatibility is a ball and chain.

ARM does not have a significant installed base of 32-bit software to preserve, and this is not the first time ARM has dropped backwards compatibility, which did not help it succeed as a desktop processor.
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 11, 2021, 04:36:10 am
I suspect that this move away from 32-bit is centered only around the Arm architecture and its ever-prevalent built-in crypto. Supporting 32-bit backwards compatibility is a ball and chain.

ARM does not have a significant installed base of 32-bit software to preserve, and this is not the first time ARM has dropped backwards compatibility, which did not help it succeed as a desktop processor.

Say what?

ARM has been making 32 bit processors for 35 years and they and their customers have a HUGE installed base of 32 bit software, both embedded and Linux.

RISC-V would be the one without a significant installed base of 32 bit software, at least for Linux, as 64 bit has been the focus for applications processors from the start.
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 11, 2021, 08:05:21 am
He probably meant "Wintel" (Windows + intel-x86) legacy  :-//

The aggressive commercial policy made by Microsoft coupled with the interests of Intel to rule the marketplace.
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 11, 2021, 08:19:01 am
HUGE installed base of 32 bit software, both embedded and Linux.

DOS has a huge installed base of 16-bit software
Windows has a huge installed base of 32-bit software
Mobile devices have a huge installed base of ARM software (mostly Android/Arm based)

Linux / ARM and RISC-OS / ARM (the latter mainly used in the UK) are, in this scenario, like a grain of sand on a beach. And talking about workstation and server stuff, due to the nature of "free software", things that run on Linux / ARM can easily be migrated to Linux / x86, which is mainstream and has a much larger user base.

I say this based on what I see in repositories.
Title: Re: The phasing out of 32 bit
Post by: magic on June 11, 2021, 08:22:46 am
Say what?

ARM has been making 32 bit processors for 35 years and they and their customers have a HUGE installed base of 32 bit software, both embedded and Linux.
He is right, though. No customer will buy a 64 bit refrigerator expecting it to run his old 32 bit refrigerator firmware. Meanwhile refrigerator vendors have all the means to port their firmware to 64 bit to take advantage of 64 bit features or just keep buying 32 bit ARM to save money. ARM install base is either throwaway or de-facto rented software, unlike x86 where you own software binaries and want them to run faster on your new and faster machine.
Title: Re: The phasing out of 32 bit
Post by: newbrain on June 11, 2021, 03:33:21 pm
Windows 10 still uses command prompt DOS mode !
It depends what you mean by "DOS mode".
But yes, most batch files of the MS-DOS era might still work in cmd.exe, as the syntax is largely the same.
This is a testament to backwards compatibility.
Just an example from the top of my mind:
Code: [Select]
C:\Users\newbrain>set /a (3+4)*10
70
C:\Users\newbrain>set /P VAR1="Please enter var1: " && echo %VAR1%
Please enter var1: 1234
1234
Title: Re: The phasing out of 32 bit
Post by: nctnico on June 11, 2021, 05:01:24 pm
Say what?

ARM has been making 32 bit processors for 35 years and they and their customers have a HUGE installed base of 32 bit software, both embedded and Linux.
He is right, though. No customer will buy a 64 bit refrigerator expecting it to run his old 32 bit refrigerator firmware. Meanwhile refrigerator vendors have all the means to port their firmware to 64 bit to take advantage of 64 bit features or just keep buying 32 bit ARM to save money. ARM install base is either throwaway or de-facto rented software, unlike x86 where you own software binaries and want them to run faster on your new and faster machine.
I agree. For embedded devices, where ARM rules the world, binary backward compatibility is not necessary. Changing to a different platform requires recompiling the software anyway. This is already the case for microcontrollers, for which there are at least a dozen different ARM cores in use.
Title: Re: The phasing out of 32 bit
Post by: Nominal Animal on June 12, 2021, 04:58:52 am
I do not believe for a second that ARM Cortex M processor family is being "phased out".
It's like claiming English is being phased out, because more and more people speak X.

Just look at the existing 8-bit processor families, which are still used in new designs.
And they will be used in new designs for as long as it makes commercial sense.
For obvious reasons, the 32-bit processor niche is much, much larger.

It is, however, true that with at least the GNU toolchain, it is relatively painless to move between hardware implementations; and in that sense, those looking for ARM chips for new designs don't care that much about backwards compatibility.

Nevertheless, the existing codebase for ARMv6-M to ARMv8.1-M is so large, that existing product lines are easier to upgrade with somewhat compatible newer processors.  And this is a very big reason why ARM Cortex M 32-bit family at least will not be going anywhere soon.

If we extend this to 32-bit processors in general, we'd need to look at the business cases where a 64-bit processor would be a hindrance compared to 32-bit, if not for anything else, then because of the added code size, or the need to port an existing codebase to 64-bit compatibility (which traditionally has revealed quite a lot of idiotic assumptions programmers make, which then have to be fixed in such transitions).

The vast number of processors are in embedded devices.  Cellphones are just one class, albeit a big one; and because of their multipurpose use, they probably do benefit from 64-bit support.  But what about the display controllers ("graphics cards") in them; the biggest scalars those use are 32-bit, so 32-bit plus SIMD extensions on vectors makes most sense.  And what about modems, routers, TVs, storage devices, and so on?
A vast majority of embedded devices at this point gains basically nothing from having 64-bit support.  Typical routers, modems, TVs, etc. that you do not notice, have 32 - 256 megabytes of memory, and basically gain nothing from having more than that; they just aren't even hitting the limits of 32 bits.  (Again, multifunction devices do differ.  And perhaps programmers are worse and worse year after year, so that it makes more economical sense to buy more powerful hardware, so that even bloated crappy code works, somewhat, on them.  After all, Microsoft has already managed to convince at least one generation of humans that devices are supposed to crash every now and then for no discernible reason.)

One should examine what kind of processors human interface peripherals – mice, keyboards, etc. – still use when considering the above question.  They do not need more than a few hundred bytes of memory, plus full native USB support, so the vast majority of them run on some 8-bit processor.  Why would they move to 32-bit?  The same goes for currently 32-bit appliances that are not hitting any limitations for the kinds of workloads customers have.

Put simply, I think the claims in the initial post are complete bullshit by someone who can only perceive a small slice of the world and believes that is everything there is, and thinks it is his Dog-given mission to convince everybody else.  Or at least get them to part with their money.  The only reason the video exists is to gain views by sowing discord (like this thread here), just like news no longer report actual events or facts, but just spin everything in an effort to rile people up.
Title: Re: The phasing out of 32 bit
Post by: Siwastaja on June 12, 2021, 09:17:58 am
For embedded, and by embedded I don't mean smartphones*, depending on case, 32-bit MCUs are either extremely overkill, or just slightly overkill, most of the time. There is very little to gain from going to 64 bits. In some embedded algorithms, 64-bit number crunching is needed, but that is very efficient on a 32-bit core by just upping frequency a bit and "software emulating" 64-bit arithmetic; and for example a Cortex-M7 already includes 64-bit features like a 64-bit wide memory data bus and a few instructions operating on 64-bit data, while still being a nominally 32-bit CPU.
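As a sketch of what that "software emulation" amounts to (plain C, hypothetical helper names): the compiler lowers ordinary uint64_t arithmetic on a 32-bit core to pairs of 32-bit operations, e.g. an add plus an add-with-carry on ARM:

```c
#include <stdint.h>

/* On a 32-bit core the compiler lowers this to two 32-bit adds
 * (e.g. ADDS + ADC on ARM); the C source is the same either way. */
uint64_t add64(uint64_t a, uint64_t b) {
    return a + b;
}

/* Hand-written equivalent showing what happens under the hood:
 * add the low halves, then the high halves plus the carry out. */
uint64_t add64_by_halves(uint32_t alo, uint32_t ahi,
                         uint32_t blo, uint32_t bhi) {
    uint32_t lo = alo + blo;
    uint32_t carry = (lo < alo);      /* carry out of the low-half add */
    uint32_t hi = ahi + bhi + carry;
    return ((uint64_t)hi << 32) | lo;
}
```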

32-bit is kind of sweet spot by being able to satisfy 99.99% of embedded needs while not being too much of an overkill for the smallest tasks. You can have a Cortex M0 for some 30-40 cents. An 8-bit PIC won't be any/much cheaper.

In classical embedded the memory requirements seem to vary from a few bytes to maybe half a gigabyte max. Special applications requiring more are expensive anyway and can just choose to use FPGAs, custom ASICs or, of course, just general purpose desktop CPUs.

*) IMHO, smartphones are exactly what desktop computing is, just not physically on a desk; general purpose computing. Smartphones have a bit different use case, namely mostly rented out simplified single-purpose entertainment software, enabling recompiling to changing architectures, but from CPU capability point of view, they benefit from all general-purpose performance enhancing features exactly like classic desktop CPUs do, making transition into 64 bits appealing. The same can't be said about microcontrollers in refrigerators, routers or ECUs. These applications naturally fit into 8 to just 32 bit architectures and anything excess is waste.
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 12, 2021, 11:44:35 am
32-bit is kind of sweet spot by being able to satisfy 99.99% of embedded needs

Yup, indeed  :D
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 12, 2021, 11:52:17 am
I do not believe for a second that ARM Cortex M processor family is being "phased out".
It's like claiming English is being phased out, because more and more people speak X.

LOL, the more Russian people I meet, the more I think Soviet dialects are spoken by more people than English. By that logic, if I were an "A.I." - programmed with such statistics-based deductive logic - I would infer that "English is being phased out" because more and more of the people I meet speak Soviet dialects ;D


(funny, there really are A.I. programs that think that way, with statistics-based deductive logic)
Title: Re: The phasing out of 32 bit
Post by: SiliconWizard on June 12, 2021, 05:13:55 pm
Of course, 32-bit is likely not going away anytime soon for small embedded stuff (microcontrollers).

Now for "desktop" and "server" applications, 32-bit has already been largely phased out.

And I agree with Siwastaja, powerful mobile devices such as smartphones and tablets are just small computers in disguise. Given that even mid-range phones these days have 6GB or 8GB of RAM, going 64-bit makes sense. (Now whether having that much power and memory in a phone makes sense, that's another story entirely, but that's just the way it is now.)
Title: Re: The phasing out of 32 bit
Post by: SilverSolder on June 12, 2021, 09:07:33 pm
Of course, 32-bit is likely not going away anytime soon for small embedded stuff (microcontrollers).

Now for "desktop" and "server" applications, 32-bit has already been largely phased out.

And I agree with Siwastaja, powerful mobile devices such as smartphones and tablets are just small computers in disguise. Given that even mid-range phones these days have 6GB or 8GB of RAM, going 64-bit makes sense. (Now whether having that much power and memory in a phone makes sense, that's another story entirely, but that's just the way it is now.)

To the last point, I think it does make sense to have serious amounts of RAM in a mobile platform, simply because all the active apps tend to be in memory at the same time in order to keep the thing responsive to the user (loading from Flash memory can be slow).  I'm not convinced that having a boatload of RAM necessitates having a 64 bit data word size, though...  that seems to have a big potential for wasting resources more than anything else.

Modern programmers / dev environments appear to be hugely non-caring about efficiency generally, irrespective of 32 bit vs. 64 bit...




Title: Re: The phasing out of 32 bit
Post by: magic on June 12, 2021, 10:12:12 pm
Would you rather have it with x86-style segmentation or AVR-style X,Y,Z registers? :D

(I would probably prefer the latter, perhaps because it wouldn't be me to implement the hardware).
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 12, 2021, 10:42:10 pm
32-bit is kind of sweet spot by being able to satisfy 99.99% of embedded needs while not being too much of an overkill for the smallest tasks. You can have a Cortex M0 for some 30-40 cents. An 8-bit PIC won't be any/much cheaper.

Waaaaay out, if you're talking about embedded cores doing some controlling task inside some chip. By a factor of 1000.

A Cortex-M0+ is 0.009 mm^2 on 40nm or 0.035 mm^2 on 90nm. [1]

TSMC price for a 300 mm wafer in 2020 was $2274 for 40nm and $1650 for 90nm. [2]

That makes the cost for the core 0.03 cents on 40nm or 0.08 cents on 90nm.

An 8051 or PIC will cost quite a lot less, while a 64 bit M0+ would cost a little under twice as much. If ARM offered 64 bit M0s, which they don't. But SiFive do (with the RV64I or RV64E instruction set, of course).


If your idea of "embedded" is buying off-the-shelf chips and assembling them onto a board then, yeah, you can get any of 8, 16, or 32 bit cores with some SRAM and flash and peripherals, in a package, for 30-40 cents. A 64 bit chip wouldn't cost any more to make. The packaging costs are the same and most of the silicon area is taken up by the pads, not the core.

There's little to no advantage in putting a 64 bit core in such a packaged chip because you won't have 4 GB of RAM inside it and you don't have an external address bus. You quite likely do have more than 64 KB though. And you often need numbers bigger than 255 or 65535, which take multiple instructions to process on an 8 or 16 bit core, using more energy than a single instruction on a 32 bit core.

But for little embedded cores doing some task inside a larger chip, such as controlling a SERDES, or 5G radio, or any other peripheral that needs real-time supervision, if the main application processors are 64 bit and the addressing inside the chip is 64 bit, then it makes a lot of sense to use a "64 bit M0" for the minions.

[1] https://www.anandtech.com/show/8400/arms-cortex-m-even-smaller-and-lower-power-cpu-cores (https://www.anandtech.com/show/8400/arms-cortex-m-even-smaller-and-lower-power-cpu-cores)
[2] https://www.tomshardware.com/news/tsmcs-wafer-prices-revealed-300mm-wafer-at-5nm-is-nearly-dollar17000 (https://www.tomshardware.com/news/tsmcs-wafer-prices-revealed-300mm-wafer-at-5nm-is-nearly-dollar17000)
Title: Re: The phasing out of 32 bit
Post by: David Hess on June 13, 2021, 10:14:22 am
For deep embedded applications, I have already run into problems using ARM microcontrollers to replace 8 and 16 bit parts; none of them support the lowest power applications as well.

Another complication is that microcontrollers must be built on larger legacy processes to support embedded memory, which prevents ARM microcontrollers from taking advantage of greater integration from a denser process.
Title: Re: The phasing out of 32 bit
Post by: Nominal Animal on June 13, 2021, 11:36:01 am
So, wouldn't the proper statement here be more like "We now have a variety of processor size classes to choose from", then?

It's not like 32 bit hardware is going away; it's more that we now have 64-bit hardware that can be used for the tasks where we hit 32-bit limits.
Title: Re: The phasing out of 32 bit
Post by: AntiProtonBoy on June 14, 2021, 04:55:47 am
Intel, Transmeta, and DEC all tried  that and failed, and I think Apple's attempts hurt them more than they helped.  Maybe these were all implementation failures, but if every attempt has failed, then that argues that the concept is flawed.
Transmeta didn't really fail, per se. They had a fairly successful prototype showcasing proof of concept. Their issue was more to do with patents and the use of the x86 instruction set. Intel took them to court and they settled out of court in the end, with terms that basically killed the project.
Title: Re: The phasing out of 32 bit
Post by: ejeffrey on June 14, 2021, 04:09:49 pm
Would you rather have it with x86-style segmentation or AVR-style X,Y,Z registers? :D

(I would probably prefer the latter, perhaps because it wouldn't be me to implement the hardware).

I suspect what SilverSolder was suggesting was something like PAE: 32 bit virtual memory addresses with 40+ bit physical addresses and page table entries.  x86 has had this since the Pentium Pro.  With it, an individual application on a 32 bit CPU can access ~4 GiB of RAM at once, but the OS can manage up to 64 GiB -- allowing for multiple large processes, or additional memory for disk cache.  It makes more work for the kernel, which cannot directly access all physical memory at once, but it provides a relatively simple way to use more than 4 GB of memory without requiring changes in user mode code.
Title: Re: The phasing out of 32 bit
Post by: SilverSolder on June 14, 2021, 06:47:03 pm
Would you rather have it with x86-style segmentation or AVR-style X,Y,Z registers? :D

(I would probably prefer the latter, perhaps because it wouldn't be me to implement the hardware).

I suspect what SilverSolder was suggesting was something like PAE: 32 bit virtual memory addresses with 40+ bit physical addresses and page table entries.  x86 has had this since the Pentium Pro.  With it, an individual application on a 32 bit CPU can access ~4 GiB of RAM at once, but the OS can manage up to 64 GiB -- allowing for multiple large processes, or additional memory for disk cache.  It makes more work for the kernel, which cannot directly access all physical memory at once, but it provides a relatively simple way to use more than 4 GB of memory without requiring changes in user mode code.

This kind of stuff was supported back in the days of MS Server 2003...   worked well!

Speaking of which, I recently ran up a copy of Server 2003 in a virtual machine.  The performance was stunning...   just so lean and mean!

Title: Re: The phasing out of 32 bit
Post by: magic on June 14, 2021, 10:04:34 pm
Okay, I suppose PAE is an option too.

But I just happen to use one old 32 bit machine for web browsing sometimes and let me tell you - a dozen or two tabs, particularly with images, a few weeks of uptime, and then despite a few gigs of RAM and swap available, the browser runs out of 32 bit address space and crashes.
|O
Title: Re: The phasing out of 32 bit
Post by: Berni on June 15, 2021, 10:40:29 am
This kind of stuff was supported back in the days of MS Server 2003...   worked well!

Speaking of which, I recently ran up a copy of Server 2003 in a virtual machine.  The performance was stunning...   just so lean and mean!

Yeah PAE was a bit poorly supported by Windows, mostly only seen in the server versions of the OS.

Though Win 2000 and its derivatives really were nicely lean and mean. I once stuck one of those on an old Pentium II machine and it just flew. It comes on a regular CD without even filling it; once installed it needs only a few hundred MB of disk space, and once booted it uses about 32MB of RAM for itself. It ran great on anything that could run Win 95 and onwards (provided you weren't missing some important NT-compatible driver) while being rock solid stable. Once it got to Win XP it started accumulating extra fat, and it got real chunky once Vista came around.

Okay, I suppose PAE is an option too.

But I just happen to use one old 32 bit machine for web browsing sometimes and let me tell you - a dozen or two tabs, particularly with images, a few weeks of uptime, and then despite a few gigs of RAM and swap available, the browser runs out of 32 bit address space and crashes.
|O

Yep, this is the reason why at some point way back I switched over to using Firefox Developer Edition, for the sole reason that it was available in 64bit way before the consumer Firefox releases got 64bit versions years later.

With my usual behavior of opening way too many tabs (like >100 tabs and >5 windows), the 64bit version appeared to run much more stably. Some 32bit versions of Firefox really stumbled badly over the Windows 2GB per-process limit: they would run out of memory without noticing it, then produce visual glitches, hangs, random crashes, etc. They probably improved this later on, but I had jumped ship to the 64bit developer version by then.

At this point my only 32bit machine is an old Core 2 Duo laptop, so old that it can't handle watching YouTube while doing other stuff at the same time; the fix was to watch YouTube on my phone instead. The Chromium browser runs a lot better on the old dinosaur, but sometimes doesn't properly render websites that use too-advanced scripting functionality (the lack of these latest and greatest features is likely why it runs so fast on old hardware).
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 15, 2021, 10:15:38 pm
I don't know how to break this to you, but Core 2 Duo is a 64 bit computer.
Title: Re: The phasing out of 32 bit
Post by: Ed.Kloonk on June 15, 2021, 11:55:48 pm
I don't know how to break this to you, but Core 2 Duo is a 64 bit computer.

I fell into that trap a while back. The devil is in the detail. One of my Core i-something-or-other machines got dusted off to use as a test bed. Bloody thing is 64 bit, sure, a bit slow, but won't run any virtualization in 64 bit.  >:( Ask me how I found that out.
Title: Re: The phasing out of 32 bit
Post by: Berni on June 16, 2021, 05:50:20 am
Yes, I know a Core 2 Duo can do 64bit. Consumer multicore and 64bit happened more or less simultaneously at Intel. But I purposely put 32bit Win7 on it because it didn't have enough RAM to need 64bit.

I just didn't see any benefit in going 64bit while I could still remember the issues that 64bit Win XP had. And since I am running a 32bit OS, it's essentially a 32bit machine in my case. Quite a few motherboards for these early 64bit CPUs didn't even accept more than 4GB of RAM in total, because most users didn't need it. I still have a Core 2 Quad around that is running a 64bit OS, because it has enough RAM to actually need it.

I did actually end up using the 32bit machine for compatibility reasons. The version of Altium I was using (it was still the early days of Altium back then) seemed to sometimes hang when generating gerber files, or sometimes generated gerbers with a bit of garbage in them. However, doing the gerber export on that 32bit machine always worked fine (and yes, Altium was installed from the exact same iso file and both machines ran Win7). It seemed pretty strange, but I assumed 64bit was at fault.

I didn't know about the 64bit limitation in virtualization; I kept using 32bit VMs because, again, they didn't need to address enough RAM to need 64bit.

Lots of PC software releases out there are still 32bit. 32bit software is forward compatible with 64bit anyway, and simple apps don't need lots of RAM or wider registers for heavy math optimizations. So I'd say 32bit apps are still going to be around for a good while, even though almost everyone is running a 64bit OS these days (I wouldn't be surprised if the next version of Windows drops support for 32bit CPUs).
Title: Re: The phasing out of 32 bit
Post by: Ed.Kloonk on June 16, 2021, 07:03:55 am

Didn't know about the 64bit limitation in virtualization, I kept using 32bit VMs because again they didn't need to address enough RAM to actually need 64bit.



I've been looking at my old notes on this issue. The CPU was an i5-4670K. I vaguely noted that the cpu flags command reported the inability, after a 64-bit Linux guest failed to boot past grub on the 64-bit host. It looks as though that's as far as I was willing to investigate, aside from ensuring the latest known BIOS update and flipping the obvious BIOS settings.

Today, having looked around for "i5-4670k virtualization 64 bit", I see there are numerous questions about similar setups. Most of the answers point to incorrect BIOS virtualization settings or a buggy BIOS. So I now wonder if the chip -can- do it with the right mobo settings.

Still have the mobo. Will put it on the list of rainy day projects.

Found this for anyone playing along at home and interested in some semantics..
https://en.wikipedia.org/wiki/X86_virtualization#Hardware_support

edit: I did find this:
Quote
All models apart from Intel i5-4670K support Intel VT-d.
https://en.wikipedia.org/wiki/List_of_Intel_Core_i5_processors#Haswell-DT_(quad-core,_22_nm)

Quote
I/O MMU virtualization (AMD-Vi and Intel VT-d)
An input/output memory management unit (IOMMU) allows guest virtual machines to directly use peripheral devices, such as Ethernet, accelerated graphics cards, and hard-drive controllers, through DMA and interrupt remapping. This is sometimes called PCI passthrough.
https://en.wikipedia.org/wiki/X86_virtualization#Intel-VT-d

Title: Re: The phasing out of 32 bit
Post by: viperidae on June 20, 2021, 01:13:45 am
Has anyone mentioned the fact that modern processors don't have transistors to implement each instruction? The instructions are decoded into a set of micro-ops. Supporting old instruction sets is only a little more than writing the microcode to execute them.
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 20, 2021, 06:49:43 am
Has anyone mentioned the fact that modern processors don't have transistors to implement each instruction? The instructions are decoded into a set of micro-ops. Supporting old instruction sets is only a little more than writing the microcode to execute them.

Yes, that is somewhat correct information as of the late 1970s. However, micro-ops and microcode are very different things.

Microcode is a kind of computer program consisting of micro-instructions. It is effectively an interpreter for the instruction set the programmer sees. Each programmer-visible instruction might require on the order of ten micro-instructions to decode and interpret it.

Micro-ops are where a mildly complicated instruction is very simply expanded into two or three instructions, or maybe a short sequence of similar instructions. For example, a "shift-then-add" instruction might on some low end models be replaced in the pipeline by a shift instruction and then an add instruction. Or a "push multiple" instruction with a bitmap of registers might be replaced by a series of normal store instructions, one for each register.

Original x86 implementations were entirely microcode. Starting with the 486 many common instructions were expanded to micro-ops, and more complicated instructions used microcode.

ARM processors do some micro-op expansion, but the 64 bit ISA has been designed not to require sequencers for this, only simple macro-expansion at most. There is no microcode.

RISC-V processors with which I am familiar do no micro-op expansion. User-visible instructions and micro-ops are 1:1. On the contrary, there is talk of high-end processors combining multiple instructions into a single micro-op -- as modern x86 and ARM processors do for "CMP;Bcc" pairs.
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 20, 2021, 10:30:09 am
micro-op expansion

an example of this?
Title: Re: The phasing out of 32 bit
Post by: brucehoult on June 20, 2021, 01:17:30 pm
micro-op expansion

an example of this?

I gave two potential examples already. Others would include load-then-op and load-op-store instructions or complex addressing modes.
Title: Re: The phasing out of 32 bit
Post by: SiliconWizard on June 20, 2021, 04:13:20 pm
On the contrary, there is talk of high-end processors combining multiple instructions into a single micro-op -- as modern x86 and ARM processors do for "CMP;Bcc" pairs.

That's instruction fusing, right?
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 21, 2021, 11:49:47 am
Back to *64-bit computers* planned before "RISC" ... in the 60s, the IBM 7030 (also known as "Stretch") was actually a 64-bit computer, with multiple processing units for floating point, integers, and other operations like n-bit character processing.

Don't believe it? Let's check it out!

Instructions were either 32-bit or 64-bit, and fixed-point numbers were variable in length, stored in binary, anywhere from 1 to 64 bits  :D

Surprisingly, the IBM 7030's zero register is called "$Z": a true 64-bit register that always reads as zero and can't be changed by writes. That's quite RISCish, even though the machine was designed years before the term RISC was invented.

The problem is that looking ahead 30-40 years from the 1960s was quite limited by the technology of the time; the result was much slower than expected and failed to hit its performance goals.
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 21, 2021, 11:55:08 am
Yesterday on Skype, a senior engineer commented: "... anyway, RISC is not RISC today but something easy to pipeline, yet just as complex as a CISC design ...".

pearl of wisdom  :D
Title: Re: The phasing out of 32 bit
Post by: NiHaoMike on June 21, 2021, 01:56:39 pm
Has anyone mentioned the fact that modern processors don't have dedicated transistors to implement each instruction? The instructions are decoded into a set of micro-ops, so supporting an old instruction set is little more than writing the microcode to execute it.
The overhead must still be significant, since ARM has removed 32-bit compatibility from some of their highest performance cores.
Title: Re: The phasing out of 32 bit
Post by: SiliconWizard on June 21, 2021, 05:06:45 pm
Yesterday on Skype, a senior engineer  commented "... anyway, RISC is not RISC today but something easy to pipeline, but just as complex a CISC design ... ".

pearl of wisdom  :D

Well. RISC-V is pretty RISCy in essence.
Now, if you consider a recent ARM ISA, for instance... that may be questionable. There are so many instructions, some of which do some pretty fancy stuff...
But what makes RISC-V more RISCy is basically its modularity. Take a RISC-V instruction set with all currently defined extensions, and you get closer to ARM. Actually, if you include the B extension, which isn't ratified yet (but is pretty big if you implement it all), the resulting instruction set is probably going to be larger than ARM's.

But as Bruce mentioned, beyond the "complexity" of the instruction set, a relatively good indicator of "CISCyness" would be the use of microcode.

The ease of pipelining is certainly a consideration here, but all processors designed since at least the late '80s or early '90s have made that a priority, be they considered CISC or RISC...
Title: Re: The phasing out of 32 bit
Post by: SiliconWizard on June 21, 2021, 05:09:29 pm
Has anyone mentioned the fact that modern processors don't have dedicated transistors to implement each instruction? The instructions are decoded into a set of micro-ops, so supporting an old instruction set is little more than writing the microcode to execute it.
The overhead must still be significant, since ARM has removed 32-bit compatibility from some of their highest performance cores.

The decoding step of course becomes significantly more complex with all of this. Even implementing the "C" extension on RISC-V adds significant overhead.
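The length determination itself is cheap (in RISC-V, the two low bits of the first halfword distinguish 16-bit compressed encodings from standard 32-bit ones); the real overhead is that the decoder then needs a whole parallel set of decode patterns for the compressed forms. A minimal sketch of just the length check:

```python
def insn_length(low_halfword: int) -> int:
    """Instruction length in bytes, from the low 16 bits of a RISC-V
    encoding: any value of the two LSBs other than 0b11 marks a
    16-bit "C" (compressed) instruction."""
    return 2 if (low_halfword & 0b11) != 0b11 else 4

assert insn_length(0x4501) == 2   # c.li a0, 0  (compressed)
assert insn_length(0x0513) == 4   # low half of addi a0, a0, 0
```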

And, as Bruce also mentioned, do not confuse microcode with micro-op expansion. Not quite the same thing.
Title: Re: The phasing out of 32 bit
Post by: DiTBho on June 21, 2021, 06:30:22 pm
Well. RISC-V is pretty RISCy in essence.

He was referring to designs like ARM64, M1, and (since he is a retired IBM engineer) IBM POWER9 and POWER10.
Title: Re: The phasing out of 32 bit
Post by: nigelwright7557 on July 15, 2021, 02:32:50 am

64 bit can double the memory requirements in some applications...   64 bit isn't a universal "good"...

x86 can load 8, 16, 32, or 64 bits according to the instruction.
You don't use 64-bit instructions unless really needed, so very little extra code is needed.
In fact, having to expand 32-bit data to a 64-bit value can take even more memory than just using plain 64 bits to start with.

32-bit Windows PCs were limited to 4 GB of memory addressing.
Title: Re: The phasing out of 32 bit
Post by: brucehoult on July 15, 2021, 04:49:44 am
64 bit can double the memory requirements in some applications...   64 bit isn't a universal "good"...

x86 can load 8, 16, 32, or 64 bits according to the instruction.
You don't use 64-bit instructions unless really needed, so very little extra code is needed.
In fact, having to expand 32-bit data to a 64-bit value can take even more memory than just using plain 64 bits to start with.

32-bit Windows PCs were limited to 4 GB of memory addressing.

Breaking the 4 GB memory limit is the main reason to go to a 64 bit CPU, so pointers are 64 bits, so programs that use a lot of pointers can have increased memory usage by up to 2x. Not many programs actually approach this, but languages with dynamic typing that can store any of char, int, float, or a pointer in each variable have to use as much space as the biggest one i.e. the pointer variant.
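A toy illustration of the pointer effect, using Python's struct module with fixed-size layouts (I = 32-bit field, Q = 64-bit field): a list node made of two pointer-sized fields exactly doubles in size.

```python
import struct

# A cons cell / list node: two pointer-sized fields (car, cdr).
node32 = struct.calcsize("<II")   # two 32-bit pointers
node64 = struct.calcsize("<QQ")   # two 64-bit pointers

print(node32, node64)             # → 8 16
```

Real heaps rarely hit the full 2x because objects also hold non-pointer data, which is exactly the point made above.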

On x86, the move to 64 bit registers coincided with going from 8 to 16 registers. Some people want to use the extra registers but don't care about the increased address space, so the "x32" ABI was developed. As I understand it, x32 supports 64 bit integers in registers, so it still has to save/restore 8 bytes per register in stack frames. Uptake never seemed to be very high, and there are recent moves to deprecate it.
Title: Re: The phasing out of 32 bit
Post by: PKTKS on July 16, 2021, 11:43:02 am
64 bit can double the memory requirements in some applications...   64 bit isn't a universal "good"...

x86 can load 8, 16, 32, or 64 bits according to the instruction.
You don't use 64-bit instructions unless really needed, so very little extra code is needed.
In fact, having to expand 32-bit data to a 64-bit value can take even more memory than just using plain 64 bits to start with.

32-bit Windows PCs were limited to 4 GB of memory addressing.

Breaking the 4 GB memory limit is the main reason to go to a 64 bit CPU, so pointers are 64 bits, so programs that use a lot of pointers can have increased memory usage by up to 2x. Not many programs actually approach this, but languages with dynamic typing that can store any of char, int, float, or a pointer in each variable have to use as much space as the biggest one i.e. the pointer variant.

On x86, the move to 64 bit registers coincided with going from 8 to 16 registers. Some people want to use the extra registers but don't care about the increased address space, so the "x32" ABI was developed. As I understand it, x32 supports 64 bit integers in registers, so it still has to save/restore 8 bytes per register in stack frames. Uptake never seemed to be very high, and there are recent moves to deprecate it.

May be I am old school..
and all those Z80 Kb got me spoiled...

But AFAIK 4G ( 4 G_i_g_a_B_y_t_e_s_... )  of memory is a lot.

So far I have not seen any "usual" average-joe applet use that much..

Compiling Linux with PAE support, enabling 64 GB of visible memory, has been
around for decades.. and works fine ..

*MAY BE* some insanely huge images above 10k pixels wide..
Maybe some huge databases.. or of course insanely huge 3D data that needs to be moved around..

Maybe.. but still, 4G is a lot, and having the OS support 64G via PAE while staying 32-bit native keeps binary-compatible distros around..

May be a point to consider in saving some good hardware and pockets

Paul  :-+
Title: Re: The phasing out of 32 bit
Post by: brucehoult on July 16, 2021, 12:20:36 pm
Web browsers can get over 4 GB. Yeah, that's bloat and I'm sure they could do better, but people demand the functionality and spend half their lives in their web browser.

Linking LLVM binaries takes something like 6 or 8 GB of RAM using the standard GNU ld linker. That's bloat too, and gold and Apple's linker and others don't use so much space and time linking the same programs.

Virtual machines need as much RAM as you're giving the guest OS, plus some.

Even if a program is using less than 4 GB of actual memory, being able to space things out in the address space is handy.

I started on 6502s and z80s too, and I still do a lot of things with ARM and RISC-V boards with less RAM than an Apple ][ had -- not to mention Arduino Unos with a whopping 2K of RAM, and my favourite bare AtTiny85 chip with 512 bytes.

But I'm not posting on eevblog/forum from one of those.
Title: Re: The phasing out of 32 bit
Post by: PKTKS on July 16, 2021, 03:41:36 pm
I see your point...

I remember reading here on the forum some folk complaining that he could not open 120 tabs in Firefox.. because it crashed...  ::)

My parents also here and there are crazy about Android dullness..

Well, in both cases the system has about 100 browser contexts open and going..

Chrome and/or Firefox on android with about 10 tiny windows is already a sick mule

A folk opening 120 tabs on a browser with 4G will probably open 600 tabs with 16G...

There is no "solution" for that.. just to raise some limits...

Alas.. they *should* have already been there... but who cares..   :o

The more hardware the better..
Paul