Author Topic: The phasing out of 32 bit  (Read 10535 times)


Online Berni

  • Super Contributor
  • ***
  • Posts: 4911
  • Country: si
Re: The phasing out of 32 bit
« Reply #75 on: June 15, 2021, 10:40:29 am »
This kind of stuff was supported back in the days of MS Server 2003...   worked well!

Speaking of which, I recently ran up a copy of Server 2003 in a virtual machine.  The performance was stunning...   just so lean and mean!

Yeah PAE was a bit poorly supported by Windows, mostly only seen in the server versions of the OS.

Though Win 2000 and its derivatives really were nicely lean and mean. I once stuck one of those on some old Pentium II machine and the thing just flew on it. It comes on a regular CD without even filling all of it, once installed it needs only a few hundred MB of disk space, and once booted it uses about 32MB of RAM for itself. It ran great on anything that could run Win 95 and onwards (provided you weren't missing some important NT compatible driver) while being rock solid stable. Once it got to Win XP it started accumulating extra fat, and it got real chunky once Vista came around.

Okay, I suppose PAE is an option too.

But I just happen to use one old 32 bit machine for web browsing sometimes and let me tell you - a dozen or two tabs, particularly with images, a few weeks of uptime, and then despite a few gigs of RAM and swap available, the browser runs out of 32 bit address space and crashes.
|O

Yep, this is the reason why at some point way back I switched over to using Firefox Developer Edition, for the sole reason that it was available in 64bit long before the actual consumer Firefox releases got 64bit versions years later.

With my usual behavior of opening way too many tabs (like >100 tabs and >5 windows) the 64bit version appeared to run way more stable. Some 32bit versions of Firefox really stumbled badly over the Windows 2GB per process limit: they would run out of memory without noticing it, then produce visual glitches, hangs, random crashes etc. They probably improved this later on, but I had jumped ship to the 64bit developer version by then.

At this point my only 32bit machine is an old Core 2 Duo laptop that is so old it can't handle watching YouTube while doing other stuff at the same time; the fix was to instead watch YouTube on my phone. The Chromium browser runs a lot better on the old dinosaur, but sometimes does not properly open websites that use too-advanced scripting functionality (the lack of these latest and greatest features is likely why it runs so fast on old hardware)
« Last Edit: June 15, 2021, 10:42:24 am by Berni »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: The phasing out of 32 bit
« Reply #76 on: June 15, 2021, 10:15:38 pm »
I don't know how to break this to you, but Core 2 Duo is a 64 bit computer.
 

Online Ed.Kloonk

  • Super Contributor
  • ***
  • Posts: 4000
  • Country: au
  • Cat video aficionado
Re: The phasing out of 32 bit
« Reply #77 on: June 15, 2021, 11:55:48 pm »
I don't know how to break this to you, but Core 2 Duo is a 64 bit computer.

I fell into that trap a while back. The devil is in the detail. One of my Core i-something-or-other machines got dusted off to use as a test bed. Bloody thing is 64 bit, sure, a bit slow, but won't run any virtualization in 64 bit.  >:( Ask me how I found that out.
iratus parum formica
 

Online Berni

  • Super Contributor
  • ***
  • Posts: 4911
  • Country: si
Re: The phasing out of 32 bit
« Reply #78 on: June 16, 2021, 05:50:20 am »
Yes, I know a Core 2 Duo can do 64bit. Consumer multicore and 64bit sort of happened simultaneously at Intel. But I purposely put 32bit Win7 on it because it didn't have enough RAM to need 64bit.

I just didn't see any benefit in going 64bit while I could still remember the issues that 64bit Win XP had. So since it is running a 32bit OS, it is essentially a 32bit machine in my case. Quite a few motherboards for these early 64bit CPUs didn't even accept more than 4GB of RAM total, because most users didn't need it. I still have a Core 2 Quad around that is running a 64bit OS because it has enough RAM to actually need it.

I did actually end up using the 32bit machine for compatibility reasons. The version of Altium I was using (it was still the early days of Altium back then) seemed to sometimes hang when generating gerber files, or sometimes generate gerbers with a bit of garbage in them. However, doing the gerber export on that 32bit machine always worked fine (and yes, Altium was installed from the exact same iso file and both machines ran Win7). Seemed pretty strange, but I assumed 64bit was at fault.

I didn't know about the 64bit limitation in virtualization; I kept using 32bit VMs because, again, they didn't need to address enough RAM to actually need 64bit.

Lots of PC software releases out there are still 32bit. The 32bit software is forward compatible with 64bit anyway, while simple apps don't need lots of RAM or wider registers for heavy math optimizations. So I'd say 32bit apps are still going to be around for a good while, even though almost everyone is running a 64bit OS these days (I wouldn't be surprised if the next version of Windows drops support for 32bit CPUs).
 

Online Ed.Kloonk

  • Super Contributor
  • ***
  • Posts: 4000
  • Country: au
  • Cat video aficionado
Re: The phasing out of 32 bit
« Reply #79 on: June 16, 2021, 07:03:55 am »

Didn't know about the 64bit limitation in virtualization, I kept using 32bit VMs because again they didn't need to address enough RAM to actually need 64bit.



I've been looking at my old notes on this issue. The CPU was an i5-4670K. I've vaguely noted that the CPU flags reported the inability after a guest 64-bit Linux failed to boot past GRUB on a 64-bit host. It looks as though that's as far as I was willing to investigate, aside from ensuring I had the last known BIOS update and flipping the obvious BIOS settings.

Today, having a look around for "i5-4670k virtualization 64 bit", there are numerous questions about similar setups. Most of the answers point to incorrect BIOS virt settings or a buggy BIOS. So I now wonder if the chip -can- do it with the right mobo settings.
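For anyone repeating this kind of check on Linux, the relevant CPUID feature flags show up in /proc/cpuinfo: "lm" (long mode, a 64-bit capable CPU) and "vmx" (Intel VT-x) or "svm" (AMD-V) for hardware virtualization. A minimal sketch; the sample flags line below is invented for illustration, a real system would read the actual file:

```python
# Sample /proc/cpuinfo text -- invented for illustration.
# On a real box: cpuinfo_text = open("/proc/cpuinfo").read()
SAMPLE_CPUINFO = """\
processor\t: 0
model name\t: Intel(R) Core(TM) i5-4670K CPU @ 3.40GHz
flags\t\t: fpu vme pse lm constant_tsc vmx est tm2
"""

def cpu_capabilities(cpuinfo_text):
    """Return (is_64bit, has_hw_virt) parsed from /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            # "lm" = long mode (64-bit); "vmx"/"svm" = hardware virt
            return "lm" in flags, bool({"vmx", "svm"} & flags)
    return (False, False)

print(cpu_capabilities(SAMPLE_CPUINFO))  # (True, True) for this sample
```

Note that VT-x (vmx) is what a 64-bit guest needs; VT-d, which the i5-4670K lacks, is the separate I/O passthrough feature quoted below.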

Still have the mobo. Will put it on the list of rainy day projects.

Found this for anyone playing along at home and interested in some semantics..
https://en.wikipedia.org/wiki/X86_virtualization#Hardware_support

edit: I did find this:
Quote
All models apart from Intel i5-4670K support Intel VT-d.
https://en.wikipedia.org/wiki/List_of_Intel_Core_i5_processors#Haswell-DT_(quad-core,_22_nm)

Quote
I/O MMU virtualization (AMD-Vi and Intel VT-d)
An input/output memory management unit (IOMMU) allows guest virtual machines to directly use peripheral devices, such as Ethernet, accelerated graphics cards, and hard-drive controllers, through DMA and interrupt remapping. This is sometimes called PCI passthrough.
https://en.wikipedia.org/wiki/X86_virtualization#Intel-VT-d

« Last Edit: June 16, 2021, 07:47:20 am by Ed.Kloonk »
iratus parum formica
 

Offline viperidae

  • Frequent Contributor
  • **
  • Posts: 306
  • Country: nz
Re: The phasing out of 32 bit
« Reply #80 on: June 20, 2021, 01:13:45 am »
Has anyone mentioned the fact modern processors don't have transistors to implement each instruction? The instructions are decoded into a set of micro-ops. Supporting old instruction sets is only a little more than writing the microcode to execute it.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: The phasing out of 32 bit
« Reply #81 on: June 20, 2021, 06:49:43 am »
Has anyone mentioned the fact modern processors don't have transistors to implement each instruction? The instructions are decoded into a set of micro-ops. Supporting old instruction sets is only a little more than writing the microcode to execute it.

Yes, that is somewhat correct information as of the late 1970s. However micro-ops and microcode are very different things.

Microcode is a kind of computer program consisting of micro-instructions. It is effectively an interpreter for the instruction set the programmer sees. Each programmer-visible instruction might require on the order of ten micro-instructions to decode and interpret it.

Micro-ops are where a mildly complicated instruction is very simply expanded into two or three instructions, or maybe a short sequence of similar instructions. For example, a "shift-then-add" instruction might on some low end models be replaced in the pipeline by a shift instruction and then an add instruction. Or a "push multiple" instruction with a bitmap of registers might be replaced by a series of normal store instructions, one for each register.
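As an illustration of the expansion idea (not any real pipeline; the PUSHM encoding and micro-op names are invented for the sketch):

```python
def expand_push_multiple(reg_bitmap, num_regs=16):
    """Expand a hypothetical PUSHM instruction, which carries a bitmap
    of registers, into one simple store micro-op per set bit."""
    micro_ops = []
    for reg in range(num_regs):
        if reg_bitmap & (1 << reg):
            micro_ops.append(("STORE", f"r{reg}"))  # push r<reg> to the stack
    return micro_ops

# PUSHM {r0, r2, r3} -> three plain store micro-ops
print(expand_push_multiple(0b1101))
# [('STORE', 'r0'), ('STORE', 'r2'), ('STORE', 'r3')]
```

The point is that the expansion is a fixed, almost mechanical rewrite in the decoder, not a program being interpreted the way microcode is.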

Original x86 implementations were entirely microcode. Starting with the 486 many common instructions were expanded to micro-ops, and more complicated instructions used microcode.

ARM processors do some micro-op expansion, but the 64 bit ISA has been designed so that no sequencers are required for this, only at most simple macro-expansion. There is no microcode.

RISC-V processors with which I am familiar do no micro-op expansion. User-visible instructions and micro-ops are 1:1. On the contrary, there is talk of high-end processors combining multiple instructions into a single micro-op -- as modern x86 and ARM processors do for "CMP;Bcc" pairs.
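The fusion direction can be sketched the same way: a decoder peephole that spots a compare followed by a conditional branch and emits one combined micro-op. The mnemonics and tuple encoding here are invented for illustration:

```python
def fuse_cmp_branch(instructions):
    """Fuse each adjacent CMP + Bcc pair into a single micro-op, as
    high-end x86 and ARM decoders do for compare-and-branch sequences."""
    fused, i = [], 0
    while i < len(instructions):
        cur = instructions[i]
        nxt = instructions[i + 1] if i + 1 < len(instructions) else None
        if cur[0] == "CMP" and nxt is not None and nxt[0].startswith("B"):
            # One micro-op carrying the compare operands and branch target
            fused.append(("CMP_" + nxt[0],) + cur[1:] + nxt[1:])
            i += 2  # consumed both instructions
        else:
            fused.append(cur)
            i += 1
    return fused

ops = [("ADD", "r1", "r2"), ("CMP", "r1", "#0"), ("BNE", "loop")]
print(fuse_cmp_branch(ops))
# [('ADD', 'r1', 'r2'), ('CMP_BNE', 'r1', '#0', 'loop')]
```

Three incoming instructions become two micro-ops: the opposite direction from micro-op expansion.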
 
The following users thanked this post: newbrain, DiTBho

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3772
  • Country: gb
Re: The phasing out of 32 bit
« Reply #82 on: June 20, 2021, 10:30:09 am »
micro-op expansion

an example of this?
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: The phasing out of 32 bit
« Reply #83 on: June 20, 2021, 01:17:30 pm »
micro-op expansion

an example of this?

I gave two potential examples already. Others would include load-then-op and load-op-store instructions or complex addressing modes.
 
The following users thanked this post: DiTBho

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14230
  • Country: fr
Re: The phasing out of 32 bit
« Reply #84 on: June 20, 2021, 04:13:20 pm »
On the contrary, there is talk of high-end processors combining multiple instructions into a single micro-op -- as modern x86 and ARM processors do for "CMP;Bcc" pairs.

That's instruction fusing, right?
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3772
  • Country: gb
Re: The phasing out of 32 bit
« Reply #85 on: June 21, 2021, 11:49:47 am »
Back to *64bit computers* planned before "RISC" ... in the 60s, the IBM 7030 (also known as "Stretch") was actually a 64 bit computer, with multiple processing units for floating point, integers, and other operations like n-bit character processing.

Don't believe it? Let's check it out!

Instructions were either 32-bit or 64-bit, and fixed-point numbers were variable in length, stored in either binary or decimal, 1 to 64 bits  :D

Surprisingly, the IBM 7030's register 0 is called "$Z", and it's a true 64-bit register that always reads as zero and can't be changed by writes, which is quite RISCish even though we're talking about a machine designed years before the term RISC was invented.

The problem is that looking ahead 30-40 years from the 1960s was, well, quite limited by the technology of the time, and the result was much slower than expected and failed to hit any of its performance goals.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3772
  • Country: gb
Re: The phasing out of 32 bit
« Reply #86 on: June 21, 2021, 11:55:08 am »
Yesterday on Skype, a senior engineer commented "... anyway, RISC is not RISC today but something easy to pipeline, yet just as complex as a CISC design ...".

pearl of wisdom  :D
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline NiHaoMikeTopic starter

  • Super Contributor
  • ***
  • Posts: 8951
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: The phasing out of 32 bit
« Reply #87 on: June 21, 2021, 01:56:39 pm »
Has anyone mentioned the fact modern processors don't have transistors to implement each instruction? The instructions are decoded into a set of micro-ops. Supporting old instruction sets is only a little more than writing the microcode to execute it.
The overhead must still be significant since ARM has removed 32 bit compatibility from some of their highest performance cores.
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14230
  • Country: fr
Re: The phasing out of 32 bit
« Reply #88 on: June 21, 2021, 05:06:45 pm »
Yesterday on Skype, a senior engineer  commented "... anyway, RISC is not RISC today but something easy to pipeline, but just as complex a CISC design ... ".

pearl of wisdom  :D

Well. RISC-V is pretty RISCy in essence.
Now, if you consider a recent ARM ISA, for instance... that may be questionable. There are so many instructions, some of which do some pretty fancy stuff...
But what makes RISC-V more RISCy is basically its modularity. Take a RISC-V instruction set with all currently defined extensions, and you get closer to ARM. Actually, if you include the B extension, which is not ratified yet (but is pretty big if you implement it all), the resulting ISA is probably going to be larger than ARM's.

But as Bruce mentioned, beyond the "complexity" of the instruction set, a relatively good indicator of "CISCyness" would be the use of microcode.

The ease of pipelining is certainly a consideration here, but all processors designed since at least the late 80's or early 90's have made that a priority, be they considered CISC or RISC...
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14230
  • Country: fr
Re: The phasing out of 32 bit
« Reply #89 on: June 21, 2021, 05:09:29 pm »
Has anyone mentioned the fact modern processors don't have transistors to implement each instruction? The instructions are decoded into a set of micro-ops. Supporting old instruction sets is only a little more than writing the microcode to execute it.
The overhead must still be significant since ARM has removed 32 bit compatibility from some of their highest performance cores.

The decoding step is of course significantly more complex when supporting this. Even implementing the "C" extension on RISC-V adds significant decode overhead.

And, as Bruce also mentioned, do not confuse microcode with micro-op expansion. Not quite the same thing.
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3772
  • Country: gb
Re: The phasing out of 32 bit
« Reply #90 on: June 21, 2021, 06:30:22 pm »
Well. RISC-V is pretty RISCy in essence.

He was referring to designs like ARM64, M1, and (since he is a retired IBM engineer) IBM POWER9 and POWER10.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline nigelwright7557

  • Frequent Contributor
  • **
  • Posts: 689
  • Country: gb
    • Electronic controls
Re: The phasing out of 32 bit
« Reply #91 on: July 15, 2021, 02:32:50 am »

64 bit can double the memory requirements in some applications...   64 bit isn't a universal "good"...

X86 can load 8, 16, 32 or 64 bits according to the instruction.
You don't use 64 bit instructions unless really needed, so very little extra code is needed.
In fact, having to expand 32 bit data to 64 bits will take even more memory than just using plain 64 bits to start with.

32 bit Windows PCs were limited to 4GB of memory addressing.
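The point about data sizes can be shown concretely: fixed-width types keep their declared size even in a 64-bit process, so recompiling for 64-bit doesn't automatically double your data. A small sketch using Python's ctypes to report the sizes a C compiler would use:

```python
import ctypes

# Fixed-width integer types stay their declared size in a 64-bit
# process; only pointers (and size_t, etc.) grow with the word size.
for name, ctype in [("uint8_t", ctypes.c_uint8),
                    ("uint16_t", ctypes.c_uint16),
                    ("uint32_t", ctypes.c_uint32),
                    ("uint64_t", ctypes.c_uint64)]:
    print(name, ctypes.sizeof(ctype), "bytes")
# uint8_t 1 bytes / uint16_t 2 bytes / uint32_t 4 bytes / uint64_t 8 bytes
```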




 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: The phasing out of 32 bit
« Reply #92 on: July 15, 2021, 04:49:44 am »
64 bit can double the memory requirements in some applications...   64 bit isn't a universal "good"...

X86 can load 8,16, 32 or 64 bits according to the instruction.
You dont use 64 bit instructions unless really needed so very little extra code is needed.
In fact having to expand a 32 bit data to 64 bit string will take even more memory than just using plain 64 bits to start with.

32 bit Windows pc's were limited to 4gb memory addressing.

Breaking the 4 GB memory limit is the main reason to go to a 64 bit CPU, so pointers are 64 bits, and programs that use a lot of pointers can have their memory usage increased by up to 2x. Not many programs actually approach this, but languages with dynamic typing, which can store any of char, int, float, or a pointer in each variable, have to use as much space as the biggest one, i.e. the pointer variant.
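The dynamic-typing case can be sketched with ctypes: model a value slot as a tag plus a union, and the union is forced to be as wide as its widest member, the pointer (the layout here is a generic illustration, not any particular language runtime):

```python
import ctypes

class Payload(ctypes.Union):
    """Payload of a dynamic-language value slot: sized to the widest
    member, which on a 64-bit build is the pointer."""
    _fields_ = [("as_int", ctypes.c_int32),
                ("as_float", ctypes.c_float),
                ("as_ptr", ctypes.c_void_p)]

class TaggedValue(ctypes.Structure):
    """A type tag plus the pointer-sized payload."""
    _fields_ = [("tag", ctypes.c_uint8),
                ("payload", Payload)]

print(ctypes.sizeof(ctypes.c_void_p))  # 8 in a 64-bit process, 4 in 32-bit
print(ctypes.sizeof(Payload))          # tracks the pointer, not the int/float
print(ctypes.sizeof(TaggedValue))      # tag + padding + payload
```

So every value pays the pointer's width, even when it only holds a 32-bit int, which is where the up-to-2x growth comes from.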

On x86, the move to 64 bit registers coincided with going from 8 to 16 registers. Some people want to use the extra registers but don't care about the increased address space, so the "x32" ABI was developed. As I understand it, x32 supports 64 bit integers in registers, so it still has to save/restore 8 bytes per register in stack frames. Uptake never seemed to be very high, and there are recent moves to deprecate it.
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: The phasing out of 32 bit
« Reply #93 on: July 16, 2021, 11:43:02 am »
64 bit can double the memory requirements in some applications...   64 bit isn't a universal "good"...

X86 can load 8,16, 32 or 64 bits according to the instruction.
You dont use 64 bit instructions unless really needed so very little extra code is needed.
In fact having to expand a 32 bit data to 64 bit string will take even more memory than just using plain 64 bits to start with.

32 bit Windows pc's were limited to 4gb memory addressing.

Breaking the 4 GB memory limit is the main reason to go to a 64 bit CPU, so pointers are 64 bits, so programs that use a lot of pointers can have increased memory usage by up to 2x. Not many programs actually approach this, but languages with dynamic typing that can store any of char, int, float, or a pointer in each variable have to use as much space as the biggest one i.e. the pointer variant.

On x86, the move to 64 bit registers coincided with going from 8 to 16 registers. Some people want to use the extra registers but don't care about the increased address space, so the "x32" ABI was developed. As I understand it, x32 supports 64 bit integers in registers, so it still has to save/restore 8 bytes per register in stack frames. Uptake never seemed to be very high, and there are recent moves to deprecate it.

Maybe I am old school..
and all those Z80 KBs got me spoiled...

But AFAIK 4G ( 4 GigaBytes... ) of memory is a lot.

So far I have not seen any "usual" average-joe applet use that much..

Compiling Linux with PAE support, making up to 64G visible, has been
around for decades.. and works fine ..
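For reference, a sketch of the kernel options involved (symbol names from the 32-bit x86 Kconfig; note that a PAE kernel raises the physical RAM limit, but each process is still capped at a 4 GB virtual address space):

```
CONFIG_X86_PAE=y       # 64-bit (PAE) page table entries on 32-bit x86
CONFIG_HIGHMEM64G=y    # support up to 64 GB of physical memory
```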

*MAYBE* some insanely huge images above 10k pixels wide..
MAYBE some huge databases.. or of course insanely huge 3D data that needs to be moved around..

MAYBE.. but still, 4G is a lot, and having the OS support 64G via PAE while staying 32b native and keeping binary-compatible distros around..

may be a point to consider in saving some good hardware and pockets

Paul  :-+
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: The phasing out of 32 bit
« Reply #94 on: July 16, 2021, 12:20:36 pm »
Web browsers can get over 4 GB. Yeah, that's bloat and I'm sure they could do better, but people demand the functionality and spend half their lives in their web browser.

Linking LLVM binaries takes something like 6 or 8 GB of RAM using the standard GNU ld linker. That's bloat too, and gold and Apple's linker and others don't use so much space and time linking the same programs.

Virtual machines need as much RAM as you're giving the guest OS, plus some.

Even if a program is using less than 4 GB of actual memory, being able to space things out in the address space is handy.

I started on 6502s and Z80s too, and I still do a lot of things with ARM and RISC-V boards with less RAM than an Apple ][ had -- not to mention Arduino Unos with a whopping 2K of RAM, and my favourite bare ATtiny85 chip with 512 bytes.

But I'm not posting on eevblog/forum from one of those.
« Last Edit: July 16, 2021, 11:35:22 pm by brucehoult »
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: The phasing out of 32 bit
« Reply #95 on: July 16, 2021, 03:41:36 pm »
I see your point...

I remember reading here on the forum some folk complaining that they could not open 120 tabs on Firefox.. because it crashed...  ::)

My parents, too, here and there go crazy about Android dullness..

Well, in both cases the system has about 100 browser contexts open and going..

Chrome and/or Firefox on Android with about 10 tiny windows is already a sick mule.

A folk opening 120 tabs in a browser with 4G.. will probably open 600 tabs with 16G...

There is no "solution" for that.. just raising some limits...

Alas.. they *should* already have been there... but who cares..   :o

The more hardware the better..
Paul

 

