The 8088 in the original PC had only 20 address lines, good for 1 MB. The maximum real-mode address, FFFF:FFFF, corresponds to linear address 0x10FFEF, which would silently wrap to 0x0FFEF. When the 286 (with 24 address lines) was introduced, it had a real mode intended to be 100% compatible with the 8088. However, it failed to perform this address truncation (a bug), and it turned out that programs existed which actually depended on the truncation. To achieve perfect compatibility, IBM added a switch to enable/disable the 0x100000 address bit. Since the 8042 keyboard controller happened to have a spare pin, that pin was used to control the AND gate that disables this address bit. The signal is called A20, and when it is zero, bit 20 of every address is cleared.
Present
Why do we have to worry about this nonsense? Because by default the A20 address line is disabled at boot time, so the operating system has to find out how to enable it, and that may be nontrivial since the details depend on the chipset used.
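For reference, here is a rough NASM-style sketch of the two gentler ways a loader typically tries to enable it: the BIOS call INT 15h AX=2401h, then the "fast A20" bit in System Control Port A (0x92). The third classic way, programming the 8042 through ports 0x64/0x60, is what the original AT needed, and the exact behaviour still varies by chipset, so treat this as an illustration rather than a complete routine:
[16 bit]
enable_a20:
        ; Method 1: ask the BIOS (INT 15h, AX=2401h = "enable A20 gate")
        mov  ax, 0x2401
        int  0x15
        jnc  .done              ; CF clear -> the BIOS did it for us
        ; Method 2: "fast A20" via System Control Port A (0x92), bit 1
        in   al, 0x92
        test al, 2
        jnz  .done              ; already enabled
        or   al, 2              ; set the A20 bit
        and  al, 0xFE           ; never touch bit 0 (fast reset!)
        out  0x92, al
.done:
        ret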
Well, Professor Andrew S. Tanenbaum commented, "if Intel had designed humans, we would have had some cr0 register to switch atavistic regression to chimp mode".
These were all business decisions more so than engineering ones. Calling Intel's engineers brain-dead is completely silly.
Cost was certainly a major concern back in the day, and it explains many of the mediocre solutions that had to be devised (but which still allowed personal computing to exist at all in the end).
These were all business decisions more so than engineering ones. Calling Intel's engineers brain-dead is completely silly.
In the case of Intel, and of the others around at the time, these companies were pioneering the technology.
The original x86 architecture is like the width of two horses' bums side by side.
Get UEFI hardware and forget about 16 bit modes :P
Yesterday I found a shocking fact: although the technical manuals speak of 16-bit registers (e.g. AX is {AH, AL}), in real mode you can still access all the 32-bit registers (e.g. EAX contains AX) if you use a prefix on your "mov" instruction.
So all of the 32-bit registers are usable simply by prefixing an instruction with 0x66, and it seems to work even on Cyrix CPUs.
I don't know if it's a hardware bug, but I am happy it's there ;D
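For the curious, this is roughly what it looks like in NASM: in a BITS 16 section the assembler emits the 0x66 operand-size prefix for you whenever you name a 32-bit register, which is exactly the same byte you can also write by hand (the values below are just examples):
[16 bit]
        ; assembled in a BITS 16 section
        mov  ax,  0x1234        ; plain 16-bit move:     B8 34 12
        mov  eax, 0x12345678    ; NASM adds the prefix:  66 B8 78 56 34 12
        ; the same instruction written byte by byte:
        db   0x66               ; operand-size override prefix
        db   0xB8               ; MOV eAX, imm
        dd   0x12345678         ; 32-bit immediate -> loads EAX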
In the case of Intel, and of the others around at the time, these companies were pioneering the technology.
Motorola was a pioneering company just as Intel was, and if you look at the m68000 there are issues as well. One concerns the bus-error handling, which is unrecoverable by design (wrong); the other involves an instruction that leaks privileges between user mode and kernel mode (wrong).
Both were fixed in the 68010, 68020, ... And note: 68k moved from 8-bit (6800) to 32-bit (68k) with the same problem you have in the 80286, namely only 24 bits of physical addressing on the 68000; but since the design was 32-bit from the beginning, this was not a problem when the 68020 added 32 bits of physical addressing.
Intel ...
... let me understand: you have an 8086 CPU with 20 bits of physical and logical address, you know your CPUs will grow and expand over time, and you think it's a good idea to put a circuit into the next CPU, the 80286, that disables a high address line just to make it backward compatible?
And, worse still, to let a hardware bug like Unreal Mode live on in every CPU they have made since? And given that Unreal Mode is relied upon by software like DOS32 and GRUB, why didn't they consolidate it as a documented feature?
That's what I do find stupid and completely wrong.
That is not a bug. It is supposed to work that way.
[16 bit]
check_bz:
pusha
; Test magic number of the BZ header
mov eax, dword [hdraddr + 0x202]
cmp eax, 'HdrS' ; <-------------------------- yeah, 32bit, it works!
mov dx, msg_sys_error1
jne panic
; Test jump instruction is there
mov al, byte [kernseg * 0x10]
cmp al, 0xeb
mov dx, msg_sys_error2
jne panic
popa
ret
Dude .... :palm:
I HAVE a business, I said several times that I develop ICEs and debuggers, plus other stuff, and if x86 takes me 500 hours to develop something, while other architectures/platforms take 50 hours, I think I am right to say that x86 is bad, especially if customers don't pay enough!
(and they don't)
We are in the "programming section" of the forum, it's supposed to talk about software/hardware problems (the title is "Unrel Mode"), but instead, you see and focus on "ideal" crusades for which you have to feel to defend (with x86 fanboyism) something with zero technical background just to see if your useless bullshit can make your life better.
Frankly, it's a troll attitude, and if last time I gave you the benefit of the doubt, now I have enough: stay on my ignore list!
p.s.
you also like to offend people personally, which, LOL, is lucky for you, since I don't give a shit about filing lawsuits against random people on the internet; I have many more interesting things in my life than spending my time making bad blood with people like you.
While I'm not familiar with UK law, my comments, while not directly intended to offend you, are protected free speech under the laws of the USA, and legally speaking you haven't got a leg to stand on: you can't sue somebody for offending you, even if it was deliberate.
You can't sue somebody for offending you.
He can :-DD
LOL :popcorn:
Going by memory, each code segment has a bit which specifies the default data width, which can be 8/16 or 8/32 (byte operations are always available; the bit selects whether the full-width operand is 16 or 32 bits). When the operand-size prefix is used, the other width is selected for that one instruction.
So in real mode the default is 8/16, and when the prefix is used, the 32-bit data width becomes available.
The danger is that while this makes the 32-bit registers available, a 16-bit operating system is unlikely to save the full 32-bit register state across interrupts and task switches, so only one program can safely make use of them. The same problem sometimes comes up with vector register extensions.
0000 0040 0032 bootstrap<-
0000 0680 2800 kernel
0000 2e80 8ef0 Extra
____________
| |
| hAllo! |
|____________|
Installing interrupt service routines (ISRs).
Enabling external interrupts.
Initializing serial (IRQ 4).
Initializing dynamic memory ... failed, found corrupted memory
dram: 00010000 00100000
> show reg.cr0
CPU.cr0=0x00000011
> check mem
sys_mem_size=32M
app_mem_size=25K
app_mem_addr=0x00010000
addr=0x00001000 ... 4k is_eol=00000000 (skipped)
addr=0x00002000 ... 8k is_eol=00000000 (skipped)
addr=0x00004000 ... 16k is_eol=00000000 (skipped)
addr=0x00008000 ... 32k is_eol=00000000 (skipped)
addr=0x00010000 ... 64k is_eol=00000000, OK
addr=0x00020000 ... 128k is_eol=00000000, OK
addr=0x00040000 ... 256k is_eol=00000000, OK
addr=0x00080000 ... 512k is_eol=00000000, OK
addr=0x00100000 ... 1024k is_eol=00000000, corrupted
addr=0x00200000 ... 2M is_eol=00000000, corrupted
addr=0x00400000 ... 4M is_eol=00000000, corrupted
addr=0x00800000 ... 8M is_eol=00000000, corrupted
addr=0x01000000 ... 16M is_eol=00000000, corrupted
addr=0x02000000 ... 32M is_eol=00000001, corrupted
> load app 0x00100000
name=kernel
kind=elf, 32-bit, little-endian
entry=0x00100200
size=6Mbyte
> exec elf
corrupted image
______________________
| |
| CPU-x86 |
| |
| |
| A31 ___________|___ A31
| ... | ...
| A21 ___________|___ A21
| ____ |
| +----------|AND \__|___ A20
| | A20 ---|____/ |
| | A19 ___________|___ A19
| | ... | ...
| | A00 ___________|___ A00
| | |
| +---------------------- #A20
|_____________________|
Thanks! Indeed it's not clear to me. Too many pages to read, and too many exceptions.
Plus, it seems that if you force "unreal mode" interrupts don't work as expected :-//
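For the record, the usual way into unreal mode is roughly the sketch below: briefly enter protected mode, load DS from a descriptor with a 4 GiB limit, then drop back to real mode; the CPU keeps the cached limit. The interrupt trouble you mention typically shows up when a handler (or the BIOS) hops into protected mode and back with ordinary 64 KiB limits, which wipes that cached limit. A minimal, untested sketch; gdt/gdt_desc are placeholder labels and the GDTR base assumes the code runs with a segment base of 0:
[16 bit]
enter_unreal:
        cli
        push ds
        lgdt [gdt_desc]             ; load a tiny GDT (below)
        mov  eax, cr0
        or   al, 1                  ; set CR0.PE -> protected mode
        mov  cr0, eax
        jmp  short $+2              ; flush the prefetch queue
        mov  bx, 0x08               ; selector of the flat data descriptor
        mov  ds, bx                 ; DS descriptor cache now has a 4 GiB limit
        and  al, 0xFE               ; clear CR0.PE -> back to real mode
        mov  cr0, eax
        pop  ds                     ; visible DS restored, cached limit stays
        sti
        ret

gdt:    dq 0                        ; null descriptor
        dq 0x00CF92000000FFFF       ; base 0, limit 4 GiB, writable data
gdt_desc:
        dw gdt_desc - gdt - 1       ; GDT limit
        dd gdt                      ; GDT linear base (assumes segment base 0)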
Get UEFI hardware and forget about 16 bit modes :P
eheheh I wish ;D
My Soekris net5501 is a 2006 design, and its Geode CPU is ~i586
So, it's pre-UEFI.
It would be less brain dead to just redo the whole thing in a RISC architecture
less than optimum solutions
OK, I am now looking at a GRUB implementation that switches the CPU into "protected mode" but then executes the BIOS interrupt calls in Virtual 8086 mode.
I am impressed that it actually works, but so it does :o :o :o
By fluke, I hit the exact spot on the fifth attempt.
So, I reverse-engineered that piece of BIOS code and found how things are managed for the A20-gate.
(the dummy keyboard controller is much worse than the 8042, that's it)
Then I modified the kernel setup to enter directly into protected mode.
So the bootloader directly loads the uncompressed kernel as ELF32 and jumps into it in protected mode!
I rewrote all the setup parts and the corresponding Makefile to compile everything as 32-bit; there are really no 16-bit parts left.
So, the unreal mode is neither needed nor considered an option.
Problem solved. It works!
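For anyone following along, the core of such a direct real-to-protected switch is roughly the sketch below; this is a simplified outline, not the actual setup code of this bootloader, and CODE32_SEL, DATA32_SEL, gdt_desc and the stack address are placeholders (the entry address is just the one the loader reported above):
[16 bit]
go_protected:
        cli
        lgdt [gdt_desc]             ; GDT with flat 32-bit code + data descriptors
        mov  eax, cr0
        or   al, 1                  ; set CR0.PE
        mov  cr0, eax
        jmp  CODE32_SEL:pm_entry    ; far jump loads CS and flushes the pipeline

[32 bit]
pm_entry:
        mov  ax, DATA32_SEL         ; reload the data segment registers
        mov  ds, ax
        mov  es, ax
        mov  ss, ax
        mov  esp, 0x90000           ; some known-free stack (placeholder)
        mov  eax, 0x00100200        ; ELF32 entry point (example value)
        jmp  eax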
Restyling the code, and finalizing.
Thanks for your contribution :-+
It would be less brain dead to just redo the whole thing in a RISC architecture
less than optimum solutions
Thanks for your zero technical contribution, it's not useless as it confirms I am right to refuse to sign a contract, as x86/16 and x86/32 are too much pain, and I am not even paid enough for that.
Sorry, I won't give more details.
You can't sue somebody for offending you. He can :-DD
But also don't let frustration cloud your judgement; these things absolutely existed for a reason, and that reason is money, and backwards compatibility. Every single attempted mainstream PC, that broke backwards compatibility, lost market share and died an ignominious death. Microsoft may have a lot of problems with their OSs (and software, and, y'know, everything else), but -- despite how janky it may seem -- they did a TREMENDOUS amount of work ensuring most published software continued to be functional through history. The fact that old software on newer Windowses is as glitchy as it sometimes (often?) is, is a backhanded testament to all that work -- that is, that they were able to maintain enough compatibility that those poorly-written programs ran at all, let alone mostly usably, is a remarkable feat. And that line of compatibility extends all the way back to the earliest PC hardware, and the programs written for it -- not in a continuous unbroken line, but with enough overlap between multiple generations that you could continue to migrate all your work, and many of your programs, without loss of data or added expense. THAT is how the PC won, and that is how Microsoft dominated it.
And so, A20, and real mode, and all the other bullshit -- is just other aspects of the same dynamic. It ain't pretty, but success isn't always pretty, or easy.
Tim
So you're gonna sign the contract?
The whole computing world is pretty much made out of crap that just works. Am I exaggerating? :)
You know you are!
As I said, I'm pretty sure x86 is going to die eventually. It's already a lot less relevant these days than it used to be. But that's just one piece. The whole computing world is pretty much made out of crap that just works. Am I exaggerating? :)
recompile" between generations. This mostly works for embedded devices, including smartphones, but did not work for personal computers, workstations, and servers, except maybe for IBM and Apple
When Jean-Louis Gassée, a former Apple Computer executive, founded Be Inc., he thought the PowerPC 603/604 (G2) was a good platform on which to develop BeOS; imagine if he had also had the i960 at his disposal. Most likely his BeBox PPC-G2 would not have wasted millions on research and development before realizing that there was no future with PowerPC because it was too niche.
Later development releases of BeOS/PPC were ported to run on the Macintosh, and Macintosh clone makers, including Power Computing and Motorola, signed deals to ship BeOS with their hardware when the OS was finalized.
In light of this, Be stopped production on the BeBox after selling only around 2000 units, and focused entirely on the development of BeOS/x86.
Apple has successfully managed to completely change hardware architectures several times over the years with little friction, but this is almost an exception, due to the fact that they have a very specific business approach (it can't be denied) that has yet to be copied, and the fact that they entirely control both the hardware and the software, which has allowed them a lot of flexibility.
I feel like a broken record saying this, but backward compatibility is king.
Who would have thought in 2023 we would still have DOS mode or command prompt mode.
The reason Intel abandoned the i960 is relatively simple and summed up in the Wikipedia article.
It was tightly linked to them acquiring StrongARM, all a consequence of a lawsuit with DEC, etc, which basically replaced the i960.
I had a couple of 16-bit programs I used a lot, but they won't run now.
The reason Intel abandoned the i960 is relatively simple and summed up in the Wikipedia article.
It was tightly linked to them acquiring StrongARM, all a consequence of a lawsuit with DEC, etc, which basically replaced the i960.
The PXA (XScale, ARMv5TE) used by Sharp for their PDA line? Intel ARM ... is only a part of the reason, and not reported in any Wikipedia article.
Intel's i960 (or 80960) was a RISC-based microprocessor design that became popular during the early 1990s as an embedded microcontroller. It became a best-selling CPU in that segment, along with the competing AMD 29000. In spite of its success, Intel stopped marketing the i960 in the late 1990s, as a result of a settlement with DEC whereby Intel received the rights to produce the StrongARM CPU. The processor continues to be used for a few military applications.
Intel's core strength seems to be:
- focus on IA-32, rather than a "competing" architecture i960
- focus on IA-32, rather than a "competing" architecture ARM
So, it seems that the Intel leadership has repeated the same error of judgment first with i960 and then with Arm, plus a third error of judgment, even in the opposite direction (allocating money to a wrong solution), with Itanium.
Considering Intel's success
There is still a thriving market for expandable x86 personal computers, workstations, and servers, but there is no such market even now for ARM.
the Apple equivalents of my x86 workstations with massive expandability going back three or four generations do not exist, and such an ARM alternative has never materialized.
Again, the same policy from the ruling class, and that's exactly the point: they didn't want to do anything except x86 because, according to them, x86 would bring in more money.
The funny thing is that they are so bad at evaluating things that they then invested in Itanium and today have to pay AMD a lot of money to be allowed to produce x86-64.
Which is LOL :-DD
My Boss's IBM-Tyran POWER9 workstation is superior by every means to every XEON-based workstation.
It consumes less electricity, is more efficient, has the same expandability in terms of the number of PCIe slots, and is even more reliable than XEON; its multi-core mechanisms are more robust.
sure, success ... worst intel cpus (https://www.xda-developers.com/worst-intel-cpus/) :popcorn:
not sure what drives DiTHBo's hatred
So, it seems that the Intel leadership has repeated the same error of judgment first with i960 and then with Arm, plus a third error of judgment, even in the opposite direction (allocating money to a wrong solution), with Itanium.
I'm not sure what drives DiTHBo's hatred of the X86 family. He is not wrong about some of the difficult issues, but those don't explain the level of antipathy.
As I see it, the answer is both simple and complicated at the same time.
I'm not sure what drives DiTHBo's hatred of the X86 family. He is not wrong about some of the difficult issues, but those don't explain the level of antipathy.
compatibility
You're assuming, even claiming, that this was an error. But Intel is still here, and financially well off. Is this an error?
Was MS-DOS an innovation?
retaining compatibility. Thus, the reason for the success and prevalence of x86 isn't so much because of technical reasons, but because of business choices
So, I can certainly understand the antipathy, even if I do not feel strongly about the issue myself.
retaining compatibility. Thus, the reason for the success and prevalence of x86 isn't so much because of technical reasons, but because of business choices
Yup, precisely.
Popular is popular :o :o :o
Was MS-DOS an innovation? Heck no, it was quite a step backwards compared to what the architecture ended up being capable of, but because of backwards compatibility and other business reasons, it ended up being a construct of compromises rather than a clean design.
Was Windows an innovation? Heck no. You can look at Xerox Alto and then Apple Mac OS for innovations in that area.
I've been too scared to mention byte order.
You're assuming, even claiming, that this was an error. But Intel is still here, and financially well off. Is this an error?
I am assuming nothing, I am talking about facts!
From financial articles, Itanium and Atom were two big financial flops for intel itself.
I'm not sure what drives DiTHBo's hatred of the X86 family. He is not wrong about some of the difficult issues, but those don't explain the level of antipathy.
everyone repeats it, over and over - compatibility - and what do you want? Atom x86 even on smartphones? to be binary compatible with your PC? Would you like to run DOS on your smartphone?
LOL :-DD
No one sane would put an x86 in phones and tablets because its power consumption sucks. When we talk about ARM and Intel's decision to decommission XScale, this is what we are talking about, and I don't understand why we need to bend reality to avoid admitting that it was objectively, by the facts, a wrong choice by Intel's leadership, since all phones and tablets use ARM!
Mental illness I suspect,
Mental illness I suspect,
Aaaand any rational argument, any hope of convincing the other party of your position, is gone.
(Actually you probably did that with your earlier reply but you're really doubling down now.)
Tim
Mental illness I suspect,
Aaaand any rational argument, any hope of convincing the other party of your position, is gone.
(Actually you probably did that with your earlier reply but you're really doubling down now.)
Tim
I gave up seriously trying to convince him a long time ago
Is the purpose of this thread convincing the OP to like x86?
We gotta try harder :-DD
We gotta try harder :-DD
The funny part would be if he had typed this all on some x86 computer.
Doubt it. Too loud a RISC fanboy to touch one.
Mental illness I suspect,
Aaaand any rational argument, any hope of convincing the other party of your position, is gone.
(Actually you probably did that with your earlier reply but you're really doubling down now.)
Tim
I don't know what the purpose is. At first the OP ranted about some low-level x86 annoyances. Fair enough if you have to deal with these.
Then his rant turned into blaming it all on Intel engineers and claiming they all have been morons.
Then he claimed Intel has kept making strategic errors, in spite of its success.
The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do... At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it.
[..]
It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought
james_s is on my ignore list for insulting me personally. I see from your quote that he is doing it again; I haven't been reading him for a while, as I clearly wrote, so why does he keep going when I don't care what he says?
Again, it's called trolling
I opened this topic in the hope that someone would tell me something about "unreal mode", and found that everyone speaks well of x86 yet nobody actually knows anything about it; still, everyone talks.
that's indeed its "being popular"
meaning that Intel bet that ARM would take as long to reach x86-level of performance as Intel would take to reduce x86 power consumption to SOC levels.
In the last post, I pointed out how they bet on keeping multi-core support for these machines just a hair above "decent", and how much they underestimate, again, their "botched solutions", specifically assuming it won't be a problem when machines have more than 80 cores or an interface to a quantum computing engine.
I gave up seriously trying to convince him a long time ago
So you left the thread and ticked "ignore" on it..?
"Try, try again; then give up. There's no point being a damned fool about it."
Tim
And why do you have to *convince* me that x86 is cool? I don't have to prove myself to anyone, and the purpose of forums is precisely to bring different positions together, because that's how you learn new stuff; that's why I offered my honest thinking, otherwise discussing is completely useless.
You know what, don't be offended, but this forum has lost a lot of polish, which is disappointing.
Greetings.
Ah, my apologies for echoing it then. Matter of fact, that sounds like a good enough vote that moderators may want to get involved. Cheers.
Tim
If I could, I would probably throw away pretty much everything that currently makes the computing industry. It's sort of rotten. But it sort of works, so it's still useful and while I have some neat ideas (like many others I guess), I'm not sure they would end up resulting in anything significantly better. I'm afraid I (as most of us) would tend to focus on details while missing out on the big picture.
It is not ARM that has reached x86 levels of performance, but Apple
It is not ARM that has reached x86 levels of performance, but Apple
Arm chips exist for all markets.
Apple makes the Arm chips that can compete in the desktop and laptop consumer space, while Ampere(1), Amazon, and Nvidia supply the Arm chips which can compete in data centers!
I'm afraid I (as most of us) would tend to focus on details while missing out on the big picture.
Apple competes with various system makers, but they do not compete directly with Intel or AMD.
I used to own a cloud computing company - we were not massive but we had racks filled with Intel Xeon CPUs. I was often asked why we didn't consider AMD, and the reason is one I never hear you mention: you have to cripple CPU features to allow hot transfer of workload between AMD and Intel. So in an environment where we were counting cores in the thousands or higher, introducing new CPUs that are not "compatible" with the fleet would be an insane choice. You'd have to prevent live migration of workloads between heads or, alternatively, modify hypervisors to limit the features exposed to guest workloads. Because of this, we were effectively "married" to Intel. Moving some devices to AMD would have required too much development time to modify control systems, limited interoperability, or meant that we would have had to make very large purchases of AMD to essentially split into two distinct farms, and that would have been quite a bet to have taken.
I can confirm it's a BIG problem in a farm!
ARM is the most rubbish arch in the world today. And it goes mostly into smartphones; hmm, maybe not a coincidence.
Linus Torvalds to Andrey, March 31, 2021
> You obviously have to write non-transactional path, and it will have its pitfalls, but the point
> is that you could have better best-case and average performance with TSX.
No, you really really don't.
TSX was slow even when it worked and didn't have aborts, and never gave you "best-case" performance at all due to that. Simple non-contended non-TSX locks worked better.
And TSX was a complete disaster when you had any data contention, and just caused overhead and aborts and fallbacks to locked code, so - no surprise - plain non-TSX locks worked better. And data contention is quite common, and happened for a lot of trivial reasons (statistics being one).
And no, TSX didn't have better average performance either, because in order to avoid the problems, you had to do statistics in software, which added its own set of overhead.
As far as I know, there were approximately zero real-world loads that were better with TSX than without.
The only case that TSX ever did ok on was when there was zero data contention at all, and lots of cache coherence costs due almost entirely due to locking, and then TSX can keep the lock as a shared cache line. Yes, this really can happen, but most of the time it happens is when you also have big enough locked regions that they don't get caught by the transactional memory due to size overflows.
And making the transaction size larger makes the costs higher too, so now you need to do a much better job at predicting ahead of time whether transactions will succeed or not. Which Intel entirely screwed up, and I blame them completely. I told them at the first meeting they had (before TSX was public) that they need to add a TSX predictor, and they never did.
And the problems with TSX were legion, including data leaks and actual outright memory ordering bugs.
TSX was garbage, and remains so.
This is not to say that you couldn't get transactional memory right, but as it stands right now, I do not believe that anybody has ever had an actual successful and useful implementation of transactional memory.
And I can pretty much guarantee that to do it right you need to have a transaction success predictor (like a branch predictor) so that software doesn't have to deal with yet another issue of "on this uarch, and this load, the transaction size is too small to fit this lock".
I'm surprised that ARM made it part of v9 (and surprised that ARM kept the 32-bit compatibility part - I really thought they wanted to get rid of it).
Linus
And precisely you focus on details while missing out on the big picture: I am just saying -1- do not buy x86 only because it's the most popular when you can buy something else, and -2- inform yourself before pointing the finger only at the most popular solutions!
Yes, it's garbage. You can't even migrate a disk with a Linux installation from one SBC to another, but you complain that live migration of VMs is fussy on x86 because of some chipset differences between Intel and AMD.
Yes, they could have agreed on hardware standards instead of hiding it all behind an abstract turd like ACPI.
But look at the level of hardware standardization in ARM SOCs :-DD
POWER9
On POWER9N DD2.01 and below, TM is disabled. ie HWCAP2[PPC_FEATURE2_HTM] is not set.
On POWER9N DD2.1 TM is configured by firmware to always abort a transaction when tm suspend occurs. So tsuspend will cause a transaction to be aborted and rolled back. Kernel exceptions will also cause the transaction to be aborted and rolled back and the exception will not occur. If userspace constructs a sigcontext that enables TM suspend, the sigcontext will be rejected by the kernel. This mode is advertised to users with HWCAP2[PPC_FEATURE2_HTM_NO_SUSPEND] set. HWCAP2[PPC_FEATURE2_HTM] is not set in this mode.
On POWER9N DD2.2 and above, KVM and POWERVM emulate TM for guests (as described in commit 4bb3c7a0208f), hence TM is enabled for guests ie. HWCAP2[PPC_FEATURE2_HTM] is set for guest userspace. Guests that make heavy use of TM suspend (tsuspend or kernel suspend) will result in traps into the hypervisor and hence will suffer a performance degradation. Host userspace has TM disabled ie. HWCAP2[PPC_FEATURE2_HTM] is not set. (although we may enable it at some point in the future if we bring the emulation into host userspace context switching).
POWER9C DD1.2 and above are only available with POWERVM and hence Linux only runs as a guest. On these systems TM is emulated like on POWER9N DD2.2.
Guest migration from POWER8 to POWER9
will work with POWER9N DD2.2 and POWER9C DD1.2. Since earlier POWER9 processors don't support TM emulation, migration from POWER8 to POWER9 is not supported there.
I have migrated my FreeBSD router and NAS between multiple generations of x86 hardware
I have migrated my FreeBSD router and NAS between multiple generations of x86 hardware
you could do this because there are layers and layers of compatibility software, just as I've already pointed out for GRUB and LILO, which work at the cost of several orders of magnitude more lines of code than would otherwise be needed!
Which is annoying for devs and maintainers!
You can't even migrate a disk with Linux installation from one SBC to another
look at the level of hardware standardization in ARM SOCs
Apparently less annoying than doing it for ARM SBCs
I'm afraid I (as most of us) would tend to focus on details while missing out on the big picture.
And precisely you focus on details while missing out on the big picture:
I am just saying -1- do not buy x86 only because it's the most popular when you can buy something else, and -2- inform yourself before pointing the finger only at the most popular solutions!
Second, because it is really difficult to give an estimate of the deadline, both to the customer and to oneself when calculating the hours, and therefore we often end up "working practically for free".
In my opinion, this thread is veering too close to the 'analyzing the tone' or 'reading between the lines' domain, which is not really useful.
I have migrated my FreeBSD router and NAS between multiple generations of x86 hardware
you could do this because there are layers and layers of compatibility software, just as I've already pointed out for GRUB and LILO, which work at the cost of several orders of magnitude more lines of code than would otherwise be needed!
Which is annoying for devs and maintainers!
Apparently less annoying than doing it for ARM SBCs. Or just far, far more worthwhile because there are many more useful x86 machines in the world.
That said, what I've been trying here is to say that there is usually a reason why such decisions are taken, and you are refusing to understand those reasons. And the reasons are not just all because everyone is a stupid moron. It's a bit more complicated than this. If you had just said what you said right above, I don't think your thread would have triggered the reactions it did.
So yes, the PC standard is crufty with lots of layers, but the alternative isn't a better-designed platform with nicer APIs for system configuration, it's nothing.
The ARM approach is nice for fast booting of fixed configuration platforms and deterministic device naming. It's super inconvenient for having a modular system that you can download an OS for and boot it up.
It has relatively little to do with x86 vs ARM; it's the system standard. It's "IBM PC compatible" vs nothing. ARM system makers make basically no attempt to support any system-level standard. A new PC today can boot up using BIOS emulation and legacy devices that emulate the keyboard, display, and serial ports from the 80s. Then it can use a combination of probing for well-known devices, looking up ACPI tables, and PCIe enumeration to find all the relevant hardware settings it needs to operate. ARM has none of that other than possibly PCIe, and on-chip peripherals connected via AXI do not support plug-and-play enumeration. Even very common peripherals are often located at random addresses. Every ARM OS has to be customized for the particular hardware platform it's running on, with a custom device tree to tell the bootloader and OS where everything is.
So yes, the PC standard is crufty with lots of layers, but the alternative isn't a better-designed platform with nicer APIs for system configuration, it's nothing.
The ARM approach is nice for fast booting of fixed configuration platforms and deterministic device naming. It's super inconvenient for having a modular system that you can download an OS for and boot it up.
Note that nothing prevents fast booting with enumeration, and some x86 PCs support this, however there are fixed minimum timeouts on some legacy devices, for instance hard drives which must spin up, which cannot be shortened.
Note that nothing prevents fast booting with enumeration, and some x86 PCs support this, however there are fixed minimum timeouts on some legacy devices, for instance hard drives which must spin up, which cannot be shortened.
I see no reason these delays couldn't be bypassed if no legacy hard drives are found to be present though. Even a lot of older legacy systems that are still in service have been upgraded with modern SSDs.
Apple competes with various system makers, but they do not compete directly with Intel or AMD.
Talking about data centers and scientific simulators: the fastest supercomputers today are built using Arm and POWER9, not x86 chips. Why? Well, have you seen the latest Intel and AMD CPUs?
(https://www.youtube.com/watch?v=ambaCzFTyo8)
(Intel Xeon Platinum 8468)
Sure, they are able to outperform the chips made by Apple, Ampere, and NVidia, but only at the cost of insane power usage!
When you have a 20 kW cabinet it makes a BIG difference!
The Green 500 is equally dominated by x86, because even if Intel isn't doing so well, AMD is doing phenomenal work in the server space.