I recall reading about some ancient PGA chips (SERDES maybe?) that ran rather hot to begin with, were always on the verge of failure, and were clearly a wear item to be replaced regularly; if not by design intent, then certainly as a matter of repair practice. Which, of course, made them rather expensive to keep replacing, on top of being ceramic PGAs already, and particularly as they went obsolete and supplies dried up...
I forget what that was in; some vintage minicomputer system I think, or perhaps a video processor. Hopefully this is enough to jog someone's memory...
Even the best get things wrong from time to time; that might've been an IDT part, plus there's the Intel example above. And I mean, Intel is on the working group, so it's not like they weren't aware of the design issues. High-speed interfaces, I think, are a tight compromise between robustness (especially ESD and cable discharge), speed (maximum bandwidth for minimum current consumption) and reliability (pushing out aging, electromigration, etc. failure modes to some decades at operating limits); all the while refining process control so the chips are actually being made the way they were designed. New IO interfaces like those (i.e. back when SATA, PCIe, etc. were new) might well have been verified down to physical simulations of the process node, but maybe also just from SPICE models extracted from whatever combination of physical simulation and test fab was available.

The challenge is: to maximize bandwidth, you maximize current density, and therefore gm/C (bandwidth, or the figure of merit thereof). But with the IO transistors basically short-circuited (most of VDD dropping across them, maximum current density), and being such small and delicate features, they're only going to last so long; failure is inevitable. It's a matter of tuning those parameters (bandwidth, geometry, etc.) to get just enough worst-case margin that yield, product quality and reliability are acceptable.
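To put rough numbers on that tradeoff, here's a back-of-envelope sketch: the bandwidth figure of merit goes as gm/C, gm rides on bias current, and electromigration lifetime falls off steeply with current density per Black's equation. All the constants below (A, n, Ea, the device values) are made up for illustration, not from any real process:

```python
import math

# Bandwidth figure of merit: f_T ~ gm / (2*pi*Cgg). gm rises with bias
# current, so more bandwidth means more current density J in the same
# tiny devices and wires.
def ft_hz(gm_S, cgg_F):
    return gm_S / (2 * math.pi * cgg_F)

# Black's equation for electromigration lifetime:
#   MTTF = A * J^(-n) * exp(Ea / (k*T))
# A, n, Ea here are made-up illustrative values, NOT real process data.
K_BOLTZ_EV = 8.617e-5  # Boltzmann constant, eV/K
def mttf_hours(J_A_cm2, T_K, A=1e3, n=2.0, Ea_eV=0.9):
    return A * J_A_cm2 ** -n * math.exp(Ea_eV / (K_BOLTZ_EV * T_K))

print(f"f_T ~ {ft_hz(1e-3, 1e-15) / 1e9:.0f} GHz for gm = 1 mS, Cgg = 1 fF")
for J in (1e5, 2e5, 4e5):  # A/cm^2, ballpark EM-stress territory
    print(f"J = {J:.0e} A/cm^2 -> MTTF ~ {mttf_hours(J, 378) / 8766:.1f} years at 105 C")
```

Note the n = 2 exponent: halve the current density and you quadruple the EM lifetime, which is exactly the kind of knob being traded against bandwidth.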
And yeah, I know LVDS-style interfaces are a bit different from that. I don't know offhand what size (W, L) transistors they use, but obviously the low current means big savings, and the outputs are biased somewhere between VDD and GND so the voltage stress isn't as severe; besides which, these IOs usually operate from lower supplies anyway (e.g. PCIe, DDR5, etc. regularly use 1.2 V or lower).
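For concreteness, standard LVDS (TIA/EIA-644) nominals are about 3.5 mA of loop current into a 100 ohm differential termination, giving ~350 mV of swing around a ~1.2 V common mode. A quick comparison against a generic full-swing CMOS driver (the CMOS load and toggle rate below are assumptions, not from any datasheet):

```python
# Rough comparison: LVDS vs. full-swing CMOS signaling power.
# LVDS figures are TIA/EIA-644 nominals; the CMOS load and toggle
# rate are illustrative assumptions, not from any datasheet.
I_LOOP = 3.5e-3                  # A, nominal LVDS loop current
R_TERM = 100.0                   # ohms, differential termination
VDD = 2.5                        # V, typical LVDS driver supply

v_swing = I_LOOP * R_TERM        # ~0.35 V differential swing
p_lvds = VDD * I_LOOP            # ~8.75 mW total draw, independent of rate

C_LOAD = 10e-12                  # F, assumed CMOS load (trace + inputs)
V_CMOS = 3.3                     # V, full-swing rail
f = 500e6                        # Hz, toggle rate (illustrative)
p_cmos = C_LOAD * V_CMOS**2 * f  # dynamic power, and it scales with f

print(f"LVDS: swing {v_swing*1e3:.0f} mV, power {p_lvds*1e3:.2f} mW")
print(f"CMOS: swing {V_CMOS:.1f} V, power {p_cmos*1e3:.1f} mW at {f/1e6:.0f} MHz")
```

And the LVDS figure stays flat with data rate while the CMOS one scales with frequency, on top of the reduced voltage stress from never swinging rail to rail.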
You see a hint of this when looking at FPGA IO ratings: typically they caution against any DC loading at all, or anything more than a couple mA, say, even though you might only need them to be 74HC-equivalent (and they might be configurable to comparable ratings otherwise). It's when robustness is dictated by the finest output structures (many are configurable for LVDS etc.) that these ratings need to be posted, and honored.
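A crude way to see why a few mA DC is where the datasheets draw the line: suppose the load current has to pass through one fine metal finger of an LVDS-capable output structure. The geometry below is purely my assumption, not from any FPGA datasheet:

```python
# Why "a few mA DC" on a fine-pitch IO: current density in the metal.
# The wire cross-section is an assumed, illustrative geometry for an
# advanced-node output finger, not taken from any real part.
I_DC = 5e-3                      # A, sustained DC load current
W = 0.5e-4                       # cm (0.5 um metal width, assumed)
T = 0.2e-4                       # cm (0.2 um metal thickness, assumed)

J = I_DC / (W * T)               # A/cm^2 through one output finger
J_EM_LIMIT = 1e6                 # A/cm^2, ballpark EM design limit for Cu

print(f"J = {J:.1e} A/cm^2 vs. ~{J_EM_LIMIT:.0e} limit -> {J/J_EM_LIMIT:.0f}x over")
```

Real drivers parallel many fingers and fatten the upper metal layers, so this is pessimistic; but the scaling is the point, and switching currents largely average out while sustained DC does not.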
As for VCOREs, the scheme seems very popular these days: even AVRs (Dx, etc.) appear to be using it, allowing them to offer richer peripherals and memory on smaller chips, more stable performance versus supply (none of the CLK/VCC limitations of the MEGAs), and, in the AVR case, 5 V IOs, since that's the market they're going for (though there are tons of 3.3 V devices out there as well). It seems they've cracked a no-external-cap regulator scheme, so it works transparently, with only a couple of power management settings (and, I suppose, startup time limitations) even hinting at the architecture.
Internal switching regulators are a relatively new thing too. Usually they're not so highly integrated that the inductor and capacitor are included on-chip, but nothing more than those is required externally, besides the usual bypasses of course.
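As a sketch of what that external LC amounts to, here's the usual buck math for a hypothetical integrated regulator; all the values (Vin, Vout, switching frequency, L, C) are assumptions for illustration, not any particular device's requirements:

```python
# Quick sizing sketch for the external LC on an integrated buck regulator.
# All values are illustrative assumptions for a generic part.
V_IN = 3.3                       # V, input rail
V_OUT = 1.2                      # V, core voltage
F_SW = 2e6                       # Hz, switching frequency (assumed)
L = 2.2e-6                       # H, external inductor
C = 10e-6                        # F, external output capacitor

D = V_OUT / V_IN                          # ideal duty cycle
di = (V_IN - V_OUT) * D / (L * F_SW)      # inductor ripple current, A p-p
dv = di / (8 * F_SW * C)                  # output ripple from C alone, V p-p

print(f"D = {D:.2f}, ripple {di*1e3:.0f} mA p-p, ~{dv*1e3:.1f} mV p-p on C")
```

At MHz-range switching frequencies the L and C come out small enough to sit right next to the chip, which is presumably why the scheme has become practical to integrate.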
Tim