Semiconductor design and manufacturing - and microprocessor design in particular - comes with a lot of intricate complexity. FWIW, here is an excerpt from Wikipedia (maybe not the world's authority on microprocessors, but I think it's a reasonable attempt at describing die shrinks):
Die shrinks are the key to improving price/performance at semiconductor companies such as Samsung, Intel, TSMC, and SK Hynix, and fabless manufacturers such as AMD (including the former ATI), NVIDIA and MediaTek.
Intel, in particular, formerly focused on leveraging die shrinks to improve product performance at a regular cadence through its Tick-Tock model. In this business model, every new microarchitecture (tock) is followed by a die shrink (tick) to improve performance with the same microarchitecture.[2]
Die shrinks are beneficial to end-users as shrinking a die reduces the current used by each transistor switching on or off in semiconductor devices while maintaining the same clock frequency of a chip, making a product with less power consumption (and thus less heat production), increased clock rate headroom, and lower prices.[2] Since the cost to fabricate a 200-mm or 300-mm silicon wafer is proportional to the number of fabrication steps, and not proportional to the number of chips on the wafer, die shrinks cram more chips onto each wafer, resulting in lowered manufacturing costs per chip.
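To make the cost-per-chip point in that excerpt concrete, here's a rough back-of-the-envelope sketch in Python. The wafer cost and die areas are made-up illustrative numbers, and the dies-per-wafer estimate uses a common first-order approximation (wafer area over die area, minus a correction for partial dies lost at the wafer edge) - a sketch, not a fab-accurate yield model:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """First-order estimate of whole dies per wafer:
    wafer area / die area, minus an edge-loss correction term."""
    r = wafer_diameter_mm / 2
    wafer_area = math.pi * r ** 2
    edge_loss = (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

wafer_cost = 10_000  # hypothetical fixed cost to process one 300-mm wafer
for die_area in (200, 100):  # a die shrink halving the die area
    n = dies_per_wafer(300, die_area)
    print(f"{die_area} mm^2 die: ~{n} dies/wafer, ~${wafer_cost / n:.2f}/die")
```

Since the wafer costs roughly the same to process either way, halving the die area (200 mm^2 to 100 mm^2 here gives ~306 vs ~640 dies) roughly halves the manufacturing cost per chip - which is exactly the excerpt's point.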
Whatever the definition or significance of die size might be, at the end of the day I think customers are weighing several decision-making criteria, including:
1. Performance - this might start with clock speed but for some users it will get to performance per core, number of threads, etc.
2. Power consumption - this might influence some users but maybe not in terms of concern about a few more Watts as much as more heat. (Here we are speaking of CPUs; for GPUs power consumption is a much bigger consideration.)
3. Reliability and longevity - related to power consumption
4. Compatibility - I think for a long time this was something that gave Intel a huge advantage; people were confident that if anything was Microsoft compatible it had to be Intel. Over time AMD has maybe earned itself a reasonably close second place.
5. Security - Intel has had a few hiccups but they have shown a willingness and some ability to respond with current product patches, and to apply lessons learned to subsequent designs. Not sure how far along AMD is in this area.
6. Price - I think Intel and AMD now are close enough on the above that while customers will pay a premium for one or the other depending on their criteria and affinity, the two are now somewhat locked into a reasonably close price proximity with one another.
7. Marketing/brand awareness - at the enterprise level no one ever got fired for going with Intel Inside; at the consumer level Intel Inside is still pretty catchy but maybe AMD has earned a foothold or better.
In some (many?) cases Intel wins on price and performance - but for various reasons AMD seems to have earned some reasons to exist. Possibly AMD is just more nimble, able to focus on selective market segments where it can press its advantages.
Back to die size. From my perspective, the competition in die size - whether in terms of real technical merit or simply marketing - is interesting but not likely to be the driving force in my decision regarding which CPU to purchase. All other things being equal, even at maybe 10-15% more expensive for Intel, I'd go with Intel. Certainly, Intel at less than the price of AMD is an attractive proposition (see attachment below). Until this recent computer build, I'd never strayed from Intel, and the price difference was never really even a consideration. In fact, if everything else were somehow equal but the die size was 2x or maybe 3x larger for Intel vs. AMD, I'd likely still go with Intel. The reality, however, is that at some point die size will probably impact functionality, power requirements, and clock frequency. But that's not what cost Intel this one unit sale (and I'm sure they are not crying in their beer over losing the sale of one unit).
So, we're a little off the original thread, but to be clear, I just threw in die size as a passing reference in the list of things to think about when building a video gaming computer (especially for anyone asking "can AMD really compete?"). In terms of priority, die size as a metric by itself is likely last on the list (except for the fact that it might eventually impact other items on the list).
What ultimately drove me to experiment with AMD was the desire for PCIe 4.0.
If I were an Intel marketing manager, I'd be more concerned with getting from PCIe 3.0 to 4.0 than with getting from 14nm to 10nm to 7nm. Now, if I were the engineering manager I might care more about die size, but as a consumer I like double the bandwidth on the bus more than I like half the die size.
If anyone has ever tried to back up disks running at 250 MBytes per second, I think they are going to really like 5 GBytes per second, and pretty soon even faster. This is not all attributable to PCIe 4.0 - it's mostly NVMe - but why not build a computer that can double the bus bandwidth for roughly the same price as half the bus bandwidth?
(And BTW, PCIe 5.0 is coming; it's only a matter of time. PCIe 5.0 will double the bandwidth over PCIe 4.0. Why should we care so much about doubling? After all, it's more or less Moore's Law and it's been going on for a long time - and some people think it might be nearing its end... or maybe not. Either way, the point is that when you double from 1 to 2 you get an impressive thing (a double), and likewise when you double from 2 to 4 you get another impressive thing (another double), but at that point you are still only at 4, just 3 up from where you started. When you double from, say, 64 to 128 you are up 64 in one jump; and when you double from 128 to 256 you are up 128 in one jump. For anyone who likes performance increases, doubling might be more impressive later on the curve than earlier on the curve, I think. In any event, each time we take a bottleneck out of the overall system we create the opportunity to go address another bottleneck or limiting factor somewhere else. Personally, I think the jump from spinning disks to SSDs (and in particular NVMe sitting on PCIe buses rather than SSDs connected over SATA) is very cool; kind of like moving from diskettes to hard disks, but further up the curve.)
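The later-on-the-curve point can be illustrated with the PCIe generations themselves. The per-lane figures below are approximate usable throughput after 128b/130b encoding overhead (roughly 1, 2, and 4 GB/s per lane for PCIe 3.0, 4.0, and 5.0):

```python
# Approximate usable per-lane bandwidth in GB/s after encoding overhead.
per_lane = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

gens = list(per_lane)
for prev, cur in zip(gens, gens[1:]):
    x16_prev = 16 * per_lane[prev]  # a full x16 slot, as used by GPUs
    x16_cur = 16 * per_lane[cur]
    print(f"PCIe {prev} -> {cur} (x16): {x16_prev:.1f} -> {x16_cur:.1f} GB/s "
          f"(+{x16_cur - x16_prev:.1f} GB/s in one jump)")
```

Each generation is "just" a doubling, but the 4.0-to-5.0 jump adds about twice as many absolute GB/s as the 3.0-to-4.0 jump did - the doubles get bigger the further up the curve you are.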
We can talk more about die size, microprocessor design, and semiconductor manufacturing (or maybe open another thread on that); back to video game computer building....
EF