Author Topic: is there any reason why amd and NVIDIA could not sell socketed gpus?  (Read 492 times)


Offline aqarwaen

  • Regular Contributor
  • *
  • Posts: 67
  • Country: us
Is there any reason why it is not possible to install GPUs into a motherboard socket, the way we can install CPUs at the moment?
 

Offline thm_w

  • Super Contributor
  • ***
  • Posts: 2407
  • Country: ca
Do you mean onto the main motherboard, or the regular PCB that the GPU comes with?
Cost and convenience, mostly.
A graphics card board design might have at most two or three specific GPUs that would make sense to install onto a specific board layout, whereas a computer motherboard could have 20+ CPUs that will work.
A socket also takes up vertical height.
 

Offline aqarwaen

  • Regular Contributor
  • *
  • Posts: 67
  • Country: us
Quote from: thm_w
Do you mean onto the main motherboard, or the regular PCB that the GPU comes with?
Cost and convenience, mostly.
A graphics card board design might have at most two or three specific GPUs that would make sense to install onto a specific board layout, whereas a computer motherboard could have 20+ CPUs that will work.
A socket also takes up vertical height.

Yes, I mean the mainboard. For example, I buy an ASUS motherboard and it has sockets for both a CPU and a GPU.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 7129
  • Country: us
    • Personal site
GPUs have very specific design and layout requirements, so you would end up with a motherboard that can fit only one GPU type.

From the motherboard manufacturer's point of view, they would have to make motherboards with every permutation of CPU and GPU sockets. This makes no sense.

Also, it would essentially require putting the whole GPU board on the main board, so space would be an issue.
Alex
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 6183
  • Country: gb
What would be the point? Every new generation of GPU fits into a totally new environment. A socket would just make a thicker structure, leaving less space for the cooling system.
 

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 6859
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Besides the GPU chip itself and the VRAM that's tightly coupled to it, the only other parts of a modern GPU module are the power supply (which is also closely coupled to the GPU chip), the I/O connectors, and the cooling assembly. Simply put, the cost of the remaining parts is such a small fraction of the total that you wouldn't save much by not replacing them when upgrading the GPU.

If you're after a slimline/low-profile design, it could in theory be done with a motherboard whose PCIe connector runs sideways along the edge. I'm just not aware of any manufacturer doing that.
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline blueskull

  • Supporter
  • ****
  • Posts: 14374
  • Country: cn
  • BA7LKP
VRAM runs at a data rate a few times higher than plain DDR (higher clock, PAM signalling, and more), over a wider bus, so the routing is considerably trickier.

There are co-packaged VRAM+GPU modules, but at the moment those are only used for high-performance computing (or very old ATi parts).

Maybe when co-packaged GPUs become mainstream, we will see that day.
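The "few times higher data rate" point can be made concrete with rough published per-pin figures. A minimal sketch (figures are illustrative generation-typical numbers, not tied to any one part's datasheet):

```python
# Rough per-pin data rates: plain DDR4 vs GDDR6 (NRZ) vs GDDR6X (PAM4).
# Illustrative figures only; actual parts span a range of speed grades.
rates_gbps = {
    "DDR4-3200 (NRZ)": 3.2,
    "GDDR6 (NRZ)": 14.0,
    "GDDR6X (PAM4)": 21.0,
}
base = rates_gbps["DDR4-3200 (NRZ)"]
for name, rate in rates_gbps.items():
    # Ratio shows how much tighter the routing budget gets per pin.
    print(f"{name}: {rate:.1f} Gbps/pin ({rate / base:.1f}x DDR4-3200)")
```

And that is per pin, before accounting for the wider bus, which multiplies the number of length-matched traces that have to meet those budgets.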
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 3226
  • Country: hr
They do exactly that: you buy a graphics card and install it in a PCIe slot on your motherboard.

A GPU cannot work by itself. It needs supporting circuitry, the same as a CPU.
The support structure for the CPU is the motherboard.
The graphics card is the motherboard for the GPU.

A GPU has a wide data bus that runs at much higher frequencies than CPU buses. All the specs are much tighter, and parasitics are more critical. The graphics card PCB is part of the design, so there is no separating the two.
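The wide-bus point is easy to quantify. A back-of-envelope sketch comparing peak memory bandwidth of a typical dual-channel CPU setup against a mid-range GPU memory system (both configurations are assumed examples, not from any specific board):

```python
def bandwidth_gbs(bus_bits: int, gt_per_s: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x transfer rate / 8."""
    return bus_bits * gt_per_s / 8

# Dual-channel DDR4-3200: two 64-bit channels at 3.2 GT/s.
cpu = bandwidth_gbs(128, 3.2)
# Mid-range GPU: 256-bit GDDR6 at 14 Gbps/pin.
gpu = bandwidth_gbs(256, 14.0)
print(f"CPU memory: {cpu:.1f} GB/s, GPU memory: {gpu:.1f} GB/s")  # 51.2 vs 448.0
```

Nearly a 9x gap, achieved by running more pins at much higher rates, which is exactly why the PCB around the GPU has to be co-designed with it.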
 
The following users thanked this post: tom66

Offline Microdoser

  • Regular Contributor
  • *
  • Posts: 91
  • Country: gb
With all the associated GPU-specific components and circuitry, you would need a board as large as graphics cards currently are. So, to answer your question:

GPUs are already socketed; they use a PCIe socket.
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1113
  • Country: nl
I agree. Calling PCIe a "socket" is technically correct, but not in the spirit of the question.

A socketed CPU exists mainly so that you can freely pair a motherboard and a CPU of your choice. There is nothing stopping you from putting a Ryzen 3100 into a high-end X570 motherboard with 128 GB of RAM, onboard 10 Gigabit LAN, extra M.2 carriers, etc. Does it make sense? I wouldn't say "definitely not": I run a dual-core Pentium in a dual Gigabit LAN motherboard with 24 GB of RAM, since it serves as my home router, Docker server and ZFS storage box.

But then PC applications have a very large dynamic range of I/O, RAM and CPU requirements, particularly if you look at the ratios. How would such component interchangeability compare for a GPU? GPU manufacturers usually target only a handful of use cases: gaming at 1080p, 1440p, 4K, VR, and compute. These use cases usually scale in both VRAM and FLOPS; 4K, for example, uses more VRAM but also needs a powerful GPU core. Compute is perhaps the one exception, where in AI you may want to load large models and do relatively few calculations on them, or vice versa. But that's a niche, IMO.

So then, if you want to socket a GPU, do you also want socketed graphics memory? Okay, let's hypothetically say it is possible to build such a system in a compact way (GDDR6 VRAM also needs a lot of cooling); I still think the winning stories for such upgrade paths are limited. Hypothetically you could swap an RTX 2060 Super for an RTX 3070 and keep the same memory system (both use 256-bit 14 Gbps 8 GB GDDR6), for a 50% performance gain. Great! But to upgrade from a 2080 Super or 2080 Ti to even a 3080 core, you would also need to replace the VRAM. So low/mid-range users would benefit the most, but margins are thinnest at that end. High-end users might pay the extra for a socketed system, but would benefit the least, since virtually everything changes between high-end GPU generations.
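The two upgrade scenarios above can be checked with published bus widths and per-pin rates (figures are the commonly cited reference-card specs; actual board variants may differ):

```python
# Memory-system bandwidth for hans's upgrade scenarios.
# (bus width in bits, per-pin rate in Gbps)
cards = {
    "RTX 2060 Super": (256, 14.0),  # GDDR6
    "RTX 3070":       (256, 14.0),  # GDDR6: identical memory system
    "RTX 2080 Ti":    (352, 14.0),  # GDDR6
    "RTX 3080":       (320, 19.0),  # GDDR6X: new width, rate, signalling
}
for name, (bits, rate) in cards.items():
    print(f"{name}: {bits * rate / 8:.0f} GB/s")
# 2060 Super -> 3070 keeps the same 448 GB/s memory system;
# 2080 Ti (616 GB/s) -> 3080 (760 GB/s) changes everything.
```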

That brings me to a final point: I wouldn't count on upgrade paths spanning multiple generations. For example, the 3080 and 3090 are the first cards to use GDDR6X, which makes it very likely that the next generation of x60 and x70 cards will also use it (or some upgraded version of 14 Gbps GDDR6). If you need to keep swapping VRAM along with GPUs to get the most benefit, that is very different from slow vs. fast DDR4, which is very unlikely to severely bottleneck a system even if you carry it across 3 or 4 CPU generations.
Any other "platform" support is hypothetical. AM4 boards supposedly support many generations of CPUs, but then they also don't: some boards have dropped support because the BIOS wouldn't fit in the onboard flash. Even when it does fit, it's a hassle to research and wait for BIOS updates so you can pair an old board with the latest CPU. Although it's nice in theory to use a B350 board with the latest 5950X, I think almost all PC manufacturers only keep a working window of 2 or 3 product generations before everything is upgraded again. And really, I don't upgrade my PC more often than every 2 or 3 generations, so at that point I might as well build a new system (CPU+MB+RAM+GPU).
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 4257
  • Country: gb
  • Electron Fiddler, FPGA Hacker, Embedded Systems EE
I was going to say what 2N3055 says.  A graphics card is essentially that.  It's a module with a common interface, it just happens to integrate RAM, GPU, power supply etc into one device.  Whilst it may not be as fast, you can take a 2020 GPU and run it in a PC with a 2006 motherboard and CPU - provided it has PCI-express!

I do expect that in the near future we'll see a return to the Intel idea of supplying the CPU with a single fixed input voltage which is then buck-converted on-die to the core supply using very high-frequency switch-mode converters. This is because modern CPUs need faster and faster load-transient response, and the VRM's distance from the CPU can begin to limit that.

You may also see integrated RAM, though given most PCs now have 16 GB+, the space for this on a CPU die is limited. Maybe an eDDR5 type, with a few GB of ultra-fast RAM closely located on the package as an "L4 cache". I'm not sure what the performance implications would be compared to on-die L3 cache.
« Last Edit: Today at 11:47:02 am by tom66 »
 

