


If I understand correctly, they use ASMedia ASM1480 switches to switch 4 lanes per chip.
You have forgotten about one problem: timing. Given the speed at which PCIe operates, you need to make sure that a number of signals arrive at the connector at PRECISELY the same time; this is a significant exercise that may require a multi-layer PCB.
You can only configure how the lanes are split by modding your MoBo, if that's even possible. In the video you linked to, they're telling MoBo designers how they can set pins for the different configurations their board supports... If I understand correctly, they use ASMedia ASM1480 switches to switch 4 lanes per chip.
They're multiplexers; they don't let you switch the lane configuration on the fly, they let you switch between different devices on the same bus. So you can have, say, more than one PCIe x16 device hooked up even though there's only one x16 connection to the CPU. You use the multiplexers to decide which device you're talking to on the same serial bus.
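As a rough illustration (all names hypothetical, not ASMedia's API; a real ASM1480 is an analog mux driven by a hardware select pin, not software), here is what such a bank of 2:1 quick switches does. Nothing in it renegotiates anything at runtime:

```python
# Minimal sketch of a bank of 2:1 PCIe "quick switch" muxes.
# All names are hypothetical. Assumption about the board layout:
# in a typical x16 -> x8/x8 design the lower 8 lanes are hard-wired
# to slot 1 and only the upper 8 lanes pass through the switches.

def route_lanes(select_x8_x8: bool) -> dict:
    """Return which physical slot each CPU lane is wired to.

    select_x8_x8=False -> all 16 lanes go to slot 1 (x16/x0).
    select_x8_x8=True  -> upper 8 lanes are rerouted to slot 2 (x8/x8).
    """
    routing = {}
    for lane in range(16):
        if lane < 8:
            routing[lane] = "slot1"  # hard-wired, never switched
        else:
            routing[lane] = "slot2" if select_x8_x8 else "slot1"
    return routing

# The select pin is typically driven by slot-presence detection:
print(route_lanes(select_x8_x8=False))  # one card installed: x16/x0
print(route_lanes(select_x8_x8=True))   # two cards installed: x8/x8
```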
The second example card you have shown likely uses a PCIe-to-PCIe bridge. While the BIOS must support such a bridge for the connected cards to be seen at boot time, I don't think this is generally an issue; I would not be surprised if most BIOSes support them, since it is a common function. SATA3/USB3 combo cards, for example, usually have such a bridge. I have also seen a high-performance RAID card with a PCIe bridge that lets its PCIe gen 2 chipset present a fast interface to a PCIe gen 1 motherboard in an x4 slot.
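If you want to check whether a given card carries such a bridge, `lspci -t` on Linux prints the bus topology, and a bridge shows up as an extra node between the root port and the card's endpoints. A minimal sketch (assumes Linux with the pciutils package installed):

```python
# Print the PCI bus topology; a PCIe-to-PCIe bridge on a card shows
# up as an intermediate node between the root port and the endpoints.
import subprocess

topology = subprocess.run(["lspci", "-tv"], capture_output=True, text=True)
print(topology.stdout)
```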
With the amount of skill, time & equipment you'd need, it would be far, far cheaper just to buy a motherboard that does what you want.
PS: why would you want so many PCIe slots? A bitcoin miner?
Note I'm talking about how these chips are implemented on motherboards; they are used in sets of 4.
They're 4-lane multiplexers, so you need 4 working in parallel to switch between two 16-lane devices, 2 for 8-lane, 1 for 4-lane...
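A quick sanity check on those counts, taking the 4-lanes-per-chip figure from above:

```python
# Back-of-the-envelope check: chips needed to switch N lanes,
# assuming each mux chip handles 4 lanes (as stated in the thread).
LANES_PER_CHIP = 4

for lanes in (16, 8, 4):
    chips = lanes // LANES_PER_CHIP
    print(f"switching {lanes} lanes between two devices needs {chips} chip(s)")
# -> 16 lanes: 4 chips, 8 lanes: 2 chips, 4 lanes: 1 chip
```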

So why use 4 ASM1480s, capable of x16 multiplexing, if you are running both slots at x8/x8?
There are 4 red PCIe slots (guessing for the hardly-ever-works quad SLI) and 8 multiplexers...
I need to figure out what these signals are and what to do with them.
1) SMBus Clock
2) SMBus Data
3) TCK
4) TDI
5) TDO
6) TMS
7) +TRST#
8) Link Reactivation
9) Power Good
10 and 11) Reference Clock pair
You can't just split the lanes without appropriate hardware support.
The magic search term you need is PCIe switch. The 32+ lane ones are, as you've noticed, quite expensive.
I need to figure out what these signals are and what to do with them.
1) SMBus Clock
2) SMBus Data
An I2C-like configuration and monitoring bus. Just put the two connectors in parallel and it will likely work, if the cards use it at all.
3) TCK
4) TDI
5) TDO
6) TMS
7) +TRST#
JTAG. Useful if you want to debug the connected hardware, but then you must know how to access the JTAG chain on your mobo. Not likely to be very useful unless you are the manufacturer of the board. The cards probably don't really need this for normal operation either.
8) Link Reactivation
No clue; it should be described in the PCIe spec.
9) Power Good
Likely the system PSU power-good signal, used to tell the peripherals that it is safe to come out of reset and start powering up because the PSU is already stable. Simply connect it to both slots.
10 and 11) Reference Clock pair
System clock distribution? No idea. Should be in the spec.
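To keep things in one place, here is the advice above condensed into a small Python sketch (my own summary, not an authoritative wiring guide; check the PCIe CEM spec before wiring anything):

```python
# Rough summary of the sideband-signal advice above -- a reading of
# this thread, not an authoritative wiring guide.
SIDEBAND_WIRING = {
    "SMBus Clock/Data": "wire both slots in parallel (I2C-like, multi-drop)",
    "TCK/TDI/TDO/TMS/TRST#": "JTAG; leave unconnected unless you can reach "
                             "the board's JTAG chain",
    "Link Reactivation": "unclear; check the PCIe spec",
    "Power Good": "wire to both slots (tells cards the PSU is stable)",
    "Reference Clock pair": "unclear; check the spec (reference clocks are "
                            "usually distributed point-to-point per slot)",
}

for signal, advice in SIDEBAND_WIRING.items():
    print(f"{signal}: {advice}")
```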

You can't just split the lanes without appropriate hardware support.
The magic search term you need is PCIe switch. The 32+ lane ones are, as you've noticed, quite expensive.
Maybe. It might be simple in terms of hardware; hardware is just hardware and can be done by third parties. I fear it also needs software, i.e. the BIOS needs to do a handshake and have the code to allow for multiple cards. Seeing as a normal ATX/mATX board has dual PCIe x8 slots and a mITX board on the same platform has only one x16 slot, it should be doable in hardware, but if they removed the BIOS code that allows for this, then it is going to be difficult or impossible.
Just because the chipset can be used that way doesn't mean you can just split the lanes on any old board and expect it to work. Both the board and the BIOS need to be built to allow it.
If you want to use an existing board you will need to use a packet switch. They are expensive if you want 32+ lanes.
If you can get away with 12 lanes (x4 host > two x4 cards) they're more affordable.
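The 12-lane figure is just the upstream port plus the downstream ports added together. A quick sketch of that arithmetic, with the port widths taken from the x4-host example above:

```python
# Lane budget for a PCIe packet switch: upstream port plus all
# downstream ports. Widths match the x4 host -> two x4 cards example.
upstream = 4
downstream = [4, 4]

total_lanes = upstream + sum(downstream)
print(f"the switch needs {total_lanes} lanes")  # -> 12
```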
Scenario:
The chipset is not important for me, because Intel CPUs of the Ivy Bridge generation and later have 16 PCIe 3.0 lanes available directly from the CPU; no chipset sits between the GPU and the CPU.
The ASUS Maximus VII Hero has 2 physical PCIe x16 slots, but when 2 cards are installed they operate at x8. The board only uses the ASM1480 chips to change the lane allocation from x16/x0 to x8/x8; no fancy switch chip is needed.
The ASUS Maximus VII Impact is based on the exact same platform as the Hero, but it is in mITX format, so it only has 1 physical PCIe slot.
What is stopping me from just adding the same components ASUS added to the motherboard to allow for the lanes to split?
How would this card work then?
RSC-R2UG-2e4e8
http://www.acmemicro.com/Product/13545/Supermicro-RSC-R2UG-2E4E8-LHS-Passive-2U-PCI-E-Riser-Card?c_id=356
It has the following specs:
Output: (3) PCI-E x16 slots; signal: (1) PCI-E x8, (2) PCI-E x4
Gen: PCI-E 2.0: Yes; PCI-E 3.0: No
GPU / PHI Support: Auto Detect: Yes
Compatible System: 2026GT-TRF, 2026GT-TRF-FM407, 2026GT-TRF-FM409, 2026GT-TF
There are 5 undefined pins on an x16 PCIe slot; they might have used those, but would that be enough to add 2 more PCIe slots?
My guess is this riser board only works on certain server boards that Supermicro makes (first guess, didn't actually do any research). It will probably be cheaper to just provide some support for this on their mobos in case the user wants to use it, so the chips it lacks compared to that other splitter card will simply be present on the motherboard.
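One supporting detail: the riser's advertised signal widths (x8 + x4 + x4) add up to exactly the 16 lanes of one x16 slot, which fits the guess that the host board bifurcates the port itself and the riser stays passive. A quick check, assuming the widths listed on the product page:

```python
# The riser appears to be passive: its output signal widths
# (x8 + x4 + x4) sum to exactly the 16 lanes of the host slot,
# so the host board must bifurcate the port itself (BIOS support).
riser_outputs = [8, 4, 4]   # signal widths from the product page
host_slot_lanes = 16

assert sum(riser_outputs) == host_slot_lanes
print("passive bifurcation fits:", sum(riser_outputs), "of",
      host_slot_lanes, "lanes used")
```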
