Author Topic: PCIe on SOC


Online theoldwizard1 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 172
PCIe on SOC
« on: October 14, 2019, 04:56:17 pm »
Both the Rockchip RK3399 and the Broadcom BCM2711 (and I am sure other SoCs) have 1 lane of PCIe. To me this is a "breakthrough" in SoCs! With 4 lanes of (Rev 3.x) PCIe you can easily hang a lot of hardware off of one of these chips. (SD Express ("SDe") cards should start shipping in 2020!)

My question is, how many pins does 1 additional PCIe lane consume on an SoC? Or stated another way, realistically, how many lanes of PCIe are going to be available in the "near" future from SoC vendors?
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3709
  • Country: us
Re: PCIe on SOC
« Reply #1 on: October 14, 2019, 05:21:47 pm »
In principle each additional lane only takes 4 pins (two for TX and two for RX), although when you make connectors they typically add ground pins between the pairs.

I doubt you are going to see high lane counts in inexpensive SoCs anytime soon.  You only need lots of lanes if you need to feed very high bandwidth links or a lot of directly attached devices.  These types of SoCs don't generally have the performance to feed a large number of multi-gigabit links and aren't usually connected to a large number of independent peripherals.  Remember, it is perfectly possible to have an integrated controller with several devices and an internal PCIe switch communicating over a single-width link.  If for some reason you really need connectivity to a bunch of separate chips, a discrete PCIe bridge is a reasonable option.  And jellybean IPs like Ethernet and USB can just be integrated into the SoC rather than added on as external chips.  A single lane or two provides a huge advantage: you can take whatever new whizbang chip you are developing, slap a PCIe interface on it, and connect it to an existing SoC that has everything else you need.

Of course, what I say doesn't matter.  This will be dictated by what the large customers want or need.  If Sony or GM calls up Broadcom and says they have an application for an SoC that needs 16 lanes of PCIe and they are willing to pay for it, that is what they will make.  The rest of us just have to live with whatever is developed to serve the needs of the big customers.  I just don't see the appeal of such a device in mass market applications.
 

Offline fchk

  • Regular Contributor
  • *
  • Posts: 243
  • Country: de
Re: PCIe on SOC
« Reply #2 on: October 15, 2019, 12:37:54 pm »
The NVIDIA Jetson Nano (Tegra X1) has one PCIe x4 port plus 1 additional lane of either USB 3.0 or PCIe (fixed to USB 3.0 on the Nano, selectable on the TX1). You can add a PCIe packet switch if you need more lanes.

fchk
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26875
  • Country: nl
    • NCT Developments
Re: PCIe on SOC
« Reply #3 on: October 15, 2019, 01:24:01 pm »
Quote from: theoldwizard1
Both the Rockchip RK3399 and the Broadcom BCM2711 (and I am sure other SoCs) have 1 lane of PCIe. To me this is a "breakthrough" in SoCs! With 4 lanes of (Rev 3.x) PCIe you can easily hang a lot of hardware off of one of these chips. (SD Express ("SDe") cards should start shipping in 2020!)

My question is, how many pins does 1 additional PCIe lane consume on an SoC? Or stated another way, realistically, how many lanes of PCIe are going to be available in the "near" future from SoC vendors?
AFAIK PCIe has been available for SoCs for a while now. For each PCIe connection you'll need a reference clock plus an RX and a TX pair per lane. So the number of pins is lanes * 4 + 2: 4 pins in total for the RX/TX pairs of each lane, plus 2 pins for the differential reference clock.
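A quick sanity check of that arithmetic (a minimal Python sketch; the constant names are just illustrative, the numbers restate the formula above):

Code:
# Signal-pin count for a PCIe port: each lane needs one TX differential
# pair and one RX differential pair (4 pins), plus 2 pins for the shared
# differential reference clock.

PINS_PER_LANE = 4  # TX+, TX-, RX+, RX-
REFCLK_PINS = 2    # REFCLK+, REFCLK-

def pcie_signal_pins(lanes: int) -> int:
    """Signal pins only; grounds, transceiver supplies, and calibration
    pins (mentioned in the replies below) come on top of this."""
    return lanes * PINS_PER_LANE + REFCLK_PINS

for lanes in (1, 2, 4, 8, 16):
    print(f"x{lanes:<2} -> {pcie_signal_pins(lanes):>2} signal pins")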
« Last Edit: October 15, 2019, 01:25:32 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2729
  • Country: ca
Re: PCIe on SOC
« Reply #4 on: October 15, 2019, 05:29:53 pm »
Quote from: nctnico
AFAIK PCIe has been available for SoCs for a while now. For each PCIe connection you'll need a reference clock plus an RX and a TX pair per lane. So the number of pins is lanes * 4 + 2: 4 pins in total for the RX/TX pairs of each lane, plus 2 pins for the differential reference clock.
I would add a bunch of GND pins to reduce impedance discontinuities (take a look at FPGA/SoC pinouts and you will see that SERDES pins are typically surrounded by a ton of ground pins). Also there are typically separate power pins for the transceivers and PLL, as well as a pin for a calibration resistor.

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4944
  • Country: si
Re: PCIe on SOC
« Reply #5 on: October 15, 2019, 06:06:13 pm »
Yep, each additional lane needs an extra TX and RX diff pair, so that is 4 pins.

But it probably does not make sense to have much more than 4x on a small SoC, since with PCIe 3.0 even a single lane can carry about 8 Gbit/s, so very few devices even need more than a 1x or 2x PCIe bus. With PCIe 4.0 now becoming a thing, this is doubled to about 16 Gbit/s per lane.
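For reference, the per-lane numbers behind those figures (a minimal Python sketch; the raw rates and line codes are the published PCIe values, and real throughput is a bit lower once packet overhead is counted):

Code:
# Usable per-lane bandwidth by PCIe generation: raw line rate (GT/s)
# scaled by the line-code efficiency (8b/10b for gens 1-2, 128b/130b
# for gens 3-4).
GENERATIONS = {
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
}

for gen, (rate_gt_s, efficiency) in GENERATIONS.items():
    gbit_s = rate_gt_s * efficiency
    print(f"PCIe {gen}: {rate_gt_s:>4} GT/s -> ~{gbit_s:.2f} Gbit/s "
          f"(~{gbit_s * 1000 / 8:.0f} MB/s) per lane")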

In practice the only things you see using slots larger than 4x PCIe are graphics cards (apart from some rare oddball cards), and even those don't really need the full 16x slot they tend to sit in. People are now running external graphics cards on laptops over Thunderbolt, where you effectively get a 4x PCIe 3.0 bus out of it, and yet they still run with little or no performance degradation.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2729
  • Country: ca
Re: PCIe on SOC
« Reply #6 on: October 15, 2019, 06:38:28 pm »
Quote from: Berni
But it probably does not make sense to have much more than 4x on a small SoC, since with PCIe 3.0 even a single lane can carry about 8 Gbit/s, so very few devices even need more than a 1x or 2x PCIe bus. With PCIe 4.0 now becoming a thing, this is doubled to about 16 Gbit/s per lane.

In practice the only things you see using slots larger than 4x PCIe are graphics cards (apart from some rare oddball cards), and even those don't really need the full 16x slot they tend to sit in. People are now running external graphics cards on laptops over Thunderbolt, where you effectively get a 4x PCIe 3.0 bus out of it, and yet they still run with little or no performance degradation.
Having more lanes allows you to connect several devices via a switch. And there are devices that can easily saturate PCIe 3.0 x4 or even x16 - for example, backplane connections that interconnect several computing nodes. Don't make the mistake of thinking that PCIe usage is limited to desktop computers and laptops. There are CPUs out there which have multiple PCIe root ports - like the NXP Layerscape CPUs.

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26875
  • Country: nl
    • NCT Developments
Re: PCIe on SOC
« Reply #7 on: October 15, 2019, 06:58:22 pm »
Quote from: Berni
Yep, each additional lane needs an extra TX and RX diff pair, so that is 4 pins.

But it probably does not make sense to have much more than 4x on a small SoC, since with PCIe 3.0 even a single lane can carry about 8 Gbit/s, so very few devices even need more than a 1x or 2x PCIe bus. With PCIe 4.0 now becoming a thing, this is doubled to about 16 Gbit/s per lane.

In practice the only things you see using slots larger than 4x PCIe are graphics cards (apart from some rare oddball cards), and even those don't really need the full 16x slot they tend to sit in. People are now running external graphics cards on laptops over Thunderbolt, where you effectively get a 4x PCIe 3.0 bus out of it, and yet they still run with little or no performance degradation.
Usually you can create several PCIe buses: one for an NVMe SSD and another for a WiFi module, for example, where each can use more than one lane. I don't see that much use for having a PCIe slot; just peripherals which need to move a lot of data.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4944
  • Country: si
Re: PCIe on SOC
« Reply #8 on: October 15, 2019, 07:13:21 pm »
Quote from: asmi
Having more lanes allows you to connect several devices via a switch. And there are devices that can easily saturate PCIe 3.0 x4 or even x16 - for example, backplane connections that interconnect several computing nodes. Don't make the mistake of thinking that PCIe usage is limited to desktop computers and laptops. There are CPUs out there which have multiple PCIe root ports - like the NXP Layerscape CPUs.

I'm not trying to say that larger PCIe buses aren't useful. They are certainly an excellent way of moving around huge amounts of data.

I was speaking of having wide buses on the kind of chips mentioned by the OP, such as the RK3399 or the Raspberry Pi's BCM2711, where you don't tend to need to move quite that much data at once in the usual applications.
 

Online theoldwizard1 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 172
Re: PCIe on SOC
« Reply #9 on: October 16, 2019, 02:10:31 am »
Quote from: ejeffrey
I doubt you are going to see high lane counts in inexpensive SoCs anytime soon.
Concur on that, but I was thinking 4 would be a good number.

Quote from: ejeffrey
Remember, it is perfectly possible to have an integrated controller with several devices and an internal PCIe switch communicating over a single-width link.  If for some reason you really need connectivity to a bunch of separate chips, a discrete PCIe bridge is a reasonable option.
I did not know that such a thing as a PCIe bridge existed that would allow multiple devices to talk to an SoC over 1 lane!

Quote from: ejeffrey
And jellybean IPs like Ethernet and USB can just be integrated into the SoC rather than added on as external chips.
If your statement is correct, please speculate on why Broadcom did NOT integrate the USB and Ethernet chips into the BCM2711?

The way I look at it, if you want number-crunching performance (and clearly RPi did, by selecting quad Cortex-A72 cores), die space can best be used for as much cache as possible!

Quote from: ejeffrey
Of course, what I say doesn't matter.  This will be dictated by what the large customers want or need.
Concur! So far, the BCM2711 seems to have only one customer, RPi. It is hard for me to believe that the volume of the Pi 4 can cover the cost of the chip layout, but I have been out of that loop for a long, LONG time!

EDIT:
The USB controller is a VIA Labs VL805, a quad USB 3.0 (5 Gb/s) part that already existed complete with a PCIe 2.0 interface.



Maybe the next Broadcom part will have the USB controller "built in"!

From here
Quote from: jamesh Principal Software Engineer at Raspberry Pi (Trading) Ltd.
The Ethernet is a native Broadcom device on the SoC, attached directly to the memory bus, not via PCIe.

The actual part is a BCM54213PE mounted on the RPi 4 board

« Last Edit: October 16, 2019, 02:59:47 am by theoldwizard1 »
 

Online theoldwizard1 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 172
Re: PCIe on SOC
« Reply #10 on: October 16, 2019, 02:20:22 am »
Quote from: Berni
I was speaking of having wide buses on the kind of chips mentioned by the OP, such as the RK3399 or the Raspberry Pi's BCM2711, where you don't tend to need to move quite that much data at once in the usual applications.
My thinking is that PCIe (< v4.0) is sort of an SPI on super-duper steroids!

As I mentioned before, SDe cards should be shipping in 2020. They will have 1x PCIe. Now SDe is never going to challenge M.2, but you could easily and cost-effectively have a 128 GB card on an RPi 5 with plenty of space for applications and data and, assuming they upgrade the flash, without the fear of failure.
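For scale, a rough comparison of the bus ceilings involved (a minimal Python sketch; the figures are the published interface limits, not measured card speeds):

Code:
# Rough upper bounds of the relevant interfaces, in MB/s. SD Express
# rides a PCIe 3.0 x1 link, while a typical M.2 NVMe slot gets x4.
INTERFACE_CEILING_MB_S = {
    "SD UHS-I":               104,   # classic SD card bus
    "SD UHS-II":              312,
    "SD Express (PCIe 3 x1)": 985,   # 8 GT/s * 128/130, per lane
    "M.2 NVMe (PCIe 3 x4)":   3940,  # four such lanes
}

for name, mb_s in INTERFACE_CEILING_MB_S.items():
    print(f"{name:<24} ~{mb_s} MB/s")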
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 4944
  • Country: si
Re: PCIe on SOC
« Reply #11 on: October 16, 2019, 05:32:59 am »
Quote from: jamesh Principal Software Engineer at Raspberry Pi (Trading) Ltd.
The Ethernet is a native Broadcom device on the SoC, attached directly to the memory bus, not via PCIe.

The actual part is a BCM54213PE mounted on the RPi 4 board



Well, that is just an Ethernet PHY, and these usually come in the form of external chips rather than built in. I have no idea why; possibly due to the analog stuff happening in them, and because you probably wouldn't want your CPU directly connected to a 100 m long lightning magnet.

The actual Ethernet controller is still inside the SoC and talks to the PHY via an RGMII bus (a bus made specifically for gigabit Ethernet PHYs; the "reduced" variant of GMII, where a narrower bus is traded for higher clock speeds). So saying that Ethernet is external on this is like looking at an MCU with a CAN transceiver and saying the CAN is not built in because it's an external chip. Though to be fair, Ethernet and USB PHY chips actually have quite a bit more smarts in them compared to CAN or RS-485 transceivers, but they are still mostly glorified level-shifter and SerDes chips.

Moving the Ethernet off USB was a pretty exciting thing on the Raspberry Pi 4. Until now we always had to access Ethernet through a built-in USB-to-Ethernet adapter that shared the bandwidth of a single USB 2.0 port with all the other USB ports on a hub.
 

Online theoldwizard1 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 172
Re: PCIe on SOC
« Reply #12 on: October 16, 2019, 02:32:18 pm »

Quote from: nctnico
AFAIK PCIe has been available for SoCs for a while now. For each PCIe connection you'll need a reference clock plus (at least 1) RX and TX pair per lane. So the number of pins is lanes * 4 + 2: 4 pins in total for the RX/TX pairs of each lane, plus 2 pins for the differential reference clock.
Concur!
 

Online theoldwizard1 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 172
Re: PCIe on SOC
« Reply #13 on: October 16, 2019, 02:36:32 pm »
Quote from: Berni
Well, that is just an Ethernet PHY, and these usually come in the form of external chips rather than built in. I have no idea why; possibly due to the analog stuff happening in them, and because you probably wouldn't want your CPU directly connected to a 100 m long lightning magnet.

The actual Ethernet controller is still inside the SoC and talks to the PHY via an RGMII bus (a bus made specifically for gigabit Ethernet PHYs; the "reduced" variant of GMII, where a narrower bus is traded for higher clock speeds). So saying that Ethernet is external on this is like looking at an MCU with a CAN transceiver and saying the CAN is not built in because it's an external chip. Though to be fair, Ethernet and USB PHY chips actually have quite a bit more smarts in them compared to CAN or RS-485 transceivers, but they are still mostly glorified level-shifter and SerDes chips.

Moving the Ethernet off USB was a pretty exciting thing on the Raspberry Pi 4. Until now we always had to access Ethernet through a built-in USB-to-Ethernet adapter that shared the bandwidth of a single USB 2.0 port with all the other USB ports on a hub.
Thank you for that clarification!
 

Online theoldwizard1 (Topic starter)

  • Regular Contributor
  • *
  • Posts: 172
Re: PCIe on SOC
« Reply #14 on: October 20, 2019, 12:26:30 am »
This table is from the Rockchip RK3399 datasheet:

  • 2 pins for the differential reference clock
  • 2 pins per lane for the data transmit channel
  • 2 pins per lane for the data receive channel
  • 1 pin for clock request

So each lane takes 4 pins OVER the base 3 pins.
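Putting that table into numbers (a minimal Python sketch comparing it against the lanes * 4 + 2 count from earlier in the thread; the one-pin difference is the clock-request sideband):

Code:
# RK3399-style pin budget: a fixed base of 3 pins (differential refclk
# plus clock request), then 4 pins (one TX pair, one RX pair) per lane.
BASE_PINS = 3      # REFCLK+, REFCLK-, clock request
PINS_PER_LANE = 4  # TX+, TX-, RX+, RX-

for lanes in (1, 2, 4):
    rk3399 = BASE_PINS + PINS_PER_LANE * lanes
    formula = lanes * 4 + 2  # nctnico's count, without clock request
    print(f"x{lanes}: RK3399 table -> {rk3399} pins, "
          f"lanes*4+2 -> {formula} pins")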

Wild speculation... The Rockchip RK3399 came out at least a year before the Broadcom BCM2711, so do we think there is more than one lane on the BCM2711??????   >:D   :-//
 

