Author Topic: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1  (Read 54450 times)


Online EEVblog (Topic starter)

  • Administrator
  • *****
  • Posts: 37661
  • Country: au
    • EEVblog
EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« on: October 15, 2016, 12:01:59 am »
Dave sketches up an idea for a more integrated Raspberry Pi supercomputer cluster, with built-in Ethernet and power.

 
The following users thanked this post: frank_gamefreak, SgtTech

Offline riyadh144

  • Supporter
  • ****
  • Posts: 111
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #1 on: October 15, 2016, 12:35:15 am »
You have to be careful, as the 5V pin has no input protection; I have killed a few RPis doing exactly this.
 

Offline johnkeates

  • Contributor
  • Posts: 21
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #2 on: October 15, 2016, 01:49:17 am »
Not only the power pins, but the GPIO pins have practically no protection either.

Regarding networking, there are more options besides Ethernet: you could use the serial port on the thing and run PPP over it; at least one UART is on the header, if I'm not mistaken.
To connect them together, you'd need a way to have them all feed back to one Pi that just does the PPP-to-Ethernet bridging, and you can use its existing Ethernet connection to plug into your network. It would take some sort of expander or multiplexer to get all those serial connections back to that one Pi, so an extra chip is still needed - but only one, not one for every Pi. The nice thing about Linux is that as long as something does TCP/IP, any application running on top of it won't know or care about what transport is used at the lower levels. There is support for at least serial, parallel, USB, Ethernet (of course), FireWire, PCIe, IrDA, ISDN, and a while back someone was working on TCP/IP over I2C (but I'm not sure if that was ever completed).
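
As a sketch of the PPP idea (device names, speeds and addresses here are made-up assumptions; check the pppd man page for your setup), one pppd instance per serial link would look something like:
Code: [Select]
# On the bridging Pi, one instance per serial link (hypothetical device/IPs):
pppd /dev/ttyUSB0 115200 10.0.1.1:10.0.1.2 local noauth persist

# On a compute node, over the header UART:
pppd /dev/ttyAMA0 115200 10.0.1.2:10.0.1.1 local noauth persist defaultroute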

With the networking thing on the hardware side, there is a software side too: proper hardware has MAC addresses for network interfaces on Ethernet, and other buses in many cases have arbitrary, configurable or serial-number-based addresses. If you want to configure your cluster in a somewhat automated manner, you could just have the IP addresses preconfigured in a DHCP server based on those addresses. The compute module would request an address, and the DHCP server would recognise its MAC address or port ID and assign it the correct address.
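
A minimal ISC dhcpd sketch of that idea (the subnet, host names and MAC addresses are made up for illustration; b8:27:eb happens to be the Raspberry Pi Foundation's OUI):
Code: [Select]
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.200 10.0.0.250;            # fallback pool for unknown boards
}

host node01 {
  hardware ethernet b8:27:eb:12:34:56;    # this board's MAC address
  fixed-address 10.0.0.11;                # always hand it the same IP
}

host node02 {
  hardware ethernet b8:27:eb:ab:cd:ef;
  fixed-address 10.0.0.12;
}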

Regarding the software, the Raspberry Pi guys have 'Raspbian', which is a modified version of Debian specifically for the Pi. Since Debian is completely free/libre/open/wank-word-of-the-day and the process of adding the Raspbian modifications on top of it is documented, the software should always be maintained and maintainable. There is no secret sauce or commercial entity required to keep the Linux-based side going. On the bootloader and GPU side of things, it's a bit tricky, since Broadcom still thinks that has to be 'secret sauce' for some reason (as if nobody else has a CPU that can boot... or a GPU that does graphics). There is the bootloader binary blob that reads config.txt, and that's the part you can't really maintain if you're not a Broadcom zombie, so that's something that may not always be kept up to date. On the other hand, does it really need to be 'maintained'? It's just there for one thing: initialise the GPU and start it, then let it start the CPU (yes, that's how the thing boots), set up the DRAM and kick the kernel into action. So as long as the hardware doesn't change and the kernel is fine with whatever the bootloader parks in memory, it should work indefinitely.

For other boards, this may not work out so well. There is this Banana Pi, and the Orange Pi, and there are probably a ton of other Pies (enough to fill a pastry cookbook?), and we have the 'chip' and then there are the minnow boards, beagle bones, and LeMaker is doing stuff, and everyone is coming to join the club. Specs-wise, that's nice, but when it comes to software, unless you have one of those Linux distributions (like Debian or Fedora) to build on and a platform specific community (like with Raspbian) to maintain tweaks for the board-specific things, it's impossible to maintain or develop in a workable and long-term way, because the resources are simply not there. Since software and the internet (and everything connected to it) is always changing, at some point, you'll always have to update, upgrade or modify the devices and/or software in order to keep using it or keep it connected to others. Try finding a serial modem, or an IrDA adapter, heck, try finding an analog phone line! It's annoying, but as long as stuff is connected or has to work together, everything has to keep up. (up to a certain level, of course)

So when picking a board or software distribution, the community or upstream projects feeding the software that runs it are about just as important as the hardware specs themselves.
Without good software, those tiny boards won't even cut it as a paperweight (too light to hold anything down!).
 
The following users thanked this post: EEVblog, elgonzo

Offline SL4P

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
  • There's more value if you figure it out yourself!
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #3 on: October 15, 2016, 04:13:01 am »
I'd think twice about routing the slots in the backplane to accommodate the Pi headers...
It may be smarter to make small right-angle transition PCBs that would allow you to keep the backplane intact for bussing and mechanical strength.
The transition 'board' may also offer other connection and mechanical benefits as it evolves.
Don't ask a question if you aren't willing to listen to the answer.
 
The following users thanked this post: elgonzo

Online NiHaoMike

  • Super Contributor
  • ***
  • Posts: 8973
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #4 on: October 15, 2016, 04:22:30 am »
What about using a small FPGA to connect all the SPI lines to a master node? Or, if possible, set the USB port to gadget mode and use ordinary USB hubs to link to master nodes.

I have a cluster of cheap smartphones for mining altcoins. Connectivity is just a cheap wireless router dedicated to serving the cluster. A spare 120mm fan connected to a wall wart keeps them cool.
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline 6581

  • Supporter
  • ****
  • Posts: 79
  • Country: fi
EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #5 on: October 15, 2016, 04:26:12 am »
Two ideas - not sure how great or viable - for physical density: 1) 45-degree connectors on the backplane/motherboard could allow sliding these tighter next to each other (like some memory modules); 2) two boards on top of each other, top board upside down, Raspberries interleaved. Just my thoughts while watching.

Great project, very interesting.
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2317
  • Country: au
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #6 on: October 15, 2016, 04:30:33 am »
Really cool project! A couple of ideas:
  • If you're already getting a PCB made, perhaps you could make little adapter PCBs that have a standard female header, and a card-edge connector on the side. Then you can use cheap-as-dirt (because they're so common) PCI-express connectors. This way, you don't need the slots in the PCB (and the associated routing annoyances), and you don't need any extra spacing between the modules to accommodate the sideways mounting motion. You also gain some nice features built into PCI-express, like ground-before-power, making hot-plugging modules significantly safer, perhaps.
  • It might be a nice idea to devote a few GPIO pins to a simple slot identifier: literally just a binary pattern of grounded pins (cost: $0). That way, it'd be straightforward to write a little script on the Orange Pi that enables pullups, reads which slot it's in, and sets a static IP address accordingly. (Bonus marks for silkscreening the [partial] IP address directly onto the board!) Then, if you see a failing board and want to SSH in to see what's going on*, you know which IP address to use, all while the OPis are running exactly identical system images. A sketch of such a script follows the footnote below.

* Or want to use a particular one because you're using the GPIO breakouts that you mentioned.
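
A minimal sketch of such a script (the pin numbers, subnet and the RPi.GPIO library are my assumptions; an Orange Pi would use its own GPIO library, but the logic is the same):
Code: [Select]
#!/usr/bin/env python
import subprocess
import RPi.GPIO as GPIO

SLOT_PINS = [5, 6, 13, 19]  # 4 bits -> up to 16 slots (hypothetical BCM pins)

GPIO.setmode(GPIO.BCM)
for pin in SLOT_PINS:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

# A pin grounded on the backplane reads 0, so invert it to get that bit.
slot = 0
for bit, pin in enumerate(SLOT_PINS):
    if not GPIO.input(pin):
        slot |= 1 << bit

# Derive the address from the slot, e.g. slot 3 -> 192.168.10.103.
address = "192.168.10.%d/24" % (100 + slot)
subprocess.call(["ip", "addr", "add", address, "dev", "eth0"])
print("slot %d -> %s" % (slot, address))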
« Last Edit: October 15, 2016, 04:36:20 am by rs20 »
 

Offline facosta

  • Newbie
  • Posts: 7
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #7 on: October 15, 2016, 04:48:41 am »
I'm not sure about the price Dave quotes for the Orange one. The cheapest ones I can find are AU$25 delivered from HK, and the cost roughly doubles once you include delivery when buying from the States. Obviously I'm looking at the wrong side of the distribution line. By the way, if you are in a hurry and need to buy one of these puppies on the local market, the average cost will be similar to the USA price plus taxes, or AU$55ish.
Any ideas where to look for a better deal?
 

Offline MauriceS

  • Contributor
  • Posts: 24
  • Country: us
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #8 on: October 15, 2016, 04:54:32 am »
There are more options, even with the Pi Zero. One of them is to use the USB connector in USB Ethernet gadget mode:

https://learn.adafruit.com/turning-your-raspberry-pi-zero-into-a-usb-gadget/overview

That would mean having a bunch of micro-USB sockets on a board plus a USB hub. One USB host is needed, and the Zeros could be slaves, so one Pi 2 or 3 would be the cluster master. Cost-wise that would likely be even less than using an ENC28J60 there.
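
For reference, the gadget-mode setup in that Adafruit guide boils down to two small edits on the Zero's SD card (paths as on stock Raspbian; treat this as a sketch, not gospel):
Code: [Select]
# In /boot/config.txt, enable the USB device controller:
dtoverlay=dwc2

# In /boot/cmdline.txt, append to the single existing line, after rootwait:
modules-load=dwc2,g_ether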

Using the ENC28J60 (and there is a 100Mbit version too), there is a possibility of cost saving by using capacitive Ethernet coupling, which I know works. The company I work at uses it on the backplane of one of our systems, and we have no problems. That would save the extra magnetics...

The funkiest solution would be to use SPI, but the first problem is that it would need a Linux driver, and the second is that it looks like the Broadcom MPU only supports master mode, so a (semi-)smart slave would be needed, based on an FPGA (expensive) or a microcontroller... either way, one with as many SPI slave interfaces as possible - I found that Microchip makes some chips with 4 SPI buses.
 

Offline Stefan Payne

  • Contributor
  • Posts: 36
  • Country: de
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #9 on: October 15, 2016, 05:29:05 am »
Hm...

Two things:
1. Why not use some (short) flat ribbon cables? Then you don't need the slot inside the board; you just put a connector there and off you go. Though it might be some additional work for you to make these cables...
40-pin flat ribbon cables should be easily obtainable via eBay - you may be able to get a box full of old IDE cables.
I don't know if the 80-pin ones would work though...

2. In Ethernet, the 'magnetics' are some kind of isolation transformers, so it may be possible to get away without them. But I've never done this, so don't quote me on that...

Some kind of mechanical thing to screw the board into is needed anyway, is it not?
So I think the version with the flat ribbon connector could be more viable, especially since you can use it to get around the 'different' pinout between the Orange Pi and the normal Pi...
 

Offline Towger

  • Super Contributor
  • ***
  • Posts: 1645
  • Country: ie
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #10 on: October 15, 2016, 05:37:39 am »
WiFi, Dave, WiFi... Just whack a couple-of-dollars WiFi dongle into each one and use a dedicated WiFi router.
 
The following users thanked this post: Fungus, mib

Online Brumby

  • Supporter
  • ****
  • Posts: 12288
  • Country: au
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #11 on: October 15, 2016, 05:44:19 am »
All this faffing around because of the non-right-angle headers in place.

Dave - I'll offer to remove them (up to 100) for you for free, just to take this irritating limitation out of the equation. My ZD-985 works brilliantly on headers.
 

Offline bktemp

  • Super Contributor
  • ***
  • Posts: 1616
  • Country: de
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #12 on: October 15, 2016, 05:51:53 am »
Adding an ENC28J60 will waste a lot of power: those things get pretty hot, because they draw 120mA when idle and 160mA when transmitting. At 3.3V that's an additional ~0.5W per chip!
The ENC424J600 is faster and draws less current.
 

Offline somlioy

  • Regular Contributor
  • *
  • Posts: 128
  • Country: no
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #13 on: October 15, 2016, 07:49:13 am »
Finally an EEVblog project. Please complete it.  8)
 
The following users thanked this post: thm_w

Offline DJVG

  • Contributor
  • Posts: 14
  • Country: nl
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #14 on: October 15, 2016, 07:53:36 am »
I'm currently working on something similar with the same boards (and the ones with 2GB of RAM) and I'm very surprised to see a video like this. Really nice!!

If you want to go smaller you might want to look at the NanoPi NEO: http://www.friendlyarm.com/index.php?route=product/product&path=69&product_id=132. It uses the same AllWinner H3 CPU and it's only 40x40mm!
 
The following users thanked this post: thm_w

Offline hans

  • Super Contributor
  • ***
  • Posts: 1626
  • Country: nl
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #15 on: October 15, 2016, 09:26:35 am »
My thoughts:

The ENC424J600 is an overall nicer chip and all... but via SPI it won't give you much more bandwidth. In addition, unless you fix it to 10Mbit, at 100Mbit network speeds the chance of an ENC buffer overflow is much larger: 100Mbit/s potentially coming in, while only 14Mbit/s can go out (the max SPI speed).
So apart from power, it won't buy you much. The package (QFP44) is also larger.

In terms of magnetics, I would watch out for which voltage reference (GND or VCC) the Ethernet TX/RX pairs are connected to. You could also use capacitive coupling of the Ethernet lines instead of magnetics, separating the DC reference, which is indeed still much cheaper and smaller than magnetics.

I would also likely add a 60-80mm fan to the enclosure, because 40W of dissipation sounds like quite a bit of heat.
« Last Edit: October 15, 2016, 11:38:08 am by hans »
 

Offline metRo_

  • Regular Contributor
  • *
  • Posts: 90
  • Country: pt
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #16 on: October 15, 2016, 09:34:37 am »
Quote from: bktemp
Adding an ENC28J60 will waste a lot of power: those things get pretty hot, because they draw 120mA when idle and 160mA when transmitting. At 3.3V that's an additional ~0.5W per chip!
The ENC424J600 is faster and draws less current.

And I think faster is the key here... if you can't spread the data fast enough to all the boards, you are wasting the parallel computing power of this kind of solution.
 

Online mariush

  • Super Contributor
  • ***
  • Posts: 4983
  • Country: ro
  • .
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #17 on: October 15, 2016, 10:05:52 am »
Considering the low speeds, it would be cheaper to just make your own RJ45-to-header cables: a 1x8, 2x4 or 2x5 0.1" header at the end of ~5cm of network cable - just enough to make a 180-degree turn and go down towards the main PCB, or to route the cable to a convenient location where it could go into a network switch. And honestly, switches are so cheap these days that it would probably be cheaper to just buy one from the store, remove it from its case and attach the internal board to the board that holds all those Pis in place. There are new 8-port switches for $10 and 5-port switches for around $8 in my local store, in a country with 20% VAT... you probably can't even buy the parts separately from stores like Digikey for that price. You could probably buy used 24-48 port switches on eBay for less than $30-40.

Also, 40-pin ribbon cables should be fairly cheap if you buy in volume; just look on eBay for IDE cables at $1 a piece... here's an example: http://www.ebay.com/itm/40-Pin-IDE-ATA-HDD-Hard-Drive-Ribbon-Band-Cable-Dual-Device-Disk-Connectors-/171288701698?hash=item27e19a2b02:g:EgIAAOxyhXRTPLJc

You could convert such a cable into two separate cables just by cutting the ribbon near the middle connector and installing a new 40-pin connector at the end of the loose ribbon. On Digikey, such connectors seem to be more expensive than an IDE cable from eBay; it looks like they're about $1.50 each: http://www.digikey.com/product-detail/en/3m/89140-0001/MSC40A-ND/229687

But alternatively, especially if you don't need all 40 pins, you could make yourself three ribbon cables just by cutting the ribbon in the middle (or to the length you desire) and installing smaller connectors at the other ends (the ones which would go on your base board).
For example, you could use 10-position (2x5) connectors at 30 cents each (which you could also reuse for the network jack): http://www.digikey.com/product-detail/en/on-shore-technology-inc/101-106/ED10500-ND/2794212
Or you could use 20-position (2x10) connectors at around 46 cents each on Digikey: http://www.digikey.com/product-detail/en/assmann-wsw-components/AWP-20-7240-T/HHKC20H-ND/4864473

So for a $1 IDE cable + 3 x 30 cents = ~$2 you've got yourself three 40-pin -> 10-pin cables, or for ~$2.50 you could have three 40-pin -> 20-pin cables.

An extra RJ45 network jack and a 30-cent header could save you the $2-3 for that Microchip IC.

I was thinking you could make boards like PCI-E cards on which you'd attach up to 7 Pis (so that you'd use 8-port network switches), and route those 10-20 wires from each Pi to the PCI-E slot (which, if I remember correctly, has around 150 pins on the long side); you could use the short side to send 12V or 24V to the PCI-E-like card and have some DC-DC converters on the card to convert that down to 5V for each Pi.
The PCI-E slots are easy to buy and could also be positioned on a motherboard in a way that would allow you to screw these fake PCI-E boards to a computer case for rigidity, support, whatever... you get a case with around 10 slots, so you could have 10 x 7 Pis or something like that, all powered from an ATX power supply's 12V rail, with regulators on each PCI-E-like card producing 5V.

 
« Last Edit: October 15, 2016, 10:11:36 am by mariush »
 
The following users thanked this post: elgonzo

Offline suku

  • Contributor
  • Posts: 46
  • Country: hu
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #18 on: October 15, 2016, 10:10:31 am »
You could actually design the board to be fairly low-profile, so it's possible to put the motherboard upside down into the case and use standard Raspberry Pis... I think it'd be nice to make it compatible with both boards...
 

Offline alexanderbrevig

  • Frequent Contributor
  • **
  • Posts: 700
  • Country: no
  • Musician, developer and EE hobbyist
    • alexanderbrevig.com
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #19 on: October 15, 2016, 10:25:24 am »
If you stagger them you'd probably get twice the density, at a cost in width.
 

Offline CM800

  • Frequent Contributor
  • **
  • Posts: 882
  • Country: 00
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #20 on: October 15, 2016, 10:40:16 am »
Might I suggest this special snowflake of a connector?

[image: PCB-mount Ethernet plug]

PCB mount Ethernet Plug!  :-DD
 
The following users thanked this post: cowana, rs20, ckambiselis, chris_leyson

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26755
  • Country: nl
    • NCT Developments
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #21 on: October 15, 2016, 10:50:12 am »
I doubt it makes much difference in cost to order multiple PCBs with Orange Pis mounted flat (and mechanically fixed to the board!) or one board with 10 standing (well, hanging on a connector). I'd mount them flat for mechanical stability. Either way, a dense solution with many Pis may need forced-air cooling.
« Last Edit: October 15, 2016, 10:52:24 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #22 on: October 15, 2016, 10:53:00 am »
Instead of cutting slots into the backplane, you can use an adapter board that has a straight 40-way pin header socket, cutouts allowing the USB and Ethernet jacks to poke through, a card-edge connector to the motherboard for easier node removal, network circuitry if you are using the GPIO header for networking, and maybe some power supply and protection circuitry (so your motherboard doesn't have to carry too much current). In fact, by doing this your backplane will also be compatible with the Raspberry Pi, if a different adapter board is used.

Each adapter board has a buck converter that converts 12V to 5V, the ENC28J60 chip and half of the termination resistors. The backplane would then be a plain old Ethernet switch.
« Last Edit: October 15, 2016, 11:17:03 am by technix »
 
The following users thanked this post: SeanB

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13695
  • Country: gb
    • Mike's Electric Stuff
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #23 on: October 15, 2016, 11:00:10 am »
The connector issue is easy - something like a Samtec SSW right-angle header with long pins.

http://suddendocs.samtec.com/catalog_english/ssw_th.pdf

PCB pin lengths of 0.3" are available, which would probably get you high enough off the board.
Youtube channel: Taking weird stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 
The following users thanked this post: rs20

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13695
  • Country: gb
    • Mike's Electric Stuff
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #24 on: October 15, 2016, 11:03:01 am »
Quote from: NiHaoMike
What about using a small FPGA to connect all the SPI lines to a master node?
My thoughts exactly - either emulate multiple ENC28J60 chips and a switch in the FPGA, or, if there is a mechanism in the ENC protocol to add waits, maybe mux the SPIs into a single ENC chip.

Youtube channel: Taking weird stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

