EEVblog Electronics Community Forum

EEVblog => EEVblog Specific => Topic started by: EEVblog on October 15, 2016, 12:01:59 am

Title: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 12:01:59 am
Dave sketches up an idea for a more integrated Raspberry Pi supercomputer cluster with built-in Ethernet and power.

https://www.youtube.com/watch?v=KI7YLXhovb8 (https://www.youtube.com/watch?v=KI7YLXhovb8)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: riyadh144 on October 15, 2016, 12:35:15 am
You have to be careful, as the 5V pin has no input protection; I have killed a few RPis doing exactly this.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: johnkeates on October 15, 2016, 01:49:17 am
It's not only the power pins: the GPIO pins have practically no protection either.

Regarding networking, there are more options besides Ethernet; you could use the serial port on the thing and run PPP over it. At least one UART is on the header, if I'm not mistaken.
To connect them together, you'd need a way to have them all feed back to one Pi that just does the PPP-to-Ethernet bridging, and you can use its existing Ethernet connection to plug into your network. It would take some sort of expander or multiplier to get all those serial connections back to that one Pi, so an extra chip is still needed, but only one and not one for every Pi. The nice thing about Linux is that as long as something does stuff like TCP/IP, any application running on top of it won't know or care about what transport is used at the lower levels. There is support for at least serial, parallel, USB, Ethernet (of course), FireWire, PCIe, IrDA, ISDN, and a while back someone was working on TCP/IP over I2C (but I'm not sure if that was ever completed).
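
To illustrate that transport-independence, here is a minimal sketch (the peer address and port are made-up examples): this toy TCP exchange behaves exactly the same whether the route to the other node is onboard Ethernet, PPP over a UART, or USB gadget networking.

Code:
import socket

# Talk to a peer node by IP address. The kernel picks the route, so this
# works unchanged over Ethernet, PPP-over-serial, USB gadget ethernet, etc.
PEER = "10.0.0.2"   # hypothetical address of another node in the cluster
PORT = 5000         # hypothetical port a peer service listens on

with socket.create_connection((PEER, PORT), timeout=5) as s:
    s.sendall(b"hello from node 1\n")
    print(s.recv(1024).decode())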

With the networking thing on the hardware side, there is a software side too: proper hardware has MAC addresses for network interfaces on Ethernet, and other busses have arbitrary, configurable or serial-number-based addresses in many cases. If you want to configure your cluster in a somewhat automated manner, you could just have the IP addresses preconfigured based on those addresses in a DHCP server. The compute module would request an address, and the DHCP server would recognise its MAC address or port ID and assign it the correct address.
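
As a sketch of that preconfigured-address idea (the MAC addresses and the 10.0.0.x range below are made-up examples, not from any real board), a few lines of Python can generate the static host entries for an ISC dhcpd config from a node table:

Code:
# Generate ISC dhcpd static host entries from a node table.
# The MACs and IP addresses here are purely illustrative.
nodes = {
    "node01": ("b8:27:eb:00:00:01", "10.0.0.101"),
    "node02": ("b8:27:eb:00:00:02", "10.0.0.102"),
}

for name, (mac, ip) in sorted(nodes.items()):
    print(f"host {name} {{ hardware ethernet {mac}; fixed-address {ip}; }}")

Paste the output into dhcpd.conf (or feed the same table to dnsmasq) and every node always comes up at the same address.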

Regarding the software, the Raspberry Pi guys have 'Raspbian', which is a modified version of Debian, specifically for the Pi. Since Debian is completely free/libre/open/wank-word-of-the-day and the process of adding the Raspbian modifications on top of it is documented, the software should always be maintained and maintainable. There is no secret sauce or a commercial entity required to keep it going on the Linux-based side. On the bootloader and GPU side of things, it's a bit tricky, since Broadcom still thinks it has to be 'secret sauce' for some reason (as if nobody else has a CPU that can boot... or a GPU that does graphics). There is the bootloader binary blob file that reads config.txt, and that's the part you can't really maintain if you're not a Broadcom zombie, so that's something that may not always be kept up to date. On the other hand, does it really need to be 'maintained'? It's just there for one thing: initialise the GPU and start it, then let it start the CPU (yes, that's how the thing does it), set up the DRAM, and kick the kernel into action. So as long as the hardware doesn't change and the kernel is fine with whatever the bootloader parks in memory, it should work indefinitely.

For other boards, this may not work out so well. There is this Banana Pi, and the Orange Pi, and there are probably a ton of other Pies (enough to fill a pastry cookbook?), and we have the CHIP, and then there are the MinnowBoards, BeagleBones, and LeMaker is doing stuff, and everyone is coming to join the club. Specs-wise, that's nice, but when it comes to software, unless you have one of those Linux distributions (like Debian or Fedora) to build on and a platform-specific community (like with Raspbian) to maintain tweaks for the board-specific things, it's impossible to maintain or develop in a workable, long-term way, because the resources are simply not there. Since software and the internet (and everything connected to it) are always changing, at some point you'll always have to update, upgrade or modify the devices and/or software in order to keep using it or keep it connected to others. Try finding a serial modem, or an IrDA adapter; heck, try finding an analog phone line! It's annoying, but as long as stuff is connected or has to work together, everything has to keep up (to a certain level, of course).

So when picking a board or software distribution, the community or upstream projects feeding the software that runs it are about just as important as the hardware specs themselves.
Without good software, those tiny boards won't even cut it as a paperweight (too light to hold anything down!).
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: SL4P on October 15, 2016, 04:13:01 am
I'd think twice about routing the slots in the backplane to accommodate the Pi headers...
It may be smarter to make small right-angle transition PCBs that would allow you to keep the backplane intact for bussing and mechanical strength.
The transition 'board' may also offer other connection and mechanical benefits as it evolves.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: NiHaoMike on October 15, 2016, 04:22:30 am
What about using a small FPGA to connect all the SPI lines to a master node? Or, if possible, set the USB port to gadget mode and use ordinary USB hubs to link to master nodes.

I have a cluster of cheap smartphones for mining altcoins. Connectivity is just a cheap wireless router dedicated to serving the cluster. A spare 120mm fan connected to a wall wart keeps them cool.
Title: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: 6581 on October 15, 2016, 04:26:12 am
Two ideas for physical density - not sure how great or viable: 1) 45-degree connectors on the backplane/motherboard could allow sliding these tighter next to each other (like some memory modules); 2) two boards on top of each other - top board upside down, raspberries interleaved. Just my thoughts while watching.

Great project, very interesting.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rs20 on October 15, 2016, 04:30:33 am
Really cool project! A couple of ideas:

* Or want to use a particular one because you're using the GPIO breakouts that you mentioned.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: facosta on October 15, 2016, 04:48:41 am
I'm not sure about the price Dave quotes for the Orange one. The cheapest ones I can find are AU$25 from HK delivered, and the cost will double once you include delivery when you buy from the States. Obviously I'm looking at the wrong side of the distribution line. By the way, if you are in a hurry and need to buy one of these puppies on the local market, the average cost will be similar to the USA price plus taxes, or AU$55ish.
Any ideas where to look for a better deal?
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: MauriceS on October 15, 2016, 04:54:32 am
There are more ways even with the Pi Zero; one of them is to use the USB connector and the USB Ethernet emulation mode:

https://learn.adafruit.com/turning-your-raspberry-pi-zero-into-a-usb-gadget/overview

So that would mean having a bunch of micro USB sockets on a board and a USB hub. One USB host is needed, and the Zeros could be slaves, so one Pi 2 or 3 would be the cluster master. Cost-wise that would likely be even less than having an ENC28J60 in there.

Using the ENC28J60 (and there is a 100Mbit version too) - there is a possible cost saving in using capacitive Ethernet coupling, which I know works. The company I work at uses that on the backplane of one of our systems, and we don't have problems. That would save the extra magnetics...

The funkiest solution would be to use SPI, but the first problem is that it would need a Linux driver, and the second issue is that it looks like the Broadcom MPU only supports master mode, so a (semi-)smart slave would be needed, based on an FPGA (expensive) or a microcontroller... either one with as many SPI slave interfaces as possible - I found Microchip makes some chips with 4 SPI busses.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Stefan Payne on October 15, 2016, 05:29:05 am
Hm...

2 Things:
1. Why not use some (short) flat ribbon cables? Then you don't need the slot inside the board; you just put a connector there and off you go. Though it might be some additional work for you to make these cables...
And 40-pin flat ribbon cables should be easily obtainable via eBay. You may be able to get a box full of those -> old IDE cables.
I don't know if the 80-conductor ones would work though...

2. In Ethernet the 'magnetics' are a kind of isolation transformer, so it may be possible to get away without them. But I've never done this, so don't quote me on that...

Some kind of mechanical thing to screw the board into is needed anyway, is it not?
So I think the version with the flat ribbon connector could be more viable, especially since you can use it to get around the 'different' pinout between the Orange Pi and the normal Pi...
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Towger on October 15, 2016, 05:37:39 am
Wifi, Dave, wifi... Just whack a couple-of-dollars wifi dongle into each one and use a dedicated wifi router.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Brumby on October 15, 2016, 05:44:19 am
All this faffing around because of the non-right-angle headers in place.

Dave - I'll offer to remove them (up to 100) for you for free - just to take this irritating limitation out of the equation.  My ZD-985 works brilliantly on headers.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: bktemp on October 15, 2016, 05:51:53 am
Adding an ENC28J60 will waste a lot of power: Those things get pretty hot, because they draw 120mA when idle and 160mA when transmitting. That's an additional 0.5W per chip!
ENC424J600 is faster and draws less current.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: somlioy on October 15, 2016, 07:49:13 am
Finally an EEVblog project. Please complete it.  8)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: DJVG on October 15, 2016, 07:53:36 am
I'm currently working on something similar with the same boards (and the ones with 2GB RAM) and I'm very surprised to see a video like this. Really nice!!

If you want to go smaller you might want to look at the NanoPi NEO: http://www.friendlyarm.com/index.php?route=product/product&path=69&product_id=132 (http://www.friendlyarm.com/index.php?route=product/product&path=69&product_id=132). It uses the same AllWinner H3 CPU and it's only 40x40mm!
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: hans on October 15, 2016, 09:26:35 am
My thoughts:

The ENC424J600 is an overall nicer chip and all... but via SPI it won't give much more bandwidth. In addition, unless you tell it to fix to 10Mbit, at 100Mbit network speeds the chance of an ENC buffer overflow is much larger: 100Mbit/s potentially going in, and only 14Mbit/s can go out (max SPI speed).
So apart from power, it won't buy you much. The package (QFP44) is also larger.
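
To put a number on that overflow risk, here is a back-of-envelope sketch (the in/out rates are the figures above; the 8kB packet buffer is the ENC28J60's):

Code:
# How quickly does the ENC28J60's 8 kB packet buffer fill when 100 Mbit/s
# arrives from the wire but only ~14 Mbit/s can be drained over SPI?
in_rate = 100e6 / 8       # bytes/s arriving at 100 Mbit/s
out_rate = 14e6 / 8       # bytes/s drained over SPI
buffer_bytes = 8 * 1024   # ENC28J60 packet buffer size

t = buffer_bytes / (in_rate - out_rate)
print(f"buffer full after ~{t * 1000:.2f} ms of sustained traffic")

That comes out to well under a millisecond of sustained line-rate traffic, which is why fixing the link at 10Mbit is the safer option.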

In terms of magnetics, I would watch out for which voltage reference (GND or VCC) the Ethernet TX/RX pairs are connected to. You could also use capacitive coupling of the Ethernet lines instead of magnetics, separating the DC reference, which is indeed still much cheaper and smaller than magnetics.

I would also likely add a 60-80mm fan in the enclosure, because 40W of dissipation is quite a bit of heat.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: metRo_ on October 15, 2016, 09:34:37 am
Adding an ENC28J60 will waste a lot of power: Those things get pretty hot, because they draw 120mA when idle and 160mA when transmitting. That's an additional 0.5W per chip!
ENC424J600 is faster and draws less current.

And I think faster is the key here... if you can't spread the data fast enough to all boards you are wasting the parallel computing power of this kind of solution.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: mariush on October 15, 2016, 10:05:52 am
Considering the low speeds, it would be cheaper to just make your own RJ45-to-1x8 (or 2x4 or 2x5) 0.1" header at the end of 5cm or so of network cable. Just enough to make a 180-degree turn and go down towards the main PCB, or to route the cable to a convenient location where it could go into a network switch. And honestly, switches are so cheap these days that it would probably be cheaper to just buy one from the store, remove it from its case and attach the internal board to the board that holds all those Pis in place. There are new 8-port switches for $10 and 5-port switches for around $8 in my local store, in a country with 20% VAT... you probably can't even buy the parts separately from stores like Digikey for that price. You could probably buy 24-48 port used switches on eBay for less than $30-40.

Also, 40-pin ribbon cables should be fairly cheap if you buy in volume; just look on eBay for IDE cables - $1 a piece... here's an example: http://www.ebay.com/itm/40-Pin-IDE-ATA-HDD-Hard-Drive-Ribbon-Band-Cable-Dual-Device-Disk-Connectors-/171288701698?hash=item27e19a2b02:g:EgIAAOxyhXRTPLJc (http://www.ebay.com/itm/40-Pin-IDE-ATA-HDD-Hard-Drive-Ribbon-Band-Cable-Dual-Device-Disk-Connectors-/171288701698?hash=item27e19a2b02:g:EgIAAOxyhXRTPLJc)

You could convert such a cable into two separate cables just by cutting the ribbon near the middle connector and installing a new 40-pin connector at the end of the loose ribbon. On Digikey, such connectors seem to be more expensive than an IDE cable from eBay; it looks like they're about $1.50 each: http://www.digikey.com/product-detail/en/3m/89140-0001/MSC40A-ND/229687 (http://www.digikey.com/product-detail/en/3m/89140-0001/MSC40A-ND/229687)

But alternatively, especially if you don't need all 40 pins, you could make yourself three ribbon cables just by cutting the ribbon in the middle, or to the length you desire, and installing smaller connectors at the other ends (the ones which would go on your base board).
For example, you could use 10-position (2x5) connectors, which are 30 cents each (and which you could also reuse for the network jack): http://www.digikey.com/product-detail/en/on-shore-technology-inc/101-106/ED10500-ND/2794212 (http://www.digikey.com/product-detail/en/on-shore-technology-inc/101-106/ED10500-ND/2794212)
or you could use 20-position (2x10) connectors, which are around 46 cents each on Digikey: http://www.digikey.com/product-detail/en/assmann-wsw-components/AWP-20-7240-T/HHKC20H-ND/4864473 (http://www.digikey.com/product-detail/en/assmann-wsw-components/AWP-20-7240-T/HHKC20H-ND/4864473)

So for a $1 IDE cable + 3x 30 cents = ~$2 you've got yourself 3 x 40-pin -> 10-pin cables, or for ~$2.50 you could have 3 x 40-pin -> 20-pin cables.

An extra RJ45 network jack and a 30-cent header could save you the $2-3 for that Microchip IC.

I was thinking you could make boards like PCI-E cards on which you'd attach up to 7 Pis (so that you'd use 8-port network switches), and those 10-20 wires from each Pi would be routed to the PCI-E slot (which, if I remember correctly, has around 150 pins on the long side), and you could use the short side to send 12V or 24V to the PCI-E-like card and have some DC-DC converters on the card to convert that down to 5V for each Pi.
The PCI-E slots are easy to buy and could also be positioned on a motherboard in a way that would allow you to screw these fake PCI-E boards to a computer case for rigidity, support, whatever... you get a case with around 10 slots, so you could have 10 x 7 Pis or something like that, all powered from an ATX power supply on 12V, with regulators on each PCI-E-like card producing 5V.

 
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: suku on October 15, 2016, 10:10:31 am
You could actually design the board to be fairly low profile, so it's possible to put the motherboard upside down into the case and use standard Raspberry Pis... I think it'd be nice to make it compatible with both boards...
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: alexanderbrevig on October 15, 2016, 10:25:24 am
If you stagger them you'd probably get twice the density, at a cost in width.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 15, 2016, 10:40:16 am
Might I suggest this special snowflake (http://uk.farnell.com/metz-connect/ajp92a8813/connector-rj45-plug-1port-8p8c/dp/2442534?ost=AJP92A8813&selectedCategoryId=&categoryNameResp=All%2BCategories&searchView=table&iscrfnonsku=false) of a connector?

(http://uk.farnell.com/productimages/standard/en_GB/2442534-40.jpg)


PCB mount Ethernet Plug!  :-DD
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: nctnico on October 15, 2016, 10:50:12 am
I doubt it makes much difference in cost to order multiple PCBs with orange pies mounted flat (and mechanically fixed to the board!) or one board with 10 standing (well, hanging on a connector). I'd mount them flat for mechanical stability. Either way a dense solution with many pies may need forced air cooling.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: technix on October 15, 2016, 10:53:00 am
Instead of cutting slots into the backplane, you can use an adapter board that has a straight 40-way pin header socket, cutouts allowing the USB and Ethernet jacks to poke through, a card edge connector to the motherboard for easier node removal, network circuitry if you are using the GPIO headers for networking, and maybe some power supply and protection circuitry (so your motherboard doesn't have to carry too much current). In fact, by doing this your backplane will also be compatible with the Raspberry Pi, if a different adapter board is used.

Each adapter board has a buck converter that converts 12V to 5V, the ENC28J60 chip and half of the termination resistors. The backplane would then be a plain old Ethernet switch.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: mikeselectricstuff on October 15, 2016, 11:00:10 am
Connector issue is easy - something like a Samtec SSW right-angle header with long pins.

http://suddendocs.samtec.com/catalog_english/ssw_th.pdf (http://suddendocs.samtec.com/catalog_english/ssw_th.pdf)

PCB pin lengths of 0.3" are available, which would probably get you high enough off the board.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: mikeselectricstuff on October 15, 2016, 11:03:01 am
What about using a small FPGA to connect all the SPI lines to a master node?
My thoughts exactly - either emulate multiple ENC28J60 chips and a switch in the FPGA, or, if there is a mechanism in the ENC protocol to add waits, maybe mux the SPIs into a single ENC chip.

Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 11:16:59 am
Another option is the ODROID-C0 http://www.hardkernel.com/main/products/prdt_info.php?g_code=G145326484280 (http://www.hardkernel.com/main/products/prdt_info.php?g_code=G145326484280)
It comes without populated GPIO and USB connectors. Not ultra cheap but it might suit some uses.

Someone on youtube pointed me to the nanopi-neo:
http://nanopi.io/nanopi-neo.html (http://nanopi.io/nanopi-neo.html)
It's only $8 for a 4 core H3
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 11:18:03 am
What about using a small FPGA to connect all the SPI lines to a master node?
My thoughts exactly - either emulate multiple ENC28J60 chips and a switch in the FPGA, or, if there is a mechanism in the ENC protocol to add waits, maybe mux the SPIs into a single ENC chip.

Because that's another complex step. Far easier to just use the chips.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 11:22:07 am
Adding an ENC28J60 will waste a lot of power: Those things get pretty hot, because they draw 120mA when idle and 160mA when transmitting. That's an additional 0.5W per chip!
ENC424J600 is faster and draws less current.

Hadn't looked at the power consumption yet, thanks for the heads up.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 11:24:00 am
Might I suggest this special snowflake (http://uk.farnell.com/metz-connect/ajp92a8813/connector-rj45-plug-1port-8p8c/dp/2442534?ost=AJP92A8813&selectedCategoryId=&categoryNameResp=All%2BCategories&searchView=table&iscrfnonsku=false) of a connector?
(http://uk.farnell.com/productimages/standard/en_GB/2442534-40.jpg)
PCB mount Ethernet Plug!  :-DD

I couldn't find one of those!  :o

10 bucks says the height does not match the required vertical USB micro PCB plug!
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 11:28:33 am
If you want to go smaller you might want to look at the NanoPi NEO: http://www.friendlyarm.com/index.php?route=product/product&path=69&product_id=132 (http://www.friendlyarm.com/index.php?route=product/product&path=69&product_id=132). It uses the same AllWinner H3 CPU and it's only 40x40mm!

Yeah, someone else pointed this out and I really like it. Very cheap, available by the looks of it, and tiny.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 15, 2016, 11:30:21 am
Might I suggest this special snowflake (http://uk.farnell.com/metz-connect/ajp92a8813/connector-rj45-plug-1port-8p8c/dp/2442534?ost=AJP92A8813&selectedCategoryId=&categoryNameResp=All%2BCategories&searchView=table&iscrfnonsku=false) of a connector?
(http://uk.farnell.com/productimages/standard/en_GB/2442534-40.jpg)
PCB mount Ethernet Plug!  :-DD

I couldn't find one of those!  :o

I went to RJ45 on Farnell, then clicked plug and solder/through-hole, and this came up...


Think you might use it? Much more elegant solution.

It might even be possible to make a fancy footprint to use this right-angle USB Type A SMT connector as a vertical THT part:

http://uk.farnell.com/multicomp/mc000991/usb-2-0-type-a-plug-smt/dp/2476092?MER=sy-me-pd-mi-alte (http://uk.farnell.com/multicomp/mc000991/usb-2-0-type-a-plug-smt/dp/2476092?MER=sy-me-pd-mi-alte)

I don't know how well the boards would line up, however... I suspect not so well. It may not matter (depends what board you go for).



If you want to go smaller you might want to look at the NanoPi NEO: http://www.friendlyarm.com/index.php?route=product/product&path=69&product_id=132 (http://www.friendlyarm.com/index.php?route=product/product&path=69&product_id=132). It uses the same AllWinner H3 CPU and it's only 40x40mm!

Yeah, someone else pointed this out and I really like it. Very cheap, available by the looks of it, and tiny.

They look small enough you could use them horizontally with standoffs and use pogo pins for the ethernet connection... that could be quite elegant.



EDIT:

Bought 4 of them! They look perfect for getting acquainted with ARM/Linux. Maybe in the future I can fit a nice CPLD or FPGA under it too.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: bktemp on October 15, 2016, 11:43:56 am
Someone on youtube pointed me to the nanopi-neo:
http://nanopi.io/nanopi-neo.html (http://nanopi.io/nanopi-neo.html)
It's only $8 for a 4 core H3
You could use the PCB mount Ethernet plug next to a PCB mount USB A connector and plug the NanoPi NEO into both connectors: Ethernet for communication and USB for power (you will be using the USB output as a power input, but it should be OK; the schematic shows the 5V USB connected directly to the internal 5V rail).
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rob77 on October 15, 2016, 11:57:22 am
Regarding all the networking suggestions for a "fun" project like this... the top priority is the question: does Linux have a working and usable driver for that solution? If the answer is no, then you should ditch the idea right away, because it would increase the complexity of the project (and not everyone is able to write network device drivers for Linux).

So the viable options are (simplest to most complicated):
- wifi dongles + a dedicated wifi router
- pppd over serial to a "master" with an Ethernet connection, acting as a router/bridge
- a dedicated SPI Ethernet chip for each board (ENC28J60 or ENC424J600) + an Ethernet switch (without magnetics, of course)

The other possibilities are countless; virtually, you can do anything... but the question is the drivers. Of course you can write your own application-specific communication routines, but that would be single-purpose... If you are able to write the network device drivers for your solution, then go ahead with it... otherwise go for a solution which is already available.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 12:04:40 pm
They look small enough you could use them horizontally with standoffs and use pogo pins for the ethernet connection... that could be quite elegant.

Yep, that was my first thought. Although I wouldn't use pogo pins, just suck out the RJ45 and USB and solder them in place. Could have multiple flat motherboards stacked, along with thermal sheets going over the boards and processors maybe. Density would be super high. 5 per board, 5 boards stacked would be bugger-all volume for a 100-core ARM system.
Best passive thermal coupling would be 5 boards mounted back to back with thermal pads going to a thin case top and bottom.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 12:07:07 pm
You could use the PCB mount Ethernet plug next to a PCB mount USB A connector and plug the NanoPi NEO into both connectors:

Murphy will ensure it's not that easy and the two are not compatible height-wise.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 12:08:31 pm
Regarding all the networking suggestions for a "fun" project like this... the top priority is the question: does Linux have a working and usable driver for that solution?

Doesn't matter, it's just like having dozens of RPis plugged into a switch. Linux doesn't know any different.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rob77 on October 15, 2016, 12:13:42 pm
Regarding all the networking suggestions for a "fun" project like this... the top priority is the question: does Linux have a working and usable driver for that solution?

Doesn't matter, it's just like having dozens of RPis plugged into a switch. Linux doesn't know any different.

I was referring to the solutions suggested above - like the multiplexed SPI... if there is no driver in the kernel for it, then it will over-complicate the project because you will have to write the drivers for it.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rs20 on October 15, 2016, 12:22:34 pm
I was referring to the solutions suggested above - like the multiplexed SPI... if there is no driver in the kernel for it, then it will over-complicate the project because you will have to write the drivers for it.

Mike's suggestion explicitly mentioned emulating the ENC28J60 on the FPGA, in which case the pre-existing ENC28J60 drivers would work. However, I'd personally prefer just to use real chips; impersonating chips on an FPGA sounds like quite the rabbit-hole.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 15, 2016, 12:26:45 pm
They look small enough you could use them horizontally with standoffs and use pogo pins for the ethernet connection... that could be quite elegant.

Yep, that was my first thought. Although I wouldn't use pogo pins, just suck out the RJ45 and USB and solder them in place. Could have multiple flat motherboards stacked, along with thermal sheets going over the boards and processors maybe. Density would be super high. 5 per board, 5 boards stacked would be bugger-all volume for a 100-core ARM system.
Best passive thermal coupling would be 5 boards mounted back to back with thermal pads going to a thin case top and bottom.

Could machine a beautiful long aluminum or copper block under them all, with raised sections to touch each processor, then throw a hole through the length of it and pump water through to keep each stack cool. I think this could be a fun project, even a competition... how many cores per cubic cm can we fit using only off-the-shelf ARM boards (no custom ARM boards)?
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 12:30:19 pm
Could machine a beautiful long aluminum or copper block under them all, with raised sections to touch each processor, then throw a hole through the length of it and pump water through to keep each stack cool. I think this could be a fun project, even a competition... how many cores per cubic cm can we fit using only off-the-shelf ARM boards (no custom ARM boards)?

Water cooling would not be required; passive would work fine.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 15, 2016, 12:34:39 pm
Mike's suggestion explicitly mentioned emulating the ENC28J60 on the FPGA, in which case the pre-existing ENC28J60 drivers would work. However, I'd personally prefer just to use real chips; impersonating chips on an FPGA sounds like quite the rabbit-hole.

Exactly. Why you'd bother to go to that effort just for a project like this is beyond me.
It's a huge job; look at the datasheet and the hundreds of registers and whatnot. What worthwhile benefit do you get for all that effort?
Time and effort much better spent on other aspects.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rob77 on October 15, 2016, 12:39:27 pm
I was referring to the solutions suggested above - like the multiplexed SPI... if there is no driver in the kernel for it, then it will over-complicate the project because you will have to write the drivers for it.

Mike's suggestion explicitly mentioned emulating the ENC28J60 on the FPGA, in which case the pre-existing ENC28J60 drivers would work. However, I'd personally prefer just to use real chips; impersonating chips on an FPGA sounds like quite the rabbit-hole.

I was talking more about this one:

What about using a small FPGA to connect all the SPI lines to a master node?
My thoughts exactly - either emulate multiple ENC28J60 chips and a switch in the FPGA, or, if there is a mechanism in the ENC protocol to add waits, maybe mux the SPIs into a single ENC chip.

Muxing into a single ENC chip would mean you have to implement some kind of virtualization in the driver - the driver would have to hold a "virtual" ENC chip instance, and it would have to re-initialize the real ENC once it gets its "machine time". At least the MAC address would have to be re-initialized on every context switch of the real ENC - otherwise the whole cluster would appear as a single node on the network (single MAC address).
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: hans on October 15, 2016, 12:39:57 pm
Multiplexed SPI sounds like a big can of worms. You'd need to multiplex 4 I/Os in 2 directions 8 times, which does not cut down on the number of chips and saves only a little on cost (the ENC28J60 is 2.35 EUR each; a 4-channel mux is probably more like 0.50 EUR).

Also there are no drivers. And I think this is the biggest problem, because I doubt it's even possible to create a pleasing, workable system. The ENC28J60 is a nice chip because it has filters like unicast and such, so it drops any frame from the buffer that is not targeted at its own MAC. If you have 8 Linux boxes talking through the same ENC28J60 they would need to be assigned individual software MACs. They would then need to poll the ENC one by one to check if there is a packet they can take. And how are you going to handle stuff like broadcasts, multicasts and packets that do not belong to the cluster? Again: a can of worms.

Emulating the ENC28J60 sounds doable, but probably a lot of work even for a dumbed-down emulation. And I imagine you need a pretty large FPGA to emulate the frame buffers for each ENC28J60: if you have 8 emulations running, times 8kB of buffer, that's 64kB of RAM plus all the logic required to basically implement an Ethernet switch. All this work for 25 euros worth of chips. It could be fun, but it is an enormous undertaking, and ultimately it does not add any unique logic that doesn't already exist or that you absolutely need an FPGA for.

ENC28J60 and ENC424J600 drivers are already in the kernel and can be used via SPI, but like I said earlier they are limited to around 10 and 14 Mbit/s respectively.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 15, 2016, 12:46:14 pm
Could machine a beautiful long aluminum or copper block under them all, with raised sections to touch each processor, then throw a hole through the length of it and pump water through to keep each stack cool. I think this could be a fun project, even a competition... how many cores per cubic cm can we fit using only off-the-shelf ARM boards (no custom ARM boards)?

Water cooling would not be required; passive would work fine.

True... I'm just looking for an excuse to use a tiny radiator like this:
(http://puu.sh/rJJOA/70ce086958.jpg)
(http://www.hobbyking.com/hobbyking/store/catalog/51510s1.jpg)


Adorable  :scared: :scared:!


Quote
Although I wouldn't use pogo pins,

Also, is there any reason you don't like the idea of pogo pins? (To save desoldering / enhance plug-and-play-ability.)

Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: microcircuit on October 15, 2016, 01:01:03 pm
Hi Dave,
Just a thought: rather than routing slots in your motherboard, if you were to fit the right-angle connector to a Pi before soldering it to the MB, this would provide the required stand-off spacing between the MB and the Pi. Not having a connector or Pi to test, I'm unsure if the connector pins are long enough for this to work.
Excellent videos,
Phil   
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: crasbe on October 15, 2016, 01:54:44 pm
Hi Dave,

I don't know if you've seen my comment on YouTube, but the NXP SC16IS740/750/760 slave UART might be a solution to look into.
It's an I2C- or SPI-attached slave UART, and you can put up to 16 of them onto one I2C bus.

You'd have to dedicate one Orange Pi as a master for PPP over serial. There is also support for the SC16IS7x0 in the Linux kernel, so the software side shouldn't be too complex.

Greetings,
Chris
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 15, 2016, 02:18:29 pm
I've made a nice CAD model of this now; the only inaccurate dimensions are the USB and Ethernet heights. The length is correct, as is the height of the BGA relative to the PCB.

I'm not sure about the thickness of the PCB; I made it 1.5mm.


(http://puu.sh/rJNDm/d192c34ec9.png)

Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: BurtyB on October 15, 2016, 02:28:57 pm
Someone on youtube pointed me to the nanopi-neo:
http://nanopi.io/nanopi-neo.html (http://nanopi.io/nanopi-neo.html)
It's only $8 for a 4 core H3

I've been playing with the NanoPi NEO in my v1.3 Cluster HAT (http://clusterhat.com/dist/img/ClusterHAT-Zero-NanoPi.jpg (http://clusterhat.com/dist/img/ClusterHAT-Zero-NanoPi.jpg)), which uses USB gadget mode to provide Ethernet, a serial console and power over a single USB connector.

I can't see the NanoPi NEO without headers on the FriendlyARM site anymore, which is a shame, but on the upside it should be possible to boot the NEO over USB without an SD card (http://linux-sunxi.org/FEL/USBBoot (http://linux-sunxi.org/FEL/USBBoot)), making it even cheaper to use and easier to deploy. I'm hoping the Pi Zero will also support it properly one day, but it doesn't currently work (https://github.com/raspberrypi/tools/tree/master/usbboot (https://github.com/raspberrypi/tools/tree/master/usbboot)).

Chris.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: mikeselectricstuff on October 15, 2016, 02:41:39 pm
Mike's suggestion explicitly mentioned emulating the ENC28J60 on the FPGA, in which case the pre-existing ENC28J60 drivers would work. However, I'd personally prefer just to use real chips; impersonating chips on an FPGA sounds like quite the rabbit-hole.

Exactly. Why you'd bother to go to that effort just for a project like this is beyond me.
It's a huge job; look at the datasheet and the hundreds of registers and whatnot. What worthwhile benefit do you get for all that effort?
Time and effort much better spent on other aspects.
I was just thinking about cost (and power) if you wanted to do this on a bigger scale. Though it might be easier to write an Ethernet driver that did parallel comms over the IO lines...
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: @rt on October 15, 2016, 02:45:22 pm
Turning the boards 45 degrees would solve the routing problem if the CAD can rotate the footprint at arbitrary angles.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 15, 2016, 03:08:48 pm
I'm trying to make a PCB baseplate for the NanoPi... Ran into something strange.
(http://puu.sh/rJQbi/6a69616f1d.png)

I've got the USB connector there, trying to add holes for it, but look, they overlap. I'm certain my drawing on the left is correct; the hole positions are a direct overlay of the DXF they provide, and all the 2.54mm headers fit just fine.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: BurtyB on October 15, 2016, 03:36:50 pm
I've got the USB connector there, trying to add holes for it, but look, they overlap. I'm certain my drawing on the left is correct; the hole positions are a direct overlay of the DXF they provide, and all the 2.54mm headers fit just fine.

Measuring the board I have it's more like a ~0.8mm diameter hole (~1.14mm diameter ring) on 1.4mm spacing.

Chris.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Stupid Beard on October 15, 2016, 04:20:46 pm
Not sure if anyone's mentioned it yet, but a couple of things you will want to consider are the ability to toggle power, or at the very least reset, each node individually. You may not need to use it very often, but it's valuable when something goes wrong with an update or whatever. I guess this doesn't have to be more than a physical switch if you want to keep it simple; it just has to be accessible and hardware-based, so that you don't have to pull everything apart to fix one troublesome node.

Also, whilst your primary use may be low bandwidth, updating the software on all the nodes is not. You may want to take that into consideration before you lock yourself into slow networking. You may decide you don't care, but I thought I'd mention it just in case.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 15, 2016, 04:22:08 pm
Behold, a little CAD model concept.

What do you think of something like this, Dave?

(http://puu.sh/rJTP3/f05b7551ed.jpg)
(http://puu.sh/rJTYi/362e4ead4e.png)
(http://puu.sh/rJUKs/f3ef2656cb.png)
If you use SMD holders for the 2.54mm pin headers you can fit one on each side of the board, then a card edge connector.

The entire board assembly there is only 14.5mm wide. This could be shrunk further by thinning the cooling blocks, but I don't think it'd be worth it, as you'll need space between the card edge connectors anyway.

You could break out the Ethernet connections to the board easily enough too, as I've flipped them over.

12x 1.2GHz cores, with water blocks (or just passive)

93 x 14.5 x 60mm space.

That's 81 cm³, or 6.7 cm³ per core.


You could make this board double height, fitting 8 x 4 = 32 cores per PCB. The entire base board would fit in a space under 100x100mm, which means you could probably get a 4-layer board made up for $50 for 10 pcs in China.

That brings the estimated price of each board to $85 (8 x $9.99 for the 512MB RAM version), so for $850 + backplane cost you could make a 320-core, 1.2GHz cluster...

... I should probably slow down now before I end up trying to do Dave's project myself.  :-DD :-DD
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: positivenucleus on October 15, 2016, 04:59:04 pm
Regarding going through SPI to Ethernet to get internet access...

You can use the Rx/Tx pins (TTL level, 3.3V) with PPP. On this port you run PPP; not huge speed: 115200 bit/s.

You will need a node that has all the other sides of each serial line, which kind of replaces the Ethernet switch, but I think it would be cheaper.

One simple idea that I can see for it is a bunch of USB serial ports connected to a USB hub, and that connected to one of the boards. Yes, cables (USB-serial <--> hub <--> "master node"), but way cheaper in $ and power, and easy to replace. Maybe you can get the hub chips and create a USB hub on the "motherboard", and have only a single cable to the master node. The Microchip USB251x can have up to 4 ports and < 100mA total current, but I guess there might be others out there ;)

Serial port + PPP: http://elinux.org/RPi_Serial_Connection (http://elinux.org/RPi_Serial_Connection)
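
For what it's worth, here is a rough sketch of the master-node side (device names, IP addresses and pppd options are assumptions for illustration, not a tested configuration): spawn one pppd per USB-serial link, giving each slave its own point-to-point IP pair.

Code:
import glob
import subprocess

# One pppd per USB-serial link: the master takes 10.1.N.1 and the slave
# node gets 10.1.N.2 on each point-to-point link. All names and addresses
# here are illustrative.
procs = []
for n, dev in enumerate(sorted(glob.glob("/dev/ttyUSB*")), start=1):
    cmd = ["pppd", dev, "115200", f"10.1.{n}.1:10.1.{n}.2",
           "noauth", "local", "persist", "nodetach"]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()

With IP forwarding enabled on the master, the slaves can then reach the outside world through its Ethernet port.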


Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: crasbe on October 15, 2016, 05:31:29 pm
You can use the Rx/Tx pins (TTL level, 3.3V) with PPP. On this port you run PPP; not huge speed: 115200 bit/s.

Actually you can go a lot higher with a Raspberry Pi. Let me quote the BCM2835 ARM Peripherals document:
Quote
4) The UART itself has no throughput limitations in fact it can run up to 32 Mega baud. But doing so requires significant CPU involvement as it has shallow FIFOs and no DMA support.
https://www.raspberrypi.org/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf (https://www.raspberrypi.org/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf) Page 10, at the very bottom.

The Allwinner H3 seems to be pretty similar but I wasn't able to find reliable numbers.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: technix on October 15, 2016, 08:10:44 pm
You can use the Rx/Tx pins (TTL level, 3.3V) with PPP. On this port you run PPP; not huge speed: 115200 bit/s.

Actually you can go a lot higher with a Raspberry Pi. Let me quote the BCM2835 ARM Peripherals document:
Quote
4) The UART itself has no throughput limitations in fact it can run up to 32 Mega baud. But doing so requires significant CPU involvement as it has shallow FIFOs and no DMA support.
https://www.raspberrypi.org/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf (https://www.raspberrypi.org/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf) Page 10, at the very bottom.

The Allwinner H3 seems to be pretty similar but I wasn't able to find reliable numbers.
Actually the Allwinner H3 has built-in RGMII for 1Gbps Ethernet. If the required pins are broken out, you may want to forget about SPI-based Ethernet and wire up GbE direct attach connections on the backplane. RGMII interfaces can be connected directly without a PHY in the middle (this kind of direct attach connection is fairly common, especially for faster links, like the SFP direct attach cables used in low-cost 10Gbps Ethernet stacks). So if your Ethernet switch chipset supports it, you can design your backplane using RGMII direct attach to the processors.

In fact, RGMII direct attach and an SPI-based connection could be used in tandem in a cluster like this: the high-throughput, low-latency 1Gbps connection to transfer bulk data across nodes, and the SPI-based connection to transfer out-of-band events and management packets.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: uwe on October 15, 2016, 08:41:17 pm
Hi Dave,

go for the Raspberry Pi Compute Module. There will be a new version by the end of the year ;)

From https://www.raspberrypi.org/blog/compute-module-nec-display-near-you/ (https://www.raspberrypi.org/blog/compute-module-nec-display-near-you/)

Each display has an internal bay which accepts an adapter board loaded with either the existing Compute Module, or the upcoming Compute Module 3, which incorporates the BCM2837 application processor and 1GB of LPDDR2 memory found on the Raspberry Pi 3 Model B. We’re expecting to do a wider release of Compute Module 3 to everybody around the end of the year.

Greetings

Uwe
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: ebclr on October 15, 2016, 08:43:58 pm
Raspberry Pi is a bad choice

Better choice

(https://www.parallella.org/wp-content/uploads/2015/10/15455177266_b8efd1b25a_o-600x400.png)


https://www.parallella.org/ (https://www.parallella.org/)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Wilksey on October 15, 2016, 09:30:05 pm
I looked at using a Pi Zero but couldn't find where to get them; all of the suppliers seem to only allow you to purchase 1.
Where can you buy them (in the UK, where they are made...) in multiple quantities? I think a Farnell link said it was discontinued.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Rasz on October 15, 2016, 10:30:49 pm
 :-//  impressive waste of time

at the end of the day, after investing >100h, you will end up with something equaling the performance of a two-year-old used $40 Intel G3258 on a $25 motherboard with $20 of RAM... _in some specific tasks_... :palm:

BTW the Zeros all have host and _gadget mode_ USB, meaning they can all act as Ethernet over USB. A $5 USB hub is the easiest solution.
Already done to death here: http://www.mycustard.com/ (http://www.mycustard.com/) Edit: you will notice the indicative lack of any performance/usefulness metrics; it's because none exist for such a thing.

$30 off the shelf for a custom 4-board pointless cluster PCB: https://shop.pimoroni.com/products/cluster-hat (https://shop.pimoroni.com/products/cluster-hat)


Still a total waste of time. This project is the EE/node.js-web-developer-trying-real-computing equivalent of mounting a turbo in your mum's Pinto/Morris Mini/Daihatsu Charade/whatever small shitty eco town car.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 16, 2016, 01:58:03 am
Behold, a little CAD model concept.
What do you think of something like this, Dave?
(http://puu.sh/rJTP3/f05b7551ed.jpg)

That's like what I had in mind. Either back-to-back with a center heatsink, or back-to-back on the outside of a long thin machined aluminium brick that becomes the housing as well, i.e. it's like a "blade" cluster module. Ethernet and 12V/24V power at one end (+ maybe a serial monitor), and status LEDs on the other end.

The other option is an extruded aluminium case, as I had in mind before, with rows of vertical boards inside. They'd have to be mounted longitudinally of course, for airflow.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 16, 2016, 02:02:23 am
:-//  impressive waste of time

So is your post.
If you don't have anything positive to contribute then please just ignore it.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Rasz on October 16, 2016, 03:09:12 am
If you don't have anything positive to contribute then please just ignore it.

did you miss the second half?

EDIT: actually I was hoping your angle in the video would be debunking - proving how delusional a project like this is. Most people being suckered into this either don't do any calculations or are incapable of assessing performance, and genuinely expect at the very least workstation performance (if not a mini server).

TLDR:
very best case scenario, one Pee 3 = ~6-8 Pee Zeros, and this is using PERFECTLY scalable cluster-optimized tests.
In the same test, one $40 Intel processor = ~three Pee 3s.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: optoisolated on October 16, 2016, 03:59:01 am
Was very excited to see Dave's video and figured it'd be a good way to learn some more about dealing with more complicated concepts.

I've been reading up on the requirements of a transformerless configuration and it seems rather straightforward, especially considering a 10Mbps connection is more than ample for the stated purpose. Section 4 of this TI guide goes into a lot of detail on the best way to achieve it: http://www.ti.com/lit/an/snla088a/snla088a.pdf (http://www.ti.com/lit/an/snla088a/snla088a.pdf)

It's made even easier when using something like the Microchip KSZ8895MQX integrated 5-port Ethernet switch chip. It's manageable, but by default it functions as a dumb switch, and it even includes termination resistors and a power regulator internally, further simplifying the design requirements.

I started designing a circuit using the ENC28J60 and the KSZ8895MQX to see if I can, and so far I haven't hit any roadblocks. Using the SPI bus as an Ethernet interface - that never even occurred to me!  :clap:

This is one of those projects where it's possible to get the same results in simpler ways, but what's the fun in that?  :-DMM   :D
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 16, 2016, 04:52:39 am
did you miss the second half?

You mean the part with "already done to death", pointless, and "still a total waste of time"  ::)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Rasz on October 16, 2016, 05:07:01 am
I think it was made perfectly clear in the video why this was being done and why it was being done in this way.
yes yes, just because

Performance was specifically stated as NOT the primary aim. So my question is why make performance the primary focus of your criticism? Now that's an epic facepalm moment for you.

then don't call it a supercomputer. Dave mentions it not being as fast as the latest modern Intel CPU, while in fact it won't even beat a 2-year-old budget product.

Learning how to set up ethernet over an SPI bus is generically useful information that may be applicable in other situations.

It's my autistic brain :/  There are only correct or wrong solutions. A correct one is one that optimizes for something. This one seems to be optimizing for clicks; it's neither a supercomputer nor has Pee in it. :(
It's like https://hackaday.io/project/12122-raspberry-pi-project (spoiler: it's a parody)

Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: obiwanjacobi on October 16, 2016, 07:29:57 am
Nice project!

If you make small adapter boards for each Orange Pi you can solve the angle AND the pinout problem. Also, you can do away with the cutout, which makes routing way easier and leaves you more board space. Perhaps even put them a little closer together, because now they can be inserted from the top.

This would allow you to mix any compute module that has your bus-signals somewhere on its header connector, opening the door for future enhancements - when a new, better, faster compute module comes out.

[2c]
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 16, 2016, 09:00:28 am
Behold, a little CAD model concept.
What do you think of something like this, Dave?
(http://puu.sh/rJTP3/f05b7551ed.jpg)

That's like what I had in mind. Either back-to-back with a center heatsink, or back-to-back on the outside of a long thin machined aluminium brick that becomes the housing as well, i.e. it's like a "blade" cluster module. Ethernet and 12V/24V power at one end (+ maybe a serial monitor), and status LEDs on the other end.

The other option is an extruded aluminium case, as I had in mind before, with rows of vertical boards inside. They'd have to be mounted longitudinally of course, for airflow.

From what I've been reading online, the Allwinner H3 throttles itself when it gets too hot, and it has been known to get pretty overheated sometimes, which is why they usually have a heatsink on the bottom.
If you wanted to pack as many of these boards into as small a space as possible, putting the processors opposite each other (inside) and having an aluminum block with a water pass-through in it might be quite suitable; you could then chain them up to another water block on the case and use a little micro-pump to push the water through. The great thing about this is that it will also be silent. I noticed quite a few people have put fans on the heatsinks, and even then I've seen reports of it getting up to 57°C. Here is a pic of the block I concepted:

(http://puu.sh/rKIus/6b2d51a7f6.png)
(http://puu.sh/rKIvz/63f8493a2f.png)
(http://puu.sh/rKINO/c7e4586149.jpg)

It's an L-shaped block of aluminium or copper with 3 holes drilled in it, then partially tapped so that a screw can fit in the horizontal one and two pipe connectors can fit in the other two - cheap and quick to machine. If you were making 10 blades of 4 or 8 boards, I think the benefits outweigh the additional work, as you could put the blades next to each other with only about a 1mm spacer.


Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: SeanB on October 16, 2016, 09:21:59 am
A daughterboard with a small DC-DC converter on it to provide 5V for the Pi, with status LEDs for power, and a small tactile switch on one side to disable the DC-DC converter as a reset function. You can then put the SPI Ethernet chip on there as well, with the termination circuit, and simply have 3 sets of zero-ohm links to disconnect the SPI bus from the 40-pin connector on the bottom. The addition of another (probably 10-pin) connector on the bottom will allow you to have the 12V power rail (lower draw on the main board), the 4 differential data paths and 5 ground pins to supply power. This leaves the 40-pin connector free and standard (with the 3 links if you need SPI on there; otherwise you just leave off the 3 jumpers and do not have the stubs to cause reflections) for further use if needed.

The main board can then be spaced so the daughterboards channel air flow from a fan through the slots, allowing the chips to have small stick-on heatsinks to cool them, driven by a single 120mm fan on one side of the case and a vent on the other side.

Get the board dimensions right and you can have 3 different boards with identical placement of the main 40-pin and 10-pin connectors on the bottom, but with each variant able to accommodate one of the 3 Pi variants described, as they are all electrically the same, just with a different pin position - or design a 4-layer board that can fit any of the 3 if you solder in the right socket for the board you want to use.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rs20 on October 16, 2016, 10:42:01 am
CM800: What is the most elegant way to re-fill the third hole? Any method more elegant than a bolt+O-ring?
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 16, 2016, 11:40:36 am
CM800: What is the most elegant way to re-fill the third hole? Any method more elegant than a bolt+O-ring?

That's generally how most people do it. You could use a rubber plug and a grub screw, or a grub screw and a dab of epoxy over the end of it.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: ziggyfish on October 16, 2016, 12:04:58 pm
In terms of networking, you could subnet the group of Pis.

For example, if you can fit 30 on a single motherboard, then set the subnet mask to 255.255.255.224, so that 192.168.0.1 to 192.168.0.30 are on one network and 192.168.0.33 to 192.168.0.62 are on another network (with the default gateway being the first address on that switch, for example 192.168.0.1 and 192.168.0.33, etc.).

Then configure the routing tables on the first device on each board.
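
As a quick sketch of that arithmetic (board count and base range as above), Python's ipaddress module will carve the /24 into /27s for you:

Code:
import ipaddress

# Split 192.168.0.0/24 into /27 subnets, one per motherboard of up to 30 nodes.
block = ipaddress.ip_network("192.168.0.0/24")
for board, subnet in enumerate(block.subnets(new_prefix=27), start=1):
    hosts = list(subnet.hosts())
    print(f"board {board}: {subnet}  gateway {hosts[0]}  "
          f"nodes {hosts[1]}-{hosts[-1]}")

The first line printed is board 1: 192.168.0.0/27 with gateway 192.168.0.1 and nodes 192.168.0.2-192.168.0.30, matching the scheme above.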
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: SeanB on October 16, 2016, 12:11:23 pm
If it is a soft aluminium alloy that you machined, the simplest way is to make a solid metal slug that is 10-20 micrometres larger in diameter than the hole and press-fit it in there, or make it a snug fit and use some thread sealer on it. Not much pressure, so the press-fit method will be easier, but you need to either have the initial hole a very controlled diameter or ream it out to a controlled diameter.

I've done that with a cooling block, though there it needed a more serpentine cooling loop, so there were multiple long holes through the block with press-fitted plugs on the ends, and the internal ones were end-drilled to intersect multiple galleries, with the unwanted paths (to force the serpentine flow) filled with press-fitted plugs. Another block just used a long machine tap to thread the entire cross channel, and then simply had threaded plugs and sealer put in at the required points to block the passages, with the outer hole being plugged as well before being milled to final dimension, so there are almost no visible marks of the plugs.

Another method, if the block allows it, is to drill the pocket for the pipe fittings and then angle-drill 2 intersecting smaller-diameter holes for the fluid to travel through.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: EEVblog on October 16, 2016, 12:20:53 pm
It's my autistic brain :/  There are only correct or wrong solutions.

Then you have nothing to contribute to this thread. Please do us a favor and ignore it.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: chris_leyson on October 16, 2016, 12:23:44 pm
Hi Dave, interesting project. I've used the METZ AJP92A8813 pcb-mount connector for GigE 1000BASE-T in the past and they work OK; Element 14 stock them. I've also been using 0.5mm pitch flat flex cable as co-planar waveguide for GigE ethernet with no problems. I tried to measure the differential crosstalk in 50mm and 200mm long flat flex cables at 125MHz, but my home-brewed test jig was limited to about -50dB crosstalk. The connecting cable made very little difference (<0.25dB), so I guess crosstalk in a 200mm long flat flex cable is perhaps better than -65dB or -70dB. Shouldn't be a problem with 10/100 ethernet, and you might even be able to get away with ribbon cable.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: frank_gamefreak on October 16, 2016, 12:34:37 pm
Hello Community,
first of all, my English is not the best, so please excuse that I haven't read the complete thread. Maybe someone has already mentioned my ideas, but I would like to give my opinion on the project.
 
I like the idea of stacking Pis together into a compact cluster, and I wish we could make a Kickstarter out of it.

My first suggestion for the backplane: could we make a "master bus"? I'm thinking of a pin that is low if no Pi is alive, and which the first Pi pulls high. That way the first Pi knows it is the master and starts a DHCP server. The other Pis can also watch this pin and recognise if the "master Pi" goes offline, so that the next one can take over and start a DHCP server.
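
Roughly what I mean, as a Python sketch (the pin number, the RPi.GPIO-style calls and dnsmasq are just illustrative assumptions, and a real design would need proper bus arbitration):

[code]
import random
import subprocess
import time

import RPi.GPIO as GPIO  # assuming an RPi.GPIO-compatible library

MASTER_PIN = 17          # illustrative backplane "master bus" pin

GPIO.setmode(GPIO.BCM)
# Pull-down means a dead master releases the bus and it reads low again.
GPIO.setup(MASTER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def become_master():
    # Drive the bus high so the other boards see a live master,
    # then start a DHCP server (dnsmasq, as an example).
    GPIO.setup(MASTER_PIN, GPIO.OUT, initial=GPIO.HIGH)
    subprocess.run(["systemctl", "start", "dnsmasq"], check=True)

while True:
    if GPIO.input(MASTER_PIN) == GPIO.LOW:
        # No master alive. Back off for a random delay so two boards
        # are unlikely to grab the bus at the same instant.
        time.sleep(random.uniform(0.1, 1.0))
        if GPIO.input(MASTER_PIN) == GPIO.LOW:
            become_master()
            break
    time.sleep(1.0)      # a master is alive; keep watching the pin
[/code]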

My second suggestion would be a switch to change the pin-out between the Raspberry and Orange Pi. I think someone like you will have an idea of how to make this possible.  :-+

And my last one for now is to arrange the connectors differently from the idea mentioned in the video. I think it is possible to place the connectors in the free space. OK, it would have to be a double-sided board for this, but I think it should be possible. I can't make a good picture of this, but think of a straight connector with one row of pins on each side of the board, and the female part of the connector in the space for the Pi.

Please think about it and tell me your opinion about it.
Thank you.
 


Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rs20 on October 16, 2016, 01:14:32 pm
My first suggestion for the backplane: could we make a "master bus"? I'm thinking of a pin that is low if no Pi is alive, and which the first Pi pulls high. That way the first Pi knows it is the master and starts a DHCP server. The other Pis can also watch this pin and recognise if the "master Pi" goes offline, so that the next one can take over and start a DHCP server.

Nice idea. Earlier I suggested using a series of pins to allow the Orange Pi to detect which slot it was in, and choose a static IP accordingly. This has the advantage that all of them would have a well-defined IP address.
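
For example, something like this (a rough Python sketch; the slot-ID pin choices, the RPi.GPIO-style calls and the address base are illustrative assumptions):

[code]
import ipaddress

import RPi.GPIO as GPIO  # assuming an RPi.GPIO-compatible library

# Illustrative slot-ID scheme: each backplane slot hard-wires a
# different 4-bit pattern (pin grounded = 1) onto these GPIOs.
SLOT_ID_PINS = [5, 6, 13, 19]
BASE = ipaddress.ip_address("192.168.0.10")

GPIO.setmode(GPIO.BCM)
for pin in SLOT_ID_PINS:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

slot = 0
for bit, pin in enumerate(SLOT_ID_PINS):
    if GPIO.input(pin) == GPIO.LOW:  # slot wiring grounds this pin
        slot |= 1 << bit

# Each slot gets a well-defined static address derived from its ID.
print(f"slot {slot}: static IP {BASE + slot}")
[/code]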

My second suggestion would be a switch to change the pin-out between the Raspberry and Orange Pi. I think someone like you will have an idea of how to make this possible.  :-+

Earlier suggestions included using little adapters with a straight female pin header + a card edge connector; one advantage of this approach is that different adapters could support different boards.

And my last one for now is to arrange the connectors differently from the idea mentioned in the video. I think it is possible to place the connectors in the free space. OK, it would have to be a double-sided board for this, but I think it should be possible. I can't make a good picture of this, but think of a straight connector with one row of pins on each side of the board, and the female part of the connector in the space for the Pi.

I couldn't follow what you were trying to express here?
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: frank_gamefreak on October 16, 2016, 02:10:13 pm
My first suggestion for the backplane: could we make a "master bus"? I'm thinking of a pin that is low if no Pi is alive, and which the first Pi pulls high. That way the first Pi knows it is the master and starts a DHCP server. The other Pis can also watch this pin and recognise if the "master Pi" goes offline, so that the next one can take over and start a DHCP server.

Nice idea. Earlier I suggested using a series of pins to allow the Orange Pi to detect which slot it was in, and choose a static IP accordingly. This has the advantage that all of them would have a well-defined IP address.
I see a problem there with multiple backplanes in a stack. The advantage of my idea is that you only connect the master bus through to the next board, and all Pis know there is a DHCP server.


And my last one for now is to arrange the connectors differently from the idea mentioned in the video. I think it is possible to place the connectors in the free space. OK, it would have to be a double-sided board for this, but I think it should be possible. I can't make a good picture of this, but think of a straight connector with one row of pins on each side of the board, and the female part of the connector in the space for the Pi.

I couldn't follow what you were trying to express here?
I hope this "CAD" helps you to understand.
(http://bilder-upload.3server.de/thumb.php?image=1476626956_eevblog.png) (http://bilder-upload.3server.de/index.php?info=1476626956_eevblog.png)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rob77 on October 16, 2016, 02:46:52 pm
for those who think this project is a waste of time... no it's not... it's a fun project which will lead to a usable product at the end.
if you go for the highly optimized off-the-shelf solution, then where is the fun and learning?
I bet my bottom dollar this very thread has already helped to share a lot of ideas and knowledge, even though the project is still in its infancy ;)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: doobedoobedo on October 16, 2016, 04:41:48 pm
Performance was specifically stated as NOT the primary aim. So my question is why make performance the primary focus of your criticism? Now that's an epic facepalm moment for you.

then don't call it a supercomputer. Dave mentions it not being as fast as the latest modern Intel CPU, while in fact it won't even beat a 2-year-old budget product.

Think of it as a model of a supercomputer. It uses the same cluster architecture of many actual supercomputers, but at a fraction of the cost and power consumption.

I hope Dave finds time to build it to scale and paint it :).
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Fungus on October 16, 2016, 05:39:26 pm
Think of it as a model of a supercomputer. It uses the same cluster architecture of many actual supercomputers, but at a fraction of the cost and power consumption.
It might actually be better in terms of bang per buck.

Yes, an i7 is probably faster than this but an i7 needs external RAM, a fancy motherboard, etc. It all costs money. For the price of an i7 plus support hardware you could buy a lot of Pis.

It will be interesting to see the final numbers if/when this thing gets built.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: cv007 on October 16, 2016, 05:51:26 pm
My 2 cents-

You already have the orange pi's,  I would think step 1 would be to just mount them together via standoffs (or not at all), hook up power, connect to a switch via ethernet, get them booting, working together, etc.- then see what you have. If all works as planned, go to step 2. Step 1 is 10 times easier than step 2, and you most likely have everything needed on hand to do it. (You could make a video about it- would give viewers an idea of what the end result is all about, and if step 2 never happens for whatever reason at least you will have had something to show).

You will find out what is important, what is not, you will get actual power measurements, heat measurements, what works, what doesn't. You may discover many things that could be useful to know. Let the set of pi's run for a week and see if they are what you expect- maybe they are flaky and full of little problems, or maybe they are great- you will at least get a good idea before putting in a lot of time on a board design.

Designing a board before you know you have a working 'circuit' seems like a 'trap for young players'  :)

Just a thought. (There is still 2 cents left, so must have been free advice).

Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Fungus on October 16, 2016, 06:13:32 pm
You already have the orange pi's,  I would think step 1 would be to just mount them together via standoffs (or not at all), hook up power, connect to a switch via ethernet, get them booting, working together, etc.

I'm pretty sure they'll boot up and run BOINC - Dave already did this in his review of the OrangePi IIRC.

It might be a good idea to get a couple of ENC28J60 (http://www.ebay.com/sch/i.html?_nkw=enc28j60+module) modules off eBay and try that part of it though, before making a PCB.  :popcorn:
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: nctnico on October 16, 2016, 06:34:30 pm
How about this: create a full size ISA/PCI style board which has an onboard ethernet switch and two external network ports which can be daisy chained. The pies (mounted flat) can be connected to the onboard switch using short RJ45 cables soldered into the board. Ditto for the power. I think each board can hold 8 to 10 Orange Pi zeroes. This would address the cabling problems Dave is trying to avoid. Also the board would fit in a standard PC casing (which provides features like forced air cooling and a power supply) using a standard PCI or ISA backplane (cheap!).
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Frant on October 16, 2016, 06:38:56 pm
Before the project becomes too complicated and overengineered, it may be a good idea to take a breath and think about the actual goal. For example, I would probably choose to make the simplest possible prototype (proof of concept), just to play with it and see what can be expected from such a system in terms of its computing power and practical usability. Significant effort to design professional-grade hardware only makes sense if the prototype shows that the expected functionality and/or performance can be achieved. A decent off-the-shelf power supply and a 16-port Ethernet switch will be sufficient for the start. The software aspect of the project can be much more challenging than it seems at this point.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Fungus on October 16, 2016, 06:49:09 pm
Before the project becomes too complicated and overengineered

The engineering is the project.

Significant effort to design professional-grade hardware only makes sense if the prototype shows that the expected functionality and/or performance can be achieved.

The performance really doesn't matter. It's obvious it's a complete waste of time if "ultimate performance" is your goal. A MicroATX PC with an i7 in it will be much faster and easier to build.

The "design effort" is what makes it worthwhile to Dave.

(I'm putting words into Dave's mouth as I understand this. Correct me if I'm wrong...)

Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: nctnico on October 16, 2016, 06:57:20 pm
I understand the same thing. According to the first few minutes of the video the goal is to come up with a solution which doesn't need a lot of wiring and external ethernet switches as shown in the systems which already exist. IMHO going the SPI to ethernet route is not the best one because it takes a lot of effort to build, create drivers for and it will still be slow.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Fungus on October 16, 2016, 07:04:28 pm
IMHO going the SPI to ethernet route is not the best one because it takes a lot of effort to build, create drivers for and it will still be slow.

The video clearly says:
a) The OrangePi kernel already has a driver for those particular SPI-to-Ethernet chips - just edit a text file and enable it
b) Network speed isn't important.

(start at 11:40 in the video if you missed it)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: cv007 on October 16, 2016, 07:16:37 pm
Quote
I'm pretty sure they'll boot up and run BOINC - Dave already did this in his review of the OrangePi IIRC.
Booting up and running is one thing, running continuously is possibly another. They seem to require heatsinks (according to one of his reviews), so how big is big enough?  Will the required heatsink size affect the physical spacing between boards?  I'm sure a lot could be learned by getting a flock of Pis set up and working as intended before finalizing any board design, which is my only point.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Fungus on October 16, 2016, 07:32:49 pm
Quote
I'm pretty sure they'll boot up and run BOINC - Dave already did this in his review of the OrangePi IIRC.
Booting up and running is one thing, running continuously is possibly another. They seem to require heatsinks (according to one of his reviews), so how big is big enough? 

I doubt it will be very much if they sell them without heatsinks. If it was a big problem they'd have them welded onto the chips.

But that's just speculation. We need engineering data and I'm sure there's a video on this topic in the pipeline.  :popcorn:
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: nctnico on October 16, 2016, 09:36:09 pm
They seem to require heatsinks (according to one of his reviews), so how big is big enough? 
I doubt it will be very much if they sell them without heatsinks. If it was a big problem they'd have them welded onto the chips.
Don't be so sure about it. I'm using a 'SoC on a module' for a commercial project and it needs a huge friggin heatsink, which is sold separately, to keep the module within specs, and it is mounted in a casing with plenty of natural convection.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 16, 2016, 10:23:28 pm
They seem to require heatsinks (according to one of his reviews), so how big is big enough? 
I doubt it will be very much if they sell them without heatsinks. If it was a big problem they'd have them welded onto the chips.
Don't be so sure about it. I'm using a 'SoC on a module' for a commercial project and it needs a huge friggin heatsink, which is sold separately, to keep the module within specs, and it is mounted in a casing with plenty of natural convection.

Exactly right. These modules are designed to be as flexible as possible for as many customers as possible. Some customers can't use fans due to noise requirements. Others can't have a heatsink because it takes up too much space, or there won't be airflow inside the case, so it has to be thermally coupled to the case itself. Others may be running in a very hot environment where the only way they could possibly cool it (along with other components) would be water cooling (say the unit is out in the desert in the sun, where some electronics are cooled via a large heat-exchange unit).
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: jolshefsky on October 17, 2016, 12:34:41 am
(http://puu.sh/rJTYi/362e4ead4e.png)

I think this has the same-ish problem as the flat-packed Pies: lots of wasted board space under them.

My first thought (echoed early on by several others) was to make a simple board that converted the dual-row pin header to a card-edge connector. These adapter boards would be super simple and small: barely a dual-row header and one side with card-edge fingers. I very much like the idea of going with a very common connector (e.g. PCI, even though it's more pins than necessary). For that matter, it wouldn't be much more effort to design the board with a second set of holes for another female header to handle the other rotation of the board (e.g. Raspberry vs. Orange).

The question is, should you make the adapter board include things like LEDs, pass-through pins, a local LDO regulator, or the SPI-Ethernet adapter?
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: mariush on October 17, 2016, 12:53:20 am
Another idea just came to me  :D

Use four bars to space the Pis evenly. Fish tank, mineral oil, put the Pis all in the oil, hanging down with the network, power and IO cables all going up to a top board. Add a couple of fans to circulate the mineral oil around. Done.

Since they run on low voltages and relatively low currents, you probably don't have to insulate each CPU (with regular computers, some people had to pour insulating material around the socket because they got shorts or arcs between contacts near the CPU sockets).
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Frant on October 17, 2016, 01:12:47 am
IMHO going the SPI to ethernet route is not the best one because it takes a lot of effort to build, create drivers for and it will still be slow.

Although a software driver for the SPI/Ethernet chip exists (Dave mentioned it in the video), I would rather try to stick with Ethernet. The idea to use right-angle pcb-mount Ethernet plugs was a good one and it remains unclear to me why it was dismissed.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: TheRevva on October 17, 2016, 01:14:59 am
All that work to achieve the ethernet I/O?
I'd be SORELY tempted to outlay a HUGE extra US$2.01 per board and use the Orange Pi Lite which has onboard WiFi!
Whether I would actually USE the supplied WiFi antennae or not is another question.
I'm willing to bet they could all 'cross-communicate' without ANY antenna being connected if they were all co-resident in a single enclosure with just a 50ohm resistor soldered across the u.fl connector?
The FIRST 'PI' within any such 'array' / 'cluster' could then serve as a wired-to-wireless protocol converter as well as a basic local WiFi 'Access Point'
It would probably work out cheaper overall and significantly reduce the internal wiring complexity.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: MisterDiodes on October 17, 2016, 01:54:14 am
A couple "Head's Up" suggestions for Dave:

1.  Check out Samtec for a right angle female header socket connector - part of their SSQ line I believe.  The female header clips on to your Pi header, then the square leads come out and turn right angle, and you can get these in a variety of lengths - say .3" or .5" long.  No need to remove existing 40 pin header on the Pi, and you can use any variety of vertical-entry sockets on the backplane - and that makes it a breeze to layout without slots in the way.  For a couple bucks you turn your Pi into a module than can stand on edge and plug into your backplane, no slots required. 

2.  Watch out connecting the Ethernet PHY ports together without magnetics!  Ask me how I know.  It works sometimes, but not always - just depends on what chips you'll use.  What happens is the current drivers get confused if they are trying to negotiate auto-MDI crossover when first making the connection - and if you're using an un-managed port switch chip that can be an issue.  Those PHY port drivers are designed from ground zero to see the mags attached.  You'll also see weird noise pickup effects on a crowded board if you're not careful.

I suggest you lay in the pads for the 1:1 magnetics ALONG WITH your resistor pads on your proto board, and then you can easily experiment to see what does and doesn't work.  You definitely want to have your magnetics in place for the first few tests, and then you can always remove them to start playing with direct-connect resistors.

Remember those current driver resistors will have to be changed for the direct-connect situation.



Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: technix on October 17, 2016, 02:21:17 am
A couple "Head's Up" suggestions for Dave:

1.  Check out Samtec for a right angle female header socket connector - part of their SSQ line I believe.  The female header clips on to your Pi header, then the square leads come out and turn right angle, and you can get these in a variety of lengths - say .3" or .5" long.  No need to remove existing 40 pin header on the Pi, and you can use any variety of vertical-entry sockets on the backplane - and that makes it a breeze to layout without slots in the way.  For a couple bucks you turn your Pi into a module than can stand on edge and plug into your backplane, no slots required. 

2.  Watch out connecting the Ethernet PHY ports together without magnetics!  Ask me how I know.  It works sometimes, but not always - just depends on what chips you'll use.  What happens is the current drivers get confused if they are trying to negotiate auto-MDI crossover when first making the connection - and if you're using an un-managed port switch chip that can be an issue.  Those PHY port drivers are designed from ground zero to see the mags attached.  You'll also see weird noise pickup effects on a crowded board if you're not careful.

I suggest you lay in the pads for the 1:1 magnetics ALONG WITH your resistor pads on your proto board, and then you can easily experiment to see what does and doesn't work.  You definitely want to have your magnetics in place for the first few tests, and then you can always remove them to start playing with direct-connect resistors.

Remember those current driver resistors will have to be changed for the direct-connect situation.
I think that if you can bypass PHY entirely you may be able to connect the RGMII interfaces together directly.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: optoisolated on October 17, 2016, 03:24:23 am
Quote
2.  Watch out connecting the Ethernet PHY ports together without magnetics!  Ask me how I know.  It works sometimes, but not always - just depends on what chips you'll use.
One of the reasons I liked the KSZ8895MQX chip is that you could configure the important registers with pull-ups. The MDI/MDIX Negotiation just happens to be switchable by pulling up pin 1. Winning! It's also programmable, but for the project Dave had in mind, that wouldn't be worth the effort. Not bad for 8 bucks.

Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: kyndal on October 17, 2016, 03:30:44 am
Just use a 3-pin-high connector; block/disregard the bottom row of pins.

Kyndal
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Fungus on October 17, 2016, 05:33:38 am
They seem to require heatsinks (according to one of his reviews), so how big is big enough? 
I doubt it will be very much if they sell them without heatsinks.
Don't be so sure about it. I'm using a 'SoC on a module' for a commercial project and it needs a huge friggin heatsink, which is sold separately, to keep the module within specs, and it is mounted in a casing with plenty of natural convection.

I re-checked the previous video where Dave ran one of these at 100%. It got up to 90 degrees but it didn't die.

I guess the size of the heatsink will come down to airflow in the case.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: CM800 on October 17, 2016, 06:53:09 am
(http://puu.sh/rJTYi/362e4ead4e.png)

I think this has the same-ish problem as the flat-packed Pies: lots of wasted board space under them.

My first thought (echoed early on by several others) was to make a simple board that converted the dual-row pin header to a card-edge connector. These adapter boards would be super simple and small: barely a dual-row header and one side with card-edge fingers. I very much like the idea of going with a very common connector (e.g. PCI, even though it's more pins than necessary). For that matter, it wouldn't be much more effort to design the board with a second set of holes for another female header to handle the other rotation of the board (e.g. Raspberry vs. Orange).

The question is, should you make the adapter board include things like LEDs, pass-through pins, a local LDO regulator, or the SPI-Ethernet adapter?

I'd beg to differ there. Wasted board space is not the biggest issue; cores per cm^3 is more important to think about. PCBs are cheap! Each of these boards could even possibly be done on two layers, with four or even eight of the Pis on them (one on each side, using SMT header connectors).

The heatsink would be in the middle; each board could then be placed on a backplane directly next to the others. It would be the most compact solution we've discussed here, no doubt.



All that work to achieve the ethernet I/O?
I'd be SORELY tempted to outlay a HUGE extra US$2.01 per board and use the Orange Pi Lite which has onboard WiFi!
Whether I would actually USE the supplied WiFi antennae or not is another question.
I'm willing to bet they could all 'cross-communicate' without ANY antenna being connected if they were all co-resident in a single enclosure with just a 50ohm resistor soldered across the u.fl connector?
The FIRST 'PI' within any such 'array' / 'cluster' could then serve as a wired-to-wireless protocol converter as well as a basic local WiFi 'Access Point'
It would probably work out cheaper overall and significantly reduce the internal wiring complexity.

A few things to think about there: you're creating lots of traffic in the WiFi spectrum, which isn't infinite, believe it or not. If you set up a lot of networked devices sending data constantly over WiFi, you will find all the wireless networks in the vicinity slow down (you're broadcasting on all the available channels in the local air). I might be incorrect here, but from what I've heard it's a real effect.

The other thing is that it holds no interest for most of us, I'd imagine; if you're going to do it with WiFi, you might as well do it wired. I feel the main goal (or at least what I, and others, are getting out of this project) is learning electrical design knowledge, not how to turn on a load of pies and then configure them to communicate with each other through WiFi. Most, if not all, of us here could do that within a few hours of first getting our hands on them and reading up on Google. There is much more to learn developing a compact baseboard.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: tszaboo on October 17, 2016, 09:50:20 am
What I would do:
Make a Eurocard PCB with the DIN41612 connector at the end (let's call it the adapter board). Design the board so it can take 2 Orange Pis, 5-ish Raspberry Pi Zeros, 4 CHIPs, or your choice of computing. The Orange Pi is mounted upside down on the adapter board; the RPi Zero has the connector soldered on the bottom and the USB pins tapped with pogo pins. Route 24V, maybe 12V, to the adapter board and have an onboard POL converter. Design a backplane that routes power + ethernet to each board. Each adapter board has an ethernet switch, if necessary. Maybe it even has hot-swap control. The backplane has 21 connectors (AFAIK that is standard). I reckon you can fit 42 RPis or 105 RPi Zeros into a 3U rack. All mechanical parts are off the shelf. Cooling is vertical. Hot-plug the cards if you feel like it. Mix different cards if you feel like it.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: MisterDiodes on October 17, 2016, 10:27:41 am
I think that if you can bypass PHY entirely you may be able to connect the RGMII interfaces together directly.

RGMII is generally not thought of as being perfectly symmetrical in both the hardware and protocol sense - although I know some Micrel (et al) PHY chips can be connected back to back like that if you're building a repeater or media converter.  Usually that isn't the case for a generic connection - normally RGMII is meant for a MAC interfacing to a PHY... not MAC to MAC.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Towger on October 17, 2016, 05:25:58 pm
A few things to think about there: you're creating lots of traffic in the WiFi spectrum, which isn't infinite, believe it or not. If you set up a lot of networked devices sending data constantly over WiFi, you will find all the wireless networks in the vicinity slow down.

I also suggested WiFi, but using the small (eBay type) dongles which fit into a USB socket.  Bandwidth should not be a problem, except if all machines start BOINC at the same time.  After BOINC downloads a block of data, it can take days (depending on CPU power) before it finishes and needs more.

In saying that, I like the mineral oil cooling idea: totally impractical, but it leads to a more interesting video (more views).  I don't think 2.4GHz will have much range in oil.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: nctnico on October 17, 2016, 05:35:09 pm
A few things to think about there: you're creating lots of traffic in the WiFi spectrum, which isn't infinite, believe it or not. If you set up a lot of networked devices sending data constantly over WiFi, you will find all the wireless networks in the vicinity slow down.
I also suggested WiFi, but using the small (eBay type) dongles which fit into a USB socket.  Bandwidth should not be a problem, except if all machines start BOINC at the same time.  After BOINC downloads a block of data, it can take days (depending on CPU power) before it finishes and needs more.
If you want to run tasks in parallel they are likely to need data pumped in and out, so network traffic could be massive. I don't know the Allwinner's performance when it comes to compressing video, but if it is any good Dave could use the cluster to compress his Youtube videos.

edit: typo
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: KenGaler on October 17, 2016, 05:55:37 pm
I agree with the suggestion of a 90-degree adapter board and getting rid of the slots.  They can be designed on the same layout as the motherboard and V-scored.  This way they are essentially free.

Ken
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: onarch on October 17, 2016, 06:50:02 pm
How about using the microUSB OTG to connect all the boards together?

You can get microUSB plug connectors for PCB mount, like the ZX80-B-5SA from Hirose, see [3]. (Didn't know these existed before I checked.)

Use one board as the master (USB host) and connect the other ones (USB devices) to it using a USB hub chip [2]. USB OTG can act as either a device or a host depending on the state of the ID pin. In Linux you can then use the USB Ethernet gadget on the devices to create an Ethernet connection to the master (host).  See: http://linux-sunxi.org/USB_Gadget (http://linux-sunxi.org/USB_Gadget)
The master can then set up a network bridge between its external Ethernet interface and the USB Ethernet connections from the devices, and will thus act like a switch.
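
The bridging step on the master might look roughly like this (a sketch driving iproute2 from Python; the interface names eth0/usb0/usb1 are assumptions, and the Ethernet-gadget interfaces must already exist on the devices):

[code]
import subprocess

def sh(*args):
    """Run a command, raising if it fails."""
    subprocess.run(args, check=True)

# Create a bridge and enslave the external NIC plus each
# USB-gadget interface (one per downstream board).
sh("ip", "link", "add", "br0", "type", "bridge")
sh("ip", "link", "set", "br0", "up")

for iface in ["eth0", "usb0", "usb1"]:  # names are illustrative
    sh("ip", "link", "set", iface, "master", "br0")
    sh("ip", "link", "set", iface, "up")
[/code]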

8x NanoPi           $7.99      $63.92 [1]
8x ZX80-B-5SA       $1.60      $14.40 [3]
1x TUSB2077         $4.79      $ 4.79 [2]
+ PSU and board

Advantages:

Disadvantages:

[1] http://www.friendlyarm.com/index.php?route=product/product&product_id=132 (http://www.friendlyarm.com/index.php?route=product/product&product_id=132)
[2] http://www.digikey.com/product-detail/en/texas-instruments/TUSB2077APTR/296-37871-1-ND/4878718 (http://www.digikey.com/product-detail/en/texas-instruments/TUSB2077APTR/296-37871-1-ND/4878718)
[3] http://www.digikey.com/product-detail/en/hirose-electric-co-ltd/ZX80-B-5SA/H11612-ND/1963857 (http://www.digikey.com/product-detail/en/hirose-electric-co-ltd/ZX80-B-5SA/H11612-ND/1963857)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: nctnico on October 17, 2016, 06:55:40 pm
I think that if you can bypass PHY entirely you may be able to connect the RGMII interfaces together directly.
RGMII is generally not thought of being perfectly symmetrical in both the hardware and protocol sense - although I know some Micrel (et al) PHY chips can be connected back to back like that if you're building a repeater or media converter.  Usually that isn't the case for a generic connection - Normally RGMII is meant for a MAC interfacing to PHY... Not MAC to MAC.
Connecting RGMII back to back should be possible, either with an external clock or with one interface as master, but the point is rather moot. Routing ethernet is going to be much easier. Because of the short distances on a board you don't need matched-impedance traces, so as long as they are routed as pairs away from noise everything is fine (100Mbit ethernet has frequency components up to 100MHz, so traces up to about 40cm long don't behave as transmission lines yet).
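
To put a rough number on that claim (my own back-of-envelope figures, assuming signals propagate at about half the speed of light in FR4):

\[
\lambda = \frac{v}{f} \approx \frac{1.5 \times 10^{8}\,\mathrm{m/s}}{100\,\mathrm{MHz}} = 1.5\,\mathrm{m},
\qquad
\frac{\lambda}{10} = 15\,\mathrm{cm}, \quad \frac{\lambda}{4} \approx 37\,\mathrm{cm}.
\]

Whether you call a trace "electrically short" below λ/10 (conservative) or λ/4 (optimistic) is a judgment call; the 40cm figure above sits at the optimistic end of that range.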
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: claus on October 17, 2016, 07:21:36 pm
I think I understood that performance is not so important here, but a "supercomputer" (even if it's a pocket-size supercomputer) is, per Wikipedia: "A supercomputer is a computer with a high level of computational capacity compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS)." Bear in mind that a "general-purpose computer" in this context is a Pi.

So essentially you want to build something faster by combining various CPUs. The problem here is that if communications are slow, the resulting computer might be slower, and this would just be a waste of energy. Those Pis are not so terribly bad at crunching numbers, especially the 4-core CPUs; see http://www.roylongbottom.org.uk/Raspberry%20Pi%20Multithreading%20Benchmarks.htm, (http://www.roylongbottom.org.uk/Raspberry%20Pi%20Multithreading%20Benchmarks.htm,) they get around 2 GFLOPS for the multithreaded version.

So if you want the final thing to be faster than a single Pi (on an application that is not embarrassingly parallel), you need good (fast) communication; otherwise a multithreaded program might run slower on 4 Pis than on 2. The Orange Pi has a gigabit Ethernet port, which should give a reasonable connection for parallel computation; I would use it with a gigabit switch for a model "supercomputer". If all applications will be embarrassingly parallel, the SPI/Ethernet approach might make sense, but only as an application-specific "supercomputer".
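
The standard way to quantify that speedup argument is Amdahl's law, here extended with a communication-overhead term (this formulation is mine, not from the benchmarks linked above):

\[
S(N) = \frac{1}{(1-p) + \dfrac{p}{N} + c(N)}
\]

where \(p\) is the parallelisable fraction of the work and \(c(N)\) is the per-node communication cost. On a fast interconnect \(c(N)\) is negligible; on a slow one it grows with \(N\), which is exactly how 4 Pis can end up slower than 2.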

If, on the other hand, the design of a sexy SPI/Ethernet interface without cables for various Pis is the main point, the Pi Zero would be the better candidate, imho, as each CPU is much slower and so are the communication requirements. You might get some speedup adding Pi Zeros and thus at least get a "zero-Supercomputer" ;)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: free_electron on October 17, 2016, 07:51:55 pm
i have one question : what are you going to run on it ? not operating system , what APPLICATION are you going to run on it ?
What ? why ? and is it made for a cluster setup ?
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rob77 on October 17, 2016, 09:05:18 pm
i have one question : what are you going to run on it ? not operating system , what APPLICATION are you going to run on it ?
What ? why ? and is it made for a cluster setup ?

I assume it will be a BOINC client on every board. It's not a computing cluster... it's a cluster of independent single board computers ;)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Poolkeeper on October 17, 2016, 09:10:03 pm
Interesting project to learn from.

Found this about running an Ethernet connection without magnetics:

http://www.ti.com/lit/an/snla088a/snla088a.pdf (http://www.ti.com/lit/an/snla088a/snla088a.pdf)

Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rob77 on October 18, 2016, 12:24:17 am
Of course it's not a real supercomputer... a crucial component of a real supercomputer is RDMA (remote DMA). As far as I know only 2 technologies provide native RDMA, InfiniBand and 10Gb Ethernet, and neither of those is present. (Of course you could write a software RDMA implementation on top of any connection... but that's not the real thing ;) )
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Brumby on October 18, 2016, 01:10:43 am
It may not be a 'supercomputer' - but I think it's a super computer idea.

Engineering solutions that deliver is what enthuses me.  Who cares if performance doesn't stack up against an Intel flagship CPU - it was never meant to.  Getting the thing to work as intended is the magic.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: technix on October 19, 2016, 01:40:21 am
I think that if you can bypass PHY entirely you may be able to connect the RGMII interfaces together directly.
RGMII is generally not thought of as being perfectly symmetrical in both the hardware and protocol sense - although I know some Micrel (et al) PHY chips can be connected back to back like that if you're building a repeater or media converter.  Usually that isn't the case for a generic connection - normally RGMII is meant for a MAC interfacing to a PHY... not MAC to MAC.
Connecting RGMII back to back should be possible, either with an external clock or with one interface as master, but the point is rather moot. Routing ethernet is going to be much easier. Because of the short distances on a board you don't need matched-impedance traces, so as long as they are routed as pairs away from noise everything is fine (100Mbit ethernet has frequency components up to 100MHz, so traces up to about 40cm long don't behave as transmission lines yet).
RGMII carries a 1Gbps link at 125MHz DDR, so not that much more difficult to route than 100BASE-TX. And those are single-ended signals so only length matching is required for longer runs.

Twisted-pair PHYs have much higher latency than other PHYs due to their line encoding. This latency is not significant for 100Mbit Ethernet, but when you are loading down a 1Gbps connection it can become a significant portion of the system latency. There are existing backplane Ethernet standards, but those are either based on fiber PHYs or use GMII/XAUI direct attach (GMII is functionally identical to RGMII but requires almost 2x the pins; XAUI is a functional successor to [R]GMII but carries a 10/40/100Gbps link).
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: haybailes on October 20, 2016, 10:00:21 pm
I have just finished watching the video and don't have time to read all the posts, so I don't know if anyone has already said something along these lines; sorry if someone has.

Instead of holes in the PCB and the 90° header pins, you could use normal straight female header pins going to a PCB with 90° header pins, and then the Raspberry Pi attaches to that.

Pros: the PCB can be changed to handle other mini computers like C.H.I.P. or Orange Pi, it can be placed from the top down, more room on the motherboard, ...
Cons: more PCBs and more component cost, may sway a lot in the slots, ...
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: xDR1TeK on October 23, 2016, 12:35:29 am
How is the Ethernet to SPI being done?
Emulation of TCP/IP to SPI?
Does the Linux kernel have a module to skip over the OSI physical layer which is connected to the ethernet port?
Would the ARM controller be built with the whole digital TCP/IP circuitry or just the analog circuitry part to operate the port channel on the output?
This has been something eating at me for a long time, and finding books that cover this much is impossible.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: xDR1TeK on October 23, 2016, 12:47:15 am
OK, I've figured out this much: the Application, Transport and Network layers are all in software. Then the link layer is going to be the SPI interface. The active components with the wires are the physical layer.
So the encapsulation is being routed in software to go out over the SPI,
and then the SPI-to-ethernet part would be just a dummy converter.
Very interesting.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: iXod on October 23, 2016, 05:50:21 am
Dave mentioned a price of $10 for pi. Where are these to be found at this price?
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Fungus on October 23, 2016, 07:44:31 am
How is the Ethernet to SPI being done?

Those Ethernet chips do everything - complete IP stack on a chip. You just give them a MAC address and tell them to start.

You communicate with them over SPI just to check for incoming connections and receive/send the data.

Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rob77 on October 23, 2016, 07:56:18 am
How is the Ethernet to SPI being done?

Those Ethernet chips do everything - complete IP stack on a chip. You give them a MAC address

You just communicate with them over SPI to check for incoming connections and grab the data as it arrives.

Actually there are 2 different kinds of ethernet chips for micros... ethernet-only, and ethernet + IP stack.

the mentioned ENC28J60 is a MAC + PHY with SPI interface... so it's just a plain ethernet "network card" connected via SPI and it does have a driver in the linux kernel.

The other kind of chip has the IP stack implemented on-chip, but those are mainly targeted at small microcontrollers - so you don't have to implement the IP stack on your small micro. An example of such a chip is the Wiznet W5100.
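
From userspace the difference mostly disappears on Linux: once the ENC28J60 driver is bound, the chip shows up as an ordinary network interface and plain sockets work over it. A trivial sketch (the address and port are assumptions for illustration):

[code]
import socket

# An ordinary UDP listener -- the kernel driver hides the SPI transport,
# so this works the same over an ENC28J60 as over any other NIC.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("192.168.0.2", 5005))  # address assigned to the ENC28J60 interface
data, peer = sock.recvfrom(1500)
print(f"got {len(data)} bytes from {peer}")
[/code]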
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: Fungus on October 23, 2016, 07:31:57 pm
Actually there are 2 different kinds of ethernet chips for micros... ethernet-only, and ethernet + IP stack.

the mentioned ENC28J60 is a MAC + PHY with SPI interface... so it's just a plain ethernet "network card" connected via SPI and it does have a driver in the linux kernel.

The other kind of chip has the IP stack implemented on-chip, but those are mainly targeted at small microcontrollers - so you don't have to implement the IP stack on your small micro. An example of such a chip is the Wiznet W5100.

OK, I got them mixed up.

I've used the Wiznet W5100 but not the ENC28J60. I thought they were similar.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: eneuro on October 25, 2016, 10:17:10 pm
How is the Ethernet to SPI being done?

Those Ethernet ships do everything - complete IP stack on a chip. You just give them a MAC address and tell them to start.
Is it gigabit ethernet? If not, then forget about any supercomputing and we should... SMELL BULLSHIT  instead of supercomputing >:D
[youtube]https://www.youtube.com/watch?v=TL7xrE9EYd4 (https://www.youtube.com/watch?v=TL7xrE9EYd4)[/youtube]

Update: Yep it is  :bullshit:  Orange Pi One – 10/100M Ethernet   :popcorn:
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rob77 on October 25, 2016, 10:41:11 pm
How is the Ethernet to SPI being done?

Those Ethernet ships do everything - complete IP stack on a chip. You just give them a MAC address and tell them to start.
Is it gigabit ethernet? If not, then forget about any supercomputing and we should... SMELL BULLSHIT  instead of supercomputing >:D
[youtube]https://www.youtube.com/watch?v=TL7xrE9EYd4 (https://www.youtube.com/watch?v=TL7xrE9EYd4)[/youtube]

Update: Yep it is  :bullshit:  Orange Pi One – 10/100M Ethernet   :popcorn:

Please read the whole thread...
And you're talking even bigger bullshit, because gigabit ethernet doesn't provide remote DMA and is therefore not suitable for the compute-node interconnect in a supercomputer.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: tszaboo on October 26, 2016, 03:17:35 pm
Ok figured out this much, Application, Transport, Network layers are all in software. Then the Link layer is going to be SPI interface. The active components with the wires are the physical layer.
So the encapsulation is being routed in software to port over the SPI.
then the SPI to ethernet would be just a dummy converter.
Very interesting.
It actually does not matter. Newer Linux kernels support it out of the box, so you just need to recompile the kernel and it will appear as a network card.
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: rob77 on October 26, 2016, 09:08:27 pm
Ok figured out this much, Application, Transport, Network layers are all in software. Then the Link layer is going to be SPI interface. The active components with the wires are the physical layer.
So the encapsulation is being routed in software to port over the SPI.
then the SPI to ethernet would be just a dummy converter.
Very interesting.
It actually does not matter. Newer Linux kernels support it out of the box, so you just need to recompile the kernel and it will appear as a network card.

Agreed, it doesn't matter... and in fact the SPI is replacing the PCI or USB bus, so it has nothing to do with the link layer... the link layer will still be ethernet ;)
Title: Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
Post by: SeoulBigChris on February 02, 2017, 11:56:35 am
Too bad this Ethernet controller can't implement 10BASE2 (or at least I don't think it can, after only 10 minutes of study).