Author Topic: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2  (Read 30857 times)


Offline EEVblog (Topic starter)

  • Administrator
  • *****
  • Posts: 37730
  • Country: au
    • EEVblog
EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« on: November 22, 2016, 07:17:22 am »
Part 2 of building the Raspberry Pi computer cluster.
Dave strips down an old Apple G5 PowerMac to use as the enclosure.

 

Offline djlorenz

  • Contributor
  • Posts: 43
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #1 on: November 22, 2016, 08:01:19 am »
Being in Australia for work helps with being one of the first watchers of the video :D
The idea of an "apple pi" is great!

The two-device board is good, and wiring that one up is already interesting. Cabling Ethernet as well would be a great engineering experience, but IMHO cables and TP-Link switches are good and cheap enough for the Ethernet wiring. Naked TP-Link PCBs in the top part, or maybe there is enough space for two 24-port switches (the switches probably take more space than the Pis?).

I would re-use the front switch to power up the system. Looking forward to seeing updates on this! Good luck!
 

Offline Towger

  • Super Contributor
  • ***
  • Posts: 1645
  • Country: ie
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #2 on: November 22, 2016, 08:02:37 am »
A part 2 of a project... Yippee. But will it be finished?

Says he who has a 5:1 ratio of half-finished to finished projects.
 

Offline Barny

  • Frequent Contributor
  • **
  • Posts: 311
  • Country: at
  • I'm from Austria, not Australia ;)
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #3 on: November 22, 2016, 08:06:58 am »
When the lid is closed, nobody will see what's inside.
Especially when it stands somewhere under a desk.

Because of that, I would focus on function instead of looks.
Build it the way you think the project works best and is most flexible.

PS: Mmmmmmm, Apple - Banana - Raspberry pie *insert Homer voice here*
« Last Edit: November 22, 2016, 08:09:22 am by Barny »
 

Offline ataradov

  • Super Contributor
  • ***
  • Posts: 11236
  • Country: us
    • Personal site
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #4 on: November 22, 2016, 08:07:05 am »
Naked TP-Link PCBs in the top part, or maybe there is enough space for two 24-port switches
Came here to write exactly this. There is plenty of space to wire normal Ethernet and not bother with SPI adapters.
Alex
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #5 on: November 22, 2016, 09:15:09 am »
I'd still like to see the SPI Ethernet approach; it's vastly more interesting than plain Ethernet, since everyone else does Ethernet solutions of varying inelegance. As you've mentioned before, Dave, the bandwidth limitations of SPI are not really relevant, since most cluster computing is not bandwidth-limited.

I'd consider a solution that mounted Ethernet switch ICs directly onto the motherboard a fair compromise though. I don't think you'd need magnetics on the board if they're already built into the RPi/OPis. There's no rule saying you need more than one set of magnetics between two devices; and indeed, as discussed in the last video, you don't even need that one -- you can use capacitors instead. The magnetics provide protection against high-voltage damage from 100m-long cables running in harsh environments, which isn't really relevant to a connection within a metal case.

I'd suggest designing the riser boards so that the OPi/RPi connector is always closest to the motherboard. In other words, have both riser boards be the "short"/"squat" variety. It ought to be trivial to deal with the routing challenges that result*; I doubt you'll actually be using very many of those pins, and the majority can be no-connects or power pins.

* To clarify, I meant the routing challenges within the riser boards. You'd still have a totally consistent pinout on the motherboard, of course (although if I were you I'd design the motherboard card-edge connector pinouts to be easiest to route as far as the motherboard is concerned, and then deal with whatever challenges emerge in the riser boards later).
« Last Edit: November 22, 2016, 09:38:00 am by rs20 »
 

Offline Geerant101

  • Contributor
  • Posts: 14
  • Country: au
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #6 on: November 22, 2016, 10:26:29 am »
I like the idea of having two Raspberry Pis on a single riser board; it makes better use of the space. Perhaps even use a PCI Express edge connector back to the main board, supported by PCB card guides. The PCB rails can then all be joined together for additional support. A double-row header might hold the PCB in place better without needing a card guide, but I think it would still need additional support, especially with the boards oriented horizontally. Lining up the pins on a double-row header would be difficult using a card guide - it only takes one slightly bent pin to ruin your day ;)
 

Offline bktemp

  • Super Contributor
  • ***
  • Posts: 1616
  • Country: de
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #7 on: November 22, 2016, 10:29:36 am »
I wouldn't use the SPI Ethernet solution, because apart from nicer wiring it only has drawbacks:
- It is more expensive (you need to add another Ethernet chip instead of a cheap RJ45 connector/cable).
- It draws more power (0.5W for the ENC28J60, which is roughly a 14% increase in power consumption per node - see the quick check below).
- It is slower (limited by SPI instead of USB).
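
For the 14% figure above, a rough check (assuming a Pi draws about 3.5 W at typical load - my assumption, not a measured number):

Code:
# Rough per-node overhead of an SPI Ethernet chip like the ENC28J60
pi_power_w = 3.5       # assumed typical Pi consumption (not measured)
enc28j60_w = 0.5       # figure quoted above for the ENC28J60
print("extra power per node: %.0f%%" % (100.0 * enc28j60_w / pi_power_w))  # ~14%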

But wiring 32 or 64 boards using RJ45 cables will not be easy unless you use many cable ties, which makes access to a single module difficult.

If you use riser boards, you could add a step-down converter with an enable pin to each board and put a large microcontroller on the mainboard as a system status controller. Maybe you could also add UART or some other communication bus, so it can talk to each module and cycle its power if it does not respond.

I like the idea of using the PCB-mount RJ45 connectors. It would simplify the wiring, especially if you also plug the Ethernet switch into the mainboard.
 
The following users thanked this post: flextard

Offline Krokkodillo

  • Newbie
  • Posts: 1
EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #8 on: November 22, 2016, 11:51:14 am »
Hi guys, I'm new to electronics but I wanted to share my thoughts on how to arrange the modules. Between SPI and RJ45 I would prefer the simpler implementation. For RJ45 I would definitely go for the plug-in option with no cables; for the SPI version I would say the modules should be mounted horizontally so they don't restrict the airflow from the front panel. I would also suggest a smaller adapter connector to the mainboard.


Sent from my iPhone using Tapatalk
 

Offline EEVblog (Topic starter)

  • Administrator
  • *****
  • Posts: 37730
  • Country: au
    • EEVblog
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #9 on: November 22, 2016, 12:12:13 pm »
Naked TP-Link PCBs in the top part, or maybe there is enough space for two 24-port switches
Came here to write exactly this. There is plenty of space to wire normal Ethernet and not bother with SPI adapters.

Yeah, there is. SPI Ethernet is neat from a wiring point of view, but that's its only advantage.
But then what's left for a motherboard, power and some LEDs?
Guess you could MUX the serial port from each board or something.
 

Offline Fungus

  • Super Contributor
  • ***
  • Posts: 16642
  • Country: 00
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #10 on: November 22, 2016, 12:26:45 pm »
Hi guys, I'm new to electronics but I wanted to share my thoughts on how to arrange the modules. Between SPI and RJ45 I would prefer the simpler implementation.

If you want "simpler implementation" then just don't do it at all. A single i7 chip will probably be faster than this whole rig.

Loads of cables will cover up all the pretty electronics and obstruct airflow.
 

Offline Barny

  • Frequent Contributor
  • **
  • Posts: 311
  • Country: at
  • I'm from Austria, not Australia ;)
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #11 on: November 22, 2016, 12:31:52 pm »
Yes, an i7 is faster.
But the Raspberries are more efficient than the i7.

And the question isn't why, the question is why not.
 

Offline UpLateGeek

  • Contributor
  • Posts: 16
  • Country: au
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #12 on: November 22, 2016, 01:34:49 pm »
I would avoid socketing the switch to the motherboard. Not only does it lock you into an exact brand and model of switch (so a switch failure would mean a motherboard replacement if you can no longer get the exact same switch, and with cheapie switches it's not a question of if the switch will fail, but when), but it will also take up room on the motherboard that would be better used for the RPis.

The neatest solution would be SPI Ethernet, but this would be very expensive, as it would require duplicating the SPI Ethernet circuit for each of the 32-64 RPis. Not to mention how slow it would be compared to 100Mbit Ethernet, let alone the 1Gbit on the OPi 2.

However, it looks like you'll have plenty of room under the motherboard, so I would route out a small slot next to each board and run the Ethernet cable through the motherboard, up behind it and over to the Ethernet switches above. You'll need to jam a whole bunch of those little switches in up top, but there should be enough room if you rip out the optical drive and everything else. For 64 RPis you'd need 10 of the 8-port switches - 7 RPis + 1 uplink per switch. 16-, 24-, or even 48-port switches would be better, but I guess it depends on what you can get for a reasonable price, and the total number of RPis you want. If you're running OPi 2s or other gigabit boards, you probably don't want to run all the switches to an aggregation switch with just one uplink to feed the lot, since that would effectively give you a maximum of a 64:1 contention ratio. Even with 100Mbit boards like the RPi 2/3 or the OPi 1, you'd still have a 6.4:1 contention ratio (rough arithmetic below). With 8-port switches you'd have 10 uplinks, so I'd just bring all the uplinks to the back panel. I would cut out the connector panel, either to the right size to snap in those Keystone Cat6 Ethernet couplers, or mount sockets along the top of the motherboard to pass the Ethernet signals through to sockets on the back panel. You could even 3D print a custom connector panel to get the exact port layout you want.
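
To put rough numbers on those contention ratios, here's the arithmetic as a quick Python sketch (worst-case oversubscription if every node saturates its port at once; the 64-node counts and link speeds are just the ones assumed above):

Code:
# Worst-case contention ratio: total node bandwidth / total uplink bandwidth
def contention(nodes, node_mbps, uplink_mbps, uplinks=1):
    return (nodes * node_mbps) / (uplinks * uplink_mbps)

print(contention(64, 1000, 1000))      # 64 gigabit boards, one gigabit uplink  -> 64.0
print(contention(64, 100, 1000))       # 64 100Mbit boards, one gigabit uplink  -> 6.4
print(contention(64, 100, 1000, 10))   # ten uplinks brought to the back panel  -> 0.64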

But then again, I am a network engineer rather than an electronics engineer, so obviously I'm a bit biased when it comes to network connectivity!

As for power, here's the G5 "service manual". The power supply pinouts are on pages 179-180. The cheapest solution would be a Wun Hung Lo buck converter off eBay (massively over-rated to allow for their huge power rating fudge factor). Building one into the motherboard wouldn't cost much more though. The toughest part would be finding somewhere out of the way of all the RPis to mount the heatsink for the MOSFET and power diode.
 

Offline Towger

  • Super Contributor
  • ***
  • Posts: 1645
  • Country: ie
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #13 on: November 22, 2016, 02:24:09 pm »
Those bodge wires on the back of the motherboard must have been slipped past Steve Jobs. You don't see that sort of crap on a PC motherboard.

Don't trust that those power connector pins are standard; the left connector looks standard, but it is a bloody Apple!

Copious use of the official EEVblog J Cloth, and the watch magically changes mid-video.
 

Offline CJay

  • Super Contributor
  • ***
  • Posts: 4136
  • Country: gb
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #14 on: November 22, 2016, 02:28:10 pm »
Lots of switches in that price band tend to use the same 'harmonica' port connectors, so it's highly likely that a failed switch would render the whole board useless, IMHO.

I've commented on the video, but I'll expand on that comment here: I like the idea of a pluggable sub-board that carries the Pi and mates with a 'standard' connector on the motherboard, one that perhaps brings USB, GPIO, power and Ethernet to the motherboard.

That way the backplane/motherboard can be re-used many times as new versions of the Pi/Orange/Banana etc. become available; you just need to respin the sub-board to accommodate new GPIO, power and USB requirements.

Designing the backplane/motherboard to route Ethernet and USB signals could be a challenge, but far from impossible. Add in a little 'expansion room' with extra GPIO and alternative power rails (can you 'back feed' the Pi with 3.3V through the GPIO?) and it would be future-proof.

 

Offline LightPathVertex

  • Newbie
  • Posts: 1
  • Country: de
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #15 on: November 22, 2016, 04:07:17 pm »
For the networking, I think the nicest solution would be to design a long PCB with 8 PCB-mount plugs that plug directly into the RJ45 jacks of the Pis, plus an Ethernet switch IC.
Since you have no actual cable, I guess you could even get rid of the magnetics and connect them straight into the IC - that way you're just left with a strip-shaped PCB with 8 plugs at the right spacing, an IC like the KSZ8999, and one jack at the top.

For an 8x6x2 arrangement of Pis, you'd need 12 of those strips, and could then plug them all into a 16-port switch at the top of the case.
« Last Edit: November 22, 2016, 04:08:49 pm by LightPathVertex »
 
The following users thanked this post: richfiles

Online mariush

  • Super Contributor
  • ***
  • Posts: 5016
  • Country: ro
  • .
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #16 on: November 22, 2016, 04:49:52 pm »
Dave, @ 20:00

You CAN'T use both 5V @ 19A AND 3.3V @ 22A at the same time. Look at the drawing on the label: like with ATX power supplies, the 3.3V and 5V rails can each output up to that much current, but combined they're limited to some lower figure, maybe 100-150 watts.
Then, the total power on 3.3V + 5V COMBINED with the 12V rails cannot exceed 340 watts.

There are two 12V rails with a max of 14A on each, but the total for both is only 23A, so 12V x 23A = 276 watts on 12V, which would leave 340W - 276W = 64W for 3.3V + 5V... which is obviously not right. So my guess is there's more power available for 5V + 3.3V, maybe 100-120W, but the transformer or something else inside isn't sized for it (or maybe it's an efficiency/heat dissipation thing), and that limits the total to 340 watts.
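
A quick sanity check of that label maths (treating the label figures as hard limits, which is my assumption):

Code:
# Back-of-envelope check of the G5 supply label (label figures assumed, not measured)
total_limit_w = 340.0                        # combined limit printed on the label
v12_total_a   = 23.0                         # both 12V rails combined
v12_worst_w   = 12.0 * v12_total_a           # 276 W if the 12V rails are maxed out
low_rails_w   = total_limit_w - v12_worst_w  # whatever is left for 3.3V + 5V
print("12V worst case: %.0f W" % v12_worst_w)
print("left for 3.3V + 5V: %.0f W (about %.1f A at 5V alone)" % (low_rails_w, low_rails_w / 5.0))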

Anyway, the point is I wouldn't rely on more than 15A from that 5V rail. I wouldn't use that power supply in the first place, because chances are it's only around 70% efficient. You could just buy an industrial 5V 40-60A power supply from Digikey for $40-60 that runs at 75-80% efficiency, takes up less space and is less noisy.

 

Offline kcozens

  • Contributor
  • Posts: 44
  • Country: ca
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #17 on: November 22, 2016, 04:59:34 pm »
I also like the look of the Apple case. It was amazing to see examples of how other people have re-purposed them. BTW, early on in the video you said 1.6GHz instead of 1.8GHz.

The riser idea is an interesting one. When you started talking about using a riser, and then about a DC/DC converter, I had a thought. If the height is available (or if you have enough space on the riser board) you could add a connector to accommodate a DC/DC converter, or build a DC/DC converter into the riser. You could add some jumpers to the riser so that the two Pi boards on it can be powered from either the 5V rail of the power supply or from the 12V rail via the on-board DC/DC converter.
 

Offline lpickup

  • Regular Contributor
  • *
  • Posts: 98
  • Country: us
  • Uncle Bobby Dazzler
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #18 on: November 22, 2016, 05:48:56 pm »
If you are looking to maximize the number of Pis, how about a staggered arrangement that might let you squeeze in a few more rows (assuming the predominantly horizontal arrangement you described), such that the connectors of one row end up above the narrower part (or maybe the gap between boards) of the row below.
 

Offline mxmarek

  • Contributor
  • Posts: 18
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #19 on: November 22, 2016, 06:05:59 pm »
I wonder if the Ethernet switch(es) will consume more power than the Pis...
 

Offline Lightages

  • Supporter
  • ****
  • Posts: 4314
  • Country: ca
  • Canadian po
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #20 on: November 22, 2016, 06:29:53 pm »
I am not too sure how to handle the inter-connectivity, but I am sure of one thing: I would use two motherboards instead of double-capacity riser cards. I would put one motherboard in its usual place and mount the other just inside the cover with the cards facing in. This board would need to be on hinges or have some other easy way to access it. This way you have less complexity in the adapter/riser boards and a nice wind tunnel for cooling. I guess if my approach were used, then the SPI approach for the networking would be much less difficult to manage.
 

Offline timgiles

  • Regular Contributor
  • *
  • Posts: 236
  • Country: se
  • Programmer, DB architect
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #21 on: November 22, 2016, 06:37:45 pm »
Dave,

Great video, glad you are moving forward with the project. I thought the vertical male Ethernet connectors looked fun, but I worry they will lock in place and you will need 8 fingers pressing on 8 tabs to release the switch once it is pushed on. Personally, I hope you go down the route of either adding a few Ethernet bridge ICs onto the board or taking the SPI route. This being an EE blog, I would rather see electrical design than just plugging wires into ports (!)... although I temper my argument with the fact that the PCB design and smaller daughter boards will not be a trivial effort either way.

Hope you are going to do a video with some discussion of the BOM, PCB design and maybe choice of parts. Seeing how a pro does these in Altium or one of the entry-level CAD packages would be great.

Also, the power connector decision. Surely having 8 (was it 8?) connectors you have to screw off is a PITA compared to a single Molex-type connector, at least when it comes to disconnecting. But I do accept that the other connection method oozes class! Just a shame the bodge wires killed the appreciation later on in the video!

 

Offline ataradov

  • Super Contributor
  • ***
  • Posts: 11236
  • Country: us
    • Personal site
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #22 on: November 22, 2016, 06:46:44 pm »
But then what's left for a motherboard, power and some LEDs?
In that case the whole thing could actually just be a wiring harness instead of a solid motherboard, plus some mechanical design for mounting the boards. If anything, that presents the bigger challenge.

BTW, there are flat Ethernet cables (all the twisted pairs molded in parallel). I used them to wire 150 devices to 24-port switches and the result was pretty neat; they are very easy to route in a controlled way, unlike regular Ethernet cables.
Alex
 

Offline Towger

  • Super Contributor
  • ***
  • Posts: 1645
  • Country: ie
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #23 on: November 22, 2016, 07:00:54 pm »
I wonder if the Ethernet switch(es) will consume more power than the Pis...

May not be far off it, certainly for some older ones. But those little domestic switches don't seem to generate much heat.
The simplest solution to build, but not the neatest, would be a 48-port switch and stacking the Pis up using PCB standoffs, with short patch and power cables to interconnect them. Just be aware that the fans, if the switch has any (older models), may be loud. It should be possible to remove the switch from its case and just use the PC case's fans to cool it, and maybe power it from the case's PSU as well.

Any ideas on how to monitor the individual Pis' status, even if it's just a flashing 'Alive' LED on each unit?
 

Offline bktemp

  • Super Contributor
  • ***
  • Posts: 1616
  • Country: de
Re: EEVblog #946 - Apple (Raspberry) Pi Cluster - PART 2
« Reply #24 on: November 22, 2016, 07:16:47 pm »
Any ideas on how to monitor the individual Pis' status, even if it's just a flashing 'Alive' LED on each unit?
I would use the UART and connect it to a medium-sized microcontroller with its own Ethernet interface.
Writing a simple script or program that runs on each Pi and reports the status (temperature, CPU load, etc.) over the UART should be easy.
If the microcontroller initiates the query and waits for the answer, it can query all Pis sequentially using cheap multiplexers for the UART signals.
That allows monitoring the cluster without having to connect to each Pi.
You could also add some more advanced features like current monitoring of each Pi, remote shutdown and power-cycle capability.
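
Something like this rough Python sketch could be the responder that runs on each Pi (assuming pyserial is installed and the UART is /dev/serial0 at 115200 baud; the single-character poll 'protocol' is made up purely for illustration):

Code:
# Minimal UART status responder for each Pi (illustrative sketch only)
import os
import serial  # pyserial, assumed installed

def read_temp_c():
    # The Pi exposes the SoC temperature in millidegrees C here
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

def read_load():
    # 1-minute load average
    return os.getloadavg()[0]

port = serial.Serial("/dev/serial0", 115200, timeout=1)
while True:
    # Wait for the controller to poll us with a single '?' byte
    if port.read(1) == b"?":
        reply = "T=%.1fC L=%.2f\n" % (read_temp_c(), read_load())
        port.write(reply.encode("ascii"))

The microcontroller (or multiplexed master) just sends '?' to each node in turn and treats a timeout as 'not alive', which also covers the flashing-LED case.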
 

