Author Topic: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1  (Read 54677 times)


Offline CM800

  • Frequent Contributor
  • **
  • Posts: 882
  • Country: 00
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #50 on: October 15, 2016, 03:08:48 pm »
I'm trying to make a PCB baseplate for the NanoPi... Ran into something strange.


I've got the USB connector there, trying to add holes for it, but look, they overlap. I'm certain my drawing on the left is correct: the hole positions are a direct overlay of the DXF they provide, and all the 2.54mm headers fit just fine.
 

Offline BurtyB

  • Regular Contributor
  • *
  • Posts: 66
  • Country: gb
    • 8086 Consultancy
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #51 on: October 15, 2016, 03:36:50 pm »
I've got the USB connector there, trying to add holes for it, but look, they overlap. I'm certain my drawing on the left is correct: the hole positions are a direct overlay of the DXF they provide, and all the 2.54mm headers fit just fine.

Measuring the board I have it's more like a ~0.8mm diameter hole (~1.14mm diameter ring) on 1.4mm spacing.

Chris.
 
The following users thanked this post: CM800

Offline Stupid Beard

  • Regular Contributor
  • *
  • Posts: 221
  • Country: gb
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #52 on: October 15, 2016, 04:20:46 pm »
Not sure if anyone's mentioned it yet, but one thing you'll want to consider is the ability to toggle power, or at the very least reset, each node individually. You may not need it very often, but it's valuable when an update goes wrong. This doesn't have to be more than a physical switch if you want to keep it simple; it just has to be accessible and hardware-based, so that you don't have to pull everything apart to fix one troublesome node.

Also, whilst your primary use may be low bandwidth, updating the software on all the nodes is not. You may want to take that into consideration before you lock yourself into slow networking. You may decide you don't care, but I thought I'd mention it just in case.
 

Offline CM800

  • Frequent Contributor
  • **
  • Posts: 882
  • Country: 00
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #53 on: October 15, 2016, 04:22:08 pm »
behold, a little CAD model concept.

What do you think of something like this, Dave?




If you use SMD holders for the 2.54mm pin headers you can fit one on each side of the board, then a card edge.

The entire board assembly there is only 14.5mm wide. This could be shrunk further by thinning the cooling blocks; however, I don't think it'd be worth it, as you'll need space between the card-edge connectors anyway.

You could break out the Ethernet connections to the board easily enough too, as I've flipped them over.

12x 1.2GHz cores, with water blocks (or just passive)

93 x 14.5 x 60mm space.

That's 81cm3, or ~6.7cm3 per core.


You could make this board double height, fitting 8 x 4 = 32 cores per PCB. The entire base board would fit in a space under 100 x 100mm, which means you could probably get a 4-layer board made up for $50 for 10pcs in China.

That brings the est. price of each board to $85 (8 x $9.99 for the 512MB RAM version), so for $850 + backplane cost you could make a 320-core, 1.2GHz cluster...

... I should probably slow down now before I end up trying to do Dave's project myself.  :-DD :-DD
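For what it's worth, the volume and cost figures above check out (a quick sketch using the post's own numbers; the $5 PCB share per board assumes the $50/10pcs run is spread evenly):

```python
# Sanity check of the cluster size/cost figures quoted above.
# Assumptions: 12 cores per blade envelope, 8 boards (4 cores each)
# per double-height PCB, $9.99 per board, ~$5 PCB share per blade.

# Volume of one 12-core blade assembly (mm -> cm^3)
volume_cm3 = (93 * 14.5 * 60) / 1000
print(f"blade volume: {volume_cm3:.1f} cm^3")       # ~80.9 cm^3
print(f"per core:     {volume_cm3 / 12:.2f} cm^3")  # ~6.74 cm^3

# Cost of a 32-core (8-board) PCB, and a 10-PCB cluster
board_cost = 8 * 9.99 + 5   # boards + share of the $50/10pcs PCB run
print(f"per PCB:  ${board_cost:.2f}")               # ~$84.92
print(f"cluster:  ${board_cost * 10:.0f} for {32 * 10} cores")
```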
« Last Edit: October 15, 2016, 04:42:05 pm by CM800 »
 
The following users thanked this post: dekra54

Offline positivenucleus

  • Newbie
  • Posts: 2
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #54 on: October 15, 2016, 04:59:04 pm »
Regarding going through SPI-to-Ethernet to get internet access...

You can use the Rx/Tx pins (RS-232 style, at logic level) with PPP. You make PPP run on this port; not huge speed: 115200 bit/s.

You will need a node that has the other end of each serial line, which more or less replaces the Ethernet switch, but I think it would be cheaper.

One simple idea is a bunch of USB serial adapters connected to a USB hub, and that connected to one of the boards. Yes, cables (USB-serial <--> hub <--> "master node"), but way cheaper in $ and power, and easy to replace. Maybe you can get the hub chips and create a USB hub on the "motherboard", and have only a single cable to the master node. The Microchip USB251x can have up to 4 ports and < 100mA total current, but I guess there might be others out there ;)

Serial port + PPP: http://elinux.org/RPi_Serial_Connection
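A 115200 bit/s link is modest; here's a quick back-of-the-envelope sketch of what it buys you (the 10-bits-per-byte 8N1 framing is standard, while the ~7-byte per-frame PPP overhead is an illustrative assumption, not a measured figure):

```python
# Rough effective throughput of PPP over a 115200-baud UART link.
# 8N1 framing: 10 bits on the wire per payload byte.
baud = 115200
raw_bytes_per_s = baud / 10          # 11520 B/s before protocol overhead

# PPP/HDLC adds a few bytes of framing per packet; assume ~7 bytes
# overhead on a 1500-byte MTU (an illustrative figure, not measured).
mtu, overhead = 1500, 7
efficiency = mtu / (mtu + overhead)
print(f"raw:       {raw_bytes_per_s / 1024:.2f} KiB/s")               # 11.25
print(f"effective: {raw_bytes_per_s * efficiency / 1024:.2f} KiB/s")  # ~11.20
```

So even before overhead, each node's uplink tops out around 11 KiB/s, which is why software updates over such a link would be painful.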


 

Offline crasbe

  • Newbie
  • Posts: 2
  • Country: de
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #55 on: October 15, 2016, 05:31:29 pm »
You can use the Rx/Tx pins (RS-232 style, at logic level) with PPP. You make PPP run on this port; not huge speed: 115200 bit/s.

Actually you can go a lot higher with a Raspberry Pi. Let me quote the BCM2835 ARM Peripherals document:
Quote
4) The UART itself has no throughput limitations in fact it can run up to 32 Mega baud. But doing so requires significant CPU involvement as it has shallow FIFOs and no DMA support.
https://www.raspberrypi.org/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf Page 10, at the very bottom.

The Allwinner H3 seems to be pretty similar but I wasn't able to find reliable numbers.
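The "significant CPU involvement" is easy to quantify with a rough sketch (the 16-byte FIFO depth here is an assumed figure for illustration, not taken from the datasheet):

```python
# Why 32 Mbaud "requires significant CPU involvement": with shallow FIFOs
# and no DMA, the CPU must take an interrupt every FIFO-full of bytes.
baud = 32_000_000
bytes_per_s = baud / 10            # 8N1: 10 bits per byte -> 3.2 MB/s
fifo_depth = 16                    # assumed shallow FIFO, for illustration
interrupts_per_s = bytes_per_s / fifo_depth
print(f"{bytes_per_s / 1e6:.1f} MB/s")            # 3.2 MB/s
print(f"{interrupts_per_s:,.0f} interrupts/s")    # 200,000
```

Hundreds of thousands of interrupts per second would leave little CPU time for actual compute, so a sustained 32 Mbaud link is theoretical at best.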
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #56 on: October 15, 2016, 08:10:44 pm »
You can use the Rx/Tx pins (RS-232 style, at logic level) with PPP. You make PPP run on this port; not huge speed: 115200 bit/s.

Actually you can go a lot higher with a Raspberry Pi. Let me quote the BCM2835 ARM Peripherals document:
Quote
4) The UART itself has no throughput limitations in fact it can run up to 32 Mega baud. But doing so requires significant CPU involvement as it has shallow FIFOs and no DMA support.
https://www.raspberrypi.org/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf Page 10, at the very bottom.

The Allwinner H3 seems to be pretty similar but I wasn't able to find reliable numbers.
Actually the Allwinner H3 has built-in RGMII for 1Gbps Ethernet. If the required pins are broken out you may want to forget about SPI-based Ethernet and wire up a GbE direct-attach connection on the backplane. RGMII interfaces can be connected directly without a PHY in the middle (this kind of direct-attach connection is fairly common, especially for faster links, like the SFP direct-attach cables used in low-cost 10Gbps Ethernet stacks). So if your Ethernet switch chipset supports it, you can design your backplane using RGMII direct attach to the processors.

In fact, RGMII direct attach and an SPI-based connection can be used in tandem in a cluster like this. The high-throughput, low-latency 1Gbps connection can carry bulk data across nodes, while the SPI-based connection carries out-of-band events and management packets.
 
The following users thanked this post: CM800

Offline uwe

  • Newbie
  • Posts: 6
  • Country: de
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #57 on: October 15, 2016, 08:41:17 pm »
Hi Dave,

go for the Raspberry Pi Compute Module. There will be a new version by the end of the year ;)

From https://www.raspberrypi.org/blog/compute-module-nec-display-near-you/

Each display has an internal bay which accepts an adapter board loaded with either the existing Compute Module, or the upcoming Compute Module 3, which incorporates the BCM2837 application processor and 1GB of LPDDR2 memory found on the Raspberry Pi 3 Model B. We’re expecting to do a wider release of Compute Module 3 to everybody around the end of the year.

Greetings

Uwe
 
The following users thanked this post: k2teknik

Offline ebclr

  • Super Contributor
  • ***
  • Posts: 2328
  • Country: 00
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #58 on: October 15, 2016, 08:43:58 pm »
Raspberry Pi is a bad choice

Better choice




https://www.parallella.org/
 

Offline Wilksey

  • Super Contributor
  • ***
  • Posts: 1329
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #59 on: October 15, 2016, 09:30:05 pm »
I looked at using a Pi Zero but couldn't find where to get them; all of the suppliers seem to only allow you to purchase 1.
Where can you buy them (in the UK, where they are made...) in multiple quantities? I think a Farnell link said it was discontinued.
 

Offline Rasz

  • Super Contributor
  • ***
  • Posts: 2616
  • Country: 00
    • My random blog.
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #60 on: October 15, 2016, 10:30:49 pm »
:-//  impressive waste of time

At the end of the day, after investing >100h, you will end up with something equaling the performance of a two-year-old used $40 Intel G3258 on a $25 motherboard with $20 of RAM ... _in some specific tasks_ ... :palm:

BTW the Zeros all have host and _gadget mode_ USB, meaning they can all act as Ethernet-over-USB. A $5 USB hub is the easiest solution.
already done to death here: http://www.mycustard.com/ Edit: you will notice the indicative lack of any performance/usefulness metrics; that's because none exist for such a thing.

$30 off the shelf for a custom 4-board pointless cluster PCB: https://shop.pimoroni.com/products/cluster-hat


still a total waste of time. This project is the EE/node.js-web-developer-trying-real-computing equivalent of mounting a turbo in your mum's Pinto/Morris Mini/Daihatsu Charade/whatever small shitty eco town car.
« Last Edit: October 15, 2016, 10:32:52 pm by Rasz »
Who logs in to gdm? Not I, said the duck.
My fireplace is on fire, but in all the wrong places.
 

Offline EEVblogTopic starter

  • Administrator
  • *****
  • Posts: 37717
  • Country: au
    • EEVblog
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #61 on: October 16, 2016, 01:58:03 am »
behold, a little CAD model concept.
What do you think of something like this, Dave?


That's like what I had in mind. Either back-to-back with a center heatsink, or back-to-back outside to a long thin machined aluminium brick that becomes the housing as well, i.e. it's like a "blade" cluster module. Ethernet and 12V/24V power at one end (+ maybe serial monitor), and status LEDs on the other end.

The other option is an extruded aluminium case, as I had in mind before, with rows of vertical boards inside. They'd have to be mounted longitudinally of course, for airflow.
 

Offline EEVblogTopic starter

  • Administrator
  • *****
  • Posts: 37717
  • Country: au
    • EEVblog
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #62 on: October 16, 2016, 02:02:23 am »
:-//  impressive waste of time

So is your post.
If you don't have anything positive to contribute then please just ignore it.
 

Offline Rasz

  • Super Contributor
  • ***
  • Posts: 2616
  • Country: 00
    • My random blog.
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #63 on: October 16, 2016, 03:09:12 am »
If you don't have anything positive to contribute then please just ignore it.

did you miss the second half?

EDIT: actually I was hoping your angle in the video would be debunking, proving how delusional a project like this is. Most people suckered into this either don't do any calculations or are incapable of assessing performance, and genuinely expect at the very least workstation performance (if not a mini server).

TLDR:
Very best case scenario, one Pee 3 = ~6-8 Pee Zeros, and this is using PERFECTLY scalable, cluster-optimized tests.
In the same test, one $40 Intel processor = ~three Pee 3s.
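Taking the quoted ratios at face value (they're the poster's estimates, not benchmarks; the $5 Zero price and the midpoint of the 6-8 range are assumptions), the cost per Pi-3-equivalent works out roughly as:

```python
# Cost per "Pi 3 equivalent" using the ratios quoted above.
# Assumptions: $5 per Pi Zero, midpoint 7 of the quoted 6-8 range,
# and the $40 CPU + $25 motherboard + $20 RAM figures from the earlier post.
pi_zero_price = 5.00
zeros_per_pi3 = 7                 # midpoint of the quoted 6-8 range
intel_bundle = 40 + 25 + 20       # CPU + motherboard + RAM

cost_from_zeros = pi_zero_price * zeros_per_pi3   # $35 per Pi-3 equivalent
cost_from_intel = intel_bundle / 3                # ~$28 per Pi-3 equivalent
print(f"from Zeros: ${cost_from_zeros:.0f}")
print(f"from Intel: ${cost_from_intel:.0f}")
```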
« Last Edit: October 16, 2016, 03:19:34 am by Rasz »
Who logs in to gdm? Not I, said the duck.
My fireplace is on fire, but in all the wrong places.
 

Offline optoisolated

  • Supporter
  • ****
  • Posts: 71
  • Country: au
  • If in doubt, it's probably user error.
    • OpsBros
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #64 on: October 16, 2016, 03:59:01 am »
Was very excited to see Dave's video and figured it'd be a good way to learn some more about dealing with more complicated concepts.

Reading up on the requirements of a transformer-less configuration, it seems rather straightforward, especially considering a 10Mbps connection is more than ample for the stated purpose. Section 4 of this TI guide goes into a lot of detail on the best way to achieve it: http://www.ti.com/lit/an/snla088a/snla088a.pdf

It's made even easier when using something like the Microchip KSZ8895MQX integrated 5-port Ethernet switch chip. It's manageable, but by default will function as a dumb switch, and it even includes termination resistors and a power regulator internally, further simplifying the design requirements.

I've started designing a circuit using the ENC28J60 and the KSZ8895MQX to see if I can, and so far I haven't hit any roadblocks. Using the SPI bus as an Ethernet interface: that never even occurred to me!  :clap:

This is one of those projects where it's possible to get the same results in simpler ways, but what's the fun in that?  :-DMM   :D
 

Offline EEVblogTopic starter

  • Administrator
  • *****
  • Posts: 37717
  • Country: au
    • EEVblog
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #65 on: October 16, 2016, 04:52:39 am »
did you miss the second half?

You mean the part with "already done to death", pointless, and "still a total waste of time"?  ::)
 

Offline Rasz

  • Super Contributor
  • ***
  • Posts: 2616
  • Country: 00
    • My random blog.
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #66 on: October 16, 2016, 05:07:01 am »
I think it was made perfectly clear in the video why this was being done and why it was being done in this way.
yes yes, just because

Performance was specifically stated as NOT the primary aim. So my question is why make performance the primary focus of your criticism? Now that's an epic facepalm moment for you.

then don't call it a supercomputer. Dave mentions it not being as fast as the latest modern Intel CPU, while in fact it won't even beat a 2-year-old budget product.

Learning how to set up ethernet over an SPI bus is generically useful information that may be applicable in other situations.

It's my autistic brain :/  There are only correct or wrong solutions. Correct is one that optimizes for something. This one seems to be optimizing for clicks; it's neither a supercomputer nor has Pee in it. :(
It's like https://hackaday.io/project/12122-raspberry-pi-project (spoiler: it's a parody)

Who logs in to gdm? Not I, said the duck.
My fireplace is on fire, but in all the wrong places.
 

Offline obiwanjacobi

  • Frequent Contributor
  • **
  • Posts: 988
  • Country: nl
  • What's this yippee-yayoh pin you talk about!?
    • Marctronix Blog
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #67 on: October 16, 2016, 07:29:57 am »
Nice project!

If you make small adapter boards for each OrangePi you can solve the angle AND the pinout problem. Also, you can do away with the cutout, which makes routing way easier and leaves you more board space. Perhaps even put them a little closer, because now they can be inserted from the top.

This would allow you to mix in any compute module that has your bus signals somewhere on its header connector, opening the door for future enhancements when a new, better, faster compute module comes out.

[2c]
Arduino Template Library | Zalt Z80 Computer
Wrong code should not compile!
 

Offline CM800

  • Frequent Contributor
  • **
  • Posts: 882
  • Country: 00
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #68 on: October 16, 2016, 09:00:28 am »
behold, a little CAD model concept.
What do you think of something like this, Dave?


That's like what I had in mind. Either back-to-back with a center heatsink, or back-to-back outside to a long thin machined aluminium brick that becomes the housing as well, i.e. it's like a "blade" cluster module. Ethernet and 12V/24V power at one end (+ maybe serial monitor), and status LEDs on the other end.

The other option is an extruded aluminium case, as I had in mind before, with rows of vertical boards inside. They'd have to be mounted longitudinally of course, for airflow.

From what I've been reading online, the Allwinner H3 throttles itself when it gets too hot and has been known to overheat, hence why they usually have a heatsink on the bottom.
If you wanted to pack as many of these boards into as small a space as possible, putting the processors opposite each other (inside) and having an aluminium block with a water pass-through in it might be quite suitable. You could then chain them up to another water block on the case and use a little micro-pump to push the water through. The great thing about this is it will also be silent. I've noticed quite a few people have put fans on the heatsinks, and even then I've seen reports of it getting up to 57°C. Here is a pic of the block concept I came up with:





It's an L-shaped block of aluminium or copper with 3 holes drilled in it, then partially tapped so that a screw can fit in the horizontal one and two pipe connectors can fit in the other two: cheap and quick to machine. If you were making 10 blades of 4 or 8 boards, I think the benefits outweigh the additional work, as you could put the blades next to each other with only about a 1mm spacer.


« Last Edit: October 16, 2016, 09:07:04 am by CM800 »
 

Offline SeanB

  • Super Contributor
  • ***
  • Posts: 16276
  • Country: za
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #69 on: October 16, 2016, 09:21:59 am »
A daughterboard with a small DC-DC converter on it to provide 5V for the Pi, with status LEDs for power, and a small tactile switch on one side to disable the DC-DC converter as a reset function. You can then put the SPI Ethernet chip on there as well, with the termination circuit, and simply have 3 sets of zero-ohm links to disconnect the SPI bus from the 40-pin connector on the bottom. The addition of another (probably 10-pin) connector on the bottom will allow you to have the 12V power rail (lower current draw on the main board), the 4 differential data paths and 5 ground pins to supply power. This leaves the 40-pin connector free and standard (with the 3 links if you need SPI on there; otherwise you just leave off the 3 jumpers and don't have the stubs to cause reflections) for further use if needed.

The main board can then be spaced so the daughterboards channel airflow from a fan through the slots, allowing the chips to get by with small stick-on heatsinks, powered by a single 120mm fan on one side of the case and a vent on the other.

Get the board dimensions right and you can have 3 different boards with identical placement of the main 40-pin and 10-pin connectors on the bottom, but with each variant able to accommodate one of the 3 Pi variants described, as they are all electrically the same, just with different pin positions; or design a 4-layer board that can fit any of the 3 if you solder in the right socket for the board you want to use.
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #70 on: October 16, 2016, 10:42:01 am »
CM800: What is the most elegant way to re-fill the third hole? Any method more elegant than a bolt+O-ring?
 

Offline CM800

  • Frequent Contributor
  • **
  • Posts: 882
  • Country: 00
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #71 on: October 16, 2016, 11:40:36 am »
CM800: What is the most elegant way to re-fill the third hole? Any method more elegant than a bolt+O-ring?

That's generally how most people do it. You could use a rubber plug and a grub screw, or a grub screw and a dab of epoxy over the end of it.
 

Offline ziggyfish

  • Regular Contributor
  • *
  • Posts: 113
  • Country: au
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #72 on: October 16, 2016, 12:04:58 pm »
In terms of networking, you could subnet the group of Pis.

For example, if you can fit 30 on a single motherboard, then set the subnet mask to 255.255.255.224, so that 192.168.0.1 to 192.168.0.30 are on one network and 192.168.0.33 to 192.168.0.62 are on another (with the default gateway being the first address on each switch, e.g. 192.168.0.1, 192.168.0.33, etc.).

Then configure the routing tables on the first device on each board.
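The /27 split described above can be verified with Python's standard ipaddress module (a quick sketch using the concrete addresses from the post):

```python
import ipaddress

# A 255.255.255.224 mask is a /27: 32 addresses per subnet, 30 usable
# hosts, which matches one 30-node motherboard per subnet.
subnets = list(ipaddress.ip_network("192.168.0.0/24").subnets(new_prefix=27))

first = subnets[0]   # 192.168.0.0/27
second = subnets[1]  # 192.168.0.32/27

hosts = list(first.hosts())
print(first.netmask)            # 255.255.255.224
print(hosts[0], hosts[-1])      # 192.168.0.1 192.168.0.30
print(list(second.hosts())[0])  # 192.168.0.33
```

A /24 yields eight such /27s, so up to eight 30-node motherboards fit in one private /24 this way.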
 

Offline SeanB

  • Super Contributor
  • ***
  • Posts: 16276
  • Country: za
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #73 on: October 16, 2016, 12:11:23 pm »
If it is a soft aluminium alloy that you machined, the simplest is to make a solid metal slug 10-20 micrometres larger in diameter than the hole and press-fit it in there, or make it a snug fit and use some thread sealer on it. There's not much pressure, so the press-fit method will be easier, but you need to either have the initial hole at a very controlled diameter or ream it out to one.

I've done that with a cooling block, though there it needed a more serpentine cooling loop, so there were multiple long holes through the block, with the ends closed off with press-fitted plugs, and the internal ones end-drilled to intersect multiple galleries, with unwanted paths (to force the serpentine flow) filled with press-fitted plugs pressed into the block. Another block just used a long machine tap to thread the entire cross channel, then simply had threaded plugs and sealer put in at the required points to block the passages, with the outer hole plugged as well before being milled to final dimension, so there are almost no visible marks of the plugs.

Another method, if the block allows it, is to drill the pocket for the pipe fittings and then angle-drill 2 intersecting smaller-diameter holes for the fluid to travel through.
 
The following users thanked this post: CM800

Offline EEVblogTopic starter

  • Administrator
  • *****
  • Posts: 37717
  • Country: au
    • EEVblog
Re: EEVblog #934 - Raspberry Pi Supercomputer Cluster PART 1
« Reply #74 on: October 16, 2016, 12:20:53 pm »
It's my autistic brain :/  There are only correct or wrong solutions.

Then you have nothing to contribute to this thread. Please do us a favor and ignore it.
 
The following users thanked this post: CM800

