Author Topic: Open Compute servers are the best thing you never knew  (Read 2237 times)


Offline nth_degree (Topic starter)

  • Regular Contributor
  • *
  • Posts: 52
  • Country: us
Open Compute servers are the best thing you never knew
« on: January 02, 2022, 11:46:14 pm »
Ok, so I want to tell y'all about my Open Compute projects because they've become so much fun!
Did you know you can buy 4x Xeon E5-2680 v2 servers that run totally silent and are power efficient for next to no money, and fill them with dirt-cheap registered DDR3? What if I said that not only were they designed by Facebook, but these are actual Facebook surplus? Yes, it's true. Let me know if anyone wants to hear all about it.
« Last Edit: January 03, 2022, 05:59:18 pm by nth_degree »
 

Offline xrunner

  • Super Contributor
  • ***
  • Posts: 7639
  • Country: us
  • hp>Agilent>Keysight>???
Re: Open Compute servers are the best thing you never knew
« Reply #1 on: January 03, 2022, 12:06:37 am »
Yes let's hear about it ...  :popcorn:
I told my friends I could teach them to be funny, but they all just laughed at me.
 

Offline nth_degree (Topic starter)

  • Regular Contributor
  • *
  • Posts: 52
  • Country: us
Re: Open Compute servers are the best thing you never knew
« Reply #2 on: January 03, 2022, 12:38:01 am »
Ok good.
So Facebook was growing fast around late 2010 and realized they needed to build their own datacenters. Someone there figured there was an opportunity, and a lot of benefit, in designing their own servers too. What they created was an open server specification called Windmill. It uses a 1.5U height instead of the industry-standard 1U to allow bigger fans, more airflow, and taller heatsink fins. They doubled capacity by putting two 'nodes' in each chassis: these 'sushi trays' are modular servers in a hot-swappable form factor. The entire design and specification is fully open and documented. You can get the schematics, the BIOS source code, everything. They stripped all the branding and unnecessary vanity features from the design to reduce weight. These things are a dream of practical design. I'm in love.

The only complication is that you need to step up to 220-240 V for the PSU, but I've managed that (with an ELC step-up transformer, details below).

So anyway, Facebook provided this spec to two major OEMs, Quanta and Wiwynn, and contracted them to manufacture in massive quantities. There's still a huge brand-new surplus filling warehouses in San Jose. I hesitate to provide a link and promote a seller; I have no affiliation except that I buy from them myself. But get this: for under $1200 I have an 80-thread, quad-Xeon system drawing around 700 W, with half a terabyte of registered DDR3 and 10GbE. Even with engineering-sample Epycs, at current DDR4 prices you can't come close to matching that.
« Last Edit: January 03, 2022, 02:39:30 am by nth_degree »
 

Offline nth_degree (Topic starter)

  • Regular Contributor
  • *
  • Posts: 52
  • Country: us
Re: Open Compute servers are the best thing you never knew
« Reply #3 on: January 03, 2022, 01:10:24 am »
Ok, I'm kind of in a rush, so here's a crash course covering the parts of the build that are tricky to get past and the tips I picked up along the way. Again, I'm not affiliated with any of these sellers.

Sonitek is the one place to get the surplus servers; here is the golden link:
https://www.ebay.com/itm/391293435900

These come with the heatsinks and the PSU btw^, and they are brand new. The thermal compound was even preapplied at the factory using IBM's spacer technique, and it's still good. Use a #2 Phillips for the heatsinks.

You need to use *only* the Xeon E5-2680 v2 (four per chassis). 115 W TDP is the max, and the 2680 v2 is the highest-performing part these boards can take. You get all 10 cores per chip, so 40 threads per two-CPU node with 50 MB of combined L3 cache, running at 2.8 GHz with turbo disabled. They can still be had new on eBay, for example:

https://www.ebay.com/itm/115081632640
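
A quick sanity check once a node boots Linux, to confirm both CPUs and all 40 threads show up (standard tools, nothing OCP-specific):

lscpu | grep -E 'Model name|Socket|Thread|^CPU\(s\)'
nproc    # should report 40 on a fully populated two-CPU node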

Next you need some registered DDR3. Low-voltage DDR3L RDIMMs are fine. The best deals come from decommissioned Cisco equipment. I suggest the Samsung M393B2G70BH0, as I've used it in all my servers, and in larger batches it can be very cheap indeed. No link here; you'll have to go hunt.
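
Once the DIMMs are in, it's worth confirming they all enumerate at the expected size and speed. A minimal check under Linux, assuming dmidecode from the stock Ubuntu repos:

sudo apt-get install dmidecode
sudo dmidecode --type memory | grep -E 'Size|Speed|Part Number'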

Next, some complications:

1. If you're in the US, you need a 120 V to 220-240 V single-phase step-up transformer. This is the tricky part, but I have found the right model: the ELC T3000. Just follow my lead on that; there's a lot to it, so use only that model.

2. Get an HDPE sheet cut to these dimensions for a lid (I use TAP Plastics in the US):

HDPE Cutting Boards - White
1/2" (0.500") thick, 18-7/8" wide, 24" long, Color: White – 135

Attach it with two of these, only on the right side, with one oriented 90 degrees relative to the other: https://www.amazon.com/Sticky-Back-Strips-Adhesive-Fasteners/dp/B08SQT51KP

3. Get this, because you'll need to flash a BIOS image directly to the SOIC-16 flash chip under Windows: https://www.ebay.com/itm/294090202538

You have to use the Wiwynn v3 BIOS to support the 2680 v2; I include the .bin below. There's no way to flash v3 through software, but the SOIC sits in an easy-to-remove socket, so you can just clip it out and reflash it, taking care to get the pin 1 orientation right when you put it back. The whole ordeal takes about 2 minutes. (If you'd rather not touch Windows, see the flashrom sketch below.)
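
If the programmer you end up with turns out to be one of the common CH341A-based boards (I can't vouch for what that exact listing ships), flashrom on Linux can do the same job. The filenames below are placeholders; use whatever you named your backup and the v3 image:

sudo apt-get install flashrom
sudo flashrom -p ch341a_spi -r backup.bin       # read the original chip contents first and keep them safe
sudo flashrom -p ch341a_spi -w wiwynn_v3.bin    # write the v3 image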

4. I suggest reading up on the debug port in the Windmill spec, because if you set the BIOS options correctly you can get a serial console over 3.3 V TTL for headless installs.
Use the connector below and short pins 12+13. Then wire TTL 3.3 V TX to pin 10, RX to pin 9, and GND to pin 13 again. That's the whole cable; no VCC. I use an Adafruit FTDI cable because I'm on a Mac with USB-C and it needs no driver; just use screen as you would with a Cisco console cable (see the example after the links).

https://www.ebay.com/itm/181554395996
https://www.adafruit.com/product/4331
-or-
https://www.ebay.com/itm/124002417914
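
Connecting is then just screen at 115200 8N1. The device names below are examples only and vary by adapter and OS (macOS shows FTDI cables as /dev/tty.usbserial-something, Linux usually /dev/ttyUSB0):

ls /dev/tty.usbserial*                      # macOS: find the cable
screen /dev/tty.usbserial-XXXXXXXX 115200
sudo screen /dev/ttyUSB0 115200             # Linux equivalent
(exit screen with Ctrl-A then K)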

5. You can swap the PSU fan for even quieter operation (Noctua NF-A6x25 FLX; its red LED will flash because the fan is 3-pin and the socket is 4-pin PWM, but cutting the control line is deliberate. Soldering required here.) It already runs very quiet out of the box, as long as you disable turbo in the BIOS.

6. Want 10GbE without losing a PCIe slot? Get a CX341A from a US seller for about $20 per node and use DAC cables if possible. Remember to update the firmware; the procedure is at the bottom of this post.
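
You can check what firmware a card is already running before bothering with the flash. The interface name here is just an example; use ip link to find yours:

sudo ethtool -i enp10s0    # look at the firmware-version line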

7. Need 16 PCIe lanes? The system bifurcates its one x16 slot via the included daughterboard into two x8 3.0 slots, or even four x4 with your own hardware. But all you need to do is pull the daughterboard and install a gamer's extension cable and you have the full x16:

https://www.amazon.com/icepc-Extender-Shielding-Connector-Compatible/dp/B099DLKKQ6
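
After swapping in the extension cable it's worth confirming the card actually trained at x16. The bus address below is only an example; take the real one from plain lspci:

lspci                                                   # find your card's address
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'     # LnkSta should show Width x16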

8. The lazy will want a cheap Radeon card for quick installs. I use an old FirePro 2.0 card.

That's it. This has been one of my most fun projects ever; I hope someone else gets to have the experience. Oh, and these systems are all running Ubuntu 20.04 Server and tearing it up with RoCEv2. Good times!
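
If anyone wants to poke at the RDMA side themselves, the user-space verbs tools in the Ubuntu repos will show the ConnectX-3 as an RDMA device (it normally registers as mlx4_0, but check ibv_devices first):

sudo apt-get install rdma-core ibverbs-utils
ibv_devices                  # list RDMA-capable devices
ibv_devinfo -d mlx4_0        # port state, link layer, GIDs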

I also want to add, for anyone planning on following this guide: a 36x24 dual-layer rubber ESD mat, a wrist strap, and a ground plug aren't really optional. Do it right.

Happy New Year!

Everything you need is here https://drive.google.com/file/d/1wmFIWOGaYl4ocHXeNDNBLpze6o0UF9YP/view?usp=sharing

***

The BIOS is configured like this:

disable Intel Management Engine (especially for Ubuntu)

disable Xeon Turbo mode

enable auto ACPI config

SuperIO > Serial Port Console Redirection > Console Redirection Settings > SIO COM1 > 115200 bps

SuperIO > Serial Port Console Redirection > Console Redirection Settings > OOB > Terminal Type VT100, 115200 bps (see the GRUB note below for keeping the OS console on serial)

for Tesla GPUs only: enable PCIe above-4G decoding, and only after the Ubuntu install
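
If you want the installed Ubuntu to keep its console on that serial port too (not just the BIOS), the usual GRUB settings apply. This assumes SIO COM1 shows up as ttyS0, which you should confirm on your own node:

# in /etc/default/grub
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"

# then apply
sudo update-grub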

***

How to flash Mellanox NIC firmware

sudo apt-get install mstflint zip unzip

lspci | grep Mellanox

0a:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]

Identify the card by its label, e.g. CX341A-XCEN.
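
You can also ask the card directly what firmware and PSID it currently has, which is a handy cross-check against the image you're about to burn (same PCIe address as found above):

sudo mstflint -d 0a:00.0 query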

Go to the Mellanox Firmware Downloads page and find the download link for the specific NIC you need to update.

wget http://www.mellanox.com/downloads/firmware/ConnectX3-rel-2_31_1598-MCX341A-XCE_Ax-ConnectX-4099_3.4.151_EN_10_3_34_RELEASE_2_1_EN_0x1003.bin.zip

md5sum ConnectX3-rel-2_31_1598-MCX341A-XCE_Ax-ConnectX-4099_3.4.151_EN_10_3_34_RELEASE_2_1_EN_0x1003.bin.zip

*** Check the MD5 sum against the one listed on the site: 5da0b72a95e32d6fa4d0c641fc5dd8b3

unzip ConnectX3-rel-2_31_1598-MCX341A-XCE_Ax-ConnectX-4099_3.4.151_EN_10_3_34_RELEASE_2_1_EN_0x1003.bin.zip

sudo mstflint -d 0a:00.0 -i ConnectX3-rel-2_31_1598-MCX341A-XCE_Ax-ConnectX-4099_3.4.151_EN_10_3_34_RELEASE_2_1_EN_0x1003.bin burn
** Note that -d takes the PCIe address we found with lspci earlier; lspci alone doesn't give the full model number you need to pick the right firmware on the Mellanox site, which is why you check the card's label.

sudo reboot
« Last Edit: January 04, 2022, 09:26:12 am by nth_degree »
 
The following users thanked this post: hasithvm, duckduck

Online Monkeh

  • Super Contributor
  • ***
  • Posts: 8051
  • Country: gb
Re: Open Compute servers are the best thing you never knew
« Reply #4 on: January 03, 2022, 05:03:04 am »
"run totally silent"
"4x 115W TDP CPUs"

Does not compute.
 
The following users thanked this post: TomS_

Offline nth_degree (Topic starter)

  • Regular Contributor
  • *
  • Posts: 52
  • Country: us
Re: Open Compute servers are the best thing you never knew
« Reply #5 on: January 03, 2022, 05:33:57 am »
Because of the 1.5U form factor. In 1U the heatsinks have less surface area and the fans need to push more air; the 40 mm 1U fans you're probably familiar with are very loud. But a 60 mm fan at lower airflow can be near silent. I quite literally sleep with my head not far from a stack of these units. The noise level is something like a water-cooled desktop.
« Last Edit: January 03, 2022, 06:40:17 am by nth_degree »
 

