
The BIG EEVblog Server Fire


Syntax Error:
@eevblog : Welcome back :)


--- Quote from: james_s on April 08, 2021, 05:33:27 pm ---The pictures Dave posted look like a random light industrial park, the sort of place my friend's machine shop is in; there's even a sports bar & grill in one of the units. Looks like the genset has to be in the same space; there isn't anywhere else to put it.

--- End quote ---

I agree. I was wondering how water ended up on the racks. And there I was thinking the backup genny was in a shipping container some 100 feet from the main building, just in case it caught fire. Nah, that's extra rental space.

Maybe not odd in Utah, but certainly odd-looking in the UK: the power control boxes, transformer and what I assume is the diesel fuelling point are not caged in collision-resistant fencing - or even behind a crash barrier. All it would take is a truck driver choking on a hotdog from the grill... and back to square one.

Moral of the story from both WebNX and OVH: backup power systems are highly flammable! Just let the power go off and rebuild the filesystems, not the entire data center.

hamster_nz:

--- Quote from: ve7xen on April 09, 2021, 07:57:18 am ---Even so, the wheels win when it takes longer to transfer the data than it would to drive the media. At 100G you can transfer ~45TB/hr. That's only 4 or 5 hard drives or LTO-8 tapes, so it's not hard for the car to win. If you fill a typical station wagon / small SUV (~2000L cargo area) with LTO-8 tapes (~275 cm^3 each), you can fit about 7,500 tapes (x 12TB native, no compression here) for 90PB. At 45TB/hr on your 100Gbps link that would take 2000 hours, during which you should be able to drive/sail anywhere on the planet. Of course it's usually much more practical to transfer it; copying the data to/from the media becomes a significant time sink itself, but that may or may not matter.

Fibre can only reasonably win this race when the data volume is relatively small; even when you start talking about 400G systems or multiple links, what you can transfer in an hour still fits in a suitcase.

--- End quote ---

LOL Writing 7,500 LTO-8 tapes and reading them back in under 2,000 hours.  :-DD

Sure, the channel bandwidth of an SUV is high, but you can't transmit or receive the data at anything like that rate.

Each LTO-8 tape can take 8 hours to write (source: https://en.wikipedia.org/wiki/Linear_Tape-Open), and I'm guessing about the same to read back; that's around 15 hours to move 30TB (a tape's compressed capacity) - about 2TB per hour per drive pair (about 5Gb/s), even just moving data across the room.

To fill an SUV with tapes using a single drive will take about 12.8 years, and maybe another 12.8 years to read it all back.

I'll take the 2000 hours (85 days or so) using a 100Gb fibre...
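
For anyone who wants to poke at the numbers, here's the arithmetic as a quick Python sketch (ballpark figures from this thread only, using the pessimistic ~15 hours per tape per direction on a single drive):

--- Code: ---
# Back-of-the-envelope: SUV full of LTO-8 tapes vs a 100 Gb/s fibre.
# Ballpark assumptions from this thread - not measured figures.
TAPES = 7500                # what fits in ~2000 L of cargo space
TB_PER_TAPE = 12            # LTO-8 native capacity, no compression
HOURS_PER_PASS = 15         # write (or read) one tape, single drive
FIBRE_TB_PER_HOUR = 45      # ~100 Gb/s sustained

total_tb = TAPES * TB_PER_TAPE                  # 90,000 TB = 90 PB
fibre_hours = total_tb / FIBRE_TB_PER_HOUR      # 2,000 h (~83 days)
tape_years = TAPES * HOURS_PER_PASS / 8766      # ~12.8 years each way

print(f"fibre: {fibre_hours:,.0f} h ({fibre_hours / 24:.0f} days)")
print(f"tapes: {tape_years:.1f} years to write, same again to read")
--- End code ---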



Ultrapurple:
Whilst it's been fun discussing the merits and demerits of high-capacity microSD cards vs terabit fibre vs No 8 wire, I think we have lost sight of an important point.

Dave provides the world - us - with a fantastic meeting place to discuss our ideas, and he does it without any cost to us. I salute him, and also all those who work with him to make this wonderful place happen.

Thank you Dave.

madires:

--- Quote from: hamster_nz on April 09, 2021, 09:19:10 am ---To fill an SUV with tapes using a single drive will take about 12.8 years, and maybe another 12.8 years to read it all back.

I'll take the 2000 hours (85 days or so) using a 100Gb fibre...

--- End quote ---

Exactly! It's the same fallacy as moving a SAN full of backups from one data center to another to restore servers. The SAN has a limited read/write throughput, say 100 Gbps, so you'd get a 100 Gbps link between the two data centers to be able to back up at full throughput. The same is true in the other direction, i.e. restoring servers. Moving the SAN would add transport delay plus the risk of a traffic accident and possibly the complete loss of the backups, but it wouldn't speed up the restore process because its read/write throughput is still 100 Gbps.
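
To put hypothetical numbers on that (say 1 PB of backups and a four-hour drive between sites), a quick sketch:

--- Code: ---
# Restore time is bounded by the SAN's own read/write throughput,
# so trucking the SAN across town doesn't speed anything up.
# Hypothetical figures: 1 PB of backups, ~45 TB/h (100 Gb/s) SAN,
# 4 h of transport between the data centers.
BACKUP_TB = 1000
SAN_TB_PER_HOUR = 45
TRANSPORT_HOURS = 4

restore_over_link = BACKUP_TB / SAN_TB_PER_HOUR           # ~22 h
restore_after_move = TRANSPORT_HOURS + restore_over_link  # ~26 h

print(f"restore over 100 Gb/s link:  {restore_over_link:.0f} h")
print(f"truck the SAN, then restore: {restore_after_move:.0f} h "
      f"(plus accident risk)")
--- End code ---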

Ian.M:
That's not what Amazon claim for their AWS Snowmobile, a 100PB data transfer store in a 45' shipping container. They claim up to 1 Tb/s aggregated over multiple 40Gb/s interfaces. See their FAQ for details: https://aws.amazon.com/snowmobile/faqs/
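
Taking the FAQ's figures at face value, the arithmetic works out like this:

--- Code: ---
# AWS Snowmobile, per the FAQ: 100 PB of capacity, up to 1 Tb/s
# aggregated over multiple 40 Gb/s interfaces.
CAPACITY_TB = 100_000       # 100 PB expressed in terabytes
AGGREGATE_TBIT_S = 1.0      # claimed aggregate throughput

links_40g = AGGREGATE_TBIT_S * 1000 / 40           # 25 x 40 Gb/s links
fill_seconds = CAPACITY_TB * 8 / AGGREGATE_TBIT_S  # terabits / (Tb/s)

print(f"links: {links_40g:.0f} x 40 Gb/s")
print(f"fill time: {fill_seconds / 86400:.1f} days")  # ~9.3 days
--- End code ---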
