
The BIG EEVblog Server Fire


Algoma:
https://en.m.wikipedia.org/wiki/Terabit_Ethernet

112 Gb/s SerDes in a single channel... I can only imagine the rate at which that signal is being switched on and off, let alone trying to sample such a frequency.
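For a rough sense of scale, assuming the lane uses PAM4 signalling (2 bits per symbol, typical for 112G SerDes but an assumption here, not something stated in the article), the symbol rate and analogue bandwidth work out like this:

--- Code: ---
# Back-of-the-envelope for a 112 Gb/s SerDes lane, assuming PAM4
# signalling (2 bits/symbol) -- an assumption, not from the article.
bit_rate = 112e9                             # bits per second
bits_per_symbol = 2                          # PAM4 carries 2 bits/symbol
symbol_rate = bit_rate / bits_per_symbol     # 56 GBd
nyquist_bw = symbol_rate / 2                 # ~28 GHz fundamental
print(f"Symbol rate: {symbol_rate / 1e9:.0f} GBd")
print(f"Nyquist bandwidth: {nyquist_bw / 1e9:.0f} GHz")
--- End code ---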

madires:

--- Quote from: Ian.M on April 09, 2021, 12:01:34 pm ---That's not what Amazon claim for their AWS Snowmobile 100 PB data-transfer storage in a 45' shipping container. They claim up to 1 Tb/s aggregated over multiple 40 Gb/s interfaces. See their FAQ for details: https://aws.amazon.com/snowmobile/faqs/

--- End quote ---

Sorry, but I don't get your point. :-//

SilverSolder:

--- Quote from: blueskull on April 09, 2021, 08:36:27 am ---[...]  we had to patch on an old stupid design with the oldest technology. [...]


--- End quote ---

Some old technology was built really well, has stood the test of time, and has proven reliable and inexpensive.  (Incandescent light bulbs?)

The problem is how to identify which of today's technologies will earn a good reputation and respect over the next several decades!

bd139:

--- Quote from: SilverSolder on April 09, 2021, 12:29:59 pm ---The problem is how to identify which of today's technologies will earn a good reputation and respect over the next several decades!

--- End quote ---

That's fairly easy: look out for stuff at conferences, then avoid the shit out of it. You want the stale, unsexy things everyone takes for granted.

My favourite tools this week are SQLite and Python. Both ancient in the scale of things :)


--- Quote from: ve7xen on April 09, 2021, 07:57:18 am ---100G at 80 km is trivial with off-the-shelf gear these days, and I don't imagine many datacentre interconnects are only 10G.

Even so, the wheels win when it takes longer to transfer the data than it would to drive the media. At 100G you can transfer ~45 TB/hr. That's only 4 or 5 hard drives or LTO-8 tapes, so it's not hard for the car to win. If you fill a typical station wagon or small SUV (~2000 L cargo area) with LTO-8 tapes (~275 cm^3 each), you can fit about 7,500 tapes (× 12 TB native, no compression here) for 90 PB. At 45 TB/hr on your 100 Gb/s link that would take 2,000 hours, during which you should be able to drive/sail anywhere on the planet. Of course it's usually much more practical to transfer it over the network; copying the data to/from the media becomes a significant time sink itself, but that may or may not matter.

Fibre can only reasonably win this race when the data volume is relatively small. Even when you start talking about 400G systems or multiple links, what you can transfer in an hour still fits in a suitcase.

--- End quote ---
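The arithmetic in that quote is easy to sanity-check; here's a quick sketch using ve7xen's own assumptions (cargo volume, tape size, and capacity as quoted above):

--- Code: ---
# Sanity check on the tapes-vs-fibre numbers quoted above.
cargo_l = 2000                      # station wagon / small SUV cargo, litres
tape_cm3 = 275                      # approx. LTO-8 cartridge volume, cm^3
tape_tb = 12                        # LTO-8 native capacity, TB
link_gbps = 100                     # link speed, Gb/s

tapes = cargo_l * 1000 // tape_cm3              # ~7,300 tapes
payload_pb = tapes * tape_tb / 1000             # ~87 PB
tb_per_hr = link_gbps / 8 * 3600 / 1000         # 45 TB/hr at 100G
hours = payload_pb * 1000 / tb_per_hr           # ~1,900 hours, ~81 days

print(f"{tapes} tapes = {payload_pb:.0f} PB, {hours:.0f} h over the wire")
--- End code ---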

Actually, a surprising chunk of peering is only at 10G per link. The ISP I use has only got 320G aggregate across all links, which isn't a lot in the scale of things, and it only averages 100-200G. That has tens of thousands of leechers on it.

As for transit, as mentioned, AWS Snowball/Snowmobile-type solutions are best for moving stuff around in large chunks. You can get 100G into one of them without having to dig up any roads.

But it's better to boil the frog slowly. When I migrated 130 TB over to S3 a couple of years back, we built a service abstraction over the SAN and S3 so it used S3 as a read-write-through cache for the SAN. This allowed us to sling everything up to S3 over a dedicated Direct Connect link over the space of a few months, without introducing any link-capacity problems or having to do any nasty switchovers.
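A minimal sketch of what that kind of read/write-through layer can look like (the class and the SAN interface are hypothetical stand-ins, not the actual service; only the boto3 calls are real):

--- Code: ---
# Minimal sketch of a read/write-through layer in front of a SAN,
# using S3 as the cache. Names here are hypothetical.
import boto3

class CachingStore:
    def __init__(self, bucket, san):
        self.s3 = boto3.client("s3")
        self.bucket = bucket
        self.san = san            # any object with read(key)/write(key, data)

    def read(self, key):
        try:                      # serve from S3 if already migrated
            obj = self.s3.get_object(Bucket=self.bucket, Key=key)
            return obj["Body"].read()
        except self.s3.exceptions.NoSuchKey:
            data = self.san.read(key)          # fall back to the SAN
            self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)
            return data                        # ...and warm the cache

    def write(self, key, data):
        # write-through: new data lands in both places, so the SAN
        # can be retired once the background copy catches up
        self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)
        self.san.write(key, data)
--- End code ---

The nice property is that cold data migrates itself on first read and new writes land in both places, so there's never a hard cut-over.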

schmitt trigger:
This incident clearly shows how dependent we have become on technology in general, and the WWW in particular.

As such I am sure, well, almost sure, that Dave will release a video about the incident once the investigation has been completed.
It will be an extremely interesting episode.
