Dave scares gnif.
Sounds normal.
How many miles apart do your data centres have to be before it's quicker to send the data via wires than wheels?
With a stupid number of media devices (e.g. tape drives) at either end the answer is:
10 km / 6 miles
That is about the limit for single-mode fibre running at 10G Ethernet....
There is no easy solution to server backup, because you never know what common vulnerabilities there are.
And how much do you want to pay? EEVBLOG probably doesn't make millions
A fully redundant solution isn't cheap.
I run a few sites on virtual servers, with various backup policies (which I won't describe openly, for obvious reasons). If the virtual server company blew up and vanished forever, I could start up a backup server: a media PC running on an FTTP (80/30 Mbps) line.
That would actually be fast enough for EEVBLOG, on a bad day. Times have changed; this would not have been possible 10 years ago, and the bandwidth required for EEVBLOG is probably of the order of 100-500 GB/month, which is nothing. And the whole real server is rsynced (only changed files copied) to this media PC every night.
There are other backups of course because you cannot mirror the whole server, due to many open files etc. I would merely need to manually edit the DNS panel which is hosted by a company different to the server company. So the worst case is losing a day's data. It is practically impossible to lose everything, in this setup, and it is very cheap.
How many miles apart do your data centres have to be before it's quicker to send the data via wires than wheels?
With a stupid number of media devices (e.g. tape drives) at either end the answer is:
10 km / 6 miles
That is about the limit for single-mode fibre running at 10G Ethernet....
100G at 80km is trivial with off the shelf gear these days, and I don't imagine many datacentre interconnects are only 10G.
Even so, the wheels win when it takes longer to transfer the data than it would to drive the media. At 100G you can transfer ~45 TB/hr. That's only 4 or 5 hard drives or LTO-8 tapes, so it's not hard for the car to win. If you fill a typical station wagon or small SUV (~2000 L cargo area) with LTO-8 tapes (~275 cm^3 each), you can fit about 7,500 tapes (12 TB native each, no compression here) for 90 PB. At 45 TB/hr on your 100 Gbps link that would take 2000 hours, during which you should be able to drive/sail anywhere on the planet. Of course it's usually much more practical to transfer it; copying the data to/from the media becomes a significant time sink itself, but that may or may not matter.
Fibre can only reasonably win this race when the data volume is relatively small, even when you start talking about 400G systems or multiple links, what you can transfer in an hour still fits in a suitcase.
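The arithmetic in the post above is easy to check. A minimal back-of-envelope sketch, using the poster's assumed figures (2000 L cargo area, ~275 cm^3 and 12 TB native per LTO-8 cartridge, a 100 Gb/s link):

```python
# Sanity check of the station-wagon-vs-fibre numbers above.
# All figures are the post's assumptions, not measured values.

CARGO_CM3 = 2000 * 1000              # 2000 litres of cargo space, in cm^3
TAPE_CM3 = 275                       # approx. volume of one LTO-8 cartridge
TAPE_TB = 12                         # LTO-8 native (uncompressed) capacity

tapes = CARGO_CM3 // TAPE_CM3        # how many cartridges fit: ~7,300
payload_tb = tapes * TAPE_TB         # total payload, ~87 PB (post rounds to 90)

link_gbps = 100
tb_per_hour = link_gbps * 3600 / 8 / 1000   # 100 Gb/s = 45 TB per hour

hours = payload_tb / tb_per_hour     # ~1,900 h, matching the post's ~2000 h
print(f"{tapes} tapes, {payload_tb / 1000:.0f} PB, {hours:.0f} h over fibre")
```

The exact tape count comes out slightly under the post's rounded 7,500, but the conclusion is the same: the link needs months to move what the car carries in one trip.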
@eevblog : Welcome back
The pictures Dave posted look like a random light industrial park of the sort of place my friend's machine shop is in, there's even a sports bar & grill in one of the units. Looks like the genset has to be in the same space, there isn't anywhere else to put it.
I agree. I was wondering how water ended up on the racks. And there I was thinking the backup genny was in a shipping container some 100 feet from the main building, just in case it caught fire. Nah, that is extra rental space.
Maybe not odd in Utah, but certainly odd looking in the UK: the power control boxes, transformer, and what I assume is the diesel fuelling point are not caged in collision-resistant fencing, or even behind a crash barrier. All it would take is a truck driver choking on a hotdog from the grill... and back to square one.
Moral of the story from both WebNX and OVH is: backup power systems are highly flammable! Just let the power go off and rebuild the filesystems, not the entire data center.
Even so, the wheels win when it takes longer to transfer the data than it would to drive the media. At 100G you can transfer ~45 TB/hr. That's only 4 or 5 hard drives or LTO-8 tapes, so it's not hard for the car to win. If you fill a typical station wagon or small SUV (~2000 L cargo area) with LTO-8 tapes (~275 cm^3 each), you can fit about 7,500 tapes (12 TB native each, no compression here) for 90 PB. At 45 TB/hr on your 100 Gbps link that would take 2000 hours, during which you should be able to drive/sail anywhere on the planet. Of course it's usually much more practical to transfer it; copying the data to/from the media becomes a significant time sink itself, but that may or may not matter.
Fibre can only reasonably win this race when the data volume is relatively small, even when you start talking about 400G systems or multiple links, what you can transfer in an hour still fits in a suitcase.
LOL Writing 7,500 LTO-8 tapes and reading them back in under 2,000 hours.
Sure, the channel bandwidth of an SUV is high, but you can't transmit or receive the data at anything like that rate.
Each LTO-8 tape can take 8 hours to write (source: https://en.wikipedia.org/wiki/Linear_Tape-Open), and I'm guessing about the same to read back; that is around 15 hours to move 30 TB, about 2 TB per hour per drive pair (about 5 Gb/s), even just moving data across the room.
To fill an SUV with tapes using a single drive will take about 12.8 years, and maybe another 12.8 years to read it back.
I'll take the 2000 hours (85 days or so) using a 100Gb fibre...
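The drive-bottleneck argument above can be sketched in a few lines, reusing the post's figures (7,500 tapes, ~15 hours to write and read back 30 TB compressed per cartridge with one drive pair):

```python
# Sanity check of the single-tape-drive bottleneck described above.
# Figures are the post's assumptions (LTO-8, compressed capacity).

TAPES = 7500
HOURS_PER_TAPE = 15                  # ~8 h write + ~7-8 h read back
TB_PER_TAPE = 30                     # LTO-8 compressed capacity

total_hours = TAPES * HOURS_PER_TAPE             # 112,500 h
years = total_hours / 8760                       # ~12.8 years

tb_per_hour = TB_PER_TAPE / HOURS_PER_TAPE       # 2 TB/h per drive pair
gbps = tb_per_hour * 1000 * 8 / 3600             # ~4.4 Gb/s sustained
print(f"{years:.1f} years end to end, ~{gbps:.1f} Gb/s per drive pair")
```

So a single drive pair is effectively a ~5 Gb/s link; the SUV's enormous "bandwidth" only materialises if you parallelise across many drives at both ends.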
Whilst it's been fun discussing the merits and demerits of high-capacity microSD cards vs terabit fibre vs No 8 wire, I think we have lost sight of an important point.
Dave provides the world - us - with a fantastic meeting place to discuss our ideas, and he does it without any cost to us. I salute him, and also all those who work with him to make this wonderful place happen.
Thank you Dave.
To fill an SUV with tapes using a single drive will take about 12.8 years, and maybe another 12.8 years to read it back.
I'll take the 2000 hours (85 days or so) using a 100Gb fibre...
Exactly! It's the same fallacy when moving a SAN with backups from one data center to another to restore servers. The SAN has a limited read/write throughput, let's say 100 Gbps. So you would get a 100 Gbps link between both data centers to be able to back up at full throughput. The same is true for the other direction, i.e. restoring servers. Moving the SAN would add the delay of the transport, plus the risk of a traffic accident and possibly the complete loss of the backups. But it wouldn't speed up the restore process, because its read/write throughput is still 100 Gbps.
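The point above is that when the storage device itself is the bottleneck, trucking it can only add time. A tiny illustration with hypothetical figures (1 PB of backups, a 100 Gb/s SAN ceiling, a 6-hour drive):

```python
# Illustration of the SAN-bottleneck point above. If the SAN can only
# sustain 100 Gb/s, restoring takes the same time whether the data
# arrives over fibre or on a truck; the truck just adds drive time.
# All numbers here are hypothetical.

data_pb = 1                          # backup size, in petabytes
san_gbps = 100                       # SAN read/write ceiling

# 1 PB = 1000 TB = 8,000,000 Gb; divide by link rate, convert s -> h.
restore_hours = data_pb * 1000 * 8000 / san_gbps / 3600   # ~22 h

drive_hours = 6                      # hypothetical one-way drive
over_fibre = restore_hours           # matched 100 Gb/s link
by_truck = drive_hours + restore_hours   # strictly worse here
```

The comparison only flips when the link is slower than the storage, which is the scenario the earlier station-wagon posts were describing.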
That's not what Amazon claim for their AWS Snowmobile 100 PB data transfer storage in a 45-foot shipping container. They claim up to 1 Tb/s aggregated over multiple 40 Gb/s interfaces. See their FAQ for details:
https://aws.amazon.com/snowmobile/faqs/
That's not what Amazon claim for their AWS Snowmobile 100 PB data transfer storage in a 45-foot shipping container. They claim up to 1 Tb/s aggregated over multiple 40 Gb/s interfaces. See their FAQ for details: https://aws.amazon.com/snowmobile/faqs/
Sorry, but I don't get your point.
[...] we had to patch on an old stupid design with the oldest technology. [...]
Some old technology was built really well, and has stood the test of time and proven reliable and inexpensive. (Incandescent light bulbs?)
The problem is - how to identify what technology of today, that will gain a good reputation and respect over the next several decades!
The problem is - how to identify what technology of today, that will gain a good reputation and respect over the next several decades!
That's fairly easy. Look out for stuff at conferences, then avoid the shit out of it. You want the stale unsexy things everyone takes for granted.
My favourite tools this week are SQLite and Python. Both ancient in the scale of things
100G at 80km is trivial with off the shelf gear these days, and I don't imagine many datacentre interconnects are only 10G.
Even so, the wheels win when it takes longer to transfer the data than it would to drive the media. At 100G you can transfer ~45 TB/hr. That's only 4 or 5 hard drives or LTO-8 tapes, so it's not hard for the car to win. If you fill a typical station wagon or small SUV (~2000 L cargo area) with LTO-8 tapes (~275 cm^3 each), you can fit about 7,500 tapes (12 TB native each, no compression here) for 90 PB. At 45 TB/hr on your 100 Gbps link that would take 2000 hours, during which you should be able to drive/sail anywhere on the planet. Of course it's usually much more practical to transfer it; copying the data to/from the media becomes a significant time sink itself, but that may or may not matter.
Fibre can only reasonably win this race when the data volume is relatively small, even when you start talking about 400G systems or multiple links, what you can transfer in an hour still fits in a suitcase.
Actually a surprising chunk of peering is only at 10G per link. The ISP I use has only got 320G aggregate across all links which isn't a lot in the scale of things and it only averages 100-200G. That has tens of thousands of leechers on it.
As for transit, as mentioned AWS snowball/snowmobile type solutions are best for moving stuff around in large chunks. You can get 100G into one of them without having to dig up any roads.
But better to boil the frog slowly. When I migrated 130TB over to S3 a couple of years back, we built a service abstraction over the SAN and S3 so it used S3 as read-write-through cache for the SAN. This allowed us to sling all the stuff up over a dedicated DirectConnect up to S3 over the space of a few months without introducing any link capacity problems or having to do any nasty switch overs.
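The migration pattern described above (front the old SAN with the new store, write new data to the new store, and lazily promote reads) can be sketched in a few lines. This is a hypothetical stand-in using plain dicts for both backends, not the poster's actual service or a real S3 client:

```python
# Minimal sketch of the read-write-through migration pattern above.
# Both backends are plain dicts standing in for S3 and the legacy SAN.

class MigratingStore:
    def __init__(self, new_store, old_store):
        self.new = new_store         # the target store (S3 in the post)
        self.old = old_store         # the legacy store (the SAN)

    def put(self, key, blob):
        # Write-through: all new writes land only in the new store.
        self.new[key] = blob

    def get(self, key):
        # Read from the new store first; on a miss, fall back to the
        # old store and promote the object, so data migrates over time.
        if key in self.new:
            return self.new[key]
        blob = self.old[key]
        self.new[key] = blob         # lazy promotion to the new store
        return blob

san, s3 = {"a": b"legacy"}, {}
store = MigratingStore(s3, san)
store.put("b", b"fresh")                           # new writes go to s3
assert store.get("a") == b"legacy" and "a" in s3   # old data promoted on read
```

The nice property is that there is no flag day: once the background sweep (or enough reads) has promoted everything, the old store can simply be retired.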
This incident clearly shows how dependent we have become on technology in general, and the WWW in particular.
As such I am sure, well almost sure, that Dave will release a video of the incident once the investigation has been completed.
It will be an extremely interesting episode.
A backup generator's keyword is 'backup', isn't it? Some robustness is supposed to be embedded in it from the get-go, starting from the specs.
On the other hand outside of some motorsports applications, where spectacular failure is just part of the game, I can't think of harder duty for an IC engine than standby generator service. It's always either sitting around or put on a heavy load from a cold start.
I once met a guy who rebuilt big diesel engines and asked where most of them came from; the answer was mostly generators. I'm not sure how much was mandatory maintenance and how much was failures, but either way they live quite a hard life.
Maybe it is time to move to a country with fewer power interruptions.
I'm in Missouri, current uptime is 177 days, no UPS. I run my web server out of my house.
Jon
Whilst it's been fun discussing the merits and demerits of high-capacity microSD cards vs terabit fibre vs No 8 wire, I think we have lost sight of an important point.
Dave provides the world - us - with a fantastic meeting place to discuss our ideas, and he does it without any cost to us. I salute him, and also all those who work with him to make this wonderful place happen.
Thank you Dave.
Yes, INDEED! Thanks, Dave, and all the hard workers at webNX and GorillaServers, and glad to see eevblog back up!
Jon
As someone who has lived in both North Carolina (the worst-ranked place in those graphs) and Switzerland (the best-ranked in those graphs), anecdotally, my experience completely agrees with that! (When I moved from USA to Switzerland the second time, I didn’t even bother buying a UPS, since the power here never goes out. Good enough for my home computing.)
I suspect averages like that are misleading; it really depends on where you live. In my location, for example, the downtown urban areas may go years without a single power interruption, while the outlying rural areas may lose power several times a month over the winter, and the average of two extremes just a few miles apart is not a very useful number. If you were to look at my whole state, power outages are probably common; however, at my house they have generally been rare, this last winter being an unusual exception where I had 3 significant outages. In downtown Seattle, which is only about 12 miles from here, I have never seen a power outage other than a very localized one due to something like a transformer failure that knocks out a specific building.
To fill an SUV with tapes using a single drive will take about 12.8 years, and maybe another 12.8 years to read it back.
I'll take the 2000 hours (85 days or so) using a 100Gb fibre...
LOL yeah, of course, it's completely impractical, but really it's the same equation regardless of media: the practical limit is going to be how quickly you can read/write the media. You can swap the LTOs for hard drives (though probably not as many, due to weight) if getting the data off the media faster is important; maybe you just plug them into a chassis on the other end that can use them directly, or even truck the entire chassis of storage in whatever form it's used in production. If you can provision the practical storage bandwidth on the fibre, then it's a wash; otherwise shipping is going to win (though cost will likely push you to shipping well before you hit the practical limit). The bandwidth of the proverbial station wagon is still immense, even taking practical considerations into account.
Actually a surprising chunk of peering is only at 10G per link. The ISP I use has only got 320G aggregate across all links which isn't a lot in the scale of things and it only averages 100-200G. That has tens of thousands of leechers on it.
We're not talking about the Internet, we're talking about private datacentre interconnect. You're certainly not going to run your PB storage migration over a 10 Gb peering link and the Internet. I can't imagine any service provider running lots of uncoloured 10G over dark fibre these days; it's too costly. For your 10G peerings it's either a leased wavelength or, more usually, just a patch cable within the data centre between cheap ports, because you don't need more to that peer (with several peers / transit at the POP, and likely Nx100G to your core). As an SP, you're either leasing a wave on someone else's OTN or running your own WDM system that can carry at least 100 Gb per pair ('cheap' bog-standard systems do 40x10G; state-of-the-art off-the-shelf systems do 12x100G or more). Of course there are small SPs that only ever lease 10G or even 1G waves/EPLs, but I thought we were discussing the capacity of the fibre, not what a small business actually buys on it.
But better to boil the frog slowly. When I migrated 130TB over to S3 a couple of years back, we built a service abstraction over the SAN and S3 so it used S3 as read-write-through cache for the SAN. This allowed us to sling all the stuff up over a dedicated DirectConnect up to S3 over the space of a few months without introducing any link capacity problems or having to do any nasty switch overs.
Clever solution, I like it!
Whilst it's been fun discussing the merits and demerits of high-capacity microSD cards vs terabit fibre vs No 8 wire, I think we have lost sight of an important point.
Dave provides the world - us - with a fantastic meeting place to discuss our ideas, and he does it without any cost to us. I salute him, and also all those who work with him to make this wonderful place happen.
Thank you Dave.
+1000! I actually had to get work done this week
As someone who has lived in both North Carolina (the worst-ranked place in those graphs) and Switzerland (the best-ranked in those graphs), anecdotally, my experience completely agrees with that! (When I moved from USA to Switzerland the second time, I didn’t even bother buying a UPS, since the power here never goes out. Good enough for my home computing.)
I suspect averages like that are misleading; it really depends on where you live. In my location, for example, the downtown urban areas may go years without a single power interruption, while the outlying rural areas may lose power several times a month over the winter, and the average of two extremes just a few miles apart is not a very useful number. If you were to look at my whole state, power outages are probably common; however, at my house they have generally been rare, this last winter being an unusual exception where I had 3 significant outages. In downtown Seattle, which is only about 12 miles from here, I have never seen a power outage other than a very localized one due to something like a transformer failure that knocks out a specific building.
The main issue in North America is the "pioneering spirit" electrical system where wires are strung up among the trees in rural / suburban areas... what could possibly go wrong?
Quite a crazy turn of events. Goes to show, though, that even professional commercial setups can and do fail. You always have to be prepared and have offsite backups. In this case they were not needed, but things could have been worse.
Wow, EEVblog has a history of attracting water!
Wow, EEVblog has a history of attracting water!
It's from all the dissing of those poor, harmless God botherers.
Having a gripe about the Hillsong peeps in the solar upgrade vid during their Easter festivus... well, karma is a bitch.
Thou shalt be baptised.
Praise be. And God bless you, Dave.
Whilst it's been fun discussing the merits and demerits of high-capacity microSD cards vs terabit fibre vs No 8 wire
You don't seem to have been here long, every thread descends into discussion of that type.
If you are trying to change that behaviour, you will be as effective as King Canute.