This sounds a lot like some distributed datacenter scenarios that James Hamilton, then of Microsoft, now of Amazon Web Services, explored 5+ years ago.
Potential upsides:
- Places computing closer to users, reducing latency.
- Utility pays for distribution transformers, instead of "datacenter" operator.
- "Low-grade" waste heat close to point of use.
- Geographic diversity.
Of course, there are downsides too, but batshit crazy? Why do you think so?
Networks are not really set up that way. Fetching data from someone's home will increase latency; your ping time to me will be orders of magnitude higher than to current datacenters.
Huh? Networks aren't set up to obey fundamental laws of physics? A significant part of the front-end latency on the Internet is the time it takes light to travel in an optical fiber.
Or are you talking about some property of the "last mile" networks serving residential customers? The extremities of networks are often passive, and communication between nodes on the same segment may require a round trip to something further upstream. Is that what you mean? That can add some latency, but I'm pretty sure the distance travelled is less than 10 miles.
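To put rough numbers on that: light in fiber travels at about two thirds of c, roughly 5 µs per km each way. A quick back-of-the-envelope sketch (the distances are just illustrative):

    # Back-of-the-envelope propagation delay in optical fiber.
    # Light in fiber travels at roughly 2/3 the speed of light in vacuum.
    C_VACUUM_KM_S = 300_000                 # km/s, approximate
    C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3    # ~200,000 km/s in fiber

    def round_trip_ms(distance_km):
        """Round-trip propagation delay in milliseconds over a fiber path."""
        return 2 * distance_km / C_FIBER_KM_S * 1000

    # Illustrative distances, not measurements of any real network:
    for label, km in [("cross-town (10 km)", 10),
                      ("regional datacenter (500 km)", 500),
                      ("cross-country (4000 km)", 4000)]:
        print(f"{label}: ~{round_trip_ms(km):.2f} ms RTT from propagation alone")

    # Even a full extra 10 miles (~16 km) of last-mile path adds only
    # about 0.16 ms of round-trip propagation delay.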
Or is this about the asymmetric upload/download bandwidth of many residential broadband connections? That's a real (but not insurmountable) issue, and it doesn't have a direct bearing on latency.
Or do you have some misunderstanding about how TCP/IP works? Connection setup requires round-trip communication, and the window for unacked packets is based on round-trip latency. It is true that datacenters go to lengths to minimize latency, and they generally have more options for network connectivity, but residential ISPs don't get a free pass on latency.
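As a rough illustration of why RTT matters to TCP, the classic single-connection ceiling is one window of unacked data per round trip, i.e. throughput ≈ window / RTT. The window size below is just an assumed default:

    # Rough TCP throughput ceiling: at most one window of unacknowledged
    # data can be in flight per round trip, so max_throughput ~ window / RTT.
    def max_throughput_mbps(window_bytes, rtt_ms):
        return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

    WINDOW = 64 * 1024    # 64 KiB, an assumed (pre-window-scaling) default

    for rtt_ms in (2, 20, 100):   # nearby datacenter vs. regional vs. far away
        print(f"RTT {rtt_ms:3d} ms -> ~{max_throughput_mbps(WINDOW, rtt_ms):.0f} Mb/s per connection")

    # Window scaling raises the ceiling, but the point stands: latency, not
    # just raw bandwidth, bounds what a single connection can do.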
Ultimately though, for such a scheme to work, it would need appropriate network design and management. This would have been even harder to dance around 5+ years ago, when residential broadband above 50 Mbps was quite rare.
Waste heat will be problematic in the summertime, or is that also taken into account?
Indeed. I can't find the paper, but as I recall, it was quite thorough, and I doubt it would have dodged this. There are a few ways I can think of to deal with it:
1) It could still have value for pre-heating shower/wash water in the summer.
2) It could be vented directly outside.
3) If active cooling were required, it would likely be a marginal capital and/or operating cost on top of existing residential cooling.
The other two points, "Utility pays for distribution transformers, instead of 'datacenter' operator" and "Geographic diversity", don't look like upsides to me for the same reasons: what to do about heat dissipation in the summertime, and what to do about peer-to-peer latency.
Not sure why you don't think of these as upsides. The utility paying for distribution transformers is a capital and operating cost reduction, and geographic diversity is generally desirable. I've already addressed the peer-to-peer latency, and suggested approaches to mitigating the issue of heat during the summer.
Also, one more thing not taken into account is upload limits (restricted by most providers, so you can't really run a server from your home).
You are assuming this would be deployed on residential broadband. I don't know what Qarnot's plan is, but certainly there are other options. I'm pretty sure I could have Comcast switch me over to a business connection. My price would go up, and peak speeds might drop some, but as I recall from the last time I looked, the Business Class service was more flexible about servers and bandwidth.
Interesting points. Stepping back a bit, the website is a bit vague. My assumption is that their plan is to set things up like #1, but it seems to me that #2 would be far more efficient:
1) The unit (Q.Rad) is an isolated CPU on my wall to which I have no direct access. I submit the job from my PC to a central server, which then distributes the workload amongst the remote units, maybe even the one in my house. In effect, the company is paying to use my home as a heatsink.
2) The unit (Q.Rad) serves as my primary CPU (i.e., I run my day-to-day computing on it). When not in use, I farm out the downtime to be used by the cloud service as a node in a distributed processing system. When I want to do some heavy lifting, my personal unit acts as the primary CPU, initiating and distributing the workload across available resources in the network.
To the quoted points above, it would seem that your first point is only valid if the organization is akin to #2. If it's like #1, the job would need to go from your PC to a central point before being distributed, as opposed to #2, where your unit would initiate, distribute, and manage the job on the network.
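If it helps, here's a minimal sketch of the difference in data flow between the two setups; all the names (QRadNode, CentralScheduler, and so on) are made up purely for illustration:

    # Hypothetical sketch of the two setups; every name here is made up.
    class QRadNode:
        """A Q.Rad unit that can run work shipped to it."""
        def run(self, task):
            return f"result of {task}"

    class CentralScheduler:
        """Setup #1: the job goes from the customer's PC to a central
        service, which fans the tasks out across remote units."""
        def __init__(self, nodes):
            self.nodes = nodes
        def submit(self, tasks):
            return [node.run(t) for node, t in zip(self.nodes, tasks)]

    class LocalInitiator(QRadNode):
        """Setup #2: your own unit is the primary CPU and also initiates,
        distributes, and collects the job across peer units directly."""
        def __init__(self, peers):
            self.peers = peers
        def submit(self, tasks):
            local, remote = tasks[0], tasks[1:]    # keep one task at home
            results = [self.run(local)]
            results += [p.run(t) for p, t in zip(self.peers, remote)]
            return results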
Perhaps you do a lot of computing that would benefit from Qarnot's offering, but my assumption was that while some of their customers might opt to install Qarnot nodes on premises, most of the people housing nodes wouldn't be computational customers.
Looking at their example workloads, most look like they tend towards the embarrassingly parallel end of the spectrum, which relaxes the requirements on node-to-node latency. Most also look like the ratio of computation to input and/or output data is relatively high, which reduces the time/cost of distributing the jobs and collecting the results.
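One way to see why that ratio matters: compare how long it takes to ship a task's data over a modest uplink with how long the task computes. All the numbers below are assumed, purely for illustration:

    # Rough check of when farming a task out pays off: time spent moving
    # inputs/outputs should be small next to compute time. Numbers assumed.
    def transfer_seconds(megabytes, mbps):
        return megabytes * 8 / mbps

    uplink_mbps = 20                       # assumed modest uplink

    # A render-style task: 50 MB of assets, 20 minutes of compute.
    xfer = transfer_seconds(50, uplink_mbps)
    print(f"transfer ~{xfer:.0f} s vs compute 1200 s -> distribution overhead is noise")

    # A data-heavy task: 2 GB of input for 30 s of compute.
    xfer = transfer_seconds(2000, uplink_mbps)
    print(f"transfer ~{xfer:.0f} s vs compute 30 s -> not worth shipping out")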
My point in the original post was to say that the "waste heat" is of nil value as far as warming your home. Assume the unit puts out 500 watts of heat: that's equivalent to your average desk heater, and I'd bet that's a pretty generous heat output estimate. So the company is effectively picking up the equipment and electric cost so you can cool the unit (after all, that's the money pit in server farms).
So the equipment and power are fixed costs: whether they put the servers in a big building or your home, they have to buy them. They're banking on the fact that you like to keep your home at a comfortable 70-80F and will foot their cooling bill. That's my rationale for calling it malarkey.
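To put rough numbers on that 500 watts (the electricity price below is assumed):

    # Back-of-the-envelope value of 500 W of waste heat; prices are assumed.
    POWER_KW = 0.5                  # the 500 W figure from above
    HOURS_PER_MONTH = 24 * 30
    ELEC_PRICE_PER_KWH = 0.12       # assumed residential price, $/kWh

    kwh = POWER_KW * HOURS_PER_MONTH
    print(f"{kwh:.0f} kWh/month of heat delivered into the home")
    print(f"~${kwh * ELEC_PRICE_PER_KWH:.0f}/month of resistive heating, "
          f"with someone else paying the electric bill")
    # Whether that reads as 'free heat' or as 'extra cooling load' depends
    # on whether it's heating season or cooling season where you live.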
Some people pay for heat. Some people pay for cooling. Some do both, and some neither. Perhaps free heat doesn't appeal to you, but does it seem that unlikely that there are a significant number of people who would love free heat? As for the cooling bill, they may be asking people to foot it in exchange for free heat, but I wouldn't think that paying people for added cooling load would wreck an otherwise viable business model.
You reduce this to equipment and power costs, which only makes sense if you believe that the cost of buying land, constructing a datacenter, etc. is trivial. It's not. Probably just as important, the cost of a datacenter starts accruing before it's built, and continues whether or not it is filled to 100% capacity. Even if they have to pay for housing the servers with more than just waste heat, that expense should scale with the business, rather than requiring a large initial outlay.
I think, in the short term, the question is whether they can get enough paying customers to have sufficient scale. In the long term, I think it's whether they can get enough units deployed, in areas with good enough network infrastructure, that they can handle a wider range of workloads.