Author Topic: Heat your home with The Cloud  (Read 11803 times)


Offline gilbenlTopic starter

  • Regular Contributor
  • *
  • Posts: 170
  • Country: us
Heat your home with The Cloud
« on: August 15, 2015, 07:29:42 pm »
My math says this is batsh*t crazy, but I wanted to see what y'all think.

To overcome the issue of heat generation in cloud-based computing, Qarnot Computing (http://www.qarnot-computing.com/technology) has devised a scheme in which processing units called "Q.Rads" are installed in people's homes. Processing jobs are sent to the units, which in due course produce and emit heat into your home.

www.calculator.net/btu-calculator.html
According to the above calculator, you'd need a 2900-watt heater to heat a 10x10x10 ft bedroom.

Intel TDP White Paper
The maximum heat emitted by a CPU is referred to as TDP, or thermal design power. These figures are considered "worst-case," measured under maximum heat-generating loads. According to the white paper above, Intel Xeon server CPUs can be purchased with TDPs up to 130 watts. Below is a clipping from the above document that shows the TDP of typical server configurations.

So, Qarnot, you're telling me I'd need to install approximately 20 CPUs in my bedroom to replace my heater? Okay, sure, sign me up. Maybe you can get the solar roadway people to provide the power... :palm:
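
For the record, here's the back-of-envelope math as a quick Python sketch. It assumes the CPUs run flat-out at their rated TDP, which is the best case for heating:

Code:
# How many 130 W Xeons does it take to match a 2900 W bedroom heater?
ROOM_HEATER_W = 2900      # heater size from the BTU calculator above
XEON_TDP_W = 130          # max Xeon TDP per Intel's white paper
BTU_PER_HR_PER_W = 3.412  # 1 W = 3.412 BTU/h

print(f"Heater output: {ROOM_HEATER_W * BTU_PER_HR_PER_W:.0f} BTU/h")  # ~9895
print(f"CPUs at full TDP: {ROOM_HEATER_W / XEON_TDP_W:.1f}")           # ~22.3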
 
Video: https://player.vimeo.com/video/38095665 (from Qarnot Computing on Vimeo)
« Last Edit: August 15, 2015, 07:43:12 pm by gilbenl »
What doesn't kill you, probably hurts a lot.
 

Offline eas

  • Frequent Contributor
  • **
  • Posts: 601
  • Country: us
    • Tech Obsessed
Re: Heat your home with The Cloud
« Reply #1 on: August 16, 2015, 01:44:04 am »
This sounds a lot like some distributed datacenter scenarios that James Hamilton (then of Microsoft, now of Amazon Web Services) did 5+ years ago.

Potential upsides:
  • Places computing closer to users, reducing latency.
  • Utility pays for distribution transformers, instead of "datacenter" operator.
  • "Low-grade" waste heat close to point of use.
  • Geographic diversity.

Of course, there are downsides too, but batshit crazy? Why do you think it so?
« Last Edit: August 16, 2015, 06:08:52 am by eas »
 

Offline TheElectricChicken

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: au
Re: Heat your home with The Cloud
« Reply #2 on: August 16, 2015, 04:23:36 am »
Old idea; it works just fine. Great IF it's available in the reader's area, but it probably is not.
 

Offline German_EE

  • Super Contributor
  • ***
  • Posts: 2399
  • Country: de
Re: Heat your home with The Cloud
« Reply #3 on: August 16, 2015, 08:25:46 am »
This technology was featured on Slashdot some time ago. It was my understanding that Qarnot Computing pays for the internet connection and the electricity used via a separate meter.
Should you find yourself in a chronically leaking boat, energy devoted to changing vessels is likely to be more productive than energy devoted to patching leaks.

Warren Buffett
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Heat your home with The Cloud
« Reply #4 on: August 16, 2015, 08:35:23 am »
This sounds a lot like some distributed datacenter scenarios that James Hamilton (then of Microsoft, now of Amazon Web Services) did 5+ years ago.

Potential upsides:
  • Places computing closer to users, reducing latency.
  • Utility pays for distribution transformers, instead of "datacenter" operator.
  • "Low-grade" waste heat close to point of use.
  • Geographic diversity.

Of course, there are downsides too, but batshit crazy? Why do you think it so?

Networks are not really set up that way. Fetching data from someone's home will increase latency; your ping time to me will be orders of magnitude higher than to current datacenters.

Waste heat will be problematic in summertime, or is that also taken into account?

The other two points, "Utility pays for distribution transformers, instead of "datacenter" operator" and "Geographic diversity," don't look like upsides to me for the same reasons: what to do with heat dissipation in summertime, and what to do about peer-to-peer latency.

Also, one more thing not taken into account is upload limits (restricted by most providers, so you can't really run a server from your home).
 

Offline gilbenlTopic starter

  • Regular Contributor
  • *
  • Posts: 170
  • Country: us
Re: Heat your home with The Cloud
« Reply #5 on: August 16, 2015, 03:26:07 pm »
This sounds a lot like some distributed datacenter scenarios that James Hamilton (then of Microsoft, now of Amazon Web Services) did 5+ years ago.

Potential upsides:
  • Places computing closer to users, reducing latency.
  • Utility pays for distribution transformers, instead of "datacenter" operator.
  • "Low-grade" waste heat close to point of use.
  • Geographic diversity.

Of course, there are downsides too, but batshit crazy? Why do you think it so?

Interesting points. Stepping back a bit, the website is a bit vague. My assumption is that their plan is to set things up like #1, but it seems to me that #2 would be far more efficient:

1) The unit (Q.Rad) is an isolated CPU on my wall to which I have no direct access. I submit the job from my PC to a central server, which then distributes the workload amongst the remote units, maybe even the one in my house. In effect, the company is paying to use my home as a heatsink.

2) The unit (Q.Rad) serves as my primary CPU (i.e., I run my day-to-day computing on it). When not in use, I farm out the downtime to be used by the cloud service as a node in a distributed processing system. When I want to do some heavy lifting, my personal unit acts as the primary CPU, initiating and distributing the workload across available resources in the network.

To the quoted points above, it would seem that your first point is only valid if the organization is akin to #2. If it's like #1, the job would need to go from your PC to a central point before being distributed, as opposed to #2, where your unit would initiate, distribute, and manage the job on the network.

My point in the original post was to say that the "waste heat" is of nil value as far as warming your home. Assume the unit puts out 500 watts of heat; that's equivalent to your average desk heater, and I'd bet that's a pretty generous heat output estimate. So the company is effectively picking up the equipment and electric cost so you can cool the unit (after all, that's the money pit in server farms).

So the equipment and power are fixed costs: whether they put the servers in a big building or your home, they have to buy them. They're banking on the fact that you like to keep your home at a comfortable 70-80°F and will foot their cooling bill. That's my rationale for calling it malarkey.
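
Just to put a number on the electric cost they'd be picking up, a rough sketch, assuming a 500 W unit running 24/7 and an illustrative $0.12/kWh residential rate (my assumption, not a Qarnot figure):

Code:
# Monthly energy and cost for a 500 W unit running continuously.
POWER_KW = 0.5
HOURS_PER_MONTH = 24 * 30
PRICE_PER_KWH = 0.12  # assumed rate; substitute your own

kwh = POWER_KW * HOURS_PER_MONTH                   # 360 kWh
print(f"Energy: {kwh:.0f} kWh/month")
print(f"Cost: ${kwh * PRICE_PER_KWH:.2f}/month")   # ~$43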
What doesn't kill you, probably hurts a lot.
 

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 9021
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: Heat your home with The Cloud
« Reply #6 on: August 16, 2015, 05:49:50 pm »
Altcoiners (e.g. the Bitcoiners who started it) have done it before. It really does work, though it obviously makes the most sense in colder climates. In some cases, it's possible to net a profit.
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline eas

  • Frequent Contributor
  • **
  • Posts: 601
  • Country: us
    • Tech Obsessed
Re: Heat your home with The Cloud
« Reply #7 on: August 18, 2015, 02:38:02 am »
This sounds a lot like some distributed datacenter scenarios that James Hamilton (then of Microsoft, now of Amazon Web Services) did 5+ years ago.

Potential upsides:
  • Places computing closer to users, reducing latency.
  • Utility pays for distribution transformers, instead of "datacenter" operator.
  • "Low-grade" waste heat close to point of use.
  • Geographic diversity.

Of course, there are downsides too, but batshit crazy? Why do you think it so?

Networks are not really set up that way. Fetching data from someone's home will increase latency; your ping time to me will be orders of magnitude higher than to current datacenters.

Huh? Networks aren't set up to obey fundamental laws of physics?  A significant part of the front-end latency on the Internet is the time it takes light to travel in an optical fiber.

Or are you talking about some property of the "last mile" networks serving residential customers? The extremities of networks are often passive, and communication between nodes on the same segment may require a round trip to something further upstream. Is that what you mean? That can add some latency, but I'm pretty sure the distance travelled is less than 10 miles.

Or is this about the asymmetric upload/download bandwidth of many residential broadband connections? That's a (not insurmountable) issue, but it doesn't have a direct bearing on latency.

Or do you have some misunderstanding about how TCP/IP works? Connection setup requires round-trip communication, and the window for unacked packets is based on round-trip latency. It is true that datacenters go to lengths to minimize latency, and they generally have more options for network connectivity, but residential ISPs don't get a free pass on latency.
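
To make the window/latency relationship concrete, here's a sketch of the classic bandwidth-delay bound: a single TCP connection can't move more than one window per round trip. The window and RTT values are illustrative, not measurements:

Code:
# Max single-connection TCP throughput ~= window_size / RTT.
WINDOW_BYTES = 64 * 1024  # a classic 64 KB receive window

for rtt_ms in (5, 20, 100, 250):  # datacenter-ish through residential-ish
    mbps = (WINDOW_BYTES * 8) / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:3d} ms -> max ~{mbps:.1f} Mbit/s")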

Ultimately though, for such a scheme to work, it would need appropriate network design and management. This would have been even harder to dance around 5+ years ago, when residential broadband above 50 Mbps was quite rare.

Quote
Waste heat will be problematic in summertime, or is that also taken into account?

Indeed. I can't find the paper, but as I recall, it was quite thorough, and I doubt it would have dodged this. There are a few ways I can think of to deal with this: 1) the heat could still have value for pre-heating shower/wash water in the summer; 2) it could be vented directly outside; 3) if active cooling were required, it would likely be a marginal capital and/or operating cost on top of existing residential cooling.

Quote
The other two points, "Utility pays for distribution transformers, instead of "datacenter" operator" and "Geographic diversity," don't look like upsides to me for the same reasons: what to do with heat dissipation in summertime, and what to do about peer-to-peer latency.

Not sure why you don't think of these as upsides? A utility paying for distribution transformers is a capital and operating cost reduction, and geographic diversity is generally desirable. I've already addressed the peer-to-peer latency, and suggested approaches to mitigating the issue of heat during summertime.


Quote
Also, one more thing not taken into account is upload limits (restricted by most providers, so you can't really run a server from your home).

You are assuming this would be deployed on residential broadband. I don't know what Qarnot's plan is, but certainly there are other options. I'm pretty sure I could have Comcast switch me over to a business connection. My price would go up, and peak speeds might drop some, but as I recall from the last time I looked, the Business Class was more flexible about servers and bandwidth.


Interesting points. Stepping back a bit, the website is a bit vague. My assumption is that their plan is to set things up like #1, but it seems to me that #2 would be far more efficient:

1) The unit (Q.Rad) is an isolated CPU on my wall to which I have no direct access. I submit the job from my PC to a central server, which then distributes the workload amongst the remote units, maybe even the one in my house. In effect, the company is paying to use my home as a heatsink.

2) The unit (Q.Rad) serves as my primary CPU (i.e., I run my day-to-day computing on it). When not in use, I farm out the downtime to be used by the cloud service as a node in a distributed processing system. When I want to do some heavy lifting, my personal unit acts as the primary CPU, initiating and distributing the workload across available resources in the network.

To the quoted points above, it would seem that your first point is only valid if the organization is akin to #2. If it's like #1, the job would need to go from your PC to a central point before being distributed, as opposed to #2, where your unit would initiate, distribute, and manage the job on the network.

Perhaps you do a lot of computing that would benefit from Qarnot's offering, but my assumption was that while some of their customers might opt to install Qarnot nodes on premises, most of the people housing nodes wouldn't be computational customers.

Looking at their example workloads, most look like they tend towards the embarrassingly parallel end of the spectrum, which relaxes the requirements on node-to-node latency. Most also look like the ratio of computation to input and/or output data is relatively high, which reduces the time/cost of distributing the jobs and collecting the results.
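
As a toy illustration of that kind of workload (generic Python, not Qarnot's actual job API): each work unit is independent, the per-unit input and output are tiny, and nearly all the time goes into computation, so node-to-node latency barely matters:

Code:
from multiprocessing import Pool

def work_unit(seed: int) -> float:
    """Burn CPU on an independent chunk; tiny input, tiny output."""
    total = 0.0
    for i in range(1, 200_000):
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    with Pool() as pool:                          # stand-in for a fleet of nodes
        results = pool.map(work_unit, range(32))  # 32 independent units
    print(f"Collected {len(results)} results")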

Quote
My point in the original post was to say that the "waste heat" is of nil value as far as warming your home. Assume the unit puts out 500 watts of heat; that's equivalent to your average desk heater, and I'd bet that's a pretty generous heat output estimate. So the company is effectively picking up the equipment and electric cost so you can cool the unit (after all, that's the money pit in server farms).

So the equipment and power are fixed costs: whether they put the servers in a big building or your home, they have to buy them. They're banking on the fact that you like to keep your home at a comfortable 70-80°F and will foot their cooling bill. That's my rationale for calling it malarkey.

Some people pay for heat. Some people pay for cooling. Some do both, and some neither. Perhaps free heat doesn't appeal to you, but does it seem that unlikely that there are a significant number of people who would love free heat? As for the cooling bill, they may be asking people to foot it in exchange for free heat, but I wouldn't think that paying people for added cooling load would wreck an otherwise viable business model.

You reduce this to equipment and power costs, which only makes sense if you believe that the cost of buying land, constructing a datacenter, etc. is trivial. It's not. Probably just as important, the cost of a datacenter starts accruing before it's built, and continues whether or not it is filled to 100% capacity. Even if they have to pay for housing the servers with more than just waste heat, that expense should scale with the business, rather than requiring a large initial outlay.

I think, in the short term, the question is whether they can get enough paying customers to have sufficient scale. In the long term, I think it's whether they can get enough units deployed, in areas with good enough network infrastructure, that they can handle a wider range of workloads.
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Heat your home with The Cloud
« Reply #8 on: August 18, 2015, 05:29:09 am »
Huh? Networks aren't set up to obey fundamental laws of physics?  A significant part of the front-end latency on the Internet is the time it takes light to travel in an optical fiber.

Or are you talking about some property of the "last mile" networks serving residential customers? ...

Apologies for shortening your reply. What I mean is the number of hops needed for peer-to-peer communication, each one adding delay, plus network address translation negotiation, among other things.

The Internet is not designed for peer-to-peer communication, so clusters close to Tier 1 networks have less latency and more bandwidth than you will ever hope to have on a residential endpoint.
 

Offline TheElectricChicken

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: au
Re: Heat your home with The Cloud
« Reply #9 on: August 18, 2015, 07:50:47 am »
The Internet is not designed for peer-to-peer communication [.......]

 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Heat your home with The Cloud
« Reply #10 on: August 18, 2015, 07:57:56 am »
The Internet is not designed for peer-to-peer communication [.......]



Sorry, I don't speak troll; do you care to elaborate?
 

Offline gilbenlTopic starter

  • Regular Contributor
  • *
  • Posts: 170
  • Country: us
Re: Heat your home with The Cloud
« Reply #11 on: August 19, 2015, 12:19:38 am »

You reduce this to equipment and power costs, which only makes sense if you believe that the cost of buying land, constructing a datacenter, etc. is trivial. It's not. Probably just as important, the cost of a datacenter starts accruing before it's built, and continues whether or not it is filled to 100% capacity. Even if they have to pay for housing the servers with more than just waste heat, that expense should scale with the business, rather than requiring a large initial outlay.


This is a very good point that I did not consider. A system such as this does allow for almost instantaneous scalability of physical resources, assuming the number of willing Q.Rad "owners" matches or surpasses the number of Q.Rad units needed to satisfy workload demand.

I still can't get past the heating schtick, however. Even in a subarctic or arctic climate, how is 500 watts of power going to help? Not to mention that using a CPU to heat your home is the same as using a game console or an old TV. It's inherently inefficient because it's not specifically designed to convert electrical energy to heat.

Why not just pay folks to keep a unit in their home and ditch the wank? For example, an existing telecom company could offer a 10% discount on your monthly bill to house a unit, which they pay to power. I have a cable modem/router/VOIP box on my desk. Make it a smidge bigger to accommodate a CPU or two and send folks that box instead. You could build a 100,000-CPU distributed processing network in no time.
What doesn't kill you, probably hurts a lot.
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3643
  • Country: us
Re: Heat your home with The Cloud
« Reply #12 on: August 19, 2015, 12:44:30 am »
Distributed computing is a very old idea, and I haven't read a good reason yet why this scheme would require yet another box.
"Waste heat" is worthless because it is generally too cold to do useful work, including heating spaces. If your computer exhausts air at a temperature of 90°F, that is not hot enough to effectively warm a room in the wintertime. You would like to have an exhaust temperature closer to 160°F to be able to actually heat a living space, and higher if you want to heat water or run an engine. There's no escape from thermodynamics, and small temperature gradients just aren't good for much.
The only way you can use the heat is if it is really quite hot compared to normal computer exhaust air. This would require unusual thermal design (Industrial/Military rated chips, and exotic heat transfer technology).
 

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 9021
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: Heat your home with The Cloud
« Reply #13 on: August 19, 2015, 03:30:18 am »
Distributed computing is a very old idea, and I haven't read a good reason yet why this scheme would require yet another box.
"Waste heat" is worthless because it is generally too cold to do useful work, including heating spaces. If your computer exhausts air at a temperature of 90°F, that is not hot enough to effectively warm a room in the wintertime. You would like to have an exhaust temperature closer to 160°F to be able to actually heat a living space, and higher if you want to heat water or run an engine. There's no escape from thermodynamics, and small temperature gradients just aren't good for much.
The only way you can use the heat is if it is really quite hot compared to normal computer exhaust air. This would require unusual thermal design (Industrial/Military rated chips, and exotic heat transfer technology).
90°F is more than good enough for indirect heat, and that is about what the older heat pumps did. 105°F or so (depending on humidity) is enough to avoid the "cool air" feeling and is more or less what the newer heat pumps are designed to do. (It's also just hot enough for shower water, though 110°F would give more margin.) For that matter, a heat pump can be used to boost the temperature if needed. It's trivial to build a heat pump designed for up to 140°F or so using standard HVAC parts, and solid-state heat pumps are able to go even higher.
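
For what it's worth, the thermodynamic ceiling on that temperature boost is quite generous. Here's a sketch of the ideal (Carnot) heating COP; keep in mind real machines only achieve some fraction of it:

Code:
# Ideal (Carnot) heating COP for a given temperature lift:
# COP_max = T_hot / (T_hot - T_cold), temperatures in kelvin.

def carnot_cop_heating(t_cold_f, t_hot_f):
    t_cold_k = (t_cold_f - 32) * 5 / 9 + 273.15
    t_hot_k = (t_hot_f - 32) * 5 / 9 + 273.15
    return t_hot_k / (t_hot_k - t_cold_k)

# Boosting 90F computer exhaust to 140F beats lifting 40F outdoor air:
print(f"90F -> 140F: COP limit {carnot_cop_heating(90, 140):.1f}")  # ~12
print(f"40F -> 140F: COP limit {carnot_cop_heating(40, 140):.1f}")  # ~6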
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline fake-name

  • Regular Contributor
  • *
  • Posts: 75
Re: Heat your home with The Cloud
« Reply #14 on: August 20, 2015, 03:54:51 am »
The Internet is not designed for peer-to-peer communication [.......]



Sorry, I don't speak troll; do you care to elaborate?


Allow me to try to translate:

The internet was/is designed from the very start to be a decentralized, fault-tolerant system.

It is true that modern economy-of-scale does tend to lead to the centralization of data-center-type tasks, but that is counter to the overall system design, and it's still not true when you realize the internet is many, many data centers that are distributed everywhere.

So either:
 - Wat?
 - Are you using a different internet?

Also, what are you smoking, that shit must be AMAZING.


------

Distributed computing is a very old idea, and I haven't read a good reason yet why this scheme would require yet another box.
"Waste heat" is worthless because it is generally too cold to do useful work, including heating spaces. If your computer exhausts air at a temperature of 90°F, that is not hot enough to effectively warm a room in the wintertime. You would like to have an exhaust temperature closer to 160°F to be able to actually heat a living space, and higher if you want to heat water or run an engine. There's no escape from thermodynamics, and small temperature gradients just aren't good for much.
The only way you can use the heat is if it is really quite hot compared to normal computer exhaust air. This would require unusual thermal design (Industrial/Military rated chips, and exotic heat transfer technology).

This is flat out wrong. I presently manage the great majority of my heating expenses with just ~300W of computing power in a closet.

Admittedly, I don't live in arctic climes, but I can keep the inside of at least one room quite comfortable when it's as low as 40°F out, and even then I still circulate some cold external air in to keep things from getting too stuffy.

While the comment about the exhaust air not being hot enough is true if you're applying it to external air taken in from the cold, if you have the computer recirculate the air from the room multiple times (i.e., the heat source sits in the room itself), it will heat things quite nicely.

If you have a computer drawing ~300W sitting in a room, 90%+ of that power is becoming heat, which will then be exhausted by the fans. The room will heat up, unless you have some special exception to thermodynamics in your house.
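
A rough steady-state check backs this up: at equilibrium the source power balances heat loss through the walls, so the temperature rise is P/(U*A). The loss coefficient below is an assumption for a small, reasonably insulated room, not a measured value:

Code:
# Steady-state temperature rise of a room with a constant heat source:
# delta_T = P / (U*A), where U*A is the room's overall loss coefficient.
POWER_W = 300       # computer draw; nearly all of it ends up as heat
UA_W_PER_K = 30.0   # assumed loss coefficient for a small insulated room

delta_t_c = POWER_W / UA_W_PER_K
print(f"Rise above outdoor temp: {delta_t_c:.0f} C ({delta_t_c * 9 / 5:.0f} F)")
# ~10 C (18 F): with 40 F outside, that's high-50s F indoors from the
# computer alone, before any other heat sources in the house.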
« Last Edit: August 20, 2015, 04:00:00 am by fake-name »
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Heat your home with The Cloud
« Reply #15 on: August 20, 2015, 04:38:24 am »
Allow me to try to translate:

The internet was/is designed from the very start to be a decentralized, fault-tolerant system.

It is true that modern economy-of-scale does tend to lead to the centralization of data-center-type tasks, but that is counter to the overall system design, and it's still not true when you realize the internet is many, many data centers that are distributed everywhere.

So either:
 - Wat?
 - Are you using a different internet?

Also, what are you smoking, that shit must be AMAZING.

Smoke this:

Average peer-to-peer round trip for gaming is about 100ms; the best you can probably get is about 75ms. Typical is probably around 200ms, with a lot of players at over 250ms, which produces a ton of lag.

Average client/server round trip is about 10ms; the best I've seen so far is 5ms to AWS, Terremark, Google, etc. Edit: which are very close to Tier 1 networks.

But you can count the hops the packets take with a traceroute, and you'll find out that peer-to-peer is not that great because of the infrastructure of the net. You can count Tier 1 networks with your fingers and toes, so it's very centralized to major providers. It goes downhill from there.
https://en.wikipedia.org/wiki/Tier_1_network
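
Anyone who wants to check their own numbers can time a TCP connect, which costs one SYN/SYN-ACK round trip, so it approximates ping without needing raw sockets. A quick sketch; the hostnames are placeholders, so substitute a datacenter host and a residential peer you control:

Code:
import socket
import time

def connect_rtt_ms(host, port=443):
    """Time a TCP connect; roughly one round trip to the host."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

for host in ("www.google.com", "example.com"):  # placeholders
    print(f"{host}: ~{connect_rtt_ms(host):.1f} ms")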

« Last Edit: August 20, 2015, 04:43:20 am by miguelvp »
 

Offline gilbenlTopic starter

  • Regular Contributor
  • *
  • Posts: 170
  • Country: us
Re: Heat your home with The Cloud
« Reply #16 on: August 20, 2015, 01:52:12 pm »


This is flat out wrong. I presently manage the great majority of my heating expenses with just ~300W of computing power in a closet.

Admittedly, I don't live in arctic climes, but I can keep the inside of at least one room quite comfortable when it's as low as 40°F out, and even then I still circulate some cold external air in to keep things from getting too stuffy.

While the comment about the exhaust air not being hot enough is true if you're applying it to external air taken in from the cold, if you have the computer recirculate the air from the room multiple times (i.e., the heat source sits in the room itself), it will heat things quite nicely.

If you have a computer drawing ~300W sitting in a room, 90%+ of that power is becoming heat, which will then be exhausted by the fans. The room will heat up, unless you have some special exception to thermodynamics in your house.

Would you mind sharing the details of your setup? Some pictures would be nice as well.

Here's an experiment comparing a PC to a space heater. Anything but rigorous, but nevertheless interesting: https://www.pugetsystems.com/labs/articles/Gaming-PC-vs-Space-Heater-Efficiency-511/
What doesn't kill you, probably hurts a lot.
 

Offline fake-name

  • Regular Contributor
  • *
  • Posts: 75
Re: Heat your home with The Cloud
« Reply #17 on: August 21, 2015, 01:56:38 am »
Would you mind sharing the details of your setup? Some pictures would be nice as well.

Here's an experiment comparing a PC to a space heater. Anything but rigorous, but nevertheless interesting: https://www.pugetsystems.com/labs/articles/Gaming-PC-vs-Space-Heater-Efficiency-511/

There's not really much to it. I basically have a half-height server rack in my closet, with some fans in the door. Air blows in through the door (and the fans) at the bottom, and there are some plain holes at the top of the door where the air re-enters my bedroom.

I get decent air-exchange with the closet that way, and I have one of those pleated furnace filters in front of the intake, which does a remarkable job of preventing me from ever needing to clean the fans in the servers.

Basically, there really just isn't much to the thing. Computers radiate 90+% of the energy they consume as heat. It's gotta go somewhere.

As it is, the closet air temperature is generally about 5-10°C hotter than the intake air temperature, but that's mostly because the circulation fans aren't running at full tilt (they're pretty loud that way, and I sleep in this room).
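
For anyone sizing something similar, the airflow needed to hold a given temperature rise falls out of air's heat capacity (Q = m_dot * cp * dT). A sketch with round-number air properties:

Code:
# Airflow needed to carry P watts away at a temperature rise of dT:
# volumetric flow = P / (rho * cp * dT)
POWER_W = 300
RHO_AIR = 1.2    # kg/m^3
CP_AIR = 1005.0  # J/(kg*K)

for dt_c in (5, 10):  # the 5-10 C rise reported above
    m3_per_s = POWER_W / (RHO_AIR * CP_AIR * dt_c)
    cfm = m3_per_s * 2118.88  # m^3/s -> cubic feet per minute
    print(f"dT = {dt_c:2d} C -> {m3_per_s * 1000:.0f} L/s (~{cfm:.0f} CFM)")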

---

For the summer, when it's too hot, I have some holes punched in the ceiling of the closet. Air there vents out into the house's plenum, and there are vent-grills to the outside through it.

Air routing is managed with some bits of paper I tape over one set of vent holes or the other. It's very low-tech.
 

Offline fake-name

  • Regular Contributor
  • *
  • Posts: 75
Re: Heat your home with The Cloud
« Reply #18 on: August 21, 2015, 01:59:22 am »
Allow me to try to translate:

The internet was/is designed from the very start to be a decentralized, fault-tolerant system.

It is true that modern economy-of-scale does tend to lead to the centralization of data-center-type tasks, but that is counter to the overall system design, and it's still not true when you realize the internet is many, many data centers that are distributed everywhere.

So either:
 - Wat?
 - Are you using a different internet?

Also, what are you smoking, that shit must be AMAZING.

Smoke this:

Average peer-to-peer round trip for gaming is about 100ms; the best you can probably get is about 75ms. Typical is probably around 200ms, with a lot of players at over 250ms, which produces a ton of lag.

Average client/server round trip is about 10ms; the best I've seen so far is 5ms to AWS, Terremark, Google, etc. Edit: which are very close to Tier 1 networks.

But you can count the hops the packets take with a traceroute, and you'll find out that peer-to-peer is not that great because of the infrastructure of the net. You can count Tier 1 networks with your fingers and toes, so it's very centralized to major providers. It goes downhill from there.
https://en.wikipedia.org/wiki/Tier_1_network

I'm confused as to why you think ping times are even relevant here. Sure, they're important for one specific workload (web serving), but the Qarnot people have indicated that this isn't for web content hosting.

There are lots of compute workloads that don't give a crap about latency or even bandwidth. Why are you saying this won't work because of latency issues?
 

