Author Topic: The phasing out of 32 bit  (Read 5328 times)


Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 7420
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
The phasing out of 32 bit
« on: June 06, 2021, 08:00:38 pm »
Excluding embedded systems, that is.

Although that video focuses on ARM, just how much extra complexity in a modern 64 bit x86 CPU goes towards making it compatible with 32 bit? How much would there be to gain by making it compatible with 32 bit only at the app level (as ARM did with some of their CPUs) and how much by removing 32 bit compatibility from the hardware and moving it to software emulation?
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline SilverSolder

  • Super Contributor
  • ***
  • Posts: 5147
  • Country: 00
Re: The phasing out of 32 bit
« Reply #1 on: June 07, 2021, 12:27:41 am »

64 bit can double the memory requirements in some applications...   64 bit isn't a universal "good"...
 

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 7420
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: The phasing out of 32 bit
« Reply #2 on: June 07, 2021, 01:09:34 am »
There's a mode called "x32" that solves that problem, while still retaining most of the advantages of 64 bit. I don't think it's popular because not that many apps get enough of a performance boost to make it worthwhile, especially with RAM being much cheaper than it was when software support for it was being developed.
https://en.wikipedia.org/wiki/X32_ABI
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline bson

  • Supporter
  • ****
  • Posts: 1972
  • Country: us
Re: The phasing out of 32 bit
« Reply #3 on: June 07, 2021, 02:26:29 am »
I don't think there's that much complexity to support legacy 32-bit in x64: register save and restore at traps and faults, MMU lookups, and some arithmetic ops.  The big complexity is in software, for things like fetching system call parameters - same system call, but different-sized parameters depending on whether the caller is 32- or 64-bit (in Unix this made things like ioctl exceedingly painful to get right, since its arguments pass through to a driver), different trap and VM operations, different context in an interrupt, etc.  At Sun we made the kernel 64-bit with 32-bit process support, with zero support for 32-bit kernel components like drivers or file systems.  Even then, making it work was not a small undertaking.  (This was in the mid/late 90s.)  We made the kernel SMP and thread-hot around the same time.

 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 13439
  • Country: us
  • DavidH
Re: The phasing out of 32 bit
« Reply #4 on: June 07, 2021, 02:34:37 am »
Although that video focuses on ARM, just how much extra complexity in a modern 64 bit x86 CPU goes towards making it compatible with 32 bit? How much would there be to gain by making it compatible with 32 bit only at the app level (as ARM did with some of their CPUs) and how much by removing 32 bit compatibility from the hardware and moving it to software emulation?

One of the features of x86 which allowed it to succeed is backwards compatibility.

The extra complexity represents a significant verification issue but once that is crossed, the physical cost is low compared to other things which must be included.
 
The following users thanked this post: Masa, james_s

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 7420
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: The phasing out of 32 bit
« Reply #5 on: June 07, 2021, 03:07:15 am »
One of the features of x86 which allowed it to succeed is backwards compatibility.

The extra complexity represents a significant verification issue but once that is crossed, the physical cost is low compared to other things which must be included.
So move the legacy support into firmware, then it becomes a part of the firmware that only needs to be verified once per revision, which is far less often than the times the core is updated. Or move it into an Atom/Quark like core that could be repurposed for stuff like power management or audio DSP when the CPU is operating in 64 bit mode.

I wonder how long before some Spectre-like vulnerability is found that is only possible (or even merely made easier to exploit) because of the legacy support, thereby making the legacy support a security liability.
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 13439
  • Country: us
  • DavidH
Re: The phasing out of 32 bit
« Reply #6 on: June 07, 2021, 03:42:03 am »
So move the legacy support into firmware, then it becomes a part of the firmware that only needs to be verified once per revision, which is far less often than the times the core is updated. Or move it into an Atom/Quark like core that could be repurposed for stuff like power management or audio DSP when the CPU is operating in 64 bit mode.

Intel, Transmeta, and DEC all tried that and failed, and I think Apple's attempts hurt them more than they helped.  Maybe those were all implementation failures, but if every attempt has failed, that argues the concept itself is flawed.
 

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 7420
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: The phasing out of 32 bit
« Reply #7 on: June 07, 2021, 03:49:25 am »
Intel, Transmeta, and DEC all tried that and failed, and I think Apple's attempts hurt them more than they helped.  Maybe those were all implementation failures, but if every attempt has failed, that argues the concept itself is flawed.
Didn't Apple just cut out legacy support altogether, starting with removing support from software and not having it at all with newer internally designed hardware?
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline SilverSolder

  • Super Contributor
  • ***
  • Posts: 5147
  • Country: 00
Re: The phasing out of 32 bit
« Reply #8 on: June 07, 2021, 04:36:06 am »

I guess backwards compatibility isn't important to a certain level of consumer, but it might matter for professional / business applications?
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 2324
  • Country: nz
  • Formerly SiFive, Samsung R&D
Re: The phasing out of 32 bit
« Reply #9 on: June 07, 2021, 04:55:34 am »
So move the legacy support into firmware, then it becomes a part of the firmware that only needs to be verified once per revision, which is far less often than the times the core is updated. Or move it into an Atom/Quark like core that could be repurposed for stuff like power management or audio DSP when the CPU is operating in 64 bit mode.

Intel, Transmeta, and DEC all tried that and failed, and I think Apple's attempts hurt them more than they helped.  Maybe those were all implementation failures, but if every attempt has failed, that argues the concept itself is flawed.

Apple has changed the CPU family in "Macintosh" computers three times now i.e. used four different ISAs: Motorola 68000, IBM PowerPC, Intel x86_64, and now ARM Aarch64. Actually, five, as there were a couple of 32 bit Core models at the start of the Intel era.

Each time they have for some years provided software emulators for the old ISA which have been pretty transparent.

For example, Rosetta let you run PowerPC apps on an Intel Mac right up until Snow Leopard was replaced by Lion in July 2011 -- and on PowerPC Macs, the Classic environment ran MacOS 9, which in turn ran 68000 apps with its own built-in emulator.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: si
Re: The phasing out of 32 bit
« Reply #10 on: June 07, 2021, 05:51:49 am »
I don't think supporting 32bit is really all that much extra transistors.

Even if you are a 64-bit chip you still need a way to move around 8-, 16-, and 32-bit words. You don't want an architecture that requires an 8-bit value in a memory structure to be padded out with 56 extra zeroes to fit into 64 bits. A 64-bit chip supporting 32-bit pointers also makes sense, since 95% of processes running under a typical OS use significantly less than 4GB of memory while holding quite a few pointers, thanks to the prevalence of object-oriented programming that allocates most things dynamically and thus needs pointers to everything. So a well-designed 64-bit architecture will end up including most of the abilities of a 32-bit architecture.

The x86 architecture, however, takes backwards compatibility to a point where it might be getting annoying for chip designers. Even a shiny new 12-core Intel i9, coming out of reset, acts like a 16-bit 8086 processor: fed 8086 machine code, it will run it just fine, exactly as it ran on an IBM PC XT in the 1980s. Only when the OS does a special register dance does the chip switch to acting like a 32-bit Intel 386, fully binary compatible with all 386 machine code. Then, with another magic register-dance ritual, the processor finally starts acting like a 64-bit CPU. I'm guessing at this point Intel just sticks a tiny 8086 into a few square microns of die space, uses it to run the startup baggage, then turns it off and forgets about it. But the 16-bit instructions from the 8086 are still valid in x64, since it still needs a way of moving 16-bit words; they simply added variants of those instructions that work on 32 and 64 bits (along with a truckload of new instructions, as usual for x86's confusingly huge mountain of an instruction set).

It's more about maintaining support for it: it costs extra engineering time to implement the backwards compatibility, and even more to properly test that it works in all the weird ways old software might abuse it. Translating 32-bit machine code to 64-bit machine code on the fly for the same architecture, by contrast, is probably reasonably easy to do without a significant performance hit.
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 9603
  • Country: de
Re: The phasing out of 32 bit
« Reply #11 on: June 07, 2021, 06:38:36 am »
x86 backward compatibility even goes back to 8080 code for a large part. Another odd backwards-compatibility point is the infamous A20 gate, which emulates a hardware quirk of early PC implementations when the 20-bit address space overflows. At least it can be turned off - but it was needed to run old MS-DOS on the 386 and later.

How many extra transistors are needed depends on the ISA and on how much the 64-bit ISA also supports subsets of 64 bits. I would not worry so much about the extra transistors, as much of the CPU is FPU and cache anyway; the actual integer ALU is tiny. The problem is more that the extra support / decoding can add delays. A new, clean instruction set can also be more compact or faster to decode. 64-bit code also needs more memory and thus more memory bandwidth - so in some cases 32-bit code can be faster.

The main reason to go beyond 32 bits is that memory beyond 4 GB becomes practical. With word addressing and 32-bit addresses, the limit would be 16 GB (four-byte words), which is still limited. For a while apps could still live with that, but it makes sense to plan ahead a little.
 

Online james_s

  • Super Contributor
  • ***
  • Posts: 15874
  • Country: us
Re: The phasing out of 32 bit
« Reply #12 on: June 07, 2021, 07:23:16 am »
Transistors are cheap, backward compatibility is paramount. It is the reason that "Wintel" PCs have absolutely dominated for 30+ years and continue to absolutely dominate the desktop/laptop market to this day. Something like 90% of all of the personal computers in the entire world are x86 running Windows, not because either x86 or Windows are particularly amazing but because they offer backward compatibility with an absolutely enormous library of software. Other innovative systems have come and gone, the BeBox was totally cool at the time but there was no software for it so it was a flop. Apple is the only other platform that is even a serious contender on the consumer desktop and they are a very, very distant second place.
« Last Edit: June 07, 2021, 07:25:02 am by james_s »
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1185
  • Country: br
Re: The phasing out of 32 bit
« Reply #13 on: June 07, 2021, 12:55:42 pm »
I have read the comments with quite some interest, but this discussion is oversimplified.

Since the 90s/00s, CPUs have not been strictly 16-, 32-, or 64-bit; they all carry a messy combination of things (the 8086 A20 gate, the x86 SIMD units...), and given those "extension" instruction sets, they can now handle 128, 256, and even 512 bits with the latest AVX.

This discussion is nothing but vapor once anyone can write a 16-bit or 32-bit app and it will run fine on modern OS/hardware.

AMD CPUs in particular are quite well designed for that.
Seems mostly the ARM vaporware showing up  ::)

The NVIDIA ARM takeover will certainly want things above 128 lanes... no competition on that raceway, where they will have vertical IP.

Paul
« Last Edit: June 07, 2021, 12:59:45 pm by PKTKS »
 

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 7420
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: The phasing out of 32 bit
« Reply #14 on: June 07, 2021, 01:33:53 pm »
I guess backwards compatibility isn't important to a certain level of consumer, but it might matter for professional / business applications?
If it's still supported through emulation, the old stuff will still work.
How many extra transistors are needed depends on the ISA and on how much the 64-bit ISA also supports subsets of 64 bits. I would not worry so much about the extra transistors, as much of the CPU is FPU and cache anyway; the actual integer ALU is tiny. The problem is more that the extra support / decoding can add delays. A new, clean instruction set can also be more compact or faster to decode. 64-bit code also needs more memory and thus more memory bandwidth - so in some cases 32-bit code can be faster.
My understanding is that "x32" is in fact the 64 bit instruction set with 32 bit pointers, thereby solving the memory use problem.
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline SilverSolder

  • Super Contributor
  • ***
  • Posts: 5147
  • Country: 00
Re: The phasing out of 32 bit
« Reply #15 on: June 07, 2021, 04:02:53 pm »
I guess backwards compatibility isn't important to a certain level of consumer, but it might matter for professional / business applications?
If it's still supported through emulation, the old stuff will still work.
How many extra transistors are needed depends on the ISA and on how much the 64-bit ISA also supports subsets of 64 bits. I would not worry so much about the extra transistors, as much of the CPU is FPU and cache anyway; the actual integer ALU is tiny. The problem is more that the extra support / decoding can add delays. A new, clean instruction set can also be more compact or faster to decode. 64-bit code also needs more memory and thus more memory bandwidth - so in some cases 32-bit code can be faster.
My understanding is that "x32" is in fact the 64 bit instruction set with 32 bit pointers, thereby solving the memory use problem.

I guess you can do pretty much anything with a virtual machine...
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 21485
  • Country: nl
    • NCT Developments
Re: The phasing out of 32 bit
« Reply #16 on: June 07, 2021, 05:27:37 pm »
I don't think supporting 32bit is really all that much extra transistors.

Even if you are a 64bit chip you still need a way to move around 8bit 16bit 32bit words. You don't want an architecture that requires a 8bit value in a memory structure to be padded out with extra 56 zeroes to fit into 64bits.
If you look closer you'll see that an integer (the most commonly used storage type in C software, which is the basis of most software) is still 32-bit on most platforms. In the end the only difference between 32-bit and 64-bit is the memory space.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1185
  • Country: br
Re: The phasing out of 32 bit
« Reply #17 on: June 07, 2021, 05:57:27 pm »
I don't think supporting 32bit is really all that much extra transistors.

Even if you are a 64bit chip you still need a way to move around 8bit 16bit 32bit words. You don't want an architecture that requires a 8bit value in a memory structure to be padded out with extra 56 zeroes to fit into 64bits.
If you look closer you'll see that an integer (the most commonly used storage type in C software, which is the basis of most software) is still 32-bit on most platforms. In the end the only difference between 32-bit and 64-bit is the memory space.

For that, clever header #defines and #includes should be enough...
and already are.

But mostly the IOMMU is the main piece of the puzzle..
and that goes for GPUs and these "modern" single-plug (like USB)
"fits all, serves all" peripherals..

DMA is still the most complicated piece to bundle.

Paul
 

Offline langwadt

  • Super Contributor
  • ***
  • Posts: 2607
  • Country: dk
Re: The phasing out of 32 bit
« Reply #18 on: June 07, 2021, 06:11:05 pm »
I don't think supporting 32bit is really all that much extra transistors.

Even if you are a 64bit chip you still need a way to move around 8bit 16bit 32bit words. You don't want an architecture that requires a 8bit value in a memory structure to be padded out with extra 56 zeroes to fit into 64bits.
If you look closer you'll see that an integer (the most commonly used storage type in C software, which is the basis of most software) is still 32-bit on most platforms. In the end the only difference between 32-bit and 64-bit is the memory space.

and that's an issue for naughty code that assumes a pointer fits in an int

 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 2559
  • Country: us
Re: The phasing out of 32 bit
« Reply #19 on: June 07, 2021, 06:23:56 pm »
So move the legacy support into firmware, then it becomes a part of the firmware that only needs to be verified once per revision, which is far less often than the times the core is updated. Or move it into an Atom/Quark like core that could be repurposed for stuff like power management or audio DSP when the CPU is operating in 64 bit mode.

Intel, Transmeta, and DEC all tried that and failed, and I think Apple's attempts hurt them more than they helped.  Maybe those were all implementation failures, but if every attempt has failed, that argues the concept itself is flawed.

Transmeta was mostly sunk by emulation of ancient 16-bit code.  Their binary translation layer apparently worked well enough for 32-bit code, but Windows at the time had too much 16-bit code, which they had to emulate slowly in software.  They were also trying to virtualize the entire OS, including protected instructions and hardware access, which tend to be the hardest things to translate and require slow emulation.  Intel's x86 emulation on IA-64 was a problem because they are such radically different architectures.  x86 and amd64 are quite similar, so I expect binary translation would be much more successful.  Also, with modern OSes, if you get the kernel on board supporting the translation it should be even better.

I wouldn't really expect it to happen any time soon.  It is still a lot of work to replace something that isn't broken, but I don't think the performance or compatibility would be nearly the problems people had trying to do this 2 decades ago.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 13439
  • Country: us
  • DavidH
Re: The phasing out of 32 bit
« Reply #20 on: June 07, 2021, 07:36:29 pm »
I guess backwards compatibility isn't important to a certain level of consumer, but it might matter for professional / business applications?

It has less importance in embedded applications, which, not coincidentally, is where the various failed processors that dropped compatibility live on.

Arguably things have changed with managed applications and walled gardens like the iPhone for consumer use, which eventually leads back to a discussion about whether personal computers will survive at all.  The various companies selling walled platforms sure say they will not, but the RISC vendors said the same about Intel's x86 and Microsoft Windows, and look where they ended up.

ARM is now doing to Intel what Intel did to the RISC vendors with economy of scale pushing performance up from the low end, but I am not convinced that will be enough to displace x86 if ARM systems remain closed, or perhaps "curated".  And any advantage from a simpler ISA and simplicity from lack of support for backwards compatibility may not be enough.  That is a fancy way of asking, "where is my open ARM desktop replacement?"  And like the failed RISC vendors of the past, companies that do not make desktops, including Apple, say that I do not need one.  Well of course I do not need one if they are not making them; just ask them!

Apple has changed the CPU family in "Macintosh" computers three times now i.e. used four different ISAs: Motorola 68000, IBM PowerPC, Intel x86_64, and now ARM Aarch64. Actually, five, as there were a couple of 32 bit Core models at the start of the Intel era.

Each time they have for some years provided software emulators for the old ISA which have been pretty transparent.

For example, Rosetta let you run PowerPC apps on an Intel Mac right up until Snow Leopard was replaced by Lion in July 2011 -- and on PowerPC Macs, the Classic environment ran MacOS 9, which in turn ran 68000 apps with its own built-in emulator.

I think that explains why Apple is now primarily a maker of phones and consumer electronics, who just happens to also make some personal computers.

I wouldn't really expect it to happen any time soon.  It is still a lot of work to replace something that isn't broken, but I don't think the performance or compatibility would be nearly the problems people had trying to do this 2 decades ago.

The various processor manufacturers crushed by Intel's lower-performance x86 always said, "just recompile!"  Of course you could run Java on anything, right?  RIGHT?
 

Online james_s

  • Super Contributor
  • ***
  • Posts: 15874
  • Country: us
Re: The phasing out of 32 bit
« Reply #21 on: June 07, 2021, 10:38:27 pm »
Arguably things have changed with managed applications and walled gardens like the iPhone for consumer use, which eventually leads back to a discussion about whether personal computers will survive at all.  The various companies selling walled platforms sure say they will not, but the RISC vendors said the same about Intel's x86 and Microsoft Windows, and look where they ended up.

Personal computers will survive into the foreseeable future. There are a lot of people out there whose needs are met by mobile devices, but those are the people who never really needed a PC in the first place, it was just the only way to get on the internet. Now they have other options that work for their use case but millions of other people need a PC. Nobody is developing smartphone apps ON a smartphone, they use a PC. Mobile devices are fine for content consumption but somebody has to make all that content. The PC market is not growing like it once was but that isn't because it's dead, it's because it has matured and there is far less reason to upgrade regularly than there once was, even a 10 year old PC can run most modern software just fine, imagine trying to use a 10 year old PC in 1995 when multimedia was taking off.
 
The following users thanked this post: SilverSolder

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 7420
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: The phasing out of 32 bit
« Reply #22 on: June 08, 2021, 12:26:30 am »
Transmeta was mostly sunk by emulation of ancient 16-bit code.  Their binary translation layer apparently worked well enough for 32-bit code, but Windows at the time had too much 16-bit code, which they had to emulate slowly in software.  They were also trying to virtualize the entire OS, including protected instructions and hardware access, which tend to be the hardest things to translate and require slow emulation.
So in other words, if they had come up with that when 2000 or XP was in mainstream use, things might have gone quite differently for them.
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 2324
  • Country: nz
  • Formerly SiFive, Samsung R&D
Re: The phasing out of 32 bit
« Reply #23 on: June 08, 2021, 01:53:18 am »
I don't think supporting 32bit is really all that much extra transistors.

Even if you are a 64bit chip you still need a way to move around 8bit 16bit 32bit words. You don't want an architecture that requires a 8bit value in a memory structure to be padded out with extra 56 zeroes to fit into 64bits.
If you look closer you'll see an integer (the most common;ly used storage type in C software which is the base of most of the software) is still 32 bit on most platforms. In the end the only difference between 32 bit and 64 bit is the memory space.

and that's an issue for naughty code that assumes a pointer fits in an int

That already didn't work on either 8086 or 68000, 40+ years ago.

"long" is a better bet, and works pretty much everywhere except 64 bit Windows, where both int and long are 32 bit and for pointers you need "long long". Grrrr.

"intptr_t" or "uintptr_t" is the only correct way to do it. ("size_t" will usually work, but isn't strictly correct)
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 13439
  • Country: us
  • DavidH
Re: The phasing out of 32 bit
« Reply #24 on: June 08, 2021, 02:01:03 am »
Personal computers will survive into the foreseeable future. There are a lot of people out there whose needs are met by mobile devices, but those are the people who never really needed a PC in the first place, it was just the only way to get on the internet. Now they have other options that work for their use case but millions of other people need a PC. Nobody is developing smartphone apps ON a smartphone, they use a PC. Mobile devices are fine for content consumption but somebody has to make all that content. The PC market is not growing like it once was but that isn't because it's dead, it's because it has matured and there is far less reason to upgrade regularly than there once was, even a 10 year old PC can run most modern software just fine, imagine trying to use a 10 year old PC in 1995 when multimedia was taking off.

I think the PC will survive as well but for a different reason.  It existed before the advent of "consumer" personal computers for those who were interested in the form of business computers and development systems.  Various CP/M systems come to mind.

What is less clear is whether that market is large enough to support the development and production of the parts needed to build those systems, including CPUs, RAM, GPUs, etc.  These have been paid for by consumer demand for a long time, leading to an economy of scale which will not exist in the future.  Microsoft will abandon it by that time, but Linux will be well placed to take up the slack.

Transmeta was mostly sunk by emulation of ancient 16-bit code.  Their binary translation layer apparently worked well enough for 32-bit code, but Windows at the time had too much 16-bit code, which they had to emulate slowly in software.  They were also trying to virtualize the entire OS, including protected instructions and hardware access, which tend to be the hardest things to translate and require slow emulation.

So in other words, if they had come up with that when 2000 or XP was in mainstream use, things might have gone quite differently for them.

I do not remember the details but Linus Torvalds who worked there has quite a lot to say about the subject which you can find online through a search.
 

