Author Topic: Which programming languages do you use most?  (Read 26634 times)


Offline denizcan

  • Regular Contributor
  • *
  • Posts: 59
Re: Which programming languages do you use most?
« Reply #100 on: October 26, 2013, 08:59:14 pm »
Well, maybe you should re-read my post again as you seem to have read things that I didn't write. E.g. I mentioned encapsulation and inheritance as positive examples which you can use without heavy costs. You seem to have read the opposite, so it's kinda hard to argue with people who don't even closely read the things they get upset about.

Anyway, as soon as you create an instance of a non-static object (which you do all the time in a normal object oriented program - most of the time without even noticing it), you allocate RAM and the constructor is called if there is one (else the object members are initialized as 0). If you consider a normal OOP scenario, where objects are created and destroyed all the time, this creates a significant overhead.
Besides, as I pointed out, dynamic allocation alone is a no-go in a typical microcontroller project.

Nope I read it.. I think you didn't read mine..

Your claim was that C++ is a bad idea in a real-time application with a few kilobytes of RAM.. That covers encapsulation, inheritance, and polymorphism.. Encapsulation and inheritance are more or less usable according to you, but polymorphism is not.. What I understood is: not using any of them is preferable.. Am I wrong? I just wanted to say that the "may"s you mentioned have no cost.. Nothing different from C, you do not need to think about them, they are a neutral element. Ok, let's drop encapsulation and inheritance.. Let's say we are saying the same thing..

I wrote inheritance accidentally, but what I meant was polymorphism. I said that your equivalent switch-case or function-pointer implementation in C would probably be worse than what C++ produces.. Calling a virtual function just adds one direct load (the vtable pointer) and one extra indexed load (the method address).. Two instructions more.. Just that.. But if you want to implement the same behaviour in C, you use a switch statement, which adds lots of comparisons and jumps.. C++ is more efficient than C in that sense.. In C, if you want to avoid the switch-case, you use a function pointer, and then you lose a pointer's worth of RAM, which is equivalent to the cost of the vtable pointer. What about 3 functions? It would be better to use a struct that holds the function pointers, plus a link to that struct in RAM.. The produced code would be even worse than what C++ produces.. So what's the difference? Let's say the C compiler is clever and produces the same instructions C++ produces; then the difference may not be the MCU resources you use, but the time you spend typing.. In any case, polymorphism is more powerful and more efficient than the C equivalent..

I also tried to mention that you do not need dynamic allocation in C++ any more than you need it in C. From your claim and others', C++ seems to mean dynamic allocation. It is as much overhead in C as it is in C++.. Which means it is unrelated to C++..

OOP is one thing, C++ is another.. You can do OOP in C as well as in C++.. For instance you can write a window manager and different widgets such as Labels, Buttons etc. -which are nice examples of OOP- in both languages.. In both cases they would eat nearly the same resources.. I also claim that the C implementation would cost more if you are not careful.. But if you use C++, your code will be more solid and you will spend less time.. Also, if one language is able to do something easily that eats resources, that does not mean it is a resource-eating language..

Keeping the above in mind, even in real-time applications, if it solves the problem nicely, using polymorphism is better than not using it..

If you are using an MCU with 128 B of RAM today, then unless it does something very, very simple, and unless using it is the wise choice, I am sorry but you are doing it as a hobby, not as a professional. Or your time is cheaper than what you create.. Are you designing devices that are mass produced, in millions? If not, let's say you spend 1 dollar more and you produce 1000 units. You will lose $1000. If using some tool saves me a few days in development, and even saves me after production (in-app programming, remote access, etc.), then it's worth wasting some resources and spending $1000 more. If $1000 is a lot for your project, then your time is cheap. I recommend leaving that project and finding a better one. For my projects, most of the time, the cost of the board is much less than what my time costs. As I said, C++ is more efficient than C given general coding habits.. So I am not wasting anything after all.. :)

If you do a little searching you will see that new chips are cheaper and more capable than the old "toy" chips..
 

Offline denizcan

  • Regular Contributor
  • *
  • Posts: 59
Re: Which programming languages do you use most?
« Reply #101 on: October 26, 2013, 10:24:32 pm »
Could we not define (in the programming-language sense, not the mathematical sense) our own implementation of any concept? I don't care what Queue means from the point of view of a Computer Scientist.
You obviously don't care about communicating with other developers either. I've run into a couple of programmers who insisted on inventing everything themselves and using their own nomenclature. Every time it was a pain trying to figure out what they actually meant when using supposedly standard concepts like "lists" and "queues" and so on. It didn't help that by ignoring all the literature and work that had gone before them, their re-invented wheels were usually pretty bad.

I am trying to have a nice library that is "MCU ready", and I think you are exaggerating a little bit.. If the STL were common, well known, and used by everybody, then I would agree with you.. It is interesting to taste, it is taught in schools, but you won't see it in real programs much. Even its naming conventions are not compatible with mine, which would force me to wrap around it..

Besides, every well-known library has its own Queue definition.. Qt has QQueue, .NET has Queue.. and so on.. I have a very comprehensive library -let's say the Qt of MCUs- so mine has its own..

Queue::push pushes the given argument into the list, ::pop pops from the list and returns it, in first-in first-out order.. It comes from a Fifo, that's why push and pop are there.. I will rename those to enqueue and dequeue; however, I will have to refactor my projects. And I will, for the sake of clarity. List::add adds the item at the end, List::insert inserts the item before a given position.. etc. etc.. Similar to Qt's and .NET's Queue and List classes. I am not using exceptions. pop returns a value filled with 0 if there is no item in the list. Queue::push returns true if successful, and if you want a result for pop, you use Word Queue::pop(T& value). This makes usage a lot easier:

Code: [Select]
Short v;
if (queue.pop(v))
{
  // use v, as an example
  dac.setValue(v);
}

the other form is easier when you use it as a parameter:
Code: [Select]
if (queue.hasItem())
  dac.setValue(queue.pop());

Another difference from the standard Queue is that my implementation is not dynamic. It allocates in the constructor.. You may specify how many items to allocate; the default is 16.. If you don't want to use the heap -which I prefer to avoid, since each heap allocation costs 8 extra bytes for its linked-list record in 32-bit environments- you may:

Code: [Select]
Short buffer[16];
Queue<Short> queue(buffer, sizeof(buffer));

Static allocation is not as dynamic as dynamic :) allocation, however it is still powerful. At least you don't have to think about head/tail indexes etc.. You may pass the queue object to other classes/functions/methods.. It may act as a kind of link between objects..

The STL's definition is a little ugly. queue::pop pops but returns nothing; you read the element with queue::front first, and pop then destroys it.. Not efficient, two calls.. Unnecessary. Did someone say destructor?

What's wrong with mine? What's the "reinvention"? It is just an adaptation, making it MCU safe.. Err, maybe adoption.. :)
 

Offline 0xdeadbeef

  • Super Contributor
  • ***
  • Posts: 1576
  • Country: de
Re: Which programming languages do you use most?
« Reply #102 on: October 26, 2013, 10:45:26 pm »
Nope I read it.. I think you didn't read mine..
Well, as you claimed I wrote the opposite of what I wrote, I kinda gave up on reading all the futile details.

Your claim was that C++ is a bad idea in a real-time application with a few kilobytes of RAM.. That covers encapsulation, inheritance, and polymorphism.. Encapsulation and inheritance are more or less usable according to you, but polymorphism is not.. What I understood is: not using any of them is preferable.. Am I wrong? I just wanted to say that the "may"s you mentioned have no cost.. Nothing different from C, you do not need to think about them, they are a neutral element. Ok, let's drop encapsulation and inheritance.. Let's say we are saying the same thing..
Pretty simple: everything that can be done at compile time usually doesn't cost much during runtime (apart from wasting a bit of RAM and code, maybe). Everything that has to be done during runtime (e.g. polymorphism) should be avoided.

I wrote inheritance accidentally, but what I meant was polymorphism. I said that your equivalent switch-case or function-pointer implementation in C would probably be worse than what C++ produces.. Calling a virtual function just adds one direct load (the vtable pointer) and one extra indexed load (the method address).. Two instructions more.. Just that.. But if you want to implement the same behaviour in C, you use a switch statement, which adds lots of comparisons and jumps.. C++ is more efficient than C in that sense.. In C, if you want to avoid the switch-case, you use a function pointer, and then you lose a pointer's worth of RAM, which is equivalent to the cost of the vtable pointer. What about 3 functions? It would be better to use a struct that holds the function pointers, plus a link to that struct in RAM.. The produced code would be even worse than what C++ produces.. So what's the difference? Let's say the C compiler is clever and produces the same instructions C++ produces; then the difference may not be the MCU resources you use, but the time you spend typing.. In any case, polymorphism is more powerful and more efficient than the C equivalent..
I wouldn't deny that OOP is elegant for many problems. I do deny though that a compiler can ever create more efficient code in case of polymorphism (and many other things) than a clever programmer.
Again: yes, OOP may save you some time during development, but you lose control over resources. Which is OK for powerful systems but not for typical microcontroller stuff.

I also tried to mention that you do not need dynamic allocation in C++ any more than you need it in C. From your claim and others', C++ seems to mean dynamic allocation. It is as much overhead in C as it is in C++.. Which means it is unrelated to C++..
And again: dynamic allocation of objects is what you do in a typical OO program all the time. When manipulating Strings, Lists, Vectors etc., you create and destroy objects constantly - even if you don't notice it. In C, you can easily avoid dynamic allocation altogether. And on a microcontroller with little RAM you'd better do so. Firstly because real dynamic heap allocation costs runtime, secondly because you can't afford to end up in a heap overflow.

OOP is one thing, C++ is another..
True. But it doesn't make much sense to program in C++ and avoid OOP. Or let's say that then you're programming C and just calling it C++.

You can do OOP in C as well as in C++..
Not really.

For instance you can write a window manager and different widgets such as Labels, Buttons etc. -which are nice examples of OOP- in both languages.. In both cases they would eat nearly the same resources.. I also claim that the C implementation would cost more if you are not careful.. But if you use C++, your code will be more solid and you will spend less time.. Also, if one language is able to do something easily that eats resources, that does not mean it is a resource-eating language..
GUI programming is really the best example of where object orientation makes sense. In C you're forced to copy large sections of code over and over again, or you need to clutter your code with switch-cases and/or function pointers. But do you create large GUIs in a typical microcontroller project? Usually not.

Keeping the above in mind, even in real-time applications, if it solves the problem nicely, using polymorphism is better than not using it..
Nope.

If you are using an MCU with 128 B of RAM today, then unless it does something very, very simple, and unless using it is the wise choice, I am sorry but you are doing it as a hobby, not as a professional. Or your time is cheaper than what you create..
Wrong again. I would never mess around with a low-end PIC as a hobby. And yes, they are still used in commercial applications, for things like waking up the main µC etc.
Indeed, I recently analyzed PIC ASM code at work to estimate the effort for a functional change or a port to a newer PIC device. Luckily we then decided to stay with the old solution.

Are you designing devices that are mass produced, in millions? If not, let's say you spend 1 dollar more and you produce 1000 units. You will lose $1000. If using some tool saves me a few days in development, and even saves me after production (in-app programming, remote access, etc.), then it's worth wasting some resources and spending $1000 more. If $1000 is a lot for your project, then your time is cheap. I recommend leaving that project and finding a better one. For my projects, most of the time, the cost of the board is much less than what my time costs. As I said, C++ is more efficient than C given general coding habits.. So I am not wasting anything after all.. :)
Our projects are usually produced in the hundreds of thousands. Saving 5 cents on some transistor saves €15000 for the part alone if you produce 300000 units, even leaving aside stock-keeping and set-up costs. So wasting €1 or more on a bigger CPU is out of the question. Indeed, when looking at the overall costs, the SW development is only a small (and fixed) fraction. So some additional man-hours for SW efforts really don't make much of a difference.

If you do a little searching you will see that new chips are cheaper and more capable than the old "toy" chips..
If you had ever worked on a really big-scale commercial project, you would know how unrealistic it is to quickly google a new chip and then use it in a safety-relevant environment. Validation of new devices, stock-keeping etc. cost a lot. Much more than a few weeks of SW development anyway.
« Last Edit: October 26, 2013, 10:49:53 pm by 0xdeadbeef »
Trying is the first step towards failure - Homer J. Simpson
 

Offline denizcan

  • Regular Contributor
  • *
  • Posts: 59
Re: Which programming languages do you use most?
« Reply #103 on: October 31, 2013, 11:26:56 am »
You claim something but you do not give any measurable data.. For instance:

Quote
Everything that has to be done during runtime (e.g. polymorphism) should be avoided.

Why? How did you measure the cost of polymorphism? You just say "NO".. Give an example of a problem. Cost in C, cost in C++. I gave you one problem and 4 solutions: one solved by switch-case, another by function pointer, another by pointer table, and the final one by virtual method. The virtual method costs one vtable pointer read and one method address read. Which is a lot faster, a lot less code, and neater than the usual -I insist on usual- C coding style.. You can optimize everything, but the usual coding approach is more important.. That is a measurement; what is yours?

Quote
And again: dynamic allocation of objects is what you do in a typical OO program all the time.

Again, no examples.. It is not a C++ issue, it is the library you are using or your approach.. You don't have to do any dynamic allocation; it's totally up to your implementation. I coded an operating system, a HAL, graphics drivers, a window management system, string handling routines, etc. etc. with NO HEAP USAGE.. Only the GUI system has its own buffer. The window manager uses the given buffer for clipping-map calculation. Think about overlapping widgets: where one is obstructed by others, the graphics system calculates a map of the open areas, which needs to be done at runtime.. The GUI uses part of the given buffer as a line buffer -to reduce clip-region checks- and for overlay handling.. All of them allocate incrementally and release totally; again, no heap - the GUI's internal memory manager does this.. Which is ultra fast.. This is the nature of the system. Even if you coded it in C you would also have to use the same method.. It's not the language, it is the method.

To sum up: in common practice, yes, you do a lot of dynamic stuff. But you can avoid it - I did.

Quote
Quote from: denizcan on October 27, 2013, 07:59:14 AM
You can do OOP in C as well as in C++..
Not really.

Yes you can.. Just think about a window manager written in C. All C implementations are ugly, BTW. You create a widget, and while creating it you pass its handler callback function. The manager creates a struct that holds information about that widget; your widget has a switch statement in its handler function that handles the incoming messages. If it does not handle a message, or handles it but needs some help from the default, it calls the default handler.. etc. etc. Windows and containers have lists of containers and widgets. Etc. From start to finish, OOP. And if this isn't RAM- and ROM-costly polymorphism, what is it?

The bad thing is, the usual implementations -I insist on implementations- do use dynamic allocation to hold the widget record. With C++, as the object can also store the window manager's bookkeeping, you can avoid dynamic allocation.. The C++ implementation is A LOT neater.. Just look at this:
Code: [Select]
Label label;
Button button;
Window window;
KS108Display display;

void handleButtonClick(void* sender, void* param)
{
  label.setText("Button clicked");
}

int main()
{
  core.init();
  display.open(); // with default pin settings

  window.setBounds(display.bounds);
  label.setLocation(0, 0);
  label.setText("Label1");
  button.setLocation(64, 0);
  button.setText("Press Me");
  button.onClick = handleButtonClick;
  window.add(label);
  window.add(button);
 
  application.setDisplay(display);
  application.setMainWindow(window);
  application.run();
}
You have just created a window, a label, a button and an action for when the button is clicked. I skipped the keyboard and pointer stuff; they are similar: you give the keyboard object the pins to scan and the corresponding keys such as KEY_LEFT, KEY_RETURN etc., and introduce this keyboard to the application. The rest is handled by the system.. As the Window, Button etc. classes also hold the window-management bookkeeping, C++ actually prevented dynamic allocation intrinsically..

Also, the strings in setText etc. are not a string class; they are plain Char* pointers.. No allocation. If you want a Label etc. to hold its own copy of the text, you state that explicitly:
Code: [Select]
label.allocateTextBuffer(80);

or
Code: [Select]
Char labelBuffer[80];
label.setTextBuffer(labelBuffer);

This time the label sets a flag noting that the Char* text field points to an allocated buffer. label.setText now copies the given null-terminated string into the buffer pointed to by the text field..

As you can see, dynamic allocation is implementation dependent, not a MUST.

Quote
I wouldn't deny that OOP is elegant for many problems. I do deny though that a compiler can ever create more efficient code in case of polymorphism (and many other things) than a clever programmer.

The compiler is not as clever as the programmer. However, the programmer is not as careful as the compiler.. You are talking about a programmer who is always careful. That is really back-breaking. Not all of the code needs super optimization. For instance, in a GUI, the pixel processing needs to be ultra fast. But the context stuff, such as graphics.drawString(..) and graphics.drawLine(..), does not have to be. At that level, the method is more important than brute-force power. You get nearly no benefit from making calls to them fast: you will have millions of pixels to send, but only hundreds of calls to those methods. So I focus my optimization TIME on the low-level stuff, not the high level. The compiler's help with polymorphism covers the need for different behavior.

Quote
Quote
Keeping the above in mind, even in real-time applications, if it solves the problem nicely, using polymorphism is better than not using it..
Nope.

Again, no solid info, no measurement. Please at least give a task and explain why polymorphism is not up to it..

Quote
Our projects are usually produced in the hundreds of thousands. Saving 5 cents on some transistor saves €15000 for the part alone if you produce 300000 units, even leaving aside stock-keeping and set-up costs. So wasting €1 or more on a bigger CPU is out of the question. Indeed, when looking at the overall costs, the SW development is only a small (and fixed) fraction. So some additional man-hours for SW efforts really don't make much of a difference.

Ok, I admit, you are in a different area with different problems.. And you are right, the setup and production costs are important for your products.. My approach does not suit your business. What I design are usually relatively big, sophisticated systems produced in at most a few thousands. A 128-byte MCU has no place in them. The software development time is the main issue, not the hardware. Ethernet, a graphical GUI, multiple boards with LIN bus/CAN bus/RF communication are just the starters.. As the hardware is usually ADCs, DACs, a bit of analog signal conditioning and power stuff, the processing, filtering, detection, decision making, signal generation etc. are usually all done in software.. The customer always wants new things, and time to market is important for them. They don't question a few $ more per system.. So for me, using the same code on different platforms and being able to program without reading every detail in the MCU documentation is a lot more important than a few parts of the code being super fast or super resource/cost efficient.. I am lucky, huh? :)
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Which programming languages do you use most?
« Reply #104 on: October 31, 2013, 07:26:49 pm »
Quote
Everything that has to be done during runtime (e.g. polymorphism) should be avoided.
Isn't a lot of polymorphism handled at compile time?  Function overloading is considered polymorphism, for instance.
 

Offline 0xdeadbeef

  • Super Contributor
  • ***
  • Posts: 1576
  • Country: de
Re: Which programming languages do you use most?
« Reply #105 on: October 31, 2013, 09:51:38 pm »
Maybe it's less confusing if we distinguish between runtime/dynamic polymorphism and static polymorphism. The wording doesn't seem to be 100% consistent. E.g. in a Java book I read some years ago, the author used the term only for runtime polymorphism.
Anyway, again: obviously, what can be done at compile time might create code overhead, but it usually doesn't create much of a runtime hit.

Besides, initially I said that the language depends on the task and environment. Of course, if you limit yourself to certain language features, use your own class libraries etc., C++ can also be used in a microcontroller environment with limited resources. Still, with every layer of abstraction you lose control over the resources. E.g. if you use (packed) structs and unions, you know exactly where each byte of your data is located. If you use a static object to encapsulate the data structure, this is not the case anymore. If you use a function pointer to handle e.g. configuration at the function level, you know exactly what the CPU will be doing. If you cast objects at runtime, you can create a large runtime hit without even noticing.

And regarding the "you can use OOP in plain C" argument of denizcan: of course you can mimic OOP in C. Hell, you could even mimic Logo or Lisp in C. Still, due to the missing features of C, you can never e.g. create encapsulation with the same level of security. E.g. if a C module exports a function or variable to other modules of the same function group (so the function/variable can't be static), this global function or variable will always be accessible to the whole project - either by importing the private header containing the corresponding declarations or by just adding the missing declarations to your own file. So your encapsulation is worthless.

Indeed, proper encapsulation would be the main benefit in our projects from my point of view. Then again, working with hundreds of developers all over the world is already complicated enough with C as we need pretty extensive coding guidelines to limit the things you could do in C to those that should be used in our projects. With C++ it would be so much harder to define which language features to use in which situations and which to avoid under which circumstances.
Trying is the first step towards failure - Homer J. Simpson
 

Offline denizcan

  • Regular Contributor
  • *
  • Posts: 59
Re: Which programming languages do you use most?
« Reply #106 on: October 31, 2013, 10:08:57 pm »
Quote
Everything that has to be done during runtime (e.g. polymorphism) should be avoided.
Isn't a lot of polymorphism handled at compile time?  Function overloading is considered polymorphism, for instance.

Overloads are resolved at compile time; virtual methods are handled at runtime.. To call a virtual method, the CPU reads the first pointer field in the object (whether it comes first depends on single vs. multiple inheritance) - this is the vtable pointer - then it reads the address of the method from that table.. This is what all the debate is about:

virtual function call
Code: [Select]
0x000001E6 6801      LDR      r1,[r0,#0x00] ; load vptr to r1
0x000001E8 6809      LDR      r1,[r1,#0x00] ; load function pointer to r1
0x000001EA 4788      BLX      r1 ; call r1

direct function call
Code: [Select]
0x000001E6 F000FD7C  BL.W     Display::open (0x00000CE2)

In both cases r0 points to the object.. As you see, a virtual method call adds two extra load instructions when you call through a pointer, such as display->open(). That means two extra cycles. They add nothing for static objects, as in display.open(); - with a static object the compiler knows the last overridden method, so it is the same as a direct call..

I think people are exaggerating. I frequently come across code like this from C++ haters:
Code: [Select]
void UART_Open(int iUartId)
{
  switch(iUartId)
  {
    case 0:
    //open uart 0, bla bla
    break;
    case 1:
    // open uart 1 bla bla
    break;
  }
}

int main()
{
  UART_Open(0);
}

This is far more horribly unoptimized; I cannot find an adjective to describe it. Also, a guy -I do not mean 0xdeadbeef- writing this code cannot say C++ is not for real time or anything else.. Just compile that and compare it with my virtual uart0.open(); as you can see, it is a call on a static object, so the compiler selects the correct method address at compile time and just calls it. It is something like a switch statement with zero cost.. The C implementation creates a lot of instructions. The C++ implementation is faster.. The only cost is a vtable pointer in RAM for the object, plus the table of method pointers; the rest is the same as C..
 

Offline Slothie

  • Regular Contributor
  • *
  • Posts: 66
Re: Which programming languages do you use most?
« Reply #107 on: October 31, 2013, 10:18:01 pm »
Hahaha^^^^ this one made me laugh. These are really good responses! For those that have already posted as well as those who are yet to post what's the most interesting program you've had to write so far and how long did it take you?

In the 80s, in my first job, I wrote a UHT Milk production scheduling system on a Commodore PET 32K with twin 400K disk drives. I was working at a minicomputer software house at the time, and they gave me the job because someone from United Dairies called them up and I'd mentioned I'd written a timetable management program on the PET when I was at school.....

Probably not the most difficult challenge of my career but certainly the oddest. As a result I know more about the production of UHT milk (at least in the 80s) than I'd prefer to mention.
 

Offline denizcan

  • Regular Contributor
  • *
  • Posts: 59
Re: Which programming languages do you use most?
« Reply #108 on: October 31, 2013, 10:33:21 pm »
Quote
E.g. if you use (packed) structs and unions, you know exactly where each byte of your data is located.

Nope, you still can.. class and struct are the same. If you have a packed class, then you have a packed struct.. The only exception is the vptr, if you have a virtual method..

Quote
If cast objects during runtime, you can create a large runtime hit without even noticing.

Nope, you don't - unless you explicitly enable run-time type info in the compiler and use the dynamic_cast<> operator.. Those are nice tools on a PC, helping to find errors and other exotic stuff, but they are not needed on embedded.. I must rephrase: you explicitly write dynamic_cast<MyClass*>. It is also really ugly to write, though.. :)

this is just a pointer cast:
Code: [Select]
Widget* w = &button;
w = (Widget*)0x1234;

They are exactly the same in C and C++. It means you know the type; you just assign the address.. Predictable to the last bit.. Same code..

As far as I understand, people are strongly biased by the C++ stuff produced for PCs.. As C is less powerful, programmers cannot go beyond it, and this inability made people think that C was more efficient.. If I were to write Linux from scratch, I would write it all in C++ with my additions, such as better method pointers.. I mean, I would add them to GCC.. :)
« Last Edit: October 31, 2013, 10:37:13 pm by denizcan »
 

Offline Tris20Topic starter

  • Regular Contributor
  • *
  • Posts: 84
  • Country: gb
Re: Which programming languages do you use most?
« Reply #109 on: November 02, 2013, 01:08:08 am »
In the 80s, in my first job, I wrote a UHT Milk production scheduling system on a Commodore PET 32K with twin 400K disk drives. I was working at a minicomputer software house at the time, and they gave me the job because someone from United Dairies called them up and I'd mentioned I'd written a timetable management program on the PET when I was at school.....

Probably not the most difficult challenge of my career but certainly the oddest. As a result I know more about the production of UHT milk (at least in the 80s) than I'd prefer to mention.

^^ haha, nice!.
 

