You make claims but give no measurable data. For instance:
Everything that has to be done during runtime (e.g. polymorphism) should be avoided.
Why? How did you measure the cost of polymorphism? You just say "NO". Give an example of a problem, with the cost in C and the cost in C++. I gave you one problem and four solutions: one solved by switch-case, one by function pointer, one by pointer table, and the last by virtual method. A virtual method call costs one vtable pointer read plus one method address read. That is a lot faster, a lot less code, and neater than the usual (I insist on usual) C coding style. Here I insist on coding style: you can optimize everything, but the usual coding approach is more important. That is a measurement; what is yours?
And again: dynamic allocation of objects is what you do in a typical OO program all the time.
Again, no examples. It is not a C++ issue; it is the library you are using, or your approach. You don't have to do any dynamic allocation; it is entirely up to your implementation. I have coded an operating system, HAL, graphics drivers, a window management system, string handling routines, and so on, with NO HEAP USAGE. Only the GUI system has its own buffer. The window manager uses a given buffer for clipping map calculation: think of overlapping widgets, where one is partly obstructed by others, so the graphics system calculates maps of the open areas, which must be done at runtime. The GUI uses part of that buffer as a line buffer (to reduce clip region checks) and for overlay handling. All of these allocate incrementally and release everything at once; again, no heap. The GUI's internal memory manager does this, and it is ultra fast. This is the nature of the system: even if you coded it in C, you would have to use the same method. It is not the language, it is the method.
To sum up: in common practice, yes, you do a lot of dynamic allocation. But you can avoid it, and I did.
Quote from: denizcan on October 27, 2013, 07:59:14 AM
You can do OOP in C as well as in C++..
Not really.
Yes, you can. Just think about a window manager written in C (all C implementations are ugly, by the way). You create a widget, and while creating it you pass its handler callback function. The manager creates a struct that holds information about that widget, and your widget has a switch statement in its handler function that handles incoming messages. If it does not handle a message, or handles it but needs some help from the default behavior, it calls the default handler, and so on. Windows and containers hold lists of containers and widgets. From start to finish, this is OOP. And if that isn't RAM- and ROM-costly polymorphism, what is it?
The bad thing is that usual implementations (I insist on implementations) use dynamic allocation to hold the widget record. With C++, since the object itself can also store the window manager's bookkeeping, you can avoid dynamic allocation. The C++ implementation is A LOT neater. Just look at this:
Label label;
Button button;
Window window;
KS108Display display;
void handleButtonClick(void* sender, void* param)
{
label.setText("Button clicked");
}
int main()
{
core.init();
display.open(); // with default pin settings
window.setBounds(display.bounds);
label.setLocation(0, 0);
label.setText("Label1");
button.setLocation(64, 0);
button.setText("Press Me");
button.onClick = handleButtonClick;
window.add(label);
window.add(button);
application.setDisplay(display);
application.setMainWindow(window);
application.run();
}
You just created a window, a label, a button, and an action for when the button is clicked. I skipped the keyboard and pointer stuff; they are similar: you give a keyboard object the pins to scan and the corresponding keys such as KEY_LEFT, KEY_RETURN, etc., and introduce this keyboard to the application. The rest is handled by the system. Because the Window, Button, etc. classes also hold the window management data, C++ actually prevented dynamic allocation intrinsically.
Also, the strings passed to setText etc. are not a string class; they are plain Char* pointers, so there is no allocation. If you want a Label etc. to hold its own copy of the text, you state that explicitly:
label.allocateTextBuffer(80);
or
Char labelBuffer[80];
label.setTextBuffer(labelBuffer);
This time the label sets a flag recording that what its Char* text field points to is an allocated buffer, and label.setText now copies the given null-terminated string into the buffer pointed to by the text field.
As you can see, dynamic allocation is implementation dependent, not a MUST.
I wouldn't deny that OOP is elegant for many problems. I do deny though that a compiler can ever create more efficient code in case of polymorphism (and many other things) than a clever programmer.
The compiler is not as clever as the programmer. However, the programmer is not as careful as the compiler. You are talking about a programmer who is always careful, and that is really back-breaking. Not all of the code needs super optimization. In a GUI, for instance, pixel processing needs to be ultra fast, but context-level stuff such as graphics.drawString(..) and graphics.drawLine(..) does not. At that level, the method matters more than brute-force power. You gain nearly nothing from making the calls to them fast: you have millions of pixels to send, but only hundreds of calls to those methods. So I focus my optimization TIME on the low-level stuff, not the high level. Compiler support for polymorphism solves that need for differing behavior.
Keeping the above in mind, even in realtime applications, if polymorphism solves the problem nicely, using it is better than not using it.
Nope.
Again, no solid information, no measurement. Please at least give a concrete task and explain why polymorphism is not up to it.
Our projects are usually produced in the hundreds of thousands. Saving 5 Cents for some transistor saves 15000€ for the part alone if you produce 300000 units, even letting aside stockkeeping and set-up costs. So wasting 1€ or more on a bigger CPU is out of question. Indeed, when looking at the overall costs, the SW development is only a small (and fixed) fraction. So some additional man hours for SW efforts really don't make much of a difference.
OK, I admit, you are in a different area with different problems, and you are right that setup and production cost matter for your products. My approach does not suit your business. What I design are usually relatively big, sophisticated systems produced in at most a few thousand units; a 128-byte MCU has no place to fit in. Software development time is the main issue, not the hardware. Ethernet, a graphical GUI, and multiple boards with LINBUS/CANBUS/RF communication are just the starters. Since the hardware is usually ADCs, DACs, a bit of analog signal conditioning, and power stuff, the processing, filtering, detection, decision making, signal generation, etc. are usually all done in software. The customer always wants new things, and time to market is important for them; they don't question a few dollars more for a system. So for me, using the same code on different platforms and being able to program without reading every detail of the MCU documentation is a lot more important than making a few parts of the code super fast or super resource/cost efficient. I am lucky, huh?