They are a bit more than static code analysis, or they pretend to be...
I'm wondering why no one mentioned Rust.
It's too new, and not there yet. Last time I looked at it, it seemed like one person had managed to build a "Hello world"-style app for a microcontroller. Maybe the situation has improved since then, but C++14 combined with the Core Guidelines and Guidelines Support Library will provide most, if not all, of the features of Rust. Some of the guideline checkers will be implemented in the next update of VS2015; I don't know if GCC has started implementing them yet.
For an introduction to the core guidelines, watch Bjarne Stroustrup's CppCon talk "Writing Good C++14" and Herb Sutter's talk "Writing Good C++14... By Default."
Does it have runtime bounds checking by default?
I do tend to agree it's a little out of hand...
The technical reference manual for the new TI Sitara AM572x is about 7000 pages. That's just insane!
Seven. Thousand.
http://www.ti.com/lit/ug/spruhz6d/spruhz6d.pdf
The AM5728 is powering the new BeagleBoard X15 -
http://www.elinux.org/Beagleboard:BeagleBoard-X15
This looks like a nice evaluation kit without the need to try to cobble together an initial boot image from a BSP that barely even cross compiles. (Xilinx, looking at you.)
I like how TI is embracing gcc and open source. They even added mainline gcc support for the C66x DSP (of which the AM5728 has two, in addition to the two Cortex-A15 cores, four Cortex-M4 cores, and the two oddball real-time PRU cores).
C and C++ are the cancer of (embedded) computing. If C/C++ adopted Ada's strong type checking and runtime range checking, they would be quite usable languages for production work.
Young kids always think they have more than enough... I mean CPU resources. By the way, there are a few modern languages that fit your criteria - go get them. Hint: BASIC, Java, Python? etc... I'm tired of the C/C++ vs. other languages fighting BS. Don't get me wrong, I don't treat C/C++ as holy, but there are a lot more important things than that, really...
Strong type checking is a compile-time feature which doesn't consume any extra cycles on the target processor. Runtime checking will add extra cycles and increase the code size, but it can be enabled and disabled as needed, so the impact can be controlled.
Ada-like strict type checking won't cost a thing, as it is performed at compile time. The runtime checks can be enabled or disabled as needed per module, so their impact is well controlled. Runtime exception handling is not necessary; to keep things simple and avoid code bloat, errors can be handled by a general trap function which is invoked when something nasty happens, prints the error location, and restarts the device.
Oh! I need that one, right now!
Well, if you do not need that feature or understand what it implies, that doesn't mean the feature is useless. Do you think it is better that the software continues its execution as if nothing has happened even when something nasty occurs - like an array index overflow, a variable value overflow, etc. - or should the system or application print an error message to the debug interface, reset, and try to recover?
Are you talking about static code analysis?
Static code analysis is almost useless in C/C++ due to the fact that the language itself has a very weak type system. Static analysis is more useful in languages in which the type checking and type declarations are more specific about numeric ranges etc. For example, SPARK is a subset of the Ada language, and it can perform extensive static analysis of program code. SPARK's ultimate aim is to be able to prove software mathematically correct. I do not know SPARK well enough to go into details.
C and C++ are the cancer of (embedded) computing. If C/C++ adopted Ada's strong type checking and runtime range checking, they would be quite usable languages for production work. And macros with side effects quite often create bugs that are hard to spot.
Although Linux is an example of a large-scale C project, even Linux would benefit from improved type checking, range checking, etc.
Ada-like strict type checking won't cost a thing, as it is performed at compile time. The runtime checks can be enabled or disabled as needed per module, so their impact is well controlled. Runtime exception handling is not necessary; to keep things simple and avoid code bloat, errors can be handled by a general trap function which is invoked when something nasty happens, prints the error location, and restarts the device.
Did you hear about MISRA? That seems to be the answer of the "(automotive) industry" to your exact requirement... instead of building a MISRA-C compiler, they go for a check after you write. Sorry, but that is an abomination.
MISRA contains a good set of rules for avoiding the bad features of C/C++. MISRA is an attempt to fix a broken language by limiting the allowed constructs and giving rules, guidelines, and recommendations on how to write better and safer programs in C/C++. Like you said, there are compilers available that will validate whether the source code adheres to MISRA's rules and recommendations. Even GCC will check the source code for possible problems if you enable all warnings.
So what will the embedded world look like 5-10 years from now? Will there, for example, be 2-dollar MCUs containing
10 very fast CPUs and only rudimentary interface logic and SW-based peripherals? Wouldn't that be nice?
Will these MCUs be running Ada, since the downsides of C are becoming more and more evident?
I suspect that the market will continue much as it is, you can roughly divide this into three groups.
1. Very cheap 8-bit micros, like the Atmel AVR. ARM will probably remain at the upper end of this space.
2. Decent sized ARM chips running a full OS like Linux. These will have a range but I expect in ten years even the bottom end will be multiple cores with a solid amount of RAM thrown in.
3. Flexible programmable hardware with cores, such as FPGAs and Cypress PSOC chips.
I think over the next ten years there will be movement between the categories: Intel is playing with introducing programmable logic into CPUs, and I expect to see specialised elements like GPU cores in FPGAs for floating-point or massively parallel tasks. You can also expect continuing advances in power usage, probably with the hardware and OS working together to do things like shut down chunks of RAM or peripherals when idle.
The programming language question is, I feel, a bit misguided, because it depends on the micro. The cheap micros will continue to be ASM/C/C++, as there are substantial advantages to being very close to the hardware, probably shifting towards C over time. The big-core systems run a full OS; once you take that step, the system will probably involve multiple programs written in whatever language best suits the task.
I do tend to agree it's a little out of hand...
The technical reference manual for the new TI Sitara AM572x is about 7000 pages. That's just insane! Seven. Thousand.
http://www.ti.com/lit/ug/spruhz6d/spruhz6d.pdf
Say it with emphasis, like this!
SEVEN THOUSAND PAGES........
That's just INSANE! With more cores and fewer peripherals the manuals will indeed shrink! The focus of today - why-doesn't-my-peripheral-work - will then shift to why-doesn't-my-function-work..... and endless debates about how a particular function is correctly implemented... with references to incomplete language standards....
I wonder what Agatha Christie would have thought about today's product manuals?!
SEVEN THOUSAND PAGES........ IT'S INSANE!
If it is not over 9000, then it is normal. However, if it is over 9000, then:
Summary: Are developers starting to think about moving beyond C and C++?
C sucks
He mentioned this without saying what he hates about C. He himself said over 90% of developers use C... That can not be purposeless, I guess.
We're lazy and the market isn't very good at punishing long tail failures.
Not only that, but he is probably salty because C++ programmers are the top-paid ones.
Can you explain why that is?
I mean, someone who programs MCUs like PICs or ARMs using C does the job right. What is the difference between programming them in C and C++?
I wouldn't use C++ for an MCU like a PIC or an ARM Cortex-M. I was just stating that, in general, C++ programming is the highest-paid one.
Why? It seems C++ has the best ratio of productivity, performance, resource overhead, re-usability, and large-team friendliness.
Just the performance per watt makes it the most cost-effective. Memory is a one-time cost, so it doesn't matter much.
I wouldn't use C++ for an MCU like PIC or and ARM Cortex-M.
There are (AFAIK) no C++ compilers for the 8- and 16-bit PICs, but there's nothing preventing its use on either PIC32s or Cortex-Ms.
There are also some other reasons why you may not want to use C++ for MCU development. Unless you really need the OOP machinery with classes, inheritance, templates and what not, you are only imposing significant compilation-time and code-size overhead (the C++ runtime) on yourself, with few to no benefits over straight C. You don't need things like operator overloading or classes for twiddling bits on a port or driving an SPI peripheral.
Do us a favor and compile a bit of test code for your favorite platform in both C and C++ modes, with optimization at least -O1 and -flto if available, and see if there is any overhead for the "runtime". A great many useful C++ features add none on any platform I have used, or no more than the code you would write to implement the feature yourself.
This is an old myth on all but the shittiest compilers.
Here, I'll compile some things for AVR, using various C++ features, and show you what the overhead is.
First, a quick test to make sure there isn't any overhead in just switching to C++. Here's a test, toggling PA0:
#include <avr/io.h>

int main(void)
{
    VPORT0.DIR = 0x01;

    for (;;) {
        VPORT0.OUT |= 0x01;
        VPORT0.OUT &= ~0x01;
    }
}
Same file saved as both test.c and test.cpp.
% avr-gcc -mmcu=atxmega32e5 -g -O1 test.c -o test.o
% avr-size -C --mcu=atxmega32e5 test.o
AVR Memory Usage
----------------
Device: atxmega32e5
Program: 210 bytes (0.6% Full)
(.text + .data + .bootloader)
Data: 0 bytes (0.0% Full)
(.data + .bss + .noinit)
% avr-gcc -mmcu=atxmega32e5 -g -O1 test.cpp -o test.o
% avr-size -C --mcu=atxmega32e5 test.o
AVR Memory Usage
----------------
Device: atxmega32e5
Program: 210 bytes (0.6% Full)
(.text + .data + .bootloader)
Data: 0 bytes (0.0% Full)
(.data + .bss + .noinit)
So no increase there.
Let's try some C++ features, shall we? Using a templated class to wrap a pin:
#include <avr/io.h>
#include <stdint.h>

#define VPORT(addr) ((VPORT_t*)(addr))

template<intptr_t vport, uint8_t npin>
class VPortPin
{
public:
    void init(void) {
        VPORT(vport)->DIR |= (1 << npin);
    }

    bool operator=(bool rhs) {
        if (rhs) {
            VPORT(vport)->OUT |= (1 << npin);
        } else {
            VPORT(vport)->OUT &= ~(1 << npin);
        }
        return rhs;
    }
};

int main(void)
{
    // Unfortunately gcc or C++ isn't smart enough to handle &(*(a))
    // in a constant expression. 0x0010 is the address of VPORT0.
    VPortPin<0x0010, 0> PA0;

    PA0.init();

    for (;;) {
        PA0 = true;
        PA0 = false;
    }
}
% avr-gcc -mmcu=atxmega32e5 -g -O1 test.cpp -o test.o
% avr-size -C --mcu=atxmega32e5 test.o
AVR Memory Usage
----------------
Device: atxmega32e5
Program: 208 bytes (0.6% Full)
(.text + .data + .bootloader)
Data: 0 bytes (0.0% Full)
(.data + .bss + .noinit)
What? It's smaller?? But I thought C++ made everything larger! Let's take a look at the assembly:
% avr-objdump -S test.o
test.o: file format elf32-avr
Disassembly of section .text:
**SNIP VECTOR TABLE
**SNIP CONSTRUCTORS
000000c4 <main>:
template<intptr_t vport, uint8_t npin>
class VPortPin
{
public:
void init(void) {
VPORT(vport)->DIR |= (1 << npin);
c4: 80 9a sbi 0x10, 0 ; 16
}
bool operator=(bool rhs) {
if (rhs) {
VPORT(vport)->OUT |= (1 << npin);
c6: 88 9a sbi 0x11, 0 ; 17
} else {
VPORT(vport)->OUT &= ~(1 << npin);
c8: 88 98 cbi 0x11, 0 ; 17
ca: fd cf rjmp .-6 ; 0xc6 <main+0x2>
Yup. It was smart enough to boil down the templated class complete with method call and operator overload to single instructions: sbi (set bit in I/O) to set the pin, cbi (clear bit in I/O) to clear the pin, and rjmp (relative jump) to loop. The class instance takes no SRAM because it has no variables, and the conditional, the return statement, and the function calls themselves are removed because everything is known at compile time.
Anything else we could test? Ah - how about polymorphism? Let's make a couple subclasses of a virtual parent, and see how it deals with that.
#include <avr/io.h>
#include <stdint.h>

class A
{
public:
    virtual void do_thing() = 0;
};

class B: public A
{
    virtual void do_thing() {
        PORTA.OUTTGL = 0x01;
    }
};

class C: public A
{
    virtual void do_thing() {
        PORTA.OUTTGL = 0x80;
    }
};

int main(void)
{
    B b;
    C c;
    A *pa_b = &b;
    A *pa_c = &c;

    PORTA.DIR = 0x81;

    for (;;) {
        pa_b->do_thing();
        pa_c->do_thing();
    }
}
% avr-gcc -mmcu=atxmega32e5 -g -O1 test.cpp -o test.o
% avr-size -C --mcu=atxmega32e5 test.o
AVR Memory Usage
----------------
Device: atxmega32e5
Program: 322 bytes (0.9% Full)
(.text + .data + .bootloader)
Data: 12 bytes (0.3% Full)
(.data + .bss + .noinit)
% avr-objdump -S test.o
test.o: file format elf32-avr
Disassembly of section .text:
**SNIP VECTOR TABLE
**SNIP CONSTRUCTORS
000000da <main>:
PORTA.OUTTGL = 0x80;
}
};
int main(void)
{
da: cf 93 push r28
dc: df 93 push r29
de: 00 d0 rcall .+0 ; 0xe0 <main+0x6>
e0: 00 d0 rcall .+0 ; 0xe2 <main+0x8>
e2: cd b7 in r28, 0x3d ; 61
e4: de b7 in r29, 0x3e ; 62
{
public:
virtual void do_thing() = 0;
};
class B: public A
e6: 84 e0 ldi r24, 0x04 ; 4
e8: 90 e2 ldi r25, 0x20 ; 32
ea: 8b 83 std Y+3, r24 ; 0x03
ec: 9c 83 std Y+4, r25 ; 0x04
virtual void do_thing() {
PORTA.OUTTGL = 0x01;
}
};
class C: public A
ee: 8a e0 ldi r24, 0x0A ; 10
f0: 90 e2 ldi r25, 0x20 ; 32
f2: 89 83 std Y+1, r24 ; 0x01
f4: 9a 83 std Y+2, r25 ; 0x02
B b;
C c;
A * const pa_b = &b;
A * const pa_c = &c;
PORTA.DIR = 0x81;
f6: 81 e8 ldi r24, 0x81 ; 129
f8: 80 93 00 06 sts 0x0600, r24
for (;;) {
pa_b->do_thing();
fc: eb 81 ldd r30, Y+3 ; 0x03
fe: fc 81 ldd r31, Y+4 ; 0x04
100: 01 90 ld r0, Z+
102: f0 81 ld r31, Z
104: e0 2d mov r30, r0
106: ce 01 movw r24, r28
108: 03 96 adiw r24, 0x03 ; 3
10a: 09 95 icall
pa_c->do_thing();
10c: e9 81 ldd r30, Y+1 ; 0x01
10e: fa 81 ldd r31, Y+2 ; 0x02
110: 01 90 ld r0, Z+
112: f0 81 ld r31, Z
114: e0 2d mov r30, r0
116: ce 01 movw r24, r28
118: 01 96 adiw r24, 0x01 ; 1
11a: 09 95 icall
11c: ef cf rjmp .-34 ; 0xfc <main+0x22>
Two or three instructions' worth of initialization that I think I could probably avoid. Two or three instructions isn't much. As for the indirect, polymorphic method calls, eight instructions: two to load the pointer, two to presumably get the method pointer from the vtable, three to prepare registers for the call (I used PORT_t.OUTTGL rather than VPORT_t.OUT here, that requires multiple instructions and spoils registers to access, being in the high section of memory), and the call itself. I'm unconvinced I could have done better.
Followup: the C++ pin-toggle is one instruction shorter because I used |= to initialize VPORT0.DIR in C++, which compiles to a single sbi, but I used = in C, which compiles to ldi followed by out. Had I been consistent, they would have been instruction-for-instruction identical.
When using C++ for embedded programming, it is always a Good Thing (TM) to take a look at the assembly listing the compiler produces, just like c4757p did in his postings, so that you will not do goofy things.
Edit: It won't hurt even if you are using plain C.
Indeed not. I've been working on a lightweight AVR HAL in C recently. I'm writing a lot of functions that are specifically designed to compile down to single instructions or close to it, many of which have significant branching structures in them. One learns a lot about the way compiler optimizations work, that way. Highly recommended for anyone. You'll be pleasantly surprised - it's better than it gets credit for, or at least avr-gcc is. It knows almost as much at compile time as I do and makes significant use of that information.
RE C++ on 32 bit MCUs (Cortex-M)
PSoC Creator, even if it's not designed to compile C++, can be coerced into using the C++ compiler:
http://www.mbedded.ninja/programming/microcontrollers/psoc/using-cplusplus-with-psoc-creator
Of course, it's not a supported feature, so you can't go around asking Cypress customer support about it.
Also, it's not like you can use the latest Clang (as far as I'm aware), so you don't get the nice C++11 features unless they use GCC 4.8, which I doubt but have not really looked into.
@c4757p:
Thank you for doing that. I'm a C++ developer, but when I tinker with MCUs I fall back to C. Maybe I should revisit this, since I'm mostly using ARM processors (Cortex-M0 and M3), so maybe I will enable C++ on Creator and dig more into what version of GCC they are using and if it would be possible to use Clang as the toolchain.
Followup: according to the following document, PSoC Creator 3.0 uses gcc 4.7, so it does come with some of the C++11 features.
Cypress PSoC Creator Release Notes:
http://www.cypress.com/file/124626/download
GCC 4.7 C++11 features:
https://gcc.gnu.org/gcc-4.7/cxx0x_status.html
Looking at the latest version of Creator (3.3), I found the release notes (log-in required, so I'm not linking them), and they state it uses GCC 4.9-2015-q1.
So it has support for C++11 and experimental C++14 and C++17 (or C++1z). I should play with it to see if that's the case:
https://gcc.gnu.org/gcc-4.9/changes.html
This is an old myth on all but the shittiest compilers.
I guess you are not using new/delete operators to allocate memory. Or exceptions. Or RTTI. Or std::string ...
Your template code is very rudimentary - essentially "macros" which get expanded into generated code at compile time and don't touch the standard library. So it's no surprise the output is the same; it would be disappointing if gcc were so poor at optimization as to screw this up.
Since you have mentioned gcc options, do yourself a favour and look in the libstdc++ library (use: nm -D -C libstdc++.so). You will see what code actually gets linked in such cases. And gcc is hardly a "shitty compiler"; Clang will be similar.
Even if you don't use any of the above in your code explicitly, a lot of "boilerplate" will be linked into your code to support its possible use, unless you disable all those things using compiler options (but then why are you using C++ in the first place?).
C++ provides all kinds of features that are useless in small embedded systems. Typically, you do not use new/delete. Instead you use resource pools, possibly using malloc to allocate them only once at system boot. Using resource pools, the system is deterministic and will not suffer from heap fragmentation or heap exhaustion. Exceptions are not needed either. Nor RTTI. Nor std::string.
You can take a subset of C++ and use it to your advantage in embedded software development. You do not have to use every bell and whistle of the language just because it is there. You just use the subset that is needed to get the job done and which doesn't cause you trouble.
You can take advantage of classes and simple inheritance with little or no penalty. You can use virtual methods with little overhead. Encapsulation, data hiding, and generalization can be used without too much code bloat, in favor of creating reusable software components.
Templates provide a way to create parameterized classes with type checking far better than preprocessor macros. Templates can be used for generalization and object instantiation. When used properly, they create very little code bloat, if any.
I guess you are not using new/delete operators to allocate memory. Or exceptions. Or RTTI. Or std::string ...
What the hell kind of argument is that? I don't use malloc and free on a microcontroller either, and typically don't use floating point, etc. We're talking about a language, not a shovelful of every excessively complicated thing in its standard library thrown in because we like it. One always has to tread carefully when doing embedded development; that says nothing about the language.
Even if you don't use any of the above in your code explicitly, a lot of "boilerplate" will be linked to your code to support its possible use
...and then stripped right back out at the end. Or did you not see where I did not use any of the above in my code explicitly, and the generated code did not contain any "boilerplate" to support its possible use?