-
float x,y;
x = 10 * y;
x = 10.0 * y;
I see both versions used but AFAICT the presence of any float/double anywhere in an expression makes the whole thing evaluated as floats.
But if you had
float x,y;
int z;
x = ( 10 * x ) / ( z / 10 );
that's different because the z/10 will be evaluated with integer division (with the potential for data loss due to underflow etc). You have to make sure z is appropriately scaled to avoid loss of precision in the /10.
Is that right?
-
float x,y;
x = 10 * y;
x = 10.0 * y;
I see both versions used but AFAICT the presence of any float/double anywhere in an expression makes the whole thing evaluated as floats.
Um... So, float or double? The first expression is evaluated as float. The second - as double. I'm still not sure what the point of the above example is.
And no, "anywhere in an expression" is not correct. This rule is applied locally to binary arithmetic operators. If at least one operand of a binary arithmetic operator is a floating-point value, the operator is evaluated in floating-point domain.
But if you had
float x,y;
int z;
x = ( 10 * x ) / ( z / 10 );
that's different because the z/10 will be evaluated with integer division (with the potential for data loss due to underflow etc). You have to make sure z is appropriately scaled to avoid loss of precision in the /10.
Is that right?
Yes. Again, because the rule is applied locally to binary arithmetic operators, not to entire expressions. So, `z / 10` is integer division, but `10 * x` is float multiplication. And the final division is float as well.
-
Unless specified, some compilers (if not most) will interpret the decimal as double.
10 = integer
10.0 = double
10.0f = float
For example, a lot of MCUs have float-only FPU.
So doing var/10.0 will be processed as double (using software routines) and then converted to float, taking >2000x the processing time expected.
-
Unless specified, some compilers (if not most) will interpret the decimal as double.
All compliant compilers will do that. This is required by the spec section 6.4.4.2 "Floating constants" (C11 case):
An unsuffixed floating constant has type double. If suffixed by the letter f or F, it has type float. If suffixed by the letter l or L, it has type long double.
-
I see both versions used but AFAICT the presence of any float/double anywhere in an expression makes the whole thing evaluated as floats.
No, the expression is evaluated according to the order of operations and then each binary operation is subject to numerical promotion individually based on the operand types. While a floating point value anywhere in the expression will generally result in the final evaluated expression being floating point, intermediate values can and will be evaluated using integer arithmetic.
But if you had
float x,y;
int z;
x = ( 10 * x ) / ( z / 10 );
that's different because the z/10 will be evaluated with integer division (with the potential for data loss due to underflow etc). You have to make sure z is appropriately scaled to avoid loss of precision in the /10.
Is that right?
You are right that z / 10 will be evaluated as integer division, but "make sure z is appropriately scaled" isn't the correct general purpose solution although it may work in some situations. The answer here is to use 10.0f or convert z to a floating point type.
-
It isn't only division you have to watch out for:
#include <stdio.h>
int main(void) {
int meters = 8;
double nanometers = meters * 1000000000;
printf("%d meters is %f nanometers\n", meters, nanometers);
return 0;
}
My first approximation/rule of thumb, when nothing larger than 'int' or 'double' are involved:
- If either operand is floating point, it's calculated as a floating point operation of the largest floating point size.
- If both operands are int or smaller, it is an integer operation with the size of 'int'.
That explains why this code prints 16, not 0:
#include <stdio.h>
int main(void) {
char x = 8;
char y = (x * 126) / 63;
printf("%d gives %d\n", x, y);
return 0;
}
-
I am amazed that in
float x,y,z;
x = 10 * z;
y = 10.0 * z;
x will be evaluated as a float and y as a double (and then be converted back to float). IOW the .0 signifies the use of double.
What about
double x,y,z;
x = 10 * z;
y = 10.0 * z;
Will x and y both be evaluated as double, or will x be evaluated as a float and then be converted to double? It would mean that all numeric constants used around doubles need to have the .0 on the end.
Have I got that right?
Also is double 2000x slower than float? Is single-precision floating point hardware completely useless for doubles? Also on a 32F4, which has a 32 x 32 mult or div and a 32 bit barrel shifter?
-
All of this would be double. For each binary operation the "strongest" type is selected. In this case at least one of the operands is double, so the whole expression would be evaluated as double.
How much slower double is depends on the hardware and the software library used for the implementation. On Cortex-M4F it would be slow. None of the float hardware is useful for doubles.
-
Could you estimate the time for a double multiply or divide? 168MHz.
-
I am amazed that in
float x,y,z;
x = 10 * z;
y = 10.0 * z;
x will be evaluated as a float and y as a double (and then be converted back to float). IOW the .0 signifies the use of double.
The actual evaluation model is implementation-dependent. On x86 platform using the "classic" x87 FPU command set, all these will be loaded into extended-precision 80-bit FPU registers, evaluated with that high precision and then converted back to `float`. Language specification allows this kind of excessive precision in intermediate floating-point evaluations.
What about
double x,y,z;
x = 10 * z;
y = 10.0 * z;
Will x and y both be evaluated as double, or will x be evaluated as a float and then be converted to double? It would mean that all numeric constants used around doubles need to have the .0 on the end.
There's no `float` anywhere in this example, so everything here is evaluated as `double`. And, again, see the remark above.
-
Unless specified, some compilers (if not most) will interpret the decimal as double.
All compliant compilers will do that. This is required by the spec section 6.4.4.2 "Floating constants" (C11 case):
An unsuffixed floating constant has type double. If suffixed by the letter f or F, it has type float. If suffixed by the letter l or L, it has type long double.
AVR-GCC does (did?) everything in floats and even defined double as float
on targets with a single precision FPU, -Wdouble-promotion -fsingle-precision-constant might be useful
-
Could you estimate the time for a double multiply or divide? 168MHz.
Speed of the floating point operations depends on the operands a lot. There is no simple way to estimate that. It is easier to get a draft of your entire algorithm (not just a single operation) and time it on the real hardware.
Keep in mind, the same applies to the hardware floating point. Some of those instructions take a lot of cycles.
-
I am writing a ton of code for analog sensor measurement: PT100, TC, etc. I am storing calibration coefficients (zero and scale) as 4 byte floats and that's important. But I am doing calculations as doubles; exec time is irrelevant.
I've just timed this at 15 secs
printf("start");
volatile float x,y;
x=0; y=1.5;
for (int i=0;i<10000000;i++)
{
x=y*3.2;
}
printf("end");
so 1.5us per loop, but a corresponding double version runs much faster so clearly isn't compiling; optimisation must be removing code. I won't spend a lot of time on it but for sure software floats are a lot slower.
The 32F4 does a mult32 in 1 clock i.e. 7ns so the 1.5us means nothing either. Clearly this kind of code needs to be done so the compiler doesn't remove anything. And with varying data.
Anyway, bottom line is that 10 versus 10.0 has no meaning, or does it?
It is similar to people writing 1000L when using a uint32_t integer. ST do this a lot in their code. Why? How can there be a difference between 10 and 10L and 10UL?
-
I am writing a ton of code for analog sensor measurement: PT100, TC, etc. I am storing calibration coefficients (zero and scale) as 4 byte floats and that's important. But I am doing calculations as doubles; exec time is irrelevant.
I've just timed this at 15 secs
printf("start");
volatile float x,y;
x=0; y=1.5;
for (int i=0;i<10000000;i++)
{
x=y*3.2;
}
printf("end");
the types of your constants are important
https://godbolt.org/z/P76q9xhjb
-
AVR-GCC does (did?) everything in floats and even defined double as float
No, it did not.
Could you do
float *pf = ...;
double *pd = pf; /* type mismatch? */
?
If the above code triggered a type mismatch diagnostics, it meant that it did not "define double as float". It still kept them as two separate types.
These two types might share the same size and representation - this is not prohibited by the language spec. Which also means that AVR-GCC does everything in `double` - exactly as the language specification expects/requires it to, not in `float`.
-
x=y*3.2;
As said above, this would be double math. 3.2 is double, so on CM4F this whole thing would be calculated in software, plus calls to f2d and d2f for type conversion. Doubly inefficient.
Change the constant to "3.2f" and compare the performance between those two versions with just this one change. This will give you a much better idea of the difference.
And, yes, use godbolt. It is an invaluable tool if you want to know what compilers would be doing in one case or another.
-
AVR-GCC does (did?) everything in floats and even defined double as float
No, it did not.
These two types might share the same size and representation - this is not prohibited by the language spec. Which also means that AVR-GCC does everything in `double` - exactly as the language specification expects/requires it to, not in `float`.
I'll assume you are right, I haven't used AVR in eons, I just remember that they were both 32bit
-
Simple: when any decimal operand lacks "f", the whole operation is done in double.
There's no such thing as (integer*float), (float*double) or (double*int) math; it's one thing or another.
You can multiply tomatoes or apples, but not tomatoes with apples.
(Yes, I know you can graft different vegetables in real life. Please don't argue with that lol)
-
All compliant compilers will do that. This is required by the spec section 6.4.4.2 "Floating constants" (C11 case):
An unsuffixed floating constant has type double. If suffixed by the letter f or F, it has type float. If suffixed by the letter l or L, it has type long double.
Hey! That was my line! :-DD
-
so 1.5us per loop, but a corresponding double version runs much faster so clearly isn't compiling; optimisation must be removing code.
FFS. You really do have a one-track mind.
-
now, now, he's still in the phase where he's suspicious of the compiler. give it another six months
-
OK you experts while you are taking the piss from somebody less super-clever than you... I hate to disappoint you, so here is some more.
float fred = 0;
This ends up in BSS, presumably, because 0 is zero in float as well.
What about
float fred = 0.0;
It should be the same. No runtime calculation.
What about
float fred = 3.2;
It won't be in BSS because it isn't zero. It will be in DATA or COMMON or... but am I right that there is still no runtime calculation i.e. 3.2 will be converted by the compiler into 3.2f ?
I don't think this is widely known because I have never seen 3.2f used anywhere. And on arm32 this should be much faster than doubles.
-
Assignment, even as part of initialization, acts as an operator. You have a float variable initialized by a double constant. Of course it will be converted to float.
Types used for calculation and the final type don't have to match. "int a = 10 * 0.5;" would result in integer 5 stored in "a", even though the right part is double 5.0.
But I don't see what any of this has to do with anything here.
You have not looked at much embedded code that uses floating point. It is used everywhere people care about performance, including on the X86, because floats pack better into vectored operations. All performance-oriented code (like games) uses floats for storage of coordinates and stuff like this.
And just as an example, I googled "stm32 dsp github" and clicked on the first link that was not ARM CMSIS package. Here is the code I've got https://github.com/YetAnotherElectronicsChannel/STM32_DSP_Reverb/blob/master/code/Src/main.c . It uses floats and 'f' suffix everywhere. Honestly the first click on a random search. So, yes, people know about this.
And just for fun, I looked at the ARM DSP library. Again, all 'f's everywhere https://github.com/ARM-software/CMSIS-DSP/blob/main/Source/FastMathFunctions/arm_atan2_f32.c
And just to clarify, they are using 'f' even in the initialization of variables. This is not strictly necessary, but it indicates intent. Plus, if you work with floating point a lot, typing 'f' at the end of constants becomes an automatic thing.
-
OK you experts while you are taking the piss from somebody less super-clever than you...
It's not a question of being clever. Many millions of people use gcc and it has been continuously developed for 35 years. Almost all of what it does is completely machine-independent, certainly the kinds of things you seem to be worried about. If there are any remaining bugs then they are very obscure ones, not in the super-common kinds of things you are writing.
What about
float fred = 3.2;
It won't be in BSS because it isn't zero. It will be in DATA or COMMON or... but am I right that there is still no runtime calculation i.e. 3.2 will be converted by the compiler into 3.2f ?
I will say it is probably completely undefined how it is done and all that is required is that 0x404ccccd ends up in fred somehow at the end.
If you insist on using -O0 (at least on a compiler more stupid than gcc) as you seem to like to then the compiler might well store a 64 bit double 0x400999999999999A into RAM somehow, then load it into some register, then use an instruction to convert it to float if there is an FPU or a runtime library function otherwise.
If you use -O1 or more than I personally would be disappointed if 0x404ccccd was not loaded directly into the final place.
-
It's not a question of being clever. Many millions of people use gcc and it has been continuously developed for 35 years. Almost all of what it does is completely machine-independent, certainly the kinds of things you seem to be worried about. If there are any remaining bugs then they are very obscure ones, not in the super-common kinds of things you are writing.
did i write anything different?
If you insist on using -O0 (at least on a compiler more stupid than gcc) as you seem to like to
where did i say that?
-
now, now, he's still in the phase where he's suspicious of the compiler. give it another six months
Why only 6 months? You should always be suspicious of what a C compiler is doing.
The output can vary between compilers.
The output can vary between optimisation levels.
The output can vary between language standards.
The output can vary over time as the compiler becomes capable of more optimisations. That's up to and including starting to remove blocks of your code because you didn't understand where the nasal dæmons lie.
And, of course, because you[1] don't understand all the interacting "features" of the C language as well as you thought you do.
[1] I am, of course, using "you" in the modern English parlance, to "mean a generic person" rather than one specific person. Shame that it is now regarded as archaic to avoid using "And, of course, because one doesn't understand all the... of one's code..." :)
-
OK you experts while you are taking the piss from somebody less super-clever than you...
It's not a question of being clever. Many millions of people use gcc and it has been continuously developed for 35 years. Almost all of what it does is completely machine-independent, certainly the kinds of things you seem to be worried about. If there are any remaining bugs then they are very obscure ones, not in the super-common kinds of things you are writing.
Back in the early 80s (when I first used C), it was common knowledge that multithreading and multiprocessor uses of C were explicitly outside the language definition.
Over the decades the complexity of C and its compilers increased dramatically, and apparently it became so complicated that people forgot that. No doubt that was aided and abetted by some compilers "doing the right thing" when the wind was in the right direction. In any case, in 2004 it became necessary for Hans Boehm (of the conservative C garbage collector fame) to remind people that you couldn't write threading libraries in C.
I am told that has changed with more recent C language standards, but fortunately I've been able to avoid finding out the extent to which recent compilers get it right.
-
In any case, in 2004 it became necessary for Hans Boehm (of the conservative C garbage collector fame) to remind people that you couldn't write threading libraries in C.
That was not his claim.
His claim was that C programs can't USE threads implemented in libraries (no matter how they are implemented), under the assumption that they try to synchronise themselves using access to shared global variables instead of using the lock/semaphore primitives provided by the library -- so called "lock free" programming.
If you use the locks to manage any shared globals then all is ok semantically, it's just that on a many core multiprocessor doing this frequently you quickly grind to a halt and get less than single-threaded performance.
I am told that has changed with more recent C language standards,
It has changed. C now has a memory model, with the ability to specify acquire and/or release semantics on memory accesses.
but fortunately I've been able to avoid finding out the extent to which recent compilers get it right.
It's more for the users to get right than the compilers.
-
OK you experts while you are taking the piss from somebody less super-clever than you...
Not related, those are not operations, but initialized data, will be handled by the preprocessor and create constants loaded from flash.
The problem comes when a variable enters the game.
float a = 1.05; // This is float from the beginning
void foo(){
float b = a * 1.3; // 'a' will be converted and processed as double, later converted into float again
}
-
Interesting. I went through all my code and didn't find any cases where the 'f' would have helped. I use doubles in places where it doesn't matter e.g. calcs related to a 22 bit ADC with a 90ms conversion time :)
I use floats for storing stuff in flash, and I am storing a lot of values, so don't want to use 8 bytes for each one.
Re thread safety of C code, various threads on that. You can look up the ones about the Newlib libc.a in GCC and its bogus mutex stubs, which were not 'weak' so fixing them was nontrivial. This related to printf (if outputting longs or floats) and the heap (itself, and its use by printf). But that is another topic. AFAIK C is thread safe in general, but some libs use statics and such.
-
Try this in the f446, it has single-precision FPU so the first loop should take a lot longer. (Obviously, remember to enable DWT)
void test(void){
volatile float f=1.0f; // Volatile to bypass optimizations
DWT->CYCCNT = 0;
for(uint16_t i=0; i<10000; i++)
f *= 1.05;
asm("nop"); //Breakpoint
f=1.0f;
DWT->CYCCNT = 0;
for(uint16_t i=0; i<10000; i++)
f *= 1.05f;
asm("nop"); //Breakpoint
}
Edit: Ran this on my phone (Coding C app, great for these little tests), the results were:
67135ns
43959ns
-
Let me check two more things:
float x,y;
x=2*y; - is there any double promotion?
x=2.0*y; - double promotion
x=2f*y; - no promotion
x=2.0f*y; - no promotion
Is that right and what happens in the 1st one?
Disregard the possibility of 2x being done by incrementing the exponent (I used to do that in asm) :)
What is the meaning of
uint32_t x;
x = 1000UL;
x = 1000L;
x = 1000;
Surely all are identical, for positive numbers where bit 31 = 0.
-
In any case, in 2004 it became necessary for Hans Boehm (of the conservative C garbage collector fame) to remind people that you couldn't write threading libraries in C.
That was not his claim.
His claim was that C programs can't USE threads implemented in libraries (no matter how they are implemented), under the assumption that they try to synchronise themselves using access to shared global variables instead of using the lock/semaphore primitives provided by the library -- so called "lock free" programming.
If you use the locks to manage any shared globals then all is ok semantically, it's just that on a many core multiprocessor doing this frequently you quickly grind to a halt and get less than single-threaded performance.
If a language precludes using a multithreading library, then it also precludes implementing a multithreading library.
I am told that has changed with more recent C language standards,
It has changed. C now has a memory model, with the ability to specify acquire and/or release semantics on memory accesses.
Boehm wrote his paper because it was necessary to force people to realise that a memory model was required. That is remarkable (and IMHO damning) since it had been obvious in other languages for a very long time.
but fortunately I've been able to avoid finding out the extent to which recent compilers get it right.
It's more for the users to get right than the compilers.
What was the interval between the C++(99?) standard appearing and the first complete compiler appearing? 6 years, IIRC - but that should be taken with a pinch of salt since I chose not to use C++ in the late 80s, and I've seen nothing to make me doubt that decision.
C/C++ compiler writers traditionally blame the user before examining the alternatives. Whether that is justified is a separate issue. The main point is it indicates that the ecosystem has become too complicated for its own good (and that of its users).
-
"2f" is not valid; a floating constant needs a decimal point or an exponent (2.0f, 2e0f) before the suffix.
UL = Unsigned Long
L = Signed Long
-
Yes I know UL and L but does the compiler do anything different? ISTM that this notation is just a reminder for the programmer.
So anything with an f must have a DP in it. OK.
-
Ensures it's not treated as a shorter type.
Sometimes you can make something like:
#define THIS 64
unsigned long var_A = THIS*2048;
// var_A is 0, not 131072 ?!
The compiler could treat THIS as 8-bit or 16-bit, as 64 fits perfectly in it, causing an unintended overflow when used as a math operand:
      64 =                  1000000 (binary)
  × 2048 =             100000000000
= 131072 =       100000000000000000 (needs 18 bits)
kept in 16 bits:     0000000000000000 = 0
By specifying the type, you protect yourself from these problems, which cause bugs that can be very tricky to catch.
#define THIS 64UL
//#define THIS ((uint32_t)64)
unsigned long var_A = THIS*2048;
// var_A is 131072!
But this is more common in 8/16 bit architectures, where the working registers can be smaller than the integer used.
-
#define THIS 64
unsigned long var_A = THIS*128;
// var_A is 0, not 8192 ?!
The compiler could treat THIS as 8-bit, as 64 fits perfectly in it.
I find that staggering. Isn't there a rule that each part of the RH expression is cast to the type of the destination, prior to it being evaluated? That is really horrible.
-
I only had these issues in 8-bit compilers.
In the end you simply learn these tricks.
Nowadays it's rarely required, with so many 32-bitters around.
-
I find that staggering. Isn't there a rule that each part of the RH expression is cast to the type of the destination, prior to it being evaluated? That is really horrible.
In C or in C++ the left-hand side never has any influence on the right-hand side (with perhaps some narrow exceptions).
-
Isn't there a rule that each part of the RH expression is cast to the type of the destination, prior to it being evaluated?
No, there is not. There may not even be a destination. Expressions don't have to have assignments in them.
C has fixed promotion rules. It helps to know them https://wiki.sei.cmu.edu/confluence/display/c/INT02-C.+Understand+integer+conversion+rules
-
Ensures it's not treated as a shorter type.
Sometimes you can make something like:
#define THIS 64
unsigned long var_A = THIS*128;
// var_A is 0, not 8192 ?!
The compiler could treat THIS as 8-bit, as 64 fits perfectly in it.
No, it couldn't.
Firstly, `64` is an `int` by the rules of the language. And `int` is required to be at least 16-bit wide, everywhere.
Secondly, C and C++ never perform evaluations in types smaller than `int`, thanks to integral promotions. E.g. even when you multiply two 8-bit `char`s, they will still be multiplied as `int`s.
In other words, no conforming C or C++ implementation can have 8-bit arithmetic.
I only had these issues in 8-bit compilers.
There's no such thing as an "8-bit compiler". More precisely, a compiler can refer to itself as "8-bit" as much as it wants, but it will always be required to perform integer arithmetic in at least 16 bits. No way around it.
-
I might be wrong about the 16-bit minimum integer size; I can't remember whether the issue happened with ints or longs.
But that's the idea.
-
I might be wrong about the 16-bit min. integer size , I can't remember whether the issue happened with ints or longs.
I'm not sure what you are trying to say here. In your previous example you claimed that you saw implementations with 8-bit `int` (i.e. `8192` becomes `0`). Eight, not sixteen.
How does your "I might be wrong about the 16-bit min. integer size" fit into that picture?
-
I think it's pretty clear, isn't it?
"I can't remember whether the issue happened with ints or longs".
Isn't that self-explanatory? It feels really weird to have to explain this to someone apparently smart who just wrote half a chapter of the C spec.
That code was a simple example of unexpected overflow, not real code.
I thought it happened with 8-bit, so I made an example for it, but if the rules use at least 16 bits, then just replace 128 by 2000 in your mind.
As I say, I don't remember; it happened several years back. A 32-bit var was getting a weird result, and the problem was caused by the naked define.
-
Not related, those are not operations, but initialized data, will be handled by the preprocessor and create constants loaded from flash.
The preprocessor does not enter the picture at all here.
To be more precise, it has entered the scene and left by the end of translation phase 4, while semantic and syntactic analysis of the resulting tokens happens in translation phase 7.
-
#define THIS 64
unsigned long var_A = THIS*128;
// var_ A is 0, not 8192 ?!
The compiler could threat THIS as 8-bit, as 64 fitted perfectly in it.
I find that staggering. Isn't there a rule that each part of the RH expression is cast to the type of the destination, prior to it being evaluated? That is really horrible.
Certainly not! Whatever gave you that idea?
Semantically, the evaluation of the expression is totally unrelated to the place the result will eventually be put. Each constant or variable in an expression has its own type, the result of each operator depends only on the type of its operands.
If you're going to program in a language then you really should learn the rules of that language, not make up something about how you think a language should work.
-
I can only relate to experience! Can't tell if it was a compiler bug or my fault!
-
Come on... there has to be something like that, otherwise
int32_t x = -5
would just load -5 into the LS byte :)
-
Come on... there has to be something like that, otherwise
int32_t x = -5
would just load -5 into the LS byte :)
-5 is an int, as it is not large enough to need to be a long. There is no such thing as a char-sized literal in C. Not even 'A', which is (on ASCII systems) an int with value 65.
The = operator converts the value on the right hand side to whatever type is required by the left hand side.
In this case there is no conversion to be done unless you are compiling for an 8 or 16 bit machine where int is only 16 bits, or a 64 bit machine with ILP64 model where int is 64 bits (this is unusual, almost all 64 bit machines use LP64 with int being 32 bits)
Try reading a book on C, seriously. Don't just make stuff up.
-
Or, uh... use Rust? :-DD
-
Try reading a book on C, seriously. Don't just make stuff up.
Or find a good tutor, or follow a course - I think I also suggested this a couple of times already.
Relying on assumptions is bound to leave grey areas, and we can't possibly clear them all* as we can't know what you don't know.
This where a good book can help (not suggesting to use the standard - as much it is my go-to reference - cppreference.com (https://en.cppreference.com/w/c) is a more down-to-earth and still accurate** substitute).
Another good resource are the secure programming oriented SEI-CERT rules and recommendation (https://wiki.sei.cmu.edu/confluence/display/c/SEI+CERT+C+Coding+Standard). Reading and understanding them gives a good insight on most of C pitfalls.
The willingness to delve deeper is there, and that's really good.
But I sometimes feel the effort goes into unveiling inconsequential things, or connecting to past experiences that might or might not be relevant (e.g. discussing BSS, DATA and COMMON is usually not important, unless there is a very specific need).
* As a community, I would say the regulars here are very keen to help, with variable levels of grumpiness.
** Never found a fault there, whereas I did a couple of times in the SEI-CERT rules (C++, corrections were quick once I convinced them).
-
Or, uh... use Rust? :-DD
I would not know :-//
If even Linus Torvalds himself is admitting Rust code in the kernel, it might not be just another fad.
We just finished running a pilot I suggested: we had an intern rewrite, in Rust, a small feature newly implemented in C. The results were better than expected: slightly slower compilation, very similar speed except for an outlier where Rust was 500% faster than C, and worse memory occupation.
I should put some serious effort in learning it.
-
Use Rust and you won't have to worry about opaque typing nonsense.
Read the C standard and you'll know how the compiler ought to behave.
-
Rust still has promotion rules. And while they are logical if you look at them as a whole, they may be surprising in some cases. So reading the spec is still required, or you will be in the same exact situation.
But vendor supplied libraries are not going to be in Rust for a long time, if ever.
-
Last time I used rust, it had no implicit type conversion. You had to convert eg i16 to i32 using "as."
-
For whatever reason I thought this proposal https://internals.rust-lang.org/t/implicit-widening-polymorphic-indexing-and-similar-ideas/1141 was actually implemented, since it makes sense.
But yes, it looks like it was just a discussion and was not implemented in any way.
This actually sucks. Like yes, it is "safe" on the technical level, but logically you just make the programmer do more work and potentially more logical mistakes.
-
Try reading a book on C, seriously. Don't just make stuff up.
Just so.
-
For whatever reason I thought this proposal https://internals.rust-lang.org/t/implicit-widening-polymorphic-indexing-and-similar-ideas/1141 was actually implemented, since it makes sense.
But yes, it looks like it was just a discussion and was not implemented in any way.
This actually sucks. Like yes, it is "safe" on the technical level, but logically you just make programmer do more work and potentially more logical mistakes.
I (mostly) don't agree. It adds extra notation but being explicit about your types saves so much pain in the long run.
The way Rust does it does seem to invite potential for error -- ideally you'd have a cheap widening operation that's guaranteed to work, an overflowing or saturating narrowing operation that's also guaranteed to work, and a narrowing operation that returns an error value if the input is incompatible with the target type.
-
Yep. Still sad to see how many people using C never actually learned it. But we keep saying that and saying it won't change a thing...
And sure, C has its quirks, but most other languages have their own.
To be fair, I do think the OP would be a typical developer that would benefit from using Oberon, the maintainer of a set of commercial tools is on this forum (Astrobe). Now the problem IMO is that (as many other people) the OP is largely relying on third-party libs that may not be available in any other language than C. I do not know if Astrobe has any kind of Ethernet library, for instance. But if so, the maintainer can chime in.
-
Or, uh... use Rust? :-DD
Sure. In which case: read a book on Rust. It's different to C.
-
Oberon
Porting Oberon to MIPS5+ is *very* challenging.
-
Last time I used rust, it had no implicit type conversion. You had to convert eg i16 to i32 using "as."
exactly what my-C does!
exactly what DO178B requires!
-
Oberon
Porting Oberon to MIPS5+ is *very* challenging.
I can't see why it would be.
-
Try reading a book on C, seriously. Don't just make stuff up.
Or find a good tutor, or follow a course - I think I also suggested this a couple of times already.
Or, even simpler, whenever you find yourself making assumptions without knowing, just Google it. 99% of the time a Stack Overflow discussion pops up, and 99% of the time it's at least 99% correct. Much better than guessing, and less effort than trying to find it in the standard or in a book.
That's a good rule of thumb anyway: don't assume, check. It is tempting to think assuming saves time, but the opposite is true. Googling some C detail takes 1-2 minutes, and assuming seems to save that much, but ends up wasting countless hours of banging your head against the wall.
-
Or, uhh, use my-c :-DD
Well. It's now able to recompile itself.
my-c compiled with gcc outputs my-c-cc1
my-c-cc1 is able to compile my-c-cc2
my-c-cc2 is able to compile my-c-cc2
Keep getting better :o :o :o
-
Yes, this is bootstrapping. Glad you have managed to design a language (well, looks like a variant of C) that works well for you.
So, why would Oberon be hard to implement for a target for which C works well? The programming model is pretty similar.
-
why would Oberon be hard to implement for a target for which C works well? The programming model is pretty similar.
most of the my-c implementation comes from the libraries used for ICE and has been integrated with monads, that's something I use on a daily basis, so it's easier.
-
comes from the libraries used for ICE
Beware! Given current and foreseen global warming, all your code is at risk!
-
Yeah, OK... ::)
-
Or find a good tutor, or follow a course - I think I also suggested this a couple of times already.
I used K&R to learn C.
-
Or find a good tutor, or follow a course - I think I also suggested this a couple of times already.
I used K&R to learn C.
1st edition (1978) is short (226 pages plus index) but obsolete.
2nd edition (1988) is still only 260 pages, and is essentially close enough for the basics of modern C (though missing things such as stdint.h, introduced in C99).
However your posts here are incompatible with having read (and understood) either edition. e.g. section 2.7 "Type Conversions" in 2nd edition or section 6 in Appendix A in 1st edition.