Computing > Programming

constant suffixes


I have had two days of fun trying to work out why I could not get the correct result from a calculation into a 32-bit variable on a mega 0 series (8-bit CPU). I am using defines to work things out with various calculations that I expect to be optimised out. But it turns out that writing 5000 does not mean the maths is done at 32 bits; it comes down to what type the compiler chooses for the literals and the intermediate results. In this case something narrower than 32 bits, apparently, so gibberish. I solved all the maths errors by running around putting "u" or "ul" on the end of every defined constant.
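A minimal sketch of what is likely happening, assuming an avr-gcc-style 16-bit int (emulated here with int16_t from stdint.h so it behaves the same on any host; the values are hypothetical):

```c
#include <stdint.h>

/* On an 8-bit AVR, plain int is 16 bits wide, so 5000 * 100 is
 * evaluated in 16-bit arithmetic and wraps long before it reaches
 * a 32-bit destination. int16_t emulates that on any host. */
int32_t product_16bit_math(void) {
    int16_t a = 5000, b = 100;   /* each literal fits in a 16-bit int */
    return (int16_t)(a * b);     /* product truncated to 16 bits: wraps */
}

int32_t product_32bit_math(void) {
    return 5000L * 100;          /* the L suffix forces at least 32-bit maths */
}
```

With the suffix the result is 500000; without it, the product comes back reduced modulo 2^16.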

OK, so lesson learnt, but "u" means "unsigned int", which in this context is 16 bits. What if I have a 16-bit signed value, what do I put? Looking around the net, most explanations seem to assume a non-8-bit system, whereas this one seems to treat everything as 16 bits unless told otherwise.

What do I do?

I could try using const variables, but then I cannot carry out calculations so easily: I can't initialise one constant from another calculated constant at file scope, the compiler will complain.
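For what it's worth, suffixed defines do compose: a derived constant can be built from other defined constants and the compiler folds the whole chain at compile time. A sketch with hypothetical clock numbers:

```c
#include <stdint.h>

#define F_CPU_HZ     8000000UL               /* hypothetical system clock */
#define PRESCALER    64UL
#define TIMER_HZ     (F_CPU_HZ / PRESCALER)  /* constant derived from constants */
#define TICKS_PER_MS (TIMER_HZ / 1000UL)     /* derived from a derived constant */

uint32_t ticks_per_ms(void) {
    return TICKS_PER_MS;                     /* folds to a constant at compile time */
}
```

Because every macro carries the UL suffix, the whole expression is evaluated in unsigned long arithmetic regardless of the target's int width.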

In C all numeric literals are int or double unless otherwise specified, even hex and octal ones.
You can specify further with suffixes: u/U for unsigned, l/L for long, both together (ul/UL) for unsigned long, ll/LL for long long, and ull/ULL for unsigned long long.
If the value doesn't fit, the literal automatically goes up a size (int to long to long long; hex and octal literals also consider the unsigned type at each step).

Hence why ops with 0x80... constants will sometimes give you a warning about discarding the sign: a hex literal too big for int quietly becomes unsigned.
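That warning falls straight out of the ranking above. A quick check with C11 _Generic, assuming a host where int is 32 bits (the macro name is mine):

```c
/* Evaluates to 1 if the literal's type is unsigned int, 0 otherwise. */
#define IS_UNSIGNED_INT(x) _Generic((x), unsigned int: 1, default: 0)

/* On a 32-bit-int platform:
 *   0x7FFFFFFF fits in int                        -> type int (signed)
 *   0x80000000 does not, but fits unsigned int    -> type unsigned int,
 * which is why negating or comparing it can warn about the sign. */
```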

To make a float instead of double use the f/F suffix.

5000 = integer
0x123 = integer
5000u = unsigned integer
5000l = long integer
5000ul = unsigned long
3.14 = double
3.14f = float
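The table above can be checked directly with C11 _Generic; on a typical desktop target each line maps as listed (the macro and helper names are mine):

```c
#include <string.h>

/* Return the name of an expression's type via C11 _Generic selection. */
#define TYPE_NAME(x) _Generic((x), \
    int:           "int",           \
    unsigned int:  "unsigned int",  \
    long:          "long",          \
    unsigned long: "unsigned long", \
    float:         "float",         \
    double:        "double",        \
    default:       "other")

/* Small helper so the resulting names are easy to compare. */
static int literal_is(const char *got, const char *want) {
    return strcmp(got, want) == 0;
}
```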

Whatever sizes these types end up being is platform-defined; the limits are in limits.h, and stdint.h gives you fixed-width names like int16_t and uint32_t.

Except I have problems with 16-bit numbers: unless I put "u" on the end they go wrong. The C standard only guarantees that int is at least 16 bits, and I don't know exactly what MPLABX and its compiler/pre-processor do, so I'd rather be explicit. This also helps if I move the high-level code to another architecture where int means something different.

It would be helpful if you posted a snippet of the code where the problem happens. Most likely the root of the problem lies not in the compiler but in your code.

I'm multiplying numbers; if I don't specify a size for any number over 255 the results go screwy, often just spitting out "0". Unless I cast stuff it gets it wrong, even though the variable the result goes into is 32-bit and all the figures are worked out so that I do not exceed that limit.
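A sketch of the usual fix, using a hypothetical ADC-scaling calculation: cast the first operand up so the whole multiply happens at 32 bits, rather than relying on the target's int being wide enough.

```c
#include <stdint.h>

#define ADC_REF_MV 5000    /* hypothetical full-scale reference, in mV */
#define ADC_MAX    1023

/* On a 16-bit-int target, raw * ADC_REF_MV would wrap at 65535.
 * Casting one operand to uint32_t forces 32-bit arithmetic for the
 * whole expression, so the result is correct on any platform. */
static uint32_t adc_to_mv(uint16_t raw) {
    return (uint32_t)raw * ADC_REF_MV / ADC_MAX;
}
```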

