Quote: ...easily infer that cast itself...
That's not a cast.
Very well, but it is regularly referred to by that term, and it could be inferred; the compiler has access to enough information to make the inference.
The compiler could have that information, yes - but this is one of the fundamental rules of C: in assignments, the right side is evaluated on its own, with no information fed forward from the left side.
This again makes the language rules simpler and more predictable.
The same confusion arises when doing arithmetic with different types:
uint32_t a;
uint32_t b;
uint64_t c;
c = a + b;
Per language rules, the result of (a+b) will be of type uint32_t, and it is converted to uint64_t only during the assignment. So if a and b are near the limit of the numerical range, the result will wrap around; maybe the programmer's intent was:
c = (uint64_t)a + (uint64_t)b;
Maybe a more "clever" language, where the type of the destination of the assignment was considered, would be good? It would remove one footgun. On the other hand, it would create more subtle ones, as the rules of conversions would get even more complicated than they already are. With the current solution, you at least quickly learn the rule - the right side is evaluated in isolation - and you can control the necessary conversions yourself.
In fact, we can go further - if you want such automation, why stop there? Why not remove all the manual type hassle completely? Python does that, and infers the type automagically for you. Some like it, some do not; but clearly the use cases are different. Those who work with microcontrollers often need the control.
int ZZZ = 123456;
char MMM;
MMM = ZZZ;
int ZZZ = 123456;
char MMM;
MMM = (char)ZZZ;
MyType datum;
datum = (MyType){0};
SomeType A;
A = (SomeOtherType){0};
Quote:
int ZZZ = 123456;
char MMM;
MMM = ZZZ;
The compiler does not object and insist I do this:
Because the compiler knows ZZZ is of type int! Hence, ZZZ can be used as a right-side operand in assignments; it can be used in comparisons, arithmetic - wherever.
But you can't write
datum = {0};
only because the language has no idea how to interpret {0}. At the lowest level, how many bytes does {0} consist of? No idea, without the type. Compared to that, 0 is treated as int internally. But there is no such default interpretation for {0} (what would it be?), so you need to specify the type (with the cast-like syntax).

Quote: Look at every example in real code, you'll never find an example of:
SomeType A;
A = (SomeOtherType){0};
You won't find such an example, because it won't compile: the assignment is invalid, as there is no implicit conversion available for it. Yes, C is more strongly typed than you might think!

typedef struct { int a; } type1_t;
typedef struct { char b; } type2_t;
int main()
{
    type1_t t1;
    t1 = (type1_t){0}; // OK
    t1 = (type2_t){0}; // error
    return 0;
}
hrst@jorma:~$ gcc t2.c
t2.c: In function ‘main’:
t2.c:10:7: error: incompatible types when assigning to type ‘type1_t’ {aka ‘struct <anonymous>’} from type ‘type2_t’ {aka ‘struct <anonymous>’}
10 | t1 = (type2_t){0};
This is why you need to decide how to convert between those types, and create a conversion function, like:

twotypes.h
typedef struct { uint32_t thing; } thing_wide_t; // high numerical range thing
typedef struct { uint16_t thing; } thing_narrow_t; // low-cost version of thing, same accuracy, less range
static inline thing_narrow_t wide_to_narrow(thing_wide_t in)
{
    thing_narrow_t ret;
    ret.thing = (in.thing > UINT16_MAX) ? UINT16_MAX : in.thing; // saturate instead of overflow
    return ret;
}
Built-in integer types, for example, have built-in conversions.
DataStructure datum = {0};
MyStructure a = {0};
MyStructure b;
b = {0};
As a compiler writer, I'm having a hard time understanding the difficulty, surely there's more to all this?
In C, a literal constructed with curly brackets has no definite type, and its type can't be derived from the literal itself. What is '{0}'? Are you able to give it a type? Nope.
What is '{1,2,3}'? Not any better. Could be an array (of what?), could be a struct (with what members?).
For historical reasons, C accepts such *initializers* for practical purposes. In this context, this is an initializer, not a literal that has semantics of its own.
In C99, they introduced compound literals. But you have to give them a type, otherwise they have none. For backwards compatibility, C99 still accepts initializers which are not fully-formed (so, given a type) compound literals. But it doesn't in the context of an assignment, because in that context the literal has no type and is thus not an expression you can use (again, apart from the context of initialization).
This is all a semantic question. Sure, a compiler could 'guess' the type of a compound literal without explicitly giving it a type in the context of an assignment, but it wouldn't fit in the C language with its promotion and implicit conversion rules.
Fact is, defining a syntax with which 'complex' literals (so, beyond base scalar types) fully embed their own type is hard. Depending on the "strength" of your type system, it may or may not matter. For a weakly-typed language, or even a dynamically-typed language, it may not matter at all. It also depends on what kind of grammar you define and how context-dependent it is.
So can the above proposal you made be implemented? Of course. Are there "corner cases" for which it could be less trivial? Probably.
If you find that useful, why not. But I have enough gripes about all the implicit conversions in C which allow pretty sloppy programming, so I'm not sure I'd want to add a layer of that with "typeless" compound literals.
MyStructure a = {0};
MyStructure a;
a = {0};
dcl a MyStructure = default();
// code...
a = default();
// for any type.
Initialization is indeed an exception to the rule. Such is life; rules are arbitrary and not always perfectly logical. That does not mean rules are stupid or useless. I don't mind typing a bit of extra information when it conveys important information for us programmers, like the type. This adds safety; the C standard requiring me to type the type (pun intended) means:
* I need to keep my brain enabled, so I'm less likely to make a mistake with the type
* If I make a mistake and use a wrong type, the compiler will check it for me and error out
* If I want to assign or compare incompatible types, that is only possible by writing the required conversion functions (or doing nasty pointer casting etc., which pokes you in the eye)
This is a feature of a strongly typed language. And yet, I would prefer C to be even more strongly typed. Implicit conversions are classic footguns. I don't want any more of them - and oh boy, how much I struggled with PHP and its dynamic typing system (although PHP is a disaster in every other regard, too).
Trying to make typing a bit faster is not a valid target. Less than 1% of programming time is spent actually writing characters on the code editor screen.
int a;
short b;
long c;
char d;
a = (int)100;
b = (short)100;
c = (long)100;
d = (char)100;
I like to point out that in C the variables are strongly typed, whereas the values (i.e. what's out there in memory) are untyped. That's how you can cast a car to be a camel.
This weird mix of safety and unsafety is also what makes C powerful and popular: it offers a decent level of safety (camel = car is not possible), but also easy enough mechanisms to bypass that safety: camel = *((camel_t *)&car) (or the same via union type punning, since C99 or so). But the latter still - usually! - happens on purpose. Much better than by accident.
There are and always will be three classes of people:
* Those who think C is too unsafe and too weakly typed, and should not allow bypassing type safety with any syntax,
* Those who think C is too tedious to work with, with all that type bullshit ("come on, I don't want to type it, why can't the compiler do it for me?")
* Those who are relatively satisfied with the compromise.
It's worth understanding that the first two are actually polar opposites of each other. Sometimes you can see a "C complainer" complain about these two opposite problems in the same post, which makes it look... schizophrenic. Or maybe people sometimes just want to complain, without having any idea what they are talking about. (And I'm not referring to anyone in particular this time.)
// fixed point decimal literals
1
11
101010
010101
100 000
100_000
256:D
123.45
// fixed point binary literals
1:B
101.101:B
001 110 010:B
101 110.110:b
110_001:B
11 01101 01.1:B
// fixed point hexadecimal literals
E:H
17FA:H
0AC0 7F4A:H
1AC0_7F4A:h
4AC6.7C3:H
// Octal literals (no justification/use case for octal fractions...?)
274:O
11 223 752:O
11_223_752:O
// floating point hex literals - 'p' (or 'P') eliminates the need for base designator
123.FE6p+3
123 334.FE6P-3
E05_22B.02p2
// floating point binary literals - 'b' (or 'B') eliminates the need for base designator
110.00101b+4
110.00101b3
1.0101101b-2
Well, the inability of C to deduce (I use that word intentionally, because there is no uncertainty) the type when we assign a value {0} has nothing to do with strong or weak typing. The type of the target is known and fixed at compile time.
If being explicit about the type is so helpful then why do we not see this on every assignment:
Quote: "If being explicit about the type is so helpful then why do we not see this on every assignment" ... "The problem with yourself is..."
This is going to be a looong thread.
Quote: Well the inability of C to deduce (I use that word intentionally because there is no uncertainty) the type when we assign a value {0} is nothing to do with strong or weak typing. The type of the target is known and fixed, at compile time.
Sure is, but the target is the target! It's a different thing! Did you not realize you can assign variables of different types to each other if they are compatible and have conversions? For example, uint32_t can be assigned into uint8_t, because a narrowing conversion exists.

Quote: If being explicit about the type is so helpful then why do we not see this on every assignment:
The problem with yourself is, because you don't read, you never learn.
I already mentioned that 0 is an int - this is in the language specification - but you ignored this and wasted our time by providing the stupid pseudo-example I wanted to avoid by mentioning this early on.
This is why you don't need to specify the type of the literal 100; it's int by default. You can, however, modify the type: for example, 100ULL is unsigned long long int. And you can assign an int into a float with a built-in conversion, so you can write float a = 100. Isn't C fancy?
char number;
number = 100; // no need to prefix with (char)...an implicit conversion will be inserted.
But what would {0} be? Would it be {int}, or maybe {int, int}, or how about {int, int, int}? Tell me. (And no, "same type as target of assignment" is not the answer.)
So C already has rules for how to interpret literals, and for how to convert between types. But it has no rule for how to interpret more complex compound literals based on some other nearby code.
Quote: If being explicit about the type is so helpful then why do we not see this on every assignment:
Quote: The problem with yourself is, because you don't read, you never learn.
Likely not; I don't usually play these games for too long, and it's getting near the point of getting nothing out of it.
So what is it? Is the 100 a char or an int? Yes, it IS an int, and it is being assigned to a char without a (char). On the basis of your earlier argument about "safety" and "footguns", I must assume that you always insert the (char) in such cases - if not, why not?
Quote: If being explicit about the type is so helpful then why do we not see this on every assignment:
Quote: The problem with yourself is, because you don't read, you never learn.
Yes, everything does indicate that.
Now there's nothing wrong with not having read everything; we all start out as beginners. Nor is there anything wrong with going through some of the same thought processes as people have done in the past.
But not knowing the literature, not bothering to read things people suggest would advance his understanding and thinking, and still expecting other people to "discuss" and "explain" things is bad form.

Quote: Likely not, I don't usually play these games for too long, it's getting near to the point of getting nothing out of it.
I, and some others, decided that a while ago. The only reason I bother to look at this thread is that some of the peripheral things mentioned by other people are interesting.
But, hey, go knock yourself out