Even for the people who just want Carry, I highly doubt they are prepared for the different implementations of the carry flag with respect to subtraction and compare.
And it's simply not hard to express what you ACTUALLY WANT in portable C anyway. The compiler will automatically use the flags where available.
Bignums aren't easy in portable C
Most of the complaining here is not about specific suggestions for language improvements or changes, but more about the fact that someone is actually daring to rock the boat and discuss the subject dispassionately, pointing out the elephant in the room!
Bullshit.
It's frequently my job to "rock the boat" by proposing changes and improvements to processes, libraries, programming languages, and even new machine code instructions. You can find my name for example in the credits for the RISC-V base instruction set and the B and V extensions.
Your proposed changes address ONE aspect of the problem:
- does the change make a programmer's life slightly more convenient?
You totally ignore every other aspect, such as:
- can the proposed feature be efficiently implemented on the target devices?
- is it a significant improvement vs using a library (function, macro) or maybe code generator instead?
- is the product of the improvement and the frequency of use sufficient to justify the incremental cumulative increase in complexity of the language, compiler, manuals, training?
I used a naked "8-pin Arduino" in my Vetinari clock, because that was the quick way to get an undemanding result.
But when making something that required very low power consumption, I simply wrote C to peek/poke the atmega328 registers. That avoided me having to work out how Arduino libraries might be frustrating me.
I note that one of the desires there was "give easy access to the flags". I pointed out that many ISAs don't even *have* flags.
Your position is pessimistic, discouraging, and not open-minded. This has nothing to do with your undoubted credentials either; it is you, your hostile tone. This is just a discussion and you're frothing and fuming.
Oh come on, now you're just sounding like a petulant child yourself, one that is disappointed that someone is pointing out a flaw in their Perfect Plan, one that is expecting adulation and praise. Get your ego out of the discussion, and concentrate on the subject matter instead.
So, what specifically did you want to say about language grammar, extensibility, features that could be helpful to MCU developers?
There is no surprise that there are engineers quite content with C, or even committed to C.
When talking about low level programming languages –– which is what I understand 'a hardware "oriented" programming language' to mean ––, C is just the one with the best proven track record decades long. It isn't that great, it's just the 'benchmark' one for others, due to its widespread use and role in systems programming and embedded development.
For examples of nearly bug-free programs written in C in the systems programming domain, go check out D. J. Bernstein's djbdns, qmail, daemontools, cdb. This is the guy behind Curve25519, having released it in 2005.
Like it, dislike it, doesn't matter, C is just a tool. But as a tool, its features and track record are significant. So are its deficiencies, and that means any real effort to do better is valuable.
In comparison, C# is a managed language. .NET Micro requires at least 256k of RAM. .NET nanoFramework requires at least 64k of RAM, and runs on Cortex-M and RISC-V (ESP32-C3) cores. So, perhaps suitable for medium to large embedded devices, but decidedly unsuitable for small ARMs and anything less than 32-bit architectures.
Ada can be used to program AVR 8-bit microcontrollers (see AVR-Ada), but it is still relatively little used. One possible reason is that while GCC GNAT is GPL3+ licensed with a runtime library exception, AdaCore sells GNAT Pro, and the FSF/GCC GNAT is seen as "inferior", with the "proper" version being the sole product of a commercial company. (Or maybe that's just me.)
I get that some consider this pointless
No, that's not it at all. Not pointless, more like bass-ackwards. We want the results too, we just have seen your approach before leading to nowhere. We're trying to steer you to not repeat that, but actually produce something interesting.
If you start a language design from scratch, you must understand the amount of design choices already made for existing languages. The ones in languages that have survived use in anger are the ones where the choices support a programming paradigm the users find intuitive and effective.
Why did DiTBHo not start from scratch, and instead pared down C to a subset with some changes and additions, to arrive at their my-C, designed for strictly controlled and enforced embedded use cases? Because they needed a tool fit for a purpose, and it was a straightforward way to achieve it. Results matter.
Why did SiliconWizard's Design a better "C" thread 'not go anywhere'? It just sprawled around, with individual features and other languages discussed. In fact, it really showed how complicated and hard it is to do better than C from scratch; with other languages like Ada discussed but nobody knowing exactly why they never got as much traction as C. Just consider this post by brucehoult about midway in the thread, about how C with its warts and all still maps to different hardware so well.
Me, I have worked on replacing the standard C library with something better. Because the C standard defines the freestanding environment, where the C standard library is not available, in quite some detail –– unlike say C++, which also has the same concept, but leaves it basically completely up to implementations to define what it means ––, this is doable. I aim to fix many of the issues others have with C. With C23 around the corner, the one change I think might actually make a difference is for arrays not to decay to pointers, and instead conceptually use arrays everywhere to describe memory ranges. Even just allowing a parameter's type to refer to a later variable in the same argument list would make it possible to replace buffer-overrun-prone standard library functions with almost identical replacements that would allow the C compiler to detect buffer under- and overruns at compile time. It would only be a small addition, perhaps a builtin, to make it possible to prove via static analysis that all memory accesses are valid.
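As a rough sketch of the kind of replacement I mean (using only the C99 variably modified parameter syntax that already exists, with a hypothetical function name, and with the length parameter placed first because that is what C99 requires today):

#include <stddef.h>
#include <string.h>

/* Hypothetical bounds-aware replacement for strcpy-like functions.
   The declared bound dstlen lets a compiler or static analyser flag
   calls where the destination is provably too small (e.g. GCC's
   -Warray-parameter / -Wstringop-overflow warnings). */
size_t copy_string(size_t dstlen, char dst[dstlen], const char *src)
{
    size_t n;

    if (!dstlen)
        return 0;

    n = strlen(src);
    if (n >= dstlen)
        n = dstlen - 1;      /* truncate rather than overrun dst */

    memcpy(dst, src, n);
    dst[n] = '\0';
    return n;                /* number of characters actually copied */
}

/* Usage:
       char buf[16];
       copy_string(sizeof buf, buf, "hello, world");
*/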
In other words, I'm looking to change the parts of C that hinder me or others, not start from scratch.
Am I a C fanboi? No. If you look at my posting history, you'll see that I actually recommend using an interpreted language, currently Python, for user interfaces (for multiple reasons).
I currently use C for some embedded (AVRs, mainly), and a mixed C/C++ freestanding environment for embedded ARM development; I also use POSIX C for systems programming in Linux (mostly on x86-64). (I sometimes do secure programming, dealing with privileges and capabilities; I got some experience as a sysadmin at a couple of universities, and making customized access solutions, e.g. when you have a playground with many users at different privilege levels, and subsections with their own admins, including sub-websites open to the internet. It's not simple when you're responsible for ensuring nothing leaks that shouldn't leak.)
Okay, so if we believe that a ground-up design from scratch is unlikely to lead to an actual project solving the underlying problems OP (Sherlock Holmes) wants to solve, what would?
Pick a language, and a compiler, you feel you can work with. It could be C, it could be Ada, it could be whatever you want. Obviously, it should have somewhat close syntax to what you prefer, but it doesn't have to be an exact match. I'll use C as the language example below for simplicity only; feel free to substitute it with something else.
Pick a problem, find languages that solve it better than C, or invent your own new solution. Trace it down to the generated machine code, and find a way to port it back to C, replacing the way C currently solves it. Apply it in real life, writing real-world code that heavily uses that modified feature. Get other people to comment on it, and maybe even test it. Find out if the replacement solution actually helps with real-world code. That often means getting an unsuspecting victim, and having them re-solve a problem using the modified feature, using only your documentation of the feature as a guide.
Keep a journal of your findings.
At some point, you find that you have enough of those new solutions to construct a completely new language. At this point, you can tweak the syntax to be more to your liking. Start writing your own compiler, but also document the language the compiler works with, precisely. As usual, something like ABNF is sufficient for syntax, but for the paradigm, the approach, I suggest writing additional documentation explaining your earlier findings, and the solution approach. Small examples are gold here. The idea is that other people, reading this additional documentation, can see how you thought, so they can orient themselves to best use the new language.
Theory is nice, but practical reality always trumps theory. Just because the ABNF of a language looks nice, doesn't mean it is an effective language. As soon as you can compile running native binaries, start creating actual utilities – sort, grep, bc for example –, and look at the machine code the compiler generates. Just because the code is nice and the abstractions just perfect, does not mean they are fit for generating machine code. Compare the machine code to what the original language and other languages produce, when optimizations are disabled (for a more sensible comparison).
During this process, do feel free to occasionally branch into designing your language from scratch. If you keep tabs on your designs as your understanding evolves, you'll understand viscerally what the 'amount of design choices' I wrote above really means. It can be overwhelming, if you think of it, but going at it systematically, piece by piece, with each design choice having an explanation/justification in your journal and/or documentation, it can be done, and done better than what we have now.
Finally: I for one prefer passionate, honest, detailed posts over dispassionate politically correct smooth-talk.
So, what specifically did you want to say about language grammar, extensibility, features that could be helpful to MCU developers?
There is very little (if anything) about language grammar that could be useful to MCU developers.
If extensibility is important, then domain specific libraries are the way to go.
Your initial questions were about far more than grammar, and therefore potentially more interesting. That's why people contributed.
You have consistently failed to understand that the main topic of importance is the runtime behaviour (in a broad sense) of something expressed in a language. Anything that improves the runtime behaviour is potentially important.
So, what specifically did you want to say about language grammar, extensibility, features that could be helpful to MCU developers?
I'm saying that they are not important to microcontroller or embedded developers at all.
I am saying that you are trying to solve a problem by daydreaming about irrelevant stuff, when you could be working on making things actually better.
Are you sure you're not a Windows C# enthusiast who dreams of creating a C#-like language that makes all embedded devices look like Windows or at least .Net runtime –– because "Windows is basically the entire world, after all", and being adulated for it? You definitely sound like one. I've met many, and all have either crashed or burned, or keep making unreliable unsecure shit and getting by with their social skills alone.
There is very little (if anything) about language grammar that could be useful to MCU developers.
How did you establish that? What evidence supports this claim?
Here's a post by an engineer that proves you wrong, I quote:

The first challenge I ran into with Rust was getting my firmware to run on hardware varying from 4-button dev-kit PCBs to the left/right halves of a wireless split to a single Atreus.

Varying the features of firmware at compile-time is known as “conditional compilation”. (It needs to be done at compile-time rather than run-time because microcontrollers have limited program space, roughly 10–100kB in my case.) Rust’s solution to this problem is “features”.

Conceptually Zig’s inline for is solving the same problem that Rust’s syntax macro solves (generating type-specific code at compile-time), but without the side quest of learning a lil’ pattern matching/expansion language. Rust has many language features and they’re all largely disjoint from each other, so knowing some doesn’t help me guess the others.

Conversely, this “consistency” principle also explains why I had such an easy time picking up Zig — it absolutely excels in this department. Not only are there many fewer features to learn in the first place, they seem to all fit together nicely: The comptime and inline for keywords, for example, allowed me to leverage at compile-time all the looping, conditions, arithmetic, and control flow I wanted using the syntax and semantics I’d already learned — Zig!
If extensibility is important, then domain specific libraries are the way to go.
They are certainly a way to go, I'm not advocating against libraries! A language that had no "volatile" specifier could be fine if it relied on a library to manipulate such data, but does that mean you therefore disapprove of C having a "volatile" keyword? Which do you think is preferable, a keyword for it or a library?
Your initial questions were about far more than grammar, and therefore potentially more interesting. That's why people contributed.
You have consistently failed to understand that the main topic of importance is the runtime behaviour (in a broad sense) of something expressed in a language. Anything that improves the runtime behaviour is potentially important.
Please do not venture to speculate what you think I understand or do not understand, that's a disparaging remark, an ad-hominem.
How did you establish that? What evidence supports this claim?
Observation over the decades. Basically MCU developers can swap grammars relatively easily and quickly. What trips them up is the behaviour, i.e. how the concepts expressed in a grammar (any grammar!) map onto the MCU's runtime behaviour.
One data point (and a very arguable data point at that) does not constitute a convincing argument.
If extensibility is important, then domain specific libraries are the way to go.
They are certainly a way to go, I'm not advocating against libraries! A language that had no "volatile" specifier could be fine if it relied on a library to manipulate such data, but does that mean you therefore disapprove of C having a "volatile" keyword? Which do you think is preferable, a keyword for it or a library?
The significance of a keyword is not that it is one symbol (of many symbols) in a grammar. The significance is how it is mapped onto runtime behaviour. The objective is to ensure that multiple sources that cause data mutation (threads, hardware registers, interrupts etc) have defined predictable useful behaviour. All that is necessary is that primitives (language and hardware) exist to express the necessary concepts.
There are, of course, several such low-level mechanisms described in the literature over the decades, and different languages include different low-level mechanisms. Those low-level mechanisms are usually "wrapped" into several more useful high-level conceptual mechanisms in the form of libraries expressing useful "Design Patterns", e.g. Posix threads or Doug Lea's Java Concurrency Library.
"Naked" use of the primitives (rather than well designed and debugged) libraries of design patterns is a frequent source of subtle unrepeatable errors.
Notice that the language grammar is completely irrelevant in that respect; the runtime behaviour is what's important.
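To illustrate with a sketch (the peripheral, addresses, and names here are made up): the behaviour is what matters, and whether the volatile-qualified primitive is used naked or wrapped behind a tiny library interface is a packaging decision.

#include <stdint.h>

/* Hypothetical memory-mapped UART registers at made-up addresses. */
#define UART0_STATUS (*(volatile uint32_t *)0x40001000u)
#define UART0_DATA   (*(volatile uint32_t *)0x40001004u)
#define UART0_TXRDY  (1u << 0)

/* "Naked" use of the primitive: correct, but easy to get subtly wrong
   when hand-repeated all over a code base. */
void uart_putc_naked(char c)
{
    while (!(UART0_STATUS & UART0_TXRDY))
        ;                        /* every read of the status really happens */
    UART0_DATA = (uint32_t)c;    /* the store really happens, in order */
}

/* The same primitive wrapped in a minimal "library": callers rely on the
   documented behaviour, not on the keyword that implements it. */
void uart_puts(const char *s)
{
    while (*s)
        uart_putc_naked(*s++);
}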
Please do not venture to speculate what you think I understand or do not understand, that's a disparaging remark, an ad-hominem.
I call 'em as I see 'em. Others are making similar observations.
I don't think you understand what an ad-hominem argument is and isn't.
One data point (and a very arguable data point at that) does not constitute a convincing argument.
I agree, but one is so much better than no data points - if you get my drift. Your observations are fine, but all observations are interpretations of data; you cannot exclude your own biases and prejudices.
Yes, you keep saying that, but you never answered the question: if libraries are the "way to go", do you think C should not have a "volatile" keyword and instead rely on a library? If you don't understand the question then say so; if you do, then is your answer "yes", "no", or "I don't know"?
Even for the people who just want Carry, I highly doubt they are prepared for the different implementations of the carry flag with respect to subtraction and compare.
And it's simply not hard to express what you ACTUALLY WANT in portable C anyway. The compiler will automatically use the flags where available.
Bignums aren't easy in portable C
Well, yes they are, as just demonstrated.
Yes, you keep saying that, but you never answered the question: if libraries are the "way to go", do you think C should not have a "volatile" keyword and instead rely on a library? If you don't understand the question then say so; if you do, then is your answer "yes", "no", or "I don't know"?
Sigh. That's a false dichotomy.
Not only isn't your question the right question, it isn't even the wrong question. It is, however, a reflection of the point I've been trying (and failing) to get you to understand: the difference between syntax/language and semantics/meaning/behaviour. Most people here care deeply about the latter, but don't care about the former.
The behaviour I mentioned above is needed, the keyword isn't. Whatever syntax and primitives are used, they will usually be wrapped up in a library.
Well, yes they are, as just demonstrated.
Yep they are indeed.
I have implemented some kind of arbitrary precision library. It's able to work with various base integer widths without a problem. I've admittedly used some of GCC's builtins to speed things up (such as the 'compute with overflow' kind of builtins), which themselves are reasonably portable if you stick to GCC, but I could have perfectly well done without them and made the code 100% standard C.
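For what it's worth, here is a minimal sketch of the portable idiom (not my library's actual code): the carry out of each limb addition can be recovered without any flag, and compilers generally turn this into add-with-carry sequences where the target has them. The GCC/Clang __builtin_add_overflow builtins mentioned above just express the same thing more directly.

#include <stddef.h>
#include <stdint.h>

/* r = a + b over n 32-bit limbs, least significant limb first.
   Returns the final carry (0 or 1). 100% standard C, no flags needed. */
uint32_t bignum_add(uint32_t *r, const uint32_t *a, const uint32_t *b, size_t n)
{
    uint32_t carry = 0;

    for (size_t i = 0; i < n; i++) {
        uint32_t s = a[i] + carry;
        uint32_t c = (s < carry);   /* carry out of adding the incoming carry */
        s += b[i];
        c += (s < b[i]);            /* carry out of adding b[i]; c stays 0 or 1 */
        r[i] = s;
        carry = c;
    }
    return carry;
}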
A grammar is necessary (not sufficient, of course), unavoidable in fact, so we need a grammar, and I'm of the opinion that a grammar based on PL/I subset G is extremely attractive, for reasons I've already articulated. A criticism I have of many, many newer languages like Rust, like Hare, like Zig, like Go, like Swift, even C#, is that they are based on the C grammar and therefore do - unfortunately - stand to repeat some of the sins of the past.
I had implemented a 32-bit floating point package on a 6800 a decade earlier, in a medium level language. Without checking, I would have used the carry flag, which doesn't exist in C.
This caught my eye: "carry flag". Are you of the opinion this could be useful, if it had existed in C?
The behaviour I mentioned above is needed, the keyword isn't. Whatever syntax and primitives are used, they will usually be wrapped up in a library.
Very well, I'll tell you: the answer is in fact "no", because C with a volatile keyword is preferable to a library method invocation both syntactically and semantically, and for performance reasons too.
Now, what are examples of these semantic, behavioral concepts you "care deeply about"? I'm as interested in this as I am in grammar, but was focusing on grammar initially; if you want to start discussing behaviors then fine.
One cannot have a language without a grammar, so one must - somehow - identify a grammar to use; it is an essential part of a programming language. You seem to be saying it is irrelevant; well, if that were true, why not use assembler, or perhaps COBOL or RPG?
Finally, as I just pointed out but you seem to have missed, libraries do not grow on trees; they are written, and they are written using a language. You cannot wave away the language issue by replacing it with a library; all that does is move the code, it doesn't eliminate it.
A grammar is necessary (not sufficient, of course), unavoidable in fact, so we need a grammar, and I'm of the opinion that a grammar based on PL/I subset G is extremely attractive, for reasons I've already articulated. A criticism I have of many, many newer languages like Rust, like Hare, like Zig, like Go, like Swift, even C#, is that they are based on the C grammar and therefore do - unfortunately - stand to repeat some of the sins of the past.
I see grammar as an arbitrary choice, related more to what the designer considers "pretty" than anything else.
It is the language paradigm (generic approach to problem solving), and concepts, that matter.
If you start with a grammar, you are basically starting with the statement "This is the way all possible concepts in this programming language shall be expressed."
If you start at the conceptual level, you sketch out the features of the language, and can then choose a grammar that best suits the needs.
In my opinion, the only reason to start from the grammar upwards is if you 1) believe you already know everything you need to know about the language being designed, and/or 2) how it looks is of paramount importance to you. (These are the two reasons I've seen in the wild, that is.)
Let's use my own gripes with C as a starting point, and examine how the above affects the design process and the result.
Because of embedded uses, I want to allow pointers to specific addresses and to individual objects, but I do not want them to be extensible to arrays, except via explicit constructs that also specify the size/range of the array.
I do not want to force the ABI to pass arrays as (origin, length) or (origin, step, length) tuples, because my aim is to be able to prove that all accesses are valid using static code analysis tools. If a cosmic ray flips a bit and that causes a crash because of lack of runtime bounds checking, I'm okay with that. (This, of course, is a design choice, and by no means the only one possible!)
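(For concreteness, the alternative I am rejecting would amount to something like the following in today's C; the type and names are made up purely for illustration.)

#include <stddef.h>

/* A "fat pointer": the (origin, length) tuple the ABI would have to pass
   for every array parameter if runtime bounds checking were the goal. */
struct byte_span {
    unsigned char *origin;
    size_t         length;
};

/* With such a type, every access can be checked at run time... */
static inline unsigned char span_get(struct byte_span s, size_t i)
{
    /* ...at the cost of a branch per access; a real implementation
       would trap or report an error instead of returning 0. */
    return (i < s.length) ? s.origin[i] : 0;
}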
I also like being able to redefine basic operators for custom object types. I don't want to go whole-hog object-oriented, making the operator overloading object-specific; I'm fine with the operator overloading being purely based on static types only. (I am aiming at small to medium-sized embedded devices, and the cost of the indirection via object pointers is too much for this niche, in my opinion. Another design choice.)
I now cobble together something that seems to compile a small snippet of source code to pseudo-assembly.
Uh-oh, problem. Because I do not force ABIs to explicitly pass a parameter describing the array length, I need a way to explicitly extract the origin and the length of an array. I could use built-in functions or address and length operators. Except that if I have already decided on a grammar, my choice is dictated by that grammar. Oops.
In real life, both options have their upsides and downsides. When the length is needed in a function where the array was passed as a parameter, we really do need to pass the length also. There are several options for this particular design choice, but they all boil down to the fact that in a function specification, we need a way to specify that a parameter is the length of a specific array also passed as a parameter. (With this information, my compiler can tell at compile-time whether the length operator or built-in function is allowed.) This significantly affects the grammar, of course!
If my grammar was locked in already, I'd have to cobble together a less than optimal workaround.
Even though I have piles of experience myself, I know I am not aware of all the features and details beforehand. But I also know that I can construct the grammar as I go along, and collect and implement all the features of the new programming language. Having the grammar be defined by the features, instead of vice versa, gives me the most leeway.
For example, if I wanted this language to be mostly compatible with C otherwise, I would add address-of and length-of operators, so that those porting code would need to pay attention to every case where a pointer is used as an array or vice versa. C already has an address-of operator &, but it might be confusing, and make the static code analysis more difficult. Instead, I might choose say @ and #, or origin and length (keyword operators, like sizeof). But I do not know yet; I would experiment with it in practice –– on unsuspecting victims, preferably –– to see if they grasp the concept intuitively, and therefore would be likely to apply this to write memory-safer code. Locking in the grammar up front makes such experiments irrelevant; the decision has been made already.
Thus, the obvious conclusion from going at it grammar-first is that it is grandiose, futile, a waste of time, or all three.
Do not forget that others are not commenting about you personally; they are commenting on your approach and what you describe in your output.
This is also why I use a pseudonym, instead of my real name: it helps me remember that any negative feedback I get is based on my output, my communications, and not my person. I can change my communication style somewhat –– although verbosity seems to be a fixed feature for me ––, but there is no reason to think my person is under attack, so although discussion can get very heated at times, there is no reason to consider it as between persons: it is just a heated discussion. In a different concurrent discussion with the same members, I can be (and have been), quite calm and relaxed –– again, because it is not about persons, the heat is only about the subject matter. I think sometimes a little heat is good, because it shows that others care, and because it (at least for me) causes one to reconsider their own output, to see if the heat is on track, or just a mistargeted flamethrower. Useful, in other words.
I don't think it's very easy to devise a programming language grammar that is 100% free from the influence of other, previous languages. There's no such thing as a totally brand new, fresh grammar, or it's very, very rare. APL could be considered an example of that, though, I suppose.
Consider too C#, Java, C++, Swift, Objective-C, Perl, PHP, JavaScript, Go, Rust and more, all are regarded as being derived from the C grammar. So all of these must have selected - up front - C as the basis for their grammar.
Someone else mentioned that "the carry flag isn't available in C"; well, this is the kind of thing I'd like to hear more about. Would it, could it, be helpful to expose such a concept in the language?
I've used LISP, Prolog, Forth, Smalltalk, Algol-60, and various hardware definition languages. They all have radically different syntaxes.
And there you unwittingly demonstrate our point. The interesting and important differences between those languages have nothing to do with grammar.
Start by understanding the concepts in https://www.jameswhanlon.com/the-xc-programming-language.html and https://www.xmos.ai/download/XMOS-Programming-Guide-(documentation)(F).pdf
I note that one of the desires there was "give easy access to the flags". I pointed out that many ISAs don't even *have* flags.
Yep. In particular, you helped me realize I did not actually desire access to flags, just multiple function result values, to fulfill my needs.