I wish the solution wasn't Rust though. Rust isn't a standardized language, unlike C++, ECMAScript, C#, and others. Rust has made breaking changes.
"Memory safe language" makes people think it's a solution to a problem. These languages may be a part of a solution, but claiming they are a solution is serious bullshit that should get people thrown out of decent society. To give an example, it's like claiming that "homo sapiens comes from monkeys": no one shouts "bullshit", even if scientifically speaking it's bullshit.

alias(memory safe, less dangerous): technically and politically incorrect, but encouraging, and that's what we need.
It won't be an easy transition, quite the radical architectural change. There's a lot of work being done lately to do it transparently. See:
"Memory safe language" makes people think its a solution to a problem. These languages may be a part of a solution, but claiming they are a solution is serious bullshit, that should get people thrown out of decent society,
Since people like Michael Jackson in the 1970s until today, hardly anyone has pushed their latest new software thing as something which helps a bit in putting solid applications together. It's always pushed as a magic bullet. What's wrong with names that reflect reality, and set reasonable expectations? Maybe people would get less disillusioned when they find the real strengths and weaknesses of the new thing.
They are a solution to part of the problem. An important part of the problem.
The same is true of seatbelts and road safety. And speed limits and road safety.
Putting a big spike in the steering wheel is not a practical solution to road safety, any more than requiring "correct" usage of many of C's features is a practical solution for safe memory use.
these days, Linus is discussing two things, well three things ...
- more Rust support for more drivers written in Rust
- micro kernel approach for linux, with a userspace scheduler already written as proof of concept
- development languages for future kernels must be memory safe, because with multi core and modern features we have already reached the point of no return in terms of kernel complexity
(
oh, and from how some have responded - my speculation - it also seems that many Linux developers ...
... have now reached retirement age, so they would like to make way for younger people
mumble ... :-//
)
And most of these younger developers, with few exceptions, don't want or just plain can't write proper C. So, there is that.
Wow, I did not see this coming.
- micro kernel approach for linux, with a userspace scheduler already written as proof of concept
So Tanenbaum had a point, in the end (others were proven wrong by history).
Though I looked at the language a while ago and honestly, if I still have to go the unsafe route to do any peripheral access I might as well keep writing C...
That's a pretty silly argument even for embedded development. On all but the most trivial programs (which: who cares what you use), only a tiny fraction of the code is actually performing the register reads/writes. I'm not completely thrilled with the way Rust works on embedded, or the way unsafe works in Rust, but most code even in embedded could be safer.

It depends what you are programming. A lot of small embedded processors are doing endless manipulation of peripherals. However, what you say is true for most larger embedded programs. You really ought to make life as easy as possible on the bulk of the code, so you have more time to put more focus on the gritty stuff.
You just aren't going to make small embedded machines super safe to program, unless, perhaps, AI assistants get so good they pick up the errors. We need to take the wins where they present themselves. There are many weird and wonderful multi-core devices for communications applications, with very heterogeneous layouts of memory and peripherals. Good luck trying to build moderately safe programming environments for those. They are a big security issue, as it's often very hard to figure out from their documentation just what it is you are supposed to do in all cases.
I prefer to get the computer to prevent me from making as many avoidable errors as possible. Attitudes like "I can safely handle loaded guns pointed at my foot" are macho libertarian nonsense.
Twiddling bits in a register is not a major issue. Ensuring the correct bits are twiddled at the correct time is far more complex and error prone.
Thus I don't care if peek and poke are implemented in C or assembler. Correctly calculating the invocations of peek and poke in a multimode processor is far more challenging. Any tool that helps automatically verify that those values aren't incorrect is valuable.
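As a hedged sketch of that split (the register names and addresses below are made up): the peek/poke primitives are one-line volatile accesses, and all the risk lives in the code that decides which address, which bits, and in which order.

Code: [Select]
#include <stdint.h>

/* The primitives are trivial, whichever language they are written in. */
static inline uint32_t peek(uintptr_t addr)             { return *(volatile uint32_t *)addr; }
static inline void     poke(uintptr_t addr, uint32_t v) { *(volatile uint32_t *)addr = v; }

/* Hypothetical register addresses, purely for illustration. */
#define CLK_ENABLE_REG 0x40021018u
#define UART_CTRL_REG  0x40011000u

/* The hard, error-prone part sits above the primitives: the right register,
 * the right bits, in the right order (clock before peripheral, here). */
void uart_enable(void)
{
    poke(CLK_ENABLE_REG, peek(CLK_ENABLE_REG) | (1u << 4));
    poke(UART_CTRL_REG,  peek(UART_CTRL_REG)  | 1u);
}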
A government decree (...)
Agreed.
This is actually the worst part of this story indeed.
Before joining the White House the guy was busy at the executive diversity and inclusion council at the CIA.
However, what you say is true for most larger embedded programs. You really ought to make life as easy as possible on the bulk of the code, so you have more time to put more focus on the gritty stuff.

And I agree very much with this as well, but that's not what I usually work on.
From the perspective of somebody more familiar with device drivers than thread schedulers, memory safe languages are an absolute joke when you are dealing with DMA-capable hardware, limited IOMMUs (page granularity and high cost of changing page mappings) and potentially malicious external DMA peripherals (Thunderbolt, USB4).
In what way? A buffer overflow in your code to handle DMA can be chained all the way to full system control. To me it seems even more important.
Another thing to watch out for is significantly different technologies that aren't sufficiently practical outside a laboratory/academia. Examples have been formal methods, Erlang, the various ML/Haskell/etc. languages. Interesting, but not worth spending too much time on.

I'd make an exception for Erlang. I'm not so fond of the language itself, but it cannot be considered an academic language with no practical use.
Its real-world applications are, in fact, all around you. A number of large companies use it (e.g. Klarna in Sweden) - Facebook chat backend used to be implemented in Erlang, but they migrated away.
But, above all, it's deployed in literally millions of radio devices in the Radio Access Network for mobile communications.
What I've personally seen in development (note: I do not work directly with it, but I'm a user of a number of services implemented with it) is a quicker turnaround time from requirement to implementation, with a quality no worse than more classical languages.
Because of this WH announcement (which I read yesterday), one of the footnotes led me to Google Project Zero (which I have been aware of for a while), and that led me to a blog post about the NSO zero-click iMessage exploit: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html. I was only vaguely aware of Pegasus and NSO (this was in the news a year or so back), but the actual exploit, and the mindset that it took to write it, is heart stopping.

This is likely a prime candidate for why software (in general) and those parts which are widely used (in particular) need to have a much cleaner attack footprint. Who knew that an image parser could be manipulated in this way?
Though I looked at the language a while ago and honestly, if I still have to go the unsafe route to do any peripheral access I might as well keep writing C...
The recommended method is to use a HAL (Hardware Abstraction Layer) library for your platform. This adds a lot of extra checks besides memory safety. For example, if you attempt to use the wrong pin for an unsupported function, such as SPI on a pin that cannot do SPI, your program simply won't compile.
While a good design goal for an abstraction layer, I have never seen any of this in real life.
Maybe time to think about starting alternatives.
https://docs.rs/stm32-hal2/latest/stm32_hal2/
Looked at it and it still seems to miss the mentioned feature, compile-time checking of peripheral availability on a given pin:

Code: [Select]
let mut scl = Pin::new(Port::B, 6, PinMode::Alt(4));
scl.output_type(OutputType::OpenDrain);

Still seems to map based on magic numbers. I mean, if I change that to Alt(5), or to Port::B, 10, where does the compilation fail?
To make that tractable you need to constrain the problem, and that is best done by having a constrained language and environment. The trick is to have constraints which are easy enough to work with and still offer useful guarantees for a large number of problems.

Which is why it makes much sense to have a thin C/C++ layer and use sandboxed languages like Lua or Python to implement the logic. A program can still crash, but the C/C++ layer can do a graceful recovery.
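A minimal sketch of that split, assuming the standard Lua C API is available and linked (the error string and the division of labour are purely illustrative):

Code: [Select]
#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

int main(void)
{
    lua_State *L = luaL_newstate();   /* interpreter owned by the thin C layer */
    luaL_openlibs(L);                 /* a real sandbox would expose far less  */

    /* The application logic lives in the scripting language. */
    const char *logic = "error('bug in the high-level logic')";

    if (luaL_dostring(L, logic)) {    /* nonzero return means the script failed */
        /* Graceful recovery in C: log the error and keep the process alive. */
        fprintf(stderr, "recovered from Lua error: %s\n", lua_tostring(L, -1));
        lua_pop(L, 1);
    }

    lua_close(L);
    return 0;
}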
Just use Ada then instead of a cryptic language sponsored by woke hipsters with no formal specification yet. ;D
A program can still crash, but the C / C++ layer can do a graceful recovery.
A practical simple example would be an inertial navigation system which does dead reckoning. Or a centrifuge controller.
practical request by those doing programming to implement things in practice
I'm including the check writers in "those doing programming to implement things in practice", i.e. as in entire companies.
Indeed, some check writers have decided to stop listening to them though. In practice, the programmers will request money before C.
If a C compiler propagated array bounds checking across function prototypes at compile time, allowed modifier variables to be listed after the variably modified array instead of only before, then it would be rather easy to write memory-safe code in C, too.
The fact that we have objective-C and C++, but lack even compile-time bounds checking for variably-modified array bounds in C, tells me memory safety is more a niche and politics and research subject than a practical request by those doing programming to implement things in practice. In practice, memory safety is really just a programming attitude, an approach, rather than some magical property of the programming language itself.
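For readers who haven't met it, this is the existing C syntax being referred to; a sketch only, and the complaint stands: the bound is not enforced across the call, and the size has to be declared before the array it describes.

Code: [Select]
#include <stdio.h>
#include <stddef.h>

/* C99 variably-modified parameter: n must be declared before buf[n]. */
void fill(size_t n, int buf[n])
{
    for (size_t i = 0; i < n; i++)
        buf[i] = (int)i;
}

/* Declaring the size after the array is not valid standard C:
 *     void fill(int buf[n], size_t n);   // will not compile
 * (GCC's "parameter forward declaration" extension is the usual workaround.) */

int main(void)
{
    int small[4];
    fill(4, small);        /* fine */
    /* fill(8, small); */  /* would overrun small[], yet no diagnostic is
                              required: the bound is not checked at the call */
    printf("%d\n", small[0]);
    return 0;
}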
Don't forget the consequences of aliasing.

Read the post. noalias keyword would be equivalent to restrict for pointers.
Don't forget the consequences of what happens inside the library code that your [remarkably and completely competent] programmers and organisation didn't write.

You mean, because the world is full of shit, it makes no sense to generate anything better than shit?
Last I looked, which was a long time ago, noalias means
Those who are interested in memory safety and get paid to get stuff done
Maybe you missed the fact there is no "noalias" keyword in C. This was a suggestion.
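For context, this is roughly what the suggested noalias would buy, by analogy with C99's restrict (a sketch, not taken from the posts above):

Code: [Select]
#include <stddef.h>

/* With restrict, the compiler may assume dst and src never overlap, so it can
 * keep values in registers and vectorise; a hypothetical noalias qualifier
 * would carry the same promise. */
void scale_add(size_t n, float *restrict dst, const float *restrict src)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += 2.0f * src[i];
}

/* Calling scale_add(n, a, a) breaks the no-aliasing promise and is undefined
 * behaviour -- which is exactly the "what if someone casts it away" concern. */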
While a good design goal for a abstraction layer, I have never seen any of this in real life.
In the case of Rust and common MCUs like STM32 the open source people already solved this. We have good crates with HALs with all the bells and whistles.
https://docs.rs/stm32-hal2/latest/stm32_hal2/
And then you follow the link and read this:

Code: [Select]
Errata
SDIO and ethernet unimplemented
DMA unimplemented on F4, and L552
H7 BDMA and MDMA unimplemented
H5 GPDMA unimplemented
USART interrupts unimplemented on F4
CRC unimplemented for F4
High-resolution timers (HRTIM), Low power timers (LPTIM), and low power usart (LPUSART) unimplemented
ADC unimplemented on F4
Low power modes beyond csleep and cstop aren't implemented for H7
WB and WL are missing features relating to second core operations and RF
L4+ MCUs not supported
WL is missing GPIO port C, and GPIO interrupt support
If using PWM (or output compare in general) on an Advanced control timer (eg TIM1 or 8), you must manually set the TIMx_BDTR register, MOE bit.
Octospi implementation is broken
DFSDM on L4x6 is missing Filter 1.
Only FDCAN1 is implemented; not FDCAN2 or 3 (G0, G4, H7).
H5 is missing a lot of functionality, including DMA.
I haven't spent much time on C since the committee spent years debating whether it should be possible or impossible to "cast away const". There are good arguments for and against either decision, which is a good indication that there are fundamental problems lurking in the language.

If you consider that relevant to memory-safety, then by the same logic Rust isn't memory-safe, because it allows the programmer to write unsafe code.

Now, is it possible, within the language specification, to "cast away noalias"?

In general, I do not believe a programming language should cater to the least common denominator, i.e. to try and stop people from shooting themselves in the face with the code they write. I am not willing to trade any efficiency or performance for safety, because I can do that myself at run time.
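To make the const half of that concrete (a sketch of the language-lawyer issue, not anyone's real code):

Code: [Select]
static const int limit = 10;   /* the compiler may place this in read-only storage */

void cast_it_away(void)
{
    int *p = (int *)&limit;    /* the cast itself is legal C                        */
    *p = 42;                   /* undefined behaviour: the object is defined const,
                                  so this may fault, be ignored, or appear to work  */
}

Which is why the committee argument ran both ways: a debugger legitimately wants to do exactly this, while the optimiser wants to assume it never happens.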
Now, if the intent is to get even idiots and LLMs to write "safe" code, then the language needs to be designed from the get go for those who otherwise would be sitting in a quiet corner stuffing crayons up their nostrils and eating white glue.
I'm not interested in those. Given the choice between dangerous-but-unlimited and safe-but-limited, I always choose the first one, because I can do "safe" myself. Again, the large majority of existing C code is crappy not because C itself is crappy, but because the majority of C users are not interested in writing non-crappy code. One can write robust, safe code even in PHP (gasp!), although it does require some configuration settings to be set to non-insane values.
In practice, whether casting away const or noalias/restrict should be allowed or not, depends on the exact situation and context. It is more about style and a part of code quality management tools an organization might use.
... If you or a library/debugger/etc does [cast away const/noalias], are there any guarantees about what happens - or are nasal daemons possible?
Simply put, the issue is social, not technological.
Yeah, how dare you? You're a memory safety denier.
I start from assuming this world and its inhabitants.

I start by claiming that it is impossible to create a world safe for all humans, and yet have any kind of free will or choice. Instead, I want to maximize the options each individual has. That includes tools that help, but do not enforce, with things like memory safety.
I would like to live in A Better World, but so far I haven't succeeded.

I do not, because I cannot define exactly what a Better World would be, without modifying humans. (And that would be tyranny by definition.)
I'm kind of trained that way early on in my career and still have lots of checks in my code to make it robust, but it makes programming in C super tedious. And it is hard to convince others of programming with a similar approach. Mastering C to a level to do something useful is hard enough.
Something that is under-appreciated about that exploit is that you don't need to be connected to the internet for it to run. If you have/had an air-gapped (or firewalled) phone / computer / laptop / etc., the mere fact that you rendered that specially crafted PDF document (which could be a datasheet) is all it took. Once infected, the air-gapped device might not be so air-gapped after all.
It is also an illustration that, while your company/team may have perfectly adept C programmers, what about that library from another company and how it interacts with something else that your perfect programmers didn't develop.
Problems do arise when managers/businesses don't want to pay for thorough checks

Here we are in violent agreement. :-+
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.
More basically, if you can cast away const then const stuff can't go into NV memory, which would make C pretty useless for many kinds of machine. "const" was an absolutely essential feature for the MCU world. For the very first pass of standardising C in the 1980s, const had to go in to make it a suitable language for the embedded market.
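A sketch of why that mattered; where things actually land depends on the toolchain and linker script, so treat the comments as typical rather than guaranteed:

Code: [Select]
#include <stdint.h>

/* With const, a typical MCU toolchain puts the table in .rodata, i.e. flash. */
static const uint16_t sine_table[4] = { 0u, 16384u, 32768u, 49152u };

/* Without const it is a mutable object: it has to live in (scarce) RAM and be
 * initialised by the startup code copying the values out of flash. */
static uint16_t ram_table[4] = { 0u, 16384u, 32768u, 49152u };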
That was an issue in the 80s. These days, with most NV memory being flash, the debuggers just rewrite a page.
And if you can't cast away constness then you can't write a debugger that pokes (ordinary) memory.
Damned if you can, damned if you can't => damned :)
The committee took years debating that in the early-mid 90s. That is damning in itself.
...
I am interested in the latter, but do not believe newer languages will solve all the problems: they will just replace the old set of problems with a new one, because humans just aren't very good at this sort of design, not yet anyway. I do hope Rust and others will become better than what we have now, but they're not there yet.
For the reasons I described earlier, I suggest better overall results would be achieved faster by smaller incremental changes, for example those I described for C. History provides additional reasons: Objective-C and C++ arose when those who wrote code decided the C tool they had was insufficient.
PHP is a horrible example of what kind of a mess you may end up with if you try to cater for all possible paradigms: for some things, like string manipulation, it has at least two completely separate interfaces (imperative and object-oriented). Unfortunately, Python shows partial signs of this too, what with its strings/bytes separation, and increasing number of string template facilities.
To fix a popularity problem, a social problem, like programmers and companies being happy to produce buggy code and customers being happy to pay for buggy code, you need to apply social/human tools, not technological ones.
We do not get better software developers by teaching them the programming language du jour; we get better software developers by convincing them to try harder to not create bugs, to use the tools they have available to support them in detecting and fixing issues when they do happen. But most of all, we'd need to convince customers and business leadership that buying and selling buggy code is counterproductive, and that we can do better if we choose to. All we need to do is choose to.
In a very real way, software companies today are in a very similar position to mining companies a century and a half ago. They, too, could do basically what they pleased, and had their own "company towns" where employees had to rent from the company and buy company wares to survive. Campaign contributions to politicians kept those companies' operations untouched, until people got fed up with it. I'm waiting for people to get fed up with how crappy software generally speaking is. I just don't want a bloody revolution, just incremental changes that help fair competition, if that is what people want.
Oh, yuck. Q1: what happens if/when the debugger gets the page size wrong?

Flash pages are fixed size. How could the debugger get them wrong? Page read, erase, and rewrite with modifications is normal practice in debuggers these days.
All MCUs and memory devices have exactly the same page size? That would surprise me.

No. Many MCUs even have some small and some large pages within one chip. However, that's part of the MCU's spec, which the debugger knows about.
That makes sense. The issue is then to ensure the config information for the MCU is correct, and that the debugger is using the config related to the correct MCU.

Modern debuggers get an update each time relevant new chips are released. They can read the chip ID out of most chips, so they match up the config data with the hardware in a fairly robust manner.
That's "do-able", but obviously is not the most pressing issue.
Can be interesting to look at: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust
Can someone shed light on what's happening here? Use-after-free, heap buffer overflows. Wasn't Rust supposed to completely get rid of exactly these types of memory errors? What went wrong?

This is why I wrote against the stupidity of calling something memory safe or type safe. Try to stop one kind of corruption issue, and some new threading, DMA, GPU or other complexity will soon pick up the slack and keep the bug reporters in safe employment.
I am interested in the latter, but do not believe newer languages will solve all the problems: they will just replace the old set of problems with a new one, because humans just aren't very good at this sort of design, not yet anyway. I do hope Rust and others will become better than what we have now, but they're not there yet.
There we are, again, in violent agreement.
It then becomes about the philosophy of what to do: demand perfection or expect imperfection.
In the past, C tried to be a full-stack language, catering for everything from the lowest-level libraries to the highest-level abstractions. That didn't work, so objective-C and C++ bubbled off it by people who used the language to solve particular kinds of problems, using abstraction schemes they thought would make the new programming language a better tool.
Currently, C is mostly used as a systems programming language, for low-level implementation (kernels, firmwares) up to services (daemons in POSIX/Unix parlance) and libraries. In this domain, bugs related to memory accesses are prominent, and seen as a problem that needs fixing.
Thing is, memory safety is only one facet of issues, and is not sufficient criterion to be better than C.
Instead of introducing a completely new language, my logic is that since C has proven to be practical, but has these faults, fixing memory safety by adding the feature set I described in a backwards-compatible manner with zero runtime overhead, is likely to yield a better tool than designing a completely new one from scratch.
Moreover, any new abstraction or feature brings in its own set of problems. Always.
Any new feature will have its risks. A completely new programming language has an unproven track record, and an unknown set of risks and weaknesses. I am happy that others are developing the next generation of programming languages, even though I expect almost all of them to fail and lapse into niche use cases. It is unlikely that I will be using them in true anger (to solve real-world problems others are having) until they have at least a decade of development under their belt, though; they tend to take at least that long to find their "groove", and iron out their backwards-incompatible warts.
Because of the above, I do not believe a new language is likely to replace C anytime soon, but in the meantime we might reap some significant rewards with relatively small backwards-compatible changes to C itself. This is the key. Why wait for the moon, when you can have a small asteroid now?
You give some very weird replies that seem to miss the point entirely.
You're quite right, but you don't go far enough. Everything should be written in assembler.
Personally I prefer to apply my thought and concentration to my unique application, and prefer not to have to (re)do boring stuff that can be done by machines.
My biggest peeve on that front is the ubiquitous "We'll add error checking later, when we have more time", which is just a complicated way of saying "We're not gonna bother", because robustness is not something you add on top, it is something you either design in, or don't have.

Truth!
I don't know how to change programmers' habits. Examples only sway those who are already looking for better tools, and they'd likely have found all these on their own given enough time. :-[

I have run into this issue as well and have threatened the team with required pull requests if the laziness continues. Our team knows better... they're just lazy. The beatings will continue until morale improves! LOL
Laziness well applied can be a virtue in engineering. It's what drives you to elaborate architectures to make further development much easier after that initial effort, similarly to factor code so that you'll avoid a lot of repetitions and tedious coding after that. It's also what pushes leaner designs, rather than overbloated ones.
I think the whole point is in understanding that this initial effort is required to enjoy your laziness in the longer run. And so, IMO the main problem is not with software developers being lazy per se, but the need for immediate reward, preventing them from investing this initial effort to make their life much easier afterwards.
This appeal for immediate rewards is what plagues software engineering in particular, and our whole society in general.
Has the bonus of confusing the hell out of idiot managers who measure productivity by the number of lines of code :)

Luckily, we are seeing the classical hierarchical employment structure dissolving away as companies have realized that people who are not producing value have no place in their company. Managers who generally do nothing but bark at people are [thankfully] becoming a scarcity these days; at least in the environments I have seen.
"Scrum style" work places where the entire team meets [called the "standup"] for 15 minutes each day at the same time and place and go around the circle saying this, "Yesterday, I worked on X. Today, I am working on Y. I have 0..N blocks."... very quickly makes it very obvious who is not contributing value to the team and thus, the company. Is there a place for managers? Absolutely. Do I think most of them sit on their butt most of the day and do nothing... also yes.
"thinking before doing" is more productive than "doing and finding it didn't work", a.k.a. "no time do do it right in the first place but always time to do it over"This typically ends up as: "no time do do it right in the first place and NO time to do it over"
Just so, but don't forget to add "show me the reward structure and I'll tell you how people will behave".

Hey, full circle: that is exactly why I said memory safety is a social problem, not a technical one.
That too can be a problem. It fails for interesting projects where any of these apply...

Eh... I'm not convinced that ever reporting what you did yesterday and reporting what you're going to do today has anything at all to do with the work or the context in which you're doing it. Transparency is transparency regardless of any other factors.
But maybe I'm misunderstanding the point you're making. My point was that "standup" weeds out laziness.
In embedded, I have developed this pattern which seems to serve me well:
The first prototype must be over-crappy, beyond any salvage. I mean a single long function, with a while(1) loop calling blocking delay functions and writing some stuff on the debug UART, no datatypes, no structure, copypasta if necessary. Bonus points for goto. Proof of concept can be demonstrated within hours, and then it is (hopefully) obvious to both managers and programmers that this piece of code cannot be used at all for the actual implementation. But because you have demonstrated the viability, now the team including management cannot pull out of the project, so you get at least some time and resources to actually implement it, maybe 3-4 days for a simple module before anyone starts asking questions.
And, because you tested some ideas, the structure is now starting to form in your head. It's best to do such initial tests near the end of the week so that your brain gets at least a few days of subconscious processing time before real implementation begins.
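In case anyone wants a picture of "beyond any salvage" (purely illustrative; the UART address is made up and the delay is a crude busy-wait):

Code: [Select]
#include <stdint.h>

#define DBG_UART_TX (*(volatile uint32_t *)0x40011004u)  /* hypothetical UART data register */

static void delay_ms(unsigned ms)        /* blocking busy-wait, good enough for a demo */
{
    for (volatile unsigned i = 0; i < ms * 1000u; i++) { }
}

static void uart_puts(const char *s)
{
    while (*s)
        DBG_UART_TX = (uint32_t)*s++;    /* no status polling; it's a throwaway */
}

int main(void)
{
    uart_puts("boot\r\n");
    for (;;) {                           /* the obligatory while(1) */
        uart_puts("poking the sensor...\r\n");
        delay_ms(500);                   /* blocking delay, copypasta-grade */
    }
}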