A government decree (...)
This is actually the worst part of the story. Before joining the White House, the guy was busy on the executive diversity and inclusion council at the CIA.
Though I looked at the language a while ago, and honestly, if I still have to go the unsafe route to do any peripheral access, I might as well keep writing C...
That's a pretty silly argument even for embedded development. On all but the most trivial programs (which: who cares what you use), only a tiny fraction of the code is actually performing the register reads/writes. I'm not completely thrilled with the way Rust works on embedded, or the way unsafe works in Rust, but most code, even in embedded, could be safer.

It depends what you are programming. A lot of small embedded processors are doing endless manipulation of peripherals. However, what you say is true for most larger embedded programs. You really ought to make life as easy as possible on the bulk of the code, so you have more time to put more focus on the gritty stuff.
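To illustrate the "tiny fraction" point, this is roughly what the usual pattern looks like in Rust: the raw register access is confined to one small unsafe block behind a safe function, and the rest of the program never touches a pointer. A minimal sketch; the address and register layout are invented for illustration.

// Hypothetical MMIO address of a GPIO output-data register.
const GPIO_ODR: *mut u32 = 0x4002_0014 as *mut u32;

// Safe wrapper: the only unsafe code in the module.
fn set_pin(pin: u8) {
    assert!(pin < 16, "no such pin");
    // SAFETY: GPIO_ODR is a valid, always-mapped register on this
    // (made-up) part; volatile accesses stop the compiler from
    // eliding or reordering the hardware access.
    unsafe {
        let current = GPIO_ODR.read_volatile();
        GPIO_ODR.write_volatile(current | (1 << pin));
    }
}

Everything that calls set_pin() is ordinary safe code, which is where the bulk of the program lives.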
From the perspective of somebody more familiar with device drivers than thread schedulers, memory-safe languages are an absolute joke when you are dealing with DMA-capable hardware, limited IOMMUs (page granularity and the high cost of changing page mappings), and potentially malicious external DMA peripherals (Thunderbolt, USB4).
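To make the DMA point concrete, here's a minimal sketch (register address and peripheral are entirely made up, assuming a 32-bit MCU) of the hole: the borrow checker only tracks CPU-side aliases, so the hardware's copy of the pointer is invisible to it.

// Hypothetical DMA destination-address register.
const DMA_DST_ADDR: *mut u32 = 0x4002_6000 as *mut u32;

fn start_transfer(buf: &mut [u8]) {
    // Hand the buffer's address to the peripheral.
    unsafe { DMA_DST_ADDR.write_volatile(buf.as_mut_ptr() as u32) };
    // The borrow of `buf` ends when this function returns, but the DMA
    // engine keeps writing through the address it was given. If the
    // caller frees or reuses that memory, the hardware scribbles over
    // whatever lives there next -- exactly the corruption a memory-safe
    // language is supposed to rule out, and the compiler can't see it.
}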
I prefer to get the computer to prevent me from making as many avoidable errors as possible. An attitude like "I can safely handle loaded guns pointed at my foot" is macho libertarian nonsense.
Twiddling bits in a register is not a major issue. Ensuring the correct bits are twiddled at the correct time is far more complex and error prone. Thus I don't care if peek and poke are implemented in C or assembler. Correctly calculating the invocations of peek and poke in a multimode processor is far more challenging. Any tool that helps automatically verify that those values aren't incorrect is valuable.

You just aren't going to make small embedded machines super safe to program, unless, perhaps, AI assistants get so good they pick up the errors. We need to take the wins where they present themselves. There are many weird and wonderful multi-core devices for communications applications, with very heterogeneous layouts of memory and peripherals. Good luck trying to build moderately safe programming environments for those. They are a big security issue, as it's often very hard to figure out from their documentation just what it is you are supposed to do in all cases.
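For what it's worth, the "correct bits at the correct time" problem is one place where a type system can help: encode the required ordering in types, so out-of-order calls don't compile. A toy sketch, with an entirely hypothetical UART:

use core::marker::PhantomData;

struct Disabled;
struct ClockEnabled;

struct Uart<State> {
    _state: PhantomData<State>,
}

impl Uart<Disabled> {
    fn new() -> Self { Uart { _state: PhantomData } }
    // The clock must be enabled before anything else is configured.
    fn enable_clock(self) -> Uart<ClockEnabled> {
        // (poke the clock-gate register here)
        Uart { _state: PhantomData }
    }
}

impl Uart<ClockEnabled> {
    fn set_baud(&mut self, _baud: u32) {
        // (poke the baud-rate register here)
    }
}

fn main() {
    let mut uart = Uart::new().enable_clock();
    uart.set_baud(115_200);
    // Uart::new().set_baud(115_200); // won't compile: wrong state
}

It doesn't solve the multimode-processor case, but it moves one class of sequencing bug from runtime to compile time.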
Another thing to watch out for is significantly different technologies that aren't sufficiently practical outside the laboratory or academia. Examples have been formal methods, Erlang, and the various ML/Haskell/etc. languages. Interesting, but not worth spending too much time on.
I'd make an exception for Erlang. I'm not so fond of the language itself, but it cannot be considered an academic language with no practical use.
Its real-world applications are, in fact, all around you. A number of large companies use it (e.g. Klarna in Sweden); Facebook's chat backend used to be implemented in Erlang, but they migrated away.
But, above all, it's deployed in literally millions of radio devices in the Radio Access Network for mobile communications.
What I've personally seen in development (note: I don't work directly with it, but I'm a user of a number of services implemented with it) is a quicker turnaround time from requirement to implementation, with quality no worse than more classical languages.
Because of this WH announcement (which I read yesterday), one of the footnotes led me to Google Project Zero (which I have been aware of for a while), and that led me to a blog post about the NSO zero-click iMessage exploit: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html. I was only vaguely aware of Pegasus and NSO (this was in the news a year or so back), but the actual exploit, and the mindset it took to write it, is heart-stopping.
This is likely a prime candidate for why software in general, and the widely used parts of it in particular, need a much smaller attack surface. Who knew that an image parser could be manipulated in this way?
Though I looked at the language a while ago, and honestly, if I still have to go the unsafe route to do any peripheral access, I might as well keep writing C...
The recommended method is to use a HAL (Hardware Abstraction Layer) library for your platform. This adds a lot of extra checks besides memory safety. For example, if you attempt to use a pin for an unsupported function, such as SPI on a pin that cannot do SPI, your program simply won't compile.
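The mechanism behind that compile-time check is worth seeing. Stripped down to a sketch (pin names and the constructor are simplified stand-ins, not any real HAL's API), it's a marker trait implemented only for the pins the datasheet allows:

struct PA5; // a pin that can carry SPI1 SCK (hypothetical pinout)
struct PA0; // a pin that cannot

trait Spi1Sck {}        // marker trait: "valid SCK pin for SPI1"
impl Spi1Sck for PA5 {} // implemented only for pins the datasheet allows

struct Spi1<SCK: Spi1Sck> { _sck: SCK }

fn spi1<SCK: Spi1Sck>(sck: SCK) -> Spi1<SCK> {
    Spi1 { _sck: sck }
}

fn main() {
    let _ok = spi1(PA5);   // compiles
    // let _no = spi1(PA0); // error: PA0 doesn't implement Spi1Sck
}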
While a good design goal for an abstraction layer, I have never seen any of this in real life.
Maybe it's time to think about starting alternatives.
https://docs.rs/stm32-hal2/latest/stm32_hal2/
Looked at it, and it still seems to miss the mentioned feature, compile-time checking of peripheral availability on a given pin:

let mut scl = Pin::new(Port::B, 6, PinMode::Alt(4));
scl.output_type(OutputType::OpenDrain);
It still seems to map based on magic numbers. I mean, if I change that to Alt(5), or to Port::B, 10, where does the compilation fail?
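To make the complaint concrete, here's a simplified stand-in for that kind of API (not the real stm32-hal2 types): because the alternate-function index is a plain number, the compiler has no basis for rejecting a wrong one, so none of these lines fail to compile.

#![allow(dead_code)]

enum PinMode { Alt(u8) }

struct Pin { port: char, num: u8, mode: PinMode }

impl Pin {
    fn new(port: char, num: u8, mode: PinMode) -> Self {
        Pin { port, num, mode }
    }
}

fn main() {
    let _scl = Pin::new('B', 6, PinMode::Alt(4));    // correct AF for I2C1 SCL
    let _oops = Pin::new('B', 6, PinMode::Alt(5));   // wrong AF: still compiles
    let _wrong = Pin::new('B', 10, PinMode::Alt(4)); // wrong pin: still compiles
}

Catching these at compile time takes something like the marker-trait approach sketched earlier, with one type per pin and the valid alternate functions encoded per type.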
To make that tractable you need to constrain the problem, and that is best done by having a constrained language and environment. The trick is to have constraints which are easy enough to work with and still offer useful guarantees for a large number of problems.
Just use Ada then, instead of a cryptic language that still has no formal specification and is sponsored by woke hipsters.
A program can still crash, but the C/C++ layer can do a graceful recovery.