Author Topic: Future Software Should Be Memory Safe, says the White House  (Read 5381 times)

0 Members and 1 Guest are viewing this topic.

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 14665
  • Country: fr
 
The following users thanked this post: cfbsoftware

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3963
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #1 on: March 01, 2024, 07:37:32 am »
these days, Linus is discussing two things, well, three things ...
  • more Rust support, for more drivers written in Rust
  • a micro-kernel approach for Linux, with a userspace scheduler already written as a proof of concept
  • development languages for future kernels must be memory safe, because with multi-core and modern features we have already reached the point of no return in terms of kernel complexity

(
oh, and from how some have responded - my speculation - it also seems that many Linux developers ...
... have now reached retirement age, so they would like to make way for younger people
mumble ...  :-//
)
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #2 on: March 01, 2024, 10:17:11 am »
"Science (or engineering) progresses one funeral at a time".
https://en.wikipedia.org/wiki/Planck%27s_principle
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline newbrain

  • Super Contributor
  • ***
  • Posts: 1738
  • Country: se
Re: Future Software Should Be Memory Safe, says the White House
« Reply #3 on: March 01, 2024, 10:26:34 am »
  • micro kernel approach for linux, with a userspace scheduler already written as proof of concept
Wow, I did not see this coming.
So Tanenbaum had a point, in the end (others were proven wrong by history).

It won't be an easy transition, quite the radical architectural change.

DiTBho, do you have some good sources? DDG only returns stuff about the original Tanenbaum-Torvalds debate...
Nandemo wa shiranai wa yo, shitteru koto dake.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3963
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #4 on: March 01, 2024, 01:14:37 pm »
February 2024: Andrea Righi has started a blog series, "Writing a Scheduler for Linux in Rust that runs in user-space", on writing a user-space CPU scheduler with the BPF-based extensible scheduler class:

".select_cpu() implements the logic to assign a target CPU to a task that wants to run, typically you have to decide if you want to keep the task on the same CPU or if it needs to be migrated to a different one, for example if the current CPU is busy; if you can find an idle CPU at this stage there's no reason to call the scheduler, the task can be immediately dispatched here."

see here  :o :o :o
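The decision the quoted paragraph describes can be sketched, very loosely, as userspace Rust. This is purely illustrative: the function name, signature and idle-CPU list are invented and have nothing to do with the real sched_ext/BPF hooks.

```rust
// Illustrative toy model of a .select_cpu()-style decision: keep the task
// on its previous (cache-warm) CPU if that CPU is idle, otherwise migrate
// to any idle CPU; None means "no idle CPU, fall through to the scheduler".
fn pick_cpu(prev_cpu: usize, prev_idle: bool, other_idle: &[usize]) -> Option<usize> {
    if prev_idle {
        // the previous CPU is free: dispatch immediately, no scheduler pass needed
        Some(prev_cpu)
    } else {
        // any other idle CPU also lets us skip the scheduler
        other_idle.first().copied()
    }
}

fn main() {
    assert_eq!(pick_cpu(2, true, &[0, 1]), Some(2));
    assert_eq!(pick_cpu(2, false, &[0, 1]), Some(0));
    assert_eq!(pick_cpu(2, false, &[]), None);
    println!("ok");
}
```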
« Last Edit: March 01, 2024, 03:52:14 pm by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 
The following users thanked this post: newbrain

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #5 on: March 01, 2024, 01:32:17 pm »
I wish people wouldn't bullshit and call these things memory safe languages. "Memory less dangerous" is all you get. As usual, names are not chosen to illuminate, but to obscure.
 
The following users thanked this post: Siwastaja, SiliconWizard, magic

Offline dferyance

  • Regular Contributor
  • *
  • Posts: 184
Re: Future Software Should Be Memory Safe, says the White House
« Reply #6 on: March 01, 2024, 04:43:04 pm »
Memory safety is very important and it is good to have efforts in that direction. I can write memory safe code in C++. It isn't terribly difficult in most cases. However, I usually work in a team and anyone (myself included) can easily break that. Everyone starts somewhere and not everyone understands how to write safe code. I like having tooling check things for me. I like static typing for that reason, and memory usage checking is similarly helpful.

I wish the solution wasn't Rust though. Rust isn't a standardized language, unlike C++, ECMAScript, C#, and others. Rust has made breaking changes. While there is work on gcc-rust, practical Rust usage is limited to the LLVM-based rustc. I wish there were more of Rust encouraging other languages to borrow from it, and less of "let's all use Rust".
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3769
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #7 on: March 01, 2024, 05:14:03 pm »
I'm most excited about Herb Sutter's cppfront/cpp2. In some ways it's a long way from the sort of "production readiness" of languages like Rust -- I think classes were added just last year -- but in other ways it's decades ahead, because it essentially is C++, 100% compatible, warts and all, with existing C/C++. You can even mix cpp1 and cpp2 syntax in a single source file, and it compiles down to regular C++ that any standards-compliant C++ compiler can compile, so it has the same platform support as gcc+clang+msvc.

Instead of making a completely new language, it tries to make all the behaviors we teach people as "this is how to write C++ safely" the default. In doing so it makes the syntax a lot more concise, because you don't have to spell out the non-default behaviors you almost always want.

It also compiles to C++ pretty much in the way you would expect -- it doesn't introduce its own runtime or standard libraries, and it doesn't wrap things in opaque containers. This means it's also a relatively safe project to play with: you can always just abandon the cppfront compiler and use the generated code with minimal to no cleanup.

Cpp2 doesn't have anywhere near the adoption of even Rust, but I hope people look at it when considering language options, especially if they have a body of existing C++ to keep.
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3769
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #8 on: March 01, 2024, 05:27:11 pm »
I wish the solution wasn't rust though. Rust isn't a standardized language unlike C++, ECMAScript, C# and others. Rust has made breaking changes.

It's not even just that.  C++ has made breaking changes as well.  But they are done through proposals that are published, vetted, and tested against existing code bases to see how widespread actual breakage is, and in the end you can give your compiler of choice the --std=c++11 flag or whatever you need.  This is a huge development cost for both the C++ language committee and the compiler / standard library developers, and it's a cost Rust doesn't want to incur, which is understandable.  But it is a real issue, and people pretending it's no big deal are super annoying.

I actually like Rust.  I think it's a pretty interesting language, but the "my way or the highway" attitude that seems to emanate from both their development strategy and a lot of the promoters is pretty off-putting.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3963
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #9 on: March 01, 2024, 05:48:11 pm »
alias(memory safe, less dangerous): technically and politically incorrect, but encouraging, and that's what we need

To give an example, it's like the claim that "homo sapiens comes from monkeys": no one shouts "bullshit", even if, scientifically speaking, it is bullshit.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #10 on: March 01, 2024, 05:56:22 pm »
alias(memory safe, less dangerous): technically and politically incorrect, but encouraging, and that's what we need

To give an example, it's like the claim that "homo sapiens comes from monkeys": no one shouts "bullshit", even if, scientifically speaking, it is bullshit.
"Memory safe language" makes people think it's a solution to a problem. These languages may be a part of a solution, but claiming they are a solution is serious bullshit that should get people thrown out of decent society.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #11 on: March 01, 2024, 06:12:09 pm »
alias(memory safe, less dangerous): technically and politically incorrect, but encouraging, and that's what we need

To give an example, it's like the claim that "homo sapiens comes from monkeys": no one shouts "bullshit", even if, scientifically speaking, it is bullshit.
"Memory safe language" makes people think it's a solution to a problem. These languages may be a part of a solution, but claiming they are a solution is serious bullshit that should get people thrown out of decent society.

They are a solution to part of the problem. An important part of the problem.

The same is true of seatbelts and road safety. And speed limits and road safety.

Putting a big spike in the steering wheel is not a practical solution to road safety, any more than requiring "correct" usage of many of C's features is a practical solution for safe memory use.
« Last Edit: March 01, 2024, 06:24:35 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: cfbsoftware

Online Marco

  • Super Contributor
  • ***
  • Posts: 6747
  • Country: nl
Re: Future Software Should Be Memory Safe, says the White House
« Reply #12 on: March 01, 2024, 06:30:45 pm »
It won't be an easy transition, quite the radical architectural change.
There's a lot of work being done lately to do it transparently. See :
Intra-Unikernel Isolation with Intel Memory Protection Keys
Preventing Kernel Hacks with HAKC
etc.

By limiting which parts of the kernel are callable/accessible per kernel module, you are essentially turning it into a microkernel ... but the existing code of the kernel modules stays the same.

PS. I think AMD supports memory protection keys on Zen 3 and up.
« Last Edit: March 01, 2024, 06:38:14 pm by Marco »
 
The following users thanked this post: newbrain

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3769
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #13 on: March 01, 2024, 06:54:21 pm »
"Memory safe language" makes people think it's a solution to a problem. These languages may be a part of a solution, but claiming they are a solution is serious bullshit that should get people thrown out of decent society.

It's much less BS than what you are saying.  Memory safety (which is well described and explained, even if it comes in several levels of strictness) is absolutely the solution to several problems.  It's not the solution to every problem, but that is an insane standard and not really an argument worth discussing.

Every year, a large number of exploited security problems in the wild are traced to use-after-free and out-of-bounds access.  And it's not like it's only "old" code; it happens in brand-new code as well.  Both of those can 100% be eliminated by memory safe languages, although bounds checking adds runtime cost.  Yes, it sometimes makes sense to opt out of those protections, and every memory safe language provides some way to override the behavior, but it's almost never needed.  Use before initialization is another issue that can be eliminated as well. "Initialized" obviously doesn't mean "to a useful value", but in the context of security, disallowing uninitialized pointers and preventing the capture of potentially sensitive data from previously deallocated objects solves a lot of real-world security problems, even when the program is going to malfunction either way.
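The "safe by default, explicit opt-out" point can be sketched in a few lines of Rust. This is just an illustration (the function names are invented for the example): the checked accessor can never read out of bounds, and the opt-out is a visible, greppable `unsafe` block.

```rust
// Bounds-checked by default: an out-of-range index becomes None,
// never undefined behavior or an exploitable read.
fn read_checked(buf: &[u32], i: usize) -> Option<u32> {
    buf.get(i).copied()
}

// The explicit opt-out the post mentions: the caller now carries the
// proof obligation that i < buf.len().
fn read_unchecked(buf: &[u32], i: usize) -> u32 {
    unsafe { *buf.get_unchecked(i) }
}

fn main() {
    let buf = [10, 20, 30];
    assert_eq!(read_checked(&buf, 1), Some(20));
    assert_eq!(read_checked(&buf, 99), None); // caught, not exploited
    assert_eq!(read_unchecked(&buf, 2), 30);
    println!("ok");
}
```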

I think a big part of the reaction against memory safety came from Java and how it was promoted, especially back in the 90s.  Java was heavily promoted as using managed memory and garbage collection to "prevent memory leaks."  It turns out, with nearly 30 years of hindsight, that memory leaks are among the least bad forms of memory errors.  It made sense at the time: programs and datasets were rapidly growing larger and more complex, and getting things to work at all was a big deal.  A memory leak could make a program crash on a large data set.  Focusing on memory leaks was focusing on making it easier to get a program to work properly, but the focus on security now is much more on preventing a program from being made to work improperly.  The most important and useful properties of Java's memory management are not actually garbage collection per se, but initialization safety, bounds checking, and use-after-free safety.

The other problem with Java was that people didn't really know how to use it properly.  While garbage collection was well established in languages like LISP, the tools and practices to use it properly were not well known among Java's target audience.  If you tell 1990s C/C++ developers they can write something that looks basically like C++ and just not bother deleting things, you are going to get memory leaks galore. People had to be taught, and libraries had to be developed that made correct use of weak references.  In this way, more modern alternatives like C++ unique pointers and the Rust borrow checker are IMO much better default options than garbage collection, with much better semantics with respect to object destruction.  They don't prevent memory leaks -- a circular path of owning pointers will never be destroyed -- but that's much less of a problem than use after free.  On the other hand, unique pointers guarantee timely destruction for the normal case rather than "whenever the GC gets run".
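The "timely destruction" point can be shown in a short Rust toy (struct and function names invented for the example): destructors run deterministically at end of scope, in reverse declaration order, with no GC involved.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A value that records its own destruction in a shared log.
struct Noisy {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Noisy {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _a = Noisy { name: "a", log: Rc::clone(&log) };
        let _b = Noisy { name: "b", log: Rc::clone(&log) };
    } // scope ends here: both destructors run immediately, b first
    Rc::try_unwrap(log).ok().unwrap().into_inner()
}

fn main() {
    // Destruction happened at the closing brace, not "whenever the GC runs",
    // and in reverse declaration order.
    assert_eq!(drop_order(), vec!["b", "a"]);
    println!("ok");
}
```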
 
The following users thanked this post: nctnico, Siwastaja

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #14 on: March 01, 2024, 08:11:08 pm »
alias(memory safe, less dangerous): technically and politically incorrect, but encouraging, and that's what we need

To give an example, it's like the claim that "homo sapiens comes from monkeys": no one shouts "bullshit", even if, scientifically speaking, it is bullshit.
"Memory safe language" makes people think it's a solution to a problem. These languages may be a part of a solution, but claiming they are a solution is serious bullshit that should get people thrown out of decent society.

They are a solution to part of the problem. An important part of the problem.

The same is true of seatbelts and road safety. And speed limits and road safety.

Putting a big spike in the steering wheel is not a practical solution to road safety, any more than requiring "correct" usage of many of C's features is a practical solution for safe memory use.
From people like Michael Jackson in the 1970s until today, hardly anyone has pushed their latest new software thing as something which merely helps a bit in putting solid applications together. It's always pushed as a magic bullet. What's wrong with names that reflect reality and set reasonable expectations? Maybe people would get less disillusioned when they discover the real strengths and weaknesses of the new thing.
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3769
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #15 on: March 01, 2024, 08:29:34 pm »
From people like Michael Jackson in the 1970s until today, hardly anyone has pushed their latest new software thing as something which merely helps a bit in putting solid applications together. It's always pushed as a magic bullet. What's wrong with names that reflect reality and set reasonable expectations? Maybe people would get less disillusioned when they discover the real strengths and weaknesses of the new thing.

You can read the linked article.  And you can read the 20-page report linked from it.  And you can read the couple dozen references at the end of that report.  Or you can read the academic literature on the subject, or programming guides for languages.  But it's actually you who is over-simplifying things and complaining without providing any nuance or context.

Memory safety is a well established and broad term of art.  It's well understood what it means, and it includes not just features of current languages, but current and future research to improve safety further.  Your complaint is just stupid.

The only people saying "XYZ language claims to fix all errors" are people arguing for attaching spikes to their steering wheel.  By the way, the same thing happened with car safety: people railed against seat belts and airbags because they couldn't "make driving safe."  But by and large, everyone except a few whiners understood what was going on; the nomenclature was not actually confusing.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #16 on: March 01, 2024, 08:34:36 pm »
alias(memory safe, less dangerous): technically and politically incorrect, but encouraging, and that's what we need

To give an example, it's like the claim that "homo sapiens comes from monkeys": no one shouts "bullshit", even if, scientifically speaking, it is bullshit.
"Memory safe language" makes people think it's a solution to a problem. These languages may be a part of a solution, but claiming they are a solution is serious bullshit that should get people thrown out of decent society.

They are a solution to part of the problem. An important part of the problem.

The same is true of seatbelts and road safety. And speed limits and road safety.

Putting a big spike in the steering wheel is not a practical solution to road safety, any more than requiring "correct" usage of many of C's features is a practical solution for safe memory use.
Since people like Michael Jackson in the 1970s until today hardly anyone has pushed their latest new software thing as something which helps a bit in putting solid applications together. Its always pushed as a magic bullet. What wrong with names that reflect reality, and set reasonable expectations? Maybe people would get less disillusioned when they find the real strengths and weaknesses of the new thing.

A large part of my professional career, from the late 70s onwards, involved assessing and - where appropriate - using and creating all sorts of new technologies.

If you look for and understand the fundamental properties and possibilities (i.e. not just the surface glitz), then it becomes relatively easy to spot when "new" is merely "different variation" rather than "better". Thus Delphi was the same as C, C# is the same as Java, 6800 is the same as Z80, xCORE is the same as Transputer, xC is the same as Occam, Objective-C is Smalltalk without GC and reflection, C++ is a mess. As an example of that lack of change, I was horrified at how few changes there had been between 2015 (when I re-started playing with embedded systems) and the early 80s (when I started). It was still C on 8-bit processors cross-compiled on UNIX and downloaded for debugging. The only difference was smaller, faster, cheaper.

Another thing to watch out for is significantly different technologies that aren't sufficiently practical outside a laboratory/academia. Examples have been formal methods, Erlang, the various ML/Haskell/etc languages. Interesting, but not worth spending too much time on.

Nonetheless, I've kept looking out for technologies that offer significant advantageous changes  - and jumped on them when they occurred. Major examples have been Smalltalk=>Objective C=>Java, and discrete logic=>semi-custom ICs=>PLAs=>FPGAs. And, of course, for hard realtime multicore embedded, C/Transputer/Occam=>xCore/xC.

I've had my eye on a few new languages for the last decade, with Rust and Go at the top of the list. They are both clean and tasteful,  and - within separate areas - potentially better than their alternatives. It is wryly pleasing to see that Rust has gathered a significant following and that it will probably, over the coming decades, relegate C/C++ to the level of COBOL. Maybe I will get the chance to use Rust, maybe not.

TL;DR: be very suspicious of the technology-du-jour, but be receptive to clean technologies that remove fundamental limitations of existing mainstream technologies.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: newbrain

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 14665
  • Country: fr
Re: Future Software Should Be Memory Safe, says the White House
« Reply #17 on: March 01, 2024, 08:49:05 pm »
these days, Linus is discussing two things, well, three things ...
  • more Rust support, for more drivers written in Rust
  • a micro-kernel approach for Linux, with a userspace scheduler already written as a proof of concept
  • development languages for future kernels must be memory safe, because with multi-core and modern features we have already reached the point of no return in terms of kernel complexity

For the micro-kernel approach, that would make it a completely different kernel. At this point, I fail to see how and why it could still be called Linux, and how this transition could be done while keeping most of the existing code (otherwise, what's even the point of calling it Linux and doing this under the Linux umbrella? Probably just to take advantage of the traction and the money the Linux Foundation makes? Around $260 million a year.)

No doubt a micro-kernel approach would be better on many levels, but also with the downsides we all know and that Linus himself has advocated against for several decades.

As to Rust, I consider it a trojan horse here, and I'm being serious.

(
oh, and from how some have responded - my speculation - it also seems that many Linux developers ...
... have now reached retirement age, so they would like to make way for younger people
mumble ...  :-//
)

Yes, that is certainly a problem. And most of these younger developers, with few exceptions, don't want or just plain can't write proper C. So, there is that.
I said that a little while ago - when I heard one of Linus's latest talks, it definitely sounded very different from the usual Linus and it looked as though he was preparing his retirement, for sure.

Speaking of the Linux Foundation, I'm starting to have a problem with it, but that's just like with almost all of these big foundations around large open-source projects.
Just look at who the main contributors are, and how its expenditures are distributed. The Linux kernel only accounts for about 2%.
Anyway.

Maybe time to think about starting alternatives.
 
The following users thanked this post: JPortici

Offline artag

  • Super Contributor
  • ***
  • Posts: 1091
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #18 on: March 01, 2024, 09:19:01 pm »
Future laws should be just, fair and free from loopholes too.
But I'm not holding my breath.
 
The following users thanked this post: coppice

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3475
  • Country: it
Re: Future Software Should Be Memory Safe, says the White House
« Reply #19 on: March 01, 2024, 09:34:21 pm »
And most of these younger developers, with few exceptions, don't want or just plain can't write proper C. So, there is that.

Starting next week we're having a new intern for a couple of months; still in high school, as part of the curriculum.
He has an interest in embedded, so apparently he has been using C. All the programmers looking for work I had the misfortune to interact with were adamant that frontend was the only true programming (as if the world needed more of them).
My father knows next to nothing about programming (he's a chemist) besides what his colleagues use, and he's always fascinated by the fact that I write C for a living; he always assumes we would have moved on ages ago. And yet...

I don't know if Rust is going to be "it", but it certainly has an appeal to the younger crowd (though at almost 33, I don't think of myself as an old developer), and I also picked up that vibe you mention from the recent Linus talks.
Though I looked at the language a while ago and, honestly, if I still have to go the unsafe route to do any peripheral access I might as well keep writing C...
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3769
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #20 on: March 01, 2024, 10:10:59 pm »
  • micro kernel approach for linux, with a userspace scheduler already written as proof of concept
Wow, I did not see this coming.
So Tanenbaum had a point, in the end (others were proven wrong by history).

Sort of, but also computers have changed a lot in 30 years.  SMP went from the domain of high-end servers and supercomputers to universal, and memory hierarchies got deeper and less uniform.  Even when Linux first supported SMP with the big kernel lock, lock contention in the kernel wasn't a huge deal, as long as kernel overhead overall was low.  Concurrency was really more important in userspace.  At the same time, pipelines have gotten deeper, and reorder buffers larger.  A few extra branch-free instructions can literally be free (at least in time, if not power) if the CPU would just be waiting for a load anyway.

One thing that hasn't changed is system call overhead.  A context switch or privilege level change still takes about 1 microsecond, only now it's two orders of magnitude more foregone computation.  A microkernel that increased context switches would be even worse than when Linus and Tanenbaum argued about it.

However, developments in lock-free and wait-free data structures, better locking algorithms, and the ability to have multiple cores running at the same time make it possible to have message passing between tasks with no context switches -- something that was impossible on the uniprocessor systems of the 90s.  Sandboxing, virtualization, and run-time compilation have dramatically changed, both in hardware and software.  This allows isolation of code, similar to independent tasks, without the cost of context switches at all.
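As a loose userspace analogy to that message-passing shape (names invented; kernel-grade designs would use lock-free rings rather than a std channel), two concurrently-running threads can exchange messages through a queue with no syscall per message on the happy path:

```rust
use std::sync::mpsc;
use std::thread;

// Producer thread streams values into a channel; the consumer drains it.
// The channel closes when the sender is dropped, ending the iteration.
fn sum_via_channel(data: Vec<u64>) -> u64 {
    let (tx, rx) = mpsc::channel();
    let producer = thread::spawn(move || {
        for x in data {
            tx.send(x).expect("receiver still alive");
        }
        // tx is dropped here, which closes the channel
    });
    let total: u64 = rx.iter().sum(); // drain until the channel closes
    producer.join().expect("producer panicked");
    total
}

fn main() {
    assert_eq!(sum_via_channel(vec![1, 2, 3, 4]), 10);
    println!("ok");
}
```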

And it doesn't need to be a radical change or re-architecture.  It's honestly been going on for years.  My desktop has 400+ kernel threads running right now.  Pulling individual components partly into user space doesn't necessarily make a huge radical change in the kernel itself, even if that is something as core as the scheduler.  The Linux kernel has *always* strived to be modular, and core components like schedulers, memory allocation, and IO have been refactored or rewritten multiple times.
 
The following users thanked this post: newbrain

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3769
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #21 on: March 01, 2024, 10:33:56 pm »
Though I looked at the language a while ago and, honestly, if I still have to go the unsafe route to do any peripheral access I might as well keep writing C...

That's a pretty silly argument even for embedded development.  On all but the most trivial programs (which: who cares what you use), only a tiny fraction of the code is actually performing the register read/writes.  I'm not completely thrilled with the way rust works on embedded, or the way unsafe works in rust, but most code even in embedded could be safer.
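The "tiny fraction of the code" point is usually realized by confining the unsafe register accesses behind a safe API. A hedged sketch, using a mock heap-allocated register block so it runs anywhere (on a real target `regs` would be a fixed MMIO address; register names and layout are invented):

```rust
use std::ptr;

// Mock hardware: a plain struct stands in for a memory-mapped UART block.
#[repr(C)]
struct UartRegs {
    ctrl: u32,
    baud: u32,
}

struct Uart {
    regs: *mut UartRegs,
}

impl Uart {
    const CTRL_ENABLE: u32 = 1;

    // The unsafe code is confined to these accessors; everything calling
    // them is ordinary safe Rust the compiler checks as usual.
    fn set_baud_divisor(&mut self, div: u32) {
        unsafe { ptr::write_volatile(&mut (*self.regs).baud, div) }
    }

    fn enable(&mut self) {
        unsafe {
            let ctrl = ptr::read_volatile(&(*self.regs).ctrl);
            ptr::write_volatile(&mut (*self.regs).ctrl, ctrl | Self::CTRL_ENABLE);
        }
    }
}

fn main() {
    let mut regs = UartRegs { ctrl: 0, baud: 0 };
    let mut uart = Uart { regs: &mut regs };
    uart.set_baud_divisor(26);
    uart.enable();
    assert_eq!(regs.baud, 26);
    assert_eq!(regs.ctrl, 1);
    println!("ok");
}
```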
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #22 on: March 01, 2024, 10:40:09 pm »
Though I looked at the language a while ago and, honestly, if I still have to go the unsafe route to do any peripheral access I might as well keep writing C...

That's a pretty silly argument even for embedded development.  On all but the most trivial programs (which: who cares what you use), only a tiny fraction of the code is actually performing the register read/writes.  I'm not completely thrilled with the way rust works on embedded, or the way unsafe works in rust, but most code even in embedded could be safer.
It depends what you are programming. A lot of small embedded processors are doing endless manipulation of peripherals. However, what you say is true for most larger embedded programs. You really ought to make life as easy as possible for the bulk of the code, so you have more time to focus on the gritty stuff.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #23 on: March 01, 2024, 11:04:11 pm »
Though I looked at the language a while ago and, honestly, if I still have to go the unsafe route to do any peripheral access I might as well keep writing C...

That's a pretty silly argument even for embedded development.  On all but the most trivial programs (which: who cares what you use), only a tiny fraction of the code is actually performing the register read/writes.  I'm not completely thrilled with the way rust works on embedded, or the way unsafe works in rust, but most code even in embedded could be safer.
It depends what you are programming. A lot of small embedded processors are doing endless manipulation of peripherals. However, what you say is true for most larger embedded programs. You really ought to make life as easy as possible for the bulk of the code, so you have more time to focus on the gritty stuff.

I prefer to get the computer to prevent me from making as many avoidable errors as possible. Attitudes like "I can safely handle loaded guns pointed at my foot" are macho libertarian nonsense.

Twiddling bits in a register is not a major issue. Ensuring the correct bits are twiddled at the correct time is far more complex and error prone.

Thus I don't care if peek and poke are implemented in C or assembler. Correctly calculating the invocations of peek and poke in a multimode processor is far more challenging. Any tool that helps automatically verify those invocations are correct is valuable.
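One way tools can help with "the correct bits at the correct time" is the typestate pattern: encode the required ordering in the type system, so a misordered init sequence fails to compile instead of misbehaving in the field. A hedged sketch, with the peripheral, states and values all invented:

```rust
use std::marker::PhantomData;

// Init-sequence states: the hardware requires clock configuration
// before the peripheral may be enabled.
struct Reset;
struct Clocked;
struct Enabled;

struct Periph<State> {
    baud: u32,
    _state: PhantomData<State>,
}

impl Periph<Reset> {
    fn take() -> Self {
        Periph { baud: 0, _state: PhantomData }
    }
    // Consuming `self` means the Reset-state handle cannot be reused.
    fn set_clock(self, baud: u32) -> Periph<Clocked> {
        Periph { baud, _state: PhantomData }
    }
}

impl Periph<Clocked> {
    fn enable(self) -> Periph<Enabled> {
        Periph { baud: self.baud, _state: PhantomData }
    }
}

impl Periph<Enabled> {
    fn baud(&self) -> u32 {
        self.baud
    }
}

fn main() {
    // Periph::take().enable() would not compile: enable() only exists
    // once the clock has been configured.
    let uart = Periph::take().set_clock(115_200).enable();
    assert_eq!(uart.baud(), 115_200);
    println!("ok");
}
```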
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: newbrain

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #24 on: March 02, 2024, 12:45:51 am »
Though I looked at the language a while ago and, honestly, if I still have to go the unsafe route to do any peripheral access I might as well keep writing C...

That's a pretty silly argument even for embedded development.  On all but the most trivial programs (which: who cares what you use), only a tiny fraction of the code is actually performing the register read/writes.  I'm not completely thrilled with the way rust works on embedded, or the way unsafe works in rust, but most code even in embedded could be safer.
It depends what you are programming. A lot of small embedded processors are doing endless manipulation of peripherals. However, what you say is true for most larger embedded programs. You really ought to make life as easy as possible for the bulk of the code, so you have more time to focus on the gritty stuff.

I prefer to get the computer to prevent me from making as many avoidable errors as possible. Attitudes like "I can safely handle loaded guns pointed at my foot" are macho libertarian nonsense.

Twiddling bits in a register is not a major issue. Ensuring the correct bits are twiddled at the correct time is far more complex and error prone.

Thus I don't care if peek and poke are implemented in C or assembler. Correctly calculating the invocations of peek and poke in a multimode processor is far more challenging. Any tool that helps automatically verify that those values are correct is valuable.
You just aren't going to make small embedded machines super safe to program, unless, perhaps, AI assistants get so good they pick up the errors. We need to take the wins where they present themselves. There are many weird and wonderful multi-core devices for communications applications, with very heterogeneous layouts of memory and peripherals. Good luck trying to build moderately safe programming environments for those. They are a big security issue, as it's often very hard to figure out from their documentation just what it is you are supposed to do in all cases.
 

Offline Perkele

  • Regular Contributor
  • *
  • Posts: 56
  • Country: ie
Re: Future Software Should Be Memory Safe, says the White House
« Reply #25 on: March 02, 2024, 01:07:10 am »
A government decree won't stop corporations from churning out shit software or half-finished hardware.
This is the main cause of security issues.
Changing languages will not fix it.
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3769
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #26 on: March 02, 2024, 01:26:33 am »
It depends what you are programming. A lot of small embedded processors are doing endless manipulation of peripherals. However, what you say is true for most larger embedded programs. You really ought to make life as easy as possible for the bulk of the code, so you have more time to put more focus on the gritty stuff.

Even most embedded things that do "endless peripheral manipulation" generally don't have (or at least don't need) a tremendously large number of lines of code actually accessing the registers.  I doubt very many even small projects would need a large fraction of their code to be marked unsafe.  Write a handful of GPIO helpers that use unsafe code and all the code that calls them can be safe, etc.

That said, embedded systems design practices have often already solved the problems that memory safe languages address.  If you don't use dynamic allocation, a huge class of errors goes away.  If you make minimal use of arrays and pointer arithmetic, and don't pass array pointers to functions that don't have their size visible, another big chunk goes away.   There is even some use (and yes, also misuse) of formal proofs.  If you can prove that you never access your statically allocated arrays out of bounds, that fixes yet another set of bugs.  So I agree that in some embedded situations the benefits of a memory safe language may be kind of moot.  Not because they are a bad idea, but because such systems already have much more draconian rules on memory use than you could tolerate in a more general purpose system.
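The "handful of GPIO helpers" idea can be sketched in a few lines of Rust. This is a host-runnable toy, not real MMIO: the register is simulated with a static so it runs anywhere, and the names (GPIO_ODR, set_pin) are invented for illustration, not taken from any real HAL:

```rust
use core::ptr::{addr_of_mut, read_volatile, write_volatile};

// Hypothetical GPIO output-data register. Simulated with a static so the
// sketch runs on a host; on real hardware this would be a fixed MMIO address.
static mut GPIO_ODR: u32 = 0;

/// Safe wrapper: the only `unsafe` blocks in the program live here,
/// so all calling code stays in safe Rust.
pub fn set_pin(pin: u8) {
    assert!(pin < 32, "pin out of range");
    unsafe {
        let reg = addr_of_mut!(GPIO_ODR);
        write_volatile(reg, read_volatile(reg) | (1u32 << pin));
    }
}

/// Safe read-back of the simulated register.
pub fn pins() -> u32 {
    unsafe { read_volatile(addr_of_mut!(GPIO_ODR)) }
}
```

On real hardware the static would be replaced by the peripheral's fixed address, but the shape stays the same: one small audited unsafe region, safe callers everywhere else.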
 
The following users thanked this post: Siwastaja, JPortici

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 14665
  • Country: fr
Re: Future Software Should Be Memory Safe, says the White House
« Reply #27 on: March 02, 2024, 01:38:17 am »
A government decree (...)

This is actually the worst part of this story indeed.
 

Offline Wil_Bloodworth

  • Regular Contributor
  • *
  • Posts: 198
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #28 on: March 02, 2024, 02:28:02 am »
A government decree (...)

This is actually the worst part of this story indeed.
Agreed.

Why would any intelligent person take ANY advice that comes from the White House?!

- Wil
 

Offline Bud

  • Super Contributor
  • ***
  • Posts: 6952
  • Country: ca
Re: Future Software Should Be Memory Safe, says the White House
« Reply #29 on: March 02, 2024, 02:32:42 am »
Before joining the White House the guy was busy at the executive diversity and inclusion council at the CIA. Why not trust the experience?
Facebook-free life and Rigol-free shack.
 

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 14665
  • Country: fr
Re: Future Software Should Be Memory Safe, says the White House
« Reply #30 on: March 02, 2024, 03:43:34 am »
Before joining the White House the guy was busy at the executive diversity and inclusion council at the CIA.

Even better. :popcorn:
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3475
  • Country: it
Re: Future Software Should Be Memory Safe, says the White House
« Reply #31 on: March 02, 2024, 05:36:40 am »
Though I looked at the language a while ago and honestly, if I still have to go the unsafe route to do any peripheral access, I might as well keep writing C...

That's a pretty silly argument even for embedded development.  On all but the most trivial programs (which: who cares what you use), only a tiny fraction of the code is actually performing the register reads/writes.  I'm not completely thrilled with the way Rust works on embedded, or the way unsafe works in Rust, but most code even in embedded could be safer.
It depends what you are programming. A lot of small embedded processors are doing endless manipulation of peripherals.

pretty much this.

Quote
However, what you say is true for most larger embedded programs. You really ought to make life as easy as possible for the bulk of the code, so you have more time to put more focus on the gritty stuff.
and I agree very much with this as well, but that's not what I usually work on.
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6858
  • Country: pl
Re: Future Software Should Be Memory Safe, says the White House
« Reply #32 on: March 02, 2024, 07:07:33 am »
From the perspective of somebody more familiar with device drivers than thread schedulers, memory safe languages are an absolute joke when you are dealing with DMA-capable hardware, limited IOMMUs (page granularity and high cost of changing page mappings) and potentially malicious external DMA peripherals (Thunderbolt, USB4).
 

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 14665
  • Country: fr
Re: Future Software Should Be Memory Safe, says the White House
« Reply #33 on: March 02, 2024, 07:17:51 am »
From the perspective of somebody more familiar with device drivers than thread schedulers, memory safe languages are an absolute joke when you are dealing with DMA-capable hardware, limited IOMMUs (page granularity and high cost of changing page mappings) and potentially malicious external DMA peripherals (Thunderbolt, USB4).

Makes sense. It's then odd that Rust has been considered first (if I got it right) for device drivers in the Linux kernel.
But even for the core part of the kernel, I think it's the wrong answer to the wrong question.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #34 on: March 02, 2024, 09:13:15 am »
Though I looked at the language a while ago and honestly, if I still have to go the unsafe route to do any peripheral access, I might as well keep writing C...

That's a pretty silly argument even for embedded development.  On all but the most trivial programs (which: who cares what you use), only a tiny fraction of the code is actually performing the register reads/writes.  I'm not completely thrilled with the way Rust works on embedded, or the way unsafe works in Rust, but most code even in embedded could be safer.
It depends what you are programming. A lot of small embedded processors are doing endless manipulation of peripherals. However, what you say is true for most larger embedded programs. You really ought to make life as easy as possible for the bulk of the code, so you have more time to put more focus on the gritty stuff.

I prefer to get the computer to prevent me from making as many avoidable errors as possible. Attitudes like "I can safely handle loaded guns pointed at my foot" are macho libertarian nonsense.

Twiddling bits in a register is not a major issue. Ensuring the correct bits are twiddled at the correct time is far more complex and error prone.

Thus I don't care if peek and poke are implemented in C or assembler. Correctly calculating the invocations of peek and poke in a multimode processor is far more challenging. Any tool that helps automatically verify that those values are correct is valuable.
You just aren't going to make small embedded machines super safe to program, unless, perhaps, AI assistants get so good they pick up the errors. We need to take the wins where they present themselves. There are many weird and wonderful multi-core devices for communications applications, with very heterogeneous layouts of memory and peripherals. Good luck trying to build moderately safe programming environments for those. They are a big security issue, as it's often very hard to figure out from their documentation just what it is you are supposed to do in all cases.

Just so. That's why automated support from tools is necessary.

To make that tractable you need to constrain the problem, and that is best done by having a constrained language and environment. The trick is to have constraints which are easy enough to work with and still offer useful guarantees for a large number of problems.

Hence you can't have automated accurate GC management in C/C++ because of the pointer/integer "confusion" and aliasing. Having strongly typed data enables some very interesting and effective automated GC optimisations in HotSpot; those that only know C/C++ and "learned about" GC at university don't understand how effective they can be, even/especially in common multicore NUMA architectures.

Rust appears to achieve the balance. Naturally it isn't suitable for everything, any more than C/C++ is suitable for everything. Choose the best tool for the specific job.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: newbrain

Online Marco

  • Super Contributor
  • ***
  • Posts: 6747
  • Country: nl
Re: Future Software Should Be Memory Safe, says the White House
« Reply #35 on: March 02, 2024, 09:33:21 am »
From the perspective of somebody more familiar with device drivers than thread schedulers, memory safe languages are an absolute joke when you are dealing with DMA-capable hardware, limited IOMMUs (page granularity and high cost of changing page mappings) and potentially malicious external DMA peripherals (Thunderbolt, USB4).
In what way? A buffer overflow in your code to handle DMA can be chained all the way to full system control. To me it seems even more important.

Apple made a memory safe C dialect for the iOS bootloader.
 

Offline newbrain

  • Super Contributor
  • ***
  • Posts: 1738
  • Country: se
Re: Future Software Should Be Memory Safe, says the White House
« Reply #36 on: March 02, 2024, 10:17:57 am »
Another thing to watch out for is significantly different technologies that aren't sufficiently practical outside a laboratory/academia. Examples have been formal methods, Erlang, the various ML/Haskell/etc languages. Interesting, but not worth spending too much time on.
I'd make an exception for Erlang. I'm not so fond of the language itself, but it cannot be considered an academic language with no practical use.
Its real-world applications are, in fact, all around you. A number of large companies use it (e.g. Klarna in Sweden) - the Facebook chat backend used to be implemented in Erlang, but they migrated away.
But, above all, it's deployed in literally millions of radio devices in the Radio Access Network for mobile communications.

What I've personally seen in development (Note: I do not work directly with it, but I'm a user of a number of services implemented with it) is a quicker turnaround time from requirement to implementation, with a quality no worse than more classical languages.
Nandemo wa shiranai wa yo, shitteru koto dake.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #37 on: March 02, 2024, 10:44:39 am »
Another thing to watch out for is significantly different technologies that aren't sufficiently practical outside a laboratory/academia. Examples have been formal methods, Erlang, the various ML/Haskell/etc languages. Interesting, but not worth spending too much time on.
I'd make an exception for Erlang. I'm not so fond of the language itself, but it cannot be considered an academic language with no practical use.
Its real-world applications are, in fact, all around you. A number of large companies use it (e.g. Klarna in Sweden) - the Facebook chat backend used to be implemented in Erlang, but they migrated away.
But, above all, it's deployed in literally millions of radio devices in the Radio Access Network for mobile communications.

What I've personally seen in development (Note: I do not work directly with it, but I'm a user of a number of services implemented with it) is a quicker turnaround time from requirement to implementation, with a quality no worse than more classical languages.

Understood and accepted.

I have employed some of the Actor-related design patterns on high availability telecom systems written in Java. I've never been convinced that Erlang's pattern matching is "right", but that is definitely based on a lack of practical understanding of how it can be used in practical systems.

I heard rumours part of the IMDB website also used/uses Erlang.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline cosmicray

  • Frequent Contributor
  • **
  • Posts: 309
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #38 on: March 02, 2024, 11:59:26 am »
Because of this WH announcement (which I read yesterday), one of the footnotes led me to Google Project Zero (which I have been aware of for a while), and that led me to a blog post about the NSO zero-click iMessage exploit https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html. I was only vaguely aware of Pegasus and NSO (this was in the news a year or so back), but the actual exploit, and the mindset that it took to write it, is heart stopping.

This is likely a prime candidate for why software (in general) and those parts which are widely used (in particular) need to have a much cleaner attack footprint. Who knew that an image parser could be manipulated in this way?
it's only funny until someone gets hurt, then it's hilarious - R. Rabbit
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #39 on: March 02, 2024, 12:18:56 pm »
Because of this WH announcement (which I read yesterday), one of the footnotes led me to Google Project Zero (which I have been aware of for a while), and that led me to a blog post about the NSO zero-click iMessage exploit https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html. I was only vaguely aware of Pegasus and NSO (this was in the news a year or so back), but the actual exploit, and the mindset that it took to write it, is heart stopping.

This is likely a prime candidate for why software (in general) and those parts which are widely used (in particular) need to have a much cleaner attack footprint. Who knew that an image parser could be manipulated in this way?

It is also an illustration that, while your company/team may have perfectly adept C programmers, what about that library from another company, and how it interacts with something else that your perfect programmers didn't develop?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline baldurn

  • Regular Contributor
  • *
  • Posts: 189
  • Country: dk
Re: Future Software Should Be Memory Safe, says the White House
« Reply #40 on: March 02, 2024, 12:49:52 pm »
Though I looked at the language a while ago and honestly, if I still have to go the unsafe route to do any peripheral access, I might as well keep writing C...

Although you can access hardware directly using unsafe code, you typically wouldn't. The recommended method is to use a HAL (Hardware Abstraction Layer) library for your platform. This adds a lot of extra checks besides memory safety. For example, if you attempt to use a pin for an unsupported function, such as SPI on a pin that cannot do SPI, your program simply won't compile, which means your editor will make you aware that the code is invalid as you write it. You are also not allowed to read from a pin that first needs to be changed to read mode, or to mistakenly reuse the same pin for multiple purposes, etc.
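A minimal sketch of how such a compile-time pin check can work, using the type-state idiom. The pin and trait names below are invented for illustration, not taken from any real HAL crate:

```rust
// Each pin is a zero-sized type; a marker trait records which pins can
// carry the SPI clock. A wrong pin choice is then a type error, flagged
// in the editor before the code ever runs.
pub struct Pa5; // hypothetical pin that supports SPI SCK
pub struct Pb9; // hypothetical pin that does not

pub trait SpiSckPin {}
impl SpiSckPin for Pa5 {}
// No impl for Pb9: passing it below simply does not compile.

pub fn configure_spi_clock<P: SpiSckPin>(_sck: P) -> &'static str {
    "SPI clock configured"
}
// configure_spi_clock(Pb9) fails with "the trait bound `Pb9: SpiSckPin`
// is not satisfied" at compile time.
```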

Rust also provides easy concurrency (the so-called "Fearless Concurrency" pattern). This is a typical problem area with many security exploits, and by using it you eliminate another class of bugs. MCUs have multiple cores these days, and you also have interrupts, DMA, etc. Your simple code to handle these things might not be as good as you think, because this is a really hard problem to solve correctly.
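The concurrency point can be shown in miniature with std threads (a host-side sketch, not MCU code; the function name is made up):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Shared mutable state must be wrapped in thread-safe containers
// (Arc + Mutex here) or the program does not compile: a plain `&mut u32`
// captured by several threads is rejected by the borrow checker, so a
// whole class of data races becomes a compile error instead of a bug.
pub fn parallel_count(threads: u32, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *c.lock().unwrap() += 1; // lock guarantees exclusive access
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}
```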

The simple act of manipulating some GPIO is the least of the advantages of Rust on an MCU. As described above you will get some benefits, but it is unlikely to be your GPIO code that has the security bugs. We find those in the same places as in all other software: mistakes in memory allocation, deallocation, uninitialized pointers, reuse of memory containing old data, race conditions, boundary checks and so on. Why would you not have safety against all of that, just because you might need to use the unsafe keyword in some very specific parts of your code with the least chance of bugs?
 
The following users thanked this post: newbrain

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8243
  • Country: fi
Re: Future Software Should Be Memory Safe, says the White House
« Reply #41 on: March 02, 2024, 02:43:22 pm »
The recommended method is to use a HAL (Hardware Abstraction Layer) library for your platform. This adds a lot of extra checks besides memory safety. For example if you attempt to use the wrong pin for an unsupported function, such as SPI on a pin that can not do SPI, your program simply won't compile.

While a good design goal for an abstraction layer, I have never seen any of this in real life. Maybe it exists in some high-quality projects, but manufacturer solutions, for example, are much crappier: consider the STM32 vendor libraries, where you would be initializing pin modes using alternate function enumerations with no way of checking whether SPI is available on that pin or not. No such checks are made at either compile time or run time, so you totally can enable the SPI peripheral and configure a wrong pin to a wrong alternate function. No memory safety is added either; it's actually the opposite: while a direct register write has little surface area for the common memory handling errors, the STM32 HAL (and many others!) uses pointers for both internal state and configuration structs supplied to functions, and such pointers can accidentally point to any type of data at any address. Sure, you need colossally stupid and careless mistakes to mess something that simple up, but claiming there is some added memory safety obviously isn't true.

So usually, for your nice claims to hold true, you would need to develop the compile-time-checked, memory-safe hardware abstraction layer yourself. Which is then again open to any mistakes within the layer, plus added mistakes in the extra housekeeping chores and interfaces. For a large team where many players contribute to the firmware it totally makes sense: have the most experienced and careful engineer develop the most "unsafe" parts and design simple interfaces to them so as to minimize the risk of misuse. Per-project hardware abstractions also have the advantage that they don't need to try to support everything, and therefore might be able to replace complex configuration structs and state management with much simpler function calls.

With a smaller project completely handled by one engineer, a solution which accesses UART->DR directly everywhere in the code is, albeit less elegant and slower to port, probably more memory-safe than one which abstracts it behind C functions where state and control actions are passed through pointers.

But this is of course a matter of implementation. The idea is valid, and it certainly can be implemented correctly. Using a so-called "memory-safe" language, if it doesn't force you, at least nudges you in the right direction. I'm just saying: be careful, because real-world implementations have a pretty poor track record when it comes to hardware abstraction on microcontrollers.
 

Offline baldurn

  • Regular Contributor
  • *
  • Posts: 189
  • Country: dk
Re: Future Software Should Be Memory Safe, says the White House
« Reply #42 on: March 02, 2024, 02:59:55 pm »
While a good design goal for a abstraction layer, I have never seen any of this in real life.

In the case of Rust and common MCUs like STM32 the open source people already solved this. We have good crates with HALs with all the bells and whistles.

https://docs.rs/stm32-hal2/latest/stm32_hal2/

You can even generate your own from an SVD file:

https://docs.rs/svd2rust/latest/svd2rust/

 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3963
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #43 on: March 02, 2024, 03:11:15 pm »
Maybe time to think about starting alternatives.

if Linux were to implode ...
  • FreeBSD, NetBSD, OpenBSD <--- I've used them occasionally, but my development there has yet to start
  • Haiku <--- not on a daily basis, but there is already a corner of my desktop dedicated to experimenting with this
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8243
  • Country: fi
Re: Future Software Should Be Memory Safe, says the White House
« Reply #44 on: March 02, 2024, 03:22:36 pm »
https://docs.rs/stm32-hal2/latest/stm32_hal2/

Looked at it and it still seems to miss the mentioned feature, compile-time checking of peripheral availability on a given pin:
Code: [Select]
let mut scl = Pin::new(Port::B, 6, PinMode::Alt(4));
scl.output_type(OutputType::OpenDrain);

Still seems to map based on magic numbers. I mean, if I change that to Alt(5), or to Port::B,10, where does the compilation fail?
 

Offline baldurn

  • Regular Contributor
  • *
  • Posts: 189
  • Country: dk
Re: Future Software Should Be Memory Safe, says the White House
« Reply #45 on: March 02, 2024, 04:00:22 pm »
https://docs.rs/stm32-hal2/latest/stm32_hal2/

Looked at it and it still seems to miss the mentioned feature, compile-time checking of peripheral availability on a given pin:
Code: [Select]
let mut scl = Pin::new(Port::B, 6, PinMode::Alt(4));
scl.output_type(OutputType::OpenDrain);

Still seems to map based on magic numbers. I mean, if I change that to Alt(5), or to Port::B,10, where does the compilation fail?

I haven't actually any experience with the STM32 HAL. I have used the RP2040 HAL, which has these features. The RP2040 HAL does not use integers for pins but types, for example pins.gpio18. Each of those has a different type, so you couldn't ask gpio18 to take on a function which it does not implement.
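The typed-pin idea can be sketched with Rust's type-state pattern. Everything below is invented for illustration and is far simpler than the real RP2040 HAL:

```rust
use std::marker::PhantomData;

// Pin modes as types: a pin can only be read once converted to Input,
// and only driven while it is an Output. Misuse is a compile error.
pub struct Output;
pub struct Input;

pub struct Pin<Mode> {
    level: bool,
    _mode: PhantomData<Mode>,
}

impl Pin<Output> {
    pub fn new_low() -> Self {
        Pin { level: false, _mode: PhantomData }
    }
    pub fn set_high(&mut self) {
        self.level = true;
    }
    // Consumes the output pin, so the stale handle cannot be reused.
    pub fn into_input(self) -> Pin<Input> {
        Pin { level: self.level, _mode: PhantomData }
    }
}

impl Pin<Input> {
    pub fn is_high(&self) -> bool {
        self.level
    }
}
// Calling is_high() on a Pin<Output> does not compile, mirroring the
// HAL behaviour described above.
```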

It is still early days for embedded Rust and not all the promises have been met yet.
 
The following users thanked this post: Siwastaja, JPortici

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 14665
  • Country: fr
Re: Future Software Should Be Memory Safe, says the White House
« Reply #46 on: March 03, 2024, 11:42:10 pm »
Just use Ada then instead of a cryptic language sponsored by woke hipsters with no formal specification yet. ;D
 
The following users thanked this post: Siwastaja

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27207
  • Country: nl
    • NCT Developments
Re: Future Software Should Be Memory Safe, says the White House
« Reply #47 on: March 04, 2024, 01:04:28 am »
To make that tractable you need to constrain the problem, and that is best done by having a constrained language and environment. The trick is to have constraints which are easy enough to work with and still offer useful guarantees for a large number of problems.
Which is why it makes much sense to have a thin C/C++ layer and use sandboxed languages like Lua or Python to implement the logic. A program can still crash, but the C/C++ layer can do a graceful recovery.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3963
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #48 on: March 04, 2024, 07:42:06 am »
Just use Ada then instead of a cryptic language sponsored by woke hipsters with no formal specification yet. ;D

The two problems with Ada are
  • all the OpenSource support just sucks
    there's no point even mentioning GNAT (even the paid version) and all the crap gcc has to swallow to support the Ada language
    llvm side... well, a little better, but not much
  • Compared to GreenHills Ada, GNAT is a toy which completely lacks tools; it's fine for "hAllo.ads", but on serious projects you won't get far without spending 10 times the time and effort

So, Rust is better here, at least from an OpenSource perspective, and it has clean(1) backend support for Clang/llvm.

(1) well "clean" ...
say it's "clean" for the mainstream architectures { x86 {32, 64}, Arm{32, 64} }
very poor for { HPPA2, MIPS{*}, PowerPC{*}, POWER, SH{*}, ... }
except Risc-V for which it has experimental but decent support.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8243
  • Country: fi
Re: Future Software Should Be Memory Safe, says the White House
« Reply #49 on: March 04, 2024, 08:28:32 am »
A program can still crash, but the C / C++ layer can do a graceful recovery.

For embedded control systems, this "graceful recovery" could be logically as complex as the application itself. Possibly crash+recovery, which could take seconds, is not acceptable in the first place.

A practical simple example would be an inertial navigation system which does dead reckoning. Hardware access to the gyroscopes, accelerometers etc. is quite tightly coupled with the calculation, and on the other hand, it's completely unacceptable to just die for even a few milliseconds, thus simple and monolithic solutions (written and tested to strict quality standards) are preferred over complicated layering and the allowance of high-level bugs in a sandboxed area; there is not much use for a sandbox in such a system. Many embedded microcontroller things are like this, not even having a GUI, or the GUI is physically separate on a different device altogether.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #50 on: March 04, 2024, 10:36:27 am »
To make that tractable you need to constrain the problem, and that is best done by having a constrained language and environment. The trick is to have constraints which are easy enough to work with and still offer useful guarantees for a large number of problems.
Which is why it makes much sense to have a thin C/ C++ layer and use sand-boxed languages like Lua or Python to implement the logic. A program can still crash, but the C / C++ layer can do a graceful recovery.

That's one workable approach. There are others, and there will need to be more invented in the future.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6747
  • Country: nl
Re: Future Software Should Be Memory Safe, says the White House
« Reply #51 on: March 04, 2024, 02:22:24 pm »
A practical simple example would be an inertial navigation system which does dead reckoning.
Or a centrifuge controller.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
Re: Future Software Should Be Memory Safe, says the White House
« Reply #52 on: March 04, 2024, 03:50:25 pm »
If a C compiler propagated array bounds checking across function prototypes at compile time, and allowed modifier variables to be listed after the variably modified array instead of only before, then it would be rather easy to write memory-safe code in C, too.

In practice, it'd mean you'd declare e.g. strncmp() as
    int strncmp(const char s1[n], const char s2[n], size_t n);
which would be compatible with all existing C code, except that the compiler could check at compile time that n does not exceed the array bounds.

This is valid today, ever since ISO C99, although one must declare n before the array parameters.  Unfortunately, compilers don't bother to check at compile time whether the array bounds thus specified exceed the array bounds it knows about.  They could; it's just that it isn't considered important or useful for anybody to implement it yet.

The only functions this cannot help with are those designed to work with all kinds of arrays via void pointers, i.e. memcpy(), memmove(), memset(), memcmp(), qsort().  For those, the compiler would need to support an array of unsigned char type (byte [] below, with noalias array attribute corresponding to restrict pointer attribute) with void * casting semantics, so that they could be declared as
    byte[n] memcpy(noalias byte dest[n], const noalias byte src[n], size_t n);
    byte[n] memmove(byte dest[n], const byte src[n], size_t n);
    byte[n] memset(byte dest[n], const byte value, size_t n);
    int memcmp(const byte src1[n], const byte src2[n], size_t n);
    void qsort(byte base[n * size], size_t n, size_t size, int (*compare)(const byte src1[size], const byte src2[size]));

For the cases where data is obtained in the form of a pointer and size –– consider malloc(), realloc() for example –– a construct that associates a pointer and size (in unsigned chars/bytes, or elements) to form an array, would complete the set of features.

With these features, programmers would still actually need to choose to write memory-access-safe code using the above pattern instead of always working with pointers, though.

All of the above generates the exact same machine code as the current declarations and implementations, too: there is absolutely no run-time overhead associated with any of this.  The entire idea is to expose the implicit array bounds at compile time, propagate them through function calls by letting the compiler know how each pointer and size are associated with each other (forming an "array"), and check for possible bounds breaking at compile time.

The fact that we have Objective-C and C++, but lack even compile-time bounds checking for variably modified array bounds in C, tells me memory safety is more a niche, politics, and research subject than a practical request by those doing programming to implement things in practice.  In practice, memory safety is really just a programming attitude, an approach, rather than some magical property of the programming language itself.
« Last Edit: March 04, 2024, 03:53:38 pm by Nominal Animal »
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6747
  • Country: nl
Re: Future Software Should Be Memory Safe, says the White House
« Reply #53 on: March 04, 2024, 04:28:30 pm »
practical request by those doing programming to implement things in practice

Indeed, some check writers have decided to stop listening to them though. In practice, the programmers will request money before C.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
Re: Future Software Should Be Memory Safe, says the White House
« Reply #54 on: March 04, 2024, 04:48:27 pm »
practical request by those doing programming to implement things in practice

Indeed, some check writers have decided to stop listening to them though. In practice, the programmers will request money before C.
I'm including the check writers in "those doing programming to implement things in practice", i.e. as in entire companies.

Put simply, those using C to get stuff done aren't interested in memory safety.

Those who are interested in memory safety and are paid to get stuff done either target a specialized niche (aviation, medical, etc.) or have specialized in a particular programming language (Ada, Fortran, COBOL, Java, JavaScript, etc.).

Those who are interested in memory safety seem to be talking politics, pushing their new programming language, or doing research, and are not being paid to create real-world software tools, utilities, and applications.  It is one thing to stand next to workers and talk about how they should be doing their work, and completely another to do the work well and find ways to do it even better.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #55 on: March 04, 2024, 05:24:44 pm »
If a C compiler propagated array bounds checking across function prototypes at compile time, and allowed the bound variable to be listed after the variably modified array parameter instead of only before it, then it would be rather easy to write memory-safe code in C, too.

Don't forget the consequences of aliasing.

Don't forget the consequences of what happens inside the library code that your [remarkably and completely competent] programmers and organisation didn't write.

Quote
The fact that we have Objective-C and C++, but lack even compile-time bounds checking for variably modified array bounds in C, tells me memory safety is more a niche, politics, and research subject than a practical request by those doing programming to implement things in practice.  In practice, memory safety is really just a programming attitude, an approach, rather than some magical property of the programming language itself.

Memory safety is "ignored" because antiquated tools don't offer it. That's the equivalent of putting hands over your eyes and fingers in your ears.

Where safety does matter, rules have been created to sub-set the antiquated languages. Adherence to the rules is not trivial.

If modern tools avoid problems, they should be used.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
Re: Future Software Should Be Memory Safe, says the White House
« Reply #56 on: March 04, 2024, 05:40:54 pm »
Don't forget the consequences of aliasing.
Read the post: the noalias keyword would be equivalent to restrict for pointers.

You can argue whether aliasing should be marked the opposite way (i.e., aliased data forbidden unless marked by may_alias or similar), but that is not a technical point: it is a social one, and revolves around how to entice programmers to use the tools they have properly.

Don't forget the consequences of what happens inside the library code that your [remarkably and completely competent] programmers and organisation didn't write.
You mean, because the world is full of shit, it makes no sense to generate anything better than shit?

This is compile-time bounds checking via declared interfaces.  If a library exposes a function foo(byte a[n], size_t n, byte b[m], size_t m), we can only assume it is correctly implemented and only accesses the data at indexes 0 through n-1 and 0 through m-1, inclusive.  This applies to all compiled languages, even Rust.  Thing is, the compiler can check at the caller whether specifying such arrays is safe, at compile time.

If that library was implemented with the same approach, then the entire chain is bounds-safe.  The compiler verified that the library function does not access the arrays out-of-bounds, when compiling the library.  It just is that damn simple.
 
The following users thanked this post: Siwastaja

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #57 on: March 04, 2024, 07:21:48 pm »
Don't forget the consequences of aliasing.
Read the post. noalias keyword would be equivalent to restrict for pointers.

Last I looked, which was a long time ago, noalias means that the compiler can assume there is no aliasing, and can optimise the shit out of the code. Whether there actually is aliasing is a completely different issue; the halting problem springs to mind.

Dennis Ritchie, who knew far more about such things than I do, apparently "disliked" noalias.
https://www.yodaiken.com/2021/03/19/dennis-ritchie-on-alias-analysis-in-the-c-programming-language-1988/

Quote
Don't forget the consequences of what happens inside the library code that your [remarkably and completely competent] programmers and organisation didn't write.
You mean, because the world is full of shit, it makes no sense to generate anything better than shit?

The world is - and always will be - full of shit programmers and business practices. Deal with it accordingly.

Don't cover your eyes and hope it improves. It won't.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8243
  • Country: fi
Re: Future Software Should Be Memory Safe, says the White House
« Reply #58 on: March 04, 2024, 07:36:17 pm »
Last I looked, which was a long time ago, noalias means

Maybe you missed the fact there is no "noalias" keyword in C. This was a suggestion.
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6747
  • Country: nl
Re: Future Software Should Be Memory Safe, says the White House
« Reply #59 on: March 04, 2024, 07:48:07 pm »
Those who are interested in memory safety and are paid to get stuff done

As I said, Apple made an entire memory-safe C dialect for the iOS bootloader. Few companies will go to that much effort to maintain their own system programming language. The check writers have been led by the nose by the programmers for decades; they simply had no alternative, and were propagandised by the programmers to prevent them from even investing in possible alternatives.

Alternatives are arising now, decades late. The American government is a huge check writer; if the press release gets translated into procurement requirements, the alternatives will see a lot of investment.
« Last Edit: March 04, 2024, 07:50:21 pm by Marco »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #60 on: March 04, 2024, 07:59:23 pm »
Last I looked, which was a long time ago, noalias means

Maybe you missed the fact there is no "noalias" keyword in C. This was a suggestion.

What makes you think I missed it?

I haven't spent much time on C since the committee spent years debating whether it should be possible or impossible to "cast away const". There are good arguments for and against either decision, which is a good indication that there are fundamental problems lurking in the language.

Now, is it possible, within the language specification, to "cast away noalias"? If you or a library/debugger/etc. does, are there any guarantees about what happens, or are nasal daemons possible?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline uliano

  • Regular Contributor
  • *
  • Posts: 176
  • Country: it
Re: Future Software Should Be Memory Safe, says the White House
« Reply #61 on: March 04, 2024, 08:11:46 pm »
While a good design goal for an abstraction layer, I have never seen any of this in real life.

In the case of Rust and common MCUs like STM32 the open source people already solved this. We have good crates with HALs with all the bells and whistles.

https://docs.rs/stm32-hal2/latest/stm32_hal2/


And then you follow the link and read this:
Code:

Errata

SDIO and ethernet unimplemented
DMA unimplemented on F4, and L552
H7 BDMA and MDMA unimplemented
H5 GPDMA unimplemented
USART interrupts unimplemented on F4
CRC unimplemented for F4
High-resolution timers (HRTIM), Low power timers (LPTIM), and low power usart (LPUSART) unimplemented
ADC unimplemented on F4
Low power modes beyond csleep and cstop aren't implemented for H7
WB and WL are missing features relating to second core operations and RF
L4+ MCUs not supported
WL is missing GPIO port C, and GPIO interrupt support
If using PWM (or output compare in general) on an Advanced control timer (eg TIM1 or 8), you must manually set the TIMx_BDTR register, MOE bit.
Octospi implementation is broken
DFSDM on L4x6 is missing Filter 1.
Only FDCAN1 is implemented; not FDCAN2 or 3 (G0, G4, H7).
H5 is missing a lot of functionality, including DMA.
 
The following users thanked this post: Siwastaja

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
Re: Future Software Should Be Memory Safe, says the White House
« Reply #62 on: March 04, 2024, 09:20:46 pm »
I haven't spent much time on C since the committee spent years debating whether it should be possible or impossible to "cast away const". There are good arguments for and against either decision, which is a good indication that there are fundamental problems lurking in the language.
If you consider that relevant to memory-safety, then by the same logic Rust isn't memory-safe, because it allows the programmer to write unsafe code.

Now, is it possible, within the language specification, to "cast away noalias"?
In general, I do not believe a programming language should cater to the least common denominator, i.e. to try and stop people from shooting themselves in the face with the code they write.  I am not willing to trade any efficiency or performance for safety, because I can do that myself at run time.

(I often use C99 flexible array members in a structure with both the number of elements allocated for (size) and the number of elements in use (used) in the array.  It is trivial to check accesses are within the range of used elements, and when additional room has to be allocated for additional elements.  You can even use the same "trick" as ELF uses internally, and reserve the initial element for invalid or none; only positive indexes are valid then, and you don't need to abort at run time.)

In practice, whether casting away const or noalias/restrict should be allowed or not, depends on the exact situation and context.  It is more about style and a part of code quality management tools an organization might use.

If you care enough, you can use _Generic() since C11 to map the calls with non-const/non-noalias/non-restrict arguments to one variant that can deal with that, and the others to the optimized version which relies on const/noalias/restrict.  You can even simply clone the symbol, and not even wrap the function calls with pre-vetted casts or copy-paste the code, when the two versions are effectively the same.

Very little C code uses _Generic(), though; typically only stuff effectively like
    #define  sqrt(x) _Generic(x, float: sqrtf, long double: sqrtl, _Float128: sqrtf128, default: sqrt)(x)
but it does work for qualifiers.  For example, you could have
    size_t  strnlen_cuc(const unsigned char s[n], size_t n);
    size_t  strnlen_csc(const signed char s[n], size_t n);
    size_t  strnlen_cc(const char s[n], size_t n);
    size_t  strnlen_uc(unsigned char s[n], size_t n);
    size_t  strnlen_sc(signed char s[n], size_t n);
    size_t  strnlen_c(char s[n], size_t n);
    #define strnlen(s, n) _Generic(s, const unsigned char *: strnlen_cuc, const signed char *: strnlen_csc, const char *: strnlen_cc, unsigned char *: strnlen_uc, signed char *: strnlen_sc, char *:strnlen_c)(s, n)
with all six being just aliases to the same strnlen function symbol (because the machine-code implementation stays exactly the same in all six cases).

For functions like strstr() you'd have many more symbols, yes, but the return value would have the correct qualifiers (based on the first argument), which the compiler can enforce.

So it's not as if we lack the tools to avoid having to cast away const-ness: simply declare all the acceptable variants and select the appropriate one at compile time using _Generic().  The issue is that C programmers do not want to.

Simply put, the issue is social, not technological.  If C programmers want to write memory-safe code, they will need to replace the standard C library with something better, or just use trivial wrappers (generating basically no extra code) around it; they absolutely can if they wish to.
 
The following users thanked this post: SiliconWizard

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
Re: Future Software Should Be Memory Safe, says the White House
« Reply #63 on: March 04, 2024, 09:26:56 pm »
Now, if the intent is to get even idiots and LLMs to write "safe" code, then the language needs to be designed from the get go for those who otherwise would be sitting in a quiet corner stuffing crayons up their nostrils and eating white glue.

I'm not interested in those.  Given the choice between dangerous-but-unlimited and safe-but-limited, I always choose the first one, because I can do "safe" myself.  Again, the large majority of existing C code is crappy not because C itself is crappy, but because the majority of C users are not interested in writing non-crappy code.  One can write robust, safe code even in PHP (gasp!), although it does require some configuration settings to be set to non-insane values.

(Anyone else remember magic quotes?  More like "set this if you like to eat white glue and don't recognize all the letters of the alphabet yet, or are in too much of a hurry to even read what you wrote".)
 

Offline JPortici

  • Super Contributor
  • ***
  • Posts: 3475
  • Country: it
Re: Future Software Should Be Memory Safe, says the White House
« Reply #64 on: March 04, 2024, 09:56:51 pm »
While a good design goal for an abstraction layer, I have never seen any of this in real life.

In the case of Rust and common MCUs like STM32 the open source people already solved this. We have good crates with HALs with all the bells and whistles.

https://docs.rs/stm32-hal2/latest/stm32_hal2/


And then you follow the link and read this:
Code:

Errata

SDIO and ethernet unimplemented
DMA unimplemented on F4, and L552
H7 BDMA and MDMA unimplemented
H5 GPDMA unimplemented
USART interrupts unimplemented on F4
CRC unimplemented for F4
High-resolution timers (HRTIM), Low power timers (LPTIM), and low power usart (LPUSART) unimplemented
ADC unimplemented on F4
Low power modes beyond csleep and cstop aren't implemented for H7
WB and WL are missing features relating to second core operations and RF
L4+ MCUs not supported
WL is missing GPIO port C, and GPIO interrupt support
If using PWM (or output compare in general) on an Advanced control timer (eg TIM1 or 8), you must manually set the TIMx_BDTR register, MOE bit.
Octospi implementation is broken
DFSDM on L4x6 is missing Filter 1.
Only FDCAN1 is implemented; not FDCAN2 or 3 (G0, G4, H7).
H5 is missing a lot of functionality, including DMA.

that post and the other were a bit of propaganda that I chose not to answer.
What's the point of HALs anyway? An MCU embedded system is not a Linux SBC that can accommodate tons of different applications from the same base board plus daughterboard (or that is running Linux, which requires special considerations when writing firmware anyway, because it is being as generic as possible). When your company designs a board and writes firmware for it, the people designing the hardware will KNOW whether a pin can do this or that function, then the firmware writer will KNOW what function to assign and what peripheral to use. A HAL is not an excuse for never reading the datasheet, hence it's basically a waste of time.
I'm sure Rust can bring other benefits to the table, but in my opinion not when you are too close to the metal and have to write drivers or anything that interacts a lot with peripherals. In a few months I'll probably give it yet another try and see if things have improved. They have, since last time.
 

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 14665
  • Country: fr
Re: Future Software Should Be Memory Safe, says the White House
« Reply #65 on: March 04, 2024, 11:24:52 pm »
Now, if the intent is to get even idiots and LLMs to write "safe" code, then the language needs to be designed from the get go for those who otherwise would be sitting in a quiet corner stuffing crayons up their nostrils and eating white glue.

I'm not interested in those.  Given the choice between dangerous-but-unlimited and safe-but-limited, I always choose the first one, because I can do "safe" myself.  Again, the large majority of existing C code is crappy not because C itself is crappy, but because the majority of C users are not interested in writing non-crappy code.  One can write robust, safe code even in PHP (gasp!), although it does require some configuration settings to be set to non-insane values.

Yeah, how dare you? You're a memory safety denier.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #66 on: March 04, 2024, 11:46:19 pm »
I haven't spent much time on C since the committee spent years debating whether it should be possible or impossible to "cast away const". There are good arguments for and against either decision, which is a good indication that there are fundamental problems lurking in the language.
If you consider that relevant to memory-safety, then by the same logic Rust isn't memory-safe, because it allows the programmer to write unsafe code.

There are degrees of unsafety, as you are well aware :)

Quote
Now, is is possible, within the language specification, to "cast away noalias"?
In general, I do not believe a programming language should cater to the least common denominator, i.e. to try and stop people from shooting themselves in the face with the code they write.  I am not willing to trade any efficiency or performance for safety, because I can do that myself at run time.

I accept you are a perfect programmer that writes all the code in your application, and that you work with perfect compilers that correctly implement the full standard.

Lucky you.

Quote
In practice, whether casting away const or noalias/restrict should be allowed or not, depends on the exact situation and context.  It is more about style and a part of code quality management tools an organization might use.

You miss the point.

If you can't cast away noalias/const then you can't write some tools, debuggers being the classic example.
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.
That is an insoluble dilemma with only one sensible answer: the dilemma should be un-asked.

Now, what's your response to
... If you or a library/debugger/etc does [cast away const/noalias], are there any guarantees about what happens - or are nasal daemons possible?

That question is valid, and can't be ignored.
Do other parties all agree with your response?

Quote
Simply put, the issue is social, not technological.

I start from assuming this world and its inhabitants.

I would like to live in A Better World, but so far I haven't succeeded.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
Re: Future Software Should Be Memory Safe, says the White House
« Reply #67 on: March 05, 2024, 12:00:50 am »
Now, if the intent is to get even idiots and LLMs to write "safe" code, then the language needs to be designed from the get go for those who otherwise would be sitting in a quiet corner stuffing crayons up their nostrils and eating white glue.

I'm not interested in those.  Given the choice between dangerous-but-unlimited and safe-but-limited, I always choose the first one, because I can do "safe" myself.  Again, the large majority of existing C code is crappy not because C itself is crappy, but because the majority of C users are not interested in writing non-crappy code.  One can write robust, safe code even in PHP (gasp!), although it does require some configuration settings to be set to non-insane values.

Yeah, how dare you? You're a memory safety denier.
:P

Joking aside, there are use cases for domain-specific and general purpose scripting languages like Lua and Python and JavaScript, that can be embedded in a "native" application or service, and let nontechnical users specify business logic or UI elements and actions in a "safe" manner.

I'm personally happy to provide for them, and embed whatever interpreter they prefer; I like such designs, actually.  I'm just not interested in limiting myself to working with such programming languages only.  What I want is maximal compile-time verification without runtime overhead.

For those familiar with Fortran 95 and later, it is interesting to consider its array implementation.  Essentially every array is passed as a triplet (origin, stride, count), which allows all regular linear slice operations to work without having to copy data.  There is no reason one could not implement the same in C.  In fact, I've mentioned a tiny variable-size vector-matrix library I created, where matrices and vectors are just views to arbitrary data –– lots of aliasing here! –– which relies on the same.  For matrix element r,c, the offset relative to data origin is r*rowstride+c*colstride.  This allows one to have a matrix, and separately a vector of its diagonal or off-diagonal elements.  On desktop-class machines, the extra multiplication per element access is basically irrelevant, and for array scanning, it converts to additive (signed) constants anyway.  Any kind of mirrored or transposed view is just a modification of the strides and data origin.  As each vector is one-dimensional with a number of elements, and each matrix has the number of rows and columns it contains, runtime bounds checking is lightweight (two unsigned comparisons).

For an object-oriented, dynamically typed, memory-safe language, use JavaScript.  The current JIT compilers generate surprisingly effective code, but it isn't something you want to run on, say, a microcontroller.  Unless it is compiled to some byte-code representation, although those tend to be stack-based and not register-based like AVR and ARM Cortex-M cores are.

I start from assuming this world and its inhabitants.
I start by claiming that it is impossible to create a world safe for all humans that still allows any kind of free will or choice.  Instead, I want to maximize the options each individual has.  That includes tools that help with, but do not enforce, things like memory safety.

I would like to live in A Better World, but so far I haven't succeeded.
I do not, because I cannot define exactly what a Better World would be, without modifying humans.  (And that would be tyranny by definition.)

I am suggesting making incremental changes, with minimal disruption, so that programmers could apply the existing tools more effectively.
(Here, to apply compile-time bounds checking to all array accesses, after modifying the code to use array notation instead of pointer notation.)

You seem to suggest scrapping everything we have and replacing it with something new that fixes all known problems at once.  History shows that that rarely works, and usually leads to chaos and suffering.  It has worked when the new has been a voluntary option, and the marketplace of human endeavours has preferred it, but even then, it has just replaced the old set of problems with new ones.
Therefore, the world of software not being A Better World is a social problem and not a technical one.

An incremental change for the better has a better chance of effecting a Change towards Better, because it requires minimal effort from those using the language to adopt the new features.  If the results have a competitive edge, then programmers will do it; otherwise they will not.  Again, a social conundrum, not a technical one.  Yet, the smaller the change, the smaller the social nudge needed.

I hope you are not suggesting forcing a specific technical solution onto everyone?  You'll have better luck becoming the Emperor of Earth than succeeding with that, I think.  That kind of dreaming belongs in "What If?" fiction, not in any kind of technical discussion.
 

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 14665
  • Country: fr
Re: Future Software Should Be Memory Safe, says the White House
« Reply #68 on: March 05, 2024, 12:20:10 am »
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27207
  • Country: nl
    • NCT Developments
Re: Future Software Should Be Memory Safe, says the White House
« Reply #69 on: March 05, 2024, 01:01:57 am »
practical request by those doing programming to implement things in practice

Indeed, some check writers have decided to stop listening to them though. In practice, the programmers will request money before C.
I'm including the check writers in "those doing programming to implement things in practice", i.e. as in entire companies.
I was trained that way early on in my career and still have lots of checks in my code to make it robust, but it makes programming in C super tedious. It is also hard to convince others to program with a similar approach: mastering C to a level where one can do something useful is hard enough.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #70 on: March 05, 2024, 08:53:49 am »
practical request by those doing programming to implement things in practice

Indeed, some check writers have decided to stop listening to them though. In practice, the programmers will request money before C.
I'm including the check writers in "those doing programming to implement things in practice", i.e. as in entire companies.
I'm kind of trained that way early on in my career and still have lots of checks in my code to make it robust but it makes programming in C super tedious. But it is hard to convince others of programming with a similar approach. Mastering C to a level to do something useful is hard enough.

Just so, but it is impractical (if not impossible) for your code to check some things, e.g. aliasing.

Problems do arise when managers/businesses don't want to pay for thorough checks, since "non-functional" code looks like a waste, and is against TDD/agile religious practices.

(N.B. for the avoidance of doubt, there is some value in TDD/agile practices - but not in rigorous adherence to the religious tenets)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline cosmicray

  • Frequent Contributor
  • **
  • Posts: 309
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #71 on: March 05, 2024, 11:12:23 am »
Because of this WH announcement (which I read yesterday), one of the footnotes led me to Google Project Zero (which I have been aware of for a while), and that led me to a blog post about the NSO zero-click iMessage exploit: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html. I was only vaguely aware of Pegasus and NSO (this was in the news a year or so back), but the actual exploit, and the mindset it took to write it, is heart-stopping.

This is likely a prime example of why software in general, and its widely used parts in particular, needs a much cleaner attack footprint. Who knew that an image parser could be manipulated in this way?

It is also an illustration that, while your company/team may have perfectly adept C programmers, what about that library from another company, and how it interacts with something else your perfect programmers didn't develop?
Something under-appreciated about that exploit is that you don't need to be connected to the internet for it to run. If you have/had an air-gapped (or firewalled) phone / computer / laptop / etc., the mere fact that you rendered that specially crafted PDF document (which could be a datasheet) is all it took. Once infected, the air-gapped device might not be so air-gapped after all.
it's only funny until someone gets hurt, then it's hilarious - R. Rabbit
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
Re: Future Software Should Be Memory Safe, says the White House
« Reply #72 on: March 05, 2024, 12:28:30 pm »
Problems do arise when managers/businesses don't want to pay for thorough checks
Here we are in violent agreement.  :-+

One could say there are two completely separate domains being discussed here.  One is the one that affects the majority of code being written, basically being thrown together as fast as possible with the least effort, to get paid by customers who are satisfied with appearances.  The other is the one I'm interested in, where reducing bug density and increasing the reliability of the software is a goal.

I am not really interested in the former.  That does not mean it is an irrelevant domain; it is very relevant for real-life purposes.  I do believe it is better served by high-level abstraction languages like Python and JavaScript, and not by low-level languages, based on my own experience as a full-stack developer for a few years.  (One example of this is the pattern where I recommend writing user interfaces in Python + Qt, with heavy computation and any proprietary code in a native library referred to via the ctypes module.)

I suspect a large swath of the former domain will be covered by code generators based on LLMs, because so much of it is basically copy-paste code anyway, with very little to no "innovation" per se.  Code monkey stuff.

I am interested in the latter, but do not believe newer languages will solve all the problems: they will just replace the old set of problems with a new one, because humans just aren't very good at this sort of design, not yet anyway.  I do hope Rust and others will become better than what we have now, but they're not there yet.  For the reasons I described earlier, I suggest better overall results would be achieved faster by smaller incremental changes, for example those I described for C.  History provides additional reasons: Objective-C and C++ arose when those who wrote code decided the C tool they had was insufficient.  (It is important to realize how often this happens: git, for example.  It does not happen by creating a perfect language, then forcing others to use it.  It only happens if you use it yourself to do useful stuff, and others decide they find your way more effective than what they use now.)

There are languages with quite long histories that are memory-safe.  Ada was already mentioned; another is Fortran 95.  (The interesting thing about Fortran 95/2003 is the comparison to C pointers: in Fortran, arrays are the first-level object type, with pointers only an extension to arrays.)  Yet, these tend to only live in their niches.  PHP is a horrible example of what kind of a mess you may end up with if you try to cater for all possible paradigms: for some things, like string manipulation, it has at least two completely separate interfaces (imperative and object-oriented).  Unfortunately, Python shows partial signs of this too, what with its strings/bytes separation, and increasing number of string template facilities.  Point is, even though these languages can be shown to be technically superior to C in many ways, they are not nearly as popular.  Why?
Because technical superiority does not correlate with popularity, when humans are involved.  To fix a popularity problem, a social problem, like programmers and companies being happy to produce buggy code and customers being happy to pay for buggy code, you need to apply social/human tools, not technological ones.

We do not get better software developers by teaching them the programming language du jour; we get better software developers by convincing them to try harder to not create bugs, to use the tools they have available to support them in detecting and fixing issues when they do happen.  But most of all, we'd need to convince customers and business leadership that buying and selling buggy code is counterproductive, and that we can do better if we choose to.  All we need to do is choose to.

Now that low-quality tech gadgets are extremely easily available from online stores, some humans are realizing that getting the lowest-priced ones may not be the smart choice long-term: you end up paying more, because you keep buying the same crappy tools again and again.  Or renting software, in the hopes that the vendor will fix the issues you see, and you won't be stuck at the previous version with all the bugs in it because the vendor chose to fix them in the next version instead, which you'd need to pay to upgrade to.

In a very real way, software companies today are in a position very similar to that of mining companies a century and a half ago.  They, too, could do basically what they pleased, and had their own "company towns" where employees had to rent from the company and buy company wares to survive.  Campaign contributions to politicians kept those companies' operations untouched, until people got fed up with it.  I'm waiting for people to get fed up with how crappy software, generally speaking, is.  I just don't want a bloody revolution, just incremental changes that help fair competition, if that is what people want.

Apologies for the wall of text once again.
« Last Edit: March 05, 2024, 12:30:44 pm by Nominal Animal »
 
The following users thanked this post: Siwastaja, newbrain

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3963
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #73 on: March 05, 2024, 12:37:03 pm »
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.

this was precisely the case for MIPS5++... a bloodbath that I remember well and that I wouldn't wish on anyone
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #74 on: March 05, 2024, 12:43:55 pm »
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.
More basically, if you can cast away const then const stuff can't go into NV memory, which would make C pretty useless for many kinds of machine. "const" was an absolutely essential feature for the MCU world. For the very first pass of standardising C in the 1980s, const had to go in to make it a suitable language for the embedded market.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #75 on: March 05, 2024, 12:53:26 pm »
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.
More basically, if you can cast away const then const stuff can't go into NV memory, which would make C pretty useless for many kinds of machine. "const" was an absolutely essential feature for the MCU world. For the very first pass of standardising C in the 1980s, const had to go in to make it a suitable language for the embedded market.

And if you can't cast away constness then you can't write a debugger that pokes (ordinary) memory.

Damned if you can, damned if you can't => damned :)

The committee took years debating that in the early-mid 90s. That is damning in itself.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #76 on: March 05, 2024, 12:56:23 pm »
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.
More basically, if you can cast away const then const stuff can't go into NV memory, which would make C pretty useless for many kinds of machine. "const" was an absolutely essential feature for the MCU world. For the very first pass of standardising C in the 1980s, const had to go in to make it a suitable language for the embedded market.

And if you can't cast away constness then you can't write a debugger that pokes (ordinary) memory.

Damned if you can, damned if you can't => damned :)

The committee took years debating that in the early-mid 90s. That is damning in itself.
That was an issue in the 80s. These days, with most NV memory being flash, the debuggers just rewrite a page.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #77 on: March 05, 2024, 01:21:49 pm »
Problems do arise when managers/businesses don't want to pay for thorough checks
Here we are in violent agreement.  :-+
...
I am interested in the latter, but do not believe newer languages will solve all the problems: they will just replace the old set of problems with a new one, because humans just aren't very good at this sort of design, not yet anyway.  I do hope Rust and others will become better than what we have now, but they're not there yet. 

There we are, again, in violent agreement.

It then becomes about the philosophy of what to do: demand perfection or expect imperfection.

Quote
For the reasons I described earlier, I suggest better overall results would be achieved faster by smaller incremental changes, for example those I described for C.  History provides additional reasons: objective-C and C++, when those who wrote code decided the C tool they had was insufficient. 

There we disagree. Sometimes the only winning move is not to play.

While I liked Objective-C in the mid-late 80s as an honorable, pragmatic way to import the Smalltalk philosophy into C, I rapidly decided C++ was something I wanted to avoid. Things like the C++ committee refusing to realise they had created a language in which a valid C++ program could never be compiled just cemented my opinion.

C++ FQA is exceedingly amusing from a distance.

Modern C and C++ are castles carefully and deliberately built on the sand of obsolete presumptions about technology.

Quote
PHP is a horrible example of what kind of a mess you may end up with if you try to cater for all possible paradigms: for some things, like string manipulation, it has at least two completely separate interfaces (imperative and object-oriented).  Unfortunately, Python shows partial signs of this too, what with its strings/bytes separation, and increasing number of string template facilities. 

C++ deliberately took the decision that kind of thing was a benefit!

It seems all popular languages accrete features over time, growing like Topsy until they appear cancerous. Even Java is getting creeping featuritis, but at least it has a solid starting point.

Quote
To fix a popularity problem, a social problem, like programmers and companies being happy to produce buggy code and customers being happy to pay for buggy code, you need to apply social/human tools, not technological ones.

The principal social tool/technique is to choose tools that make it easy to avoid classes of problems.

EDIT: this directly relevant ACM article has just come to my attention: https://queue.acm.org/detail.cfm?id=3648601
"Based on work at Google over the past decade on managing the risk of software defects in its wide-ranging portfolio of applications and services, the members of Google's security engineering team developed a theory about the reason for the prevalence of defects: It's simply too difficult for real-world development and operations teams to comprehensively and consistently apply the available guidance, which results in a problematic rate of new defects. Commonly used approaches to find and fix implementation defects after the fact can help (e.g., code review, testing, scanning, or static and dynamic analysis such as fuzzing), but in practice they find only a fraction of these defects. Design-level defects are difficult or impractical to remediate after the fact. This leaves a problematic residual rate of defects in production systems.
We came to the conclusion that the rate at which common types of defects are introduced during design, development, and deployment is systemic—it arises from the design and structure of the developer ecosystem, which means the end-to-end collection of systems, tooling, and processes in which developers design, implement, and deploy software. This includes programming languages, software libraries, application frameworks, source repositories, build and deployment tooling, the production platform and its configuration surfaces, and so forth.
...
Guidance for developers in memory-unsafe languages such as C and C++ is, essentially, to be careful: For example, the section on memory management of the SEI CERT C Coding Standard stipulates rules like, "MEM30-C: Do not access freed memory" (bit.ly/3uSMBSk).
While this guidance is technically correct, it's difficult to apply comprehensively and consistently in large, complex codebases. For example, consider a scenario where a software developer is making a change to a large C++ codebase, maintained by a team of dozens of developers. The change intends to fix a memory leak that occurs because some heap-allocated objects aren't deallocated under certain conditions. The developer adds deallocation statements based on the implicit assumption that the objects will no longer be dereferenced. Unfortunately, this assumption turns out to be incorrect, because there is code in another part of the program that runs later and still dereferences pointers to this object.
"

Quote
We do not get better software developers by teaching them the programming language du jour; we get better software developers by convincing them to try harder to not create bugs, to use the tools they have available to support them in detecting and fixing issues when they do happen.  But most of all, we'd need to convince customers and business leadership that buying and selling buggy code is counterproductive, and that we can do better if we choose to.  All we need to do is choose to.

Two relevant quotes from the 80s, but I can't find a source:
  • if you make it possible for English to be a programming language, you will find programmers cannot write English
  • (after "losing" a programming contest to a faster program that was mostly correct) if I had known it was allowable to generate incorrect answers, I could have written a much faster program much sooner
The former has to be re-learned every generation; currently ML-generated programs are the silver bullet. Expect more Air Canada chatbot experiences :(
The latter wasn't originally about C/C++, but it is clearly and horribly relevant.

Quote
In a very real way, software companies today are in a very similar position to mining companies a century and a half ago.  They, too, could do basically what they pleased, and had their own "company towns" where employees had to rent from the company and buy company wares to survive.  Campaign contributions to politicians kept those companies operations untouched, until people fed up with it.  I'm waiting for people to get fed up with how crappy software generally speaking is.  I just don't want a bloody revolution, just incremental changes that help fair competition, if that is what people want.

The only route out of the mess will be legal liability. Hopefully the Air Canada chatbot case is the start of that. (See: I'm an optimist!)
« Last Edit: March 05, 2024, 01:36:54 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #78 on: March 05, 2024, 01:23:50 pm »
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.
More basically, if you can cast away const then const stuff can't go into NV memory, which would make C pretty useless for many kinds of machine. "const" was an absolutely essential feature for the MCU world. For the very first pass of standardising C in the 1980s, const had to go in to make it a suitable language for the embedded market.

And if you can't cast away constness then you can't write a debugger that pokes (ordinary) memory.

Damned if you can, damned if you can't => damned :)

The committee took years debating that in the early-mid 90s. That is damning in itself.
That was an issue in the 80s. These days, with most NV memory being flash, the debuggers just rewrite a page.

Oh, yuck. Q1: what happens when the debugger gets the page size wrong?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #79 on: March 05, 2024, 01:55:31 pm »
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.
More basically, if you can cast away const then const stuff can't go into NV memory, which would make C pretty useless for many kinds of machine. "const" was an absolutely essential feature for the MCU world. For the very first pass of standardising C in the 1980s, const had to go in to make it a suitable language for the embedded market.

And if you can't cast away constness then you can't write a debugger that pokes (ordinary) memory.

Damned if you can, damned if you can't => damned :)

The committee took years debating that in the early-mid 90s. That is damning in itself.
That was an issue in the 80s. These days, with most NV memory being flash, the debuggers just rewrite a page.

Oh, yuck. Q1: what happens when the debugger gets the page size wrong?
Flash pages are fixed size. How could the debugger get them wrong? Page read, erase, and rewrite with modifications is normal practice in debuggers these days.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #80 on: March 05, 2024, 02:01:56 pm »
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.
More basically, if you can cast away const then const stuff can't go into NV memory, which would make C pretty useless for many kinds of machine. "const" was an absolutely essential feature for the MCU world. For the very first pass of standardising C in the 1980s, const had to go in to make it a suitable language for the embedded market.

And if you can't cast away constness then you can't write a debugger that pokes (ordinary) memory.

Damned if you can, damned if you can't => damned :)

The committee took years debating that in the early-mid 90s. That is damning in itself.
That was an issue in the 80s. These days, with most NV memory being flash, the debuggers just rewrite a page.

Oh, yuck. Q1: what happens when the debugger gets the page size wrong?
Flash pages are fixed size. How could the debugger get them wrong? Page read, erase, and rewrite with modifications is normal practice in debuggers these days.

All MCUs and memory devices have exactly the same page size? That would surprise me.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #81 on: March 05, 2024, 02:06:12 pm »
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.
More basically, if you can cast away const then const stuff can't go into NV memory, which would make C pretty useless for many kinds of machine. "const" was an absolutely essential feature for the MCU world. For the very first pass of standardising C in the 1980s, const had to go in to make it a suitable language for the embedded market.

And if you can't cast away constness then you can't write a debugger that pokes (ordinary) memory.

Damned if you can, damned if you can't => damned :)

The committee took years debating that in the early-mid 90s. That is damning in itself.
That was an issue in the 80s. These days, with most NV memory being flash, the debuggers just rewrite a page.

Oh, yuck. Q1: what happens when the debugger gets the page size wrong?
Flash pages are fixed size. How could the debugger get them wrong? Page read, erase, and rewrite with modifications is normal practice in debuggers these days.

All MCUs and memory devices have exactly the same page size? That would surprise me.
No. Many MCUs even have some small and some large pages within one chip. However, that's part of the MCU's spec, which the debugger knows about.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #82 on: March 05, 2024, 02:10:32 pm »
If you can cast away noalias/const then the compiler can (and with higher optimisation levels, will) generate incorrect code.
More basically, if you can cast away const then const stuff can't go into NV memory, which would make C pretty useless for many kinds of machine. "const" was an absolutely essential feature for the MCU world. For the very first pass of standardising C in the 1980s, const had to go in to make it a suitable language for the embedded market.

And if you can't cast away constness then you can't write a debugger that pokes (ordinary) memory.

Damned if you can, damned if you can't => damned :)

The committee took years debating that in the early-mid 90s. That is damning in itself.
That was an issue in the 80s. These days, with most NV memory being flash, the debuggers just rewrite a page.

Oh, yuck. Q1: what happens when the debugger gets the page size wrong?
Flash pages are fixed size. How could the debugger get them wrong? Page read, erase, and rewrite with modifications is normal practice in debuggers these days.

All MCUs and memory devices have exactly the same page size? That would surprise me.
No. Many MCUs even have some small and some large pages within one chip. However, that's part of the MCU's spec, which the debugger knows about.

That makes sense. The issue is then to ensure the config information for the MCU is correct, and that the debugger is using the config related to the correct MCU.

That's "do-able", but obviously is not the most pressing issue.
« Last Edit: March 05, 2024, 02:13:27 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #83 on: March 05, 2024, 02:19:32 pm »
That makes sense. The issue is then to ensure the config information for the MCU is correct, and that the debugger is using the config related to the correct MCU.

That's "do-able", but obviously is not the most pressing issue.
Modern debuggers get an update each time relevant new chips are released. They can read the chip ID out of most chips, so they match up the config data with the hardware in a fairly robust manner.
 
The following users thanked this post: Siwastaja

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8243
  • Country: fi
Re: Future Software Should Be Memory Safe, says the White House
« Reply #84 on: March 05, 2024, 05:01:10 pm »
It can be interesting to look at: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

Can someone shed light at what's happening here? Use-after-free, heap buffer overflows. Wasn't Rust supposed to completely get rid of exactly these types of memory errors? What went wrong?
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #85 on: March 05, 2024, 05:09:58 pm »
Can be interesting to look at: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust
Can someone shed light at what's happening here? Use-after-free, heap buffer overflows. Wasn't Rust supposed to completely get rid of exactly these types of memory errors? What went wrong?
This is why I wrote against the stupidity of calling something memory safe or type safe. Try to stop one kind of corruption issue, and some new threading, DMA, GPU or other complexity will soon pick up the slack and keep the bug reporters in safe employment.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
Re: Future Software Should Be Memory Safe, says the White House
« Reply #86 on: March 05, 2024, 05:26:31 pm »
Problems do arise when managers/businesses don't want to pay for thorough checks
Here we are in violent agreement.  :-+
...
I am interested in the latter, but do not believe newer languages will solve all the problems: they will just replace the old set of problems with a new one, because humans just aren't very good at this sort of design, not yet anyway.  I do hope Rust and others will become better than what we have now, but they're not there yet. 

There we are, again, in violent agreement.

It then becomes about the philosophy of what to do: demand perfection or expect imperfection.

Yep, and not necessarily.  (I'm not saying you are wrong, I am saying I see this differently, but am not sure either one is correct or "more correct".)

I like to at least think I am on the constant lookout for better tools, because a tool is always imperfect unless it is a simple statement of the answer.  That is, given a particular problem, there are at least small changes possible to apply to the tool to make it even better suited for that particular problem.  Perfection, therefore, is not a valid goal, unless we define it as the vague centerpoint related to a set of problems.

As an example, I use five wildly different programming languages just about every day: bash, awk, C, Python, and JavaScript.  Their domains are so different I do not see it is even possible for a single programming language to be better than each of them in their respective domains.  I can see adding to the set when the type of things I do changes, and replacing any one with a better one at that domain.  (Well, I've already done that a few times.  None of these were my "first" programming language, and I've gotten paid to write code in half a dozen to dozen other programming languages too.)

In the past, C tried to be a full-stack language, catering for everything from the lowest-level libraries to the highest-level abstractions.  That didn't work, so Objective-C and C++ bubbled off it, created by people who used the language to solve particular kinds of problems, using abstraction schemes they thought would make the new programming language a better tool.

Currently, C is mostly used as a systems programming language, for low-level implementation (kernels, firmwares) up to services (daemons in POSIX/Unix parlance) and libraries.  In this domain, bugs related to memory accesses are prominent, and seen as a problem that needs fixing.

Thing is, memory safety is only one facet of issues, and is not a sufficient criterion to be better than C.

Instead of introducing a completely new language, my logic is that since C has proven to be practical but has these faults, fixing memory safety by adding the feature set I described, in a backwards-compatible manner with zero runtime overhead, is likely to yield a better tool than designing a completely new one from scratch.

Essentially, by doing this derivative-language bubble, which simultaneously would mean a standard C library replacement with something else (which is not that big of a deal, considering the C standard explicitly defines the free-standing environment for that case), I claim that typical memory safety issues in C code can be easily and effectively avoided, while requiring relatively little adjustment from C programmers.

The more interesting thing here is to look at why such changes have not been proposed before.  (They might have; I just haven't found any yet.)
Nothing in it is "novel", as it is simply based on the fact that for arrays, C compilers already do effective bounds checking at runtime within a single scope, even for variably modified types.  Variants based on C seem to have simply added new abstractions, rather than delve into fixing C's known deficiencies wrt. code quality and bug type tendencies.

Moreover, any new abstraction or feature brings in its own set of problems.  Always.

An example of this is how initialization of static (global) C++ objects happens in microcontrollers.  Under fully featured OSes using ELF binaries, there is actually a section (.init_array) that contains only initializer function pointers that are called without arguments to initialize objects in the correct order.  (It can be used in C, too, via the GNU constructor function attribute.)  On microcontrollers, the objects tend to be initialized as part of the RAM initialization process, copying or decompressing initial data from Flash/ROM to RAM, but a compiler may still generate similar initializer functions you need to call after initializing the RAM contents, but before the execution of the firmware image begins.  The order in which these initializer functions are called can be extremely important, when an object refers to the state of another object at initialization time.  (I am not sure if it is possible to construct a ring of dependencies that is impossible to implement in practice, although it would be a fun experiment; like proving the C++ template engine is Turing-complete.)

Any new feature will have its risks.  A completely new programming language has an unproven track record, and an unknown set of risks and weaknesses.  I am happy that others are developing the next generation of programming languages, even though I expect almost all of them to fail and lapse into niche use cases.  It is unlikely that I will be using them in true anger (to solve real-world problems others are having) until they have at least a decade of development under their belt, though; they tend to take at least that long to find their "groove", and iron out their backwards-incompatible warts.

Because of the above, I do not believe a new language is likely to replace C anytime soon, but in the meantime we might reap some significant rewards with relatively small backwards-compatible changes to C itself.  This is the key.  Why wait for the moon when you can have a small asteroid now?

The feature that is analogous to the memory-safety issue here is the difference in line input functions in standard C (fgets()) and POSIX.1 (getline()/getdelim()).  The latter can easily deal with even binary data and embedded nuls (\0) in input, and has no inherent line length limitations.  It is also extremely well suited for tokenization in line-based inputs; for CSV and similar formats where record separators can appear in fields as long as they are quoted, you need slightly more complicated functions.  Yet, if you look at POSIX.1 C examples and tutorials, very few if any show how to use getline() or getdelim(), and instead focus on fgets().  Even more so for opendir()/readdir()/closedir() vs. nftw()/scandir()/glob().  Better tools exist in POSIX.1 C, but because one company (Microsoft) rejected it, most tutorials and guides teach the inferior tools.

You could say that my contribution is limited to showing how small changes to how people use C in anger could reduce their bug density, especially for memory-related bugs, in a worthwhile manner.  I do not have the charisma or social skills to make any of that popular, though, which heavily colors my opinion as to what kind of results one can get in the programming-language-as-a-tool arena in general.  To effect a change, you need PR and social manipulation, not new technology.  And definitely not political decrees as to what kind of programming languages developers should use.
 
The following users thanked this post: Siwastaja

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #87 on: March 05, 2024, 06:17:43 pm »
Can be interesting to look at: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust
Can someone shed light at what's happening here? Use-after-free, heap buffer overflows. Wasn't Rust supposed to completely get rid of exactly these types of memory errors? What went wrong?
This is why I wrote against the stupidity of calling something memory safe or type safe. Try to stop one kind of corruption issue, and some new threading, DMA, GPU or other complexity will soon pick up the slack and keep the bug reporters in safe employment.

You're quite right, but you don't go far enough. Everything should be written in assembler.

Personally I prefer to apply my thought and concentration to my unique application, and prefer not to have to (re)do boring stuff that can be done by machines.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #88 on: March 05, 2024, 06:19:35 pm »
Can be interesting to look at: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

Can someone shed light at what's happening here? Use-after-free, heap buffer overflows. Wasn't Rust supposed to completely get rid of exactly these types of memory errors? What went wrong?

Thank dog we have other compilers which are always bug-free and completely implement other languages, as defined by their standard.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #89 on: March 05, 2024, 06:46:10 pm »
In the past, C tried to be a full-stack language, catering for everything from the lowest-level libraries to the highest-level abstractions.  That didn't work, so objective-C and C++ bubbled off it by people who used the language to solve particular kinds of problems, using abstraction schemes they thought would make the new programming language a better tool.

When Objective-C and C++ started in the mid-80s, C was only a systems programming language.
Numerical programming: use Fortran (still better, I'm informed).
Business programming: use COBOL (I decided against that before university)
IDEs: use Smalltalk, which showed the way for the next 15 years!

Quote
Currently, C is mostly used as a systems programming language, for low-level implementation (kernels, firmwares) up to services (daemons in POSIX/Unix parlance) and libraries.  In this domain, bugs related to memory accesses are prominent, and seen as a problem that needs fixing.

The problem was that in the early 80s C faced a choice: to be a systems programming language or to be a general purpose application language. Either would have been practical and reasonable. But in attempting to satisfy both requirements, the compromises and complexity caused it to be bad at both.

Now people have correctly decided it is deficient as a general purpose application language, and abandoned that usage. But it still has a lot of baggage.

Quote
Thing is, memory safety is only one facet of issues, and is not sufficient criterion to be better than C.

In many cases it is sufficient to be regarded as better than C. People have (correctly, IMHO) voted with their feet (or rather, their keyboards).

Quote
Instead of introducing a completely new language, my logic is that since C has proven to be practical, but has these faults, fixing memory safety by adding the feature set I described in a backwards-compatible manner with zero runtime overhead, is likely to yield a better tool than designing a completely new one from scratch.

The stuff added to C to try to bring it out of the 70s is baroquely complex. Better to start afresh with concepts and technology developed and proven since then.

Simplicity is a virtue; KISS.

Quote
Moreover, any new abstraction or feature brings in its own set of problems.  Always.

Agreed.

A well-conceived group of abstractions that work together harmoniously brings far more benefits than problems, and is thus a good tradeoff.

None of that applies to modern C or modern C++.

Quote
Any new feature will have its risks.  A completely new programming language has an unproven track record, and an unknown set of risks and weaknesses.  I am happy that others are developing the next generation of programming languages, even though I expect almost all of them to fail and lapse into niche use cases.  It is unlikely that I will be using them in true anger (to solve real-world problems others are having) until they have at least a decade of development under their belt, though; they tend to take at least that long to find their "groove", and iron out their backwards-incompatible warts.

We agree.

My career has consisted of evaluating languages/technologies - and choosing to ignore them because they are merely "shiny chrome" variations rather than fundamentally different and better.

Rust is getting there, after starting in 2006/2009/2015 depending on your preference.

Quote
Because of the above, I do not believe a new language is likely to replace C anytime soon, but in the meantime, we might reap some significant rewards with relatively small backwards-compatible changes to C itself.  This is the key.  Why wait for the moon, when you can have a small asteroid now in the mean time?

True.

COBOL isn't going away, and the PDP-11 will continue to be used and supported until 2050 at least. https://www.theregister.com/2013/06/19/nuke_plants_to_keep_pdp11_until_2050/

And the B-52 BUFF is still being upgraded.
 
The following users thanked this post: Nominal Animal

Offline baldurn

  • Regular Contributor
  • *
  • Posts: 189
  • Country: dk
Re: Future Software Should Be Memory Safe, says the White House
« Reply #90 on: March 05, 2024, 06:52:07 pm »
Can be interesting to look at: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

Can someone shed light at what's happening here? Use-after-free, heap buffer overflows. Wasn't Rust supposed to completely get rid of exactly these types of memory errors? What went wrong?

The first CVE from that link is:

"CVE-2024-27284   cassandra-rs is a Cassandra (CQL) driver for Rust. Code that attempts to use an item (e.g., a row) returned by an iterator after the iterator has advanced to the next item will be accessing freed memory and experience undefined behaviour. The problem has been fixed in version 3.0.0."

I followed the link which took me to a github pull request that fixes this bug. The freed memory they are talking about is freed by a C driver that is called from the Rust code. Make your own conclusions about C and Rust from that :-)
 
The following users thanked this post: Siwastaja

Online coppice

  • Super Contributor
  • ***
  • Posts: 8812
  • Country: gb
Re: Future Software Should Be Memory Safe, says the White House
« Reply #91 on: March 05, 2024, 06:54:06 pm »
Can be interesting to look at: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust
Can someone shed light at what's happening here? Use-after-free, heap buffer overflows. Wasn't Rust supposed to completely get rid of exactly these types of memory errors? What went wrong?
This is why I wrote against the stupidity of calling something memory safe or type safe. Try to stop one kind of corruption issue, and some new threading, DMA, GPU or other complexity will soon pick up the slack and keep the bug reporters in safe employment.

You're quite right, but you don't go far enough. Everything should be written in assembler.

Personally I prefer to apply my thought and concentration to my unique application, and prefer not to have to (re)do boring stuff that can be done by machines.
You give some very weird replies that seem to miss the point entirely.
 
The following users thanked this post: Siwastaja

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
Re: Future Software Should Be Memory Safe, says the White House
« Reply #92 on: March 05, 2024, 08:50:44 pm »
One of the patterns I like to use in C is
    type *p = NULL;
and after possible dynamic allocation and resizing,
    free(p);
    p = NULL;
The key point is that free(NULL) is safe, and does nothing.

This does not fix the double-free or use-after-free cases that occur because the same thing is used in different threads with insufficient locking, or because a function uses a temporary copy of p while the original gets destroyed, but it does expose the common use-after-free cases and defuses double-free bugs using the original pointer.

Yet, for some reason, most C programmers see the initial NULL assignment and the final p = NULL; as superfluous/ugly/unstylish, even though it is just defensive programming, and costs minimal machine code.  (Their cost will be lost within optimizer noise.)

(I'd prefer free() to always return (void *)0, so one could do p = free(p);, which may look odd initially to some, but tends to generate quite sensible code on many architectures, and I feel could easily become a habit.  I don't like freep(&p);, because it hints at timing promises it cannot provide; the implicit "p is freed before it is NULLed" is useful to me as a pattern behaviour reminder.)

Similarly, my grow-as-needed pattern tends to be initialized to explicit "none" via
    type  *data = NULL;
    size_t size = 0;
    size_t used = 0;
with additional room allocated via
    if (used + need > size) {
        size_t  new_size = allocation_policy(used + need);
        void  *new_data = realloc(data, new_size * sizeof data[0]);
        if (!new_data) {
            // Failure, but data is still valid.
            free(data); data = NULL; used = 0; size = 0;
            return error;
        }
        data = new_data; // *dataptr = new_data;
        size = new_size; // *sizeptr = new_size;
    }
If data and size are aliases of pointer to same supplied by the caller (as in e.g. getline()), they're assigned initially and updated after each reallocation, otherwise all accesses are via data, size, and used.  (I omitted the overflow checks for used+need, allocation_policy() and new_size*sizeof data[0], for simplicity.)

This is pretty much bullet-proof memory access safety-wise.  I often use it to read data from potentially large data sets, in chunks up to 2 MiB or so (configurable at compile time), with each additional read reading up to (size-used-n) bytes, to data+used, where n is the number of additional trailing bytes needed when processing the input, and need > n, need <= configurable_chunk_size_limit.  For now, this balances the number of syscalls used and the overhead in setting up or updating the virtual memory mapping for the file contents.
As I use pipes extremely often to pass data to/from my programs, I don't use memory mapping unless I know the target/source is an actual file.

One detail to realize is that if nothing is added to the array, it may not be allocated at all.  I often avoid this by having a final optimizing realloc whenever used+n>size or (size>used && (size-used-n)*sizeof data[0]>limit), i.e. whenever needed or more than limit bytes would be wasted.  (Most hosted C libraries will only return allocated memory back to the OS if it was large enough originally.)

Yet, this pattern seems surprising to many C programmers, because they are not aware that realloc(NULL,N) is exactly equivalent to malloc(N).  In many cases, they believe the initial allocation must be done using malloc(), which tends to complicate the code.

For my own dynamically allocated and passed structures, they often have a data-ish C99 flexible array member,
    struct typename {
        // whatever elements I need, plus then
        size_t  size;
        size_t  used;
        type    data[];
    };
where size is the number of elements allocated for the flexible data array, and used is the number of initial elements currently in use there.

These are the tools I use to write "memory-safe" code in C.  It is not perfect über-skillz stuff.  It is just a set of sensible patterns, and a healthy dose of suspicion against any assumption on what a given parameter or variable value might be.  I like to use data aliasing to my advantage, so tend to check for it at run time when it matters.  I can see how some find this lot of effort, if they've not learned the defensive approaches from the get go.
My biggest peeve on that front is the ubiquitous "We'll add error checking later, when we have more time", which is just a complicated way of saying "We're not gonna bother", because robustness is not something you add on top, it is something you either design in, or don't have.

Fact is, practical defensive patterns like these are rare in C code.  They could be used, and if used they would reduce the number of memory-related bugs, but they aren't.  I'm very comfortable with them, and don't think they take any more "effort" than any other approach.  The reason these are not used is not technical, just social/cultural/habit.

I don't know how to change programmers' habits.  Examples only sway those who are already looking for better tools, and they'd likely have found all these on their own given enough time.  :-[
« Last Edit: March 05, 2024, 08:53:49 pm by Nominal Animal »
 

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 14665
  • Country: fr
Re: Future Software Should Be Memory Safe, says the White House
« Reply #93 on: March 05, 2024, 10:01:38 pm »
To elaborate on that, I've personally not used standard memory allocation functions in C *directly* in ages.
I developed, a long time ago already, a set of macros that do pretty much what you describe above in a streamlined way.
Too bad for macro haters, this has worked wonderfully well as far as I'm concerned for many, many years.

I have also written my own allocators, that I use in some specific cases as a replacement for the standard ones (and I do that more and more these days). They are not general-purpose, global allocators such as malloc(), and thus require more thought when using them, about lifetime in particular.
 

Offline Wil_Bloodworth

  • Regular Contributor
  • *
  • Posts: 198
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #94 on: March 05, 2024, 11:20:56 pm »
My biggest peeve on that front is the ubiquitous "We'll add error checking later, when we have more time", which is just a complicated way of saying "We're not gonna bother", because robustness is not something you add on top, it is something you either design in, or don't have.
Truth!

I don't know how to change programmers' habits.  Examples only sway those who are already looking for better tools, and they'd likely have found all these on their own given enough time.  :-[
I have run into this issue as well and have threatened the team with required pull requests if the laziness continues. Our team knows better... they're just lazy.  The beatings will continue until morale improves! LOL

- Wil
 

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 14665
  • Country: fr
Re: Future Software Should Be Memory Safe, says the White House
« Reply #95 on: March 05, 2024, 11:34:46 pm »
Laziness well applied can be a virtue in engineering. It's what drives you to elaborate architectures to make further development much easier after that initial effort, similarly to factor code so that you'll avoid a lot of repetitions and tedious coding after that. It's also what pushes leaner designs, rather than overbloated ones.

I think the whole point is in understanding that this initial effort is required to enjoy your laziness in the longer run. And so, IMO the main problem is not with software developers being lazy per se, but the need for immediate reward, preventing them from investing this initial effort to make their life much easier afterwards.

This appeal for immediate rewards is what plagues software engineering in particular, and our whole society in general.
 
The following users thanked this post: nctnico, Siwastaja, newbrain, Nominal Animal, DiTBho

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #96 on: March 06, 2024, 08:53:58 am »
Laziness well applied can be a virtue in engineering. It's what drives you to elaborate architectures to make further development much easier after that initial effort, similarly to factor code so that you'll avoid a lot of repetitions and tedious coding after that. It's also what pushes leaner designs, rather than overbloated ones.

I think the whole point is in understanding that this initial effort is required to enjoy your laziness in the longer run. And so, IMO the main problem is not with software developers being lazy per se, but the need for immediate reward, preventing them from investing this initial effort to make their life much easier afterwards.

This appeal for immediate rewards is what plagues software engineering in particular, and our whole society in general.

Just so, but don't forget to add "show me the reward structure and I'll tell you how people will behave".

I know my implementation is going well when the number of lines of code reduces. Has the bonus of confusing the hell out of idiot managers who measure productivity by the number of lines of code :)

Luckily I managed to avoid that all but once, and that one place was an unpleasant place to work, with code strategies that make people laugh in disbelief!
 

Offline Wil_Bloodworth

  • Regular Contributor
  • *
  • Posts: 198
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #97 on: March 06, 2024, 05:01:03 pm »
Has the bonus of confusing the hell out of idiot managers who measure productivity by the number of lines of code :)
Luckily, we are seeing the classical hierarchical employment structure dissolving away as companies have realized that people who are not producing value have no place in their company. Managers who generally do nothing but bark at people are [thankfully] becoming a scarcity these days; at least in the environments I have seen.

"Scrum style" work places, where the entire team meets [called the "standup"] for 15 minutes each day at the same time and place and goes around the circle saying: "Yesterday, I worked on X. Today, I am working on Y. I have 0..N blocks."... very quickly make it very obvious who is not contributing value to the team and thus, the company.  Is there a place for managers? Absolutely.  Do I think most of them sit on their butt most of the day and do nothing... also yes.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #98 on: March 06, 2024, 05:29:30 pm »
"Scrum style" work places where the entire team meets [called the "standup"] for 15 minutes each day at the same time and place and go around the circle saying this, "Yesterday, I worked on X. Today, I am working on Y. I have 0..N blocks."... very quickly makes it very obvious who is not contributing value to the team and thus, the company.  Is there a place for managers? Absolutely.  Do I think most of them sit on their butt most of the day and do nothing... also yes.

That too can be a problem.

It is fine for a boring project, by which I mean one where it is obvious how to do it because you have done something very similar before. In such projects you can just steam ahead throwing lots of little bits of functionality together, in the knowledge they will all work together as expected. CRUD projects are classics (create read update delete).

It fails for interesting projects where any of these apply:
  • you are inventing something
  • you are finding a path through new territory, using new concepts
  • it is reasonable to expect that earlier work will have to be undone, as requirements/benefits become apparent
  • "thinking before doing" is more productive than "doing and finding it didn't work", a.k.a. "no time to do it right in the first place but always time to do it over"

I've made my luck: most of my projects had at least one of those characteristics :)
 
The following users thanked this post: bpiphany

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27207
  • Country: nl
    • NCT Developments
Re: Future Software Should Be Memory Safe, says the White House
« Reply #99 on: March 06, 2024, 05:54:50 pm »
"thinking before doing" is more productive than "doing and finding it didn't work", a.k.a. "no time to do it right in the first place but always time to do it over"
This typically ends up as: "no time to do it right in the first place and NO time to do it over"

On some projects I have consulted on, I had to put up quite a fight to convince management that the only way forward was to take a few steps back and do the design properly, to avoid problems piling up so high that the company goes under. Things get very ugly when people have tried to design hardware scrum style..  :scared:
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
Re: Future Software Should Be Memory Safe, says the White House
« Reply #100 on: March 06, 2024, 06:17:03 pm »
Just so, but don't forget to add "show me the reward structure and I'll tell you how people will behave".
Hey, full circle: that is exactly why I said memory safety is a social problem, not a technical one.
 

Offline Wil_Bloodworth

  • Regular Contributor
  • *
  • Posts: 198
  • Country: us
Re: Future Software Should Be Memory Safe, says the White House
« Reply #101 on: March 06, 2024, 06:30:54 pm »
That too can be a problem.  It fails for interesting projects where any of these apply...
Eh... I'm not convinced that reporting what you did yesterday and what you're going to do today has anything at all to do with the work or the context in which you're doing it.  Transparency is transparency regardless of any other factors.

But maybe I'm misunderstanding the point you're making.  My point was that "standup" weeds out laziness.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8243
  • Country: fi
Re: Future Software Should Be Memory Safe, says the White House
« Reply #102 on: March 06, 2024, 07:41:58 pm »
I think the whole point is in understanding that this initial effort is required to enjoy your laziness in the longer run. And so, IMO the main problem is not with software developers being lazy per se, but the need for immediate reward, preventing them from investing this initial effort to make their life much easier afterwards.

This appeal for immediate rewards is what plagues software engineering in particular, and our whole society in general.

In embedded, I have developed this pattern which seems to serve me well:

The first prototype must be over-crappy, beyond any salvage. I mean a single long function, with a while(1) loop calling blocking delay functions and writing some stuff on the debug UART, no datatypes, no structure, copypasta if necessary. Bonus points for goto. The proof of concept can be demonstrated within hours, and then it is (hopefully) obvious to both managers and programmers that this piece of code cannot be used at all for the actual implementation. But because you have demonstrated viability, the team including management cannot pull out of the project, so you get at least some time and resources to actually implement it, maybe 3-4 days for a simple module before anyone starts asking questions.

And, because you tested some ideas, the structure is now starting to form in your head. It's best to do such initial tests near the end of the week so that your brain gets at least a few days of subconscious processing time before real implementation begins.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #103 on: March 06, 2024, 08:10:30 pm »
That too can be a problem.  It fails for interesting projects where any of these apply...
Eh... I'm not convinced that ever reporting what you did yesterday and reporting what you're going to do today has anything at all to do with the work or the context in which you're doing it.  Transparency is transparency regardless of any other factors.

But maybe I'm misunderstanding the point you're making.  My point was that "standup" weeds out laziness.

Let me use the old aphorism as a hint: "research is what I am doing when I don't know what I am doing".

For the interesting projects I have worked on, only a very small proportion of the time has been spent writing code. Most of it has been spent on deciding what to do and how to do it, then noticing a big hole in those thoughts, and starting again. Eventually start coding, discover the tool has been oversold, start again. After a few repetitions, find a good path, then realise it is over-complicated, so chop half the code.

None of that occurs in boring CRUD projects.

None of that is captured or capturable in a daily standup.

The other obvious point is that I have never worked with lazy people! Good companies weed those out during interview :)
« Last Edit: March 06, 2024, 08:23:41 pm by tggzzz »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19802
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Future Software Should Be Memory Safe, says the White House
« Reply #104 on: March 06, 2024, 08:18:27 pm »
I think the whole point is in understanding that this initial effort is required to enjoy your laziness in the longer run. And so, IMO the main problem is not with software developers being lazy per se, but the need for immediate reward, preventing them from investing this initial effort to make their life much easier afterwards.

This appeal for immediate rewards is what plagues software engineering in particular, and our whole society in general.

In embedded, I have developed this pattern which seems to serve me well:

The first prototype must be over-crappy, beyond any salvage. I mean a single long function, with a while(1) loop calling blocking delay functions and writing some stuff on the debug UART, no datatypes, no structure, copypasta if necessary. Bonus points for goto. The proof of concept can be demonstrated within hours, and then it is (hopefully) obvious to both managers and programmers that this piece of code cannot be used at all for the actual implementation. But because you have demonstrated viability, the team including management cannot pull out of the project, so you get at least some time and resources to actually implement it, maybe 3-4 days for a simple module before anyone starts asking questions.

And, because you tested some ideas, the structure is now starting to form in your head. It's best to do such initial tests near the end of the week so that your brain gets at least a few days of subconscious processing time before real implementation begins.

Yup, pretty much the case.

Fails when people don't understand the concept of "throwaway concept code", and want to put it in a version control system and start mutating it line by line. Been there, got the scars :(

If there is a GUI involved, one imperfect defence is to use the "napkin look and feel". The GUI is functional, but Very Clearly Not Deliverable.


 

