http://menuetos.net/ (http://menuetos.net/) - Pure Assembly
Anyone who can't cut it in assembly and C should stay away from any OS-related stuff.
any publicly available source? ;D
TOPS-10 - BLISS - https://en.wikipedia.org/wiki/TOPS-10 (https://en.wikipedia.org/wiki/TOPS-10)
VMS might be a more interesting example from DEC. It started out mostly in BLISS, and migrated to C over time, with a few big step changes in the C content. Interestingly, BLISS continued for so long that after Compaq bought DEC they produced a BLISS compiler for IA64. This seems to indicate that they never got VMS entirely free of BLISS code.
I think a large chunk of the first Mac OS versions was written in Pascal. Ditto for Lisa OS, which Mac OS was partly based on.
That's my, rather vague and dim, recollection too.
One of the big selling points of Rust is that it is usable for the sort of low-level systems programming that C and C++ are used for.
MESA was a nice language and had much in common with the language I used for my 3rd year project... Concurrent CLU.
Now regarding C... please correct me if I'm wrong, but I think C was actually not that popular outside of the Unix world in the 80's.
By the early 80s C was a big deal all over the computing world.
There were not many OSes written in C + assembly, only the ones that actually matter and passed the test of time. The rest went down as academic masturbation, hyper-specialized NIH stuff, and forgotten attempts of enthusiasts, forever remaining at version 0.0.1alpha0 on a forgotten '90s-style web page or repository.
Rust will go like all the rest of the over-engineered cretinic experiments, only to be eventually used in some kind of hypervisor or virtualization, where the heavy lifting will be done by C or assembly stuff, but that will not be visible to the mediocre programmer, who will happily announce: "Duuude, I've written a user-mode driver in Python... Yeah dude, I wrote a scheduler in Rust, this OS stuff is so simple...".
Sometimes low-level and difficult is just that, not reducible to application-level stuff, and trying to sugarcoat it with automated garbage collectors and object paradigms, glorified interpreters of pseudocode, and other crutches for mediocre programmers will always fail.
Do you know anything at all about Rust or are you just blathering on ignorantly and arrogantly like a senile old man?
Are you blathering on ignorantly and arrogantly like a young idiot without the experience of seeing endless repeating cycles of the Next Big Thing that fades away? There's always that one wonderful little thing they do, that means the world is going to see the brilliance of the Next Big Thing and flock to it. Every once in a long while people actually do. Not often, though. Maybe Rust will hit the big time, but the odds are heavily against it unless one or more massive long-term systems that are needed all over the place are written in it.
Whilst Rust certainly has a big hill to climb in order to avoid being just another Next Big Thing C/C++ killer that fades away, it does have the advantage that it can actually be used for all the things C/C++ can. It also seems to be getting traction in Linux - Rust code in Linux kernel looks more likely as language team lead promises support (https://www.theregister.com/2020/07/13/rust_code_in_linux_kernel/).
OCaml
Do you think it is also worth learning this language in 2021?
I think it's a language you might want to learn, not so much to be able to use it but to see a rather different take on computer languages. There are a few languages worth learning just to broaden your understanding of what a computer language can be like. Everyone should learn Snobol :)
I am so tempted, although at the moment my priority is Rust.
I enjoyed the above post with a link to a real OS written in Rust!
I am the kind of person who learns better by example :D
I would say, let's tone it down with the name calling and personal attacks; call out the tools (even the concepts), not the people. Insulting the discussion partners does not make anyone's point stronger and does not make one seem smarter, au contraire.
This being said, I will leave this topic forever and leave you in the capable hands of the soon-to-be cRUSTacean overlords.
DC1MC
... There are a few languages worth learning just to broaden your understanding of what a computer language can be like. Everyone should learn Snobol :)
Well, I'd list LISP as one of the interesting things to learn, but I'd put it in brackets.
But Lisp may be taking things too far. :)
It also seems to be getting traction in Linux - Rust code in Linux kernel looks more likely as language team lead promises support (https://www.theregister.com/2020/07/13/rust_code_in_linux_kernel/).
The Reg got it backwards - for this to happen, Rust would have to gain acceptance of Linux developers, not Linux gain acceptance of Rust developers :-DD
Do you think it is also worth learning this language in 2021?
Not necessarily, it will also be okay if you learn Standard ML instead.
Has anyone seen, and can report here, a link to an OS not written in C/C++/ObjC?
but the Rust language is evolving fast, and because of this certain compatibility issues can arise, despite efforts to ensure forwards compatibility wherever possible.
At no point do I understand that Linus is going to consider replacing C with Rust for the Linux kernel, neither tomorrow nor maybe even in 10 years. All we can conclude from what he said, I guess, is that at least he doesn't reject it violently as he has always done with C++.
As I understand it, the rejection is logical: unlike C's freestanding environment (which Linux relies on), C++'s still requires a runtime for <new> and <exception>, and that makes it a no-go, because the Linux kernel would have to be fundamentally redesigned to run on such a runtime. Also, C++'s freestanding environment leaves too many things implementation-defined, which means kernel developers would have to beg the compiler developers to provide sane implementation-defined behaviour, and except for the last few years, dealing with GCC developers has been quite a headache for the kernel developers.
Re: VMS and BLISS
As I understand it, there may not be much BLISS in the VMS kernel itself, but a large part of the code surrounding the kernel, which makes it a complete operating system, was originally in BLISS.
My recollection was that VAX/VMS and PDP-11/RSX-11 were both written primarily in assembler. The VAX filesystem (FILES-11), however, had BLISS components at least.
I took the VAX/VMS internals courses in the 80's and wrote kernel modules for a few projects. (Digital alum 1983 - the acquisition of my division by Intel). Prior to Digital I wrote an I/O driver for RSX. I don't ever recall seeing any BLISS modules as part of the VAX VMS kernel prior to V5.0 -- perhaps BLISS was introduced after that.
BLISS-11 was used to build the StarOS operating system for CMU's Cm*.
but the Rust language is evolving fast, and because of this certain compatibility issues can arise, despite efforts to ensure forwards compatibility wherever possible.
This is a *major* issue indeed. This was one of the major things holding C++ back in the 90s. There was a period when almost every compiler update you received broke something in your code. If not for that, C++ compilers might well have displaced C compilers, with some of us writing like we were still using C, and some of us writing with the full features of C++.
TOPS-10 - BLISS - https://en.wikipedia.org/wiki/TOPS-10 (https://en.wikipedia.org/wiki/TOPS-10)
I'm pretty sure that TOPS-10 was written mostly in Macro-10, the PDP-10's assembly language. Its origins pre-date BLISS, and TOPS-20 (which is newer than TOPS-10) was pretty much all Macro-10.
I think a large chunk of the first Mac OS versions was written in Pascal. Ditto for Lisa OS, which Mac OS was partly based on.
Correct. The Lisa software and classic Mac OS (originally known simply as the Mac System Software) were originally written in Pascal. However, the Mac's smaller ROM and RAM forced them to rewrite much of it in 68K assembler simply to make it fit. (In early Macs, there was about as much OS in ROM as on disk.)
The Macintosh used the same Motorola 68000 microprocessor as its predecessor, the Lisa, and we wanted to leverage as much code written for Lisa as we could. But most of the Lisa code was written in the Pascal programming language. Since the Macintosh had much tighter memory constraints, we needed to write most of our system-oriented code in the most efficient way possible, using the native language of the processor, 68000 assembly language. Even so, we could still use Lisa code by hand translating the Pascal into assembly language.
We directly incorporated Quickdraw, Bill Atkinson's amazing bit-mapped graphics package, since it was already written mostly in assembly language. We also used the Lisa window and menu managers, which we recoded in assembly language from Bill's original Pascal, reducing the code size by a factor of two or so. Bill's lovely Pascal code was a model of clarity, so that was relatively easy to accomplish.
The Mac lacked the memory mapping hardware prevalent in larger systems, so we needed a way to relocate memory in software to minimize fragmentation as blocks got allocated and freed. The Lisa word processor team had developed a memory manager with relocatable blocks, accessing memory blocks indirectly through "handles", so the blocks could be moved as necessary to reduce fragmentation. We decided to use it for the Macintosh, again by recoding it from Pascal to assembly language.
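The handle trick is easy to show in C. Below is a minimal, hypothetical sketch (the real Memory Manager was 68K assembly; every name here is illustrative, not Apple's API): because clients hold a pointer to a master pointer, the manager can move a block and update a single word without breaking anyone.
Code:
#include <stdlib.h>
#include <string.h>

typedef void **Handle;              /* a pointer to the master pointer */

#define MAX_BLOCKS 128
static void *master[MAX_BLOCKS];    /* master pointer table */
static size_t block_size[MAX_BLOCKS];

/* Allocate a relocatable block; the caller keeps only the Handle. */
Handle NewHandleSketch(size_t size) {
    for (int i = 0; i < MAX_BLOCKS; i++) {
        if (master[i] == NULL) {
            master[i] = malloc(size);
            if (master[i] == NULL) return NULL;
            block_size[i] = size;
            return &master[i];
        }
    }
    return NULL;
}

/* Compaction can move the storage; only the master pointer changes,
   so every outstanding Handle automatically "follows" the block. */
void MoveBlockSketch(Handle h) {
    size_t size = block_size[h - master];  /* index back into the table */
    void *fresh = malloc(size);
    if (fresh == NULL) return;
    memcpy(fresh, *h, size);
    free(*h);
    *h = fresh;
}
Client code has to dereference twice to reach the bytes, which is why classic Mac programming was full of HLock/HUnlock calls around any stretch of code that held onto a bare pointer.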
One can't write an OS in garbage-collected languages, maybe some kind of didactic student implementation running on a virtual machine. Get rid of C/C++ and assembly and there's no OS, end of story.
I vaguely remember parts of AmigaOS were written in BCPL. C is not that old. I figure most OSes before Unix were written in something other than C.
OSes before Unix were written in assembly language! It was absolutely heretical to think that an OS could be written primarily in a high level language -- not to mention a portable one.
Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no software, not even at the lowest level of the operating system, being written directly in machine language or assembly language; the MCP was the first OS to be written entirely in a high-level language - ESPOL, a dialect of ALGOL 60 - although ESPOL had specialized statements for each "syllable" in the B5000 instruction set. MCP also introduced many other ground-breaking innovations, such as being one of the first commercial implementations of virtual memory. The rewrite of MCP for the B6500 is still in use today in the Unisys ClearPath/MCP line of computers.
*Cough* Multics. *Cough*
TRIPOS (https://en.wikipedia.org/wiki/TRIPOS) was largely written in BCPL with a small bit of assembler in the kernel. Despite what that Wikipedia article says, device drivers were written in BCPL (all the ones I worked on were at least). Although, since BCPL is a predecessor of C it might not count.
Not true, provided you are sufficiently imaginative and clever!
I vaguely remember parts of AmigaOS were written in BCPL.
The AmigaDOS part of AmigaOS was the corresponding bits of TRIPOS, which was written in BCPL, ported to use the AmigaOS kernel. It was replaced in later versions with C code. See the Wikipedia page on AmigaOS for more details (https://en.wikipedia.org/wiki/AmigaOS#AmigaDOS).
Do you have any reference to back up that claim?
There have been only two interestingly innovative architectures recently: the Mill and xCORE.
In my opinion, that's because modern projects (soft cores and research alike) don't have dreams, they have commercial goals ...
>:D >:D >:D >:D >:D >:D >:D
And in the end, all the above examples proved to be mostly of academic and research interest (even there, just briefly) and abject commercial failures, which is why they are now a footnote in OS history.
Of course, the people experimenting with this stuff in the '70s had the excuse that it had not been tried before, but really, "Language X" machines?!
What if I want to run a "Language Y" program on the highly optimized "Language X" machine ?
They're probably still wondering how come nobody was interested in such a perfect Lisp-optimized machine that, besides an ultra-narrow application in academia, is fully useless for anything else.
I mean, because CPU complexity keeps growing, along with cheap RAM and lots of cores, it is conceivable that there will be a general-purpose OS written in some higher-level language in the future, other than a combination of C and assembly, to hold the hand of feeble-minded incompetents... oh sorry, that sounded too harsh, to "increase security and reliability" and "reduce the development costs and time to market". But the lamest way to waste money is to produce language-specific optimized CPUs and platforms. I don't see many Jazelle-running CPUs lately, do you?
OK, a new attempt to write a "scheduler" in assembly is planned for this weekend.
I can do it
I can do it
I can do it
Can I? :o :o :o
Not going to waste more time here.
Not going to waste more time here.
Agreed.
The smell of terminal cluelessness or trolling is overwhelming.
C is now a major impediment to improvement just like the 80x86 ISA.
I'm not clear what you mean by that. Can you give examples?
There have been only two interestingly innovative architectures recently: the Mill and xCORE.
The Mill is an interesting CPU architecture. The interesting thing about xCORE is the way the CPUs cooperate, rather than anything about the architecture of the CPUs.
xCORE - (almost) dead, but was briefly living and still twitching; still nobody gave a rat's behind about it, and then it sank into the bucket of bad ideas. No next Atmel or STM anytime soon.
The xCORE dominates in some niches, like professional audio. Whether that is enough to keep the business alive long enough to broaden its appeal is the key question.
*Cough* Multics. *Cough*
Multics is not a good counter-example to his claim. Parts of it were written in PL/I but a large part was in assembly.
...
There have been only two interestingly innovative architectures recently: the Mill and xCORE.
The Mill - dead as a doornail before ever being alive; ultra-proprietary patent troll stuff.
xCORE - (almost) dead, but was briefly living and still twitching; still nobody gave a rat's behind about it, and then it sank into the bucket of bad ideas. No next Atmel or STM anytime soon.
So much for the innovation; anybody with a bit of brain can imagine a CPU architecture (see the OpenCores CPU section), but if it does not offer massive advantages over the existing stuff, it will be just an exercise in spending venture or EU grant money.
IMHO, the advancements in code analyzers and simulators will make the existing stuff secure enough (fast it already is) without the need to introduce strange cumbersome concepts, provided of course the HW guys stop cutting corners in an attempt to squeeze out the last bit of performance.
OS security is actually pretty awful. Any attempt to put a safety net under clueless programmers who drop the ball every other hour is IMHO laudable. If that turns out to be an OS written in whitespace, so be it.
Clueless programmers should stick to "apps", and have no need for, or use in, OS programming. Regarding the "safety nets": who checks the safety of the safety net? We definitely need a TÜV for OSes.
As a matter of fact, TÜV does security certification to Common Criteria. But anyone can have that, if the threat model is lean enough.
Regarding "clueless programmers should stick to apps": that's just elitist crap. Rockstars are rare, supply never meets demand, and if a venture relies only on them it is doomed to fail. But if your framework allows you to utilize the 'waterboys' instead of packing all the mundane tasks on your few quarterbacks, you will definitely have a competitive edge over a team of just Allstars.
C is now a major impediment to improvement just like the 80x86 ISA.
People have loved writing "C is bad" articles since C first appeared, but most don't really hold water. I'm not clear what you mean by that. Can you give examples?
Many other people have done that far better than I could. I liked C in the early 80s, but by the time the mid 90s had arrived it was clear that C and C++ were problems rather than answers.
I well remember the endless (over a year) debate about whether it should be possible/impossible to "cast away constness". There are good reasons for both, but they are obviously mutually incompatible.
Tinkering with constness is more about the use of ROM than the CPU architecture.
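A short sketch of why both camps had a point: the cast itself must be accepted by the compiler, but if the linker has placed the object in ROM (or a read-only section) the store can never work.
Code:
static const int calibration[3] = {10, 20, 30};  /* may end up in ROM / .rodata */

void tweak(void) {
    int *p = (int *)calibration;  /* "casting away constness": compiles fine */
    p[0] = 42;                    /* undefined behaviour: faults if the table
                                     is in ROM/.rodata, silently "works" in RAM */
}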
There's the C++ FQA, of course.
C++ doesn't count. It's so complex and quirky, it certainly does tie people's hands.
Amusingly, until recently it wasn't even possible to write an OS in C. I know it is ridiculous, but as late as 2004 Hans Boehm had to point that out to many people in "Threads Cannot be Implemented as a Library" http://www.hpl.hp.com/techreports/2004/HPL-2004-209.html (http://www.hpl.hp.com/techreports/2004/HPL-2004-209.html)
I think Hans Boehm is also someone who wrote an excellent article about how poorly understood the memory models of various x86 cores were (even by their own developers), and how this shot holes in most threading systems. I don't see how this makes running C code an impediment to progress.
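The flavour of Boehm's argument fits in a few lines; this is a hedged paraphrase of the kind of example the paper discusses, not code taken from it:
Code:
int done = 0;   /* set to 1 by another thread, but C99 cannot express that */

void wait_for_done(void) {
    /* A thread-unaware compiler may legally read 'done' once, keep it in
       a register, and turn this into an infinite loop: the transformation
       is correct for the single-threaded semantics C99 defines. C11's
       _Atomic types and memory model are what finally made this
       expressible inside the language rather than in a library. */
    while (!done)
        ;
}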
The Mill is an interesting CPU architecture. The interesting thing about xCORE is the way the CPUs cooperate, rather than anything about the architecture of the CPUs.
Sure, but you could do that with x86 or ARM cores. There is nothing particular about the xCORE cores that makes them special. I've encountered many people trying to tie their favourite ISA to an interesting scheme that had almost nothing to do with the ISA.
A key point is multiprocessor/multiprocessing operation - but many processors have that. It is easy to make very parallel processors, but difficult to program them to make use of the parallelism.
The unique point is the tight integration of hardware capabilities with software capabilities such that together they are better than the sum.
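For a feel of the difference, here is a hedged, portable C11 approximation of a channel (assuming a libc that ships <threads.h>): on xCORE the send/receive is a hardware instruction and a waiting thread is parked by the hardware scheduler for free, while this software stand-in burns a core spinning.
Code:
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

/* A one-slot "channel": a crude software stand-in for a hardware channel. */
static atomic_int full = 0;
static int slot;

static int producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= 3; i++) {
        while (atomic_load(&full))      /* spin until the slot is empty */
            ;
        slot = i * 10;
        atomic_store(&full, 1);         /* publish the value */
    }
    return 0;
}

static int consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) {
        while (!atomic_load(&full))     /* spin until the slot is filled */
            ;
        printf("received %d\n", slot);
        atomic_store(&full, 0);         /* free the slot */
    }
    return 0;
}

int main(void) {
    thrd_t p, c;
    thrd_create(&p, producer, NULL);
    thrd_create(&c, consumer, NULL);
    thrd_join(p, NULL);
    thrd_join(c, NULL);
    return 0;
}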
I use a subset of C++ in many embedded projects. I feel the way I use it could be good within the Linux kernel. Always thought there should be a "C+" specification that would be the stripped-down functionality for this scenario (no stdlib, exceptions, etc).
One of the things I really missed when using C after languages like CORAL66 was the ability to bundle groups of functions into their own module. That was already becoming a common feature of languages in the ALGOL sphere when C was developed. It's a pity some basic management tools like this never made their way into the C spec.
I don't think I'm going to rush into Rust
Code:
-rwxr-xr-x 1 rstofer rstofer 8304 Apr 16 12:49 hello
-rw-r--r-- 1 rstofer rstofer 71 Apr 16 12:48 hello.c
-rwxr-xr-x 1 rstofer rstofer 3343520 Apr 16 12:47 main
-rw-r--r-- 1 rstofer rstofer 46 Apr 16 12:47 main.rs
Look at the size of the 'main' executable compared to 'hello'.
I'll concede I know absolutely nothing about Rust but I got the example here:
https://doc.rust-lang.org/book/ch01-02-hello-world.html
I don't think I'm going to rush into Rust
Wise decision; neither am I!
But microbenchmarks - which might be invalidated tomorrow - are not a good basis for avoiding any language.
But even this simple test can serve to dissuade a programmer from trying a language that, at best, is a poor substitute for C. C has been around a while; I expect it to join Fortran and COBOL as one of the long-lasting languages.
I just followed the directions using Ubuntu under Win 10. I did no other research.
$ rustc main.rs
$ du -h main
336K main
$ rustc -C prefer-dynamic main.rs
$ du -h main
20K main
$ gcc -o main main.c
$ du -h main
16K main
$ gcc -static -o main main.c
$ du -h main
952K main
I used these instructions for installation (Section 1.1)
https://doc.rust-lang.org/book/ch01-01-installation.html
and this HelloWorld example (Section 1.2)
https://doc.rust-lang.org/book/ch01-02-hello-world.html
I didn't pursue the topic beyond looking at the size of the executable.
The standard library is linked statically by default, though your binary is still an order of magnitude larger than mine for some reason. If you do dynamic linking, the size is comparable to the default with C.
Just to be clear, by 'static' linking you mean shoving library code in the executable, and by 'dynamic' you mean having the library code loaded at run-time? If so, that's a bit disingenuous since the executing application still needs the same linked-in code, just that with dynamic linking you don't count it. Put all the necessary stuff in one place and see what the size is - that's what it takes to run it.
And, in fact, all other things being equal, running a single executable with dynamic linking leads to more bloat, since every library function will be required regardless of whether it's used or not. With static linking only the bits actually used get linked in (or should be).
Try reading what I wrote again without the bad attitude. Hans Boehm is an important figure who pointed out a lot of sloppy thinking related to how poorly understood hardware memory models interact with threading and multi-CPU environments. I don't see how this makes running C code an impediment to progress. Single-threaded C code has no real issues. Multi-threaded C code certainly inhibits change, but so does every other kind of multi-threaded code. They are ALL incompatible with change.
Strawman question!
After this project, I'd like to play with a simple OS not written in C/C++.
I mean, I'd like to compile (modify?) some good piece of code, upload it to a board (68k? modern STM32?), and play with it.
Ada? Pascal? Modula2? Oberon? Assembly? All welcome :D
With dynamic linking, every program running on the system at the same time can use the same physical RAM copy of the dynamic libraries.
Right. Most big things are big because of the working data, not the code. I have libraries of a couple of hundred k that take up gigabytes on running systems, because so many instances are used, and each instance has a lot of working data.
Only the code and constant data part. All the rest is not shared.
Non-constant data (variables) can be a big part of a library.
So, in practice, the memory-savings by using dynamic linking is not as big as hoped for.
Other than assembly language, those are all isomorphic to C/C++, certainly the GNU version if not the standard (e.g. with nested functions possible). They differ only in surface syntax, the standard library, and things such as how visibility of names is controlled. Generated code is identical. The same goes for Lisp.
I'm not sure the original intent of dynamic link libraries was mostly about memory saving. The ability to swap out one bug-fixed library, and fix every app using it, was an important driving force. That's more an area where the hype didn't work out well, as we fell into a DLL hell that the industry has taken a long time to climb out of.
Really? You can get a buffer overflow in Ada using the normal cliché programming style? Or access/mutate aCamel as if it were aHorse?
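(For contrast, a hedged sketch of exactly that in C: the reinterpretation is accepted without a single diagnostic, where Ada's type system rejects the equivalent at compile time.)
Code:
struct Camel { int humps; };
struct Horse { int legs;  };

int main(void) {
    struct Camel aCamel = { 2 };
    struct Horse *aHorse = (struct Horse *)&aCamel;  /* compiles cleanly */
    aHorse->legs = 4;     /* mutates aCamel.humps: undefined behaviour under
                             strict aliasing, yet no compiler complaint */
    return aCamel.humps;  /* typically returns 4: the humps were overwritten */
}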
You don't get buffer overflows in properly-written C (or especially C++) code.
What problems there are are generally caused by using poorly designed libraries (including the convenient but dangerous standard library).
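A hedged illustration of that banana skin: the classic calls carry no size at all, while the bounded form at least makes the limit explicit.
Code:
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[8];
    const char *input = "definitely more than eight bytes";

    /* The banana skins, with no size parameter anywhere:
       gets(buf);          (removed outright in C11)
       strcpy(buf, input); (writes 33 bytes into an 8-byte buffer) */

    /* The bounded form truncates and always NUL-terminates: */
    snprintf(buf, sizeof buf, "%s", input);
    puts(buf);   /* prints "definit" */
    return 0;
}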
Why is everybody ignoring and not mentioning Ada, the best non-C/C++ way to write an operating system?
"The Army Secure Operating System (ASOS) was written almost entirely in Ada. It was designed to meet Orange Book A1 protection requirements, support Ada applications more directly, and run on a commodity Sun3. The total software was 55,000 lines of code. It even had checkpointing/restore and later a secure RDBMS."
Here, freshly declassified:
https://apps.dtic.mil/dtic/tr/fulltext/u2/a340370.pdf
https://moscow.sci-hub.se/1864/6e4bd79db1e4753ea76c251441a45133/waldhart1990.pdf
That is an unfair statement.
I wondered how long it would be until the There's No True Scotsman fallacy would rear its head! https://en.m.wikipedia.org/wiki/No_true_Scotsman
Unlike C++, C has two different "modes": hosted environment and freestanding environment. The former includes the standard C library – including functions like fgets(), strcpy(), and so on –; whereas the latter is used when programming kernels and microcontrollers, often with a replacement set of functions (see Linux kernel C functions, or the Arduino environment, for examples).
Buffer overflows are an intrinsic part of the standard C library, but not of the C freestanding environment. It is quite possible to replace the standard C library with a completely different API, including arrays with explicit bounds and garbage collection, but keep the C compiler and syntax.
Therefore, this is not a No True Scotsman argument, because it identifies the problematic but optional half of C. I know this, because I myself am working on a "better" substitute (for my own needs and uses).
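A minimal sketch of the freestanding side (the flags are GCC/Clang's; a real kernel adds its own linker script and startup assembly): nothing from libc is linked, so the code brings its own primitives, and only the compiler-provided headers are guaranteed.
Code:
/* Build sketch:  cc -ffreestanding -nostdlib -c kernel.c
   Freestanding C guarantees headers like <stddef.h>, <stdint.h>,
   <limits.h> and <stdarg.h>, but no stdio, malloc or strcpy. */
#include <stddef.h>
#include <stdint.h>

/* The kernel supplies its own primitives instead of libc's. */
static void *kmemset(void *dst, int value, size_t count) {
    uint8_t *p = dst;
    while (count--)
        *p++ = (uint8_t)value;
    return dst;
}

/* Hypothetical entry point, reached from startup assembly. */
void kmain(void) {
    static uint8_t framebuffer[4096];
    kmemset(framebuffer, 0, sizeof framebuffer);
    for (;;)
        ;   /* no hosted environment to return to */
}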
..., so now Java, Python and possibly Rust are dominant.
https://www.tiobe.com/tiobe-index/ (https://www.tiobe.com/tiobe-index/)
I'd like to play with a simple OS not written in C/C++.
What do you consider "a simple OS"?

So you think it is an acceptable argument to exclude "half" of C uses and concentrate on the less inconvenient other half?
No, I think it is an error to consider the hosted C environment the only valid C, and to call relying on freestanding C 'excluding "half" of C', when the main problems are all in the hosted C environment only.
The above linked argument is not a case of No True Scotsman, exactly because of the dual nature of the C standard, and the problems being avoidable by using freestanding C. It is a valid argument, because many C developers are not aware of the differences between the hosted environment and the freestanding environment, and conflate the two.
(Technically, one does not even need to write freestanding C, just avoid using most of the standard C library API; and use other APIs, like say Boehm GC for memory allocation, and mutable data buffers with explicit bounds instead of standard C strings, to basically completely avoid both buffer overruns and dynamic memory management problems.)
I fully agree that the C standard committee has dropped the ball over two decades ago, mainly due to increased vendor pressure and complete rejection of the POSIX standard, and instead veering into C++ and vendor-specific optional interfaces (like the so-called "safe I/O functions", which are nothing of the sort).
Put another way, buffer overruns and dynamic memory management issues are not an inherent part of C; only an inherent part of the library that forms the core of the hosted C environment: the standard C library. It is quite possible, and indeed very feasible, to either replace, or just augment the standard C library with something completely different that 1) does not suffer from buffer overruns because array boundaries are part of the data structures used by the replacement library interfaces, and 2) has an efficient automatic garbage collection; and the code will still be C that a typical C developer will be able to read and maintain. To develop such code, a typical C developer will have to learn those new interfaces, but that's it.
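As a hedged sketch of the kind of replacement interface meant here (illustrative names, not from any published library): the bound travels with the data, so the overflow check lives in one place instead of at every call site.
Code:
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* A counted buffer instead of a bare char pointer. */
typedef struct {
    char  *data;
    size_t used;
    size_t capacity;
} buffer;

/* Appends refuse to overflow instead of trusting the caller. */
bool buffer_append(buffer *b, const void *src, size_t len) {
    if (b->capacity - b->used < len)
        return false;               /* explicit failure, never a smashed buffer */
    memcpy(b->data + b->used, src, len);
    b->used += len;
    return true;
}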
Having experimented and delved into this, it amazes me that no real work has been published on this front, because I'm basically drowning in possibilities and having to write a lot of test code just to see which options I prefer right now, for code running under the Linux kernel on typical ARM and Intel hardware.
To me, it feels like computer scientists are arguing amongst themselves how many sides should a polygonal wheel have, completely ignoring round, circular wheels... We really have not made much real progress in software engineering (and I'm suspecting in computer science too) in the last two or three decades. Small optimizations only.
Things like the Arduino library (which replaces the standard C library for Arduino development; and although the code is compiled using a C++ compiler, it relies on the GNU C++ compiler providing a freestanding C++ environment based on the C freestanding environment) are honestly quite horrible, possibly even worse than the standard C library. I shan't talk much about the various vendor-provided Hardware Abstraction Libraries, just that every single one I've seen has been a disappointment (in the software engineering sense – compare to a contractor seeing a house built with timber but using twine instead of nails or screws (or even pegs) to hold things together).
The dual nature is important to realize, because the non-library part of C is so simple yet powerful. It could be much better (code-level concurrency, barriers, memory semantics/coherency etc.), but that sort of stuff is better explored with other languages. However, the C library part, which is not a compulsory/required part of C, only an optional part, is the biggest problem with C, and is easily replaced with something else. (That is, all C compilers I have used, provide compile time flags and options that allow trivially replacing/omitting the standard C library with something else.)
Using C without the standard library doesn't make it any safer or more secure as a language. Proof of which is the long history of Linux kernel-level exploits. Or exploits in no-OS network appliances. The MITRE database is a good place to look for examples. To think that "libraries" are somehow a problem and not using them will improve the situation is odd. It is simply difficult to write secure code in C. It requires a certain mindset and a lot of experience. Rare traits in the industry.
The original standard C library is certainly a problem. It's full of functions with no internal size checks. A good library isn't a magic cure for problems, but the original standard C library is like a banana skin.
Can't believe I missed this thread. Right up my street.
100% agree with you. There is no way to make C and/or C++ a "safe" environment to write software in. It is absolutely 100% impossible. The problem is that the compiler's fundamental model is memory, regardless of what suit you dress your code in, whichever macros or libraries you use, whether or not Coverity has buggered you for cash, and whether or not you have used clever compiler features to trip up people attacking your code.
I stand by my original points.
I really only object to the No True Scotsman fallacy claim.
I'll highlight and comment on a few of your points below...
I understand your points, and fully acknowledge their basis in facts; I only disagree on some of your conclusions.
There is no way to make C and/or C++ a "safe" environment to write software in. It is absolutely 100% impossible.
I agree; but I also think it is not necessary for a systems programming language to be "safe".
I explain those by pointing out that possible ≠ easy.Unlike C++, C has two different "modes": hosted environment and freestanding environment. The former includes the standard C library – including functions like fgets(), strcpy(), and so on –; whereas the latter is used when programming kernels and microcontrollers, often with a replacement set of functions (see Linux kernel C functions, or the Arduino environment, for examples).
If you believe that, how do you explain the regular exploits like:
100 Million More IoT Devices Are Exposed and They Won't Be the Last (WiReD)
Gabe Goldberg <gabe@gabegold.com>
Wed, 14 Apr 2021 19:41:06 -0400
The Name:Wreck flaws in TCP/IP are the latest in a series of vulnerabilities with global implications.
https://www.wired.com/story/namewreck-iot-vulnerabilities-tcpip-millions-devices/ (https://www.wired.com/story/namewreck-iot-vulnerabilities-tcpip-millions-devices/)
or
A Casino Gets Hacked Through a Fish-Tank Thermometer (Entrepreneur)
Amos Shapir <amos083@gmail.com>
Fri, 16 Apr 2021 17:49:35 +0300
Hackers gain entry to a casino's internal net via a fish tank, and steal list of customers:
https://www.entrepreneur.com/article/368943 (https://www.entrepreneur.com/article/368943)
Both of those are from yesterday's comp.risks (Volume 32, Issue 60, Saturday 17 April 2021), which everybody should be reading.
See https://catless.ncl.ac.uk/Risks/ (https://catless.ncl.ac.uk/Risks/)
As an example, consider PHP: widely used, but usually pretty horrible. Especially its earlier versions were basically a security hole waiting to happen (the magic quotes stuff in particular). Yet one could write quite secure web service code with it, if one paid sufficient attention and avoided the features that usually lead to security problems. I know, because I have.
However, many of those security holes have since been plugged (magic quotes are no longer supported, database interfaces switched from building query strings to using variable references so quoting is not even an issue, and so on). The most problematic design principle currently is that most PHP services are designed to be able to upgrade themselves, which necessarily means the installation is vulnerable to script drops/bombs et cetera. We could avoid that, and even things like password leaks, if we leveraged the POSIX/Unix user and group hierarchies: server interpreters refusing to execute code owned by the user that can upload content to the server, and login/logout/account management facilities restricted to a few specific pages, with all others not even having access to the sensitive fields of the user database...
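Roughly this kind of check is what I mean, as a sketch; the function name and the exact policy are mine, assuming a POSIX host:

#include <sys/types.h>
#include <sys/stat.h>

/* Return 1 only if the script at 'path' is safe to execute under the
   policy above: not owned by the upload user, not writable by others. */
static int safe_to_execute(const char *path, uid_t upload_uid)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return 0;                          /* cannot stat: refuse */
    if (st.st_uid == upload_uid)
        return 0;                          /* owned by the uploader: refuse */
    if (st.st_mode & (S_IWGRP | S_IWOTH))
        return 0;                          /* group/world-writable: refuse */
    return 1;
}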
We could do better with Python, but unfortunately Python insists on its "own" WSGI interfaces (as opposed to, say, FastCGI). (In particular, a page engine can be written as a FastCGI script, with each request (connection) served by a forked child. The engine can preload each instance with the main data structures, like navigation and supported file types, deduplicating most of the work done by most page loads.) As a result, typical widely used Python-based web services are vulnerable to similar bugs as PHP ones, on top of their own WSGI ones! No true forward development, just steps in odd directions, in my opinion.
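The forked-child idea, sketched with plain sockets so it stays self-contained; the port number and the "shared state" are invented for the example, and FastCGI plumbing is omitted:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* Preload phase: parse navigation, file types, config... once. */
    const char *shared_state = "navigation + file types, parsed once";

    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof addr) < 0 || listen(s, 16) < 0)
        return 1;

    for (;;) {
        int c = accept(s, NULL, NULL);
        if (c < 0)
            continue;
        if (fork() == 0) {                 /* child inherits preloaded state */
            dprintf(c, "using: %s\n", shared_state);
            close(c);
            _exit(0);
        }
        close(c);                          /* parent: reap children and loop */
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;
    }
}

Each request pays only for a fork(), while the expensive parsing was done once in the parent.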
This "proves" to me that the current software bugs and insecurity is really not a feature of the respective programming languages, but a consequence of us human developers accepting a software "engineering" culture that has discarded almost all good engineering principles, and is just sticking stuff together with spit and bubblegum, banking on the product working just long enough that they won't be held responsible for the crappiness.
And that we should not blame the languages for not trying to stop the developers from implementing idiotic designs.
Can low level programming be made totally "safe" just from features of the language itself?
I agree with Nominal Animal about good engineering practice being largely replaced with silver bullets and tools.
Various security reports I have read don't show C as the language with the most security exploits in practice, though. I would have to dig a little now to provide links, but AFAIR, PHP, JavaScript and even Java came out as the sad winners here. That doesn't necessarily show anything about the languages themselves, but rather *how* (and probably by whom) they are typically used.
It is much, much easier to write buggy C than it is to write robust, secure C code. C is a dangerous tool, but so powerful and useful that many choose to use it nevertheless.
Nothing is perfect, so where do you draw the line for "safe"? Even if the code is guaranteed to work perfectly for all possible inputs on standard hardware, there are glitches. Most current AMD64 architecture laptop and desktop machines do not support ECC memory, so single-bit errors can occur because of an odd cosmic ray, or for a number of other reasons.
I don't think it is a No True Scotsman argument to say something like "a properly trained person will never point a firearm at anything or anyone, even when the safety is on, unless they are ready to kill them". This is a simple rule known for as long as firearms have existed; but the firearm itself does not enforce the rule. A lot of people (including soldiers) do not know or neglect to follow that rule, so accidents happen, and people get killed. With C, bugs and security failures occur often because the developers do not care (about making the code secure against unexpected inputs); I've heard countless times that "we don't have the time for that right now; we'll add those in later".
I'm not sold on the "C/C++ is evil" argument because of the dreaded buffer overrun. Some asshat can write sloppy C code and chances are good it will only crash at best. But an interpreter, ironically written in C, could have a bug which could expose hundreds of thousands of users to an exploit.
Better that the exploit gets fixed in one patch than in 100,000 independent programs in C that don't.
It’s even lower level than that. Solve a problem once, properly. Not a million times, badly.
NIH is endemic in the C++ community, from academia to grunts in the trenches.
Too many of the academic papers on C++ reference only other C++ papers. To give a contrary example, Gosling's Java whitepaper was notable for nicking concepts from many other languages, where each concept had been proven in practice and all concepts played nicely with each other.
I think that’s out of necessity. Someone has to write their own “framework” at every C++ house I’ve seen and work to some poorly defined subset of the language which doesn’t have so many foot guns.
Rust is the same. But the academic papers are blog posts and brigading on tech news aggregators.
Similar with Go. It’s that fuzzy joy that Java was (if you avoided J2EE 1.x :) )
Only after the impact of a high-yield exploit which has been refined to maximize the damage, versus random programs with bad code that hackers can't be bothered attacking.
Fully agreed.
Quote: And that we should not blame the languages for not trying to stop the developers from implementing idiotic designs.
Again, agreed.
But when the fundamental properties of a language mean that large applications are "castles built on sand", we shouldn't shy away from recognising that choosing a different language ought to mean the "castle is built on rock". It is, of course, possible to choose a different language so that the "castle is built on a swamp".
Quote: We should distinguish swamps and sand from rock, and choose rock wherever possible.
Absolutely! But what would be the rock analog to replace the swamp and sand that is C, in systems programming? (Specifically, efficient low-level libraries and services interfaced to from higher-level languages, for my own use cases.) I personally do not know of one.
Scheme makes a good systems programming language :popcorn: :-DD
As for single vendor controlled products, I've learned that opinionated vendors are a win for solving problems. I've got 20 years out of .Net so far.
Problem is, every programming language we currently have has its drawbacks. I "solve" this by using multiple programming languages; for example, Python for UI, C for heavy computation (in library form). I automate tasks using Bash or POSIX shell scripts, Makefiles, and Awk; the latter is especially useful for certain line/record-based data processing. I try to pick each tool, each programming language, that seems the most appropriate for the task at hand, while being careful not to let my personal preferences skew the choice too much.
Now that Rust has its own foundation, I'm hoping it will become a reasonable alternative for systems-level programming. I dislike Go for the same reasons I dislike .Net: being single vendor controlled projects, their future is uncertain; and I've been bitten by single vendor products often enough to be wary of subjecting myself to those if reasonable alternatives exist.
So, I personally am trying to use piling (driving heavy stilts or posts into the swamp and sand to create a stable enough base to build on; an analog for replacing just the standard C library with something better, more suited to the task), until those better than I in computer science can develop a language that can achieve the same or better end results.
(For example, having carefully considered and tested various cases, I've changed my mind regarding garbage collection. I do "like" pool based allocation more, but in objective terms, good garbage collection schemes simply have fewer drawbacks.)
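For concreteness, the sort of pool-based allocation I'm comparing against, as a tiny sketch; the sizes and names are arbitrary:

#include <stddef.h>

#define POOL_OBJS 64
#define OBJ_SIZE  32

typedef union slot {
    union slot *next;               /* valid while the slot is free */
    char        payload[OBJ_SIZE];  /* valid while the slot is in use */
} slot;

static slot  pool[POOL_OBJS];
static slot *free_list;

static void pool_init(void)
{
    free_list = NULL;
    for (size_t i = 0; i < POOL_OBJS; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

static void *pool_alloc(void)       /* O(1); NULL when exhausted */
{
    slot *s = free_list;
    if (s)
        free_list = s->next;
    return s;
}

static void pool_free(void *p)      /* O(1); no fragmentation */
{
    slot *s = p;
    s->next = free_list;
    free_list = s;
}

It's fast and predictable, but every object is the same size and nothing catches a double free; that is the kind of drawback a good collector removes.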
I thought you made your money clearing up the messes dropped by other people!
Quote: I'd like to play with a simple OS not written in C/C++.
How much do you consider "a simple OS"?
like DOS with TSR
DOS wasn't an operating system, it was a program loader with a few built-in peripheral libraries.
I liked it: load your program and the machine is yours!
DOS stands for "disk operating system"; DOS + TSR means you can suspend a program, and launch an other.
It's my definition of "simple OS".
Things like "UCOS/2/3" are of a different level (it's a static task RTOS), and also of a different level of complexity. I have already implemented it on a MIPS32 and on arm-classic machine, but I only wrote the specific low level while the 90% of the code was just "ok" out of the box. Then I wrote a couple of tasks and a couple of drivers, one of them was a Modbus driver, the other a driver for a radio-link.
Kind of a producer-consumer problem. Not too bad; both were nice experiences. But I cannot rewrite this stuff in assembly, and it's too much code to translate into Ada/Rust/etc...
I need something shorter, simpler.
It ran a single program at a time.
If you wanted to do something different, you shut down that program and loaded another.
Hence the famous T-shirt slogan, "Don't Mode Me In".
Tesler developed the idea of copy and paste functionality and the idea of modeless software.
In user interface design, a mode is a distinct setting within a computer program or any physical machine interface, in which the same user input will produce perceived results different from those that it would in other settings.
I'm only too well aware of what MSDOS was.
FORTH is not a language or an OS or a toolchain. It's just completely different.
I was barely inflicted with that crap fortunately. A relative died and left me a pile of cash so I got myself a nice Acorn A420 to replace my BBC. Eventually this was replaced with NT while I was using Sun kit at university and work...
MSDOS was horrible horrible horrible yuck. My father built a large business on it selling payroll software.
Yes it was. However, as a clone of CP/M for the 8086 it allowed a lot of CP/M software to be quickly ported to the new 8086 machines. The sad part about MSDOS is not that it existed, but that it stuck around for so long. It should have had quite a short market window.
... Jeez, you lot don't half have tea-stain-tinted spectacles! ...
I liked DOS. You could have it on a floppy with enough space left to do other things. I didn't like or use Windows until 95. I got a lot of mileage out of DOS with QuickBASIC and later QuickC. For $99, those programs were great bang for the buck in my opinion.
>> How much do you consider "a simple OS"?
Well, that's why I asked. DOS implemented a filesystem and a bunch of utilities, but had no multitasking and not much in the way of device drivers. A bunch of the current "RTOS" implementations give you real-time and multitasking, but no file system, utilities, or program loading capability.
Quote: Windows 95 was literally an application that ran on top of DOS, and you retained the ability to do native DOS command line stuff.
Meh. Since we admitted that DOS was little more than a program loader, it's not clear what the difference is between "an application loaded by DOS" and "an operating system loaded by a bootloader".
Interesting reading. I haven't written any higher-level language code for many years. I play with microprocessors and use assembler, because I want exact control over what is going on. Current project has around 50,000 interrupts/second, does a bit of multithreading, computes linear least-squares fits to samples on an 8-bit processor. Not sure that even C could manage it. All on 5V and a few milliamps.
Which CPU are you using for this? :D
PIC16F1455. 10MHz external clock, internal x4 PLL for internal clock, = 10MHz instruction time.
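For what it's worth, the usual C trick for that kind of load is to keep the per-interrupt work down to a few multiply-accumulates and only solve for the slope when someone asks. A sketch; the type widths assume short sample windows and 8-bit samples, which is my assumption, not the poster's setup:

#include <stdint.h>

typedef struct {
    uint8_t n;                      /* samples in the window (max 255) */
    int16_t sx, sy;                 /* running sums */
    int32_t sxx, sxy;               /* running sums of products */
} lsq;

static void lsq_add(lsq *f, int8_t x, int8_t y)   /* cheap enough for an ISR */
{
    f->n++;
    f->sx  += x;
    f->sy  += y;
    f->sxx += (int16_t)x * x;
    f->sxy += (int16_t)x * y;
}

/* slope = (n*sxy - sx*sy) / (n*sxx - sx*sx), returned in Q8 fixed point */
static int32_t lsq_slope_q8(const lsq *f)
{
    int32_t num = (int32_t)f->n * f->sxy - (int32_t)f->sx * f->sy;
    int32_t den = (int32_t)f->n * f->sxx - (int32_t)f->sx * f->sx;
    return den ? (int32_t)(((int64_t)num * 256) / den) : 0;
}

Whether a PIC16 compiler turns that into something that keeps up at 50k interrupts/second is another question; that is exactly where hand assembly still earns its keep.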
That said, I found out 30 years ago that I had a misperception regarding assembler and speed. I transitioned from QuickBASIC to QuickC because I believed that C was faster than BASIC. And with C, I could include inline assembly (best of both worlds).
Quote: 8086 and 68000 weren't really, but are kind of OK, but not really.
I remember reading many years ago that the 8086 segmented memory architecture was a natural match for Pascal, easing the job of the compiler:
- The code segment is used for instructions
- The data segment holds global variables
- The stack segment holds return addresses and local variables
- The extra segment holds variables accessed via pointers (IIRC, in old standard Pascal pointers are not free to point to any variable, only to variables allocated with New())
Of course, those were times when 64 kB per segment was a respectable size...
1) whether the CPU was designed for running compiled languages such as Pascal or C. 6502 and z80 definitely weren't, but AVR was. 8086 and 68000 weren't really but are kind of OK, but not really. 6809 was a bit later and is pretty good for an 8-bitter. Anything from MIPS and ARM and on is designed for running compiled languages and you have to work very very hard to beat the compiler.
The key point about C is that it assumes the memory model is a single uniform address space where each byte is uniquely addressable. That matches the 6800/6809, 8080/8085 and 68k, but not the 1802 or 6502, and especially not the 8086/8088.
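A quick illustration of why: in 8086 real mode the same physical byte has many segment:offset names, so "uniquely addressable" goes out the window. The particular numbers here are arbitrary:

#include <stdio.h>

int main(void)
{
    /* 8086 real mode: physical address = segment * 16 + offset. */
    unsigned long a = 0x1234UL * 16 + 0x0010;  /* 1234:0010 */
    unsigned long b = 0x1235UL * 16 + 0x0000;  /* 1235:0000 */

    /* Two different segment:offset pairs, one physical byte: */
    printf("0x%lX == 0x%lX\n", a, b);          /* prints 0x12350 == 0x12350 */
    return 0;
}

Pointer comparison and arithmetic in C quietly assume this cannot happen, which is why DOS-era compilers had to bolt on non-standard near/far pointer types.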
add16:              ; 16-bit add: dst (zero page,X) += src (absolute,Y)
    clc
    lda $00,X       ; low byte of dst
    adc $0000,Y     ; add low byte of src
    sta $00,X
    lda $01,X       ; high byte of dst
    adc $0001,Y     ; add high byte of src, plus carry
    sta $01,X
    rts
ldy #src ; might already be correct, so can often be omitted
ldx #dst ; might already be correct, so can often be omitted
jsr add16
Best use for zero page on the 6502 is as a stack, as it's cheaper to access. I think ARM started as a 32-bit 6502 with orthogonal register utility. Apart from the bastard 26-bit PC/flags that cursed the first few chunks of silicon.
Compiler should make register allocation decisions based on what it’s compiling. Itanium was designed around that concept. The actual architecture was impenetrable by humans. We probably should let the machines design the ISA at this point like we do with the silicon.