
any examples of OS not written in C/C++?


rfclown:

--- Quote from: MIS42N on April 24, 2021, 01:01:57 pm ---Interesting reading. I haven't written any higher language code for many years. I play with microprocessors and use assembler, because I want exact control over what is going on. Current project has around 50,000 interrupts/second, does a bit of multi threading, computes linear least square fits to samples on an 8-bit processor. Not sure that even C could manage it. All on 5V and a few milliamp....

--- End quote ---

Completely off topic...

I too code microprocessors (usually AVRs lately), mostly in C, sometimes in assembler. I have no idea how to do multithreading; I only do very simple stuff with microcontrollers. I don't understand OOP at all. If I Google to find a function I want and it ends up being in C++, I'll figure out how to convert it to a C function.

That said, I found out 30 years ago that I had a misperception regarding assembler and speed. I transitioned from QuickBASIC to QuickC because I believed that C was faster than BASIC. And with C, I could include inline assembly (best of both worlds). My first incentive to do this involved reading RSSI values that were in dB and computing averages quickly for a TDM power control system (dB->watts->avg). C and assembler allowed me to do this. No doubt, assembler is going to be the fastest, but... it's amazing how fast code can be if it's written well in C. I discovered that the reason my QuickBASIC code was usually slower was that I didn't define the variables up-front, and the default was floating point. If I defined variables as int, QuickBASIC wasn't slow (compiled BASIC). It was comparable to QuickC. (This was the early '90s, I think.)
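
To illustrate the idea, here's a rough C sketch of that dB -> watts -> average step; the function name and the conversion back to dB at the end are my own assumptions, not the original TDM code:

--- Code: ---
#include <math.h>

/* Average RSSI readings given in dB by converting to linear power first;
 * averaging the dB values directly would give the wrong answer.
 * Hypothetical helper, not the original power-control code. */
double average_rssi_db(const double *rssi_db, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += pow(10.0, rssi_db[i] / 10.0);   /* dB -> linear power */
    return 10.0 * log10(sum / n);              /* linear average -> dB */
}
--- End code ---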

My present-day HLLs are C and LabVIEW. LabVIEW runs like compiled C, but since it (I think) does some parallelizing for you (if you have multiple processors) it is amazingly fast (if you code well).

A while back I wrote an OFDM vector signal analyzer in LabVIEW. I had a receiver streaming in I/Q baseband samples over TCP at 1.625 Msps. Our TDM system had a 10 msec uplink and a 10 msec downlink. With a 128-point FFT and a 16-point CP you get about 116 symbols in a 10 msec frame. The LabVIEW program would use cross-correlation to find the preamble, do a frequency estimation based on training symbols in the waveform, correct the data for frequency offset, equalize, demodulate, compute EVM, and display whatever you wanted to look at (individual symbols, multiple symbols). I think it was 108 active carriers out of a 128-point FFT. Carriers could be BPSK, QPSK... up to 1024 QAM. And the program on a quad-core Lenovo laptop kept up in real time! It could do all the computation and presentation of the signal before the next TDM frame came. I had not had any expectation that the program would keep up in real time. I was doing the work because we were looking to remove the pilots from our waveform, and Keysight's VSA (what we were using at the time to analyze our signals) choked on that.
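
Just to give a flavor in C (the real work was all LabVIEW), the frequency-offset correction step is essentially a complex rotation of the I/Q samples; the function name and parameters below are made up for illustration:

--- Code: ---
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Rotate complex baseband samples by -f_offset to undo a carrier
 * frequency offset, one step of the demodulation chain described above.
 * Sketch only; not the LabVIEW implementation. */
void correct_cfo(float *i_samp, float *q_samp, int n,
                 double f_offset_hz, double fs_hz)
{
    for (int k = 0; k < n; k++) {
        double phase = -2.0 * M_PI * f_offset_hz * k / fs_hz;
        double c = cos(phase), s = sin(phase);
        double i_new = i_samp[k] * c - q_samp[k] * s;   /* (i + jq) * e^{j*phase} */
        double q_new = i_samp[k] * s + q_samp[k] * c;
        i_samp[k] = (float)i_new;
        q_samp[k] = (float)q_new;
    }
}
--- End code ---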

brucehoult:

--- Quote from: rfclown on May 04, 2021, 12:21:52 am ---That said, I found out 30 years ago that I had a misperception regarding assembler and speed. I transitioned from QuickBASIC to QuickC because I believed that C was faster than BASIC. And with C, I could include inline assembly (best of both worlds).
--- End quote ---

That depends entirely on:

1) whether the CPU was designed for running compiled languages such as Pascal or C. 6502 and Z80 definitely weren't, but AVR was. 8086 and 68000 weren't really, but they're kind of OK. 6809 was a bit later and is pretty good for an 8-bitter. Anything from MIPS and ARM on is designed for running compiled languages, and you have to work very, very hard to beat the compiler.

2) how good you are at assembly language and working around and exploiting the weirdness of your CPU. Especially on 6502 and Z80 and the like. But early x86 too.


Something like LabVIEW presents you with a high-level language, but the people who wrote it made all those FFTs and so forth in assembly language, or something very close to it.

newbrain:

--- Quote from: brucehoult on May 04, 2021, 07:26:02 am ---8086 and 68000 weren't really, but they're kind of OK.

--- End quote ---
I remember reading many years ago that the 8086 segmented memory architecture was a natural match for Pascal, easing the job of the compiler:

* The code segment is used for instructions
* The data segment holds global variables
* The stack segment holds return addresses and local variables
* The extra segment holds variables accessed via pointers (IIRC, in old standard Pascal pointers are not free to point to any variable, only to variables allocated with New()).

Of course, those were times when 64 kB per segment was a respectable size...

brucehoult:

--- Quote from: newbrain on May 04, 2021, 07:54:17 am ---
--- Quote from: brucehoult on May 04, 2021, 07:26:02 am ---8086 and 68000 weren't really, but they're kind of OK.

--- End quote ---
I remember reading many years ago that the 8086 segmented memory architecture was a natural match for Pascal, easing the job of the compiler:

* The code segment is used for instructions
* The data segment holds global variables
* The stack segment holds return addresses and local variables
* The extra segment holds variables accessed via pointers (IIRC, in old standard Pascal pointers are not free to point to any variable, only to variables allocated with New()).

Of course, those were times when 64 kB per segment was a respectable size...

--- End quote ---

Right.

Yeah, it's simply untenable on a small machine for C, where address arithmetic is standard and frequent, forcing every pointer to a 32-bit segment:offset pair. It's not too bad for Pascal, where you don't do arithmetic on pointers and a pointer can be just the segment number. The pain comes because there's only one extra segment, so you have to keep reloading it to switch between objects, and that was slow (maybe not on the original 8086). Pretty soon 64k got to be too small for arrays or big buffers in many programs. Even when you had only 1 MB you might want to do a scientific calculation using more than 64k for one array.

68000 had a similar problem in that there was only a 16x16 multiply instruction. Both MPW and THINK Pascal took only a 16-bit result from it, so array sizes were also limited to 64k there. I don't know why they did that when the MULU/MULS instructions *always* produced a 32-bit result.
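
A quick C illustration of why keeping only 16 bits of that product caps arrays at 64K (hypothetical helpers; the real problem was in the compilers' generated 68000 code):

--- Code: ---
#include <stdint.h>

/* What the compilers effectively did: keep only the low 16 bits of the
 * index * element-size product, so it wraps as soon as it passes 65535. */
uint16_t element_offset_truncated(uint16_t index, uint16_t elem_size)
{
    return (uint16_t)((uint32_t)index * elem_size);
}

/* What MULU actually delivers: the full 32-bit product, good for
 * up to 64k elements of up to 64k bytes each. */
uint32_t element_offset_full(uint16_t index, uint16_t elem_size)
{
    return (uint32_t)index * elem_size;
}
--- End code ---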

At one point I got Rich Siegel to fix this from his personal copy of the THINK Pascal source code and send me the resulting compiler. This made things much much better as then you could have up to 64k elements of up to 64k bytes each. I may be the only person in the world who ever had a copy of this version of the compiler.

tggzzz:

--- Quote from: brucehoult on May 04, 2021, 07:26:02 am ---1) whether the CPU was designed for running compiled languages such as Pascal or C. 6502 and Z80 definitely weren't, but AVR was. 8086 and 68000 weren't really, but they're kind of OK. 6809 was a bit later and is pretty good for an 8-bitter. Anything from MIPS and ARM on is designed for running compiled languages, and you have to work very, very hard to beat the compiler.

--- End quote ---

The Z80 was fine for C, inasmuch as it was fine for anything. But the IX and IY registers were surprisingly limited for most things. All the Z80's bolt-on goodies (warts, really) made me appreciate the RISC-like simplicity of the 8080 :)

The key point about C is that it assumes the memory model is a single uniform address space where each byte is uniquely addressable. That matches the 6800/6809, 8080/8085, and 68k, but not the 1802 or 6502, and especially not the 8086/8088.

Remember the horrors of determining whether two segment+offset 8086 pointers referred to the same object, and all the horrible grotty workarounds to try to make it less grossly inefficient? Shudder.
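
For anyone who never had the pleasure, here's a small hypothetical C sketch of why "same object?" was painful with segment:offset pointers, assuming the usual linear = segment*16 + offset mapping:

--- Code: ---
#include <stdint.h>

/* On the 8086 a "far" pointer is segment:offset, and the physical address
 * is segment*16 + offset, so many different pairs name the same byte.
 * Comparing two far pointers for identity meant normalizing them first.
 * Hypothetical helpers, not any real compiler's runtime. */
typedef struct { uint16_t seg; uint16_t off; } far_ptr;

static uint32_t linear(far_ptr p)
{
    return ((uint32_t)p.seg << 4) + p.off;   /* 20-bit physical address */
}

static int same_object(far_ptr a, far_ptr b)
{
    /* e.g. 0x1234:0x0010 and 0x1235:0x0000 refer to the same byte */
    return linear(a) == linear(b);
}
--- End code ---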
