Author Topic: Microprocessor (MPU 8/16) that can be programmed using C programming language  (Read 13599 times)


Offline GeorgeOfTheJungle

  • Super Contributor
  • ***
  • Posts: 2699
  • Country: tr
The limitations of these devices are what should make them fun to work with.

It isn't fun when all you have is three 8-bit registers and 256 bytes of stack. You're a masochist :-)
The further a society drifts from truth, the more it will hate those who speak it.
 

Offline aiq25Topic starter

  • Regular Contributor
  • *
  • Posts: 241
  • Country: us
For any kind of retro computing, it's hard to give decent advice without something concrete in mind.

Recently I had a peek at the book
"STM32_Beginning_Developing_with_FreeRTOS_libopencm3_and_GCC.pdf"

And sure, it's Arm Cortex M3, but it's C (or C++ if you like).
The smaller 8-bitters tend to have a very limited amount of RAM and cannot execute code from it, while this thing can run code from RAM. But I have a better reason for mentioning this book: it has a whole section devoted to "overlays", a technique for loading functions from external storage into RAM and executing them there. This used to be popular in the old days, when RAM in PCs was very limited.
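As a minimal sketch of the idea in C: read_storage() below is a hypothetical driver for whatever external storage is attached, and a real overlay must be position-independent or linked to run at the buffer's address (the data-to-function-pointer cast is implementation-defined, but this is how it's done on flat-memory machines):

Code: [Select]
#include <stdint.h>

#define OVERLAY_SIZE 512
static uint8_t overlay_ram[OVERLAY_SIZE];   /* execution buffer in RAM */

/* hypothetical driver: read 'len' bytes at 'offset' from external storage */
extern void read_storage(uint32_t offset, void *dst, uint32_t len);

typedef int (*overlay_fn)(int);

int call_overlay(uint32_t offset, uint32_t len, int arg)
{
    read_storage(offset, overlay_ram, len);   /* pull the code into RAM */
    overlay_fn fn = (overlay_fn)overlay_ram;  /* entry point at buffer start */
    return fn(arg);                           /* execute out of RAM */
}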

Quote
An 8-bitter that can run code from RAM and has a C compiler is the Cypress CY7C68013A. It is used in all the cheap Sigrok/Pulseview logic analyzers, and you can use the "FX2LAFW" firmware as a starting point for your own experiments.

On hackaday a lot of articles have been written about retro computing. Lots of Z80 boards, but the 68000 also seems to be popular.

But what aspect of "retro computing" attracts you so?
For me it mostly equals lots of separate chips, which translates to lots of soldering and big, expensive PCBs. Also lots of opportunity to make hardware errors.
Thanks for the suggestion. I will definitely look into this as well. I do like visually seeing all the separate ICs for different functions, and I also like assembling big PCBs (it feels relaxing), but I wanted to start with something that would be easier to work with. The part of retro computing that attracts me most is seeing all the cool projects and software people write on such limited hardware. I also just like working with retro technology in general. I was born in the '90s, so I never got to experience true retro 8-bit computers, and in general I would just like to build something of my own. It's been a while since I programmed an 8-bit MCU from scratch (excluding Arduino, because nowadays, with all the example code and the Arduino IDE, it's very easy to program something).


Quote
Apparently you do not know yet which processor you want to use. I also find it strange that you can only "find assembly". C compilers have been available for nearly any platform since the 80's.
I was wrong in my initial statement. What I meant was that I saw a lot of articles about how assembly language was much more efficient than C compilers for MPUs like the Z80. That's why I was looking for an MPU that is still efficient when programmed in C.
 

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4586
  • Country: gb
Hi. I want to learn about 8 and 16 bit microprocessors but all of the ones I can find seem to be programmed using assembly language.

Are there any microprocessors that can be programmed using C (embedded) programming language?

Edit (some clarifications): I would like to build a retro computer using modern ICs; that's why I'm looking for an 8 or 16 bit MPU specifically. I would like an MPU that can run program code from external memory.

This project is for fun. I would like to get back into more embedded projects and programming. I like C programming. I never enjoyed assembly language.

If not already mentioned, consider the AVR Mega2560.
You can configure it to have 32K of external address/data bus space, because it has so many pins available.
It is 8 bit, and has an instruction set somewhat similar to the early 8-bit processors.
It can go fairly fast if you want; I'd have to look it up, but something like 16 MHz (compared to early 8-bit CPUs like the original 6502, which only ran at 1 MHz).
It is a modern and very available chip.
It has lots of built-in peripheral I/O devices, a lot more than the usual Arduino offerings.
It is a fully fledged Arduino family member, with all the information sources and ready-built boards/hardware that come with that.
But you can buy just the chip.
Quite a bit of onboard flash, too.
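As a sketch of using that external bus from C (a minimal, untested example; XMCRA and SRE are the real avr-libc names for the ATmega2560's XMEM interface, and 0x2200 is where external memory starts, just above the internal SRAM):

Code: [Select]
#include <avr/io.h>
#include <stdint.h>

int main(void)
{
    XMCRA |= (1 << SRE);    /* enable the external memory (XMEM) interface */

    /* Internal SRAM ends at 0x21FF on the ATmega2560, so external
       devices appear in the data space from 0x2200 upward. */
    volatile uint8_t *ext = (volatile uint8_t *)0x2200;

    ext[0] = 0x55;                    /* write goes out on the external bus */
    return (ext[0] == 0x55) ? 0 : 1;  /* read it back */
}

After that, ordinary C pointers and arrays just work against the off-chip RAM; no special access functions needed.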

But another choice is the modern-day 6502 versions, which can run at 14 MHz (instead of the usual 1 MHz) and can optionally use an improved instruction set (the old 6502 instruction set is still available; I think it's a boot option).
E.g.   https://en.wikipedia.org/wiki/WDC_65C02

N.B. Different versions available, you choose.
At least one even in DIP 40 pin package, if you want.
Full external buses. I can't remember whether the bus is multiplexed; it probably is in the 40-pin package, or at least some versions are, I suspect.
List of cpus (WDC) here:  https://en.wikipedia.org/wiki/Western_Design_Center
The WDC 65C816 is switchable between being an 8-bit CPU and a 16-bit one. It is still 6502-based, but much faster and more capable, if you want that.
https://en.wikipedia.org/wiki/WDC_65C816


Or go old school and get a 20 MHz-capable, old-style Z80 CPU (i.e. instead of the old 4 MHz ones; they later came out at up to 20 MHz). If that is not fast enough, there are the Z180s (and, later still, the eZ80s already mentioned in this thread).
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14620
  • Country: fr
Re: Microprocessor that can be programmed using C programming language
« Reply #28 on: July 27, 2020, 02:12:31 pm »
Nowadays it's *RARE* to find an MCU still in production that isn't supported by a C compiler, either GCC, SDCC, or the manufacturer's or a third party's proprietary C compiler. Many MCU families have more than one C compiler to choose from.

Yes, it's more than rare. It's even an oddity. And by "nowadays", you can make that at least the last 20 years.

Now a few "recent" niche MCUs do not have a C compiler, likely because their architecture is too simplistic for it. We can think of the Padauk MCUs, which are ultra simple (and ultra cheap), and come with a vendor-supplied compiler supporting some kind of very stripped-down C. But this is really one of the few exceptions, and it still looks like C, even though the language is a lot more limited.

I'd be curious to hear about other examples.

It's interesting to think (as a compiler writer) what would prevent something from running C reasonably efficiently.

Well, define "efficiently" ;)
I guess it's mostly in terms of how much machine code would be needed for a given C construct. If the number of registers and arithmetic operations is very limited, both execution time and code size could be a major problem, especially if, on top of that, the amounts of data and code memory are also very limited.

One thing that would be a severe problem for a C compiler if missing (I guess it would even make it impossible) is indirect addressing. Right now I can't think of/remember a CPU that doesn't have indirect addressing, but I guess one may exist. And even if indirect addressing is supported, some CPUs may not support indirect calls, which would also make some C features very inefficient, or impossible to implement.
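To make that concrete, these three one-liners are close to the minimum a C compiler must be able to translate, and they need an indirect load, an indirect store, and an indirect call respectively (just a sketch, nothing target-specific):

Code: [Select]
int  load (int *p)               { return *p; }   /* indirect load  */
void store(int *p, int v)        { *p = v;     }  /* indirect store */
int  apply(int (*f)(int), int x) { return f(x); } /* indirect call  */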

Beyond technical feasibility, there may be other reasons why a C compiler wouldn't be available for some CPU. The CPU architecture could be so odd that writing a full C compiler for it, or a back-end for an existing one, could be a very tough endeavor that the vendor is not ready/able to take on; and if, on top of that, said CPU has limited performance and memory, the vendor may decide that just providing an assembler instead would be much easier and make more sense.

Taking the Padauk example again, I haven't worked with them, but for those who have, what is, in your opinion, the main reason why they didn't provide a complete C compiler, but some kind of simplified C instead?
« Last Edit: July 27, 2020, 02:14:58 pm by SiliconWizard »
 

Online Doctorandus_P

  • Super Contributor
  • ***
  • Posts: 3431
  • Country: nl
Dave made the Padauk popular, and then a conglomerate of smart guys ported SDCC to it, though I'm not sure exactly how the two are connected.

The Propeller chip long went without a C compiler, because it's such a weird chip; it was only programmable in a language called "Spin". I'm not sure if there is a C compiler for it now. I lost interest in the thing.

For some niche applications, 4-bit µCs have been made for a long time, and there is probably no C compiler for those. Last time I checked, though, even my toothbrush has an MSP430.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: nz
Quote
Apparently you do not know yet which processor you want to use. I also find it strange that you can only "find assembly". C compilers have been available for nearly any platform since the 80's.
I was wrong in my initial statement. What I meant was that I saw a lot of articles about how assembly language was much more efficient than C compilers for MPUs like the Z80. That's why I was looking for an MPU that is still efficient when programmed in C.

Ok. "C is not possible" and "C is not as efficient as assembly language" are very different things. Most 8 bit microprocessors are very difficult to write an efficient C compiler for, including Z80 and 6502.

The first microprocessors that were OK for C (or Pascal) were the M6809 and the Intel 8088. Although one is described as an 8-bit processor and the other as a 16-bit one, their capabilities are actually extremely similar. They both still suffer from a lack of registers. The 8088 comes out slightly ahead because of the availability of a version with a 16-bit data bus, and a hack to let it use between 64 KB and 1 MB of memory with somewhat awkward programming. The M6809 is limited to 64 KB without external bank-switching hardware (which, admittedly, makes it not much worse than the 8088 to actually program).
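To spell out that hack: the 8088 forms each 20-bit physical address from two 16-bit values as segment * 16 + offset, which is easy to check with a few lines of C:

Code: [Select]
#include <stdint.h>
#include <stdio.h>

/* 8088 real-mode address formation: a 20-bit result from two 16-bit parts */
static uint32_t phys(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}

int main(void)
{
    printf("%05lX\n", (unsigned long)phys(0xF000, 0xFFFF)); /* FFFFF, top of the 1 MB */
    printf("%05lX\n", (unsigned long)phys(0x1234, 0x0010)); /* 12350 */
    return 0;
}

The awkwardness being that many different segment:offset pairs alias the same physical address, and no single pointer register spans more than 64 KB.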

The first microprocessors which were really C-friendly were the M68000, ARM, and MIPS (all of which are also, coincidentally, 32 bit).
 

Offline greenpossum

  • Frequent Contributor
  • **
  • Posts: 408
  • Country: au
The first microprocessors which were really C-friendly were the M68000, ARM, and MIPS (all of which are also, coincidentally, 32 bit).

You're forgetting the LSI-11, though that wasn't a single chip.

Also around the same era as the 68k was the Z8000, which ran a port or workalike of Unix. IIRC the architecture was decent.
« Last Edit: July 28, 2020, 12:55:43 am by greenpossum »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: nz
Re: Microprocessor that can be programmed using C programming language
« Reply #32 on: July 28, 2020, 12:57:10 am »
Nowadays it's *RARE* to find an MCU still in production that isn't supported by a C compiler, either GCC, SDCC, or the manufacturer's or a third party's proprietary C compiler. Many MCU families have more than one C compiler to choose from.

Yes, it's more than rare. It's even an oddity. And by "nowadays", you can make that at least the last 20 years.

Now a few "recent" niche MCUs do not have a C compiler, likely because their architecture is too simplistic for it. We can think of the Padauk MCUs, which are ultra simple (and ultra cheap), and come with a vendor-supplied compiler supporting some kind of very stripped-down C. But this is really one of the few exceptions, and it still looks like C, even though the language is a lot more limited.

I'd be curious to hear about other examples.

It's interesting to think (as a compiler writer) what would prevent something from running C reasonably efficiently.

Well, define "efficiently" ;)

I'd generally say that any missing feature should be able to be simulated with fewer than maybe 10 instructions of straight-line code. And preferably missing features should be things that are not required toooo frequently in practice.

Overall performance of compiled code should be within about a factor of two or three of the best hand-written assembly language, not 10x or 100x.

Quote
I guess it's mostly in terms of how much machine code would be needed for a given C construct. If the number of registers and arithmetic operations is very limited, both execution time and code size could be a major problem, especially if, on top of that, the amounts of data and code memory are also very limited.

Code size expansion can always be limited by calling subroutines (relatively small execution speed penalty) or bytecode (indirect threading) or direct threading. The bytecode interpreter in Apple (UCSD) Pascal was reasonably usable for a full environment with editor and compiler on a 1 MHz 6502, despite the bytecodes being generic and not at all tuned to the 6502. Woz's "SWEET16" provided much more efficient bytecode execution that could be interleaved with native 6502 code and still look near optimal today. Woz claimed a 10x slowdown for bytecode.
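For anyone who hasn't seen the technique, here is a bytecode interpreter in roughly its smallest form, as portable C with switch dispatch (a real indirect-threaded interpreter dispatches through a table of handler addresses instead, e.g. with computed goto, but the size/speed trade-off is the same idea):

Code: [Select]
#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const uint8_t *pc)
{
    int32_t stack[32], *sp = stack;
    for (;;) {
        switch (*pc++) {                              /* fetch and dispatch */
        case OP_PUSH:  *sp++ = *pc++;          break; /* next byte is a literal */
        case OP_ADD:   --sp; sp[-1] += sp[0];  break;
        case OP_PRINT: printf("%ld\n", (long)sp[-1]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    static const uint8_t prog[] =
        { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(prog);   /* prints 5; the whole program is 7 bytes */
    return 0;
}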

I've got a compilation scheme for 6502 that allows using all 256 bytes of Zero Page as "registers" to hold C local variables and function arguments and globals, with 16 bit and 32 bit operations such as "Rn = Rm" or "Rn += Rm" or "Rn *= Rm" coded in at most 7 bytes (fewer if the previous operation used the same Rn or Rm, so quite often 5 bytes). 16 bit add or subtract take about 2x as long as inlined code (which needs 13 bytes). 32 bit add and subtract only take 1.5x as long as the 25 bytes of code needed if done inline.

I think this is a pretty good compromise with very similar code size to UCSD P-system or Java JVM bytecode, but many times faster in execution speed.

I may have to find time to implement it one day :-)

Quote
One thing that would be a severe problem for a C compiler if missing (I guess it would even make it impossible) is indirect addressing. Right now I can't think of/remember a CPU that doesn't have indirect addressing, but I guess one may exist. And even if indirect addressing is supported, some CPUs may not support indirect calls, which would also make some C features very inefficient, or impossible to implement.

I mentioned both of those explicitly in the message you replied to (pointers, and calls through pointers).

Quote
Beyond technical feasibility, there may be other reasons why a C compiler wouldn't be available for some CPU. The CPU architecture could be so odd that writing a full C compiler for it, or a back-end for an existing one, could be a very tough endeavor that the vendor is not ready/able to take on; and if, on top of that, said CPU has limited performance and memory, the vendor may decide that just providing an assembler instead would be much easier and make more sense.

My message was about *what* would make an architecture odd. I think I covered most of it.

One thing I didn't mention is the architectures based around a "current page", such as the PIC and PDP-8. That's annoying, but it can be worked around provided you can put fully general pointers in the current page and indirect via them.
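A sketch of that workaround, modelled in plain C rather than real device code (the FSR/INDF names are borrowed from the mid-range PICs' indirection pair; here they are ordinary variables standing in for the hardware registers):

Code: [Select]
#include <stdint.h>

static uint8_t mem[256];  /* modelled data memory */
static uint8_t FSR;       /* modelled pointer ("file select") register */
#define INDF (mem[FSR])   /* every access to INDF goes through FSR */

/* what a compiler might emit for "return *p": */
uint8_t load_indirect(uint8_t p)
{
    FSR = p;       /* put the pointer where the hardware can use it */
    return INDF;   /* one indirect access, regardless of the current page */
}

/* ...and for "*p = v": */
void store_indirect(uint8_t p, uint8_t v)
{
    FSR = p;
    INDF = v;
}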

Quote
Taking the Padauk example again, I haven't worked with them, but for those who have, what is, in your opinion, the main reason why they didn't provide a complete C compiler, but some kind of simplified C instead?

It's a while since I looked at the Padauk documentation, but I seem to recall it was very similar in design to the 8-bit PICs.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: nz
The first microprocessors which were really C-friendly were the M68000, ARM, and MIPS (all of which are also, coincidentally, 32 bit).

You're forgetting the LSI-11, though that wasn't a single chip.

Also around the same era as the 68k was the Z8000, which ran a port or workalike of Unix. IIRC the architecture was decent.

I don't forget it at all.

I used (and maintained and helped develop compilers for) the PDP 11/34 and 11/70 but I've never in my life seen an LSI-11 machine. I bet most people haven't. It came out about the same time as the 68k but was both less capable and much more expensive.

The PDP-11/LSI-11 is just slightly better than the M6809 or 8088 in that all 8 registers are 16 bit. Except it's more like 7 registers because the PC is a general register. The addressing modes are very similar. 68k and z8000 (which I also helped write a compiler for) have 16 registers, similar to VAX and ARM (and AMD64, 25 years later).
 

Offline greenpossum

  • Frequent Contributor
  • **
  • Posts: 408
  • Country: au
The LSI 11 came out in 1975 and predated my exposure to Unix. I programmed for RT-11 on it. So it was already around when the 6809 came out.

NS also made a 32000 processor but that has disappeared into history.
« Last Edit: July 28, 2020, 01:23:22 am by greenpossum »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: nz
The LSI 11 came out in 1975 and predated my exposure to Unix. I programmed for RT-11 on it. So it was already around when the 6809 came out.

NS also made a 32000 processor but that has disappeared into history.

I used an Unix emulator (Eunice) on VAX/VMS but my first real Unix was on a Zilog System 8000 (with Z8000 chip). Which is why I was part of a team writing a Modula-2 compiler for it and the VAX.

We also evaluated the NS 32016 at the same time, but it was less capable. As I recall the promised 32032 looked much better but was late.
 

Online Ian.M

  • Super Contributor
  • ***
  • Posts: 12907
I'm quite fond of the Motorola 68K architecture, and specifically the 68008, which, while being a full 16/32-bit 68000 internally, is externally a cut-down 8-bit-bus variant that's no harder to implement than any other 8-bit CPU, and easier than many. The only fly in the ointment is its limited address space: the DIP-package 68008 can only access one megabyte, because it's only got 20 address lines. The PLCC package brings out two additional lines, for a four-megabyte address space.

Here's a 68008 on a breadboard running uCLinux: http://www.bigmessowires.com/68-katy/

Although the Motorola 68008 is no longer in production, PDIP-package 'pulls' or N.O.S. parts are still fairly easy to obtain at a reasonable price.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: nz
I'm quite fond of the Motorola 68K architecture, and specifically the 68008, which, while being a full 16/32-bit 68000 internally, is externally a cut-down 8-bit-bus variant that's no harder to implement than any other 8-bit CPU, and easier than many. The only fly in the ointment is its limited address space: the DIP-package 68008 can only access one megabyte, because it's only got 20 address lines. The PLCC package brings out two additional lines, for a four-megabyte address space.

Here's a 68008 on a breadboard running uCLinux: http://www.bigmessowires.com/68-katy/

Although the Motorola 68008 is no longer in production, PDIP-package 'pulls' or N.O.S. parts are still fairly easy to obtain at a reasonable price.

If I understand it correctly, this ColdFire processor sells for $16 one-off, runs at 250 MHz, has 64 KB of internal RAM and an external bus for ROM + SDRAM with 32 address lines and 8 data lines. https://www.nxp.com/docs/en/data-sheet/MCF54418.pdf

The instruction set differences between 68008 and ColdFire are pretty minimal -- some missing instructions and addressing modes, but I think ColdFire programs would run on 68008?

There are no PDIP packages, but you can get cheap breakout boards to convert LQFP to DIP.
 

Offline greenpossum

  • Frequent Contributor
  • **
  • Posts: 408
  • Country: au
Except it's more like 7 registers because the PC is a general register.

Only in the sense that the same addressing modes apply to R7, but they turn out to do something relevant to PC operations. You couldn't use another register as the PC. For example, load immediate was actually load indirect via R7 with post-increment, skipping over the constant. As you know, there was no dedicated SP, but usually R6 was used. In the context of Unix, R0 and R1 were usually used for intermediates and function results, and R5 was usually the frame pointer, so that left three registers for locals. You would know about the now-ignored register keyword in C. But the indirect-with-offset modes were perfectly suited to C stack variables. Also, I have always wondered if the PDP-11 architecture inspired the increment and decrement operators.
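The classic exhibit for that suspicion is a string copy, where each *d++ = *s++ compiles to a single PDP-11 autoincrement move, essentially MOVB (R0)+,(R1)+. (Ritchie later wrote that ++ and -- actually predate the PDP-11, coming from B on the PDP-7, but the fit is certainly striking.)

Code: [Select]
/* each iteration is one byte move with two autoincrements plus a branch
   on the PDP-11: MOVB (R0)+,(R1)+ / BNE loop */
char *copy(char *dst, const char *src)
{
    char *d = dst;
    while ((*d++ = *src++) != '\0')
        ;
    return dst;
}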

R&T estimated that using C for Unix incurred an overhead of about 20% in code size compared to assembler. That's why I regard the PDP-11 as an excellent target. The as manual in V7 Unix was subtitled "The ultimate dead language". To this day, my approach is C first if available, throwing in a bit more memory and speed if necessary, and writing the crucial or unavoidable parts in assembler.

As for Padauk's C compiler, they probably didn't expect the typical customer to write anything complicated. The IDE supplies blocks of prefabricated code you can paste in. But of course the OSS Padauk developers have bigger plans for it.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: nz
Except it's more like 7 registers because the PC is a general register.

Only in the sense that the same addressing modes apply to R7, but they turn out to do something relevant to PC operations. You couldn't use another register as the PC. For example, load immediate was actually load indirect via R7 with post-increment, skipping over the constant.

My point is that you have r0..r7 but r7 isn't a useful register because you can't store some random variable into it -- doing so would jump to that address.

In contrast even r6 is usable as a temporary variable inside a function, as long as you restore the old contents before anything that tries to access the stack.

Quote
As you know, there was no dedicated SP, but usually R6 was used. In the context of Unix, R0 and R1 were usually used for intermediates and function results, and R5 was usually the frame pointer, so that left three registers for locals.

Not quite right. Certain instructions *did* assume r6 was the stack pointer, in particular JSR and RTS. You could save the subroutine return address in any register, but its old value was pushed on the r6 stack (and popped by RTS).

Quote
R&T estimated that using C for Unix incurred an overhead of about 20% in code size compared to assembler. That's why I regard the PDP-11 as an excellent target.

It's not that I don't consider the PDP-11 to be an excellent target for C (relative to assembly language) -- it's that I don't consider the LSI-11 to be a microprocessor in the way the 68k is. The LSI-11 used 4 chips and cost $3000 (in the form of a PDP-11/03 with zero RAM and peripherals). The J-11 was the first proper PDP-11 microprocessor, in around 1980, by which time it had serious competition at much lower prices.
« Last Edit: July 28, 2020, 04:43:15 am by brucehoult »
 

Offline greenpossum

  • Frequent Contributor
  • **
  • Posts: 408
  • Country: au
it's that I don't consider the LSI-11 to be a microprocessor in the way the 68k is.

Ok, fair enough. But there are lots of places where lines can be drawn: one chip, affordable to some class of user, affordable for some application areas, etc.
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4219
  • Country: us
Quote
The first microprocessors which were really C-friendly were the M68000, ARM, and MIPS
Sparc and PPC, too...

Quote
I've never in my life seen an LSI-11 machine. I bet most people haven't. It came out about the same time as the 68k but was both less capable and much more expensive.

You're probably thinking of the DEC T-11, which was a single-chip implementation of the PDP-11 and came out in ~1982. The LSI-11 itself came out much earlier (originally a 4-to-5-chip set). The T-11 was apparently aimed at embedded use; WP says it was used in disk controllers and similar. It was cute: one of the first chips with 'clever' support for DRAM, using a multiplexed address bus that did even/odd bits rather than high/low.

The Heathkit H-11 personal computer used an LSI-11 (NOT the T-11), and wasn't horrendously expensive. The DEC Pro-3xx series of personal computers was based on the F-11 and J-11 single-package (though multi-chip) LSI implementations.
DEC, alas, didn't know how to sell either chips or personal computers, and the H-11 quickly got more expensive once you had to add any reasonable peripherals (QBus!), so none of those caught on.
 

Offline oPossum

  • Super Contributor
  • ***
  • Posts: 1424
  • Country: us
  • Very dangerous - may attack at any time
The Propeller chip long went without a C compiler, because it's such a weird chip; it was only programmable in a language called "Spin". I'm not sure if there is a C compiler for it now. I lost interest in the thing.

There has been a C compiler for the Propeller for about 10 years. I think Spin, assembly, and C can all be used together.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: nz
Quote
The first microprocessors which were really C-friendly were the M68000, ARM, and MIPS
Sparc and PPC, too...

And PA-RISC and ... yup.  Though PPC was 10 to 15 years after the others (POWER earlier of course).

Quote
Quote
I've never in my life seen an LSI-11 machine. I bet most people haven't. It came out about the same time as the 68k but was both less capable and much more expensive.

You're probably thinking of the DEC T-11, which was a single-chip implementation of the PDP-11 and came out in ~1982. The LSI-11 itself came out much earlier (originally a 4-to-5-chip set).

I guess it's kind of interleaved. LSI-11 in 1975, 8086 in 1978, M68k in 1979, the J-11, F-11, T-11 from 1980 on.

By the time they came out, PDP-11 just wasn't *interesting* unless you had legacy software to run.

Quote
The Heathkit H-11 personal computer used an LSI-11 (NOT the T-11), and wasn't horrendously expensive.

Good point. Do you know the price?

There was a time when I was *gagging* to buy a personal PDP-11 at a reasonable price. But then the PC/AT and Mac/Amiga/ST came out and it was ... whyyyyy?
 

Offline Canis Dirus Leidy

  • Regular Contributor
  • *
  • Posts: 216
  • Country: ru
You're probably thinking of the DEC T-11, which was a single-chip implementation of the PDP-11 and came out in ~1982.
Or, maybe, about 1801VMx CPUs. ;)
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: nz
Math is improving very fast, and the whole of computer science is a subset of math; thus I am afraid it will be *RARE* in 2030 to find CPUs and MPUs still in production that are supported by a C compiler.

I *strongly* disagree.

While more and more things are being done using vector/matrix processors and GPUs those are additions to the scalar C world, not replacements for it.

Every parallel task is mixed in with scalar work. As Gene Amdahl famously observed, if you have a task that is 99% parallelizable and 1% scalar, then the maximum speedup you can get with an infinitely parallel machine is 100x. If you want more than that, you need to make the scalar part go faster. And you have to minimize the overhead of the scalar and parallel parts talking to each other (this is one place where GPUs really fall down).
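Written out (the standard form of Amdahl's law, with p the parallelizable fraction and n the number of processors):

Code: [Select]
\[
  S(n) = \frac{1}{(1 - p) + p/n},
  \qquad
  \lim_{n \to \infty} S(n) = \frac{1}{1 - p}
\]
% with p = 0.99: S = 1 / 0.01 = 100, no matter how many processors you add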

Also see Tony Hoare in 1982: “I don't know what the language of the year 2000 will look like, but I know it will be called Fortran.”

Unusual for C.A.R. Hoare to be short-sighted. And since C got "restrict" (and C++ got template containers and algorithms), there is no advantage to FORTRAN over C/C++. But he's essentially right. (And people do still use FORTRAN on supercomputers a lot.)
 

Offline Canis Dirus Leidy

  • Regular Contributor
  • *
  • Posts: 216
  • Country: ru
By the way, has anyone mentioned the 8OD yet, an 8086-based, CPLD-assisted SBC? And the retroshields for the Arduino Mega.
 

Online mikerj

  • Super Contributor
  • ***
  • Posts: 3271
  • Country: gb
Math is improving very fast,

Really?  In what respect?
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: nz
As Gene Amdahl famously observed, if you have a task that is 99% parallelizable

You haven't got the point.

CPUs are going to multiple cores with a lot of built-in vector processing, but in order to use them efficiently we need "something" to distribute processes across them. This is actually one of the more difficult things in writing concurrent applications, because it is not enough to just split things into many processes; you have to make sure that these processes actually run concurrently, which means rethinking your algorithms, and that can be difficult (sometimes very, very difficult, practically impossible) with the current state of the art of programming languages like C.

Now, since most programmers don't derive any gratification from their own pain or humiliation, "rethinking algorithms" points in the direction of "functional programming", and not because "researching new things is so damn cool", but rather because it has already been demonstrated in practice that it can simplify by several orders of magnitude the programming of things that actually run concurrently.

Less or equal effort, better results!

I'm a big fan of functional programming, and have been ever since I read FORTRAN inventor John Backus' 1977 Turing Award lecture "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs". Which I read in, like, 1981 or something. I read Guy Steele's 1976-1979 series of papers "Lambda: The Ultimate Imperative", "Lambda: The Ultimate Declarative", "Lambda: The Ultimate GOTO", "Lambda: The Ultimate Opcode" soon after. And have re-read them a number of times since.

Saying that people will switch to using functional languages by 2030 (which I've been hearing since the early 80s) is possible (though dubious), but it is not at all the same thing as your claim that their machines will not have or not be capable of having C compilers.

People have been designing specialized computers optimized for Lisp, Smalltalk, Prolog and others since ... again ... the 1980s. What has happened every single time (so far)? It turns out that their functional language actually runs faster on a "general purpose" machine built for C and FORTRAN. Why? Partly because people implementing functional languages on standard hardware have been pretty ingenious. But mostly because there is so much competition and money spent on standard computers that they improve in performance faster.

 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4079
  • Country: nz
your claim that their machines will not have or not be capable of having C compilers.

I wrote that C will be very inefficient on future machines, and therefore likely not used, simply because it would be unlikely to be a commercial success.

You wrote the very straightforward and plain words "I am afraid it will be *RARE* in 2030 to find CPUs and MPUs still in production that are supported by a C compiler".

Which, I contend, is simply implausible given the history of the last 75 years of computing.


I also disagree with your later claim that C will be inefficient on future machines (because, by Amdahl's law, that would make them slow on real-world parallel tasks), but let's be clear that this is a different and much weaker claim in the first place.

Quote
The new research processors lean on enough high-level parallelism that you can suspend the threads that are waiting for data from memory and fill your execution units with instructions from others, which is very efficient, and flexible enough even for tensor processing ... the problem with such designs is just that C programs tend to have few busy threads.

You are now switching from a speed argument to a throughput argument.

There are machines designed to run C programs that do exactly this already. Intel hyperthreading, obviously, in a small way, or xCORE microcontrollers.

Anyway, we are far from the topic of this thread. You've given your opinion, I've given mine, we're just boring everyone else. I'm out.
 
The following users thanked this post: MK14

