Author Topic: Microprocessor (MPU 8/16) that can be programmed using C programming language  (Read 13216 times)

0 Members and 1 Guest are viewing this topic.

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
Which is partly why SWEET16, a bytecode-like 16-bit virtual machine, was created in/for the Apple by Steve Wozniak. It somewhat overcomes the 6502's limitations.
https://en.wikipedia.org/wiki/SWEET16

"runs at about one-tenth the speed of the equivalent native 6502 code", see?

It was removed very soon, with the Autostart ROM, in 1978 (79?) IIRC. Not that anybody was using it anyway. The big loss, if you ask me, was the mini-assembler (F666G), also gone with that ROM "upgrade".

I liked and used SWEET16 and included a copy in my own programs. It was just over 300 bytes of code.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
These days processors are usually so fast that a speed loss of /10 (or even /100) is not necessarily a big show-stopper. Hence the popularity these days of relatively inefficient scripting/interpreted languages such as Python.

Yes, Python and other similar languages are very very slow.

Javascript has become so important and critical that multiple organizations have invested huge money in analyzing and compiling it.

Quote
But the 6502 was relatively slow and a bit weak, such that a speed loss of /10 would be quite devastating. Especially if it was on top of a BASIC interpreter, which already slows things down by a factor of x100 or more compared to hand-crafted, well-optimised machine code (assembly language).

I don't know how you'd get a speed loss of 10x ON TOP OF the speed loss of BASIC. I mean .. ok .. you could write a bytecode interpreter in BASIC.

SWEET16 is 10x slower than machine code, but it's probably at least 10x faster than Integer BASIC, which was itself several times faster than AppleSoft.

SWEET16 is also several times faster than the UCSD P-system bytecode interpreter (Apple Pascal), which was actually quite a usable system. It also made it much more natural and easy to write critical functions in assembly language and call them from Pascal.

Quote
What "sped up" the 6502 was the lack of hardware floating point (in home computers in general, at that time). Floating point took so long (on a typical 1 MHz 6502, or an equivalent CPU, e.g. the Z80) that the relative slowness of a BASIC interpreter didn't really matter that much.

True.

I think a floating point add probably took around 200 clock cycles.

Software floating point add, subtract, and multiply on the AVR take around 80 clock cycles each.
 
The following users thanked this post: MK14

Offline GeorgeOfTheJungle

  • Super Contributor
  • ***
  • !
  • Posts: 2699
  • Country: tr
Which is partly why SWEET16, a bytecode-like 16-bit virtual machine, was created in/for the Apple by Steve Wozniak. It somewhat overcomes the 6502's limitations.
https://en.wikipedia.org/wiki/SWEET16

"runs at about one-tenth the speed of the equivalent native 6502 code", see?

It was removed very soon, with the Autostart ROM, in 1978 (79?) IIRC. Not that anybody was using it anyway. The big loss, if you ask me, was the mini-assembler (F666G), also gone with that ROM "upgrade".

I liked and used SWEET16 and included a copy in my own programs. It was just over 300 bytes of code.

More or less, what year was that?

At the time I didn't even know what it was for :-) (*) and shortly after the mini-assembler was gone, I had to buy me another one and got Mike Westerfield's ORCA/M, and learned to do lots of things with macros then.

(*) The red book had a listing, but in 1978 it was still too soon for me to understand what was the point.
The further a society drifts from truth, the more it will hate those who speak it.
 

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
You're getting very far from "the 6502 is faster here". You're at "a particular well equipped 6502 computer is faster than a particular poorly-equipped AVR computer".

Very true. I was being, what's the right word, facetious, or something like that.
In short, I was showing that if you contrive the situation enough, you can show any 'nonsense' (fake news) you want. I hear statistics is a good tool for doing that.

I'm pretty sure I can write a 6502 emulator for the AVR which will run faster than a real 6502. Using that external SRAM interface.

Writing emulators, JITs and compilers is my job and specialty.

I bet you could as well. Arguably, with some improvements (which were probably possible at the time), the odd extra register here and there, making some of the 8-bit registers part- or full-time 16-bit, and making the instruction set more orthogonal, would improve it no end.
Accumulators A and B, 8-bit, usable together as a single 16-bit entity. X and Y index registers, but make them (and the stack pointer) a full 16 bits. Maybe a C & D 8/16-bit register set as well.
More/better addressing modes. I guess I'm turning the 6502 into another 6809.

The 6502 is indeed not suited to any language that requires 16 or 32 bit variables. Or functions that are required to work if called recursively.

I consider it weak even with 8-bit-only values, because the 256-byte range limit (in one go), the index registers being only 8 bits, is a real pain when you code in 6502 a lot.

My code generation scheme is 7 bytes and runs in 44 clock cycles. (ldx #REGA; ldy #REGB; jsr ADD16). X and Y are not modified by ADD16 so if the previous or next operations use A or B then those registers don't need to be reloaded.

That sounds very impressive, and clever.
Using the X/Y register pair as 'pretend' accumulators A and B (the 6502's single accumulator limits/hinders its assembly language) is a neat trick.
« Last Edit: August 06, 2020, 11:33:07 am by MK14 »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
I liked and used SWEET16 and included a copy in my own programs. It was just over 300 bytes of code.

More or less, what year was that?

While I was at university: 1981-1984.

Quote
At the time I didn't even know what it was for :-) (*) and shortly after the mini-assembler was gone, I had to buy me another one and got Mike Westerfield's ORCA/M, and learned to do lots of things with macros then.

(*) The red book had a listing, but in 1978 it was still too soon for me to understand what was the point.

The Apple ][ I had access to was a EuroPlus with AppleSoft ROMs. And had a green manual (which I still have). But the university library had the November 1977 BYTE magazine with the SWEET16 source code.

Early in my 2nd year we were studying 6502 machine code using an assembler on the VAX and Rockwell AIM65 kits in a lab. The lab was only available a few hours a day. One evening, frustrated at not being able to test my 6502 code, I wrote a 6502 emulator in VAX Pascal. Before I went home at 3 AM or so I sent an email about it to several friends so they could try it out. I came in the next afternoon to find about 50 people using it! Within a few days the lecturer said output from my emulator was acceptable for people's assignment submissions. There was only one bug: I'd got the sense of the carry flag reversed for the SBC instruction.

I also wrote a (very partial) VAX emulator for the Apple ][ (in assembly language) during the summer holidays. Don't even ask the execution speed :-) :-)
 
The following users thanked this post: newbrain

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
My code generation scheme is 7 bytes and runs in 44 clock cycles. (ldx #REGA; ldy #REGB; jsr ADD16). X and Y are not modified by ADD16 so if the previous or next operations use A or B then those registers don't need to be reloaded.

That sounds very impressive, and clever.
Using the X/Y register pair as 'pretend' accumulators A and B (the 6502's single accumulator limits/hinders its assembly language) is a neat trick.

In this scheme, X and Y are not accumulators, but pointers to the accumulators, which are groups of 2 or 4 bytes located anywhere in Zero Page.

ADD16:
  clc
  lda $0000,y
  adc $00,x
  sta $00,x
  lda $0001,y
  adc $01,x
  sta $01,x
  rts
 
If variable A is in locations $05 and $06 and B is in locations $87 and $88 then you do A += B with:

  ldx #$05
  ldy #$87
  jsr ADD16

You can have up to 128 such 16 bit variables or 64 32 bit variables.

ADD32:
  clc
  lda $0000,y
  adc $00,x
  sta $00,x
  lda $0001,y
  adc $01,x
  sta $01,x
  lda $0002,y
  adc $02,x
  sta $02,x
  lda $0003,y
  adc $03,x
  sta $03,x
  rts
 
The following users thanked this post: MK14

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
But the 6502 was relatively slow and a bit weak, such that a speed loss of /10 would be quite devastating. Especially if it was on top of a BASIC interpreter, which already slows things down by a factor of x100 or more compared to hand-crafted, well-optimised machine code (assembly language).

I don't know how you'd get a speed loss of 10x ON TOP OF the speed loss of BASIC. I mean .. ok .. you could write a bytecode interpreter in BASIC.

Well, let me give you an actual example. I can't remember the exact details, so take it as illustrative, but similar things really happened in practice.

A well-selling calculator was released, quite some time ago, by a major manufacturer. It was a popular, very powerful programmable/scientific calculator with a huge number of built-in functions, and lots of programs were written for it.

Some time later the CPU becomes unavailable, and much better ones are around. So the calculator manufacturer releases an updated version of the calculator which really (semi-secretly) runs an emulator on a modern chip, pretending to be the old/obsolete MPU/MCU.

Then even later (I'm not sure how many iterations this has gone on for in practice), an even newer release of the calculator comes out, with a somewhat fast modern Arm core on it, which emulates the previous CPU, which itself emulates an even older (original) CPU. In order to be the calculator.

Under absolutely no circumstances will I reveal the names of those calculator manufacturers (ok, it was HP, and I think Casio, maybe TI as well).

So the big inefficiencies of these (possibly multiply-stacked, I'm not sure off-hand) emulators show how excessively fast these modern Arm cores are these days.

I guess if you have a big, complicated calculator program (firmware) that takes years (and lots of rare/expensive software engineering) to create, and that by now has had most or all of its bugs discovered/removed, I can understand why they would go that route.
 

Offline newbrain

  • Super Contributor
  • ***
  • Posts: 1720
  • Country: se
I also wrote a (very partial) VAX emulator for the Apple ][ (in assembly language) during the summer holidays. Don't even ask the execution speed :-) :-)
The question almost asks itself:
Did you try your 6502 emulator on the VAX emulator?

Sounds a bit like my master's degree dissertation:
Prof: Make a dissertation on Transputers, occam and graphics.
Me: Wonderful! Do we have any HW in the lab?
Prof: Nope.
Me: Ok, can we get some?
Prof: Nope.
Me: Ok, I'm going to write an occam compiler and emulator, with distributed processing on the LAN.
Prof: Go ahead!

Many years later, I learned from a co-worker (who matched my name with his high-school memories of CS classes) that the emulator had been used in many schools for a long time.
Nandemo wa shiranai wa yo, shitteru koto dake.
 
The following users thanked this post: MK14

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
That sounds very impressive, and clever.
Using the X/Y register pair as 'pretend' accumulators A and B (the 6502's single accumulator limits/hinders its assembly language) is a neat trick.

In this scheme, X and Y are not accumulators, but pointers to the accumulators, which are groups of 2 or 4 bytes located anywhere in Zero Page.

ADD16:
  clc
  lda $0000,y
  adc $00,x
  sta $00,x
  lda $0001,y
  adc $01,x
  sta $01,x
  rts
 
If variable A is in locations $05 and $06 and B is in locations $87 and $88 then you do A += B with:

  ldx #$05
  ldy #$87
  jsr ADD16

You can have up to 128 such 16 bit variables or 64 32 bit variables.

ADD32:
  clc
  lda $0000,y
  adc $00,x
  sta $00,x
  lda $0001,y
  adc $01,x
  sta $01,x
  lda $0002,y
  adc $02,x
  sta $02,x
  lda $0003,y
  adc $03,x
  sta $03,x
  rts

Sorry, I'd got mixed up there.  :-[
(Unrelated, not really why I got mixed up.) Because 8-bit CPUs' index registers are usually 16 bits (the 6502 is an exception), I think some schemes do temporarily use the X (and, if available, Y) index registers as accumulators, just to gain a few cycles, given the limitations of an 8-bit accumulator and the fiddling about needed to perform 16-bit operations.
E.g. INX would then be a 16-bit INCA-'like' instruction (NOT on the 6502, as its index registers are 8-bit), and LDX/STX would be as if you had a 16-bit accumulator for memory transfers, etc.

In defence of the 6502, the zero page mechanism was a neat trick which, to a (slow, limited) extent, was as if it had 256 extra 8-bit accumulators, or 128 16-bit index registers, etc.

In some respects the 6502 was an early RISC processor, even though technically speaking it was a CISC one.

tl;dr
If there had been modern 500 MHz 6502s, they would probably be somewhat powerful and fast, in a kind of quirky way. In some respects the modern Arm cores came out of an upcoming, upgraded 6502 processor, which apparently Acorn knew about (they visited where it was being designed, and discussed it) before inventing the new Arm chips.
« Last Edit: August 06, 2020, 12:17:32 pm by MK14 »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
Then even later (I'm not sure how many iterations this has gone on for in practice), an even newer release of the calculator comes out, with a somewhat fast modern Arm core on it, which emulates the previous CPU, which itself emulates an even older (original) CPU. In order to be the calculator.

Well, yes, sure, you can nest emulators.

It's quite ok to do this because the original machine was so much slower than the new ones, but the program was carefully written to run satisfactorily on it.

I may or may not have run "][ in a Mac", on "Basilisk II" (compiled for PowerPC), on the built in "Rosetta" PowerPC emulator on an Intel Mac. Because I could.

And that's fine if you have all those accumulated over the years.

At some point it becomes easier just to write a 6502 emulator for that Intel machine and skip the intermediate 68000 and PowerPC emulations.

Apple's new ARM Macs will have an Intel emulator, but that emulator won't run Rosetta inside it -- Rosetta hasn't been supported for many years already. So I'd need to find or write my own PowerPC emulator. It's *definitely* much easier to write a 6502 emulator than a PowerPC emulator.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
I also wrote a (very partial) VAX emulator for the Apple ][ (in assembly language) during the summer holidays. Don't even ask the execution speed :-) :-)
The question almost asks itself:
Did you try your 6502 emulator on the VAX emulator?

No. I hadn't implemented enough of the VAX instruction set to run it.

However the VAX partial-emulator ran fine on the 6502 emulator on the VAX.

Quote
Many years later, I learned from a co-worker (who matched my name with his high-school memories of CS classes) that the emulator had been used in many schools for a long time.

Nice!
 
The following users thanked this post: newbrain

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
In some respects the 6502 was an early RISC processor, even though technically speaking it was a CISC one.

It's neither. It's more of a minimal instruction set computer (I can find a video of Sophie Wilson saying so), much like the PIC, 8051, DEC PDP-8, Data General Nova and others among the earliest microcomputers and minicomputers.

[That didn't really happen the same way with mainframes (aka "computers" from 1940 to 1965) because the early ones were built by or for "money is no object!" government organizations modelling atomic explosions or whatever. It was only in the early minicomputer and microcomputer eras that "any computer is better than no computer" was the rule. And some IBM machines such as the 1130 or the bottom-end System/360s too, I guess]
 
The following users thanked this post: MK14

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Then even later (I'm not sure how many iterations this has gone on for in practice), an even newer release of the calculator comes out, with a somewhat fast modern Arm core on it, which emulates the previous CPU, which itself emulates an even older (original) CPU. In order to be the calculator.

Well, yes, sure, you can nest emulators.

It's quite ok to do this because the original machine was so much slower than the new ones, but the program was carefully written to run satisfactorily on it.

I may or may not have run "][ in a Mac", on "Basilisk II" (compiled for PowerPC), on the built in "Rosetta" PowerPC emulator on an Intel Mac. Because I could.

And that's fine if you have all those accumulated over the years.

At some point it becomes easier just to write a 6502 emulator for that Intel machine and skip the intermediate 68000 and PowerPC emulations.

Apple's new ARM Macs will have an Intel emulator, but that emulator won't run Rosetta inside it -- Rosetta hasn't been supported for many years already. So I'd need to find or write my own PowerPC emulator. It's *definitely* much easier to write a 6502 emulator than a PowerPC emulator.

Opinions may vary, but arguably Intel (and AMD) have been running an x86 'emulator' on their x86 processors for a very long time.
I.e. the instruction decoders accept x86 code and 'translate' it on the fly into tiny micro-ops, which then run on RISC-like execution hardware.
In some ways (analogy-wise), out-of-order execution (and maybe some other speedup mechanisms) is a bit like having a real-time compiler-optimiser step inside the CPU as well.

What could get fun/interesting is if you had that 'cascade' of nested emulators you just described, and rarely/occasionally it produced weird/bugged results. Now find where the fault lies.

Is it the hardware, the original program (a bug), or one of the later emulators at fault?

To make it harder, the 'instruction' which sets off the series of issues that results in a bug might have run many thousands (or more) of instructions in the past. E.g. because a floating point value in one of the emulators did its rounding ever so slightly wrong (and ultra rarely).
Then, some hundreds of thousands of instructions later, the bug emerges, because the slightly incorrect floating point value has finally been loaded back from memory, is being used, and has caused some kind of floating point exception: overflow, divide by zero, NaN, etc.
Or even just a subtle value change, big enough to be a bug but small enough to be a pain to detect.
Example: the original Pentium FDIV bug. (Although that was a hardware rather than a software type of bug. On the other hand, it was a lookup table (somewhat software-like) in the Pentium's FPU, holding wrong values, which caused the bug in the first place.)
« Last Edit: August 06, 2020, 01:21:12 pm by MK14 »
 

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
In some respects the 6502 was an early RISC processor, even though technically speaking it was a CISC one.

It's neither. It's more of a minimal instruction set computer (I can find a video of Sophie Wilson saying so), much like the PIC, 8051, DEC PDP-8, Data General Nova and others among the earliest microcomputers and minicomputers.

[That didn't really happen the same way with mainframes (aka "computers" from 1940 to 1965) because the early ones were built by or for "money is no object!" government organizations modelling atomic explosions or whatever. It was only in the early minicomputer and microcomputer eras that "any computer is better than no computer" was the rule. And some IBM machines such as the 1130 or the bottom-end System/360s too, I guess]

You're right, although the 6502 came out of the original Motorola 6800 (I think Chuck Peddle was significantly involved). It was a sort of "cut costs to the minimum, but try to maintain speed and functionality".
So Chuck Peddle left Motorola, which had rejected his dramatically lower-priced '6502' idea, and he (and others) created a new company. The rest is history.

There were some similar microprocessors of a similar era which might have beaten it, at least in some cost-optimisation senses. E.g. the Z80, with its half-sized 4-bit ALU which, because of the Z80's relatively large number of clock cycles per instruction, could fit the two passes in time-wise.
Also, Motorola did the extremely cost-reduced (yet 8-bit to the outside world) internally 1-bit (i.e. bit-serial) MC6804P2, which Hitachi and maybe others second-sourced.
Amazingly, I think it had a built-in self-test capability. Presumably to reduce costs by cutting the time needed to test the CPU on the production line.
I've heard that testing can be the most expensive part of making an IC, because it is time-consuming (on a busy production line) and needs expensive personnel, very expensive equipment, and potentially very expensive custom test hardware/jigs.

tl;dr
Chuck Peddle realised that a rather cheap 6502 would sell like hot cakes and be designed into all sorts of new products. He was right!
But he didn't seem to be in a position to maintain that market dominance. If he/they had, I guess I'd be typing this on a 6 GHz x6502, with floating point and a full 64-bit improved (6502) instruction set.
« Last Edit: August 06, 2020, 02:46:04 pm by MK14 »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
There were some similar microprocessors of a similar era which might have beaten it, at least in some cost-optimisation senses. E.g. the Z80, with its half-sized 4-bit ALU which, because of the Z80's relatively large number of clock cycles per instruction, could fit the two passes in time-wise.
Also, Motorola did the extremely cost-reduced (yet 8-bit to the outside world) internally 1-bit (i.e. bit-serial) MC6804P2, which Hitachi and maybe others second-sourced.

I knew about the Z80 4 bit ALU. I didn't know about the MC6804P2.

The smallest FPGA implementation of RISC-V, Olof Kindgren's "SERV", is bit serial. Of course this makes it very slow, with most instructions taking 32 clock cycles (except jumps, load/store, SLT, shifts) but it seems to be popular. It runs at 50 MHz in an ICE40 and 220 MHz on Artix-7 so that's still around 1 to 7 32-bit MIPS https://github.com/olofk/serv
 
The following users thanked this post: newbrain, MK14

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
I knew about the Z80 4 bit ALU. I didn't know about the MC6804P2.

The smallest FPGA implementation of RISC-V, Olof Kindgren's "SERV", is bit serial. Of course this makes it very slow, with most instructions taking 32 clock cycles (except jumps, load/store, SLT, shifts) but it seems to be popular. It runs at 50 MHz in an ICE40 and 220 MHz on Artix-7 so that's still around 1 to 7 32-bit MIPS https://github.com/olofk/serv

Thanks.
That is quite amazing (I enjoyed the video about it from the designer): a RISC-V soft-core, bit-serial CPU in around 250 LEs (FPGA type and options dependent).
A MIPS or so is fast enough for many applications, e.g. sensing, where the FPGA performs the bulk processing and the slow CPU only needs to do some diagnostics, accept some commands from outside, and send back slower sensor data (very fast sensor processing/communication via the CPU would need another solution).

The slowest, 1-MIPS version is like having a small-memory/disk version of a (previously) massive/expensive VAX-11/780 for each sensor/task/section of the FPGA. In its day, the VAX-11/780 could keep a large number of users reasonably happy as regards their computing requirements.
« Last Edit: August 07, 2020, 12:48:50 am by MK14 »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
(I enjoyed the Video about it, from the designer)

Olof is a funny guy. And smart. And funny.
 
The following users thanked this post: MK14

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Quote
You use a modern, powerful Arm processor, with its potentially highly complicated peripheral set. Amazingly powerful peripherals, yes. But they can have 2,000-page manuals, which make for very heavy reading.
If what you're after is the equivalent of an 80s 8-bit micro, you can ignore most of those 2,000 pages and still have the benefit of more memory and cheaper boards... (plus, you know, several "beginner environments" to fall back on, should you want to do a printf() without having to read the section on the UART, which requires understanding the clock system and the power managers and ...)
 

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Quote
You use a modern, powerful Arm processor, with its potentially highly complicated peripheral set. Amazingly powerful peripherals, yes. But they can have 2,000-page manuals, which make for very heavy reading.
If what you're after is the equivalent of an 80s 8-bit micro, you can ignore most of those 2,000 pages and still have the benefit of more memory and cheaper boards... (plus, you know, several "beginner environments" to fall back on, should you want to do a printf() without having to read the section on the UART, which requires understanding the clock system and the power managers and ...)

Yes, that can be the case. It depends on what you are trying to achieve with whatever retro/vintage computer interests one may have.
Some people are happy with emulators, or running modern stuff at full or greatly reduced clock speeds. Some insist on the real thing, from the past. Others again are happy to build a 'modern' retro/vintage computer out of whatever parts are available.

E.g. one fondly remembers the BBC Micro games one used to play, such as Elite. Maybe you have an old/original dusty one in the attic, or want to splash out around £200 (very approx.) on a used eBay one, spending the odd weekend here and there cleaning it, recapping it and possibly repairing its ageing power supply.

Or just click here:
http://bbcmicro.co.uk//jsbeeb/play.php?autoboot&disc=http://bbcmicro.co.uk//gameimg/discs/366/Disc021-EliteD.ssd&noseek

http://bbcmicro.co.uk/game.php?id=366

Or here, for a massive A .. Z list of online-playable (even downloadable) games:
http://bbcmicro.co.uk/index.php

Other similar sites are available as well, e.g.
https://bbc.godbolt.org/
« Last Edit: August 07, 2020, 10:41:36 am by MK14 »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
Being able to do full-system, full-speed emulation of old machines in freaking JAVASCRIPT is, I think, one of the most ridiculous aspects of current PCs (and phones).
 
The following users thanked this post: MK14

Offline MK14

  • Super Contributor
  • ***
  • Posts: 4540
  • Country: gb
Being able to do full-system, full-speed emulation of old machines in freaking JAVASCRIPT is, I think, one of the most ridiculous aspects of current PCs (and phones).

It is somewhat crazy!
Maybe in the future you'll be able to ask your print-anything home 3D printer to make a genuine (clone/copy) vintage computer, play with it over the weekend, get fed up with it, then put it into the recycle-anything chute and move on to your next activity.

A bit like how today you can fairly rapidly print a colour A4/A3 sheet of any photograph, or a 3D model of something in some kind of plastic (meltable or UV-curing) for your next custom widget pencil holder that fits EXACTLY under the specific model of monitor you use.

Some people have done FPGA designs (Verilog/VHDL) of entire ancient mainframe computers (some are detailed or mentioned on this forum), such as Cray supercomputers. In some cases they are even freely downloadable.
So we are getting there, bit by bit.

I suppose you could take a suitable (e.g. CP/M) hackerspace PCB design + BOM, send it to a Chinese PCB + assembly service, and receive the completed CP/M clone computer board a few weeks later. If you can afford it, and don't want to make it yourself.
 

Offline bson

  • Supporter
  • ****
  • Posts: 2270
  • Country: us
I also wrote a (very partial) VAX emulator for the Apple ][ (in assembly language) during the summer holidays. Don't even ask the execution speed :-) :-)
Compared to the VAX-11/780 I bet it was fast.  That thing was a pig!  ::)

(Yeah, okay, maybe it was 20x as fast as a 6502, which was pretty pathetic for the amount of iron.)

 

