Author Topic: Emulation programming approach thoughts...  (Read 3528 times)


Offline alank2Topic starter

  • Super Contributor
  • ***
  • Posts: 2183
Emulation programming approach thoughts...
« on: June 16, 2017, 02:06:56 am »
I am thinking about writing some emulation for an 8080 type of project.  I've always loved the idea of one piece of equipment emulating another, but I've never thought about what type of approach to take to write code to do it.

Here are my thoughts on it; please suggest refinements or point out anything I've got wrong or could improve.

First, I plan on implementing each IC as what I will call a device.  I plan on writing it in C, but I'm going to take a class-style approach of putting all of a device's internal data into a structure and calling the function with a pointer to that structure, like "i8080_clk(i8080statetype *i8080state);"  This would allow me to reuse the function in case I wanted multiple devices of the same type.

The structure would contain all the internal data for the device and also what I am going to call pins.  A pin would be the device's interface to the outside world.  A pin will be a struct itself containing four variables: direction, output, invertoutput, and netid.  Direction and output are set by the device itself during the course of operation.  Invertoutput and netid are configured by the initial process that creates what I am calling a net to link devices together.

A net has the properties type, invertoutput, state, laststate, and a list of pointers to all the device pins that are connected to it.  Type is complicated in that it can be a net type (pullup or pulldown) or a gate type (and, or, xor).  A pullup net, for example, starts with a state of 1 and goes low if any pin attached to it is set to output low.  If it finds two outputs set differently (a short!), it would cause a debug trap.  Pulldown starts low and looks for output-high pins to set its state high.  Typically a net should only have one device with its pin set as output at any given time.  The gate types do what you would expect: evaluate all the attached device pins and determine an output.
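To make the idea concrete, here is a minimal C sketch of those structures.  All names and field layouts here are illustrative assumptions, not from an existing codebase:

```c
#include <stdint.h>

/* One external connection point on a device. */
typedef struct {
    uint8_t direction;     /* 0 = input, 1 = output; set by the device   */
    uint8_t output;        /* level driven when direction is output      */
    uint8_t invertoutput;  /* configured when the nets are wired up      */
    int     netid;         /* index of the net this pin is attached to   */
} pin_t;

/* Net behaviour: plain pullup/pulldown nets, or gate-style nets that
   combine all attached outputs. */
typedef enum { NET_PULLUP, NET_PULLDOWN, NET_AND, NET_OR, NET_XOR } nettype_t;

#define MAX_PINS_PER_NET 8

typedef struct {
    nettype_t type;
    uint8_t   state;       /* value all attached input pins will see */
    uint8_t   laststate;
    int       npins;
    pin_t    *pins[MAX_PINS_PER_NET];
} net_t;

/* Device state: pins plus internal registers, one struct per instance
   so the same clk function can serve several identical devices. */
typedef struct {
    pin_t    pins[40];     /* e.g. one per package pin */
    uint16_t pc;           /* ...internal registers would go here... */
} i8080state_t;

void i8080_clk(i8080state_t *s);   /* declared only; body elsewhere */
```

Keeping all state behind the pointer is what makes multiple instances of the same device type cheap: every device of one type shares a single clk function.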

One issue I was concerned with is emulating parallel hardware with sequential processing.  If you have two devices driven by the same clock, you can't actually run them at the same time; you must call them sequentially:

if (sharedclock)
  {
    i8080_clk(&i8080state);
    device2_clk(&devicestate);
  }

The issue here is that each may need to make decisions based on pin inputs that should not change while each function executes.  I originally thought I'd need something like a "state" and "nextstate" pair, but I think the nets that link pins together can accomplish the same thing.  If I change the code to:

if (sharedclock)
  {
    i8080_clk(&i8080state);
    device2_clk(&devicestate);
    update_nets();
  }

And if each clk function reads the net states for its input pins, it won't matter that i8080_clk changes its output pins, because those changes will not take effect until update_nets() is finally called.  So any devices that are clocked or latched together must use this mechanism: each clk function is called, and finally an update_nets function is called.
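A sketch of what update_nets() might look like for the pullup/pulldown case, using compact versions of the pin/net structures (again, all names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Compact pin/net structures for this sketch. */
typedef struct { uint8_t direction, output, invertoutput; } pin_t;
enum { PIN_IN = 0, PIN_OUT = 1 };
typedef enum { NET_PULLUP, NET_PULLDOWN } nettype_t;
typedef struct {
    nettype_t type;
    uint8_t   state, laststate;
    int       npins;
    pin_t    *pins[8];
} net_t;

/* Resolve each net from its output pins.  Devices read net->state,
   never another device's pins, so outputs changed earlier in the same
   step do not take effect until this runs. */
void update_nets(net_t *nets, int nnets)
{
    for (int n = 0; n < nnets; n++) {
        net_t *net = &nets[n];
        net->laststate = net->state;
        int drive = -1;                       /* -1: no pin is driving */
        for (int p = 0; p < net->npins; p++) {
            pin_t *pin = net->pins[p];
            if (pin->direction != PIN_OUT)
                continue;
            int level = pin->output ^ pin->invertoutput;
            assert(drive < 0 || drive == level);  /* short: debug trap */
            drive = level;
        }
        if (drive < 0)                        /* undriven: pull high or low */
            drive = (net->type == NET_PULLUP);
        net->state = (uint8_t)drive;
    }
}
```

The two-pass structure (all devices compute, then all nets settle) is what stands in for the "state/nextstate" scheme: net->state is the snapshot everyone reads, and it only changes here.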

Then I was thinking about memory and how an i8080 communicates with it.  It uses an RD signal, which is basically attached to the memory IC and enables the memory output, right?  This signal acts as a latch, which is just like a clock in that it is in its own latch/clock domain, right?  So a new section would need to be added below the one above:

if (net(RD)==high)
  {
    memory_update(&memorystate);
    update_nets();
  }

In this case, if the i8080 sets its RD signal high and update_nets updates the net state for it, the next section sees that the RD state is high, which is essentially what clocks/latches/enables the memory device.  I called the function _update because it isn't really being clocked per se; it is just responding to RD going high.  It would read the address pins, grab the data at that address, put it on its data pins, and set its data pins to output.  The next update_nets() would adjust the data-pin net states to reflect the data being presented by the memory.  Obviously the i8080_clk that raised the RD signal needs to have switched its own data pins to input.  On the next execution of i8080_clk, it can sample the data-pin nets and acquire the data.
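A sketch of that RD-triggered behaviour, with the pin plumbing collapsed into plain fields for brevity (all names hypothetical; a real version would read the address from 16 nets and drive 8 data-pin nets):

```c
#include <stdint.h>

/* Hypothetical memory device: full 64K of RAM, data pins tristated
   until RD asserts. */
typedef struct {
    uint8_t  ram[65536];
    uint16_t addr_net;     /* stands in for sampling 16 address nets */
    uint8_t  data_out;     /* stands in for 8 data pins              */
    int      data_driven;  /* 0 = data pins are inputs (tristate)    */
} memstate_t;

/* Called from the main loop whenever the RD net changes; not clocked,
   just responding to RD being asserted or released. */
void memory_update(memstate_t *m, int rd_high)
{
    if (rd_high) {
        m->data_out    = m->ram[m->addr_net];  /* present the byte      */
        m->data_driven = 1;                    /* switch pins to output */
    } else {
        m->data_driven = 0;                    /* release the bus       */
    }
}
```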

In the case of a disk controller that has its own clock, it may have a _clk() function that operates its internal logic and also a _latch() function that latches data into or out of registers on behalf of that internal logic.

Some of the above if blocks run in response to conditions, like RD being enabled or disabled; these should run on every pass of the loop.  Others need to run round robin depending on whether they are "clocked" or not.  Given that the timers provided by an OS often don't have the granularity you would like, I thought of a plan to round-robin clock them appropriately.

Let's say you have a CPU running at 2 MHz and a disk controller running on its own 1 MHz oscillator.  You keep a variable per clock that represents where it is relative to the system clock.  Say you read the system time and see that 15 ms have passed since you last checked in the loop (which is probably what Windows would do to you!).  You multiply that 15 ms to convert it into ticks at the granularity you need: 2 MHz for 15 ms is 30000 cycles, so if 15 ms goes by, you advance the system timer by 30000.  Your 2 MHz clock then needs to fall behind this timer by 1 to be executed, and the 1 MHz clock needs to fall behind it by 2.  To round-robin them properly you evaluate which one is further behind and run that one first.  You wouldn't need the "which is further behind" logic if the timer could move forward one tick at a time, but if it jumps forward 30000 at a time, you want the faster clock to run twice as often as the slower one (in this example) so they both catch up to the 30000 at about the same time.

runclk1=clk1+1<=systime;
runclk2=clk2+2<=systime;
if (runclk1 && runclk2)
  {
    if (clk1<clk2)
      runclk2=0; //clk1 is more behind, do not run clk2 this time
    else runclk1=0; //clk2 is more behind, do not run clk1 this time
  }

if (runclk1)
  {
    icA_clk();
    icB_clk();
    update_nets();
    clk1+=1;
  }
if (runclk2)
  {
    icC_clk(); //devices on the 1 MHz clock
    update_nets();
    clk2+=2;
  }

To remove all throttling while still running things in round-robin order, you could push the system clock so far ahead that the clock functions can never catch up no matter how fast the main loop runs.  They will still execute in order of who is most behind, but as fast as possible.
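Putting the scheduling idea together as a runnable sketch (device clk functions stubbed out as comments; the struct and names are my own, purely for illustration):

```c
#include <stdint.h>

/* Round-robin two clock domains against a shared tick counter.
   Ticks are in units of the fastest clock (2 MHz here), so the
   2 MHz domain costs 1 tick per step and the 1 MHz domain costs 2. */
typedef struct {
    uint64_t systime;          /* target tick count, advanced from wall time */
    uint64_t clk1;             /* ticks consumed by the 2 MHz domain         */
    uint64_t clk2;             /* ticks consumed by the 1 MHz domain         */
    uint64_t steps1, steps2;   /* how many times each domain has run         */
} sched_t;

/* One pass of the main loop: run whichever eligible domain is
   furthest behind, so the two interleave at the right ratio. */
void sched_step(sched_t *s)
{
    int run1 = s->clk1 + 1 <= s->systime;
    int run2 = s->clk2 + 2 <= s->systime;
    if (run1 && run2) {
        if (s->clk1 < s->clk2) run2 = 0;  /* clk1 more behind: run it */
        else                   run1 = 0;  /* clk2 more behind: run it */
    }
    if (run1) { /* icA_clk(); icB_clk(); update_nets(); */ s->clk1 += 1; s->steps1++; }
    if (run2) { /* icC_clk(); update_nets(); */            s->clk2 += 2; s->steps2++; }
}
```

Advancing systime by 30000 in one jump (a 15 ms timeslice) and looping sched_step until both domains catch up gives exactly 30000 fast-clock steps interleaved with 15000 slow-clock steps; pushing systime far ahead gives the unthrottled free-run described above.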

Any thoughts, ideas, improvements, or flaws?

I do realize there may be a point of diminishing returns in how far you go emulating something.  My thought, though, is that this approach could be cycle accurate, which is one thing I am looking to accomplish, so the machine could be stopped and single-stepped.
 

Offline Bruce Abbott

  • Frequent Contributor
  • **
  • Posts: 627
  • Country: nz
    • Bruce Abbott's R/C Models and Electronics
Re: Emulation programming approach thoughts...
« Reply #1 on: June 16, 2017, 03:47:11 am »
Yet another 8080 emulator? Seems pointless...

What do you intend to use it for?

 

Offline legacy

  • Super Contributor
  • ***
  • Posts: 4415
  • Country: ch
Re: Emulation programming approach thoughts...
« Reply #2 on: June 16, 2017, 01:58:38 pm »
If you want to emulate a system in software on a PC, it makes no sense to go down to the signal level.  If you want to simulate a system with HDL (that is, within an HDL simulator), you have to go to the RTL level.

A specific piece of hardware composed of analog and digital parts might require a mixed model; in that case PSpice is the way you have to go.

« Last Edit: June 16, 2017, 08:35:08 pm by legacy »
 

Offline alank2Topic starter

  • Super Contributor
  • ***
  • Posts: 2183
Re: Emulation programming approach thoughts...
« Reply #3 on: June 16, 2017, 02:40:28 pm »
I hear you guys.  It is somewhat a learning project, and I am fascinated with retrocomputing.  My thought would be to make a miniboard (18 cm by 7.5 cm) that replicates the Altair 8800 control panel.  It would need signal-level emulation to light many of the LEDs properly.  Whether a project like this is worth the time and effort is a whole different question.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9886
  • Country: us
Re: Emulation programming approach thoughts...
« Reply #4 on: June 16, 2017, 02:58:59 pm »
Of course I'm going to suggest you do it with an FPGA and VHDL.  See, for example, the T80 project at OpenCores

http://opencores.org/project,t80

I have used this core to implement a general purpose Z80 machine that runs CP/M very fast.  I have also used it to implement PacMan.  The project used to be hosted at FPGAArcade.com but the site has changed focus...

If you want to simulate computers, you might like to look into 'simh' and see how they handled all of the sequencing.  Over at IBM1130.org, Brian has a simulator based on a bunch of C code and simh.  I have used it but I haven't studied it.

http://ibm1130.org/sim

I use his simulator to build the install deck that ultimately winds up on a Compact Flash for my FPGA implementation of the IBM1130.
 

Offline alank2Topic starter

  • Super Contributor
  • ***
  • Posts: 2183
Re: Emulation programming approach thoughts...
« Reply #5 on: June 16, 2017, 03:08:49 pm »
I was thinking of doing it on an STM32, as a first project for the STM32.  I am familiar with SIMH.  In some respects I wonder if VHDL isn't a better approach, though it pushes things in a different direction with storage: I want to implement SD card storage for disks, and I'd like to use a FAT file system for that, so I'm not sure if there are VHDL approaches to that or if it would be better handled by a microcontroller running FatFs.  I suppose it could have an FPGA for the main logic, interfaced to a uC to do the FatFs stuff.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3137
  • Country: ca
Re: Emulation programming approach thoughts...
« Reply #6 on: June 16, 2017, 04:25:48 pm »
I was thinking of doing it on an STM32, as a first project for the STM32.  I am familiar with SIMH.  In some respects I wonder if VHDL isn't a better approach, though it pushes things in a different direction with storage: I want to implement SD card storage for disks, and I'd like to use a FAT file system for that, so I'm not sure if there are VHDL approaches to that or if it would be better handled by a microcontroller running FatFs.  I suppose it could have an FPGA for the main logic, interfaced to a uC to do the FatFs stuff.

FPGA is certainly a better choice.

Hardware does things in parallel.  Emulating parallel things on a sequential CPU is much harder than building them in hardware.  An FPGA is essentially programmable hardware, so doing parallel things is easy.  It'll also run much faster.

However, you're doing it for yourself, so the most important thing is what you want to do.  If you want to program in C, go with the STM32.  You will be able to build most of your simulator on a PC and then simply port it to the STM32 or any other MCU.  It'll be many times slower than an FPGA (possibly 100x, most likely more), but if that's what you want to do, why not?

If you'd rather go with FPGA, you would use VHDL or Verilog, which are completely different from C and require a different attitude - you're essentially describing hardware to be built. This will be more expensive. Also be prepared for slow tools. But it is the way to go if you want the best results.

 

Offline C

  • Super Contributor
  • ***
  • Posts: 1346
  • Country: us
Re: Emulation programming approach thoughts...
« Reply #7 on: June 16, 2017, 07:06:11 pm »
Z80PACK does this now

Altair 8800 system

LEDs and switches work.

IMSAI 8080 system


and others

http://www.autometer.de/unix4fun/z80pack/

 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9886
  • Country: us
Re: Emulation programming approach thoughts...
« Reply #8 on: June 16, 2017, 07:53:21 pm »
I was thinking of doing it on an STM32, as a first project for the STM32.  I am familiar with SIMH.  In some respects I wonder if VHDL isn't a better approach, though it pushes things in a different direction with storage: I want to implement SD card storage for disks, and I'd like to use a FAT file system for that, so I'm not sure if there are VHDL approaches to that or if it would be better handled by a microcontroller running FatFs.  I suppose it could have an FPGA for the main logic, interfaced to a uC to do the FatFs stuff.

The divide between hardware (the CPU) and software (FATFS) will probably remain unchanged.  My compact flash drive for my IBM1130 project emulates in all regards the logical interface described in IBM's documentation.  Essentially, the CPU sends the device the address of a parameter block and the device uses the DMA channel to read the parameter block at that address.  Those parameters include where to get/put the data and which sector to read/write.  All data transfers occur via DMA.  This isn't very complicated, and ALL peripherals (card reader, typewriter, keyboard, printer and plotter) use the same technique.  Even the CPU uses a DMA channel and competes as the lowest-priority requester.  BTW, the card reader, typewriter, keyboard and printer are all implemented as serial ports -> USB, and the plotter commands go to an 'mbed' processor where they are reformatted and sent as HPGL to a LaserJet.

When I hung the compact flash on the T80 core, all I did was expose the CF internal registers.  I left it up to the BIOS to deal with Large Block Addressing and virtual drives.  The CF implemented all 16 drives but it was left to the BIOS to compute the LBA which included an offset to sector 0 of the virtual drive.

You could bury a little CPU inside the FPGA to handle various IO devices.  I have often thought of one tiny CPU per device.  You could actually write code for the small CPU instead of trying to implement a peripheral in hardware.  I haven't done that.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9886
  • Country: us
Re: Emulation programming approach thoughts...
« Reply #9 on: June 16, 2017, 08:06:18 pm »
Z80PACK does this now

Altair 8800 system

Leds and switches work.


I have an earlier incarnation of the Altair 8800 sitting in the garage.  It would be kind of interesting to scrap the 8080 stuff and rebuild it with an FPGA.  OTOH, it would ruin the resale value - I don't see one on eBay at the moment, but I seem to recall prices around $4000.

I really should get it running again.  Somewhere I have a FD controller I built based on the Western Digital FD1771 and a pair of 8" floppies.  I still have the diskettes and they still work.  I tested them on a CompuPro Z80 machine with dual floppies.

It would be far more practical to implement the disk system as a single Compact Flash with 16 virtual drives.

 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3632
  • Country: us
Re: Emulation programming approach thoughts...
« Reply #10 on: June 16, 2017, 08:17:04 pm »
Make sure you understand the difference between emulation and simulation. A simulator seeks to predict the behavior of the real hardware, even if the design is incorrect or illegal situations occur, and it needs to reproduce the design's behavior on the sub-cycle level. Simulators have very high slowdown factors: a million seconds to simulate one second of real time isn't unusual. For very complicated designs it can be worse. Each component that is active at any step must be simulated in parallel, so the code is naturally very modular and fine-grained. Simulation is what you need to do before taping out your design to have it fabricated.

An emulator doesn't try to do these things. It assumes the design is correct and all that matters is to get the correct output for your input. Emulators do not need to accurately reproduce every cycle of the design, since most cycles don't have any useful visible behavior. When multiple components are working in parallel, an emulator tries to decompose them into a serial task that spends some time (milliseconds) on each part, because that's the only way the CPU's cache can be effective. The code is coarse-grained and not as modular: code to emulate a 6809 may be separate from a 6809E, for example, because different decisions have been made to split up the work of each bus cycle. Emulators can also peek ahead at what instructions are going to be executed and recompile them all into a native function that just spits out results without concern for counting cycles.
 

Offline legacy

  • Super Contributor
  • ***
  • Posts: 4415
  • Country: ch
Re: Emulation programming approach thoughts...
« Reply #11 on: June 16, 2017, 08:45:48 pm »
As far as I understand your purpose, I'd go for software emulation in C, but you have to reduce the problem to simple behavioral emulation, so forget about signals and everything under the hood unless you physically have to handle them.


edit:
p.s. did you know the IBM AS/400 is like a Java virtual machine?  IBM uses strange terminology, but it's almost the same concept: you have the hardware (a PowerPC rather than an x86 machine), and you have a virtual machine on top of which applications run.

It's exactly what you are going to do with the 8080 (the virtual machine) on ARM (the virtualizer).  If you design it the right way, you reduce the problem to a HAL (hardware abstraction layer) plus "end points" between the virtual devices and the real hardware.

e.g. the 8080 soft-machine can believe it has a pATA interface because the ARM machine offers it an interface which looks like pATA, whereas the storage device is physically implemented as a native SPI-master device.

This makes sense since
-1- the CP/M BIOS is usually written (1) for pATA-like devices
-2- ARM chips usually come with an SPI master device
-3- and the SD card is an SPI slave device



(1) of course you still have to modify it, but less effort is required.
« Last Edit: June 16, 2017, 08:53:10 pm by legacy »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9886
  • Country: us
Re: Emulation programming approach thoughts...
« Reply #12 on: June 16, 2017, 09:13:56 pm »

did you know the IBM AS/400 is like a Java virtual machine?  IBM uses strange terminology, but it's almost the same concept: you have the hardware (a PowerPC rather than an x86 machine), and you have a virtual machine on top of which applications run.


This is exactly what Niklaus Wirth's P-Machine did for Pascal.  Regardless of the platform (CDC-6400 down to 8080), there was a P-code interpreter and everything compiled to P-code.

UCSD Pascal carried on the tradition for the microprocessors (8080, Z80, etc.).

I once coded most of the interpreter on an FPGA.  I had all the instructions running except floating point and system calls (somehow the concept escaped me).
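For anyone unfamiliar with the style, the core of a P-code-like interpreter is just a fetch/dispatch loop over a stack machine.  A toy sketch with a handful of made-up opcodes (nothing here is from the real UCSD instruction set):

```c
#include <stdint.h>

/* Invented opcodes for illustration only. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

/* Run a byte-code program until OP_HALT; return top of stack. */
int run(const int32_t *code)
{
    int32_t stack[64];
    int sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = code[pc++];        break; /* literal  */
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}
```

The same loop structure ports to almost any host, which is why P-code ran on everything from the CDC-6400 down to the 8080.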
 

Offline alank2Topic starter

  • Super Contributor
  • ***
  • Posts: 2183
Re: Emulation programming approach thoughts...
« Reply #13 on: June 16, 2017, 09:31:31 pm »
I've been very fascinated with many of the approaches to CPUs (microcoded vs. non-microcoded/RISC) and to programming, like compiling to a byte code instead of native instructions.  Who is to say where native instructions end and emulated ones begin!
 

