Author Topic: Experimenting with TTL Cpu, 74LS chips, old vs New? Retro style switches?  (Read 7251 times)


Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21227
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Just a curious question: when using the FAST or the A families, can't you still reap their speed benefits while using appropriately sized resistors to slow the rising edge enough to avoid issues? People do this to control the rise time of a FET's gate all the time, and aren't the newer chip families based on FETs? Is that part of what makes their layout issues so much more crucial? Seems like if you were going to use them without trying to slow their edges, the key is a low-impedance ground, and packing their SMT versions as close together as possible to avoid trace length.

There are tradeoffs between propagation delay, transition time, fanout, wire length, and noise margins. If you are prepared to slug the output, it would be better to simply use a slower logic family.
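For a rough feel of that tradeoff, the 10-90% rise time of a series resistor driving an input's capacitance is about 2.2·R·C. The values below are illustrative, not from any datasheet:

```python
# Rough RC edge-slowing estimate: t_rise (10-90%) ~= 2.2 * R * C.
# Resistor and capacitance values are assumed for illustration.
R = 100        # series "slugging" resistor, ohms
C = 15e-12     # input + trace capacitance, farads (a few logic loads)
t_rise = 2.2 * R * C
print(f"{t_rise * 1e9:.2f} ns rise time")
```

Even a modest 100 ohms into 15 pF already gives ~3.3 ns, comparable to a fast family's native edge -- which is why simply picking a slower family is usually the cleaner answer.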
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Just generally all under the umbrella of signal quality.

Consider the chip's construction itself: it's made of pins and bondwires.  Use the shortest pins you can get (prefer TSSOP over SOIC over DIP).

Try to avoid transitioning multiple pins high or low simultaneously.  This is particularly hard to avoid on, say, bus latches; at least those usually have Schmitt-trigger inputs, alleviating some concern with signal bounce and risetime.

Consider the entire net route, its drivers, receivers (input pins), ground support (keep it near ground plane as much as possible), and height above ground and trace width (which define the transmission line impedance).  Avoid tree routing; prefer linear (point-to-point-to-point) routing.

If the signal's minimum pulse width is much longer than the electrical length of the net, and there are no significant DC loads on the trace (which for TTL, means having a low fanout; for CMOS, it's pretty much whatever), source termination can be considered.  Note that the waveform at any intermediate node has a stairstep shape, as the incident and reflected waves cross it twice (which takes up to twice the electrical length of the net, hence the requirement that the pulse widths be much longer than this duration!).
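As a sanity check on that "pulse width much longer than the electrical length" condition, here is a back-of-envelope calculation (the propagation velocity and net length are assumed, not measured):

```python
# Source-termination sanity check: the waveform at the driver end settles
# after one round trip, so pulse widths should be much longer than 2 * Td.
length = 0.15            # net length in metres (assumed)
velocity = 1.5e8         # ~c/2 on FR-4, m/s (typical assumption)
t_d = length / velocity  # one-way electrical length of the net
round_trip = 2 * t_d
print(f"one-way: {t_d*1e9:.1f} ns, round trip: {round_trip*1e9:.1f} ns")
```

A 15 cm net gives roughly a 2 ns round trip, well under a typical 74LS gate delay, so source termination is comfortable there; a few metres of cable would be a different story.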

Note that input pins have capacitance, so act to load down the trace.  If you have multiple inputs on a net, try to space them evenly so that they act more like a lumped-equivalent transmission line.  The trace impedance can be a little higher in this case, since the average including pin capacitance will be lower (impedance is sqrt(L/C), and the pins act to increase C).  There may still be ringing due to the between-inputs lengths, which can be dampened with additional resistance (say on a few of the inputs, or a small R+C at the end, or a FB at the driver(s)).
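The "design the bare trace a little higher in impedance" point can be put in numbers: distributed load capacitance lowers the effective impedance by a factor of sqrt(1 + Cd/C0). A sketch with assumed, illustrative values:

```python
import math

# Loaded transmission-line impedance: Z' = Zo / sqrt(1 + Cd/C0), where C0
# is the trace's own capacitance per unit length and Cd is the evenly
# distributed pin capacitance per unit length.  All values are assumed.
Zo = 75.0            # unloaded trace impedance, ohms
C0_per_m = 100e-12   # trace capacitance per metre (typical microstrip)
pins = 4             # input pins spaced evenly along the net
C_pin = 5e-12        # per-input capacitance (typical figure)
length = 0.2         # net length, metres

Cd_per_m = pins * C_pin / length
Z_loaded = Zo / math.sqrt(1 + Cd_per_m / C0_per_m)
print(f"loaded impedance: {Z_loaded:.0f} ohms")
```

Here four 5 pF inputs on a 20 cm net double the per-metre capacitance, pulling a 75 ohm trace down to about 53 ohms -- hence starting a little high on the bare trace.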

Speaking of ferrite beads (FBs), be careful using these; they have, not so much a long time constant, as, effectively, a distributed time constant.  In series, this will give nice, soft, rounded edges, which is good for reducing EMI on cables, but can be ill advised for high speed signals.  FBs are available in many values, so pick an impedance appropriate to the application -- for just taking off some ringing perhaps, a low value (say 10-30 ohms @ 100MHz) might be good, while for slower signals (especially going onto cables), larger values are an excellent choice (say 100 or 300 or 1k ohms).

And for cables, you might even add some parallel capacitance (or R+C) to provide additional filtering and dampening.  And maybe some ESD clamp diodes, because the outside world is a nasty place.

As for nets that can't so easily be source-terminated, load or source-load termination is an option.  This is quite traditional among TTL -- the relatively high driver voltage (roughly 0.4 to 3V typ.) and smaller input threshold range (0.8-2V) means some loss can be tolerated along the signal path.  A source-load terminated medium has, as the name suggests, a matched driver impedance (Zo at the ends, or Zo/2 in the middle), and termination at the ends (Zo for each end that doesn't have a permanent driver also attached*).

*Because Thevenin and superposition theorems.  A source at 0V (AC) and a series resistance of Zo, is... literally the definition of a termination resistor.  (More specifically, with 0V correlated to the driving source in question.  If they happen to be synchronized, they're not uncorrelated, and the effective impedance will be something else.)

Back in the day, divider resistor packs were quite common, something like 390 ohms pullup, 150 ohms pulldown -- the parallel combination being a perfect match to ribbon cable when wired with alternating signal and ground, and the Thevenin equivalent voltage being perfect for TTL inputs.  The old ST-506 hard drive interface used exactly this for the control signals; the terminator was socketed, so it could be inserted in the last drive along the chain.  (The data signals, however, were carried with better signal rate and quality, using RS-422 differential transceivers and point-to-point links -- hence one multi-position control cable and two separate data cables between the controller and a pair of drives!)
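Checking those resistor-pack numbers (Vcc and the thresholds are nominal TTL values):

```python
# Thevenin equivalent of a 390/150 ohm terminator pack on a 5 V rail.
Vcc = 5.0
R_up, R_down = 390.0, 150.0
Z_term = R_up * R_down / (R_up + R_down)  # parallel combination
V_th = Vcc * R_down / (R_up + R_down)     # open-circuit (Thevenin) voltage
print(f"Z = {Z_term:.0f} ohms, Vth = {V_th:.2f} V")
```

About 108 ohms suits alternating-ground ribbon cable, and ~1.4 V sits right between the TTL input thresholds (0.8-2 V), so an undriven line idles in the middle of the dead band.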

If you put a (IEEE 1284) parallel port on your project, you can consider using just such a termination with it; the transmitters are either 5V TTL, or 3.3V LVCMOS (typically 74HC family), either way having fairly comparable drive capability, suitable for load termination like this.

Which also tells you exactly how to construct it -- the old school way is literally an I/O address decoder, a couple bus latches, a bus interface (if full bidirectional), and, usually a 7406 or something (open collector) for the control signals I think?  (In PCs, this was quickly integrated into the system (SuperIO) chip, then later on, eliminated entirely.)


Ahem, anyway, signal quality doesn't need to affect your design much, or at all; it's not something that needs much consideration at the schematic level.  For the most part, it is a separate and independent step, part of layout and routing.

If breadboarding, don't ignore it too hard -- your jumper wires are traces all the same, albeit with rather awful impedances, and relatively high coupling between them, making things more vulnerable.  Make sure the supplies are well bypassed and stitched (if using physical supply rails, tie them together at both ends of the board, say).  Maybe slip on a ferrite bead every so often, make sure the signals don't bounce too much.

Best part is, all this can be viewed on a typical scope (100MHz maybe isn't quite enough, but 200MHz or more is good), so you can see where signal bounce, supply noise, and common mode noise, are present.  You can always add or remove jumpers, and slip on ferrite beads as needed.  (Adding termination resistors would be harder to do!)

I once breadboarded a 4MHz Z80-CPU, with RAM, ROM, and a couple (74LS) bus latches, one pair of which drove an LED matrix display; it would often run fine, for days or weeks on end, but once I coded a LFSR (a type of random number generator), it would hang much more frequently (within days to hours).  Presumably, some bad combinations of data were causing the buses or supply to glitch; maybe improved bypassing, or grounding, or ferrite beads on some signals (or all of them?..), would've fixed it.  It goes to show you, something might look perfectly okay with average data -- the LED matrix routines were very repetitive -- yet an underlying problem hides in plain sight (in this case discovered by fuzzing with random bus data).
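For reference, an LFSR of the sort mentioned is only a few lines. This is a generic 16-bit Galois LFSR sketch, not the actual code from that project:

```python
# 16-bit Galois LFSR with a maximal-length tap mask (0xB400, i.e. taps at
# bits 16, 14, 13, 11).  Each step shifts right and XORs in the taps when
# the bit shifted out was 1.
def lfsr_step(state):
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

# A maximal LFSR visits every nonzero 16-bit state before repeating --
# effectively random bus data, which is what shook the glitch loose.
seed = 1
state = seed
period = 0
while True:
    state = lfsr_step(state)
    period += 1
    if state == seed:
        break
print(period)  # 65535 == 2**16 - 1
```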

On another occasion, I had made an SPI peripheral module on a separate board and plugged it into a breadboard with an ATMEGA; it was a disaster, just gibberish going through.  Slipped a ferrite bead onto SCK, MISO and MOSI -- good as can be.  The ATMEGA has faster pin transition times than some of what we're talking about here, with 74HC being, I think, either a little slower or comparable to it.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Online David Hess

  • Super Contributor
  • ***
  • Posts: 17429
  • Country: us
  • DavidH
That tallies with my recollection, but one of the claimed advantages of FAST was that controlled edge rates minimised the problem.

That helped, along with what I referred to regarding the difference between bipolar and CMOS, but they still never met their intended specifications.  SO-packaged parts did better.  Non-saturating logic like ECL was much more forgiving.

Quote
Putting the power pins on the opposite corners always was a pessimal choice.

Easier layout was a good reason to place the power and ground pins on the corners.  Later logic families moved the power and ground pins to the center of the package where appropriate.

Just a curious question: when using the FAST or the A families, can't you still reap their speed benefits while using appropriately sized resistors to slow the rising edge enough to avoid issues?

Using a series resistor to slow down the edge and reduce ground bounce helps but does not restore any performance which was lost.  I only saw it done to control EMI as a last resort when a better design was not practical.
 

Offline rwgast_lowlevellogicdesinTopic starter

  • Frequent Contributor
  • **
  • Posts: 659
  • Country: us
    • LowLevel-LogicDesign
Wow, thank you for that huge wealth of information; it's not every day you get a reply like that. I will have to re-read it!

As far as front panel controls go, I'm nowhere near ready to implement them yet (PCB-mount DIP switches and jumpers are fine atm), but I've been looking all over at old minicomputer and retro kit designs. Toggle switches like the ones on the PDP/IMSAI/etc. look like a huge PITA: from what I can tell, you flip them into the binary position you want and then hit a momentary store/continue button. So after that you have to manually flip them all back to 0? Using the front panel is already tedious, but having to re-zero is just painful!! So using momentary rockers with latched LEDs seems a lot more productive, since the store/continue button can be used to unlatch the data switches and zero them. I believe this is how the HP1000 referred to earlier did things? I'm sure in the day, using momentaries the way I described added extra cost/size due to the latching circuitry and the need for better debounce. I have no qualms about using a micro or PLC to deal with the user interface, though; I wouldn't learn anything new from doing it, and it just costs more and requires more room in the enclosure that could be better utilized.

Now what I think would be a way cooler interface, totally unique, is using STOP ACTION MAGNET rockers or something like them. For those who don't know, they're basically an SPST rocker which can be flipped by hand or electrically (both up and down) using solenoids, which in turn open or close a magnetic reed (although that part is not important). Here is a most likely very expensive example, the Sydney SAM, which I'm sure exhibits awful bounce. Something with electro-mechanical control like this could be made pretty easily though, either by creating special rocker caps that a solenoid can push, or by using Hall-effect sensors if you want a cleaner signal with no bounce. If you had a setup like this, you could make the switches automatically match the position of the stored data (when single-stepping or running at extremely low clocks), as well as zero out after store/continue. Unlike just latching LEDs to a momentary, you get cool visual feedback and audible clacking. My biggest problem is designing the switch poles/caps: I don't know anything about 3D printing or laser cutting, nor do I live near a makerspace, so I would have to use my non-CNC metalworking skills to grind and braze them together, probably with aluminum. Seems like a ton of effort and time, but damn it'd be cool!

PS: typed on my phone, sorry about any misspellings or nonsense autocorrections I missed.

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
How about a lever spanning all the switches that flips them to all 1/0 when you push on it. ;D

Oh or hey, speaking of latches, you could use pushbuttons to set or toggle flip-flops, and a couple buttons off to the side to set all 1/0.  Can have it reset automatically when entered, or persist.

Could also have a live decoder on the value being composed, say for hex (just run it into a hexadecimal display driver), maybe ASCII (same but one of those smart matrix displays, or anything fancier and custom), maybe a debug display as well (if your instruction set shall be fixed width, the mnemonic could be decoded in the same way, but with a lookup table, which could be a few ROMs storing text).

Which... for a variable length instruction set, you'd have to, I think, have two address counters, one showing the base of the instruction, and the other what's currently being entered (which can be just a few bits long, at least).  The decoder would test all bytes inbetween and decode the instruction accordingly.  Can certainly be done in hardware, but it's kind of at the point where you might as well put all the buttons into IO space and write a little bootstrap/debugger to handle the work instead...
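A software sketch of that two-counter idea, with a made-up opcode-to-length table (everything here is hypothetical, just to show the mechanism):

```python
# Two-counter decode for a variable-length instruction set: `base` points
# at the start of the instruction being composed, `cur` at the byte just
# entered; once enough bytes are in, the instruction is complete and
# `base` advances.  Opcode lengths are a hypothetical, Z80-flavoured table.
LENGTHS = {0x00: 1, 0x3E: 2, 0xC3: 3}   # opcode -> total byte count

def entered_complete(memory, base, cur):
    """True when bytes memory[base:cur] form a whole instruction."""
    need = LENGTHS.get(memory[base], 1)
    return cur - base >= need

mem = [0x3E, 0x55, 0xC3, 0x00, 0x10]    # LD A,55h ; JP 1000h
base, cur = 0, 0
decoded = []
while cur < len(mem):
    cur += 1                             # operator enters one more byte
    if entered_complete(mem, base, cur):
        decoded.append(mem[base:cur])    # instruction boundary found
        base = cur
print(len(decoded), "instructions")      # two instructions recovered
```

In hardware the `entered_complete` test would be the lookup-table decoder scanning the bytes between the two counters; in software, as noted, it's a few lines in a bootstrap/debugger.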

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline ebclr

  • Super Contributor
  • ***
  • Posts: 2331
  • Country: 00
" 70s 74LS chips in the drawerer and ordering all the parts in HC/HCT format? As far as moving to the 74Axx series "

A much small power supply, and less hot.

Did you consider to use an FPGA in schematic mode? ( you can use a standard logic  TTL, but will work on FPGA tension Levels
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9964
  • Country: us
Toggle switches like the ones on the PDP/IMSAI/etc. look like a huge PITA: from what I can tell, you flip them into the binary position you want and then hit a momentary store/continue button. So after that you have to manually flip them all back to 0? Using the front panel is already tedious, but having to re-zero is just painful!!

Front panel toggle switches are NRZ encoded: Non-Return-To-Zero.
You set up the switches and pressed Deposit or Deposit Next and then set the next pattern, you didn't re-zero the switches.  Rinse and repeat...

And that's the way it was done back in the day.  Until the EPROM became common, it was the usual practice to toggle in a cold start loader.  That might only be a dozen instructions but would be enough to load memory from some external device - like a paper tape reader or even a disk drive.

I think the IBM PC may have been the first computer that didn't have switches and lights.  I may be off by a couple of machines but most everything into the '70s had switches and lights.
« Last Edit: June 06, 2020, 04:04:44 pm by rstofer »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9964
  • Country: us
Did you consider using an FPGA in schematic mode? (You can use standard TTL logic symbols, but it will work at FPGA voltage levels.)

In the Xilinx world, Vivado doesn't support schematic entry which implies a need to use the older version, ISE, but, of course, ISE doesn't support the newest chips so you're kind of stuck.  I think schematic entry is a dead issue for Xilinx.

That doesn't mean that you couldn't create HDL entities that accurately describe the functionality of discrete chips.  Then it would just be a matter of instantiating as many as are necessary and 'wiring' them together with HDL.

There could be a hybrid approach of using schematic entry on ISE, letting the tool convert the schematic to HDL and then using the HDL with Vivado.

See about half way down here: 

https://forum.digilentinc.com/topic/530-need-to-make-a-deciscion-based-on-my-back-ground-nexys-2-or-4/
« Last Edit: June 06, 2020, 04:17:08 pm by rstofer »
 

Offline duak

  • Super Contributor
  • ***
  • Posts: 1048
  • Country: ca
Regarding the IBM PC being the first computer without switches and lights: the Apple II, Commodore PET and TRS-80 all preceded the PC, starting in 1977.

Motorola released their 6800 micro in 1975 with a ROM monitor in one of the system chips.  After getting an evaluation chip set, I found the bigger problem was to get or build an ASCII terminal to connect to the computer I'd built with them.

Well, you get good at flipping switches.  It is said that one gets used to hanging if you hang long enough...
« Last Edit: June 06, 2020, 05:20:53 pm by duak »
 

Offline jfiresto

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: de
The LSI-11 (PDP-11/03), introduced in 1975, also had no front panel, beyond a couple switches.

EDIT: Or it might have had a third switch to control the Line Time Clock, rather than gaining it later.
« Last Edit: June 06, 2020, 05:35:22 pm by jfiresto »
-John
 

Offline rwgast_lowlevellogicdesinTopic starter

  • Frequent Contributor
  • **
  • Posts: 659
  • Country: us
    • LowLevel-LogicDesign
Just out of curiosity: if most of the time you just did manual entry on the front panel in order to tell the computer to load off paper tape or whatever input device you were using, wouldn't it have been easier and cheaper in labor to build individual diode-based ROMs with loading instructions for each input device and just insert one into a socket? Seems a lot faster than dicking around with switches on every new software load, or no? Of course the computer would have had to be able to accept the diode card, but I'm sure modifying it or manufacturing in that feature wouldn't have been a huge issue.

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9964
  • Country: us
Regarding the IBM PC being the first computer without switches and lights: the Apple II, Commodore PET and TRS-80 all preceded the PC, starting in 1977.

Obviously, you are correct.  All of these were 'personal' computers as opposed to 'commercial' or 'hobby' computers.  By 'commercial', I'm thinking about machines like the PDP-11, not Vaxen.

My beginnings with the 'hobby' computer were with the Altair 8800.  It certainly required a bit of toggling until EPROMs were used.  Even then, we had to set the starting address to force the CPU to the beginning of the EPROM.  Then we had sophistications like mapping and it became possible to just use the Reset switch and start from address 0000h

Those were good days!  A mere mortal could understand every aspect of the hardware and software.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9964
  • Country: us
Just out of curiosity: if most of the time you just did manual entry on the front panel in order to tell the computer to load off paper tape or whatever input device you were using, wouldn't it have been easier and cheaper in labor to build individual diode-based ROMs with loading instructions for each input device and just insert one into a socket? Seems a lot faster than dicking around with switches on every new software load, or no? Of course the computer would have had to be able to accept the diode card, but I'm sure modifying it or manufacturing in that feature wouldn't have been a huge issue.

The IBM 1130 had a coldstart hardware arrangement whereby it would read a cold start card from the card reader, unpack the code into low RAM (from 0000h) and execute it.  There were various coldstart cards and the scheme was also used to load diagnostics.

I don't know how the paper tape version worked but I suspect there was a cold start tape.  The machine I used (circa '70) used the card reader approach.

We had a lot of lights (on the order of 160) along with toggle switches and a bouncing ball typewriter at the console.  The typewriter might not be used very often because scientific jobs tended to be batch oriented with little to no operator intervention other than to confirm the paper was aligned on the plotter if the job required plotter output.

It was possible to use the switches and a big rotary selector switch to enter programs and do various debugging kinds of things.  I didn't play with that feature, I was a 'guest' user and I didn't want to rock the boat.

This image gives a feel for the console lights but is only presented because all of the photos I can find are even less helpful:

http://www.ibm1130.net/functional/Console.html#figure25

Here's a survivor:

http://computermuseum.informatik.uni-stuttgart.de/dev/ibm1130/ibm1130.html

Only gurus did reboots on minicomputers (PDP-11, again) and they weren't about to abandon their switches.  Knowing the coldstart code was a rite of passage.  They most certainly did not have to refer to the crib sheets taped to the cabinet.

« Last Edit: June 06, 2020, 06:52:17 pm by rstofer »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9964
  • Country: us
I miss the toggle switches and blinking lights.  That's why I have a couple of the PiDP-11 computers running BSD2.11 and a bit of a web server.  There's something comforting about knowing when your program runs off the rails by watching the lights.

https://obsolescence.wixsite.com/obsolescence/pidp-11-overview

The fun bit is using the original Unix tools with the original C compiler.  OK, the editor is a PITA but if you don't know vi, you don't know much about computers.  Emacs always seemed like too much effort...  Real K&R C, not this modern rubbish!

Note that BSD2.11 is very nearly identical to the more recent BSD4.3.  This isn't some stripped down, obsolete, OS.  I was never a PDP-11 user so I have a LOT to learn.




 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9964
  • Country: us
The PDP11-70 had boot code in ROM(s).  All the user had to do was set a start address and load the boot device address in the console switches.  Press Start and it was off to the races.

Figure 3-9

http://www.bitsavers.org/www.computer.museum.uq.edu.au/pdf/EK-11070-MM-002%20PDP11-70%20Maintenance%20And%20Installation%20Manual.pdf

This is similar to the cold start for the S-100 machines (like the Altair 8800) once we got more sophisticated memory boards.  Examine 0xF000 (or wherever) and push Run.  Magic commences now!

Remember, the 2102 RAM chip (1k x 1bit) was king.  A 64k machine generated a LOT of heat and the power supply on the Altair was totally inadequate.

https://www.nteinc.com/specs/2100to2199/pdf/nte2102.pdf

350 ns access time was described as high speed!

Those were good days but things are a lot simpler today.
 

Offline jfiresto

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: de
... There's something comforting about knowing when your program runs off the rails by watching the lights....

My LSI-11/xx machines had no blinking lights, so I did the next best thing and ran an analog CPU load meter off a spare serial line. DEC thoughtfully organized their serial interfaces in a way that reduced its device driver to an increment instruction added to the operating system idle loop. Good fun.
-John
 

Online David Hess

  • Super Contributor
  • ***
  • Posts: 17429
  • Country: us
  • DavidH
Regarding the IBM PC being the first computer without switches and lights: the Apple II, Commodore PET and TRS-80 all preceded the PC, starting in 1977.

Obviously, you are correct.  All of these were 'personal' computers as opposed to 'commercial' or 'hobby' computers.  By 'commercial', I'm thinking about machines like the PDP-11, not Vaxen.

My beginnings with the 'hobby' computer were with the Altair 8800.  It certainly required a bit of toggling until EPROMs were used.  Even then, we had to set the starting address to force the CPU to the beginning of the EPROM.  Then we had sophistications like mapping and it became possible to just use the Reset switch and start from address 0000h

Those were good days!  A mere mortal could understand every aspect of the hardware and software.

Even before personal computers, there were S-100 based CP/M systems contemporary to the Altair, Imsai, and similar systems which only had power and reset buttons.  They were more direct predecessors to the IBM-PC than personal computers like the Apple ][.

 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9964
  • Country: us
Even before personal computers, there were S-100 based CP/M systems contemporary to the Altair, Imsai, and similar systems which only had power and reset buttons.  They were more direct predecessors to the IBM-PC than personal computers like the Apple ][.

I have one of the CompuPro Z80 machines with a rack mount chassis.  Plain black panel with a Reset and Power switch, nothing else.  I think it was intended for industrial applications but I bought it for the blistering fast 6 MHz Z80.  I also have dual 8" floppies in a rack mount chassis.  It all worked, the last time I tried it, but it seemed strange for a system to not have switches and lights.

I built up an FPGA Z80 system with a Compact Flash device for the disk drives.  I used the CompuPro to send over everything I had on 8" floppies.  It's been a while but I think the Z80 core was running at 50 MHz.  I have forgotten how I ported Kermit to the new system.

I also have a Zilog EZ80 board with a daughter card supporting a Compact Flash device and a pair of USB serial ports.  It also runs at 50 MHz.  CP/M is smokin' fast at 50 MHz.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5571
  • Country: us
I miss the toggle switches and blinking lights.  That's why I have a couple of the PiDP-11 computers running BSD2.11 and a bit of a web server.  There's something comforting about knowing when your program runs off the rails by watching the lights.

https://obsolescence.wixsite.com/obsolescence/pidp-11-overview

The fun bit is using the original Unix tools with the original C compiler.  OK, the editor is a PITA but if you don't know vi, you don't know much about computers.  Emacs always seemed like too much effort...  Real K&R C, not this modern rubbish!

Note that BSD2.11 is very nearly identical to the more recent BSD4.3.  This isn't some stripped down, obsolete, OS.  I was never a PDP-11 user so I have a LOT to learn.

There are lots of tricks/hacks to know when the computer is running right.  On the IBM 1620 you could tune an AM radio to the master clock and listen to the machine run.  Some wrote code to make it play simple tunes, but you could tell when things were off on normal programs after a while listening.

Other machines took the blinking lights thing to an extreme.  An AstroData computer (sequencer?  It was programmable) I encountered in one lab literally covered a wall, and had a plexiglass front through which you could watch a sea of blinking lights as it operated a test sequence that was rather trivial by today's standards.  Based on the time frame, these lights probably weren't LEDs, and it is frightening to think of the maintenance on that many incandescent indicator bulbs.  Not long after, that machine was replaced with a Commodore PET.

While I too sometimes wax nostalgic about those good old days, I am really, really glad we don't have to program a cold start loader and so on any more.  Just as I am glad that punched cards and paper tape are in the rear view mirror. 
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21227
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
I miss the toggle switches and blinking lights.  That's why I have a couple of the PiDP-11 computers running BSD2.11 and a bit of a web server.  There's something comforting about knowing when your program runs off the rails by watching the lights.

https://obsolescence.wixsite.com/obsolescence/pidp-11-overview

The fun bit is using the original Unix tools with the original C compiler.  OK, the editor is a PITA but if you don't know vi, you don't know much about computers.  Emacs always seemed like too much effort...  Real K&R C, not this modern rubbish!

Note that BSD2.11 is very nearly identical to the more recent BSD4.3.  This isn't some stripped down, obsolete, OS.  I was never a PDP-11 user so I have a LOT to learn.

There are lots of tricks/hacks to know when the computer is running right.  On the IBM 1620 you could tune an AM radio to the master clock and listen to the machine run.  Some wrote code to make it play simple tunes, but you could tell when things were off on normal programs after a while listening.

Several machines had simple loudspeakers.

The Elliott 803, a 576 µs cycle time machine, has a loudspeaker connected to the top bit of the instruction register.  Somewhere I have a tape of it playing various tunes, badly.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online David Hess

  • Super Contributor
  • ***
  • Posts: 17429
  • Country: us
  • DavidH
Even before personal computers, there were S-100 based CP/M systems contemporary to the Altair, Imsai, and similar systems which only had power and reset buttons.  They were more direct predecessors to the IBM-PC than personal computers like the Apple ][.

I have one of the CompuPro Z80 machines with a rack mount chassis.  Plain black panel with a Reset and Power switch, nothing else.  I think it was intended for industrial applications but I bought it for the blistering fast 6 MHz Z80.  I also have dual 8" floppies in a rack mount chassis.  It all worked, the last time I tried it, but it seemed strange for a system to not have switches and lights.

The one I was thinking of had the 8 inch disks in the same enclosure.  Whoa, I found a picture of it:

http://oldcomputers.net/NNC.html

Intel's Intellec microcomputer development systems came in versions which lacked all of the toggle switches and were intended to always operate from a terminal.
 

Offline duak

  • Super Contributor
  • ***
  • Posts: 1048
  • Country: ca
My first hands-on computer experience was also with an Altair 8800.  I could never get Microsoft/MITS Basic to boot up properly after it loaded.  No matter what, it always got itself stuck waiting for a character on an interface card that wasn't present.  I remember single stepping through the code as the software determined what options were needed and then modified the code in memory to correspond.  The problem ended up being a duff chip on the front panel that gated the front panel data switch states onto the data bus during the IN 255 instruction.  I seem to recall that the chip was either slow or didn't pull down its outputs low enough and so the loader code didn't see the right option.  Having a front panel that allowed single stepping operations really came in handy.

The first company I worked for professionally had developed a Z80 card and various supporting RAM and ROM memory cards for internal use.  They also developed the Break Point Logic card that allowed the developer to set up to four breakpoints, each of which could be a read, write or execute access of any memory address.  Unlike software debuggers, the BPL card could trace through ROM code because it didn't have to modify the memory location of the opcode to insert a trap instruction.

This company also started using the Intel 8086 and got an Intel development system with an ICE - In Circuit Emulator.  I wasn't on that project and never got to use the ICE.  I hear tell it had similar functionality to the BPL.

At the same company, I worked with a DEC PDP-11/23 (firmware serial monitor & minimal front panel) and a PDP-11/20 with a front panel just full of lights n' switches.  At least the latter had core memory so if I shut it off correctly, it would leave the boot loader in memory ready for the next start up.

For me, the bottom line is that it's difficult to debug something unless there is some sort of diagnostic facility like a front panel or ICE.
 

