Author Topic: More educated attempt at my VGA question  (Read 5272 times)


Offline Dan Moos (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 357
  • Country: us
More educated attempt at my VGA question
« on: October 14, 2017, 04:58:34 pm »
Thanks to those who were patient with my questions about implementing VGA for my 6502 build. I have read much of the posted info, and now feel better equipped to ask more useful questions :-+

Ok, first I think I can clarify my goals. I want to be able to connect my 6502 build to a standard LCD monitor, through the VGA connection.  I do not need a standard VGA resolution, and doubt I have the memory resources for anything like that anyway.

I do NOT want to use programmable logic. First, I've never messed with that kind of thing before, and I know from experience that if I tackled it, the current project would fall by the wayside as I gained enthusiasm for the "new thing". It's happened before  ::)

Also, I want to do it all with physical hardware for purely hobby reasons. It would just be cooler to me.

Also, no microcontrollers.

I am willing to use semi-period-accurate, purpose-designed video chips if that is an option.

Ok, here are my questions.

My 6502 is running at 1 MHz. In its current breadboard form, I've gotten the thing up to 5 MHz with no instability. So still dang slow. It seems that even for a fairly low resolution, my pixel clock would have to be significantly faster than my system clock. Is this an accurate conclusion?

So here is my "thinking out loud" idea. Let's say I have my image in video memory, waiting to be displayed. For now, let's think monochrome, so one bit per pixel. Let's say I have a parallel-in/serial-out shift register, clocked by a dedicated crystal. So I basically dump the contents of video memory at a speed completely independent of the system clock.

I also was contemplating having video memory reside outside the address space of the processor. My notion was to have a dedicated SRAM chip for video that I load serially. Basically, my display circuit would be completely self-contained, the only link being a data connection to the computer. My biggest downside is that I think I'd still need to keep a copy of the image in system memory if I want to be able to randomly address individual pixels.

I have a question relating to how LCDs actually interface through VGA. I thought I read somewhere that modern LCDs don't necessarily have to run at 60 Hz. If I wanted a slower refresh rate, how do I convey this to the monitor?

Also, what determines the length of the front porch and back porch regions? I found many charts showing the lengths for various standard resolutions, but  since I am likely to use a non-standard resolution, how does one derive this part of the timings?

Hope I'm making sense ;D  Thanks!
 

Online Ian.M

  • Super Contributor
  • ***
  • Posts: 12860
Re: More educated attempt at my VGA question
« Reply #1 on: October 14, 2017, 05:42:55 pm »
If you don't use standard timings there is absolutely *NO* guarantee that the monitor will lock to them and display a picture. It may display something, but is more likely to show a blank screen with an 'Input out of range' or 'Input error' message. However, unlike old CRT monitors, you can't damage an LCD or LED flat-panel monitor by feeding it a signal with out-of-range H-sync or V-sync pulse rates.

Within the time window that corresponds to the visible portion of each scanline, you can do whatever you like regarding the dot clock. However, if it isn't an integer fraction of the standard dot clock for that mode, you may get unacceptable blurring or 'crawling' at the edges of your pixels as the signal is resampled to the panel's native resolution.
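
A rough back-of-the-envelope sketch (Python, assuming the standard 25.175 MHz dot clock of 640x480@60) of what integer dividers of the standard dot clock buy you:
Code: [Select]
# Fat-pixel dot clocks as integer fractions of the standard 640x480@60 dot clock,
# so the pixels resample cleanly to the panel's native grid.
STD_DOT_CLOCK_HZ = 25_175_000                       # standard 640x480@60 pixel clock
VISIBLE_LINE_US = 640 / (STD_DOT_CLOCK_HZ / 1e6)    # ~25.4 us of active video per line

for divider in (2, 4, 8):
    dot_clock_hz = STD_DOT_CLOCK_HZ / divider
    fat_pixels = round(dot_clock_hz * VISIBLE_LINE_US / 1e6)
    print(f"/{divider}: {dot_clock_hz / 1e6:.3f} MHz -> {fat_pixels} fat pixels per line")
# /2: 12.588 MHz -> 320, /4: 6.294 MHz -> 160, /8: 3.147 MHz -> 80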

Consider using Don Lancaster's technique of mapping the video memory as CPU program memory and intercepting the data bus so the CPU gets NOPs till the final location of each scanline, where it gets an RTS instruction. The supposed 'instruction' data read from RAM is actually diverted to your PISO shift register (possibly via a character-map ROM if you need to save RAM and don't want graphics), from which the video pixels are clocked out at 8x the CPU clock rate. For each frame, the software calls a list of addresses in the video memory, one for each scanline, and in between calls it outputs an H-sync pulse.

The H-sync and V-sync pulses and front and back porches are generated by the CPU, ideally via a peripheral with programmable timers so the CPU is free to do other things outside the visible portion of the scanline, but possibly by bitbanging from software timing loops.
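
Purely as a behavioural sketch of the control flow (Python; nothing here is cycle-accurate, and the 128-byte stride is my assumption), the idea looks like this:
Code: [Select]
# Toy model of Don Lancaster's "CPU as video controller" trick: the CPU 'calls'
# each scanline as if it were code, the bus stuffer feeds it NOPs so it strides
# through video RAM latching each byte into the PISO shift register, and an RTS
# substituted at the end of the line returns it to the driver loop.
ROW_STRIDE = 128                      # hypothetical bytes fetched per scanline call

def scan_line(video_ram, start):
    """One 'call': every byte fetched as a NOP is also latched into the shift
    register; the stuffer substitutes RTS at start + ROW_STRIDE."""
    return video_ram[start:start + ROW_STRIDE]   # the bytes streamed out as pixels

def frame(video_ram, line_addresses):
    for addr in line_addresses:       # software table of scanline start addresses
        scan_line(video_ram, addr)
        # real hardware emits an H-sync pulse here, between calls
    # ...followed by the V-sync pulse and the blank front/back porch lines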

As Don's original design was for the 1 MHz 6502 KIM-1 and generated NTSC-timing video, a 2 MHz 6502 system shouldn't have any problems generating VGA timings, as standard VGA is double the scan rate of CGA, which was NTSC compatible.

Resource links:
http://laughtonelectronics.com/Arcana/KimKlone/BrideOfSon%20KK%20Lancaster.html - a quick explanation for those who aren't lucky enough to own copies of Don's 'Cheap Video' cookbook series.

Build The TVT-6: A Low Cost Direct Video Display for the KIM-1 (1MHz 6502).

Of course you *COULD* source an N.O.S. 6845 chip and build a VGA-timing video board around it instead of doing CPU-generated timings. You'll need to do the research to check that you can get acceptable VGA timings and that it can double or quadruple enough scan lines to get the vertical resolution you want.
« Last Edit: October 14, 2017, 06:48:55 pm by Ian.M »
 

Online edavid

  • Super Contributor
  • ***
  • Posts: 3383
  • Country: us
Re: More educated attempt at my VGA question
« Reply #2 on: October 14, 2017, 05:47:25 pm »
I don't see any point to the Don Lancaster method these days. Even if you run the CPU at 5 MHz, it's no trouble to run the RAM at 10 MHz so you can dual-port it.

(OP doesn't seem like so much of a purist that he wouldn't use fast SRAMs.)
 

Online Ian.M

  • Super Contributor
  • ***
  • Posts: 12860
Re: More educated attempt at my VGA question
« Reply #3 on: October 14, 2017, 06:05:43 pm »
The advantage of Don's method is that it does away with a *LOT* of the logic and still offers incredible timing and display-organisation flexibility compared to dedicated logic. To free up main-processor time, it would be possible to dual-port the video RAM and run a slave 6502 to generate the video output and timing. Its code could be loaded into RAM by the main processor, which would then release it from reset to start generating video. The slave processor could also do stuff like update a bitmapped display with text from a ring buffer in off-screen dual-port RAM, including handling control codes, during the vertical sync and blanking, as it doesn't need to spend cycles outputting video in that interval. Other stuff that can easily be handled is vertical and horizontal scrolling, simply by adjusting the addresses called for each scanline.
« Last Edit: October 14, 2017, 06:55:00 pm by Ian.M »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: More educated attempt at my VGA question
« Reply #4 on: October 14, 2017, 06:15:32 pm »
Your pixel clock rate is determined by two things:  The number of pixels to be displayed on a line (you pick this) and the duration of the unblanked period (from the VGA spec).  If you need to lay down a lot of dots in a short period of time, you need to hurry.

All of which gets back to the fundamental, and most important, specification: screen resolution.
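
As a rough illustration (Python, assuming the roughly 25.6 us unblanked portion of a 640x480@60 line), the arithmetic is just:
Code: [Select]
# Pixel clock follows directly from horizontal resolution and the unblanked time.
UNBLANKED_US = 25.6                   # visible portion of one 640x480@60 line (approx.)

def pixel_clock_mhz(h_pixels, unblanked_us=UNBLANKED_US):
    """Dot clock in MHz needed to fit h_pixels into the visible window."""
    return h_pixels / unblanked_us

for h_pixels in (640, 320, 160):
    print(f"{h_pixels} pixels/line -> {pixel_clock_mhz(h_pixels):.2f} MHz")
# 640 -> 25.00 MHz, 320 -> 12.50 MHz, 160 -> 6.25 MHz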
 

Offline Dan Moos (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 357
  • Country: us
Re: More educated attempt at my VGA question
« Reply #5 on: October 14, 2017, 06:57:15 pm »
You mean this book!
Just getting at it. Still determining if it's the way to go.
 

Online edavid

  • Super Contributor
  • ***
  • Posts: 3383
  • Country: us
Re: More educated attempt at my VGA question
« Reply #6 on: October 14, 2017, 07:05:01 pm »
Quote from: Ian.M
The advantage of Don's method is that it does away with a *LOT* of the logic and still offers incredible timing and display-organisation flexibility compared to dedicated logic.

I know that's what DL claimed, but if you use a dedicated CPU, you don't really save any packages.
 

Offline Dan Moos (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 357
  • Country: us
Re: More educated attempt at my VGA question
« Reply #7 on: October 14, 2017, 07:17:47 pm »
Not sure why my pic posted sideways.


I'm still kinda confused by the timings as far as what is set in stone, and what can be customized.

Is the refresh rate of 60 Hz set in stone? Is it basically the hard number from which all the other timings are derived?

Do I understand correctly that the time between H syncs and the time between V syncs is completely up to me, but that the validity of the image depends on whether I choose intervals that mathematically work out with the 60 Hz refresh rate? In other words, as long as I have a combo where the time between Vsyncs is 1/60 of a second, I get a stable image?

How does the monitor "know" the resolution? My main confusion is what tells it how far physically to move down after each H sync. I can see the width of pixels being a function of how often I send new data, but what tells the monitor how tall a pixel is?

I'm still confused about what specifically determines how long my front and back porch periods are. Is it set in stone, or is it arbitrary?

Thanks again for the help!
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: More educated attempt at my VGA question
« Reply #8 on: October 14, 2017, 07:24:43 pm »
Quote from: Ian.M
The advantage of Don's method is that it does away with a *LOT* of the logic and still offers incredible timing and display-organisation flexibility compared to dedicated logic.

Quote from: edavid
I know that's what DL claimed, but if you use a dedicated CPU, you don't really save any packages.

Why wouldn't you save on counters and comparators?  I would expect a bunch of chips to be involved with timing.  The FPGA version sure uses a few.
 

Online edavid

  • Super Contributor
  • ***
  • Posts: 3383
  • Country: us
Re: More educated attempt at my VGA question
« Reply #9 on: October 14, 2017, 07:35:19 pm »
Quote from: Dan Moos
Not sure why my pic posted sideways.
iPhone?

Quote
Is the refresh rate of 60 Hz set in stone?
No, but each monitor design will only accept a certain range.  Check the monitor's spec sheet.

Quote
Do I understand correctly that the time between H syncs and the time between V syncs is completely up to me, but that the validity of the image depends on whether I choose intervals that mathematically work out with the 60 Hz refresh rate? In other words, as long as I have a combo where the time between Vsyncs is 1/60 of a second, I get a stable image?
The horizontal and vertical frequencies have to be in the range that the monitor accepts.  The resolution has to be something the monitor can display.

Quote
How does the monitor "know" the resolution? My main confusion is what tells it how far physically to move down after each H sync. I can see the width of pixels being a function of how often I send new data, but what tells the monitor how tall a pixel is?
The monitor has to count the number of lines per frame to figure this out.  It can take quite a while to switch to the right resolution and sync up.

Quote
I'm still confused about what specifically determines how long my front and back porch periods are. Is it set in stone, or is it arbitrary?
Arbitrary within a certain range.  You can use the nearest VESA timing, or you can try to pick something reasonable.
 

Online edavid

  • Super Contributor
  • ***
  • Posts: 3383
  • Country: us
Re: More educated attempt at my VGA question
« Reply #10 on: October 14, 2017, 07:38:30 pm »
Quote from: Ian.M
The advantage of Don's method is that it does away with a *LOT* of the logic and still offers incredible timing and display-organisation flexibility compared to dedicated logic.

Quote from: edavid
I know that's what DL claimed, but if you use a dedicated CPU, you don't really save any packages.

Quote from: rstofer
Why wouldn't you save on counters and comparators? I would expect a bunch of chips to be involved with timing. The FPGA version sure uses a few.

You do save on counters, but by the time you add the support chips for the extra CPU, it ends up being about the same chip count.  I guess it depends on what kind of logic you prefer to design (& debug).
 

Online Ian.M

  • Super Contributor
  • ***
  • Posts: 12860
Re: More educated attempt at my VGA question
« Reply #11 on: October 14, 2017, 07:44:53 pm »
Quote from: Ian.M
The advantage of Don's method is that it does away with a *LOT* of the logic and still offers incredible timing and display-organisation flexibility compared to dedicated logic.

Quote from: edavid
I know that's what DL claimed, but if you use a dedicated CPU, you don't really save any packages.
Probably not - dual-porting memory in a 6502 system, so it can be accessed by two processors on alternating clock phases, or by one processor and a raster video peripheral, is a bit of a PITA requiring a lot of logic, a good bit of which goes into generating suitable non-overlapping clock signals.

OTOH, if you use a 6502-based MCU with an external memory bus and internal hardware timers capable of directly generating the sync pulse timing, and make the main processor access it via an I/O port rather than memory-mapping it, it gets quite a bit simpler. You still need the logic to stuff the bus, but you can cheat a bit there and use bus switches to isolate the data bus at the slave CPU/MCU (which just weren't economically available back in the day), and a mix of pullups and pulldowns to stuff it. There are only three bits different between the 0x60 RTS opcode and the 0xEA NOP opcode, so to terminate the scanline you need to toggle a pin feeding the other end of three of the pullup/pulldown resistors from high to low, which could be handled by a timer. The bus switch would be controlled by another pin gated with an address range for the video RAM decoded from the address bus.
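
A trivial check of that three-bit difference (Python):
Code: [Select]
# 6502 opcodes: RTS = 0x60, NOP = 0xEA.  XOR them and count the differing bits.
RTS, NOP = 0x60, 0xEA
diff = RTS ^ NOP
positions = [i for i in range(8) if diff >> i & 1]
print(f"{bin(diff)} -> {len(positions)} bits differ, at bit positions {positions}")
# 0b10001010 -> 3 bits differ, at bit positions [1, 3, 7]
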
« Last Edit: October 14, 2017, 07:52:56 pm by Ian.M »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: More educated attempt at my VGA question
« Reply #12 on: October 14, 2017, 08:07:51 pm »
Quote from: Dan Moos
Not sure why my pic posted sideways.

I'm still kinda confused by the timings as far as what is set in stone, and what can be customized.

Is the refresh rate of 60 Hz set in stone? Is it basically the hard number from which all the other timings are derived?


As I understand it, yes! Unless you are in a 50 Hz country, I suppose. ETA: You can pick faster refresh rates, and I have no idea how things work in a 50 Hz country. Back in the phosphor monitor era, matching the line frequency was almost an absolute requirement if you didn't want waves in the edges.

Quote

Do I understand correctly that the time between H syncs and the time between V syncs is completely up to me, but that the validity of the image depends on whether I choose intervals that mathematically work out with the 60 Hz refresh rate? In other words, as long as I have a combo where the time between Vsyncs is 1/60 of a second, I get a stable image?


I doubt it.  Most LCDs do a lot of post-processing on the image and they have certain expectations.  Consider displaying a 4x3 graphic output on a 16x9 physical display.

I don't know what the modern LCD does during the blanking intervals, maybe nothing.  I guess you could try different things and see what works but that is awfully tough to do with discrete components.

You definitely can use refresh rates other than 60 Hz.  None seem to be slower but there are a lot that are faster.  See web site linked below.

Quote
How does the monitor "know" the resolution? My main confusion is what tells it how far physically to move down after each H sync. I can see the width of pixels being a function of how often I send new data, but what tells the monitor how tall a pixel is?

It knows how tall a line is! There are pixels at discrete locations. These aren't phosphor tubes where every spot is a pixel. When you define a wide pixel, you are really writing to multiple pixels. You say a pixel is 4x wide (because your pixel clock is some submultiple) and the display says no, a pixel is still 1x wide but there are 4 of them in a row. There is no clock signal coming over the VGA connection; everything is based on time after a horizontal sync pulse.

Quote

I'm still confused about what specifically determines how long my front and back porch periods are. Is it set in stone, or is it arbitrary?

Thanks again for the help!

Nothing is arbitrary. Were I you, I would look at the Spartan 3 Starter Board manual I linked earlier and plan on using their numbers. Their display is 640x480 and requires a 25 MHz pixel clock. That's pretty fast for discrete logic! The vertical sync counter would need to be 19 bits, or 5 ea. 4-bit counter chips. ETA: There are other ways to do this...

Look at Table 5-3.  A line is exactly 32 us long (800 clocks) and you get to write during 25.6 us.  In that 25.6 us, you need to lay down 640 dots so each dot is 40 ns and the pixel clock is 1/40 ns = 25 MHz.

It is worth noting that Digilent derived the porch numbers from physical observation.

Here is a table showing the proper VGA timing specs for 640x480@60Hz:

http://tinyvga.com/vga-timing/640x480@60Hz

Note that it uses a 25.175 MHz clock; Digilent rounded off to 25 MHz, about 0.7% off. The site presents timing for other resolutions.

1024x768 60 Hz resolution requires a 65 MHz clock:

http://tinyvga.com/vga-timing/1024x768@60Hz

That clock rate is smokin' for discrete logic.
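
A quick consistency check of those numbers (Python, using the tinyvga 640x480@60 figures):
Code: [Select]
# 640x480@60: 800 dot clocks per line (640 visible), 525 lines per frame (480 visible).
DOT_CLOCK_HZ = 25_175_000
H_TOTAL, V_TOTAL = 800, 525

line_us = H_TOTAL / DOT_CLOCK_HZ * 1e6           # ~31.78 us per line (32 us at a flat 25 MHz)
frame_ms = line_us * V_TOTAL / 1000              # ~16.68 ms per frame
refresh_hz = 1000 / frame_ms                     # ~59.94 Hz
err_pct = abs(25_000_000 - DOT_CLOCK_HZ) / DOT_CLOCK_HZ * 100

print(f"line {line_us:.2f} us, frame {frame_ms:.2f} ms, refresh {refresh_hz:.2f} Hz")
print(f"25 MHz vs 25.175 MHz: {err_pct:.2f}% off")   # ~0.70%
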
« Last Edit: October 14, 2017, 08:16:07 pm by rstofer »
 

Offline Dan Moos (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 357
  • Country: us
Re: More educated attempt at my VGA question
« Reply #13 on: October 14, 2017, 08:36:14 pm »
I understand how the width of a pixel works, but still don't know how the height is determined.

Let's say I want a 20 by 10 resolution (to use easy numbers). On a CRT, I imagine the horizontal scan just trucks along at a fixed speed, and how wide a pixel is is completely a function of how fast the data is coming in. So in my imaginary scenario, a pixel is as wide as the gun moves in 1/20 of the video signal sweep time.

But what determines how tall it is? How does it know to make the pixel 1/10 of the height of the display? Does it initially measure how fast Vsyncs are coming in, and adjust accordingly?
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: More educated attempt at my VGA question
« Reply #14 on: October 14, 2017, 09:03:05 pm »
Quote from: Dan Moos
I understand how the width of a pixel works, but still don't know how the height is determined.

Let's say I want a 20 by 10 resolution (to use easy numbers). On a CRT, I imagine the horizontal scan just trucks along at a fixed speed, and how wide a pixel is is completely a function of how fast the data is coming in. So in my imaginary scenario, a pixel is as wide as the gun moves in 1/20 of the video signal sweep time.

But what determines how tall it is? How does it know to make the pixel 1/10 of the height of the display? Does it initially measure how fast Vsyncs are coming in, and adjust accordingly?

Sure, the display can handle a number of different resolutions - STANDARD resolutions, as given in the User Manual. If you want to create 20x10 resolution, you may have to write the same horizontal line 48 times to fill up 480 scan lines. The LCD will actually be displaying a resolution much different from what you are writing, but you force the pixel size to be some bizarre value by what you transmit. The number of lines, sync pulses and sync timing, front and back porch size, and other aspects will still need to comply with an acceptable standard - one that the monitor recognizes.

Yes, the monitor measures Vsyncs and Hsyncs. Someplace in the system, the aspect ratio is handled. Maybe that is done in the PC. On my Surface Book, the resolution is 3000x2000. This doesn't transform exactly to a 16x9 display, so there are even wider front and back porches that work to center the image on the display. The LCD itself is running 1920x1080. I have no idea where that is converted.
 

Online Ian.M

  • Super Contributor
  • ***
  • Posts: 12860
Re: More educated attempt at my VGA question
« Reply #15 on: October 14, 2017, 09:10:28 pm »
OTOH, a 256x256 display would be fairly classic retro.  A square pixel 256x192 display would be even more classic.

Take that 65 MHz XGA dot clock - one quarter is 16.25 MHz, and 5V DIL-packaged, CMOS-compatible 16.257 MHz oscillator modules are easily available: https://www.digikey.com/short/q70p4m.  That's less than 0.5% error, so there should be no problems locking to it. With scanline tripling, that would give a 256x256 pixel display (fat rectangular pixels). I don't think a 6845 CRTC could handle that because of the scanline tripling. With scanline quadrupling it gives 256x192 pixels - a classic resolution for 8-bitters. For either, the CPU (or CRTC) would be clocked at a reasonable 2.032 MHz. 16 MHz CMOS logic is not much of a problem as long as the proto-board has a ground plane, but it would be extremely hairy on a breadboard.
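
Roughly how those numbers shake out (Python, assuming the 16.257 MHz oscillator module and the standard 1024x768@60 XGA timing):
Code: [Select]
# XGA (1024x768@60) uses a 65 MHz dot clock with 768 visible lines.
XGA_DOT_CLOCK_HZ = 65_000_000
OSC_HZ = 16_257_000                     # off-the-shelf DIL oscillator module

err_pct = abs(OSC_HZ - XGA_DOT_CLOCK_HZ / 4) / (XGA_DOT_CLOCK_HZ / 4) * 100
h_pixels = 1024 // 4                    # 256 fat pixels across the visible line
cpu_clock_mhz = OSC_HZ / 8 / 1e6        # ~2.032 MHz for the 6502 (or CRTC)

print(f"error vs 65/4 MHz: {err_pct:.3f}%")                      # ~0.043%
print(f"{h_pixels} pixels/line, CPU at {cpu_clock_mhz:.3f} MHz")
print(f"768 / 3 = {768 // 3} rows (tripling), 768 / 4 = {768 // 4} rows (quadrupling)")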

Another possible thing to consider is implementing RGBI colour. You'd need four RAM chips, and four bus switches to combine their data busses for the MCU. During the scanline they'd all read out simultaneously to four shift registers, giving four bits per pixel. The red, green and blue bits and the intensity bit would be combined to get 16 colours, CGA-style, and output to the VGA R, G and B lines via weighted-resistor DACs. During CPU memory accesses they'd be mapped individually.

 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: More educated attempt at my VGA question
« Reply #16 on: October 14, 2017, 09:11:50 pm »
Here's another discussion, but the important part is the diagram showing the timing and how it carves up the display.
https://electronics.stackexchange.com/questions/228825/programming-pattern-to-generate-vga-signal-with-micro-controller

 

Offline Dan Moos (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 357
  • Country: us
Re: More educated attempt at my VGA question
« Reply #17 on: October 14, 2017, 09:47:43 pm »
rstofer, that last link was very helpful! Probably didn't have any new information, but it kinda stated things in a way that cleared some stuff up.


Ok, I think I really only have one more question. How do I send pixels faster than my system clock? Is my original idea of sending the image stored in system RAM serially to a RAM chip on the video circuit, and then clocking the data out using a shift register and an appropriate crystal, a good one? Seems like a lot of intermediate pixel pushing, but doable. I can't see how to clock data out of system memory straight to the DAC (resistor network, more likely). So that tells me I need a RAM chip that isn't on the system data bus. But I feel I still need the image to initially be in system RAM so I can address individual pixels. I suppose I could still do that through my proposed serial data stream to the dedicated video RAM, but that sounds clunky. I think I can afford the RAM space anyway.

 

Online Ian.M

  • Super Contributor
  • ***
  • Posts: 12860
Re: More educated attempt at my VGA question
« Reply #18 on: October 14, 2017, 10:28:59 pm »
The video RAM needs to be dual-ported if you don't use Don Lancaster's method of making the main CPU the video controller. 5V-logic dual-ported RAM chips are difficult to find, so interleaved access to the same RAM using bus multiplexors is the best option. The RAM needs to be more than twice as fast as it would need to be if only the CPU were accessing it. If it's synchronous to the CPU phi2 clock, and correctly phased, video access won't interfere with CPU access; otherwise you need to stretch the CPU clock to hold off access while the video controller is reading the RAM, which has the disadvantage of making the average CPU clock speed hard to calculate.

The key to getting the timing to work easily is to derive the system clock from the CRTC pixel clock by dividing it by eight. The duty cycle may have to be tweaked a bit to get an asymmetric 65C02 phi0 clock input signal rather than a straight square wave, to allow time for two memory accesses in one CPU clock cycle: one from the 65C02 and the other from your display controller. One way of doing that would be to start with double the desired pixel clock, run it through a 4-bit synchronous counter, and decode 0000 to set a flip-flop and another state to reset it. That would let you tweak the duty cycle away from 50% in 6.25% increments. A simpler method that doesn't need fast logic would be to use a monostable to stretch either the '1' state or the '0' state as required.
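
A sketch of that arithmetic (Python; the particular pixel clock is just an assumed example):
Code: [Select]
# Start from 2x the pixel clock; a 4-bit counter divides it by 16, so one phi0
# cycle spans 16 counter states and each decoded state is 1/16 = 6.25% of duty.
PIXEL_CLOCK_HZ = 12_587_500             # assumed example: half the 25.175 MHz VGA dot clock
counter_clock_hz = 2 * PIXEL_CLOCK_HZ
phi0_hz = counter_clock_hz / 16         # = pixel clock / 8

for high_states in (8, 9, 10):          # 'set' decoded at 0000, 'reset' N states later
    duty_pct = high_states / 16 * 100
    print(f"{high_states}/16 states high -> {duty_pct:.2f}% duty at {phi0_hz / 1e6:.3f} MHz phi0")
# 8 -> 50.00%, 9 -> 56.25%, 10 -> 62.50%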
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: More educated attempt at my VGA question
« Reply #19 on: October 14, 2017, 10:36:48 pm »
Everything depends on how you architect the memory (CPU and/or graphics).  I've been thinking about this and, as mentioned above, there is no reason not to use some very fast RAM and multiplex it.

The 6502 has access on the odd cycles and the graphics control has access on the even cycles.  You have 40 ns per pixel or 320 ns per byte and you will surely be grabbing bytes and shifting them out.

You would need 80 bytes per line and 480 lines, or 38400 bytes of RAM. I would probably allocate 128 bytes per line and 512 lines. That way my addressing is simplified: 65536 bytes of RAM required. Not a problem if you go along with 32Kx8 SRAM.
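
A hedged sketch (Python) of what the 128-byte row stride buys you in addressing:
Code: [Select]
# 640x480 monochrome: 80 bytes of pixels per line, padded to a 128-byte stride so
# the row number becomes the upper address bits directly (y * 128 == y << 7).
BYTES_PER_LINE = 80                     # 640 pixels / 8
ROW_STRIDE = 128                        # power of two -> no multiplier needed in hardware

def pixel_to_address(x, y):
    """Byte address and bit position of pixel (x, y) in the 64 KB video RAM."""
    return (y << 7) | (x >> 3), 7 - (x & 7)

print("RAM allocated:", 512 * ROW_STRIDE, "bytes")      # 65536
print("pixel (639, 479) ->", pixel_to_address(639, 479))
# -> (61391, 0): byte 479*128 + 79, least-significant bit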

The graphics engine will provide one address for the ram (even cycles) and the CPU will provide another address.  So, how to do the CPU side?

Mostly the CPU will want to address the graphics RAM sequentially. So, why not have a 16-bit register located at two contiguous byte-wide I/O addresses? The CPU sets the address, and the subsequent read or write (through another I/O address) will occur at that address. I was thinking that the address counter should autoincrement, but it needs to be clever about wrapping around after the 80th byte, not waiting around for the 128th byte, because that address isn't used. Or code can reload the register after the end of the previous line - probably the easy way...

With this scheme, the CPU can sequentially read everything (or write everything) without messing around with the autoincrementing register. But this doesn't work for read-modify-write! So, set aside another pair of addresses, and if the registers are written via these addresses, autoincrementing is disabled, or maybe it only occurs after writing. You set the register to some location, you read without incrementing, and when you eventually write the updated byte, the counter increments. Something like that. You could also have a single-bit control register at another I/O address that controls the autoincrement.

You will need a multiplexer on the address lines and you will need some way to direct CPU write data to the memory (depending on whether DI and DO share the same pins).

In any event, I wouldn't try to get the graphics memory inside the CPU address space.
« Last Edit: October 14, 2017, 10:40:00 pm by rstofer »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: More educated attempt at my VGA question
« Reply #20 on: October 14, 2017, 11:42:31 pm »
I wonder if read-modify-write is the only kind of access the CPU will need to the graphics memory.  Why would the CPU just want to read the pixels?

1) set up counter
2) read byte of pixels
3) modify byte
4) write byte of pixels - counter updates at end of write
5) go to 2
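
A behavioural sketch of that sequence (Python, with the auto-incrementing port modelled as a hypothetical class):
Code: [Select]
# Toy model of the proposed I/O port: a pointer register plus a data port that
# auto-increments only after a write, so read-modify-write falls out naturally.
class GraphicsPort:
    def __init__(self, size=65536):
        self.ram = bytearray(size)
        self.ptr = 0

    def set_pointer(self, addr):         # step 1: set up the counter
        self.ptr = addr & 0xFFFF

    def read(self):                      # step 2: read a byte, pointer unchanged
        return self.ram[self.ptr]

    def write(self, value):              # step 4: write the byte, then increment
        self.ram[self.ptr] = value & 0xFF
        self.ptr = (self.ptr + 1) & 0xFFFF

port = GraphicsPort()
port.set_pointer(0x0100)
for mask in (0x80, 0x40, 0x20):          # set one pixel in each of three bytes
    pixels = port.read()                 # step 2
    port.write(pixels | mask)            # steps 3 and 4; pointer moves on (step 5 repeats)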
 

Online Ian.M

  • Super Contributor
  • ***
  • Posts: 12860
Re: More educated attempt at my VGA question
« Reply #21 on: October 15, 2017, 12:16:30 am »
Consider the traditional set of operations supported by a bitBLT routine, and also note that when rendering text with an 8-pixel-wide font, one will be incrementing to the next scanline to paint each row of the character far more often than incrementing within a scanline. If using different-width fonts, it gets to be an unholy mess of incrementing within a scanline and incrementing by scanline. Adding hardware support to offload managing the video RAM access pointer from the CPU when it isn't just a simple autoincrement is *DIFFICULT*.

It may be preferable to map a small window onto the graphics RAM into the main CPU memory map. 4K would probably be an appropriate size, as that would allow 32 scanlines (assuming 80 bytes/line at 128-byte increments) to be accessible in the memory window at the same time, which would make for easy scrolling of text up to 16 pixels high. Again for convenience, either a 4-bit adder would be used to combine the low bits of the window base register with bits A8 to A11 of the CPU address bus, so that the window start could be set to any even scanline, or the window would actually be divided into two 2K windows with independent base registers.
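
A rough sketch of the window arithmetic (Python; the exact split of the base register across the address lines is my assumption):
Code: [Select]
# 4 KB window into the 64 KB video RAM: 32 rows of 128 bytes.  The base register
# effectively adds a multiple of 256 bytes (two rows) via the 4-bit adder on
# A8-A11, so the window can start on any even scanline.
ROW_STRIDE = 128
WINDOW_SIZE = 4096

def window_to_vram(cpu_offset, base_row):
    """Map an offset inside the 4 KB CPU window to a video RAM byte address."""
    assert 0 <= cpu_offset < WINDOW_SIZE
    assert base_row % 2 == 0             # adder works in 256-byte (two-row) steps
    return (base_row * ROW_STRIDE + cpu_offset) & 0xFFFF

# Window based at scanline 100: offset 0 lands in row 100, offset 4095 in row 131.
print(window_to_vram(0, 100) // ROW_STRIDE, window_to_vram(4095, 100) // ROW_STRIDE)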

However I still prefer the idea of a slave processor handling the graphics, and a ring buffer in shared memory to feed it text, control codes and graphics commands.

 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
Re: More educated attempt at my VGA question
« Reply #22 on: October 15, 2017, 12:53:39 am »
Quote from: Ian.M
However, unlike old CRT monitors, you can't damage an LCD or LED flat-panel monitor by feeding it a signal with out-of-range H-sync or V-sync pulse rates.

This is a little backwards. The ADC and clock-generator logic in an LCD screen just won't turn on with an invalid video signal. Feeding invalid timing to some CRTs, especially malformed sync timing like the old Macintosh II monitor-destroying virus generated, would blow the CRT's yoke/flyback drive circuitry, destroying the monitor. I repaired 2 such blown CRTs back in the day. An LCD screen has no such high-powered, coil-driven circuitry whose power transistors can be burned out by a badly timed signal.
 

Offline Dan Moos (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 357
  • Country: us
Re: More educated attempt at my VGA question
« Reply #23 on: October 15, 2017, 12:57:26 am »
Lots of good stuff here. Some of it may take a bit of processing on my part, but it's making sense.

I have another question about the VGA signal standard.

I am looking at the VGA signal my uVGAIII board is putting out. Everything seems in line with what I've been reading. At 640x480, all timings are as I expected.

Except one... The pulse width of the Vsync is half what I read it should be. Everywhere I look, it says it should be 60 or so microseconds wide, but the Vsync on my uVGA board is 31 microseconds. Every other thing (Hsync, porches, times between pulses, etc.) is as I expected.

My understanding is that a vertical refresh is triggered by the down-going edge of the Vsync pulse. Am I to infer that, within reason, the actual pulse width is not that important? Or is there something else at play with the uVGA's signal? I only ask because I'm starting to get a hold of this business, and this anomaly is bugging me!
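
For reference, the arithmetic behind the figure I keep seeing (Python, from the tinyvga 640x480@60 numbers, assuming the usual two-line Vsync pulse):
Code: [Select]
# Vsync pulse width in the standard 640x480@60 timing: 2 lines of 800 dot clocks
# each, at 25.175 MHz.
DOT_CLOCK_HZ = 25_175_000
LINE_US = 800 / DOT_CLOCK_HZ * 1e6

print(f"1 line  = {LINE_US:.1f} us")         # ~31.8 us
print(f"2 lines = {2 * LINE_US:.1f} us")     # ~63.6 us (the published Vsync width)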

 

Offline Dan Moos (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 357
  • Country: us
Re: More educated attempt at my VGA question
« Reply #24 on: October 15, 2017, 12:59:41 am »


Quote from: BrianHG
This is a little backwards. The ADC and clock-generator logic in an LCD screen just won't turn on with an invalid video signal. Feeding invalid timing to some CRTs, especially malformed sync timing like the old Macintosh II monitor-destroying virus generated, would blow the CRT's yoke/flyback drive circuitry, destroying the monitor. I repaired 2 such blown CRTs back in the day. An LCD screen has no such high-powered, coil-driven circuitry whose power transistors can be burned out by a badly timed signal.

Unless I misread his post, I think what he wrote agrees with you.
 

