Author Topic: Graphics coprocessor - what would you use today?  (Read 1106 times)


Offline bd139

  • Super Contributor
  • ***
  • Posts: 16044
  • Country: gb
Re: Graphics coprocessor - what would you use today?
« Reply #25 on: August 05, 2020, 04:52:47 pm »
Nearly. I had a kernel that ran a static binary called /bin/init which had my program in it. It worked!  :-DD
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 3011
  • Country: si
Re: Graphics coprocessor - what would you use today?
« Reply #26 on: August 05, 2020, 05:52:02 pm »
Quote
This is great input; many thanks all.

Looking at OpenGL, if this needs the chip vendor's driver, that driver is going to be a binary executable which needs a specific CPU.

Obviously there are 80x86 versions; I see OpenGL/CUDA/whatever options in my Vegas video editors. Hilariously, they don't actually run much faster than disabling GPU acceleration and just using the 12-core i7, and they have loads of bugs (a sore point which few want to talk about) ;) But the target I have in mind is just a 150MHz ARM, which is perhaps 1/100 of that performance.

But are there OpenGL drivers for an ARM, not running some specific OS?

Pretty much all ARM chips with a GPU come with a Linux OpenGL driver of some sort (since that's usually how the GPU vendor intends you to use it). Having the driver run on another CPU is not really a concern, since these GPUs are on the same silicon die as the CPU; it's not possible to connect the GPU to anything other than the CPU it shares the die with.

And yes, offloading complex video processing tasks to the GPU via CUDA can be complicated and can depend a lot on the specific hardware configuration of your PC, the drivers, the OS etc. But this is fairly new tech, so it might still have kinks that need to be worked out.

When using old versions of OpenGL for rendering 2D or 3D graphics you get really wide compatibility: it will run on pretty much any PC GPU made in the last 15 years, be it Nvidia, ATI, Intel, VIA...etc. It just runs faster on newer, more powerful cards. If the embedded variant, OpenGL ES, is used, the compatibility extends to mobile GPUs such as the ones in phones and tablets. Heck, there are even software implementations of OpenGL that don't need a GPU at all and run entirely on the CPU (but that's usually too slow to be useful).
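To give a feel for how little code legacy OpenGL needs, here is a minimal sketch of a fixed-function GL 1.x program. I've assumed GLUT for the window and context (any context creation method would do); something like this runs on nearly any driver, including pure software rasterizers:

Code: [Select]
/* Minimal legacy OpenGL 1.x sketch: fixed-function pipeline, no shaders.
   Assumes GLUT for window/context creation; link with -lGL -lglut. */
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);                        /* immediate mode */
    glColor3f(1, 0, 0); glVertex2f(-0.5f, -0.5f);
    glColor3f(0, 1, 0); glVertex2f( 0.5f, -0.5f);
    glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("GL 1.x triangle");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}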

Quote
Basically anything Broadcom touches, run away from. Rapidly. Preferably shooting over your shoulder.
Yep, I hate them too. Not that Qualcomm is any better, and they were out to acquire NXP.
 

Offline peter-h

  • Frequent Contributor
  • **
  • Posts: 306
  • Country: gb
  • Doing electronics since the 1960s...
Re: Graphics coprocessor - what would you use today?
« Reply #27 on: August 05, 2020, 09:28:26 pm »
To a large extent, how to approach this depends on how cleverly one can implement the Bresenham and Horn algorithms (for drawing lines and arcs, respectively). I did this way back on a Z80, obviously in asm, and it wasn't bad at 4MHz. Next come polygon fills; those are unfortunately much slower.
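For reference, here is a minimal integer-only Bresenham line sketch in C; put_pixel() is a hypothetical framebuffer write, and the structure is the same whether it ends up as Z80 asm or ARM C:

Code: [Select]
#include <stdlib.h>                    /* abs() */

extern void put_pixel(int x, int y);   /* hypothetical framebuffer write */

/* Integer-only Bresenham line: one add/compare pair per pixel, no
   multiplies or divides in the loop, so it suits small CPUs well. */
void draw_line(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = (x0 < x1) ? 1 : -1;
    int dy = -abs(y1 - y0), sy = (y0 < y1) ? 1 : -1;
    int err = dx + dy;                 /* combined error term */

    for (;;) {
        put_pixel(x0, y0);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; }   /* step in y */
    }
}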

But a whole lot also depends on how the data is represented. Often one can cache a lot of graphics shapes - the old "sprite" concept.
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 3286
  • Country: us
Re: Graphics coprocessor - what would you use today?
« Reply #28 on: August 08, 2020, 12:45:08 am »
Quote
a life of 20+ years
Hah hah. 

I think there has only been ONE standard for just the connection of a display to a controller that has lasted 20 years (VGA.)  (and it's going away now, since it's fundamentally based on CRT technology, which is also "obsolete.")
 

Offline peter-h

  • Frequent Contributor
  • **
  • Posts: 306
  • Country: gb
  • Doing electronics since the 1960s...
Re: Graphics coprocessor - what would you use today?
« Reply #29 on: August 08, 2020, 08:05:36 am »
VGA is nothing to do with CRTs.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 16044
  • Country: gb
Re: Graphics coprocessor - what would you use today?
« Reply #30 on: August 08, 2020, 08:41:13 am »
Huh, it most certainly is entirely designed for, and tied to, CRTs at an electrical and protocol level. The fact that it works with anything else is a testament to the LCD controller manufacturers.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 3011
  • Country: si
Re: Graphics coprocessor - what would you use today?
« Reply #31 on: August 08, 2020, 11:32:35 am »
The VGA port was designed for use with CRT monitors, because the only type of high resolution color display on the market in 1987 was a CRT.

As a result, the circuitry required to put a VGA signal onto a CRT monitor was as simple as could be. The Vsync and Hsync signals come in on dedicated wires, so there is no sync extraction needed. Those feed two entirely separate PLLs as references, which generate the two ramp signals that scan the beam in X and Y. The Hsync signal continues to pulse even inside vertical blanking; there are no video lines to sync there, but it makes it easier for the horizontal scan PLL to keep lock and makes AC coupling easier inside the horizontal deflection amplifiers (since they just keep scanning). The R, G and B signals are likewise sent over 3 wires just as the CRT circuitry needs them, so all it has to do is amplify each color channel and feed it directly into the electron gun. The RGB signals also go to black during blanking, so no blanking circuitry is needed in the monitor.
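As a concrete illustration of that sync and blanking structure, the classic 640x480@60 VGA mode boils down to a handful of constants (a sketch; the struct and field names are mine):

Code: [Select]
/* The classic 640x480@60 VGA mode, pixel clock 25.175 MHz. */
struct vga_timing { int visible, front_porch, sync_pulse, back_porch; };

static const struct vga_timing h = { 640, 16, 96, 48 };  /* = 800 clocks/line */
static const struct vga_timing v = { 480, 10,  2, 33 };  /* = 525 lines/frame */

/* 800 x 525 x ~60 Hz = ~25.2 MHz; a fifth of each line and ~9% of each
   frame is blanking, purely for the CRT's benefit. */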

None of this is required for an LCD panel, but when the market is 100% CRT, you are going to design something that fits those well. Eventually, though, LCD technology advanced far enough to become a CRT alternative. By that point most full color, high resolution displays used VGA or a weird proprietary standard that worked almost identically to VGA (such as the DB13W3 used by SiliconGraphics, IBM, NeXT...etc), so in order to compete with CRTs, LCDs had to support VGA.

Fortunately, LCDs can in principle be drawn to in any order you like, given the right controller circuitry. So, to make LCDs easy to connect to a VGA source, the controllers imitated the CRT as much as possible. The Hsync and Vsync lines are just as convenient for an LCD, as are the separate analog R, G and B intensities. A PLL locks onto Hsync and multiplies its frequency by the number of horizontal pixels (plus blanking); for classic 640x480@60 that is ~31.5 kHz x 800 total clocks per line = 25.175 MHz. This recovered pixel clock samples the R, G and B color values and steps the pixel counter from left to right (since CRTs also go left to right), and each Hsync edge simply steps down by one row. The TFT cells on an LCD need analog drive anyway, so you can run the analog R, G and B values through a CCD bucket brigade to deserialize the pixels into a full line that is then put onto the column drivers. The row drivers are just shift registers clocked by Hsync edges, and counters cut away the useless blanking areas that the LCD does not need. And done! Since LCDs are this flexible about driving order, it was easy to make an LCD controller circuit that draws the same way a CRT does, and so simple to connect to VGA.

But it's still a bit of a hack, so LCD controllers started providing a direct pixel clock input. In something like a laptop, the graphics card's video DAC clock could be tapped off and provided to the LCD controller as the pixel clock (no need for pixel clock recovery via a PLL). But since the graphics card is generating digital data anyway, why not just put the video DAC inside the LCD controller, where its timing can be tied tightly to the required LCD matrix scan timing? This is how the DPI (Display Pixel Interface) RGB bus was created. It still has the useless CRT blanking times, to remain VGA compatible, but it is now fully digital.

This is now the modern digital video bus. When you use DVI or HDMI, it is that exact DPI RGB bus, just sent through a serializer and some encoding so that it only needs 4 diff pairs instead of 28 wires (8 red, 8 green, 8 blue, 1 Hsync, 1 Vsync, 1 DataEnable, 1 PixelClock) and works reliably over long cables. The LVDS bus used in modern laptops to drive LCDs is the same DPI bus, except it is only serialized, not encoded (no long wires needed there). So even with HDMI you are still using blanking periods from the 80s whose only purpose was to give the slow magnetic-deflection CRTs time to get the beam back into the top left corner. But HDMI found a use for this blanking time and placed digital audio data in there.

So in a sense you could say HDMI is still designed for CRTs, due to the compatibility chain it was born out of.
 
The following users thanked this post: bd139

Offline rsjsouza

  • Super Contributor
  • ***
  • Posts: 4019
  • Country: us
  • Eternally curious
    • Vbe - vídeo blog eletrônico
Re: Graphics coprocessor - what would you use today?
« Reply #32 on: August 08, 2020, 12:02:35 pm »
Excellent post! :clap:
Vbe - vídeo blog eletrônico http://videos.vbeletronico.com

Oh, the "whys" of the datasheets... The information is there not to be an axiomatic truth, but instead each speck of data must be slowly inhaled while carefully performing a deep search inside oneself to find the true metaphysical sense...
 
The following users thanked this post: Berni

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 15652
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Graphics coprocessor - what would you use today?
« Reply #33 on: August 08, 2020, 01:11:43 pm »
Heh, that they work in similar ways -- I think you could prove the necessity of that:

Consider what both are doing: filling a screen of millions of pixels, from a few signal lines.

Right off the bat, it's obvious we can't connect single pixels.  Millions of signals -- that's preposterous at any level.  Even connecting individual rows or columns isn't really useful.  You'd need a 223-signal connector just to drive the rows of an NTSC monitor, and you'd still need some way to drive the columns, some kind of multiplexing.

So to even begin to be practical, we need to multi-multiplex our pixels.


Geometrically speaking, we want to project a volume -- video spans two spatial dimensions and one time dimension.  We can construct that volume however we like, but simple geometric shapes will be the easiest, and anything that requires additional circuitry or mechanics to deal with synchronization, variable bandwidth, distortion, etc. will very quickly be impractical.

And then, whatever our multiplex encoding method is, it will map to the axes of that geometric space.

The cube, cylinder and sphere are the simplest.  Consider some of their attributes:
- A cube is the extrusion of a square, which is the extrusion of a line segment.
- More generally, a skewed, unequal-sided cube is a parallelepiped (the 3D equivalent of a parallelogram).  The rectangular prism is the special case when the axes are orthogonal, and the cube when the axes are orthogonal and equal.  We probably want this case, to avoid dealing with non-square pixels and consequent distortion.
- A cylinder is the extrusion of a circle, which is the revolution of a line segment.
- A sphere is the revolution of a circle, which is the revolution of a line segment.

We can perform a geometric extrusion by scanning a raster: we project a dot or line, then move it relative to the projection screen.  If we do this twice, we can go from a pointlike signal to a flat surface, and thus turn a 0-D signal into a line, into a plane, and finally animate that to get our full video experience.

We can perform a revolution in the same way, but it's less obvious how this would be constructed electronically.  A mechanical implementation might scan the signal onto a line, then rotate the line physically.  That gives us a POV spinner.  This has the expected downsides: the intensity varies with radius (the central pixels are moving less than the outer ones), and the pixel resolution is inconsistent (leading to a circular streaking appearance towards the edge).  We can at least solve the resolution electrically (committing more bandwidth to the edge pixels than the central ones), but this is a challenge -- if we didn't have an FPGA or fast CPU to do it, we'd have a real problem.

The signal can also be double-revolved, which has the advantage of spherical symmetry -- it looks the same from any angle (assuming we've solved the uniformity problems, of course).


All of this says nothing about the sort of multiplexing -- merely that it exists.  Anything doing a scan has to be time-division multiplexed (TDM), including the mechanical types.  TDM seems to be the simplest case.  We could of course choose frequency instead (FDM), but it would require a lot of signal analysis to get at.

Which... heh, that's actually a bit interesting, for two reasons:
1. We could make a POV spinner by driving LEDs from an array of bandpass filters.  The bandwidth of each filter is proportional to its center frequency, which is proportional to its radius on the spinner.  Thus we automatically get more bandwidth as we go out.  Downside: it's not easy to construct arrays of bandpass filters!
2. Spectrograms have been used as auditory steganography for some years.  This really depends on the availability of "waterfall" spectrogram displays in common audio editing tools; without those, it just sounds funny, like there's something going on, but you can't really tell until you see the transform.  It would be a right pain to construct live video this way (again, you'd want a reasonably powerful CPU or FPGA to do the work), but it would drop directly into the above setup!


Anyway, multiplexing also requires bandwidth.  Obviously, you need ~X * Y * FPS bandwidth.  NTSC needed a few MHz, which is a good example of the smallest commercially viable bandwidth, give or take.  VGA used low 10s of MHz, and it's gone up over time since.
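A quick back-of-envelope check of that rule of thumb, visible pixels only (blanking adds roughly another 10-25% on top):

Code: [Select]
#include <stdio.h>

/* Sanity check of the ~X * Y * FPS bandwidth rule. */
int main(void)
{
    printf("VGA 640x480x60:   %6.1f Mpixel/s\n", 640.0 * 480 * 60 / 1e6);   /* ~18.4 */
    printf("  incl. blanking: %6.1f MHz pclk\n", 800.0 * 525 * 60 / 1e6);   /* ~25.2 */
    printf("1080p60:          %6.1f Mpixel/s\n", 1920.0 * 1080 * 60 / 1e6); /* ~124  */
    return 0;
}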

Wires in cables offer 100s of MHz of bandwidth without too much effort -- pretty typical for a coaxial or twisted pair of modest length (some meters, perhaps up to 100s of meters when done with care).

So multiplexing is a fine choice, and it seems TDM is the simplest and best option.


What about the multiplex format, then?  If we did a polar coordinate transform, we could transmit pixels in a spiral.  Tricky though: the number of pixels per ring varies with radius, so we need to be very careful with timing.

The simplest TDM multi-multiplex, with consistent timing and bandwidth (fixed pixel clock), is a square raster.  Don't have to worry about pixels per ring, or intensity or shape or anything, it's all equal.

We could still quibble about scan direction, whether we're doing it by rows back and forth (lawnmower scan) or all one direction or what; but here again, the simplest case is probably best, and anything more elaborate will only be more and more complicated.


For the CRT, it makes particular sense to scan in the same order (e.g., left to right, top to bottom): if the scans alternated direction (making a zig-zag down the screen), you'd need perfect deflection linearity on both phases of the waveform.  Probably more work to solve; plus a bidirectional deflection driver is required.  (An actual electrical concern back in the days of expensive and unreliable vacuum tubes!  The sawtooth sweep is easily solved with a single driver.)

For the LCD, it also makes sense to scan in the same order: if the pixel address increments linearly, then it can simply overflow (or compare and reset to zero), little logic required.  If the scan reversed every row, you'd need more logic to compute the increment/decrement.
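A rough C sketch of that difference in addressing logic (dimensions are illustrative; real panels do this in driver silicon, not software):

Code: [Select]
#define WIDTH  640
#define HEIGHT 480

/* Plain raster: a single counter with wrap-around is all the logic needed. */
static unsigned next_linear(unsigned addr)
{
    return (addr + 1) % (WIDTH * HEIGHT);      /* overflow / compare-and-reset */
}

/* Lawnmower (zig-zag) scan: extra row/column state and a direction flip
   every row -- noticeably more logic for no benefit. */
static unsigned next_zigzag(unsigned addr)
{
    unsigned row = addr / WIDTH, col = addr % WIDTH;
    if (row % 2 == 0)                          /* even rows scan left->right */
        return ((col == WIDTH - 1) ? addr + WIDTH : addr + 1) % (WIDTH * HEIGHT);
    else                                       /* odd rows scan right->left  */
        return ((col == 0) ? addr + WIDTH : addr - 1) % (WIDTH * HEIGHT);
}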

Plus, an alternating direction would be annoying.  Say, if it ever got out of sync and dropped a row, the whole image flips!  Or for the CRT, the vertical scan needs to be a staircase linked to the horizontal sweep, or else the rows will have a subtle taper from edge to edge.


So I think it's very reasonable actually, that LCDs would be driven in the same way.

Internal to the LCD, there is row buffering, level conversion (translating digital bytes into analog pixel driver voltages), and physical addressing -- row and column drivers.  Which means... we do in fact use enormous 2000+ signal cables after all, they're just very short, and etched directly onto the glass. :D  (Or occasionally hot-bar bonded between glass and driver boards.  Plasma TVs, too.)

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Online NiHaoMike

  • Super Contributor
  • ***
  • Posts: 6530
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: Graphics coprocessor - what would you use today?
« Reply #34 on: August 08, 2020, 02:30:53 pm »
Quote
So even with HDMI you are still using blanking periods from the 80s whose only purpose was to give the slow magnetic-deflection CRTs time to get the beam back into the top left corner.
Those blanking periods have become much shorter for LCDs.
https://en.wikipedia.org/wiki/Coordinated_Video_Timings#Reduced_blanking
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline peter-h

  • Frequent Contributor
  • **
  • Posts: 306
  • Country: gb
  • Doing electronics since the 1960s...
Re: Graphics coprocessor - what would you use today?
« Reply #35 on: August 08, 2020, 03:01:27 pm »
In a previous life, I used to design and build CRT drive boards, mono and colour, so I knew the stuff about V and H sync, etc :)

What I was referring to was the CPU-graphics interface, which is generally just memory mapped. It's nothing to do with a CRT.

It is further downstream that another bit of hardware picks up this video memory (de facto it is dual-ported, although in the old days writes would be restricted to the H or V flyback slots, for simplicity) and shifts it out into a DAC, synced to the line and frame rates.
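A minimal sketch of that memory-mapped model in C; the base address and pixel format here are made up for illustration:

Code: [Select]
#include <stdint.h>

/* Hypothetical memory-mapped framebuffer: the CPU just writes words;
   separate scan-out hardware reads the same RAM and shifts it to the DAC
   in sync with the line and frame rates. */
#define FB_BASE   ((volatile uint16_t *)0x60000000u)  /* made-up address */
#define FB_WIDTH  480

static inline void put_pixel(int x, int y, uint16_t rgb565)
{
    FB_BASE[(uint32_t)y * FB_WIDTH + x] = rgb565;
}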
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 3011
  • Country: si
Re: Graphics coprocessor - what would you use today?
« Reply #36 on: August 08, 2020, 03:19:29 pm »
Quote
So even with HDMI you are still using blanking periods from the 80s whose only purpose was to give the slow magnetic-deflection CRTs time to get the beam back into the top left corner.
Those blanking periods have become much shorter for LCDs.
https://en.wikipedia.org/wiki/Coordinated_Video_Timings#Reduced_blanking

True, it does make sense to reduce them; who would make a FullHD CRT monitor with an HDMI input? But they are still there, mostly because LCD displays have gotten so used to having blanking that they tend to use that time for internal housekeeping, so they do actually need a little bit of it, and HDMI also needs it to fit the audio in. Also, HDMI supports interlaced video, and that abomination would not exist if it were not for CRTs combined with broadcast bandwidth limitations.

Quote
Heh, that they work in similar ways -- I think you could prove the necessity of that:

Consider what both are doing: filling a screen of millions of pixels, from a few signal lines.
Sure, everyone draws from left to right, top to bottom (likely just because we read that way, since there is no technical reason for it), but there are some variations on it.

There is interlaced video, for example. But in more modern times it's bandwidth requirements that are pushing other ways of doing it. For example, high resolution LCD controllers have moved on to Dual Link LVDS. This is the same as the usual 4-pair LVDS, except that one set of 4 pairs carries the odd pixels while the other set carries the even pixels. This halves the bitrate per link and allows the display controller to be implemented as two slower display controllers with the column lines interleaved. This is also how Dual Link DVI is implemented. Then, as ridiculous-resolution monitors came around, bandwidth once again became a problem, so multiple digital video cables were paralleled together, with the display data sent as interleaved columns, separately for the left and right halves of the screen.
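The odd/even split is trivial at the source end. A sketch, where the two output buffers stand in for the two LVDS serializers:

Code: [Select]
#include <stddef.h>
#include <stdint.h>

/* Dual-link split: even pixels on link A, odd pixels on link B, so each
   link (and each half of the panel controller) runs at half the rate. */
static void split_dual_link(const uint32_t *line, size_t width,
                            uint32_t *link_a, uint32_t *link_b)
{
    for (size_t x = 0; x + 1 < width; x += 2) {
        link_a[x / 2] = line[x];       /* pixels 0, 2, 4, ... */
        link_b[x / 2] = line[x + 1];   /* pixels 1, 3, 5, ... */
    }
}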

Then they started putting ridiculous-resolution displays in phones, and realized that sending many millions of pixels 60 times a second takes a lot of power too. So the MIPI-DSI standard implemented two ways of drawing to the display: the classical way (basically LVDS with a few extra commands) and the DCS way. The DCS method draws pixels in any order while the display controller keeps a framebuffer in its own RAM. Since mobile phones draw a lot of static menus, the CPU only needs to send the pixels that are changing, in any order it likes.
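A sketch of what a DCS-style partial update looks like: 0x2A/0x2B/0x2C are the standard DCS set-column/set-page/write-memory opcodes, while dcs_write() and dcs_pixels() are stand-ins for whatever the actual DSI host controller driver provides:

Code: [Select]
#include <stdint.h>

#define DCS_SET_COLUMN_ADDRESS  0x2A
#define DCS_SET_PAGE_ADDRESS    0x2B
#define DCS_WRITE_MEMORY_START  0x2C

extern void dcs_write(uint8_t cmd, const uint8_t *args, int len); /* stand-in */
extern void dcs_pixels(const uint16_t *px, int count);            /* stand-in */

/* Send only the changed rectangle: set the window, then stream pixels.
   The panel's own framebuffer holds everything outside the window. */
static void update_rect(int x0, int y0, int x1, int y1, const uint16_t *px)
{
    uint8_t col[4]  = { x0 >> 8, x0 & 0xFF, x1 >> 8, x1 & 0xFF };
    uint8_t page[4] = { y0 >> 8, y0 & 0xFF, y1 >> 8, y1 & 0xFF };

    dcs_write(DCS_SET_COLUMN_ADDRESS, col, 4);
    dcs_write(DCS_SET_PAGE_ADDRESS, page, 4);
    dcs_write(DCS_WRITE_MEMORY_START, NULL, 0);
    dcs_pixels(px, (x1 - x0 + 1) * (y1 - y0 + 1));
}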




Sorry, I've gotten this thread a bit off topic. But I suppose the OP will need to connect a high resolution LCD at some point and venture into this zoo of modern display interface standards.
 
The following users thanked this post: T3sl4co1l

Offline westfw

  • Super Contributor
  • ***
  • Posts: 3286
  • Country: us
Re: Graphics coprocessor - what would you use today?
« Reply #37 on: August 08, 2020, 08:05:07 pm »
Quote
What I was referring to was the CPU-graphics interface [of VGA], which is generally just memory mapped. It's nothing to do with a CRT.
Fair enough.  It'll probably be a long time before the standard abstraction of a 2D array of pixels isn't a 2D array of memory.  (Although there have been exceptions: vector displays, scan-line oriented displays on early video games, sprite-based stuff...)

I was explicitly referring to the electrical connection, though, which is the part that has lasted 20+ years.  The CPU/graphics part of VGA (640x480: 1987) quickly gave way to SVGA (800x600: 1988) and XGA (1024x768: 1990) and hung out there for a while till HD came along (and the electricals still worked.  Yay for the VGA electrical designers!)  (Not that there weren't competitors in the early days.  I think I still have some "Apple to VGA" adapters...)

 

