Author Topic: Why does Y increase going down the screen?  (Read 3135 times)


Offline NiHaoMikeTopic starter

  • Super Contributor
  • ***
  • Posts: 9015
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Why does Y increase going down the screen?
« on: October 03, 2018, 03:14:42 am »
Nancy released a math (review) video that reminded me of something computer scientists and mathematicians seemingly have yet to agree on - why is it that on modern computers, Y values increase going down the screen when in math, Y values increase going up? Wouldn't it make sense to agree to the convention set by mathematicians centuries ago?

Has there been a computer that worked that way - that is, where the Y values increase going up the screen? Extending the question, has there been a computer that uses a backwards X axis?
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline Rerouter

  • Super Contributor
  • ***
  • Posts: 4694
  • Country: au
  • Question Everything... Except This Statement
Re: Why does Y increase going down the screen?
« Reply #1 on: October 03, 2018, 03:17:44 am »
I believe it's a remnant from CRT screens: they scan left to right, top to bottom, so rendering would start at Y=0 and increment it while moving down the screen.
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3640
  • Country: us
Re: Why does Y increase going down the screen?
« Reply #2 on: October 03, 2018, 03:28:33 am »
In OpenGL the origin is the lower-left corner (that is, X coordinates increase rightwards and Y coordinates increase upwards). This is because it is a special case of a 3-dimensional system in which Z coordinates increase farther away from the eye. Most 2-dimensional graphics systems are set up the way you describe, with the origin in the upper-left corner.
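A quick sketch of what moving between the two conventions looks like: flipping a point between a top-left-origin 2D system and a bottom-left-origin system such as OpenGL's window coordinates is just a reflection of Y against the viewport height (the struct and function names here are made up for the example):

Code: [Select]
/* Convert between a top-left-origin 2D coordinate system and a
 * bottom-left-origin one (e.g. OpenGL window coordinates).
 * Assumes integer pixel coordinates and a known viewport height. */
typedef struct { int x, y; } point;

point top_left_to_bottom_left(point p, int viewport_height)
{
    point q = { p.x, viewport_height - 1 - p.y };  /* reflect Y */
    return q;
}
/* Applying the same function twice returns the original point,
 * so it also converts in the other direction. */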

I believe it's a remnant from CRT screens: they scan left to right, top to bottom, so rendering would start at Y=0 and increment it while moving down the screen.
This is the case in most CRT screens, but certainly not all: in many portrait CRT displays, the raster scans at a 90° angle, so "horizontal" scanning is actually vertical in the user's reference frame. The Xerox Alto was like this, for example. The Radius Pivot was a later, fairly common example.
 
The following users thanked this post: NiHaoMike

Offline Ampera

  • Super Contributor
  • ***
  • Posts: 2578
  • Country: us
    • Ampera's Forums
Re: Why does Y increase going down the screen?
« Reply #3 on: October 03, 2018, 03:54:27 am »
The general idea, not only with CRTs but I believe with the signal feeding more modern flat panels as well, is that the image is scanned left to right, then stepped downwards one line at a time; on a CRT that scan was literally a beam.

If you're in the '80s and developing a video generator for your microcomputer, which is easier: spitting out each pixel by reading memory sequentially, or doing some sort of maths to get to the right part of memory? Memory is almost always just a line of numbers, so when you're mapping your linear screen memory onto something more useful for your program (a Cartesian grid), counting your X values from left to right makes sense, since that's how they sit in memory, and counting your Y values from top to bottom also makes sense, since the farther down the screen a line is, the later it comes in memory.
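To make that mapping concrete, here's a minimal sketch of the usual linear-framebuffer arithmetic (the resolution and buffer are made-up example values, not anything from this post):

Code: [Select]
#include <stdint.h>

/* Classic linear raster: the byte for pixel (x, y) lives at
 * y * width + x, so X counts left-to-right and Y counts
 * top-to-bottom, exactly as the hardware scans it out. */
#define WIDTH  320   /* example values only */
#define HEIGHT 200

static uint8_t framebuffer[WIDTH * HEIGHT];

static void put_pixel(int x, int y, uint8_t color)
{
    framebuffer[y * WIDTH + x] = color;  /* row-major addressing */
}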

Of course, when dealing with modern computing, even though at the hardware level the screen is likely still written out like this, you can address the screen however you like, even in vectors if you want.
I forget who I am sometimes, but then I remember that it's probably not worth remembering.
EEVBlog IRC Admin - Join us on irc.austnet.org #eevblog
 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21675
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Why does Y increase going down the screen?
« Reply #4 on: October 03, 2018, 07:25:41 am »
Yeah, it more or less goes back to Philo T. Farnsworth scanning the CRT electron beam the way English text is read: lines left to right, then top to bottom.

Probably if it were an Arabic invention, it would scan right to left first, or if Chinese, top to bottom even(??).

Diversion:

The display is scanned in real time -- there is no memory, no buffering, nothing at all between the video generator and the screen.  Once you understand an analog video signal, it's quite a visceral thing: you can almost read the video off the oscilloscope without it being rasterized.  It's easier to recognize patterns, contrasts and animations than exact imagery, of course, but sometimes that's enough.

This had been true for almost the entire history of television: what's being viewed, in the studio, in real time, is exactly in lockstep synchronization (save for the propagation delays through cables and radio transmitters) with what's appearing on television screens across the country, indeed across the globe for international telecasts, within some limits (NTSC vs. PAL for example needs a scan converter of some sort, AFAIK usually no more fancy than a camera watching a CRT).

Of course there was always the option of filming, and rebroadcasting the film reel -- an easy way to distribute edited programs and movies, to affiliates across the country, allowing them to show material at location-appropriate times.  As technology improved, somewhat cheaper tapes (rewritable!) showed up (in the 50s), giving similar benefits with shorter turn-around -- no film developing and production needed.  Tape loops allowed sanitization of live video feeds.

Later still (~70s), real-time digital signal processing became available, so that entire frames could be buffered, synchronized, mixed, scaled and composited, without having to synchronize (genlock) multiple sources together.  Television networks began to link up with worldwide microwave and satellite feeds -- feeds which, in the early days (~80s), were sent in the clear so that anyone with a suitably equipped satellite receiver could view them!

Come the present day, memory and processing are so cheap that video streams, heavily compressed, are stored automatically at regional cache servers, then transmitted over wired and sometimes wireless networks, before sitting in a local buffer for seconds at a time; finally, the video decoder, and output chain itself, buffers several more frames still, before the final result is sent to the physical display.  Which still receives its video (albeit in a digital format)...in the same raster scanned sequence as a century ago. :)


As for alternate coordinate systems -- probably there are more interesting examples from oddball tubes and systems.  The first high definition display was developed in the late 50s, believe it or not -- by marrying two 10-bit DACs to a mainframe computer, you can scan a beam back and forth fast enough to form images, with a total resolution of 1024 x 1024!  Of course, the analog bandwidth of this system limits how many points or line segments can be drawn, a very different and much harsher drawback compared to the 525-line television of the time.

RADAR sets used a polar coordinate system, although I don't know how many were purely electrical, versus electromechanical (e.g., the CRT deflection yoke is physically rotated, in sync with the RADAR antenna, to implement the azimuth scan).  You might be able to implement image rotation with a solenoidal deflection coil (e.g., this is how a Tek 475 implements its trace rotation trim), though I'm not sure how well that can work out for large angles (e.g., the center and edge of the image may not rotate the same, unavoidably turning a line scan into an 'S' shape?).  Alternately, a rather complicated polar-cartesian converter circuit could be used, which could still be analog (ah, the heady days of analog computing) and so not compromise the granularity or bandwidth too badly.
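For the all-electronic route, the polar-to-Cartesian step itself is just trigonometry; here's a digital sketch of what such a converter computes, keeping Y increasing downward as in the rest of the thread (the function name and parameters are made up for the example):

Code: [Select]
#include <math.h>

/* Map a radar return at (range, azimuth) onto a raster display whose
 * origin is the top-left corner and whose Y axis increases downward.
 * Azimuth is measured clockwise from "up" (north), as on a PPI scope. */
static void ppi_to_screen(double range, double azimuth_rad,
                          double max_range, int screen_w, int screen_h,
                          int *px, int *py)
{
    double scale = (screen_h < screen_w ? screen_h : screen_w) / 2.0 / max_range;
    double cx = screen_w / 2.0, cy = screen_h / 2.0;      /* scope centre */

    *px = (int)(cx + range * scale * sin(azimuth_rad));   /* east is +X            */
    *py = (int)(cy - range * scale * cos(azimuth_rad));   /* north is up: subtract */
}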

Some chipsets scanned memory in a peculiar way: the ZX Spectrum, for example, divided the screen into three regions (top, middle, bottom), and within each third the pixel rows are interleaved in memory rather than stored consecutively, with the colour attributes kept in a separate block -- a thoroughly nonlinear mapping (in graphics, "linear" always refers to a simple analog-style raster).
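For the curious, the Spectrum's display-file address can be rebuilt from the (x, y) pixel position with a bit of bit-shuffling; this sketch follows the commonly documented layout, so treat the constants as recalled rather than gospel:

Code: [Select]
#include <stdint.h>

/* ZX Spectrum display file: 256x192 pixels starting at 0x4000.
 * The Y bits are interleaved so that consecutive memory rows are
 * eight character rows apart within each third of the screen. */
static uint16_t spectrum_pixel_addr(int x, int y)   /* x: 0..255, y: 0..191 */
{
    int third     =  y >> 6;        /* which third of the screen (0..2)   */
    int pixel_row =  y & 7;         /* scan line within the character row */
    int char_row  = (y >> 3) & 7;   /* character row within the third     */

    return 0x4000 | (third << 11) | (pixel_row << 8) | (char_row << 5) | (x >> 3);
}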

I do know very well the IBM-PC-compatible systems, CGA, EGA and VGA.  Text modes are character based (effectively, a fixed grid of sprites, laid out in linear fashion), selecting from 256 characters.  Each character has a foreground and background color (choosing from 16 and 8 options respectively), and a flashing attribute, for a total of 16 bits per character.  Typical modes are 40x25, 80x25, 80x43, etc..
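As an illustration of that 16-bit cell layout (0xB8000 is the usual colour text-mode base address; the helper below is a sketch for a real-mode program or emulator, not anything from this post):

Code: [Select]
#include <stdint.h>

/* One text-mode cell: low byte = character code, high byte = attribute.
 * Attribute: bits 0-3 foreground (16 colours), bits 4-6 background
 * (8 colours), bit 7 flashing. */
static void put_char(volatile uint16_t *text_mem,  /* e.g. 0xB8000 mapped in */
                     int col, int row, int columns,
                     uint8_t ch, uint8_t fg, uint8_t bg, int blink)
{
    uint8_t attr = (uint8_t)((blink ? 0x80 : 0) | ((bg & 7) << 4) | (fg & 0x0F));
    text_mem[row * columns + col] = (uint16_t)(attr << 8) | ch;
}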

CGA is essentially linear (packed pixels, no planes) and free to access, but accessing video RAM during the active scan is likely to cause "snow" (corruption of the bits being read out to the screen).  16 colors in text, 4 colors at 320x200 (four 2-bit pixels per byte, with even scan lines in one half of the buffer and odd lines 8 KB higher), 2 colors at 640x200.
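A sketch of the 320x200 four-colour addressing, following the commonly documented CGA layout (segment B800h, with even and odd scan lines in separate 8 KB halves):

Code: [Select]
#include <stdint.h>

/* CGA 320x200, 4 colours: four 2-bit pixels per byte.
 * Even scan lines start at offset 0x0000, odd lines at 0x2000. */
static void cga_put_pixel(volatile uint8_t *vram,   /* segment B800h mapped in */
                          int x, int y, uint8_t color /* 0..3 */)
{
    uint16_t offset = (uint16_t)((y & 1) * 0x2000 + (y >> 1) * 80 + (x >> 2));
    int shift = (3 - (x & 3)) * 2;   /* leftmost pixel sits in the high bits */

    vram[offset] = (uint8_t)((vram[offset] & ~(3 << shift)) | ((color & 3) << shift));
}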

EGA introduced bit planes, where each color component (R, G, B and intensity) is bit-linear (1 byte = 8 consecutive pixels), and memory is read from four different locations to make each string of eight pixels.  This sounds kinda horrible, but it's both better and worse: IO is mediated by the controller, which allows simultaneous writes to any combination of planes, with logic operations.  You don't have sprites moving over a background -- there's only one layer of memory that's drawn to the display -- but you can emulate that with masking and compositing (at the expense of CPU and IO cycles, so it goes rather slowly).
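Here's a minimal sketch of the "simultaneous writes to any combination of planes" part, using the Sequencer's Map Mask register at ports 3C4h/3C5h (the outb() helper is an assumption here, standing in for whatever port-write routine your compiler provides):

Code: [Select]
#include <stdint.h>

/* Assumed helper: performs an x86 port write.  On DOS compilers this
 * would be outportb()/outp(); it is declared extern here as a stand-in. */
extern void outb(uint16_t port, uint8_t value);

/* Select which of the four EGA/VGA planes the next CPU write lands in.
 * Bit 0 = blue plane, bit 1 = green, bit 2 = red, bit 3 = intensity. */
static void set_plane_mask(uint8_t mask)
{
    outb(0x3C4, 0x02);        /* Sequencer index: Map Mask               */
    outb(0x3C5, mask & 0x0F); /* one write now hits every selected plane */
}

/* Example: with mask 0x0F, writing 0xFF to one byte of video memory sets
 * that byte in all four planes at once -- eight bright-white pixels from
 * a single CPU write. */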

VGA introduced higher resolutions, a few hardware touch-ups (e.g., being able to, y'know, read back the IO register states?), and a 256-color mode that's linear on the face of it, but actually chained to internal planes.  Downside: a lot of VRAM is wasted (you don't get any hidden pages that you can write to while viewing a different, static, memory area).  This led to the development of "Mode X", the unchained version of this mode, in which IIRC the pixels are interleaved across the four planes, each plane holding every 4th pixel.  So within a plane, byte 0 is (0,0), byte 1 is (4,0), byte 2 is (8,0) and so on; byte 320/4 * 200 - 1 is (316,199), the last pixel of that plane, and the same offsets in the next plane hold (1,0), (5,0), etc.  This seems terribly inconvenient, but when you can organize writes into columns it's no imposition (which happens to be what WOLF3D, DOOM and others did naturally), and because none of the VRAM is wasted, you get two (or more) pages, so you can buffer the output to eliminate frame tearing (draw to the invisible page while viewing the active one; when both are done, swap them and redraw).
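And a sketch of the Mode X addressing that falls out of this, reusing the same assumed outb() helper and Map Mask register as above:

Code: [Select]
#include <stdint.h>

extern void outb(uint16_t port, uint8_t value);   /* assumed port-write helper */

/* Mode X (320x200, unchained 256 colours): plane = x mod 4, and within a
 * plane each row is 320/4 = 80 bytes wide.  A "page" is just a different
 * starting offset in the same VRAM window (segment A000h). */
static void modex_put_pixel(volatile uint8_t *vram, uint16_t page_offset,
                            int x, int y, uint8_t color)
{
    outb(0x3C4, 0x02);                    /* Sequencer index: Map Mask  */
    outb(0x3C5, (uint8_t)(1 << (x & 3))); /* enable only plane x mod 4  */
    vram[page_offset + y * 80 + (x >> 2)] = color;
}

/* Page flipping: draw everything at the hidden page_offset, then point the
 * CRTC start address (registers 0Ch/0Dh via port 3D4h) at it and swap. */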

Which is the reason for a peculiar fault, speaking of DOOM: the Venetian Blind crash.  When a particularly nasty exception occurred (with an unimplemented handler), the protected-mode backend would ragequit back to DOS without cleaning up after itself.  That left the interrupts (timer, keyboard) hooked, so the prompt was unresponsive and you had to reset, and it left the video in its unchained ("Mode X") state -- which was funny, because the error message emitted (and dutifully printed by DOS, and the video BIOS routine in turn) just became colorful vertical lines of gibberish!


If you're curious for more depth, there's quite a lot of info on the various sprite systems that were used through the 80s and 90s -- the C64, the NES and SNES, the Master System/Genesis, even into the 3D era with the PS1, N64, and to a lesser extent the GBA or DS, as well as the many ways modern GPUs model scenes and composite graphics.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline chris_leyson

  • Super Contributor
  • ***
  • Posts: 1541
  • Country: wales
Re: Why does Y increase going down the screen?
« Reply #5 on: October 03, 2018, 08:12:55 am »
I think a lot of it has to do with hardware and memory mapping. Text goes from left to right and top to bottom, so the start of a page of text in memory maps to the top left corner. It's no coincidence that raster scans also go from left to right and top to bottom (see Tim's post), and if you use the same mapping convention for graphics, then the origin is top left.
The Tektronix 4010 and 4014 graphics terminals, which used a storage CRT, had the origin at the bottom left, and so did one of their predecessors, the IBM 2015.
 

Offline Rasz

  • Super Contributor
  • ***
  • Posts: 2616
  • Country: 00
    • My random blog.
Re: Why does Y increase going down the screen?
« Reply #6 on: October 03, 2018, 11:30:04 am »
In OpenGL the origin is the lower-left corner

So did the DirectX fixed-function pipeline, AFAIK.
I seem to remember DirectDraw/Win32 GDI had the flipped orientation at the hardware level, under the hood.
Who logs in to gdm? Not I, said the duck.
My fireplace is on fire, but in all the wrong places.
 

Offline @rt

  • Super Contributor
  • ***
  • Posts: 1059
Re: Why does Y increase going down the screen?
« Reply #7 on: October 04, 2018, 12:59:21 pm »
There’s this little known device called the Apple iPhone :D
 

Offline rrinker

  • Super Contributor
  • ***
  • Posts: 2046
  • Country: us
Re: Why does Y increase going down the screen?
« Reply #8 on: October 04, 2018, 02:24:47 pm »
 Tim's pretty much got it covered.
In fact, on my first computer the whole machine had to run at an 'odd' frequency so that it corresponded to the scan rate of the TV. The video chip simply did DMA accesses to memory in step with the scan, setting the video line high or low depending on whether the bit was 1 or 0. This 'video chip' was little more than a glorified shift register. There was no dedicated video memory - it just displayed the contents of main memory. The minimum it could display was 256 bytes, which was all the base system had - so depending on the length of your program, the display was a whole bunch of random pixels (your code) plus whatever area you had written a specific image to.

 

Offline tooki

  • Super Contributor
  • ***
  • Posts: 11500
  • Country: ch
Re: Why does Y increase going down the screen?
« Reply #9 on: October 08, 2018, 10:05:18 am »
Nancy released a math (review) video that reminded me of something computer scientists and mathematicians seemingly have yet to agree on - why is it that on modern computers, Y values increase going down the screen when in math, Y values increase going up? Wouldn't it make sense to agree to the convention set by mathematicians centuries ago?

Has there been a computer that worked that way - that is, where the Y values increase going up the screen? Extending the question, has there been a computer that uses a backwards X axis?
Well, the best answer IMHO is not to assume that all modern computers work the same way!

Most computer graphics systems do indeed use the top-left as their graphics origin. This includes Windows, classic Mac OS, iOS, X11, and AmigaOS.

Others use bottom-left, like OpenGL, Mac OS X, OS/2, and the BBC Micro! (Mac OS X allows a developer to set a flag to use the top-left origin instead.)

I'm not aware of any that use an x origin at the right.

Of course, all modern windowing systems allow both axes to have positive and negative values, since a window may be moved partly off-screen, or because additional displays in a multi-monitor arrangement may be placed beyond the origin. For example, if you have a Windows system with two identical 1920x1080 displays side-by-side, and you have the taskbar on the right-hand display (making it the primary display), then the left display's corners are at (-1920,0) [top left], (-1,0) [top right], (-1920,1079) [bottom left] and (-1,1079) [bottom right], and the right display's are at (0,0) [top left], (1919,0) [top right], (0,1079) [bottom left] and (1919,1079) [bottom right].
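A small sketch of that layout, using the hypothetical two-display setup from the example above (these rectangles are illustrative, not any real API):

Code: [Select]
#include <stdbool.h>

/* Virtual-desktop rectangles for the hypothetical side-by-side setup
 * described above: the primary (right) display contains the origin,
 * the left display sits entirely at negative X. */
typedef struct { int left, top, right, bottom; } rect;   /* right/bottom exclusive */

static const rect left_display  = { -1920, 0,    0, 1080 };
static const rect right_display = {     0, 0, 1920, 1080 };

static bool point_on_display(const rect *r, int x, int y)
{
    return x >= r->left && x < r->right && y >= r->top && y < r->bottom;
}
/* point_on_display(&left_display, -1, 0) is true: the top-right pixel of
 * the left monitor, exactly as listed above. */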

It's important to mention that in a multi-monitor setup, the displays need not be aligned, so it's eminently possible to have oddly-shaped sections of the coordinate grid that are off-screen.

Note that the numbers might be off by 1 px, due to the question of where a pixel lies: do the coordinates describe the center point of a pixel, or one of its corners? Various graphics systems over the years have handled this differently, and it results in fencepost errors, especially when going between systems without realizing it. For example, in classic Mac OS (QuickDraw), coordinates name the grid lines between pixels, so to draw an 8x4 px rectangle you specified the grid lines enclosing it, i.e. (0,0) to (8,4).
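A tiny illustration of the fencepost issue, using the edge-coordinate convention from that QuickDraw example (names invented for the sketch):

Code: [Select]
/* Edge-coordinate convention (as in the QuickDraw example): coordinates
 * name the grid lines between pixels, so a rectangle from (0,0) to (8,4)
 * covers pixel columns 0..7 and rows 0..3. */
typedef struct { int left, top, right, bottom; } edge_rect;

static int rect_width(edge_rect r)  { return r.right - r.left; }   /* 8 - 0 = 8 */
static int rect_height(edge_rect r) { return r.bottom - r.top; }   /* 4 - 0 = 4 */

/* Under a pixel-centre convention the same corners (0,0)..(8,4), taken
 * inclusively, would span 9x5 pixels -- the classic off-by-one when
 * moving between systems. */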
 

