Author Topic: Generating test video signal with Rigol DG1022Z  (Read 1509 times)


Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Generating test video signal with Rigol DG1022Z
« on: October 15, 2021, 01:38:49 am »
As I mentioned in one of the posts in a thread in the Test Equipment section here, it's really easy to generate any waveform for this func-gen, because the file format it uses (RAF) is literally just raw waveform data (assuming you don't use the more complex version of the format that has a header). Each sample is 14 bits stored in 2 bytes rather than bit-packed, the values are unsigned, and the byte order is little-endian (for each 2-byte value, the least significant byte comes first).
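If you'd rather script the file than export it from an image editor, here's a minimal Python sketch of the headerless layout as I understand it (just an illustration, not the program I actually used):

import struct

# One unsigned 14-bit sample (0..0x3FFF) per little-endian 16-bit word,
# no header, nothing else in the file.
def write_raf(path, samples):
    with open(path, "wb") as f:
        for s in samples:
            s = max(0, min(0x3FFF, int(s)))   # clamp to the 14-bit DAC range
            f.write(struct.pack("<H", s))     # "<H" = little-endian unsigned 16-bit

# e.g. a full-scale ramp, just to prove the format out on the generator:
write_raf("ramp.RAF", range(0, 0x4000, 4))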

With this knowledge I was able to use Photoshop to create a test signal from a picture I found on the internet. For the frame/field arrangement I decided on the simple one used by the SNES and N64 game consoles, which gives 240p resolution instead of 480i. Basically, each frame is only the height of what would normally be a field (in effect the same field is repeated for every image instead of alternating between fields, so the frame height is half the normal height but the frame rate is doubled). Each frame therefore consists of exactly 263 scanlines instead of 525 (with 525 it would be split into 2 fields of 262.5 lines each), and the frame rate is 60 fps instead of a 60 fps field rate with a 30 fps frame rate. The first 20 lines are VBI, followed by 243 lines of image, of which 3 are black (only 240 lines are actual image, because it's a 240p video signal).

I used only the tools in Photoshop to edit the image into a transmittable NTSC frame. I used the levels control for things like setting the sync, blanking, and black levels, and the rectangular selection box to cut out the sync signals. I didn't bother with color, because Photoshop has nothing that can generate a chroma carrier. I cut a few corners on the widths and levels of the syncs, aiming for approximately correct values from memory rather than exact ones taken from the NTSC spec; I've read the spec before, so I had a general idea of the values I needed. One corner I cut, for example, was making the equalizing pulses in the VBI the same width as the HSync pulses, even though equalizing pulses are normally about half that width. I then used the levels control at the end to scale the output values into the 0x0000 to 0x3FFF range expected by the DG1022Z's 14-bit DAC. Finally, I saved it in 16-bit mode to a raw file and made sure it had the correct file extension of RAF.
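For reference, the level scaling at the end boils down to a linear map from the -40 to +100 IRE composite range onto the 14-bit DAC codes. A quick Python sketch of that mapping (this is just how you'd do it numerically; in practice I did it with the levels control):

# Map composite IRE levels onto the DG's 14-bit range:
# -40 IRE -> 0x0000 and +100 IRE -> 0x3FFF.
def ire_to_code(ire):
    return round((ire + 40.0) / 140.0 * 0x3FFF)

SYNC  = ire_to_code(-40)    # 0      (sync tip)
BLANK = ire_to_code(0)      # ~4681  (blanking level)
BLACK = ire_to_code(7.5)    # ~5559  (black / setup level)
WHITE = ire_to_code(100)    # 16383  (white)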

I used a USB stick to get this file into my DG and then manually set the output to a 50 Ω load and 1 Vpp signal amplitude. I then set the correct sample rate on the generator (12.6 MSa/s), based on the intended line rate of 15750 lines/s and a line width of 800 samples (counting the image, blanking, and sync samples on each line). Given the frame height of 263 lines, that gives 59.89 frames/second, which is close enough to the 60 fields/sec of monochrome TV that it shouldn't make a difference (for color TV it's actually 59.94 fields/sec, so 59.89 is even farther off-spec than that, but it still seems to work for game consoles like the N64 and SNES).
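The arithmetic behind those numbers, as a quick Python check:

# 800 samples per line at 15750 lines/s fixes the sample rate, and 263 lines
# per frame then fixes the frame rate.
samples_per_line = 800
line_rate        = 15750.0                      # monochrome NTSC line rate
lines_per_frame  = 263

sample_rate = samples_per_line * line_rate      # 12,600,000 Sa/s
frame_rate  = line_rate / lines_per_frame       # ~59.886 frames/s
print(sample_rate, frame_rate)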

Note that in a separate experiment with this DG, my Picoscope, and a 50 Ω terminator, I found the output voltage was actually about 1.2 Vpp instead of 1 Vpp. This might be an issue with the 50 Ω BNC terminator I used; then again, a real TV's internal termination resistor might be much closer to nominal, and the TV might genuinely end up seeing about a 1.2 Vpp signal. (A TV's composite input is nominally 75 Ω rather than 50 Ω, and a 50 Ω source set for a 50 Ω load delivers exactly 1.2 Vpp into 75 Ω, which might explain the reading if the terminator is really closer to 75 Ω.) I figured any decently built TV, even if it gets a 1.2 Vpp signal instead of the proper 1 Vpp, would likely compensate for it and not be damaged, so I decided to go ahead with my TV experiment.

After getting the DG set up, I connected it to the composite video input of my TV via an RCA video cable and a BNC-to-RCA adapter, and switched on the signal output of my DG. And guess what: IT ACTUALLY WORKED!

I got a picture of my TV's display with my cellphone camera, and I've attached that photo to this post.
 
The following users thanked this post: tv84

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #1 on: October 15, 2021, 01:42:39 am »
Here's the actual raw waveform file I used. I've zipped it up to post here, because the forum won't let me post files with an RAF extension.
 

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #2 on: October 15, 2021, 01:48:57 am »
Here's a PNG showing what the full frame (including blanking and sync signals) looks like stretched from black to white (i.e. not scaled into the 0x0000 to 0x3FFF range expected by the DG). The PNG was saved two steps before generating the raw output, prior to both the conversion from 8-bit to 16-bit mode and the scaling into the 0x0000 to 0x3FFF range.
 

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #3 on: October 15, 2021, 02:15:20 am »
Then I remembered that I'd previously used my Picoscope to analyze how the N64 does it, and it doesn't use 2 VSync pulses per line on the VSync lines, nor 2 equalizing pulses per line on the equalizing-pulse lines. Instead, it used only one extra-long VSync pulse on each VSync line, and it skipped equalizing pulses entirely, instead using an HSync pulse at the beginning of each such line just like on a normal line, with no pulses at all in the middle of these lines. So I created a new test signal using this technique in the VBI lines, and again it worked quite well with my TV. I didn't photograph it this time, as it looked the same as the result of my previous experiment. I did, however, save a PNG of the new frame image (saved from Photoshop prior to the conversion to 16-bit mode and the final scaling). I have attached both that PNG and the raw waveform file (again in a zip file) to this post.
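In case it helps to see the simplification as code rather than pixels (I actually did this in Photoshop), here's a rough Python sketch of the two line-start layouts, with levels in IRE and my usual approximate widths:

SAMPLE_RATE  = 12.6e6
LINE_SAMPLES = 800
H_SYNC_SMP   = round(4.7e-6 * SAMPLE_RATE)      # ~59 samples of normal HSync

def line_start(is_vsync_line):
    """First part of a scanline as a list of IRE levels (-40 = sync tip)."""
    if is_vsync_line:
        # N64 style: one long pulse, leaving only an HSync-sized gap
        # before the start of the next line.
        return [-40.0] * (LINE_SAMPLES - H_SYNC_SMP)
    return [-40.0] * H_SYNC_SMP                 # ordinary HSync; video follows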
 

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #4 on: October 15, 2021, 07:37:14 am »
Next step was to test color video. I quickly realized I couldn't simply draw this out in Photoshop, because it needs a chroma carrier at a precise frequency, and that isn't something Photoshop can produce. This is where I switched over to Visual Basic and wrote a program to generate the waveform file. At first I thought the approximations I'd used in Photoshop for things like sync pulse level and width would be sufficient. They weren't: my first attempt just gave me a flat gray screen, no color. That's when I decided to read the official NTSC specifications and use the proper sync pulse timings and IRE levels: -40 for sync, 7.5 for black, and 100 for white.

I still kept the 263-line technique used by the N64 (including its removal of equalizing pulses and its single long VSync pulse per line on the VSync lines), but otherwise kept all the pulse timings in-spec for NTSC, including making sure the gap after each N64-width VSync pulse was the same size as the gap that would normally fall between the second VSync pulse on a standard VSync line and the end of that line. The only reason I went with the 240p game-console output style is that it's simpler to implement (no need to worry about interlacing) and I know it actually works; if I hadn't known it was used on a real game console, I'd have assumed even this technique was way too far out of spec to bother trying.

The signal now also contains a color burst (chroma sync) on each line, excluding the 3 VSync lines and the 3 lines before and 3 lines after VSync. The color burst timing and amplitude are set precisely to the NTSC specification. As mentioned above, when I first tried to fit the burst into my older, approximate timing scheme, the TV didn't detect it and I got just a gray screen. With the timing corrected, it worked great and I was able to display a color image.
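For anyone wanting to reproduce this, below is a condensed Python sketch of how one line of the color signal can be built: 4.7 µs sync, a burst of about 9 subcarrier cycles starting roughly 5.3 µs after the sync edge at 40 IRE p-p, then active video with the chroma riding on the luma. It's an illustration of the approach with approximate blanking boundaries, not my actual VB program, and the burst phase convention here is arbitrary (chroma phase is measured relative to it):

import math

FSC          = 3_579_545.0                 # NTSC chroma subcarrier, Hz
LINE_RATE    = FSC / 227.5                 # ~15734.264 lines/s
SAMPLES_LINE = 800
FS           = SAMPLES_LINE * LINE_RATE    # ~12.587411 MSa/s

def make_line(line_no, luma_ire, chroma_pp_ire, chroma_phase, with_burst=True):
    """One scanline as a list of IRE values; chroma_phase is relative to the burst."""
    out = []
    for n in range(SAMPLES_LINE):
        t  = n / FS                                        # time within this line
        sc = 2 * math.pi * FSC * (t + line_no / LINE_RATE) # carrier phase, continuous
        if t < 4.7e-6:                                     # HSync tip
            ire = -40.0
        elif with_burst and 5.3e-6 <= t < 5.3e-6 + 9 / FSC:
            ire = 20.0 * math.sin(sc)                      # burst: 40 IRE p-p reference
        elif t < 9.4e-6 or t > 62.0e-6:                    # rest of blanking (approx.)
            ire = 0.0
        else:                                              # active picture
            ire = luma_ire + (chroma_pp_ire / 2) * math.sin(sc + chroma_phase)
        out.append(ire)
    return out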

On the func-gen side of things, the only thing I needed to change was the sample rate, so that the chroma carrier comes out at precisely the right frequency. Monochrome video uses 15750 lines per second (exactly 12.6 MSa/s for my 800-sample-wide test video), while color video uses approximately 15734.264 lines per second (approximately 12.587411 MSa/s for the same 800-sample line width).
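Spelled out, the constraint is just that the 800 samples of one line have to span exactly 227.5 subcarrier cycles (reusing the same numbers as the sketch above):

FSC       = 3_579_545.0           # Hz, NTSC color subcarrier
LINE_RATE = FSC / 227.5           # 15734.2637... lines/s
FS        = 800 * LINE_RATE       # 12587410.99... Sa/s
print(FS, 800 / 227.5)            # ~12.587411 MSa/s, ~3.52 samples per chroma cycle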

The color test picture I made is very simple: two colored rectangles, the top one blue (actually a slightly purplish blue, though in the screenshot it looks closer to pure blue) and the bottom one greenish yellow. The top rectangle consists of a sine wave 180° out of phase with the chroma carrier (positive Cb and 0 Cr, in the CbCr color encoding used by NTSC), and the bottom rectangle consists of a sine wave exactly in phase with the chroma carrier (negative Cb and 0 Cr). Both rectangles use a luma value of half of maximum (IRE halfway between 7.5 and 100). The chroma carrier amplitude in both rectangles is 20 IRE peak (40 IRE peak-to-peak), which is the same amplitude as the color burst.
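In terms of what goes into each active-video sample, the two rectangles differ only in the sign of the chroma term. A small Python sketch, following the description above, with phase measured relative to the burst:

import math

LUMA_MID  = (7.5 + 100.0) / 2      # 53.75 IRE, halfway between black and white
CHROMA_PK = 20.0                   # 20 IRE peak = 40 IRE p-p, same as the burst

def rect_sample(burst_phase, top_half):
    # top rectangle: 180 deg out of phase with the burst (+Cb, 0 Cr)
    # bottom rectangle: in phase with the burst (-Cb, 0 Cr)
    phase = burst_phase + (math.pi if top_half else 0.0)
    return LUMA_MID + CHROMA_PK * math.sin(phase)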

Attached is a screenshot of the TV screen taken with my cellphone camera, so you can see the end result of this experiment. Note that I had to downscale the image because it was over 5 MB (my cellphone camera takes pictures at 4096x3072, which has a huge file size even when JPEG compressed). I saved the downscaled image as a PNG to avoid adding additional compression artifacts.


UPDATE: I fixed an incorrect statement in this post. I thought I'd generated the test color image at twice the color burst amplitude, but looking back at the program I'd written, it actually generated it at the same amplitude as the color burst.
« Last Edit: October 17, 2021, 03:16:14 am by Ben321 »
 
The following users thanked this post: tv84

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #5 on: October 15, 2021, 07:40:18 am »
I've attached here (in a zip file) a copy of the raw waveform file that I used with my DG1022Z to generate the color NTSC test signal described in the previous post.
 

Offline gcewing

  • Regular Contributor
  • *
  • Posts: 197
  • Country: nz
Re: Generating test video signal with Rigol DG1022Z
« Reply #6 on: October 15, 2021, 07:56:13 am »
it used only one extra-long VSync pulse on each VSync line, and it skipped equalizing pulses entirely, instead using an HSync pulse at the beginning of each such line just like on a normal line
Equalizing pulses have to do with interlaced scanning. Since your signal is non-interlaced, you don't need them.

 

Offline tv84

  • Super Contributor
  • ***
  • Posts: 3221
  • Country: pt
Re: Generating test video signal with Rigol DG1022Z
« Reply #7 on: October 15, 2021, 08:14:39 am »
Good job, Ben.   :-+

Can you post some images of a normal line with CB taken in a scope?  Would love to see the HS/VS and CB areas.
 

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #8 on: October 15, 2021, 09:32:56 pm »
Good job, Ben.   :-+

Can you post some images of a normal line with CB taken in a scope?  Would love to see the HS/VS and CB areas.

This isn't captured with my scope; it's generated directly by the same program that generates the test signal. It's just a displayable copy of the raw waveform, as a PNG file, which lets you see everything, including the syncs and the chroma carrier. Basically, the -40 to 100 IRE range was scaled into 0 to 255 for display, and the resulting raw image was loaded into Photoshop and saved as a PNG so it can be viewed in normal software (and attached to this forum post).
 

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #9 on: October 15, 2021, 09:51:35 pm »
Here's another displayable version of the waveform; this one decodes the image back to RGB. One nice thing about decoding a test signal you generated yourself is that you already know the phase of the chroma carrier, and the H and V synchronization is already guaranteed, so the decoder doesn't need to detect sync pulses or lock an oscillator to the color burst. It simply uses the same chroma carrier phase for decoding that the encoder used for encoding, and the image is already aligned vertically and horizontally. All my decoder does is split the composite signal into luma and chroma, demodulate and filter the chroma, and convert the YCbCr color space to RGB. It also handles the 7.5 IRE black level and performs the other scaling operations needed to produce a technically correct decoded RGB output.

All parts of the frame outside the actual image region are black, because they come from signal levels at or below black (sync and blanking). You'll notice that some of the horizontal edges (at the top of the image region, and at the color change halfway down it) have a wave pattern at the chroma carrier frequency. This is a consequence of how the chroma/luma separation is done: luma is extracted from the composite signal by averaging the current line with the previous line, while chroma is extracted by subtracting the previous line from the current line. This works because the chroma subcarrier inverts its phase from one line to the next (the NTSC chroma carrier frequency was deliberately chosen as an odd multiple of half the line rate, which produces this line-to-line phase inversion and makes simple filtering like this possible). It also means, however, that sudden changes in signal level between lines generate artifacts like these.
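The separation step itself is just per-sample sums and differences of adjacent lines. A sketch of the idea in Python with NumPy (not my actual decoder):

import numpy as np

def comb_separate(frame):
    """frame: 2-D array of composite samples, one row per scanline."""
    prev = np.roll(frame, 1, axis=0)     # previous scanline (row 0 wraps; ignore it)
    # The subcarrier runs 227.5 cycles per line, so its phase flips 180 deg
    # line to line: the sum cancels chroma, the difference cancels luma.
    luma   = (frame + prev) / 2.0
    chroma = (frame - prev) / 2.0
    return luma, chroma

After that, multiplying the chroma rows by a sine and a cosine at the known carrier phase and low-pass filtering gives Cb and Cr, which then convert to RGB as usual.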

This rendered copy also shows that the color is more of a pale purple, like on the TV, rather than pure blue. There's also a slight difference between the direct rendering (slightly more purple) and what the actual TV shows in person (slightly bluer than the rendering, though not as blue as in the cellphone photo I posted), which is probably down to my TV's picture controls needing adjustment to get more accurate colors.
« Last Edit: October 17, 2021, 03:20:06 am by Ben321 »
 

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #10 on: October 17, 2021, 04:11:07 am »
Next I tested a more complicated image, varying the phase and amplitude of the chroma carrier (with luma still at mid gray). In the image portion of the signal (not in the color burst), the chroma carrier is phase shifted a little from line to line, from the top line of the image to the bottom, so that all phases (and therefore all hues) are included in the test. In the horizontal direction, the amplitude of the chroma carrier increases from pixel to pixel, starting at the left side of the image portion of the frame: it is 0 IRE peak-to-peak at the left edge and 80 IRE peak-to-peak (twice the color burst amplitude) at the right edge, going from gray to strong color saturation. The luma remains constant throughout the image, at mid gray (halfway between the black and white levels).
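The pattern boils down to two ramps. Here's a small Python sketch of the per-pixel chroma parameters; the active pixel count is my assumption here, purely for illustration:

import math

ACTIVE_LINES = 240
ACTIVE_PIX   = 640     # assumed number of active samples per line, for illustration

def chroma_params(line, pixel):
    """Phase (relative to burst) ramps down the frame; amplitude ramps across it."""
    phase  = 2 * math.pi * line / ACTIVE_LINES           # covers every hue top to bottom
    pp_ire = 80.0 * pixel / (ACTIVE_PIX - 1)             # 0 to 80 IRE peak-to-peak
    return phase, pp_ire                                 # luma stays at mid gray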

Below is a photo of the TV displaying this signal, taken with my cellphone camera. Notice that the hues should wrap around to purple at the bottom, but they don't; that part of the image falls outside the normal picture area, in what is known as the overscan region. Old CRT TVs would often scan too far (the electron beam would actually scan off the phosphor-coated part of the screen), so this portion of the picture wasn't visible without manual adjustment of the image width and height controls, if the TV even had such controls. A modern LCD TV can easily display the full picture (including the overscan region) down to the pixel, but it is usually configured at the factory not to do so when the source is composite video, mimicking the behavior of old TVs so that people used to them don't get a significantly different experience when upgrading (with HDMI or another HD source, it usually shows every pixel right to the edge of the image). I don't know what settings my LCD TV offers, but by default it appears to mimic the overscan behavior of old CRT TVs.

Again, this cellphone photo had to be downscaled in order to fit the file size requirement here.
« Last Edit: October 17, 2021, 04:16:37 am by Ben321 »
 

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #11 on: October 17, 2021, 04:18:28 am »
Here's a direct rendering of the signal, as generated by the same program that creates the waveform file for the Rigol func-gen.
 

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #12 on: October 17, 2021, 04:21:52 am »
Here's a copy (in a zip file) of the raw waveform file for the Rigol func-gen.
 

Offline Ben321 (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 894
Re: Generating test video signal with Rigol DG1022Z
« Reply #13 on: October 17, 2021, 04:34:10 am »
Note that the correct settings on the function generator for all of the above signals are: load impedance 50 Ω (not High-Z) and amplitude 1 Vpp. For the sample rate, monochrome test signals should use 12.6 MSa/s, and color test signals should use 12.587411 MSa/s.

While monochrome test signals will likely work at either sample rate, color test signals will likely only work at the second one, because it preserves the precise frequency of the chroma carrier. Depending on the TV (especially a modern TV with DSP processing, which can run complex algorithms to lock onto signals that are well out of spec), it may be able to measure the frequency of the chroma carrier from the color burst, not just its phase, but there's no guarantee how far off-frequency the carrier can be before the TV rejects the chroma and treats the image as monochrome. It will likely vary between brands, and even between models from the same company. So your best bet is to just use 12.587411 MSa/s on your Rigol if you are using it to generate a half-height progressive (240p-style) color composite NTSC signal.
 

