Author Topic: Learning FPGAs for Video Processing

Offline xxninjabunnyxx (Topic starter)

  • Newbie
  • Posts: 7
  • Country: us
Learning FPGAs for Video Processing
« on: January 15, 2019, 10:51:39 pm »
I want to start learning FPGAs for video processing. I bought a Zybo Z7 (Zynq-7000 ARM/FPGA SoC) development board and two VGA pmods. My idea is to have video come in through one VGA port, have the dev board alter the colors in real time, and then send it back out through the other VGA port. All the tutorials I've found show how to generate VGA output, but the image is hardcoded rather than driven by any kind of input. I have never done any FPGA development, but I have an extensive background in C, so I'm not afraid of low-level development. Are there any resources you would recommend for learning how to build a system like this?
 

Offline TimNJ

  • Super Contributor
  • ***
  • Posts: 1649
  • Country: us
Re: Learning FPGAs for Video Processing
« Reply #1 on: January 15, 2019, 11:36:39 pm »
The first question is: Do you understand the difference between hardware description languages (Verilog, VHDL) and sequential programming languages (C, C++, etc.)?

It's a whole different way of thinking. Your extensive C background will likely still be helpful since the workflow/tools are sort of similar, but in general, C is nothing like Verilog or VHDL.



 

Offline xxninjabunnyxx (Topic starter)

  • Newbie
  • Posts: 7
  • Country: us
Re: Learning FPGAs for Video Processing
« Reply #2 on: January 15, 2019, 11:45:36 pm »
Yes, I understand the difference. I have a Verilog book and the modules are very easy for me to grasp. I'm no stranger to writing things that execute in parallel because I have written lots of automated data processing applications in the past. I have also dealt with clock, latch, and data lines before.
 

Offline MavMitchell

  • Contributor
  • Posts: 29
  • Country: au
  • Not my real name
Re: Learning FPGAs for Video Processing
« Reply #3 on: January 16, 2019, 12:58:41 am »
Hi,

I presume the VGA pmod is DAC (output) only; which one are you using?
The VGA input is analog, so you would need a VGA ADC pmod to get the video into your project. Is there such a thing?

« Last Edit: January 16, 2019, 01:29:52 am by MavMitchell »
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7660
  • Country: ca
Re: Learning FPGAs for Video Processing
« Reply #4 on: January 16, 2019, 01:31:52 am »
Take a look at this functional project:
https://www.eevblog.com/forum/microcontrollers/fpga-video-format-conversion/
It covers VGA ADCs from TI, the FPGA, a scan-rate generator written in Verilog, and even a simple line-doubling algorithm; in your case you may just pass the video syncs from input to output untouched. Though that project is B&W on the input, it will give you all the info you need.
« Last Edit: January 16, 2019, 01:37:40 am by BrianHG »
 
The following users thanked this post: xxninjabunnyxx

Offline xxninjabunnyxx (Topic starter)

  • Newbie
  • Posts: 7
  • Country: us
Re: Learning FPGAs for Video Processing
« Reply #5 on: January 16, 2019, 01:56:10 am »
Take a look at this functional project:
https://www.eevblog.com/forum/microcontrollers/fpga-video-format-conversion/
It covers VGA ADCs from TI, the FPGA, a scan-rate generator written in Verilog, and even a simple line-doubling algorithm; in your case you may just pass the video syncs from input to output untouched. Though that project is B&W on the input, it will give you all the info you need.


The board has HDMI in and out. Would it be easier to start with that than building an ADC VGA pmod? I just want to get something up and running in a few weeks.
« Last Edit: January 16, 2019, 02:01:17 am by xxninjabunnyxx »
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3632
  • Country: us
Re: Learning FPGAs for Video Processing
« Reply #6 on: January 16, 2019, 02:10:40 am »
To get something working you should start with a single video format, like standard 31 kHz VGA. Then you need to make sure your source outputs that format: your project may need to emit an EDID, or you can hard-code the format at the source.
HDMI is a SERDES signal, so you need the dedicated SERDES units on the FPGA to interface with it.
For VGA you need an ADC and a DAC, which are only present on-chip in "mixed-signal FPGAs" like the Microsemi Fusion.
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7660
  • Country: ca
Re: Learning FPGAs for Video Processing
« Reply #7 on: January 16, 2019, 02:24:05 am »
Take a look at this functional project:
https://www.eevblog.com/forum/microcontrollers/fpga-video-format-conversion/
It covers VGA ADCs from TI, the FPGA, a scan-rate generator written in Verilog, and even a simple line-doubling algorithm; in your case you may just pass the video syncs from input to output untouched. Though that project is B&W on the input, it will give you all the info you need.


The board has HDMI in and out. Would it be easier to start with that than building an ADC VGA pmod? I just want to get something up and running in a few weeks.
The first thing I would do is just decode the HDMI in, re-clock all the data, and send it to the HDMI out.
Stick with simple PC video card outputs, as most consumer HDMI sources may demand HDCP encryption before they will send any picture data.
Next, extract your HS, VS, and active video flags plus the picture data.  Remember that with PC video, the HS and/or VS may be inverted depending on the video mode.
Then play with simple controls like brightness and contrast, as these are nothing more than multiplies and adds on the RGB data.
Dealing with component HDMI (YUV) source video from cameras and video players, where there is also embedded HDMI audio, will be more work if you want to go that route. And if your HDMI video source demands HDCP compliance, like DVD players, Blu-ray players, or Netflix-compliant set-top boxes, you are completely out of luck unless you hack those devices.

Next, think about a CSC (Color Space Converter), which combines brightness and contrast with color saturation and tint/hue.

Using 2 or 3 lines of video cache memory with a 2D convolution matrix gives you picture sharpening and edge filters; more sophisticated versions allow median noise filters.

Random number generators allow for grain or noise generation.
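
To make the "multiply and add" point above concrete, here is a minimal Verilog sketch of a brightness/contrast stage for one 8-bit channel. The module and port names and the fixed-point scaling (contrast as an unsigned 4.4 value, so 16 = gain of 1.0) are my own assumptions, not taken from any particular project; the same block would be instantiated three times for R, G and B, and the syncs delayed by the same two clocks.

Code:
// Hypothetical example: per-channel brightness/contrast adjust.
// contrast is unsigned 4.4 fixed point (8'd16 = gain of 1.0);
// brightness is a signed offset applied after the gain.
// Two-cycle latency: remember to delay HS/VS/DE by the same amount.
module bc_adjust (
    input  wire              clk,
    input  wire [7:0]        pix_in,     // one colour channel
    input  wire [7:0]        contrast,   // unsigned 4.4 gain
    input  wire signed [8:0] brightness, // signed offset, -256..255
    output reg  [7:0]        pix_out
);
    reg  [15:0]        scaled;
    wire signed [13:0] summed = $signed({2'b00, scaled[15:4]}) + brightness;

    always @(posedge clk) begin
        scaled  <= pix_in * contrast;                // stage 1: apply gain
        pix_out <= (summed < 0)   ? 8'd0   :         // stage 2: offset + clamp
                   (summed > 255) ? 8'd255 :
                                    summed[7:0];
    end
endmodule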
« Last Edit: January 16, 2019, 02:27:41 am by BrianHG »
 
The following users thanked this post: xxninjabunnyxx

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Learning FPGAs for Video Processing
« Reply #8 on: January 16, 2019, 02:31:55 am »
Hi xxninjabunnyxx

I've done most of this stuff, and can offer advice. There are two roads:

1. The Easy Road

Leverage as much FPGA and board vendor IP as you can, and put minimal glue logic between the blocks. This is relatively quick and easy.

Much like inviting friends over for a takeaway curry. You will get good results with minimal effort, and maybe some expense (in this case, overly complex IP blocks, and long build times).

2. The Hard Road

You have an FPGA, so use it at its lowest levels. Read the DVI-D and HDMI specs, build a few test projects, and work your way up to a video pipeline. See the joys of the TMDS coding scheme, learn how EDID works, struggle with CRC checksums, learn how BCH ECCs work. Experience the pain of trying to work out how to sync the SERDES sampling window to the data eye. Learn more about YCC and RGB, full range and studio levels, than you ever wanted to know. Know what 4:4:4, 4:2:2 and 4:1:1 pixel formats are. Spend hours trying to understand the clocking infrastructure within the PLL and FPGA I/O blocks, and debugging interface bit-ordering problems.  Struggle with copy-protected sources, learn all about HDMI data islands.

This is like making dinner for friends from your own garden, with your own chickens. It is a lot of work just to get a chicken curry.

I'm a low-level dude who did this stuff for a hobby, so for me it isn't the application, but understanding the technology. So I usually walk the hard road.

Some of my code in action.



« Last Edit: January 16, 2019, 02:34:16 am by hamster_nz »
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: xxninjabunnyxx

Offline xxninjabunnyxx (Topic starter)

  • Newbie
  • Posts: 7
  • Country: us
Re: Learning FPGAs for Video Processing
« Reply #9 on: January 16, 2019, 02:52:56 am »
Take a look at this functional project:
https://www.eevblog.com/forum/microcontrollers/fpga-video-format-conversion/
It covers VGA ADCs from TI, the FPGA, a scan-rate generator written in Verilog, and even a simple line-doubling algorithm; in your case you may just pass the video syncs from input to output untouched. Though that project is B&W on the input, it will give you all the info you need.


The board has HDMI in and out. Would it be easier to start with that than building an ADC VGA pmod? I just want to get something up and running in a few weeks.
The first thing I would do is just decode the HDMI in, re-clock all the data, and send it to the HDMI out.
Stick with simple PC video card outputs, as most consumer HDMI sources may demand HDCP encryption before they will send any picture data.
Next, extract your HS, VS, and active video flags plus the picture data.  Remember that with PC video, the HS and/or VS may be inverted depending on the video mode.
Then play with simple controls like brightness and contrast, as these are nothing more than multiplies and adds on the RGB data.
Dealing with component HDMI (YUV) source video from cameras and video players, where there is also embedded HDMI audio, will be more work if you want to go that route. And if your HDMI video source demands HDCP compliance, like DVD players, Blu-ray players, or Netflix-compliant set-top boxes, you are completely out of luck unless you hack those devices.

Next, think about a CSC (Color Space Converter), which combines brightness and contrast with color saturation and tint/hue.

Using 2 or 3 lines of video cache memory with a 2D convolution matrix gives you picture sharpening and edge filters; more sophisticated versions allow median noise filters.

Random number generators allow for grain or noise generation.


I understand decoding and then re-encoding the HDMI signal, but what I can't seem to wrap my head around is what to do with the signal after it has been decoded and is waiting to be re-encoded. How do I make sure the right line is being sent to the encoder at the right time?
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Learning FPGAs for Video Processing
« Reply #10 on: January 16, 2019, 03:09:40 am »
I understand decoding and then re-encoding the HDMI signal, but what I can't seem to wrap my head around is what to do with the signal after it has been decoded and is waiting to be re-encoded. How do I make sure the right line is being sent to the encoder at the right time?

In general, you don't. You just stream everything through the pipeline using the pixel clock. You ensure that the data and control paths have the same latency, and then you have nothing to do.
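
As a rough illustration of "matching the latency", here is a Verilog sketch (names and the parameter are my own, assuming LATENCY >= 2) where the sync/DE flags are shifted through the same number of registers as the pixel data, so everything stays aligned at the output.

Code:
// Hypothetical sketch: a processing stage with LATENCY cycles of delay,
// with HS/VS/DE shifted through matching registers so the control path
// lines up with the data path at the output. Assumes LATENCY >= 2.
module matched_latency_stage #(
    parameter LATENCY = 2
) (
    input  wire        clk,
    input  wire        hs_in, vs_in, de_in,
    input  wire [23:0] rgb_in,
    output wire        hs_out, vs_out, de_out,
    output wire [23:0] rgb_out
);
    // Data path: LATENCY register stages standing in for the real maths
    // (e.g. a multiply/add colour adjustment) that takes LATENCY clocks.
    reg [23:0] data_pipe [0:LATENCY-1];
    integer i;
    always @(posedge clk) begin
        data_pipe[0] <= rgb_in;            // stage 1 would really do the work
        for (i = 1; i < LATENCY; i = i + 1)
            data_pipe[i] <= data_pipe[i-1];
    end
    assign rgb_out = data_pipe[LATENCY-1];

    // Control path: delay the syncs by exactly the same number of cycles.
    reg [LATENCY-1:0] hs_d, vs_d, de_d;
    always @(posedge clk) begin
        hs_d <= {hs_d[LATENCY-2:0], hs_in};
        vs_d <= {vs_d[LATENCY-2:0], vs_in};
        de_d <= {de_d[LATENCY-2:0], de_in};
    end
    assign hs_out = hs_d[LATENCY-1];
    assign vs_out = vs_d[LATENCY-1];
    assign de_out = de_d[LATENCY-1];
endmodule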

Advanced effects that alter geometry (e.g. zooming) may require a frame buffer in SDRAM, but then you count cycles from HSYNC and VSYNC, and use that to trigger when you replay the data from RAM.

I know it's VHDL, but see https://github.com/hamsternz/Artix-7-HDMI-processing/blob/master/src/audio_meters.vhd for how I overlaid audio level meters on the video stream.

It would pay to build a block-level design for your pipeline, and check that each stage only needs a handful of video lines to be held. These will end up in block RAM. Also count multiplications, as hardware multipliers may be a constraint.

Spending a few days on your design before you start coding will help HEAPS!
« Last Edit: January 16, 2019, 03:22:06 am by hamster_nz »
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline LapTop006

  • Supporter
  • ****
  • Posts: 467
  • Country: au
Re: Learning FPGAs for Video Processing
« Reply #11 on: January 18, 2019, 04:50:56 am »
If you want to start with something more pre-baked, you can look at the projects around the NeTV2 board; they can get you started on dealing with HDMI video.
 

Online Berni

  • Super Contributor
  • ***
  • Posts: 4922
  • Country: si
Re: Learning FPGAs for Video Processing
« Reply #12 on: January 18, 2019, 06:30:35 am »
First step is to forget everything you know about C. The if/switch/for etc. statements might look familiar, but they work differently in HDL. It tends to be more useful to imagine what you are trying to do as a collection of D flip-flops with combinational logic between them doing the actual "computation".

In terms of video interfaces, it's best to stick to VGA. All other interfaces are more complicated, but perfectly possible to do on an FPGA. To generate VGA, all you need is an R-2R resistor DAC hanging off some FPGA pins to generate the analog red, green, and blue signals, plus some resistor level shifting on the Vsync and Hsync lines to tell the monitor where to draw. You will find lots of examples of how to build a VGA timing generator in HDL. Once you have a timing generator making your Hsync and Vsync, you just bring the line and column counters out of the generator and make a module that takes the X/Y position as input and outputs the R, G, B values for that pixel. A common "hello world" for this is generating a "color barf" pattern by just feeding the X/Y counters into the RGB values.
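
For illustration, here is a minimal Verilog sketch of such a timing generator for 640x480@60, assuming a ~25 MHz pixel clock; the module and signal names are my own, and the constants follow the standard 800x525 total timing with negative sync polarity.

Code:
// Hypothetical sketch: 640x480@60 VGA timing generator.
module vga_timing (
    input  wire       pix_clk,
    output wire       hsync,
    output wire       vsync,
    output wire       active,   // high during the 640x480 visible region
    output reg  [9:0] x,        // horizontal position, 0..799
    output reg  [9:0] y         // vertical position, 0..524
);
    localparam H_VISIBLE = 640, H_FP = 16, H_SYNC = 96, H_TOTAL = 800;
    localparam V_VISIBLE = 480, V_FP = 10, V_SYNC = 2,  V_TOTAL = 525;

    initial begin x = 0; y = 0; end   // power-up value; add a reset if needed

    always @(posedge pix_clk) begin
        if (x == H_TOTAL-1) begin
            x <= 0;
            y <= (y == V_TOTAL-1) ? 10'd0 : y + 1'b1;
        end else begin
            x <= x + 1'b1;
        end
    end

    // Negative-polarity sync pulses and the active-video window.
    assign hsync  = ~((x >= H_VISIBLE+H_FP) && (x < H_VISIBLE+H_FP+H_SYNC));
    assign vsync  = ~((y >= V_VISIBLE+V_FP) && (y < V_VISIBLE+V_FP+V_SYNC));
    assign active = (x < H_VISIBLE) && (y < V_VISIBLE);
endmodule

Feeding the x and y counters into the R, G, B outputs (for example R = x[7:0], G = y[7:0], B = x[7:0] ^ y[7:0]) gives the kind of test pattern described above.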

As for video input, that's a bit tougher. You can't feed VGA directly into an FPGA because it's analog. The best way to go about it is to use a chip that converts your format of choice to a parallel RGB bus. This is a very common bus used to move video around, and it's basically the same as VGA except that the R, G, B values are carried on a digital bus, for example 8-bit red, 8-bit green, 8-bit blue for 24-bit color; each clock cycle puts the values for one pixel on those lines. You can get chips that convert Composite, VGA, DVI, HDMI, etc. into an RGB bus.
« Last Edit: January 18, 2019, 06:32:46 am by Berni »
 

Offline MavMitchell

  • Contributor
  • Posts: 29
  • Country: au
  • Not my real name
Re: Learning FPGAs for Video Processing
« Reply #13 on: January 18, 2019, 08:16:15 am »
Another thought might be to use a PC-grade VGA-to-HDMI converter (~$10), decode the digital signal, and output via VGA.
The benefit is a single known HDMI format.


https://www.ebay.com/itm/VGA-Male-To-HDMI-Output-1080P-HD-Audio-TV-AV-HDTV-Video-Cable-Converter-Adapter/142543986595?hash=item2130489fa3:m:mY9GzpcsC4ln4doOzYSKNgg&var=441533081724
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7660
  • Country: ca
Re: Learning FPGAs for Video Processing
« Reply #14 on: January 18, 2019, 09:23:33 am »
According to xxninjabunnyxx's Reply #9, I think what he is trying to say is that he doesn't know what a video signal is or how it works.  Whether it's VGA, DVI, or HDMI, video has a vertical sync which resets the vertical position to 0 and a horizontal sync which resets the horizontal position to 0.  On every clock of the incoming data, the pixels are fed from left to right, top to bottom.  Remember, DVI is exactly a VGA signal in digital form.  HDMI is usually sent as YUV instead of RGB, but the picture pixels are broadcast in exactly the same left-to-right, top-to-bottom order.

In HDMI and DVI, there is also an 'Active Video' flag.  Like the vertical sync and horizontal sync flags, this flag tells you on which pixel clocks a valid picture pixel is present.  In analog VGA, you need to scan for these borders in the source video, as different video standards have different active video regions.  Like I said, with DVI and HDMI, the 'Active Video' (or 'Video Enable') flag creates this rectangular region for you, and it's embedded in the DVI/HDMI standard.
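
As a sketch of what those flags give you in practice, here is a hypothetical Verilog module that recovers X/Y pixel coordinates from a decoded DVI/HDMI stream using only VS and DE (active video); the names are my own, and the flags are assumed active-high after any polarity fix-up. HS is not strictly needed here, since DE already delimits each active line.

Code:
// Hypothetical sketch: X/Y position recovery from VS and DE.
module video_position (
    input  wire        pix_clk,
    input  wire        vs,
    input  wire        de,
    output reg  [11:0] x,       // column of the current incoming pixel
    output reg  [11:0] y        // line within the current frame
);
    reg vs_d, de_d;
    initial begin x = 0; y = 0; vs_d = 0; de_d = 0; end

    always @(posedge pix_clk) begin
        vs_d <= vs;
        de_d <= de;

        if (vs & ~vs_d)          // rising edge of VS: new frame
            y <= 0;
        else if (de_d & ~de)     // falling edge of DE: one active line finished
            y <= y + 1'b1;

        if (de)
            x <= x + 1'b1;       // count across the active line
        else
            x <= 0;              // park at 0 during blanking
    end
endmodule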
 

Offline xxninjabunnyxx (Topic starter)

  • Newbie
  • Posts: 7
  • Country: us
Re: Learning FPGAs for Video Processing
« Reply #15 on: January 19, 2019, 11:47:14 pm »
According to xxninjabunnyxx's Reply #9, I think what he is trying to say is that he doesn't know what a video signal is or how it works.  Whether it's VGA, DVI, or HDMI, video has a vertical sync which resets the vertical position to 0 and a horizontal sync which resets the horizontal position to 0.  On every clock of the incoming data, the pixels are fed from left to right, top to bottom.  Remember, DVI is exactly a VGA signal in digital form.  HDMI is usually sent as YUV instead of RGB, but the picture pixels are broadcast in exactly the same left-to-right, top-to-bottom order.

In HDMI and DVI, there is also an 'Active Video' flag.  Like the vertical sync and horizontal sync flags, this flag tells you on which pixel clocks a valid picture pixel is present.  In analog VGA, you need to scan for these borders in the source video, as different video standards have different active video regions.  Like I said, with DVI and HDMI, the 'Active Video' (or 'Video Enable') flag creates this rectangular region for you, and it's embedded in the DVI/HDMI standard.


I understand how video works. What I don't know how to do is make a buffer that holds a few lines of video and sends them out at the correct time. I'm thinking that the HDMI encoder would send pulses over to the BRAM module when it is ready for a new line. Like I said before, I'm very new to FPGAs and I don't know if this is the right method to take.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Learning FPGAs for Video Processing
« Reply #16 on: January 20, 2019, 12:38:50 am »
According to xxninjabunnyxx's Reply #9, I think what he is trying to say is that he doesn't know what a video signal is or how it works.  Whether it's VGA, DVI, or HDMI, video has a vertical sync which resets the vertical position to 0 and a horizontal sync which resets the horizontal position to 0.  On every clock of the incoming data, the pixels are fed from left to right, top to bottom.  Remember, DVI is exactly a VGA signal in digital form.  HDMI is usually sent as YUV instead of RGB, but the picture pixels are broadcast in exactly the same left-to-right, top-to-bottom order.

In HDMI and DVI, there is also an 'Active Video' flag.  Like the vertical sync and horizontal sync flags, this flag tells you on which pixel clocks a valid picture pixel is present.  In analog VGA, you need to scan for these borders in the source video, as different video standards have different active video regions.  Like I said, with DVI and HDMI, the 'Active Video' (or 'Video Enable') flag creates this rectangular region for you, and it's embedded in the DVI/HDMI standard.


I understand how video works. What I don't know how to do is make a buffer that holds a few lines of video and sends them out at the correct time. I'm thinking that the HDMI encoder would send pulses over to the BRAM module when it is ready for a new line. Like I said before, I'm very new to FPGAs and I don't know if this is the right method to take.
You make a delay line.

You chain a few block RAMs to make a 27-bit wide (24-bit RGB + hsync + vsync + video_active flags), dual-ported memory, and you write to address 'i' while reading from address 'i + horizontal_count'.

Makes it look like a big long shift register....
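
A minimal Verilog sketch of that delay line, reduced to a fixed delay of exactly one video line (hamster_nz's version reads at 'i + horizontal_count' to make the delay adjustable); the module name, parameter and port names are my own assumptions.

Code:
// Hypothetical sketch: one-line video delay in an inferred block RAM.
module line_delay #(
    parameter LINE_LEN = 800          // total pixels per line, incl. blanking
) (
    input  wire        pix_clk,
    input  wire [26:0] d_in,          // {hsync, vsync, active, rgb[23:0]}
    output reg  [26:0] d_out          // same data, exactly one line later
);
    reg [26:0] mem [0:LINE_LEN-1];              // tools infer a block RAM here
    reg [$clog2(LINE_LEN)-1:0] addr = 0;

    always @(posedge pix_clk) begin
        d_out     <= mem[addr];                 // read the pixel stored one line ago
        mem[addr] <= d_in;                      // then overwrite it with the newest pixel
        addr      <= (addr == LINE_LEN-1) ? 0 : addr + 1'b1;
    end
endmodule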

Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: xxninjabunnyxx

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7660
  • Country: ca
Re: Learning FPGAs for Video Processing
« Reply #17 on: January 20, 2019, 01:37:02 am »
Use a dual-port, dual-clock FIFO in the FPGA.  The output of the FIFO should run at the pixel clock.  On the internal system clock, use the 'almost full' flag to decide when to transfer data.  This method can have syncing issues, as you need to pass the horizontal alignment into the buffer and fill it with exact pixel counts per line.

My working method is a dual-port RAM sized as a multiple of 2048 x (X lines of cache) x 24 bits.  The output side runs at the pixel clock with my horizontal raster-line generator, and its address counter is reset at the beginning of active video; the output of that dual-port RAM feeds my HDMI out.  For the MSBs of the dual-port RAM address I place a 2-bit counter which increments on each HS out during the active video region and resets on VS.  This means that if the output mode is 1920 wide, I waste 2048-1920 = 128 pixels of that cache RAM per line; in lower resolution modes, less of the buffer is used.  On the system clock side, all I monitor is an asynchronous VS from the output and that 2-bit counter, which tells me where my vertical position is within my 4-line output buffer.  In other words, right after a VS out, before new active video, I begin to fill my 4-line video-out dual-port cache RAM, and as its 2-bit counter increases I know I have freed new lines to fill.  So I have a video raster generator on the output clock, and only three asynchronous signals going back to the system core clock: VS out and the 2-bit vertical buffer position.  (This makes building any core DDR video system RAM, or scan-rate converters, a breeze, since you page a full line of DDR RAM in at a time in the fastest possible burst, leaving blank DDR cycles for other uses.)

Yes, there are more advanced methods, but given the scope of your project time, choose this extra-simple method to fill a clean line-by-line video out, and do the exact reverse for a video-in buffer.

The rules change and get simpler for real-time processing where the output image format matches the input image format and you enhance video on the fly.  But going that way, you will not be able to cache an image in DDR memory and play back a full-screen buffer, and you will need to copy the input syncs to the output with an appropriate delay of clocks or lines.
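
For reference, the heart of that scheme is just a simple dual-port, dual-clock RAM; here is a minimal Verilog sketch sized as 4 lines x 2048 pixels x 24 bits, matching the "multiple of 2048" layout described above. The names are my own, and the raster generator, 2-bit line counter and clock-domain handshaking around it are not shown.

Code:
// Hypothetical sketch: dual-clock line cache for the video output.
module video_out_cache (
    input  wire        sys_clk,            // write side: system clock
    input  wire        wr_en,
    input  wire [12:0] wr_addr,            // {line[1:0], pixel[10:0]}
    input  wire [23:0] wr_data,
    input  wire        pix_clk,            // read side: pixel clock
    input  wire [12:0] rd_addr,
    output reg  [23:0] rd_data
);
    reg [23:0] mem [0:8191];               // 4 lines x 2048 pixels; infers block RAM

    always @(posedge sys_clk)
        if (wr_en) mem[wr_addr] <= wr_data;

    always @(posedge pix_clk)
        rd_data <= mem[rd_addr];
endmodule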

« Last Edit: January 20, 2019, 01:42:17 am by BrianHG »
 
The following users thanked this post: xxninjabunnyxx

Offline xxninjabunnyxx (Topic starter)

  • Newbie
  • Posts: 7
  • Country: us
Re: Learning FPGAs for Video Processing
« Reply #18 on: January 20, 2019, 01:41:58 am »

You make a delay line.

You chain a few block RAMs to make a 27-bit wide (24-bit RGB + hsync + vsync + video_active flags), dual-ported memory, and you write to address 'i' while reading from address 'i + horizontal_count'.

Makes it look like a big long shift register....


So let me see if I got this right. Create a BRAM that has full, empty, almost-full, and almost-empty flags. Decode the HDMI into a 27-bit-wide signal and write it to the BRAM at an address based on the horizontal count. Then create a module that reads from the BRAM when the empty flag is not set, edits the data, and writes it back to the BRAM at the same address. Then create an HDMI encoder that reads the BRAM when the almost-full flag is set, starting at the beginning of the BRAM and incrementing a counter. So the address counter would work like a program counter on a virtual machine (Java VM or Python VM, not VirtualBox VM).
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7660
  • Country: ca
Re: Learning FPGAs for Video Processing
« Reply #19 on: January 20, 2019, 01:46:07 am »

You make a delay line.

You chain a few block RAMs to make a 27-bit wide (24-bit RGB + hsync + vsync + video_active flags), dual-ported memory, and you write to address 'i' while reading from address 'i + horizontal_count'.

Makes it look like a big long shift register....


So let me see if I got this right. Create a BRAM that has full, empty, almost-full, and almost-empty flags. Decode the HDMI into a 27-bit-wide signal and write it to the BRAM at an address based on the horizontal count. Then create a module that reads from the BRAM when the empty flag is not set, edits the data, and writes it back to the BRAM at the same address. Then create an HDMI encoder that reads the BRAM when the almost-full flag is set, starting at the beginning of the BRAM and incrementing a counter. So the address counter would work like a program counter on a virtual machine (Java VM or Python VM, not VirtualBox VM).
My above method (https://www.eevblog.com/forum/beginners/learning-fpgas-for-video-processing/msg2134117/#msg2134117) gives you an X/Y position counter.  However, the Y position is the bottom 2 bits only; you need to keep track of all the upper bits as the picture is fed out. (Note that you can increase the line cache size if you like: 3 bits for 8 lines of cache, etc.  The same goes if you reverse the process for video input.)
« Last Edit: January 20, 2019, 01:47:47 am by BrianHG »
 
The following users thanked this post: xxninjabunnyxx

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7660
  • Country: ca
Re: Learning FPGAs for Video Processing
« Reply #20 on: January 20, 2019, 02:03:57 am »
Remember, for real-time video transforms like color and simple enhancement you don't need to cache video: just send the video input through your multipliers and on to the video output, copying the clock, and pixel-delay the HS, VS, and active video flags to match the multiplier delay so the output picture stays centered.  Line-delay BRAMs may be used for 2D FIR / convolution picture enhancement filters.  (Please do not go into interlaced video; it will take you half a year to get that just right...)

A lower-res OSD may be placed completely in FPGA cache memory (dual-port BRAM) and genlocked and superimposed onto the video output for on-screen controls and text.  Think about a 128-character font with on-screen color text, a programmable palette, and transparency levels (R+G+B+alpha blend) in the 8- to 16-color palette controls.
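
As a sketch of the alpha-blend step of such an OSD, here is a hypothetical Verilog module that mixes one 8-bit video channel with one 8-bit OSD palette channel using a 5-bit alpha (0 = video only, 16 = OSD only). The character ROM, palette lookup and genlock counters are not shown, and all names are my own.

Code:
// Hypothetical sketch: per-channel OSD alpha blend.
module osd_blend_channel (
    input  wire       pix_clk,
    input  wire [7:0] video_in,
    input  wire [7:0] osd_in,
    input  wire [4:0] alpha,     // 0..16, from the palette entry
    output reg  [7:0] blend_out
);
    // out = (osd*alpha + video*(16-alpha)) / 16
    wire [11:0] mix = osd_in * alpha + video_in * (5'd16 - alpha);

    always @(posedge pix_clk)
        blend_out <= mix[11:4];  // >>4 implements the divide-by-16
endmodule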
« Last Edit: January 20, 2019, 02:08:50 am by BrianHG »
 

Offline xxninjabunnyxx (Topic starter)

  • Newbie
  • Posts: 7
  • Country: us
Re: Learning FPGAs for Video Processing
« Reply #21 on: January 20, 2019, 03:00:27 am »
Remember, for real-time video transforms like color and simple enhancement you don't need to cache video: just send the video input through your multipliers and on to the video output, copying the clock, and pixel-delay the HS, VS, and active video flags to match the multiplier delay so the output picture stays centered.  Line-delay BRAMs may be used for 2D FIR / convolution picture enhancement filters.  (Please do not go into interlaced video; it will take you half a year to get that just right...)

A lower-res OSD may be placed completely in FPGA cache memory (dual-port BRAM) and genlocked and superimposed onto the video output for on-screen controls and text.  Think about a 128-character font with on-screen color text, a programmable palette, and transparency levels (R+G+B+alpha blend) in the 8- to 16-color palette controls.


Creating an OSD is the next step I want to take. I would like to create a line doubler as well.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: Learning FPGAs for Video Processing
« Reply #22 on: January 20, 2019, 04:11:06 am »
This sounds like a very ambitious project to start out with; I would suggest starting simple and then working your way up toward more complex stuff. The fact that you already know C may actually be more of a hindrance than a help. HDL looks like computer code superficially, but it's a completely different concept. You're not writing a program; things are not executing in parallel, they're not executing at all. The code describes digital hardware, so you have to look at this not as a software design/coding problem but as hardware design. Figure out how you would go about your project using logic ICs, because that is effectively what the FPGA gives you: a gigantic breadboard and an unlimited supply of any digital logic IC you could imagine. FPGA development is hardware design disguised as coding, and most beginners stumble until this sinks in.
 
The following users thanked this post: Berni, spec, BrianHG

