Digital FPV video for drone racing
radar_macgyver:
You'll find the thing that kills you is latency (as mentioned in @Gary350z's video). Flying any quad FPV (let alone racing) is hard, and even the smallest latency will be quickly noticed by the pilot. One of my buddies was into racing and asked if this was possible; I spent a while looking into it, but latency turned out to be the hardest problem to solve. My plan was to use an RPi Zero and send the video stream over wifi. You might have better luck if the video compression is done in hardware. Maybe a Zynq, with the PL handling the video encoding and the PS streaming the data over wifi.
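A minimal sketch of the "stream it over wifi" half of that plan, sending already-compressed frames over UDP so a lost packet never stalls the stream; capture_encoded_frame(), the address and the port are hypothetical placeholders for whatever the hardware encoder actually hands over:

--- Code: ---
import socket
import struct

# Hypothetical hook: returns one already-compressed frame as bytes
# (stand-in for whatever the PL/hardware encoder hands to the PS or the Pi).
def capture_encoded_frame() -> bytes:
    raise NotImplementedError

MTU_PAYLOAD = 1400                        # stay under the wifi MTU per datagram
GROUND_STATION = ("192.168.4.1", 5600)    # made-up address/port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

frame_id = 0
while True:
    frame = capture_encoded_frame()
    # Split the frame into MTU-sized chunks, each tagged (frame_id, index, count)
    # so the ground station can reassemble complete frames and drop late ones.
    count = (len(frame) + MTU_PAYLOAD - 1) // MTU_PAYLOAD
    for i in range(count):
        chunk = frame[i * MTU_PAYLOAD:(i + 1) * MTU_PAYLOAD]
        header = struct.pack("!IHH", frame_id, i, count)
        sock.sendto(header + chunk, GROUND_STATION)
    frame_id = (frame_id + 1) & 0xFFFFFFFF
--- End code ---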
Marco:
There is nothing inherent in digital video which causes latency, not even with motion compensation, unless you consider the number of scanlines needed to fill a row of blocks to be significant latency.
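To put a number on that scanline buffering, assuming 480p30 and 16-line block rows (illustrative figures, not taken from any particular codec):

--- Code: ---
# Time to scan in one 16-line block row at 480p30 (illustrative numbers).
lines_per_frame = 480
fps = 30
block_row_lines = 16

frame_time_ms = 1000.0 / fps                                 # ~33.3 ms per frame
row_latency_ms = frame_time_ms * block_row_lines / lines_per_frame
print(f"~{row_latency_ms:.1f} ms to buffer one block row")   # ~1.1 ms
--- End code ---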
OwO:
I've been looking into the exact same kind of project. In my case the focus is on range and cost; all existing digital FPV solutions are about HD video, have shitty range, and cost >$500.

I did some experiments with JPEG a long time ago and found that at 480p you can get the size down to about 20 KB per frame and still have a usable image, which is 4.8 Mb/s at 30 fps. However, video frames are highly redundant, and with just a little inter-frame prediction you should be able to do much better; H.264, for example, can give you 1080p video at 5 Mb/s.

For the RF data link I would recommend not going above QPSK because efficiency (bits you can get across per unit energy) rapidly falls as you stuff more bits into a given bandwidth. For a 6 MHz channel this means a 12 Mb/s uncoded rate, and you will ideally want FEC at rate 1/2, which leaves a maximum actual bit rate of 6 Mb/s. You also need to factor in some amount of protocol overhead (framing etc.). I would not consider wifi because of inherent protocol flaws and bad off-the-shelf implementations (bad listen-before-talk thresholds, etc.) that will give you unexplained high latency.

Please do keep us updated, because I'm sure your findings will be helpful for my project as well ;)
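A quick back-of-envelope check of that budget; the figures are the ones from the post above, while the "roughly one symbol per Hz of bandwidth" and the 10% framing overhead are assumptions:

--- Code: ---
# Back-of-envelope link budget using the figures from the post above.
frame_bytes = 20_000                 # ~20 KB per 480p JPEG frame
fps = 30
jpeg_bitrate = frame_bytes * 8 * fps          # 4.8 Mb/s

channel_bw_hz = 6e6                  # 6 MHz channel
bits_per_symbol = 2                  # QPSK
symbol_rate = channel_bw_hz          # assume roughly 1 symbol/s per Hz
uncoded_rate = symbol_rate * bits_per_symbol  # 12 Mb/s
fec_rate = 0.5                       # rate-1/2 FEC
framing_overhead = 0.10              # assumed protocol/framing overhead

usable_rate = uncoded_rate * fec_rate * (1 - framing_overhead)
print(f"Motion JPEG stream : {jpeg_bitrate / 1e6:.1f} Mb/s")   # 4.8 Mb/s
print(f"Usable link rate   : {usable_rate / 1e6:.1f} Mb/s")    # ~5.4 Mb/s
--- End code ---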
OwO:
There was a VHDL H.264 encoder, but I haven't gotten around to trying it yet: http://hardh264.sourceforge.net/H264-encoder-manual.html For a simpler implementation, I would imagine doing the JPEG stuff on a soft core might be fast enough, and you can accelerate major bottleneck components like motion estimation in hardware. You would send a full JPEG frame every e.g. 5 frames, followed by partial frames (motion vectors & differences).

The other thing I would add is that a 6 MHz channel is a lot to ask for in terms of phase coherence; at 6 MHz you can only tolerate a delay spread of less than about 160 ns, which means any significant reflections can corrupt your signal. You can probably predict whether this will be a problem by using an analog video transmitter/receiver and looking closely at the received image; if you see any ghosting or shadows, you might run into issues with inter-symbol interference.
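A crude sketch of that keyframe-plus-deltas idea. There is no real motion estimation here, just conditional replenishment of 16x16 blocks that changed by more than a made-up threshold; a proper encoder would send motion vectors and residuals instead:

--- Code: ---
import numpy as np

BLOCK = 16                 # block size in pixels
KEYFRAME_INTERVAL = 5      # full frame every 5 frames, as suggested above
THRESHOLD = 8.0            # mean abs difference that marks a block "changed" (assumed)

def encode_sequence(frames):
    """frames: iterable of HxW uint8 grayscale arrays, H and W multiples of 16.

    Yields ("key", frame) for full frames (these would be JPEG-compressed)
    and ("delta", [(y, x, block), ...]) for the changed blocks in between.
    """
    reference = None
    for n, frame in enumerate(frames):
        frame = frame.astype(np.int16)
        if reference is None or n % KEYFRAME_INTERVAL == 0:
            reference = frame.copy()
            yield ("key", frame.astype(np.uint8))
            continue
        changed = []
        h, w = frame.shape
        for by in range(0, h, BLOCK):
            for bx in range(0, w, BLOCK):
                cur = frame[by:by + BLOCK, bx:bx + BLOCK]
                ref = reference[by:by + BLOCK, bx:bx + BLOCK]
                if np.abs(cur - ref).mean() > THRESHOLD:
                    changed.append((by, bx, cur.astype(np.uint8)))
                    reference[by:by + BLOCK, bx:bx + BLOCK] = cur
        yield ("delta", changed)
--- End code ---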
Marco:
--- Quote from: OwO on January 19, 2019, 07:32:33 am ---
For the RF data link I would recommend not going above QPSK because efficiency (bits you can get across per unit energy) rapidly falls as you stuff more bits into a given bandwidth.
--- End quote ---
Don't these 5.8 GHz analogue video transmitters simply take the input (nominally NTSC/PAL) and FM modulate the carrier with it? Trying to get cutesy with carrier coding for the input of an FM modulator doesn't seem of much use to me ... just treat it as a 1D channel, nice and simple. What I would personally try is to adapt something like the old V.34 channel equalization (except 1D instead of 2D): every kB or so, send a sync+CAZAC pulse to get an accurate equalization filter via FFT, and the rest of the time use adaptive LMS equalization. Use only 0 values for the sync, and say 16-256 for the data.
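A toy version of the receive-side equalizer described above, 1D and real-valued. The FFT/CAZAC initial estimate is left out and only the training plus decision-directed LMS tracking is shown; the tap count, step size and the 16-level data alphabet are illustrative guesses, not tuned values:

--- Code: ---
import numpy as np

N_TAPS = 11                            # equalizer length (illustrative)
MU = 0.1                               # normalized-LMS step size (illustrative)
LEVELS = np.arange(16, 257, 16.0)      # data levels 16..256; 0 is reserved for sync

def slice_symbol(y):
    """Decision device: snap an equalized sample to the nearest data level."""
    return LEVELS[np.argmin(np.abs(LEVELS - y))]

def lms_equalize(rx, training=None):
    """rx: received samples, one per symbol.
    training: known symbols (e.g. the sync burst) used for the first
    len(training) updates; after that the equalizer runs decision-directed."""
    w = np.zeros(N_TAPS)
    w[N_TAPS // 2] = 1.0               # start as a pass-through filter
    buf = np.zeros(N_TAPS)
    out = np.zeros(len(rx))
    for n, sample in enumerate(rx):
        buf = np.roll(buf, 1)
        buf[0] = sample
        y = w @ buf
        if training is not None and n < len(training):
            desired = training[n]      # training mode on the known sync symbols
        else:
            desired = slice_symbol(y)  # decision-directed tracking
        err = desired - y
        w += MU * err * buf / (buf @ buf + 1e-12)   # normalized LMS tap update
        out[n] = y
    return out, w
--- End code ---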