Digital FPV video for drone racing
StillTrying:
For 5.8 GHz at shorter distances, or even indoors with lots of movement, I don't think circular polarization and up to 4-receiver diversity could be beat by any digital fixing. 5.8 GHz is almost radar and reflects off almost everything.
--- Quote from: Marco on January 19, 2019, 07:03:37 am ---There is nothing inherent in digital which causes latency, not even with motion compensation.
--- End quote ---
In theory, but in practice there can be around 100 ms of latency even when a miniature camera is connected directly, in analogue, to a 5 in LCD monitor.
Marco:
--- Quote from: StillTrying on January 19, 2019, 12:50:18 pm ---I don't think circular polarization and up to 4 receiver diversity could be beat by any digital fixing
--- End quote ---
Analogue can't use channel equalization and compression. The reflections change fast on a human timescale, but do they change fast enough to be a problem on the LMS algorithm's timescale?
Also, you could do something in between: use half the bandwidth to send a plain PCM image (but with the benefit of channel equalization, so still less ghosting than pure analogue), and use the other half to send a compressed refinement layer.
hexahedron:
WOW! Thanks for all the feedback on my project so far! Apologies for not responding; as I live in the USA, I was asleep :P The standard FPV system does indeed just directly FM-modulate the 5.8 GHz signal, and has a 9 MHz sub-carrier for audio.

I'm seeing a lot of talk about H.264, and while that would indeed compress the video much further, it has a few problems. First and foremost: latency! From what I know, implementing H.264 on the transmitter and receiver ends will add at least one frame of latency, which is unacceptable! Please correct me if I'm wrong on that.

My system should add at most about 0.1 frames of latency, and here's why. If you recall from my explanation of how JPEG works, we reduce the color channels' resolution by a factor of 2 in each dimension, then split the image into 8*8 blocks. This means 16 scan lines (horizontal rows of pixels) are required before compression can start. From my calculations, my FPGA system should be able to compress all the blocks in those 16 scan lines within 1 extra scan line. We can then send all of the data for those 16 scan lines in less than 16 more scan lines (the brightness channel is computed after 8 scan lines, so we start transmitting there). There are 480 scan lines (not including overscan) in a 480p video signal.
(1/480)*16 = 0.0333...
As it will take a little less than 16 scan lines to transfer all the data (for those 16 scan lines), and as all of that data is needed to reconstruct those 16 scan lines, the minimum latency is a little less than 32 scan lines.
(1/480)*32 = 0.0666...
If we take into account that there are 60 frames per second:
((1/480)*32)/60 = 0.001111... seconds of latency
That is about 1.111... milliseconds of latency!!!
Obviously, computing time will add a little more to that number, but I don't think latency will be an issue.
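The scan-line arithmetic above can be double-checked in a few lines (assuming the 480 active lines and 60 fps stated in the post):

```python
LINES_PER_FRAME = 480   # active lines in 480p, overscan ignored
FPS = 60
LATENCY_LINES = 32      # 16 lines buffered for 8*8 blocks + <16 lines to transmit

frame_fraction = LATENCY_LINES / LINES_PER_FRAME  # 0.0666... of a frame
latency_s = frame_fraction * (1 / FPS)            # each frame lasts 1/60 s
latency_ms = latency_s * 1000                     # ~1.111 ms
```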
Again, thanks for the positive feedback and useful discussion! This is the first forum I've found where people actually care about and understand what I'm working on, and I can't tell you how much that means to me!
Marco:
It depends on the encoder. x264, for example, can encode blocks in sync with the scan pattern (they've done development work for low-latency game streaming).
dmills:
I would start by giving the design of your receivers a serious look; proper filters and low-IMD3 (high-IP3) amplifiers and mixers would seem to be a good starting point. Having the receivers close to the course is NOT always the smart play: the inverse square law then makes a drone close to your RX aerials a VASTLY stronger signal than one on the other side of the course. If your aerials are further away, the two signals are much closer in (lower) level, and level you can make up with antenna gain; blocking dynamic range, not so much.
In terms of image coding, start by defining how much latency is acceptable ("as little as possible" is NOT a specification); that will tell you how many video lines you have to work with. Don't forget that some cameras and monitors have a nasty habit of introducing a frame or two of delay for their own purposes. For low latency, a vidicon (or image orthicon) tube and a CRT are hard to beat, but possibly a little heavy for an aerial drone!
You might be able to do interesting things with multiple receivers and with sneaking training sequences into the VBI, to let your system calculate the impulse response of the channel on a frame-by-frame basis. Multipath can, for example, be detected by looking for AM on the audio sub-carrier, which with multiple receivers can be used to lower the 'fitness' of any receiver that sees significant multipath.
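The training-sequence idea can be sketched: cross-correlating the received signal against a known PN-like sequence hidden in the VBI estimates the channel's impulse response, because a ±1 pseudo-random sequence has a sharp autocorrelation peak. A toy numpy illustration (the sequence length, channel taps, and amplitudes are all made-up assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
training = rng.choice([-1.0, 1.0], size=511)  # PN-like sequence sent in the VBI

# Toy channel: direct path plus a reflection 5 samples later at 40% amplitude.
channel = np.zeros(8)
channel[0] = 1.0
channel[5] = 0.4
received = np.convolve(training, channel)

# Cross-correlation with the known sequence; the zero-lag term of a 'full'
# correlation sits at index len(training) - 1.
corr = np.correlate(received, training, mode="full")
start = len(training) - 1
est = corr[start:start + len(channel)] / np.sum(training ** 2)
# est recovers the channel shape: a strong tap at delay 0, a weaker one at delay 5
```

With the impulse response in hand per frame, a diversity controller could down-rank any receiver whose estimate shows strong secondary taps.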
There are really two parts to the digital approach: data reduction, and the data coding for the link. You should probably provide some sort of back channel so the coder can be told what the signal at the receivers looks like and can adjust how much bandwidth it allocates to FEC to suit; it then has to tell the video data-reduction codec how much bandwidth remains...
One trick that can be useful is to shuffle (interleave) the data going into the transmitter so that bursts of interference are spread out as a few bit errors per line rather than a burst that wipes out a whole block of data. This is especially useful if the FEC data is added before the shuffle, as it allows reconstruction of even badly corrupted data, provided the temporal spread is large compared to the length of the interference burst.
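The shuffling described here is a classic block interleaver: write symbols into a matrix row-wise, transmit column-wise, so a burst on air turns into isolated errors after de-interleaving. A minimal numpy sketch (the matrix dimensions and burst length are arbitrary assumptions):

```python
import numpy as np

def interleave(symbols, rows, cols):
    """Write row-wise into a rows x cols block, read out column-wise."""
    return np.asarray(symbols).reshape(rows, cols).T.reshape(-1)

def deinterleave(symbols, rows, cols):
    """Inverse: write column-wise, read out row-wise."""
    return np.asarray(symbols).reshape(cols, rows).T.reshape(-1)

rows, cols = 16, 32
data = np.arange(rows * cols)            # stand-in for one FEC-coded block
tx_stream = interleave(data, rows, cols)

# A burst wiping out 10 consecutive transmitted symbols...
rx_stream = tx_stream.copy()
rx_stream[100:110] = -1
rx = deinterleave(rx_stream, rows, cols)

# ...lands as at most one error per row after de-interleaving, which a
# per-row FEC code can correct individually.
```

Note the ordering Dan points out: FEC first, then interleave, so each codeword sees only a couple of the burst's errors.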
Just some thoughts.
Regards, Dan.