Digital FPV video for drone racing
Marco:

--- Quote from: dmills on January 19, 2019, 06:50:02 pm ---I would start by giving the design of your receivers a serious look, proper filters and high IMD3 amplifiers and mixers would seem to be a good starting point.

--- End quote ---
A massive undertaking. If I really wanted an alternative to the RTC6715-based modules I'd just pay TI the eye-watering price for the LMX8410L and design around that ... the problem is hard enough as it is, and it's not like the other necessary parts for a more discrete design are sold at reasonable prices either; a synthesizer alone will run you the price of a cheap RTC6715 module.

--- Quote ---Having the receivers close to the course is NOT always the smart play, because the inverse square law then means a drone close to your RX aerials has a VASTLY stronger signal than one on the other side of the course, whereas if your aerials are further away the two signals are much closer in (lower) level; level you can make up with antenna gain, blocking dynamic range not so much.
--- End quote ---
Nice thing about the cheap receiver modules is that you can do a diversity receiver by simply having lots of modules and switching the IF output.
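For illustration, a minimal sketch of what that switching could look like on a small MCU, assuming each module's RSSI pin is read with an ADC and an analog video mux picks which module feeds the output; adc_read_rssi(), mux_select() and the hysteresis value are placeholders, not any particular part's API:

--- Code: ---
/* Hypothetical diversity switcher: route the module with the strongest
 * RSSI through an analog video mux. adc_read_rssi() and mux_select()
 * stand in for whatever HAL the MCU provides. */
#include <stdint.h>

#define NUM_MODULES 4
#define HYSTERESIS  50          /* ADC counts; avoids chattering between modules */

extern uint16_t adc_read_rssi(int module);   /* read the RSSI pin of one module */
extern void     mux_select(int module);      /* switch the video mux to that module */

static int active = 0;

void diversity_poll(void)
{
    int best = active;
    uint16_t best_rssi = adc_read_rssi(active);

    for (int m = 0; m < NUM_MODULES; m++) {
        uint16_t r = adc_read_rssi(m);
        if (r > best_rssi + HYSTERESIS) {   /* only switch on a clear win */
            best = m;
            best_rssi = r;
        }
    }
    if (best != active) {
        mux_select(best);
        active = best;
    }
}

/* Call diversity_poll() from a timer tick, e.g. every millisecond. */
--- End code ---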

--- Quote ---You might be able to do interesting things with multiple receivers and sneaking training sequences into the VBI to allow your system to calculate the impulse response of the channel on a frame by frame basis.
--- End quote ---
I doubt frame-by-frame is fast enough if you're trying to do channel equalization.
ogden:

--- Quote from: hexahedron on January 18, 2019, 08:57:29 pm ---I will try to explain the process as best I can, but if you are still confused, check out https://en.wikipedia.org/wiki/JPEG , it's a great article.

--- End quote ---

https://en.wikipedia.org/wiki/Motion_JPEG is an even better article.

I am afraid that you underestimate the complexity of digital video compression & transmission. You may want to reconsider and check what you can make out of existing stuff, like digital MJPEG cameras capable of 640x480 MJPEG @ 120 fps and 5.8GHz WiFi transceivers.

The camera I am talking about is something like this: https://www.google.com/search?q=USBFHD01M

Those usually use the popular Realtek MJPEG USB chip: https://www.realtek.com/en/products/computer-peripheral-ics/item/rts5822
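For a rough idea of what "existing stuff" buys you on a Linux SBC: a UVC camera like that shows up as /dev/videoN, and a short V4L2 loop hands you one ready-compressed JPEG per frame that could then be pushed out over the WiFi link. A minimal sketch, assuming the camera offers V4L2_PIX_FMT_MJPEG, with send_frame() standing in for whatever does the radio side, and with error handling omitted:

--- Code: ---
/* Minimal V4L2 MJPEG capture loop (sketch). Assumes the driver grants the
 * four requested buffers; send_frame() is a placeholder for the link code. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

extern void send_frame(const void *jpeg, size_t len);   /* hypothetical */

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);

    struct v4l2_format fmt = {0};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width  = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;   /* camera does the compression */
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    struct v4l2_requestbuffers req = {0};
    req.count  = 4;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    void *bufs[4];
    for (unsigned i = 0; i < 4; i++) {
        struct v4l2_buffer b = {0};
        b.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        b.memory = V4L2_MEMORY_MMAP;
        b.index  = i;
        ioctl(fd, VIDIOC_QUERYBUF, &b);
        bufs[i] = mmap(NULL, b.length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, b.m.offset);
        ioctl(fd, VIDIOC_QBUF, &b);
    }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);

    for (;;) {
        struct v4l2_buffer b = {0};
        b.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        b.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_DQBUF, &b);        /* blocks until a frame is ready */
        send_frame(bufs[b.index], b.bytesused);
        ioctl(fd, VIDIOC_QBUF, &b);
    }
}
--- End code ---

The point being that the camera already spits out individually decodable JPEG frames, so the hard work left is the radio link, not the compression.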
StillTrying:
After looking at some of the small fast quad FPV videos I don't think this is solvable by making the video digital. Even when using two-receiver diversity, BOTH receivers very often drop out at the same time, often for many frames.
I think some improvements would be to get the TX antenna further away from the body of the quad, which is not possible on a racing quad, or to have quite a few meters between the two diversity receiver antennas, which isn't very practicable either.
Even with digital video I think diversity that works is going to be essential for any improvement.
TheDane:
I am wondering why you are trying to re-invent something that already exists.
Try googling "UDP FPV video" and you'll see a lot of projects already working with low-latency digital video:

https://www.google.com/search?q=udp+fpv+video
Kilrah:

--- Quote from: Marco on January 19, 2019, 04:03:44 am ---Do you have the space/power for a low-power SBC (i.MX6ULL-based?)? The amount of effort needed to experiment with video coding algorithms will be a lot less than with an FPGA.

--- End quote ---


--- Quote from: Marco on January 19, 2019, 07:03:37 am ---There is nothing inherent in digital which causes latency, not even with motion compensation.

--- End quote ---

Nothing "inherent", but it's still percieved as such (and the case in practice with anything one tries stuff with) becasue 99.9% of implementations don't care about minimizing it or systems have "something" that increases it needlessly. That includes hardware encoders in most SoCs. And they all use standards that are not appropriate for that use as is.

For FPV what's crucial is:
- low latency
- as few breakups as possible, ideally none at all
- ultra-fast resync if one happens anyway
- immunity to noise, not in the sense of keeping a clean output but in the sense that the garbled output is still somewhat recognisable; if a couple of frames get garbled so badly that your image is essentially nothing more than 9 "color blobs" aka huge pixels, that's okay and much preferable to a breakup
- interference should corrupt as little of the image as possible. Many systems send a frame as one or two packets and have no sync within the frame, so if one packet can't be recovered by error correction you lose a significant portion of the frame, possibly the entirety, or all the rest of it after the corruption, because once good data starts coming again the receiver has no idea where it should go in the image. Better to send a frame as 1000 small packets and include coordinates in them, so that the receiver can place good packets in the right place (a rough sketch of such a packet follows below).
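A sketch of that per-packet addressing, with arbitrary field sizes and tile dimensions; a real header would also carry FEC data, and nothing here is from an existing protocol:

--- Code: ---
/* Sketch of per-packet addressing: every datagram says exactly where its
 * payload belongs, so a lost or corrupt packet only costs its own tile.
 * Sizes are placeholders. */
#include <stdint.h>

#define TILE_W 16            /* tile size in pixels (placeholder) */
#define TILE_H 16

struct tile_packet {
    uint16_t frame_id;       /* wraps; lets the RX discard stale tiles   */
    uint8_t  tile_x;         /* tile column within the frame             */
    uint8_t  tile_y;         /* tile row within the frame                */
    uint16_t payload_len;    /* compressed bytes that follow             */
    uint16_t crc;            /* per-packet integrity check               */
    uint8_t  payload[];      /* compressed tile data                     */
};

/* Receiver side: a good packet is pasted straight into the framebuffer at
 * (tile_x * TILE_W, tile_y * TILE_H); a bad one is simply skipped, leaving
 * the previous frame's pixels in that tile instead of garbling the rest. */
--- End code ---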

Problem is, most compression methods achieve their results by working on a large dataset, and here we want that dataset to be as small as possible.

I was thinking of streaming stuff in a way similar to progressive JPEG, i.e. you stream the same frame multiple times at increasing "resolutions".
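A minimal sketch of that layering idea, assuming an 8-bit greyscale 640x480 frame and a hypothetical send_pass() doing the packetization; the box-averaging passes below only illustrate the coarse-to-fine structure, they are not a real codec:

--- Code: ---
/* Progressive-style layering: pass 0 is a very coarse average of the frame,
 * later passes add finer detail. Each pass is small enough to fit in a few
 * packets, so a partially received frame still yields a usable, if blocky,
 * image. Illustration only. */
#include <stdint.h>
#include <stddef.h>

#define W 640
#define H 480

/* Downsample an 8-bit grayscale frame by 'factor' using box averaging. */
static void make_pass(const uint8_t *frame, uint8_t *out, int factor)
{
    for (int y = 0; y < H / factor; y++) {
        for (int x = 0; x < W / factor; x++) {
            uint32_t sum = 0;
            for (int dy = 0; dy < factor; dy++)
                for (int dx = 0; dx < factor; dx++)
                    sum += frame[(y * factor + dy) * W + (x * factor + dx)];
            out[y * (W / factor) + x] = (uint8_t)(sum / (factor * factor));
        }
    }
}

extern void send_pass(const uint8_t *data, size_t len, int pass);  /* hypothetical */

void stream_frame(const uint8_t *frame)
{
    static uint8_t buf[W * H];
    const int factors[] = { 80, 16, 4 };   /* coarse -> fine; divide 640x480 evenly */

    for (int p = 0; p < 3; p++) {
        int f = factors[p];
        make_pass(frame, buf, f);
        send_pass(buf, (size_t)(W / f) * (H / f), p);
    }
}
--- End code ---

If only the coarsest pass arrives you still get the "huge pixels" picture rather than a breakup, and each later pass just refines what is already on screen.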