There is a lot of redundancy from one line to the next, so some sort of line-to-line delta compression might be worth investigating, as it could be very simple to encode and decode.
There may also be some mileage in multiple receivers working on a line-by-line basis: as long as one receiver gets a good copy of a line, the whole frame will be correct. Receivers linked with gigabit Ethernet using raw packets (not TCP/IP) ought to be fast enough to handle this.
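The line-to-line delta idea can be sketched in a few lines: subtract each scanline from the previous one and run-length encode the (mostly zero) residual. A toy sketch, not anyone's actual scheme; all names are illustrative.

```python
# Sketch: delta-encode each scanline against the previous one, then
# run-length encode the residual, which is mostly zeros when lines
# are similar. Illustrative only.

def delta_encode(prev_line, line):
    # Per-pixel difference modulo 256; identical pixels give zeros.
    return bytes((b - a) & 0xFF for a, b in zip(prev_line, line))

def rle(data):
    # Trivial run-length encoding: (value, run) pairs, runs capped at 255.
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out.append((data[i], j - i))
        i = j
    return out

prev = bytes([10] * 640)                 # previous scanline
cur = bytes([10] * 630 + [50] * 10)      # current line differs in last 10 px
residual = delta_encode(prev, cur)
packed = rle(residual)
print(len(packed), "runs instead of", len(cur), "raw bytes")
```

Decoding is the mirror image: expand the runs and add them back to the previous line, so both ends stay one scanline of state deep.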
> There is a lot of redundancy from one line to the next, so some sort of line-to-line delta compression might be worth investigating, as it could be very simple to encode and decode.
Yes, indeed, motion estimation similar to h264's can be used, but then a low-latency compressor becomes an exponentially complex task... unless the video is a ~80% centered crop of the sensor area: each frame is compressed in a coordinate system shifted to match the previous frame, so redundant image parts (if any) can be omitted, yet the later frame is displayed in the original coordinates. A kind of whole-frame motion estimator. We may be on to something here.
[edit] This is more or less how digital steadycam works; obviously it does not do the "displayed in original coordinates" part.
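The whole-frame motion estimator boils down to a brute-force global shift search: try a small range of offsets and keep the one minimizing the sum of absolute differences. A minimal sketch using 1-D "frames" for brevity (a real version would search in 2-D); names are hypothetical.

```python
# Sketch of a whole-frame (global) shift estimator: pick the offset
# that minimizes the sum of absolute differences (SAD) over the
# overlapping region. 1-D lines for brevity.

def best_shift(prev, cur, max_shift=4):
    def sad(shift):
        n = len(cur)
        pairs = ((prev[i + shift], cur[i])
                 for i in range(max(0, -shift), min(n, n - shift)))
        return sum(abs(a - b) for a, b in pairs)
    return min(range(-max_shift, max_shift + 1), key=sad)

prev = [0, 0, 10, 80, 90, 80, 10, 0, 0, 0]
cur  = [0, 0, 0, 10, 80, 90, 80, 10, 0, 0]  # same scene panned by one pixel
print(best_shift(prev, cur))
```

Once the shift is known, the encoder can difference the frame against the shifted previous frame and transmit only the residual plus the offset.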
> Also may be some mileage in multiple receivers working on a line-by-line basis, so as long as one receiver gets a good line, the whole frame will be correct. Receivers linked with gbit ethernet using raw packets (not TCP/IP) ought to be fast enough to handle this
Yes, "digital diversity" should be included in the feature list. Though we talked about an 8-pixel vertical slice, (M)JPEG compression can be used. For 480p video the slice could be 16 pixels tall, so the whole frame is split into 30 packets. At 20 Mbit/s and 120 FPS that is ~694 bytes/packet.
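The per-packet budget above works out as follows (a sanity check of the arithmetic, nothing more):

```python
# Sanity-check the ~694 bytes/packet figure: 480p split into 16-line
# slices, 20 Mbit/s channel, 120 frames per second.

bitrate = 20_000_000                        # bits per second
fps = 120
lines = 480
slice_height = 16
packets_per_frame = lines // slice_height   # 30 slices per frame

bytes_per_frame = bitrate / fps / 8         # ~20833 bytes per frame
bytes_per_packet = bytes_per_frame / packets_per_frame
print(packets_per_frame, round(bytes_per_packet))  # 30 694
```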
You're offering to compress video with fast vertical panning at 292:1? You sound like those 21st-century teens who declare that there's no need for education, experience, or skills other than trained thumbs and the internet.
You sound like someone who hasn't actually tried to convert some action cam footage to 2 Mbps. It's not pretty, but then neither is NTSC. That's at less than 1 bit per symbol, which is a tad pessimistic ... noise isn't that high in the NTSC footage.
By saturating the data channel 100% you are adding one more frame time of latency.
Nah, as I said you can probably slice encode. Get X lines, encode as slice, put on wire, decode as slice ... that's <3X lines delay.
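The "<3X lines delay" claim is easy to sanity-check: with three pipelined stages (encode, transmit, decode) each one slice behind the previous, the worst case is about three slice times. Assumed numbers below (480 lines, 120 FPS, X = 16):

```python
# Worst-case latency of a 3-stage slice pipeline: each of encode,
# transmit, decode lags one slice (X lines) behind the stage before it.

lines, fps, X = 480, 120, 16
line_time_us = 1_000_000 / (lines * fps)   # time to scan one line, ~17.4 us
latency_us = 3 * X * line_time_us          # 48 lines of delay
print(round(line_time_us, 2), round(latency_us))
```

That is well under a millisecond, versus a full frame (8.3 ms at 120 FPS) for whole-frame encoding.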
120 fps isn't all that important, since displays which can actually do it are so rare. The mobile phone panels which can do it are unobtainium; there are just some expensive square panels for VR glasses ... or you need to move up to 15.6" laptop panels.
> You offer to compress video with fast vertical panning using 292:1 compression? - You sound like those 21-st century teens who declare that there's no need for education/experience/skills other than trained thumbs and internet
> You sound like someone who hasn't actually tried to convert some action cam footage to 2 Mbps.
Yes, I admit that I did not try to compress 480p 120fps action cam footage into 2 Mbps. Did you? Really? You sound like you tried everything.
> Nah, as I said you can probably slice encode. Get X lines, encode as slice, put on wire, decode as slice ... that's <3X lines delay.
This is not how the h264 you are promoting works.
> 120 fps isn't all that important, since displays which can actually do it are so sparse.
It is. You may need to re-read (or comprehend) my posts to understand why.
Would it not be easier to take the raw digital stream from the camera, encode it with something like Manchester II or a similar telephone-style 3- or 4-level polarity-independent scheme (i.e. AMI or NRZI), transmit a sync word, and send it keyed (AM, FM, whatever) with one of the sidebands suppressed by a simple filter, i.e. vestigial sideband like an old TV modulator? In other words, just MAKE IT PSEUDO ANALOG. Decode with a Costas loop if needed, etc. If need be, mix it with a pseudo-random stream to make it spread spectrum. You would have no error correction, no multipath protection, and the bandwidth is freaking huge; I know the downsides. Then just do diversity receive using a pilot carrier...
Might just work. I've seen far crazier stuff done in Ham Radio to get video across.
Expecting inbound laughter... in 5,4,3,2,1 ... But very low latency...
Steve
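For reference, the Manchester coding Steve mentions is simple enough to sketch: each bit becomes two half-bit symbols, which makes the stream DC-free and self-clocking. Illustrative only (conventions vary as to which edge direction means 1; this sketch picks one):

```python
# Manchester (biphase) sketch: 0 -> low-high, 1 -> high-low.
# Decoding only needs each pair's transition direction, which is why
# the scheme tolerates a clean polarity inversion of both symbols.

def manchester_encode(bits):
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

def manchester_decode(symbols):
    # Each bit occupies two symbols: falling edge -> 1, rising edge -> 0.
    return [1 if symbols[i] > symbols[i + 1] else 0
            for i in range(0, len(symbols), 2)]

data = [1, 0, 1, 1, 0]
line = manchester_encode(data)
print(line)
print(manchester_decode(line) == data)
```

The cost is the obvious one from the thread: the symbol rate (and hence bandwidth) is double the bit rate.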
One of the requirements is having 8 systems working together in a relatively narrow ISM band, so "freaking huge" bandwidth won't cut it.
BTW that already exists (WHDI-like systems), but while the RF latency is almost 0, the use of standard input/output interfaces and cameras/displays still introduces quite a bit of it in total. And the transmission performance is meh at anything more than short distances.
> Yes, I admit that I did not try to compress 480p 120fps action cam footage into 2Mbps. Did you? Really? - You sound like you tried everything.
vlc -I dummy -vvv "xxx.mp4" --sout="#transcode{venc=x264,fps=60,vcodec=h264,vb=2048}:standard{access=file,mux=ts,dst=yyy.mp4}"
Downloading a 60 fps movie off YouTube is a bit tricky, all the new Firefox extensions suck ... but YouTube Video and Audio Downloader can do it (you have to play a bit of a guessing game as to which format it is).
> This is not how h264 you are promoting, works.
Yes it does. Slices were in principle designed for parallel encoding/decoding, but they work fine to reduce latency too; x264 went down the same path when they were paid to create a low-latency encoder for network gaming.
> It is. You may need to re-read (or comprehend) my posts to understand why.
I know why higher framerate lowers latency, but it's simply not an option for now in a system at sensible cost/size. So framelocked encoding/decoding chasing/leading the scanline at 60 Hz is the best you can do.
> Would it not be easier to take the raw digital stream from the camera, encode it with something like Manchester 2 or similar telephone style 3 or 4 level polarity independent scheme, ie AMI or NRZI , transmit a sync word, and send keyed, AM, FM, Whatever, it with one of the sidebands suppressed by a simple filter?
To do it cheaply, it's by far easiest to use the existing 5.8 GHz chipsets ... in which case you get an amplitude-only input/output for an FM encoder/decoder.
For what it's worth, the RTC6715 seems pretty good at AGC at least; rejecting it out of hand as crap is presumptuous.
Hey again! I just had a fantastic idea, let's see what people think of it! Pure digital is certainly not going to be easy to accomplish, but what if we just improved the analog standard? Obviously NTSC has a lot of issues with how color is encoded, and I could fix that, but I have a better idea. What if we were to apply the discrete cosine transform to 8x8 blocks, but then transmit them as an analog signal? We could assign more time to the more important parts of the image (i.e. low frequencies and brightness), and on the other end we could do a ton of oversampling. This would result in the high-frequency parts of the image being lost first, giving the effect of higher levels of JPEG compression as signal quality is lost. Thoughts?
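The idea above is easy to demonstrate: transform an 8x8 block, keep only the low-frequency coefficients (the part that would get the most transmission time), and reconstruct. On smooth image content the error is small. A toy sketch of the transform math only, not a transmission scheme:

```python
import math

N = 8  # JPEG-style 8x8 block

def dct2(block):
    # Orthonormal 2-D DCT-II, direct O(N^4) form for clarity.
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    return [[c(u) * c(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

def idct2(coef):
    # Inverse of the orthonormal 2-D DCT-II above.
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    return [[sum(c(u) * c(v) * coef[u][v]
                 * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                 * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                 for u in range(N) for v in range(N))
             for y in range(N)] for x in range(N)]

# A smooth gradient block: its energy concentrates in low frequencies.
block = [[x + y for y in range(N)] for x in range(N)]
coef = dct2(block)

# Keep only the top-left 2x2 coefficients (crude "low frequencies only"),
# as if the channel degraded and the rest was lost.
kept = [[coef[u][v] if u < 2 and v < 2 else 0.0 for v in range(N)]
        for u in range(N)]
recon = idct2(kept)
err = max(abs(block[x][y] - recon[x][y])
          for x in range(N) for y in range(N))
print(round(err, 2))  # small error despite discarding 60 of 64 coefficients
```

Weighting transmission time per coefficient would be the analog counterpart of JPEG's quantization tables: spend the channel on what the eye notices.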
> So framelocked encoding/decoding chasing/leading the scanline at 60 Hz is the best you can do.
What's the point of building a 60 Hz digital system if by design its latency cannot be lower than existing NTSC (60 Hz half-frames)?
[edit] Thank you for the h.264 slices info. It makes sense to use just "standard" h264 for this application. Most likely you wanted to post the following link: https://en.wikipedia.org/wiki/Flexible_Macroblock_Ordering. I wonder: does the rPi h264 encoder have slicing support?
> What's the point of building 60Hz digital system if by design its latency cannot be lower than existing NTSC (60Hz half-frames)?
Even if you just did a pure PCM YUV encoding it would be an improvement on NTSC; NTSC is not bandwidth efficient. The black-box sync algorithms can almost certainly be improved upon too.
> Thoughts?
I'd spend a quarter of the bandwidth on interlaced HQVGA YUV at 60 Hz, then the rest on digital. If digital fails for a block, interpolate from the "analogue" signal.