Author Topic: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC  (Read 4812 times)


Offline rs20 (Topic starter)

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« on: November 23, 2015, 12:38:47 am »
When designing a QAM system, one might choose a very simple QAM constellation that has a very low "symbol error rate"*, which means that FEC with very low overhead can be used to reconstruct the original signal. However, if you make the QAM constellation too simple, you're just wasting bandwidth (in the communications sense).

So you choose a QAM constellation that is rather borderline, where individual symbols are only somewhat likely to be recognized correctly, and rely heavily on Trellis Encoding/FEC to recover a valid data stream.

Now IIUC, if you choose a wildly overaggressive QAM scheme (QAM-65536, say) and couple that with a quality FEC that has huge redundancy, then thanks to the near-optimality of modern FEC techniques you'll end up with much the same data rate as if you had used a more sensible QAM scheme. But maybe the computational complexity goes up?

My question: how does one choose a QAM constellation in the real world, given that you can sort of make it as high as you want? Are my assumptions above correct? More concretely, all engineering decisions are a balance between conflicting considerations. What is the consideration that prevents use of QAM-65536? Is it just computational complexity, or something else?

* Just inventing my own terms, please do correct me.
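
Edit: to put rough numbers on what I mean, here's a quick sketch (mine, assuming an ideal AWGN channel and using the Shannon bound C = log2(1 + SNR) per symbol as the yardstick) of the minimum SNR each constellation would need:

Code: [Select]
# Minimum SNR (Shannon bound, ideal AWGN channel assumed) to carry
# k bits per symbol: SNR >= 2^k - 1. Uncoded QAM needs considerably more.
import math

for bits in (2, 4, 6, 8, 10, 16):               # QPSK ... QAM-65536
    M = 2 ** bits
    snr_db = 10 * math.log10(2 ** bits - 1)
    print(f"QAM-{M:>5}: {bits:>2} bits/symbol needs >= {snr_db:5.1f} dB SNR")

So QAM-65536 is only even theoretically on the table above roughly 48 dB SNR, which already hints at where the practical limit comes from.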
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Re: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« Reply #1 on: November 23, 2015, 04:08:19 am »
Get back to the derivation of the channel capacity theorem. If you pollute individual QAM symbols with AWGN, the symbols where the AWGN content is below half the Euclidean separation of the constellation points will demodulate correctly. Those where it is above half will not. If you could average out all the peaks and troughs of the AWGN to a fixed level, a separation of constellation points just over twice this average level would ensure every symbol demodulates correctly. Any type of modulation will give you the same picture. I just used QAM as an example, as it's easier to visualise what is going on with a simple constellation than with something more complex like OFDM. It is conceptually possible to flatten the noise, and the capacity (the maximum possible error-free throughput) of the channel is where you are staying just the right side of that noise line. Work the maths of that through, and you have Shannon's equation.

The big question is: how do you smooth the noise to pretty much a constant level while keeping the information bits flowing through the system at the required rate? Essentially, you need to spread the information over time, to the point where the average of the noise over the spreading window is pretty much constant. If low latency is important in your application you might need to compromise on the spreading, but conceptually you can spread and spread until the noise is nearly constant, and then you nearly achieve capacity.

Modulation schemes like OFDM spread the information quite a bit through time, but most modulation schemes use short-term symbols. If you try to make really slow QAM symbols you will have to put more bits in each symbol, which is not the effect we are looking for. So, something more than modulation is needed to approach capacity. Channel coding, which smears the information bits through time, is the solution. Exactly how to channel code optimally has been a research topic for years. Recent developments like Polar Codes have got us pretty near to optimal, and we can get darned close to channel capacity without ludicrous latency. Let's be clear: simply calling channel coding FEC is misleading. There are many forms of FEC used in comms systems, but a channel coding scheme specifically works because it efficiently performs the required temporal smearing of the information.
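
A quick illustrative sketch of that smoothing idea (my own toy numbers): just averaging unit-variance AWGN over longer and longer windows shows the peaks and troughs flattening out as 1/sqrt(N).

Code: [Select]
# Averaging unit-variance AWGN over a window of N samples: the std of the
# averaged noise falls as 1/sqrt(N), i.e. the peaks and troughs flatten out.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=1_000_000)

for window in (1, 10, 100, 1000):
    averaged = noise.reshape(-1, window).mean(axis=1)
    print(f"window={window:5d}  std of averaged noise = {averaged.std():.4f} "
          f"(expected {1 / np.sqrt(window):.4f})")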

Most designs start with a given channel bandwidth. Then you choose a modulation scheme. The bandwidth and modulation scheme pretty much define your symbol rate. You should know the bit rate you want to achieve, and from that and the symbol rate you have your minimum bits per symbol before coding. The channel capacity equation will tell you the SNR it will take to achieve that. Now you need to devise a combination of modulation and channel coding which will let you get close to capacity and achieve your desired bit rate with a low BER.

Let's say you use QAM. Every extra bit you add to the information stream for channel coding doubles the number of constellation points and loses you 3 dB, which is a big cost that the coding needs to recover before it offers you any benefits. Noise mostly causes you to mistake a constellation point for one that's one or two steps away. It would take a once-in-a-lifetime super-extreme point on an AWGN waveform to carry you far across the constellation. Most coded QAM systems don't even code the bits which represent big steps across the constellation. They code the bits which represent small steps, and they usually end up with just one or two extra bits in the stream to the modulator. The improvement channel coding schemes bring comes more from how effectively they smear things through time than from how many bits there are.
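
To put some made-up numbers on that recipe (a sketch only - the 1 MHz bandwidth and 8 Mbit/s target are invented for illustration, and the symbol rate is assumed roughly equal to the bandwidth):

Code: [Select]
# Worked example with invented numbers: 1 MHz channel, 8 Mbit/s target,
# symbol rate assumed roughly equal to the bandwidth (Nyquist signalling).
import math

bandwidth_hz   = 1e6
target_bitrate = 8e6

symbol_rate     = bandwidth_hz                        # ~1 Msym/s
bits_per_symbol = target_bitrate / symbol_rate        # 8 information bits/symbol

# Shannon: C = B * log2(1 + SNR)  ->  SNR >= 2^(C/B) - 1
snr_min_db = 10 * math.log10(2 ** bits_per_symbol - 1)
print(f"{bits_per_symbol:.0f} info bits/symbol -> at least {snr_min_db:.1f} dB SNR")

# One extra coding bit per symbol doubles the constellation (256 -> 512 points)
# and costs roughly 3 dB, which the channel code must win back before it helps.
print(f"with 1 coding bit/symbol the constellation grows to {2 ** 9} points")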
« Last Edit: November 23, 2015, 04:15:54 am by coppice »
 

Offline rs20 (Topic starter)

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« Reply #2 on: November 23, 2015, 04:42:47 am »
To clarify, I should have been more specific and said that I'm doing QAM over OFDM.

Quote
Do you have SNR and BER feedback from receiver? If yes, do it adaptively. If not, then it really depends on experiments.

That's a terribly unprincipled approach, and in both cases it still doesn't answer the question "We're getting errors -- should we reduce the QAM, or up the FEC?"
« Last Edit: November 23, 2015, 04:46:00 am by rs20 »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Re: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« Reply #3 on: November 23, 2015, 05:00:20 am »
Quote
To clarify, I should have been more specific and said that I'm doing QAM over OFDM.
It doesn't make a huge difference to the answer to your question. Conceptually, OFDM could approach capacity without any channel coding. Comms books from the early 70s described what we now call OFDM as a way to conceptually approach capacity, usually commenting that a scheme like that was unlikely ever to be economical to implement. Now it's the basis of a <$2 WiFi chip.  :) In practice OFDM is still used with channel coding, as achieving all the necessary temporal smearing through lots of low-symbol-rate carriers has its drawbacks.
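
As an aside on the QAM-over-OFDM case: the usual practical trick - not something spelled out above - is to measure SNR per sub-carrier and load bits accordingly. A minimal bit-loading sketch, with an invented frequency-selective channel and an assumed 6 dB SNR margin ("gap"):

Code: [Select]
# Per-subcarrier bit loading for QAM over OFDM (illustration only).
# Each carrier gets floor(log2(1 + SNR/gap)) bits, where 'gap' is an
# assumed SNR margin covering the target BER and imperfect coding.
import numpy as np

rng = np.random.default_rng(1)
n_carriers = 16

# Invented frequency-selective channel: SNR varies across the band.
snr_db  = 25 + 10 * np.sin(np.linspace(0, 2 * np.pi, n_carriers)) + rng.normal(0, 2, n_carriers)
snr_lin = 10 ** (snr_db / 10)

gap_db = 6.0                                   # assumed margin
gap    = 10 ** (gap_db / 10)

bits = np.clip(np.floor(np.log2(1 + snr_lin / gap)).astype(int), 0, 10)

for k, (s, b) in enumerate(zip(snr_db, bits)):
    label = "unused" if b == 0 else f"QAM-{2 ** b}"
    print(f"carrier {k:2d}: SNR {s:5.1f} dB -> {b:2d} bits ({label})")
print("bits per OFDM symbol:", bits.sum())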

Quote
Quote
Do you have SNR and BER feedback from receiver? If yes, do it adaptively. If not, then it really depends on experiments.

That's a terribly unprincipled approach, and in both cases it still doesn't answer the question "We're getting errors -- should we reduce the QAM, or up the FEC?"
Without solid theory to build experiments upon you would just be stumbling around in the dark. Experimentation is certainly very important in a number of areas of comms system design. Only real-world experiments will tell you about the kind of multipathing and fading you will see in the field. However, the only experiments which have a solid place in basic channel design happen in computer models.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Re: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« Reply #4 on: November 23, 2015, 05:38:34 am »
Quote
Given the BW occupation and payload BW, just tweak QAM overhead and FEC overhead to see which works best.

The real science behind it relates to how noise is introduced, what channel it is transmitting in, what interference model you have, etc.

Sure, you can model everything, but the time spent modeling it won't be worth it. There is a hybrid way to solve this problem, combining experiments (automated simulation) and noise modeling.

You can simply model the noise source and do a parametric scanning simulation. With the right toolbox and Simulink this should not take too long.
You shouldn't model, you should use Simulink? I think you're going to have to expand on that a little. It seems like drivel.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Re: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« Reply #5 on: November 23, 2015, 06:02:53 am »
Quote
Quote
Quote
Given the BW occupation and payload BW, just tweak QAM overhead and FEC overhead to see which works best.

The real science behind it relates to how noise is introduced, what channel it is transmitting in, what interference model you have, etc.

Sure, you can model everything, but the time spent modeling it won't be worth it. There is a hybrid way to solve this problem, combining experiments (automated simulation) and noise modeling.

You can simply model the noise source and do a parametric scanning simulation. With the right toolbox and Simulink this should not take too long.
You shouldn't model, you should use Simulink? I think you're going to have to expand on that a little. It seems like drivel.

Model things in a painless graphical, drag-and-drop manner, and do quick and dirty simulations.

The hard way is to model things down to the physical layer and write tons of formulas across tens of pages, to derive a mathematical, algebraic solution.

My principle is: whenever numeric methods exist, do not use algebraic methods.
For most channel coding work nobody has algebraic models. Most things have to be handled by numerical models. None of this addresses the original question, so I have no idea what point you were trying to make.
 

Offline thewyliestcoyote

  • Regular Contributor
  • *
  • Posts: 100
Re: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« Reply #6 on: November 27, 2015, 10:57:57 am »
When considering modulation for a system it comes down to a number of factors. I will try to mention some of them, in no particular order, and this is by no means a complete list.

First I would like to address QAM-65536. This presents a couple of problems, the first being the linearity of the Tx PA, which can kill the performance. I have never seen or heard of anything this high order in practice. Additionally, if you have a constellation that dense it does not make sense to use a square: you would be throwing away the fraction of possible symbols that are inside the unit circle but not inside the contained square. Don't believe me? Take the ratio of the area of the unit circle to the largest contained square; that is the limit as the order of the constellation goes to infinity. The highest I have heard of was QAM-1024, and it was in a very directive point-to-point microwave link, 75 GHz if memory serves correctly. It had something crazy like 50 dB of SNR.

Second, error correction is not free. There are time, power, and complexity trade-offs, as with anything else. Here is a quick run-down of the common means of correcting errors.

First are the block codes. These, as the name suggests, operate on fixed-size blocks of data and therefore can only correct up to a fixed number of bit errors. In the naming convention, the first number is the number of bits in the block and the second is the number of data bits encoded. Three examples are the (3,1), (7,4), and (24,12) block codes. The (3,1) code, while not really practically useful, has a very elegant visualization, since 3D space is very easy to picture. Picture a cube and label each corner with a 3-bit Gray code (each corner differs by only 1 bit from the adjoining corners), then pick 2 opposite corners; those are the 2 code words (ECC bit sequences). The distance between these corners in bits is called the Hamming distance, and here it is 3, corresponding to the 3 single-bit alterations needed to change one code word into the other. Another parameter of interest is the coding rate, which for the (3,1) code is 1/3: the bits of data divided by the bits of the coded data. Because of the Hamming distance of 3, this code can correct a 1-bit alteration and detect, but not correct, 2-bit alterations. The (7,4) code does have applications, but mostly in things like USB and systems that either have large SNRs or don't care about operating close to the Shannon capacity limit. The (24,12) Golay code was used on Voyager and some other deep-space probes.

If you wish to correct more and more errors and get closer to Shannon's limit you have to make your block code bigger and bigger. To put it bluntly, this sucks and is a game not worth playing for real high-performance systems today, because there are better options: convolutional codes with Viterbi decoding, Turbo codes, and Low-Density Parity-Check (LDPC) codes. These all rely on larger chunks of data than are generally used in simple block codes and work somewhat like IIR filters. There are many systems that use Viterbi and will continue to, even in light of better codes like Turbo and LDPC. In fact, up until 2013 the answer was Viterbi unless you wished to pay patent money for Turbo.
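
A toy sketch of the (3,1) code described above (my own few lines, majority-vote decoding), just to show it correcting every single-bit error:

Code: [Select]
# Toy (3,1) repetition code: rate 1/3, Hamming distance 3,
# corrects any single bit error per block (majority vote),
# detects but cannot correct two errors.
def encode31(bit):
    return [bit, bit, bit]

def decode31(block):
    return 1 if sum(block) >= 2 else 0          # majority vote

for data_bit in (0, 1):
    for err_pos in range(3):
        rx = encode31(data_bit)
        rx[err_pos] ^= 1                        # flip one bit
        assert decode31(rx) == data_bit
print("every single-bit error corrected by the (3,1) code")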

Third is the consideration of inter-symbol interference (ISI). This is how much one symbol affects the other symbols transmitted; put simply, some of the energy of the previous symbol affects the current symbol. This happens for a couple of reasons. The first is the transmit pulse-shaping filter: there is no such thing as the perfect filter, it's just the real world, and we don't want to have to know the symbol we are going to transmit 1 year from now in order to send the current symbol. The second factor is the delay spread of the channel over which the data is being transmitted. In many cases this can have more of an effect than the pulse shaping, and it is the motivation for things like OFDM. OFDM is just the use of many sub-carriers to make a lot of narrower-bandwidth channels out of the one big channel. This means that all that delay spreading just becomes an amplitude and phase factor per sub-carrier, like what would be measured with a VNA measuring S21, with port 1 being the transmitter and port 2 being the receiver.
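
A quick numerical sketch of that last point (mine, with an invented 3-tap channel and a cyclic prefix longer than the delay spread): once the prefix is stripped, the dispersive channel collapses to one complex gain per sub-carrier, exactly the S21-like factor mentioned above.

Code: [Select]
# With a cyclic prefix longer than the channel impulse response, the
# delay-spread channel reduces to a single complex amplitude/phase factor
# per sub-carrier: FFT(rx block) = FFT(h) * (transmitted sub-carrier values).
import numpy as np

rng = np.random.default_rng(2)
N, cp_len = 64, 8
h = np.array([1.0, 0.5, 0.25])                    # toy 3-tap multipath channel

tx = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)    # QPSK values on the sub-carriers
time_block = np.fft.ifft(tx)
with_cp = np.concatenate([time_block[-cp_len:], time_block])

rx = np.convolve(with_cp, h)[cp_len:cp_len + N]   # channel, then strip the prefix
rx_freq = np.fft.fft(rx)

H = np.fft.fft(h, N)                              # per-subcarrier complex gain
print("max deviation from per-carrier model:", np.max(np.abs(rx_freq - H * tx)))  # ~1e-15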

Fourth is the amount of data to be sent and the desired overall performance of the system. If only small amounts of data need to be sent, high-order QAM does not make sense, because of the limited performance of any FEC over small amounts of data. With higher order, the symbol error probability goes up according to the Q function.
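
To make the Q-function dependence concrete, here is a small sketch using the standard nearest-neighbour approximation for square M-QAM in AWGN (the 25 dB SNR is just an illustrative value of mine):

Code: [Select]
# Approximate symbol error rate of square M-QAM in AWGN:
#   SER ~= 4 * (1 - 1/sqrt(M)) * Q( sqrt(3*SNR / (M-1)) )
# (nearest-neighbour approximation).
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

snr_db = 25.0                                   # illustrative SNR
snr = 10 ** (snr_db / 10)
for M in (4, 16, 64, 256, 1024):
    ser = 4 * (1 - 1 / math.sqrt(M)) * qfunc(math.sqrt(3 * snr / (M - 1)))
    print(f"QAM-{M:<4} at {snr_db:.0f} dB SNR: SER ~ {ser:.2e}")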

Fifth is the quality of the local oscillators of the transmitter and receiver. The QAM order limit is a direct function of the phase noise of the system.

Sixth, the system has some desired total bit error rate. That comes down to the nominal BER of the modulation at that SNR and the kinds of errors the error-correcting code can handle. We are talking about non-infinite sequences of data, so there is going to be some percentage of errors. This is why things further up the network stack have things like checksums.

I will try to wrap this up. It does not make sense to use a modulation that yields a raw BER of 49.9%; going back to the derivation of Shannon's theorem, that means you will need a very, very long error correction scheme to get any real useful data out of it, and at a very high cost. Furthermore, most FEC designs assume that the alterations to the signal are due to AWGN, not something correlated like PA non-linearity, ISI and so on. As a general rule of thumb, the highest-order constellations are only for things like point-to-point links and are not even suitable for long cables. As with any system, it is all about optimization for the application: you have to find the cost of the constellation order and of the FEC and minimize the total. There is no way around Shannon's capacity; it's like the speed of light. These are the limits on how information is moved around.

Are you asking for analytical expression(s) for what to use when, or just some kind of general understanding?
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Re: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« Reply #7 on: November 27, 2015, 11:30:06 am »
Quote
First I would like to address QAM-65536. This presents a couple of problems, the first being the linearity of the Tx PA, which can kill the performance. I have never seen or heard of anything this high order in practice. Additionally, if you have a constellation that dense it does not make sense to use a square: you would be throwing away the fraction of possible symbols that are inside the unit circle but not inside the contained square. Don't believe me? Take the ratio of the area of the unit circle to the largest contained square; that is the limit as the order of the constellation goes to infinity. The highest I have heard of was QAM-1024, and it was in a very directive point-to-point microwave link, 75 GHz if memory serves correctly. It had something crazy like 50 dB of SNR.
The issue of a square constellation is pretty much the same for QAM-64 or QAM-256, which are widely used. The peak power to send those corners is a similar issue, although the corners do greatly simplify blind recovery at the receiver. :) Look at V.34 for a constellation that absolutely had to avoid the corners - a 1664-point completely rounded constellation.
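
To put a number on the corner problem, here's a quick sketch (mine, constellation points only - pulse shaping and OFDM make the real peak-to-average ratio worse) of the PAPR of ideal square constellations:

Code: [Select]
# Peak-to-average power ratio of ideal square M-QAM constellations
# (constellation points only; pulse shaping / OFDM make real PAPR worse).
import numpy as np

for bits in (6, 8, 10):                        # QAM-64, QAM-256, QAM-1024
    M = 2 ** bits
    side = int(np.sqrt(M))
    levels = np.arange(-(side - 1), side, 2)   # ..., -3, -1, 1, 3, ...
    I, Q = np.meshgrid(levels, levels)
    power = I ** 2 + Q ** 2
    papr_db = 10 * np.log10(power.max() / power.mean())
    print(f"QAM-{M:<5}: constellation PAPR = {papr_db:.2f} dB")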
Quote
Second, error correction is not free.
I can't see anyone shying away from first class correction these days because of computational costs.
 

Offline DanielS

  • Frequent Contributor
  • **
  • Posts: 798
Re: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« Reply #8 on: November 27, 2015, 01:45:26 pm »
Quote
I can't see anyone shying away from first class correction these days because of computational costs.
There isn't much point in using a more robust FEC algorithm when the error correction and channel coding cost you at least as many bits as what you gained from more aggressive modulation. Once you reach that point, the additional computational complexity is simply not worth bothering with.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Re: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« Reply #9 on: November 27, 2015, 05:24:21 pm »
Quote
Quote
I can't see anyone shying away from first class correction these days because of computational costs.
There isn't much point in using a more robust FEC algorithm when the error correction and channel coding cost you at least as many bits as what you gained from more aggressive modulation. Once you reach that point, the additional computational complexity is simply not worth bothering with.
The optimal number of additional coding bits is always low, so the computation required to implement any currently known FEC scheme is not extreme.
 

Offline thewyliestcoyote

  • Regular Contributor
  • *
  • Posts: 100
Re: Aggressive QAM + aggressive FEC vs Robust QAM + minimal FEC
« Reply #10 on: November 27, 2015, 07:51:38 pm »
Quote
I can't see anyone shying away from first class correction these days because of computational costs.
I agree 100% that for most systems really good error correction, optimized to the application, is very important if not a must. It is the thing that will get you within 99% of the Shannon limit. This can only be done by having the error correction adapted to the channel of operation. But it does take work to optimize for an application, and the computational cost is not free, so it does affect the optimal design. It does not make sense to have the error correction code handling multi-bit errors in every symbol. Additionally, Turbo and LDPC decoding require a lot of very simple guess-and-check, i.e. flip one bit, check if valid, done; else try another bit. This is why very high performance FEC is done in FPGAs or ASICs rather than general-purpose processors.

It does not make sense, for a channel that has a capacity of 1 unit of data, to use a modulation intended for 10 units of data and have FEC pick up the difference.

Quote
Quote
First I would like to address QAM-65536. This presents a couple of problems, the first being the linearity of the Tx PA, which can kill the performance. I have never seen or heard of anything this high order in practice. Additionally, if you have a constellation that dense it does not make sense to use a square: you would be throwing away the fraction of possible symbols that are inside the unit circle but not inside the contained square. Don't believe me? Take the ratio of the area of the unit circle to the largest contained square; that is the limit as the order of the constellation goes to infinity. The highest I have heard of was QAM-1024, and it was in a very directive point-to-point microwave link, 75 GHz if memory serves correctly. It had something crazy like 50 dB of SNR.
The issue of a square constellation is pretty much the same for QAM-64 or QAM-256, which are widely used. The peak power to send those corners is a similar issue, although the corners do greatly simplify blind recovery at the receiver. :) Look at V.34 for a constellation that absolutely had to avoid the corners - a 1664-point completely rounded constellation.

Thanks for the tip on V.34.
Log base 2 of the number of possible symbols in the set is the number of coded bits sent per symbol. Looking at QAM-64 and QAM-256, the number of symbols given up is lower than for things like QAM-1024 or something larger, and lower than in the limiting case of just taking the ratio of the areas. It is not really the corners that are the problem. Yes, they are the most extreme part of the constellation, but they are on the unit circle, and I am guessing many symbols of V.34 are as well. You have to look at the EVM of each symbol after transmission. Distortion caused by the PA is always a real problem for power-limited systems. Otherwise, just use a class-A PA and only make 1% use of it.


Quote
Quote
Quote
I can't see anyone shying away from first class correction these days because of computational costs.
There isn't much point in using a more robust FEC algorithm when the error correction and channel coding cost you at least as many bits as what you gained from more aggressive modulation. Once you reach that point, the additional computational complexity is simply not worth bothering with.
The optimal number of additional coding bits is always low, so the computation required to implement any currently known FEC scheme is not extreme.
 
The number of coding bits must be greater than the number of bits from the modulation minus the number of bits from the Shannon capacity of the channel.
 

