I'm working on a school project that simulates a wireless channel with this model:
[Encoder] -> [8PSK Modulation] -> [Rayleigh fading] -> [AWGN] -> [Demodulation] -> [Decoder]
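
For context, here is a minimal sketch of the kind of chain I mean, in Python/NumPy. This is not my actual code: the Gray-mapped 8PSK, the perfect-CSI equalization, and the identity stubs standing in for the convolutional encoder/Viterbi decoder are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gray-coded, unit-energy 8PSK: put the 3-bit label g(p) = p ^ (p >> 1)
# at phase 2*pi*p/8, so neighbouring points differ in exactly one bit.
_p = np.arange(8)
CONST = np.zeros(8, dtype=complex)
CONST[_p ^ (_p >> 1)] = np.exp(2j * np.pi * _p / 8)

def mod_8psk(bits):
    # group bits in threes (MSB first) and look up the constellation point
    return CONST[bits.reshape(-1, 3) @ np.array([4, 2, 1])]

def demod_8psk(sym):
    # hard decision: nearest constellation point, then unpack its 3-bit label
    labels = np.abs(sym[:, None] - CONST[None, :]).argmin(axis=1)
    return ((labels[:, None] >> np.array([2, 1, 0])) & 1).ravel()

# placeholder code: swap in the real convolutional encoder / Viterbi decoder
encode = decode = lambda b: b    # identity stubs (uncoded, rate R = 1)

def run_chain(bits, ebn0_db, R=1.0):
    """One pass: encoder -> 8PSK -> Rayleigh -> AWGN -> demod -> decoder."""
    x = mod_8psk(encode(bits))
    # flat Rayleigh fading, one complex gain per symbol, E[|h|^2] = 1
    h = (rng.standard_normal(x.size)
         + 1j * rng.standard_normal(x.size)) / np.sqrt(2)
    # Es = 1 and each symbol carries 3*R info bits, so Es/N0 = 3*R*(Eb/N0)
    n0 = 1.0 / (3 * R * 10 ** (ebn0_db / 10))
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(x.size)
                               + 1j * rng.standard_normal(x.size))
    y = (h * x + noise) / h      # coherent demodulation with perfect CSI
    return decode(demod_8psk(y))
```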
The simulator works perfectly without the convolutional coding, and after a lot of testing I'm confident the encoder is correct, so the problem must be in the decoder. My power efficiency graph looks like the attached. Is there a Viterbi expert who might know why I get such a poor curve?
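
By power efficiency graph I mean BER against Eb/N0. With the placeholder chain above, the sweep would look roughly like this; the 3*R factor in the noise scaling is the kind of bookkeeping I could easily have gotten wrong once coding is switched on.

```python
for ebn0_db in range(0, 21, 5):
    bits = rng.integers(0, 2, 3 * 100_000)   # multiple of 3 for 8PSK
    ber = np.mean(run_chain(bits, ebn0_db) != bits)
    print(f"Eb/N0 = {ebn0_db:2d} dB  ->  BER = {ber:.5f}")
```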
I can post my actual code, but it's probably unreadable to most people.