Keyword: OFDM.
Goertzel filters were/are used in DTMF decoders.
DTMF is 8 separate tones, applied in 16 combinations.
I've got one word for you Benjamin, Plastics!
:palm: I was not kidding. In short: using an iFFT it is possible to generate multiple tones, depending on how you set the FFT bins. Using an FFT you "demodulate" it.
Further reading (http://rfmw.em.keysight.com/wireless/helpfiles/89600b/webhelp/subsystems/wlan-ofdm/Content/ofdm_basicprinciplesoverview.htm)
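To make the iFFT idea concrete, here is a minimal sketch in plain C (the block size, sample rate and bin choices are just illustrative assumptions, and a real modulator would use a proper IFFT library plus a cyclic prefix): every bin you set becomes a tone at bin * Fs / N.
Code:
/* Naive inverse DFT used to turn a set of "bins" into one block of
   multi-tone audio samples (illustration only). */
#include <math.h>
#include <stdio.h>

#define N   64                 /* block (IFFT) size - assumption */
#define FS  8000.0             /* sample rate in Hz - assumption */

int main(void)
{
    const double pi = 3.14159265358979323846;
    double bins[N] = {0};
    bins[4] = bins[7] = bins[11] = 1.0;   /* tones at 500, 875, 1375 Hz */

    for (int n = 0; n < N; n++) {         /* one real-valued output sample */
        double s = 0.0;
        for (int k = 0; k < N; k++)
            s += bins[k] * cos(2.0 * pi * k * n / N);
        printf("%f\n", s / N);
    }
    return 0;
}
On the receive side, the matching forward FFT (or one Goertzel filter per bin) recovers the bin amplitudes.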
A single abbreviation was much more relevant than your whole sentence, so stop :horse: and stay on topic, please.
I never thought you were kidding. You have to say something to make a joke.
Perhaps some clarification is in order:
1. Only 1 needed - can be "moderately" expensive (~$200)
2. User end (decoder) is simple Arduino based FFT display. 32 "bins" seems possible.
3. Tones from 300 - 3000 Hz (audio passband of 220 MHz amateur transceiver) - preferably not harmonically related.
I have an amateur 2M repeater with 7 remote receive sites in addition to the local receiver.
At the control site I can observe the activity of each remote site - receive or not receive (the first 8 "bits")
I can also watch the voter selected receiver indication (1 "bit" for each receiver but only 1 selected at a time) - the second 8 "bits"
We already take this information, place it into an AX.25 packet, and transmit it out at 1200 bps, but it can run 1/2 to 3/4 of a second behind due to packetization and buffering.
This lag prevents observing the system as a whole in real time, which is sometimes required for troubleshooting.
It also requires the user to make an investment in obtuse hardware and arcane software - in some cases a full PC or several (3) SOCs.
One Arduino and a 2 line LCD display is WAY simpler and cheaper.
So far so good - several interesting things to look at more closely.
Bill
Why don't you try contributing something rather than arguing about it?
I already contributed by pointing out that OFDM could be a solution. Sorry if you were unable to process that.
Try explaining how the frequencies would be assigned, how they should relate to the sample rate for optimum isolation of the channels, and how fast data might be sent given the constraints of the FFT. Do you have experience with any of this?
OP/you can start with the link (http://rfmw.em.keysight.com/wireless/helpfiles/89600b/webhelp/subsystems/wlan-ofdm/Content/ofdm_basicprinciplesoverview.htm) I gave. Another useful resource on OFDM multicarrier communication is https://en.wikipedia.org/wiki/PRIME_(PLC) (https://en.wikipedia.org/wiki/PRIME_(PLC)) - PRIME-Spec_v1.3.6.pdf (https://www.prime-alliance.org/wp-content/uploads/2020/04/PRIME-Spec_v1.3.6.pdf) to be specific. Obviously, basic knowledge of FFT/iFFT will help. Yes, I know enough about the subject. No, I won't teach any classes here.
In DTMF, the difference in amplitude between the two tones is called "twist". I've always thought that was a strange term and wondered how it came about.
Then assuming the FM rigs have an audio bandwidth that covers 500Hz to 2500 Hz, you might choose tones starting at 550 Hz with a tone spacing of (say) 125 Hz. This avoids harmonics of the lower tones (caused by distortion) falling on top of the upper tones -- at least I think it does. FFT seems to be the way to decode these, and software generation of the tones seems pretty straightforward. Of course the FFT buckets have to be narrow enough to not include the harmonics that *are* there. I suppose you could use the same tone generation software to create quadrature tones and make individual tone detectors. I have no idea if this is easier or harder than the FFT -- never done it myself.
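For what it's worth, an individual quadrature tone detector is only a few lines. A sketch in plain C - the block length, sample rate and threshold handling are left to the caller, and this is just the straightforward I/Q correlation, not a claim about the best approach:
Code:
/* Measure the level of a single tone in a block of samples by correlating
   with a cosine and a sine at that frequency (quadrature detection). */
#include <math.h>

double tone_level(const double *x, int n, double tone_hz, double fs_hz)
{
    const double pi = 3.14159265358979323846;
    double i_sum = 0.0, q_sum = 0.0;

    for (int k = 0; k < n; k++) {
        double ph = 2.0 * pi * tone_hz * (double)k / fs_hz;
        i_sum += x[k] * cos(ph);
        q_sum += x[k] * sin(ph);
    }
    /* roughly the tone's amplitude, independent of its phase */
    return sqrt(i_sum * i_sum + q_sum * q_sum) * 2.0 / (double)n;
}
Run 16 of these (one per tone) on each block and compare each result against a threshold; keeping the block length equal to a whole number of cycles of every tone (or windowing) keeps the channels from leaking into each other.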
Why is it good to limit the low end of the frequency range so much? If your tone spacing is 125 Hz, there's no reason in the processing not to use down to, say, 200 Hz to allow a bit wider spacing. Is there something about the RF gear that limits the low end range?
If I'm not mistaken, the OP intends to send these tones via ham narrowband FM. These radios usually have a lower-frequency audio cutoff somewhere above 300 Hz, and they often limit the high end to 3 kHz or even less. It's sort of like the old telephony audio range of 300-3000 Hz, and often even narrower.
16 bits at 1200 bps is ~13 ms; an Arduino should have no problem doing 1200 bps.
Unfortunately it's not the "bits" that are the problem with AX.25.
Once the initial connection is established, data is transferred in frames - I (information) frames to be specific.
Fields in a frame:
Flag (8 bits), Address (112 to 560 bits), Control (8 bits), PID (8 bits), Information (our 16 bits), FCS (16 bits), Flag (8 bits) -
Therefore we have to transmit a WHOLE BUNCH more than our 16 bits of data to satisfy the AX.25 protocol requirements.
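To put numbers on that: with the minimum 112-bit address field, one I-frame is 8 + 112 + 8 + 8 + 16 + 16 + 8 = 176 bits to move 16 bits of payload - roughly 11:1 overhead - and at 1200 bps the frame alone takes about 147 ms, before counting key-up time or the ACK exchange.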
Oh - and the frame, once received, must be acknowledged back to the sender before the sender will send the next frame.
Sigh - AX.25 was not the best implementation, which is why I'm asking the group mind for ideas.
Thanks,
Bill
So you are saying this is how the equipment is designed. Is there a reason for this low end cutoff of the equipment being so high? Is it just to reduce the required size of the coupling caps in the signal path?
Not at all! The audio channel is optimized for voice communications, so the frequency cutoffs are chosen to pass the voice frequencies needed for comprehension. By design it's not Hi-Fi, but an optimization of radio channel utilization and voice communications effectiveness.
True, shifting down the low-frequency cutoff wouldn't increase the FM occupied bandwidth, but excess low-frequency response can reduce comprehension as it allows low-frequency noise (vehicle rumble, etc) to interfere.
The low-end frequency of the baseband will inevitably have an impact on the achievable symbol rate.
The audio channel is optimized for voice communications, so the frequency cutoffs are chosen to pass the voice frequencies needed for comprehension. By design it's not Hi-Fi, but an optimization of radio channel utilization and voice communications effectiveness.
Ok, there! Everything in the first paragraph is not actually relevant. But the need to limit the low frequency response to exclude noise would be an actual reason to limit the bandwidth.
You can do the DDS thing in pure software. I've done it with only 2 simultaneous tones, but going to 16 only needs a faster processor. DDS is easy enough to even be feasible in assembly.
A small FPGA will also fit it, in case you need more speed.
Regards.
The discussion is about the low frequency cutoff. While below 500 Hz may not be required for intelligibility, unless there is a specific reason not to transmit it, there is no reason to block it. Telephony passes frequencies between 300 Hz and 500 Hz, so it would seem there may be some minor loss of intelligibility - otherwise the phone companies and radio would likely use the same pass band. Your second paragraph gives a reason to block it: low-frequency noise, which is present in mobile applications. This would not have been a consideration in the initial design of the telephony systems which set their standard.
In this post you have shifted to talking about the high frequency cutoff, totally unrelated.
Someone else talked about using frequencies below 500 Hz for CTCSS so clearly frequencies below 500 Hz are being transmitted on the RF, just not used for voice.
Amateur radio on the VHF/UHF bands uses the lower frequencies for CTCSS so the audio frequency response begins above this range.
I have modified a few FM transceivers to gain direct access to the FM modulator and demodulator. As you mention, the frequencies below the lower end of the voice band are used for sub-audible signalling. Frequencies above the voice band are used to measure quieting which is what triggers the squelch on the receiver. But of course you cannot access either of these through the audio output because it is filtered to limit it to roughly 300 to 3000 Hz to remove them.
Another thing to consider is that the audio filtering will usually have considerable phase distortion which can interfere with modulation that relies on phase.
If you can use an Arduino, can't you just use a Teensy 4 with their audio library to generate the sines, sum them, and output them?
Currently looking at possible Arduino generation using PWM, but finding that having only a single clock frequency to play with makes tone generation difficult.
As I mentioned, VARA is a software modem, optimized for radio comms, that runs on a standard PC. It uses 52 carriers, with the individual carrier modulation adapted according to the channel characteristics. The symbol rate (Baud) is 37.5 Hz. The individual carrier modulation ranges from BPSK through 4PSK / 8PSK / 16QAM and 32QAM. The bytes-per-packet rate (or packet length) is also varied to optimize for the channel conditions. The actual modulation and demodulation are probably the easy part; it's the adaptive capabilities that are quite complicated.
I'm not suggesting that the OP needs to do anything this complicated, just that a simple software modem isn't that hard and perhaps more bits/second could be achieved by choice of a better modulation technique.
VARA: https://rosmodem.wordpress.com/2017/09/03/vara-hf-modem/ (https://rosmodem.wordpress.com/2017/09/03/vara-hf-modem/)
In commercial and most amateur VHF systems, frequencies below 300 hertz are filtered out and used for "sub-audible" signalling - CTCSS / PL / Quiet Channel, for example.
Bill
EDIT: How many 16-bit symbols do you need to transmit per second?
I don't care how inefficient Arduino programming is, a 600 MHz Cortex M-7 is not going to care for this.
All of them :D
VARA: https://rosmodem.wordpress.com/2017/09/03/vara-hf-modem/ (https://rosmodem.wordpress.com/2017/09/03/vara-hf-modem/)
I did not find any source code. Is it open source?
Btw, yet another (open source) soft modem implementation (for Linux) is linmodem (https://github.com/geofft/linmodem).
But I would only consider a sophisticated modem implementation if a higher bit rate and/or better BER are really required.
Seems to me that OldVolts doesn't have very high requirements, though, and rather wants a simple solution.
This should be trivial using an FPGA, implementing 16 DDS blocks and generating, say, 8-bit sine waves.
Each sine wave digital value would be added to the current sample when its corresponding bit was a '1'.
The sine table would be scaled so all of them hitting the max value at the same time would not overload the number of bits on the output. Then, you feed that sum to a DAC.
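To put numbers on the scaling: 16 sine waves of amplitude 127 sum to at most 16 x 127 = 2032, so a signed 12-bit accumulator is enough, and dropping the 4 LSBs (dividing by 16) brings the sum back to 8 bits for the DAC.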
I'm guessing for modest audio frequencies that this could be done with a DSP chip or maybe even fast microcontroller.
Jon
Yeah, it seems like it's time to ignore this thread. Am I wrong?
Not wrong.
No offense to the OP intended, but usually the reason for insufficient info is a lack of knowledge to provide it. That comes to mind when I see the requirement "NO other way than 16 simultaneous tones and a self-made modem".
Quote: Yeah, it seems like it's time to ignore this thread. Am I wrong?
You are wrong indeed. Besides OP there are other forum members who can learn something new.
Without a clear statement of requirements it's a bit like asking how long is a piece of string? Do you plan to continue to espouse effluently on a vague subject? One of the posts consisted of nearly a single word with no explanation at all. So I guess on the average the posts are a good length.
So how long is that piece of string? :-DD
The answer doesn't much matter to me. What I enjoy is the process we go through along the way, wherever we end up. Perhaps we never end up anywhere, and that's OK too.
Discussing issues in general is fine, but without a context we might as well be discussing the weather. Hmmm... I wonder what an FFT of the weather patterns would look like. Better yet, do an IFFT and control the weather! 8)
Hmmm... I wonder what an FFT of the weather patterns would look like.
https://joannenova.com.au/2013/05/fourier-analysis-reveals-six-natural-cycles-no-man-made-effect-predicts-cooling/
[...] It is the decoding that is a bit more difficult. Some are suggesting an FFT is the way to go, but the devil is in the details [...]
I found a time slot to refine my simulation. Octave script is attached.
So can you tell us what you decided to implement other than just saying "KISS" and "Arduino"?
Right. BTW he said "LPF" and "CMOS" as well 8)
... I worked at a company once that designed push to talk military radios (never call them walkie talkies). The old timers had very little sophistication and any time they wanted to say MCU they just said PIC. I kept thinking they had a serious hard-on for Microchip for some reason.
Walkie talkies were push to talk military radios, but they weren't hand held; those are handie talkies. HT is still the term used today for hand held PTT radios. We used to use the term "walkie talkies" when we were kids for the AM toy PTT radios, but that wasn't what the term originally meant. I worked for Moto on HTs.
I put an AVR in a product at work last year. Everyone refers to it as "the PIC". I've stopped correcting them; it's no use.
They still make backpack versions, called "manpack", which are more powerful with longer range, as well as a vehicle-mounted version with even more power. I forget the term for that - I think it was just a vehicle adapter with an amplifier.
Could you show some more info about your solution for those who don't have Octave installed (yet)?
I wonder why you still use the initial upconverter with FFT16 instead of, say, 16 (middle-ish) bins of an FFT64, or even an FFT32 without the upconverter?
OFDM requires the insertion of a guard interval between the symbols (mostly a cyclic prefix -> CP-OFDM), so I guess symbol timing recovery cannot be dispensed with at the receiver, and the receiver cannot work async? Or can it?
It is not the solution, but a simulation of the potential outcome. The simulation assumes that the carriers are generated via (software) DDS and AM-modulated (regular double-sideband modulation) with the 16 digital signals (after band-limiting them with a pulse shaping filter). Data streams are supposed to be async, with a (maximum) bit rate of 84.375 bits/s each. Carriers are 384.375, 553.125, 721.875, 890.625, 1059.375, 1228.125, 1396.875, 1565.625, 1734.375, 1903.125, 2071.875, 2240.625, 2409.375, 2578.125, 2746.875, 2915.625 Hz. Hopefully "KISS enough" :-// The most computationally expensive thing at the sender side is likely the pulse shaping filter.
Receiver side assumes sampling the audio signal at 10.8kSa/s sampling rate, a 384.375Hz quadrature LO (realized as software DDS as well), mixer (one complex multiplication per sample), and overlapping 128-point FFT (112 points overlap -> 8 FFT evaluations per symbol).
EDIT: Prior decimation could reduce the FFT size, but OTOH it requires an additional decimation filter. I'm not sure whether that would be computationally cheaper, after all. And decimation would still be limited to 2x, since the transition band of the decimation filter needs to be > 0, and the 2700 Hz channel bandwidth is fully occupied by the 16 sub-bands.
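Working through those numbers: 10.8 kSa/s over a 128-point FFT gives a bin spacing of 10800/128 = 84.375 Hz, the same as the per-carrier bit rate; the hop between FFTs is 128 - 112 = 16 samples, so there are 675 FFT evaluations per second and 675/8 = 84.375 symbols per second; and after mixing with the 384.375 Hz LO the carriers land at multiples of 168.75 Hz, i.e. on every second FFT bin.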
Where did you get the max bit rate? I'm assuming it was from the frequencies you picked?
I don't understand the need for the down conversion. Your spectral results don't seem to be any different, just different frequencies. Were your graphs not from the 128 point FFT, but rather higher resolution processing?
Try an 8 kHz sample rate, iFFT64 and bins 4 to 20. Unused bins are set to 0; some may be used to transmit an FCS, btw. Decode accordingly - using an 8 kHz sample rate and FFT64.
A cyclic prefix will not hurt - because the prolonged symbol does not need windowing for the FFT at the receive end. Sync is needed as well - to avoid inter-symbol interference - though for this particular application (OOK of subcarriers) it can be as simple as a brick: transmit some silence (>= 1/4 of the symbol time) between symbols. On the receive end, "listen" with an AM envelope detector and save the received baseband samples in a buffer. As soon as the envelope detector finds a signal which lasts as long as a symbol, try an FFT on the buffer and consider it done. Well, maybe check the FCS.
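A rough sketch of that decode step in plain C, using the 8 kHz / 64-sample-symbol numbers from above and a naive per-bin DFT in place of a real FFT (the threshold, the envelope detector that decides when to call this, and the FCS check are left out, so treat it as an outline rather than a complete receiver):
Code:
/* Decode one buffered 64-sample symbol: measure 16 consecutive DFT bins
   (OOK sub-carriers starting at bin 4) and return them as 16 bits. */
#include <math.h>
#include <stdint.h>

#define SYM_N      64          /* samples per symbol at 8 kSa/s */
#define FIRST_BIN   4          /* lowest sub-carrier bin        */

uint16_t decode_symbol(const double *sym, double threshold)
{
    const double pi = 3.14159265358979323846;
    uint16_t bits = 0;

    for (int b = 0; b < 16; b++) {
        int k = FIRST_BIN + b;
        double re = 0.0, im = 0.0;
        for (int n = 0; n < SYM_N; n++) {
            double ph = 2.0 * pi * (double)(k * n) / SYM_N;
            re += sym[n] * cos(ph);
            im -= sym[n] * sin(ph);
        }
        if (sqrt(re * re + im * im) * 2.0 / SYM_N > threshold)
            bits |= (uint16_t)1 << b;      /* tone present -> bit set */
    }
    return bits;
}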
Note this is eventually just AM+FDM, it is not OFDM.
Neither are the carriers orthogonal, nor does any symbol synchronization occur at the receiver.
EDIT: Not exactly true. While the actual carriers are not orthogonal, the down-converted carriers happen to be. W/o symbol synchronization, the benefit of orthogonality is limited, though.
The FFT on the receiver side (in conjunction with the window function) is just "abused" as filter bank.
I'm indeed tempted to try (simulate) that OFDM stuff, too. It seems to have a couple of nice properties 8)
Would be nice to see what you make out of it.
It is OFDM indeed, just obfuscated [...]
Isn't the "circular" property missing for the symbol's time domain samples, when they were not generated via IFFT, but as 16 independent AM streams, and without considering dedicated symbol borders?
The exact frequency of each subcarrier, not its phase, determines whether the "symbol" has the circular property or not. An OFDM symbol (the output from the iFFT) contains an integer number of periods of each subcarrier sine. So if you generate the correct subcarrier waveforms using math other than the iFFT, the result is still valid. You did kinda prove it yourself :) Another argument: OFDM mostly uses nPSK modulation for the subcarriers, meaning their phase can vary, yet that does not impact the circular properties of the symbol and still allows a cyclic prefix.
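Put slightly more formally (my sketch of the argument, assuming the per-carrier amplitude and phase are constant within the symbol):
$$x(t)=\sum_k A_k \sin(2\pi f_k t + \varphi_k), \qquad f_k = \frac{n_k}{T},\; n_k \in \mathbb{N} \;\Longrightarrow\; x(t+T) = x(t),$$
so the symbol is one period of a periodic waveform, and a cyclic prefix copied from its tail is still a valid segment of the same waveform, whatever the individual amplitudes and phases are.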
Generating the subcarrier waveform for a symbol via IFFT implies that the sub-carrier amplitude and phase are constant for the symbol duration. OTOH, a sine wave AM-modulated with the low-pass-filtered NRZ signal does not have a constant amplitude over the symbol duration. Still equivalent?
Sorry but not a prank. Really looking to generate 16 different audio tones simultaneously with the ability to control each individual tone.
These signals are audio. There is no reason to use 16 different DDS generators. A single generator with 16 sets of registers (easily implemented in LUT memory) will run many times faster than required. As others have pointed out this can easily be done in nearly any MCU you can think of. It is the decoding that is a bit more difficult. Some are suggesting an FFT is the way to go, but the devil is in the details and so far I have not seen any details from the OP. In fact, this "project" may just be a prank. He seems to be responding to serious questions with sarcasm.
The tones I have chosen are not harmonically related through the 5th order and are based on prime numbers, so the FFT decoder should have an easy time keeping the tone display "clean".
I would also think of adding a pilot tone, so the receiver can 'anchor' onto something.
You've got that backwards. An FFT can only give a 'clean' output for tones that are periodic within the sample set. Everything else will spill in to other bins.
... I cannot see why this can't be dealt with as a parallel to serial conversion, simple transmission, serial to parallel at the other end. These are decades old techniques with simple, cheap and well proven technology....
I completely agree. I was baffled as I read the LONG description. What would make someone devise the proposed multitone solution to the problem?
> A mobile can vote between 3 or 4 receivers in ONE second -NO packet system is that fast.
So the data needs to be sent at least at a 1 Hz rate. 16 bits every 1 second.
You've got that backwards. An FFT can only give a 'clean' output for tones that are periodic within the sample set. Everything else will spill in to other bins.
I think he's got that much right. Say you have an FFT with a bin width of 50 Hz. Use prime-related tones of 50 x {2, 3, 5 ... 53} (the first 16 primes). This gives you tones of 100 Hz, 150 Hz, 250 Hz ... 2650 Hz. You can scale the bins and multipliers to better fit the VHF audio channel characteristics, but this will avoid false detection of distortion products. I haven't looked at any windowing requirements, so perhaps the spectral leakage will be an issue.
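A quick brute-force check of that (plain C; the 50 Hz bin width and the first-16-primes choice are just the example numbers above):
Code:
/* Print the 16 prime-multiple tones and flag any 2nd..5th harmonic of a
   tone that lands exactly on another tone in the set. */
#include <stdio.h>

int main(void)
{
    const int primes[16] = {2, 3, 5, 7, 11, 13, 17, 19,
                            23, 29, 31, 37, 41, 43, 47, 53};
    const int bin_hz = 50;

    for (int i = 0; i < 16; i++)
        printf("tone %2d: %4d Hz\n", i, primes[i] * bin_hz);

    for (int i = 0; i < 16; i++)
        for (int h = 2; h <= 5; h++)
            for (int j = 0; j < 16; j++)
                if (h * primes[i] == primes[j])
                    printf("collision: %d Hz x%d lands on %d Hz\n",
                           primes[i] * bin_hz, h, primes[j] * bin_hz);
    return 0;
}
It prints no collisions, since h times a prime (h >= 2) is composite and can never equal another prime in the set; harmonics landing *near* a tone (and leaking into its bin) are a separate windowing question.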
I thought you said it was going to be an Arduino??? KISS, no?
Not enough I/O pins on simple Arduinos.
One crucial requirement I don't think I've seen is the time between an event and the time it is recognized at the other end. That is what seems to dominate his thinking, as if it needs to be as close to zero as possible, rather than defining a number and then working with that.
I did ask about that, but no reply. The implication is a person could see events in real time. A person takes quite a few milliseconds to notice an event and many more to react. A good 'below human perception' delay is 20 ms, the frame period of some television systems. And if it is a person reacting, a certain level of error tolerance is built in - 1 bit in 100 would be noticed by a person but quickly ignored. Using straight async characters with a start bit and a stop bit gives 8 data bits and 2 overhead. 16 bits of data takes 20 bits, which at 1200 baud takes less than 20 ms. 1200 baud is slow by amateur radio standards.
I completely agree. I was baffled as I read the LONG description. What would make someone devise the proposed multitone solution to the problem?
Try 10 "updates" per second - 10 Baud per "carrier".
Worth several times that amount.
My preferred solution (given the meager specification) would be to capture and time stamp events, and have some way of recording everything. That way, events can be analyzed at leisure. Also, it doesn't need someone sitting looking at it. At one time I was responsible for 75 computers around the world. Each one generated a couple of hundred events a day. These were compressed (zip) and sent daily to a computer on our site. Then a scheduled job filtered and summarised, so I had a report at 8:00 a.m. when I arrived at work, which may have a dozen items for action. Usually a simple fix, but sometimes a trawl through the raw data to get a picture (The events were recorded in local time, but before zipping they were adjusted to UTC. Why certain operating systems don't run on UTC is an annoyance. One mainframe was turned off for an hour at start of daylight savings, couldn't handle records time stamped in the future. Some of our problems were on networks crossing time zones so UTC made correlation easier).
my 2¢.
Historical - even by seconds - simply doesn't cut it.
As has been mentioned, human response being what it is, you would prefer the delay between event and display to be no greater than 30 ms. My suggestion to use 300 bps 2FSK gave you a response time of perhaps 200 ms (which I suspect would be perfectly usable for your needs). On a good ham VHF link you can pretty easily bump that data rate from 300 up to 1200, giving you a 50 ms transmission time. You can speed this further by simplifying my four-word "packet", perhaps by eliminating the final checksum byte and instead relying on the parity bits for error checking.
$ ./encode | ./decode | more
Hard limited % of max
1110011010100010 100 100 100 0 0 100 100 0 100 0 100 0 0 0 100 0
1110011010100010 100 100 100 0 0 100 100 0 100 0 100 0 0 0 100 0
1110011010100010 100 100 100 0 0 100 100 0 100 0 100 0 0 0 100 0
1110011010100010 100 100 100 0 0 100 100 0 100 0 100 0 0 0 100 0
1110011010100010 100 100 100 0 0 100 100 0 100 0 100 0 0 0 100 0
0110001111000100 0 100 100 0 0 0 100 100 100 100 0 0 0 100 0 0
0110001111000100 0 100 100 0 0 0 100 100 100 100 0 0 0 100 0 0
0110001111000100 0 100 100 0 0 0 100 100 100 100 0 0 0 100 0 0
0110001111000100 0 100 100 0 0 0 100 100 100 100 0 0 0 100 0 0
0110001111000100 0 100 100 0 0 0 100 100 100 100 0 0 0 100 0 0
...
With 50% random noise added:
$ ./encode | ./decode | more
Hard limited % of max
1110011010100010 91 90 95 6 11 95 92 6 100 11 90 11 11 16 88 11
1110011010100010 100 100 94 5 9 96 75 14 89 4 98 6 19 17 99 16
1110011010100010 88 100 82 8 6 95 69 2 90 5 87 9 2 18 91 11
1110011010100010 90 83 89 13 15 100 78 3 87 18 80 4 7 11 66 4
1110011010100010 100 77 81 7 3 92 85 5 60 11 85 12 6 11 73 10
1111000111101110 98 86 87 89 9 18 26 77 83 99 85 7 97 100 91 4
1111000111101110 93 87 86 84 18 18 12 86 76 89 77 9 100 74 88 14
1111000111101110 95 95 88 90 3 13 20 87 93 97 94 22 100 76 89 10
1111000111101110 80 78 83 75 4 10 10 75 56 100 84 9 86 87 87 7
1111000111101110 96 83 73 98 4 12 11 88 100 90 96 11 90 84 79 12
0000001101101100 4 15 7 3 9 1 100 92 9 90 96 5 96 93 11 7
0000001101101100 5 9 9 4 6 7 98 100 11 99 88 10 93 99 10 6
0000001101101100 5 6 5 7 6 5 86 100 5 75 80 3 83 97 13 9
0000001101101100 9 9 2 15 5 9 99 95 6 88 100 11 95 95 2 2
0000001101101100 12 2 7 9 14 3 89 100 8 95 97 9 98 93 8 8
1010010001001110 83 3 87 9 9 75 11 7 4 85 4 6 91 78 100 11
1010010001001110 89 6 94 4 10 100 14 10 4 87 10 6 85 89 97 5
1010010001001110 93 4 100 19 9 89 8 14 5 89 5 8 100 91 94 4
1010010001001110 87 6 100 15 13 98 2 8 13 93 4 10 94 93 99 7
1010010001001110 85 12 87 5 4 100 14 5 3 91 3 13 92 93 88 3
1000011100111100 90 10 6 11 23 84 100 96 4 8 89 87 98 95 11 5
1000011100111100 88 8 10 8 16 91 89 89 6 8 95 93 94 100 12 7
1000011100111100 82 12 7 4 5 100 79 86 13 7 91 91 97 78 16 4
1000011100111100 86 6 13 7 6 81 87 81 10 4 90 83 100 96 9 13
1000011100111100 75 5 14 10 14 81 81 100 9 21 72 86 90 89 17 4
0110011100110111 9 78 100 16 22 81 81 89 18 7 84 94 10 78 85 91
0110011100110111 3 100 92 16 9 90 99 94 12 9 92 84 24 83 61 87
0110011100110111 12 89 99 15 13 100 84 86 19 13 91 87 11 100 99 95
0110011100110111 15 87 97 15 18 88 79 91 2 15 67 100 24 97 82 86
0110011100110111 5 88 76 7 6 100 85 78 9 9 82 90 15 78 89 77
1111000110011101 97 80 100 86 5 16 20 82 78 18 13 92 99 82 5 90
1111000110011101 87 100 71 98 15 5 11 98 87 6 7 89 88 93 4 100
1111000110011101 74 65 90 84 9 13 16 86 81 10 9 100 90 77 19 67
...
Try 10 "updates" per second - 10 Baud per "carrier".
ok, so 10x that. How would I transmit 160 bits/sec? Simplest things I can think of are OOK and FSK. As mentioned before, all that is needed here is a parallel to serial converter. What I have lying around are Arduino Pro Minis, just the ticket. 17 completely undedicated digital I/Os (PORTB 0-4, PORTC 0-5, PORTD 2-7). If you need more you can use the RESET line, XTAL (PORTB 6-7 and use the internal 8 MHz RC), PORTB 5 (LED) and TX/RX (PORTD 0-1), but I'm going to use the Tx/Rx. I won't even have to touch the Arduinos. All I need is one resistor.
I had a closer look at what the OP is on about. I gather all the information needed is going into or coming out of the RVS-8. There are 16 signals to deal with: 8 incoming signals and 8 output signals. The voted repeater output is one bit on one of 8 outputs and, as the OP pointed out, this could be coded as 4 bits: 1 to say if any line is selected and 3 for which of the 8.
Coded u_tx.c just to generate numbers and throw them out the UART at 600 baud. Yes, for 16 bit packets I'd have some frame byte and two data bytes, but I'm just showing the concept. Hooked the ATmega Tx UART output through a resistor to the FM modulation port of my signal generator set for 500 MHz, 5 kHz peak deviation.
For the receiver, I hauled out my modulation analyzer. Set for FM demod, 75usec de-emphasis, 3kHz lowpass. Fed the modulation output directly into the Rx UART pin of another Arduino Pro Mini running u_rx.c. All the code does is take the UART Rx input, and send it out the UART Tx so I can see it on my PC.
Picture shows the setup. Both ATmegas are on a single small solderless breadboard sitting on my laptop keyboard. The square wave FSK input to the sig gen is shown on the scope, as well as the Modulation Analyzer demod output. The PC is running a terminal program that shows sequential numbers being received.
For the simple code, I use Arduino hardware and the gcc and avrdude that gets installed with the Arduino install, but I ditch the Arduino UI and use the avr libraries. I call gcc and avrdude from the command line.
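I obviously don't have the real u_tx.c in front of me, but for anyone wanting to play along, the framing concept (one frame byte plus two data bytes carrying the 16 status bits) is about this much code. This is a host-side illustration that just prints the bytes; the frame byte value is an arbitrary pick and the actual UART write and 600 baud timing are left to the hardware:
Code:
/* Illustrative framing only (not the original u_tx.c): one frame byte
   followed by two data bytes carrying the 16 status bits. */
#include <stdint.h>
#include <stdio.h>

#define FRAME_BYTE 0x7E                 /* arbitrary sync/frame marker */

static void send_byte(uint8_t b)        /* stand-in for a UART write   */
{
    printf("0x%02X ", b);
}

static void send_status(uint16_t bits)
{
    send_byte(FRAME_BYTE);
    send_byte((uint8_t)(bits >> 8));    /* first 8 "bits": receiver activity */
    send_byte((uint8_t)(bits & 0xFF));  /* second 8 "bits": voter selection  */
    printf("\n");
}

int main(void)
{
    send_status(0xA5C3);                /* example 16-bit status word */
    return 0;
}
At 600 baud, 8N1, the three bytes are 30 bit times, i.e. 50 ms per update; at 1200 baud that drops to 25 ms.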
I was thinking along the Pro Mini line myself. I don't write C, just assembler. As the output was to be audio tones rather than FM I was going to generate PWM output applied to a low pass filter to generate one of two tones. See similar idea here https://chapmanworld.com/2015/04/07/arduino-uno-and-fast-pwm-for-afsk1200/ (https://chapmanworld.com/2015/04/07/arduino-uno-and-fast-pwm-for-afsk1200/). I hadn't looked too hard at the receiver end but I think the Pro is fast enough to decode the audio applied to an analog input pin.
We don't know the link over which the data is moved. Maybe the FM scheme will work if the transmitter/receiver handle it. If the restriction is audio in/audio out I'm pretty sure the Pro Mini could do it, but it may take more than 1 resistor.
I just realized my mistake with the OP: "...Please note: I’m NOT looking for other ways to do this – 16 tones at a time is where we’re going...." He clearly stated that he isn't open to suggestions against his 16-tone solution.
Apparently the only acceptable response was supposed to be "Wow! What an amazingly great scheme!"
I don't recall how he said he was going to demodulate the tones. Would each propeller CPU be used as a tone detector?
Decoding is simple: Google for "Arduino audio analyzer" - plenty of examples.
Propeller was for tone generation. Two of them.
It can be done with an ATMega328. Here is some code that uses DDS for 16 tones. IFFT could also be used.
Code:
/* 16-tone DDS generator for an ATmega328: Timer1 fast PWM on OC1A (PB1,
   Arduino pin 9) serves as both the DAC and the sample clock, assuming the
   usual 16 MHz crystal. Low-pass filter the PWM output to get the audio. */
#include <avr/io.h>
#include <stdint.h>
/* 256-entry signed sine table: one full cycle, amplitude 127 */
static int8_t const st[1 << 8] = {
0, 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45,
48, 51, 54, 57, 59, 62, 65, 67, 70, 73, 75, 78, 80, 82, 85, 87,
89, 91, 94, 96, 98, 100, 102, 103, 105, 107, 108, 110, 112, 113, 114, 116,
117, 118, 119, 120, 121, 122, 123, 123, 124, 125, 125, 126, 126, 126, 126, 126,
127, 126, 126, 126, 126, 126, 125, 125, 124, 123, 123, 122, 121, 120, 119, 118,
117, 116, 114, 113, 112, 110, 108, 107, 105, 103, 102, 100, 98, 96, 94, 91,
89, 87, 85, 82, 80, 78, 75, 73, 70, 67, 65, 62, 59, 57, 54, 51,
48, 45, 42, 39, 36, 33, 30, 27, 24, 21, 18, 15, 12, 9, 6, 3,
0, -3, -6, -9, -12, -15, -18, -21, -24, -27, -30, -33, -36, -39, -42, -45,
-48, -51, -54, -57, -59, -62, -65, -67, -70, -73, -75, -78, -80, -82, -85, -87,
-89, -91, -94, -96, -98, -100, -102, -103, -105, -107, -108, -110, -112, -113, -114, -116,
-117, -118, -119, -120, -121, -122, -123, -123, -124, -125, -125, -126, -126, -126, -126, -126,
-127, -126, -126, -126, -126, -126, -125, -125, -124, -123, -123, -122, -121, -120, -119, -118,
-117, -116, -114, -113, -112, -110, -108, -107, -105, -103, -102, -100, -98, -96, -94, -91,
-89, -87, -85, -82, -80, -78, -75, -73, -70, -67, -65, -62, -59, -57, -54, -51,
-48, -45, -42, -39, -36, -33, -30, -27, -24, -21, -18, -15, -12, -9, -6, -3
};
/* Per-tone phase increment in table steps per sample: tone t comes out at
   pi[t] * Fs / 256, roughly pi[t] * 150 Hz with a 16 MHz clock. */
static uint8_t const pi[16] = { 2, 3, 5, 7, 8, 11, 12, 13, 17, 19, 20, 23, 25, 27, 28, 29 };
static volatile uint8_t samples[1 << 8]; /* next 256-sample buffer from generate() */
static volatile uint8_t update;          /* set when samples[] holds a new buffer  */
static volatile uint8_t timer;           /* frame counter, decremented once per 256 samples */
/* Fill samples[] with the sum of the tones whose bits are set in 'tones'
   (OOK of 16 sub-carriers). Waits until the ISR has finished with the
   previous buffer, then hands the new one over via the 'update' flag. */
static void generate(uint16_t const tones)
{
uint8_t n = 0;
uint8_t pa[16] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
while(update);
do {
int16_t i = 0;
for(uint8_t t = 0; t < 16; ++t) {
if(tones & (1 << t)) i += st[pa[t]];
pa[t] += pi[t];
}
samples[n] = (2048 + i) >> 4; /* worst case |i| = 16 * 127 = 2032; offset and rescale to 8 bits */
} while(++n);
update = 1;
}
/* Timer1 input-capture interrupt: with ICR1 as TOP in fast PWM mode, the
   capture flag is set once per PWM period (~38.46 kHz), so this is the
   sample clock. It outputs one sample as the OC1A duty cycle and copies
   samples[] into the working buffer out[] one frame at a time. */
void __attribute__ ((signal, used, externally_visible)) TIMER1_CAPT_vect(void)
{
static uint8_t n;
static uint8_t new;
static uint8_t out[1 << 8];
if(new) out[n] = samples[n];
OCR1A = 100 + out[n];
if(!n++) {
--timer;
if(new) {
new = 0;
update = 0;
} else {
new = update;
}
}
}
int main(void)
{
PORTB = PORTC = PORTD = 0;
DDRB = DDRC = DDRD = 0xFF;
TCCR1A = 0xA2; /* OC1A/OC1B non-inverting PWM, WGM11 */
TCCR1B = 0x19; /* WGM13|WGM12: fast PWM with TOP = ICR1, prescaler 1 */
TCCR1C = 0x00;
ICR1 = 13 * 32 - 1; /* TOP = 415 -> 16 MHz / 416 = ~38.46 kHz sample rate */
OCR1A = OCR1B = 100 + 128; /* start at mid-scale */
TIFR1 = 0;
TIMSK1 = 0x20; /* enable the input-capture (TOP) interrupt */
__asm__ __volatile__ ("sei" ::: "memory");
generate(~0); /* start with all 16 tones on... */
timer = 255; while(timer); /* ...for 255 frames, about 1.7 s */
uint16_t lfsr = 1;
for(;;) {
timer = 7; while(timer); /* hold each pattern for 7 frames, about 47 ms */
lfsr ^= lfsr >> 7; lfsr ^= lfsr << 9; lfsr ^= lfsr >> 13; /* 16-bit xorshift PRNG */
generate(lfsr); /* random set of tones as a test pattern */
}
return 0;
}
It can be done with an ATMega328. Here is some code that uses DDS for 16 tones.
Thanks for the practical proof of concept, showing that the core functionality of a software DDS is indeed not more than a handful of lines of code. I'm not sure whether this was understood by the OP when a DDS was proposed pretty early in this thread (without a practical code example showing the final implementation on a µC).
A little outside of the voice 300 -> 3k bandwidth requirement, but that's just selecting the right range of primes and sample rate...
Given that a 300...3000 Hz channel is available (as specified by the OP), larger primes need to be selected so that fmax/fmin does not exceed 10:1. For instance the 16 primes in the 7...67 range, in conjunction with a base frequency of 44.6 Hz, would fit (orthogonality of the tones would then be granted for window sizes which are integral multiples of 22.422 ms). For DDS, the sampling rate does not need to be an integral multiple of the generated frequency. By using a larger phase accumulator (e.g. 32 bits) and doing phase truncation, one can basically generate arbitrary frequencies at much finer granularity (at the cost of a bit of phase noise, but the amount can be controlled via the waveform table size). Of course it is certainly helpful if a sampling rate can be selected which does not require phase truncation, but if it can't, then it is not a disaster either.
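A minimal sketch of that phase-accumulator idea in plain C (the sample rate, table size and test frequency are just example assumptions):
Code:
/* 32-bit phase-accumulator DDS with phase truncation to an 8-bit table index.
   Output frequency = tuning_word * FS / 2^32, so the step size is about
   2 microhertz at 8 kSa/s - fine enough for any of the tone plans above. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define FS       8000.0                 /* sample rate, Hz - assumption */
#define TABLE_N  256

static int8_t sine[TABLE_N];

int main(void)
{
    const double pi = 3.14159265358979323846;
    for (int i = 0; i < TABLE_N; i++)   /* build the sine table once */
        sine[i] = (int8_t)lround(127.0 * sin(2.0 * pi * i / TABLE_N));

    double f_out = 550.0;               /* desired tone, e.g. 550 Hz */
    uint32_t tuning = (uint32_t)(f_out / FS * 4294967296.0);   /* 2^32 */
    uint32_t phase = 0;

    for (int n = 0; n < 100; n++) {     /* emit a few samples */
        phase += tuning;                /* accumulator wraps modulo 2^32 */
        printf("%d\n", sine[phase >> 24]);  /* truncate to top 8 bits */
    }
    return 0;
}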
In a lost post (https://www.eevblog.com/forum/news/news-forum-reverted-to-backup/msg3561514/#msg3561514) of mine I was saying that multichannel transmission does not increase information transmission speed, nor improve latency.
Example: OOK/BPSK spectral efficiency is 1 bit/Hz, so a 1600 bps transmission requires a 1600 Hz frequency band for a single channel, and transmission of 16 bits takes 1/100 sec. When we split the 1600 Hz channel into 16 smaller, non-interfering 100 Hz subchannels and transmit using OOK/BPSK, then those who can do the math will see that the latency of a 16-bit transmission will still be 1/100 sec. In short: the 16-tone requirement only adds unnecessary complexity; it improves nothing in terms of speed or latency.
I was wondering about that. To achieve higher bit rates, each subchannel needs some form of multibit encoding. Old modems negotiated by sending tones, working out the noise and distortion, and encoding as many bits as each subchannel could reliably handle (and maybe they still do). However, this does nothing to reduce latency; it just sends more bits in the same time slice. The OP seems interested in latency more than bit rate.
The requirements are not well enough thought out and expressed if it was monitoring the state of 16 subsystems on a deep space probe.
I understand what you're saying. I don't think two $3 boards and a few passives is over-engineering. The beauty of microprocessors is how much can be done with very little. Reading 16 inputs, encoding, decoding and creating 16 outputs is a simple task. I recently decided I needed a second 9600 baud input to a processor with one UART, implemented one in software - no problem - and for good measure it would handle a 3.3V input without a level translator. On an 8-bit processor worth maybe $1. Is that over-engineering or just making best use of what is there?
However, the OP's requirements are well enough thought out and expressed for a hobby project, to be made out of stuff on hand, and to perform 'well enough' for their needs, which appear to be monitoring 16 things at a semi-remote radio site.
This sounds like an environment where a Raspberry Pi + cell phone + web cam pointed at battery meters could be a workable solution to monitoring UPS charge. Let's not over-engineer it. >:D
[...] And if it works, is it in any way superior to a time slot system? I think it is like clean coal: feasible but uneconomic.
Delay is dominated by pulse shaping on the sender side (necessary to prevent cross-talk into neighbor channels) and the FFT window I used at the receiver side. That makes a delay of about 3-4 symbols in total (about 35-45 ms).
I think from your numbers that you are modulating the individual tones at twice the channel spacing (in Hz)? If you cut the modulation rate in half (baud rate = tone spacing) then the modulation spectral nulls will have the same spacing as the channel spacing, reducing neighbor crosstalk and your filter requirements. Of course that halves your data rate...
Nice, very nice.
A little outside of the voice 300 -> 3k bandwidth requirement, but that's just selecting the right range of primes and sample rate...
The OP mentioned using "Arduino audio analyzer" as a basis for decoding. That may be a reference to this: https://create.arduino.cc/projecthub/shajeeb/32-band-audio-spectrum-visualizer-analyzer-902f51