Hello,
I’ve read these two rules of thumb in different places on the internet, but the context was not always clear:
a) the input impedance of a device should be greater (around 10x to 100x is common) than the output impedance of the circuit supplying the signal, to avoid overloading the source and weakening the signal too much.
b) when dealing with transmission lines, output and input impedances should be matched for best signal transmission
When is one or the other most relevant? Is it only a matter of frequency?
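As I understand it, rule a) just describes the voltage divider formed by the source’s output impedance and the load’s input impedance; here is a small Python sketch of that reasoning (the 1 k figure is simply the value of my pull-up, described below):

```python
# Rule a) as a voltage divider: a source with output impedance Zout driving
# a load with input impedance Zin delivers Vload = Vsource * Zin / (Zin + Zout).

def divider_loss(z_out: float, z_in: float) -> float:
    """Fraction of the source voltage that actually reaches the load."""
    return z_in / (z_in + z_out)

z_out = 1_000.0  # e.g. my 1 k pull-up acting as the source impedance
for z_in in (1_000.0, 10_000.0, 100_000.0):
    pct = 100 * divider_loss(z_out, z_in)
    print(f"Zin = {z_in / 1000:g}k -> {pct:.1f}% of the signal")
```

With Zin = 10× Zout you keep about 91% of the signal, and with 100× about 99%, which seems to be where the "10x to 100x" figure comes from.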
Thank you

A bit more context:
I have a signal coming from an optocoupler (a 6N137). I use a 1k pull-up resistor at the output. So the output impedance is 1k if I’m not mistaken.
The signal was originally a 9600-baud stream, on-off keyed onto a 50 kHz sine carrier, which I full-wave rectified just before the optocoupler. I want to filter that with an RC filter, then pass it through a Schmitt-trigger inverter to recover the data.
How would you choose values for R and C? I read some conflicting advice on the internet:
c) “the RC constant of the filter should be much greater than the period of the carrier, but smaller than the message signal period”
d) “the RC constant should be smaller than the period of the carrier”
I played around with LTspice (I’m not very good at it yet) and apparently choosing RC = 10 µs (the period of the 100 kHz ripple left after full-wave rectifying the 50 kHz carrier) seems a good compromise, for example 10k/1n, 1k/10n, etc.
According to a), I should then use, say, R = 10k and C = 1n.
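To compare the two candidate pairs numerically, I sketched this in Python (treating the 1 k pull-up as a plain resistance simply in series with the filter’s R is my own simplification; the real opto output is nonlinear):

```python
import math

# Compare the candidate RC values, with and without the 1 k source impedance.
# Assumption: the pull-up just adds to the filter's series R while the cap charges.

R_SOURCE = 1_000.0  # my 1 k pull-up on the 6N137 output
candidates = [(10_000.0, 1e-9), (1_000.0, 10e-9)]  # (R, C): 10k/1n and 1k/10n

for r, c in candidates:
    tau = r * c                       # the filter's own time constant
    tau_loaded = (r + R_SOURCE) * c   # including the source impedance
    f3db = 1 / (2 * math.pi * tau_loaded)
    print(f"R={r / 1000:g}k C={c * 1e9:g}n: tau={tau * 1e6:g} us, "
          f"loaded tau={tau_loaded * 1e6:g} us, f-3dB ~ {f3db / 1000:.1f} kHz")
```

Both pairs have the same nominal RC = 10 µs, but once the 1 k source is included they differ: the 1k/10n filter’s corner drops to roughly 8 kHz, while 10k/1n stays near 14.5 kHz. If that simplification is fair, it might be part of what I observed.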
I breadboarded all that: the bridge rectifier (4× 1N4148), the optocoupler, the filter, and two stages of a 74HC14 for final cleanup. The signal then goes to a USB-to-serial adapter so I can see the decoded data in Realterm.
I was looking at the signal at the output of the filter with an oscilloscope and at the same time I kept an eye on the error status lights in Realterm.
With 10k/1n I found it really hard to adjust R to get reliable transmission: Realterm kept reporting “framing error” or “checksum error”, and if I moved an eyelash too fast, the connection would break.
On the other hand, with 1k/10n the transmission was perfect. It was also much more tolerant of resistor variation; with the same 10n cap I got good results with any resistor from very low values up to 1k5.
On the scope screenshots, 0 V is the line at the bottom of the screen; the vertical scale is 1 V/div.
With 8k2 (or 10k) and 1n, the ripple on the signal reaches about 2.5 V, which is about the same as the positive-going threshold voltage of the 74HC14, giving false triggers; this matches what I saw in Realterm. With 1k/10n, on the other hand, the signal is cleaner, the ripple stays below 2 V, and everything is fine.
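To convince myself about the ripple difference, I also tried a crude numerical model in Python. The assumptions are all mine, so take the numbers with a grain of salt: the opto output is an ideal 100 kHz square wave at 50% duty, the transistor saturates through roughly 50 Ω, and I use plain Euler integration:

```python
def envelope(r_filter, c, r_pullup=1_000.0, r_sat=50.0, v_cc=5.0,
             steps_per_period=1_000, periods=100):
    """Peak and peak-to-peak ripple of the filter cap voltage, modeling the
    opto output as a 100 kHz square wave at 50% duty (both assumptions)."""
    dt = 10e-6 / steps_per_period  # 10 us ripple period
    v_c = 0.0
    samples = []
    for i in range(periods * steps_per_period):
        if (i % steps_per_period) < steps_per_period // 2:
            # transistor off: cap charges toward Vcc via pull-up + filter R
            tau, target = (r_pullup + r_filter) * c, v_cc
        else:
            # transistor on: cap discharges toward 0 via r_sat + filter R
            tau, target = (r_sat + r_filter) * c, 0.0
        v_c += (target - v_c) * dt / tau
        if i >= (periods - 2) * steps_per_period:  # keep only steady state
            samples.append(v_c)
    return max(samples), max(samples) - min(samples)

for r, c in [(10_000.0, 1e-9), (1_000.0, 10e-9)]:
    peak, ripple = envelope(r, c)
    print(f"R={r / 1000:g}k C={c * 1e9:g}n: peak ~ {peak:.2f} V, ripple ~ {ripple:.2f} V")
```

If the model is anywhere near right, 1k/10n sits lower and has less ripple than 10k/1n, mainly because its charge path (2 k total) is much slower relative to its discharge path than in the 10k/1n case, which would be consistent with what I saw on the scope.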
Some may say “well, if you found values that work, carry on, why bother?”, and they’d be quite right; but I would like to understand better what I observed.
Any comments or explanations (or links to useful pages that I surely missed!) would be greatly appreciated. And also:
- when is it OK, and when is it not OK, to apply rules of thumb a), b), c) and d)?
- with 1k and 10n, the input impedance of the filter is about 2.6 kΩ. Should I increase the value of the pull-up resistor to get a perfect match? (I haven’t tried yet because I was expecting rule a) to apply.)
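For reference, this is how I would compute that input impedance in Python, assuming the filter looks like R in series with C to ground; note that the value depends on which frequency you evaluate it at:

```python
import math

# Input impedance magnitude of a series R + C (to ground) filter:
# |Zin| = sqrt(R^2 + Xc^2) with Xc = 1 / (2*pi*f*C).
# Frequencies below: the 100 kHz rectified-carrier ripple and the baud rate.

def z_in(r: float, c: float, f: float) -> float:
    xc = 1 / (2 * math.pi * f * c)
    return math.hypot(r, xc)

for f in (100e3, 9_600.0):
    print(f"f = {f / 1000:g} kHz: |Zin| ~ {z_in(1_000.0, 10e-9, f):.0f} ohms")
```

(The reactance and the resistance add as a vector sum, not arithmetically, so the magnitude comes out somewhat lower than simply adding R and Xc; I’m not sure which figure is the right one to use for matching purposes, which is part of my question.)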
Thank you
