It's for power transfer, right. Except when it's not.

Noise matching and signal quality are also good motivations.
Typically, the ratio between the noise voltage and noise current of a port happens to be close to its small-signal impedance, but it doesn't have to be, and it can differ significantly.
Unfortunately, I don't have examples or reasons handy for why this can be the case; I'm not well read on low noise design.
In general, the small-signal impedance can be very different from the large-signal or intended impedance, thanks to device properties and negative feedback, so that is a possible explanation.
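To put a number on it: the noise-matched source impedance is simply the ratio e_n/i_n, which needn't equal the port impedance at all. A quick Python sketch, with made-up noise figures (not from any particular part):

    import math

    # Hypothetical amplifier noise densities (illustrative values only):
    e_n = 1e-9    # 1 nV/rtHz voltage noise
    i_n = 1e-12   # 1 pA/rtHz current noise

    # Optimal (noise-matched) source resistance is their ratio:
    r_opt = e_n / i_n
    print(f"R_opt = {r_opt:.0f} ohms")  # 1000 ohms here

    # Total input-referred noise density for a source resistance Rs,
    # including the source's own thermal noise (T = 290 K):
    k, T = 1.380649e-23, 290.0
    def total_noise(r_s):
        return math.sqrt(4*k*T*r_s + e_n**2 + (i_n*r_s)**2)

    for r_s in (50, 1_000, 50_000):
        print(f"Rs = {r_s:>6} ohms: {total_noise(r_s)*1e9:.2f} nV/rtHz")

Note R_opt comes out at 1k here regardless of what the amplifier's input impedance happens to be.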
Where signal quality is paramount, the power transfer need not matter at all. All those digital logic and comm standards using source termination resistors (USB is a common example) are simply loaded by a slightly capacitive CMOS input pin at the end. This load strongly reflects a wave back to the source, where it is absorbed. If the pulse is longer than the line's round-trip delay, the reflected wave also opposes further current flow, conserving power and allowing smaller transistors to drive the line. If not, the waves superimpose elsewhere on the line, but at the terminal end, signal quality is still maintained -- hence this is usable for strict point-to-point buses.
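To put numbers on the reflection (standard transmission-line formulas; the values are just for illustration):

    # Reflection from a high-impedance CMOS input at the end of a
    # source-terminated line: gamma = (ZL - Z0) / (ZL + Z0).
    z0 = 50.0        # line impedance, ohms
    z_load = 1e6     # CMOS input looks ~open at DC (ignoring its capacitance)

    gamma_load = (z_load - z0) / (z_load + z0)
    print(f"Load reflection: {gamma_load:.4f}")   # ~ +1, near-total reflection

    # The driver launches half the swing (divider between Rsource and Z0);
    # the reflection adds the other half at the far end, then is absorbed
    # by the matched source resistor (gamma = 0 looking back into the source).
    v_drive = 3.3
    v_incident = v_drive * z0 / (z0 + z0)        # Rsource = Z0
    v_at_load = v_incident * (1 + gamma_load)
    print(f"Incident: {v_incident:.2f} V, at load: {v_at_load:.2f} V")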
For multi-drop buses, either the pulse needs to be longer than the line's delay, or the line must be doubly terminated.
This, for example, limited the clock speed of PCI buses to 33 or 66MHz. The relevant distance is the length from the motherboard system chip, up and down each expansion slot, to the end. Source termination was used, so signal quality at any point along the bus was poor, except at the very ends -- but no useful assumptions can be made about that, as a card can be plugged in anywhere along the bus.
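For a rough feel of the timing involved -- the propagation speed and the "few bounces to settle" factor here are ballpark assumptions, not PCI spec values:

    # Crude estimate of when a multidrop bus outruns its settling time.
    # Assumes ~15 cm/ns propagation on FR-4 (about half of c).
    v_prop_cm_per_ns = 15.0
    line_length_cm = 30.0          # chip, up and down the slots, to the end

    round_trip_ns = 2 * line_length_cm / v_prop_cm_per_ns
    settle_ns = 3 * round_trip_ns  # crude: allow a few bounces to settle

    for f_mhz in (33, 66, 133):
        period_ns = 1e3 / f_mhz
        ok = period_ns > settle_ns
        print(f"{f_mhz} MHz: period {period_ns:.1f} ns vs settle "
              f"{settle_ns:.1f} ns -> {'OK' if ok else 'marginal/too fast'}")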
Industrial RS-485 buses (multidrop) use low impedance drivers and a doubly-terminated line. The source impedance doesn't need to match, because signal level is a higher priority than power transfer, and a lower impedance source is able to pull the line closer to VCC/GND. They do need pretty beefy transistors to implement this (while also being rated for short circuits), hence there's usually an external driver chip for the purpose.
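The arithmetic on the driver loading, using the usual 120 ohm cable figure (receiver loads ignored for simplicity):

    # Why RS-485 drivers need beefy output stages: the two 120 ohm end
    # terminations appear in parallel, so the driver sees ~60 ohms.
    r_term = 120.0
    r_load = r_term / 2            # both ends terminated -> 60 ohms
    v_diff = 1.5                   # minimum differential swing per the standard

    i_drive = v_diff / r_load
    print(f"Driver must source at least {i_drive*1e3:.0f} mA")  # 25 mA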
Audio is impedance matched when signal quality matters (classically, the 600 ohm line) -- the power doesn't matter much at all; it's just some standard line level (or less, since after all, audio isn't constant power -- much as commercial broadcasters attempt otherwise...). Matching the line improves signal quality (with respect to interference and losses), and a well-defined impedance facilitates transformer design (a transformer being itself another kind of transmission line component).
Much audio is not matched; rather, a low impedance driver is used (an op-amp output; some termination resistance may be added, more to prevent the amp from oscillating into an unterminated line than for signal or power reasons), feeding a high impedance load (10k? 100k?). Simplicity and cost are stronger motivators here, and anyway, interference most often comes from low impedance sources (ground loops), where you don't really have any options besides differential signaling or isolation, both of which cost extra to implement.
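The divider math shows why nobody bothers matching here -- a sketch with plausible, made-up impedances:

    import math

    # Low-Z source driving a high-Z ("bridging") load: the divider loss
    # is negligible, which is the whole point.
    z_source = 100.0      # op-amp output plus build-out resistor, say
    z_load = 10_000.0     # typical line input

    loss = z_load / (z_source + z_load)
    print(f"Level at load: {loss:.4f} of source ({20*math.log10(loss):.2f} dB)")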
Another good example of active circuits with odd impedances: the standard audio (loudspeaker) amplifier has a very low output impedance, fractional ohm, perhaps milliohms; the speaker is notionally 8 ohms or whatever, but its impedance varies quite a lot as well. Speakers can indeed be designed for an impedance-matched system, but they don't need to be, and the modern convention seems to be to optimize around a constant-voltage (CV) source rather than a matched one.
Needless to say, you won't get much power from your amplifier if you use a 0.01 ohm speaker; or actually, you will, but you need to understand where, and why. The maximum power transfer theorem assumes a linear system. The amplifier is linear for small signals at least; if we deliver an output of, say, 10mV, and the amplifier's current limit is well in excess of 1 ampere, then we will obviously draw much more power from those 10mV with a 10mΩ load than with 4 or 8Ω! The amplifier is not a linear device overall, and will either limit extreme currents or voltages, or destroy itself trying to, when the load is very different from nominal.
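Putting that example into numbers (idealized; the current limit figure is made up):

    # Power drawn from a 10 mV output into various loads, with a crude
    # current limit standing in for the amp's protection circuit.
    v_out = 0.010        # 10 mV at the output
    i_limit = 5.0        # say the amp current-limits at 5 A

    for r_load in (0.010, 4.0, 8.0):
        i = min(v_out / r_load, i_limit)
        p = v_out * i
        print(f"{r_load:>6} ohm load: {i:.3f} A, {p*1e3:.3f} mW")

The 10mΩ load draws 10mW from those 10mV; the 8Ω load, a hundredth of that.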
Regarding the oscilloscope and function generator: the generator is source-terminated, so in principle, the signal quality at the end will be fine at any load impedance, including 1 or 10MΩ. (In practice, it's about half as good as doubly-terminated; but again, it's a power tradeoff in practical systems.)
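This is also the origin of the famous factor of two between a generator's displayed amplitude and what a high impedance scope input sees -- quick check:

    # A 50 ohm source-terminated generator into various loads.
    v_open = 2.0         # internal EMF, set for 1 V into a matched load
    r_source = 50.0

    for r_load in (50.0, 1e6):
        v_load = v_open * r_load / (r_source + r_load)
        print(f"{r_load:>9.0f} ohm load: {v_load:.3f} V")

Hence many generators let you tell them the load impedance, so the displayed amplitude matches reality.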
Your piece of test equipment can be designed either way. You might even make it switchable. It should have a 50 ohm source impedance regardless, to limit short-circuit current and give good signal quality on any length cable -- but whether the gain is set for a matched load, or open, is your choice.
You certainly cannot make it a matched impedance to a probe, because probes simply don't work that way -- they are designed to read a voltage, and their impedance varies from 10M at low frequencies to ~100s of ohms at high frequencies. In between, the impedance is capacitive. This is usually written as 10M || 6pF or something like that, but that's only a mid-frequency approximation, and other impedances show up at high frequency. Hence the impedance doesn't continue to tank; it actually levels out, or perhaps bounces around.
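Here's |Z| of that mid-band model versus frequency, just to show the trend (the model itself stops being valid somewhere in the tens of MHz):

    import math

    # |Z| of the usual parallel-RC probe model: R / sqrt(1 + (wRC)^2).
    r, c = 10e6, 6e-12  # scope-probe style: 10 Mohm || 6 pF

    for f in (1e3, 1e5, 1e6, 1e7, 1e8):
        w = 2 * math.pi * f
        z_mag = r / math.sqrt(1 + (w * r * c)**2)
        print(f"{f:>9.0e} Hz: |Z| ~ {z_mag:,.0f} ohms")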
You can't match that, because presumably the intent is to have it matched over the whole band of interest -- but that would require a negative capacitor, which is impossible*.
*You can synthesize one with an op-amp, but in this case I think you'll just get a very roundabout oscillator. If nothing else, the gain won't be flat, because again, the probe is designed for sensing voltage.
Scopes also aren't terribly good examples of signal quality -- some are, or can be, but for the most part you have an 8-bit ADC, about as much display resolution, and even that is a lot to expect from a standard clip-on style probe. They're only designed to measure down to the mVs, and typically have a noise floor of fractional mV (say, 10s of LSBs on the most sensitive range?), so they aren't terrifically useful down there to begin with.
Incidentally, your current probe itself is an example of -- explicitly and intentionally -- mismatched impedance. The RF equivalent is a directional bridge, which taps off a fraction of the transmitted or reflected wave into respective ports. A typical implementation uses a pair of transformers, effectively one of which senses a fraction of line voltage and the other senses line current. (Actually they serve reciprocal purposes, because they're arranged symmetrically -- the whole network is symmetrical of course.) If you leave off just one, well, you're adding an impedance in series with the line, or in parallel with it, and that's necessarily a mismatch.
So, it's simple to see that a PT or CT (potential or current transformer) must be a discontinuity. We take advantage of the fact that, at low frequencies and for modest gains, it can have a minimal impact on the line impedance (Zpar >> Zo, or Zser << Zo), so we don't worry about reflections from it. But as you go up in frequency, this becomes more and more important, and you eventually get to a point where you need to implement it as a power tap or directional bridge instead.
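The shunt case is easy to quantify (standard result for a shunt impedance across a matched line; the series case is analogous):

    # Reflection caused by a shunt sensing impedance Zpar across a Z0 line:
    # gamma = -Z0 / (Z0 + 2*Zpar); magnitudes shown here.
    z0 = 50.0

    for z_par in (50e3, 5e3, 500.0, 50.0):
        gamma = z0 / (z0 + 2 * z_par)
        print(f"Zpar = {z_par:>7.0f} ohms: |gamma| = {gamma:.4f}")

Even the 500 ohm case is under 5% reflection; scale Zpar to your own tap and frequency of interest.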

(At 3MHz, you have absolutely nothing to worry about.)
Tim