And it makes the case that this isn't a mathematical pet trick -- it is a way of projecting a one-dimensional, real-valued signal into two dimensions. The combination of the two dimensions is useful because it allows us to talk about the instantaneous phase and amplitude of a signal at one point in time, without having to look at its history or future.
And for that matter, even if we wanted to ask what g(t) = f(t) / sin(w*t) is, we could only ask that question over bounded ranges at a time. As sin(w*t) approaches zero, the value of this expression quickly blows up, and it becomes indeterminate at the zeros themselves (w*t = n*pi).
So, mathematically speaking, we cannot construct a real-valued signal g(t) such that, when multiplied by (i.e., mixed with) sin(w*t), we reconstruct the original signal. It must always be missing those pinholes.
For practical purposes, we can gloss over the pinholes anyway; they have exactly zero time duration, after all. But we still can't deal with the dynamic range of g(t). In effect, we must remove, not just infinitesimal points, but all points where |g(t)| > MAXVAL. (MAXVAL being, for example, an ADC's input range, or an analog amplifier's supply rails.) And, to keep those stretches short enough that we aren't losing too much information about f(t) in the process, we need a large factor more dynamic range (as in, several times 20 dB) than we need to hold f(t) alone.
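A quick numerical sketch of the problem, with made-up values for w, f(t), and MAXVAL: divide a bounded test signal by sin(w*t) and look at what happens near the zeros of the sine.

```python
import numpy as np

# Hypothetical example: g(t) = f(t) / sin(w*t) blows up near w*t = n*pi.
w = 2 * np.pi * 5.0                  # arbitrary 5 Hz carrier
t = np.linspace(0.0, 1.0, 10001)
f = np.cos(2 * np.pi * 1.0 * t)      # a bounded 1 Hz test signal, |f| <= 1

with np.errstate(divide="ignore"):
    g = f / np.sin(w * t)            # ill-conditioned near sin(w*t) == 0

MAXVAL = 10.0                        # e.g. an ADC's full-scale input
overflow = np.abs(g) > MAXVAL        # samples we'd have to throw away

print(np.isfinite(g).all())          # False: indeterminate at the exact zeros
print(overflow.mean())               # a noticeable fraction of time is lost
```

Even with 20 dB (a factor of 10) more range than f(t) itself needs, a few percent of the samples still overflow, and raising MAXVAL only shrinks those stretches slowly.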
So, the math says there's a problem, and in practice we can ignore that problem, but the fact remains: the math is warning us that we're setting ourselves up for a dumb mistake.
As it happens, there is a very simple resolution to this, which yields a perfectly bounded output (i.e., the bounds are as small as possible), costs no dynamic range, does not require patching holes, and is also a unique representation of the signal. Instead of dividing by sin(w*t), we multiply: g(t) = f(t) * sin(w*t), and h(t) = f(t) * cos(w*t). The two functions, g(t) and h(t), are the I and Q channels.
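A minimal sketch of forming the two channels by multiplication (the test signal and carrier frequency here are made up; the names f, g, h, w follow the text):

```python
import numpy as np

w = 2 * np.pi * 5.0                  # arbitrary carrier
t = np.linspace(0.0, 1.0, 10001)
f = np.cos(2 * np.pi * 1.0 * t)      # bounded baseband signal, |f| <= 1

g = f * np.sin(w * t)                # one channel, per the text's labeling
h = f * np.cos(w * t)                # its quadrature companion

# Both products stay within the bounds of f itself: no dynamic-range cost,
# and no points where the representation blows up.
print(np.abs(g).max() <= np.abs(f).max())
print(np.abs(h).max() <= np.abs(f).max())
```

Since |sin| and |cos| never exceed 1, the outputs are bounded by the bounds of f(t) itself -- this is the "bounds are as small as possible" property.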
The math is telling us that, although we can recover most points of f(t) from the first process, and while we technically don't care about the remaining points (the missing zeros form only a countable set), the fact remains that we have just not enough information to reconstruct the original, because the output necessarily goes to zero every time sin(w*t) does. There's simply no degree of freedom at those zeros. So, we must produce a companion function that fills in those holes.
The selection of multiplication is motivated by its simplicity: we can build analog multipliers relatively easily. We also know that multiplying sine functions yields the sum and difference frequencies, so we can build a superheterodyne radio this way. (And for this reason, we can show that, even though g(t) and h(t) are products themselves, the sines go away when multiplied again: we can reconstruct f(t), uniquely, without using something as ill-conditioned as division -- only multiplication is needed!)
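The reconstruction claim follows from sin^2 + cos^2 = 1: multiply each channel by its own carrier again and sum. A sketch, reusing the same made-up test signal as above:

```python
import numpy as np

w = 2 * np.pi * 5.0
t = np.linspace(0.0, 1.0, 10001)
f = np.cos(2 * np.pi * 1.0 * t)

g = f * np.sin(w * t)
h = f * np.cos(w * t)

# Mix each channel with its own carrier once more and add:
# g*sin + h*cos = f*(sin^2 + cos^2) = f, exactly, at every point --
# including the points where sin(w*t) is zero, because cos is 1 there.
f_rec = g * np.sin(w * t) + h * np.cos(w * t)
print(np.allclose(f_rec, f))   # True
```

No division appears anywhere, and no pinholes need patching: wherever one carrier is at a zero, the other is at an extremum, so between them the pair always carries one full degree of freedom.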
If we choose a more complicated operator, like convolution, instead of multiplication, then we obtain different results. This is equivalent to the choice of transform I spoke about above.
Tim