I too think Dave covered the topic pretty well.
Polar (\$z = r \angle \theta\$) and Cartesian (\$z = x + j y\$) representations of the same complex number \$z\$ can also be considered "transforms" of each other, with the logic that
- in polar form, multiplication and division are simple, but addition and subtraction complicated;
- in Cartesian form, addition and subtraction are simple, but multiplication and division can become arduously long and complicated.
You can always transform between the two. When \$\theta\$ is in radians, the polar form is the same as the exponential form, \$z = r e^{j \, \theta}\$.
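To make that asymmetry concrete, here is a minimal C sketch (the `polar_t`/`cart_t` types and function names are my own, purely illustrative): the polar product is one multiply and one add, while the Cartesian product takes four multiplies and two additions.

```c
/* Minimal sketch of the two representations; the type and function names
 * are mine, not from any library. Compile with e.g. gcc -lm. */
#include <math.h>
#include <stdio.h>

typedef struct { double r, theta; } polar_t;  /* magnitude, angle in radians */
typedef struct { double x, y; }     cart_t;   /* real and imaginary parts    */

cart_t  to_cart(polar_t p)  { return (cart_t){ p.r * cos(p.theta), p.r * sin(p.theta) }; }
polar_t to_polar(cart_t c)  { return (polar_t){ hypot(c.x, c.y), atan2(c.y, c.x) }; }

/* Polar multiplication: multiply the magnitudes, add the angles. */
polar_t mul_polar(polar_t a, polar_t b) { return (polar_t){ a.r * b.r, a.theta + b.theta }; }

/* Cartesian multiplication: (a.x + j a.y)(b.x + j b.y)
 *   = (a.x b.x - a.y b.y) + j (a.x b.y + a.y b.x)  -- four multiplies. */
cart_t mul_cart(cart_t a, cart_t b) { return (cart_t){ a.x*b.x - a.y*b.y, a.x*b.y + a.y*b.x }; }

/* Cartesian addition: just add the components. In polar form the same sum
 * would need a round trip through Cartesian form anyway. */
cart_t add_cart(cart_t a, cart_t b) { return (cart_t){ a.x + b.x, a.y + b.y }; }

int main(void) {
    const double pi = acos(-1.0);
    polar_t a = { 2.0, pi / 6.0 }, b = { 3.0, pi / 3.0 };
    cart_t  p = to_cart(mul_polar(a, b));          /* product done in polar     */
    cart_t  q = mul_cart(to_cart(a), to_cart(b));  /* same product in Cartesian */
    printf("%g + j %g == %g + j %g\n", p.x, p.y, q.x, q.y);
    return 0;
}
```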
Tangentially related (pun not intended):
This ties in to the recent comment about the importance of (logical) transforms.
The Fourier transform (and its inverse) might be the best known to EEs, transforming a signal such as an AC waveform between the time domain (as a function of time) and the frequency domain (as a function of frequency). However, using complex numbers to represent a time-varying signal with two components – like an AC voltage or current, which has both an amplitude and a phase – can also be considered such a transform, a logical transform of the problem at hand; and picking the representation, polar or Cartesian, and even switching between the two a couple of times while solving a particularly hard problem, a logical transform of the way we describe the signal.
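As a concrete instance, consider the usual phasor shorthand (standard EE convention): a sinusoid at a fixed frequency is completely described by its amplitude and phase, so we can transform

\$\$ v(t) = V_0 \cos(\omega t + \varphi) \quad\longleftrightarrow\quad \mathbf{V} = V_0 \, e^{j \varphi} \$\$

and in the phasor domain, differentiation with respect to time becomes multiplication by \$j \omega\$: an inductor's \$v = L \, \frac{\mathrm{d}i}{\mathrm{d}t}\$ becomes the purely algebraic \$\mathbf{V} = j \omega L \, \mathbf{I}\$. A differential equation has been transformed into complex arithmetic.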
(I just wish the boffins had called them "complex values" instead of "complex numbers", because it is easier to intuitively grasp that a value can consist of multiple components. That a "complex number" actually refers to something with two separate components – writing one out usually takes two numbers – is just unnecessary complexity, unnecessary cognitive load.)
If we use the word "domain" to describe the set of assumptions we have and the way we describe the problem, we can say something truly important:
choose your domain wisely. If there is no single "domain" that works well enough, then split the problem into smaller parts, and apply transforms as needed to solve each sub-problem in a suitable domain.
In computer programming, especially embedded or microcontroller programming, choosing the "domain" wisely (so that you only need a small number of "fast" arithmetic operations to achieve the task at hand) can mean the difference between needing a sub-$1 MCU/CPU and a $10+ one.
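As a sketch of what I mean (assuming a 32-bit core without an FPU; the `q16_t` name and helpers are mine, purely illustrative): keeping values in a Q16.16 fixed-point "domain" turns every operation into a couple of integer instructions, instead of dragging in a soft-float library.

```c
/* A minimal sketch, assuming a 32-bit MCU without an FPU. Values live in
 * the Q16.16 fixed-point "domain": 16 integer bits, 16 fraction bits. */
#include <stdint.h>

typedef int32_t q16_t;                    /* Q16.16 fixed-point value */
#define Q16_ONE 65536                     /* 1.0 in Q16.16            */
#define Q16(x)  ((q16_t)((x) * Q16_ONE))  /* constant conversion, folded at compile time */

static inline q16_t q16_add(q16_t a, q16_t b) { return a + b; }  /* one ADD */

static inline q16_t q16_mul(q16_t a, q16_t b) {
    /* Widen so the intermediate product cannot overflow, then scale
     * back down: one long multiply and one shift on most 32-bit cores. */
    return (q16_t)(((int64_t)a * b) >> 16);
}

/* Example: apply a calibration gain of 1.2345 to a raw ADC reading
 * without ever touching floating point at run time. */
q16_t scale_adc(int16_t raw) {
    const q16_t gain = Q16(1.2345);
    return q16_mul((q16_t)raw * Q16_ONE, gain);
}
```

The float constant inside `Q16()` is folded by the compiler, so no floating-point code is ever emitted; the whole computation stays in the integer domain the cheap MCU is fast at.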
In systems and applications programming, in the long term, "maintainability" is a crucial detail in choosing the proper "domain".
Jumping between "domains" (via "transforms", like I described above) is like thinking outside the box; or, like looking at the problem from different angles.
If you only have one domain you feel comfortable in, it is like only having a hammer, making every problem look like a nail. If there is a single MCU or MCU family one always uses, is it because it is the familiar one, or because it is genuinely a good fit for the problem at hand, all things considered?
I cannot put any of this into words well enough, much less into a video. But this is an important idea that spans not just every field that applies math, but basically all of engineering and science. I do not see it stated often, directly or indirectly, but I do see experienced people apply it all the time, including Dave. ("If you do this, then Bob's yer uncle, and the solution is plain obvious!")
This is extremely useful in practice: just because a problem looks difficult in one domain does not mean it is actually difficult; often, it is only difficult in that particular domain. If you have a sufficient toolbox of transforms, and aren't afraid of using them, you can often find some other domain you can transform the original problem to and from, one where the problem is simple or at least straightforward to solve.
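A tiny self-contained example of such a domain hop (the function names are mine, purely illustrative): the product of ten thousand probabilities underflows a `double` to zero in the linear domain, but transformed into the log domain the product becomes a sum, and the difficulty evaporates.

```c
/* The product of many small probabilities underflows double precision,
 * but the same computation in the log domain is just a sum. */
#include <math.h>
#include <stdio.h>

/* Naive product in the linear domain: underflows to 0 for large n. */
double product_linear(const double *p, int n) {
    double prod = 1.0;
    for (int i = 0; i < n; i++)
        prod *= p[i];
    return prod;
}

/* Same quantity via the log domain: transform in (log), solve there
 * (sum), and transform out (exp) only if and when actually needed. */
double log_product(const double *p, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += log(p[i]);
    return sum;  /* the logarithm of the product */
}

int main(void) {
    static double p[10000];
    for (int i = 0; i < 10000; i++)
        p[i] = 0.5;
    printf("linear domain: %g\n", product_linear(p, 10000)); /* 0: underflow  */
    printf("log domain:    %g\n", log_product(p, 10000));    /* about -6931.5 */
    return 0;
}
```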