Absolutely, the complex phasor notation is what you use in actual calculations.
But I always think it helps to have some understanding of what is going on before adopting a special notation, especially since there are no complex numbers in real voltages and currents. The basis for the whole j notation was never explained well to me; it was just assumed to be easy to understand (it wasn't).
I agree that the e notation may be somewhat counterintuitive initially, and more time should perhaps be spent in the initial introduction. I do recall a similar hurdle in my own time, when first wrapping my brain around the concept.
So my notes above are just intended as an aid to understanding, not an alternative calculation approach.
Understanding the correspondence between the e notation and trigonometry is a must, of course, so in that sense you do need the trig once you come back up for air after a long calculation. At the same time, it is almost as if by design how well the complex notation and time-varying signals fit together.
Even now, I don't think it is entirely obvious or intuitive that manipulations of complex numbers represented as (a,b) or (r,theta) map onto superpositions or convolutions of sine waves with different magnitudes and phase angles.
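To make that mapping concrete, here is a minimal sketch (the amplitudes, phases, frequency, and time instant are made-up values for illustration): summing two same-frequency sinusoids the trig way gives the same result as adding their complex phasors and taking the real part.

```python
import cmath
import math

# Two sinusoids at the same frequency, with different magnitudes and phases.
# All numeric values here are arbitrary, chosen just for the demonstration.
A1, phi1 = 2.0, math.pi / 6
A2, phi2 = 3.0, -math.pi / 4
omega, t = 5.0, 0.37  # arbitrary angular frequency and time instant

# Trig route: evaluate and add the waveforms directly.
trig_sum = A1 * math.cos(omega * t + phi1) + A2 * math.cos(omega * t + phi2)

# Phasor route: add the complex amplitudes once, then rotate by e^(j*omega*t)
# and take the real part.
phasor = A1 * cmath.exp(1j * phi1) + A2 * cmath.exp(1j * phi2)
phasor_sum = (phasor * cmath.exp(1j * omega * t)).real

assert abs(trig_sum - phasor_sum) < 1e-12
```

The point of the phasor route is that the addition happens once, on the complex amplitudes, instead of at every time instant.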
This is actually one of the central foundations of digital signal processing. Let me give an example - it is a bit involved, but bear with me.
The whole idea of the Fourier transform in its various forms starts from the assumption of a superposition of sine waves. The Discrete Fourier Transform exemplifies this beautifully: you start with a signal N samples in length. You can think of the signal as a vector in an N-dimensional (Hilbert) space C^N, extending from the origin in some direction in the hyperspace, with the sample values x(n) (n=0,1,...,N-1) as its coefficients. Since the samples were collected in time, the space where the vector lies is defined in the time domain by its basis (i.e. the vectors that span the N-space). To visualize, think of the familiar 3-space and the cartesian coordinates x/y/z. Those coordinates are spanned by the unit vectors i,j,k that together form the basis of the coordinate system. This basis is orthonormal since the vectors are all at mutual 90 degree angles and of equal norm, i.e. equally long (of length 1). You can see this easily in your head when you think about it. So all coordinates we give in x/y/z are expansions in this space, and the coordinates are the coefficients of the unit vectors: the point (3,5,-2) = 3*i + 5*j - 2*k. Mathematically, the basis vectors don't need to be orthonormal, or indeed even orthogonal. They can be at any mutual angles, as long as the set is linearly independent (no basis vector can be written as a combination of the others), so that every point in the space is a unique combination of the basis vectors.
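The "projection recovers the coordinates" idea can be checked in a couple of lines; this sketch just reuses the (3,5,-2) example from above:

```python
# With an orthonormal basis, the inner product of a point with each unit
# vector returns that coordinate directly: (3,5,-2) projected onto i, j, k.
i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)
p = (3, 5, -2)

def dot(u, v):
    # Plain real inner product.
    return sum(a * b for a, b in zip(u, v))

coords = [dot(p, e) for e in (i, j, k)]
assert coords == [3, 5, -2]
```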
So we have this sample vector in the N-space, in the time domain. The time-domain signal x can be expressed as the sum x = sum over k of x(k)*e_k, where each basis vector e_k is a shifted unit impulse (Dirac delta), e_k(n) = d(n-k). Next we expand the vector out of the time domain, into another basis. This being the Fourier transform, the new basis is of course the frequency domain. Expansion just means that we project the signal vector onto each individual basis vector of the new basis, to obtain the signal coefficient for that coordinate. In practice we calculate the vector inner product between the signal and each of the basis vectors.
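As a sanity check of the time-domain expansion, here is a sketch (with a made-up 4-sample signal) that rebuilds x as a weighted sum of shifted impulses:

```python
# Rebuild a length-N signal as x = sum_k x(k) * e_k, where the basis vector
# e_k is a shifted unit impulse: e_k(n) = delta(n - k).
x = [1.0, -2.0, 0.5, 3.0]  # hypothetical 4-sample signal
N = len(x)

def delta(n, k):
    # Discrete (Kronecker) delta.
    return 1.0 if n == k else 0.0

rebuilt = [sum(x[k] * delta(n, k) for k in range(N)) for n in range(N)]
assert rebuilt == x
```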
Enter e.
The orthogonal basis of the frequency domain is defined in the Hilbert space C^N as the set of vectors w_k(n) = e^(j*2*pi*n*k/N), (n,k = 0,1,...,N-1). This basis can be shown to be orthogonal, but I am not going to go there.
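Rather than prove the orthogonality, one can verify it numerically; a small sketch (N=8 chosen arbitrarily) checks that the complex inner product of w_k and w_m is N when k = m and zero otherwise:

```python
import cmath

# Numerically verify that the DFT basis vectors w_k(n) = e^(j*2*pi*n*k/N)
# are mutually orthogonal: <w_k, w_m> = N if k == m, else 0.
N = 8

def w(k):
    return [cmath.exp(2j * cmath.pi * n * k / N) for n in range(N)]

def inner(u, v):
    # Complex inner product (conjugate the second argument).
    return sum(a * b.conjugate() for a, b in zip(u, v))

for k in range(N):
    for m in range(N):
        expected = N if k == m else 0.0
        assert abs(inner(w(k), w(m)) - expected) < 1e-9
```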
The change of basis expresses the signal not as a linear combination of delta functions, but as a linear combination of sinusoids. If you inspect the basis of the frequency domain more carefully, you can see that it is a set of sinusoids of varying frequency - for a real signal, the distinct frequencies run from 0 to N/2, with the bins above N/2 being their mirror images. The Fourier transform "maps" the signal against all of these sinusoids and extracts the "amount" of match between the particular sinusoid and the signal (i.e. projects the signal onto this particular basis vector).
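The whole projection picture can be sketched end to end: take a pure cosine at a known frequency (N=16 and bin k0=3 are arbitrary choices here), project it onto each basis vector, and watch all the energy land in bins k0 and N-k0:

```python
import cmath
import math

# DFT by projection: inner product of the signal with each basis vector
# (equivalently, multiply by the conjugate e^(-j*2*pi*n*k/N) and sum).
N, k0 = 16, 3
x = [math.cos(2 * math.pi * k0 * n / N) for n in range(N)]

def X(k):
    return sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))

mags = [abs(X(k)) for k in range(N)]

# A real cosine at bin k0 splits its energy between bins k0 and N - k0,
# each receiving magnitude N/2; every other bin is (numerically) zero.
for k in range(N):
    if k in (k0, N - k0):
        assert abs(mags[k] - N / 2) < 1e-9
    else:
        assert mags[k] < 1e-9
```

This also shows the mirror-image structure of the real-signal spectrum mentioned above.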
OK, sorry about this lengthy yarn. But the point is that a) there are plenty of manipulations and superpositions of sinusoids, and b) doing all of the vector operations in sin/cos notation would be about as nice as a hot poker in the ass. Not to mention that there are critical real-life performance optimizations - the FFT above all - that are only apparent in e notation, really.
[Footnote: mathematically, a complex number a + jb is represented as a pair of numbers (a,b) and a set of rules for manipulating them. Whether you treat your pair of numbers in rectangular coordinates as (a,b) or polar coordinates as (A,theta), you are still really carrying along sines and cosines and doing manipulations of them.]
To be sure. But it makes all the difference in the world if you know how to pick the optimal representation for each case. For technical calculations, especially DSP, the e notation can't be beaten.
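One small sketch of what "picking the optimal representation" means in practice (the numbers are arbitrary): addition is trivial in rectangular form, while multiplication is trivial in polar form, where magnitudes multiply and angles add.

```python
import cmath

# Two arbitrary complex numbers.
z1 = complex(3, 4)
z2 = complex(1, -2)

# Multiply "by hand" in polar form: magnitudes multiply, angles add.
r1, th1 = cmath.polar(z1)
r2, th2 = cmath.polar(z2)
product_polar = cmath.rect(r1 * r2, th1 + th2)

# Same result as rectangular multiplication.
assert abs(product_polar - z1 * z2) < 1e-12
```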