As an example to whet your appetite -- pi comes from the sine wave.
As Benta noted, we could solve all of this in real time -- and I mean that literally, with real numbers. But differential equations are hard, so we'd prefer not to.
Note that computers are good at hard yet mindless tasks; we can set them cranking through solutions in this way, and on far messier (complex, nonlinear) systems than we're talking about here. And so we have SPICE, for example -- a simulation environment which steps incrementally forward in time, evaluating an arbitrarily complex system in the only tractable way left.
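To make that concrete, here's the flavor of it in a few lines of Python: an RC lowpass hit with a 1 V step, integrated by naive forward Euler. (SPICE proper uses implicit integrators, adaptive timesteps, and Newton iteration for nonlinear elements, but the bones are the same. Component values here are made up for illustration.)

```python
# Transient "simulation" of an RC lowpass driven by a 1 V step,
# by naive forward-Euler time stepping of dv/dt = (v_in - v) / (R*C).
R = 1e3        # ohms (arbitrary)
C = 1e-6       # farads (arbitrary) -> tau = R*C = 1 ms
dt = 1e-6      # 1 us timestep, small compared to tau
v = 0.0        # capacitor voltage, initially discharged
v_in = 1.0     # step input

for n in range(5001):                 # simulate 5 ms
    if n % 1000 == 0:                 # report once per millisecond
        print(f"t = {n*dt*1e3:3.0f} ms   v = {v:.4f} V")
    v += dt * (v_in - v) / (R * C)    # one Euler step
```

Each pass through the loop is one timestep; the accuracy-vs-speed tradeoff is all in dt, which is exactly the knob SPICE twiddles automatically.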
But we generally prefer to analyze things when we can. Analysis gets exponentially (probably hyperbolically?) harder with problem scope, so if we find that a simple enough method fits the problem, we'd be wise to use it!
If we're concerned with LTI (linear, time-invariant) RLC networks, then we can apply just such a simplification. In short, if we assume sinusoidal stimulus, then all the differential equations can be rewritten in terms of sums of sinusoids; and we don't need to carry the sines around at all, just their magnitude and phase at a given frequency. Which sounds like complex numbers, but isn't a huge motivation for them, yet.
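Getting slightly ahead of ourselves, but just to show the mechanics of that magnitude-and-phase bookkeeping, here's an RC lowpass done the phasor way in Python -- complex numbers doing the bookkeeping, component values made up:

```python
# Phasor analysis of an RC lowpass: a voltage divider of R and 1/(jwC).
# No differential equation in sight -- just complex arithmetic.
import cmath, math

R = 1e3                     # ohms (arbitrary)
C = 1e-6                    # farads (arbitrary)
f = 500.0                   # stimulus frequency, Hz (arbitrary)
w = 2 * math.pi * f         # rad/s

Zc = 1 / (1j * w * C)       # capacitor impedance as a complex number
H = Zc / (R + Zc)           # divider ratio = transfer function at f

print(f"|H|   = {abs(H):.3f}")                            # gain
print(f"phase = {math.degrees(cmath.phase(H)):.1f} deg")  # phase shift
```

One magnitude and one phase per frequency is the entire answer, and that pair is exactly what a complex number stores.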
When we solve an LTI differential equation in the time domain, after some work, we find that two facts always pop up:
1. The differential equation can be rewritten in terms of an auxiliary equation (also called the characteristic equation), a simple polynomial in one variable.
2. We get solutions in terms of exponentials:
$$ e^{-t / \tau_1} \left( A \sin \omega_1 t + B \cos \omega_1 t \right) + t \, e^{-t / \tau_2} \left( C \sin \omega_2 t + D \cos \omega_2 t \right) + \ldots $$
where -- you'll have to excuse me, as it's been YEARS since I did my diff eq homework -- the parameters (tau, omega) are in fact the roots of that auxiliary polynomial. Those roots, in general, are algebraic numbers (square roots and worse), and even for real-valued coefficients they can include roots of -1, i.e. the imaginary unit.
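To pin that down with the stock textbook example: take the source-free series RLC loop equation for the capacitor voltage,

$$ LC \frac{d^2 v}{dt^2} + RC \frac{dv}{dt} + v = 0 $$

Guessing $v = e^{st}$ turns each derivative into a power of $s$, leaving the auxiliary polynomial

$$ LC s^2 + RC s + 1 = 0 $$

whose roots, in the underdamped case, are

$$ s = -\frac{R}{2L} \pm i \sqrt{\frac{1}{LC} - \left( \frac{R}{2L} \right)^2} = -\frac{1}{\tau} \pm i \omega $$

-- exactly the tau and omega that appear in the solution above.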
But in that case, or indeed in general, we don't need to write out the real and imaginary parameters separately; we can use Euler's formula,
$$ e^{i \theta} = \cos \theta + i \sin \theta $$
so we substitute a complex exponent, $e^{(-1/\tau + i \omega) t}$, and discard the explicit sin+cos stuff.
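If you want to convince yourself numerically (made-up numbers, quick sanity check):

```python
# Check the complex-exponential shorthand against Euler's formula:
# exp((-1/tau + i*w) * t) == exp(-t/tau) * (cos(w*t) + i*sin(w*t))
import cmath, math

tau = 2e-3              # decay time constant (arbitrary)
w = 2 * math.pi * 1e3   # 1 kHz in rad/s (arbitrary)
t = 0.7e-3              # some instant in time (arbitrary)

lhs = cmath.exp((-1/tau + 1j*w) * t)
rhs = math.exp(-t/tau) * complex(math.cos(w*t), math.sin(w*t))

print(lhs)
print(rhs)
print(abs(lhs - rhs) < 1e-12)   # True: same number both ways
```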
And for further elucidation on that remarkable fact -- 3blue1brown on YouTube has one of the best visual explanations of how this works out.
So, we get complex numbers because the real part manifests as a damping (exponential decay) term, the imaginary part manifests as an oscillatory (sinusoidal) term, and the waveforms are sums of exponentials (e^t, t e^t, etc.). The auxiliary equation doesn't really seem to mean anything by itself, but it sure is easier to work with.
Wouldn't it be great if we could just work with the auxiliary equation instead?
Well, we sort of can. The framework built up around that uses transforms, of which the Laplace has been mentioned; it's probably the simplest to start off with (however, its inverse is a bit ugly). The "full version" is the Fourier transform, which transforms a function of time into a function of frequency, the value of which is a complex number -- whose real and imaginary parts carry the B and A coefficients from the trig version above.
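For reference, written out (these are the standard definitions -- one-sided Laplace, then Fourier):

$$ F(s) = \int_0^\infty f(t) \, e^{-s t} \, dt, \qquad s = \sigma + i \omega $$

$$ F(\omega) = \int_{-\infty}^{\infty} f(t) \, e^{-i \omega t} \, dt $$

Note the exponent: it's the same complex-frequency business as above, damping plus oscillation packed into one variable.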
The definition of the Fourier transform is to correlate the input function against every possible sinusoid -- "every possible" is handled with an integral (or, for the periodic case, the Fourier series, an infinite sum of harmonics), and the correlation effectively selects for every sinusoid present in the input.
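In discrete (sampled) form, that recipe is only a few lines. Here's a naive DFT in Python, literally correlating the input against one sinusoid per output bin -- a sketch for illustration (numpy's fft computes the same thing vastly faster):

```python
# Naive discrete Fourier transform: correlate the input against
# every discrete sinusoid e^{-i 2 pi k n / N}, one per output bin k.
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# Test signal: one cycle of a cosine over 8 samples.
x = [math.cos(2 * math.pi * n / 8) for n in range(8)]
for k, X in enumerate(dft(x)):
    print(k, round(abs(X), 3))
```

The test signal is one cycle of a cosine over N = 8 samples, and sure enough, all the energy lands in the k = 1 and k = N-1 (i.e. -1 cycle) bins.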
Also, I've been writing i for mathematical convenience, but I should really be using j, because we EEs already use i for signal current, go figure.
Also also: this is all high-level stuff -- usually year 3/4 in EE, I think? Or at least not 1st year. So it's being left intentionally vague here, and it will fill in somewhat if you can hold onto it until the next class that uses it, and so on. Hopefully your professors will give some proofs -- engineers tend not to, sadly, which to me at least makes the assertions, theorems, etc. much less impactful, and harder to understand. If that's the case, do take the time to work through them yourself, do extra homework -- it's quite worthwhile, at least if you're into that sort of thing.
And also also also, as Terry said... math isn't for everyone. Pretty much everything has been derived at this point; you only need to know what a thing is (for which at least an introduction to the subject helps, if not a thorough understanding of it) and look it up in a book or whatever. Everyone has their range of interest; play to that.
Like, me personally, I find network theory fascinating, but it's not an undergrad subject, so I didn't get academic training in it (<-- BS EE). And honestly, working through some of those problems -- like trying to solve for some damn group delay (lots of arctangents, group delay being the negative derivative of phase with respect to frequency) -- is a hell of a lot of work just to set up equations that probably don't have an analytical solution anyway. (In fact, even simple polynomials of degree 5 and up have no general closed-form solution, let alone the kinds of equations used to solve for other network properties.) So it's a computer that has to solve it in the end (or you, with a LOT of grind work, as they had to back in the day -- network theory dates back to the 1920s, in particular some seminal papers in BSTJ, free on archive.org if you're curious), and then you scratch up a big old table of numbers, and you still can't quite use it yet, because real components are inexact and a real circuit needs tweaking. So, for the most part, we do use those hard-won solutions as a starting point, then simulate from there, using realistic models and other sources of data.
Tim