It's a numerical solution of differential equations.

The difference equations arise from the linear components:

V = R * I

V = L * dI/dt

I = C * dV/dt

where V is the voltage across and I the current through each component (R, L, C respectively). Discretizing the derivatives over a timestep turns the latter two into difference equations.
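To make that concrete, here's a minimal sketch (made-up values, backward Euler) of turning the capacitor law into a difference equation and marching it along, for a capacitor discharging through a resistor:

```python
import math

# Backward Euler on I = C*dV/dt for a cap discharging through R:
#   C*(V_n - V_prev)/h = -V_n/R   =>   V_n = V_prev / (1 + h/(R*C))
# Toy values; a sketch of the discretization, not a real simulator.

R, C = 1e3, 1e-6           # 1 kohm, 1 uF -> time constant R*C = 1 ms
h = 1e-5                   # 10 us timestep
v = 5.0                    # initial capacitor voltage
for _ in range(100):       # march 1 ms, i.e. one time constant
    v = v / (1 + h / (R * C))

print(v, 5.0 * math.exp(-1))   # numeric result vs. analytic e^-1 decay
```

With h at 1% of the time constant, the numeric answer lands within about half a percent of the analytic exponential.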

You build a matrix equation in the node voltages and branch currents, and solve it by whatever method is handy; SPICE uses (sparse) LU decomposition, I think. Finally, you get the node voltages for a given timestep, add the differences, and repeat.
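A toy version of that matrix step: a 5 V source driving a two-resistor divider, written in modified-nodal-analysis form (unknowns are the two node voltages plus the source branch current). Dense Gaussian elimination stands in here for the sparse LU a real SPICE would use; all values are made up for illustration.

```python
# MNA for a divider: 5 V source at node 1, R1 = 1k to node 2, R2 = 1k to gnd.
# Unknowns x = [V1, V2, I_src].  Hand-rolled Gaussian elimination sketch.

G = 1.0 / 1e3                 # both resistors 1 kohm
A = [[ G,    -G,     1.0],    # KCL at node 1 (includes source branch current)
     [-G,     G + G, 0.0],    # KCL at node 2
     [ 1.0,   0.0,   0.0]]    # source constraint: V1 = 5
b = [0.0, 0.0, 5.0]

n = len(A)
for k in range(n):            # forward elimination with partial pivoting
    p = max(range(k, n), key=lambda r: abs(A[r][k]))
    A[k], A[p] = A[p], A[k]
    b[k], b[p] = b[p], b[k]
    for r in range(k + 1, n):
        f = A[r][k] / A[k][k]
        for c in range(k, n):
            A[r][c] -= f * A[k][c]
        b[r] -= f * b[k]
x = [0.0] * n
for k in range(n - 1, -1, -1):  # back substitution
    s = sum(A[k][c] * x[c] for c in range(k + 1, n))
    x[k] = (b[k] - s) / A[k][k]

print(x)   # V1 = 5 V, V2 = 2.5 V, I_src = -2.5 mA
```

For a purely resistive network this solve happens once; in transient analysis the reactive components contribute timestep-dependent entries and the solve repeats every step.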

That solves general, linear circuits for transient results, but these aren't so interesting, as analytical and frequency-domain results are available much faster (and often much more useful).

To solve in the frequency domain, take the Fourier or Laplace transform of the RLC relations (e.g., V = j*omega*L*I), solve the node equations, and you're done. For arbitrary sources, transform them as well, and use superposition to solve for the response anywhere in the circuit. Finally, if time-domain results are desired, apply the inverse transform and solve for initial conditions.
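As a sketch of that (values made up): a series RC low-pass, with the capacitor replaced by its admittance j*omega*C, gives the node-voltage ratio H = 1/(1 + j*omega*R*C), which is just complex arithmetic:

```python
import cmath, math

# Phasor solve of an RC low-pass, driven exactly at its corner frequency.
R, C = 1e3, 1e-6
w = 1.0 / (R * C)                    # corner: omega = 1/(R*C)
H = 1.0 / (1.0 + 1j * w * R * C)     # complex ratio Vout/Vin

mag_db = 20 * math.log10(abs(H))
phase_deg = math.degrees(cmath.phase(H))
print(mag_db, phase_deg)             # about -3.01 dB at -45 degrees
```

No time-stepping at all; the answer at that frequency drops out of one algebraic solve.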

This is SPICE's AC analysis: the node equations are solved at a single (given) frequency, usually swept over a range of frequencies. The inverse transform is not used.
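A miniature version of such a sweep, evaluating the same RC low-pass at log-spaced frequencies the way a per-decade AC sweep would (component values and the 10-points-per-decade choice are arbitrary):

```python
import math

R, C = 1e3, 1e-6
fc = 1 / (2 * math.pi * R * C)       # expected corner, roughly 159 Hz

# 1 Hz .. 10 kHz, 10 points per decade, log spaced:
sweep = [10 ** (e / 10) for e in range(10, 41)]
mags = []
for f in sweep:
    w = 2 * math.pi * f
    H = 1 / (1 + 1j * w * R * C)
    mags.append(20 * math.log10(abs(H)))

# first sweep point at which the response has fallen at least 3 dB:
f3 = next(f for f, m in zip(sweep, mags) if m <= -3.0)
print(f3, fc)
```

The crossing shows up at the first sweep point past the analytic corner, which is exactly how you'd eyeball a -3 dB point off a Bode plot.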

That's the easy part. The hard part is considering nonlinear elements: diode junctions, transistors, that sort of thing.

SPICE handles this by finding an operating point, solving for the incremental resistance of each element at that point (i.e., R_incr = dV/dI), and iterating until the error falls below the various tolerances (RELTOL, VNTOL, CHGTOL...). This solves for the node voltages and currents within the timestep. The next timestep is then calculated, and the process repeats. If the error isn't falling (a convergence failure), the timestep is reduced and iteration is attempted again; this makes the points much denser around regions of rapid change. There are other stability tricks as well -- source stepping and Gmin stepping, for example -- and other integration methods (trapezoidal, or Gear/BDF at orders 2 and up) are also used, or available.
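The linearize-and-iterate step, on a single nonlinear node: a diode (I = Is*(exp(V/Vt) - 1)) fed from a supply through a resistor. Each pass linearizes the diode at the current operating point (g = dI/dV) and re-solves until the update is tiny. Toy values and a crude absolute tolerance, not a real SPICE model:

```python
import math

Vs, R = 5.0, 1e3
Is, Vt = 1e-14, 0.025
v = 0.6                                   # initial guess near a silicon drop
for _ in range(100):
    i_d = Is * (math.exp(v / Vt) - 1)     # diode current at this point
    g = (Is / Vt) * math.exp(v / Vt)      # incremental conductance dI/dV
    f = (Vs - v) / R - i_d                # KCL residual at the node
    dv = f / (1.0 / R + g)                # solve the linearized circuit
    v += dv
    if abs(dv) < 1e-9:                    # stand-in for RELTOL/VNTOL checks
        break

print(v, (Vs - v) / R)   # converged diode voltage, and the branch current
```

It settles to a diode drop in the high 0.6 V range here; the interesting part is that each iteration is just another linear solve, with the diode temporarily replaced by a conductance plus a current source.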

Falstad's simulator, as far as I know, uses a fixed timestep and simple Newton iteration, which is very easy to code, reasonably stable for simple circuits, and still pretty good with some badly behaved systems. (Of note, integrating an event-driven digital simulator into it isn't too bad; I'm not sure that's what's used, but it seems likely.) It's also that much more fragile when it comes to numerical stability: it's fairly easy to create a circuit which diverges, producing exponential outputs (which the viewer is more than happy to show climbing to ridiculous values for you).
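Here's what that divergence looks like numerically (illustrative only): forward Euler with a fixed step on an ideal LC tank adds a little energy every step, so the oscillation grows exponentially instead of holding a constant amplitude. An implicit method (backward Euler, trapezoidal) wouldn't blow up here.

```python
# Forward Euler on an ideal LC tank: dV/dt = -I/C, dI/dt = V/L.
# The update matrix has eigenvalues of magnitude sqrt(1 + (w*h)^2) > 1,
# so stored energy grows every step.  Toy values.

L, C, h = 1e-3, 1e-6, 1e-6   # 1 mH, 1 uF, 1 us fixed step
v, i = 1.0, 0.0              # start with 1 V on the capacitor
e0 = 0.5 * C * v * v         # initial stored energy
for _ in range(100_000):
    v, i = v - h * i / C, i + h * v / L   # both updates use old values
e1 = 0.5 * C * v * v + 0.5 * L * i * i
print(e1 / e0)               # energy ratio >> 1: the sim diverged
```

The lossless circuit should conserve energy forever; the fixed-step explicit integrator manufactures gain out of nothing, which is exactly the "climbing to ridiculous values" failure mode.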

Two commonly seen convergence issues arise from this:

- If there are undefined nodes (e.g., capacitors or current sources in series), or overdefined loops (inductors or voltage sources in parallel), the undefined variable manifests as a singular matrix. Attempting to invert a singular matrix is analogous to dividing by zero; matrices are just richer in the ways they can behave like zero, and this is one such case. Solution: add a leakage resistor from the node to a nearby node, or ESR to the loop. SPICE has the RSHUNT parameter to do this automatically, but with safely large values (e.g., 1 GΩ) it may converge only very slowly.

- If the nonlinearity just can't be fitted (by reducing the timestep), it will eventually give up and say "timestep too small". (Or it never gives up at all, as LTspice tends to do, IIRC.) Try loosening tolerances, or adding parasitics to the circuit (you may be attempting to simulate a nonphysical circuit -- this is especially important in, say, switching power supplies). Avoid discontinuous or piecewise functions (e.g., switches, IF or TABLE expressions, EXP without LIMIT, etc.).
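A numerical picture of the undefined-node case from the first bullet (toy numbers): a node reached only through a capacitor has no DC path to ground, so its row of the conductance matrix is empty, the determinant is zero, and the solve is a divide-by-zero in matrix clothing. A tiny shunt conductance (the RSHUNT/Gmin trick) repairs it.

```python
# Node 1: 1 kohm to ground.  Node 2: connected only through a capacitor,
# which contributes zero conductance at DC -> singular nodal matrix.

G1 = 1e-3
A = [[G1,  0.0],
     [0.0, 0.0]]              # empty row: node 2 is floating at DC
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(det)                    # 0.0 -> singular, no unique solution

gshunt = 1e-9                 # 1 Gohm leakage from node 2 to ground
A[1][1] += gshunt
det2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(det2)                   # nonzero: the matrix is invertible again
```

The shunt barely perturbs the answer (1 GΩ against 1 kΩ), but it turns "no solution" into "a solution".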
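And a skeleton of the retry loop behind "timestep too small" (everything here is a hypothetical stand-in: a real solver would run Newton iterations inside solve_step; this one just pretends a sharp event near t = 0.5 needs a fine step):

```python
def solve_step(t, h):
    # Stand-in for a Newton solve: "converges" only if h is small enough,
    # and pretend an event between t = 0.45 and 0.55 demands a tiny step.
    return h <= (1e-4 if 0.45 < t < 0.55 else 1e-2)

t, h, h_min, t_end = 0.0, 1e-2, 1e-12, 1.0
steps = rejects = 0
while t < t_end:
    if solve_step(t, h):
        t += h
        steps += 1
        h = min(h * 2, 1e-2)        # grow the step back after success
    else:
        h /= 2                      # convergence failure: cut the step
        rejects += 1
        if h < h_min:
            raise RuntimeError("timestep too small")

print(steps, rejects)   # many small steps cluster around the 'event'
```

This is why the solver's points pile up around fast edges, and why it eventually throws in the towel when halving the step stops helping.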

Tim