I thought I explained it
a week ago in that thread already: too-large time steps, and not enough numerical precision.
Plus, most electrical simulations do not actually verify that the total energy is conserved in an isolated system. What is the charge stored in the 10-farad capacitors at the beginning of the simulation? What is the magnetic field in the transformers at the start of the simulation?
One of my very first self-made simulators simulated Morse copper, i.e. using a pairwise
Morse potential with constants \$D_e\$, \$a\$, and \$r_0\$ chosen to roughly match real-world copper. I had a few thousand atoms in a face-centered cubic lattice, modeling solid monocrystalline Morse copper, and a similar number of atoms in random positions but no pair closer than the lattice constant, in a periodic box (infinitely repeated in all directions). My problem was that I actually placed the second block one lattice constant too close to the solid block, so that in the planar interface, there were atoms
extremely close to each other. After some twenty thousand time steps, it "stabilized", with acoustic vibrations where the entire system vibrated with wavelength about a dozen times longer than the diagonal of the simulated volume. The initial energy was just so ridiculously high, the simulation became utterly unrealistic. (I much later found out that after a hundred thousand to a million more time steps, the entire system liquefies as one would expect; the Morse copper
will eventually melt. It's just that to conserve total energy, those initial time steps have to be tiny, in the attosecond class; and the melting takes on the order of picoseconds, i.e. millions of such steps.)
Funnily enough, if you use a much too large time step, you get much more believable results initially (with that utterly unrealistic start condition). It's only when you check the total energy that you see it jumping around randomly, a tell-tale sign that such an isolated constant-volume ("microcanonical ensemble") simulation has gone wrong.
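To make that energy check concrete, here is a minimal sketch of the same kind of microcanonical sanity check: a pairwise Morse potential integrated with velocity Verlet, with the total energy logged every step for a tiny and a much too large time step. The constants are only rough, copper-ish stand-ins, and the two-atoms-placed-too-close start is a toy version of the misplaced lattice block, not the original simulator:

```python
import numpy as np

# Rough, illustrative Morse constants in the copper ballpark (assumed values):
D_e = 0.3429   # well depth, eV
a   = 1.3588   # width parameter, 1/Å
r_0 = 2.866    # equilibrium pair distance, Å

def morse_energy_forces(pos):
    """Total pairwise Morse energy (eV) and per-atom forces (eV/Å)."""
    n = len(pos)
    energy = 0.0
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            e = np.exp(-a * (r - r_0))
            energy += D_e * (1.0 - e) ** 2
            dVdr = 2.0 * a * D_e * (1.0 - e) * e   # dV/dr
            f = -dVdr * d / r                      # force on atom j
            forces[j] += f
            forces[i] -= f
    return energy, forces

def total_energy_spread(pos, vel, mass, dt, steps):
    """Velocity-Verlet run; returns the relative spread of the total energy."""
    pot, f = morse_energy_forces(pos)
    energies = []
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        pot, f = morse_energy_forces(pos)
        vel += 0.5 * dt * f / mass
        kin = 0.5 * mass * np.sum(vel ** 2)
        energies.append(pot + kin)
    e = np.array(energies)
    return (e.max() - e.min()) / abs(e[0])

start = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])   # 1.2 Å apart, well inside r_0
mass = 63.5 * 103.6   # Cu mass in eV·fs²/Å² (1 amu ≈ 103.6 eV·fs²/Å²)

for dt in (0.001, 1.0):   # 1 attosecond vs. a much too large 1 femtosecond
    spread = total_energy_spread(start.copy(), np.zeros_like(start), mass, dt, 2000)
    print(f"dt = {dt} fs: relative total-energy spread = {spread:.2e}")
```

If the total energy wanders like that with the larger step, the run is garbage no matter how plausible the trajectories look.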
SteveThackery's question is how to determine exactly how the simulation goes wrong.
That is a difficult question, similar to when someone shows you the original question and the result they got, but not their work, and asks you where they went wrong. If we knew the exact working, it would just be a matter of walking through it step by step and seeing where the error is. Here, the Falstad simulator originates in Java code, with the initial version
here, and current versions on GitHub
here. The Java sources for transformers are
here, for capacitors
here. If we read
circuitjs1/client/CirSim.java, it says
"For information about the theory behind this, see Electronic Circuit & System Simulation Methods by Pillage or https://github.com/sharpie7/circuitjs1/blob/master/INTERNALS.md."The general answer to that question is explained in that
github.com/sharpie7/circuitjs1/blob/master/INTERNALS.md file, by the original author of the JavaScript/Java-over-GWT port, showing how the circuits are calculated each timestep.
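As a rough illustration of what "calculated each timestep" means in this kind of nodal simulator, here is a generic sketch of the textbook companion-model idea: over one step of length h, a capacitor C behaves like a conductance C/h in parallel with a current source that carries the memory of the previous voltage, so each step only needs a purely resistive nodal solve. The stamp_* helpers and the backward-Euler choice are my own illustration, not code lifted from CircuitJS; the INTERNALS.md file is the authority on what the simulator actually does:

```python
import numpy as np

def stamp_resistor(A, n1, n2, R):
    """Add conductance 1/R between nodes n1 and n2 (node 0 = ground)."""
    G = 1.0 / R
    if n1 > 0:
        A[n1 - 1, n1 - 1] += G
    if n2 > 0:
        A[n2 - 1, n2 - 1] += G
    if n1 > 0 and n2 > 0:
        A[n1 - 1, n2 - 1] -= G
        A[n2 - 1, n1 - 1] -= G

def stamp_capacitor(A, b, n1, n2, C, h, v_prev):
    """Backward-Euler companion model: conductance C/h plus a history current."""
    G = C / h
    stamp_resistor(A, n1, n2, 1.0 / G)
    if n1 > 0:
        b[n1 - 1] += G * v_prev
    if n2 > 0:
        b[n2 - 1] -= G * v_prev

# A 10 F capacitor charged to 1 V discharging through 1 Ω; one node besides
# ground, with deliberately coarse 1 s steps (stable here, but not exact).
C, R, h = 10.0, 1.0, 1.0
v = 1.0
for step in range(5):
    A = np.zeros((1, 1))
    b = np.zeros(1)
    stamp_resistor(A, 1, 0, R)
    stamp_capacitor(A, b, 1, 0, C, h, v)
    v = np.linalg.solve(A, b)[0]    # the node voltage for this timestep
    print(f"t = {(step + 1) * h:.0f} s, v = {v:.4f} V")
```
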
Note how that file explains how it uses Kirchhoff's current law as its basis? Now, look at the three isolated center circuits in
reply #1. As both the capacitors and transformers are idealized,
these inner circuits have zero resistance.
The solution method here is \$\mathbf{A}\mathbf{x} = \mathbf{b}\$, where each row of \$\mathbf{A}\$ describes the currents at one node according to Kirchhoff's current law, \$\mathbf{x}\$ is a column vector of the voltages (with respect to ground) at each node, and \$\mathbf{b}\$ is a column vector of the net current injected into each node. Since we have no current sources in the circuit, \$\mathbf{b}\$ is all zeros.
Mathematically, the solution is \$\mathbf{x} = \mathbf{A}^{-1} \mathbf{b}\$ where \$\mathbf{A}^{-1}\$ is the inverse matrix (or \$\mathbf{x} = \mathbf{A}^{+} \mathbf{b}\$ where \$\mathbf{A}^{+}\$ is the pseudoinverse or Moore–Penrose inverse, useful when \$\mathbf{A}\$ is not square, yielding the "best" solution in the least-squares sense; here \$\mathbf{A}\$ should be square, as both column vectors have the same number of elements, the number of nodes in the circuit). A lot of the "speed" in simulations comes down to calculating this inverse fast, and often some numerical precision is sacrificed for maximum speed.
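For a trivial, concrete example of that \$\mathbf{A}\mathbf{x} = \mathbf{b}\$ form, here is a toy nodal solve. It is my own two-resistor example, with a 1 A current source injected into node 1 so that \$\mathbf{b}\$ is not all zeros; it is not the circuit under discussion:

```python
import numpy as np

# 1 A source into node 1, R1 = 1 Ω from node 1 to node 2, R2 = 2 Ω to ground.
R1, R2 = 1.0, 2.0
A = np.array([[ 1.0 / R1,            -1.0 / R1],    # KCL at node 1
              [-1.0 / R1,  1.0 / R1 + 1.0 / R2]])   # KCL at node 2
b = np.array([1.0, 0.0])                            # 1 A injected into node 1

x = np.linalg.solve(A, b)         # node voltages; here [3.0, 2.0] V
x_pinv = np.linalg.pinv(A) @ b    # Moore-Penrose route, same result when
print(x, x_pinv)                  # A is square and non-singular
```
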
The problem here is that since the resistance in the three inner circuits is zero, the Kirchhoff's current law equations for them, solved for the voltage at each of their nodes, involve a division by zero. However, because the corresponding result is also zero, those divisions get cancelled out during the matrix construction. It is similar to having, say, \$(x+1)/(x-1) = 2/(x-1)\$, where the solver recognizes the common divisor, multiplies it out, and tells you the answer is \$x = 1\$, even though there is no actual solution because of the division by zero. Essentially,
the constructed matrix no longer describes the electrical circuit.
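You can see the division by zero appear before any solving even happens if you try to stamp an ideal zero-ohm branch into such a matrix (again just a toy sketch, not the simulator's actual code):

```python
import numpy as np

# Stamping an ideal, zero-resistance branch: the conductance 1/R blows up
# before the solver ever runs.
with np.errstate(divide="ignore"):
    G = np.divide(1.0, 0.0)        # 1/R with R = 0  ->  inf

A = np.array([[ G, -G],            # KCL rows for the two nodes of an
              [-G,  G]])           # isolated zero-resistance loop
b = np.zeros(2)                    # no current sources, as noted above

print(G)                           # inf
print(np.isfinite(A).all())        # False: this cannot be solved as-is, and
                                   # "cancelling" the 1/R terms instead yields
                                   # a matrix describing a different circuit
```
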
That circuit shows exactly what the Falstad simulator cannot simulate correctly: circuits or sub-circuits with zero resistances and nonlinear components. Hell, the documentation for the simulator explicitly warns against this!
(It is also a reason why some dislike
ngspice: it very often reports failure for similar circuits. I've seen many people comment that they'd rather just have
some result than fix their damn circuits to be more realistic. Yeah, the extra resistors sprinkled into the simulation can be annoying, but that's what you get if you use idealized components in the first place.)
The correct version of said circuit includes a series resistor with each transformer coil, matching the approximate resistance of the best physically possible coils: make them out of silver, resistivity 15.87·10⁻⁹ Ω·m, which works out to roughly 0.016 Ω per meter of wire with a 1 mm² cross-section. I suspect, but have not verified (since it also depends on how the length of the time steps is defined in the simulation, and I just cannot be arsed to delve
that deep for this), that even adding just a one-nanoohm (0.000000001 Ω) resistor per coil will drastically change the simulation behaviour, because of how that affects the matrix representation of that circuit using Kirchhoff's current law.
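As a toy illustration of why I suspect that (not a verification; the simulator's real matrix is larger and also carries the per-timestep terms for the capacitors and coils), compare stamping a zero-ohm coil against a one-nanoohm coil in a two-node Kirchhoff current law matrix:

```python
import numpy as np

# Coil branch from node 1 to node 2, plus 1 Ω from node 2 to ground.
def kcl_matrix(R_coil, R_load=1.0):
    G = 1.0 / R_coil
    return np.array([[ G,      -G           ],
                     [-G,  G + 1.0 / R_load ]])

try:
    kcl_matrix(0.0)                       # idealized coil: division by zero
except ZeroDivisionError as err:
    print("R = 0:", err)

A = kcl_matrix(1e-9)                      # one nano-ohm instead of zero
b = np.array([1.0, 0.0])                  # 1 A test injection into node 1
print("node voltages:", np.linalg.solve(A, b))
print("condition number:", np.linalg.cond(A))
```

The nano-ohm version is at least solvable, but the printed condition number shows how lopsided the matrix becomes, which is exactly where the numerical-precision concerns come from.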