I read your note in the linked groups.io thread:
> By the way, the slowdown does start at the 12ms (simulated time) point, when the current starts to ramp up to 200mA. It becomes almost unbearably slow once it reaches 150mA, but if you leave it that way for a few hours, it keeps going and, amazingly, gets slower and slower, so by the time you reach the 22ms point, it's beachballs galore and switching from another app like Outlook to LT Spice (Mac version under Mojave) takes no less than 60 seconds for the UI to draw. It is that slowness which prompted me to start this thread.
This sounds to me like the machine is thrashing: it's run short of RAM and is paging to disk (or at the very least is falling out of the L2/L3 CPU caches) while trying to cope with a large working set. It's generally not possible to add memory to a lot of Macs, so the approach unfortunately has to be to reduce the complexity of the simulation.
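The brutal part of thrashing is the arithmetic: even a tiny miss rate to disk dominates the average access time. Here's a back-of-envelope sketch; the latency numbers are assumed orders of magnitude, not measurements of any particular Mac.

```python
# Illustrative thrashing arithmetic (assumed round numbers, not measurements):
# once the working set spills out of RAM, even a small fraction of accesses
# that fault to disk blows up the average access time.
RAM_NS = 100          # ~100 ns for a RAM access (assumed)
DISK_NS = 10_000_000  # ~10 ms to service a page fault from disk (assumed)

def effective_access_ns(hit_rate: float) -> float:
    """Average memory access time given the fraction of accesses served from RAM."""
    return hit_rate * RAM_NS + (1 - hit_rate) * DISK_NS

for hit_rate in (1.0, 0.999, 0.99):
    slowdown = effective_access_ns(hit_rate) / RAM_NS
    print(f"hit rate {hit_rate:.3f}: ~{slowdown:.0f}x slower than all-RAM")
```

Even a 1% miss rate makes the machine roughly a thousand times slower than when everything fits in RAM, which is consistent with a simulation that runs fine for a while and then falls off a cliff.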
You can quit other running programs to free up memory, but that only buys you a fixed amount, and if the simulation's memory requirements keep growing over simulated time, it merely delays the point at which the simulation crawls to a halt.
Another possibility is a memory leak, but that shouldn't cripple a well-designed paging system: pages of leaked memory are never touched again, so the OS can page them out once and never bring them back in, costing only some swap space and a small amount of inefficiency.
So: why would a large simulation fail on the Mac and not the PC? There could be differences in how virtual memory and paging are handled between the two OSes, but judging from the timings shown in the thread, both simulations get bogged down - it takes 30 minutes to reach 22ms under Wine. I never have the patience to run a simulation that long, since I mostly use a laptop and I don't want to cook it. I did build a desktop i7 10700K with a big heatsink so I could pound it for some simulations, but I still don't have the patience to wait that long.
I have run some simulations of switching regulators where using, for example, accurate 8-component models for each capacitor caused the sim to crawl and become essentially useless. I was able to make those simulations usable for design by replacing some of the accurate capacitor models with ideal capacitors, chopping down the number of nodes and thus greatly reducing the size of the matrices used to represent the simulation.
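To see why trimming capacitor models helps so much, here's a rough sketch of the node-count arithmetic. All the numbers are hypothetical (circuit size, number of capacitors, internal nodes per model), and real SPICE engines use sparse matrices, so treat the dense-matrix figures as upper bounds on the scaling, not absolute sizes.

```python
# Rough illustration (all numbers assumed) of why replacing multi-component
# capacitor models with ideal capacitors shrinks the simulation matrix.
# SPICE solvers actually use sparse storage, so these dense figures are
# upper bounds that show the scaling, not real memory use.
BYTES_PER_ENTRY = 8  # one double-precision float per matrix entry

def dense_matrix_mb(nodes: int) -> float:
    """Memory for a dense nodes x nodes matrix of doubles, in MB."""
    return nodes * nodes * BYTES_PER_ENTRY / 1e6

base_nodes = 200       # hypothetical circuit node count with ideal caps
caps = 30              # hypothetical number of capacitors
internal_per_cap = 4   # assumed internal nodes added per 8-component model

accurate = base_nodes + caps * internal_per_cap
ideal = base_nodes

print(f"accurate models: {accurate} nodes, ~{dense_matrix_mb(accurate):.2f} MB dense")
print(f"ideal caps:      {ideal} nodes, ~{dense_matrix_mb(ideal):.2f} MB dense")
# Dense LU factorization scales roughly as n^3, so the solve-time ratio
# grows even faster than the memory ratio:
print(f"solve-cost ratio: ~{(accurate / ideal) ** 3:.1f}x")
```

The point is that the extra internal nodes multiply through both memory and solve time, so a modest-looking model simplification can pay off disproportionately.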
If you can do this with any of the components in your simulation, or otherwise reduce the complexity of the simulation matrix, you may be able to fit the simulation into a smaller memory footprint and avoid hitting the "thrashing" point where it all goes to pieces.
I'm just spitballing here, but those execution times make the simulation essentially useless, and I think it would serve you best to find ways to reduce the simulation's complexity. I have done exactly that with a few large switcher simulations and was able to turn a useless sim into something I could design with.