Author Topic: LTspice why not GPU accelerate  (Read 10837 times)

Offline HarvsTopic starter

  • Super Contributor
  • ***
  • Posts: 1202
  • Country: au
LTspice why not GPU accelerate
« on: May 08, 2014, 12:28:30 am »
I know this isn't an embedded question, but clearly we have a broad mix of people in here.

So I was listening to The Amp Hour the other day, with Mike Engelhardt talking about the optimisations done on LTspice. Interesting episode.

Having spent about a year doing all this horrendous math by hand in power systems classes (which, from what I can gather, is very similar: large systems of nonlinear equations solved with a Jacobian), I would have thought this would be just the sort of thing a GPU could be used to accelerate.
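
To make that concrete, the inner loop is basically Newton-Raphson: build a Jacobian, solve a linear system, repeat until the update is tiny. A minimal sketch of that shape of iteration (a made-up 2x2 system for illustration, nothing from LTspice itself):

Code: [Select]
// Newton-Raphson on a toy 2x2 nonlinear system F(x) = 0 -- the same shape
// of iteration a SPICE-style solver runs at every timestep. The equations
// are invented; a real solver factors a large sparse Jacobian instead.
#include <cmath>
#include <cstdio>

int main() {
    double x0 = 0.5, x1 = 0.5;                      // initial guess
    for (int it = 0; it < 50; ++it) {
        // Residual F(x): exp(x0) + x1 - 2 = 0,  x0 + x1^2 - 1 = 0
        double f0 = std::exp(x0) + x1 - 2.0;
        double f1 = x0 + x1 * x1 - 1.0;

        // Analytic Jacobian dF/dx.
        double j00 = std::exp(x0), j01 = 1.0;
        double j10 = 1.0,          j11 = 2.0 * x1;

        // Solve J * dx = -F (Cramer's rule here; sparse LU in practice).
        double det = j00 * j11 - j01 * j10;
        double dx0 = (-f0 * j11 + f1 * j01) / det;
        double dx1 = (-f1 * j00 + f0 * j10) / det;

        x0 += dx0;
        x1 += dx1;
        if (std::fabs(dx0) + std::fabs(dx1) < 1e-12) break;
    }
    std::printf("solution: x0 = %f, x1 = %f\n", x0, x1);   // -> (0, 1)
    return 0;
}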

I'm sure this isn't the case, otherwise Mike and his team would have been onto it (as there seems to be a very strong motivation). But I'm left wondering: why not?
 

Offline senso

  • Frequent Contributor
  • **
  • Posts: 951
  • Country: pt
    • My AVR tutorials
Re: LTspice why not GPU accelerate
« Reply #1 on: May 08, 2014, 01:06:50 am »
Would you rather code for Nvidia or ATI?
CUDA code is not portable to any brand other than NVIDIA, and not all cards have the same capabilities.
Multi-thread the heck out of it and abuse the SIMD extensions; almost everybody has at least a dual-core x86, and lots of laptops don't even have dedicated graphics.
 

Offline ovnr

  • Frequent Contributor
  • **
  • Posts: 658
  • Country: no
  • Lurker
Re: LTspice why not GPU accelerate
« Reply #2 on: May 08, 2014, 01:30:46 am »
Well, OpenCL runs on both AMD and Nvidia cards, and will fall back to CPU emulation (still pretty fast) if no compatible GPUs are present. It ought to run on other, more exotic hardware too, like the Xeon Phi.
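
The device selection with fallback is only a few lines of host code, something like this (my untested sketch; assumes an OpenCL SDK is installed, link with -lOpenCL):

Code: [Select]
// Pick any GPU across all OpenCL platforms, else fall back to a CPU
// device -- the portability point above. Illustration only.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint nplat = 0;
    clGetPlatformIDs(0, nullptr, &nplat);           // query platform count
    cl_platform_id plats[8];
    clGetPlatformIDs(nplat < 8 ? nplat : 8, plats, nullptr);

    cl_device_id dev = nullptr;
    for (cl_uint i = 0; i < nplat && !dev; ++i)     // prefer any GPU...
        clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    for (cl_uint i = 0; i < nplat && !dev; ++i)     // ...else take a CPU
        clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_CPU, 1, &dev, nullptr);

    if (!dev) { std::puts("no OpenCL device at all"); return 1; }
    char name[256];
    clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof name, name, nullptr);
    std::printf("using: %s\n", name);
    return 0;
}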

That being said, I'm not entirely sure the problem domain solved in LTspice is easy to port to a typical heterogeneous computing architecture, but I'd be happy to be proved wrong.
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: LTspice why not GPU accelerate
« Reply #3 on: May 08, 2014, 03:54:26 am »
Furthermore, OpenCL runs on some Altera FPGA dev boards. So who is starting the OpenCL LTspice effort?

Don't look at me, I'm too busy and happy with my PC-based LTspice :)
 

Offline HarvsTopic starter

  • Super Contributor
  • ***
  • Posts: 1202
  • Country: au
Re: LTspice why not GPU accelerate
« Reply #4 on: May 08, 2014, 05:46:08 am »
Would you rather code for Nvidia or ATI?
CUDA code is not portable to any brand other than NVIDIA, and not all cards have the same capabilities.
Multi-thread the heck out of it and abuse the SIMD extensions; almost everybody has at least a dual-core x86, and lots of laptops don't even have dedicated graphics.

I'm aware of this. However, the whole motivation for the fast LTspice solver is LT's own chip designers, not your typical engineer, who simply doesn't need the speed. Mike talks about benchmarking and selecting a high-end PC build for the chip designers. So if they made it support just CUDA, falling back to the CPU without GPU acceleration, that would work, since the people who actually need the performance are having their PCs custom-built for the job anyway.
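
The runtime check for that fallback is trivial; something like this (my sketch, nothing official, just the CUDA runtime API, built with nvcc):

Code: [Select]
// "CUDA if available, CPU otherwise" dispatch. Illustration only.
#include <cuda_runtime.h>
#include <cstdio>

bool cuda_usable() {
    int n = 0;
    // cudaGetDeviceCount fails cleanly when no NVIDIA driver/device exists.
    return cudaGetDeviceCount(&n) == cudaSuccess && n > 0;
}

int main() {
    if (cuda_usable())
        std::puts("solver: GPU path (CUDA device found)");
    else
        std::puts("solver: CPU path (no CUDA device, falling back)");
    return 0;
}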
 

Offline madshaman

  • Frequent Contributor
  • **
  • Posts: 698
  • Country: ca
  • ego trans insani
Re: LTspice why not GPU accelerate
« Reply #5 on: May 08, 2014, 02:15:46 pm »
I would guess that a large number of LTspice users are still running it on old boxes with XP Pro.

Since person hours are actually very expensive, I'd imagine it was decided that time was better spent on the CPU side.

Also, outside of game companies, most developers have no experience writing shaders or working with GPU acceleration, so it becomes a risk, and most companies can't afford to put risky ventures on the critical path.

Also, perhaps there's something about the code base that would make it more difficult to integrate GPU-assisted computation.  It could also be that so many of the calculations involve such long dependency chains that parallelism doesn't help much.
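
E.g. in a transient run, every timestep needs the previous step's state, so the outer loop is serial no matter how many cores you throw at it; a GPU can only attack the work inside each step. Roughly (toy backward-Euler RC charge-up, purely illustrative):

Code: [Select]
// The long dependency chain: state at step n+1 depends on state at
// step n, so the timestep loop itself cannot be parallelised.
#include <cstdio>

int main() {
    const double R = 1e3, C = 1e-6, Vin = 5.0, dt = 1e-6;
    double v = 0.0;                       // capacitor voltage (the state)
    for (int n = 0; n < 5000; ++n) {
        // Backward Euler on v' = (Vin - v)/(R*C), solved for the next v.
        // Each pass needs the previous v: no step-level parallelism.
        v = (v + dt * Vin / (R * C)) / (1.0 + dt / (R * C));
    }
    std::printf("v(5 ms) = %.4f V\n", v);  // ~4.97 V after 5 time constants
    return 0;
}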

I think an argument could be made for putting more focus on GPU acceleration, but without more info I can't call the decision NOT to do so a bad one.
To be responsible, but never to let fear stop the imagination.
 

Offline krivx

  • Frequent Contributor
  • **
  • Posts: 765
  • Country: ie
Re: LTspice why not GPU accelerate
« Reply #6 on: May 08, 2014, 02:24:49 pm »
Wasn't this discussed on The Amp Hour episode? I think they said that a GPU is only advantageous for massively parallel operations and that it wasn't worth it for LTspice...
 

