A properly designed drive gives the same (or nearly the same) torque, but with much lower current and much better motor efficiency.
Really? So a PMDC motor supplied with a DC voltage and operating at, say, 70% efficiency will suddenly become 85% efficient with the right drive?
I think you need to substantiate that.
Sure.
Think about the extreme case: when you apply full voltage to a stalled DC motor, it draws a current limited only by the resistance of the windings and brushes, hence massive I^2R losses, without providing more torque than the iron magnetization can support anyway. This is the "diminishing returns" region of operation: double the current and you only get slightly more torque. Typically the efficiency starts to drop considerably beyond about 150-200% of the motor's nominal torque (~ current) - depending on the motor, of course. You can check the efficiency curves on motor datasheets if you don't believe this.
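To put rough numbers on this, here's a back-of-envelope sketch. All values are made up for illustration, not taken from any specific motor datasheet:

```python
# Hypothetical small PMDC motor (illustrative values, not a real datasheet).
V_supply = 24.0   # V, full supply voltage
R_winding = 0.5   # ohm, winding + brush resistance
I_nominal = 5.0   # A, rated continuous current

# Stalled motor with full voltage applied: current limited only by R.
I_stall = V_supply / R_winding          # 48 A, nearly 10x nominal
P_loss_full = I_stall**2 * R_winding    # 1152 W of pure heat

# Same stalled motor, but the drive limits current to 2x nominal,
# which (per the argument above) still gives most of the usable torque.
I_limited = 2 * I_nominal
P_loss_limited = I_limited**2 * R_winding   # 50 W

print(I_stall, P_loss_full, P_loss_limited)
```

Over a kilowatt of heat versus 50 W, for roughly similar shaft torque - that's the whole argument in two lines of arithmetic.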
Now, when you apply a controlled, i.e., limited, current that takes into account what the motor can actually do, you get the same (or nearly the same) torque with much less current, much lower losses, and better efficiency.
Of course, a totally stalled motor never does any work, so its efficiency is, by definition, 0%. The absolute losses still matter, though: with a proper driver, the motor can survive while delivering this torque indefinitely.
But when the motor is driven at low RPM - sometimes only during spin-up, sometimes continuously (such as in an EV in city traffic) - the choice between dumbly applying full voltage, "just applying some PWM based on speed control, with current measurement only as a safety thing", and a proper current mode might mean motor efficiencies of 10%, 60%, and 90%, respectively. The middle option is hard to predict and analyze, and such solutions typically end up with much more "safety" logic (and hidden state) than anybody expected when they chose the "simple" system. This safety logic often causes unnecessary nuisance trips when the user actually wanted torque control.
It really depends. In many cases with constant, light loads and easy dynamics, there is of course absolutely no difference between current-mode control and "simply" using speed information as the feedback signal for the PWM, except for the short inrush at spin-up, which may be totally insignificant in the total energy consumption.
When building the inverter & motor for a test EV years back, one of the first differences we noticed between control algorithms was the heating of the motor, even though the PWM scheme stayed the same. This was hard to see on the lab table, but when driving a real 600 kg vehicle at walking speed, with the motor running at only a few hundred RPM, it was a total showstopper. Sure, this was an induction motor, so there is the extra challenge of controlling the magnetization and it's not a perfect example, but I wanted to mention it anyway since the effect was very dramatic. The first control scheme was a dumb speed regulation loop; the second was torque control based on slip feedback.
With small motors, which have high resistance anyway and are designed to work near full speed most of the time with a fairly constant load, this is often ignored. A proper current-controlled source can really spin up a motor at 90% average efficiency, while a "dumb" solution may average 45% (ramping from 0% to 90%) - but does that matter if the spin-up time is limited anyway, and a protection scheme prevents the stalled motor from running continuously?
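The spin-up efficiency gap is easy to see in a crude simulation. A minimal sketch, under assumptions I'm making up here: all parameters are illustrative, the converter is ideal and lossless, there is no load torque, and winding inductance and friction are ignored:

```python
# Crude PMDC spin-up model: V = I*R + Ke*w, torque = Kt*I, J*dw/dt = Kt*I.
# All parameters are illustrative assumptions, not a real motor.
R, Ke, Kt, J, V = 0.1, 0.05, 0.05, 1e-3, 12.0
I_limit = 10.0
w_target = 0.95 * V / Ke   # spin up to 95% of no-load speed
dt = 1e-5                  # Euler step, s

def spinup(limited):
    w, e_in = 0.0, 0.0
    while w < w_target:
        i = (V - Ke * w) / R           # what the motor would draw at full V
        if limited:
            i = min(i, I_limit)        # drive enforces a current limit
        v_motor = i * R + Ke * w       # voltage the drive must actually apply
        e_in += v_motor * i * dt       # energy drawn from the supply
        w += (Kt * i / J) * dt
    e_mech = 0.5 * J * w * w           # kinetic energy ends up in the inertia
    return e_mech / e_in

eff_full = spinup(limited=False)   # "dumb" full-voltage spin-up
eff_lim = spinup(limited=True)     # current-limited spin-up
print(f"full voltage: {eff_full:.0%}, current-limited: {eff_lim:.0%}")
```

With these assumed numbers, the full-voltage case lands near 50% (for the same reason as the capacitor-charging analogy below gives exactly 50%), while the current-limited case comes out well above 80%.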
BTW, since I sometimes like to give analogies, even though they are not 1:1 identical: this is analogous to charging a capacitor from a voltage source through a resistor, which always gives 50% efficiency by definition (there was a huge thread about this; it's hard for some to grasp!), vs. charging it with a current-controlled switch-mode supply, which has no such 50% limit. In this analogy, the inductance of the "converter" (motor) saturates, and the system reverts to purely resistive behavior. Now, if you only charge the capacitor (spin up the motor) once, the 50% efficiency might not be an issue. But if you have an extra load draining the "capacitor" (BEMF), keeping it at 20% voltage all the time (i.e., driving an EV with the motor running at 20% of nominal speed), the control scheme must ensure the system stays a proper "converter", i.e., that the core doesn't start to saturate. And by this I don't only mean the fully saturated condition, but the region of operation that is kinda OK but has diminishing returns and poor efficiency. You don't need to go there, just like you don't need to drive LEDs at absolute maximum ratings: even though they'll give you the most lumens that way, the lm/W figure will drop.
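That 50% figure is easy to verify numerically. A minimal sketch - the component values are arbitrary, since the result is independent of R and C:

```python
# Charge a capacitor through a resistor and compare the energy stored
# on the cap with the energy burned in the resistor. Values are arbitrary;
# the 50/50 split holds for any R and C.
V, R, C, dt = 10.0, 100.0, 1e-4, 1e-6
q, e_loss = 0.0, 0.0
for _ in range(int(10 * R * C / dt)):   # integrate over ~10 time constants
    i = (V - q / C) / R                 # current set by voltage left across R
    e_loss += i * i * R * dt            # heat in the resistor
    q += i * dt                         # charge delivered to the capacitor
e_stored = q * q / (2 * C)
print(e_stored, e_loss)                 # both converge to C*V^2/2
```

However fast or slow you make the RC, the resistor always eats exactly as much energy as ends up on the capacitor.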
But this is why each and every brushed DC motor controller driving large motors will always have a fast current regulation loop as the innermost, primary loop. It's the right thing to do, and it isn't even more complex - it's actually a lot simpler once you get rid of the old notions, because it makes sense. It's just like how we are finally starting to accept current mode as the right thing to do in switch-mode converters; the same shift is happening more slowly with motors, even though it's exactly the same case (imagine a huge output capacitor on a synchronous buck converter to simulate the inertia and BEMF).
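In code, that cascaded inner/outer structure really is tiny. Here's a hedged sketch - the class, gains, and limits are all placeholder assumptions, not a tuned design. The point is only the shape: the outer speed loop is allowed to ask for nothing but current (i.e., torque), and the inner, faster loop turns that into PWM duty:

```python
# Cascaded control sketch: outer speed loop -> current setpoint,
# inner current loop -> PWM duty. Gains/limits are placeholders.

class PI:
    def __init__(self, kp, ki, limit):
        self.kp, self.ki, self.limit = kp, ki, limit
        self.integral = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        out = self.kp * error + self.ki * self.integral
        clamped = max(-self.limit, min(self.limit, out))
        if clamped != out:
            self.integral -= error * dt   # basic anti-windup: undo on clamp
        return clamped

# Outer loop output is a current setpoint [A] - this IS the current limit.
speed_loop = PI(kp=0.5, ki=2.0, limit=10.0)
# Inner loop output is PWM duty, clamped to [-1, 1] before use.
current_loop = PI(kp=0.2, ki=50.0, limit=1.0)

def control_step(w_setpoint, w_meas, i_meas, dt_slow, dt_fast):
    # In a real system the inner loop runs many times per outer step;
    # collapsed to one call each here to keep the sketch short.
    i_setpoint = speed_loop.step(w_setpoint - w_meas, dt_slow)
    duty = current_loop.step(i_setpoint - i_meas, dt_fast)
    return max(0.0, duty)
```

Notice the motor can never see more than 10 A no matter how large the speed error gets - the "safety" behavior falls out of the structure instead of being bolted on.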
Doing the Wrong Thing is often good enough, and doing the Right Thing is sometimes wrong in real engineering, but you need to stop and think about these things.
Hope this helps.