EEVblog Electronics Community Forum
Electronics => Beginners => Topic started by: derGoldstein on June 17, 2016, 02:28:14 pm
-
I've been trying to understand the behavior of inductors in different circuits and there's a particular property that I'm having trouble grasping: the magnetic field collapsing generates a gradually increasing voltage across the inductor.
When current is initially passed through an inductor, the inductor resists the change in current through it, starting with a high impedance that gradually decreases as the magnetic field builds up. When the field is fully formed, the circuit is in steady state, and as long as the current through the inductor doesn't change, it will just behave like the length of wire it's made of.
(I'm assuming that I'm correct so far, please tell me if I'm not)
At this point I disconnect the power from the inductor, and this is where my understanding fails: the inductor again resists the change in current through it, and as the magnetic field collapses, it generates a voltage across the inductor that will keep increasing until it's high enough to overcome the resistance between the two ends of the inductor and close the circuit. If the inductance is very high, the voltage will increase enough to arc through a very high resistance such as air.
(here again I'm assuming that the last paragraph was correct, even if I don't understand why it's so)
If this is the case, couldn't a buck circuit output a higher voltage than the input if the resistance of the load is high enough (or if the load is removed completely)? Does the capacitor prevent this from happening due to its behavior of resisting the change in voltage? If so, wouldn't the voltage across the capacitor in a buck circuit that's not connected to a load keep increasing with no limit? Could this happen if the regulation circuit fails?
I'd be grateful to anyone who could elucidate.
-
Hi
An inductor stores energy. It and the capacitor are two "ideal" elements that do so.
The inductor stores energy in a magnetic field.
When you attach it to a power source, it "soaks up" energy from the source.
When you remove it from a power source (or change the source), it dumps that energy.
There are basic rules that dictate how fast it will soak up energy and how much energy it will store. The same rules help to estimate how it will dump that energy when the source is removed (or changed).
An inductor wants to maintain a constant current. If something external tries to change that, it resists the change by either storing more energy or by dumping some of the energy it has stored.
So far so good?
Bob
-
One simple equation says it all: VL = L di/dt The voltage across an inductor is proportional to its inductance times the rate of change of current. If you could shut off the current instantaneously, di/dt would be infinite and the voltage would rise to infinity. There are practical limitations that keep this from happening but that's what the math says. Ignition coils come to mind...
https://en.wikipedia.org/wiki/Inductor
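To put rough numbers on that equation, here is a minimal sketch; the 10 mH / 1 A values are made up for illustration, and the "switch-off time" is treated as a simple linear current ramp:

```python
# V = L * di/dt: the faster you interrupt the current, the higher the voltage.
L = 10e-3   # inductance in henries (hypothetical 10 mH coil)
I = 1.0     # steady-state current in amperes

for t_off in (1e-3, 1e-4, 1e-5, 1e-6):
    di_dt = I / t_off      # average rate of change if current ramps to zero in t_off
    v_peak = L * di_dt     # voltage across the inductor during the ramp
    print(f"switch-off in {t_off:.0e} s -> ~{v_peak:,.0f} V across the inductor")
```

Each tenfold reduction in switch-off time multiplies the voltage by ten, which is why the limit of "zero time" implies infinite voltage.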
-
An inductor stores energy. It and the capacitor are two "ideal" elements that do so.
The inductor stores energy in a magnetic field.
When you attach it to a power source, it "soaks up" energy from the source.
When you remove it from a power source (or change the source), it dumps that energy.
There are basic rules that dictate how fast it will soak up energy and how much energy it will store. The same rules help to estimate how it will dump that energy when the source is removed (or changed).
An inductor wants to maintain a constant current. If something external tries to change that, it resists the change by either storing more energy or by dumping some of the energy it has stored.
So far so good?
I've seen the comparison between the inductor and the capacitor in so many places, and it always seems like a very poor equivalence. The inductor and the capacitor are two passive (solid-state) energy storage devices, but the similarities don't extend much further.
An ideal capacitor will keep its charge indefinitely if there's no electrical connection between its terminals (the real-life version will "leak" very slowly), and won't ever output a voltage higher than the one it was charged to (I understand that some ceramic capacitors can generate voltage transients, but I mean the ideal model). It's very similar to a battery in the mathematical sense.
An inductor will only store and discharge energy under specific, dynamic conditions, and it doesn't have a non-energized storage state. Also, unlike the capacitor, the magnetic field has "other options". If there's a secondary winding, the energy can be discharged through it instead. If there's a change in the arrangement of ferromagnetic matter around it then the energy can be discharged mechanically. In DC terms, it's nothing like a battery.
The capacitor's behavior is much more intuitive. It can be compared to a latched spring, or just a volume of mass that's raised from the ground to a higher position like a shelf. In a way you can also compare it to linear momentum (or, if you want to think of it as a stationary version, then angular momentum).
You put energy in and you take energy out the same way, and in between the energy "stays there".
One simple equation says it all: VL = L di/dt The voltage across an inductor is proportional to its inductance times the rate of change of current. If you could shut off the current instantaneously, di/dt would be infinite and the voltage would rise to infinity. There are practical limitations that keep this from happening but that's what the math says. Ignition coils come to mind...
So if you have an energized inductor which you disconnect from the circuit, does the voltage instantly rise to theoretical infinity, or does it rise gradually? If the change is instantaneous, how can any additional semiconductors in the circuit (like a freewheeling diode) survive the reaction?
-
Because nothing changes in zero time. Even if the current is switched with a mechanical switch, there will be enough voltage generated to arc across the switch contacts for a while. The magnetics aren't quite as perfect as the math. But I do remember making "shock boxes" from audio transformers and a battery. Closing the switch was no big deal but when it opened, it stung! As to freewheeling diodes, they only see a 0.6 or 0.7V drop. Some resistive device in the circuit accounts for the rest of the energy.
Mostly we deal with low levels of inductance and limited switching speeds. It all works out fine. But the math is what it is - if it were possible to switch in zero time, the voltage would be infinite. Neither can happen in the real world.
Really, it's the same story with a capacitor. The current is C dv/dt. If the rate of change of voltage is high, so is the capacitor current. The current causes heating and is a limiting factor on how fast computers can go. Smaller feature size with shorter interconnects means lower capacitance which means less heat for a given switching frequency. One way to limit dv/dt is to use lower and lower logic voltages.
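The capacitor side of that duality can be put in numbers the same way; the 10 pF / 1 V values below are illustrative, roughly the scale of a fast logic edge:

```python
# I = C * dv/dt: a fast voltage edge on a capacitance demands a large current.
C = 10e-12   # node capacitance in farads (hypothetical 10 pF)
dv = 1.0     # size of the voltage step in volts

for dt in (1e-6, 1e-9):
    i = C * dv / dt   # charging current required to slew that fast
    print(f"1 V edge in {dt:.0e} s -> {i * 1e3:.3f} mA of charging current")
```

The same 1 V step needs a thousand times more current when it happens a thousand times faster, which is the heating/speed trade-off described above.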
-
The voltage upon disconnection does not rise to infinity, due to the resistance of the coil. If the series resistance were zero, the inductor would be ideal and the voltage would indeed go to infinity.
-
The energy stored in a capacitor is proportional to the square of the voltage between the plates.
The energy stored in an inductor is proportional to the square of the current running through the coil.
Ei = 0.5*L*I^2
Ec = 0.5*C*V^2
When you remove an inductor from a circuit, via a switch for example, the current path is interrupted, but energy must be conserved, so the voltage will rise enough to keep the current flowing until the stored energy is dissipated through resistive elements in the wire and circuit. With a simple switch it will arc across the poles: the voltage will equal L*di/dt, so the faster the current drops, the higher the voltage.
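Plugging some illustrative part values into the two formulas above (a 100 uH inductor at 2 A and a 100 uF capacitor at 10 V, both hypothetical):

```python
# The two stored-energy formulas, in joules.
def inductor_energy(L, I):
    return 0.5 * L * I**2   # Ei = 0.5*L*I^2

def capacitor_energy(C, V):
    return 0.5 * C * V**2   # Ec = 0.5*C*V^2

print(inductor_energy(100e-6, 2.0))    # 100 uH at 2 A  -> 200 uJ
print(capacitor_energy(100e-6, 10.0))  # 100 uF at 10 V -> 5 mJ
```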
-
So if you have an energized inductor which you disconnect from the circuit, does the voltage instantly rise to theoretical infinity, or does it rise gradually? If the change is instantaneous, how can any additional semiconductors in the circuit (like a freewheeling diode) survive the reaction?
Hi
The inductor does not go to infinite voltage in zero time. The capacitor does not go to infinite current in zero time. Same issue on both devices.
The real capacitor does not store energy forever and ever. Neither does the real inductor. Again the same thing on both devices. Ideal inductors would work just as well as ideal capacitors. Superconducting coils are an "ideal" inductor. Short the terminals and the field stays "forever".
A multi winding inductor is not that different than a multi plate capacitor, but that gets a bit more complex.
Since they are both ideal, non-realizable elements, they are purely defined by math. One converts current to stored energy, the other converts voltage to stored energy. That's the only difference.
Bob
-
Thanks everyone, I'm starting to alter my intuition of how the inductor behaves, and how the equations defining the devices are useful comparisons.
So in practice, in a buck converter circuit, suppose these 3 conditions occur:
1) the regulation circuitry fails (while the mosfet is off)
2) the load is almost zero
3) the output capacitor is (potentially) too small
Now we're left with the inductor, the freewheeling diode, and the output capacitor. For the sake of this example let's say there's nowhere for the current to flow back "upstream" through the switching mosfet. Is it then possible for that circuit to output voltage higher than the input voltage?
Suppose I'm trying to prevent this exact scenario from happening using only passives. Would adding more capacitance to the output along with a bleed resistor help?
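To bound that failure scenario numerically: if we assume (loosely) that the load is gone and every joule stored in the inductor transfers losslessly through the freewheel diode into the output capacitor, conservation of energy caps the final voltage. All part values below are hypothetical, and diode drop and resistive losses are ignored:

```python
import math

# Energy balance when the inductor dumps into the output cap:
#   0.5*C*Vf^2 = 0.5*C*V0^2 + 0.5*L*I^2
# Hypothetical buck values: 47 uH inductor, 100 uF output cap,
# 5 V output, 3 A inductor current at the moment the MOSFET latches off.
L, C = 47e-6, 100e-6
V0, I = 5.0, 3.0

Vf = math.sqrt(V0**2 + L * I**2 / C)
print(f"output rises from {V0} V to at most ~{Vf:.2f} V")
```

With these values the rise is modest (roughly half a volt), because the cap stores far more energy per volt than the inductor holds in total; a smaller cap or a bigger L*I^2 makes the overshoot correspondingly worse, which is why adding output capacitance (plus a bleed path) does help.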
uncle_bob:
Since they are both ideal, non-realizable elements, they are purely defined by math. One converts current to stored energy, the other converts voltage to stored energy. That's the only difference.
Ok, I'm beginning to see the usefulness of the comparison in the equations, but there are some properties which I don't find to be good comparisons:
The inductor does not go to infinite voltage in zero time. The capacitor does not go to infinite current in zero time. Same issue on both devices.
Right, so in practice we change "infinity" to "as high as the series resistance and other imperfections permit". Ok, I can see this one.
The real capacitor does not store energy forever and ever. Neither does the real inductor. Again the same thing on both devices. Ideal inductors would work just as well as ideal capacitors. Superconducting coils are an "ideal" inductor. Short the terminals and the field stays "forever".
This is the comparison I find impractical. If we consider the real-world behavior, assuming that we open the circuit after we charge the devices, the *effective* property of the capacitor is to keep the charge, while the effective property of the inductor is to discharge by any means. The inductor's charge is by nature transient.
Neither the resistance of copper nor the resistance of teflon is infinite, so we can, in theory, call them both "conductors". But for any practical purpose, we'd classify copper as a conductor and teflon as an insulator. In practice, the charge of a capacitor does "stay put" for most practical purposes; it's objectively a fairly good energy storage device.
A multi winding inductor is not that different than a multi plate capacitor, but that gets a bit more complex.
Multi-plate capacitors are kind of like capacitors in parallel, while something like an electrolytic capacitor has one pair of plates that are spooled, but I see what you mean.
-
Hi
Your basic issue is still not looking at the two devices (inductor and capacitor) as working on voltage versus current.
To put a capacitor in "storage mode" you open circuit it.
To put an inductor in "storage mode" you short circuit it.
If you short a capacitor .... zonk ... all energy gone.
If you open an inductor ... same ... all energy gone.
Current for one, voltage for the other.
Bob
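The practical asymmetry between those two "storage modes" can be put in numbers. In each mode the stored energy decays exponentially with the time constant shown; the part values below are hypothetical but plausible:

```python
# Open-circuited capacitor: charge bleeds through its leakage resistance,
#   tau = R_leak * C.
# Shorted inductor: current decays through its winding resistance,
#   tau = L / R_series.
C, R_leak = 100e-6, 10e6       # 100 uF cap with 10 Mohm leakage
L, R_series = 100e-6, 0.05     # 100 uH inductor with 50 mohm winding

tau_cap = R_leak * C   # seconds
tau_ind = L / R_series # seconds
print(f"capacitor time constant: {tau_cap:.0f} s")
print(f"inductor  time constant: {tau_ind * 1e3:.0f} ms")
```

Minutes versus milliseconds for ordinary parts, even though the ideal versions of both would hold their energy forever.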
-
If one could manufacture a capacitor that had no other properties (like series inductance, series resistance, leakage etc) I have no doubt there would be a Nobel prize in the offing.
Equally, if one could make an inductor that had no inter-winding capacitance or series resistance one would have similar accolades bestowed.
We go around drawing lines (wires) on schematics as if they have zero resistance, when in the end they are made of copper rather than superconductor (the very same material transformers are wound from, complete with their copper losses), and then we have to debug our 'perfect' drawings.
As it is we have to live in the universe we have, which is way more complicated, and lovely, where one's mind has to grapple with many more things than those which are optimal.
It's the arty part of electronics: getting one's head around the not-so-obvious things that are going on ::)
-
Your basic issue is still not looking at the two devices (inductor and capacitor) as working on voltage versus current.
To put a capacitor in "storage mode" you open circuit it.
To put an inductor in "storage mode" you short circuit it.
If you short a capacitor .... zonk ... all energy gone.
If you open an inductor ... same ... all energy gone.
Current for one, voltage for the other.
I understand the comparison in mathematical terms, but in practical terms those four statements are: true, false, true, true.
Even if you took a toroidal inductor, charged it, and then magically made the winding vanish completely, leaving a closed magnetic circuit, the field would not stay put for any "useful" amount of time (because of the imperfection of the ferromagnetic material).
The steady states of the two devices, when implemented in the real world, are not equally "steady"; that's all I'm saying.
-
Hi
You are flipping back and forth between open-circuiting an inductor and complaining that they are not ideal. One is a fundamental theoretical issue; the other is simply a function of how much money you want to pay. If you took both an ideal capacitor and an ideal inductor, they would both store energy forever.
Bob
-
Bob has it - there is no steady state.
Nothing is perfect (least of all me), even though the mathematics persuades us that there is perfection.
-
You are flipping back and forth between open-circuiting an inductor and complaining that they are not ideal. One is a fundamental theoretical issue; the other is simply a function of how much money you want to pay. If you took both an ideal capacitor and an ideal inductor, they would both store energy forever.
The original question was about open-circuiting an inductor. The post that you just replied to concerned the practicality of the two components as energy storage devices. If this were a multi-threaded forum, I'd split the two up, but it's not, so I have to alternate.
I understand that an ideal version of both devices would store the energy forever. More fundamentally, the field in an ideal, closed magnetic circuit will remain there forever. The same goes for the current in an ideal, closed electrical circuit.
As for it being a function of how much money you want to spend, this is exactly the same as saying that, if the voltage is high enough, teflon is a *practical* conductor. As in, it would, under certain real-world conditions, make sense to fabricate a circuit where the traces were made out of teflon, because its conductivity isn't zero, and therefore it's a conductor. For the absolutely overwhelming majority of real-world purposes, this would not happen.
So yes, it's a function of how much money you want to spend. A 1 farad capacitor charged to 10V holds about 0.014Wh (50J) of energy. You can calculate the amount of energy lost over 1 minute while it's just standing there after being charged; it's not going to be much in percentage terms. In the real world, I can buy this capacitor for $40 retail. How much would it cost for someone to construct a device that stores 0.014Wh for 1 minute using an inductor? How large would that device be?
In theory, an inductor is an energy storage device, and in theory, teflon is a conductor. In practice, you wouldn't use an inductor to store energy, and in practice, you wouldn't use teflon as a conductor.
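Putting the 1 farad / 10 V numbers into code (0.5*C*V^2 for the capacitor; the 10 A inductor current used for comparison is an arbitrary choice for illustration):

```python
# Energy in a 1 F capacitor charged to 10 V.
C, V = 1.0, 10.0
E_joules = 0.5 * C * V**2   # = 50 J
E_wh = E_joules / 3600.0    # ~0.0139 Wh
print(E_joules, "J")
print(round(E_wh, 4), "Wh")

# An inductor holding the same 50 J at a hypothetical 10 A would need
# L = 2*E/I^2 = 1 H, a physically enormous coil, and the current would
# have to keep circulating the whole time the energy is "stored".
I = 10.0
L_needed = 2 * E_joules / I**2
print(L_needed, "H")
```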
-
An inductor IS an energy storage device - while the energy is moving (alternating) - Like a flywheel
A Capacitor is a storage device - while the energy is static - like a bucket of water on a ladder, waiting to fall
Magnetic fields collapse; capacitors leak
-
In theory, an inductor is an energy storage device, and in theory, teflon is a conductor. In practice, you wouldn't use an inductor to store energy, and in practice, you wouldn't use teflon as a conductor.
Hi
Pardon me, energy storage is *exactly* what you use inductors for in switched-mode power supplies. As long as you believe they are useless for that purpose, there is no need to get into the power supply part of this. The fact that they store energy is so fundamental to the switcher question that you have to get it straight first.
Bob
-
Dear component supplier,
Please supply four of your "ideal" inductors...
Constant current - from whence (Shakespeare!)
LC starts life with no energy, then it gets a flick from outside, oscillates in a decaying fashion as it circulates the energy around, then goes back to starting state.
-
Pardon me, energy storage is *exactly* what you use inductors for in switched-mode power supplies. As long as you believe they are useless for that purpose, there is no need to get into the power supply part of this. The fact that they store energy is so fundamental to the switcher question that you have to get it straight first.
I never once said that inductors do not store energy. The single claim that I made which in any way contradicted or argued with any of your points was that comparing the manner in which capacitors and inductors store energy was, for the purposes of understanding them, not a practical one.
The only thing I disagree with is the *simile*. I never contradicted any of the facts, equations, ideal functions of the devices, real-world behaviors, or practical uses.
-
I think I have a better way of explaining the problem I have with the simile.
Suppose you take a ball and throw it into the air. You applied force to move the mass of the ball away from the mass of the earth. The mass of the ball will now strive to get back as close as it can to the mass of the earth, and will, to a certain limit, apply force to do so (displacing the air away from its path). In throwing the ball into the air you've "charged" it with kinetic energy. I would not, however, say that as a result, the ball is an "energy storage device". I'm not saying that energy *isn't* stored in the ball, I just wouldn't classify the ball as an energy storage device.
Now I'll modify the situation slightly. I'll take the same ball and put it on a shelf. I've applied the same force, but "latched" the energy by not allowing it to fall back down. To release this energy, I would only have to push the ball very slightly, which would cause it to roll off of the shelf and once again apply force to get back to the ground. While I still wouldn't call the ball an energy storage device, it's now a bit closer to that description, subjectively speaking.
My classification or subjective description of the ball does not in any way change how it behaves. I'm merely arguing that I would describe it differently.
-
And what if the same ball were spinning around inside a bucket on a string?
-
The only thing I disagree with is the *simile*. I never contradicted any of the facts, equations, ideal functions of the devices, real-world behaviors, or practical uses.
Hi
It is not a "cute trick" used to compare the two devices. They are uniquely coupled in what they do. That actually is deliberate. If it were not true, we would use a different pair of parts for which is was true. The way in which they are coupled is what makes the math work. Understanding that coupling is a very fundamental part of circuit theory. Without it, the rest of the analysis should not / would not / could not make sense.
Bob
-
Bad explanation :palm:
Ball inside bucket, bucket attached to string; someone is holding the string in their hand and making the whole thing turn around, either horizontally or vertically.
Now the ball isn't moving with respect to the bucket, but the bucket is moving. If you push the ball outside the bucket there is energy stored in the ball that can be captured, like the ball on the shelf falling off due to gravity, only this way due to centripetal force.
-
Sorry Bob, not your explanation, mine :palm:
-
It is not a "cute trick" used to compare the two devices. They are uniquely coupled in what they do. That actually is deliberate. If it were not true, we would use a different pair of parts for which is was true. The way in which they are coupled is what makes the math work. Understanding that coupling is a very fundamental part of circuit theory. Without it, the rest of the analysis should not / would not / could not make sense.
I see that, I can see the mirroring of their properties in the equations, I see how they complement each other and work as LC filters or oscillators, and that if the math were different then those mechanisms wouldn't work.
I was just making a subjective observation about how they're explained. Maybe I should have said that a capacitor seems (to me) more like a battery and therefore more intuitive to understand (to me).
Ok, I retract the observation. I see what you mean about having to understand both as energy storage devices in order to understand how power circuits work. I really did come away from this with a better understanding of their interaction. Thanks for your patience.
-
Ball inside bucket, bucket attached to string; someone is holding the string in their hand and making the whole thing turn around, either horizontally or vertically.
Now the ball isn't moving with respect to the bucket, but the bucket is moving. If you push the ball outside the bucket there is energy stored in the ball that can be captured, like the ball on the shelf falling off due to gravity, only this way due to centripetal force.
They're both moving with respect to the earth, but not with respect to each other. In this case kinetic energy is stored. If you release the string they'll both take off linearly, and due to drag and gravity form a ballistic trajectory. If the ball falls out of the bucket while it's still spinning the same thing will happen to it. Both the ball and the bucket could also store some energy as angular momentum, but going on the description it probably wouldn't be much.
While flying through the air, friction would cause some of the surface of the objects to heat up slightly compared to when they started their journey, and if the air is dry enough the objects could also pick up some static electricity.
So now you have a ball that's storing linear momentum, angular momentum, heat, static electricity, and probably some elastic deformation. In spite of all this, I still wouldn't describe either the ball or the bucket as an energy storage device. I retracted my observation about the inductor, but I'm going to stand firm on this one: A ball should not be classified as a "practical energy storage device" :)
-
Ball inside bucket, bucket attached to string, someone is holding the string in their hand and making the whole thing spin either horizontally or vertically.
Now the ball isn't moving with respect to the bucket, but the bucket is moving. If you push the ball outside the bucket there is energy stored in the ball that can be captured - like the ball on the shelf falling off due to gravity, this time due to centripetal force.
They're both moving with respect to the earth, but not with respect to each other. In this case kinetic energy is stored. If you release the string they'll both take off linearly, and due to drag and gravity form a ballistic trajectory. If the ball falls out of the bucket while it's still spinning the same thing will happen to it. Both the ball and the bucket could also store some energy as angular momentum, but going on the description it probably wouldn't be much.
While flying through the air, friction would cause some of the surface of the objects to heat up slightly compared to when they started their journey, and if the air is dry enough the objects could also pick up some static electricity.
So now you have a ball that's storing linear momentum, angular momentum, heat, static electricity, and probably some elastic deformation. In spite of all this, I still wouldn't describe either the ball or the bucket as an energy storage device. I retracted my observation about the inductor, but I'm going to stand firm on this one: A ball should not be classified as a "practical energy storage device" :)
Why not? Carry a ball up a hill. You are providing kinetic energy to get it there and, once there, it has some potential energy which is stored with essentially NO leakage. Should the ball roll down the hill, the potential energy is converted back to kinetic energy.
It all ties back to Physics 101... http://www.diffen.com/difference/Kinetic_Energy_vs_Potential_Energy (http://www.diffen.com/difference/Kinetic_Energy_vs_Potential_Energy)
-
Sorry, I got sidetracked by the argument and forgot about the original question: Under what conditions, if any, could a buck converter output a higher voltage than its input voltage?
I'll copy-paste this from one of my previous posts:
In a buck converter circuit, suppose these 3 conditions occur:
1) the regulation circuitry fails (while the mosfet is off)
2) the load is almost zero
3) the output capacitor is (potentially) too small
Now we're left with the inductor, the freewheeling diode, and the output capacitor. For the sake of this example let's say there's nowhere for the current to flow back "upstream" through the switching mosfet. Is it then possible for that circuit to output a voltage higher than the input voltage? I think the answer is yes, especially if the output capacitor isn't sufficiently large, thereby filling up before the inductor has completely discharged.
Suppose I'm trying to prevent this exact scenario from happening using only passives. Would adding more capacitance to the output along with a bleed resistor help? Would adding a few very fast-response capacitors (like ceramics) in parallel with the output capacitor help "capture" the high voltage spike?
-
The language "current collapses" is silly, I suppose true enough for an inductor that is hard switched with no other load (like a mechanical switch, sparking a relay or solenoid coil), but doesn't help much for SMPS where the voltage across the inductor is always fixed and stable and well defined.
That is, when the switch turns off (for a buck or boost design, for example), the inductor voltage falls and reverses, and the current discharges (for the buck, the switch and discharge current both flow into the output, so the current is continuous; for the boost, the input current is continuous and the discharge current is intermittent).
It's very important to understand: inductors do not generate dangerous voltages, at least not all on their own. Nor do they ever carry or deliver any more current than you've switched into them. When a switched inductor is commutated (i.e., switched from a 'charge' to 'discharge' state, or vice versa), it acts like a constant current source.
Indeed, the ultimate truth about SMPS control is that the inductor is a current source. The most reliable designs control this current first, and vary it to regulate voltage. By controlling the current first, you can keep it well within limits: there's no such thing as a short circuit load to a circuit like this, because a short circuit simply causes it to deliver maximum current, which is controlled by design.
Tim
-
The language "current collapses" is silly, I suppose true enough for an inductor that is hard switched with no other load (like a mechanical switch, sparking a relay or solenoid coil), but doesn't help much for SMPS where the voltage across the inductor is always fixed and stable and well defined.
That is, when the switch turns off (for a buck or boost design, for example), the inductor voltage falls and reverses, and the current discharges (for the buck, the switch and discharge current both flow into the output, so the current is continuous; for the boost, the input current is continuous and the discharge current is intermittent).
It's very important to understand: inductors do not generate dangerous voltages, at least not all on their own. Nor do they ever carry or deliver any more current than you've switched into them. When a switched inductor is commutated (i.e., switched from a 'charge' to 'discharge' state, or vice versa), it acts like a constant current source.
But when an inductor is partially charged in a buck circuit, and the switch is turned off, the output current can be higher than the input current, right? I thought this was the result of the magnetic field inducing current under different conditions than the ones in which it was charged.
Indeed, the ultimate truth about SMPS control is that the inductor is a current source. The most reliable designs control this current first, and vary it to regulate voltage. By controlling the current first, you can keep it well within limits: there's no such thing as a short circuit load to a circuit like this, because a short circuit simply causes it to deliver maximum current, which is controlled by design.
I meant what would happen if the control circuitry failed mid-operation, while the high-side switch was off. You're left with an inductor, a diode (or switch) and a capacitor, in series, with the output path for the circuit being the terminals of the capacitor. The inductor could be partially or fully charged (most likely not saturated). If the load resistance is too high, couldn't this condition induce a higher voltage (across the capacitor) than the one that flowed into the circuit?
-
But when an inductor is partially charged in a buck circuit, and the switch is turned off, the output current can be higher than the input current, right? I thought this was the result of the magnetic field inducing current under different conditions than the ones in which it was charged.
Say you have a 12V, 1A input and a 4V, 3A output.
The switch isn't charging to 1A, it's charging to (slightly more than) 3A.
The 3A is intermittent (33% duty), so averages to 1A. The switch has to work hard (peak = 3 x average current), but not for all that long.
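The duty-cycle arithmetic above can be sanity-checked with a few lines of Python (a sketch; the ideal-lossless assumption, i.e. P_in = P_out, is mine, but it is how these example numbers were chosen):

```python
# Power balance for the example: an ideal (lossless) buck converter,
# where the duty cycle is simply Vout/Vin and P_in = P_out.
v_in, v_out = 12.0, 4.0   # volts
i_out = 3.0               # amps, continuous output current

duty = v_out / v_in        # ~33% switch on-time
i_in_avg = i_out * duty    # the intermittent 3A averages to 1A at the input
p_in = v_in * i_in_avg     # 12 W
p_out = v_out * i_out      # 12 W

print(duty, i_in_avg, p_in, p_out)
```

The peak-versus-average distinction falls straight out: the switch carries 3A while it's on, but only for a third of the cycle.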
I meant what would happen if the control circuitry failed mid-operation, while the high-side switch was off. You're left with an inductor, a diode (or switch) and a capacitor, in series, with the output path for the circuit being the terminals of the capacitor. The inductor could be partially or fully charged (most likely not saturated). If the load resistance is too high, couldn't this condition induce a higher voltage (across the capacitor) than the one that flowed into the circuit?
If the load suddenly goes from full load to open, the worst case overshoot is also a design parameter.
In fact, it comes from one of my favorite ratios:
Z = sqrt(L/C)
which if you read many of my posts, may be familiar. :)
If the inductor is charged to 3A, and the inductor is 10uH and the capacitor is 10uF, then the overshoot will be delta V = (3A) * sqrt(10uH/10uF) = 3V.
The overshoot can be much more if the control circuit responds slowly, which is usually the case for a voltage regulating circuit, regardless of whether it's current-mode or voltage-mode at heart.
No need for fault conditions at all, this is simply part of normal, expected operation.
Switching converters often use quite a lot of load capacitance for this reason, to keep the filter impedance low and therefore the overshoot low. Some also require that the filter's response have a low cutoff (some kHz) and be well damped (Z < capacitor ESR, taking advantage of the lossy properties of electrolytics), which is typical of voltage mode converters (like the archaic TL494); the ESR provides a zero in the loop response, keeping it stable.
Tim
-
Say you have a 12V, 1A input and a 4V, 3A output.
The switch isn't charging to 1A, it's charging to (slightly more than) 3A.
The 3A is intermittent (33% duty), so averages to 1A. The switch has to work hard (peak = 3 x average current), but not for all that long.
So the switch is completely on for ~33% of the time, during which it lets 3A flow into the inductor, while the rest of the time it's completely off. Ideally (but not in practice), all of this current charges the magnetic field, the inductor never saturates so it never shorts the switch to the output directly, right?
So that means that the input into the inductor is 3A (at 12V) for ~33% of the time, and the output is an average of 3A (at 4V) continuous (and this output is smoothed out by the output capacitor). So the current does "develop" in the inductor, which, as you mentioned previously, becomes a current-source during the time the switch is off.
So when the switch is on, the inductor resists by developing a magnetic field in order to keep the current across it constant (creating a higher-impedance state than just the resistance of the coil), and when the switch is off, the inductor again resists the change in current by using the magnetic field to induce current in the coil.
Did I get that right?
If so, does it mean that, in this example circuit, the "number of charged particles" going into the inductor is lower than the number of charged particles coming out of it? Or is this the wrong way to think about it?
If the load suddenly goes from full load to open, the worst case overshoot is also a design parameter.
In fact, it comes from one of my favorite ratios:
Z = sqrt(L/C)
which if you read many of my posts, may be familiar. :)
If the inductor is charged to 3A, and the inductor is 10uH and the capacitor is 10uF, then the overshoot will be delta V = (3A) * sqrt(10uH/10uF) = 3V.
The overshoot can be much more if the control circuit responds slowly, which is usually the case for a voltage regulating circuit, regardless of whether it's current-mode or voltage-mode at heart.
No need for fault conditions at all, this is simply part of normal, expected operation.
Switching converters often use quite a lot of load capacitance for this reason, to keep the filter impedance low and therefore the overshoot low. Some also require that the filter's response have a low cutoff (some kHz) and be well damped (Z < capacitor ESR, taking advantage of the lossy properties of electrolytics), which is typical of voltage mode converters (like the archaic TL494); the ESR provides a zero in the loop response, keeping it stable.
Ok, so let's say I messed up the design and chose a capacitor that was too small for the job, say 1uF instead of 10uF:
(3A) * sqrt(10uH/1uF) = ~9.486V
Now I have over 9 volts at the output, which is potentially higher than the input voltage? If so, I can prevent this from happening by always choosing a higher-capacity capacitor than the equation dictates?
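The two capacitor choices can be compared directly with the delta V = I * sqrt(L/C) estimate quoted above (a quick sketch; the 3A and the component values are just the example numbers from this thread):

```python
import math

# Load-dump overshoot estimate dV = I * sqrt(L/C),
# for the original 10uF capacitor and the undersized 1uF one.
i_pk = 3.0   # amps stored in the inductor at the moment the load disappears
l = 10e-6    # henries

for c in (10e-6, 1e-6):
    dv = i_pk * math.sqrt(l / c)
    print(f"C = {c * 1e6:.0f} uF -> overshoot ~ {dv:.3f} V")  # 3.000 V, then 9.487 V
```

Shrinking C by 10x only grows the overshoot by sqrt(10), but that's still enough to triple it here.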
-
Since you are asking about altering the design parameters, why don't you just model the circuit in LTSpice and play with it. Many buck/boost devices are included in the library and datasheets are certainly available. Then, instead of just wandering around trying to guess what might happen, you can put an electronic switch in the circuit and activate it at some value of t.
The LT3570 was the first buck converter I came up with and it is definitely in the library. Just tack on the external parts and run a transient analysis.
-
Since you are asking about altering the design parameters, why don't you just model the circuit in LTSpice and play with it. Many buck/boost devices are included in the library and datasheets are certainly available. Then, instead of just wandering around trying to guess what might happen, you can put an electronic switch in the circuit and activate it at some value of t.
To the contrary, I'm trying *not* to guess, I'm trying to understand the physics.
I will do simulations, I'll also wire up circuits and do hands-on tests, but I'm trying to get some of the fundamental behavior down. Previously I just changed parameters and observed the result, and as a consequence I still don't have a good grasp on what led to those results.
-
So the switch is completely on for ~33% of the time, during which it lets 3A flow into the inductor, while the rest of the time it's completely off. Ideally (but not in practice), all of this current charges the magnetic field, the inductor never saturates so it never shorts the switch to the output directly, right?
Right. Also, more precisely, since the inductance is not infinite, the current is always increasing or decreasing, along a ramp waveform. The peak switch current (at turn-off) is higher than the average output current, by the amount of ripple it's designed for. Likewise, the current when the switch turns on is lower by the same amount (or zero, for a discontinuous current mode design, or it can also be negative for a synchronous converter).
Of course, the average (while the switch is on) is still equal to 3A, and over a cycle, or for all time, the input and output averages are 1A and 3A.
And also, I'm using a slightly different meaning of 'charge'. Namely: when you charge, say, a balloon by rubbing it against your hair, you are charging its capacitance. There are many kinds of charge, not just electric charge (particle physicists work with charges and currents of many types, which arise in the study of high-energy particles). In the context of capacitors and inductors, I'm meaning: to charge, is to increase the value of that component's stored quantity. The capacitor stores electric charge; the inductor stores magnetic charge.
Physically, magnetic charge is the movement of electrons, a rate or velocity, not a count. A superconductor retains a current, with zero resistance, for essentially unlimited time; just as the capacitor's charge is quantized by the electron charge, the superconductor's current is quantized by the flux quantum (a related quantity).
Also, always remember that a small, two-terminal (one port) component must always conserve current. You don't charge a capacitor by applying an unbalanced current; current flows in one side, and out the other. (How it crosses the dielectric doesn't matter in the least; it's just two pins of a component carrying some current! But if you like, the term is displacement current.)
So when the switch is on, the inductor resists by developing a magnetic field in order to keep the current across it constant (creating a higher-impedance state than just the resistance of the coil), and when the switch is off, the inductor again resists the change in current by using the magnetic field to induce current in the coil.
Did I get that right?
Impedance isn't quite correct, but it's close. We can analyze that more closely, too.
The waveform at the switch is +12V for 33% of the cycle, 0V for 67%. This, of course, averages 4V (as it must, as the average voltage across an inductance is zero*).
*Nonzero for a real inductor, but a real inductor isn't a pure inductance, it has series resistance too (among other properties).
If we subtract the average component, we're left with a squarewave that goes above and below zero (8V / 33%, -4V / 67%). This is pure AC (it has no DC), and so can be decomposed further into sine wave components. (Well, technically we already removed one of the cos(t) components -- it just happens to be cos(0*t), so it's the DC term. Duh. :P )
The reason you'd decompose it into sine waves, is because voltage, current and impedance are only valid -- under customary definitions and uses -- for steady state sine wave conditions.
We can assert that we'll simply redefine those for a square waveform, and forge ahead bravely! But we hit a snag, because the input voltage is a square wave, while the current output is a triangle wave. So whatever it is we're talking about, we can't talk about it in consistent terms, because the input and output are different. We could go further and say we're only going to talk about impedance of an inductor, for these particular waveforms, but even that isn't very useful, because it still depends on duty cycle; there's just no good to be found this way.
That said:
The fundamental component is very close to the amplitude of the square wave (i.e., between 4 and 8V, at whatever the frequency is), and the harmonics are filtered by the inductor as 1/N, so the 3rd harmonic is 1/3 of this, and so on. So the current won't be much more than I = V / (2*pi*F*L), and that's mainly because it looks triangular, so the harmonics are building up those peaks, which will be a little higher than the true fundamental sine wave component alone would give. Which is perfectly reasonable, and good enough to hand-wave through.
If this converter runs at 100kHz, then the reactance is 2*pi*(10uH)*(0.1MHz) = (u and M cancel) = 6.3 ohms, and for about 6V in, we have about 1A of ripple.
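In code form, the hand-wave above looks like this (a sketch; the 6V "fundamental amplitude" is just the ballpark figure from the previous paragraph, not an exact Fourier coefficient):

```python
import math

# Frequency-domain ripple estimate: treat the fundamental of the switch
# waveform as a sine and divide by the inductor's reactance at F_sw.
v_fund = 6.0     # volts, ballpark fundamental amplitude (between 4 and 8 V)
f_sw = 100e3     # hertz, switching frequency
l = 10e-6        # henries

x_l = 2 * math.pi * f_sw * l   # inductive reactance, ~6.3 ohms
i_ripple = v_fund / x_l        # just under 1 A of ripple
print(x_l, i_ripple)
```

Good enough for a first guess, with the caveats above about the harmonics sharpening the triangle peaks.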
Is there a better way? Absolutely! In fact, square waves are absolutely ideal for a time-domain approach.
Instead of decomposing it into frequencies, just take it one step at a time, and use calculus to solve it for you.
The fundamental equation of the inductor is:
V = L * dI/dt
'd/dt' means derivative, but what's dI/dt of a triangle wave? It's only two things, up or down, by some fixed amount each! How much is the V? We already did that arithmetic: 8 or -4. If dt is 3.33 and 6.67us for each state of the switch, then dI becomes delta I, and we can thank calculus for its service as we slide along these straight lines, without having to crank difficult functions through an integrator.
Namely, I(pk-pk) is the change in current delivered while the switch is only on (or only off, as the case may be), which is (8V) * (3.33us) / (10uH) = 2.67A. The peak is half this, or 1.33A, which is the amount above and below the average. That is, the inductor current varies from 1.67 to 4.33A over a cycle. (Which means the current is continuous, i.e., CCM, which was about what I intended. Not bad for pulled-out-of-the-ass numbers. :) )
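The same time-domain arithmetic, written out (numbers from the example above; the volt-second balance check between the on and off segments is implied by the post):

```python
# Time-domain ripple: V = L * dI/dt along straight ramp segments,
# so over each segment the current changes by dI = V * dt / L.
v_on, t_on = 8.0, 3.33e-6      # inductor voltage and duration, switch on
v_off, t_off = -4.0, 6.67e-6   # switch off
l = 10e-6                      # henries
i_avg = 3.0                    # amps, average inductor current

di_on = v_on * t_on / l        # ~+2.67 A: the peak-to-peak ripple
di_off = v_off * t_off / l     # ~-2.67 A: volt-second balance, the ramp returns
i_min = i_avg - di_on / 2      # ~1.67 A
i_max = i_avg + di_on / 2      # ~4.33 A
print(di_on, i_min, i_max)
```

Since i_min stays above zero, the current is continuous (CCM), matching the conclusion above.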
If so, does it mean that, in this example circuit, the "number of charged particles" going into the inductor is lower than the number of charged particles coming out of it? Or is this the wrong way to think about it?
Like I said before, rate, not number; and everything entering one pin, exits the other, necessarily. ;)
For a capacitor, the "thing" entering one pin need not be precisely the same (in a certain sense) as what's exiting the other -- however, particle physics assures us that electrons are identical and indistinguishable, so.... how could you know? :)
(When they aren't the same, we can do neat things. In electrochemistry, we can produce ions at one electrode, and deposit completely different ions at the other!)
Ok, so let's say I messed up the design and chose a capacitor that was too small for the job, say 1uF instead of 10uF:
(3A) * sqrt(10uH/1uF) = ~9.486V
Now I have over 9 volts at the output, which is potentially higher than the input voltage? If so, I can prevent this from happening by always choosing a higher-capacity capacitor than the equation dictates?
Yes- in fact, such a circuit (with an "overly small" capacitor) can be used as a resonant converter. The inductor and capacitor are series resonant, and the voltage multiplication is given by the Q factor, which can be modest (2-10?) for commercially available components and reasonable efficiency.
I've misled a bit with the equations, because those would suggest 9000uF would be required, which is ridiculous. Fact is, the capacitor is already charged; the impedance ratio works if it's starting from zero (which is true of inductors and capacitors resonating, say if you want to convert the peak current (at Vc = 0) to a peak voltage (at I_L = 0)), but we need to account for the initial voltage here.
The equation here is,
dV = (1/2) L Ipk^2 / (CV)
This is calculus-approximated, so assumes dV << V. Your tiny capacitor example would obviously fail this, :) but practical SMPS cases should fit nicely.
Note that it contains the term (L/C), i.e., Zo^2. We could rewrite it as,
dV = (1/2) * Zo * Ipk * (Zo / Zsw)
Where Zo = sqrt(L/C) and Zsw = Vo / Ipk. Kind of a weird impedance, taking the output voltage and the peak inductor current (two things that don't really come together), but it's close to the minimum load resistance (i.e., (Vout / Iout), for nominal / maximum output current). Perhaps less mysteriously: if we ignore peak current, or make the inductor very large (so that ripple is small), this is basically Iavg, which is Iout.
If we have 10uH and 100uF, then Zo ~= 0.316 ohm, and dV ~= 0.11V, which is a modest bump on a 4V output and makes for a reasonable example.
And again, that's assuming the controller stops immediately, which isn't the case, but can be close in practice.
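Plugging the example numbers into the energy-balance formula above gives (a quick numerical check, under the same stop-instantly assumption):

```python
import math

# dV = (1/2) * L * Ipk^2 / (C * V): the inductor's stored energy dumped
# into an already-charged capacitor only nudges its voltage up.
l, c = 10e-6, 100e-6   # henries, farads
i_pk, v = 3.0, 4.0     # amps in the inductor, volts already on the capacitor

z0 = math.sqrt(l / c)                 # characteristic impedance, ~0.32 ohm
dv = 0.5 * l * i_pk**2 / (c * v)      # ~0.11 V overshoot
print(z0, dv)
```

The dV << V approximation clearly holds here (0.11V on 4V), so the calculus shortcut is justified for this case.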
Tim
-
Tim:
Thank you so much for all this detail. Some of it I was able to comprehend, but most of it I'll have to save for later after I read more about LC circuit behavior. I'll also have to relearn calculus properly (it's been a decade), I'm clearly not going to comprehend this otherwise. At least now I understand some of the physical characteristics of the circuit that prevent voltage spikes from exiting the LC circuit, aside from the regulatory circuitry.
There is one thing that keeps nagging at me whenever I look at the formulas used to figure out the optimal component values for a given frequency and Vin/Vout ratios: they all deal with the behavior of the circuit over a (comparatively) "long time" (kilo-cycles).
As I understand it, the control circuit can only control the duty cycle. Some newer ICs mention dynamically altering the frequency, but that's rare, and the ones that do will only tweak it a bit. The information that the controller has to work with (apart from the input voltage) is either just the output voltage, or the output voltage along with a voltage drop over a sense resistor. I understand that the most economical way, in terms of circuit complexity, is to control the duty cycle.
But: what if you were able to -- A) gather *way* more information from across the circuit (the voltage at almost every junction) and B) use extremely fast op-amps and comparators. Couldn't you control the circuit cycle-by-cycle? Rather than matching a particular frequency to an inductance, you'd manage the behavior of the switch (or switches, if this is a synchronous version) throughout the cycle itself. You'd turn on the switch, monitor the voltage at the input and the output of the inductor, and turn the switch off when it got to the optimal point for the load current measured at that cycle.
Or, rather than a network of analog devices, use an MCU instead? This is probably massive overkill, but we already have software-defined radio, why not software-defined SMPS control?
Since the switching frequency of a modern SMPS controller can reach 1MHz (or more if it's an integrated switch and the inductor is practically glued to the IC), I expect the reason it's not done is because you'd need a $20 MCU (maybe a Cortex-M7) with very fast and expensive ADCs to be able to respond that fast. You could even factor in temperature information and employ some basic learning algorithms.
It would be prohibitively expensive, but would it be possible in theory?
-
There is one thing that keeps nagging at me whenever I look at the formulas used to figure out the optimal component values for a given frequency and Vin/Vout ratios: they all deal with the behavior of the circuit over a (comparatively) "long time" (kilo-cycles).
Yes! For the voltage control, or the averages where you only need to look at ripple and DC volts/amps sorts of things.
As I understand it, the control circuit can only control the duty cycle.
Only for a very basic, textbook example. But this is a terrible example. As textbooks often do...
The result is always to control the duty cycle... "but that's not why we do it", as Feynman said. (If you don't know that quote, look it up. The original context is more amusing.. ;) )
We do it to control the inductor current. The inductor's state variable is its current. Just as the state of a capacitor is its voltage. The current varies over time, rising when the switch is on, and falling when off (or ringing around zero, as the case may be).
We can generate "PWM" not by comparing some control voltage to some ramp waveform, but by using the inductor itself as the ramp: measuring its current and using a comparator to decide when to "cut" the ramp, thus generating pulses of variable width (and perhaps variable frequency as well).
This is the operating method of peak current mode controllers, like the classic UC3842 series. There is no PWM comparator, no "555 timer" structure. There is only a latch, which is clocked to begin a cycle (the clock transitions, turning the switch on, and it stays on forever, until..), which turns itself off when the inductor current reaches the threshold. Okay fine, the clock source is an oscillator, but there's no requirement that such a circuit must operate at constant frequency. It's just done that way for convenience!
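The set-clock / reset-comparator structure described above can be sketched as a toy simulation (hypothetical component values and a simple PI outer voltage loop of my own choosing; this illustrates the latch behavior, it is not a model of the UC3842 itself):

```python
# Toy peak-current-mode buck: a clock edge sets the latch (switch on);
# a comparator resets it when inductor current hits the commanded peak.
# Ideal switch, ideal freewheel diode, ideal components throughout.
VIN, VREF = 12.0, 4.0     # input voltage, regulation target
L, C = 10e-6, 100e-6      # henries, farads
RLOAD = 4.0 / 3.0         # ohms: draws 3 A at 4 V
F_CLK = 100e3             # fixed-frequency clock (for convenience, as noted)
DT = 10e-9                # simulation time step
T_END = 10e-3

iL = vC = integ = 0.0
switch_on = False
t = next_clock = 0.0

while t < T_END:
    if t >= next_clock:              # clock edge: latch set, switch turns on
        switch_on = True
        next_clock += 1.0 / F_CLK
    # Outer voltage loop (PI, arbitrary gains) sets the peak-current threshold.
    err = VREF - vC
    integ += err * DT
    i_peak_cmd = min(max(10.0 * err + 5000.0 * integ, 0.0), 6.0)
    if switch_on and iL >= i_peak_cmd:
        switch_on = False            # comparator trips: latch reset, switch off
    v_sw = VIN if switch_on else 0.0              # freewheel diode clamps to 0 V
    iL = max(iL + (v_sw - vC) / L * DT, 0.0)      # diode blocks reverse current
    vC += (iL - vC / RLOAD) / C * DT
    t += DT

print(round(vC, 2))   # settles near VREF
```

Note there is no PWM comparator anywhere: the duty cycle emerges from where the inductor-current ramp meets the threshold, and a short to the output would simply pin the current at the 6A clamp.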
Some newer ICs mention dynamically altering the frequency, but that's rare, and the ones that do will only tweak it a bit. The information that the controller has to work with (apart from the input voltage) is either just the output voltage, or the output voltage along with a voltage drop over a sense resistor. I understand that the most economical way, in terms of circuit complexity, is to control the duty cycle.
I don't think it's that uncommon.
I do think there are hints of "fuck it, we'll solve it in digital" in many designs, though. I digress:
For example, TI's Eco-mode series boasts high efficiency at low load currents, because the switching frequency is reduced.
Stepwise.
Why, for the sake of all things that are good and holy in the analog world, would you reduce frequency by halves?!
Probably because some jerk tossed a series of flip-flops (with enables) into the design and said "fuck it, it works".
Stuff like this bugs me. It works, but the fundamental design procedure shows utter disregard for any concept of understanding the problem!
...And yes, I admit such an opinion is somewhat bizarre, absurd even, in the context of PULSE WIDTH MODULATION, the very heart of which is about switching things on and off!
But the heart of the matter should be, utilize that one nonlinearity to get the conversion efficiency, and do everything else as smoothly as possible. Continuous functions, smooth analog action. Nothing impossible about that, or even, very hard, really.
(The practical implication of stepwise frequency changes, is discontinuous shifts -- increases -- in the amount of output ripple, at lighter and lighter loads. It's unlikely you'll have a load that's concerned about the derivative, the change in ripple versus I_load, so I have little practical excuse to complain about, but trying to comprehensively analyze such a system would be so complicated as to be absurd. A complexity that is unnecessary.)
It's particularly silly when nine transistors can solve the problem more elegantly (of course, not nearly as efficiently, or with all the other bells and whistles that you get with a monolithic controller/regulator):
http://seventransistorlabs.com/Images/Discrete_Tube_Supply.png (http://seventransistorlabs.com/Images/Discrete_Tube_Supply.png)
This circuit is, in a sense, based on the UC3842 (if not recognizably so!), except that oscillator frequency is varied proportionally with "throttle" (the amount of power demanded by the optoisolator; the TL431 is the voltage regulating error amplifier here). This gives better efficiency, ripple and stability than running it flat out, and the frequency-variable tweak simply comes for free (the 2.2k feeding the 680 ohm).
But anyway, I digress... ;)
But: what if you were able to -- A) gather *way* more information from across the circuit (the voltage at almost every junction) and B) use extremely fast op-amps and comparators. Couldn't you control the circuit cycle-by-cycle? Rather than matching a particular frequency to an inductance, you'd manage the behavior of the switch (or switches, if this is a synchronous version) throughout the cycle itself. You'd turn on the switch, monitor the voltage at the input and the output of the inductor, and turn the switch off when it got to the optimal point for the load current measured at that cycle.
I'll tell you a story instead, for this one.
At a previous place of employment, my duty was to design the electronics (and most of the brains, though I didn't have to write it all myself, as a software guy did the programming, which worked out nicely I think). Specifically, a high frequency induction heater: up to 400kHz, models from 5-50kW capacity. Now, an induction heater is simply a resonant power supply, except the frequency might need to cover a wide range, and there's no rectifier and filter (as in a resonant SMPS), it just goes into making a hunk of steel glow ever brighter...
My reference, the "spec" if you will (though nothing was ever so formal as a "spec" there), was to control the resonant circuit as fast as every quarter cycle. The supposition being, about every 1/4 cycle, you can tell something usefully new about the system, like a zero crossing, or a peak (where the derivative -- slope versus time -- goes to zero).
That should already make your hair stand on end, because a derivative-based control method is practically doomed to failure. When you take the derivative of a real signal, you're doing the opposite of an integrator: +20dB/dec of gain, boosting a huge bandwidth of pure noise at high frequencies!
The other option isn't much better. Suppose you're sampling the inverter current waveform in this resonant power supply, which is mostly sinusoidal. At what point does it cross zero? If you simply compare the samples to zero, then around the zero crossing, you will get a range of values which are near enough to zero that random noise will push some samples above and below -- you get multiple zero crossings for each pass.
How do you deal with this?
The deeper question, though: how much data are you actually basing that decision on?
Only the few samples nearest the zero crossing.
Indeed, a zero-and-peak detection method ignores as much of the waveform as possible, trying to find the one or two samples where the requested event actually occurred. Which of course, in a noisy environment, is utterly impossible.
So you're left contemplating, okay, how can we include more data?
Zero crossings might be smoothed out with a sliding average filter, or a line segment best-fit that spans more samples around the zero. But both of these necessarily introduce delay. You can't make a decision about the slope of a line until you've got all the points to compose that line!
A more traditional signals method is simply to filter the signal (bandpass or lowpass, say). This removes high frequency noise, which makes the derivative better behaved (and making zero crossings cleaner). But how many samples does a filter operate over?* How much time delay does a filter introduce?
*This is a twofold question, because in principle, if it operates over more data, aha--that's more information about the system. More information is a good thing. But that also necessarily introduces delay, perhaps several fractions of a cycle, perhaps whole cycles!
So, as I worked on the project and talked it over with my supervisor (who was also my boss, who was also everyone's boss... warning sign?), I had to convince him... gently at first, then, after he didn't get it for the first three months, more and more intently and frankly...
(As any infrequent reader of my posts should recognize, I'm not one for mincing words. I'm frank and to the point. I don't mean to be rude, and if I come off that way, I apologize. Sadly, a lot of people don't know how to receive constructive criticism, and mistake it as personal instead.)
(This comes to mind, as, in the course of this job, I, shall we say, got rather more experience in just how direct I can say things to certain people...)
Anyway, the point I had to make, was threefold:
1. You can't make a stateless controller (i.e., one that makes snap decisions without prior knowledge or memory), for a system that is stateful. Like, a system that responds after tens of cycles of control influence. Or, well, you can, in a sense... it's just that, those systems are called "oscillators".
2. It is MEANINGLESS to control a resonant system so quickly. For a tank which resonates at 100kHz with a Q of 10 (a Q factor which is on the low side of the kinds of problems we had to accommodate), the response to a step change of input power (at the resonant frequency) will take on the order of Q/F seconds, or 100us, to change. That is, the amplitude envelope will have grown, exponentially, by about a time constant (63% of the way from initial to final) in that time. If our control loop responds a few times faster than this, we won't give two shits about being any faster.
Indeed, a higher sampling rate, say for regulating the amplitude of the waveform, will only encounter rounding errors due to the overly small timesteps (and thus need to keep track of more bits in the DSP registers), and greater errors due to the poorly filtered ripple that results from the amplitude measuring process, usually an active rectifier or RMS converter.
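The order-of-magnitude arithmetic in point 2, spelled out (nothing here beyond what's stated above):

```python
import math

Q, F = 10, 100e3              # tank Q and resonant frequency, per point 2
tau = Q / F                   # envelope time constant, order of magnitude
print(tau)                    # 1e-4 s, i.e. 100 us

# after one time constant the envelope has covered ~63% of a step change
progress = 1 - math.exp(-1)   # ~0.632
```

A control loop a few times faster than tau captures everything there is to capture; sampling every quarter cycle (2.5 us) is 40x faster than the physics can respond.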
3. The entire, fundamental point of using a resonant system, is that the resonator contains more energy than we can possibly deliver within a cycle, or indeed, within about Q cycles! If this were not the case, then the inverter would literally output so much power that it can man-handle the coil directly, and we wouldn't need a resonant capacitor at all!
Which, yes... can technically be done, but it's very expensive to do that, because even for a low Q of 10, you need 10 times the inverter capacity, 100kVA of inverter for 10kW of output!
So, given that management direction, I did the best I could do, and forged ahead with my own interpretation of the "spec": to control it as fast as feasible.
The software guy (who joined that company a bit earlier than I did) had spent a little time trying to implement the "spec" as literally as possible (with zero crossings and such), and, crude though it was, it failed as miserably as I expected (much to the confusion of boss-man). The very first instruction I gave him, was to write a system which computes the process inputs (just the basics, like voltage, current and frequency) during the course of one cycle, and to implement a feedback loop using those process variables (which are therefore sampled at the operating frequency, or I think there was an every-other-cycle factor in there for some reason, I don't remember).
The heart of the system was just a garden variety PID loop. It was a bitch to compensate for all possible loads (a Q of 5, i.e., a time constant of ~5 cycles, is surely overdamped, while a Q of 50 might be questionable under some conditions, perhaps?), and so definitely started up much slower than it could've. IIRC, the startup transient took on the order of 30ms with an average load; this after I had foolishly suggested perhaps 1ms would be possible (still a far cry from boss-man's hypothetical ideal of "a few cycles").
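For flavor, here's a minimal sketch of the once-per-cycle loop idea — a garden variety discrete PID against a first-order plant standing in for the tank's amplitude envelope. The gains, the plant constant, and the structure are all illustrative, not the actual compensation from that product:

```python
# Toy once-per-cycle PID. "y" stands in for the tank amplitude envelope,
# modeled as a first-order lag of ~tau_cycles cycles (like a modest Q).
# All constants are made up for illustration.
def run_pid(kp, ki, kd, setpoint, tau_cycles=10, n=200):
    y = 0.0                      # plant output (envelope amplitude)
    integ = 0.0                  # integrator state
    prev_err = setpoint - y
    alpha = 1.0 / tau_cycles     # plant pole, in per-cycle units
    for _ in range(n):
        err = setpoint - y
        integ += err
        deriv = err - prev_err   # note: noisy in real life, per the rant!
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += alpha * (u - y)     # one cycle of plant response
    return y

final = run_pid(kp=0.5, ki=0.05, kd=0.0, setpoint=1.0)
```

With these gains the loop settles to the setpoint within a couple hundred cycles; crank kd up against real, noisy samples and you'd rediscover the derivative problem from the top of the post.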
I wouldn't mind developing a project to push below 1ms with such a system, but as it turns out, it was a rather silly requirement to begin with. No one uses a power supply that fast, so no one knows how to use one that fast. It's not enough power to do any kind of work with, over such short time scales, anyway. Our competitors' power supplies ranged from 300ms to 2s for the startup transient -- yes, you could literally hear them whistle* as they swept down to find the setpoint!
*Whistling due to the low operating frequency (3-10kHz) of those particular supplies. The control loop was so slow (~s time constant) that it was pretty unlikely to oscillate, at least. Talk about driving a nail with a boulder instead of a hammer...
Bonus pic: http://seventransistorlabs.com/Images/Tallboy_HotStuff.jpg (doesn't look to be anything proprietary in the pic, nor of sufficient resolution to tell. Most of what's pictured was obsoleted long ago, anyway.) This was the 25kW prototype. The steel cylinder is about 10cm across.
Anyway, returning to SMPS:
If you poke at a few key nodes, sure, you can measure inductor current. For a non-resonant system, this is a whole lot simpler!...
The key, as I said, is inductor current. If you have one monitor on that, you're doing almost as well as you possibly can. You might still want to watch input and output voltages, really more just to protect the switches, and "inform" the control circuit of what to expect.
Namely, the dI/dt of the inductor is proportional to voltage difference. Which means, at high input voltages, the inductor will be charging up a lot faster; you might want to increase switching frequency (assuming the increased switching losses don't start a fire..), or switch in* a different capacitor in the compensation loop, to accommodate the faster inverter section.
*This is embarrassingly bad terminology, and I write it only to make an example of it. In an analog control loop, of all places, you should never be switching timing components! This is just like the ripple-derivatives rant, except that the output (as well as the derivatives) may be discontinuous, if it's badly implemented.
Consider a controller where there's a capacitor in a feedback loop (normally, for a "type 1" (integrator) controller, it's a capacitor from output to -in of an op-amp). If you simply combine an analog switch with an additional capacitor, you'll find that, while the switch is open, the disconnected capacitor's voltage drifts randomly due to leakage. Even if it doesn't, you'll likely find that, when you switch it back in, the controller is no longer at the same setpoint (maybe it's commanding more or less power now), so the capacitor's charge difference sends a sudden jolt into the system! Kablammo, a huge spike of current or voltage, just when you don't want it, just when conditions are changing!
Unfortunately, variable capacitors of large size, with electronic control, are rather hard to come by, or to implement.
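To put rough numbers on the dI/dt point (component values picked purely for illustration):

```python
# v = L * di/dt: in a buck converter's on-time, the inductor ramps at
# (Vin - Vout) / L. Values here are illustrative, not from any design.
L = 100e-6                         # 100 uH inductor
v_out = 5.0
for v_in in (12.0, 48.0):
    didt = (v_in - v_out) / L      # on-time ramp rate, A/s
    print(v_in, didt)              # 12 V -> 70 kA/s; 48 V -> 430 kA/s
```

At 48V in, the current hits any given peak about six times faster than at 12V in -- hence the temptation to raise the switching frequency, or retune the compensation, as supply voltage climbs.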
Anyway, sure, you could use supply and output voltages in a controller, such as to change frequency or time constant.
And, as with my rant about controlling things overly quickly, you can avoid that, too. This works best with what's called average current control, which particularly shines when you want low ripple and high efficiency at low cost. In CCM (continuous conduction mode), you use a relatively large inductance, so the ripple is small, and current doesn't drop to zero (except under light load, where efficiency will stink anyway, so we don't mind).
You simply read the inductor current, filter it a little (to take the edge off the ripple, which is but a noise error to the controller), and close the loop on inductor current. Closing the loop slows things down a little, but not terrifically (it's already relatively slow, from the large value inductor). The big bonus is reducing errors, so that the inductance and supply voltage can vary over a wide range, and the resulting system (setpoint voltage input -- current flow output) is consistent with respect to gain and frequency. You also constrain the current flow, because current simply won't go any higher than whatever the input is commanded to reach. Set the gains right and "VCC" is nominal max. What could be simpler?
To regulate voltage with such a subsystem, you put another error amp outside, to control current setpoint. The greatest part: because you're controlling current, you have a known gain and frequency response into the filter capacitor. It's not some gnarly RLC resonant mess (as is the case for a voltage mode controller, where you try to control PWM based on output voltage alone -- the two pole filter, plus the necessary controller pole, makes for good oscillators and poor regulators!). You can compensate it perfectly for any sufficiently large capacitor -- independent of ESR.
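A toy model of that last point — with the inner loop regulating inductor current, the filter capacitor just sees a controlled current source, so the output is a clean single-pole RC response, not an RLC mess. All values here are illustrative:

```python
import math

# Inner current loop assumed ideal: the cap is simply charged by the
# commanded current i_set, against a resistive load R. Toy values.
C, R = 100e-6, 10.0
dt = 1e-6                           # one switching cycle
i_set = 0.5                         # amps, commanded by the outer error amp
v = 0.0
trace = []
for _ in range(5000):
    v += dt / C * (i_set - v / R)   # cap charged by controlled current
    trace.append(v)

v_final = i_set * R                 # settles at 5.0 V
tau = R * C                         # single time constant: 1 ms
```

After one time constant (1 ms) the output has covered ~63% of the step, exactly like an RC charging curve — a plant you can compensate with one pole, for any sufficiently large capacitor.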
Or, rather than a network of analog devices, use an MCU instead? This is probably massive overkill, but we already have software-controlled radio, why not software-controlled SMPS control?
Since the switching frequency of a modern SMPS controller can reach 1MHz (or more if it's an integrated switch and the inductor is practically glued to the IC), I expect the reason it's not done is because you'd need a $20 MCU (maybe a Cortex-M7) with very fast and expensive ADCs to be able to respond that fast. You could even factor in temperature information and employ some basic learning algorithms.
It would be prohibitively expensive, but would it be possible in theory?
Overkill? Not necessarily. My power supply story used a modest sized FPGA, at higher speeds than that. Well... I would've preferred to do that project largely analog, and much of the synthesized hardware would be easy enough to implement that way, but there are undeniable benefits to performing DSP (whether it's on an MCU, in FPGA fabric, or a DSP proper) for many kinds of signals and analyses.
And as "prohibition" goes-- if adding a $20 chip saves you $10 in the size of electrolytics, $2/ea for smaller transistors ran closer to ratings (let's say qty 6, in a VFD motor controller?), $5-20 on EMI filtering (carefully timed ZVS, harmonics reduction, spread spectrum, etc.), it's a clear winner -- and you have the future migration option of ASIC-ificating the design, if the product sells well, and then that $20 chip becomes $5 or less at qty 100,000+. :)
Tim
-
It feels a bit like someone's explaining the intricacies of jet propulsion and I'm asking "so it makes a big fire?".
I'm going to have to resort to metaphors so that if what I'm asking makes no sense it's easier to explain why the metaphor's wrong rather than using literal terms:
As I understand it, SMPS topologies rely on working "with" the natural behavior of the LC circuit, so they're like riding a fixie bicycle: You control when to press down on the pedal and when to release your leg and let momentum carry your foot, in a circular path, until it becomes useful to press down again. If you're on a flat plain, keeping a constant speed, you're not going to have to spend much energy, you just have to press lightly on occasion. However, you always have to be aware of *when* to press down, and when to allow momentum to do its thing. If your leg in the metaphor is the switch, you could, in theory, use only one leg to pedal (a pedal grinding wheel is an example of single-leg pedaling).
Then there's induction heating. Back to literal terms, here you're forcing the inductor (the object being heated) to change its magnetic polarity back and forth as hard and as fast as you can. You're not waiting for the iron/steel to naturally saturate, you're forcing it to one side and then the other, inducing current in the object (making it into a shorted single-winding coil) and forcing it to change direction as fast as possible to create heat.
So, using the terms of the bicycle metaphor, this time your goal isn't to move forward at all. Let's say that at the moment your right foot is on the pedal facing forward and your left foot is on the pedal facing back. You press hard on the pedal facing forward, and after only a few degrees you loosen your right leg and press hard with your left leg on the pedal facing back, forcing the direction to change. Assume you have super-strength for this part of the metaphor... You keep doing this, pressing back and forth, with the goal of creating friction between the wheel and the road, heating up the tires until they melt.
This is why I don't understand why, when you're designing an induction heater, you care about the natural inductive properties of the object being heated, since you're intentionally overriding them anyway. If you were to wait for the iron/steel to naturally saturate, you wouldn't go up anywhere near 400kHz, you'd have to switch direction much more slowly and it wouldn't induce enough heat in the object.
But I'm assuming that I'm getting something wrong here...
So back to the SMPS circuits, there's another reason I chose this particular metaphor, because the first example actually isn't how most SMPS circuits work. If I were to match their behavior as closely as possible, I wouldn't be trying to determine, cycle-by-cycle, when to press down on the pedal. Instead, I'd estimate my speed, try to calculate what the RPM of the wheel should be to keep this speed, and come up with a frequency (match the frequency to the inductance). At this point, I'd stop trying to figure out what the optimal time to press on the pedal is. Instead, I'd press down on the pedal once each cycle, at the frequency I previously determined, and pretty much ignore the feedback coming in from my legs. Now the only thing I'd be controlling is for how many degrees (pulse width) I keep the pressure on every cycle. This won't be as efficient as determining when to press down during every cycle, but it does, in theory, allow me to "think less". Some cycles will be more efficient than others, and the overall average would be fairly efficient.
Yet, for some reason, the constant-frequency method is far more common?... Why? It only takes a couple of comparators and a latch (or two) to determine when it's optimal to turn the switch on and off at every cycle (use the measured inductor current as the frequency generator), yet almost all of the SMPS documents talk about closely matching the switching frequency to the inductance (seriously, I don't think I've read a single document that *doesn't* mention matching the switching speed to the inductance).
Is it because the comparators don't operate fast enough? But the UC3842 (which I was unaware of) is over 10 years old. Is it because voltage-mode circuits don't have to use a sense resistor? Does it have to do with component count?
I must be misunderstanding something (yet again)...
-
It feels a bit like someone's explaining the intricacies of jet propulsion and I'm asking "so it makes a big fire?".
I'm going to have to resort to metaphors so that if what I'm asking makes no sense it's easier to explain why the metaphor's wrong rather than using literal terms:
;D
As I understand it, SMPS topologies rely on working "with" the natural behavior of the LC circuit, so they're like riding a fixie bicycle: You control when to press down on the pedal and when to release your leg and let momentum carry your foot, in a circular path, until it becomes useful to press down again. If you're on a flat plain, keeping a constant speed, you're not going to have to spend much energy, you just have to press lightly on occasion.
(snip)
Very close. This is closest to a resonant system, because the frequency is consistent.
Suppose you're going up an incline. To maintain constant speed (constant frequency), you need to impart more PWM to your foot. Steeper slope means greater power.
This works for induction heating (which, like I said, is really just a resonant SMPS with a big fat bulk resistor instead of a rectifier and output filter :) ) and suitable topologies of resonant SMPS.
This is why I don't understand why, when you're designing an induction heater, you care about the natural inductive properties of the object being heated, since you're intentionally overriding them anyway. If you were to wait for the iron/steel to naturally saturate, you wouldn't go up anywhere near 400kHz, you'd have to switch direction much more slowly and it wouldn't induce enough heat in the object.
But I'm assuming that I'm getting something wrong here...
The inductive properties don't matter much; it's just a bulk equivalent. What you get at the coil is, some amount of ESR and inductance. Match to that, as you'd match any tuned / RF circuit, and you're golden.
The main downside is, those parameters can vary widely if the load changes, such as when a part is inserted and removed by the user, or as part of an automated production line. (Or you can turn the power supply off when there's no part to be heated, but this doesn't always work out, either.)
So the simple fix for that is to track frequency (something rarely available for standard RF applications), and to have a bit of extra VA capacity to accommodate poor matches (effectively, so you can still deliver full power into a VSWR of perhaps 2 to 4).
You don't usually, but you can get into conditions where the inductive properties matter. It usually takes low frequencies (at lower F, for a given drive, |B| swings larger; likewise, with the usual L but lower F, the impedance is smaller, so current is much higher). The effect is quite interesting: as iron saturates, the skin depth suddenly increases; depending on the exact frequency and power level, you can very selectively heat the outer, say, 3mm of a steel part, then shower it with water to quench-harden it.
In fact, the surface won't even be the hottest part of the metal during heating -- an inner layer will, because heat is concentrated on the boundary where the metal is still magnetic (causing additional hysteresis losses), and meanwhile, the outer surface has heat loss (radiation), while the interior does not.
I've seen nonlinear FEA (thermo-magnetic) simulations of this process; it's not easily computed (got to love nonlinear dynamic systems..), but when it works: the temp curve (versus depth) looks exactly the same as what you measure from the metallurgical results (i.e., how the metal hardens from the heating). :)
So back to the SMPS circuits, there's another reason I chose this particular metaphor, because the first example actually isn't how most SMPS circuits work.
Yup -- the trick is, you don't have to pedal a single inductor at any special frequency. It's not resonant. So, stateless, time domain solutions work well, or well enough (like comparators).
You can make various tweaks (like adding quasi-resonant snubbers, or doing ZVS or ZCS), which may restrict the range of pulse widths, and generally how freely you can time things versus comparator events and the like. When you get into systems where there's residual energy from the previous cycle(s), you start getting into ranges where the controller needs to be, in a sense... more mindful of things.
Yet, for some reason, the constant-frequency method is far more common?... Why? It only takes a couple of comparators and a latch (or two) to determine when it's optimal to turn the switch on and off at every cycle (use the measured inductor current as the frequency generator), yet almost all of the SMPS documents talk about closely matching the switching frequency to the inductance (seriously, I don't think I've read a single document that *doesn't* mention matching the switching speed to the inductance).
Is it because the comparators don't operate fast enough? But the UC3842 (which I was unaware of) is over 10 years old. Is it because voltage-mode circuits don't have to use a sense resistor? Does it have to do with component count?
Precise frequency match isn't a big deal for switched-inductor (non-resonant) circuits. Indeed, the relatively wide range (3:1 being pretty reasonable to achieve by design) is handy for circuits that need a wide range of input or output voltage.
Tim
-
When current is initially passed through an inductor, the inductor resists the change in current across it, starting with a high impedance ...
The impedance doesn't change. There is an induced voltage that opposes the change in current.
I disconnect the power from the inductor, and this is where my understanding fails: The inductor again resists the change in current across it, and as the magnetic field collapses, it generates a current across the inductor that will keep increasing in voltage until it's high enough to overcome the resistance between the two ends of the inductor and close the circuit.
"Current across" is meaningless; currents are "through". There is an induced voltage that opposes the change in current.