I just added a resistor to my hypothetical state scenario in the previous post because, as you said, there wasn't a resistor, which meant the voltage across the inductor would never diminish from the source or input V without another component across which the voltage could drop. So I added the resistor in my second try and drew a sloppy 5-state schematic to illustrate, but I placed R in the wrong spot. I was also looking for confirmation that my understanding of the 5-state schematic was correct.
In a practical converter, you stop at the 11.89V point or whatever. Waiting until V_L drops to 10 or 8 or 5V means all the rest of the voltage is across the resistor, which hurts you two ways:
1. You're way the hell over the rated current. If you're switching the nominal load, say 1A, at the 11.89V point, then at the 6-7V point it's doing 50 fricken amps.
2. The resistor is dropping all the power, and the inductor isn't storing much more energy. You're close to 50% efficiency on this loop already, and it keeps dropping the longer you leave it on. Whereas when switched at the early threshold, it might be 99.5% (if this were the only loss mechanism).
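To put rough numbers on that (all assumed: a 12V source, and the series resistance sized so that 1A flows at the 11.89V threshold; not your actual circuit values), a quick sketch:

```python
# Rough numbers for the RL charging loop above (all values assumed).
V_IN = 12.0          # supply voltage
I_NOM = 1.0          # nominal switch current at the early threshold, A
V_L_EARLY = 11.89    # inductor voltage at the early switch-off point
V_L_LATE = 6.0       # inductor voltage if you wait "too long"

R = (V_IN - V_L_EARLY) / I_NOM   # ~0.11 ohm of series resistance implied
I_LATE = (V_IN - V_L_LATE) / R   # current by the time V_L has sagged to 6 V

# Instantaneous fraction of input power going into the inductor vs. the resistor:
eff_early = V_L_EARLY / V_IN
eff_late = V_L_LATE / V_IN

print(f"R = {R:.2f} ohm")
print(f"current at V_L = {V_L_LATE:.0f} V: {I_LATE:.0f} A")
print(f"loop efficiency early: {eff_early:.1%}, late: {eff_late:.1%}")
```

which lands right around the "50 fricken amps" and "50% efficiency" figures above.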
I think this type of charging and then completely discharging might be considered discontinuous mode, while you were/are commenting based on CCM?
If you're talking in terms of discharging == inductor current reaches zero every time, yes. It works fine either way. The difference is, in CCM, you can have much more DC current than AC ripple, which delivers more charge while spending less reactive power in the inductor.
Which is relevant when inductors have low Q factors (high AC losses, but low DC losses), typical of powdered iron chokes. One might have a Q of 10, so a 1W power dissipation limit can only draw 10VA of reactive power, but if this is done at a ~10% ripple fraction you can deliver ~100W of real DC output with that.
Which are figures in the ballpark of your typical old fashioned ATX power supply. That's how they're designed.
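Spelling out that arithmetic (same assumed figures: Q of 10, 1W dissipation budget, ~10% ripple fraction):

```python
# Back-of-envelope for the powdered iron choke example (values assumed above).
Q_CHOKE = 10           # inductor Q at the switching frequency
P_DISS_MAX = 1.0       # allowable dissipation in the choke, W
RIPPLE_FRACTION = 0.1  # AC ripple current relative to the DC current

va_reactive = Q_CHOKE * P_DISS_MAX    # reactive power budget, ~10 VA
p_dc = va_reactive / RIPPLE_FRACTION  # real DC throughput scales as 1/ripple, ~100 W

print(f"reactive power budget: {va_reactive:.0f} VA")
print(f"approx. real DC output: {p_dc:.0f} W")
```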
I've also been talking about the time constant, and while that is important during the charge event, I've since learned that LC circuits have a resonant frequency rather than a time constant. I'm assuming that with the diode in there, the resonance only gets one half harmonic(?), or rather the resonance only gets to go from L to C once, it doesn't bounce back, so I'd have to determine the time for 1/2 of a period of the resonant frequency in order to determine the discharge time?
Not harmonic, but cycle; and yes!
You can look at resonant circuits in terms of a time constant, depending on how you want to use it. You can take the radian time constant (i.e., the time of one radian), sqrt(LC), or the cycle time 2 pi sqrt(LC), or the quarter cycle pi sqrt(LC)/2, etc.
There's also a time constant for the ringdown, because real resonant circuits have losses. Works in the same way; the general form is sine wave*exp decay. There's always some sine in there, but it can be negligible if R is dominant compared to L or C, in which case the result looks exp dominant (seemingly RC or RL alone). And there's always some exp in there, even if R is very small (e.g. superconducting resonators, which can have a Q factor around 10^7).
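For a feel of the magnitudes, with made-up values (L = 10 uH, C = 1 uF, R = 0.1 ohm of loop resistance; nothing from your circuit):

```python
import math

L = 10e-6   # H (assumed example)
C = 1e-6    # F (assumed example)
R = 0.1     # ohm of series loss resistance (assumed example)

t_radian = math.sqrt(L * C)        # radian time constant
t_cycle = 2 * math.pi * t_radian   # full resonant period
t_quarter = t_cycle / 4            # quarter cycle, pi*sqrt(LC)/2
tau_ringdown = 2 * L / R           # exp decay constant of the series-RLC envelope
Q = math.sqrt(L / C) / R           # quality factor of the loop

print(f"radian time {t_radian*1e6:.2f} us, period {t_cycle*1e6:.2f} us, "
      f"quarter cycle {t_quarter*1e6:.2f} us")
print(f"ringdown tau {tau_ringdown*1e6:.0f} us, Q ~ {Q:.0f}")
```

So the ring takes some microseconds per cycle here, while the envelope takes a couple hundred microseconds (many cycles) to die away, consistent with a Q around 30.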
Anyway, you are quite right actually, that when the switch turns off and the diode turns on, you can draw the equivalent circuit reflecting those states, and it's simply a charged inductor discharging into a capacitor. If allowed to fully discharge (to current = 0), the voltage will follow an arc segment which is a piece of the full LC ringdown waveform. Typically, C is made large enough that the voltage change is small.
Conversely, if there's capacitance loading the switching node (there always is), then there is a time between the switch turning off, and the diode turning on, where that capacitance takes all the inductor current. Because this capacitance is small, the voltage again follows a segment of LC ringdown, in this case the rapid rising part.
Because the diode changes the equivalent circuit as it turns on and off, you only ever see tiny segments of these curves, and you can quite reasonably approximate them either as linear ramps or quadratic segments.
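To compare those two cases with assumed values (1A in a 10 uH inductor, a 100 uF output cap vs. 100 pF of stray capacitance on the switching node):

```python
import math

L = 10e-6        # H (assumed)
I_PK = 1.0       # inductor current at the switching instant, A (assumed)

C_OUT = 100e-6   # large output capacitor: the slow "arc segment" case
C_NODE = 100e-12 # stray switch-node capacitance: the fast rising case

for name, C in [("output cap", C_OUT), ("switch node", C_NODE)]:
    z0 = math.sqrt(L / C)         # characteristic impedance of that LC
    dv = I_PK * z0                # voltage swing if ALL the inductor energy
                                  # rang into this C (the diode clamps long
                                  # before that on the switch node)
    t_q = (math.pi / 2) * math.sqrt(L * C)   # quarter cycle of that ring
    print(f"{name}: Z0 = {z0:.2f} ohm, swing up to {dv:.1f} V, "
          f"quarter cycle = {t_q*1e6:.2f} us")
```

The output cap barely moves (a fraction of a volt, slowly), while the switch node would fly hundreds of volts in tens of nanoseconds if the diode didn't catch it -- which is why you only ever see small segments of either curve.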
(Resonant converters, you can think of as a hybrid case; understandably, the exact mechanics are more difficult to calculate. Fortunately, with appropriate design, a crude, simplified control method is possible.)
I'll try to work on CCM, as it seems that's what's being proposed rather than fully discharging.
Note that the limiting case in CCM is for L --> ∞, so ΔI = 0. This isn't very interesting for control purposes (no amount of PWM can change the average current flow...), but if we assume equilibrium conditions so that we maintain the flux balance condition, then from cycle to cycle, what we're doing is putting in a completely square half-cycle of energy, and getting out the same amount.
The continuum between the two extremes is: energy is the time integral of V*I. In the DCM case, current is a ramp and voltage is constant, so the power is a triangle, and its area is the energy: 1/2 Vin Ipk t_on = 1/2 L Ipk^2 (using Vin t_on = L Ipk). In the ΔI = 0 case, current is constant and voltage is constant, so power is a rectangle and its area is the energy, Vin I_L t_on.
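Putting example numbers on those two limiting cases (12V input, 10 uH, 5 us on-time; all assumed, chosen only so the formulas can be checked against each other):

```python
# Energy drawn from the input per on-time, in the two limiting cases above.
V_IN = 12.0   # V across the inductor during t_on (assumed)
L = 10e-6     # H (assumed)
T_ON = 5e-6   # s (assumed)

# DCM: current ramps from zero, so the V*I product is a triangle.
i_pk = V_IN * T_ON / L                  # peak current from V = L di/dt
e_dcm = 0.5 * L * i_pk**2               # equals 0.5 * V_IN * i_pk * T_ON

# deltaI = 0 limit: current sits flat at I_L, so V*I is a rectangle.
I_L = i_pk                              # same current level, for comparison
e_flat = V_IN * I_L * T_ON

print(f"Ipk = {i_pk:.1f} A")
print(f"DCM energy per cycle:    {e_dcm*1e6:.0f} uJ (triangle)")
print(f"dI = 0 energy per cycle: {e_flat*1e6:.0f} uJ (rectangle, exactly 2x)")
```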
For large but finite inductances, the average current does vary over time. Which, heh, again, "average" needs to be qualified: averaged over what time? Well, in this case, take a cycle average, for example.
The dynamic of interest is the time constant of the converter, which can span many cycles -- roughly, the inverse of the ripple ratio. After all, if the ripple ratio is 10% (at a nominal, say, 50% PWM), then applying 0% or 100% PWM can only change the current by about 10% down or up, each cycle. So we can use this parameter to decide how much we need to concern ourselves with any individual cycle. Likewise, in DCM, cycles are fully independent, so each one matters individually.
I've more of a physics background than electrical, so that's why I'm breaking it down into units I know better, i.e., joules.
Would you be better served with a mechanical analogy instead? (Are you well versed in mechanical dynamics?)
You can indeed implement a switching converter in mechanics; the trouble is that, because the speed of sound is relatively low in most materials, you end up needing a great number of compromises in order to keep switching losses low. A "practical" switching transmission for an automobile would be about as large as (or larger than?) the engine itself, probably switching at a few Hz or tens of Hz.
Most of the space would be taken up by flywheels (bypass capacitors) and a fuckoff huge spring (which needs to take up multiple rotations in a cycle, while handling up to full engine torque). The clutches (this would likely be a synchronous inverter) would be slamming on and off, rapidly, under hydraulic actuation I suppose (achieving switching edges in the low ms). Even with top materials and lubricants, it would wear extremely rapidly.
We have the further good fortune that, unlike sliding metal contacts, electronic components do not wear within their nominal ratings!
The spring, incidentally, might not be a wear item. Some alloys apparently exhibit a fatigue limit, below which their life goes up exponentially as strain decreases. Other alloys do not, and the life is always related to strain (proportionally, or by another function, I forget what). The fatigue limit of spring steel is a reasonable fraction of its yield strength, so there is a direct tradeoff between cycle lifetime and power density.
The engine would be governed at constant RPM, or perhaps increased RPM under heavy load as needed, but not by too much. This would be a good diesel application. The transmission's output can deliver far more torque than the engine (torque is stepped up in the same way a buck converter steps up current). Torque or RPM can vary continuously, unlike a conventional transmission that merely switches between gear trains while absorbing the difference in clutch slip, or in a lossy fluid coupling. (Which, to be fair, does a good job all its own, it isn't actually all loss -- hence the name "torque converter".)
If you can imagine an automatic transmission shifting gears in milliseconds, and doing that dozens of times per second, yeah, that's how incredibly loud and jarring and fast-wearing this would be!
I'm pretty sure this has been built before; but it's really just a lab curiosity, of course.
Even for all the challenges we have with conventional geared transmissions (and also CVTs -- mechanical variacs), it's telling that it's been better to face those, than to try to make a more general device like this. Like I said, the material properties -- in particular, speed of sound, and density -- just don't work out.
For all the comments and posts I've read (not just my own threads), it seems I'm making this way harder than necessary, but for some reason I'm still missing the disconnect. All the PFC circuits I've seen take three spot measurements: before the inductor, at the cap, and at the ground or return line, and then control accordingly, but I can't figure out why the operation of the chip seems so complex to me.
The control is another level on top of the converter. You must abstract away the converter as a transconductance stage: given some setpoint input, it draws a (switching cycle-averaged) input current and delivers a transformation of that to the output (i.e., same power, different V and I).
The input and output voltages do not change over a switching-cycle time scale, so can be treated as constant by the converter.
With this abstraction, you don't care how the converter does its work. Black box. It could be full of dwarves for all you know, hammering electrons off one end and stacking them up on the other side. Maybe it's switching inductors. Maybe it's switching capacitors. Maybe it's pure computronium and its losses are actually CPU power, calculating how to overthrow the human race. Doesn't matter, externally it just transfers power in response to a control input.
Then, and only then, can you apply the PFC algorithm. Set input current proportional to input voltage. If the converter's control is in terms of input current, you're set, that's all you need, great. If not, you may need an inner control loop, or a function block, to implement that.
Meanwhile, averaged over multiple line cycles, set the mean input current as needed to maintain the output voltage. And again, the output voltage changes only gradually, even over mains cycles.
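As a minimal sketch of that two-loop structure (not any particular chip's algorithm; the class name, gain, and update rates here are made up):

```python
# Hypothetical sketch of the control structure described above.
# The converter itself is a black box that draws whatever cycle-averaged
# input current setpoint it is handed.

class PfcController:
    def __init__(self, v_out_ref, kp=0.05):
        self.v_out_ref = v_out_ref  # regulated output voltage target, V
        self.kp = kp                # outer-loop gain (assumed value)
        self.conductance = 0.0      # emulated input conductance, A per V

    def slow_loop(self, v_out_avg):
        # Runs well below the mains frequency: trim the emulated input
        # conductance so the output voltage holds at its reference.
        error = self.v_out_ref - v_out_avg
        self.conductance = max(self.conductance + self.kp * error, 0.0)

    def fast_loop(self, v_in_now):
        # Runs every switching cycle: input current setpoint proportional
        # to the instantaneous (rectified) input voltage -> sinusoidal
        # input current, i.e. the PFC part.
        return self.conductance * v_in_now

# Usage: i_in_setpoint = pfc.fast_loop(v_in_sample), handed to the
# converter's (inner) current-mode control loop.
```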
This more or less describes a typical BCM or CCM PFC architecture. CCM inevitably has to deal with DCM at light loads, and measuring actual inductor current can be hard, so they come up with ways to estimate it (the UCC28070's "current synthesizer" block comes to mind), or use a function that corrects for the error (say, predistorting the setpoint, on the assumption that it will go into DCM and therefore change its gain).
You don't stand to gain anything from looking at the converter with PFC as the direct goal. Converters can be controlled to do PFC, but they fundamentally don't do it by themselves.
Tim