EEVblog Electronics Community Forum
Electronics => Beginners => Topic started by: MathWizard on October 31, 2021, 04:47:32 am
-
When doing node or mesh analysis with a basic BJT model in bigger circuits with multiple BJTs, you can set up matrix equations for the voltages or currents. What's a better, more advanced method than just plugging in Vbe = 0.7 V? Is there any "quick" way to do matrix math with actual variables inside the matrix?
How does SPICE really solve multi-BJT circuits for Ic and Vbe?
Even the diode equation alone is hard to work into the math, so how does that get plugged into matrices? Doing one big matrix problem by hand is bad enough. Do they just iterate over and over from a starting point, getting closer and closer to the values?
-
Usually, when you have non-linear equations, you would use an iterative method to solve them. Not particularly convenient by hand, but practical for a computer.
You may be aware that for a single non-linear equation with one unknown, Newton's method can be used to find the solution numerically.
The same principle of numerical iteration works for systems of many equations in many unknowns. Newton's method is the basis of most such techniques, and it is how SPICE does it.
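For a single unknown, the iteration is easy to see on the diode equation itself. Below is a minimal sketch (not SPICE's actual code) of Newton's method applied to a hypothetical 5 V source feeding a diode through a 1 k resistor; the saturation current and thermal voltage are assumed values:

```python
import math

# KCL at the diode node of the assumed circuit gives
#   f(Vd) = Is*(exp(Vd/Vt) - 1) - (Vs - Vd)/R = 0
# and Newton's method iterates Vd -= f(Vd)/f'(Vd) until the step is tiny.
Vs, R, Is, Vt = 5.0, 1000.0, 1e-12, 0.02585

def f(vd):
    return Is * (math.exp(vd / Vt) - 1.0) - (Vs - vd) / R

def df(vd):
    return (Is / Vt) * math.exp(vd / Vt) + 1.0 / R

vd = 0.6                      # initial guess near the expected diode drop
for _ in range(50):
    step = f(vd) / df(vd)
    vd -= step
    if abs(step) < 1e-12:     # converged
        break

print(f"Vd = {vd:.4f} V, Id = {(Vs - vd)/R*1000:.3f} mA")
```

From a 0.6 V guess it settles in a handful of iterations, which is exactly the "start somewhere and get closer and closer" behavior asked about.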
-
Depends on the circuit.
But sometimes it is not necessary to solve matrix equations. You can solve for the DC operating point and then separately for the small signal. Each has its own simplifications, for example Vbe = 0.7 V.
If you post a circuit we can talk more precisely.
-
When doing node or mesh analysis with a basic BJT model in bigger circuits with multiple BJTs, you can set up matrix equations for the voltages or currents. What's a better, more advanced method than just plugging in Vbe = 0.7 V? Is there any "quick" way to do matrix math with actual variables inside the matrix?
This can be done using Cramer's rule: it allows you to express the solution to a system of linear equations analytically, i.e. the elements of the system matrix can be variables.
Cramer's rule, however, requires you to find determinants of the system matrix, which becomes very cumbersome to do by hand for matrices larger than 3x3.
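For what it's worth, here is a minimal sketch of Cramer's rule for a 3x3 system in plain Python. The matrix values are made up for illustration (they happen to look like a nodal conductance matrix); in a hand analysis the entries could just as well stay symbolic:

```python
# Cramer's rule for a 3x3 system A.x = b, with determinants written out.

def det3(m):
    # Cofactor expansion along the first row.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cramer3(A, b):
    D = det3(A)
    x = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]      # replace column j with the RHS vector
        x.append(det3(Aj) / D)
    return x

A = [[ 2.0, -1.0,  0.0],         # made-up conductance matrix (siemens)
     [-1.0,  3.0, -1.0],
     [ 0.0, -1.0,  2.0]]
b = [1.0, 0.0, 0.0]              # injected currents (amps)
x = cramer3(A, b)                # node voltages
print(x)
```

Even at 3x3 the cofactor expansions get long, which is exactly why this does not scale by hand.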
I hope it helps,
Lary
-
Perhaps with iterative solutions:
Iterative Solution to a System of Matrix Equations.
https://www.hindawi.com/journals/aaa/2013/124979/
https://www.sciencedirect.com/science/article/pii/S0898122108006196
-
Here's an explanation of how to adapt Newton's method to solve a system of (non-linear) equations:
https://math.stackexchange.com/questions/466809/solving-a-set-of-equations-with-newton-raphson
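As a concrete sketch of that adaptation, here is Newton-Raphson on a made-up two-equation system; each iteration solves the small Jacobian system J·delta = -f for the update step:

```python
# Solve the made-up system
#   f1(x, y) = x^2 + y^2 - 4 = 0
#   f2(x, y) = x*y - 1 = 0
# by multivariate Newton-Raphson with an analytic Jacobian.

def F(x, y):
    return (x*x + y*y - 4.0, x*y - 1.0)

def J(x, y):
    return ((2*x, 2*y), (y, x))

x, y = 2.0, 0.5                  # starting guess
for _ in range(50):
    f1, f2 = F(x, y)
    (a, b), (c, d) = J(x, y)
    det = a*d - b*c
    dx = (-f1*d + f2*b) / det    # solve the 2x2 step J.delta = -f
    dy = (-f2*a + f1*c) / det
    x, y = x + dx, y + dy
    if max(abs(dx), abs(dy)) < 1e-12:
        break

print(x, y)
```

The structure is the same at any size; for big systems the 2x2 solve becomes a sparse LU factorization, which is what SPICE does internally.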
-
In machine learning, we try to find a global minimum of a function of many variables, where "many" can mean thousands. We pick a starting point in the N-dimensional space and step each variable by a tiny amount to estimate the partial derivative of the output with respect to it. If the slope is downward (we want a minimum), we step a little further. We iterate over all N dimensions and then cycle back and do it all again. At some point there won't be much change in the output for further steps, and that's at least a local minimum. Finding the true global minimum depends on a lucky guess for the starting conditions, so we repeat from multiple random starting positions.
The same thing works here. Pick an assumed value (steady state) and start stepping around the function looking for a result closer to the desired result.
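A minimal sketch of that stepping procedure, on a made-up two-variable function with finite-difference partial derivatives:

```python
# Gradient descent with numerically estimated partials.

def f(p):
    x, y = p
    return (x - 1.0)**2 + (y + 2.0)**2   # known minimum at (1, -2)

def grad(p, h=1e-6):
    # Estimate each partial derivative with a tiny forward step.
    g = []
    for i in range(len(p)):
        q = list(p)
        q[i] += h
        g.append((f(q) - f(p)) / h)
    return g

p = [5.0, 5.0]                            # arbitrary starting point
rate = 0.1                                # step size
for _ in range(200):
    g = grad(p)
    p = [pi - rate * gi for pi, gi in zip(p, g)]

print(p)
```

After a couple hundred iterations the point has slid down to the known minimum, with only the tiny bias of the forward difference left over.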
Here is a great video that shows how to use the Newton-Raphson method for iterating over matrices looking to reduce the error (little to no change in output when the inputs are tweaked). I used this method to flow balance the DI water piping system for an entire wafer fab back around '88 and I did it in Excel just like the video. I was taught the technique (by another name) in grad school and never thought I would have a use for it. Turns out it was a good thing I stayed awake!
https://www.youtube.com/watch?v=tB7Sj41Juu4&ab_channel=ChristiPattonLuks
What I didn't know was that Excel did matrix arithmetic. I need to get out more often!
And who said they would never have a use for partial differential equations?
-
Even the diode equation alone is hard to work into the math, so how does that get plugged into matrices? Doing one big matrix problem by hand is bad enough. Do they just iterate over and over from a starting point, getting closer and closer to the values?
The equation itself will make your eyes roll back but for a computer it is just a few hundred nanoseconds (if that) of plug and crunch (as it is called in Applied Mathematics).
And, yes, you pick a starting condition and iterate over small steps looking for the smallest change in output for changes in inputs.
Think of some really ugly function and try to differentiate or integrate it symbolically. It's hard! But plugging in values and evaluating the function is easy for a computer. The actual definition of a derivative screams out for a computer since we want to vary 'h' to smaller and smaller, yet non-zero, values.
https://tutorial.math.lamar.edu/classes/calci/defnofderivative.aspx
The first blue box definition is the important part. As 'h' goes to zero, the difference in the numerator also heads toward zero, but dividing by a small fractional number is the same as multiplying by its reciprocal: x / (1/2) is the same as x * 2. So as h gets smaller we are adding gain to the difference in the numerator by dividing by h. h is allowed to approach, but never reach, 0.
Integration or differentiation of ugly functions by hand is a PITA but it is trivial with a computer. Just use the 'blue box' formulation. There are other approaches but the idea is the same: let the computer do the plug and crunch.
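As a sketch of the 'blue box' idea: here is a numerical derivative of the same f(x) = x^2 + sqrt(1 + 2x) that the attached Fortran uses, with a small but non-zero h. A central difference is used since it is usually more accurate than the one-sided form:

```python
import math

def f(x):
    return x**2 + math.sqrt(1.0 + 2.0*x)

def dfdx(x, h=1e-6):
    # Central difference: (f(x+h) - f(x-h)) / (2h).
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Exact derivative for comparison: 2x + 1/sqrt(1 + 2x)
x = 4.0
exact = 2*x + 1.0/math.sqrt(1.0 + 2.0*x)
print(dfdx(x), exact)
```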
For integration, we are looking for the area under the function by cutting it into slices which we will assume are rectangles and adding up all the pieces. There are many techniques but Riemann sums are easy to understand. There are three common variants: left endpoint, right endpoint, and center (sampling each slice at its midpoint). As the slices get thinner, the sums approach the actual area.
https://en.wikipedia.org/wiki/Riemann_sum
Some time back I decided to play with Riemann Sums so I wrote a bit of Fortran to compare that technique with trapezoidal integration. The code is attached. Note how well the Center Riemann Sum compares with the definite integral (worked by hand). The function itself is on line 28 (and the document doesn't have line numbers) and it's not truly ugly because I still needed to find the definite integral by hand. There are 100,000 slices taken and the results are shown as comments starting at line 61.
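For anyone without a Fortran compiler handy, here is a rough Python analogue of the same experiment. It is my sketch, not the attached code, but it uses the same f(x), the same [3, 5] interval, and the same 100,000 slices:

```python
import math

# Left, right, and center (midpoint) Riemann sums for
# f(x) = x^2 + sqrt(1 + 2x) on [3, 5], against the exact integral.

def f(x):
    return x**2 + math.sqrt(1.0 + 2.0*x)

a, b, n = 3.0, 5.0, 100_000
w = (b - a) / n                                   # slice width

left   = sum(f(a + i*w)       for i in range(n)) * w
right  = sum(f(a + (i+1)*w)   for i in range(n)) * w
center = sum(f(a + (i+0.5)*w) for i in range(n)) * w

# Worked by hand: integral of x^2 is 98/3, integral of sqrt(1+2x)
# is (1+2x)^(3/2)/3, giving (11^1.5 - 7^1.5)/3.
exact = 98.0/3.0 + (11.0**1.5 - 7.0**1.5)/3.0
print(left, center, right, exact)
```

The center sum lands far closer to the exact answer than the left or right sums, which matches the behavior noted in the Fortran comments.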
This stuff is trivial for a computer!
Attached but change the type to .f90
-
When doing node or mesh analysis with a basic BJT model in bigger circuits with multiple BJTs, you can set up matrix equations for the voltages or currents. What's a better, more advanced method than just plugging in Vbe = 0.7 V? Is there any "quick" way to do matrix math with actual variables inside the matrix?
How does SPICE really solve multi-BJT circuits for Ic and Vbe?
Even the diode equation alone is hard to work into the math, so how does that get plugged into matrices? Doing one big matrix problem by hand is bad enough. Do they just iterate over and over from a starting point, getting closer and closer to the values?
The methods SPICE uses are described in this book: "Electronic Circuit and System Simulation Methods" by L. T. Pillage, R. A. Rohrer, and C. Visweswariah. Chapter 10 is what you are looking for; in about 30 pages they explain how to deal with diodes, BJTs, and MOSFETs. Chapter 10 starts with this introduction:
"Finally, we now have all of the background that we need to discuss nonlinear circuit analysis. The essence of nonlinear circuit simulation was covered briefly in Chapter 1. In this chapter we will elaborate on that exposition and consider as well some of the subtleties that arise in the course of nonlinear circuit simulation. We start with a brief description of the industry standard SPICE."
For a simple introduction, I showed how to solve a simple diode circuit using the diode equation in this post:
https://www.eevblog.com/forum/chat/numerical-methods-in-matlab/
If you use a Computer Algebra System (CAS), you can input the nodal equations (KCL or MNA) and solve directly. A little bit about CAS is also in the post linked above.
-
This can be done using Cramer's rule: it allows you to express the solution to a system of linear equations analytically, i.e. the elements of the system matrix can be variables.
Cramer's rule, however, requires you to find determinants of the system matrix, which becomes very cumbersome to do by hand for matrices larger than 3x3.
I hope it helps,
Lary
Just a bit of a warning here: whenever somebody says "Cramer's rule" in the context of circuit analysis, it raises a giant red flag of, (how to put it politely)... lack of experience (that should do!). Real-life circuits that require matrices are never solved using Cramer's rule.
-
The methods SPICE uses are described in this book: "Electronic Circuit and System Simulation Methods" by L. T. Pillage, R. A. Rohrer, and C. Visweswariah. Chapter 10 is what you are looking for; in about 30 pages they explain how to deal with diodes, BJTs, and MOSFETs. Chapter 10 starts with this introduction:
Thanks for the link, I ordered a copy from Alibris and it took quite a chunk out of my allowance. No, I don't really get an allowance...
-
The steady state is easy to solve.
The major problem is solving the differential equations of the transient state.
-
Maybe this will add to the conversation:
https://www.emcs.org/acstrial/newsletters/summer09/HowSpiceWorks.pdf
Small steps
-
This can be done using Cramer's rule: it allows you to express the solution to a system of linear equations analytically, i.e. the elements of the system matrix can be variables.
Cramer's rule, however, requires you to find determinants of the system matrix, which becomes very cumbersome to do by hand for matrices larger than 3x3.
I hope it helps,
Lary
Just a bit of a warning here: whenever somebody says "Cramer's rule" in the context of circuit analysis, it raises a giant red flag of, (how to put it politely)... lack of experience (that should do!). Real-life circuits that require matrices are never solved using Cramer's rule.
Yes, I know that; my answer was mostly in regard to the OP asking for a quick way to solve a system of equations with variables in the system matrix, and Cramer's rule does fit that description, especially for 3x3 matrices.
-
The steady state is easy to solve.
The major problem is solving the differential equations of the transient state.
The differential equations are surprisingly easy to solve! The book "Electronic Circuit and System Simulation Methods" by L. T. Pillage, R. A. Rohrer, and C. Visweswariah clearly explains how to do that in chapter 4. For example, the model for the inductor, for which we know the relationship between its current and voltage is:
\$V_L=L\frac{dI_L}{dt}\$
can be represented at some time \$t+\Delta{t}\$ by discretizing the differential equation using the trapezoidal rule (as indicated on page 85 of the reference above) as:
[attachimg=1]
Something similar can be done for all the inductors and capacitors in the circuit. So all energy storage elements can be reduced to a 'history' current source in parallel with some conductance. Of course, to obtain the complete solution, the circuit has to be solved at each of many time steps \$\Delta t\$.
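As a sketch of how those companion models get used, here is a made-up series RL example (step source, assumed 1 ohm and 1 mH values): the inductor becomes a conductance dt/2L in parallel with a history current, so every time step reduces to one linear nodal solve, compared at the end against the closed-form step response:

```python
import math

# Trapezoidal companion model of an inductor:
#   I(t+dt) = Geq*V(t+dt) + Ihist,  Geq = dt/(2L),  Ihist = I(t) + Geq*V(t)
# applied to a 1 V step driving a series R-L (values are assumptions).
Vs, R, L = 1.0, 1.0, 1e-3
dt = 1e-6
Geq = dt / (2.0 * L)

i_l, v_l, t = 0.0, 0.0, 0.0
for _ in range(5000):                 # simulate 5 ms (about 5 time constants)
    i_hist = i_l + Geq * v_l
    # KCL at the R-L node: (Vs - v)/R = Geq*v + i_hist  ->  solve for v
    v_l = (Vs / R - i_hist) / (1.0 / R + Geq)
    i_l = Geq * v_l + i_hist
    t += dt

exact = (Vs / R) * (1.0 - math.exp(-R * t / L))   # closed-form step response
print(i_l, exact)
```

The discretized result tracks the analytic exponential closely, which is all a transient analysis is: the same linear solve repeated once per time step with an updated history source.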
-
Here is a quick .ods (openoffice version of excel) sheet that does 1x1 through 6x6 systems
-
Maybe this will add to the conversation:
https://www.emcs.org/acstrial/newsletters/summer09/HowSpiceWorks.pdf
Small steps
That article is dangerously close to the vertical axis (and not in a good way) in the Dunning–Kruger effect curve. But at least it mentions this book which I also have and I like quite a bit:
Computer Methods for Circuit Analysis and Design, by Kishore Singhal and Jiri Vlach (I have the first edition)
The article also says this: "We’ll also postpone a discussion about when Kirchhoff’s laws break down for a future article (hint: Faraday’s law trumps Kirchhoff’s law)." It smells like Walter Lewin's BS.
-
The methods SPICE uses are described in this book: "Electronic Circuit and System Simulation Methods" by L. T. Pillage, R. A. Rohrer, and C. Visweswariah. Chapter 10 is what you are looking for; in about 30 pages they explain how to deal with diodes, BJTs, and MOSFETs. Chapter 10 starts with this introduction:
Thanks for the link, I ordered a copy from Alibris and it took quite a chunk out of my allowance. No, I don't really get an allowance...
I bought the "McGraw-Hill Special Reprint Edition" back in 2003 or 2004 for USD 70. How much did you pay? Scanned versions of this book are also available at the usual places. Of course, there's nothing like having the real thing made of paper!
-
$79
Not a really bad price if I can learn something. I will soon pass it along to my grandson as he will be starting grad school studying some portion of applied mathematics and you can't go far without running into advanced matrix algebra. Not that he hasn't already but this may be a different application.
The cost of books is insignificant compared to the cost of tuition. It doesn't matter what the book cost when you get right down to it.
-
Here is a quick .ods (openoffice version of excel) sheet that does 1x1 through 6x6 systems
I had never even thought about Excel as being capable of matrix operations, it just never came up. Your spreadsheet works quite well!
-
As I said above, "small steps". Most computer solutions to problems involving differential equations will involve finding a starting point and moving a small delta away and solving for that point.
In the Fortran project I linked above, I was integrating f(x) = (x**2) + sqrt(1.0 + (2.0 * x)) between x=3 and x=5 by taking 100,000 slices. By hand, for a homework problem, I might be willing to take 6 slices. The computer doesn't care, it has all the time in the world to work out 100,000 steps.
As the above posts conclude, the real issue is in representing the functions to evaluate. Fortunately, at least for me, the workable approaches have already been discovered.
I wonder what a conversation with Newton, Leibniz, Gauss or Euler would have been like. There are many other luminaries but those four have contributed to a wide range of subjects. Can you imagine sitting in a pub and talking to these fellows? In modern times? It would be amazing!
-
Here is a great video that shows how to use the Newton-Raphson method for iterating over matrices looking to reduce the error (little to no change in output when the inputs are tweaked). I used this method to flow balance the DI water piping system for an entire wafer fab back around '88 and I did it in Excel just like the video. I was taught the technique (by another name) in grad school and never thought I would have a use for it. Turns out it was a good thing I stayed awake!
A more up-to-date approach to quickly solving non-linear systems of equations is to use a "Computer Algebra System" (CAS). For example, the problem in the video can be solved in a few seconds using the free Giac/Xcas available here:
http://www-fourier.univ-grenoble-alpes.fr/~parisse/giac.html
Attached is the solution. I have also used Giac/Xcas to solve non-linear circuits with diodes and transistors from the nodal equations.