Author Topic: Numerical Methods in Matlab  (Read 4812 times)


Offline ElectricGuyTopic starter

  • Regular Contributor
  • *
  • Posts: 240
  • Country: pt
Numerical Methods in Matlab
« on: May 19, 2019, 02:34:20 pm »
Hi everyone;

My sister is studying chemical engineering and needs to do a numerical-methods assignment in MATLAB, using one of the five methods below. The assignment is about concentration in "plug" (plug-flow) reactors.

The methods are:
method of successive bisections;
false position method;
fixed-point iteration method;
newton-raphson method;
secant method

And the equation is attached.

R is equal to (volume of product introduced into the system)/(total volume of the system)

I know nothing about numerical methods, so can somebody help her?

Thanks





 
Thank you!
Regards
ElectricGuy
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9886
  • Country: us
Re: Numerical Methods in Matlab
« Reply #1 on: May 19, 2019, 03:36:47 pm »
Google is her friend!  I searched for 'MATLAB Newton Raphson' and got a lot of hits, including the code here:

https://www.mathworks.com/matlabcentral/fileexchange/68885-the-newton-raphson-method

Code: [Select]
% Author - Ambarish Prashant Chandurkar
%The Newton Raphson Method
clc;
close all;
clear all;
syms x;
f=x*exp(x)-1; %Enter the Function here
g=diff(f); %The Derivative of the Function
n=input('Enter the number of decimal places:');
epsilon = 5*10^-(n+1); % tolerance: half a unit in the (n+1)th decimal place
x0 = input('Enter the initial approximation:');
for i=1:100
     f0=vpa(subs(f,x,x0)); %Calculating the value of function at x0
     f0_der=vpa(subs(g,x,x0)); %Calculating the value of function derivative at x0
     y=x0-f0/f0_der; % The Formula
     err=abs(y-x0);
     if err<epsilon %checking the amount of error at each iteration
          break
     end
     x0=y;
end
y = y - rem(y,10^-n); %Displaying upto required decimal places
fprintf('The Root is : %f \n',y);
fprintf('No. of Iterations : %d\n',i);

I didn't search for the other methods; I'll leave that to her.  The solutions are out there!

ETA:

The code above is contributed by a user, not by Mathworks itself.  It may be better to look for factory code, if available.  I don't know whether the above code works, but it's a start.

« Last Edit: May 19, 2019, 03:46:19 pm by rstofer »
 

Offline ElectricGuyTopic starter

  • Regular Contributor
  • *
  • Posts: 240
  • Country: pt
Re: Numerical Methods in Matlab
« Reply #2 on: May 20, 2019, 12:08:21 pm »
Thank you for your reply.

I made this, but I think (and she thinks so too) that it is not correct.

Code: [Select]
f=@(x) log([1+x*0.1/x*0.1]-[(1+x)/x*(1+x*0.1)])
df=@(x) ((-2*x^2)-x)/((x+1)*(x*(-x+1)+1).^2)
i=0
x0=-100
q=f(x0)
p=df(x0)
e=0.00005
max=1000
while i<max
    if abs(q)<e
        fprintf('The root is: %f', x0)
        break
    end
    i=i+1
    x1=x0-(q/p)
    q1=f(x1)
    p1=df(x1)
    if abs(q1)<e
        fprintf('The root is: %f',x1)
        break
    end
    x0=x1
    q=q1
    p=p1
end

Can somebody help?
Thanks
Thank you!
Regards
ElectricGuy
 

Offline jeremy

  • Super Contributor
  • ***
  • Posts: 1079
  • Country: au
Re: Numerical Methods in Matlab
« Reply #3 on: May 20, 2019, 12:17:49 pm »
My MATLAB is a little rusty, but you should probably start with ‘fsolve’ to get the correct answer and try to compare the output to that of your code. You mentioned that you don’t think your code is correct, but why do you think that? Have you tried it against any known good results?
 

Offline ElectricGuyTopic starter

  • Regular Contributor
  • *
  • Posts: 240
  • Country: pt
Re: Numerical Methods in Matlab
« Reply #4 on: May 20, 2019, 12:19:54 pm »
My MATLAB is a little rusty, but you should probably start with ‘fsolve’ to get the correct answer and try to compare the output to that of your code. You mentioned that you don’t think your code is correct, but why do you think that? Have you tried it against any known good results?

No, just a feeling.  :D I don't have nothing to compare the values. My MatLab is not a little, but a lot rusty!
Thank you!
Regards
ElectricGuy
 

Offline jeremy

  • Super Contributor
  • ***
  • Posts: 1079
  • Country: au
Re: Numerical Methods in Matlab
« Reply #5 on: May 20, 2019, 12:22:21 pm »
Then fsolve is definitely your friend. Put some realistic values in, record the outputs, and see if your code calculates the same result.
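For example (untested, and fsolve needs the Optimization Toolbox; the function below is my reading of the attached equation, with Xaf as a parameter whose value here is only an example):

Code: [Select]
Xaf = 0.9;                 % example value only; use whatever the problem gives
k = 1 - Xaf;
f = @(R) log((1 + R*k)./(R*k)) - (1 + R)./(R.*(1 + R*k));
R = fsolve(f, 0.5)         % 0.5 is an arbitrary starting guess in (0, 1)

If fsolve complains, make sure the starting guess keeps R*k positive; the logarithm is undefined otherwise.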
 

Offline ElectricGuyTopic starter

  • Regular Contributor
  • *
  • Posts: 240
  • Country: pt
Re: Numerical Methods in Matlab
« Reply #6 on: May 20, 2019, 01:21:01 pm »
Then fsolve is definitely your friend. Put some realistic values in, record the outputs, and see if your code calculates the same result.

Try it with fsolve, mas always getting error. I think i don't know how to use fsolve  |O
Thank you!
Regards
ElectricGuy
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6171
  • Country: fi
    • My home page and email address
Re: Numerical Methods in Matlab
« Reply #7 on: May 20, 2019, 01:58:46 pm »
The first step should always be to simplify the equation to a more easily tractable form.  If we exponentiate both sides, and use \$Y = 1 - X_{Af}\$, we get
$$\frac{1 + R Y}{R Y} = \exp\left(\frac{1 + R}{R ( 1 + R Y ) }\right)$$
which simplifies to
$$\frac{1}{R Y} + 1 = \exp\left(\frac{1 + R}{R + R^2 Y}\right)$$
where the notation \$\exp(z) = e^{z}\$.

In root-finding form, this can be written as
$$f(R, Y) = \frac{1}{R Y} + 1 - \exp\left(\frac{1 + R}{R + R^2 Y}\right) = 0$$
For a number of the suggested methods, the derivative of the function is useful. Solving for Y,
$$\frac{\partial f(R, Y)}{\partial Y} = f_Y(R, Y) = \frac{1 + R}{(1 + R Y)^2}\exp\left(\frac{1 + R}{R + R^2 Y}\right) - \frac{1}{R Y^2}$$
and solving for R,
$$\frac{\partial f(R, Y)}{\partial R} = f_R(R, Y) = \frac{Y + Y^2 ( R^2 + 2 R)}{R^2 Y ( R Y + 1 )^2 }\exp\left(\frac{1+R}{R + R^2 Y}\right) - \frac{1}{R^2 Y}$$
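If you have Matlab's Symbolic Math Toolbox, you can double-check these derivatives mechanically.  A minimal sketch (untested -- as noted below, I don't use Matlab myself):

Code: [Select]
syms R Y
f  = 1/(R*Y) + 1 - exp((1 + R)/(R + R^2*Y));
fY = simplify(diff(f, Y))   % compare against the partial derivative above
fR = simplify(diff(f, R))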

Because of how \$R\$ and \$Y\$ are defined, we know that \$0 \lt R \lt 1\$ and \$0 \lt Y \lt 1\$.  Without knowing where the possible solutions are, it is extremely common to start the root finding at regular points in the parameter space, so that you have a better chance of finding the best global solution, and not just a solution.

Let's look at the Newton-Raphson method, for example.  If we solve for \$Y\$, the iteration formula is
$$Y^{\prime} = Y - \frac{f(R, Y)}{f_Y(R, Y)}$$
and if solving for \$R\$,
$$R^{\prime} = R - \frac{f(R, Y)}{f_R(R, Y)}$$
Similar iteration formulae exist for all other root-finding algorithms.  In general, you'll need to avoid division-by-zero, and have a way for your root-finding function to return failure, so that you can more easily write a loop for regularly sampling the entire parameter space, and find all/global solutions.

I personally do not use Matlab (nor even Octave, really), so I cannot give any advice on how to do this in Matlab/Octave, nor which parts Matlab/Octave already implements.
« Last Edit: May 20, 2019, 02:04:42 pm by Nominal Animal »
 

Offline ElectricGuyTopic starter

  • Regular Contributor
  • *
  • Posts: 240
  • Country: pt
Re: Numerical Methods in Matlab
« Reply #8 on: May 20, 2019, 03:41:07 pm »
The first step should always be to simplify the equation to a more easily tractable form. [...]

WoW... Thanks

Xaf is equal to 0.9; the teacher forgot to mention it to my sister. She needs R.
« Last Edit: May 20, 2019, 03:43:41 pm by ElectricGuy »
Thank you!
Regards
ElectricGuy
 

Offline ElectricGuyTopic starter

  • Regular Contributor
  • *
  • Posts: 240
  • Country: pt
Re: Numerical Methods in Matlab
« Reply #9 on: May 20, 2019, 03:53:50 pm »
I did this to find a possible answer without using MATLAB. Is this approach correct? R =~ 0.429
Thank you!
Regards
ElectricGuy
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6171
  • Country: fi
    • My home page and email address
Re: Numerical Methods in Matlab
« Reply #10 on: May 20, 2019, 05:27:57 pm »
R =~ 0.429

Yes. For \$X = 0.9\$, Maple says \$R \approx 0.4299449860\$.  For \$X = 0.8\$, \$R \approx 0.7395476306\$. For \$X = 0.75\$, \$R \approx 0.9718973768\$.  (Just in case you wish to verify your methods work.)

Is this approach correct?
You did a graphical search based on the exponentiated version of the two parts of the given equation; that works fine, but isn't one of the methods listed.

To implement Newton-Raphson, you write a Matlab function with e.g. a for or while loop in it.  Something like
Code: [Select]
Rold = R + 1;
Rnew = R;
eps = 0.001;
while abs(Rnew - Rold) > eps
    Rold = Rnew;
    Rnew = Rold - ... ;
end
R = Rnew;
where ... contains the \$f(R_{old}) / f_R(R_{old})\$ expression -- I'm too lazy to implement that; again, I'm not a Matlab/Octave person.

The eps in the snippet is the convergence tolerance: the largest positive value you consider approximately zero.  (Note that it shadows Matlab's built-in eps, the machine epsilon.  Because of the limited precision of floating-point numbers, there may not be a floating-point \$R\$ for which the equation is exactly zero.  Instead, you iterate until the equation is close enough; and eps is roughly the scale you consider close enough.)
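For this particular problem (with \$X_{Af} = 0.9\$, i.e. \$Y = 0.1\$, and the \$f\$ and \$f_R\$ derived earlier in this thread), filling in the blank might look like the following -- a sketch I have not run, so treat it as an assumption:

Code: [Select]
Y  = 0.1;   % Y = 1 - Xaf, with Xaf = 0.9
f  = @(R) 1./(R*Y) + 1 - exp((1 + R)./(R + R.^2*Y));
fR = @(R) (Y + Y^2*(R.^2 + 2*R))./(R.^2*Y.*(R*Y + 1).^2) ...
          .* exp((1 + R)./(R + R.^2*Y)) - 1./(R.^2*Y);
tol  = 0.001;          % tolerance (renamed from eps to avoid shadowing the built-in)
Rnew = 0.5;            % initial guess inside (0, 1)
Rold = Rnew + 1;
while abs(Rnew - Rold) > tol
    Rold = Rnew;
    Rnew = Rold - f(Rold)/fR(Rold);   % the Newton-Raphson step
end
R = Rnew

It should land near the \$R \approx 0.4299\$ value above.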
« Last Edit: May 20, 2019, 05:33:02 pm by Nominal Animal »
 

Offline ElectricGuyTopic starter

  • Regular Contributor
  • *
  • Posts: 240
  • Country: pt
Re: Numerical Methods in Matlab
« Reply #11 on: May 20, 2019, 07:44:24 pm »
R =~ 0.429

Yes. For \$X = 0.9\$, Maple says \$R \approx 0.4299449860\$. [...]

Thank you. That helps a lot. And knowing the right answer helps too.
Thank you!
Regards
ElectricGuy
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6171
  • Country: fi
    • My home page and email address
Re: Numerical Methods in Matlab
« Reply #12 on: May 20, 2019, 10:28:44 pm »
As to the methods:
  • method of successive bisections
    Also known as bisection method.
    This is basically a binary search. You start with two \$R\$ values, with \$f(R_1) f(R_2) \lt 0\$. (That is, the root function is positive for one \$R\$, and negative for the other.)
    You split the interval in half, until the interval is small enough for you.  You calculate the root function at the midpoint \$R\$. If it is zero, you have found a root, and are done.  Otherwise, you move the endpoint -- \$R_1\$ or \$R_2\$, whichever yields the same sign for the root function as the midpoint does -- to the midpoint \$R\$.  In each iteration, this halves the possible \$R\$ range (adding one bit, or 0.30103 decimal digits to \$R\$).
    The problem/trick/trap for young players is that you need to test consecutive endpoint pairs rather close together, in order not to miss a possible root (unless you first prove your root function is nondecreasing/nonincreasing).  So, don't just start with \$R_1 \approx 0\$ and \$R_2 \approx 1\$, but split the range into, say, a dozen or a hundred intervals, and check each one separately; see the sketch after this list.  (Of course, only the interval(s) where \$f(R_1)\$ and \$f(R_2)\$ have different signs are actually iterated over.)

  • false position method
    Described at Wikipedia.
    Somewhat similar to the bisection method, in that you start with a range.  Instead of midpoint, you use a point where the line through the range endpoints intersects zero.

  • fixed-point iteration method
    Described at Wikipedia.
    Instead of \$f(R)\$, this involves function \$g(R) = f(R) + R\$. The method simply iterates \$R^\prime = g(R)\$, since at \$R = g(R)\$ we have \$f(R) = g(R) - R = 0\$.
    The iteration ends when \$R\$ no longer changes.  Numerically, however, it is possible that \$R\$ "oscillates" around two (or a few) values.  An often-used remedy is to keep either all iterates of \$R\$, or the last few dozen, and end the iteration when you see an \$R\$ you have seen before.

  • newton-raphson method
    Already described in this thread; but also described at Wikipedia.

  • secant method
    Described at Wikipedia.
    You start with two points, and apply a variant of the Newton-Raphson method where you do not need to know the derivative, but use the finite difference approximation (i.e., line between the two known points), replacing the older point with the iterated point.
    The recurrence relation is the Newton-Raphson formula with a finite-difference slope substituted for the derivative.  For this particular equation, if \$R_{next}\$ is the resulting \$R\$, \$R_{curr}\$ is the current value for \$R\$, and \$R_{prev}\$ is the previous value for \$R\$ (i.e., was the current one in the previous iteration), \$f_{curr} = f(R_{curr})\$, and \$f_{prev} = f(R_{prev})\$, then
    $$R_{next} = R_{curr} - f_{curr}\frac{R_{curr} - R_{prev}}{f_{curr} - f_{prev}}$$
    which is equivalent to
    $$R_{next} = \frac{R_{prev} f_{curr} - R_{curr} f_{prev}}{f_{curr} - f_{prev}}$$
    Again, it is possible for \$R\$ to "oscillate" around the optimum solution when using finite-precision floating-point numbers.
    So, you probably want to stop iterating when \$\lvert R_{next} - R_{curr}\rvert\$ and \$\lvert R_{curr} - R_{prev}\rvert\$ are both close enough to zero; or, when enough iterations have occurred (in which case I'd pick the \$R\$ that yielded the smallest \$\lvert f(R)\rvert\$).
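To make the first and last of these concrete, here are minimal Matlab sketches (untested; the interval scan, endpoints, starting points, and tolerances are assumptions for the \$X_{Af} = 0.9\$ case, not part of the assignment):

Code: [Select]
% Bisection, with an interval scan over (0, 1); Y = 1 - Xaf = 0.1
Y = 0.1;
f = @(R) 1./(R*Y) + 1 - exp((1 + R)./(R + R.^2*Y));
tol = 1e-9;
edges = linspace(0.01, 0.99, 101);     % scan 100 subintervals
for k = 1:numel(edges)-1
    a = edges(k);  b = edges(k+1);
    if sign(f(a)) == sign(f(b))
        continue                       % no sign change: no bracketed root here
    end
    while (b - a) > tol
        m = (a + b)/2;
        if sign(f(m)) == sign(f(a))
            a = m;                     % root is in the upper half
        else
            b = m;                     % root is in the lower half
        end
    end
    fprintf('Bisection: root near R = %.10f\n', (a + b)/2);
end

% Secant method, from two starting points
Rprev = 0.3;  Rcurr = 0.5;
Rnext = Rcurr;
for k = 1:100
    denom = f(Rcurr) - f(Rprev);
    if denom == 0
        break                          % avoid division by zero
    end
    Rnext = Rcurr - f(Rcurr)*(Rcurr - Rprev)/denom;
    if abs(Rnext - Rcurr) < tol
        break
    end
    Rprev = Rcurr;  Rcurr = Rnext;
end
fprintf('Secant: root near R = %.10f\n', Rnext);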

In my experience, the relevant university courses do have additional suggestions and gotchas to avoid (especially when the function at hand is "nasty": all sorts of wiggly-squiggly, with infinities or undefined points).  If the lecturer has slides or course notes, I warmly recommend reading them through, to see if there are relevant nuggets of implementation detail hidden in there.  (I do a lot of numerical computation programming; I've never lectured any of those courses myself.)
« Last Edit: May 20, 2019, 10:36:46 pm by Nominal Animal »
 

Offline jesuscf

  • Frequent Contributor
  • **
  • Posts: 499
  • Country: ca
Re: Numerical Methods in Matlab
« Reply #13 on: May 21, 2019, 10:00:41 am »
Here is the solution in good old plain C, the way electrical engineers used to do it:

Code: [Select]
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

double equation (double R, double Xaf)
{
double k;

k=1.0-Xaf; // To help us write a simpler equation
return ( log((1.0+R*k)/(R*k))-((1.0+R)/(R*(1.0+R*k))) );
}

int main (void)
{
double R=0.1; // Put initial guess here
double Xaf=0.9; // Given
double Rnew;  // New estimated value of R
double f;     // The equation evaluation at R
double df;    // The approximate value of the derivative of the equation at R
double h=0.00001; // The delta used to approximate the derivative
double maxerr=1.0e-9; // The maximum tolerated error for the solution
int cnt=0; // Maximum number of iterations

while (cnt<20)
{
f=equation(R, Xaf); // Evaluate the equation at R
df=(f-equation(R-h, Xaf))/h; // Compute an approximation of the (partial) derivative at R
Rnew=R-f/df; // Apply Newton-Raphson

printf("R=%14.12lf\n", Rnew);
if(fabs(R-Rnew)<maxerr) break;
R=Rnew;
cnt++;
}

return 0;
}

The result:

Code: [Select]
R=0.169580882366
R=0.262069455687
R=0.355885031976
R=0.414420536054
R=0.429224934785
R=0.429943393584
R=0.429944985949
R=0.429944986005
Homer: Kids, there's three ways to do things; the right way, the wrong way and the Max Power way!
Bart: Isn't that the wrong way?
Homer: Yeah, but faster!
 

Offline ElectricGuyTopic starter

  • Regular Contributor
  • *
  • Posts: 240
  • Country: pt
Re: Numerical Methods in Matlab
« Reply #14 on: May 21, 2019, 01:13:48 pm »
Here is the solution in good old plain C, the way electrical engineers used to do it: [...]

Thank you. That also confirms my result. I will try to convert it to MATLAB.
Thank you!
Regards
ElectricGuy
 

Offline jesuscf

  • Frequent Contributor
  • **
  • Posts: 499
  • Country: ca
Re: Numerical Methods in Matlab
« Reply #15 on: May 21, 2019, 03:53:05 pm »
Here is exactly the same thing using Matlab:

Code: [Select]
Xaf=0.9;
k=1.0-Xaf;
equ = @(R)( log((1.0+R*k)/(R*k))-((1.0+R)/(R*(1.0+R*k))) );
h=0.00001;
maxerr=1.0e-9;
R=0.1;
for i=1:20
     f=equ(R);
     df=(f-equ(R-h))/h;
     Rnew=R-f/df; % Apply Newton-Raphson
     fprintf('R=%14.12f\n', Rnew);
     if abs(R-Rnew)<maxerr
          break
     end
     R=Rnew;
end
Homer: Kids, there's three ways to do things; the right way, the wrong way and the Max Power way!
Bart: Isn't that the wrong way?
Homer: Yeah, but faster!
 

Offline ElectricGuyTopic starter

  • Regular Contributor
  • *
  • Posts: 240
  • Country: pt
Re: Numerical Methods in Matlab
« Reply #16 on: May 21, 2019, 04:03:04 pm »
Here is exactly the same thing using Matlab: [...]

Thanks. This is going to really help my sister.

As I said before, numerical methods are not my "thing". Why didn't she pick electronics instead!  ;D ;D ;D
Thank you!
Regards
ElectricGuy
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9886
  • Country: us
Re: Numerical Methods in Matlab
« Reply #17 on: May 21, 2019, 07:23:35 pm »
And in some older dialect of Fortran (GNU Fortran).  Fortran 90 might not like the statement function:
func(R,k) = ...
As handy as the construct is, it is obsolescent and F90 is trying to get rid of it.

Code: [Select]
     program NewtonRaphson
          implicit none
          integer i
          real  func, R, k, Xaf, Rnew, eps, h, f, df

          func(R,k) = LOG((1.0+R*k)/(R*k)) - ((1.0+R)/(R*(1.0+R*k)))

          Xaf = 0.9
          k   = 1.0 - Xaf
          h   = 0.00001
          eps = 1.0E-9
          R   = 0.1

          do i = 1, 20
              f    = func(R,k)
              df   = (f - func(R-h,k))/h
              Rnew = R - f/df
              print *,  Rnew
              if (abs(R-Rnew) .lt. eps) exit
              R = Rnew
          end do

      end program NewtonRaphson

Output:

Code: [Select]
  0.169601589   
  0.262146801   
  0.355143547   
  0.413859963   
  0.429177195   
  0.429945886   
  0.429945081   
  0.429945081 
« Last Edit: May 21, 2019, 07:28:09 pm by rstofer »
 

Offline jesuscf

  • Frequent Contributor
  • **
  • Posts: 499
  • Country: ca
Re: Numerical Methods in Matlab
« Reply #18 on: May 21, 2019, 08:33:34 pm »
As I said before, numerical methods are not my "thing". Why didn't she pick electronics instead!  ;D ;D ;D

I hope you are joking, because this stuff shows up all the time in electronics.  A very, very simple example: consider the circuit below:

[attached schematic: a 10 V source in series with a 10 kΩ resistor and the diode]

where the diode has these characteristics:

\$\begin{array}{l}
 I_S  = 8.1 \times 10^{ - 10} A \\
 n = 2 \\
 V_T  = 25mV \\
 \end{array}\$

Calculate \$I_D\$ and \$V_D\$.

Start by getting the circuit equations:

 \$\left\{ {\begin{array}{*{20}c}
   {I_D  = \frac{{10V - V_D }}{{10k}}}  \\
   {I_D  = 8.1 \times 10^{ - 10} A \times e^{\frac{{V_D }}{{0.05V}}} }  \\
\end{array}} \right.
\$

one of which, as you can see, is not a linear equation.  We can solve it using Newton-Raphson.  Start by getting the function f(VD):

 \$
f(V_D) = \frac{{10V - V_D }}{{10k}} - 8.1 \times 10^{ - 10} A \times e^{\frac{{V_D }}{{0.05V}}}
\$

and apply the same techniques shown in the posts above:

Code: [Select]
clc
equ = @(VD) ( ((10-VD)/10000)-(8.1E-10)*exp(VD/0.05) );
h=0.00001; % The delta used to approximate the derivative
maxerr=1.0e-9;
VD=0.8;
for i=1:20
     f=equ(VD);
     df=(f-equ(VD-h))/h;
     VDnew=VD-f/df; % Apply Newton-Raphson
     fprintf('VD=%14.12f\n', VDnew);
     if abs(VD-VDnew)<maxerr
          break
     end
     VD=VDnew;
end
ID=(10-VD)/10000;
fprintf('ID=%14.12f\n', ID);

The solution:

Code: [Select]
VD=0.756416804008
VD=0.721823065156
VD=0.702664957044
VD=0.697932784278
VD=0.697695969347
VD=0.697695433380
VD=0.697695433430
ID=0.000930230457


« Last Edit: May 21, 2019, 08:41:37 pm by jesuscf »
Homer: Kids, there's three ways to do things; the right way, the wrong way and the Max Power way!
Bart: Isn't that the wrong way?
Homer: Yeah, but faster!
 

Offline ElectricGuyTopic starter

  • Regular Contributor
  • *
  • Posts: 240
  • Country: pt
Re: Numerical Methods in Matlab
« Reply #19 on: May 21, 2019, 08:43:08 pm »


I hope you are joking, because this stuff shows up all the time in electronics.  A very, very simple example:  consider the circuit below:


I know it can be used in electronics, but, tell me, do you really need that to solve the circuit?
Thank you!
Regards
ElectricGuy
 

Offline jesuscf

  • Frequent Contributor
  • **
  • Posts: 499
  • Country: ca
Re: Numerical Methods in Matlab
« Reply #20 on: May 21, 2019, 08:50:45 pm »


I hope you are joking, because this stuff shows up all the time in electronics.  A very, very simple example:  consider the circuit below:


I know it can be used in electronics, but, tell me, do you really need that to solve the circuit?

Not that particular one, which is there just as an example.  But add more diodes, some transistors, or some magnetic core saturation, and then there is no escape.  Actually, I once had a circuit with just two diodes that needed to be solved that way.  You can use a program to solve the circuit for you, but most likely the program will be using Newton-Raphson.
Homer: Kids, there's three ways to do things; the right way, the wrong way and the Max Power way!
Bart: Isn't that the wrong way?
Homer: Yeah, but faster!
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9886
  • Country: us
Re: Numerical Methods in Matlab
« Reply #21 on: May 21, 2019, 09:03:07 pm »


I hope you are joking, because this stuff shows up all the time in electronics.  A very, very simple example:  consider the circuit below:


I know it can be used in electronics, but, tell me, do you really need that to solve the circuit?

Maybe when you're playing with the base-emitter junction of a transistor.  There are similar equations for Vbe and Ib.

The thing that's nice about Newton-Raphson is that we can use the original definition of a derivative rather than trying to differentiate the function itself.  It is immensely easier to call the function twice with arguments some small delta apart and divide the difference by the delta than it is to actually differentiate the function.

This has been an interesting exercise!
« Last Edit: May 21, 2019, 09:04:47 pm by rstofer »
 

Offline jesuscf

  • Frequent Contributor
  • **
  • Posts: 499
  • Country: ca
Re: Numerical Methods in Matlab
« Reply #22 on: May 21, 2019, 11:14:22 pm »
The thing that's nice about Newton-Raphson is that we use the original definition of a derivative rather that trying to differentiate the function itself.  It is immensely easier to call a function with slightly different arguments, some delta apart, than it is to actually differentiate the function.  And then simply divide by the delta.

This has been an interesting exercise!

I learnt this neat 'trick' from the book 'Numerical Analysis', Second Edition, by Burden, Faires, and Reynolds.  On page 456 it says:

"In most situations, the exact evaluation of the partial derivatives is inconvenient and in many applications impossible.  This difficulty can be overcome by using finite difference approximations to the partial derivatives.  For example:

 \$\frac{\partial f_j}{\partial x_k}(x^{(i)}) \approx \frac{f_j(x^{(i)} + e_k h) - f_j(x^{(i)})}{h}\$

where h is small in absolute value and \$e_k\$ is the vector whose only nonzero entry is a one in the kth coordinate."

Of course the book is talking about the solution of non-linear systems of equations.  For the case of this post there is only one equation and one unknown.  I have newer versions of the book that use Maple and/or Matlab, but the second edition of the book is my favorite.
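For what it's worth, that finite-difference idea translates to just a few lines of Matlab.  A sketch (untested; the function name and the example system are mine, not from the book):

Code: [Select]
function J = fd_jacobian(F, x, h)
    % Approximate the Jacobian of F at x with forward differences,
    % perturbing one coordinate direction e_k at a time, as in the quote.
    n  = numel(x);
    Fx = F(x);
    J  = zeros(numel(Fx), n);
    for k = 1:n
        e = zeros(n, 1);
        e(k) = 1;
        J(:, k) = (F(x + h*e) - Fx) / h;
    end
end

% Example use: F = @(x) [x(1)^2 + x(2) - 3; x(1) - x(2)];
% J = fd_jacobian(F, [1; 1], 1e-6)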
« Last Edit: May 21, 2019, 11:27:31 pm by jesuscf »
Homer: Kids, there's three ways to do things; the right way, the wrong way and the Max Power way!
Bart: Isn't that the wrong way?
Homer: Yeah, but faster!
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9886
  • Country: us
Re: Numerical Methods in Matlab
« Reply #23 on: May 22, 2019, 01:37:46 am »
I learnt this neat 'trick' from the book 'Numerical Analysis', Second Edition, by Burden, Faires, and Reynolds.  In page 456 it says:
...

They're up to the 10th edition, but there is a free PDF of the 9th edition:

https://fac.ksu.edu.sa/sites/default/files/numerical_analysis_9th.pdf


I hope everybody is saving all the code.  This technique will come in handy for a variety of applications.

ETA: The 10th edition is also online; where I found it requires a subscription.
« Last Edit: May 22, 2019, 02:18:44 pm by rstofer »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6171
  • Country: fi
    • My home page and email address
Re: Numerical Methods in Matlab
« Reply #24 on: May 22, 2019, 09:18:22 am »
If you do not know the derivative of the function whose root (solving for \$x\$ in \$f(x) = 0\$) or roots you are trying to find, I recommend using the secant method instead.  It's close enough to being the same thing, but easier to compute, and well established in the peer-reviewed literature.

(It's like pseudo-random number generators.  Some of them are really good: the sequences they produce look very random and pass a big battery of statistical tests.  Some of them are really bad and should not be used at all.  The difference?  Often just a tiny tweak to one integer constant.)

Octave is a free alternative to Matlab.  It is mostly compatible with Matlab, too.

If you cannot afford a Maple (or Mathematica) license, you can use the free SageMath to calculate those derivatives for you symbolically.  I used Maple for the above derivatives myself, but I occasionally use the command-line interface of SageMath instead, for "quick stuff" when I already have a terminal open.  Most of the time spent with these tools goes into finding the "easiest" form of the equation to work with; they all have lots of transform/simplify/combine/factor functions, but for complex functions it usually boils down to one form being nicest for one part of the equation and another form for another part, so some hand-crafting is usually needed.  It took me maybe five minutes to work out the function and derivative forms -- but the exponentiation I did immediately, on first seeing the formulae; it's just that useful with equations where one side is a logarithm.

For C, C++, Fortran, and Java, many consider Numerical Recipes the book for numerical methods.  It's not perfect, but it does cover the subject quite well.



Even if you do not write numerical method implementations yourself, knowing the basics -- let's say something like reading this thread and the linked Wikipedia pages, and writing some experimental functions for yourself and seeing how/when they fail -- is demonstrably useful.  You do need to fail at first to learn the important stuff, though.

Let me tell you my recent real-world story.

I am not an EE, but I had a few really simple MOSFET switching circuits (steady-state, DC operation; the simplest stuff there is) I was trying to see if/how they would work, before getting the components and putting one of them on a breadboard.  So, comparison and getting a basic grasp on things kind of situation.  Running a Linux machine, I installed Qucs (trivial via Synaptic, my GUI package manager), and drew up the simplest version of the circuit, and tried to simulate it.  No worky; instead I got error messages, or the simulation didn't seem to progress any.

Of course, I read the documentation, tutorials, and examples, and checked what the error messages meant, but it was not at all clear how any of it related to the exact circuit I was simulating.

However, because I know how easily a numerical solver diverges or misses solutions when the function is too wonky (has lots of roots, almost-roots, infinities, undefined points, or some combination of these), I knew the first step is to simplify the equations.  Because I have calculated some basic DC circuits by hand on a basic electronics course, I knew this does not necessarily mean reducing the number of components, but making it simpler to establish the voltage and current at each important point in the circuit.  So, I added current-limiting resistors to the voltage source and MOSFET gate, and added a couple of ohms dummy load (as a stand-in for my real-world load I was switching the voltage/current to).  Ta-dah, worky-worky now!

At no point did I get too frustrated or felt "stuck", because I had at least a vague idea of what was going wrong, and how to overcome it.  If I had thought of the simulation as a black box, which magically produced the outputs without understanding how it does it, having it not produce those outputs even though my simulation looked reasonable on its face, would have been extremely frustrating.

Having a bit of practical experience, like real-world voltage sources not providing infinite current, nor current sources providing infinitely high voltages, and knowing the difference to simulated "perfect" components, also helped a lot: I knew how to make the simulated circuit more "realistic".

This same thing has happened to me in computational materials physics, too (at the very beginning, "joining" Morse copper blocks too close together, as if under millions of atmospheres of pressure at their interface only, causing all sorts of havoc and amusing effects; which deeply intensified my interest in the subject), and I've heard from friends doing chemical reactor simulations that it has happened to them as well.  In all cases, having a basic understanding of how the simulation math works, and how the simulation differs from the real-life case, has always helped pinpoint the error/issue/misunderstanding before descending into super-frustration.  (Although you do usually go through the what-the-fuck-just-happened? phase, which, to me at least, is lots of fun.)
« Last Edit: May 22, 2019, 09:21:57 am by Nominal Animal »
 

