Author Topic: Automated DP832 Calibration  (Read 16247 times)


Offline alank2

  • Super Contributor
  • ***
  • Posts: 2183
Re: Automated DP832 Calibration
« Reply #75 on: July 18, 2020, 01:24:52 pm »
Also, if the first step is positive, how does it extrapolate below that?  Shouldn't the first point be less than zero, even if barely, so that it can be used to cover the range that includes zero?
 

Offline Mecanix

  • Frequent Contributor
  • **
  • Posts: 263
  • Country: cc
Re: Automated DP832 Calibration
« Reply #76 on: July 18, 2020, 02:02:49 pm »
Quick side note to say thanks for all the helpful tips & tricks and those SCPI commands others have posted! Good info. I've calibrated mine, and those wicked low/negative mV/mA readings are finally long gone. Nothing broke ;)

Background: I had a DM3058 professionally recalibrated two weeks ago, and I'm happy it's finally talking the same language as the DP832, within surprisingly tight tolerance. I'm particularly glad to see that what I manually enter on the keypad not only matches the digits on the LCD but also matches what's displayed on the DM3058. Before, I used to set 3.30V and was getting 3.269V on the DP832's LCD and 3.33354V on the DMM lol. It was all over the place, and almost useless...

So... thanks again guys! PS: cool kit, @Rigol, btw. Not bad at all!

PS: Ended up writing my own in C# for several reasons, using a simple condition to skip the negative readings. Can't get any simpler, and it works too!

(Attachment Link)
 

Offline aristarchus

  • Regular Contributor
  • *
  • Posts: 107
  • Country: 00
Re: Automated DP832 Calibration
« Reply #77 on: July 18, 2020, 02:13:06 pm »
..
PS: Ended up writing my own in C# for several reasons, using a simple condition to skip the negative readings. Can't get any simpler, and it works too!

(Attachment Link)

Congrats! Good job!

Now you know the next question... :) Any chance of uploading this VC project to GitHub, or even as an attachment here?
 

Offline garrettm

  • Frequent Contributor
  • **
  • Posts: 266
  • Country: us
Re: Automated DP832 Calibration
« Reply #78 on: July 18, 2020, 02:18:57 pm »
Well, I'm going to try to see what I can do on this today.  I'll figure out how to compile Java at the very least!

What I find odd is that in CH1 a similar thing happens, where step 1 is even lower than step 0, and it seems to work, but not so with CH2.

Are the steps arbitrarily chosen?  Do 0.2V and 0.5V equate to some DAC value?  When you guys talk about finding an offset, if the offset was 0.6V, are you saying that the calibration points should shift to 0.2V+0.6V, 0.5V+0.6V, and so on?

The cal points are the defaults from Rigol, but they can be arbitrarily chosen, with up to a maximum of 80 points per array. I assume they picked these particular points based on some sort of best-fit data for linearity, but who knows.

The readback for 0.2V should be a non-negative value, distinct from and lower than the one for 0.5V; it's just that the lower end of the series regulator may have some offset/clamping that throws off the lower default cal points. This is where we can improve the calibration routine.

The "offset" is simply the value added to the initial default cal point (say 0.2V + offset = 0.28V) needed to reach zero (or near zero) on readback. This then forces the series regulator into a linear region, so that the cal point has significance. We don't want to drop cal points if we can avoid it; lowering the number of cal points can only worsen linearity. But we also don't want to use "bad" cal points either, i.e. negative readback values that don't increase linearly, as you've seen.

I can add this modification to the script if you want. But feel free to modify the script, and let me know if you need any help if you do.
« Last Edit: July 18, 2020, 02:38:43 pm by garrettm »
 

Offline garrettm

  • Frequent Contributor
  • **
  • Posts: 266
  • Country: us
Re: Automated DP832 Calibration
« Reply #79 on: July 18, 2020, 02:32:43 pm »
I do want to point out that adjustments for the DAC-V and DAC-I routines are probably not needed for the ADC-V and ADC-I routines. Readback on the PSU display seems to work fine with the script as is.
 

Offline Mecanix

  • Frequent Contributor
  • **
  • Posts: 263
  • Country: cc
Re: Automated DP832 Calibration
« Reply #80 on: July 18, 2020, 02:37:56 pm »
Now you know the next question... :) Any chance of uploading this VC project to GitHub, or even as an attachment here?

Thanks! I'm all up for sharing; problem being, it's part of a Forms project linked up with so much other stuff (LCR sweep & bin, oscilloscope, DMM data acquisition, arb waves, scales, etc. etc.). I'd need a day to pull a standalone, functional app out of this lot. Got the DMM and an LCR over a serial COM port, the rest over LAN. Pretty messy...

The primary function (method?) you see in the attachment is what drives it all really, and it's pretty much the same as what others have done already. PM me if you need any particular code or help with your own, and I'll happily help out; I'll certainly try my best anyway.

EDIT: here goes the full method. Good enough to get an idea, I'm guessing. Nothing as elaborate and high-quality as what Garrett already did; the code below just puts the cal data into the DP832, and that's all. Used Microsoft Visual Studio 2019, free Community edition (.NET Windows Forms, buttons and all that good stuff).

Code: [Select]

using System.IO;
using System.IO.Ports;
using System.Net.Sockets;
using System.Threading;


        string DP832IP = "192.168.100.111";
        int DP832port = 5555;
        string cmd_runtime;


        // Example arrays for CH1 (VDAC set points, VADC readback cal points)
        string[] VDAC = new string[] {  "0.842", "0.843", "0.845","0.85","0.9", "1", "1.2", "1.8", "2", "2.4", "2.7", "3", "3.3", "3.6", "4", "4.2", "4.5", "4.7",
                                       "5", "5.5", "6", "6.5", "7", "7.5", "8", "8.5", "9", "9.5", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19",
                                       "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32" };
        string[] VADC = new string[] { "0v", "0.001v", "0.005v", "0.05v", "0.1v", "0.5v", "1v", "5v", "10v", "12.8v", "20v", "30v", "32v" };

        //CH1, CH2 & CH3 (iDAC & iADC)
        string[] iDAC = new string[] { "0.001A", "0.002A", "0.005A", "0.01A", "0.02A", "0.03A", "0.04A", "0.05A", "0.06A", "0.07A", "0.08A", "0.09A", "0.1A",
                                       "0.2A", "0.3A", "0.4A", "0.5A", "0.6A", "0.7A", "0.8A", "0.9A", "1A", "1.2A", "1.5A", "1.7A", "2A", "2.2A", "2.5A",
                                       "2.7A", "3A", "3.2A" };
        string[] iADC = new string[] { "0A", "0.1A", "0.5A", "1A", "2A", "3A", "3.2A" };


        private void btn_v_Click(object sender, EventArgs e)
        {
            Calibrate("CH3", "V");
        }
        private void btn_c_Click(object sender, EventArgs e)
        {
            Calibrate("CH3", "C");
        }

        private void Calibrate(string ch, string type)
        {
            int SCPIdelay = 1000;
            int DMMdelay = 3000;
            double DMMval = 0.00;

            using (var client = new TcpClient(DP832IP, DP832port))
            using (var networkStream = client.GetStream())
            using (var writer = new StreamWriter(networkStream))
            {
                writer.Write(":CAL:Start 11111," + ch + "\r\n");
                Thread.Sleep(SCPIdelay);
                writer.Write(":CALibration:Clear "+ch+",v\r\n");
                Thread.Sleep(SCPIdelay);
                writer.Write("*RST\r\n");
                Thread.Sleep(SCPIdelay);
                writer.Write(":OUTPUT " + ch + ",ON\r\n");
                Thread.Sleep(SCPIdelay);

                if (type == "V")
                {
                    cmd_runtime = "MEAS:VOLT:DC?";

                    Console.WriteLine("VDAC:");
                    for (int i = 0; i < VDAC.Length; i++)
                    {
                        writer.Write(":CAL:Set " + ch + "," + type + ", " + i + "," + VDAC[i] + ",1\r\n");
                        Thread.Sleep(DMMdelay);
                        DMMval = MEAS.Measure(cmd_runtime); // <-- DMM Get reading here
                        if (DMMval > 0)
                        {
                            Console.WriteLine(VDAC[i]);
                            writer.Write(":CAL:MEAS " + ch + "," + type + "," + i + "," + DMMval + ",1\r\n");
                        }
                        Thread.Sleep(SCPIdelay);
                    }
                    Console.WriteLine("VADC:");
                    for (int i = 0; i < VADC.Length; i++)
                    {
                        writer.Write(":CAL:Set " + ch + "," + type + "," + i + "," + VADC[i] + ",0\r\n");
                        Thread.Sleep(DMMdelay);
                        DMMval = MEAS.Measure(cmd_runtime); // <-- DMM Get reading here
                        if (DMMval > 0)
                        {
                            writer.Write(":CAL:MEAS " + ch + "," + type + "," + i + "," + DMMval + ",0\r\n");
                            Console.WriteLine(VADC[i]);
                        }
                        Thread.Sleep(SCPIdelay);
                    }
                }
                else
                {
                    cmd_runtime = "MEAS:CURR:DC?";

                    Console.WriteLine("iDAC:");
                    for (int i = 0; i < iDAC.Length; i++)
                    {
                        writer.Write(":CAL:Set " + ch + "," + type + ", " + i + "," + iDAC[i] + ",1\r\n");
                        Thread.Sleep(DMMdelay);
                        DMMval = MEAS.Measure(cmd_runtime); //+0.0003; //CH2 add 0.0003 offset to comp mA
                        if (DMMval > 0)
                        {
                        Console.WriteLine(iDAC[i]);
                            writer.Write(":CAL:MEAS " + ch + "," + type + "," + i + "," + DMMval + ",1\r\n");
                        }
                        Thread.Sleep(SCPIdelay);
                    }
                    Console.WriteLine("iADC:");
                    for (int i = 0; i < iADC.Length; i++)
                    {
                        writer.Write(":CAL:Set " + ch + "," + type + "," + i + "," + iADC[i] + ",0\r\n");
                        Thread.Sleep(DMMdelay);
                        DMMval = MEAS.Measure(cmd_runtime);
                        if (DMMval > 0)
                        {
                            writer.Write(":CAL:MEAS " + ch + "," + type + "," + i + "," + DMMval + ",0\r\n");
                            Console.WriteLine(iADC[i]);
                        }
                        Thread.Sleep(SCPIdelay);
                    }
                }
                writer.Write(":OUTPUT " + ch + ",OFF\r\n"); Console.WriteLine(":OUTPUT " + ch + ",OFF");
                Thread.Sleep(SCPIdelay);
                writer.Write(":CAL:End 07/16/2020," + ch + "\r\n"); Console.WriteLine(":CAL:End 07/16/2020," + ch + "\n");
                Thread.Sleep(SCPIdelay);
            }
        }




///// Public class for the DMM's double value (over serial, though)

    class MEAS
    {
        public static double Measure(string message)
        {
            double data = 0.0;
            try
            {
                Form1.port.Write(message + "\r\n");             // send the SCPI query to the DMM
                data = Convert.ToDouble(Form1.port.ReadLine()); // parse the reply into a double
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error in MEAS.cs (Measure() method): " + ex.Message);
            }
            return data;
        }
    }



« Last Edit: July 18, 2020, 03:05:03 pm by Mecanix »
 
The following users thanked this post: garrettm

Offline alank2

  • Super Contributor
  • ***
  • Posts: 2183
Re: Automated DP832 Calibration
« Reply #81 on: July 18, 2020, 03:49:55 pm »
I'm going to try my hand at modifying garrettm's Java today, but in the meantime, I'm looking through the Python script and have a simple question:

Code: [Select]
            if value <= first_positive:
                continue
            if unit == 'A' and value > self._manual_current_limit and manual == False:
                manual = True
                print()
                print("WARNING: CURRENT BEYOND DMM LIMIT, MANUAL INPUT REQUIRED")
                self._wait_for_enter("Connect alternative DMM 10A CURRENT inputs to PSU channel %d" % (channel))

            self._psu._write("CALibration:Set CH%d,%s,%d,%g%s,%d" % (channel, ident, step, value, unit, index));


I can see the indenting, but how does Python know that "self._psu._write" is not part of the if?  I am used to brackets or an endif or something.
 

Offline alank2

  • Super Contributor
  • ***
  • Posts: 2183
Re: Automated DP832 Calibration
« Reply #82 on: July 18, 2020, 05:06:38 pm »
Okay, I've modified the script to locate the last negative result and begin calibrating there.

So far:

ch2 DAC-V determine last negative value

ch2 DAC-V starting at 0.5v

ch2 DAC-V calibration
step  0, cal point:  0.5v, meas val: -0.2530v
step  1, cal point:  1.2v, meas val:  0.4428v

Any thoughts on why my 0.5V measurement is different today from yesterday?

ch2 DAC-V calibration
step  0, cal point:  0.2v, meas val: -0.0689v
step  1, cal point:  0.5v, meas val: -0.0707v
step  2, cal point:  1.2v, meas val:  0.4444v
 

Offline alank2

  • Super Contributor
  • ***
  • Posts: 2183
Re: Automated DP832 Calibration
« Reply #83 on: July 18, 2020, 05:17:05 pm »
The good news is that it seemed to work; CH2 is much better now. I can ask for voltages less than 500mV and they are good.

garrettm - do you mind if I post the modified script?

My technique was to determine the last negative step and start the calibration there, which may skip the steps before it.
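In outline, the change looks like the following (C# just for illustration; the actual mod is in the Java script, and readings stands in for the DMM readbacks taken at the default cal points):

Code: [Select]

using System;

// Return the step to begin calibrating from: the last step whose DMM
// readback was negative, or step 0 if no readback was negative.
static int FirstCalStep(double[] readings)
{
    int lastNegative = -1;
    for (int i = 0; i < readings.Length; i++)
        if (readings[i] < 0.0)
            lastNegative = i;         // keep the latest negative step
    return Math.Max(lastNegative, 0); // steps before this get skipped
}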
 
The following users thanked this post: garrettm

Offline garrettm

  • Frequent Contributor
  • **
  • Posts: 266
  • Country: us
Re: Automated DP832 Calibration
« Reply #84 on: July 18, 2020, 05:22:05 pm »
@alank2 that's fine with me; just add a note somewhere that you modified it, so people know the difference between the two (in case someone downloads both and is confused about which one they are using).

I'll post a new revision by Monday that adjusts the lower cal points as described earlier, to preserve the total number of cal points and stay in the linear region of the series regulator.
« Last Edit: July 18, 2020, 05:24:43 pm by garrettm »
 

Offline alank2

  • Super Contributor
  • ***
  • Posts: 2183
Re: Automated DP832 Calibration
« Reply #85 on: July 18, 2020, 05:42:54 pm »
@alank2 that's fine with me; just add a note somewhere that you modified it, so people know the difference between the two (in case someone downloads both and is confused about which one they are using).

Will do.

I'll post a new revision by Monday that adjusts the lower cal points as described earlier, to preserve the total number of cal points and stay in the linear region of the series regulator.

Sounds great; please reply in this thread and I'll check it out.  Your approach, then, is to find the first positive value, and instead of calibrating the first step at, say, 0.2V, you'll calibrate it at wherever you find that first positive value?  Sounds good.
 

Offline alank2

  • Super Contributor
  • ***
  • Posts: 2183
Re: Automated DP832 Calibration
« Reply #86 on: July 18, 2020, 05:46:06 pm »
Enclosed is my mod of garrettm's Java script.

It has two changes.

#1 - When calibrating DAC-V and DAC-I, it looks to see whether there is more than one negative region.  On my PSU, step 1 was even more negative than step 0!  It will begin calibrating at the last negative region if one exists, or at step 0 if there are no negative steps.

#2 - It will make you type "1" and press Enter before it ends/saves the calibration.  This gives you time to abort and power-cycle the unit if you don't like the ADC-V or ADC-I results you are seeing.

This is my first time writing anything in Java, so hopefully I did all right.  Mostly I used garrettm's code as an example and tweaked it.
« Last Edit: July 18, 2020, 05:47:53 pm by alank2 »
 
The following users thanked this post: garrettm, Mecanix

Offline garrettm

  • Frequent Contributor
  • **
  • Posts: 266
  • Country: us
Re: Automated DP832 Calibration
« Reply #87 on: July 18, 2020, 05:56:18 pm »
Your approach, then, is to find the first positive value, and instead of calibrating the first step at, say, 0.2V, you'll calibrate it at wherever you find that first positive value?  Sounds good.

Correct. The idea is to increase the lower cal points by some fixed value so the readback values are strictly increasing. I'd be curious to see if this improves the lower ranges any over skipping points. It could be that both are equally effective.
 

Offline garrettm

  • Frequent Contributor
  • **
  • Posts: 266
  • Country: us
Re: Automated DP832 Calibration
« Reply #88 on: July 18, 2020, 05:58:59 pm »
#2 - It will make you type "1" and press Enter before it ends/saves the calibration.  This gives you time to abort and power-cycle the unit if you don't like the ADC-V or ADC-I results you are seeing.

Good idea! I'll add that to the new revision as well.
 

Offline alank2

  • Super Contributor
  • ***
  • Posts: 2183
Re: Automated DP832 Calibration
« Reply #89 on: July 18, 2020, 06:19:52 pm »
garrettm - do you have any thoughts about the difference in the 0.5V step values today vs. yesterday in post #83?
 

Offline Mecanix

  • Frequent Contributor
  • **
  • Posts: 263
  • Country: cc
Re: Automated DP832 Calibration
« Reply #90 on: July 18, 2020, 06:38:18 pm »
Correct. The idea is to increase the lower cal points by some fixed value so the readback values are strictly increasing. I'd be curious to see if this improves the lower ranges any over skipping points. It could be that both are equally effective.

It does. The lowest positive DAC value found allows the ADC to set a very low reference later in the cal... if you are into mV/mA projects, that helps a lot. What also improved the tolerance (I've just found out) is to have the DMM sampling at 20~50Hz and capture an *averaged value, as opposed to the single one the DMM captures after the delay. **Requires the DMM to be set to medium/fast speed, plus a Sum()/x or Average() function in the code.

* The DMM fluctuates quite a lot in the mV & mA ranges; that's what gave me the idea of sampling.
** Extra bells & whistles for those with low-end DM3058 (5.5-digit) DMMs, like the one I have ;-)
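For the averaging, something like the sketch below is all I mean; it reuses the MEAS helper from my earlier post, and the sample count and pacing are just placeholders:

Code: [Select]

using System.Threading;

// Take several DMM samples and hand the mean to :CAL:MEAS instead of a
// single reading; this smooths out the mV/mA fluctuation mentioned above.
static double MeasureAveraged(string cmd, int samples = 20, int pauseMs = 50)
{
    double sum = 0.0;
    for (int i = 0; i < samples; i++)
    {
        sum += MEAS.Measure(cmd); // one DMM reading per query
        Thread.Sleep(pauseMs);    // pace the queries to the DMM's reading rate
    }
    return sum / samples;
}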
 
The following users thanked this post: alank2

Offline TurboTom

  • Super Contributor
  • ***
  • Posts: 1388
  • Country: de
Re: Automated DP832 Calibration
« Reply #91 on: July 18, 2020, 06:40:08 pm »
I think it doesn't make any sense to use negative calibration points on these single-quadrant supplies. The ability to supply slightly negative output voltages at low currents is, IMO, a side-effect of the active discharge circuitry that Rigol included in order to discharge the PSU's internal output capacitance, as well as any input smoothing caps inside the attached DUT, so it doesn't take ages for the output voltage to drop when programmed to a lower value or turned off. To compensate for semiconductor thresholds, the PSU needs a slightly negative internal supply voltage, maybe just a diode drop below the output ground.

Since it's reported that arbitrary calibration points are possible, I'd recommend increasing the output voltage in "Test Cal Mode" in 1mV steps until 0V is reached/crossed, using this as the first calibration value, and starting the CAL sequence with the recommended intervals (i.e. the 0V offset plus the CAL interval) up to 1 or 2V; above this, use the "table values". The strange, non-monotonic behaviour at negative output voltages is just a side-effect of saturating the discharge circuitry and can be disregarded for calibration purposes.
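In code terms, building the point list could look like the sketch below (names are made up: zeroOffset is the value found by the 1mV stepping, defaults is the recommended table):

Code: [Select]

using System.Collections.Generic;

// Shift the recommended cal points below ~2V up by the measured 0V offset
// and keep the rest of the table as-is.
static double[] ShiftedCalPoints(double zeroOffset, double[] defaults)
{
    var points = new List<double>();
    foreach (double p in defaults)
        points.Add(p <= 2.0 ? p + zeroOffset : p); // only the low end moves
    return points.ToArray();
}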
 
The following users thanked this post: alank2, Wolfgang

Online Wolfgang

  • Super Contributor
  • ***
  • Posts: 1773
  • Country: de
  • Its great if it finally works !
    • Electronic Projects for Fun
Re: Automated DP832 Calibration
« Reply #92 on: July 18, 2020, 06:51:19 pm »
I think it doesn't make any sense to use negative calibration points on these single-quadrant supplies. The ability to supply slightly negative output voltages at low currents is, IMO, a side-effect of the active discharge circuitry that Rigol included in order to discharge the PSU's internal output capacitance, as well as any input smoothing caps inside the attached DUT, so it doesn't take ages for the output voltage to drop when programmed to a lower value or turned off. To compensate for semiconductor thresholds, the PSU needs a slightly negative internal supply voltage, maybe just a diode drop below the output ground.

Since it's reported that arbitrary calibration points are possible, I'd recommend increasing the output voltage in "Test Cal Mode" in 1mV steps until 0V is reached/crossed, using this as the first calibration value, and starting the CAL sequence with the recommended intervals (i.e. the 0V offset plus the CAL interval) up to 1 or 2V; above this, use the "table values". The strange, non-monotonic behaviour at negative output voltages is just a side-effect of saturating the discharge circuitry and can be disregarded for calibration purposes.

Calibrating a DP832 in the few-mA / sub-1V region of the 30V outputs is rather futile. Looking at the specs, the accuracy is so low that this does not really make sense below 1V/10mA. Two ways to fix that:
- Use a good DMM to measure the *real* voltage and current (6 1/2 digits is OK, the same you would need for calibration anyway).
- Use an SMU (OK, that's real money, but then low voltages/currents are reliable).
 

Offline alank2

  • Super Contributor
  • ***
  • Posts: 2183
Re: Automated DP832 Calibration
« Reply #93 on: July 18, 2020, 07:21:04 pm »
Their steps do not seem very consistent, but perhaps there is a reason in them that I don't see:

 

Offline sequoia

  • Supporter
  • ****
  • Posts: 154
  • Country: us
Re: Automated DP832 Calibration
« Reply #94 on: July 18, 2020, 09:11:35 pm »
Their steps do not seem very consistent, but perhaps there is a reason in them that I don't see:


Where do these 35 (DAC-V) calibration points originate from?  I checked my supply, and the factory calibration seems to be using 43 points...

Seems like the calibration can use a variable number of points (up to some limit)?  So I guess optimal calibration could be achieved by first scanning the whole range (100mV steps, etc.), recording the difference between set and measured values, then determining optimal calibration points based on that data, and finally doing a second "pass" to calibrate using the points found in the first pass...

Finding optimal calibration points "manually" should be rather easy based on the curve generated from the initial pass; finding them programmatically would require a little bit of work...
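For the programmatic part, one possible approach is sketched below (setpoints/errors would come from the 100mV first pass; the maximum point count is still unknown, and the curvature heuristic is just one idea):

Code: [Select]

using System;
using System.Collections.Generic;
using System.Linq;

// Keep the endpoints plus the interior points where the set-vs-measured
// error curve bends the most (largest second difference).
static double[] PickCalPoints(double[] setpoints, double[] errors, int maxPoints)
{
    var keep = new SortedSet<int> { 0, setpoints.Length - 1 };
    var byCurvature = Enumerable.Range(1, Math.Max(0, setpoints.Length - 2))
        .OrderByDescending(i => Math.Abs(errors[i - 1] - 2 * errors[i] + errors[i + 1]));
    foreach (int i in byCurvature)
    {
        if (keep.Count >= maxPoints) break;
        keep.Add(i); // most-curved points first
    }
    return keep.Select(i => setpoints[i]).ToArray(); // ascending setpoints
}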

 

Offline alank2

  • Super Contributor
  • ***
  • Posts: 2183
Re: Automated DP832 Calibration
« Reply #95 on: July 18, 2020, 09:46:04 pm »
Is there a way to read the calibration points?

Also, what is the maximum number of points the calibration can handle?
 

Offline Mecanix

  • Frequent Contributor
  • **
  • Posts: 263
  • Country: cc
Re: Automated DP832 Calibration
« Reply #96 on: July 19, 2020, 12:00:24 pm »
Boy oh boy, did I wake up on the wrong side today. Yesterday's successful DP832 calibration celebration really just got smashed; I am getting totally different current readings (as much as 7% lower than in yesterday's tests). Just found out it's really (too?) sensitive to temperature. That being said, the current calibration needs to be done over a long period of time for it to remain consistent/accurate, i.e. you can't calibrate this unit when it's hot from, let's say, previous cal runs.

e.g.
Step 0 > Send cal value > Wait +10min for the unit to reach its op temp for this value > Average DMM's readings over a 1min period > SAVE
Step 1 > Send cal value > Wait +10min for the unit to reach its op temp for this value > Average DMM's readings over a 1min period > SAVE
Step 2 > Send cal value > Wait +10min for the unit to reach its op temp for this value > Average DMM's readings over a 1min period > SAVE
.......
Step 32 > Send value > Wait +1min for the unit to reach its op temp for this value > Average DMM reading over a 1min period > SAVE

I'm mostly interested in the low 0~300mA range, so I'm guessing I'd need to do this for the first 10~12 steps; or does the higher-current stepping also affect the ADC?

Voltage is still spot on though; very happy with that part, and it's what matters most for me anyway. One would measure current with a precision DMM if that were required anyway. Still, I'd like this bad boy to put up correct current figures on its LCD (in mA, that is) when I feel like checking without the DMM being hooked up.

 
 

Offline alank2

  • Super Contributor
  • ***
  • Posts: 2183
Re: Automated DP832 Calibration
« Reply #97 on: July 19, 2020, 12:24:43 pm »
I was thinking, probably the best way to find the first positive point is a binary search.  Start at 0.5V, then add or subtract 0.25V depending on whether the reading was negative or positive, then 0.125V, and so on.  That should be able to pinpoint the first positive value in 8 tests.  Then why use their scale at all?  Divide the range evenly across the maximum number of calibration entries you have and then do them.
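Something like the sketch below (C# only to illustrate; readbackAt is a stand-in for programming a cal point and reading the DMM):

Code: [Select]

using System;

// Bisect for the lowest setpoint with a positive readback, starting from
// the 0..1V span; 8 halvings pin it down to about 4mV.
static double FindFirstPositive(Func<double, double> readbackAt)
{
    double lo = 0.0, hi = 1.0;
    for (int i = 0; i < 8; i++)
    {
        double mid = (lo + hi) / 2.0; // first probe lands at 0.5V
        if (readbackAt(mid) < 0.0)
            lo = mid;                 // still negative: move up
        else
            hi = mid;                 // positive: tighten from above
    }
    return hi;                        // lowest known-positive setpoint
}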

Is it known what the maximum number of entries is for each range?

Could the ADC (meter accuracy) be improved by adding more steps if we knew its maximum?

Is there a way to read the current calibration steps somehow?  (This would be nice so one could back them up and restore them if necessary.)
 

Offline Mecanix

  • Frequent Contributor
  • **
  • Posts: 263
  • Country: cc
Re: Automated DP832 Calibration
« Reply #98 on: July 19, 2020, 08:22:18 pm »
PS: The trick to getting accurate current readings in the low 0~300mA range is to calibrate iDAC and iADC with a good 30-minute delay in between. So:

1 - Run iDAC steps
2 - Pause 30min
3 - Run iADC steps

Another way is to take note of the unit's temperature before the calibration (Utility > Sys Info > key sequence 1-3-2) and run the iADC steps once it has come back down to about the same temperature as when the iDAC run started. Took a good 30-ish minutes for mine to come back down to normal operating temp... which is about the same as Rigol recommends.

Using those steps for all channels (current):
Code: [Select]

        //CH1, CH2 & CH3 (iDAC & iADC)
        string[] iDAC = new string[] { "0.001A", "0.002A", "0.005A", "0.01A", "0.02A", "0.03A", "0.04A", "0.05A", "0.06A", "0.07A", "0.08A", "0.09A", "0.1A",
                                       "0.2A", "0.3A", "0.4A", "0.5A", "0.6A", "0.7A", "0.8A", "0.9A", "1A", "1.2A", "1.5A", "1.7A", "2A", "2.2A", "2.5A",
                                       "2.7A", "3A", "3.2A" };
        string[] iADC = new string[] { "0A", "0.1A", "0.5A", "1A", "2A", "3A", "3.2A" };

« Last Edit: July 19, 2020, 08:34:20 pm by Mecanix »
 

Offline sequoia

  • Supporter
  • ****
  • Posts: 154
  • Country: us
Re: Automated DP832 Calibration
« Reply #99 on: July 19, 2020, 11:25:19 pm »
Is there a way to read the calibration points?

Also, what is the maximum number of points the calibration can handle?


You can use the manual calibration (menu), but it's a bit tedious... (you can just follow the calibration process, but not save at the end...)

It might be possible to read these via SCPI (but it may require that "magic" USB drive in the USB port):

338 :PROJect:CALIbration:DATA:VOLTage:WRITe
339 :PROJect:CALIbration:DATA:VOLTage:READ?
340 :PROJect:CALIbration:DATA:CURRent:WRITe
341 :PROJect:CALIbration:DATA:CURRent:READ?
342 :PROJect:CALIbration:DATA:CURRent:ADDRess?
343 :PROJect:CALIbration:INFO:WRITe
344 :PROJect:CALIbration:INFO:READ?
345 :PROJect:CALIbration:INFO:ADDRess?

(these are from: dp800_all_commands.txt found in https://www.eevblog.com/forum/testgear/need-help-hacking-dp832-for-multicolour-option/msg2325633/#msg2325633)
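Untested, but querying them over raw TCP would just reuse the pattern from Mecanix's code; whether it works without the "magic" USB stick, and what format comes back, is anyone's guess:

Code: [Select]

using System;
using System.IO;
using System.Net.Sockets;

static void DumpVoltageCalData(string ip = "192.168.100.111", int port = 5555)
{
    using (var client = new TcpClient(ip, port))
    using (var stream = client.GetStream())
    using (var writer = new StreamWriter(stream) { AutoFlush = true })
    using (var reader = new StreamReader(stream))
    {
        writer.Write(":PROJect:CALIbration:DATA:VOLTage:READ?\r\n");
        Console.WriteLine(reader.ReadLine()); // raw reply, format unknown
    }
}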


 

