Author Topic: Python AsyncIO GPIB Libraries  (Read 4953 times)


Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Python AsyncIO GPIB Libraries
« on: January 02, 2021, 08:08:42 am »
Happy New Year to all forum members.

During my thesis, I have used several GPIB adapters and devices for setting up test environments. Today I would like to share some of my scripts and drivers with you.

Introduction
Typical test environments require multiple devices for reading data or controlling parameters. Modern devices usually feature an Ethernet or USB port, while vintage devices come with GPIB. Typical setups include code like:
Code: [Select]
result1 = device1.read()
result2 = device2.read()
result3 = device3.read()
...
All of these reads are synchronous and processed one after the other. Depending on the bus, there will be a round-trip latency between requesting a read operation and receiving the data. For some numbers see https://www.ni.com/de-de/innovations/white-papers/11/instrument-bus-performance---making-sense-of-competing-bus-techn.html.
While the GPIB bus is extremely good at maintaining low latency (30 µs) and jitter, this advantage is usually ruined by the popular GPIB-to-USB adapters. USB was never intended for realtime applications, as the USB host controller (i.e. the computer) is always in charge, polling each connected device. The latency therefore depends on the OS, the number of devices, bus activity and so on. It can be as low as a few hundred µs and go up to a few ms. With Ethernet this gets even worse, with a typical round-trip delay of about a millisecond up to hundreds of ms.

Now, a millisecond here and there might not sound like much, but when orchestrating multiple devices like four or five dataloggers, this becomes a serious issue when taking data at "fast" rates like 1 PLC or 10 PLC. Even such a light load will bring any synchronous attempt to its knees, no matter how powerful the machine, because the operations are I/O bound.

This problem can be solved by an asynchronous approach. Employing the code example above, it will look more like this:
Code: [Select]
result1, result2, result3 = await asyncio.gather(
    device1.read(),
    device2.read(),
    device3.read()
)
...

This code queries all devices at the same time and waits until all results are in. This reduces the total time to the round-trip time of the slowest request. When orchestrating a whole experiment, this speed increase becomes mind-boggling :)
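To make the difference concrete, here is a minimal, self-contained sketch. FakeDevice and its latencies are made up for the demo; a real driver would talk to the bus instead of sleeping:

```python
import asyncio
import random

# Hypothetical stand-in for a real instrument driver: read() simulates
# the round-trip latency of the bus with a sleep.
class FakeDevice:
    def __init__(self, name, latency):
        self.name = name
        self.latency = latency

    async def read(self):
        await asyncio.sleep(self.latency)  # simulated bus round trip
        return f"{self.name}: {random.random():.6f}"

async def main():
    devices = [
        FakeDevice("dmm1", 0.05),
        FakeDevice("dmm2", 0.08),
        FakeDevice("dmm3", 0.03),
    ]
    # All three reads are in flight at the same time, so the total wall
    # time is roughly the slowest round trip (80 ms), not the sum (160 ms)
    return await asyncio.gather(*(dev.read() for dev in devices))

results = asyncio.run(main())
print(results)
```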

Libraries
I do have a few devices for which I wrote libraries and will be publishing all of those as soon as I have brushed them up a bit and added some documentation. For now I have only uploaded the HP 3478A library.

GPIB Adapters

Devices

Measurement Script
I will also be adding a full measurement script to showcase logging of multiple devices.

Feedback is very welcome and I hope these libraries are of any help to other people.

Update: Added Fluke 5440B
Update: linux gpib wrapper
« Last Edit: January 20, 2021, 05:15:57 am by maat »
 
The following users thanked this post: TiN, ManateeMafia, jjoonathan, ap, MiDi, ch_scr, MegaVolt, Irv1n, eplpwr, notfaded1, Kibabalu, adinsen

Offline notfaded1

  • Frequent Contributor
  • **
  • Posts: 559
  • Country: us
Re: Python AsyncIO GPIB Libraries
« Reply #1 on: January 02, 2021, 04:35:03 pm »
This is really cool!  So essentially you're measuring the GPIB in parallel correct?  That's why you're saying the read period is the time for the longest of the list of reads to complete?  Are you reading multiple usb GPIB interfaces in parallel or multiple devices on the same GPIB chain?

Bill
.ılılı..ılılı.
notfaded1
 

Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Re: Python AsyncIO GPIB Libraries
« Reply #2 on: January 02, 2021, 05:23:57 pm »
Are you reading multiple usb GPIB interfaces in parallel or multiple devices on the same GPIB chain?

Yes, that is the idea. In my setup I use several different interfaces like Ethernet, Serial and GPIB to query different devices, and all of these queries run in parallel. The GPIB bus itself is synchronous, though: if you are using multiple devices on the same bus, a read request is queued. A request like "ACAL DCV" on a 3458A can of course run in the background while other reads and writes are performed. With devices on the same bus, you still save on round-trip times, because you can, for example, request data from several devices at once and then read them as soon as they trigger a service request.

I am looking forward to the Yaugi 4 (https://www.eevblog.com/forum/metrology/yaugi-4-gpib-ethernet-poe-adapter/), so that I can put an Ethernet connection on all devices for a decent price. I assume adding it to the list of supported GPIB adapters will be straightforward, as it implements the same commands as the Prologix adapters.
 

Offline Anders Petersson

  • Regular Contributor
  • *
  • Posts: 122
  • Country: se
Re: Python AsyncIO GPIB Libraries
« Reply #3 on: January 03, 2021, 03:30:37 am »
Good thinking!
I suppose you created (or reused) asyncio drivers for GPIB, wrapped in specific asyncio-aware device classes?
Is there a chance to reuse the existing device classes of PythonIVI, InstrumentKit or PyMeasure? Seems there's already a fragmented set of projects that support different subsets of instruments.
I don't know if that existing code could be reused by wrapping in regular threads, to allow each class to continue blocking? (The advantage of supporting massive numbers of parallel asyncio tasks won't matter here.)
 

Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Re: Python AsyncIO GPIB Libraries
« Reply #4 on: January 03, 2021, 12:28:04 pm »
I suppose you created (or reused) asyncio drivers for GPIB, wrapped in specific asyncio-aware device classes?
Is there a chance to reuse the existing device classes of PythonIVI, InstrumentKit or PyMeasure?

I did create a pure Python implementation for the Prologix adapter and the HP 3478A. The linux-gpib implementation is a wrapper around the existing Linux Python module (which in turn wraps the C driver). I will release this shortly. I only want to do a little more testing, but my current setup is logging data till Tuesday ;)

You can execute a thread in Python AsyncIO by using an executor. This is a special thread (or process) pool that connects AsyncIO to threaded tasks. To connect the threaded world with AsyncIO I use a Janus queue (https://github.com/aio-libs/janus). For an example, check out the link.
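A minimal sketch of the executor route, where `blocking_read` is a hypothetical stand-in for a blocking driver call (a real one would be e.g. a linux-gpib read that waits on the bus):

```python
import asyncio
import time

# Hypothetical blocking call standing in for a synchronous GPIB read
def blocking_read(address):
    time.sleep(0.1)  # pretend this is the bus round trip
    return f"reading from {address}"

async def main():
    loop = asyncio.get_running_loop()
    # None selects the default ThreadPoolExecutor; both reads block in
    # worker threads, so they run concurrently from AsyncIO's point of view
    return await asyncio.gather(
        loop.run_in_executor(None, blocking_read, 22),
        loop.run_in_executor(None, blocking_read, 23),
    )

readings = asyncio.run(main())
print(readings)
```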

Now this is not a panacea, because it will spawn lots of threads and you will be back where you started before using AsyncIO ;) but this is more an academic problem, unless you are scaling up to a lot (a real lot) of devices. I took the executor route with the GPIB driver, because there is a very limited number of USB connectors available on any computer. Typically I use Ethernet connections, which scale nicely, as those connectors are written using AsyncIO (I will upload code for this as well).

So to answer your second question: I guess you could simply wrap those libraries and put them in the executor, but if there is a Python lib already available, then porting it to AsyncIO is usually not a problem if you have a template to copy from.
 

Offline pwlps

  • Frequent Contributor
  • **
  • Posts: 372
  • Country: fr
Re: Python AsyncIO GPIB Libraries
« Reply #5 on: January 04, 2021, 09:17:03 pm »
Hi,

This looks interesting, I have a couple of questions. Actually I wrote an asynchronous GBIB/Visa library for NET (https://www.codeproject.com/Articles/1166996/Multithreaded-communication-for-GPIB-Visa-Serial-i, see also this thread: https://www.eevblog.com/forum/metrology/3458a-logging-via-windows-app-revisited/) and was considering porting it to Python sometime.
1) One of the main sources of latency in GPIB comes from the fact that it is not a packet-switched protocol, so the bus stays busy once a device is addressed to talk. The only way I found to deal with it (as I explain in my article) is either periodically polling the status byte for the "message available" bit or using service requests. How do you deal with this in your app?
2) As far as I know, Python's threading is rather limited because of the Global Interpreter Lock, but I don't have enough experience with Python to tell how this impacts I/O-bound operations such as waiting for a GPIB DLL call to return. I would be grateful if you could advise me on that.
 

Offline ledtester

  • Super Contributor
  • ***
  • Posts: 3036
  • Country: us
Re: Python AsyncIO GPIB Libraries
« Reply #6 on: January 04, 2021, 10:01:39 pm »
Hi,

This is a valuable contribution and I don't want to seem to be belittling it, but as a hardcore programmer who is aware of and bothered by all the little things that could possibly go wrong, I prefer to run logging of different devices in independent threads (or even processes). Then a hangup in reading from one device won't affect the scheduled readings of the others.

Of course, once you have things going, the chance that you'll suffer a comms failure or that a reading will take an abnormally long time is very, very small, in which case this approach will work just fine.
 

Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Re: Python AsyncIO GPIB Libraries
« Reply #7 on: January 05, 2021, 01:59:03 am »
1) One of the main sources of latency in GPIB comes from the fact that it is not a packet-switched protocol, so the bus stays busy once a device is addressed to talk. The only way I found to deal with it (as I explain in my article) is either periodically polling the status byte for the "message available" bit or using service requests. How do you deal with this in your app?
2) As far as I know, Python's threading is rather limited because of the Global Interpreter Lock, but I don't have enough experience with Python to tell how this impacts I/O-bound operations such as waiting for a GPIB DLL call to return. I would be grateful if you could advise me on that.

1) Yes, you are quite right, the GPIB bus is a bit painful when running multiple things in parallel. If you want async reads, then the only option is to use service requests and to register the MSG_RDY bit for SRQs on the devices. You will then find yourself regularly serial-polling all the devices on the bus to find the right one. This gets lame fast... I tend to have only one or two devices on the bus and then connect the GPIB adapter via Ethernet or USB. Yeah, that kills the only cool feature of the GPIB bus (latency), but if I want that, I hook up an MCU to do the job. So the final answer is that it is up to the user: you can issue a write(), a wait(RQS) and then a read() if you really need to talk to other devices, or a write(), then read() to just wait until the device is ready. The latter is far easier to implement and less error-prone.

Here is an example from the Fluke 5440B library. It takes about 5 minutes to do a full selftest and I therefore resorted to SRQs:
Code: [Select]
(snip)
await self.__query_job(self.__board.config, gpib.IbcAUTOPOLL, enabled)  # this will call ibconfig() on the gpib *board* (not device) and set IbcAUTOPOLL to 1, to enable automatic serial polling
(snip)
await self.set_srq_mask(SrqMask.DOING_STATE_CHANGE)   # Enable SRQs to wait for each test step
self.__logger.info("Running analog selftest. This takes about 4 minutes.")
await self.write("TSTA")
while "testing":
    await self.__conn.wait(1 << 11)    # Wait for RQS
    spoll = await self.serial_poll()
    (snip) more error handling (snip)
    if spoll & SerialPollFlags.DOING_STATE_CHANGE:
        state = await self.get_state()
        (snip) handle different states (snip)
        if state == State.IDLE:
            break
self.__logger.info("Analog selftest passed.")

2) The GIL is not an issue unless you go to insane numbers of threads, because you spend most of the time in I/O waits and very little time interpreting Python code. The GIL only becomes a problem if you spend a lot of time running Python code.

Edit: Added code example
« Last Edit: January 05, 2021, 02:46:16 am by maat »
 
The following users thanked this post: pwlps

Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Re: Python AsyncIO GPIB Libraries
« Reply #8 on: January 05, 2021, 02:27:17 am »
Hi,

This is a valuable contribution and I don't want to seem to be belittling it, but as a hardcore programmer who is aware of and bothered by all the little things that could possibly go wrong, I prefer to run logging of different devices in independent threads (or even processes). Then a hangup in reading from one device won't affect the scheduled readings of the others.

Of course, once you have things going, the chance that you'll suffer a comms failure or that a reading will take an abnormally long time is very, very small, in which case this approach will work just fine.

No offence taken. I believe there are two approaches to logging data:
  • Timestamp each individual datum and stuff it into a database for later retrieval and post-processing. Here you will want to gather as much data from each individual sensor as you can (or need). If some sensors fail, you only lose their data.
  • Have one dataset per point in time with all relevant sensor data. In this case, if you have failing sensors, you lose the whole dataset, because it becomes useless as it would break your database scheme.

I believe you are referring to the first logging scheme when you say that a lost sensor should not influence the rest, so I will address that issue:
In AsyncIO you have multiple options. You can use the simple
Code: [Select]
result_list = await asyncio.gather(*coroutines)
syntax, which will blow up in your face if one future breaks. An example would be an environmental chamber that is logging test data: I don't need half-butchered data. I would rather drop the whole dataset, fix the issue (automatically if possible) and then return to sampling. But for your situation there are several other options, and a more versatile one is
Code: [Select]
done, pending = await asyncio.wait(tasks)
This function allows you to specify when it should return: after a timeout, when the first future is done, on the first exception, or when all are done. You can then go through each result and decide what to do with it. You can even keep processing the pending futures by waiting for them again. This would be the solution if you have a number of sensors that interact with each other and you want to keep the scope narrow.
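A sketch of how asyncio.wait() lets the healthy sensors survive a failing one (the sensor coroutine, its names and timings are made up for the demo):

```python
import asyncio

# Made-up sensor coroutine: "b" fails while "a" and "c" deliver data
async def sensor(name, delay, fail=False):
    await asyncio.sleep(delay)
    if fail:
        raise IOError(f"{name} timed out")
    return name, 42.0

async def main():
    # asyncio.wait() wants Tasks, not bare coroutines
    tasks = [
        asyncio.create_task(sensor("a", 0.01)),
        asyncio.create_task(sensor("b", 0.02, fail=True)),
        asyncio.create_task(sensor("c", 0.05)),
    ]
    done, pending = await asyncio.wait(tasks, timeout=0.5)
    results = {}
    for task in done:
        try:
            name, value = task.result()
            results[name] = value
        except IOError:
            pass  # one broken sensor does not spoil the others
    for task in pending:  # empty here, but cancel stragglers in general
        task.cancel()
    return results

results = asyncio.run(main())
print(results)
```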

Another option is to schedule a task for each sensor. In this case, if a sensor blows up, only that task is affected and the rest keep running. Your code can easily treat each problem individually. I use that for logging dozens of sensors spread all over our labs to monitor different variables. In this case I need hot-plugging and automatic failover. The data is timestamped and streamed to multiple data servers, and aggregation happens somewhere in the backend. Interaction between independent tasks can be realized via queues or more elaborate constructs (pub/sub for example).
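A task-per-sensor sketch along those lines, using an asyncio.Queue to decouple the sensors from the consumer (all names, intervals and the placeholder reading are made up):

```python
import asyncio

# One independent task per (made-up) sensor: an exception only affects
# that sensor's task, the others keep running
async def poll_sensor(name, interval, queue):
    while True:
        value = 42.0  # placeholder for an actual `await device.read()`
        await queue.put((name, value))
        await asyncio.sleep(interval)

async def main():
    queue = asyncio.Queue()
    tasks = [
        asyncio.create_task(poll_sensor(f"s{i}", 0.01, queue))
        for i in range(3)
    ]
    # The consumer just drains the queue; here we stop after six readings
    data = [await queue.get() for _ in range(6)]
    for task in tasks:
        task.cancel()
    return data

data = asyncio.run(main())
print(data)
```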

I believe all those situations can be solved fairly elegantly in Python nowadays. There are some very powerful frameworks available on Python, especially, if you want to connect all that to a network.
 

Offline Anders Petersson

  • Regular Contributor
  • *
  • Posts: 122
  • Country: se
Re: Python AsyncIO GPIB Libraries
« Reply #9 on: January 05, 2021, 05:02:09 pm »
I believe there are two approaches to logging data:
  • Timestamp each individual datum and stuff it into a database for later retrieval and post-processing. Here you will want to gather as much data from each individual sensor as you can (or need). If some sensors fail, you only lose their data.
  • Have one dataset per point in time with all relevant sensor data. In this case, if you have failing sensors, you lose the whole dataset, because it becomes useless as it would break your database scheme.
[...]
There are some very powerful frameworks available on Python, especially, if you want to connect all that to a network.

Maybe this is taking the thread out on a tangent but I'd like to hear people's thoughts on the handling of data since I'm designing a data processing library.

I agree with maat on the categorization of approaches to logging, but in my own thinking I've generalized it and added a third option.
We can view the data as a mathematical matrix where each row is the readings at one point in time. In practical terms, it's like a big CSV file, spreadsheet or database table.

1. Sparse matrix = logging each sensor with individual timestamps (maat category 1)
2. Dense matrix = all sensors have a value for every line
   2a. Dense with irregular intervals = Each line has a timestamp (maat category 2)
   2b. Dense with regular intervals = The timestamps have a pre-defined interval (such as one per second) so timestamp can be calculated from the line index and much processing can be done without timestamps

1 is most flexible but least efficient to store and process. 2b is the opposite.

Is there an established terminology here? How do different software packages deal with the conflicting goals of expressiveness and efficiency?
 

Offline ledtester

  • Super Contributor
  • ***
  • Posts: 3036
  • Country: us
Re: Python AsyncIO GPIB Libraries
« Reply #10 on: January 06, 2021, 04:44:07 am »
Quote
1 is most flexible but least efficient to store and process. 2b is the opposite.

(1) doesn't have to be difficult to store or process; (2b) imposes constraints on your collection process that you may not be able to keep.

Consider logging each sensor to its own CSV file by concurrently running processes. Even though the sensors are read from separate processes, you can coordinate the logging so that, for instance, each sensor is read once a second on the second.

Randomly accessing a timestamped CSV file can be done efficiently, but I wouldn't try to use it like you would use a database. I see it as a record of the raw observations; its purpose is to make sure you have a permanent record of the data. After you have the CSV data you may find it useful to import it into a database or spreadsheet, but I wouldn't make that step part of the logging process.
 

Offline IanJ

  • Supporter
  • ****
  • Posts: 1609
  • Country: scotland
  • Full time EE & Youtuber
    • IanJohnston.com
Re: Python AsyncIO GPIB Libraries
« Reply #11 on: January 06, 2021, 07:50:18 am »
Hi all,

FWIW here is the format I use for my Windows GPIB app and why:

As mentioned above, efficiency is key. Keep it simple.

INDEX, DEVICE NAME, DATE_TIME, VALUE, TEMPERATURE, HUMIDITY

Code: [Select]
1,3458A,2020-11-13_16:25:29,1.000002084,26.75,37.25
2,34461A,2020-11-13_16:25:29,1.00001,26.75,37.25
3,3458A,2020-11-13_16:25:30,1.000001982,26.73,37.26
4,34461A,2020-11-13_16:25:30,1.00001,26.73,37.26
5,3458A,2020-11-13_16:25:31,1.000002102,26.72,37.26
6,34461A,2020-11-13_16:25:31,1.00001,26.72,37.26
7,3458A,2020-11-13_16:25:32,1.000002236,26.73,37.25
8,34461A,2020-11-13_16:25:32,1.00001,26.73,37.25
9,3458A,2020-11-13_16:25:33,1.000001864,26.72,37.29
10,34461A,2020-11-13_16:25:33,1.000009,26.72,37.29
11,3458A,2020-11-13_16:25:34,1.000002005,26.73,37.26
12,34461A,2020-11-13_16:25:34,1.00001,26.73,37.26
13,3458A,2020-11-13_16:25:35,1.00000182,26.71,37.24
14,34461A,2020-11-13_16:25:35,1.000009,26.71,37.24
15,3458A,2020-11-13_16:25:36,1.000001369,26.75,37.33
16,34461A,2020-11-13_16:25:36,1.000009,26.75,37.33
17,3458A,2020-11-13_16:25:37,1.000002081,26.73,37.34
18,34461A,2020-11-13_16:25:37,1.000009,26.73,37.34
19,3458A,2020-11-13_16:25:38,1.000001753,26.72,37.42
20,34461A,2020-11-13_16:25:38,1.000009,26.72,37.42
21,3458A,2020-11-13_16:25:39,1.000001894,26.71,37.42

My app can log the data from two GPIB devices at the same time. The individual GPIB processes run in their own threads.
The logging to CSV etc is outwith that. The GPIB comms as a result is extremely reliable.....even on Windows.

The CSV file has the option to comma-delimit or semicolon-delimit. The reason is that some European countries use the comma as a decimal point, which would mess up the CSV. The user can select accordingly.

When logging from two GPIB devices at the same time there is a common sample rate, user settable from below 0.5sec right up, hence the time stamp down to the second (1 sec and above for offline graph/plotting).

The INDEX doesn't do much, it's just handy to have, say when manually reviewing the file.

Since the sample rate is common then the two devices alternate in the file as you can see.

When each sample is taken the temperature & humidity is taken at the same time for graph plot overlay.

No calculated parameters are recorded in the file, i.e. the graph plot overlay calculates PPM and PPM/DegC tempco stuff offline.

The graph plot overlay software allows the user to select either or both devices in a dual-log CSV for display/analysis. The device name in the CSV is key to this. Theoretically, if the sample rate were to change half-way through, the offline graph/plot software could handle this and plot accordingly; I'm not sure I have tried this with mine, but it should be okay, as the plotting goes by the timestamp. The actual sample rate is not recorded directly in the CSV.

Hope that helps,

Ian.
« Last Edit: January 06, 2021, 10:18:38 am by IanJ »
Ian Johnston - Original designer of the PDVS2mini || Author of the free WinGPIB app.
Website - www.ianjohnston.com
YT Channel (electronics repairs & projects): www.youtube.com/user/IanScottJohnston, Twitter (X): https://twitter.com/IanSJohnston
 

Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Re: Python AsyncIO GPIB Libraries
« Reply #12 on: January 06, 2021, 10:59:03 am »
1. Sparse matrix = logging each sensor with individual timestamps (maat category 1)
2. Dense matrix = all sensors have a value for every line
   2a. Dense with irregular intervals = Each line has a timestamp (maat category 2)
   2b. Dense with regular intervals = The timestamps have a pre-defined interval (such as one per second) so timestamp can be calculated from the line index and much processing can be done without timestamps

Type 1 is what you would typically put in a relational database. If you intend to log for years, with sensors coming and going, that's the way to go. Your logger should then put the data right into the database asynchronously, as soon as it arrives. From the database, data can be retrieved very efficiently and you have powerful queries to help with aggregation. The format in the database would typically be close to what Ian has posted:

$Timestamp (I'll shoot anyone using local time), $sensor_id, $value

All other static parameters can then be extracted from other tables, and this in turn keeps the level of normalization high, which ensures that your logging scheme is extendable in the future (see Wikipedia on https://en.wikipedia.org/wiki/Database_normalization). Do yourself a favour and drop the extra index. If you need line numbers, use an editor that prints them for you. An index like that will blow up in your face as soon as you start joining more than one file. An index is supposed to improve data retrieval speeds, and the timestamp already does that for you. When using a database, interpolation and resampling of the data can be done on the fly when extracting data.
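As an illustration of that scheme, a minimal SQLite sketch; the table and column names are my own choice for the demo, not a prescribed layout:

```python
import sqlite3

# Normalized "timestamp, sensor_id, value" scheme: sensor metadata lives
# in its own table, readings reference it by id
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sensors (
        sensor_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE readings (
        timestamp TEXT NOT NULL,  -- UTC, never local time
        sensor_id INTEGER NOT NULL REFERENCES sensors(sensor_id),
        value REAL NOT NULL
    );
    CREATE INDEX idx_readings_ts ON readings(timestamp);
""")
con.execute("INSERT INTO sensors VALUES (1, '3458A')")
con.execute(
    "INSERT INTO readings VALUES ('2021-01-06T10:00:00Z', 1, 1.000002084)"
)
# A join pulls the static sensor name back in only when needed
row = con.execute("""
    SELECT s.name, r.timestamp, r.value
    FROM readings r JOIN sensors s USING (sensor_id)
""").fetchone()
print(row)
```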

Type 2 is more what you would expect in a file, where you have a finite logging time and are interested in calculating correlations and the like. With a finite time span you do not need the high degree of normalization, and the format may change between measurements. It is more about ease of use. All these statistical packages (numpy, scipy, etc.) assume that data columns are of the same size; interpolating or resampling to get the same number of entries with the same timestamp per column makes things complicated.

I would keep my fingers off option 2b as well, like ledtester said, because it is an unnecessary constraint. The timestamp by itself is already the index. If you desire integers and seconds, I recommend looking at the Unix epoch format. Imagine what happens when a sensor misses a step: it invalidates the whole measurement. Also, you cannot easily stitch files together using copy & paste; you always need post-processing to do so. Memory is cheap nowadays. Keysight uses this model in their 34xxx DMM series when logging data, and I could kill them for that decision.

I suggest we open a separate thread to discuss these issues further. It might get long ;)
 

Offline Anders Petersson

  • Regular Contributor
  • *
  • Posts: 122
  • Country: se
Re: Python AsyncIO GPIB Libraries
« Reply #13 on: January 08, 2021, 08:42:52 pm »
Thanks for your comments ledtester, IanJ, maat!
Each of these three data representations has its place -- for example, a processed and interpolated time series stored in RAM for plotting can use type 2b. Ian's model is a pragmatic mix of 1 and 2a and is useful for his use case, but it would have a big overhead with a million data points or many sensors. There's simply no single model that's most suitable for all scenarios.

There are many aspects to this but I'll refrain from taking this thread more off-topic.
 

Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Re: Python AsyncIO GPIB Libraries
« Reply #14 on: January 10, 2021, 06:16:35 am »
I have just uploaded the Fluke 5440B library (see top post). I have implemented all features except the following:

  • External calibration
  • Calculating DUUT uncertainties on the calibrator

I currently do not have the means to run the external calibration, so I cannot test the code. If there is someone willing to test that for me, I will add the code. Regarding the uncertainties: the instrument allows calculating the uncertainty of the calibration, but I believe this is best done in software on the host machine, not on the calibrator. I am willing to implement it though, if there is interest.

I also do not have a calibrator that throws errors, so I cannot test those calls with errors. Again, if you want to help, please do test those calls.

Enjoy.
 

Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Re: Python AsyncIO GPIB Libraries
« Reply #15 on: January 20, 2021, 05:19:15 am »
I have finally added the linux-gpib wrapper to the list. I have also updated the examples of the Fluke 5440B to show the usage. The methods are basically the same; only the initialization differs.

Using the wrapper, one can easily use the full feature set of linux-gpib in an AsyncIO project.

Next on my todo list is either the Fluke 1524 or the HP3458A
 

Offline leighcorrigall

  • Frequent Contributor
  • **
  • Posts: 453
  • Country: ca
  • Nuclear Materials Scientist
Re: Python AsyncIO GPIB Libraries
« Reply #16 on: April 20, 2023, 03:28:17 pm »
I have just uploaded the Fluke 5440B library (see top post). I have implemented all features except the following:

  • External calibration
  • Calculating DUUT uncertainties on the calibrator

I currently do not have the means to run the external calibration, so I cannot test the code. If there is someone willing to test that for me, I will add the code.

...


Hi maat,

The EXT CAL procedure described in the Service Manual is semi-automatic: test leads have to be swapped in and out, and knobs have to be adjusted on both a null detector and the voltage divider. Could you please elaborate on what you are trying to improve? I believe I have the capabilities for EXT CAL if you want to work with me to include these features.

The following instruments are essential for calibration:
Fluke 732A - Voltage Standard [calibrated July 2022 with known drift]
Fluke 732A - Voltage Standard [calibrated July 2022]
Fluke 845AR - Null Detector
Fluke 752A - Voltage Divider
Fluke 5440B - Voltage Calibrator
Keithley 2002 - Multimeter [calibrated April 2023]

EDIT: I also noticed that the example scripts provided on GitHub are missing useful features such as:
- Toggle the external guard
- Toggle remote sense
- Range select (e.g., source 11 V on 11 V or 22 V range)
- Divider output

Let me know if I can be of any help. I would like to eventually use this python code to validate multimeters with my Fluke 5440B.

Regards.
« Last Edit: April 20, 2023, 04:08:10 pm by leighcorrigall »
MASc, EIT, PhD Candidate
 

Offline leighcorrigall

  • Frequent Contributor
  • **
  • Posts: 453
  • Country: ca
  • Nuclear Materials Scientist
Re: Python AsyncIO GPIB Libraries
« Reply #17 on: April 20, 2023, 08:14:02 pm »
I am reviewing Tables 4-7 of the Fluke 5440B Operators Manual. It gives all of the commands that I would need to control the instrument remotely. Here is a simple example that sources a voltage:
RESET
SOUT 9.876543
OPER
STBY

These commands were fed sequentially into the Agilent E5810A 'Find & Query Instruments' webpage after selecting the correct GPIB address. I will eventually write a script in Python or some other language.

I am not quite sure why I would need a separate library in order to accomplish a similar task. Is this library simply to enhance the connectivity of other units synchronously?
« Last Edit: April 20, 2023, 08:15:37 pm by leighcorrigall »
MASc, EIT, PhD Candidate
 

Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Re: Python AsyncIO GPIB Libraries
« Reply #18 on: April 22, 2023, 10:21:47 am »
I have just uploaded the Fluke 5440B library (see top post). I have implemented all features except the following:

  • External calibration
  • Calculating DUUT uncertainties on the calibrator

I currently do not have the means to run the external calibration, so I cannot test the code. If there is someone willing to test that for me, I will add the code.

Hi maat,

The EXT CAL procedure described in the Service Manual is semi-automatic: test leads have to be swapped in and out, and knobs have to be adjusted on both a null detector and the voltage divider. Could you please elaborate on what you are trying to improve? I believe I have the capabilities for EXT CAL if you want to work with me to include these features.

Thanks for offering help, this is great. Regarding 'adding code': I intend to add an example file that allows one to do the full EXT CAL procedure (including prompts and stuff). So basically a CLI utility that guides one through the EXT CAL procedure. I will look up the procedure tonight to see how to put it into code. It would be great if you could test it afterwards.

EDIT: I also noticed that the example scripts provided on GitHub are missing useful features such as:
- Toggle the external guard
- Toggle remote sense
- Range select (e.g., source 11 V on 11 V or 22 V range)
- Divider output

Ah yes, indeed, the documentation is still a little slim. The function calls are:
Toggle the external guard: set_internal_guard(bool) <- use False to enable the external guard
Toggle remote sense: set_internal_sense(bool) <- see above
Range select: Is there a way to manually select the range?
Divider output: set_divider(bool) <- True to enable the divider

I will add them as examples as well.
 

Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Re: Python AsyncIO GPIB Libraries
« Reply #19 on: April 22, 2023, 10:34:11 am »
I am reviewing Tables 4-7 of the Fluke 5440B Operators Manual. It gives all of the commands that I would need to control the instrument remotely. Here is a simple example that sources a voltage:
RESET
SOUT 9.876543
OPER
STBY

These commands were fed sequentially into the Agilent E5810A 'Find & Query Instruments' webpage after selecting the correct GPIB address. I will eventually write a script in Python or some other language.

I am not quite sure why I would need a separate library in order to accomplish a similar task. Is this library simply to enhance the connectivity of other units synchronously?

Yes, these are the correct commands. The 5440 is fairly straightforward to use if you only have a simple application in mind. It is a little slow though ;) Far slower than a modern computer, so at times it needs a little wait time. I add 200 ms after each command. This may be tweaked a little, but the 5440 needs more time to settle anyway. Another thing I added was polling the status after sending a command to see if it succeeded, or raise an error otherwise. If you want to run more complex stuff like the self-test or ACAL routines, the library also does the background checking required. Finally, there is stuff like reports and the CAL constants that can be extracted from the device. The library does the pretty-printing for you. The same goes for the status bytes.
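The write-then-check pattern could be sketched like this. Note that write_raw(), serial_poll() and the error-bit position are hypothetical stand-ins for the demo, not the actual API of the library:

```python
import asyncio

# Hedged sketch of the write-then-poll pattern described above
ERROR_BIT = 1 << 5  # assumed position of the error bit in the status byte

async def write_checked(connection, command, settle=0.2):
    await connection.write_raw(command)
    await asyncio.sleep(settle)  # give the slow 5440 time to settle
    status = await connection.serial_poll()
    if status & ERROR_BIT:
        raise RuntimeError(f"{command!r} failed, status byte: {status:#04x}")
    return status

# Dummy connection so the sketch can be exercised without hardware
class DummyConnection:
    async def write_raw(self, command):
        self.last_command = command

    async def serial_poll(self):
        return 0x00  # no error bit set

status = asyncio.run(write_checked(DummyConnection(), "SOUT 9.876543", settle=0))
print(status)
```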

So all in all, this library is a convenience layer and helps a lot when integrating the 5440 into a complex script. Especially if there is more than one device involved and you need to reliably orchestrate the show.

I will do some more cleaning up tonight and push it to pypi, so that it will become as easy as pip install to use it.
 
The following users thanked this post: MegaVolt, leighcorrigall

Offline maatTopic starter

  • Regular Contributor
  • *
  • Posts: 144
  • Country: de
Re: Python AsyncIO GPIB Libraries
« Reply #20 on: April 25, 2023, 06:01:36 am »
I will do some more cleaning up tonight and push it to pypi, so that it will become as easy as pip install to use it.

I have some good news and some bad news.  :-DD

First, the good news. I reworked the library and it is looking pretty decent right now. I moved the documentation to a separate page and cleaned up the readme while I was at it. If you have any questions, do let me know. The best thing about the rework is that the library can now be used with the new

Code: [Select]
async with f5440b(...) as device:
Python syntax to employ a context manager that takes care of the cleanup. See the examples for more details.
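For anyone unfamiliar with the pattern, this is roughly what the context manager guarantees; the Instrument class here is a generic stand-in, not the real f5440b class:

```python
import asyncio

# Generic sketch: the connection is "opened" on entry and "closed" on
# exit, even if the body of the async-with block raises
class Instrument:
    async def __aenter__(self):
        self.connected = True    # stand-in for opening the GPIB connection
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.connected = False   # cleanup runs no matter what
        return False             # do not swallow exceptions

async def main():
    async with Instrument() as device:
        assert device.connected  # talk to the device here
    return device.connected      # False: cleaned up after the block

closed_state = asyncio.run(main())
print(closed_state)
```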

I have uploaded the latest build to PyPI and it is now possible to install the library via
Code: [Select]
pip install fluke5440b-async
Additionally, I have tested the self-test code and fixed some issues. The library should now correctly raise an exception if there is an error during self-test. This will also sort of pretty-print the error codes.

Finally, the bad news. I had a chance to test the self-test code...
Code: [Select]
SelftestError: High voltage self-test failed with error: SelfTestErrorCode.FIL_BOUT_BRDS_FAULT_CHECK_875V_RANGE.
I love those Monday mornings. Oh well, another thing on my todo list. I will also look into the EXT cal routine as soon as I find some time.
« Last Edit: April 25, 2023, 06:23:50 am by maat »
 
The following users thanked this post: leighcorrigall

