Hello again everyone, got some more details to provide about the SDK...
A question was asked earlier about the ReadHardData and DrawWave functions, specifically the "nReadLen" and "nDisLen" parameters of ReadHardData and the "nSrcDataLen" and "nDisDataLen" parameters of DrawWave.
When I was implementing GUI controls for Vertical and Horizontal, I noticed that my application wasn't behaving the same way as the stock 6022BE software; specifically, the horizontal scaling was way off.
I had a hunch as to what was causing it, but no idea how to solve it... It turns out the "Read Length" and "Display Length" properties were the culprit as I had used the default values provided in the "SDK" and "Manual" PDF file.
The problem is that these values depend heavily on which timebase you are in, there is no documentation on which values to use for which timebase, and the single example of their use in the SDK and PDF is not enough to figure out what the rest should be.
The first problem I had to tackle was what "Read Length" values to use for each of the 39 different timebases in the original software, and an earlier post in this thread got me close, but no cigar.
I ended up having to write a redirection DLL for HTDisplayDll.dll — simply a DLL that sits between the original software and the real HTDisplayDll.dll so I could snoop on what is being passed into the DLL function calls. The process was very tedious and time-consuming, but I got it working, and here is what I came up with.
Below is a snippet of code: an enum I use in my software to describe the 39 different timebases used by the two DLLs. The index value of each enum entry matches the order of the Timebase drop-down list in the original software and, not surprisingly, the values used by the DLLs. The entries are grouped by "Read Length" (see the sample-count comments), and each entry is followed by a comment showing the "Display Length", "Horizontal Scaling" and "Vertical Scaling" ("Zoom") arguments used in the DrawWave function. The scaling arguments all turned out to be a 1:1 ratio, but that's nevertheless good to know.
//Time Division
enum THantekTimeDivision
{
    //1016 samples
    HTTimeDiv48MS_1NS = 0,    //960, 1, 1
    HTTimeDiv48MS_2NS = 1,    //960, 1, 1
    HTTimeDiv48MS_5NS = 2,    //960, 1, 1
    HTTimeDiv48MS_10NS = 3,   //960, 1, 1
    HTTimeDiv48MS_20NS = 4,   //960, 1, 1
    HTTimeDiv48MS_50NS = 5,   //960, 1, 1
    HTTimeDiv48MS_100NS = 6,  //960, 1, 1
    HTTimeDiv48MS_200NS = 7,  //960, 1, 1
    HTTimeDiv48MS_500NS = 8,  //960, 1, 1
    HTTimeDiv48MS_1US = 9,    //960, 1, 1
    HTTimeDiv48MS_2US = 10,   //960, 1, 1
    //130048 samples
    HTTimeDiv16MS_5US = 11,   //800, 1, 1
    HTTimeDiv8MS_10US = 12,   //800, 1, 1
    HTTimeDiv4MS_20US = 13,   //800, 1, 1
    HTTimeDiv1MS_50US = 14,   //500, 1, 1
    HTTimeDiv1MS_100US = 15,  //1000, 1, 1
    HTTimeDiv1MS_200US = 16,  //2000, 1, 1
    HTTimeDiv1MS_500US = 17,  //5000, 1, 1
    HTTimeDiv1MS_1MS = 18,    //10000, 1, 1
    HTTimeDiv1MS_2MS = 19,    //20000, 1, 1
    //523264 samples
    HTTimeDiv1MS_5MS = 20,    //50000, 1, 1
    HTTimeDiv1MS_10MS = 21,   //100000, 1, 1
    HTTimeDiv1MS_20MS = 22,   //200000, 1, 1
    //1047552 samples
    HTTimeDiv1MS_50MS = 23,   //500000, 1, 1
    HTTimeDiv1MS_100MS = 24,  //1000000, 1, 1
    HTTimeDiv500K_200MS = 25, //1000000, 1, 1
    HTTimeDiv200K_500MS = 26, //1000000, 1, 1
    HTTimeDiv100K_1S = 27,    //1000000, 1, 1
    HTTimeDiv100K_2S = 28,    //2000000, 1, 1
    HTTimeDiv100K_5S = 29,    //5000000, 1, 1
    HTTimeDiv100K_10S = 30,   //10000000, 1, 1
    HTTimeDiv100K_20S = 31,   //20000000, 1, 1
    HTTimeDiv100K_50S = 32,   //50000000, 1, 1
    HTTimeDiv100K_100S = 33,  //100000000, 1, 1
    HTTimeDiv100K_200S = 34,  //200000000, 1, 1
    HTTimeDiv100K_500S = 35,  //500000000, 1, 1
    HTTimeDiv100K_1000S = 36, //1000000000, 1, 1
    HTTimeDiv100K_2000S = 37, //2000000000, 1, 1
    HTTimeDiv100K_5000S = 38  //-1, 1, 1
};
As you can see, to scale the waveforms properly the "Display Length" is not the same as the "Read Length", and in some cases it's actually larger than the Read Length.
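For anyone driving the DLL from a script, the table above boils down to a lookup from timebase index to the (Read Length, Display Length) pair. Here's a minimal Python sketch of that mapping; the function name and table structure are mine, not part of the SDK — the values are just the snooped ones from the enum comments:

```python
# (first_index, last_index) -> Read Length shared by that timebase group
READ_LEN_GROUPS = {
    (0, 10): 1016,
    (11, 19): 130048,
    (20, 22): 523264,
    (23, 38): 1047552,
}

# timebase index -> Display Length (nDisLen / nDisDataLen)
DISPLAY_LEN = (
    [960] * 11 +                      # indices 0-10
    [800, 800, 800, 500, 1000,        # indices 11-15
     2000, 5000, 10000, 20000,        # indices 16-19
     50000, 100000, 200000,           # indices 20-22
     500000, 1000000, 1000000,        # indices 23-25
     1000000, 1000000, 2000000,       # indices 26-28
     5000000, 10000000, 20000000,     # indices 29-31
     50000000, 100000000, 200000000,  # indices 32-34
     500000000, 1000000000,           # indices 35-36
     2000000000, -1]                  # indices 37-38
)

def lengths_for_timebase(index):
    """Return (read_len, display_len) for a timebase index 0..38."""
    for (lo, hi), read_len in READ_LEN_GROUPS.items():
        if lo <= index <= hi:
            return read_len, DISPLAY_LEN[index]
    raise ValueError("timebase index out of range: %d" % index)
```

So, for example, index 18 (1ms/div) reads 130048 samples but displays 10000 of them, while index 27 (1s/div) reads 1047552 and displays 1000000.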
Needless to say, my application now behaves exactly like the stock software. Later on I'll post the DLL hooking code I used to reverse engineer the timebase argument convention.
Yet again, some more undocumented information... Not surprisingly, the Hantek "SDK" and "Manual" don't mention this, but for triggering to work and for a stable waveform to be displayed, you have to use the Trigger Point Index value (set by the ReadHardData function) inside the DrawWave function, specifically in the nCenterData argument.
Now, the example code provided in the "SDK" shows this nCenterData argument being supplied with simply half of the Read Length (of the raw data buffer).
However, if you use this method you will find your waveforms bouncing all over the place and your Trigger Level having no effect whatsoever...
Here is what you have to do: supply the nCenterData argument with the Trigger Point Index (the index into the raw data where the trigger occurred) plus half of the raw Read Length (half of the entire raw data buffer).
Again, I had to use my hooking code to find out what the hell was going on with the DrawWave function. I noticed that in the original software the nCenterData value wasn't exactly half of the raw data length, and furthermore it was changing rapidly, which clued me in to the possibility that they were using the Trigger Point Index to offset it somehow.
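In code the fix is a one-liner. Here's a sketch of how I compute the value before calling DrawWave (the function and variable names are mine, not from the SDK):

```python
def center_data(trigger_point_index, read_len):
    """nCenterData for DrawWave: the Trigger Point Index reported by
    ReadHardData (relative to the raw buffer) plus half the raw Read
    Length.  Passing read_len // 2 alone, as the SDK example does,
    gives the bouncing, untriggered display described above."""
    return trigger_point_index + read_len // 2
```

With the 130048-sample timebases, for instance, a trigger at raw index 12345 yields an nCenterData of 12345 + 65024.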
I have reverse engineered the Front End if anyone is interested:
The A7 device is just a fast-switching rectifier; it breaks down at 100V and is being used to clamp the signal to -5V and +5V.
Also take note that instead of the usual 1M ohm resistor and trimmer capacitor in parallel from the input signal to ground, the input signal goes through a 909K resistor (with the trimmer capacitor in parallel with it) and is then shunted to ground through a 100K resistor in parallel with an SMD capacitor. To the input it looks like a 1M resistor to ground, but the signal is tapped between the 909K and the 100K before going to the first op amp. I'm not sure a 5V input signal in 1x mode gives a 5V signal at the node between the 909K and 100K; it's more like 500mV. This would mean the probe is safe all the way up to 50V in 1x mode and 500V in 10x.
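A quick sanity check of that divider, using the component values as read off my board (Python used purely as a calculator here):

```python
# Input divider: signal -> 909K (trimmer cap in parallel) -> tap -> 100K -> GND
R_TOP = 909e3   # series resistor from the BNC input
R_BOT = 100e3   # shunt resistor from the tap to ground

# DC input impedance seen at the BNC: essentially the standard 1M.
z_in = R_TOP + R_BOT                 # 1.009 Mohm

# Divider ratio at the tap feeding the first op amp.
ratio = R_BOT / (R_TOP + R_BOT)      # ~0.0991, i.e. roughly 10:1

# So a 5V input in 1x mode lands at roughly 0.5V at the op amp input.
v_tap = 5.0 * ratio
```

That ~0.0991 ratio is consistent with the "5V in, ~500mV at the node" estimate above.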
Also, the outside of my unit has a label between the BNC connectors that says "35vpk max", and I'm not sure whether they mean that for 1x or 10x.
Richard,
I would be very interested in what you've done with the front end. I wrapped the SDK into Python via ctypes, thinking I would use this scope for data acquisition and possibly some spectrum analysis. For you and anyone else who might be interested, I uploaded my Python wrapper to
https://github.com/rpcope1/Hantek6022API . I hope someone can get something useful out of it.
Also, to you and everyone else: I saw someone mention changing or fixing the DC-DC converter. I can see where the Mornsun DC-DC converter is soldered in (and I agree, it's a terrible choice for this application). The first thing I'm going to attempt is to solder some big electrolytic caps between both V+ and V- and the 0V reference coming out of the converter. Does this sound like a reasonable thing to do to reduce noise on the traces? I'm also open to changing the DC-DC converter here if anyone has suggestions on how to do it, and I'll report how it worked. I think this will make an interesting experiment.
Also, I'm interested in any other experimentation you all are thinking about (including possibly hacking the firmware?).
I would be very interested in what you've done with the front end
I added 100uF 16V electrolytics in parallel with the large SMD capacitors after the +5V USB branch and after the +3.3V Regulator.
I also stacked (in a very bodgy fashion) SMD capacitors on top of ALL the bypassing capacitors for the USB Micro and ADC.
Then I stacked SMD Capacitors on the larger of the three SMDs before the DC-DC (C103 & C105) and I stacked the SMD Capacitors on the other side, specifically the large ones going between +5 and -5 and the ones going from +5 to GND and -5 to GND, both before and after the Inductors.
Then I made a very rudimentary screening can out of copper foil shielding over clear PETE plastic: I measured and scored the PETE, folded it into a can shape, then soldered tinned copper wire at various places around the sides, aligned with where the solder holes for the can are positioned. I made two of these so each channel has its own can.
I then used copper foil tape to shield the holes in both aluminum end caps, which were only covered by the sticky label on the other side; the largest of the holes is where the Logic Analyzer header would mount.
I used copper foil tape around the DC-DC and grounded it with a braided copper strap (Solder wick). Also, my unit did not come with a heatsink on the ADC (though my board revision is 1.00.2, not 1.00.1) so I added a flat TO220 heatsink mounted with Arctic Silver thermal adhesive and ground strapped with copper braid again.
Figures, though: stupid me forgot to acquire some waveforms pre-modification to compare, but from memory I was getting 10-15mV of noise, and post-modification it's down to 3-5mV.
I'll take pictures of the interior later and post them here, as well as some screenshots of the noise floor.
I think the first thing I'm going to attempt to do is solder some big electrolytic caps onto between both the V+ and V- and the 0V reference coming out of the converter. Does this sound reasonable thing to do to reduce noise on the traces? I'm also open to changing the DC-DC converter here if anyone has some suggestions as to how to do, and reporting how it worked. I think this will make an interesting experiment.
The DC-DC datasheet specifies a maximum capacitive load of 100uF, so be careful what you add there. I also plan on populating the Buck & Charge Pump section to see what happens.
Edit: Here is a high-res PNG legend of my changes using Aurora's high res PCB image (hope he doesn't mind):
http://img198.imageshack.us/img198/3733/7ljo.png
...forgot to aquire some waveforms pre-modification to compare, but from memory I was getting 10-15mv noise and post modification it's down to 3-5mv.
We're more interested in the after than the before, but 10-15 mVpp of noise is about average across all the reports here and elsewhere. The worst case I've seen reported was 20 mV, and that was probably an outlier. And as bad as that sounds, it's not really unusual: the max sensitivity of the 6022BE is 20mV/div, so you were seeing roughly 1/2 to 3/4 of a division of noise. That's similar to many more expensive scopes; though they have much higher sensitivities, the relative noise levels are about the same.
While Vpp noise is good to know, it would also be helpful to use the FFT function to get the spectral distribution of the noise components.
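Even without the built-in FFT, a plain DFT over a captured trace is enough to see where the noise energy sits. A standard-library-only Python sketch (synthetic data here, since I don't have a capture handy; on real captures you'd use numpy.fft.rfft instead of this naive loop):

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT; returns the magnitude of each frequency bin up to
    Nyquist.  O(n^2), so only suitable for short records."""
    n = len(samples)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(samples)))
            for k in range(n // 2)]

# Synthetic "noise" trace: a single 10mV spur landing in bin 5 of a
# 64-point record.  A real trace would come from ReadHardData.
trace = [0.01 * math.sin(2 * math.pi * 5 * i / 64) for i in range(64)]

spectrum = dft_magnitudes(trace)
# Skip bin 0 (DC) when hunting for the dominant noise component.
peak_bin = max(range(1, len(spectrum)), key=lambda k: spectrum[k])
```

Multiplying the peak bin by (sample rate / record length) then gives the frequency of the dominant spur, which is usually enough to finger the DC-DC converter or USB as the culprit.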
Lastly, you enumerated 6 tweaks you made to the board, though I counted more like 8. In any event, it would be helpful to know the relative contribution of each, to maximize "bang for the buck". Not necessarily to minimize cost, but perhaps time. I suspect, and it's highly probable, that some tweaks have very minimal influence, and omitting them really wouldn't matter.
Of course, conducting such a study would require stopping to evaluate after each mod is installed. I realize that's asking a bit much, but it's just a thought for those following in your footsteps, if they'd like to make their own contributions.
Little bit more progress on the software side (see attachment)...
I have implemented two different cursor modes: one exactly like the cursor mode in the original software (Cross, Horizontal, Vertical), and one I call Interactive, where two cursors labeled 1 & 2 in little circles are drawn with the waveform; you can drag them across the waveform and they will follow the wave's Y axis.
By the way, ignore the ugly toolbar button glyphs; those are placeholders and won't be in the final version, as I'm going to make my own graphics for the toolbar.
Anyway, there's lots more work to do on it, but I thought I'd share my progress thus far for those who are interested. Thanks!