I owed you a reply about how restoring the long timebase divisions, plus adding the measurements feature, would be nice for automotive use. This video from another low-cost scope shows it nicely.
We basically need more screen time from fewer samples while still seeing the min/max over the period. Typical events might be cranking a car and capturing the current for a few seconds for a relative compression test, or using a pressure transducer to diagnose engine component failure or excessive backpressure.
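The "fewer samples but still see min/max over the period" requirement above is classically solved with peak-detect decimation: each display column keeps the minimum and maximum of the samples it covers, so short spikes survive the reduction. A minimal sketch of the idea (hypothetical, not the 1013D's actual display code):

```c
#include <stddef.h>
#include <stdint.h>

// Peak-detect decimation: reduce a long capture of n samples to 'bins'
// display columns while preserving short spikes, by keeping the min and
// max of each bin. Assumes n is a multiple of bins.
void minmax_decimate(const uint8_t *in, size_t n,
                     uint8_t *out_min, uint8_t *out_max, size_t bins)
{
    size_t per_bin = n / bins;                 // samples collapsed per column
    for (size_t b = 0; b < bins; b++) {
        uint8_t lo = 255, hi = 0;
        for (size_t i = b * per_bin; i < (b + 1) * per_bin; i++) {
            if (in[i] < lo) lo = in[i];
            if (in[i] > hi) hi = in[i];
        }
        out_min[b] = lo;                       // bottom of the drawn column
        out_max[b] = hi;                       // top of the drawn column
    }
}
```

Drawing a vertical line from `out_min[b]` to `out_max[b]` per column gives the classic peak-detect long-timebase display.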
Here is a playlist of the application (hscope automotive module)
Tokar: try 0.019d?
Now it is possible to scroll, advancing by 2 screens (350+750+350 display points). By removing the restriction it will be possible to move through all 3000 samples over the entire memory, but I still have to work out how the recalculation for display is handled.
A long timebase is possible, but it needs more code changes.
First version Fnirsi-1013D (Altera FPGA), firmware v0.019d: the displayed voltage on Ch1 and Ch2 depends on the frequency for F > 4 MHz.
That is a hardware problem; check it on the original firmware.
Hello, I noticed one nuance: when setting the display brightness between 0 and 50% there is a squeak inside. Is that normal?
Recovery firmware
For when something doesn't work, or works strangely.
It squeaks even on the original firmware; the more discharged the battery, the louder it gets.
I'm sure you will hear that with the original firmware too. The brightness setting is done with pulse-width modulation of the display's backlight LED driver. The driver chip used in the device has a maximum input frequency of 1 kHz, and the FPGA outputs ~800 Hz for this. The coil used in the boost converter makes this noise.
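For illustration, the brightness control described above amounts to mapping a 0-100% setting onto the on-time of an ~800 Hz PWM signal; since 800 Hz sits squarely in the audible band, the boost coil sings at the PWM rate. The tick rate and names below are assumptions for the sketch, not the real FPGA code:

```c
// Hypothetical sketch of the backlight brightness control: an ~800 Hz PWM
// (the driver chip accepts up to ~1 kHz) whose duty cycle sets brightness.
#define PWM_TICK_HZ 1000000u                  /* assumed 1 MHz counter clock */
#define PWM_PERIOD  (PWM_TICK_HZ / 800u)      /* 800 Hz -> 1250 ticks/cycle  */

// Map a 0-100% brightness setting to the PWM compare (on-time) value.
unsigned brightness_to_compare(unsigned percent)
{
    if (percent > 100)
        percent = 100;                        // clamp out-of-range settings
    return (PWM_PERIOD * percent) / 100;      // on-time in counter ticks
}
```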
v0019d. Device after calibration and autoset, sensitivity 500mV/div. We increase the DC voltage until the limit is reached and press autoset. And, lo and behold, the signal (2.4V at 500mV/div) is expanded to full screen without limitation (0:29). Settling time after pressing autoset is ~5 s.
We increase the signal to the limit (5V) and press autoset; the range switches to 1V/div, but the limitation remains: still the same 4.5 cells as reported earlier (#1697). We increase the voltage to 7V and press autoset; the range changes to 2.5V/div and the limit disappears (1:50).
Then I gradually increase the voltage, and at 16V I get a transition to 5V/div (3:39), but the "0" marker stays in the center of the screen. I increase the voltage again, checking the status each time by pressing autoset. There is a limitation at 22V, but no offset of the "0" marker (5:20). At 31V we get a marker shift and, as a result, the signal limitation is removed (7:30).
https://youtu.be/2mIM5b9XkMI
Moving the signal through the entire memory: it is necessary to reduce the sampling rate in the menu if you want to capture a larger time segment.
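In other words, the captured time span is just sample count divided by sample rate, so halving the rate doubles the window the same buffer covers. A trivial helper (hypothetical, just to show the arithmetic):

```c
// Time span covered by one capture buffer: samples / sample rate.
// E.g. a 3000-sample buffer at 1 kSa/s covers 3 seconds.
double capture_window_s(unsigned samples, double sample_rate_hz)
{
    return (double)samples / sample_rate_hz;
}
```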
beta v0.019e
Hi! Looks like a small bug with the setting of pin 14 of U1 (output from CPU to LCD): very small amplitude on LCD_D12 (1V, plus 50Hz pickup from the environment). On the original FW there is the full 3V amplitude. Because of it there is some graphical noise in the middle of the screen (the lower bit of the red color is distorted; it is visible on the ACQ menu). Maybe this pin is not programmed as an output?
Also because of it the startup logo does not have a quite smooth gradient.
Some pictures. I tested this pin with another oscilloscope.
Hi _AVP_, welcome to the forum.
Good catch. I did notice something with the display while I did the development, but never looked into it.
And you are absolutely correct that there is a bug. So thanks for mentioning it here.
@Atlan, since you are working on it with new releases, it is easiest if you make the small change. In the function sys_init_display, port configuration register 1 needs 0x22272222 instead of 0x22222227. See the code part below. I have not tested it, but it should do the trick. The comment in the source was and is correct, but the actual value was not.
void sys_init_display(uint16 xsize, uint16 ysize, uint16 *address)
{
  int32 time;
  uint32 *ptr = DISPLAY_CONFIG_ADDRESS;
  uint32 checksum = 0;
  uint32 index;

  //Setup the used port D pins for LCD
  *PORTD_CFG0_REG = 0x22222227;    //PD00 is not used for the display
  *PORTD_CFG1_REG = 0x22272222;    //PD12 is not used for the display
  *PORTD_CFG2_REG = 0x00222222;    //Only 22 pins for port D
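For reference, each of these port-config registers packs eight 4-bit function-select fields, one per pin (0x2 selecting the LCD function and 0x7 apparently marking the unused pin, going by the values above). So PD12 lands in nibble 4 of register 1, which is why the fix is 0x22272222. A small helper (hypothetical, not from the firmware) shows the mapping:

```c
#include <stdint.h>

// Set the 4-bit function-select field for one pin inside a port-config
// register value. Each register covers 8 pins; pin 12 falls into field
// 12 % 8 = 4 of config register 1.
static inline uint32_t set_pin_cfg(uint32_t reg, unsigned pin, unsigned func)
{
    unsigned shift = (pin % 8) * 4;                    // nibble position
    return (reg & ~(0xFu << shift)) | ((func & 0xFu) << shift);
}
```

For example, disabling PD12 in an all-LCD register: `set_pin_cfg(0x22222222u, 12, 7)` yields `0x22272222`, the corrected value above.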
0.019f
mistake
0.019g is ok
I hope there is no similar error when reading data from the FPGA.
I doubt it, but you never know. I did make more of these copy-paste-and-forget-to-edit errors.
I still can't get rid of the noise on the AD converter. I tried a 10-bit converter and the noise is almost the same... I tried splitting the power supply for the analog and digital parts at 8 bit and it didn't help.
Addition to message #1707.
v0019d. I apply a complex AC + DC signal and press autoset (0:50). This results in very slow signal processing: if a DC-only signal is processed in 5 seconds, the complex signal takes ~30-40 seconds. During processing the image on the screen freezes, clearly visible by the clock in the upper right corner. I reduce the AC signal (2:15) and press autoset. The DC signal spans the entire screen, the AC goes beyond the screen, but no limitation is shown (3:10). I increased the AC value (3:25) and the line at the top turned red: limitation. I turned off the AC signal completely and pressed autoset (3:44). The result is not satisfactory: 2.5V/div with Uin = 12V DC, which does not correspond to the span of 8 cells (4:25). I reduce the DC signal to a minimum and there is no reaction on the screen. I set it to 0.5V DC and press autoset (5:25); the readings are correct. I increase the signal to 12V, press autoset (5:53, 6:01), and at 2.5V/div I get a swing of 4.5 cells and a limit. Watching an unrestricted signal (5V/div) (6:25), everything is fine.
https://youtu.be/UU2kYDJEzzc
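At its core, the range-selection part of autoset boils down to measuring the signal's peak-to-peak span and picking the smallest volts/div that fits it into the screen's 8 vertical divisions. A naive sketch of that step (hypothetical, not the firmware's actual autoset; the range table just mirrors the menu steps mentioned in the reports above):

```c
#include <stddef.h>
#include <stdint.h>

// Assumed volts/div menu steps; treat these values as an illustration.
static const float volts_per_div[] = { 0.05f, 0.1f, 0.25f, 0.5f, 1.0f, 2.5f, 5.0f };
#define NUM_RANGES (sizeof volts_per_div / sizeof volts_per_div[0])

// Pick the smallest range whose 8 divisions cover the measured span.
// volts_per_lsb converts raw 8-bit ADC codes to volts at the input.
size_t autoset_range(const uint8_t *samples, size_t n, float volts_per_lsb)
{
    uint8_t lo = 255, hi = 0;
    for (size_t i = 0; i < n; i++) {           // find raw min and max
        if (samples[i] < lo) lo = samples[i];
        if (samples[i] > hi) hi = samples[i];
    }
    float span = (float)(hi - lo) * volts_per_lsb;  // peak-to-peak voltage
    for (size_t r = 0; r < NUM_RANGES; r++) {
        if (span <= 8.0f * volts_per_div[r])   // fits in 8 divisions?
            return r;
    }
    return NUM_RANGES - 1;                     // clipped: use highest range
}
```

A real autoset also has to handle the DC offset (the "0" marker position) and the AC component, which is exactly where the reports above show it struggling.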
DC mode, yes, it is far from functional. I will need to come up with some proper solution. It's not fun looking at it all the time; sometimes you need to rest and find a solution. Besides, I also have to go to work.
But thanks for the information. I appreciate it.
Has anyone checked the timestamps of the files? Do the creation and modification times match under Linux and Windows?
The ADCs used might well be B-stock clones of the AD9288 they are assumed to be. Maybe they are not even 100MHz parts but 80MHz or less, and are being overclocked. Furthermore, the whole design is noisy: there is no separation between analog and digital ground, the power supply setup for the analog part is crap, etc.
It is a nice platform to play with and learn (scope) software development, but that is it.
They have put different FPGAs in there, and even the PCB has at least 3 versions. So why not change the power supply and GND layout too?
What is the idea behind starting the calibration of the input dividers?
In practice it only needs to be done once. Should the fixed firmware just recall this calibration? Or should an item be added to the menu for resetting the basic settings?
The calibration on the new firmware is twofold:
- Set the DC offset to obtain a centered reading without a probe connected. (A grounded input would be better.)
- Find the difference between the two ADCs that make up a single channel, so that an equal reading can be made on the interleaved samples.
I tried to make it better than the original, to always have the 3K+ samples instead of doing what the original does. For 100MSa/s and below the original only uses a single ADC per channel and reads just 1500 samples; only on the ranges that need 200MSa/s does it use the second ADC and get the extra samples.
The equalization in the original is done by adding or subtracting something from the second ADC's data, which costs even more of the total resolution. My version splits the correction and modifies the data of each ADC by half the difference of the averages.
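The split-correction idea above can be sketched like this, assuming even buffer indices come from one ADC and odd indices from the other (names and layout are assumptions, not the actual firmware):

```c
#include <stddef.h>

// Interleave equalization sketch: instead of pushing the full offset onto
// one ADC (losing headroom on that side), measure the average difference
// between the two ADCs and move each one half way toward the common mean.
// Even indices = ADC A, odd indices = ADC B; n must be even and >= 2.
void equalize_interleaved(int *buf, size_t n)
{
    long sum_a = 0, sum_b = 0;
    long half = (long)(n / 2);                 // samples per ADC
    for (size_t i = 0; i < n; i += 2) {
        sum_a += buf[i];
        sum_b += buf[i + 1];
    }
    long diff = (sum_b / half) - (sum_a / half);   // avg(B) - avg(A)
    for (size_t i = 0; i < n; i += 2) {
        buf[i]     += (int)(diff / 2);             // lift A half the gap
        buf[i + 1] -= (int)(diff - diff / 2);      // drop B the remainder
    }
}
```

Splitting the correction this way keeps the worst-case shift on either ADC to half the gap, instead of the full gap on one of them.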
The calibration has nothing to do with accuracy; that is why it is called base line calibration.
That's OK. This is about the calibration of the input dividers. Currently 7 constants are fixed in the code. Otherwise it would be a completely different calibration, connecting precise voltages and adjusting the conversion constants.
Ah, yes I remember these. Those values are from the original firmware. Probably not worth spending a lot of time on trying to improve on that.
Menu for calibration of input dividers