Siglent SDS2000 new V2 Firmware

nctnico:

--- Quote from: Performa01 on December 31, 2015, 08:31:00 am ---The list display only shows what’s on the screen, which is a good thing given the poor response of the select knob. The problem with this is that the correlation to the total captured data is lost, as the list now starts with number one again and we cannot know how this relates to the numbers in the full list.

An alternative (and probably better) approach would be to always have the full list and we could select a value in it and the corresponding waveform gets automatically centred on the screen. This way we wouldn’t need the horizontal position control at all.

--- End quote ---
The Agilent DSO7000 works this way which is a real blessing when looking for a malformed packet. The waveform tracks the selected message from the list (which shows all the messages in the memory and not just what is on screen).

All in all the decoding on the SDS2000 is still crappy (as I feared). How on earth are you going to find a missing/malformed bit or message? BTW I pointed Siglent to YouTube videos showing how decoding should work back in early 2015, but apparently they are not willing to learn!

Performa01:

--- Quote from: rf-loop on December 30, 2015, 08:17:48 am ---There needs to be one more setting: lock the trigger position to the current user-adjusted position on the screen, independent of the t/div scale, and the same function needs to include swapping between the delayed time position and this trigger position... it needs just one button, or it could even use a long push and short push, or... this needs careful design for good usability.

--- End quote ---

I totally agree.

The PicoScope I've mentioned before certainly has both settings at the same time - pre-trigger range (trigger position) and trigger delay. The latter I've never used so far, because thankfully I was always able to set up a trigger condition closely related to the signal I wanted to see. So I thought trigger delay would generally not see much use (except maybe analogue video signals when no dedicated video line/field triggers are available, but that should be a real niche application nowadays), hence I could make do with just having to switch between the two in the horizontal menu.

I would even be happy with a soft button in the horizontal menu, where we could set the trigger delay by means of a (hopefully properly working by then) select knob, just like the holdoff value in the trigger menu. But the main use of the horizontal position control should be to set the screen position of the trigger point, that would not change with the timebase.

Performa01:
Serial Trigger (DSO) - UART

So far I’ve used a conventional edge trigger for the review of the serial UART protocol decoding. Now let’s see how well the dedicated serial UART trigger works.

The setup appears to be independent of the serial decoding, so we need to repeat all the settings here, such as channels, trigger levels, idle level, bit order, plus the specific UART protocol settings (number of bits, stop bits, parity, baudrate).
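Incidentally, the settings the trigger menu asks for are exactly what any software UART decoder needs. A minimal sketch of the idea (hypothetical helper, assuming 8N1, LSB first, idle high, with the logic levels already sampled once per bit time, aligned at the start bit):

```python
def decode_8n1_frame(levels):
    """Decode one 8N1 UART frame (idle high, LSB first) from a list of
    logic levels sampled once per bit time, starting at the start bit."""
    if levels[0] != 0:
        raise ValueError("no start bit")
    byte = 0
    for i, bit in enumerate(levels[1:9]):   # 8 data bits, LSB first
        byte |= bit << i
    if levels[9] != 1:                      # stop bit must be high
        raise ValueError("framing error: stop bit missing")
    return byte
```

Feeding it the ten bit levels of one frame returns the data byte, or raises on a framing error – roughly the kind of check behind the scope’s error trigger conditions.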

We can trigger on several conditions, for instance on the start of a frame (UART_trig_start)



Of course this will trigger on the start of any frame, so we get just a random portion of our data stream. We can also trigger on the stop condition (UART_trig_stop)



That doesn’t make much of a difference, other than that the trigger fires one bit clock earlier, i.e. on the stop bit, which is immediately followed by the start bit of the next frame within a packet.

Apropos packets: in the previous screenshot we can see the transition between two packets, as I deliberately programmed a little break between them – the signal stays idle for some 240µs on the transition from 0xef to 0xf0.


We can trigger on specific data as well – unfortunately only on a single data item, not on a complete message. The following screenshots demonstrate this by triggering on

0x00  = the start of the first packet
0x1f = the end of the 2nd packet
0x20 = the start of the 3rd packet

(UART_trig_data_0x00, UART_trig_data_0x1f, UART_trig_data_0x20)








There is also a parity error trigger, and of course it never fired in my test setup, since parity isn’t even enabled. I didn’t test this any further, as I couldn’t be bothered to program the UART in my micro in a way that would generate sporadic parity errors – I’m just willing to believe that this trigger condition works, provided parity checking works in the first place. I haven’t tested that either, but then again, most UART connections only run over short distances nowadays, so we don’t usually configure them with parity – just like for SPI busses, where parity checking isn’t even an option.


Conclusion

The serial UART trigger works just fine, though it would be nice to be able to trigger on more than just one single data item. Setting up a list of, say, up to 8 data items that have to occur in the data stream in sequence in order to fire the trigger would certainly be helpful in finding a particular message on a busy data connection.
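Such a sequence trigger would amount to nothing more than a sliding-window compare over the decoded byte stream. A sketch of the idea (hypothetical function, not anything the scope offers):

```python
def find_sequence(stream, pattern):
    """Return the index where `pattern` (e.g. up to 8 data items) first
    occurs as a contiguous run in the decoded byte stream, or -1."""
    n = len(pattern)
    for i in range(len(stream) - n + 1):
        if stream[i:i + n] == pattern:
            return i
    return -1
```

A hardware implementation would do the same thing with a small shift register and comparator, firing the trigger as soon as the window matches.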

Performa01:
Serial Decode (DSO) - SPI

SPI is a synchronous serial transmission standard, so we would expect decoding to be a fair bit easier than the asynchronous ones, like UART, where the bit clock has to be recovered from the data stream in order to decode it.

There are 4 SPI modes for data setup and sample at any combination of clock polarity (which determines the idle level) and clock phase (leading or trailing edge).

I’ve set up the most common SPI mode 0, which means clock idle level is low, hence positive clock polarity and data is set up on the falling (trailing) edge and sampled on the rising (leading) edge of the clock signal.
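In software terms, mode 0 decoding boils down to sampling the data line on every rising clock edge while the select line is asserted. A minimal sketch (hypothetical helper, MSB first, assuming all three signals come from the same sample clock):

```python
def decode_spi_mode0(sck, mosi, cs, word_len=8):
    """Decode MOSI words in SPI mode 0: sample on the rising SCK edge
    while CS is asserted (low), MSB first. Inputs are equal-length
    lists of logic levels."""
    words, shift, nbits, prev_sck = [], 0, 0, 0
    for clk, dat, sel in zip(sck, mosi, cs):
        if sel:                      # CS high: bus idle, reset state
            shift, nbits, prev_sck = 0, 0, clk
            continue
        if clk and not prev_sck:     # rising edge: sample the data line
            shift = (shift << 1) | dat
            nbits += 1
            if nbits == word_len:
                words.append(shift)
                shift, nbits = 0, 0
        prev_sck = clk
    return words
```

Note that resetting the shift register on CS going inactive is exactly what resynchronises the decoder at packet boundaries – which is why the missing clock polarity setting on the SDS2000 is more of a nuisance than a showstopper, as long as CS is used.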

The data length can be set to anything from 4 to 96 bits, but 8 bits is the most common configuration, and I’ve used this one too.

Interestingly, the SPI decoder of the SDS2000 doesn’t provide a setting for the clock polarity (or idle level), which might make it a bit harder for the decoder to get in sync at the start of a packet. At least there is a toggle for selecting the clock edge used for sampling, which distinguishes between SPI modes 0/3 (rising) and 1/2 (falling). So I chose the rising edge (SPI_Decode_Setup_CLK)




As my little micro provides almost no hardware support for SPI, I stuck with a clock of only 100kHz, but I have no doubt that speed doesn’t matter for this test, as long as the sample rate stays about an order of magnitude above the SPI clock.

In contrast to UART, data channels can be disabled in the SPI decoder menu, even though the menu item says ‘CLOSE’ whereas I would think ‘Disable’ would be more descriptive (SPI_Decode_Setup_MISO)




For the initial tests I use negative edge triggering on the slave select signal, which is labeled ‘CS’ in the SDS2000 decoder menu and the ‘~’ (C language notation for bit-wise inversion) is used in the menu to describe an inverted signal (SPI_Decode_Setup_CS)




Complaint: While the SDS2000 keeps most settings after a restart, it does not preserve the threshold levels for the various signals. In my setup, threshold levels reverted back to 16V after every restart and I had to set them again to 2.5V for each individual signal (SCK, MOSI, CS).

At a timebase setting of 100µs/div we can see a complete packet with 16 data items. Decoding appears to work and only the MOSI line of decoded values is displayed at the bottom of the screen, just as expected. The data looks correct; only the first value (0x50) is already a little too wide to fit the available space, and the incomplete display is indicated by a red dot.

I’ve set the trigger point to about 7% of the screen width, so the 2nd list entry corresponds to the first data after the trigger point, with timestamp 0.00µs (SPI_Decode_100us_7%)



The first decoded data item at -98.00µs is faulty (it should be 0x4f), as is to be expected, since the packet is truncated. But why on earth is it displayed in the first place? The decoder knows that we are looking for 8-bit data, so why display something that looks like a valid decoding instead of something like ‘6 error bits’, as other protocol decoders would? Even displaying nothing at all would be better than filling the list with invalid data without any hint that it isn’t valid.


What happens if I run the test at 5ms/div in order to capture some 50 packets (800 data bytes) at once? With the trigger point set just 200µs apart from the left screen edge, we get the following picture (SPI_Decode_5ms_0%_1)



The first data item after the trigger is now the 4th item in the list and data appear to be correct here, but everything else? Oh boy, what a disaster!

First, look at the timestamps in the list.

We have two entries at -100.00µs and two at 0.00µs, which might be a hint that the actual timestamp resolution is worse than 100µs, even though the display suggests a resolution of 10ns and the current sample rate of 20MSa/s would allow 50ns after all.
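The arithmetic behind that 50ns figure is simply the sample period:

```python
sample_rate = 20e6            # 20 MSa/s, as reported on screen
resolution = 1 / sample_rate  # best timestamp resolution the capture allows
print(f"{resolution * 1e9:.0f} ns")  # prints "50 ns"
```

So the acquisition itself supports 50ns timestamps – duplicate entries 100µs apart point to the decoder, not the capture.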

Then what happens after the 4th entry? The timestamp suddenly jumps to the maximum value and reads 69.8ms for all 860 remaining entries in the list!

And what about the decoded data line at the bottom of the screen? It says just ‘0xe0’ for a total of 864 bytes on the screen!

Scrolling through the list, the data is basically correct, but shows an unjustified 0x00 decoding between packets – except at the transition where the data counter rolls over from 0xFF to 0x00. So it seems the extra byte doesn’t appear if the value following the short break is actually zero. To illustrate this, I show screenshot examples for two transitions (SPI_Decode_5ms_0%_17, SPI_Decode_5ms_0%_34)






Just for completeness I also show the end of the list, which contains a total of 864 entries. The last complete packet ends at line 859 with data 0x2F, then comes the already familiar extra 0x00 decoding, followed by 4 items that are just garbage. It almost looks like the decoder keeps working a little beyond the end of the acquisition buffer, on data that has never actually been captured (SPI_Decode_5ms_0%_858)




All displays show just some random number for the bottom line decoding.

Can we try to find the problem? Let’s just lower the timebase (in stop mode, as I’ve already moved over to single-shot capture) to 100µs/div in order to see a complete packet, just as we did before – but all the errors we’ve seen so far are there again (SPI_Decode_5ms_100us_D600us_1)



The timestamps now jump to just 1.30ms, but the data is correct except for the extra zero byte, which clearly isn’t there when looking at the data traces on the screen. There are double lines because of the peak-detect mode, but even though it might not look pretty, it has absolutely no impact on the decoding. Just to make sure, I’ve tried it in normal acquisition mode as well, with no different results. The bottom display line keeps saying ‘0xe0’, and of course the multi-selection, shading and artefacts in the list are there, just as already mentioned in the UART review.

Just for completeness I’ll also show the end of the list, where we see another extra zero decoding, once again not warranted by the signal traces as they appear on the screen. Neither at the start nor at the end of the packet is the MOSI line ever low, let alone for a full 8 clock periods (SPI_Decode_5ms_100us_D600us_13)




Even though no further tests would be necessary at this point, I still tried zoom mode. I’ve seen it kinda working before, but right now it is a complete mess. The decoding line at the bottom of the screen shows ‘0x00’ and, for a change, the list is in accordance with that (SPI_Decode_5ms_Zoom_100us)




It’s the same for serial SPI triggering, which also doesn’t work.

I could not be bothered to post a dedicated review of this, so I’ll just point out that the serial SPI trigger on 8-bit data only works for the upper 4 bits; the lower nibble is always treated as zero, no matter what the actual setting is. And within the trigger word, the don’t-care setting (‘X’) is ignored and treated as zero as well. So this is completely unusable too.
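One plausible model of the broken comparison I observed, next to what an 8-bit data trigger should do – both functions are hypothetical, purely to illustrate the behaviour:

```python
def buggy_spi_trigger_match(data, setting, dont_care_mask=0):
    """Model of the behaviour observed on the SDS2000: the lower nibble
    of the trigger word is forced to zero and the don't-care mask
    ('X' bits) is ignored entirely."""
    return data == (setting & 0xF0)

def correct_spi_trigger_match(data, setting, dont_care_mask=0):
    """What an 8-bit data trigger should do: compare only the bits
    not marked as don't care."""
    care = 0xFF & ~dont_care_mask
    return (data & care) == (setting & care)
```

With the buggy variant, setting the trigger to 0xA5 fires on 0xA0 but never on 0xA5 itself – which matches what I saw on the scope.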


Apart from the fact that the scope should never behave like this, no matter what signals are thrown at it, the question remains whether there’s something wrong with my signal. I think the fact that the list display is basically correct, while the timestamps and the decoding line at the bottom of the screen are not, is already a strong indication that there’s nothing wrong with the signals after all.

Just to be sure, I hooked up an LA with a serial protocol decoder in order to verify that everything is fine with the signal. And the results indicate that it is indeed! All the data is correct and there are no extra zero data items between packets. And this is with a sample rate of just 1MHz!

This decoder also displays the first (truncated) packet that is not in sync, but it clearly indicates that 2 bits were missing from the last data item, so we know the entire packet is invalid (SPI_Data_Verification)




Conclusion:

SPI decoding is totally unusable and I cannot even begin to list all the bugs I’ve found during my tests. Well, it’s more than just bugs, it simply doesn’t work. SPI trigger doesn’t work either.
This is clearly a case where nobody could claim this part of the scope has ever been tested, other than maybe with one single message at one fixed timebase setting – just like my very first scenario.

It is very bad practice to include totally untested pieces of firmware in an official release.

nctnico:
return return return return :box:
