I have been spending far too much time trying to find out whether the base 50 MHz EDU version is hackable to 220 MHz / 4 GSa/s.
I appreciate it is early days, but does anybody know if that is the case?
Best regards
Petter
Your specifications point to a new R&S scope. Four channels, a 10-bit ADC and touch in that price range could be very interesting 😎
Still comparing the 1102G with the SDS1102X+
Still undecided on which one to get for home
There are a couple of pieces of information on the decodes that I can't find. (I'm sure I saw them once, but I re-read the datasheet and user manual and found nothing.)
What are the limits on baud rate / clock in the various decodes?
e.g.: according to the datasheet, UART decode on the Siglent can go up to 330.4 kbps (slow-ish).
Also, could somebody who has both of them take a picture of them side by side, maybe with a signal on screen? (Dave, could you? In theory the SDS1000X-E is the same form factor as the 1000X+?)
On the 1102G, the UART decode will do arbitrary baud rates up to 8 Mbps, and also 10 Mbps.
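As a quick back-of-the-envelope way to see why decode baud-rate limits matter, here's a minimal sketch (the rates are illustrative, not from any datasheet) of how many samples land in each UART bit at a given scope sample rate; a decoder needs at least a few samples per bit to recover the data:

```python
# Rough sanity check: how many ADC samples fall within each UART bit
# period at a given sample rate?  All values below are illustrative.

def samples_per_bit(sample_rate_hz: float, baud: float) -> float:
    """Number of samples captured per UART bit."""
    return sample_rate_hz / baud

print(samples_per_bit(1e9, 10e6))    # 1 GSa/s vs 10 Mbps -> 100.0
print(samples_per_bit(1e9, 115200))  # plenty of margin at 115200 baud
```

So even a 10 Mbps stream leaves ample samples per bit at typical real-time sample rates; the practical decode limits come from the firmware, not the acquisition.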
Siglent have a new SDS1000 model about to be released; if you aren't interested in the MSO or ARB options, it should be a better buy.
Too bad I'm interested in both.
The MSO is actually the only real reason I'm having doubts: I'm not sure whether I want a separate LA, or to add a printf in the software to decode an LCD with a parallel interface (a future hobby project).
That, and the waveform screen size (four more horizontal divisions means longer visible data streams).
Everything else is pointing towards Keysight.
Anyway, following hints in this and other topics: searching for siglent "sds1102x-e" will point at the Chinese Siglent website. Hitting translate gives a very readable Chinglish translation and gets around the location redirects.
I wouldn't be that optimistic. It is a new product from Siglent based on new hardware, so it is likely riddled with bugs and with features that are just there to tick a box on a checklist but provide no real value to the user (ERES, for example). I'm also wondering whether Siglent's new scope decodes the full memory instead of only what is on screen. All in all, it needs a really thorough test before recommending it to anyone.
How many years will it take you to understand that the whole captured length is always between the screen's left and right borders? That is how it is now, and how it has always been. If the capture length is, for example, 1.4 Mpoints (the SDS1kX decode limit), that whole length is visible on the screen. There is NOTHING off screen: the whole acquisition length is visible and decoded. What isn't on screen wasn't captured, so there is nothing there to decode. The GoodWill you market also doesn't decode anything that wasn't captured. If your GoodWill has a 1.4 M record length and the Siglent the same, both decode the same length. Is that so difficult?
That said, I am not claiming the Siglent decode is good or perfect. Everything can and should be done better before I would call anything excellent. But that is a separate matter.
Still, you keep repeating this sentence of yours endlessly, like a recorder stuck in an infinite loop, deliberately trying to create a false impression, even though the argument is not literally a lie. And I will correct it every single time I see it and have the time.
My aunt had a parrot. It had learned a bad sentence, and teaching it out of that habit was difficult. This feels just the same, except that it is your marketing habit.
Sorry, but it is you who doesn't want to understand. That is understandable, because you want the products you sell to look good, and you'd rather not see their lesser aspects. Maybe you should try using an oscilloscope to solve a tough I2C or SPI problem; you'll finally see why decoding only what is on screen is so bad. Also: can you save the event table to a text (CSV) file on a Siglent scope? On a Keysight you can, and it is a very handy feature for digging deeper into the data.
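For anyone whose scope lacks that feature, the same end result can be approximated on a PC once you have the decoded events: below is a hedged sketch (the event fields and file name are invented for illustration, not any scope's actual export format) of dumping an event table to CSV with Python's standard library:

```python
# Hedged sketch: writing a decoder event table out to CSV on a PC.
# The fields (time_s, addr, data, ack) are hypothetical placeholders;
# a real scope's event table will have its own columns.
import csv

events = [
    {"time_s": 1.20e-3, "addr": 0x50, "data": "0xA5", "ack": True},
    {"time_s": 1.45e-3, "addr": 0x50, "data": "0x3C", "ack": False},
]

with open("i2c_events.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["time_s", "addr", "data", "ack"])
    writer.writeheader()
    writer.writerows(events)
```

Once the events are in CSV, the usual tools (spreadsheets, grep, pandas) make diving deeper into the data straightforward.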
When you do debugging with decoding, do you sometimes find it useful to see the decoding output displayed while the scope is running (i.e., still performing waveform captures)? If so, then I suppose I could see a notable difference between the two approaches when debugging. But if decoding is really only useful when the scope is stopped, then I hardly see how the two approaches differ in terms of the end result. They'd obviously differ in terms of setup.
If decoding is something that would be visible while in zoom mode then even real-time debugging wouldn't really be affected (at least, I can't see how it would be). You'd just have to use a slightly different means of seeing the same thing.
Decoding while capturing isn't extremely useful, because the data changes so quickly you can't really see what is what. For starters, you'll have to trigger on a specific message (I2C address, CAN ID, etc.) to see what is going on. I usually find myself capturing a long trace and then looking at the various timing aspects (time between messages, clock-to-data edges, extra pulses and the like).
That's sort of what I was thinking. But if that's the case, then why does it matter what the real-time display is, as long as the size of your capture is sufficient to get everything of interest? The Siglent doesn't interfere with that in the slightest: what you see is all you get, so you need only set the window to cover the capture range you want to get, and you can change the timebase and positioning all you want once you've stopped the capture. You can thus see what you want to see. The only difference is how you get there.
Might want to wait until people get it in hand before condemning it that way.
Yeah, Siglent's history isn't all that great with respect to initial firmware releases, and I'm sure this new scope will have its share of bugs owing to the (presumed) change in architecture. But it's quite premature to declare that it's "likely riddled with bugs" when it's almost certain that Siglent is going to be using their current SDS1000X firmware as the basis for the firmware in this scope.
It's usually a bad idea to recommend a scope before it's been thoroughly tested, regardless of the make, don't you think?
I'm also wondering if Siglent's new scope decodes the full memory instead of what is only on screen.
Yes it does; that's one of its marketing highlights:
"1M points FFT, enhanced resolution, serial bus decoding on 14 Mpoints of raw data, measurements and math on 14 Mpoints of raw data, all of which raise the DSP ability of entry-level scopes to a new level."
It's usually a bad idea to recommend a scope before it's been thoroughly tested, regardless of the make, don't you think?
It depends greatly on the brand whether I take the specs at face value or with a large grain of salt. I'm not overly concerned about bugs, because people make mistakes and there is a lot of pressure to get a product onto the market. What matters most is whether bugs get fixed within a couple of weeks (good) or years later (very bad). Of course, the promised functionality should be there from day one. Personally, I feel better about extra functions being added later than about functions missing when I buy a piece of equipment.
Yeah, that makes a big difference, for sure. How quickly a manufacturer fixes bugs can be a moving target, however. Take Instek for instance. Apparently (see
https://www.eevblog.com/forum/testgear/opinions-on-gw-instek-scopes/msg1131121/#msg1131121) they were slow to fix at least some bugs in the GDS-2104. But your experience with the GDS-2204E was quite different, and much better.
Similarly, Siglent's bug fix rate was atrocious for the SDS-2000 (for which they were rightly much maligned), but has apparently been much better for the SDS-2000X and SDS-1000X scopes.
And even Rigol has improved their game with the DS-1054Z (though we still await the firmware they said they'd release by the end of January).
Of course the promised functionality should be there from day one. Personally I feel better about extra functions getting added later than missing functions when I buy a piece of equipment.
Which scopes had certain functionality in the specs that wasn't actually present
at all at launch? It's one thing for the functionality to be buggy (which can easily be to the point of unusability, of course), but quite another for it to not be present in the first place.
Decoding while capturing isn't extremely useful because the data changes so quickly you can't really see what is what.
Real-time decoding can definitely be useful sometimes, mostly when you're using it for debug output, but also for things like watching real-time data from I2C peripherals such as accelerometers, and for spotting glitches and issues that happen intermittently.
From many years of experience using (hardware) serial decode I'd say that any decode that noticeably slows down screen update is at least very annoying and detrimental to productivity.
It's different use cases. Like you, I spend more time with larger real-time systems and want to catch those odd cases. Setting serial triggers on specific data patterns which shouldn't exist, but which appear to be triggering something on the slaves (or causing errors), quickly confirms the behaviour without having to capture huge sequences of data and hunt through them.
It may not be practical to set up a trigger when you don't know exactly what's happening. If you have a fast real-time update, you can often spot errors as they happen rather than having to wade through a long capture.
I guess it greatly depends on how you work and how you check what is on a bus. I use decoding primarily for checking timing (data integrity) and for hunting rare events. Once the timing is OK, I'm not looking at the decode to see the data; I do that at a higher level, because that usually involves firmware which can check the messages and print an ERROR when a message is bad.
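As an illustration of that higher-level checking, here's a minimal sketch; the frame format (a payload followed by a modulo-256 checksum byte) is invented for the example and isn't any particular protocol:

```python
# Minimal sketch of checking messages "at a higher level": a script (or
# firmware) validates each frame and prints an ERROR line instead of
# someone eyeballing decoded waveforms.  Frame format is made up:
# N payload bytes followed by one modulo-256 sum-of-payload checksum.

def check_frame(frame: bytes) -> bool:
    """Return True if the trailing checksum byte matches the payload."""
    payload, checksum = frame[:-1], frame[-1]
    return sum(payload) % 256 == checksum

frames = [b"\x01\x02\x03\x06", b"\x01\x02\x03\x07"]  # second one is bad
for i, frame in enumerate(frames):
    if not check_frame(frame):
        print(f"ERROR: bad checksum in frame {i}")  # prints for frame 1
```

The point is simply that once timing is verified on the scope, automated pass/fail checks on the data scale much better than reading decode overlays.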
The last dozen or more posts have been with Siglent as the topic of discussion in a KS thread.
Some might say: get a room......maybe this one ?
https://www.eevblog.com/forum/testgear/siglent-sds1000x-series-oscilloscopes/
Well, Dave did raise the subject of the Siglent in this thread ...
I don't like to post replies in a thread that differs from that which the message I'm responding to lives in, because doing so means that the participants of the original thread might well not see the followup discussion at all, whether or not they're actually interested in it. It breaks the flow of the conversation, too. And finally, the context in the thread one is putting the reply into just isn't the same. So, lots of downsides without a whole lot of upsides.