> I did something similar at LeCroy booths. Take the timebase knob, spin it fast left, fast right, fast left, and repeat that like 10 times. Then walk away while the scope goes nuts zooming in and out and displaying that dreaded "triggering..." with the slowly crawling progress bar at the bottom.

Ah, the good ol' "let's do computation in the UI event handler" antipattern. Causes the events to be queued, slowing everything down. Drives me crazy.
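The knob-spin pile-up above is exactly what event coalescing avoids. A minimal Python sketch of the idea (all names here are invented for illustration): the handler only enqueues and returns, and a worker drains a burst of events down to the newest one before doing any heavy work.

```python
import queue

events = queue.Queue()  # UI thread pushes; a worker thread drains

def on_knob_turn(position):
    """UI event handler: record the event and return immediately -- no computation here."""
    events.put(position)

def coalesce(q):
    """Drain a burst of queued knob events, keeping only the newest position."""
    position = q.get()  # block until at least one event arrives
    while True:
        try:
            position = q.get_nowait()  # a newer event supersedes older ones
        except queue.Empty:
            break
    return position
```

A worker thread would then loop on something like `recompute_display(coalesce(events))`, so ten fast spins cost one recomputation instead of ten queued ones.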
> This is not new, and was well known and common in the 8-bit era already, especially games. It saddens me to hear that even LeCroy fell into this easily avoided UI trap...

The problem is most likely much deeper and may even be intentional. What I have seen on Windows-based equipment from several brands is that the CPU load is low even when the equipment is doing heavy processing tasks (like math or analysis). It looks more like they are trying to keep the CPU cool on purpose. The slow UI response could also be a hold-off mechanism.
If I take my old HP Infiniium running a 150 MHz Pentium 1 with Windows 95 and 4 megs of RAM and spin that timebase knob, the screen refresh is instantaneous, like an analog scope.

Those LeCroy machines have Windows 7, 8-core CPUs with hyperthreading and 64 gigs of RAM. Give the timebase knob a couple of fast spins and you will hear the CPU fan go into overload and the thing crawl slower than a drunk snail. Those machines are not oscilloscopes; they are very fast digitizers, and the rest is done on the PC. Rohde & Schwarz, Tek and Hewlettagilentkeypackardsight do all that stuff in custom hardware; the OS is only there for the GUI: a well-known platform with a friendly face for the user.
PaintShop Pro after it was borged by Corel: the installer became crap, spyware, bloatware, now totally unusable. The same happened to Micrografx Designer. Corel borged it and ran it into the ground.
We trialed a $20k Tek scope for a while, and the interface would periodically freeze for 1-2 seconds while the scope was doing... something in the background. That's just unacceptable; the scope shouldn't do that. It speaks to poor software design: a lack of parallelism and prioritization. Anything taking more than about 100 ms immediately feels 'laggy' to users. It wasn't the only reason we discarded that scope, but it was a significant one.
I still use PSP7 regularly... very lightweight, still easier to use than GIMP, and clearly made with low RAM and slow CPUs in mind; feels like such a flex to preview a Gaussian filter on a 10Mpx image.
Tim
Even my Tek TDS460 has -- besides the annoyingly slow-to-respond menus -- an occasional bug which, I suspect, goes something like this: user input is interrupt-triggered, and the front-panel encoders go crunchy sometimes, so just sliding a cursor around can freeze the UI, requiring a power cycle.
One of the real tricks is having a way to cancel and/or restart the heavy calculation. (An atomic flag and an early return from the computation function work well.) In an oscilloscope, this is not really an option, because -- assuming I have the correct picture of how they work -- the "heavy computation" is actually communication with the dedicated capture hardware, and setting that up. I can well imagine that a hardware FPGA/ASIC designer without any user interface experience would design this communication as a full setup information package, instead of a per-variable/feature one, because the former is just so much simpler and more robust. But it also means that the UI must be very careful about when it decides to send such setup packages: delay too long, and the UI will be sluggish; queue the changes, and you get the "twirl-a-knob-and-it-will-freeze".
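The flag-and-early-return trick can be sketched like this (a Python stand-in; `heavy_computation` and its chunked averaging are invented placeholders for the real processing):

```python
import threading

def heavy_computation(samples, cancel):
    """Process samples in chunks, bailing out early if `cancel` is set."""
    result = []
    for start in range(0, len(samples), 1024):
        if cancel.is_set():      # check the flag between chunks
            return None          # early return: abandon the now-stale work
        chunk = samples[start:start + 1024]
        result.append(sum(chunk) / len(chunk))  # stand-in for real DSP
    return result
```

The UI thread sets the flag as soon as a new knob event arrives, then restarts the computation with the fresh parameters.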
It is not possible to do that with software.
Linux is simultaneously a good and bad thing. It's as good as the price we pay, because the "support" is "piss off, you should know this, we learnt and so now must you, and learn all the new acronyms and syntax which some autistic 'community' assumes you knew from birth, and we know you have a busy life, but spend a month trawling sourceforge, then compile... rinse and repeat"
> It is not possible to do that with software.

Not with dumb software and dumb data buffering schemes, no.
But let's say you have an 8-bit ADC and 64-byte cachelines, and as you receive the data, you construct a parallel lookup of min-max values, filling one summary cacheline per 32 data cachelines (2048 samples). You've now dropped the memory bandwidth required to find the min-max for any range to 1/32nd, except that the start and end points have a granularity of 64 samples. (So handle those edge cachelines separately, I guess.)
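A rough sketch of that summary scheme in Python (names invented; 64 samples stand in for one cacheline): build one (min, max) pair per 64-sample block, then answer range queries from the summary for fully covered blocks and from raw samples at the ragged edges.

```python
BLOCK = 64  # samples per "cacheline" (8-bit samples in 64-byte lines)

def build_summary(samples):
    """One (min, max) pair per 64-sample block."""
    return [(min(samples[i:i + BLOCK]), max(samples[i:i + BLOCK]))
            for i in range(0, len(samples), BLOCK)]

def range_minmax(samples, summary, start, end):
    """Min/max over samples[start:end], touching fully covered blocks via the summary only."""
    lo, hi = 255, 0
    first_full = -(-start // BLOCK)  # ceil: first fully covered block
    last_full = end // BLOCK         # one past the last fully covered block
    for b in range(first_full, last_full):
        lo, hi = min(lo, summary[b][0]), max(hi, summary[b][1])
    # the ragged edges still read raw samples, at up to 64-sample cost each
    for edge in (samples[start:min(first_full * BLOCK, end)],
                 samples[max(last_full * BLOCK, start):end]):
        for s in edge:
            lo, hi = min(lo, s), max(hi, s)
    return lo, hi
```

For a long range, almost all the work reads the summary instead of the raw buffer, which is where the bandwidth saving comes from.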
Similarly, if you can reorder the received data so that you get the cachelines across waveforms, you can construct the display from left to right and use all sorts of clever spanning techniques. Even antialiased lines boil down to lots and lots of additions, plus a few not-too-large lookup tables (that depend on the time base and such).
Using an ARM or Intel/AMD core for that kind of grunt work makes no sense. Those cores are slow at that sort of stuff, and you're paying for a lot of silicon you don't use. Instead, stick a DSP or similar between the acquire buffer and the UI processor, so that the UI processor computes and sets up the lookup tables and memory transfers, and the DSP just spits out intensity slices (say, 5-bit full-height pixel columns) that the UI processor then composes into the display.
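As a toy version of those "intensity slices" (Python; a real DSP would emit packed bitfields, and all names here are illustrative): map each pixel column to its slice of samples and emit the vertical min/max span to draw.

```python
def render_columns(samples, width, height, sample_max=255):
    """For each of `width` pixel columns, return the (top, bottom) pixel rows
    spanning the min..max of the samples that fall under that column."""
    per_col = len(samples) // width  # samples mapped to one pixel column
    columns = []
    for x in range(width):
        chunk = samples[x * per_col:(x + 1) * per_col]
        lo, hi = min(chunk), max(chunk)
        # sample 0 -> bottom row, sample_max -> top row (row 0 is the top of the screen)
        top = (sample_max - hi) * (height - 1) // sample_max
        bottom = (sample_max - lo) * (height - 1) // sample_max
        columns.append((top, bottom))
    return columns
```

The UI processor then only composes `width` precomputed spans per frame instead of walking the whole acquisition buffer itself.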
To do this sort of stuff right, one must think of the data flow. A very similar thing really bugs me with most simulator software running on HPC clusters: they calculate, then communicate, then calculate, then communicate, and so on, instead of doing both at the same time. Why? Because it is hard to work out what data needs to be transferred for the next step when the next step is yet to be calculated. The data does need to be present before the next time step is calculated, so essentially your data transfers need to run at least one step ahead, and that means predictive and/or heuristic transfers without false negatives (you can transfer extra, but you must transfer everything that is needed), node load balancing, and so on... Just too hard for programmers who can always tell the professors to buy more and newer hardware.

> Linux is simultaneously a good and bad thing. It's as good as the price we pay, because the "support" is "piss off, you should know this, we learnt and so now must you, and learn all the new acronyms and syntax which some autistic 'community' assumes you knew from birth, and we know you have a busy life, but spend a month trawling sourceforge, then compile... rinse and repeat"

No, that's not it.
For open source communities, end users are a net negative: a cost, not a benefit. Only those who contribute back, somehow, are worth the effort of helping. What "actual 9-5 humans want, need and use" is absolutely, completely irrelevant. This is why Linux greybeards laugh at you when you say something like "you need to do X so that Linux can become as popular as Y". It is as silly to us as Insta-gran and Fakebook "influencers" demanding free food and accommodation.
As to why paid Linux end-user support is relatively hard to find, I think it is because getting such a commercial venture going is highly risky. It is relatively simple to set up Linux user support within an organization, but as a commercial service, you carry huge risks from customers who vent their disappointment at Linux not being a drop-in Windows replacement at you, ruining your reputation in the process. The risks aren't worth the gains.
I mean, I consider you, eti, a professional person. But I for sure would not like to put anyone under your ire at Linux and open source. The £20 or so an hour you'd be willing to pay would not be worth it.
Perhaps it is time to just admit that Linux and open source is not for you. And that's fine; it's not supposed to be for everyone, it's just a tool among others.
…"aftershot". There's another tool from back then: ACDSee.
“Not for you”? Lol. I’ve been using it as a seasoned pro since 2004. That’s the common mistake of assuming you know someone online.
The issue with Linux is not so much Linux as the arrogance of the obsessives and how they decry “evil” (read: hugely hard working, clever and deservedly successful) Microsoft etc. Sour grapes sure make a lot of whine.
Linux fans whine about the fact that hardware designed and made for a profitable market, i.e. the gargantuan and profitable desktop and server market, doesn’t run perfectly under Linux, etc. etc. Hey guys, make your own hardware if you’re that upset (hang on, that would require a large paying user base and a parts market forming around it due to it being dominant and used EVERYWHERE FOR DECADES).
Whiners whine. I’ve heard (and stupidly taken part in) every conceivable, predictable pro-Linux debate online, and the same junk goes round and round for years. Windows pays bills; work involving Windows pays bills. Work done on Macs pays bills. Servers running Linux pay huge bills too. Desktop Linux is what’s left at the end of the meal. That’s how it panned out.
If they TRULY want to be successful, then it’s time to walk out of the pity party, go home, put on their suits and go do some selling, never mind everyone else. Linux people love to evangelise and criticise. That massages egos but doesn’t pay well.
In other words, to the developers, most of the time users are just a pain in the bum. They ask silly questions. The only time users are helpful is when they find bugs, and even then, for a developer to help you, you need to convince them you've tried everything and researched the problem properly.
This doesn't count the Linux fanboys who are convinced it's the best thing ever and everyone should use it.
Linux developers don't give a toss about whether their software is used or not. The situation regarding help and support with Linux is similar to those asking questions on this forum. He explained it quite well at the end of the post linked below:
https://www.eevblog.com/forum/programming/rust-is-political/msg4188961/#msg4188961
(Bit of advice: if you write a post and find that most of the content is just rehashing your tired old complaint about some generalized group of people you disagree with, then don't press the "Post" button.)
"But I need to know that everyone hates the same things I do!!!"