
[Banter] What is the worst software you have used for its price?


ebclr:
KiCad for sure. It has the worst interface of any software I have used; totally not intuitive.

Nominal Animal:

--- Quote from: T3sl4co1l on May 23, 2022, 12:51:46 am ---Even my Tek TDS460 has -- besides the annoyingly slow-to-respond menus -- an occasional bug which, I suspect goes something like: user input is interrupt triggered, and the front panel encoders go crunchy sometimes.  So just sliding a cursor around can freeze the UI requiring a power cycle.
--- End quote ---
It may not be exactly the same thing, but the underlying problem – coupling each UI state change to a particular update – is the same.

Even in my example of finite impulse response filter analysis as a standalone HTML page, I separated "calculation" from "redraw".  Granted, because the redraw was fast enough even on the slowest machine (with a modern browser; mainly Canvas support), I just chained them together.  If it had been a problem, all I'd need to do is add a global variable holding the current redraw timeout.  Whenever a recalculation is triggered, it would first cancel any pending redraw, and then set a new timeout after the calculation completes.
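
In C++-ish terms (the actual page would have used a JavaScript timeout; this is only a sketch of the same idea, with made-up function names), a single generation counter suffices to drop stale redraws:

--- Code: ---
#include <atomic>
#include <cstdint>

static std::atomic<std::uint64_t> redraw_generation{0};

static void recalculate_filter_response() { /* heavy recomputation would go here */ }
static void schedule_redraw()             { /* queue a redraw event here */ }

// Called whenever the user edits a filter parameter.
void on_parameters_changed()
{
    // Starting a new recalculation invalidates any still-pending redraw.
    const std::uint64_t mine = ++redraw_generation;

    recalculate_filter_response();

    // Redraw only if no newer recalculation has started in the meantime.
    if (redraw_generation.load() == mine)
        schedule_redraw();
}
--- End code ---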

I've used the Gtk+ and Qt widget toolkits extensively.  (Gtk+ is pure C, Qt is C++.)  Both are built around an event loop under the toolkit's control, so they require an event-driven programming approach.  Neither works well with major computation done in the same thread as the UI event loop, and this is typical of all UI toolkits and approaches.  Even so, that –– do everything within the UI event loop, perhaps in the idle handler –– is what the widget toolkit documentation recommends.  Stupid.  You do not even need multiple hardware threads!  All you need is a way to interrupt or time-slice the major computation.  The simplest implementation only requires a timer interrupt, plus switching the processor state and stack between the different concurrent tasks.
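
To make "time-slice the major computation" concrete, here is a minimal, toolkit-agnostic sketch (the names are invented); you would drive run_slice() from g_idle_add() in Gtk+, or from a zero-interval QTimer in Qt:

--- Code: ---
#include <algorithm>
#include <cstddef>
#include <vector>

// Heavy computation split into bounded slices, so the UI event loop gets
// control back between slices.  No extra hardware threads required.
struct SlicedJob {
    std::vector<double> data;
    std::size_t         next = 0;    // next unprocessed element
    double              sum  = 0.0;  // stand-in for "the heavy result"

    // Process at most max_items items; returns true when the job is done.
    bool run_slice(std::size_t max_items)
    {
        const std::size_t end = std::min(next + max_items, data.size());
        for (; next < end; ++next)
            sum += data[next];       // placeholder for the real per-item work
        return next >= data.size();
    }
};
// The idle/timer callback keeps calling run_slice() ("call me again")
// until it returns true, and only then triggers the final redraw.
--- End code ---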

One of the real tricks is having a way to cancel and/or restart the heavy calculation.  (An atomic flag and an early return from the computation function work well.)  In an oscilloscope, this is not really an option, because –– assuming I have the correct picture of how they work –– the "heavy computation" is actually communication with the dedicated capture hardware, and setting that up.  I can well imagine that a hardware FPGA/ASIC designer without any user interface experience would design this communication as a full setup information package, instead of a per-variable/feature one, because the former is just so much simpler and more robust.  But it also means that the UI must be very careful about when it decides to send such setup packages: delay too long, and the UI will be sluggish.  Queue the changes, and you get the "twirl-a-knob-and-it-will-freeze".
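
For the cancel-and-restart part, the atomic flag approach is about this simple (a sketch only; the actual work is a placeholder):

--- Code: ---
#include <atomic>
#include <cstddef>
#include <vector>

static std::atomic<bool> cancel_requested{false};

// Returns false if the computation was cancelled before it finished.
bool heavy_computation(const std::vector<double>& samples, double& result)
{
    double acc = 0.0;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        // Check the flag every 4096 items, not on every single one.
        if ((i & 0xFFF) == 0 && cancel_requested.load(std::memory_order_relaxed))
            return false;            // early return; the caller may restart
        acc += samples[i];           // placeholder for the real work
    }
    result = acc;
    return true;
}
// UI side: set cancel_requested, wait for the worker to return, clear the
// flag, and start a new computation with the new parameters.
--- End code ---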

No: for best results, the control messages need to be categorized by the latencies of their effects.  Something like changing the trigger level should be basically instant.  Something that causes e.g. a relay to change state will take a human-noticeable time (a fraction of a second), and therefore must override any of the faster changes.  The UI side needs a configuration state machine that is aware of these latencies, so that it does not bother to e.g. set the input scale if it knows a different scale is already selected in the UI.  And yes, you indeed want to do the (necessary) highest-latency changes first: I'll leave it as an exercise to realize why.
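
Concretely, I mean something along these lines (just a sketch; the settings, their latency classes, and the send_*() transport calls are all invented for illustration):

--- Code: ---
#include <cstdint>

// Coalesced "desired state": the UI only updates this; it never queues
// individual knob events.
struct ScopeConfig {
    int          input_coupling = 0;   // relay-backed: slow, human-noticeable
    int          vertical_scale = 1;   // attenuator relays: slow-ish
    int          timebase_index = 10;  // fast
    std::int32_t trigger_level  = 0;   // effectively instant
};

// Called from the communication side whenever `desired` has changed;
// `applied` mirrors what the capture hardware is currently set to.
// The (necessary) highest-latency changes go out first -- see the
// exercise above.
void push_config_difference(const ScopeConfig& desired, ScopeConfig& applied)
{
    if (desired.input_coupling != applied.input_coupling) {
        // send_coupling(desired.input_coupling);      // hypothetical transport
        applied.input_coupling = desired.input_coupling;
    }
    if (desired.vertical_scale != applied.vertical_scale) {
        // send_scale(desired.vertical_scale);
        applied.vertical_scale = desired.vertical_scale;
    }
    if (desired.timebase_index != applied.timebase_index) {
        // send_timebase(desired.timebase_index);
        applied.timebase_index = desired.timebase_index;
    }
    if (desired.trigger_level != applied.trigger_level) {
        // send_trigger_level(desired.trigger_level);
        applied.trigger_level = desired.trigger_level;
    }
}
--- End code ---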

Simply put, you need an intermediary state machine or filter between the user events and the capture engine, to minimize the latency between UI changes and their visible effects.  To hardcore FPGA/ASIC designers who disdain anything human-scale, this is "ugly" and "silly"; they feel it is an intrusion into their domain.

Similarly, if you write a user interface to communicate with, say, a USB-, serial-, or Ethernet-connected device, you need to split the communication into a separate thread (often a state machine), and use a thread-safe queue or similar to pass commands/requests/state changes and results between the two.  Either side also needs that small state-change optimizer machine, so that changes are not queued but combined for efficiency.  Hell, even if you just write a crude tool to display the microphone signal level at the edge of your display, you'll want to use a separate thread for that, so that UI hiccups do not affect the audio stream and vice versa.
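
A minimal sketch of the thread-safe hand-off I mean (a "latest value" mailbox rather than a growing queue, so changes coalesce instead of piling up behind a slow device; standard C++, nothing toolkit-specific):

--- Code: ---
#include <condition_variable>
#include <mutex>
#include <optional>
#include <utility>

// One-slot mailbox between the UI thread and the communication thread.
// A newer posting overwrites the pending one, so stale settings are
// combined away instead of queueing up.
template <typename T>
class LatestValueMailbox {
public:
    void post(T value)                       // called by the UI thread
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_ = std::move(value);     // overwrite, never queue
        }
        cond_.notify_one();
    }

    T wait_and_take()                        // called by the communication thread
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return pending_.has_value(); });
        T value = std::move(*pending_);
        pending_.reset();
        return value;
    }

private:
    std::mutex              mutex_;
    std::condition_variable cond_;
    std::optional<T>        pending_;
};
--- End code ---

The communication thread can then feed whatever it takes out of the mailbox into a difference-push like the one sketched earlier.
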
For most programmers, this is hard.  It is already hard to switch between event-driven and state machine logic; but doing so just to get a responsive UI is not worth the effort to most –– even if they are paid to do exactly that.

I definitely blame the developers and programmers for these kinds of issues.  They are solvable, and the solutions are well known.  The fuckers are just too lazy or inept to do the work right.  (Then again, the one complaint I always got when I wrote software for a living was "it does not need to be perfect; it just needs to look like it works.  We can fix the issues later, when we have time".  So maybe I'm expecting too much from people.)

free_electron:

--- Quote from: Nominal Animal on May 23, 2022, 03:06:07 am ---One of the real tricks is having a way to cancel and/or restart the heavy calculation.  (An atomic flag and an early return from the computation function work well.)  In an oscilloscope, this is not really an option, because –– assuming I have the correct picture of how they work –– the "heavy computation" is actually communication with the dedicated capture hardware, and setting that up.  I can well imagine that a hardware FPGA/ASIC designer without any user interface experience would design this communication as a full setup information package, instead of a per-variable/feature one, because the former is just so much simpler and more robust.  But it also means that the UI must be very careful about when it decides to send such setup packages: delay too long, and the UI will be sluggish.  Queue the changes, and you get the "twirl-a-knob-and-it-will-freeze".

--- End quote ---
No.  The problem is that they do everything in software on the PC side: they copy a bunch of samples from the acquisition board, then do a bunch of data crunching, and visualise it.

For example, peak detect: you have to search the array for the minimum and maximum value.  If you just did that on a million samples, it takes some time...  If it is done in hardware, it takes zero time, because it happens while the data is being acquired: two digital comparators looking at what comes out of the ADC before it even goes into memory.  Once the memory is full, you have your min and max right there; no need to go over the data once more.
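
In software terms, those two comparators amount to nothing more than this running update per acquired sample, so the answer exists the instant the memory is full (sketch only):

--- Code: ---
#include <cstdint>

// What the two comparators do, expressed as code: update the running
// min/max as each sample arrives; no second pass over memory needed.
struct PeakDetect {
    std::uint8_t min = 255;
    std::uint8_t max = 0;

    void on_sample(std::uint8_t s)   // called once per acquired sample
    {
        if (s < min) min = s;
        if (s > max) max = s;
    }
};
--- End code ---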

Another gripe of mine: aliasing on a scope screen.  There are only so many pixels horizontally, so how do you cram a million samples onto a 1024x800 screen?  Do you only take one sample in every 1000?  Do you draw a line to the next one?  We all know there is a risk that creates an aliased image.  The correct way to do this is to take a block of 1000 samples, find the minimum and maximum in that block, and draw a vertical line (not a line to the next "sample") from min to max, then process the next block.  So your horizontal step is simply a counter for the block being processed, while the vertical is a line from min to max.  Bye bye aliasing.  The problem: that takes a lot of CPU cycles to do.  In hardware?  It can be done as the data is being acquired: a digital comparator finds the min and max, and at the end of 1000 samples it stores them.  So you get a secondary array containing the actual stuff that needs visualisation.

Feed that data straight into a hardware overlay.  (That's how the Infiniiums did it: the scope hardware writes directly to video memory, bypassing the entire PC and operating system.  If you did an Alt-PrintScreen you got a nice picture of all the Windows icons and menus, but the actual scope grid was a black canvas... actually a specific RGB value: the hardware knew it could only overlay anything having that specific RGB mark.  Think of a green screen, like for TV newsrooms or weather reports: anything with that specific RGB is replaced by the hardware overlay.)
If you altered the timebase post-capture, they simply instructed the hardware to rescan its memory and build a new min/max array (those scopes have 800x600 LCD monitors, so you only need something like 512 pairs horizontally...).  Since the acquisition hardware can cycle the memory at 4 GHz (the scope can do 4 GS/s, so it has the memory speed), crunching 4 megasamples takes 1 millisecond to find the 800 min/max pairs.  The LCD refresh rate is 100 Hz... which means the new visualisation happens much faster than even the monitor can follow.
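
Spelled out in software, the min/max decimation described above is just this (a sketch; in the scope it is of course done by comparators while the data streams in, not by a CPU loop afterwards):

--- Code: ---
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Column { std::uint8_t min, max; };   // one vertical line on screen

// Collapse `block` samples into one min/max column, e.g. 1,000,000 samples
// with block = 1000 become ~1000 columns, with no aliasing.
std::vector<Column> decimate_min_max(const std::vector<std::uint8_t>& samples,
                                     std::size_t block)
{
    std::vector<Column> columns;
    for (std::size_t start = 0; start < samples.size(); start += block) {
        Column c{255, 0};
        const std::size_t end = std::min(start + block, samples.size());
        for (std::size_t i = start; i < end; ++i) {
            if (samples[i] < c.min) c.min = samples[i];
            if (samples[i] > c.max) c.max = samples[i];
        }
        columns.push_back(c);   // drawn as a vertical line from min to max
    }
    return columns;
}
--- End code ---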

On the newfangled machines they first need to pump all the data from the acquisition memory over a relatively slow bus (much slower than the acquisition memory) into the PC memory, and then a bunch of algorithms have to run sequentially and iteratively over the data to find what they need.  In many scopes these days the acquisition memory is much, much larger than the PC memory, so there is no way to pump all the data over (with a DMA-like mechanism, for example); the PC side simply doesn't have enough storage space.  Storing it on SSD or spinning rust would be an even slower nightmare...  So that means the PC side must access that memory through a bus that is much slower than its access to its own memory.

That's where the bottleneck lies: it's a big dataset that needs moving, or accessing through a slow bus, much slower than the acquisition or main memory.  And then you need to unleash iterative and sequential operations to find what is of interest.
Shoving that task onto the acquisition hardware is the correct solution.  That memory and logic is much, much faster than the PC will ever be, since it can run at acquisition speed.

That's why all those cheap scopes use a simple ARM processor and a big fat FPGA hooked into the acquisition memory.  They don't copy data, or move it, or dig into it from the ARM side.  The ARM tells the hardware: build me an array of screen size with this kind of information.  The hardware does that before the current LCD redraw cycle has even completed (in the old days of vacuum balloons they did it in the vertical flyback).

Look at those older 54645D oscilloscopes.  They have 4-megasample memory per channel; there are 16 megabytes total (8 meg for the two analog channels, 4 each, and 8 meg for the 16 digital channels).  That thing is run by a Motorola 68000 clocked at 8 MHz...
Its display is a picture tube with 600x400 resolution.  You can twirl that timebase knob as hard as you can: the screen refresh is instantaneous, without flickering or lag or aliasing.
It is not possible to do that with software.  That 68K has a 16-bit data bus, so even if you were to pair the memories you would need to move 8 megawords of data at 8 MHz.  That alone would take a second, and you haven't done anything yet: counting, and doing the compare on each sample to find min and max.  Let's assume you need 10 instructions per sample; now you are looking at 10 seconds to do a screen redraw.  And there are tons of other things to do: scan the keyboard, the encoders, the GPIB, maybe run an FFT to show the spectrum of the analog channels.  How about doing bus decoding or pattern matching on the logic data?

The acquisition system runs at a 100 MHz clock...  Find the min/max pairs on a 4-meg-deep block?  It can do that 25 times a second!  (It has parallel access to all the data.)

eti:
Linux is simultaneously a good and a bad thing.  It's as good as the price we pay, because the "support" is "piss off, you should know this; we learnt it and so now must you; learn all the new acronyms and syntax which some autistic 'community' assumes you knew from birth; and we know you have a busy life, but spend a month trawling SourceForge, then compile... rinse and repeat".

I know the benefits and pitfalls of ALL the mainstream Mac/Win/Lin OSes, and use them with caution and wisdom.

PS: I am autistic; we aren't ALL oblivious to the fact that HUMANS are the ones using products, and that they need clear, simple guides.  The Linux 'community' is the reason it's not a full-blown, worldwide commercial phenomenon: aside from a trillion conflicting variants, there's a CLEAR LACK of the ability to understand what ACTUAL 9-5 humans want, need and use.  As for all the "open source is magic" mindset - utter tripe - if it WORKS and I can PAY ££ FOR SUPPORT, and not pay with my valuable time chasing my tail and tearing my hair out, I'll gladly line your pockets; screw "open source" - it's an ego massage.

Nominal Animal:

--- Quote from: free_electron on May 23, 2022, 04:49:46 am ---It is not possible to do that with software.
--- End quote ---
Not with dumb software and dumb data buffering schemes, no.

But let's say you have an 8-bit ADC and 64-byte cachelines, and as you receive the data, you construct a parallel lookup of min/max values, filling one extra cacheline per 32 data cachelines (2048 samples).  You've now dropped the memory bandwidth required to find the min/max of any range to 1/32nd, except that the start and end points have a granularity of 64 samples.  (So handle those partial edge cachelines separately, I guess.)
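
A sketch of what I mean, using the numbers above (8-bit samples, 64 samples per 64-byte data cacheline, 32 min/max pairs packed into each 64-byte summary cacheline):

--- Code: ---
#include <cstddef>
#include <cstdint>
#include <vector>

struct Pair { std::uint8_t min, max; };   // 2 bytes; 32 of these fill one 64-byte cacheline

// One (min, max) pair per 64-sample data cacheline, so each summary
// cacheline covers 32 data cachelines = 2048 samples.
std::vector<Pair> build_summary(const std::vector<std::uint8_t>& samples)
{
    std::vector<Pair> summary((samples.size() + 63) / 64, Pair{255, 0});
    for (std::size_t i = 0; i < samples.size(); ++i) {
        Pair& p = summary[i / 64];
        if (samples[i] < p.min) p.min = samples[i];
        if (samples[i] > p.max) p.max = samples[i];
    }
    return summary;
}

// Min/max over [first, last): whole cachelines come from the summary
// (1/32 of the bandwidth); only the partial edges touch raw samples.
Pair range_min_max(const std::vector<std::uint8_t>& samples,
                   const std::vector<Pair>& summary,
                   std::size_t first, std::size_t last)
{
    Pair r{255, 0};
    auto fold = [&r](std::uint8_t lo, std::uint8_t hi) {
        if (lo < r.min) r.min = lo;
        if (hi > r.max) r.max = hi;
    };
    while (first < last && first % 64 != 0) { fold(samples[first], samples[first]); ++first; }
    while (first + 64 <= last)              { fold(summary[first / 64].min, summary[first / 64].max); first += 64; }
    while (first < last)                    { fold(samples[first], samples[first]); ++first; }
    return r;
}
--- End code ---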

Similarly, if you can reorder the received data so that you get the cachelines across waveforms, you can construct the display from left to right and use all sorts of clever spanning techniques.  Even antialiased lines boil down to lots and lots of additions, and a few not-too-large lookup tables (that depend on the time base and such).

Using an ARM or Intel/AMD core for that kind of stupid work makes no sense.  The cores are slow at that sort of stuff, and you're paying for nothing there.  Instead, stick a DSP or similar between the acquire buffer and the UI processor, so that the UI processor computes and sets the lookup tables and memory transfers, and the DSP just spits out intensity slices (say, 5-bit full-height pixel columns) that the UI processor then just composes into the display.

To do this sort of stuff right, one must think of the data flow.  A very similar thing really bugs me with most simulator software running on HPC clusters: they calculate, then communicate, then calculate, then communicate, and so on, instead of doing them both at the same time.  Why?  Because it is hard to think of what data needs to be transferred after the next step, when the next step is yet to be calculated.  The data does need to be present before the next time step is calculated, so essentially your data transfers need to be at least one step ahead, and that means predictive and/or heuristic transfers without false negatives (you can transfer extra, but you need to transfer all that are needed), node load balancing, and so on...  Just too hard for programmers who can always just tell professors to buy more and newer hardware.
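
And the standard fix is not even that hard: post the transfers for the halo data immediately, compute everything that does not need it while the messages are in flight, and only then finish the boundary.  A bare-bones MPI sketch (1-D decomposition; left and right are assumed to be valid neighbour ranks, with one ghost cell per side):

--- Code: ---
#include <mpi.h>
#include <vector>

// One time step with communication overlapped with computation.
void time_step(std::vector<double>& field, int left, int right, MPI_Comm comm)
{
    const int n = static_cast<int>(field.size());
    double ghost_left = 0.0, ghost_right = 0.0;
    MPI_Request req[4];

    // 1. Start the halo exchange immediately (non-blocking).
    MPI_Irecv(&ghost_left,   1, MPI_DOUBLE, left,  0, comm, &req[0]);
    MPI_Irecv(&ghost_right,  1, MPI_DOUBLE, right, 1, comm, &req[1]);
    MPI_Isend(&field[0],     1, MPI_DOUBLE, left,  1, comm, &req[2]);
    MPI_Isend(&field[n - 1], 1, MPI_DOUBLE, right, 0, comm, &req[3]);

    // 2. Compute the interior, which does not need the ghost cells,
    //    while the transfers are in flight.  (Placeholder: a real code
    //    would update field[1 .. n-2] here.)

    // 3. Only now wait for the transfers, then finish the two boundary
    //    cells using ghost_left and ghost_right.
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
}
--- End code ---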


--- Quote from: eti on May 23, 2022, 07:13:22 am ---Linux is simultaneously a good and bad thing. It's as good as the price we pay, because the "support" is "piss off, you should know this, we learnt and so now must you, and learn all the new acronyms and syntax which some autistic 'community' assumes you knew from birth, and we know you have a busy life, but spend a month trawling sourceforge, then compile... rinse and repeat"
--- End quote ---
No, that's not it.

For open source communities, end users are a net negative: a cost, not a benefit.  Only those who contribute back, somehow, are worth the effort of helping.  What "actual 9-5 humans want, need and use" is absolutely, completely irrelevant.  This is why Linux greybeards laugh at you when you say something like "you need to do X so that Linux can become as popular as Y".  It is as silly to us as Insta-gran and Fakebook "influencers" demanding free food and accommodation.

As to why paid Linux end-user support is relatively hard to find, I think it is because getting such a commercial venture going is highly risky.  It is relatively simple to set up Linux user support within an organization, but as a commercial service you face huge risks from customers who vent their disappointment at you when Linux turns out not to be a drop-in Windows replacement, ruining your reputation at the same time.  The risks aren't worth the gains.
I mean, I consider you, eti, a professional person.  But I for sure would not want to subject anyone to your ire at Linux and open source.  The £20 or so an hour you'd be willing to pay would not be worth it.

Perhaps it is time to just admit that Linux and open source is not for you.  And that's fine; it's not supposed to be for everyone, it's just a tool among others.
