[Banter] What is the worst software you have used for its price?
| eti:
--- Quote from: Nominal Animal on May 23, 2022, 08:02:14 am ---
--- Quote from: free_electron on May 23, 2022, 04:49:46 am ---
It is not possible to do that with software.
--- End quote ---

Not with dumb software and dumb data buffering schemes, no. But let's say you have an 8-bit ADC and 64-byte cachelines, and as you receive the data, you construct a parallel lookup of min-max values, filling another cacheline per 32 cachelines (2048 samples). You've now dropped the memory bandwidth required to find min-max for any range to 1/32nd, except that the start and end points have a granularity of 64 samples. (So do those cachelines separately, I guess.)

Similarly, if you can reorder the received data so that you get the cachelines across waveforms, you can construct the display from left to right and use all sorts of clever spanning techniques. Even antialiased lines boil down to lots and lots of additions, and a few not-too-large lookup tables (that depend on the time base and such).

Using an ARM or Intel/AMD core for that kind of stupid work makes no sense. The cores are slow at that sort of stuff, and you're paying for nothing there. Instead, stick a DSP or similar between the acquire buffer and the UI processor, so that the UI processor computes and sets the lookup tables and memory transfers, and the DSP just spits out intensity slices (say, 5-bit full-height pixel columns) that the UI processor then just composes into the display.

To do this sort of stuff right, one must think of the data flow. A very similar thing really bugs me with most simulator software running on HPC clusters: they calculate, then communicate, then calculate, then communicate, and so on, instead of doing both at the same time. Why? Because it is hard to think of what data needs to be transferred after the next step, when the next step is yet to be calculated. The data does need to be present before the next time step is calculated, so essentially your data transfers need to be at least one step ahead, and that means predictive and/or heuristic transfers without false negatives (you can transfer extra, but you need to transfer all that are needed), node load balancing, and so on... Just too hard for programmers who can always just tell professors to buy more and newer hardware.

--- Quote from: eti on May 23, 2022, 07:13:22 am ---
Linux is simultaneously a good and bad thing. It's as good as the price we pay, because the "support" is "piss off, you should know this, we learnt and so now must you, and learn all the new acronyms and syntax which some autistic 'community' assumes you knew from birth, and we know you have a busy life, but spend a month trawling sourceforge, then compile... rinse and repeat".
--- End quote ---

No, that's not it. For open source communities, end users are a net negative: a cost, not a benefit. Only those who contribute back, somehow, are worth the effort of helping. What "actual 9-5 humans want, need and use" is absolutely, completely irrelevant. This is why Linux greybeards laugh at you when you say something like "you need to do X so that Linux can become as popular as Y". It is as silly to us as Insta-gran and Fakebook "influencers" demanding free food and accommodation.

As to why paid Linux end-user support is relatively hard to find, I think it is because getting such a commercial venture going is highly risky. It is relatively simple to set up Linux user support within an organization, but as a commercial service, you have huge risks from customers who vent their disappointment at Linux not being a drop-in Windows replacement at you, ruining your reputation at the same time. The risks aren't worth the gains. I mean, I consider you, eti, a professional person. But I for sure would not like to put anyone under your ire at Linux and open source. The £20 or so an hour you'd be willing to pay would not be worth it.

Perhaps it is time to just admit that Linux and open source is not for you. And that's fine; it's not supposed to be for everyone, it's just a tool among others.
--- End quote ---

"Not for you"? Lol. I've been using it as a seasoned pro since 2004. That's the common mistake of assuming you know someone online.

The issue with Linux is not so much Linux as the arrogance of the obsessives and how they decry "evil" (read: hugely hard-working, clever and deservedly successful) Microsoft etc. Sour grapes sure make a lot of whine. Linux fans whine about the fact that hardware designed and made for a profitable market, i.e. the gargantuan and profitable desktop and server market, doesn't run perfectly under Linux, etc. etc. Hey guys, make your own hardware if you're that upset (hang on, that would require a large paying user base and a parts market that forms around it due to it being dominant and used EVERYWHERE FOR DECADES). Whiners whine.

I've heard (and stupidly taken part in) every conceivable, predictable pro-Linux debate ever online, and the same junk goes round and round for years. Windows pays bills; work involving Windows pays bills. Work done on Macs pays bills. Servers running Linux pay huge bills too. Desktop Linux is what's left at the end of the meal. That's how it panned out. If they want to be TRULY successful, then it's time to walk out of the pity party, go home, put on their suits and go do some selling, never mind everyone else. Linux people love to evangelise and criticise. That massages egos but doesn't pay well.
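[Editor's note: a minimal C sketch of the min-max summary idea Nominal Animal describes above, to make the 1/32nd-bandwidth claim concrete. It is only an illustration, not code from any actual scope firmware; the 8-bit unsigned samples, the 64-sample cachelines, and the function names build_summary and range_minmax are all assumptions.]

#include <stdint.h>
#include <stddef.h>

#define LINE 64u  /* samples per 64-byte cacheline (8-bit samples assumed) */

/* Build a (min, max) byte pair per data cacheline; 32 such pairs fill one
 * 64-byte summary cacheline, i.e. one summary line per 2048 samples.
 * summary[] must hold 2 bytes per (possibly partial) cacheline. */
static void build_summary(const uint8_t *samples, size_t n, uint8_t *summary)
{
    for (size_t line = 0; line * LINE < n; line++) {
        size_t end = (line + 1) * LINE;
        if (end > n) end = n;
        uint8_t lo = 0xFF, hi = 0x00;
        for (size_t i = line * LINE; i < end; i++) {
            if (samples[i] < lo) lo = samples[i];
            if (samples[i] > hi) hi = samples[i];
        }
        summary[2 * line]     = lo;
        summary[2 * line + 1] = hi;
    }
}

/* Min/max over the half-open sample range [first, last): full interior
 * cachelines are answered from the summary (roughly 1/32 of the memory
 * traffic), and only the ragged ends touch the raw acquisition buffer. */
static void range_minmax(const uint8_t *samples, const uint8_t *summary,
                         size_t first, size_t last,
                         uint8_t *min_out, uint8_t *max_out)
{
    uint8_t lo = 0xFF, hi = 0x00;

    size_t full_lo = (first + LINE - 1) / LINE;   /* first fully covered line */
    size_t full_hi = last / LINE;                 /* one past the last one    */

    size_t head_end   = full_lo * LINE < last  ? full_lo * LINE : last;
    size_t tail_start = full_hi * LINE > first ? full_hi * LINE : first;
    if (tail_start < head_end) tail_start = head_end; /* range within one line */

    for (size_t i = first; i < head_end; i++) {           /* partial head line */
        if (samples[i] < lo) lo = samples[i];
        if (samples[i] > hi) hi = samples[i];
    }
    for (size_t line = full_lo; line < full_hi; line++) { /* summary lookups */
        if (summary[2 * line]     < lo) lo = summary[2 * line];
        if (summary[2 * line + 1] > hi) hi = summary[2 * line + 1];
    }
    for (size_t i = tail_start; i < last; i++) {          /* partial tail line */
        if (samples[i] < lo) lo = samples[i];
        if (samples[i] > hi) hi = samples[i];
    }

    *min_out = lo;
    *max_out = hi;
}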
| Kjelt:
Siemens Teamcenter
Clearcase
Both are bureaucratic bloatware programs that should only be sold and used in North Korea.

Eagle, although I use it. Who TF comes up with the stupid idea that copying some symbols from one schematic to another requires the user to manually type CUT (while it is a copy, not a cut), go to the other schematic and type PASTE? It is a GUI; a right mouse click should suffice for this..... unbelievable.
| nctnico:
--- Quote from: Nominal Animal on May 23, 2022, 08:02:14 am ---
--- Quote from: free_electron on May 23, 2022, 04:49:46 am ---
It is not possible to do that with software.
--- End quote ---

Not with dumb software and dumb data buffering schemes, no. But let's say you have an 8-bit ADC and 64-byte cachelines, and as you receive the data, you construct a parallel lookup of min-max values, filling another cacheline per 32 cachelines (2048 samples). You've now dropped the memory bandwidth required to find min-max for any range to 1/32nd, except that the start and end points have a granularity of 64 samples. (So do those cachelines separately, I guess.)

Similarly, if you can reorder the received data so that you get the cachelines across waveforms, you can construct the display from left to right and use all sorts of clever spanning techniques. Even antialiased lines boil down to lots and lots of additions, and a few not-too-large lookup tables (that depend on the time base and such).
--- End quote ---

The reality is that you can't do it in hardware either (until recently; GPUs are becoming more mainstream in embedded systems). Think about going through >100 MB of data and processing it in a timely manner. So clever sub-sampling techniques are used to create an image that represents the minimum / maximum, while having some aliasing on purpose to indicate there is an anomaly in the signal. After all, an oscilloscope is intended to provide meaningful information about a signal even if the individual periods cannot be shown. A simple test you can do is to acquire a frequency sweep of a reasonably high frequency with a small span. This will reveal the sub-sampling.
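[Editor's note: a hedged sketch of the kind of min/max column reduction nctnico describes, in C for consistency with the sketch above. The function name, the 8-bit sample type and the even slicing of the record are assumptions for illustration, not any particular scope's implementation.]

#include <stdint.h>
#include <stddef.h>

/* Reduce 'nsamples' acquired 8-bit samples to 'width' display columns by
 * keeping the extremes of each column's slice, so a single-sample runt
 * still paints a full-height spike (the deliberate "aliasing"). */
static void reduce_to_columns(const uint8_t *samples, size_t nsamples,
                              size_t width, uint8_t *col_min, uint8_t *col_max)
{
    for (size_t x = 0; x < width; x++) {
        /* slice [begin, end) of the record belonging to column x */
        size_t begin = (size_t)((uint64_t)x       * nsamples / width);
        size_t end   = (size_t)((uint64_t)(x + 1) * nsamples / width);
        if (end <= begin) end = begin + 1;  /* fewer samples than columns */

        uint8_t lo = 0xFF, hi = 0x00;
        for (size_t i = begin; i < end && i < nsamples; i++) {
            if (samples[i] < lo) lo = samples[i];
            if (samples[i] > hi) hi = samples[i];
        }
        col_min[x] = lo;
        col_max[x] = hi;
        /* The renderer then draws a vertical bar from col_min[x] to
         * col_max[x] in column x, rather than plotting every sample. */
    }
}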
| HighVoltage:
--- Quote from: free_electron on May 23, 2022, 01:05:47 am ---
m "aftershot". There's another tool from back then: ACDsee
--- End quote ---

Don't laugh.... I am still on ACDSee Photo Manager 2009. It is fast and quick in all aspects. Every newer version I tried just sucked.
| zzattack:
Embarcadero C++Builder. Their raison d'être has to be companies that are overcommitted to their current software base. They do have excellent marketing, and there was a time when their VCL (from Delphi) offered advantages over competitors. While they claim to be the premier platform for rapid application development on Windows/Android/iOS/whatever-the-hell-else, allow me to just briefly highlight some of its very basic shortcomings:

- frequent compiler bugs
- longstanding STL issues
- code editor is super annoying with:
  * undo/redo buffer corrupting frequently
  * inability to set custom hotkeys
  * inability to disable cursor-past-end-of-line
  * no block-mode editing
  * ctrl+arrow key navigation skips over nearly all common code symbols
- opening a file from the project browser opens that file about 50% of the time; the other 50%, another seemingly random window/tab receives focus
- terrible UI editor
  * no undo AT ALL
  * everything Visual Studio and Qt do right, Embarcadero does wrong
  * inheritance of UI components requires manually updating all instances where a component is reused
- debugger is next to useless; this is probably the biggest productivity killer
  * no inspection of STL container types
  * accuracy of call stacks is hit and miss
  * inability to inspect local vars of the calling function
  * frequently crashes to a point where the system requires a reboot
  * about 80% of the time variables cannot be inspected, showing only "???"
  * data breakpoints cannot have conditions
- no parallel compilation until recently (they acquired a 3rd-party plugin to do so, buggy)
- contents of project files change every time they are opened; terrible for version control
- custom compiler/linker, still based on clang 5.0, but not supporting most compiler switches

Absolutely abysmal how debugging on an 8-bit PIC with MPLAB is a less frustrating experience than working with this IDE when targeting Win32 for a modern desktop application.