Author Topic: Neat algorithms  (Read 2256 times)


Online Nominal Animal

  • Frequent Contributor
  • **
  • Posts: 770
  • Country: fi
    • My home page and email address
Re: Neat algorithms
« Reply #50 on: August 06, 2019, 08:47:45 pm »
Shows you the "care" typical proprietary developers have with copyrights...  :palm:
Open source mate !
https://csharp.hotexamples.com/examples/-/FFT/-/php-fft-class-examples.html
So... because somebody put the code on a web page, you have the right to use it in your projects?  No, that's not how copyright works, mate.

Let me clarify my position.  Many proprietary software developers believe that open source is anticapitalistic, communistic, or otherwise against competition and horrible for everyone involved, and that that gives them the right to ignore copyright licenses.  I disagree.  I do both open source and proprietary development, and actually had to hire a lawyer around 2000 to clarify various licenses to me -- I have this irrational internal drive to do things right, even if nobody else cares -- and I now claim to have a pretty good picture of the legal implications in various jurisdictions, as well as their suitability for various developer intentions.  Thing is, open source is based on competition; you can describe how it works in terms of game theory.  It is just that if you are fixated on a single business model, you refuse to consider any other models, and cannot see how one can benefit (as a developer) from open source licensing.  However, laws apply whether one understands them or not, and that includes copyright laws.

Even more than unscrupulous businesses (like Qualcomm and Cisco, who seem to rely on the size of their legal departments to avoid having to follow laws, especially copyright laws wrt. open source projects like Linux), I hate incompetent copy-paste programmers, who not only get paid to create shit that will stain others, but happily believe they are doing something positive in the world.  They should be summarily shot.

(Not with a firearm, though.  With a cluebat.  Everyone can learn, except the rare ones who refuse.  Now those, those can be shot with a firearm for all I care.)
« Last Edit: August 06, 2019, 08:50:22 pm by Nominal Animal »
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 2915
  • Country: us
Re: Neat algorithms
« Reply #51 on: August 07, 2019, 11:27:05 am »
Quote
Any source code found here may be freely used provided credits are given to the author.
I had suspected that that part had not been noticed, or followed...  A *LOT* of web-published SW contains such provisions.  As a hobbyist, giving credit where it is due is the least you can do.  As a professional, it's really important, legally, to keep track of the provenance of any found software that you use.  You probably need a lawyer or legal team to say "for sure" whether your use of OSSW will change the terms under which you can make money with your final product, but you should AT LEAST be able to say "I found this code at xxx site, read the license, and I think it's ok for me to use it in my product..."
This gets a bit fuzzier if we're talking about those 4-line "neat algorithms" with an obvious implementation.  I wouldn't worry about that "bit counting using shifts, ands, and adds", for example.  But if you have a piece of code as large as an FFT subroutine...   Pay more attention!
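For what it's worth, the bit-counting trick mentioned above is small enough that a clean-room version takes minutes to write. Here is a minimal sketch (the classic parallel-sum popcount, written from scratch rather than copied from anywhere) using only shifts, ANDs, and adds, plus one subtract:

```python
def popcount32(x: int) -> int:
    """Count the set bits in a 32-bit value with shifts, ANDs, and adds."""
    x &= 0xFFFFFFFF
    x = x - ((x >> 1) & 0x55555555)                   # sums of bit pairs
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)    # sums per 4 bits
    x = (x + (x >> 4)) & 0x0F0F0F0F                   # sums per byte
    x = x + (x >> 8)                                  # add byte sums...
    x = x + (x >> 16)                                 # ...together
    return x & 0x3F                                   # result fits in 6 bits

print(popcount32(0b10110))      # 3
print(popcount32(0xFFFFFFFF))   # 32
```

The same structure extends to 64-bit values with widened masks and one extra shift-add step.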
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 1026
  • Country: nz
  • Currently at SiFive, previously Samsung R&D
Re: Neat algorithms
« Reply #52 on: August 07, 2019, 11:40:22 am »
Quote
Any source code found here may be freely used provided credits are given to the author.
I had suspected that that part had not been noticed, or followed...  A *LOT* of web-published SW contains such provisions.  As a hobbyist, giving credit where it is due is the least you can do.  As a professional, it's really important, legally, to keep track of the provenance of any found software that you use.  You probably need a lawyer or legal team to say "for sure" whether your use of OSSW will change the terms under which you can make money with your final product, but you should AT LEAST be able to say "I found this code at xxx site, read the license, and I think it's ok for me to use it in my product..."
This gets a bit fuzzier if we're talking about those 4-line "neat algorithms" with an obvious implementation.  I wouldn't worry about that "bit counting using shifts, ands, and adds", for example.  But if you have a piece of code as large as an FFT subroutine...   Pay more attention!

I'm feeling like many people didn't notice that even when I quoted the paragraph containing it here.

It's not exactly onerous terms to adhere to.
 

Offline legacy

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: ch
Re: Neat algorithms
« Reply #53 on: August 11, 2019, 08:06:31 pm »
Quote
Any source code found here may be freely used provided credits are given to the author.
I had suspected that that part had not been noticed, or followed...  A *LOT* of web-published SW contains such provisions.  As a hobbyist, giving credit where it is due is the least you can do.  As a professional, it's really important, legally, to keep track of the provenance of any found software that you use.  You probably need a lawyer or legal team to say "for sure" whether your use of OSSW will change the terms under which you can make money with your final product, but you should AT LEAST be able to say "I found this code at xxx site, read the license, and I think it's ok for me to use it in my product..."

you definitely need a lawyer or legal team.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 1684
  • Country: ca
Re: Neat algorithms
« Reply #54 on: August 11, 2019, 11:03:32 pm »
Thing is, open source is based on competition

Open source certainly participates in competition, and often wins because of its low price, even when better products are available. Then, once it becomes widespread, it is glorified by the mob, so that people believe it is actually the best.

A good example is FreeRTOS, which is fairly mediocre compared to other RTOSes that were on the market several years ago. By now, people believe FreeRTOS is the best, and the market for RTOSes has been destroyed entirely.

Meanwhile, FreeRTOS itself has been bought and repurposed by Amazon. I'm sure Amazon could have bought one of the better RTOSes, as the companies making RTOSes are certainly not in good shape and would definitely sell. But no. Amazon sees more value in popularity than in quality.
 

Online Nominal Animal

  • Frequent Contributor
  • **
  • Posts: 770
  • Country: fi
    • My home page and email address
Re: Neat algorithms
« Reply #55 on: August 11, 2019, 11:50:15 pm »
Then, once it becomes widespread, it is glorified by the mob, so that people believe it is actually the best.
Human mobs are irrational, absolutely true.

Open source has survived, and would survive, just fine without mobs; but take away the competitive aspect, and it dies of bitrot.  Competition keeps FOSS alive.

A good example is FreeRTOS, which is fairly mediocre compared to other RTOSes that were on the market several years ago. By now, people believe FreeRTOS is the best, and the market for RTOSes has been destroyed entirely.
Don't blame FreeRTOS, blame humans who are satisfied with mediocre features and performance.

I have exactly the same gripe about the fundamental structure in molecular dynamics simulators (LAMMPS, Gromacs, etc.).  Their core design reflects computer architectures of the 1980s.  Everything else, from MP (multiprocessing) to MPI (distributed computing) to GPGPU (GPU computation) is just bolted-on.  Aggregation, not design.  But since it works, after a fashion, everyone is satisfied. (Except myself, it seems.)

I just believe in freedom of choice, too; that I do not have the right to force others to sanity.



The reason I do not blame open source, or even think of ways to "fix" open source to counteract the mediocrity, is that this has happened and keeps happening all over human behaviour.  For example, a hundred and twenty years ago, electric cars were ubiquitous in New York.  Compared to now, we have better materials (lighter structures, better electric motor performance, and better battery capacity), but it is all incremental development.  What happened?

Roman legions had battlefield medics with capabilities (including amputation) and survival rates that were only re-reached in the First World War.  What happened in the nearly two millennia in between?

We still marvel at megalithic ruins, especially the ones with multi-tonne boulders and basically seamless joins.  We say we could do that now, if we wanted, but we don't.  We don't even know why they were built, and therefore claim they were built for religious reasons.  How does that actually make sense?

It boils down to human mobs being really shitty; the majority of humanity only looks for relative comfort and ease.  Bread and circuses.  It is a small minority that actually pushes technologies and advancements forward.  Among that minority, open source works just fine.  Making it work with the majority of humans is much more difficult; in my case, I've found that mixing in proprietary bits (enough to stop the stupid/annoying exploiters, but not hobble the true developer minority) works best.  It does not reflect negatively on open source per se; it's just that we have to work in the imperfect real world.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 1684
  • Country: ca
Re: Neat algorithms
« Reply #56 on: August 12, 2019, 01:37:04 am »
Don't blame FreeRTOS, blame humans who are satisfied with mediocre features and performance.

You cannot really blame people for preferring something free to something expensive.

Look at the car market. It's doing relatively well now: there are a number of manufacturers trying hard to make their cars better, so cars are getting better and less expensive.

Now imagine a new car were introduced which is free: you don't buy it, you just download it and drive. Even if it wasn't as good as the paid cars, many people would prefer it. Soon, you'd have lots of people driving free cars. How wonderful.

But this would destroy the mechanism which forces the car manufacturers to produce better and cheaper cars. So, they wouldn't. Many would go out of business. Ferrari would survive, but Ford wouldn't. Soon after, you would only have mediocre free cars and nothing else.

Capitalism is a mechanism which forces people to work. People work and produce things. Things get consumed. People are happy. If the mechanism is destroyed (as in the Soviet Union, or in the software market), people stop producing and everything goes down the drain.
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 4356
  • Country: gb
Re: Neat algorithms
« Reply #57 on: August 12, 2019, 03:36:37 am »
Thing is, open source is based on competition

Open source certainly participates in competition, and often wins because of its low price, even when better products are available. Then, once it becomes widespread, it is glorified by the mob, so that people believe it is actually the best.

A good example is FreeRTOS, which is fairly mediocre compared to other RTOSes that were on the market several years ago. By now, people believe FreeRTOS is the best, and the market for RTOSes has been destroyed entirely.

Meanwhile, FreeRTOS itself has been bought and repurposed by Amazon. I'm sure Amazon could have bought one of the better RTOSes, as the companies making RTOSes are certainly not in good shape and would definitely sell. But no. Amazon sees more value in popularity than in quality.
Many otherwise good tools in the embedded space - compilers, RTOSes, GUI libraries, etc. - have become a huge liability for people by suddenly disappearing from the general market when one player (e.g. a silicon vendor) bought them. I think the RTOS area has reached the tipping point where people have lost trust in things whose use someone else controls. Of course, the price of FreeRTOS doesn't hurt. :)
 

Online Nominal Animal

  • Frequent Contributor
  • **
  • Posts: 770
  • Country: fi
    • My home page and email address
Re: Neat algorithms
« Reply #58 on: August 12, 2019, 03:41:27 am »
You cannot really blame people for preferring something free to something expensive.
I can, if there is a significant difference in quality.  Here, we are talking about tools, whose quality will immediately affect the quality of the result.

I obviously understand the reason why hobbyists prefer free over nonfree; I'm a hobbyist myself.  But, when doing a paid work task, I do pay for proper tools.

But this would destroy the mechanism which forces the car manufacturers to produce better and cheaper cars.
The current mechanism which forces the car manufacturers to produce better and cheaper cars.

Fact is, all industries have gone through fundamental revolutions, changing business models every couple of generations, for a couple of centuries now.  The answer is not to freeze the business models, but to develop them so that they match the market, and so that the overall market delivers what the cultures involved want and need.

Let's say that what you outlined actually happened, and suddenly cars were a commodity instead of a luxury item.  How would that differ from how the internet came to be?
Would you have stopped the development of the internet just to protect the business models of telecom and cable TV companies?

The hard and fast fact is that companies are not human, and are not worth protecting.  It is the market, the competition, that keeps businesses healthy, and that is what we (whoever has the executive power) must enforce and direct so that overall, the market does what we want.

A major problem in current software choices is that ever since Microsoft gained dominance, humans have accepted that software is unreliable; that that is the nature of software.  There are a few niches (like medical devices) that are supposed to have different values, but experience (their failure rates and typical bugs) shows that is simply not true; they're just as crap as every other piece of software.  That attitude, that software is fundamentally unreliable, is pervasive throughout human computer use: we're basically programmed for that now.  Yet Microsoft's products are nonfree and cost money, even though better free alternatives exist.

More importantly, the shift even in free software is from reliability and robustness to ... I dunno, surface bling? marketing? catering to the lowest common denominator? Whatever the heck systemd is; all I know is that it is neither KISS nor UNIXy.  Fewer and fewer people are interested in OpenBSD or FreeBSD.  Crappy and buggy Linux distributions are the most popular; they don't need to be robust or reliable anymore.  Hell, they've brought back the "you need to reboot your computer now" idiom!

If the mechanism is destroyed (as in Soviet Union, or in software market) people stop producing and everything goes down the drain.
No; that happened in the Soviet Union because competition was eradicated.

In the software market, competition is weak because people accept crap from both commercial and free software.  There is no demand for anything better.  You claim it is because no-cost software saturated the market; I claim it is because people are easily satisfied with bread and circuses, even when something better is available.

Here in Finland, it is completely accepted that multi-million IT projects often simply "fail", producing no results.  They are blamed on some of the lower-level workers, the bosses get their bonuses and switch to new projects without any harm to their reputation.  This is accepted practice, and is also the reason why IT product quality in Finland has gone to crap.



Let me go back a couple of decades, to when I first started running Linux-based mail and file servers for a small department in a Finnish university.

This was the era of Sendmail, a buggy, insecure piece of shit that everyone was using, because it was what everyone was using.  Me, I've always had a paranoid streak, so after testing a few different options (with Exim, procmail, and qmail as the top three I chose from), I settled on qmail, because of its security approach.

At every single turn, I had to explain my choice to people who could not themselves explain why they used Sendmail and not something else.

The biggest issue I ever had with it was that it was too fast in processing internal mailing lists, confusing the main University Sendmail installation; limiting the rate at the Sendmail end fixed that.  After a couple of years, even the main Uni server admins came to accept it as a reliable, although "quirky", choice.

The exact same situation existed, and remains, with BIND.  Having become fed up with BIND security issues, I switched to djbdns (for a local NAT).

Fast forward to today.

Exim has the same architecture as Sendmail, and although it is much better security-wise, it is Swiss cheese compared to qmail.  Most people still use BIND, and simply live with the frequent security issues.  qmail is a tiny niche.  Reliability- and efficiency-wise, we have gone backwards, not forwards.  "Patch Tuesday" is a fact of life, and not a joke anymore.

What happened was that, combined with the increasing memory, bandwidth, and processing power, we reached the satisfy-the-humans level with crappy software.  It does not matter whether it is paid or no-cost, free or proprietary.

So yeah, I definitely blame us humans for having such low expectations and being so shortsighted.  Stupid, really.



Many otherwise good tools in the embedded space - compilers, RTOSes, GUI libraries, etc. - have become a huge liability for people by suddenly disappearing from the general market when one player (e.g. a silicon vendor) bought them.

I know nothing of the RTOS market, but single-vendor products freak the shit out of me.  I've been burned too many times.  I don't trust them at all.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 13262
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Neat algorithms
« Reply #59 on: August 12, 2019, 05:46:20 am »
A major problem in current software choices is that ever since Microsoft gained dominance, humans have accepted that software is unreliable; that that is the nature of software.  There are a few niches (like medical devices) that are supposed to have different values, but experience (their failure rates and typical bugs) shows that is simply not true; they're just as crap as every other piece of software.  That attitude, that software is fundamentally unreliable, is pervasive throughout human computer use: we're basically programmed for that now.  Yet Microsoft's products are nonfree and cost money, even though better free alternatives exist.

I like your systematic perspective, which leads me to question why Microsoft -- an agent, not a market -- deserves specific blame.

I would argue that:
1. Obviously, smaller software products have fewer bugs to begin with.
2. Nostalgia bias masks how buggy early software was.  (OS bugs have been around since day one; examples that come to mind are the first viruses, and other exploits on mainframe and time-share systems.  Many of these were arguably a conceptual failure: trusting that the users weren't trying to game the system.  That was a tenable assumption for the smaller labs or companies of the time, which enforced user accountability.)
3. There were fewer users to find bugs.  A "big hit" might've had 10k's of sales in the 70s or early 80s.
4. Microsoft rode the wave of technology through the 80s and 90s; they are certainly exemplar of large-scale corporate software production, but they aren't to blame for the wave itself.
5. And at that scale, the first three points get reversed, plus there is a necessary compromise between quality, development time, and development cost.  On the upside, you get a considerable reduction in unit cost, keeping products accessible to a wide user base despite the rising development costs.  (And, as it turns out, this can go all the way to zero, making FOSS a thing.)

Cheers!
Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 2385
  • Country: fr
Re: Neat algorithms
« Reply #60 on: August 12, 2019, 06:12:01 am »
Thing is, open source is based on competition

Open source certainly participates in competition, and often wins because of its low price, even when better products are available.

Competition normally leads to better products, or cheaper ones, or both.
You can't beat free, though. Once a given product is, or has become, free, competition is dead. The game is over. Sure, you can always sell BETTER products, but you'll never beat the infinite performance/cost ratio.

Of course, and fortunately, that's not entirely true. Direct costs are only part of the story. The total cost is what really matters. For a given commercial product, it may cost a lot less overall to select a good commercial RTOS rather than FreeRTOS. Problem is, too many managers still don't realize this.
 

Online Nominal Animal

  • Frequent Contributor
  • **
  • Posts: 770
  • Country: fi
    • My home page and email address
Re: Neat algorithms
« Reply #61 on: August 12, 2019, 06:54:38 am »
why Microsoft -- an agent, not a market -- deserves specific blame?
My opinion is biased by my own experiences, obviously.

My first foray into computing was on a Commodore C64 in the late 1980s, then a C128, then a Hyundai 80286 PC clone.  All of these had rather stable software, with the exception of tape issues on the C64 and C128 (I never had the disk drive for those), and that I never got SEUCK (Shoot-Em-Up Construction Kit) to work on my C128, even in C64 mode.  My first real Unix use was on DEC Alphas in 1994.  In 1998, I maintained a couple of classrooms full of Macs (most running Mac OS 7.5, with the full suite of Adobe Photoshop, PageMaker, and Macromedia Freehand, Director, and so on), a dozen or so Windows PCs, and a couple of Linux servers.  The Macs were the easiest to manage, as I cloned them off master templates, even though they were in use by students (without personal logins, so they had access to internals, which necessitated reinstalls rather often).

Before 1998, my experience was that aside from Windows stuff, computers and applications tended to work quite reliably, unless you ran out of memory.  (Mac OS was especially nice, as it allowed one to easily control the amount of memory an application was allowed to use.)

From 1992 onwards, I had been using Windows (3.1, then 95, then 98, and a bit of NT "server" stuff), and it never felt stable to me.  Compared to anything else I used, it was the rickety, unstable one.

In 1996-1998, when I started using Linux, it surprised me how some software was so much more reliable and stable than the rest; and that it was not a question of complexity, but of the people maintaining it.  I recall the times when Red Hat pushed a different compiler for the kernel than for userspace... and not fondly.
It became obvious to me that there were two camps of people: one pushed product out, and let users deal with the bugs that may or may not be fixed later.  The other, like Adobe and Macromedia, provided software that actually worked.  Interoperability was always a problem (up to programs crashing when trying to open files created with a competing program), but this was the era of the WordPerfect-Word wars, and Microsoft was mired up to its CEO in that as well.  Before that, one could expect interoperability to become better, not worse.

The final straw that broke my back wrt. Microsoft was how they "fixed" a bug in the Mac version of Microsoft Office.  The bug was that after installing Office, the computer would refuse to shut down.  (Obviously, some MS component was stalling the shutdown process.)  The "fix" was an additional system module that constantly consumed a megabyte of memory, in an era when 32 megabytes was a lot.  The fuckers couldn't even fix their own code, and instead added a constantly running system module to coddle the piece of shit into allowing the computer to shut down!

I myself never switched to Windows 2000 or ME or later versions, as I switched to Linux instead.  Early Mac OS X versions were interesting, and I did use some Macs sporadically since then, but I felt Linux fit my uses much better.  By 2005, I was using Linux exclusively.

So, to summarize: in my experience, shitty code comes from Microsoft and from people who believe Microsoft's tactics and business methods are acceptable;
and from people who simply don't, or didn't, see the need for software to be reliable and robust, like Eric Allman or Paul Vixie.

In the last twenty years or so, I've seen shitty code become more and more ubiquitous, and robust and reliable code rarer and rarer.
Is it just me?  Could be, but I don't think so.

The total cost is what really matters. For a given commercial product, it may cost a lot less overall to select a good commercial RTOS rather than FreeRTOS. Problem is, too many managers still don't realize this.
So true.  With FOSS, the immediate price may be zero, but there are lots of other costs; the most important thing being that being a "FOSS customer" means absolutely nothing, unlike when you have a commercial relationship.  As a FOSS user/"customer", you have zero leverage on the developers, unless you spend money, time, or other resources.  It definitely is not zero-cost.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 1684
  • Country: ca
Re: Neat algorithms
« Reply #62 on: August 12, 2019, 07:17:14 am »
Fact is, all industries have gone through a fundamental revolution changing business models every couple of generations for a couple of centuries now.

Capitalism is not a market model. It is a fundamental mechanism which guides the economy. Businesses which produce something useful receive money (which is a measure of their contribution to society, if you will), and then they can use the money to produce more. Such businesses get progressively bigger and bigger. Businesses which produce something nobody wants lose money and die. The death is key, as it frees resources for successful businesses. This mechanism forces everyone to do something useful.

When the mechanism is impeded, it works worse. It is even possible to destroy it completely (as in the Soviet Union).

With software, capitalism worked great at the beginning - there were lots of businesses of various sizes producing commercial software.  But then free software came to fruition, the mechanism meticulously eliminated all the non-free businesses, and there's nothing left (setting aside games, accounting, and a few other niches which haven't been penetrated by free software yet). Since capitalism cannot work with things lacking monetary value, there's no software market at all. In such conditions, people simply have to accept whatever they're given, because there's no choice. I don't think this has any relation to Microsoft.

As to the mail servers: I used Sendmail for a while. Then I wrote my own mail server for my own use, long ago. It works great. But only for me. If I decided to sell it, nobody would buy it, because there are plenty of free mail servers everywhere - no market for commercial mail servers. So, you're stuck with Exim, whether you accept it or not ;)
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 1684
  • Country: ca
Re: Neat algorithms
« Reply #63 on: August 12, 2019, 07:24:54 am »
In the last twenty years or so, I've seen shitty code become more and more ubiquitous, and robust and reliable code rarer and rarer.
Is it just me?  Could be, but I don't think so.

That has been my experience too. However, I attribute this to the way people write software now. Programmers of older days are all but extinct.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: nz
Re: Neat algorithms
« Reply #64 on: August 12, 2019, 09:48:08 am »
In the last twenty years or so, I've seen shitty code become more and more ubiquitous, and robust and reliable code rarer and rarer.
Is it just me?  Could be, but I don't think so.

That has been my experience too. However, I attribute this to the way people write software now. Programmers of older days are all but extinct.

Where software needs to be reliable, and people pay for it to be reliable, it is reliable. I've been running complex equipment in production for 5+ years, 24x7, without any downtime.

I've also had health customers say "availability of our application is critical to us, but I won't pay the $50k needed for licensing a shared-nothing clustered SQL database". I suspect they apply the same process to paying for software QA engineers.
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Online Nominal Animal

  • Frequent Contributor
  • **
  • Posts: 770
  • Country: fi
    • My home page and email address
Re: Neat algorithms
« Reply #65 on: August 12, 2019, 10:09:55 am »
Fact is, all industries have gone through a fundamental revolution changing business models every couple of generations for a couple of centuries now.
Capitalism is not a market model. It is a fundamental mechanism which guides the economy.
No argument there.  However, you seem to assert that there is only one software business model in capitalism, and I'm saying that due to changes in technology (various costs), business models must change and adapt in capitalistic systems to survive.  I base that on history, and on the fact that long-lived tech companies have changed their business tactics every couple of generations.

There are many more ways to do business in software than packaging it for sale on store shelves.

Where software needs to be reliable, and people pay for it to be reliable, then it is reliable. I've been running complex equipment in production for 5+ years, 24x7, without any downtime.
Are you sure you haven't simply paid for enough hardware to cover the cases where the software shits on you, without it being visible to your end users?
Because that is the only way I know how to get arbitrary stuff done right now without writing, or configuring and fixing, the software myself.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 1953
  • Country: nz
Re: Neat algorithms
« Reply #66 on: August 12, 2019, 10:55:43 am »
Quote from: Nominal Animal
Are you sure you haven't simply paid for enough hardware to cover the cases where the software shits on you, without it being visible to your end users?
Because that is the only way I know how to get arbitrary stuff done right now without writing, or configuring and fixing, the software myself.

As the person who fixes the broken stuff (or escorts vendor engineers in to fix broken stuff) I am sure that given enough time all hardware shits on you... but no amount of perfect hardware can make up for unreliable software.

But software & systems can make up for the failings of hardware - and yes, that usually involves redundancy. But you need the redundancy to do maintenance anyway...
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 1684
  • Country: ca
Re: Neat algorithms
« Reply #67 on: August 12, 2019, 12:58:04 pm »
There are many more ways to do business in software than package them for sale on store shelves.

Sure. Such as "software as a service". Microsoft doesn't really sell software. They simply collect money. When I buy a laptop and install Linux on it, Microsoft still gets paid. You wouldn't say that they sold software to me, would you?
 

Online BravoV

  • Super Contributor
  • ***
  • Posts: 5872
  • Country: 00
Re: Neat algorithms
« Reply #68 on: August 12, 2019, 01:27:23 pm »
Can we get back on track with the algorithms only, please...

Offline legacy

  • Super Contributor
  • ***
  • Posts: 3918
  • Country: ch
Re: Neat algorithms
« Reply #69 on: August 17, 2019, 01:47:42 am »
Code: [Select]
opening btree file [mydata.bin] .. done
> h
integer followed by "I" to Insert
integer followed by "D" to Delete
integer followed by "S" to Search { found, not found }
integer followed by "A" to Search* { found, closest left-value }
"." to show the tree
"Q" to quit
> 98 99 100 d
item 98 99 100 have been deleted
> 100 a
Search* (100):
{{ 253 752 1003 1254 1505 1756 2007 2258 2509 2760 3011 3262 3513 }
{ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 101
102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118
119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135
136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152
153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169
170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186
187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203
204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220
221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237
238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 } }

Not found, the closest left-value is 97
>

This algorithm is a B-tree (not a B+tree). After building the B-tree by entering integers on the keyboard, or by supplying them in a script (which is what I did, creating items from 0 to 3999), the user can update it interactively by inserting or deleting individual items. The user can also search the B-tree for a given item.

The app writes, reads, and updates nodes on disk using a binary file.

But, the real challenge was ... how to find the "closest" item when it doesn't exist.

Code: [Select]
Search* (100):

... 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 .. .. .. 101 102 103 ...

Not found, the closest left-value is 97
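The B-tree source itself isn't shown here, so purely as an illustration, here is the closest-left-value idea sketched on a plain sorted list using Python's bisect. Inside a B-tree the same logic applies per node: binary search within the node's keys, descend on a miss, and remember the largest key seen so far that is smaller than the target.

```python
import bisect

def search_closest_left(keys, target):
    """Return (found, value): an exact match, or the closest key to the left.

    `keys` must be sorted, mirroring the ordered keys of a B-tree.
    """
    i = bisect.bisect_left(keys, target)     # first index with keys[i] >= target
    if i < len(keys) and keys[i] == target:
        return True, target                  # exact hit
    if i == 0:
        return False, None                   # nothing to the left of target
    return False, keys[i - 1]                # closest left-value

# Reproduce the session above: items 0..3999 with 98, 99, 100 deleted.
keys = [k for k in range(4000) if k not in (98, 99, 100)]
print(search_closest_left(keys, 100))        # (False, 97), as in the transcript
```

The edge cases matter: a target smaller than every key has no left neighbour (hence the `None`), and a target larger than every key falls through to the last key.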
 

Online Nominal Animal

  • Frequent Contributor
  • **
  • Posts: 770
  • Country: fi
    • My home page and email address
Re: Neat algorithms
« Reply #70 on: August 17, 2019, 02:11:42 am »
But, the real challenge was ... how to find the "closest" item when it doesn't exist.
Situations like that are surprisingly common.  The idea/algorithm itself may be easy to implement, but getting the engineering right, so it works for practical, imperfect data, takes a surprising amount of effort.

(Also, one may optimize the "hit" case, only to find out that half or more of the actual cases encountered are those that "miss", and the chosen algorithm works terribly for those.  It is too common to just write a comment saying "fix this later, I'm too busy right now", when the proper answer is to rewrite the entire section.  Error checking is just a small part of that, too; you might wish to verify that your implementation works if you search for a value smaller or greater than any already in the tree.)

I personally would also consider adding a command that saves the current tree in Graphviz Dot format.  If the values are unique, you can use them as the node keys; otherwise you need to synthesize an identifier for each value.  I've found that exploring (even pseudorandom) data and looking at what kind of trees it generates really helps in getting a good intuitive picture of the algorithm itself.  (Especially when you can compare different tree structures or algorithms for the same data.)
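To illustrate the Dot suggestion: a minimal sketch (my own, not taken from the B-tree code above) that renders a small binary tree with unique integer keys as Graphviz DOT, using the keys themselves as node ids, exactly as described. A real B-tree node would emit one record-shaped node per multi-key page, but the traversal is the same.

```python
def tree_to_dot(node) -> str:
    """Render a binary tree of unique integer keys as Graphviz DOT.

    `node` is a (key, left, right) tuple or None; keys serve as node ids.
    """
    lines = ["digraph tree {", "    node [shape=circle];"]

    def walk(n):
        if n is None:
            return
        key, left, right = n
        lines.append(f"    {key};")
        for child in (left, right):
            if child is not None:
                lines.append(f"    {key} -> {child[0]};")
                walk(child)

    walk(node)
    lines.append("}")
    return "\n".join(lines)

# A tiny tree:   2
#               / \
#              1   3
print(tree_to_dot((2, (1, None, None), (3, None, None))))
```

Pipe the output through `dot -Tpng` to get an image; comparing the pictures for the same data under different insertion orders makes the balancing behaviour obvious at a glance.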
 

