Author Topic: Machine Learning Algorithms  (Read 25171 times)


Offline ralphrmartin

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: gb
    • Me
Re: Machine Learning Algorithms
« Reply #50 on: November 16, 2021, 04:31:10 pm »
If you can predict the 1.3% of cases in which it will fail, then that would be very acceptable - since we could just ignore/discount the result. (E.g. if it doesn't work the 1.3% of the time the temperature is below -5C, then we wouldn't use it in cold weather)

Would you be content if the 1.3% resulted in you being seriously injured or locked up in jail?

That's not how things work in real life. If we knew which of the one in a million (or whatever the fraction actually is) flights is the one that is going to crash, we would not get on it. Instead, we take a flight knowing that the risk of a crash is small. Asking for a system with no unpredictable failures is unrealistic. They can occur due to programming errors (even if the algorithm is well characterised), hardware failures, cosmic rays, operator error, and so on.


Fitting a hypothesis to previous observations is not science. (E.g. "gold is a good investment because it went up 50% last week" is an argument that only charlatans would use!)

Fitting a hypothesis to previous observations and then using the hypothesis to make falsifiable predictions is science.

Again, you misunderstand how deep learning is done. When building a deep learning system, fitting the hypothesis to the previous observations is called training. Using the hypothesis to make falsifiable predictions on different data is then called verification. The aim of doing so is to ensure the model generalises to new, unseen data. Both of your steps are used. Clearly, there needs to be a lot of care taken in ensuring these data sets are independent, are representative of the real uses to which the model is put and so on, and these are not easy. But there is nothing fundamentally different or unscientific about deep learning.
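
In code, the two steps look something like this - a minimal sketch on synthetic data with scikit-learn; the model, dataset and parameters are purely illustrative, not any real system:

Code: [Select]
# Minimal sketch of training vs. verification; dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# "Previous observations": synthetic stand-in data
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Keep the two data sets independent
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Training: fit the hypothesis to previous observations
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Verification: falsifiable predictions on data the model has never seen
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))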

You don't like deep learning as you "don't understand what is inside the black box". But why should we trust an inverse square law for electrostatics? We do not really "understand" why nature should apparently follow a simple mathematical rule in this case, nor where and how that rule may break down. People trusted Newtonian mechanics until relativity showed it to be a poor description in some cases.

Anyway, let me finish this discussion with the observation that you are unlikely to see me in a self driving car in the near future. Given the current success rate of deep learning systems on much simpler, and less safety critical, computer vision systems, and that driving is a much more complex problem with many "unknown unknowns", I do not believe they are likely to reach an acceptable (to me) success rate in the near future, except in very controlled conditions. This is not because the method is flawed per se, but because the problem is too difficult.
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #51 on: November 16, 2021, 04:53:44 pm »
It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.

...and saved countless others.  Nobody records what would have happened but didn't, so the data is skewed.

The explanation that I heard for Boeing's recent debacles was that they trusted outside contractors too much. They used to know what they were doing and did it themselves, so it came out right. Now, they contract out important stuff to people who don't have a clue but are much cheaper, and neglect to tell them ALL of the requirements because they're used to those being common knowledge. Turns out they aren't, and they end up with a software product that has an input for a redundant sensor but doesn't actually use it...

The biggest hindrance to widespread automation is NOT the engineering.  It's the short-sighted idiot bean-counters that routinely take over engineering and screw it up.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19509
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #52 on: November 16, 2021, 05:02:41 pm »
If you can predict the 1.3% of cases in which it will fail, then that would be very acceptable - since we could just ignore/discount the result. (E.g. if it doesn't work the 1.3% of the time the temperature is below -5C, then we wouldn't use it in cold weather)

Would you be content if the 1.3% resulted in you being seriously injured or locked up in jail?

That's not how things work in real life. If we knew which of the one in a million (or whatever the fraction actually is) flights is the one that is going to crash, we would not get on it.

I'm well aware of that!

Quote
Instead, we take a flight knowing that the risk of a crash is small. Asking for a system with no unpredictable failures is unrealistic. They can occur due to programming errors (even if the algorithm is well characterised), hardware failures, cosmic rays, operator error, and so on.

You continue to miss the point.

It is unreasonable to base a safety critical system on a technology and implementation that is not subject to inspection, understanding, and validation.

Quote
Fitting a hypothesis to previous observations is not science. (E.g. "gold is a good investment because it went up 50% last week" is an argument that only charlatans would use!)

Fitting a hypothesis to previous observations and then using the hypothesis to make falsifiable predictions is science.

Again, you misunderstand how deep learning is done. When building a deep learning system, fitting the hypothesis to the previous observations is called training. Using the hypothesis to make falsifiable predictions on different data is then called verification. The aim of doing so is to ensure the model generalises to new, unseen data. Both of your steps are used. Clearly, there needs to be a lot of care taken in ensuring these data sets are independent, are representative of the real uses to which the model is put and so on, and these are not easy. But there is nothing fundamentally different or unscientific about deep learning.

There's an old engineering maxim that young software developers seem to be unable to comprehend: "you can't test quality into a product (it has to be designed in)". Verification is merely another name for testing.

Quote
You don't like deep learning as you "don't understand what is inside the black box". But why should we trust an inverse square law for electrostatics? We do not really "understand" why nature should apparently follow a simple mathematical rule in this case, nor where and how that rule may break down. People trusted Newtonian mechanics until relativity showed it to be a poor description in some cases.

No, I don't dislike it for that reason. I dislike it because nobody, not even the designers, can understand it.

Quote
Anyway, let me finish this discussion with the observation that you are unlikely to see me in a self driving car in the near future. Given the current success rate of deep learning systems on much simpler, and less safety critical, computer vision systems, and that driving is a much more complex problem with many "unknown unknowns", I do not believe they are likely to reach an acceptable (to me) success rate in the near future, except in very controlled conditions. This is not because the method is flawed per se, but because the problem is too difficult.

Quite, although you might end up on top of somebody else's self-driving car :) The problem is that ML is being applied to safety critical systems, regardless of the lack of suitability. And I include "medical diagnosis" and "court sentencing" as safety critical systems.

The problems I have noted are common to all ML systems. A reputable non-alarmist technophile organisation (the IEEE) has a decent short introductory article on ML problems at https://spectrum.ieee.org/ai-failures
  • Brittleness
  • Embedded Bias
  • Catastrophic Forgetting (particularly relevant to your verification contentions)
  • Explainability
  • Quantifying Uncertainty
  • Common Sense
  • Math
Now I'll concede that in humans "common sense isn't", and that maths isn't necessarily a problem.

If you don't think the other problems are important or real, I'd be interested to hear your reasoning.


« Last Edit: November 16, 2021, 05:16:46 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19509
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #53 on: November 16, 2021, 05:11:53 pm »
It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.

...and saved countless others.  Nobody records what would have happened but didn't, so the data is skewed.

That is the standard contention, and there is some validity to it. But it isn't completely clear-cut.

It definitely isn't obvious in medical diagnosis and court sentencing ML applications.

Quote
The explanation that I heard for Boeing's recent debacles was that they trusted outside contractors too much. They used to know what they were doing and did it themselves, so it came out right. Now, they contract out important stuff to people who don't have a clue but are much cheaper, and neglect to tell them ALL of the requirements because they're used to those being common knowledge. Turns out they aren't, and they end up with a software product that has an input for a redundant sensor but doesn't actually use it...

The biggest hindrance to widespread automation is NOT the engineering.  It's the short-sighted idiot bean-counters that routinely take over engineering and screw it up.

The first paragraph is irrelevant, even if true. N.B. the brown stuff has hit the fan and will land everywhere. One Boeing employee (Mark A. Forkner) has already been indicted.

The second paragraph might be valid somewhere, but unfortunately not on Planet Earth. That's the way things work here :(

Do have a look at the examples in https://spectrum.ieee.org/ai-failures
« Last Edit: November 16, 2021, 05:20:11 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14476
  • Country: fr
Re: Machine Learning Algorithms
« Reply #54 on: November 16, 2021, 05:25:23 pm »
It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.

...and saved countless others.  Nobody records what would have happened but didn't, so the data is skewed.

In cases where the root cause is automation, you're right. But the fact that pilots at some point don't understand what the automation is doing, and thus don't know how to take corrective actions that COULD have been taken had the automation's behaviour been clearer, is still a major issue and can be seen in many of these cases, including of course the Boeing debacle. Had the pilots understood what the automation was doing, the planes would never have crashed.

It's not about getting the perfect tools, it's about getting decent tools that their users know well.

And, OTOH, there are of course a number of crashes not caused by automation at all, but by various hardware failures, for instance. In those cases we have ample proof of how pilots can react, and of how some are able to land safely with a severely damaged plane. So we definitely know that humans can react to completely unexpected events in a much better way than any machine could.

To me, the Boeing issue is very telling. Sure, we can say that it's a huge design mistake. But that will happen again. No design process is perfect, and even though it's kind of easy in this example to assign blame, there are cases for which it's a lot less so. Critical systems must always be designed so that they are resilient. That includes the obvious redundancy, which was largely missing in Boeing's case, and enabling users to take corrective actions.

And it's a good thing here that the software used for the MCAS was infinitely simpler in itself, and easier to understand, than any AI-based stuff. So we could at least determine what the problem was, and fix it. If we can't analyze why a given system fails, we can never fix it. We can only run in circles like flies, frantically retraining NNs until we seem to get an even better success rate than the previous version out of larger/seemingly "better" training datasets, and crossing our fingers. That's an odd way of approaching safety and correctness.

Also, pure statistics are great for some things, and less useful for others. I gave the fun hammer example earlier. But here is, IMO, an interesting question.
Say we have one fully automated system for which extensive tests have shown a correct behavior rate of 99%. Now say that an equivalent approach with a less automated system and more human control is estimated to have a rate of 98%. Which one are you going to feel safer with? Which one seems best for long-term use? Which one is easier to fix or improve? There are underlying questions that are a lot more complex than they might seem.

And accountability is also a major point here IMO. No, it's not per se about "who to put the blame on" so we can get some feeling of revenge and move on. Accountability is there to give a strong incentive both to limit errors before they happen and to fix errors when they do happen. Without accountability, there is exactly ZERO incentive to fix/improve anything, except maybe for marketing reasons. "Look, my autonomous plane has a 0.1% probability of crashing, yours has 0.2%! Buy mine!" So lack of accountability = designing things to the minimum level of safety possible and putting profitability before safety.

« Last Edit: November 16, 2021, 05:27:19 pm by SiliconWizard »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19509
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #55 on: November 16, 2021, 05:45:33 pm »
It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.

...and saved countless others.  Nobody records what would have happened but didn't, so the data is skewed.

In cases where the root cause is automation, you're right. But the fact that pilots at some point don't understand what the automation is doing, and thus don't know how to take corrective actions that COULD have been taken had the automation's behaviour been clearer, is still a major issue and can be seen in many of these cases, including of course the Boeing debacle. Had the pilots understood what the automation was doing, the planes would never have crashed.

It's not about getting the perfect tools, it's about getting decent tools that their users know well.

And, OTOH, there are of course a number of crashes not caused by automation at all, but by various hardware failures, for instance. In those cases we have ample proof of how pilots can react, and of how some are able to land safely with a severely damaged plane. So we definitely know that humans can react to completely unexpected events in a much better way than any machine could.

Precisely, on all counts.

A personal example is that I've safely stopped a car after a wheel fell off when overtaking. I wonder what a Tesla would have done?

My favourite two aircraft examples are the safe landings of a B-52 missing most of its vertical stabiliser and an F-15 missing a wing. They are easy to locate, and there are videos of the latter. Plus, of course, there is the stunning UA232, which lost all hydraulics and hence all control surfaces; they even made a (poor) movie about that one.



Your points below are also valid.

Quote
To me, the Boeing issue is very telling. Sure, we can say that it's a huge design mistake. But that will happen again. No design process is perfect, and even though it's kind of easy in this example to assign blame, there are cases for which it's a lot less so. Critical systems must always be designed so that they are resilient. That includes the obvious redundancy, which was largely missing in Boeing's case, and enabling users to take corrective actions.

And it's a good thing here that the software used for the MCAS was infinitely simpler in itself, and easier to understand, than any AI-based stuff. So we could at least determine what the problem was, and fix it. If we can't analyze why a given system fails, we can never fix it. We can only run in circles like flies, frantically retraining NNs until we seem to get an even better success rate than the previous version out of larger/seemingly "better" training datasets, and crossing our fingers. That's an odd way of approaching safety and correctness.

Also, pure statistics are great for some things, and less useful for others. I gave the fun hammer example earlier. But here is, IMO, an interesting question.
Say we have one fully automated system for which extensive tests have shown a correct behavior rate of 99%. Now say that an equivalent approach with a less automated system and more human control is estimated to have a rate of 98%. Which one are you going to feel safer with? Which one seems best for long-term use? Which one is easier to fix or improve? There are underlying questions that are a lot more complex than they might seem.

And accountability is also a major point here IMO. No, it's not per se about "who to put the blame on" so we can get some feeling of revenge and move on. Accountability is there to give a strong incentive both to limit errors before they happen and to fix errors when they do happen. Without accountability, there is exactly ZERO incentive to fix/improve anything, except maybe for marketing reasons. "Look, my autonomous plane has a 0.1% probability of crashing, yours has 0.2%! Buy mine!" So lack of accountability = designing things to the minimum level of safety possible and putting profitability before safety.
« Last Edit: November 16, 2021, 05:47:18 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Machine Learning Algorithms
« Reply #56 on: November 16, 2021, 05:54:36 pm »
The root cause of "I don't actually know how to fly at all so I crash" accidents (which are indeed very numerous - a typical example being pulling the nose up in panic when the stick shaker activates to indicate a stall) is not the addition of automation, but the vast increase in flying, and especially cheap flights. Specifically, in the early 2000s the problem was sudden and huge: airlines just needed to hire whomever they could - no need for exceptional skills, no need for any ambition for flying. And no money, no time for thorough training!

Almost overnight, "human related accidents" changed from mishaps caused by a very skilled but unquestioned hero captain, where a skilled F.O. would have been able to prevent the crash but couldn't question the captain, into a completely new genre where there are two pilots in the cockpit, neither of whom has any idea how to fly or what to do in completely normal situations.

Automation can be blamed, though, because it was the enabler for this. These crap pilots kind of learn how to fly, but without automation they would cause a much larger number of accidents - to the point that no one would dare to fly; it would just be impractical. So enter automation; as it stands, these pilots only cause an accident when the automation decides to let the pilot handle the situation for whatever reason, or disables automated safety features (due to sensor malfunction, for example).

Tesla Autopilot is similar. Give it to a drunk idiot and it will easily save lives by driving better, more reliably, and more predictably than said drunk idiot. But the comparison is moot. We shouldn't let drunk idiots drive to begin with.
« Last Edit: November 16, 2021, 05:57:42 pm by Siwastaja »
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #57 on: November 16, 2021, 07:20:58 pm »
The root cause of "I don't actually know how to fly at all so I crash" accidents (which are indeed very numerous - a typical example being pulling the nose up in panic when the stick shaker activates to indicate a stall) is not the addition of automation, but the vast increase in flying, and especially cheap flights. Specifically, in the early 2000s the problem was sudden and huge: airlines just needed to hire whomever they could - no need for exceptional skills, no need for any ambition for flying. And no money, no time for thorough training!

Almost overnight, "human related accidents" changed from mishaps caused by a very skilled but unquestioned hero captain, where a skilled F.O. would have been able to prevent the crash but couldn't question the captain, into a completely new genre where there are two pilots in the cockpit, neither of whom has any idea how to fly or what to do in completely normal situations.

Automation can be blamed, though, because it was the enabler for this. These crap pilots kind of learn how to fly, but without automation they would cause a much larger number of accidents - to the point that no one would dare to fly; it would just be impractical. So enter automation; as it stands, these pilots only cause an accident when the automation decides to let the pilot handle the situation for whatever reason, or disables automated safety features (due to sensor malfunction, for example).

Tesla Autopilot is similar. Give it to a drunk idiot and it will easily save lives by driving better, more reliably, and more predictably than said drunk idiot. But the comparison is moot. We shouldn't let drunk idiots drive to begin with.

But they will anyway.  So it's better to get them home without their involvement.  A car that has no facilities at all for a human to micro-manage it would be wonderful in that sense.  (and opens up a different can of worms in another)

And the bean counters will continue to skimp on whatever they can, including training.  So the equivalent of a "drunk pilot" will continue to exist as well.



In addition to a bunch of cheap flights, there's a shortage of skilled pilots to start with, because that generation is in the process of retiring now, and the new generation just isn't interested.  It's too expensive to meet the legal standard, a lot of which comes out of their own pockets *in hopes* of getting hired somewhere.  And it's not inherently exciting anymore, like it was a generation ago.  So the financially prudent ones that aren't independently wealthy, do something else that's a lot less risky.

So the shortage of skill continues, which provides the motivation to automate.  Change the law, not to make it easier to be allowed to hand-fly with commercial passengers, but to apply (at least) the same standard of reliability to an automated system.  Possibly more.  Certify the aircraft with the automation in place, as an integral part of the aircraft and as part of the certification, in a larger system that allows a swarm of them to operate with no human control whatsoever.  (a lot of that system already exists, in various forms of pilot assistance)  The entire process is designed to have that level of gate-to-gate reliability (or driveway-to-driveway?) as part of the certification itself.  The younger generations that only care about getting from A to B safely, *regardless of how it's done*, will get their wish.

"Automated aircraft" is not a new concept; the technology has existed for a couple of decades already to do it, and there's been a lot *more* serious engineering since then.  The real problem is convincing the old-generation bureaucrats who are cognitively rigid in the old *must be human!* dogma, and don't want to make themselves irrelevant, to allow it to an extent that actually *works*.
(When these regulators were still mentally plastic, we DIDN'T have machines that could do this, and so the dogma was well founded.  Not anymore.)

Because of the automation paradox, partial solutions tend to be worse than either extreme, so it's unfair to tentatively mix in just a little bit and then kill the project because the approach itself set it up to fail.  Automated cars for another example: In a system that actually realizes the practical benefits of that (bumper to bumper at Mach 0.5; entry, exit, and flat interchanges at that speed; etc.), even one human that insists on manual control is going to cause the biggest pileup in history.  Ruthlessly forbid manual control in such a system, and it all works smoothly.

(I remember reading a sci-fi "slice-of-life" story about a car salesperson, where the justification for the story was that an old internal-combustion truck that depended on a still in the owner's backyard, had just become illegal to drive to market because the highway in between became "automated only", and it was physically blocked from entering.  Newer vehicles would automatically disable the manual controls when passing that point.  The rest of the story was the process of selling a modern vehicle to this luddite while addressing their concerns.  I think that author has the right understanding.)
« Last Edit: November 16, 2021, 07:26:06 pm by AaronD »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19509
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #58 on: November 16, 2021, 07:44:24 pm »
Because of the automation paradox, partial solutions tend to be worse than either extreme,

Precisely right. The boundary and handover is a real problem.

Quote
so it's unfair to tentatively mix in just a little bit and then kill the project because the approach itself set it up to fail.

Not quite.

If a "little bit mixed in" is all that is done, then it should be killed. Either do something that can be proved to work properly, or don't do it.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Smokey

  • Super Contributor
  • ***
  • Posts: 2591
  • Country: us
  • Not An Expert
Re: Machine Learning Algorithms
« Reply #59 on: November 16, 2021, 09:17:40 pm »
Same things can be said about humans.  But this wouldn't answer the question from the OP.  There are places where ML fits and other places where it doesn't.

Key differences: the human can explain why they made a decision.

Sorta, maybe, kinda, not really.  At least not reliably......
https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19509
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #60 on: November 16, 2021, 09:41:23 pm »
Same things can be said about humans.  But this wouldn't answer the question from the OP.  There are places where ML fits and other places where it doesn't.

Key differences: the human can explain why they made a decision.

Sorta, maybe, kinda, not really.  At least not reliably......
https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow

It is regarded as poor that the "worst" humans struggle to explain their reasoning.
Even the "best" ML systems can't begin to explain their reasoning.
Spot the difference!

The whole article https://spectrum.ieee.org/ai-failures is well worth reading, since it includes many pertinent examples. However here's the bit on "explainability", with my emphasis...

Quote
Why does an AI suspect a person might be a criminal or have cancer? The explanation for this and other high-stakes predictions can have many legal, medical, and other consequences. The way in which AIs reach conclusions has long been considered a mysterious black box, leading to many attempts to devise ways to explain AIs' inner workings. "However, my recent work suggests the field of explainability is getting somewhat stuck," says Auburn's Nguyen.

Nguyen and his colleagues investigated seven different techniques that researchers have developed to attribute explanations for AI decisions—for instance, what makes an image of a matchstick a matchstick? Is it the flame or the wooden stick? They discovered that many of these methods "are quite unstable," Nguyen says. "They can give you different explanations every time."

In addition, while one attribution method might work on one set of neural networks, "it might fail completely on another set," Nguyen adds. The future of explainability may involve building databases of correct explanations, Nguyen says. Attribution methods can then go to such knowledge bases "and search for facts that might explain decisions," he says.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Machine Learning Algorithms
« Reply #61 on: November 17, 2021, 10:02:50 pm »
The CNN algorithm is perfectly capable of producing a humanly acceptable explanation. For example, in the judicial system, the algorithm can produce an explanation similar to: "here are the 10 most similar cases. In 9 of these 10 cases there was a death sentence, so I recommend the death sentence as well". Such an explanation is actually very similar to what a judge might say - the first thing the judge would look at is rulings in similar cases. And, similar to the judge, the software may be corrupted (by hackers, bugs, or whatnot), and may be made to disregard some relevant cases.
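
For illustration, a toy sketch of that "explain by similar cases" idea, using plain nearest-neighbour search in scikit-learn; the stand-in data and all parameters are invented, not any real system:

Code: [Select]
# Toy sketch of "here are the 10 most similar cases" as an explanation.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
cases = rng.random((1000, 8))            # stand-in encodings of past cases
rulings = rng.integers(0, 2, 1000)       # 0/1 stand-in for past rulings

nn = NearestNeighbors(n_neighbors=10).fit(cases)
new_case = rng.random((1, 8))
_, idx = nn.kneighbors(new_case)         # indices of the 10 most similar cases

similar_rulings = rulings[idx[0]]
print(f"In {similar_rulings.sum()} of the 10 most similar cases the ruling was 1;")
print(f"recommended ruling: {int(similar_rulings.sum() > 5)}")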

CNN is not really self-learning. Neural networks are. Here the problem is that a relatively large neural network may get its own agenda and decide not to pursue the goals set by humans, or may even work against humans. This will be a real horror, although humans will probably not see it until it's too late.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19509
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #62 on: November 17, 2021, 10:15:00 pm »
The CNN algorithm is perfectly capable of producing a humanly acceptable explanation. For example, in the judicial system, the algorithm can produce an explanation similar to: "here are the 10 most similar cases. In 9 of these 10 cases there was a death sentence, so I recommend the death sentence as well". Such an explanation is actually very similar to what a judge might say - the first thing the judge would look at is rulings in similar cases. And, similar to the judge, the software may be corrupted (by hackers, bugs, or whatnot), and may be made to disregard some relevant cases.

CNN is not really self-learning. Neural networks are. Here the problem is that a relatively large neural network may get its own agenda and decide not to pursue the goals set by humans, or may even work against humans. This will be a real horror, although humans will probably not see it until it's too late.

If it isn't a neural network, then I presume it is a forward/backward-chaining expert system of the kind fashionable in the 80s. Where rules are explicitly coded, yes, of course it can give an explanation.

Unfortunately neural nets are descendants of Igor Aleksander's WISARD. That distinguished well between cars and tanks in the lab, but failed dismally in the field. Eventually they realised it had trained itself to distinguish between cloudy and sunny days. It is said colleagues then refused to acknowledge Aleksander's presence on sunny days :)

Yes, that kind of problem is being rediscovered by today's youngsters. Yawn.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #63 on: November 17, 2021, 10:22:25 pm »
I wonder if the best way to produce a human-equivalent explanation is to organize and train the AI as if it were human?  Use a layered approach, learning basic concepts first, none of which are the end goal, and reinforcing them into oblivion in random situations with all of the related inconveniences, then slightly more advanced but still quite simple concepts, etc., with each step building on the capabilities of the previous one.  Then, after a similar time to what it takes a human to fully mature (a big disqualifier in today's instant world), the AI will make similar decisions and be able to explain them in terms of what each layer came up with.

And is that really what WE call "explanation" too?  Just a "debug spew" of what each hierarchical layer had for an answer?  If one of them is determined to be wrong, then that's training data for that layer, but still no explanation for how it got what it did from what the previous layer gave it.  I think that applies surprisingly well to *us* too.
 

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 9018
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: Machine Learning Algorithms
« Reply #64 on: November 18, 2021, 12:47:15 am »
Quick overview of three types of machine learning:
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19509
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #65 on: November 18, 2021, 01:00:03 am »
I wonder if the best way to produce a human-equivalent explanation is to organize and train the AI as if it were human?  Use a layered approach, learning basic concepts first, none of which are the end goal, and reinforcing them into oblivion in random situations with all of the related inconveniences, then slightly more advanced but still quite simple concepts, etc., with each step building on the capabilities of the previous one.

Congratulations.

You've just reinvented the approach used in expert systems in the 80s :) There are even languages for those techniques. Search terms: forward chaining, backward chaining, Horn clauses.
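
For a flavour of it, here's a toy forward-chaining loop (rules and facts invented purely for illustration; real 80s systems were of course far richer):

Code: [Select]
# Toy forward chaining: keep firing rules until no new facts appear.
rules = [
    ({"has_wings", "has_engine"}, "aircraft"),
    ({"aircraft", "no_pilot"}, "drone"),
]
facts = {"has_wings", "has_engine", "no_pilot"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # a rule fires: assert its conclusion
            changed = True

print(facts)  # now also contains "aircraft" and "drone"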

(BTW, welcome to the Triumphant Re-inventors club  :) We all do that from time to time; I did it with FSMs and microcoding).
« Last Edit: November 18, 2021, 01:10:43 am by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14476
  • Country: fr
Re: Machine Learning Algorithms
« Reply #66 on: November 18, 2021, 01:48:12 am »
Yep. The techniques and the knowledge we have about them haven't changed all that much, actually. What has changed is the technology - the computational power at our disposal, which now makes some approaches that were once impractical usable.

There definitely are hybrid approaches too - which unfortunately mostly stay in academic circles, probably because they are not hyped enough. One common hybrid approach is to have a "good old" rule-based system coupled to a NN, either to determine the rules themselves, or to adjust/improve them as the system is being used. I rather like this approach. The rules themselves are then perfectly understandable. They can be fully automatically derived from training data as well, but it's also possible to verify them and hand-modify the ones that appear to be bogus.
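
As a rough illustration of the "derive the rules from data" half, a minimal sketch using a decision tree as a stand-in for the rule-learning stage; the dataset and tree depth are arbitrary choices:

Code: [Select]
# Sketch: learn human-readable rules automatically, then inspect/edit them.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned rules are plain threshold tests: verifiable and hand-modifiable.
print(export_text(tree, feature_names=["sep_len", "sep_wid", "pet_len", "pet_wid"]))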

The hype about current AI (which is definitely not what all AI is about, either) reminds me a bit of the hype around fuzzy logic a few decades ago. Manufacturers started shoving fuzzy logic everywhere, even when a PID would have worked at least as well. The hype passed. And I find this kind of "debacle" (maybe too strong a word, though) a shame: fuzzy logic has some interesting things to it, way beyond how it was used back then in industry - I suggest reading the literature about it, starting with Zadeh's papers of course. You may find concepts and ideas that are a lot more interesting than what has been said about it (at least ever since it went out of fashion).
« Last Edit: November 18, 2021, 01:50:21 am by SiliconWizard »
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #67 on: November 18, 2021, 02:45:09 am »
I wonder if the best way to produce a human-equivalent explanation is to organize and train the AI as if it were human?  Use a layered approach, learning basic concepts first, none of which are the end goal, and reinforcing them into oblivion in random situations with all of the related inconveniences, then slightly more advanced but still quite simple concepts, etc., with each step building on the capabilities of the previous one.

Congratulations.

You've just reinvented the approach used in expert systems in the 80s :) There are even languages for those techniques. Search terms: forward chaining, backward chaining, Horn clauses.

(BTW, welcome to the Triumphant Re-inventors club  :) We all do that from time to time; I did it with FSMs and microcoding).

Ha!  Okay.  I hadn't seen that, but I wouldn't have thought to look there either.

I did it with IEEE754 floating point too.  I needed to compress a wide range of integers into a single byte and then decompress it on an 8-bit microcontroller, and my initial thought was that the IEEE version was too complicated.  But by the time I had solved all the problems with my version, it was pretty much *exactly* IEEE754, just with fewer bits.  So now I know why *that* is the way it is.  :)
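
The essence of it, sketched in Python rather than on an 8-bit micro - an unsigned byte-sized float with a 4-bit exponent and 4-bit mantissa; the exact field split here is illustrative, not necessarily the format I actually used:

Code: [Select]
# Sketch: pack a wide unsigned integer range into one byte.
# 4-bit exponent (high nibble), 4-bit mantissa (low nibble); lossy, like a
# minifloat without the implicit leading bit. Field split is illustrative.
def encode(n: int) -> int:
    e = 0
    while (n >> e) > 0xF:     # shift right until the mantissa fits in 4 bits
        e += 1
    return (e << 4) | (n >> e)

def decode(b: int) -> int:
    return (b & 0xF) << (b >> 4)

print(decode(encode(100000)))  # ~98304: close, at one-byte cost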
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19509
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #68 on: November 18, 2021, 09:27:45 am »
Yep. The techniques and the knowledge we have about them haven't changed all that much, actually. What has changed is the technology - the computational power at our disposal, which now makes some approaches that were once impractical usable.

There definitely are hybrid approaches too - which unfortunately mostly stay in academic circles, probably because they are not hyped enough. One common hybrid approach is to have a "good old" rule-based system coupled to a NN, either to determine the rules themselves, or to adjust/improve them as the system is being used. I rather like this approach. The rules themselves are then perfectly understandable. They can be fully automatically derived from training data as well, but it's also possible to verify them and hand-modify the ones that appear to be bogus.

The hype about current AI (which is definitely not what all AI is about, either) reminds me a bit of the hype around fuzzy logic a few decades ago. Manufacturers started shoving fuzzy logic everywhere, even when a PID would have worked at least as well. The hype passed. And I find this kind of "debacle" (maybe too strong a word, though) a shame: fuzzy logic has some interesting things to it, way beyond how it was used back then in industry - I suggest reading the literature about it, starting with Zadeh's papers of course. You may find concepts and ideas that are a lot more interesting than what has been said about it (at least ever since it went out of fashion).

Yup!

The hybrid approach does use the standard engineering technique: decomposition into small independent sections that are testable in isolation. The ML mob ignores that concept in favour of magic.

It has to be said that some problems aren't amenable to that, e.g. automated translation, since they do require global context to avoid the "out of sight out of mind -> invisible idiot" problem.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14476
  • Country: fr
Re: Machine Learning Algorithms
« Reply #69 on: November 18, 2021, 05:08:26 pm »
There may be a bunch of "followers" favoring magic, but this point itself is nothing new. When something becomes the de facto approach, whatever the reason, people will tend to flock to it. Actually, those who don't might even be considered idiots. (Yes, does that ring a bell? ;D) That's how trends have always worked.

It's interesting to consider how it subtly drifts away from standard engineering techniques, as you said. This trend is IMO not restricted to AI/ML, but that would be a whole topic in itself. I think it goes hand in hand with what I mentioned earlier: an apparent will to get rid of the concept of accountability. Here as well, it's absolutely not restricted to AI. It seems to be a deep change in society that's happening. Tell me I'm wrong, though!

But regarding ML, I think there's more to it than that. What does ML currently feed off? Huge amounts of data. That's ML's fuel. And data has become the 21st century's goldmine. Is it any wonder that ML is pushed at all costs by giant tech companies? So now, as we can even read in this thread, people think we can solve all problems with more data.

We know though that large amounts of data, improperly used, can lead to absolutely any conclusion and its opposite. Yes, even the same data. Classic fun: https://tylervigen.com/spurious-correlations
« Last Edit: November 18, 2021, 05:12:24 pm by SiliconWizard »
 

Offline SuzyC

  • Frequent Contributor
  • **
  • Posts: 792
Re: Machine Learning Algorithms
« Reply #70 on: November 18, 2021, 05:22:56 pm »
Suppose I wanted the lowest-cost, smallest hardware to solve a "simple" problem, like a device that could perform satisfactorily at recognizing a few command words.

I have also seen working examples of fuzzy logic used with an 8-bit microcontroller that successfully learns to balance a double pendulum.

Which brings to mind quickly two questions:

What ML or NN hardware would be required to do the same two example tasks?

 Why is fuzzy logic no longer in fashion to create intelligent devices?
« Last Edit: November 18, 2021, 05:31:02 pm by SuzyC »
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14476
  • Country: fr
Re: Machine Learning Algorithms
« Reply #71 on: November 18, 2021, 05:59:41 pm »
Fuzzy logic is not dead. There still are numerous books and papers about it.
Just as examples in areas where CNNs are used:
https://link.springer.com/book/10.1007/978-1-4614-6666-6
https://ieeexplore.ieee.org/document/8812593

I also suggest reading this: https://www.sfu.ca/~vdabbagh/Zadeh_08.pdf

It went out of fashion probably because it was overhyped in the 90s and early 2000s, and got replaced with another hype. Something notable too is that it has been consistently misunderstood and misused. As I mentioned, the common examples of fuzzy logic back in the day were often regulation systems - which was all cute, because you could suddenly use a set of understandable rules to solve a given problem instead of resorting to a formula with derivatives and integrals, but it did not necessarily provide a lot of benefit compared to just using PIDs.

As to comparing the resources needed for a given task using various approaches, that's an interesting question. There may be papers about that, although a fair comparison may not be easy. You probably need to dig that up.
« Last Edit: November 18, 2021, 06:01:47 pm by SiliconWizard »
 
The following users thanked this post: SuzyC

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #72 on: November 18, 2021, 06:10:37 pm »
Suppose I wanted the lowest-cost, smallest hardware to solve a "simple" problem, like a device that could perform satisfactorily at recognizing a few command words.

How robust do you want it?  I've seen some research, for example, to try and see how well a dog actually understands human language.  The primary result was that they essentially hear the sound of the first syllable and just count syllables after that.  If you're satisfied with that, then maybe record the first peak, and do an FFT on the recording while you count subsequent peaks.  By the time the entire command is done, the FFT might also be done, and you can compare it to a lookup table of statistics for each command, plus the exact count.  The learning part is to build that lookup table of statistics.
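
Very roughly, something like this - everything here (frame size, threshold, the template table, fixed-length clips) is invented, and a real implementation would need far more care:

Code: [Select]
# Rough sketch: spectral fingerprint + syllable count for a few command words.
# Assumes all clips are the same fixed length so their spectra align.
import numpy as np

def fingerprint(audio: np.ndarray, rate: int = 8000):
    frame = rate // 50                       # 20 ms frames
    n = (len(audio) // frame) * frame
    envelope = np.abs(audio[:n]).reshape(-1, frame).mean(axis=1)
    # count upward crossings of an (invented) loudness threshold ~ "syllables"
    syllables = int(np.sum((envelope[1:] > 0.3) & (envelope[:-1] <= 0.3)))
    spectrum = np.abs(np.fft.rfft(audio))
    return spectrum / (spectrum.max() + 1e-9), syllables

def classify(audio, templates):
    spec, count = fingerprint(audio)
    scores = [(float(np.dot(spec, t_spec)), name)
              for name, (t_spec, t_count) in templates.items()
              if t_count == count]           # syllable count must match
    return max(scores)[1] if scores else None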

I have also seen working examples of fuzzy logic used with an 8-bit microcontroller that successfully learns to balance a double pendulum.

The way I'd do that is to fix the system to a mathematical model with a few unknowns, so that the learning part is only to fill in those unknowns.  The system is limited to that task, but it's much easier to make than it would be for a general purpose thing that then learns this.
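
For instance, a sketch using least squares on a made-up damped-swing model; the model form, parameters, and noisy data are all invented for illustration:

Code: [Select]
# Sketch: fix the model's form, let "learning" fill in only the unknowns.
import numpy as np
from scipy.optimize import curve_fit

def model(t, length, damping):
    # assumed form: damped swing with unknown length and damping
    return np.exp(-damping * t) * np.cos(np.sqrt(9.81 / length) * t)

t = np.linspace(0, 10, 200)
measured = model(t, 0.5, 0.3) + 0.02 * np.random.randn(t.size)  # fake sensor data

(length, damping), _ = curve_fit(model, t, measured, p0=[1.0, 0.1])
print(length, damping)   # recovered unknowns, close to 0.5 and 0.3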

Why is fuzzy logic no longer in fashion to create intelligent devices?

That, I don't know.  My only exposure to something that was called "Fuzzy Logic" was in industrial controls.  The vendor's new firmware introduced another "black box" module that they called "Fuzzy Logic".  Essentially, you would configure it for N input variables and M output variables, and then enter an N-dimensional lookup table of M output values at each position, based on a small handful of inputs that are easy to characterize (hot/cold, empty/low/high/full, etc.).  It then did a linear interpolation of that lookup table based on the actual input values from the physical process.

No actual learning at all in that system.  You gave it some strategic answers, and it drew a bunch of straight lines in between.  If a straight line didn't work, you'd add another data point with a predetermined answer.
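
Reduced to one input and one output, that whole module was essentially this (breakpoints invented):

Code: [Select]
# Sketch of the vendor's "fuzzy logic": a lookup table plus linear interpolation.
import numpy as np

temps = np.array([0.0, 20.0, 40.0, 60.0])   # easy-to-characterize inputs
valve = np.array([1.0,  0.8,  0.3,  0.0])   # predetermined answers

def controller(temp_c: float) -> float:
    # straight lines between the strategic answers; no learning anywhere
    return float(np.interp(temp_c, temps, valve))

print(controller(30.0))  # 0.55: halfway between the 20 and 40 degree answers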

I remember thinking at the time that that's not real FL.  It felt more like a marketing buzzword to make an electrician-turned-programmer feel fancy.  It has some similarities, but it's not the real thing.
 
The following users thanked this post: SuzyC

Offline SuzyC

  • Frequent Contributor
  • **
  • Posts: 792
Re: Machine Learning Algorithms
« Reply #73 on: November 18, 2021, 06:14:05 pm »
SiliconWizard, thanks for those links to get to better understand my questions about fuzzy logic!

But the second question remains  unanswered. What would be the minimal hardware required to obtain the same results?


What are the minimal components required to implement a NN or ML system to solve the example problems?

Is it that NNs and ML are only used by themselves in the realm of AI applications, and that an NN/ML system is, by fashion, not allowed to be integrated with fuzzy logic to solve problems?
« Last Edit: November 18, 2021, 06:25:42 pm by SuzyC »
 

Offline SuzyC

  • Frequent Contributor
  • **
  • Posts: 792
Re: Machine Learning Algorithms
« Reply #74 on: November 18, 2021, 06:21:16 pm »
Thanks AaronD,

From your posting I get the idea that judicial sentencing and medical diagnostic work could also be done using fuzzy logic... if it were in style.

Referring to the example you posted, what part of "real" fuzzy logic was neglected or not used in control implementations?
« Last Edit: November 18, 2021, 06:24:48 pm by SuzyC »
 

