If you can predict the 1.3% of cases in which it will fail, then that would be very acceptable - since we could just ignore/discount the result. (E.g. if it doesn't work the 1.3% of the time the temperature is below -5C, then we wouldn't use it in cold weather)
Would you be content if the 1.3% resulted in you being seriously injured or locked up in jail?
That's not how things work in real life. If we knew which of the one in a million (or whatever the fraction actually is) flights is the one that is going to crash, we would not get on it.
Instead, we take a flight knowing that the risk of a crash is small. Asking for a system with no unpredictable failures is unrealistic: failures can occur due to programming errors (even when the algorithm is well characterised), hardware failures, cosmic rays, operator error, and so on.
Fitting a hypothesis to previous observations is not science. (E.g. "gold is a good investment because it went up 50% last week" is an argument that only charlatans would use!)
Fitting a hypothesis to previous observations and then using the hypothesis to make falsifiable predictions is science.
Again, you misunderstand how deep learning is done. When building a deep learning system, fitting the hypothesis to the previous observations is called training. Using the hypothesis to make falsifiable predictions on different, held-out data is then called validation (or testing). The aim is to ensure the model generalises to new, unseen data. Both of your steps are used. Clearly, a lot of care needs to be taken to ensure these data sets are independent, are representative of the real uses to which the model is put, and so on, and that is not easy. But there is nothing fundamentally different or unscientific about deep learning.
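For what it's worth, here is a minimal sketch of that train-then-test workflow in Python; the dataset and model are just placeholders for illustration, not anything from this discussion:

# Minimal sketch of "fit on training data, then test the prediction on
# unseen data". Dataset and model are arbitrary stand-ins.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Hold out data the model never sees during fitting ("training").
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)   # fit the hypothesis to previous observations

# The falsifiable prediction: does the fitted model generalise to unseen data?
print("held-out accuracy:", model.score(X_test, y_test))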
You don't like deep learning because you "don't understand what is inside the black box". But why should we trust an inverse square law for electrostatics? We no more "understand" why nature should apparently follow a simple mathematical rule in that case, or where and how that rule may break down, than we understand the inside of the network. People trusted Newtonian mechanics until relativity showed it to be a poor description in some cases.
Anyway, let me finish this discussion with the observation that you are unlikely to see me in a self-driving car in the near future. Given the current success rate of deep learning systems on much simpler and less safety-critical computer vision tasks, and given that driving is a much more complex problem with many "unknown unknowns", I do not believe they are likely to reach an acceptable (to me) success rate in the near future, except in very controlled conditions. This is not because the method is flawed per se, but because the problem is too difficult.
It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.
...and saved countless others. Nobody records what would have happened but didn't, so the data is skewed.
The explanation that I heard for Boeing's recent debacles was that they trusted outside contractors too much. They used to know what they're doing and did it themselves, so it was right. Now, they contract out important stuff to people who don't have a clue but are much cheaper, and neglect to tell them ALL of the requirements because they're used to it being common knowledge. Turns out it isn't, and they end up with a software product that has an input for a redundant sensor but doesn't actually use it...
The biggest hindrance to widespread automation is NOT the engineering. It's the short-sighted idiot bean-counters that routinely take over engineering and screw it up.
It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.
...and saved countless others. Nobody records what would have happened but didn't, so the data is skewed.
In cases where the root cause is automation, you're right, except that pilots not understanding what the automation is doing at some point, and thus not knowing how to take corrective actions that COULD have been taken had the automation's behaviour been clearer, is still a major issue. It can be seen in many of these cases, including of course the Boeing debacle. Had the pilots understood what the automation was doing, the planes would never have crashed.
It's not about getting the perfect tools, it's about getting decent tools that their users know well.
And, OTOH, there are of course a number of crashes not caused by automation at all, but by various hardware failures, for instance. There we have ample proof of how pilots can react, and how some are able to land safely with a severely damaged plane. So we definitely know that humans can react to completely unexpected events in a much better way than any machine could.
To me, the Boeing issue is very telling. Sure, we can say that it's a huge design mistake. But that will happen again. No design process is perfect, and even though it's kind of easy in this example to place blame, there are cases where it's a lot harder. Critical systems must always be designed so that they are resilient. That includes the obvious redundancy, which was largely missing in Boeing's case, and enabling users to take corrective actions.
And it's a good thing here that the software used for MCAS was infinitely simpler in itself, and easier to understand, than any AI-based system. So we could at least determine what the problem was, and fix it. If we don't know what the problem is, we can never fix it: if we can't analyze why a given system fails, we can't fix it. We can only run in circles like flies, frantically retrain NNs until we seem to get a better success rate than the previous version with larger/seemingly "better" training datasets, and cross our fingers. That's an odd way of considering safety and correctness.
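To make the redundancy point concrete, here is a toy sketch (my own illustration, not the actual MCAS logic or its fix) of a cross-check between two angle-of-attack sensors that refuses to act automatically when they disagree:

# Toy illustration of sensor redundancy: only allow an automated function
# when two independent sensors roughly agree. Threshold is made up.
DISAGREE_LIMIT_DEG = 5.0

def automated_trim_allowed(aoa_left_deg: float, aoa_right_deg: float) -> bool:
    """Permit the automated function only when both sensors roughly agree."""
    if abs(aoa_left_deg - aoa_right_deg) > DISAGREE_LIMIT_DEG:
        return False  # disagreement: alert the crew and leave control to them
    return True

print(automated_trim_allowed(12.0, 11.5))   # True: sensors agree
print(automated_trim_allowed(12.0, 74.0))   # False: likely sensor failure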
Also, pure statistics are great for some things and less useful for others. I gave this fun hammer example. But it's IMO an interesting question.
Say we have one fully automated system for which extensive tests have shown a correct behavior rate of 99%. Now say that an equivalent approach with a less automated system and more human control is estimated to have a rate of 98%. Which one are you going to feel safer with? Which one seems best for long-term use? Which one is easier to fix or improve? There are underlying questions that are a lot more complex than they might seem.
And accountability is also a major point here IMO. No, it's not per se about "who to put the blame on" so we can get some feeling of revenge and move on. Accountability is there to give a strong incentive both to limit errors before they happen and to fix errors when they do happen. Without accountability, there is exactly ZERO incentive to fix/improve anything, except maybe for marketing reasons: "Look, my autonomous plane has a 0.1% probability of crashing, yours has 0.2%! Buy me!" So lack of accountability = design things to the minimum level of safety possible and put profitability before safety.
The root cause of the "I don't actually know how to fly at all so I crash" accidents (which are indeed very numerous - a typical example being pulling the nose up in panic when the stick shaker activates to indicate a stall) is not the addition of automation, but the vast increase in flying, and especially in cheap flights. Specifically, in the early 2000s the problem was sudden and huge: airlines just needed to hire whomever they could, with no need for exceptional skills and no need for any ambition for flying. And no money, no time for thorough training!
Almost overnight, the "human-related accidents" changed from mishaps caused by a very skilled but unquestioned hero captain, where a skilled F.O. would have been able to prevent the crash but couldn't question the captain, into a completely new genre where there are two pilots in the cockpit, neither of whom has any idea how to fly or what to do in completely normal situations.
Automation can be blamed, though, because it was the enabler for this. These crap pilots kind of learn how to fly, but without automation they would cause a much larger number of accidents - to the point of no one daring to fly; it would just be impractical. So enter automation: as it stands, these pilots only cause an accident whenever the automation decides, for whatever reason, to let the pilot handle the situation, or disables its automated safety features (due to a sensor malfunction, for example).
Tesla Autopilot is similar. Give it to a drunk idiot and it will easily save lives by driving better, more reliably and more predictably than said drunk idiot. But the comparison is moot. We shouldn't let drunk idiots drive to begin with.
Because of the automation paradox, partial solutions tend to be worse than either extreme, so it's unfair to tentatively mix in just a little bit and then kill the project because the approach itself set it up to fail.
Same things can be said about humans. But this wouldn't answer the question from the OP. There are places where ML fits and other places where it doesn't.
Key differences: the human can explain why they made a decision.
Same things can be said about humans. But this wouldn't answer the question from the OP. There are places where ML fits and other places where it doesn't.
Key differences: the human can explain why they made a decision.
Sorta, maybe, kinda, not really. At least not reliably......
https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
Why does an AI suspect a person might be a criminal or have cancer? The explanation for this and other high-stakes predictions can have many legal, medical, and other consequences. The way in which AIs reach conclusions has long been considered a mysterious black box, leading to many attempts to devise ways to explain AIs' inner workings. "However, my recent work suggests the field of explainability is getting somewhat stuck," says Auburn's Nguyen.
Nguyen and his colleagues investigated seven different techniques that researchers have developed to attribute explanations for AI decisions—for instance, what makes an image of a matchstick a matchstick? Is it the flame or the wooden stick? They discovered that many of these methods "are quite unstable," Nguyen says. "They can give you different explanations every time."
In addition, while one attribution method might work on one set of neural networks, "it might fail completely on another set," Nguyen adds. The future of explainability may involve building databases of correct explanations, Nguyen says. Attribution methods can then go to such knowledge bases "and search for facts that might explain decisions," he says.
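For readers unfamiliar with what an attribution method actually computes, here is a rough sketch of one simple gradient-based variant (vanilla saliency) in PyTorch; it is my own illustration, not necessarily one of the specific techniques investigated, and the random input stands in for a real image:

# Vanilla saliency: gradient of the predicted class score w.r.t. the input
# pixels. Large magnitudes mark pixels the prediction is most sensitive to.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()   # load pretrained weights in practice

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo

scores = model(image)
top_class = scores.argmax(dim=1).item()

scores[0, top_class].backward()                 # backprop the top class score
saliency = image.grad.abs().max(dim=1).values   # (1, 224, 224) heatmap
print(saliency.shape)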
The CNN algorithm is perfectly capable of producing a humanly acceptable explanation. For example, in the judicial system, the algorithm could produce an explanation similar to: "here are the 10 most similar cases; in 9 of these 10 cases there was a death sentence, so I recommend a death sentence as well". Such an explanation is actually very similar to what a judge might say - the first thing the judge would look at is the rulings in similar cases. And, like the judge, the software may be corrupted (by hackers, bugs, or whatnot) and may be made to disregard some relevant cases.
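Something along the lines of that example-based explanation can be sketched with a plain nearest-neighbour lookup over case embeddings; the random vectors and 0/1 outcomes below are placeholders, not real data:

# Example-based explanation: retrieve the most similar past cases and report
# their outcomes as the justification for the recommendation.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
case_embeddings = rng.normal(size=(1000, 64))   # e.g. penultimate-layer features
case_outcomes = rng.integers(0, 2, size=1000)   # 0/1 outcomes of past cases

index = NearestNeighbors(n_neighbors=10).fit(case_embeddings)

def explain(new_case):
    """Return the 10 most similar cases and the majority outcome among them."""
    _, idx = index.kneighbors(new_case.reshape(1, -1))
    neighbours = idx[0]
    majority = int(case_outcomes[neighbours].mean() >= 0.5)
    return neighbours, case_outcomes[neighbours], majority

neighbours, outcomes, recommendation = explain(rng.normal(size=64))
print("similar cases:", neighbours)
print("their outcomes:", outcomes, "-> recommended:", recommendation)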
CNN is not really self-learning. Neural networks are. Here the problem is that a relatively large neural network may get its own agenda and decide not to pursue the goals set by humans, or may even work against humans. This will be real horror, although humans will probably not see it until it's too late.
I wonder if the best way to produce a human-equivalent explanation is to organize and train the AI as if it were human. Use a layered approach, learning basic concepts first, none of which are the end goal, and reinforcing them into oblivion in random situations with all of the related inconveniences, then slightly more advanced but still quite simple ones, etc., with each step building on the capabilities of the previous one.
I wonder if the best way to produce a human-equivalent explanation is to organize and train the AI as if it were human. Use a layered approach, learning basic concepts first, none of which are the end goal, and reinforcing them into oblivion in random situations with all of the related inconveniences, then slightly more advanced but still quite simple ones, etc., with each step building on the capabilities of the previous one.
Congratulations.
You've just reinvented the approach used in expert systems in the '80s. There are even languages for those techniques. Search terms: forward chaining, backward chaining, Horn clauses.
(BTW, welcome to the Triumphant Re-inventors club. We all do that from time to time; I did it with FSMs and microcoding.)
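Since forward chaining over Horn clauses came up, here is a minimal toy sketch of the idea; the facts and rules are made up purely for illustration:

# Forward chaining over propositional Horn clauses: keep firing any rule
# whose antecedents are all known facts until nothing new can be derived.
facts = {"has_wings", "lays_eggs"}

# Each rule: (set of antecedents, consequent)
rules = [
    (frozenset({"has_wings", "lays_eggs"}), "is_bird"),
    (frozenset({"is_bird", "cannot_fly"}), "is_flightless_bird"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(forward_chain(facts, rules))  # derives "is_bird"; second rule never fires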
Yep. The techniques and the knowledge we have about them haven't changed all that much, actually. What has changed is the technology - the computational power we have at our disposal, which now makes usable some approaches that were once impractical.
There definitely are hybrid approaches too - which unfortunately mostly stay in academic circles, probably because they are not hyped enough. One common hybrid approach is to have a good old rule-based system coupled to a NN, either to determine the rules themselves or to adjust/improve them as the system is being used. I rather like this approach. The rules themselves are then perfectly understandable. They can be derived fully automatically from training data as well, but it's also possible to verify them and hand-modify the ones that appear to be bogus.
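As a rough sketch of the "understandable rules derived from training data" half of that idea (the NN-assisted tuning part is not shown), one can fit a small decision tree and print its rules for a human to inspect or hand-modify; the dataset is an arbitrary placeholder:

# Derive human-readable if/then rules from training data, then print them
# so they can be reviewed or edited by hand.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print(export_text(tree, feature_names=load_iris().feature_names))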
The hype about current AI (which is definitely not what all AI is about either) reminds me a bit of the hype around fuzzy logic a few decades ago. Manufacturers started shoving fuzzy logic everywhere, even when a PID would have worked at least as well. The hype passed. And I find this kind of "debacle" (maybe too strong a word) a shame: there are actually some interesting things to fuzzy logic, well beyond how it was used back then in industry - I suggest reading the literature about it, starting with Zadeh's papers of course. You may find concepts and ideas that are a lot more interesting than what has been said about it (at least ever since it went out of fashion).
Suppose I wanted the lowest-cost, smallest hardware to solve a "simple" problem, like a device that could perform satisfactorily at recognizing a few command words.
I have also seen working examples of fuzzy logic used with an 8-bit microcontroller that successfully learns to balance a double pendulum.
Why is fuzzy logic no longer in fashion to create intelligent devices?
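For reference, the core mechanism of such a fuzzy controller really is tiny; here is a toy sketch (not the double-pendulum controller mentioned above, just triangular membership functions and three made-up rules):

# Toy fuzzy controller: fuzzify an error signal with triangular membership
# functions, apply three rules, and defuzzify with a weighted mean.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(error):
    neg = tri(error, -2.0, -1.0, 0.0)    # error is negative
    zero = tri(error, -1.0, 0.0, 1.0)    # error is about zero
    pos = tri(error, 0.0, 1.0, 2.0)      # error is positive

    weights = [neg, zero, pos]
    outputs = [+1.0, 0.0, -1.0]          # push back against the error
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

print(fuzzy_correction(0.4))   # small negative correction
print(fuzzy_correction(-1.2))  # larger positive correction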