The autonomous car example is telling. If the autonomous system fails and causes an accident, the driver will be liable! The machine itself can't, of course, be liable for anything, and since fault can't be proven against a system no one can analyze, the company selling it can't be liable either (which is utterly twisted, of course). The really fun part is that proponents of this will claim how much more reliable AI is than humans on the road, yet if anything goes wrong, the driver is supposed to have been supervising the machine at all times and will be liable. And of course all this is perfectly consistent.
Isn't it the case that if an aircraft crashes because the autopilot malfunctioned, the pilots are considered at fault for not noticing and taking action? (In one case, the autopilot disengaged for some reason, but the warning buzzer wasn't loud enough to stand out from the background noise while the pilots were troubleshooting another problem with the aircraft.)
Looks like you missed the point - at least you haven't given this a lot of thought.
A few things:
- I pointed out the patent inconsistency of CLAIMING that AI systems are much safer than any human could be, while ultimately expecting the human to make up for any failure of the automated system. That is just twisted.
- I would have a lot fewer concerns overall if companies promoting and selling stuff with AI systems were ENTIRELY liable in case of a mishap. That'd be a game changer for sure.
- Pilots in aircraft are not a very good parallel - ultimately, the "pilot in command" is responsible for anything that happens in the aircraft, and that designation applies to one specific pilot, not to any pilot on board (copilots are not the pilot in command). This has strict legal implications and is quite different from the case of an individual driver in a car.
- Conventional autopilots are predictable (at least for the most part). Things can sometimes go wrong, for instance due to a sensor failure that isn't handled well in software, but most often, when a sensor fails, the autopilot's first reaction is to disengage itself. The exceptions mentioned by tggzzz are actually not "autopilot" failures per se, but failures of additional flight systems that are supposed to keep the plane safe. Not that this fundamentally makes a big difference; it's just that those systems are "sneakier" than autopilots, which can be disabled at the press of a button. A parallel in a car would be, for instance, an ABS failure, rather than a failure of those AI-based "autopilots".
- Even so, there are already cases of this with existing systems that are not AI-based (like the MCAS debacle). But as a few of us are trying to explain in this thread, the difference is that in the end it was relatively straightforward to understand where the problem came from, what happened, and how to fix it, because the systems in question were analyzable. And Boeing faced the consequences. Imagine the same MCAS issue, but with an MCAS that was entirely AI-based and no one able to pinpoint the cause for sure after the accidents.
"Interestingly", Elon Musk is perfectly aware of those issues with AI and has been saying things about it that are quite similar to what tggzzz, I, and a few others are saying here. His main point for actively *using* AI in his products is to become proactive rather than being passive and letting others do it anyway. He's been a proponent of *regulating* AI in a strict way. Problem though, nothing much is really happening yet in that area, and he's still actively promoting AI, while - at least as far as I know - having not done much for the regulation part (like actively working with politics) apart from a few talks. I get his point of being proactive rather than letting others do it anyway, but as it is, whatever his concerns are, it's not helping much and doesn't look liike much more than just cute marketing talk to make him look like the "good guy".