Self driving liability
jpanhalt:
@Someone Post #27: My ISP was down for almost 12 hours yesterday; sorry for the late response.

I find accident data interesting, as the choice of denominator has a huge influence. For aircraft operations, is it operations, passenger miles, total flight time, etc.? The Australian report uses ATMs (aircraft transport movements). If accurate, that is probably a decent measure, but how accurate is it? For example, a small GA aircraft may make multiple approaches with or without landing and taking off. I don't believe those are logged by the tower, if there is one*, yet they involve the most dangerous phases of flight.

Rates per passenger mile are deceptive, as scheduled carriers log far more passenger miles than GA for a single airplane. That is, most GA flights cover shorter distances than scheduled carrier flights and carry far fewer passengers per flight.

*Unless there is an accident. In the US, we have lots of uncontrolled airports. The "pattern" at those airports used to be quite crowded on weekends and holidays. Today, they are far less crowded. One can still practice approaches with or without a tower.
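A toy calculation makes the denominator point concrete. The figures below are made up purely for illustration (not real accident statistics): with these assumed numbers, the same two fleets look about 2:1 apart on a per-movement basis but 1000:1 apart on a per-passenger-mile basis.

--- Code: ---
# Hypothetical, illustrative figures only -- not real accident statistics.
fleets = {
    # accidents, movements (takeoff/landing cycles), passenger miles
    "general aviation":   {"accidents": 40, "movements": 1_000_000, "pax_miles": 200_000_000},
    "scheduled carriers": {"accidents": 10, "movements":   500_000, "pax_miles": 50_000_000_000},
}

for name, d in fleets.items():
    per_100k_movements = d["accidents"] / d["movements"] * 100_000
    per_bn_pax_miles   = d["accidents"] / d["pax_miles"] * 1_000_000_000
    print(f"{name}: {per_100k_movements:.1f} per 100k movements, "
          f"{per_bn_pax_miles:.2f} per billion passenger miles")
--- End code ---

Same accident counts, wildly different-looking rates, depending solely on which denominator you pick.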
TerraHertz:
The best thing about self-driving cars is that they are an excellent intelligence test for people. Ask anyone if they think self-driving cars are a good idea. In a few moments you'll know whether they are sane or a delusional, amoral lunatic; whether they understand the nature of consciousness and the importance of having a comprehensive model of the world around them; whether they understand the dilemma of 'strong AI' at all, or think a world with AI slaves would be just peachy.

Anyway, it's all very entertaining (except for side effects like car insurance rates going up). Here's an example of the kind of expensive comedy that self-driving cars (with less than strong AI) will keep on giving: Tesla on Autopilot Crashes Into $3 Million Jet. I particularly enjoyed how the car seemed to think "Wait, did I just hit something? Nah, it's all good, I'll keep going. Uh, no, something is definitely wrong."

Then there was that other hilarious case (last year?) where a Tesla on highway Autopilot decapitated its occupant by trying to drive under a semi that was pulling out of a side road.

If cars end up being given strong AI, there will be a whole different world of problems. And they _would_ be given strong AI as soon as it's developed. Except for other, more immediate world factors that will make all such future issues moot.
CatalinaWOW:
There are certainly many examples of self-driving cars doing stunningly stupid things. Unfortunately, the same is true for human drivers. I know two people who drove at high speed into objects (not suicides). That is within the group of people I know well enough to have gone to dinner with or attended the same party, a group that totals at most a few thousand people. Any claim for the superiority of one method over the other is very weakly supported.
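To put "weakly supported" in perspective: two events in a group of roughly 3,000 gives a confidence interval more than an order of magnitude wide. A quick sketch in Python using the Wilson score interval (the 2-in-3,000 figure is just my anecdote above, not a real dataset):

--- Code: ---
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_interval(2, 3000)
print(f"95% CI: {lo:.5f} to {hi:.5f}")  # roughly 0.00018 to 0.00243
--- End code ---

The underlying rate could plausibly be anywhere from about 0.02% to 0.24%, so samples like this can't distinguish one method from the other.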
TerraHertz:
Did I say human drivers don't do stupid things? Nope. Just today I was a few meters away (on foot) from a near accident in which a human driver did something very stupid, and the two cars involved screeched to a stop about a foot apart.

But it's a logical fallacy to argue that because human drivers screw up, we should allow 'AI' self-driving cars. Even if self-driving cars somehow managed to have fewer accidents per mile driven, fewer injuries, etc., it would still be a very bad idea.
eugene:
--- Quote from: TerraHertz on April 23, 2022, 02:57:01 pm ---
Did I say human drivers don't do stupid things? Nope. Just today I was a few meters away (on foot) from a near accident in which a human driver did something very stupid, and the two cars involved screeched to a stop about a foot apart.

But it's a logical fallacy to argue that because human drivers screw up, we should allow 'AI' self-driving cars. Even if self-driving cars somehow managed to have fewer accidents per mile driven, fewer injuries, etc., it would still be a very bad idea.
--- End quote ---
Don't mistake me for an AI fanboi, but I don't understand how anything that makes traveling safer can be a "very bad idea." In any case, I'm certain I see the entire topic differently than most of the contributors to this thread. (I find it comical that a bunch of off-duty engineers think they understand the issues better than the on-duty engineers working in the auto industry.)

Only nutjobs like Elon Musk think it's a good idea to make a completely driverless car and sell it today. On the other hand, ALL of the major auto manufacturers are investing big money and effort in the topic in a more general way. None of GM, VAG, or Toyota is going to offer cars that drive around a tarmac and crash into anything in the way. The reason is obvious and is the major topic of this thread: liability. But they are doing all the research that their enormous resources allow, and in the course of that research they will develop all sorts of ways to build cars that assist the driver, possibly in surprising ways. The consequence can only be that the cars get MORE safe. No major auto manufacturer (Tesla notwithstanding) is going to introduce features that make traveling less safe. It's just not part of their business plan.

Anyway, all of this assisted driving will eventually result in auto-driving. In the meantime (i.e., today) we can have fun talking about the what-ifs, but it's all essentially hypothetical, because unless Google starts making cars, Tesla will be the only example we have for discussion, and they aren't even close to typical.