Author Topic: Interesting moral question - should autonomous cars choose the lesser evil?  (Read 26293 times)


Offline suicidaleggroll

  • Super Contributor
  • ***
  • Posts: 1453
  • Country: us
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #50 on: November 06, 2015, 11:28:08 pm »
You guys arguing against this DO realize that there are already self-driving cars, don't you?...Six years, over a million miles driven, 14 accidents, and not ONE has been the fault of the AI.

That is much worse than the US average accident rate per mile driven. If those cars had been driven on the same roads, with the same tires, at the same times of day, in a rolling chassis of similar quality to the norm for other vehicles, then the worse accident rate would absolutely be the fault of the AI, because the AI would be the only difference.

You're comparing total miles driven on all roads, the majority of which are straight highways.  The google cars are on very dense metropolitan city streets only.  If I spent my entire life driving around downtown San Francisco I wouldn't be surprised at all to get run into every ~85k miles because somebody else wasn't paying attention.
 

Offline rx8pilot

  • Super Contributor
  • ***
  • Posts: 3634
  • Country: us
  • If you want more money, be more valuable.
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #51 on: November 07, 2015, 12:19:09 am »
The statistical metrics have to be apples to apples. That is not easy with such a small sample.

Factory400 - the worlds smallest factory. https://www.youtube.com/c/Factory400
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3640
  • Country: us
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #52 on: November 07, 2015, 12:28:30 am »
You're comparing total miles driven on all roads, the majority of which are straight highways.  The google cars are on very dense metropolitan city streets only.  If I spent my entire life driving around downtown San Francisco I wouldn't be surprised at all to get run into every ~85k miles because somebody else wasn't paying attention.
I would be greatly surprised, because more accidents occur in rural areas, and San Francisco has a rate that is half of the average for American cities.
But I messed up, because there is a huge difference between accidents and fatalities (which have the most available statistics). There are about 2 reported crashes per million miles driven in the US. Fatalities are only 0.6% of reported accidents, or just over 1 per 100 million miles. The higher priority mission for safety authorities is to reduce fatalities, so most papers comparing rates between groups and cities focus on that.

With respect to the sample size, the tests by Google should probably be seen as close to an ideal case with more carefully controlled conditions than would pertain in "the real world". The number of accidents by DARPA Challenge cars is higher, for example. But neither has caused fatalities so far, which leaves that question open.
But extrapolation is hard, because when more cars are autonomous, they presumably could better detect each other's position and avoid bunching up and so on. Whether that would have any effect on fatalities is hard to say.
« Last Edit: November 07, 2015, 12:31:57 am by helius »
 

Offline han

  • Frequent Contributor
  • **
  • Posts: 311
  • Country: 00
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #53 on: November 07, 2015, 01:35:34 am »
I don't think the software will, or should, calculate the number of casualties; it should choose the safest route within legal boundaries.
And the current justice system cannot be applied to an AI.
It is based on retribution, and can only be applied to humans.
Prison can make people afraid to steal, murder, or drive drunk, so what punishment is there for the AI car that does the lesser evil? Spend 10 years in a dark and damp garage? :-DD
I would rather trust a fully automated car, since accidents that can happen will happen, no matter whether a human or an AI is behind the wheel.
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #54 on: November 07, 2015, 01:45:32 am »
I don't think the software will, or should, calculate the number of casualties; it should choose the safest route within legal boundaries.

I don't understand the distinction -- what is your metric of "safety" if not E(number of casualties)?
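rs20's proposed metric, E(number of casualties), can be made concrete with a small sketch: score each candidate maneuver by its probability-weighted casualty count and pick the minimum. The maneuver names and probabilities below are invented purely for illustration.

```python
def expected_casualties(outcomes):
    """outcomes: list of (probability, casualties) pairs for one maneuver."""
    return sum(p * c for p, c in outcomes)

# Hypothetical situation: two available maneuvers with estimated outcomes.
maneuvers = {
    "brake": [(0.7, 0), (0.3, 1)],        # 30% chance of one casualty
    "swerve_left": [(0.9, 0), (0.1, 2)],  # rarer, but worse if it goes wrong
}

# Choose the maneuver with the lowest expected casualty count.
best = min(maneuvers, key=lambda m: expected_casualties(maneuvers[m]))
print(best)  # swerve_left (E = 0.2, vs 0.3 for braking)
```

Note that this metric already embeds a value judgment: it treats a 10% chance of two deaths as preferable to a 30% chance of one, which is exactly the kind of trade-off the thread is arguing about.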
 

Offline han

  • Frequent Contributor
  • **
  • Posts: 311
  • Country: 00
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #55 on: November 07, 2015, 02:55:52 am »
I don't think the software will, or should, calculate the number of casualties; it should choose the safest route within legal boundaries.

I don't understand the distinction -- what is your metric of "safety" if not E(number of casualties)?

Safety, IMHO, means making a best effort to avoid damage to the car with minimal intervention:
better to brake than to make some other dangerous maneuver.
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3640
  • Country: us
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #56 on: November 07, 2015, 05:35:20 am »
han raises an interesting point, which I was already thinking about. If letting machines make safety decisions would save lives, then some would argue that the only moral course would be to cede control to the machines. Would they have the same answer if the question was, should machines make the decisions in cases of criminal justice? The currency in both cases is, of course, the same—people's lives at stake. And there are notorious problems with emotion-ridden human beings making penal decisions, including thousands falsely imprisoned, innocent people executed, and so forth. Automated cars are complicated to implement because of infrastructure issues, sensors and actuators, etc. Why not start by simply replacing juries with computers? Surely nobody is prepared to argue that an average juror could defeat Deep Blue in a battle of wits, so it's a foregone conclusion.
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #57 on: November 07, 2015, 08:01:23 am »
I don't think the software will, or should, calculate the number of casualties; it should choose the safest route within legal boundaries.

I don't understand the distinction -- what is your metric of "safety" if not E(number of casualties)?

Safety, IMHO, means making a best effort to avoid damage to the car with minimal intervention:
better to brake than to make some other dangerous maneuver.

So, if the car is presented with a boulder that suddenly falls on the road; it should make a demonstrably futile attempt to brake rather than dodge the boulder, even if its calculations indicate that death will most likely result?
 

Offline han

  • Frequent Contributor
  • **
  • Posts: 311
  • Country: 00
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #58 on: November 07, 2015, 09:59:28 am »
A boulder in the middle of the street is an extreme example.
Of course it depends on more than one condition. Braking is always the first priority; a sudden dodge maneuver is very dangerous without enough data. In an extreme case, on your left could be an Abrams tank and on your right a 1000 m cliff.
« Last Edit: November 07, 2015, 10:01:10 am by han »
 

Offline han

  • Frequent Contributor
  • **
  • Posts: 311
  • Country: 00
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #59 on: November 07, 2015, 10:03:52 am »
han raises an interesting point, which I was already thinking about. If letting machines make safety decisions would save lives, then some would argue that the only moral course would be to cede control to the machines. Would they have the same answer if the question was, should machines make the decisions in cases of criminal justice? The currency in both cases is, of course, the same—people's lives at stake. And there are notorious problems with emotion-ridden human beings making penal decisions, including thousands falsely imprisoned, innocent people executed, and so forth. Automated cars are complicated to implement because of infrastructure issues, sensors and actuators, etc. Why not start by simply replacing juries with computers? Surely nobody is prepared to argue that an average juror could defeat Deep Blue in a battle of wits, so it's a foregone conclusion.
The problem is that people will choose 1,000 deaths from human mistakes over 100 deaths from machines.
You can blame people and punish them.
 

Offline AF6LJ

  • Supporter
  • ****
  • Posts: 2902
  • Country: us
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #60 on: November 08, 2015, 09:21:06 pm »
AI in its current state, and for that matter in the foreseeable future, doesn't have the computing power of the human brain.
Your chess model is simply an example of a glorified adding machine.

You were doubting the ability of AI to see into the future, yet we do this routinely with weather forecasting and failure analysis. If you can model something, you can predict an outcome and assign a value to the accuracy of that prediction.
Tracking the speed and vector of multiple objects is trivial.

Quote
or get me killed since I am a pedestrian.
I don't see why; even an Xbox can determine what a human looks like.

Quote
Yes they might for the simple reason they do one thing well.

What am I doing that is so clever? Spinning a wheel and pressing two pedals. My FM stereo and my air con are more complicated.
I have 120 degrees of vision and a reaction time measured in hundreds of milliseconds. I am poor at controlling anything going faster than 10 mph.
I have no radar, no infra-red, no 360-degree awareness; I can only read one analogue sensor input at a time, have limited ability to process more than a few moving objects, no ability to collaboratively regulate my speed with other cars, no idea of traffic flow beyond the next car, and only a limited idea of physical road conditions.

Quote
relegate pedestrians to segregated walkways
If I had better sensors and the ability to communicate with other car borne sensors who are also tracking people, I could have detected that child behind the bus.
Tracking pedestrians and reacting to them is something better handled by collaborative computers, not by the dummy who isn't even looking in their direction.


You have intuition the AI doesn't, and you draw on anywhere from years to decades of driving experience. I am fine with driverless cars as long as the software engineers and company management are held just as responsible for death and injury as you would be when driving your own car.

By the way, to the person who commented on the existing driverless cars: they are not that great, and they have posed a hazard on the road.
Sue AF6LJ
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #61 on: November 09, 2015, 12:15:59 am »
You have intuition the AI doesn't, you draw on anywhere from years to decades of driving experience.
The AI can draw on millennia of collective driving experience...
 

Offline Halcyon

  • Global Moderator
  • *****
  • Posts: 5679
  • Country: au
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #62 on: November 09, 2015, 04:24:29 am »
Well, currently, when the car can't make a decision, the software relinquishes control to the human. I think you'll see this as the "default" mode for a long, long time to come. If we do get fully autonomous cars, I think it will probably come down to a vote and/or probability system, i.e.: what is the probability that, given the current circumstances, the driver of Car A will be seriously injured or killed, versus the probability of the person in Car B meeting the same fate? The lesser of those "wins". So you could find that the car decides it is "safer" to ditch the vehicle into a tree or bushes than to collide with another vehicle, in order to lessen the injury.

It's a bit like how humans think; however, unlike machines, humans tend to freeze up in emergency situations when panic sets in, and aim directly for the thing their eyes are fixated on (this is the reason people crash into lone trees on the sides of roads rather than veering around them).

The AI can draw on millennia of collective driving experience...

I could think of nothing worse! The quality of the vast majority of drivers out there is pretty poor! As time goes on, people are becoming more complacent, lazy and impatient. If a computer was making a decision based on the "average", I'd rather assume control myself and hope for the best.

I guess what would be more interesting is the link between the autonomous driving system and all the other controls such as stability, anti-lock brakes, radars/proximity sensors, cameras, etc... what gets priority and when? In an emergency you can't always have stability or anti-skid for example.
« Last Edit: November 09, 2015, 04:28:40 am by Halcyon »
 

Offline AF6LJ

  • Supporter
  • ****
  • Posts: 2902
  • Country: us
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #63 on: November 09, 2015, 05:17:58 am »
You have intuition the AI doesn't, you draw on anywhere from years to decades of driving experience.
The AI can draw on millennia of collective driving experience...

I think it will draw on its experience and that will help. We will have to develop new data storage techniques in order to store the massive amount of data accumulated over time.
Sue AF6LJ
 

Offline daqqTopic starter

  • Super Contributor
  • ***
  • Posts: 2302
  • Country: sk
    • My site
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #64 on: November 09, 2015, 06:02:58 am »
Quote
You have intuition the AI doesn't, you draw on anywhere from years to decades of driving experience.
Actually, assuming well-coded and well-trained software, the car can have the equivalent of all of that intuition available, with the difference that it can process all of the available information about the situation instantly. Assuming a fast enough system, it can analyze the situation from all available sensors, find the most probable outcomes of the available actions, and choose the action most likely to lead to a known outcome. I'm pretty certain that in the, say, 0.25 seconds available, an AI can make a more accurate assessment of the available data than you or me.

Taken to an extreme: When the car detects an inevitable situation, where it has to decide between lives of X people, provided a few tens of milliseconds time and a good enough connection, it could scan the faces, connect to the Internet, look them up in all available databases and resources and choose to run over the worst of the lot. I'm quite certain this could be implemented.


Probably the best way is to "outsource" the decision to the driver - by means of some kind of action configuration form you set in advance - some if-else logic.
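That "action configuration form" could look something like the sketch below: the owner pre-sets simple if-else preferences which the car consults only when no safe option remains. Every option name, field, and rule here is hypothetical, invented for illustration.

```python
# Hypothetical owner-configured ethics preferences (set in advance).
OWNER_PREFS = {
    "avoid_pedestrians_even_if_risky_for_car": True,
}

def choose_action(actions):
    """actions: candidate maneuvers, ordered from most to least preferred.

    Skips any maneuver that endangers pedestrians when the owner has
    opted to avoid that; falls back to the last option if nothing passes.
    """
    for a in actions:
        if a["endangers_pedestrians"] and OWNER_PREFS["avoid_pedestrians_even_if_risky_for_car"]:
            continue  # owner's pre-set rule vetoes this maneuver
        return a["name"]
    return actions[-1]["name"]  # no acceptable option: last resort

actions = [
    {"name": "brake_straight", "endangers_pedestrians": True},
    {"name": "swerve_to_shoulder", "endangers_pedestrians": False},
]
print(choose_action(actions))  # swerve_to_shoulder
```

The appeal of this design is legal rather than technical: the moral choice is made in advance by the human owner, so responsibility stays with a person rather than with the software vendor.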
Believe it or not, pointy haired people do exist!
+++Divide By Cucumber Error. Please Reinstall Universe And Reboot +++
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #65 on: November 09, 2015, 06:04:55 am »
The AI can draw on millennia of collective driving experience...

I think it will draw on its experience and that will help. We will have to develop new data storage techniques in order to store the massive amount of data accumulated over time.

That would be helpful too, but it could also be as simple as picking a representative sample of "incidents" from the past month, analyzing the logs, and adjusting the algorithms to avoid any suboptimal choices made during those incidents. Logs of successful trips would be very voluminous, but less interesting (although, if the AI could also log whenever it's "confused" -- or more accurately, when it fails to classify an object or incident with high confidence -- those could be stored as well.)
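The triage loop described above could be sketched roughly as follows: keep full detail only for incidents and for moments where the classifier's confidence was low, then sample those for offline analysis. The field names and the 0.8 confidence threshold are assumptions for illustration, not anything Google has published.

```python
import random

def select_for_review(log_entries, confidence_threshold=0.8, sample_size=2):
    """Pick a representative sample of 'interesting' log entries:
    actual incidents, plus moments of low classification confidence."""
    interesting = [e for e in log_entries
                   if e["incident"] or e["confidence"] < confidence_threshold]
    # Sample rather than keep everything: routine logs are voluminous.
    return random.sample(interesting, min(sample_size, len(interesting)))

logs = [
    {"id": 1, "incident": False, "confidence": 0.99},  # routine trip, discarded
    {"id": 2, "incident": True,  "confidence": 0.95},  # crash or near-miss
    {"id": 3, "incident": False, "confidence": 0.40},  # unclassified object
]
print([e["id"] for e in select_for_review(logs)])
```

Here entries 2 and 3 survive the filter while the routine entry 1 is dropped, which matches the intuition that "confused" moments are as valuable for retraining as actual collisions.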
 

Offline AF6LJ

  • Supporter
  • ****
  • Posts: 2902
  • Country: us
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #66 on: November 09, 2015, 03:33:04 pm »
When WE reach the point that we actually understand what self-awareness and intelligence are, then we will actually be able to build safe driverless cars. 
Sue AF6LJ
 

Offline han

  • Frequent Contributor
  • **
  • Posts: 311
  • Country: 00
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #67 on: November 10, 2015, 02:05:48 am »

Interesting moral question - should autonomous cars choose the lesser evil?
Interesting moral question - should you choose the lesser evil?
Hit one person, or hit a school bus?
Ride in an autonomous car: if it is fully autonomous and you didn't drive, there is less guilt and no jail for you.
You drive: you know what's waiting for you.


A simple fact: almost everyone has an accident once or more in a lifetime (even if it's only hitting the garage door), if they drive long enough. If you feel you drive better than everybody else, maybe it's the other way around.
 

Offline jimdeane

  • Regular Contributor
  • *
  • Posts: 128
  • Country: us
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #68 on: November 10, 2015, 08:02:56 am »
I do actually own a small truck, chosen for the carrying of light bulky loads and for the extensive safety systems (traction and stability control, brake assist, antilock, side curtain airbags, side airbags, high crash ratings) and for the decent v6 engine.

My previous cars have been nimble compacts. My wife's car is a compact. My next car will likely be a nimble compact or a small sedan, since I expect the truck will still serve for load hauling after I no longer drive it every day.

And I am all for using technology to make the roads safer, but within reasonable limits, such as never designing a safety system that can make moral decisions. The car should protect its occupants, without exception.

 

Offline jimdeane

  • Regular Contributor
  • *
  • Posts: 128
  • Country: us
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #69 on: November 10, 2015, 08:16:41 am »
(Sorry, didn't quote. This is in response to the crashes experienced by the Google self-driving cars.)

Why didn't they get out of the way? Surely they have sensors to detect surrounding traffic in real-time. If a human sees they are about to be hit, there are often evasive actions they can take. We aren't usually focused in all directions at a stop light, but a self driving car is always watching.
« Last Edit: November 10, 2015, 08:25:38 am by jimdeane »
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8642
  • Country: gb
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #70 on: November 10, 2015, 08:24:26 am »
And I am all for using technology to make the roads safer, but within reasonable limits, such as never designing a safety system that can make moral decisions. The car should protect its occupants, without exception.
If the car must unquestioningly protect its occupants it must be prepared to murder others to achieve that. Most humans at the driving wheel do not unerringly put themselves above everything else on the road. They do actually swerve to avoid people, and even animals, putting their own lives at risk in the process. Why would you want an autonomous car to act otherwise?
 

Offline jimdeane

  • Regular Contributor
  • *
  • Posts: 128
  • Country: us
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #71 on: November 10, 2015, 08:32:00 am »
One of the best potential features of automated or heavily augmented driving was alluded to by another poster: flocking behaviour, or more specifically, distributed network mass routing.

We are already seeing some of this, and it's not from automated driving; it is from live traffic auto-routing by Google Maps. When traffic slowdowns occur, it reroutes to a new, faster route. And if what my wife and I have observed is correct, it may even be distributing traffic load among multiple near-equivalent routes.

If you can envision a system integrated with infrastructure traffic control, you might even have dramatically reduced need for stopping, since a good traffic control algorithm could get most cars to lights while green.
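That "arrive while green" idea is essentially the classic green-wave signal timing calculation: if the corridor speed is known, each light's green phase can be offset by the travel time from the first light. A minimal sketch, with made-up speed, cycle length, and spacing values:

```python
SPEED_MS = 15.0   # assumed corridor speed in m/s (about 54 km/h)
CYCLE_S = 60.0    # assumed common signal cycle length in seconds

def green_offsets(distances_m):
    """Offset (seconds) of each light's green phase relative to the
    first light, so a platoon travelling at SPEED_MS arrives on green."""
    return [(d / SPEED_MS) % CYCLE_S for d in distances_m]

# Three lights spaced 450 m apart along one corridor.
print(green_offsets([0, 450, 900]))  # [0.0, 30.0, 0.0]
```

Real systems are harder: cross-streets want their own green waves, and a single offset plan can only favour one direction of travel per cycle, which is why fully coordinated corridors usually need adaptive control rather than fixed offsets.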
 

Offline MikeW

  • Regular Contributor
  • *
  • Posts: 104
  • Country: gb
  • Self confessed noob
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #72 on: November 10, 2015, 08:38:46 am »
Taken to an extreme: When the car detects an inevitable situation, where it has to decide between lives of X people, provided a few tens of milliseconds time and a good enough connection, it could scan the faces, connect to the Internet, look them up in all available databases and resources and choose to run over the worst of the lot. I'm quite certain this could be implemented.

Special priority given to those spamming facebook with terrible memes.
« Last Edit: November 10, 2015, 04:37:59 pm by MikeW »
 

Offline Mechanical Menace

  • Super Contributor
  • ***
  • Posts: 1288
  • Country: gb
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #73 on: November 10, 2015, 04:37:03 pm »
Why does everyone assume you need an AI to make a self driving car? It doesn't need to contemplate the meaning of life, the universe, and everything. It only needs to get from A to B and avoid obstacles...
Second sexiest ugly bloke on the forum.
"Don't believe every quote you read on the internet, because I totally didn't say that."
~Albert Einstein
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: Interesting moral question - should autonomous cars choose the lesser evil?
« Reply #74 on: November 11, 2015, 12:55:36 am »
Why does everyone assume you need an AI to make a self driving car? It doesn't need to contemplate the meaning of life, the universe, and everything. It only needs to get from A to B and avoid obstacles...

+1000
 

