Author Topic: Self Driving Cars: How well do they work in areas with haphazard driving rules?  (Read 37045 times)


Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
Several months ago, the self-driving car manufacturers were trying to lobby governments (including the Australian government) to make legislation outlawing non-autonomous vehicles by the 2020s. I am not looking forward to a future where nobody can be blamed for an accident that cripples another person. This is enough to convince me that there is an agenda, and that this is all about greed and profits rather than safety. Would you feel safe with hackable two-ton robots with human "hostages" running around the road?

Can you really trust that:
  • The algorithms will not have any bugs in them?
  • Everyone will be able to keep their firmware updated to deal with different infrastructure changes and regulations?
  • The car cannot be hacked or altered by others on the side of the road?
  • Manufacturers will issue timely firmware updates to handle changes to roads, infrastructure, regulations etc?
  • The failure modes of the self-driving mechanism are going to be safe?

If self-driving cars are to be legal on the road, a law must be in place that makes manufacturers 100% liable to prosecution if it can be reasonably deduced that the auto-pilot function caused the accident, or failed to prevent an avoidable one.

I don't even know why we're bothering with autonomous vehicles on the road, because trains have already been invented. Autonomous vehicles take the fun out of the road. I think I'll stick with a motorcycle. The world would be better if more people rode motorcycles.
« Last Edit: June 19, 2017, 03:28:55 am by X »
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2317
  • Country: au
Can you really trust that:
  • The algorithms will not have any bugs in them?
  • Everyone will be able to keep their firmware updated to deal with different infrastructure changes and regulations?
  • The car cannot be hacked or altered by others on the side of the road?
  • Manufacturers will issue timely firmware updates to handle changes to roads, infrastructure, regulations etc?
  • The failure modes of the self-driving mechanism are going to be safe?

Interesting that you assert that self-driving cars must be 100% flawless before they should be allowed to replace human drivers, when on US roads alone some 35,000 people died in 2015, overwhelmingly at the hands of human drivers.

The only really sensible minimum requirement for self-driving cars to be allowed is for them to be better than humans -- a low bar indeed, as noted above. If you hold an opinion that there should be a more stringent requirement, then either you consider your right to the freedom of driving your own car to be worth killing others for, or you are one of the 90% of humans who consider their ability at doing X (driving in this case) to be better than the median.

Are self driving cars better than humans yet? I dunno, I'm purely debating what the threshold should be.
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Right, of course, and life is like that too.

But the problem is, as I've heard again and again, business needs stability.

What that means is that when accidents occur (and I think we can expect them, and they likely won't involve just one car - they may involve thousands at the same time, like other computer glitches), we can expect denial and lack of accountability. It's guaranteed in this context.

As with the recent fire in London, I think we can also expect news about their true impact to be suppressed.


Before the 1990s, if automated cars had killed a lot of people, people would have attempted, by voting, to force politicians to ban them. That would now be barred because of the "transparency" requirement that corporations get a chance to tell government they don't want prospective laws, and to demand compensation in advance.

They would frame that as "measures tantamount to expropriation" of their investment - "indirect expropriation".

Compensation must then be paid to the corporation - their expected lost profits.

The desire to appease the corporations is more powerful than any fear of public outrage.


What I think will happen at first is that the car will be programmed to do whatever it can to protect the owner and his or her "life", even if that is dangerous or fatal to others - because that's what the buyers of these $100,000 cars will likely want. But as time passes, governments and corporations will get together to fine-tune what they both want the software to do.

Governments may also require that it be possible for them to assume control of the wheel. For example, if a car were about to be repossessed, it could be told to lock its doors and drive somewhere, or perhaps be immobilized. Or all cars could be immobilized. Laws against using cell phones in moving cars would also be easy to enforce.

But to get back to your original point: just as other changes being made for corporations seem to be consistently inflexible, I think the law on self-driving cars will likely be similar.
Quote from: cdev on Today at 12:32:41
Andy and coppice, the answer to your question seems to me to be that each self-driving car will have a regulatory database of "the rules" that apply in its country, and it will have to obey them. Whatever flexibility is in the law now will have to be removed for the corporations. There will also be a period, say five years, where self-driving features will only be enabled on larger roads that have the technology to manage traffic flow automatically. Of course, initially, like now, some areas will be mostly old-fashioned non-self-driving cars, so they will have to be given some time to buy new cars. Some areas "off the beaten track" may remain non-self-driving indefinitely.
Quote from: coppice on Today at 18:49:44
This completely misses the point. In real-world driving we are forced to break the rules when we encounter the aftermath of an accident or really convoluted roadwork setups. We frequently puzzle over what to do for a while, but we have to take some rule-breaking action, or we would be stuck until the obstacles are cleared. Getting a car to take safe action as it encounters such an incident is reasonably straightforward. Getting it safely out of such situations, so it can continue to make progress, is a whole different thing.
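For what it's worth, a toy sketch of the kind of per-jurisdiction rule table that idea implies (the jurisdictions, fields, and values below are all invented for illustration). It also makes coppice's objection concrete: a static table has no entry for "drive on the shoulder past the crash scene".

```python
# Toy per-jurisdiction rule table; every entry here is invented for illustration.
RULES = {
    "AU-NSW": {"drive_on": "left",  "default_limit_kmh": 100},
    "DE":     {"drive_on": "right", "default_limit_kmh": None},  # None: no general motorway limit
}

def rule(jurisdiction: str, key: str):
    """Look up one rule; a real car would geofence its position to pick the jurisdiction."""
    return RULES[jurisdiction][key]

print(rule("AU-NSW", "default_limit_kmh"))  # 100
```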
« Last Edit: June 19, 2017, 04:52:40 am by cdev »
"What the large print giveth, the small print taketh away."
 

Offline JoeNTopic starter

  • Frequent Contributor
  • **
  • Posts: 991
  • Country: us
  • We Buy Trannies By The Truckload
This completely misses the point. In real-world driving we are forced to break the rules when we encounter the aftermath of an accident or really convoluted roadwork setups. We frequently puzzle over what to do for a while, but we have to take some rule-breaking action, or we would be stuck until the obstacles are cleared. Getting a car to take safe action as it encounters such an incident is reasonably straightforward. Getting it safely out of such situations, so it can continue to make progress, is a whole different thing.

Sometimes it even happens with a cop directing the rule-breaking - using the wrong side of the road, the shoulder, the grass, etc. for the duration of the incident. How does a self-driving car interpret a cop or traffic warden directing traffic? I would think not very well.
Have You Been Triggered Today?
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12288
  • Country: au

The only really sensible minimum requirement for self-driving cars to be allowed is for them to be better than humans -- a low bar indeed, as noted above. If you hold an opinion that there should be a more stringent requirement, then either you consider your right to the freedom of driving your own car to be worth killing others for, or you are one of the 90% of humans who consider their ability at doing X (driving in this case) to be better than the median.

Are self driving cars better than humans yet? I dunno, I'm purely debating what the threshold should be.

While the basic logic expressed here has obvious merit, there is one significant component that I can see being a rather nasty can of worms.... the nature of the incidents that occur.

Certainly, it may well be that an automated vehicle will reduce the number of accidents in a significant number of situations - such as careless lane changes and driver fatigue - but there remains the potential for absolutely obscure or incredibly complex situations in which a human can make a better call than an automaton. All you need is a few events where the automated vehicle made a less-than-ideal choice in a situation that even the most unskilled human driver would have easily coped with, and you will soon get ridicule and problems with acceptance - even if the bottom line is better.

Ask any programmer to write a moderately complex program that will handle ALL contingencies in an acceptably controlled manner and see how they squirm - especially if you put them on-call for the first run.

I am certain that there will be incremental improvement over time - but as the vehicle and aeronautical industries have demonstrated, it takes death and serious injury to precipitate change.

How accepting will society be during that process, having already gone through such things over the last century or so?
 

Offline JoeNTopic starter

  • Frequent Contributor
  • **
  • Posts: 991
  • Country: us
  • We Buy Trannies By The Truckload
Several months ago, the self-driving car manufacturers were trying to lobby governments (including the Australian government) to make legislation outlawing non-autonomous vehicles by the 2020s.

Honestly, I just can't see it in the U.S.  The nation just got piqued and elected Trump as a big FU to the establishment.  Adopting a law like that at a national level would probably create so many dead politicians that it would make the French Revolution look tame in comparison. 

Manufacturers need to make this work in the current system of most people driving themselves.  That is what they promised.  If they can't do it, they can't sell their products.  End of story.
Have You Been Triggered Today?
 

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
Interesting that you assert that self-driving cars must be 100% flawless before they should be allowed to replace human drivers, when on US roads alone some 35,000 people died in 2015, overwhelmingly at the hands of human drivers.
Maybe not 100% perfect, but if not 100%, then when something happens the manufacturers of the algorithms must be liable to prosecution, just like an equivalent human driver. Of course, humans don't have WiFi connectivity, don't need firmware updates, and don't run the risk of being hacked by an RF blast from the whiz-kid delinquent next door.

Good luck getting any compensation from a driverless car manufacturer after a multi-car, multi-casualty pile-up caused by a bug in the algorithm, where your car ran into another driverless car from a different manufacturer because it couldn't detect the traffic cones for the contra-flow setup in place while road workers were mending a burst water main.

The only really sensible minimum requirement for self-driving cars to be allowed is for them to be better than humans -- a low bar indeed, as noted above. If you hold an opinion that there should be a more stringent requirement, then either you consider your right to the freedom of driving your own car to be worth killing others for, or you are one of the 90% of humans who consider their ability at doing X (driving in this case) to be better than the median.

Are self driving cars better than humans yet? I dunno, I'm purely debating what the threshold should be.
The bar is not as low as you make it out to be. Humans actually make very few errors, given the sheer amount of input they have to process in such a short time span, and this will be a tough act to follow. I have not seen any significant testing in this regard, only the "oh look, our car stops when it sees an isolated bollard" style of tests.
In a world where Silicon Valley invests in useless junk and Chinese manufacturers slap on loads of safety markings just for the sake of it, there must be laws that impose a very high level of accountability on the manufacturers of autonomous vehicles in the event something goes wrong.

Autonomous cars may have their place eventually, but that shouldn't mean non-autonomous vehicles are banned or forcibly made obsolete. This is a freedom that people should not be required to give up.
« Last Edit: June 19, 2017, 05:05:09 am by X »
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Trump IS the Establishment, a lot more than people think he is. That's all a big act, like Clinton having seizures, etc. Just like Obama was. They signed away the rights to do most everything of importance on December 8, 1994.

The real payload is the new trade deals, which are even more radical, with their negative-list approach.

Notice how TTIP and TiSA are still going full speed ahead (and the worst parts of TPP, like the SOE chapter, just got moved to TiSA).


Quote from: X on Today at 21:19:12
Several months ago, the self-driving car manufacturers were trying to lobby governments (including the Australian government) to make legislation outlawing non-autonomous vehicles by the 2020s.

Quote from: JoeN on Today at 22:52:15
Honestly, I just can't see it in the U.S. The nation just got piqued and elected Trump as a big FU to the establishment. Adopting a law like that at a national level would probably create so many dead politicians that it would make the French Revolution look tame in comparison.


No it wouldn't, because people would think they would be able to afford them. It would be timed to create false expectations which would only be dashed in the most deceptive possible ways. They are very, very smart about this kind of thing.

Manufacturers need to make this work in the current system of most people driving themselves.  That is what they promised.  If they can't do it, they can't sell their products.  End of story.


----

People will buy them, or start taking the bus, or more likely stay at home, without any bus. Read some history.

There never is an end of story.

Think of this Tesla as a sort of marketing gimmick. Nothing it does is revolutionary; it's still 100% a human-driven car, it just has some nifty add-on features.


Kind of like my old Tek 2211 (hybrid analog "DSO")

I didn't see people taking naps or having sex or whatever in the initial batch of human-driven cars with cruise control features, nor will we see them doing that now in a Tesla.

But that's really what self-driving cars are all about.
"What the large print giveth, the small print taketh away."
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2317
  • Country: au
Maybe not 100% perfect, but if not 100%, then when something happens the manufacturers of the algorithms must be liable to prosecution, just like an equivalent human driver.

So if I provide a box that, if we substitute it in for all human drivers (admittedly a hypothetical), results in a total of 10,000 deaths per year instead of the current 30,000+ human-caused deaths, you say I should be liable for those 10,000 deaths? Because it feels like I saved 20,000 lives, even if there is manifest room for further improvement...

Of course, humans don't have WiFi connectivity, don't need firmware updates, and don't run the risk of being hacked by an RF blast from the whiz-kid delinquent next door.

Heart attacks, sleep deprivation, blind spots...

The bar is not as low as you make it out to be.

I said the bar is 30,000 deaths per year; I'm not sure what you mean by "that's not as low as you make it out to be". It's a known number: 30,000 deaths per year.

Humans actually make very few errors, given the sheer level of input they need to process in such a short time span, and this will be a tough act to follow. I have not seen any significant testing in this regard, only the "oh look our car stops when it sees an isolated bollard" style tests.

Google's self-driving cars have driven 2 million miles on open roads with 0 fatalities. I'm not sure that 2 million miles has reached the stage of being compelling evidence yet, and I'm not claiming that it has (in fact, the expected number of fatalities for human drivers over the same distance would be about 0.02 by my calculations, so probably not), but the work is being done.
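For what it's worth, a quick sanity check of that 0.02 figure, assuming roughly 35,000 US road deaths per year over roughly 3.1 trillion vehicle-miles (the published order of magnitude for 2015):

```python
# Sanity check of the expected-fatalities estimate above.
us_deaths_per_year = 35_000          # approximate 2015 US road deaths
us_vehicle_miles = 3.1e12            # approximate annual US vehicle-miles
rate_per_mile = us_deaths_per_year / us_vehicle_miles

google_miles = 2e6                   # miles driven by Google's cars
print(f"{google_miles * rate_per_mile:.3f}")  # ~0.023 expected fatalities
```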

Autonomous cars may have their place eventually, but that shouldn't mean non-autonomous vehicles are banned or forcibly made obsolete. This is a freedom that people should not be required to give up.

How many deaths per year would you consider an acceptable price for this freedom?

 

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
Maybe not 100% perfect, but if not 100%, then when something happens the manufacturers of the algorithms must be liable to prosecution, just like an equivalent human driver.

So if I provide a box that, if we substitute it in for all human drivers (admittedly a hypothetical), results in a total of 10,000 deaths per year instead of the current 30,000+ human-caused deaths, you say I should be liable for those 10,000 deaths? Because it feels like I saved 20,000 lives, even if there is manifest room for further improvement...
Your device killed 10,000 people. If you are building something which kills 10,000 people through no fault of their own, you should be held fully responsible for all 10,000 deaths.

Of course, humans don't have WiFi connectivity, don't need firmware updates, and don't run the risk of being hacked by an RF blast from the whiz-kid delinquent next door.

Heart attacks, sleep deprivation, blind spots...
These can be alleviated to a large degree (apart from the heart attacks of course).
As for blind spots, I agree this is where the machine will beat the man. I think you have raised a good point, and I think some automation in terms of crash avoidance is very handy, but not without a human override. It's like having another passenger in the vehicle.

The bar is not as low as you make it out to be.

I said the bar is 30,000 deaths per year; I'm not sure what you mean by "that's not as low as you make it out to be". It's a known number: 30,000 deaths per year.
It's known for non-autonomous vehicles. I will be very surprised if it becomes lower when fully-autonomous vehicles take over.

Humans actually make very few errors, given the sheer level of input they need to process in such a short time span, and this will be a tough act to follow. I have not seen any significant testing in this regard, only the "oh look our car stops when it sees an isolated bollard" style tests.

Google's self-driving cars have driven 2 million miles on open roads with 0 fatalities. I'm not sure that 2 million miles has reached the stage of being compelling evidence yet, and I'm not claiming that it has (in fact, the expected number of fatalities for human drivers over the same distance would be about 0.02 by my calculations, so probably not), but the work is being done.
There aren't many self-driving cars around yet, so this is an insufficient sample. 2 million isn't actually a lot of miles for a handful of cars, and it is quite possible to cover that distance without a single car in the set having an accident.
Also, there have been a few deaths and injuries already.

Autonomous cars may have their place eventually, but that shouldn't mean non-autonomous vehicles are banned or forcibly made obsolete. This is a freedom that people should not be required to give up.

How many deaths per year would you consider an acceptable price for this freedom?
This is a slippery slope. If you want zero deaths a year, just put everybody in jail for safety's sake. It's all about balance. It's attitudes like "if it's one or more it's an issue" that result in nanny states in the first place.

My answer to this question: as many as reasonably necessary for this freedom to continue to exist.
« Last Edit: June 19, 2017, 06:27:37 am by X »
 

Offline Rick Law

  • Super Contributor
  • ***
  • Posts: 3423
  • Country: us
There are two issues mashed together in this discussion.  I think they need to be separated.

One issue is navigation of how to get from point A to point B.  The other issue is how to avoid accidents and I think this is the much more important one.

The car ahead of me had a bike on its roof rack, and as my luck would have it, the bike fell off just in front of me. How will a self-driving car avoid that? Will it decide to veer left and hit the tree, or veer right and hit the Rolls-Royce with a little girl in it? Or just brake hard and pray, since both other options seem uninviting? I made up the Rolls-Royce part (to make that option sound less inviting), but next to me was indeed a car with at least one kid in it. My choice was to brake hard and pray. If it had been a self-driving car, how would it decide, and would I agree with that decision? After all, that decision would affect the rest of my life.

I think that besides distraction, accidents typically originate from something unexpected. Yet most of the current "self-driving trials" seem to be focused not on accident avoidance but on navigation.

No GPS or advanced navigation system will help the self-driving car when a rabbit runs across the road. Only a vision-based system can help avoid the result of the rabbit's decision. In my 30+ years of driving, besides that bike that fell from a car roof and at least half a dozen squirrels that I attempted to avoid (some unsuccessfully), I netted one rabbit, two deer, and at least one bird. All of these could have been very costly. (I was on River Road, and yes, it was a road running next to a river. Accident avoidance there does carry a high risk of getting soaked.)

Seeing the video of how the Tesla failed to detect the barrier makes me think they are nowhere near ready for prime time.

It looks like Tesla follows the Microsoft rollout model: let your customers be your beta testers. They could test in the cities for years, and the cars would accumulate huge experience of how to avoid a texting pedestrian crossing your path, but if you are the first owner who has to drive past a soccer field on a hill every day, their learning may mean you attending a kid's funeral and carrying that feeling of guilt...
« Last Edit: June 19, 2017, 06:59:52 am by Rick Law »
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4208
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Your device killed 10,000 people. If you are building something which kills 10,000 people through no fault of their own, you should be held fully responsible for all 10,000 deaths.

You've hit the nail on the head here.

If I go out and buy a new car today, I can be (almost!) completely certain that it won't kill me through any kind of design or manufacturing defect. That's an amazing achievement considering what it does, how it's achieved, and the levels of abuse to which it will be subjected.

That's not to say I can't drive it into a solid object, or that some other driver won't run me off the road, but those are outside the scope and responsibility of the car manufacturer.

But, if the car is self-driving, then responsibility for the car and its actions has to shift. There are a couple of logical steps which lead to this inevitable conclusion:

1) If we accept that the whole reason for self-driving is that a computer controlled car will be safer than a human controlled one, it logically follows that under some circumstances, it will do things that are different to what a human driver would have done.

2) In order to realise the benefit, the human occupant of the vehicle must allow the computer pilot to have control, even when it takes actions which are not what a human driver would have done.

Therefore, responsibility for the car's actions - including any injury which results from its actions - can only rest with its manufacturer.

I happen to agree that a machine which is demonstrably (say) 3x safer than the people it replaces should, in theory, be regarded as a good thing. Unfortunately that's not the way society works, and the news headlines will be full of people saying "tell that to the families of the people your death-machine killed".

As engineers, we should be concerned about this. We will ultimately end up being the ones who write the algorithms and design the hardware. More than a few of us will probably end up in jail for causing death by dangerous driving, without ever leaving our offices, before the legal and social aspects of the technology work themselves out.

In the meantime, I too will be out enjoying my motorbike.

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
1) If we accept that the whole reason for self-driving is that a computer controlled car will be safer than a human controlled one, it logically follows that under some circumstances, it will do things that are different to what a human driver would have done.

I suspect that the driving (no pun intended) forces are:
  • Consumer/purchaser convenience and/or brag factor
  • Corporate profitability - Establishing an early, dominant position in a market sector that didn't previously exist.
  • Engineer appeal - "Hot damn, I'm working on frickin' self-driving cars."

"Improved safety" is an after the fact claim to justify the pursuit, not a real driving force.

I happen to agree that a machine which is demonstrably (say) 3x safer than the people it replaces should, in theory, be regarded as a good thing. Unfortunately that's not the way society works, and the news headlines will be full of people saying "tell that to the families of the people your death-machine killed".

The road traffic incidents that make the headlines, the "something must be done" moments, involve multiple simultaneous fatalities, the classic "motorway pile-up". The same moment will come for self-driving vehicles in the same way, both politically and causally. Most 'pile-ups' happen because a number of drivers make the same error at the same time, following too close in unsuitable conditions (fog, rain, whatever). That is, there is a systematic failure in the human risk assessment algorithm.

At some point a similar failure will happen with self-driving algorithms, where a common systematic failure, probably an error in a programmer's assumptions, causes a large number of vehicles in close proximity to make the same mistake, at the same time, at high speed, resulting in mass casualties and fatalities. It may even be worse than with human-driven vehicles: there, some variability between driver actions spreads the risk and diffuses the damage, whereas self-driving vehicles might all make exactly the same error with no spread in responses, which concentrates the damage.
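A toy simulation of that concentration effect (all probabilities below are invented for illustration, and chosen so both models have the same expected number of crashes):

```python
import random

# Toy model: a convoy of 50 vehicles meets the same hazard. Human drivers
# err independently; a fleet running identical software either all cope
# or all fail together. The probabilities are assumptions, not data.
N_CARS = 50
P_HUMAN_ERROR = 0.02   # assumed chance any one human driver fails
P_COMMON_BUG = 0.02    # assumed chance the shared algorithm fails
TRIALS = 100_000

def human_crashes():
    return sum(random.random() < P_HUMAN_ERROR for _ in range(N_CARS))

def fleet_crashes():
    return N_CARS if random.random() < P_COMMON_BUG else 0

human = [human_crashes() for _ in range(TRIALS)]
fleet = [fleet_crashes() for _ in range(TRIALS)]

# Same mean, very different worst case: the fleet's errors arrive en masse.
print("human: mean %.2f, worst %d" % (sum(human) / TRIALS, max(human)))
print("fleet: mean %.2f, worst %d" % (sum(fleet) / TRIALS, max(fleet)))
```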
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2317
  • Country: au
So if I provide a box that, if we substitute it in for all humans drivers (admittedly a hypothetical), and it results in a total of 10,000 deaths per year instead of the current 30,000+ human-caused deaths, you say I should be liable for those 10,000 deaths? Because it feels like I saved 20,000 lives, even if there is manifest room for further improvement...
Your device killed 10,000 people. If you are building something which kills 10,000 people through no fault of their own, you should be held fully responsible for all 10,000 deaths.

I suppose you consider the trolley problem a difficult question too? At least we've discovered the core point of disagreement between us. I'm not going to build this box if I'm going to be held responsible even for a decrease in deaths; the idea that you're more interested in assigning blame for the 10,000 deaths than in the enormous suffering and heartbreak associated with the 20,000 extra deaths is just unconscionable, at least with respect to my personal axioms/approach to ethics. The fact that you have absolutely no interest in the 20,000 lives saved is just bewildering, absolutely bewildering to me. Those aren't statistics, those are real people in real families, only some of whom are even responsible for their deaths.

Autonomous cars may have their place eventually, but that shouldn't mean non-autonomous vehicles are banned or forcibly made obsolete. This is a freedom that people should not be required to give up.

How many deaths per year would you consider an acceptable price for this freedom?
This is a slippery slope. If you want zero deaths a year, just put everybody in jail for safety's sake. It's all about balance. It's attitudes like "if it's one or more it's an issue" that result in nanny states in the first place.

My answer to this question: as many as reasonably necessary for this freedom to continue to exist.

Oh, I agree; I'm not implying the answer should be zero. But I mean, geez, 30,000 per year? That feels like going a little too far off the other end of the balance. Not sure I could live with that as a voter/lawmaker.

 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12288
  • Country: au
So if I provide a box that, if we substitute it in for all humans drivers (admittedly a hypothetical), and it results in a total of 10,000 deaths per year instead of the current 30,000+ human-caused deaths, you say I should be liable for those 10,000 deaths? Because it feels like I saved 20,000 lives, even if there is manifest room for further improvement...
Your device killed 10,000 people. If you are building something which kills 10,000 people through no fault of their own, you should be held fully responsible for all 10,000 deaths.

I suppose you consider the trolley problem a difficult question too? At least we've discovered the core point of disagreement between us. I'm not going to build this box if I'm going to be held responsible even for a decrease in deaths; the idea that you're more interested in assigning blame for the 10,000 deaths than in the enormous suffering and heartbreak associated with the 20,000 extra deaths is just unconscionable, at least with respect to my personal axioms/approach to ethics. The fact that you have absolutely no interest in the 20,000 lives saved is just bewildering, absolutely bewildering to me. Those aren't statistics, those are real people in real families, only some of whom are even responsible for their deaths.

The issue is not one of statistical reality - it is of public perception.

No matter how many lives are saved, the fact that 10,000 of the deaths that do occur can be associated with a SINGLE point of commonality - ie autonomous vehicles - is what will get the press.

There is a parallel in the judicial system, in any society: it's not whether justice is served that matters - it is that justice is SEEN to be served.

The same perception bias applies.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4067
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
...

Therefore, responsibility for the car's actions - including any injury which results from its actions - can only rest with its manufacturer.

I happen to agree that a machine which is demonstrably (say) 3x safer than the people it replaces should, in theory, be regarded as a good thing. Unfortunately that's not the way society works, and the news headlines will be full of people saying "tell that to the families of the people your death-machine killed".

As engineers, we should be concerned about this. We will ultimately end up being the ones who write the algorithms and design the hardware. More than a few of us will probably end up in jail for causing death by dangerous driving, without ever leaving our offices, before the legal and social aspects of the technology work themselves out.

In the meantime, I too will be out enjoying my motorbike.
We will indeed be the ones in control of the development of the algorithms. There isn't that much government involved yet...

Uncle Bob said in a talk that one day there will be software that kills 10,000 people, and that when it happens, governments will step in and craft laws to prevent further such events. It should be the developers and engineers who shape their workflow and ethics so that this event never happens. I think he's right.

Meanwhile, have fun driving your organ donation machine.
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4208
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Right. There are two separate things happening here.

One is that the overall death toll reduces from, let's say, 30,000 to 10,000. That's fantastic!

BUT: responsibility for those 10,000 deaths is no longer distributed amongst the drivers of the cars they were in and the vehicles that may have hit them. It becomes concentrated at a single, identifiable point of blame: the person who signed off the auto-pilot as being fit for purpose.

I would never be willing to be that person, because however many lives my self-driving technology might save, I'll be the one straight up against the wall when the first absolutely inevitable fatality does happen.

Let me tell a short story...

A few years ago I (voluntarily!) attended a short motorcycle safety course run by the police. One of the people who spoke there was an accident investigator, and he gave an insightful and useful lecture on the common causes of bike crashes.

(Incidentally, one of the main ones is failing to recognise just how capable a modern motorcycle is when it comes to braking effectiveness, or achievable lean angle and corner speed... having made a mistake, riders panic rather than simply leaning further or braking harder in order to make it round a corner or stop in time - but I digress).

All was going well until the topic came up of modifications. It's very common here for riders to swap out stock suspension parts in favour of upgraded aftermarket alternatives, but he was dead set against the whole idea. He was adamant that manufacturers always "know best", and that he had seen a number of crashes where a rider had made his bike "unrideable" by modifying it.

What he failed to realise was that his own data set was skewed. He only ever saw crashed bikes. He had no way of knowing if and when a rider had avoided a crash precisely because his bike was properly set up for his size, weight, preferences and riding style. For all his expertise, he was completely oblivious to the fact that his data set had holes in it.

I fear the same will happen with self-driving cars. Make an improvement that saves 20,000 lives, and you'll still end up taking responsibility for the 10,000 who - it will be argued - might not have died were it not for your "so-called safe" auto-pilot.

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4208
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Meanwhile, have fun driving your organ donation machine.

Oh, I do, don't worry about that.

I'm *much* more scared that I'll find myself a 95 year old dribbling idiot in a hospital bed somewhere, regretting having spent an entire life not having lived, than I am of the prospect of a bike accident.

I have a good helmet and leathers, and the wisdom not to do anything stupid. The rest is a risk which I absolutely, completely, understand and accept.

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
I suppose you consider the trolley problem a difficult question too?
No, but it has nothing to do with this discussion.

At least we've discovered the core point of disagreement between us. I'm not going to build this box if I'm going to be held responsible even for a decrease in deaths; the idea that you're more interested in assigning blame for the 10,000 deaths than in the enormous suffering and heartbreak associated with the 20,000 extra deaths is just unconscionable, at least with respect to my personal axioms/approach to ethics. The fact that you have absolutely no interest in the 20,000 lives saved is just bewildering, absolutely bewildering to me. Those aren't statistics, those are real people in real families, only some of whom are even responsible for their deaths.
You clearly don't care about the 10,000 people your device killed, and you're fine with "Oh well, at least I statistically saved 20,000 lives!" This is like a murderer saying she's not a murderer because she "saved 8 lives" by killing 2 people instead of 10.

In this case, the person who caused the accidents can easily be brought to justice (given the legal system works). In the case of driverless cars, the blame will be entirely on the manufacturer of the device. They won't care about heartbreak and suffering, and it will be a case of the small mouse trying to fight the big cat.
I'm not suggesting in any way that a decrease in deaths is bad, but there are real practical and legal considerations at play that you have conveniently (and hypocritically) ignored in favour of a purely sympathetic and emotional appeal.

If Microsoft/Apple/Google don't care about treating people's data and computers as their toys, I doubt they'll care about the suffering families endure when their driverless cars harm others.

Oh, I agree; I'm not implying the answer should be zero. But I mean, geez, 30,000 per year? That feels like going a little too far off the other end of the balance. Not sure I could live with that as a voter/lawmaker.
The US has over 325 million people, and this figure represents roughly 0.009% of the population. I think that's quite acceptable given that far more people die from illness and other accidents. Perhaps we should ban people from going outside, and allow health inspectors forced entry so they can sterilise your home. While we're at it, the only food everyone's allowed to eat is government-approved nutritional paste, certified free of contaminants.
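Checking that figure with the thread's own numbers (30,000 deaths per year and a population of 325 million, both taken from the posts above):

```python
# Road deaths as a share of the US population, using the figures above.
deaths_per_year = 30_000
us_population = 325_000_000
print(f"{100 * deaths_per_year / us_population:.4f}%")  # 0.0092%, i.e. ~0.009%
```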

I'm all for saving lives, but not at the cost of freedom.
« Last Edit: June 19, 2017, 12:27:36 pm by X »
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2317
  • Country: au
I agree with everything in your message, X, except what is quoted below;

At least we've discovered the core point of disagreement between us. I'm not going to build this box if I'm going to be held responsible even for a decrease in deaths; the idea that you're more interested in assigning blame for the 10,000 deaths than in the enormous suffering and heartbreak associated with the 20,000 extra deaths is just unconscionable, at least with respect to my personal axioms/approach to ethics. The fact that you have absolutely no interest in the 20,000 lives saved is just bewildering, absolutely bewildering to me. Those aren't statistics, those are real people in real families, only some of whom are even responsible for their deaths.
You clearly don't care about the 10,000 people your device killed, and you're fine with "Oh well, at least I statistically saved 20,000 lives!"

Statistically? Should we give no credit to the measles vaccine because the 17.1 million lives it has saved are merely "statistical" "estimates"?

I care about the fact that I am responsible for there being 10,000 deaths instead of 30,000, and would work my ass off to get it down to 5,000 and below. Again, people who have trouble with the trolley problem can talk their way into allowing 30,000 people to die, but not me. I'm happy for us to agree to disagree on this point though.

I suppose you consider the trolley problem a difficult question too?
No, but it has nothing to do with this discussion.

This is like a murderer saying she's not a "real" murderer because she only killed 2 people instead of 10, thus saving 8 lives.

I mean, that is exactly the trolley problem with 10 people on the main line and 2 people on the siding, but eh whatever
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4208
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Statistically? Should we give no credit to the measles vaccine because the 17.1 million lives it has saved are merely "statistical" "estimates"?

If that vaccine were known to have killed 1 million of the people who received it, would you give it to your children?

On balance, you certainly should. After all, it's 17 times as likely to save their lives as it is to kill them.

People don't work that way though, which is probably why I prefer working with machines instead.
 
The following users thanked this post: Someone, rs20

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
Statistically? Should we give no credit to the measles vaccine because the 17.1 million lives it has saved are merely "statistical" "estimates"?
This is again detracting from the issue.
The vaccine is not taking people's freedom away just for the sake of saving lives. It is actually immunising people (who are at risk of a gruesome death), who can then carry on with their lives as normal. To the best of my knowledge, the vaccine didn't actually kill anyone, and to date the risk of that happening is demonstrably tiny, so it is better to take that tiny calculated risk.
Of course if the vaccine kills or injures anyone, questions will be asked of the doctor who performed the vaccination, and possibly the manufacturer of the vaccine, just like any other medical procedure gone wrong.

In your initial problem, your box actually resulted in the deaths of 10,000 when they might have otherwise been fine. I agree that 10,000 is better than 30,000 in this case, but it still doesn't change the fact that the responsibility for the deaths now all lies upon you, since those 10,000 people were certainly not at fault for their deaths.

I care about the fact that I am responsible for there being 10,000 deaths instead of 30,000, and would work my ass off to get it down to 5,000 and below. Again, people who have trouble with the trolley problem can talk their way into allowing 30,000 people to die, but not me. I'm happy for us to agree to disagree on this point though.
Can't disagree with this one.

I mean, that is exactly the trolley problem with 10 people on the main line and 2 people on the siding, but eh whatever
The way I see it, with the trolley problem, whoever is responsible for either 1 or (n-1) deaths is the villain who tied the people onto the line in the first place; all you did was make a decision that influences how many deaths the villain is responsible for. If the people got stuck on the line of their own accord, it's probably their own fault, and you can't be held responsible for any of the deaths, regardless of whether or not you flicked the switch.
« Last Edit: June 19, 2017, 01:46:14 pm by X »
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
I completely agree, but the whole thing is 100% about profit and a planned-obsolescence nirvana. The powers that be are also increasingly concerned about their own legitimacy (this is a very big concern among them) and their power, and they will likely have rights to all sorts of unknown features in self-driving cars - so useful for surveillance, or even worse. So I think we'll see them much sooner than later, even if the business case for them gets sketchier and sketchier fast, the number of people driving to work drops like a stone, and we clearly don't need them - which in fact I think will likely be the case.

Due to automation and the Internet, the number of people needing to go to work in their nearby city every day may get much smaller. Where people choose to live may no longer be tied to where they work. That was the original promise of the Internet of Things, or "ubiquitous computing", at Xerox PARC. It was about workplace cooperation at a level rarely seen in corporations, BTW.

Interesting research, now quite old, but it could be realized today to make people significantly more productive (though a surveillance nightmare too).

Good for very high-functioning, high-trust, non-hierarchical research groups. Kind of like my ideal of where we should be going. If we want to maintain high levels of employment (and that is in fact what we must do), people at age 20 will need to be what a PhD is today; we need to accelerate the learning process, stop wasting time beating the spirit out of people, and let them remain childlike in terms of curiosity their whole lives. (Creative geniuses are often like that.)

"Ubiquitous computing"  - check it out.  They had a vision for the future many of us lack today. Its one where people are online but they are also much more engaged with their co-workers and friends, not detached and isolated.


Quote from: X on Yesterday at 22:56:33

    Autonomous cars may have their place eventually, but that shouldn't mean non-autonomous vehicles are banned or forcibly made obsolete. This is a freedom that people should not be required to give up.
« Last Edit: June 19, 2017, 01:14:36 pm by cdev »
"What the large print giveth, the small print taketh away."
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
The body IS a machine, and this is a great example of why we should take a pragmatic approach to that and use what we know to protect it.

The grain of truth at the core of the IMHO phony and very contentious "vaccine issue" is a serious threat posed to all of us by exposures to strongly pro-oxidant substances - that threat being especially serious at certain points of the life cycle.

This is a fairly simple scientific fact which would be easy to teach people about..

Something that has been known for at least three decades to me, and explained in at least tens of thousands of scientific papers is the importance of glutathione to health.

Due to our need for glutathione, these pro-oxidant substances can become additive toxicants. Many are fairly common and many are completely unregulated. The glutathione is there to protect our cells, and when a toxic challenge from some substance is encountered it must be there, or the cell may have to kill itself to prevent DNA damage and cancer. So exposure to pro-oxidant toxicants is expensive to our health and should be compensated for.

As multiplication of cells is finite (see "Hayflick limit"), the built-in repair mechanism - apoptosis of damaged cells and cell division of undamaged ones - becomes less and less available as we age. Inflammation levels also rise, using up our precious glutathione even without toxic exposures. Everybody should supplement with NAC as they get older: not a lot, but enough to ensure we're getting enough cysteine to make enough glutathione for our age-related needs, and more if we know we're getting toxic exposures. (Lead is strongly pro-oxidant, and NAC is useful in cases of chronic lead exposure. Another amino acid of use in that context is taurine.)

The big problem, as described in the paper linked below, is that pro-oxidant toxicants have the potential to cause birth defects in pregnancy due to changes in the expression of two genes, "Fyn" and "c-Cbl". Very low-level exposures can disrupt precursor cell function - in other words, disrupt cell differentiation.

So a very serious threat to humanity's reproductive process is posed by low-level pollutants, and all pollutants and other chemicals that have strong pro-oxidant activity should be considered additive, not regulated separately.

Pro-oxidant toxicants are toxic to the unborn children of pregnant women, and should be of concern to everyone else as well: they can dramatically affect the IQs of children and cause life-changing autoimmune disease and inflammation in people of all ages.

Certain population groups already have chronically low glutathione due to the creation of cross-links (advanced glycation end-products, commonly abbreviated "AGEs") as we age. So we should be concerned when any of the at-risk groups (pregnant women or women of childbearing age, infants, children, the old or the sick) are exposed to these substances even at very low levels.

However, many pro-oxidant substances are unregulated, or regulated only at extremely high levels. That leads to binary thinking and extremely ill-advised behavior: people assuming that chronic, 24-hours-a-day, seven-days-a-week exposure below some 30-year-old "action level", picked to apply in a workplace setting of 8 hours a day, 40 hours a week, is okay.

Here is the paper. Note that people exposed to a plethora of pro-oxidant substances can supplement with N-acetylcysteine to improve their cells' "redox status", substantially reducing the risk of toxicity to an unborn child and of cancers (by reducing apoptosis, the programmed death of exposed cells) and preserving much of the body's finite cellular repair capacity for longer.

PLOS Biology: Chemically Diverse Toxicants Converge on Fyn and c-Cbl to Disrupt Precursor Cell Function

Quote from: rs20 on Today at 06:30:59
Statistically? Should we give no credit to the measles vaccine because the 17.1 million lives it has saved are merely "statistical" "estimates"?
Quote from: AndyC_772 on Today at 06:40:25

If that vaccine were known to have killed 1 million of the people who received it, would you give it to your children?

On balance, you certainly should. After all, it's 17 times as likely to save their lives as it is to kill them.

People don't work that way though, which is probably why I prefer working with machines instead.
« Last Edit: June 19, 2017, 01:55:43 pm by cdev »
"What the large print giveth, the small print taketh away."
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5170
  • Country: us
This thread started by talking about situations that will be difficult or impossible for self-driving cars. A recent letter to American Scientist pooh-poohed the idea of a self-driving car by relating the author's recent cross-country trip, which involved several difficult situations including iced-up roadways, a stint off-road, and others that I don't remember.

The solution to these problems is simple and was identified in science fiction decades ago. As with all solutions it isn't perfect, but it will work most of the time. You just retain the capability for the driver to control the car, and switch to the driver in situations outside the autopilot's capabilities. You get the benefits of the autopilot on long boring cruises across the country, and don't have to pay for some genius-of-the-future autopilot that can handle the bizarre cases.

Simple GPS maps could handle the decision function in most cases. Augment that with a few sensors and contact with the weather bureau, and the autopilot would have a pretty robust way of identifying when it was in over its head.
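A minimal sketch of that hand-off decision, with invented inputs and thresholds (the map flag, weather feed, and confidence figure are all assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    on_mapped_road: bool      # from GPS map data
    icing_risk: float         # 0..1, e.g. from a weather-service feed
    sensor_confidence: float  # 0..1, the perception system's self-estimate

# Hypothetical thresholds; real values would need extensive validation.
def should_hand_to_driver(c: Conditions) -> bool:
    return (not c.on_mapped_road
            or c.icing_risk > 0.3
            or c.sensor_confidence < 0.8)

# Example: mapped road, but icy and the autopilot knows it's struggling.
if should_hand_to_driver(Conditions(True, 0.5, 0.7)):
    print("Out of its depth: alert the driver and hand over control")
```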

It is easy to find fault with this solution. Many of the cases identified in this thread and others are things that would escape this system and put the autopilot in a difficult situation. But a little thought shows that human drivers don't generally do very well in these cases either - take the earlier example of emergency braking with cars in adjacent lanes and a child in one of them. How many of us can truly say that in an emergency braking situation we run through all of the options and select the best?
 

