Author Topic: Self Driving Cars: How well do they work in areas with haphazard driving rules?  (Read 37284 times)

0 Members and 1 Guest are viewing this topic.

Offline JoeNTopic starter

  • Frequent Contributor
  • **
  • Posts: 991
  • Country: us
  • We Buy Trannies By The Truckload
Third world countries, China, Lower Manhattan, etc.

It seems they work well now when they are tested on all those rural roads.  How well do they do in crazier areas and parking lots?  What is the state of the art on this now?  Anyone knowledgeable?
Have You Been Triggered Today?
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Not knowledgeable, but I doubt that anyone has even begun to try.
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12298
  • Country: au
I think DARPA might have some answers - but I'm not sure they will be keen to go into a lot of detail.
 

Offline TK

  • Super Contributor
  • ***
  • Posts: 1722
  • Country: us
  • I am a Systems Analyst who plays with Electronics
UBER is doing it in Pittsburgh, PA
 

Offline JoeNTopic starter

  • Frequent Contributor
  • **
  • Posts: 991
  • Country: us
  • We Buy Trannies By The Truckload
My cousin's husband proved this personally. He was driving within the speed limit on an unlit road when he was blinded by some dickhead driving with his high beams on. By the time his vision recovered, he had hit and killed a man who was fixing his car in the middle of the road with no warning signs whatsoever, not even head/tail lights. The victim's family demanded a 1 million CNY settlement, which my cousin's family couldn't afford. Now her husband is serving a 2-year sentence.

That sounds a lot like blood money.  Not too many advanced countries still do it this way.  I am surprised China still does, since it seems to be viewing itself as an advanced country these days.
Have You Been Triggered Today?
 

Offline TK

  • Super Contributor
  • ***
  • Posts: 1722
  • Country: us
  • I am a Systems Analyst who plays with Electronics
My cousin's husband proved this personally. He was driving within the speed limit on an unlit road when he was blinded by some dickhead driving with his high beams on. By the time his vision recovered, he had hit and killed a man who was fixing his car in the middle of the road with no warning signs whatsoever, not even head/tail lights.
A self-driving car could have avoided killing the man, because the LIDAR and other sensors used in these cars are not blinded by the high beams of an oncoming car... and it has a reaction time that is orders of magnitude faster than a human's.
 

Online Monkeh

  • Super Contributor
  • ***
  • Posts: 7992
  • Country: gb
I don't think self-driving will work in China due to legal issues. Chinese law is very unfriendly to drivers compared to pedestrians.

My cousin's husband proved this personally. He was driving within the speed limit on an unlit road when he was blinded by some dickhead driving with his high beams on. By the time his vision recovered, he had hit and killed a man who was fixing his car in the middle of the road with no warning signs whatsoever, not even head/tail lights. The victim's family demanded a 1 million CNY settlement, which my cousin's family couldn't afford. Now her husband is serving a 2-year sentence.

And he's entirely innocent of blame for not stopping his vehicle when unable to see where he was going.
 

Offline Rick Law

  • Super Contributor
  • ***
  • Posts: 3441
  • Country: us
...
Now, if a self driving car hit and killed someone while still not breaking traffic rules, who's going to take the responsibility?
...

The USA "MO" is to sue everyone you possibly can, particularly the ones with the deepest pocket.

In other words, the question is not "who is going to take the responsibility" but instead the question is "who best can we get compensation from."
 

Offline EEVblog

  • Administrator
  • *****
  • Posts: 37738
  • Country: au
    • EEVblog
Just basic stuff like accidents, temporary roadwork speed limits, etc. is no doubt going to cause issues for driverless cars.
The permutations of possible issues are effectively infinite.
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Here in the US there is already talk about some kind of legal exemption from lawsuits for manufacturers.

Since these applications "are essential to national security" (like fracking and its trade-secrecy exemptions from the Clean Water Act and right-to-know acts), it's likely manufacturers will get preemption of claims against them in some manner.

Just stay off the roads (and the first few hundred feet of your lawn |O ) and you'll be okay.
"What the large print giveth, the small print taketh away."
 

Offline TK

  • Super Contributor
  • ***
  • Posts: 1722
  • Country: us
  • I am a Systems Analyst who plays with Electronics
Just basic stuff like accidents, temporary roadwork speed limits, etc. is no doubt going to cause issues for driverless cars.
The permutations of possible issues are effectively infinite.
Self-driving cars are based on machine learning + sensors (LIDAR, cameras, IR) + GPS data + real-time traffic information. GPS is needed to get from point A to point B, but the driving itself is done by the car using machine-learning algorithms. The objective is for the car to learn how to drive and add more capabilities and experience as it keeps driving. I think it was at CES 2017 that Audi showed a car that learned how to drive starting from basic pre-defined knowledge. It is a really surreal experience watching the Uber self-driving cars roaming the streets of Pittsburgh, and watching the Tesla self-driving cars on YouTube. It seems futuristic, but I think we are very close to seeing them on the roads in the near term.
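The sense/plan/act loop that post describes can be sketched in a few lines. This is a toy illustration only: the single-sensor input and all thresholds are invented for the example, and no vendor's actual stack works this simply.

```python
# Toy sense -> plan -> act loop: pick a target speed from the nearest
# LIDAR return and the map's speed limit. All thresholds are invented
# for illustration; real stacks fuse many sensors with learned models.

def plan_speed(lidar_range_m: float, speed_limit_kmh: float) -> float:
    """Return a target speed in km/h given the closest obstacle distance."""
    if lidar_range_m < 5.0:          # obstacle very close: emergency stop
        return 0.0
    if lidar_range_m < 30.0:         # obstacle ahead: slow down proportionally
        return speed_limit_kmh * (lidar_range_m / 30.0)
    return speed_limit_kmh           # clear road: drive the limit

print(plan_speed(2.0, 50.0))    # 0.0  (brake)
print(plan_speed(15.0, 50.0))   # 25.0 (slow)
print(plan_speed(100.0, 50.0))  # 50.0 (cruise)
```

The real difficulty, as the rest of the thread argues, is everything this sketch leaves out: other drivers, rules that locals don't follow, and situations no threshold anticipates.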
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Is it true that if a driver (typically a rich person) seriously injures somebody (typically a poor person) in China, they sometimes then run them over a few times to kill them so that they have to pay less money?

Videos: https://www.youtube.com/results?search_query=driver+kills+pedestrian+in+china

That should be considered to be premeditated murder!


Quote from: blueskull on Today at 20:37:53
Quote from: Monkeh on Today at 20:22:29
And he's entirely innocent of blame for not stopping his vehicle when unable to see where he was going.

He's not completely innocent, but he shouldn't take the majority of the responsibility.
Current Chinese law says that if a motor vehicle hits a non-motor vehicle or a pedestrian, the motor vehicle takes at least 70% of the responsibility, and the only exception is intentional suicide. That's just bullshit.
« Last Edit: June 18, 2017, 04:31:46 am by cdev »
"What the large print giveth, the small print taketh away."
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
I wouldn't be surprised if that statement was completely false.

Lawsuits of all kinds in the US are extremely difficult to finance, and many kinds of suits, especially medical malpractice claims, are at historic lows (despite very high rates of medical injury).

I don't see people who are injured by self-driving cars being able to sue, even when the other party is clearly at fault.

The system is set up to make justice next to impossible. (As long as one person in a million gets justice and the news covers it, the government doesn't care)

"Subrogation clauses" in many contracts involving insurance, especially employer plans which until recently most Americans had through their jobs if they were employed, make suing- under contingency agreements- impossible because the (health) insurance company who rendered care after the person was paralyzed, gets first crack at any "winnings".  Only after they have been reimbursed at meaningless inflated rates does the injured attorney get paid. Whatevers left after that if its not a deficit goes to the "winner" I suppose. Even if the entire rest of their lives they will need medical care, tough luck. Its the illusion of justice, not justice, that matters.



Quote from: Rick Law on Today at 20:49:45
Quote from: blueskull on Today at 19:47:16
...
Now, if a self driving car hit and killed someone while still not breaking traffic rules, who's going to take the responsibility?
...

The USA "MO" is to sue everyone you possibly can, particularly the ones with the deepest pocket.

In other words, the question is not "who is going to take the responsibility" but instead the question is "who best can we get compensation from."
« Last Edit: June 18, 2017, 04:45:46 am by cdev »
"What the large print giveth, the small print taketh away."
 

Offline EEVblog

  • Administrator
  • *****
  • Posts: 37738
  • Country: au
    • EEVblog
In other words, the question is not "who is going to take the responsibility" but instead the question is "who best can we get compensation from."

That's entirely what insurance is for.
No point suing someone who doesn't have insurance; it's too hard to get anything. But an insurance company will simply pay out as a statistical loss.
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
How much do lawyers get paid in China?

Here in the US better attorneys often make over a thousand dollars an hour. Mediocre attorneys might make $300/hr.
"What the large print giveth, the small print taketh away."
 

Offline JoeNTopic starter

  • Frequent Contributor
  • **
  • Posts: 991
  • Country: us
  • We Buy Trannies By The Truckload
...
Now, if a self driving car hit and killed someone while still not breaking traffic rules, who's going to take the responsibility?
...

The USA "MO" is to sue everyone you possibly can, particularly the ones with the deepest pocket.

In other words, the question is not "who is going to take the responsibility" but instead the question is "who best can we get compensation from."

Sure, but the difference between the U.S. (and most countries with English-derived law) and blood-money countries is that the law doesn't turn an accident into a crime if and only if you are unable to come up with a large settlement. That was the sum of my point. It's not that their law is invalid, just interesting to me. I wonder why insurance wouldn't cover it; that's how we do it in the U.S. I have $300,000 worth of liability insurance, which is considered on the low side, and that is a lot more than 1 million CNY.
Have You Been Triggered Today?
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
I'd really like to see some coverage of how things like the Uber and Google self driving cars react to completely unstructured situations like accidents, and slightly more structured things which are off the script, like lane closures for road repairs. It's easy to see how an automated car would adopt a safe position, but how it would get through situations that even humans would puzzle over is more interesting.
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
I'd really like to see some coverage of how things like the Uber and Google self driving cars react to completely unstructured situations like accidents, and slightly more structured things which are off the script, like lane closures for road repairs. It's easy to see how an automated car would adopt a safe position, but how it would get through situations that even humans would puzzle over is more interesting.
We will only be told as little as possible. I'm sure potential issues would be inadvertently exposed by releasing such information. Believe me, Google and Uber will hide everything, just like drug companies that run multiple hidden trials for a new drug while disclosing only the positive ones to the FDA to get approval.
Google and Uber are in it for the money.
They will guard/hide any faults or problems any way they can so they can get a product on the market ASAP and assure the public that it is safe.
They will avoid any tests which may reveal such problems.
If you think otherwise, you are a fool with no clue how much that kind of money can manipulate US approvals.
« Last Edit: June 18, 2017, 07:35:50 am by BrianHG »
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
I'd really like to see some coverage of how things like the Uber and Google self driving cars react to completely unstructured situations like accidents, and slightly more structured things which are off the script, like lane closures for road repairs.

I'm curious about how they react in situations where obeying the rules of the road is not the norm.

Programming a machine with "when X, do Y" is fair enough. For example, provided it's in a country where people do generally stop at red lights, a machine programmed to stop at red lights won't have a problem.

But what if it's driving through somewhere where expectations are different? What if, for example, stopping at a red light is likely to get you rear-ended by someone who's not expecting it? What if it will get you car-jacked?

What if the way to minimise the chances of an accident isn't to do what you're supposed to do, but to do what other local drivers expect you to do?

What rules-of-the-road do you program in?

Offline SeanB

  • Super Contributor
  • ***
  • Posts: 16283
  • Country: za
They can try them here in South Africa. If they can drive for a year in any big city and not get hit by a taxi driving on the wrong side of the road, stopping anywhere, crossing six lanes on the freeway to pick up a fare, driving the wrong way to avoid traffic cops, running red lights ( Eish it was fresh red mlungu, fresh red), and taking the rule book and using it as toilet paper, then they will be fine everywhere. You can do the same testing in Mumbai as well; all you get extra is a few more animals in the road (a few more, not many more), but fewer drunk drivers and pedestrians.
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
In the US the worth of a human life appears to be far more income-dependent, meaning dependent on projected future earnings and familial needs. That's why undesirables (toxic facilities) are always located in poor areas, and lawyers won't ever work on contingency for poor people. The lives of children and old people are valued on their future earnings too, which gives you an idea of how much lawyers want those cases.


I can hear them now.


"Look, if you don't trust modern technology, stay off the roads."


China no longer requires that people get permission slips to travel, right?


I see networked self-driving cars and the lack of widely used, inexpensive public transit as a form of insurance for despotic regimes, sort of like the "Internet kill switch" concept.



"What the large print giveth, the small print taketh away."
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
Enjoy!

 :-DD
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Third world countries, China, Lower Manhattan, etc.

It seems they work well now when they are tested on all those rural roads.  How well do they do in crazier areas and parking lots?  What is the state of the art on this now?  Anyone knowledgeable?
I think that is the easiest part to fix: don't hit other objects while travelling towards the destination. I have some hands-on experience with driving in Jakarta (Indonesia): you just go with the flow and don't hit anything. There is no other way to go, so even driving on the left side of the road isn't hard to do.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
you just go with the flow and don't hit anything

As an engineer, would you ever program a machine to "just go with the flow", if that meant contravening the local rules and regulations that apply?

How would you justify that decision, in court if necessary?

I don't think I could. I'd probably try and demonstrate that a vehicle which adhered rigidly to the law was incapable of functioning in that environment, and leave it for others to argue over what the solution should be. Most likely, not using a self-driving vehicle at all would be the best decision, IMHO.

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
you just go with the flow and don't hit anything

As an engineer, would you ever program a machine to "just go with the flow", if that meant contravening the local rules and regulations that apply?
Note the title of the thread!
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
you just go with the flow and don't hit anything
As an engineer, would you ever program a machine to "just go with the flow", if that meant contravening the local rules and regulations that apply?

How would you justify that decision, in court if necessary?
I would not program "my" self-driving car to studiously adhere to the speed limits on controlled-access expressways as one concrete example. No good comes from that.
 

Offline vodka

  • Frequent Contributor
  • **
  • Posts: 518
  • Country: es
And he's entirely innocent of blame for not stopping his vehicle when unable to see where he was going.

He's not completely innocent, but he shouldn't take the majority of the responsibility.
Current Chinese law says that if a motor vehicle hits a non-motor vehicle or a pedestrian, the motor vehicle takes at least 70% of the responsibility, and the only exception is intentional suicide. That's just bullshit.

Damn proportional responsibility; how many injustices is that legal term responsible for?
Here, we have the problem of pedestrians who don't look and don't give any warning to the car when they intend to cross a zebra crossing without traffic lights, and the worst are the grandmas with their grandsons:
they push the baby carriage into the middle of the road after checking whether any car is approaching :clap: :clap:. The blame always falls on the car driver, and very rarely on the pedestrians.

Next, we have the kamikaze cyclists who have the bad habit of riding roads from the time of Primo de Rivera (very tight lanes, no shoulder, low visibility), where any circumstance like sun glare or a blind curve means a driver rams into the cyclists. The blame always falls on the car driver.
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Andy and coppice, the answer to your question seems to me to be that each self-driving car will have a regulatory database of "the rules" that apply in its country, and it will have to obey them. Whatever flexibility is in the law now will have to be removed for the corporations. Also, there will be a period, say five years, where self-driving features will only be enabled on larger roads that have the technology to manage the traffic flow automatically. Of course, initially, like now, some areas will have mostly old-fashioned non-self-driving cars, so people will have to be given some time to buy new cars. Some areas "off the beaten track" may remain non-self-driving indefinitely.

I think the main use case of self-driving cars will be on highways and high-speed streets, from which non-self-driving cars, motorcycles, bikes and pedestrians will be banished completely.

Just as in the 60s many towns eschewed sidewalks and avoided engaging in regional mass transit systems (a decision many now regret) because they didn't want to be seen as "pedestrian", many communities will likely only be reachable via self-driving-car-only highways.

Quote from: AndyC_772 on Today at 02:06:39
Quote from: coppice on Today at 00:53:10
I'd really like to see some coverage of how things like the Uber and Google self driving cars react to completely unstructured situations like accidents, and slightly more structured things which are off the script, like lane closures for road repairs.

I'm curious about how they react in situations where obeying the rules of the road is not the norm.

Programming a machine with "when X, do Y" is fair enough. For example, provided it's in a country where people do generally stop at red lights, a machine programmed to stop at red lights won't have a problem.

But what if it's driving through somewhere where expectations are different? What if, for example, stopping at a red light is likely to get you rear-ended by someone who's not expecting it?


There won't be any need for red lights in a completely self driving ecosystem because the cars will all be networked with one another and the highway control systems all the time.

What if it will get you car-jacked?

What if the way to minimise the chances of an accident isn't to do what you're supposed to do, but to do what other local drivers expect you to do?

What rules-of-the-road do you program in?

We'll see. I think there is a lot more to this self driving car thing than just the technical and legal issues.

It's also part of a general re-stratification of society by income and class: a way to covertly put back into place boundaries which were temporarily eliminated or reduced for the industrial age.
"What the large print giveth, the small print taketh away."
 

Offline rdl

  • Super Contributor
  • ***
  • Posts: 3667
  • Country: us
Instead of self driving cars, I think they should have started with something easier like self piloted airliners.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1890
  • Country: us
    • KE5FX.COM
Instead of self driving cars, I think they should have started with something easier like self piloted airliners.

Or even trains.  Why humans still need to be involved in train operations, I'll never know.
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Lots of places have automated passenger trains. They are popular at airports.
"What the large print giveth, the small print taketh away."
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
I think it is going to be about sensors.

In "BLUE", my 2017 Chevy Bolt (which is blue), the car isn't driverless but it probably is only a step away.  I have cameras fore and aft and when backing up, the display shows the anticipated track based on the steering angle.  It also has the ability to sense cars in my blind spots and there is an orange symbol on the outside mirrors if there is something within 11 feet of my rear quarter panel.  It also has a warning light/buzzer if I am about to rear-end the car ahead and will apply the brakes for me.  This Forward Automatic Braking works between 5 and 50 MPH.  It also has a Front Pedestrian Braking System that works over the same speed range but doesn't work well at night.  Oh, and it also has a form of lane detection and will correct the car if I tend to drift into another lane.  Fact is, I may actually have to go to the training class just to find out how many things it does.

I think I'm only along to press on the 'loud pedal'.  Just "punch it and fry 'em" and let the computers worry about the details!

The 2017 Bolt is the "Motor Trend Car Of The Year"
http://www.motortrend.com/news/chevrolet-bolt-ev-2017-car-of-the-year/

It isn't as sporty as my 2014 Chevy Spark EV but GM realized they needed to tame the acceleration.  With 400 ft-lbs of torque, the EV could be a handful as it shifted drive from side to side as tires lost traction.  It was a fun car to drive!

The Bolt is a lot of fun and, for our needs, the range is more than adequate.

Now here is something weird for the conspiracy types.  There is a satellite view of the car and surroundings.  Under some conditions, the image is so 'real time' that it can change the image as you put your arm out the window.  Now that's "real time".  No matter where I am, there is a satellite watching my car.  How cool is that?

So, there I am sitting in line at TacoBell and I can see my car from the top down in front of the server window and also a couple of cars behind me.

Self driving is going to come and it will be a huge application of distributed computing with vehicles talking to each other.
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
Now here is something weird for the conspiracy types.  There is a satellite view of the car and surroundings.  Under some conditions, the image is so 'real time' that it can change the image as you put your arm out the window.  Now that's "real time".  No matter where I am, there is a satellite watching my car.  How cool is that?
:bullshit:  :-DD

That view is synthesized from cameras on the car.

Don't believe me? Try it in a garage. It will still work.
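That "satellite" view is a stitched bird's-eye image, and the core trick is inverse perspective mapping: projecting each camera pixel onto the ground plane. Here is a minimal sketch of that projection for a single downward-tilted camera, assuming a flat road; the camera height, tilt, and field of view below are made-up illustrative numbers, not GM's actual calibration.

```python
import math

# Toy inverse-perspective mapping: distance along the ground to the point
# imaged at a given pixel row (row 0 = top of frame), for a camera mounted
# cam_height_m above a flat road, tilted tilt_deg below horizontal, with a
# vertical field of view of vfov_deg. All numbers are illustrative.

def ground_distance_m(pixel_row: int, rows: int = 480,
                      cam_height_m: float = 1.0,
                      tilt_deg: float = 30.0,
                      vfov_deg: float = 60.0) -> float:
    # Angle below horizontal for this pixel row; rows near the top of the
    # frame approach the horizon, where the distance diverges.
    angle = tilt_deg + vfov_deg * (pixel_row / rows - 0.5)
    return cam_height_m / math.tan(math.radians(angle))

# Rows lower in the frame image ground closer to the car:
print(round(ground_distance_m(240), 2))  # centre row: ~1.73 m ahead
print(round(ground_distance_m(470), 2))  # near the bottom: well under 1 m
```

A real surround-view system does this warp for four or more cameras and blends the overlapping regions, which is why it keeps working inside a garage where no satellite could see the car.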
 

Offline station240

  • Supporter
  • ****
  • Posts: 967
  • Country: au
Just basic stuff like accidents, temporary roadwork speed limits, etc. is no doubt going to cause issues for driverless cars.
The permutations of possible issues are effectively infinite.

Nope, there are various solutions to these problems already.

Some 'autopilot'* systems read speed limit signs; others drive slower if other vehicles are also doing so. So far there is a lot of neat tech, but no car has all of it. Tesla has accident-avoidance software: auto braking (stop if something or someone is on the roadway) and auto steer (stay in lane, but detect if another vehicle changes into your lane and swerve/brake to prevent an accident).

Roads with missing or poor road rules are going to be unsafe anyway; a computer can react faster than any human, isn't as affected by night or fog, and will err on the side of caution, unlike people.

* Some in name only; it's often not full self-driving and is more driver assist.

Also, a news story specific to autopilot in China suggests it doesn't seem to be a problem:
http://knowledge.ckgsb.edu.cn/2016/11/21/technology/self-driving-cars-china/
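One concrete number behind the faster-reaction claim above: the distance a car covers before braking even begins is just speed times reaction time. The reaction times below are illustrative textbook-style figures, not measurements of any particular human or system.

```python
# Distance travelled during the reaction delay, before any braking starts.
# 1.5 s is a commonly quoted ballpark for an alert human driver; 0.1 s
# stands in for a machine perception loop (both illustrative figures).

def reaction_distance_m(speed_kmh: float, reaction_s: float) -> float:
    return speed_kmh / 3.6 * reaction_s   # km/h -> m/s, then times delay

print(round(reaction_distance_m(100.0, 1.5), 1))  # human: 41.7 m
print(round(reaction_distance_m(100.0, 0.1), 1))  # machine: 2.8 m
```

At 100 km/h that gap alone is roughly 39 m of roadway, independent of how hard either driver can brake afterwards.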
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
I think truly autonomous cars are at least 20 years behind where most people think they are. I predict the first round of them released into the wild will be rushed out by arrogant tech companies and there will be a spectacular failure. One or more of a number of scenarios will happen: a well-publicized hardware or software fault will cause a car to accelerate through a mass of pedestrians, killing a bunch of people; an anomaly in the road markings will cause a number of serious accidents; or a weather event such as a major snowstorm will cause a bunch of the cars to either go apeshit and crash, or completely shut down and gridlock a major city. Then there is the almost guaranteed scenario where a bunch of hackers and pranksters come out of the woodwork with GPS and LiDAR jammers and other tech toys, plus spraypaint to make phony road markings and signs, and wreak havoc. Anyone who doesn't think that will happen is either smoking something or has their head in the sand.

Anyway, the end result of any of these is that a bunch of people will be killed or seriously inconvenienced and the whole thing will be shut down for a significant period of time. I just don't see the concept living up to the promises any time soon. Sure, Google has driven their cars millions of miles or whatever, but it is millions of miles around the same heavily documented routes around the cities where they are based. The day of being able to get into a car, punch in a destination anywhere in the country and have the car reliably deliver you there is a LONG way off. Personally I'm OK with that; if I don't get to drive it myself then I have zero interest in owning, maintaining and insuring a car. I could take a lot of bus trips or cab rides for the cost of owning a car. The only real reason I have a car is because I like to drive and I like tinkering with cars.

We should be focused on finding alternative fuels, even if it drives itself it's still going to need fuel.
« Last Edit: June 18, 2017, 08:04:43 pm by james_s »
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
you just go with the flow and don't hit anything
As an engineer, would you ever program a machine to "just go with the flow", if that meant contravening the local rules and regulations that apply?

How would you justify that decision, in court if necessary?
I would not program "my" self-driving car to studiously adhere to the speed limits on controlled-access expressways as one concrete example. No good comes from that.

OK, good example.

At some point, one of your self-driving cars is going to be involved in an accident on one of those roads. An injured party will claim that the vehicle was speeding - which it was! - and that this was a contributory factor to the damage which resulted from the crash.

Do you suggest in court, that it was safer to program the car to do this? Statistically speaking, it may well indeed have been, but that defence rarely if ever goes down well in front of a magistrate when the driver is human, and especially so when personal injury has occurred.

I'd hate to be the engineer hauled up in front of a judge and jury, and asked to explain why I've deliberately programmed the machine to contravene regulations. I suspect pragmatism would be in short supply.


Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
Do you suggest in court, that it was safer to program the car to do this? Statistically speaking, it may well indeed have been, but that defence rarely if ever goes down well in front of a magistrate when the driver is human, and especially so when personal injury has occurred.
I do suggest that in court, backed with statistics and data on the speeds of the cars around me, including the plaintiff's car.
I'd hate to be the engineer hauled up in front of a judge and jury, and asked to explain why I've deliberately programmed the machine to contravene regulations. I suspect pragmatism would be in short supply.
I'd hate to be in that situation as well, but I'd rather engineer according to pragmatism and science with an awareness of the relevant statutes, rather than walk around constantly afraid of a legal shadow hitting my work.
 

Offline FlyingHacker

  • Frequent Contributor
  • **
  • Posts: 807
  • Country: us
  • You're Doing it Wrong
I'd say it is more likely that governments will have a higher speed limit for self-driving cars. Thus, they will be able to go the 5-10mph over the limit that most drivers do daily. The argument will be that the reaction time of the computer will make up for the higher speed limit. Lobby money from tech companies will ensure passage.

--73
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12298
  • Country: au
That's a problem for any driver situation.

It's not the driver's capabilities that are in control of the outcome - it is simply physics: a function of inertia, distance and friction.

This is one example where dash cams are invaluable for demonstrating the innocence of the driver in such situations.
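The "simply physics" point can be made quantitative: on a flat road, the minimum braking distance is d = v^2 / (2*mu*g), where mu is the tyre-road friction coefficient. The mu values below are typical textbook figures, not measurements of any particular car or surface.

```python
# Minimum braking distance from the energy balance: kinetic energy
# (v^2 / 2) dissipated by friction (mu * g * d). No reaction time included.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_kmh: float, mu: float) -> float:
    v = speed_kmh / 3.6               # convert km/h to m/s
    return v * v / (2 * mu * G)

print(round(stopping_distance_m(100.0, 0.7), 1))  # dry asphalt: 56.2 m
print(round(stopping_distance_m(100.0, 0.4), 1))  # wet road:    98.3 m
```

Since the distance scales with v squared, no driver (human or computer) can beat these numbers; a dash cam showing the pedestrian appeared inside that distance is what demonstrates the driver's innocence.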
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Who is going to pay for these new highways? They will be privatized, of course, and once you privatize anything you can't un-privatize it, so no matter whether it works or doesn't, or whether the economy prospers or tanks (which is not unlikely), they will have to stick with it.

Courts also tend to support contracts, so if the contract gives the new owners of our roads the right to set the tolls, and any disputes have to go to ISDS arbitration, that's it.

Also, because of non-compete agreements, and the quite unambiguous prohibition in the WTO GATS on governments competing with MNCs, they will become the only choice people have.

Just as we don't let horses and carriages on highways now, the old human-driven cars likely won't be allowed eventually, as they wouldn't be able to drive in the controlled manner required to avoid accidents. Enabling that kind of autonomous driving, particularly for delivery vehicles and for commuting, covers two of the main goals of autonomous vehicles.

The big growth and investment over the next ten or twenty years will be in developing countries, not in developed countries. That's what all the fuss over trade agreements is about: the MNCs want to be protected from every possibility before they sink billions into factories in countries with low wages but a strong possibility of social unrest.

So they put the jobs in developed countries, which are seen as in decline in terms of profitability, on the table as bargaining chips.

In exchange for jobs here, they hope to get "national treatment" and "most favored nation" status there, so they can invest vast amounts building factories to employ the millions of unemployed workers in those countries, so that they don't start wars with us or sell cheap life-saving drugs.

They want to bring them into their big Ponzi scheme as investors.
"What the large print giveth, the small print taketh away."
 

Offline hamdi.tn

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
Simple answer: they will not. Autonomous cars need some road infrastructure to operate correctly.
The human factor is, and will always be, the main cause of traffic accidents, and the random variable to solve for in an automated, sensor-based system like an autonomous car.
So I think that for this to perform as it should, all cars need to be autonomous and the roads equipped for it. That may come true for a couple of countries in 20 years, maybe, but in no way will this be used anywhere in a third world country.
Until then, drivers are responsible for their cars, autonomous or not. I really wish they had stuck to the "assisted driving" or "restricted driving" definition, to oblige the driver to follow some rules, instead of "autonomous"; this will only bring more chaos to the roads and more debate about who is responsible.

On another note, I don't really understand the automobile industry's move towards this autonomous driving thing. In my simple thinking it's just a way to make a $3000 car cost $10000 with a shitload of sensors and on-board computers. It's insane.
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
No cars being made today are one step away from driverless cars, because a driverless car is beyond even the most automated airliners. Imagine the equivalent of a Boeing 777, except for roads: complete fly-by-wire as the default for everything, with the human controls that remain for off-road driving on conventional roads resembling a video game, where you see the world around you digitally and navigate through it via joystick and GPS. They won't be like the cars of today, which are still largely electromechanical - even the electric cars and hybrids. It's a transformation we have already seen in a great many areas where computers become the default way of solving problems. This will require a complete re-engineering of the problem of getting from point A to point B, to make it a digital process.

Of course, at the same time, huge numbers of people - billions - will be freed of the burden of having to travel to work every day, so the number of people who need to travel will drop by several orders of magnitude. The main reason so many people bought cars in the 20th century was to commute to work, to man the machines and staff the offices of industry.
"What the large print giveth, the small print taketh away."
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
They sell cars for $3000 in Tunisia? Interesting.


Last time I looked (more than a decade ago) the cheapest new cars in the US cost around $15,000. I think the cheapest self-driving cars here in the US will probably cost at least $50,000, more likely twice that. And they will likely have to be replaced much more frequently than the ones we have today. That's the goal. Driving will become like flying: a rich person's game. Many countries are like that now, with huge taxes, permits, etc. required if you want to own a car inside a major city. Just the cost of a parking space in SF or Manhattan is more than the cost of buying a home in many other places.

"What the large print giveth, the small print taketh away."
 

Offline hamdi.tn

  • Frequent Contributor
  • **
  • Posts: 623
  • Country: tn
They sell cars for $3000 in Tunisia? Interesting.


Last time I looked (more than a decade ago) the cheapest new cars in the US cost around $15,000. I think that the cheapest self driving cars here in the US will probably cost at least $50,000 more likely twice that. And they likely will have to be replaced much more frequently than the ones we have today. Thats the goal. Driving will become like flying, a rich persons game. Many countries are like that now, with huge taxes and permits, etc. which are required if you want to own a car inside of a major city. Just the cost of a parking space in SF or Manhattan is more than the cost of buying a home in many other places.



Haha, only if it's 20 years old, yes  :-DD No, they don't; in fact prices here are double because we pay 100% customs fees on cars. It was just an example. The point is that this will add extra cost to cars and make it harder for middle class people to own one, just by adding these kinds of (still unclear) features. For now it's just some in the industry talking about it; it will be mainstream once they get there. And as you said, once some cars are autonomous and some are not, normal cars will be banned from some roads. I totally see that coming.


 

Offline TK

  • Super Contributor
  • ***
  • Posts: 1722
  • Country: us
  • I am a Systems Analyst who plays with Electronics
They sell cars for $3000 in Tunisia? Interesting.


Last time I looked (more than a decade ago) the cheapest new cars in the US cost around $15,000. I think that the cheapest self driving cars here in the US will probably cost at least $50,000 more likely twice that. And they likely will have to be replaced much more frequently than the ones we have today. Thats the goal. Driving will become like flying, a rich persons game. Many countries are like that now, with huge taxes and permits, etc. which are required if you want to own a car inside of a major city. Just the cost of a parking space in SF or Manhattan is more than the cost of buying a home in many other places.
I think the Tesla Model 3 will be priced around $30,000 with full self-driving capabilities
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Andy and coppice, the answer to your question seems to me to be that each self driving car will have a regulatory database of "the rules" that apply in its country, and it has to obey them. Whatever flexibility is in the law now will have to be removed for the corporations. Also, there will be a period, say five years, where self driving features will only be enabled in larger roads that have the technology to manage the traffic flow automatically. Of course initially, like now, some areas will be mostly old fashioned non-self driving cars so they will have to be given some time to buy new cars. Some areas - "off the beaten track" may remain non self driving indefinitely.
This completely misses the point. In real world driving we are forced to break the rules when we encounter the aftermath of an accident or really convoluted roadwork setups. We frequently puzzle over what to do for a while, but we have to take some rule-breaking action, or we would be stuck until the obstacles are cleared. Getting a car to take safe action as it encounters such an incident is reasonably straightforward. Getting it safely out of such situations, so it can continue to make progress, is a whole different thing.
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
OK, good example.

At some point, one of your self-driving cars is going to be involved in an accident on one of those roads. An injured party will claim that the vehicle was speeding - which it was! - and that this was a contributory factor in the damage which resulted from the crash.

Do you suggest in court, that it was safer to program the car to do this? Statistically speaking, it may well indeed have been, but that defence rarely if ever goes down well in front of a magistrate when the driver is human, and especially so when personal injury has occurred.

I'd hate to be the engineer hauled up in front of a judge and jury, and asked to explain why I've deliberately programmed the machine to contravene regulations. I suspect pragmatism would be in short supply.

Having a bit of experience with jurisprudence in the UK, I expect that here the standard a self driving car would be held to (in the absence of any specific statutory law) would be that of "a reasonably skilled human driver". That is, the car's autonomous systems would be regarded as behaving correctly as far as the law was concerned if they undertook the same actions, under the same conditions as "a reasonably skilled driver".

As far as deliberately breaking the rules goes there's the defence of 'necessity'. If it was necessary to exceed the speed limit to avoid an accident (say accelerating when you were already travelling at the limit) then you have a defence to a charge of speeding, and so on. Necessity is a very broadly applicable, and rarely used, defence. Again, an autonomous system would almost certainly be judged on the basis of whether it did what a reasonable person would do under the same circumstances.

At the most simplistic, a self driving system would be given the rules "obey the law" and "don't hit things, especially people", with higher priority given to "don't hit things" if there was a conflict between the two. If hauled before the beak to explain this, I'd expect one's argument to be something like: "I did program the car to obey the law, but I also programmed it to avoid killing people, and I gave the latter priority because I understood that the underlying intent of the law relating to driving is ultimately to avoid killing people."
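That two-rule arbitration can be sketched in a few lines. This is a toy illustration only; the action names and risk numbers are made up, and a real planner would score continuous trajectories, not a handful of labels:

```python
# Toy sketch of the two-rule priority scheme described above:
# candidate actions are ranked by collision risk first, legality second.

def choose_action(candidates):
    """Pick the action minimising (collision_risk, law_violations).

    Tuples compare lexicographically, so any reduction in collision
    risk outranks any amount of rule-breaking: "don't hit things"
    gets strict priority over "obey the law".
    """
    return min(candidates,
               key=lambda a: (a["collision_risk"], a["law_violations"]))

candidates = [
    {"name": "brake_in_lane",      "collision_risk": 0.30, "law_violations": 0},
    {"name": "swerve_to_shoulder", "collision_risk": 0.05, "law_violations": 1},
]
print(choose_action(candidates)["name"])   # swerve_to_shoulder
```

The lexicographic ordering is exactly the "higher priority" clause: the illegal swerve wins because it is safer, which is the behaviour one would then have to defend in court as described.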
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline FlyingHacker

  • Frequent Contributor
  • **
  • Posts: 807
  • Country: us
  • You're Doing it Wrong

So what if a self driving car drives perfectly on a highway, and some jaywalker attempts to cross the red light? No algorithms can predict someone standing and waiting for signal, and suddenly decides to jaywalk, and when computer vision system detects it, it's gonna be too late.

I encountered this once, and I was so lucky that that day was rainy and I drove exceptionally slow so I didn't hit the bastards almost rushing to me, seems to be on high. I swear to god I sweated for quite a while.
I would hope it would do what I do when I drive near pedestrians, namely slow down. If the pedestrian is close to, or moving towards, the curb I slow down more. It could do a much more accurate job of calculating the fastest it could safely go and still stop if the pedestrian leapt out.
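That calculation is straightforward kinematics. A sketch, assuming an illustrative 6 m/s² braking deceleration and 0.2 s sensing/actuation latency - guesses for the sake of the example, not data from any real system:

```python
import math

def max_safe_speed(clearance_m, decel=6.0, latency_s=0.2):
    """Highest speed (m/s) from which the car can still stop within
    clearance_m, allowing latency_s of sensing/actuation delay.

    Solves v*t + v^2/(2*a) = d for v. The decel and latency defaults
    are illustrative guesses, not figures from any real system.
    """
    t, a, d = latency_s, decel, clearance_m
    return a * (math.sqrt(t * t + 2.0 * d / a) - t)

# Pedestrian 15 m away: stay under roughly 12.3 m/s (about 44 km/h).
print(round(max_safe_speed(15.0), 1))   # 12.3
```

Re-evaluating this continuously against the nearest pedestrian is what "slowing down more as they approach the curb" amounts to, and a computer can do it every frame.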
--73
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12298
  • Country: au
I have a question, with follow-ups....

What about the situation where there are police officers on duty directing traffic?

Situations such as accidents, failed traffic lights and the like?

What if there is a large intersection that has 2 or more police officers controlling it?
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12298
  • Country: au

So what if a self driving car drives perfectly on a highway, and some jaywalker attempts to cross the red light? No algorithms can predict someone standing and waiting for signal, and suddenly decides to jaywalk, and when computer vision system detects it, it's gonna be too late.

I encountered this once, and I was so lucky that that day was rainy and I drove exceptionally slow so I didn't hit the bastards almost rushing to me, seems to be on high. I swear to god I sweated for quite a while.
I would hope it would do what I do when I drive near pedestrians, namely slow down. If the pedestrian is close to or moving towards the curb I slow down more. It could do a much more accurate job of calculating the fastest that it could safely go and still stop if the pedestrian lept out.

I can see it now ..... pedestrians playing "Baulk the autonomous car" by making the motions to step/jump out in front of such a vehicle only to laugh themselves silly when the car brakes and the passengers get shaken like a martini.
 

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
Several months ago, the self-driving car manufacturers were lobbying governments (including the Australian government) to outlaw non-autonomous vehicles by the 2020s. I am not looking forward to a future where nobody can be blamed for an accident that cripples another person. This is enough to convince me that there is an agenda, and that this is all about greed and profits rather than safety. Would you feel safe with hackable two-ton robots with human "hostages" running around the roads?

Can you really trust that:
  • The algorithms will not have any bugs in them?
  • Everyone will be able to keep their firmware updated to deal with different infrastructure changes and regulations?
  • The car cannot be hacked or altered by others on the side of the road?
  • Manufacturers will issue timely firmware updates to handle changes to roads, infrastructure, regulations etc?
  • The failure modes of the self-driving mechanism are going to be safe?

If self-driving cars are to be legal on the road, a law must be in place that renders manufacturers 100% liable to prosecution if it can be reasonably deduced that the auto-pilot function caused the accident, or failed to prevent it.

I don't even know why we're bothering with autonomous vehicles on the road, because they have already invented trains. Autonomous vehicles take the fun out of the road. I think I'll stick with a motorcycle. The world would be better if more people rode motorcycles.
« Last Edit: June 19, 2017, 03:28:55 am by X »
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Can you really trust that:
  • The algorithms will not have any bugs in them?
  • Everyone will be able to keep their firmware updated to deal with different infrastructure changes and regulations?
  • The car cannot be hacked or altered by others on the side of the road?
  • Manufacturers will issue timely firmware updates to handle changes to roads, infrastructure, regulations etc?
  • The failure modes of the self-driving mechanism are going to be safe?

Interesting that you assert that self-driving cars must be 100% flawless before they should be allowed to replace human drivers, when on US roads alone 35 thousand people died in 2015, mostly due to human drivers.

The only really sensible minimum requirement for self-driving cars to be allowed is for them to be better than humans -- a low bar indeed, as noted above. If you hold an opinion that there should be a more stringent requirement, then either you consider your right to the freedom of driving your own car to be worth killing others for, or you are one of the 90% of humans who consider their ability at doing X (driving in this case) to be better than the median.

Are self driving cars better than humans yet? I dunno, I'm purely debating what the threshold should be.
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Right, of course, and life is like that too.

But the problem is, as Ive heard again and again, business needs stability.

What that means is that when accidents occur,

(and I think we can expect them; they likely won't involve just one car, they may involve thousands at the same time, like other computer glitches),

we can expect denial and a lack of accountability. It's guaranteed in this context.

Like the recent fire in London, I think we also can expect news about their true impacts to be suppressed.


If automated cars had killed a lot of people before the 1990s, given the opportunity, people would have attempted, by voting, to force politicians to ban them. That would now be barred because of the transparency requirement that corporations get a chance to tell government they don't want prospective laws, and to demand compensation in advance.

They would frame that as "measures tantamount to expropriation" of their investment: "indirect expropriation".

 Compensation to the corporation must be paid - their expected lost profits.

The desire to appease the corporations is more powerful than any fear of public outrage.


What I think will happen at first is that the car will be programmed to do whatever it can to protect the owner and his or her "life", even if that is dangerous or fatal to others - because that's what the buyers of these $100,000 cars will likely want. But as time passes, the governments and corporations will get together to fine-tune what they both want the software to do.

Governments may also require that it be possible for them to assume control of the wheel: for example, a car about to be repossessed could be told to lock its doors and drive somewhere, or be immobilized. Or all cars could be immobilized. Laws on not using cell phones in moving cars would also be easy to implement.

But to get back to your original point, just as other changes being made for corporations seem to be consistent in their inflexibility, I think the law on self driving cars will likely be similar.
Quote from: coppice on Today at 18:49:44
Quote from: cdev on Today at 12:32:41
Andy and coppice, the answer to your question seems to me to be that each self driving car will have a regulatory database of "the rules" that apply in its country, and it has to obey them. Whatever flexibility is in the law now will have to be removed for the corporations. Also, there will be a period, say five years, where self driving features will only be enabled in larger roads that have the technology to manage the traffic flow automatically. Of course initially, like now, some areas will be mostly old fashioned non-self driving cars so they will have to be given some time to buy new cars. Some areas - "off the beaten track" may remain non self driving indefinitely.
This completely misses the point. In real world driving we are forced to break the rules when we encounter the aftermath of an accident or really convoluted roadwork setups. We frequently puzzle over what to do for a while, but we have to take some rule breaking action, or we would be stuck until the obstacles are cleared. Getting a car to take safe action as it encounters such an incident is reasonably straightforward. Getting it safely out of such situations, so it can continue to make progress, it a whole different thing.
« Last Edit: June 19, 2017, 04:52:40 am by cdev »
"What the large print giveth, the small print taketh away."
 

Offline JoeNTopic starter

  • Frequent Contributor
  • **
  • Posts: 991
  • Country: us
  • We Buy Trannies By The Truckload
This completely misses the point. In real world driving we are forced to break the rules when we encounter the aftermath of an accident or really convoluted roadwork setups. We frequently puzzle over what to do for a while, but we have to take some rule breaking action, or we would be stuck until the obstacles are cleared. Getting a car to take safe action as it encounters such an incident is reasonably straightforward. Getting it safely out of such situations, so it can continue to make progress, it a whole different thing.

Sometimes it even happens with a cop directing the rule-breaking - using the wrong side of the road, or the shoulder, or the grass, etc., for the duration of the incident.  How does a self-driving car interpret a cop or traffic warden directing traffic?  I would think not very well.
Have You Been Triggered Today?
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12298
  • Country: au

The only really sensible minimum requirement for self-driving cars to be allowed is for them to be better than humans -- a low bar indeed, as noted above. If you hold an opinion that there should be a more stringent requirement, then either you consider your right to the freedom of driving your own car to be worth killing others for, or you are one of the 90% of humans who consider their ability at doing X (driving in this case) to be better than the median.

Are self driving cars better than humans yet? I dunno, I'm purely debating what the threshold should be.

While the basic logic expressed here has obvious merit, there is one significant component that I can see being a rather nasty can of worms: the nature of the incidents that occur.

Certainly, there may well be a significant number of situations where an automated vehicle will reduce accidents - such as careless lane changes and driver fatigue - but there remains the potential for absolutely obscure or incredibly complex situations in which a human can make a better call than an automaton.  All it takes is a few events where the automated vehicle made a less-than-ideal choice in a situation where a human driver - even the most unskilled - would have easily coped, and you will soon get ridicule and problems with acceptance - even if the bottom line is better.

Ask any programmer to write a moderately complex program that will handle ALL contingencies in an acceptably controlled manner and see how they squirm - especially if you put them on-call for the first run.

I am certain there will be incremental improvement over time - but as the automotive and aeronautical industries have demonstrated, it takes death and serious injury to precipitate change.

How accepting will society be during that process, having already gone through such things over the last century or so?
 

Offline JoeNTopic starter

  • Frequent Contributor
  • **
  • Posts: 991
  • Country: us
  • We Buy Trannies By The Truckload
Several months ago, the self-driving car manufacturers were trying to lobby governments (including the Australian government) to make legislation outlawing non-autonomous vehicles by the 2020's.

Honestly, I just can't see it in the U.S.  The nation just got piqued and elected Trump as a big FU to the establishment.  Adopting a law like that at a national level would probably create so many dead politicians that it would make the French Revolution look tame in comparison. 

Manufacturers need to make this work in the current system of most people driving themselves.  That is what they promised.  If they can't do it, they can't sell their products.  End of story.
Have You Been Triggered Today?
 

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
Interesting that you assert that self-driving cars must be 100% flawless before they should be allowed to replace human drivers, when on US roads alone 35 thousand people died in 2015 due to mostly human drivers.
Maybe not 100% perfect, but if not 100%, then when something happens the manufacturers of the algorithms must be liable to prosecution, just like an equivalent human driver would be. Of course, humans don't have WiFi connectivity, don't need firmware updates, and don't run the risk of being hacked by an RF blast from the whiz-kid delinquent next door.

Good luck getting compensation from a driverless car manufacturer after a multi-car, multi-casualty pile-up caused by a bug in the algorithm, where your car ran into another driverless car from a different manufacturer because it couldn't detect the traffic cones for the contra-flow setup in place while road workers were mending a burst water main.

The only really sensible minimum requirement for self-driving cars to be allowed is for them to be better than humans -- a low bar indeed, as noted above. If you hold an opinion that there should be a more stringent requirement, then either you consider your right to the freedom of driving your own car to be worth killing others for, or you are one of the 90% of humans who consider their ability at doing X (driving in this case) to be better than the median.

Are self driving cars better than humans yet? I dunno, I'm purely debating what the threshold should be.
The bar is not as low as you make it out to be. Humans actually make very few errors, given the sheer amount of input they need to process in such a short time span, and this will be a tough act to follow. I have not seen any significant testing in this regard, only the "oh look, our car stops when it sees an isolated bollard" style of tests.
In a world where Silicon Valley invests in useless junk and Chinese manufacturers slap on loads of safety markings just for the sake of it, there must be laws that impose a very high level of accountability on the manufacturers of autonomous vehicles in the event something goes wrong.

Autonomous cars may have their place eventually, but that shouldn't mean non-autonomous vehicles get banned or forcibly made obsolete. This is a freedom that people should not be required to give up.
« Last Edit: June 19, 2017, 05:05:09 am by X »
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Trump IS the Establishment, far more than people think he is. That's all a big act, like Clinton having seizures, etc. Just like Obama was. They signed away the rights to do most everything of importance on December 8, 1994.

The real payload is the new trade deals, which are even more radical, with their negative list.

Notice how TTIP and TiSA are still going full speed ahead (and the worst parts of TPP, like the SOE chapter, just got moved to TiSA).


Quote from: JoeN on Today at 22:52:15
Quote from: X on Today at 21:19:12
Several months ago, the self-driving car manufacturers were trying to lobby governments (including the Australian government) to make legislation outlawing non-autonomous vehicles by the 2020's.

Honestly, I just can't see it in the U.S.  The nation just got piqued and elected Trump as a big FU to the establishment.  Adopting a law like that at a national level would probably create so many dead politicians that it would make the French Revolution look tame in comparison. 


No it wouldn't, because people would think they would be able to afford them. It would be timed to create false expectations, which would only be dashed in the most deceptive possible ways. They are very, very smart about this kind of thing.

Manufacturers need to make this work in the current system of most people driving themselves.  That is what they promised.  If they can't do it, they can't sell their products.  End of story.


----

People will buy them, or start taking the bus, or more likely stay at home, without any bus. Read some history.

There never is an end of story.

Think of this Tesla as a sort of marketing gimmick. Nothing it does is revolutionary; it's still 100% a human-driven car, it just has some nifty add-on features.


Kind of like my old Tek 2211 (hybrid analog "DSO")

I didn't see people taking a nap or having sex or whatever in the initial batch of human-driven cars with cruise control features, nor will we see them doing that now in a Tesla.

But that's really what self-driving cars are all about.
"What the large print giveth, the small print taketh away."
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Maybe not 100% perfect, but if not 100% then if something happens, the manufacturers of the algorithms must be liable to prosecution just like an equivalent human driver.

So if I provide a box that, if we substitute it in for all human drivers (admittedly a hypothetical), results in a total of 10,000 deaths per year instead of the current 30,000+ human-caused deaths, you say I should be liable for those 10,000 deaths? Because it feels like I saved 20,000 lives, even if there is manifest room for further improvement...

Of course humans don't have WiFi connectivity, don't need firmware updates, and don't have the risk of being hacked by an RF blast from the whiz-kid delinquent next door.

Heart attacks, sleep deprivation, blind spots...

The bar is not as low as you make it to be.

I said the bar is 30,000 deaths per year; I'm not sure what you mean by "that's not as low as you make it to be". It's a known number: 30,000 deaths per year.

Humans actually make very few errors, given the sheer level of input they need to process in such a short time span, and this will be a tough act to follow. I have not seen any significant testing in this regard, only the "oh look our car stops when it sees an isolated bollard" style tests.

Google's self-driving cars have driven 2 million miles on open roads with 0 fatalities. I'm not sure whether 2 million miles has reached the stage of being compelling evidence yet, and I'm not claiming that it has (in fact, the expected number of fatalities for human drivers over the same distance would be about 0.02 by my calculations, so probably not), but the work is being done.
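The 0.02 figure is easy to reproduce from public numbers: NHTSA counted 35,092 US road deaths in 2015, against roughly 3.1 trillion vehicle-miles travelled (the mileage figure is approximate):

```python
deaths_2015 = 35_092       # US road deaths, 2015 (NHTSA)
miles_2015 = 3.095e12      # US vehicle-miles travelled, 2015 (approx.)

rate = deaths_2015 / miles_2015   # deaths per vehicle-mile, ~1.13e-8
expected = rate * 2e6             # expected deaths over 2 million miles
print(round(expected, 3))         # 0.023, i.e. roughly the 0.02 quoted
```

In other words, an average human fleet would also most likely have logged those 2 million miles without a fatality, which is why the mileage alone is not yet compelling.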

Autonomous cars may have their place eventually, but it shouldn't render non-autonomous vehicles to be banned or forcibly made obsolete. This is a freedom that people should not be required to give up.

How many deaths per year would you consider an acceptable price for this freedom?

 

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
Maybe not 100% perfect, but if not 100% then if something happens, the manufacturers of the algorithms must be liable to prosecution just like an equivalent human driver.

So if I provide a box that, if we substitute it in for all humans drivers (admittedly a hypothetical), and it results in a total of 10,000 deaths per year instead of the current 30,000+ human-caused deaths, you say I should be liable for those 10,000 deaths? Because it feels like I saved 20,000 lives, even if there is manifest room for further improvement...
Your device killed 10,000 people. If you build something which kills 10,000 people through no fault of their own, you should be held fully responsible for all 10,000 deaths.

Of course humans don't have WiFi connectivity, don't need firmware updates, and don't have the risk of being hacked by an RF blast from the whiz-kid delinquent next door.

Heart attacks, sleep deprivation, blind spots...
These can be alleviated to a large degree (apart from the heart attacks, of course).
As for blind spots, I agree this is where the machine will beat the man. You have raised a good point, and I think some automation in terms of crash avoidance is very handy, but not without a human override. It's like having another passenger in the vehicle.

The bar is not as low as you make it to be.

I said the bar is 30,000 deaths per year, I'm not sure what you mean by "that's not as a low as you make it to be". It's a known number: 30,000 deaths per year.
It's known for non-autonomous vehicles. I will be very surprised if it becomes lower when fully-autonomous vehicles take over.

Humans actually make very few errors, given the sheer level of input they need to process in such a short time span, and this will be a tough act to follow. I have not seen any significant testing in this regard, only the "oh look our car stops when it sees an isolated bollard" style tests.

Google self-driving cars have driven 2 million miles on open roads with 0 fatalities. Not sure if that 2 million has reached the stage of being compelling evidence yet - I'm not claiming that it has (in fact, E(fatalities) for humans over the same distance would be about 0.02 by my calculations, so probably not) - but the work is being done.
There aren't many self-driving cars around yet, so this is an insufficient sample space. 2 million isn't actually a lot of miles for a handful of cars, and it is very easy to pass this without a single car in the set having an accident.
Also, there have been a few deaths and injuries already.
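For anyone wanting to check the back-of-envelope figure quoted above, the arithmetic is simple. The fatality rate used below is an assumption of mine (roughly the mid-2010s US average of about 1.2 deaths per 100 million vehicle-miles); only the 2-million-mile figure comes from the thread.

```python
# Expected human-driver fatalities over the same 2 million miles.
# Assumed rate: ~1.2 deaths per 100 million vehicle-miles (approximate
# mid-2010s US average; an assumption, not a figure from this thread).
deaths_per_mile = 1.2 / 100_000_000
miles_driven = 2_000_000  # the fleet mileage cited above

expected_fatalities = deaths_per_mile * miles_driven
print(round(expected_fatalities, 3))  # 0.024, i.e. roughly the 0.02 quoted
```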

Autonomous cars may have their place eventually, but it shouldn't mean that non-autonomous vehicles are banned or forcibly made obsolete. This is a freedom that people should not be required to give up.

How many deaths per year would you consider an acceptable price for this freedom?
This is a slippery slope. If you want zero deaths a year, just put everybody in jail for safety's sake. It's all about balance. It's attitudes like "if it's one or more it's an issue" that result in nanny states in the first place.

My answer to this question: as many as reasonably necessary for this freedom to continue to exist.
« Last Edit: June 19, 2017, 06:27:37 am by X »
 

Offline Rick Law

  • Super Contributor
  • ***
  • Posts: 3441
  • Country: us
There are two issues mashed together in this discussion.  I think they need to be separated.

One issue is navigation of how to get from point A to point B.  The other issue is how to avoid accidents and I think this is the much more important one.

The car ahead of me had a bike on its roof rack, and as my luck would have it, the bike fell off right in front of me.  How would an auto-driving car avoid it?  Would it veer left and hit the tree, or veer right and hit the Rolls Royce with a little girl in it?  Or just brake hard and pray, since both other options seem uninviting?  I made up the Rolls Royce part (just to make it sound less inviting), but next to me was indeed a car with at least one kid in it.  My choice was to brake hard and pray.  If it had been an auto-driving car, how would it have decided, and would I agree with that decision - after all, that decision would affect the rest of my life.

I think that, besides distraction, accidents typically originate from something unexpected.  Yet most of the current "self driving trials" seem focused not on accident avoidance but on navigation.

No GPS or advanced navigation system will help the self-driving car avoid the rabbit running across the road.  Only a vision-based system can help avoid the result of the rabbit's decision.  In my 30+ years of driving, besides that bike that fell from a car roof and at least half a dozen squirrels I attempted to avoid (some unsuccessfully), I netted one rabbit, two deer, and at least one bird.  All of these could have been very costly.  (I was on River Road, and yes, it was a road running next to a river.  Accident avoidance there does carry a high risk of getting soaked.)

Seeing the video of how the Tesla failed to detect the barrier makes me think they are nowhere near ready for prime time.

It looks like Tesla follows the Microsoft roll-out model: let your customers be your beta testers.  They could test in the cities for years, and the cars would accumulate huge experience in avoiding a texting pedestrian crossing your path; but if you are the first owner who has to drive past a soccer field on a hill every day, their learning may mean you attending a kid's funeral and carrying that feeling of guilt...
« Last Edit: June 19, 2017, 06:59:52 am by Rick Law »
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Your device killed 10,000 people. If you are building something which kills 10,000 people through no fault of their own, you should be held fully responsible for all 10,000 deaths.

You've hit the nail on the head here.

If I go out and buy a new car today, I can be (almost!) completely certain that it won't kill me through any kind of design or manufacturing defect. That's an amazing achievement considering what it does, how it's achieved, and the levels of abuse to which it will be subjected.

That's not to say I can't drive it into a solid object, or that some other driver won't run me off the road, but those are outside the scope and responsibility of the car manufacturer.

But, if the car is self-driving, then responsibility for the car and its actions has to shift. There are a couple of logical steps which lead to this inevitable conclusion:

1) If we accept that the whole reason for self-driving is that a computer controlled car will be safer than a human controlled one, it logically follows that under some circumstances, it will do things that are different to what a human driver would have done.

2) In order to realise the benefit, the human occupant of the vehicle must allow the computer pilot to have control, even when it takes actions which are not what a human driver would have done.

Therefore, responsibility for the car's actions - including any injury which results from its actions - can only rest with its manufacturer.

I happen to agree that a machine which is demonstrably (say) 3x safer than the people it replaces should, in theory, be regarded as a good thing. Unfortunately that's not the way society works, and the news headlines will be full of people saying "tell that to the families of the people your death-machine killed".

As engineers, we should be concerned about this. We will ultimately end up being the ones who write the algorithms and design the hardware. More than a few of us will probably end up in jail for causing death by dangerous driving, without ever leaving our offices, before the legal and social aspects of the technology work themselves out.

In the meantime, I too will be out enjoying my motorbike.

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
1) If we accept that the whole reason for self-driving is that a computer controlled car will be safer than a human controlled one, it logically follows that under some circumstances, it will do things that are different to what a human driver would have done.

I suspect that the driving (no pun intended) forces are:
  • Consumer/purchaser convenience and/or brag factor
  • Corporate profitability - Establishing an early, dominant position in a market sector that didn't previously exist.
  • Engineer appeal - "Hot damn, I'm working on frickin' self-driving cars."

"Improved safety" is an after the fact claim to justify the pursuit, not a real driving force.

I happen to agree that a machine which is demonstrably (say) 3x safer than the people it replaces should, in theory, be regarded as a good thing. Unfortunately that's not the way society works, and the news headlines will be full of people saying "tell that to the families of the people your death-machine killed".

The road traffic incidents that make the headlines, the "something must be done" moments, involve multiple simultaneous fatalities: the classic "motorway pile-up". The same moment will come for self-driving vehicles in the same way, both politically and causally. Most pile-ups happen because a number of drivers make the same error at the same time - following too closely in unsuitable conditions (fog, rain, whatever). That is, there is a systematic failure in the human risk assessment algorithm.

At some point a similar failure will happen with self-driving algorithms: a common systematic flaw, probably an error in the programmers' assumptions, will cause a large number of vehicles in close proximity to make the same mistake at the same time, at high speed, resulting in mass casualties and fatalities. It may even be worse than with human-driven vehicles, where some variability between driver actions spreads the risk and diffuses the damage; self-driving vehicles might all make exactly the same error, with no spread in responses, concentrating the damage.
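The correlation point above can be illustrated with a toy simulation. Every number here (100 cars, a 1% chance of a mistake, 10,000 trials) is hypothetical, and this is not a traffic model; the only point is that the same average failure rate produces very different worst-case event sizes depending on whether errors are independent (varied human drivers) or perfectly shared (one common algorithm).

```python
# Toy comparison: independent mistakes vs one shared mistake.
import random

random.seed(42)  # reproducible runs
N_CARS, P_FAIL, TRIALS = 100, 0.01, 10_000

def worst_event(correlated: bool) -> int:
    """Largest number of cars failing simultaneously across all trials."""
    worst = 0
    for _ in range(TRIALS):
        if correlated:
            # One shared coin: every car makes the same mistake together.
            failures = N_CARS if random.random() < P_FAIL else 0
        else:
            # Independent coins: mistakes are spread across cars and trials.
            failures = sum(random.random() < P_FAIL for _ in range(N_CARS))
        worst = max(worst, failures)
    return worst

independent_worst = worst_event(correlated=False)
correlated_worst = worst_event(correlated=True)
print("independent worst pile-up:", independent_worst)  # stays small
print("correlated worst pile-up:", correlated_worst)    # the whole fleet
```

With these numbers the independent case's worst event is a handful of cars, while the correlated case almost surely produces at least one trial where all 100 fail at once.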
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
So if I provide a box that, if substituted in for all human drivers (admittedly a hypothetical), results in a total of 10,000 deaths per year instead of the current 30,000+ human-caused deaths, you say I should be liable for those 10,000 deaths? Because it feels like I saved 20,000 lives, even if there is manifest room for further improvement...
Your device killed 10,000 people. If you are building something which kills 10,000 people through no fault of their own, you should be held fully responsible for all 10,000 deaths.

I suppose you consider the trolley problem a difficult question too? At least we've discovered the core point of disagreement between us. I'm not going to build this box if I'm going to be held responsible even for a decrease in deaths; the idea that you're more interested in assigning blame for the 10,000 deaths than in the enormous suffering and heartbreak associated with the 20,000 extra deaths is just unconscionable, at least with respect to my personal axioms/approach to ethics. The fact that you have absolutely no interest in the 20,000 lives saved is just bewildering, absolutely bewildering to me. Those aren't statistics, those are real people in real families, only some of whom are even responsible for their deaths.

Autonomous cars may have their place eventually, but it shouldn't mean that non-autonomous vehicles are banned or forcibly made obsolete. This is a freedom that people should not be required to give up.

How many deaths per year would you consider an acceptable price for this freedom?
This is a slippery slope. If you want zero deaths a year, just put everybody in jail for safety's sake. It's all about balance. It's attitudes like "if it's one or more it's an issue" that result in nanny states in the first place.

My answer to this question: as many as reasonably necessary for this freedom to continue to exist.

Oh I agree; I'm not implying the answer should be zero. But I mean geez, 30,000 per year? That feels like it's going a little too far off the other end of the balance. Not sure if I could live with that as a voter/lawmaker.

 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12298
  • Country: au
So if I provide a box that, if substituted in for all human drivers (admittedly a hypothetical), results in a total of 10,000 deaths per year instead of the current 30,000+ human-caused deaths, you say I should be liable for those 10,000 deaths? Because it feels like I saved 20,000 lives, even if there is manifest room for further improvement...
Your device killed 10,000 people. If you are building something which kills 10,000 people through no fault of their own, you should be held fully responsible for all 10,000 deaths.

I suppose you consider the trolley problem a difficult question too? At least we've discovered the core point of disagreement between us. I'm not going to build this box if I'm going to be held responsible even for a decrease in deaths; the idea that you're more interested in assigning blame for the 10,000 deaths than in the enormous suffering and heartbreak associated with the 20,000 extra deaths is just unconscionable, at least with respect to my personal axioms/approach to ethics. The fact that you have absolutely no interest in the 20,000 lives saved is just bewildering, absolutely bewildering to me. Those aren't statistics, those are real people in real families, only some of whom are even responsible for their deaths.

The issue is not one of statistical reality - it is of public perception.

No matter how many deaths are prevented, the fact that the 10,000 that do occur can be associated with a SINGLE point of commonality - ie autonomous vehicles - is what will get the press.

There is a parallel in the judicial system, in any society: it's not whether justice is served that matters - it is that justice is SEEN to be served.

The same perception bias applies.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
...

Therefore, responsibility for the car's actions - including any injury which results from its actions - can only rest with its manufacturer.

I happen to agree that a machine which is demonstrably (say) 3x safer than the people it replaces should, in theory, be regarded as a good thing. Unfortunately that's not the way society works, and the news headlines will be full of people saying "tell that to the families of the people your death-machine killed".

As engineers, we should be concerned about this. We will ultimately end up being the ones who write the algorithms and design the hardware. More than a few of us will probably end up in jail for causing death by dangerous driving, without ever leaving our offices, before the legal and social aspects of the technology work themselves out.

In the meantime, I too will be out enjoying my motorbike.
We will indeed be the ones in control of the development of the algorithms. There isn't that much government involvement yet...

Uncle Bob said in a talk that there will one day be software that kills 10,000 people. When that happens, governments will step in, crafting laws to prevent further such events. It should be the developers and engineers who shape their workflows and ethics so that this event never happens. I think he's right.

Meanwhile, have fun driving your organ donation machine.
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Right. There are two separate things happening here.

One is that the overall death toll reduces from, let's say, 30,000 to 10,000. That's fantastic!

BUT: responsibility for those 10,000 deaths is no longer distributed amongst the drivers of the cars they were in and the vehicles that may have hit them. It becomes concentrated at a single, identifiable point of blame: the person who signed off the auto-pilot as being fit for purpose.

I would never be willing to be that person, because however many lives my self-driving technology might save, I'll be the one straight up against the wall when the first absolutely inevitable fatality does happen.

Let me tell a short story...

A few years ago I (voluntarily!) attended a short motorcycle safety course run by the police. One of the people who spoke there was an accident investigator, and he gave an insightful and useful lecture on the common causes of bike crashes.

(Incidentally, one of the main ones is failing to recognise just how capable a modern motorcycle is when it comes to braking effectiveness, or achievable lean angle and corner speed... having made a mistake, riders panic rather than simply leaning further or braking harder in order to make it round a corner or stop in time - but I digress).

All was going well until the topic came up of modifications. It's very common here for riders to swap out stock suspension parts in favour of upgraded aftermarket alternatives, but he was dead set against the whole idea. He was adamant that manufacturers always "know best", and that he had seen a number of crashes where a rider had made his bike "unrideable" by modifying it.

What he failed to realise was that his own data set was skewed. He only ever saw crashed bikes. He had no way of knowing if and when a rider had avoided a crash precisely because his bike was properly set up for his size, weight, preferences and riding style. For all his expertise, he was completely oblivious to the fact that his data set had holes in it.

I fear the same will happen with self-driving cars. Make an improvement that saves 20,000 lives, and you'll still end up taking responsibility for the 10,000 who - it will be argued - might not have died were it not for your "so-called safe" auto-pilot.

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Meanwhile, have fun driving your organ donation machine.

Oh, I do, don't worry about that.

I'm *much* more scared that I'll find myself a 95 year old dribbling idiot in a hospital bed somewhere, regretting having spent an entire life not having lived, than I am of the prospect of a bike accident.

I have a good helmet and leathers, and the wisdom not to do anything stupid. The rest is a risk which I absolutely, completely, understand and accept.

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
I suppose you consider the trolley problem a difficult question too?
No, but it has nothing to do with this discussion.

At least we've discovered the core point of disagreement between us. I'm not going to build this box if I'm going to be held responsible even for a decrease in deaths; the idea that you're more interested in assigning blame for the 10,000 deaths than in the enormous suffering and heartbreak associated with the 20,000 extra deaths is just unconscionable, at least with respect to my personal axioms/approach to ethics. The fact that you have absolutely no interest in the 20,000 lives saved is just bewildering, absolutely bewildering to me. Those aren't statistics, those are real people in real families, only some of whom are even responsible for their deaths.
You clearly don't care about the 10,000 people your device killed, and you're fine with "Oh well, at least I statistically saved 20,000 lives!" This is like a murderer saying she's not a murderer because she "saved 8 lives" by killing 2 people instead of 10.

In this case, the person who caused the accidents can easily be brought to justice (given the legal system works). In the case of driverless cars, the blame will be entirely on the manufacturer of the device. They won't care about heartbreak and suffering, and it will be a case of the small mouse trying to fight the big cat.
I'm not suggesting in any way that a decrease in deaths is bad, but there are real practical and legal considerations at play that you have conveniently (and hypocritically) ignored in favour of a purely sympathetic and emotional appeal.

If Microsoft/Apple/Google don't care about treating people's data and computers as their toys, I doubt they'll care about the suffering families endure when their driverless cars harm others.

Oh I agree; I'm not implying the answer should be zero. But I mean geez, 30,000 per year? That feels like it's going a little too far off the other end of the balance. Not sure if I could live with that as a voter/lawmaker.
The US has over 325 million people, and this figure represents roughly 0.009% of the population. I think that's quite acceptable given that far more people die from illness and other accidents. Perhaps we should ban people from going outside, and allow health inspectors forced entry so they can sterilise your home. While we're at it, the only food everyone's allowed to eat is government-approved nutritional paste, certified free of contaminants.

I'm all for saving lives, but not at the cost of freedom.
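The percentage above is straightforward to check; both inputs are the figures quoted in the thread, and it comes out a touch above 0.009% (roughly one death per 10,800 people per year).

```python
# Annual US road deaths as a share of the population, using the
# figures quoted in the post above.
deaths_per_year = 30_000
population = 325_000_000

share = deaths_per_year / population
print(f"{share:.4%}")  # 0.0092%
```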
« Last Edit: June 19, 2017, 12:27:36 pm by X »
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
I agree with everything in your message, X, except what is quoted below;

At least we've discovered the core point of disagreement between us. I'm not going to build this box if I'm going to be held responsible even for a decrease in deaths; the idea that you're more interested in assigning blame for the 10,000 deaths than in the enormous suffering and heartbreak associated with the 20,000 extra deaths is just unconscionable, at least with respect to my personal axioms/approach to ethics. The fact that you have absolutely no interest in the 20,000 lives saved is just bewildering, absolutely bewildering to me. Those aren't statistics, those are real people in real families, only some of whom are even responsible for their deaths.
You clearly don't care about the 10,000 people your device killed, and you're fine with "Oh well, at least I statistically saved 20,000 lives!"

Statistically? Should we give no credit to the measles vaccine because the 17.1 million lives it has saved are merely "statistical" "estimates"?

I care about the fact that I am responsible for there being 10,000 deaths instead of 30,000, and would work my ass off to get it down to 5,000 and below. Again, people who have trouble with the trolley problem can talk their way into allowing 30,000 people to die, but not me. I'm happy for us to agree to disagree on this point though.

I suppose you consider the trolley problem a difficult question too?
No, but it has nothing to do with this discussion.

This is like a murderer saying she's not a "real" murderer because she only killed 2 people instead of 10, thus saving 8 lives.

I mean, that is exactly the trolley problem with 10 people on the main line and 2 people on the siding, but eh whatever
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Statistically? Should we give no credit to the measles vaccine because the 17.1 million lives it has saved are merely "statistical" "estimates"?

If that vaccine were known to have killed 1 million of the people who received it, would you give it to your children?

On balance, you certainly should. After all, it's 17 times as likely to save their lives as it is to kill them.

People don't work that way though, which is probably why I prefer working with machines instead.
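The balance-of-risk arithmetic behind the hypothetical above, spelled out. Both numbers come from the exchange itself: the 17.1 million estimated lives saved, and the hypothetical 1 million deaths posed as a thought experiment.

```python
# Ratio behind "17 times as likely to save their lives as to kill them".
lives_saved = 17_100_000   # measles vaccine estimate quoted in the thread
lives_lost = 1_000_000     # the hypothetical harm figure posed above

ratio = lives_saved / lives_lost
print(round(ratio, 1))  # 17.1
```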
 
The following users thanked this post: Someone, rs20

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
Statistically? Should we give no credit to the measles vaccine because the 17.1 million lives it has saved are merely "statistical" "estimates"?
This is again detracting from the issue.
The vaccine is not taking peoples' freedom away just for the sake of saving lives. It is actually immunising people (who are at risk of a gruesome death), who can then carry on with their lives as per normal. To the best of my knowledge, the vaccine didn't actually kill anyone, and to date the risk of that happening is demonstrably tiny, so better to take that tiny calculated risk.
Of course if the vaccine kills or injures anyone, questions will be asked of the doctor who performed the vaccination, and possibly the manufacturer of the vaccine, just like any other medical procedure gone wrong.

In your initial problem, your box actually resulted in the deaths of 10,000 when they might have otherwise been fine. I agree that 10,000 is better than 30,000 in this case, but it still doesn't change the fact that the responsibility for the deaths now all lies upon you, since those 10,000 people were certainly not at fault for their deaths.

I care about the fact that I am responsible for there being 10,000 deaths instead of 30,000, and would work my ass off to get it down to 5,000 and below. Again, people who have trouble with the trolley problem can talk their way into allowing 30,000 people to die, but not me. I'm happy for us to agree to disagree on this point though.
Can't disagree with this one.

I mean, that is exactly the trolley problem with 10 people on the main line and 2 people on the siding, but eh whatever
The way I see it, with the trolley problem, whoever is responsible for either 1 or (n-1) deaths is the villain who tied the people onto the line in the first place; all you did was make a decision that influences how many deaths the villain is responsible for. If the people got stuck on the line of their own accord, it's probably their own fault, and you can't be held responsible for any of the deaths, regardless of whether or not you flicked the switch.
« Last Edit: June 19, 2017, 01:46:14 pm by X »
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
I completely agree. But the whole thing is 100% about profit and a planned-obsolescence nirvana. The powers that be are also increasingly concerned about their own legitimacy (this is a very big concern among them) and their power, and will likely have rights to all sorts of unknown features in self-driving cars - so useful for surveillance, or even worse. So I think we'll see them much sooner rather than later, even if the business case for them gets sketchier and sketchier, the number of people driving to work drops like a stone, and we clearly don't need them - which in fact I think will likely be the case.

Due to automation and the Internet, the number of people needing to go to work in a nearby city every day may get much smaller. Where people choose to live may no longer be tied to where they work. That was part of the original promise of the Internet of Things.

"Ubiquitous computing", at Xerox PARC, was about workplace cooperation at a level rarely seen in corporations, BTW.

Interesting research, now quite old, but it could be realised today to make people significantly more productive. (But a surveillance nightmare too.)

Good for very high-functioning, high-trust, non-hierarchical research groups - kind of like my ideal of where we should be going. If we want to maintain high levels of employment, that is actually what we must do, so that a person at age 20 will be what a PhD is today. We need to accelerate the learning process and stop wasting time beating the spirit out of people - let them remain childlike in terms of curiosity their whole lives. (Creative geniuses are often like that.)

"Ubiquitous computing"  - check it out.  They had a vision for the future many of us lack today. Its one where people are online but they are also much more engaged with their co-workers and friends, not detached and isolated.


Quote from: X on Yesterday at 22:56:33

    Autonomous cars may have their place eventually, but it shouldn't render non-autonomous vehicles to be banned or forcibly made obsolete. This is a freedom that people should not be required to give up.
« Last Edit: June 19, 2017, 01:14:36 pm by cdev »
"What the large print giveth, the small print taketh away."
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
The body IS a machine.. and this is a great example of why we should take a pragmatic approach to that and use what we know to protect it..

The grain of truth at the core of the IMHO phony and very contentious "vaccine issue" is a serious threat posed to all of us by exposures to strongly pro-oxidant substances - that threat being especially serious at certain points of the life cycle.

This is a fairly simple scientific fact which would be easy to teach people about..

Something that I have known for at least three decades, and that is explained in tens of thousands of scientific papers, is the importance of glutathione to health.

Due to our need for glutathione, these pro-oxidant substances can become additive toxicants - many are fairly common, and many are completely unregulated. Glutathione is there to protect our cells; when a toxic challenge from some substance is encountered, it must be present, or the cell may have to kill itself to prevent DNA damage and cancer. So exposure to pro-oxidant toxicants is expensive to our health and should be compensated for.

As multiplication of cells is finite (see "Hayflick limit"), the built-in repair mechanism - apoptosis of damaged cells and cell division of undamaged ones - becomes less and less available as we age. Inflammation levels also rise with age, using up our precious glutathione even without toxic exposures. (Everybody should supplement with NAC as they get older - not a lot, but enough to ensure we're getting enough cysteine to make enough glutathione for our age-related needs, and more if we know we're getting toxic exposures. Lead is strongly pro-oxidant, and NAC is useful in cases of chronic lead exposure. Another amino acid of use in that context is taurine.)

The big problem, as described in the paper linked below, is that pro-oxidant toxicants have the potential to cause birth defects in pregnancy due to changes in the expression of two genes, "Fyn" and "c-Cbl". Very low-level exposures can disrupt precursor cell function - in other words, disrupt cell differentiation.

So, a very serious threat to humanity's reproductive process is posed by low-level pollutants. All pollutants and other chemicals that have strong pro-oxidant activity should therefore be considered additive, not regulated separately.

Pro-oxidant toxicants are toxic to the unborn children of pregnant women, and should be of concern to everyone else as well - they can dramatically affect the IQs of children and cause life-changing autoimmune disease and inflammation in people of all ages.

Certain population groups already have chronically low glutathione due to the creation of cross-links (advanced glycation end-products, commonly abbreviated "AGEs") as we age. So we should be concerned when any of those groups - unborn children (i.e. pregnant women or women of childbearing age), infants, children, or the old or sick - are exposed to them, even at very low levels.

However, many pro-oxidant substances are unregulated, or regulated only at extremely high levels. That leads to binary thinking and behaviours that are extremely ill-advised: people assuming that chronic exposure, 24 hours a day, seven days a week, at levels below some 30-year-old "action level" - one picked to apply in a workplace setting of 8 hours a day, 40 hours a week - is okay.

Here is the paper. Note that people exposed to a plethora of pro-oxidant substances can supplement with N-acetylcysteine to improve their cells' "redox status" - substantially reducing the risk of toxicity to an unborn child and of cancers (by reducing apoptosis, the programmed death of exposed cells) and preserving more of the body's finite cellular repair capacity for longer.

PLOS Biology: Chemically Diverse Toxicants Converge on Fyn and c-Cbl to Disrupt Precursor Cell Function

Quote from: AndyC_772 on Today at 06:40:25
Quote from: rs20 on Today at 06:30:59
Statistically? Should we give no credit to the measles vaccine because the 17.1 million lives it have saved are merely "statistical" "estimates"?

If that vaccine were known to have killed 1 million of the people who received it, would you give it to your children?

On balance, you certainly should. After all, it's 17 times as likely to save their lives, as it is to kill them.

People don't work that way though, which is probably why I prefer working with machines instead.
« Last Edit: June 19, 2017, 01:55:43 pm by cdev »
"What the large print giveth, the small print taketh away."
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5231
  • Country: us
This thread started out discussing situations that will be difficult or impossible for self-driving cars.  A recent letter to American Scientist pooh-poohed the idea of a self-driving car by relating the author's recent cross-country trip, which involved several difficult situations including iced-up roadways, a stint off-road, and others that I don't remember.

The solution to these problems is simple and was identified in science fiction decades ago.  As with all solutions it isn't perfect, but it will work most of the time.  You simply retain the driver's ability to control the car, and hand over to the driver when the situation is outside the autopilot's capabilities.  You get the benefits of the autopilot on long boring cruises across the country, and you don't have to pay for some genius-of-the-future autopilot that can handle the bizarre cases.

Simple GPS maps could handle the decision function in most cases.  Augment them with a few sensors and a feed from the weather bureau, and the autopilot would have a pretty robust way of identifying when it was in over its head.
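In code terms, that decision function could be as simple as OR-ing those three inputs together. A minimal sketch; the zone names, weather codes, and 0.8 confidence threshold are all made-up illustrations, not any real autopilot's interface:

```python
# Sketch of an "am I in over my head?" check. All names and thresholds here
# are hypothetical illustrations.

HARD_ZONES = {"construction", "dense_pedestrian", "unmapped", "off_road"}
BAD_WEATHER = {"ice", "heavy_snow", "dense_fog"}

def autopilot_in_over_its_head(route_zone, weather, sensor_confidence):
    """Return True when control should be handed back to the human driver."""
    if route_zone in HARD_ZONES:     # GPS map flags the area as too hard
        return True
    if weather in BAD_WEATHER:       # weather bureau feed reports bad conditions
        return True
    if sensor_confidence < 0.8:      # the sensors themselves report low confidence
        return True
    return False

# Clear, mapped freeway: stay on autopilot.
print(autopilot_in_over_its_head("freeway", "clear", 0.95))  # False
# Iced-up roadway, as in the American Scientist letter: hand back control.
print(autopilot_in_over_its_head("freeway", "ice", 0.95))    # True
```

Any one tripped condition is enough; the hard part in practice is making the inputs trustworthy, not the logic itself.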

It is easy to find fault with this solution.  Many of the cases identified in this thread and others are things that would escape this system and put the autopilot in a difficult situation.  But a little thought shows that human drivers don't generally do very well in these cases either.  Consider the example of emergency braking with cars in adjacent lanes, a child in one of the cars, and so on.  How many of us can truly say that in an emergency braking situation we run through all of the options and select the best?
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
How many of us can truly say that in an emergency braking situation we run through all of the options and select the best?

It depends on whether your body has decided that you're in real imminent danger of death. If it has, the world goes into slow motion and you have a surprising amount of apparent time to think about what you're going to do - in real time no more than 200-300ms. I've had it happen to me (twice) and it's a truly weird feeling. Both times I had enough volitional thought and control to make a difference to the outcome. Whether this was enough to select the 'best' outcome is doubtful, but it was enough to select a 'better' outcome.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Well I know this: if I'm ever involved in an accident with a self-driving car that is not my fault, I'm going to sue the manufacturer of the self-driving car. Given what I've observed about the legal system, there is a very good chance that I will win a very substantial settlement. I'm certain that I'm not unique here; a few dozen similar incidents could bankrupt the manufacturer.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1890
  • Country: us
    • KE5FX.COM
You just retain the capability for the driver to control the car, and switch to the driver when in a situation outside of the autopilot's capabilities.

Sorry, but that's not going to work.

Quote
How many of us can truly say that in an emergency braking situation we run through all of the options and select the best?

That depends on what you were doing before the emergency braking situation came up. 

Were you scanning traffic ahead, reading road conditions, and keeping a mental note of cars in the vicinity of your own?  If so, then there's a good chance you'll do OK, and everyone will go home with their sheet metal intact.

Or, were you doing what drivers will actually be doing, and texting your friends or watching a movie on your phone when the "Emergency Intervention Required" alarm went off? 

If so, then there's not a chance in hell you'll be able to take over in time. 

Seriously: how can it not be completely obvious that self-driving cars are an all-or-nothing kind of thing?
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
I suppose you consider the trolley problem a difficult question too?
No, but it has nothing to do with this discussion.

At least we've discovered the core point of disagreement between us. I'm not going to build this box if I'm going to be held responsible even for a decrease in deaths; the idea that you're more interested in assigning blame regarding the 10,000 deaths rather than the enormous suffering and heartbreak associated with the 20,000 extra deaths is just unconscionable, at least with respect to my personal axioms/approach to ethics. The fact that you have absolutely no interest in the 20,000 lives saved is just bewildering, absolutely bewildering to me. Those aren't statistics, those are real people in real families, only some of whom are even responsible for their own deaths.
You clearly don't care about the 10,000 people your device killed, and you're fine with "Oh well, at least I statistically saved 20,000 lives!" This is like a murderer saying she's not a murderer because she "saved 8 lives" by killing 2 people instead of 10.
You do realize that the second underlined section is the trolley problem in a nutshell, right?

It seems to me that it has quite a bit to do with this discussion.
 

Offline JoeNTopic starter

  • Frequent Contributor
  • **
  • Posts: 991
  • Country: us
  • We Buy Trannies By The Truckload
Seriously: how can it not be completely obvious that self-driving cars are an all-or-nothing kind of thing?

Well, for one thing, manufacturers are not saying this; only you are.  Secondly, if it's all or nothing, then they are a nothing thing.  There is no way we are going to turn over the entire fleet of automobiles in North America just to implement this optional feature.  The people would never stand for it.  It would be the least popular law proposal ever.
Have You Been Triggered Today?
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
Statistically? Should we give no credit to the measles vaccine because the 17.1 million lives it has saved are merely "statistical" "estimates"?

If that vaccine were known to have killed 1 million of the people who received it, would you give it to your children?

On balance, you certainly should. After all, it's 17 times as likely to save their lives, as it is to kill them.

People don't work that way though, which is probably why I prefer working with machines instead.

This analogy only works if you have given out 18.1 million doses of the vaccine.  Otherwise the "17 times as likely" figure doesn't add up.
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
I also don't like the 30,000 lives today vs. saving 20,000 lives by lowering the death count to 10,000.  How can you say the 10,000 lives now lost to machine driving are a subset of the original 30,000?  Machines will make different mistakes than human drivers, no matter how well they are taught.  In a number of these 10,000 cases, the court argument will always be that a human driver would not have made that type of mistake under the same circumstances.  Video logs from the car's computer will confirm this often enough to create public outcry.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5231
  • Country: us
You just retain the capability for the driver to control the car, and switch to the driver when in a situation outside of the autopilot's capabilities.

Sorry, but that's not going to work.

Quote
How many of us can truly say that in an emergency braking situation we run through all of the options and select the best?

That depends on what you were doing before the emergency braking situation came up. 

Were you scanning traffic ahead, reading road conditions, and keeping a mental note of cars in the vicinity of your own?  If so, then there's a good chance you'll do OK, and everyone will go home with their sheet metal intact.

Or, were you doing what drivers will actually be doing, and texting your friends or watching a movie on your phone when the "Emergency Intervention Required" alarm went off? 

If so, then there's not a chance in hell you'll be able to take over in time. 

Seriously: how can it not be completely obvious that self-driving cars are an all-or-nothing kind of thing?

A:  The kind of transition I am talking about isn't an "Oops, I'm out of my league - here, it's your hot potato" thing.  The GPS says: gee, at the current rate of progress I am coming to a (construction zone, dense pedestrian zone, loonie driving zone, rain squall....) in five minutes.  Time to alert the driver that he/she needs to take over, which is handled with a handshake protocol.  If this hasn't happened with one minute to go, the urgent alarms go off and the car starts looking for a place to pull over and park.  This is what the sci-fi folks wrote about.
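Point A amounts to a small state machine. A sketch, using the five-minute alert and one-minute deadline from the description above; the state names and everything else are my own hypothetical labels:

```python
# Handoff protocol from (A): alert the driver five minutes before a hard zone;
# if there is no handshake with one minute to go, sound the alarm and pull over.

ALERT_S = 300  # start politely alerting the driver 5 minutes out
PANIC_S = 60   # with 1 minute to go, give up and look for a place to park

def handoff_state(seconds_to_zone, driver_acknowledged):
    """Return the car's mode given time to the hard zone and handshake status."""
    if driver_acknowledged:
        return "manual_control"         # handshake complete, driver has the car
    if seconds_to_zone > ALERT_S:
        return "autopilot"              # nothing to do yet
    if seconds_to_zone > PANIC_S:
        return "alerting_driver"        # "please take over" phase
    return "urgent_alarm_pull_over"     # alarms on, find somewhere to park

print(handoff_state(600, False))  # autopilot
print(handoff_state(200, False))  # alerting_driver
print(handoff_state(30, False))   # urgent_alarm_pull_over
```

The key design point is that the car never dumps control on an unready driver: with no acknowledgement, it degrades to parking itself rather than to a surprise handover.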

B:  I am a fairly alert driver, scanning nearby and approaching traffic and all of that.  But I don't think I am alone in saying that I don't try to evaluate the human cargo of nearby vehicles in order to choose which bad outcome to select in case of a possible emergency.  My time and attention are better spent looking for road hazards and keeping track of the closure rates and driving consistency of the dozen or more vehicles in my threat space at any given time.

C:  I have been in a number of emergency reaction situations in my life, and have come through all but one of them without a collision.  I agree that time does seem to slow in an emergency, but there is still never enough time to do all the things you would like.  In the case of my collision I arguably made the wrong choice, as there wasn't quite enough room to stop behind the vehicle in front (by less than half a meter) and an emergency lane change might have been the better choice.  But even in retrospect I'm not sure, since there was marginal passing space on either side of the stopped vehicle, and if that margin were actually negative I would have traded the very low speed collision I had for a much higher speed sideswipe, or perhaps a rollover into a ditch.  An autopilot could have judged all of these factors much more quickly and accurately than I.  Human beings make mistakes in these situations; it is ludicrous to expect an autopilot to be perfect.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Seriously: how can it not be completely obvious that self-driving cars are an all-or-nothing kind of thing?

Well, for one thing, manufacturers are not saying this; only you are.  Secondly, if it's all or nothing, then they are a nothing thing.  There is no way we are going to turn over the entire fleet of automobiles in North America just to implement this optional feature.  The people would never stand for it.  It would be the least popular law proposal ever.

To borrow a quote from the gun guys, they can pry my manually driven (and manually shifted) car from my cold, dead hands.
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
To borrow a quote from the gun guys, they can pry my manually driven (and manually shifted) car from my cold, dead hands.
My uncle has a country place
That no one knows about
He says it used to be a farm
Before the Motor Law
And on Sundays I elude the eyes
And hop the Turbine Freight
To far outside the Wire
Where my white-haired uncle waits

Jump to the ground
As the Turbo slows to cross the borderline
Run like the wind
As excitement shivers up and down my spine
Down in his barn
My uncle preserved for me an old machine
For fifty odd years
To keep it as new has been his dearest dream

I strip away the old debris
That hides a shining car
A brilliant red Barchetta
From a better vanished time
I fire up the willing engine
Responding with a roar
Tires spitting gravel
I commit my weekly crime

Wind
In my hair
Shifting and drifting
Mechanical music
Adrenaline surge...

Well-weathered leather
Hot metal and oil
The scented country air
Sunlight on chrome
The blur of the landscape
Every nerve aware
...

(Red Barchetta, Moving Pictures, Rush)
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
Our daily drivers are 3, 13, and 19 years old and my 2 classics are 51 and 52 years old.

I doubt that we'll see any kind of hard cutover from human-driven to AI-driven cars. If the AI cars can't co-exist with the human cars, who will buy an AI car?

I can't imagine the government banning all other cars. Can you imagine the outcry? It would literally be a giant F-U to poor people (and to those of us who choose to drive economical [older] cars). Depending on which year and which study you read, the average age of a car on the road in the US is 11.4-12 years. It will take at least that long to cycle out roughly half of the cars on the road.
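For a sense of scale, here is the turnover arithmetic with round, illustrative numbers (the fleet size and sales rate below are assumptions, not official figures):

```python
# Rough fleet-turnover estimate. Both inputs are illustrative round numbers.
fleet = 250_000_000          # cars currently on US roads (assumed)
sales_per_year = 17_000_000  # new cars sold per year (assumed)

# Even if every new sale retired an old car, replacing half the fleet takes:
years_to_replace_half = (fleet / 2) / sales_per_year
print(round(years_to_replace_half, 1))  # 7.4
```

And that is an optimistic lower bound: many sales replace newer used cars rather than retiring the oldest ones, which is why the 11-12 year average-age figure is the more realistic timescale.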
 
The following users thanked this post: james_s

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
If they wanted to do a hard cutover it would be far cheaper and more practical to build light rail everywhere, and a lot more efficient and effective too. I simply fail to see the attraction of a personal car that you can't drive; it's the worst of both worlds. All the fun of mass transit coupled with the cost of owning a premium car. Personal cars are a highly inefficient way to move people and goods, and making one drive itself is orders of magnitude more complex, with a far greater opportunity for error, than rail, which is inherently self-driving.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1890
  • Country: us
    • KE5FX.COM
If they wanted to do a hard cutover it would be far cheaper and more practical to build light rail everywhere, and a lot more efficient and effective too. I simply fail to see the attraction of a personal car that you can't drive; it's the worst of both worlds. All the fun of mass transit coupled with the cost of owning a premium car. Personal cars are a highly inefficient way to move people and goods, and making one drive itself is orders of magnitude more complex, with a far greater opportunity for error, than rail, which is inherently self-driving.

You'll see car ownership take a steep dive if/when this ever happens.  There won't be much point to owning one.  Call one up when/where you need it, and pay as you go.

And yes, it will be one hell of a lot better than trains and buses that are guaranteed not to be where you are, nor to go where you want.
 
The following users thanked this post: Someone

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
You've got it backwards: the inefficiency is a feature, not a bug. Jobs are going away, and money may too, soon, so they are all trying to make as much as possible and salt it away offshore. Remember, the money you make now may have to last your descendants forever.

Proof? Just look at the US health care insurance system - "the one bright spot in a dismal economy," as it was put to me a few years back.

Quote from: james_s on Today at 20:27:22
If they wanted to do a hard cutover it would be far cheaper and more practical to build light rail everywhere, and a lot more efficient and effective too. I simply fail to see the attraction of a personal car that you can't drive; it's the worst of both worlds. All the fun of mass transit coupled with the cost of owning a premium car. Personal cars are a highly inefficient way to move people and goods, and making one drive itself is orders of magnitude more complex, with a far greater opportunity for error, than rail, which is inherently self-driving.
"What the large print giveth, the small print taketh away."
 

Offline poorchava

  • Super Contributor
  • ***
  • Posts: 1672
  • Country: pl
  • Troll Cave Electronics!
Lol, I bet nobody would buy an autonomous car here in Poland, and not for financial reasons.

That car would obey the official rules, not the real ones. Try coming here to Poland and driving 50 km/h in a 50 limit. You'll be overtaken and honked at by literally everyone. I usually drive somewhere around 70-80 km/h and still get overtaken on a regular basis in the very center of a city of 700k people. I somehow can't see autonomous cars fitting into that picture.
I love the smell of FR4 in the morning!
 

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
You clearly don't care about the 10,000 people your device killed, and you're fine with "Oh well, at least I statistically saved 20,000 lives!" This is like a murderer saying she's not a murderer because she "saved 8 lives" by killing 2 people instead of 10.
You do realize that the second underlined section is the trolley problem in a nutshell, right?

It seems to me that it has quite a bit to do with this discussion.
No, the trolley problem is about choosing who dies, with all the people tied onto the line and already in danger, not about playing with statistics. The number of people on each line is known exactly. This is very different to "A programming error on my part may have killed 10,000 with my magic box, but it might be possible that maybe 30,000 would have died if my magic box wasn't around."

If relevant to anything, it may be slightly relevant to that specific (and somewhat flawed) example, but not the discussion.

The best thing I ever heard in favour of self driving cars is that when you have quick errands to run and parking is inconvenient, expensive or just not available you can just leave it circling the block until you complete your business.
(arrives at work 4 hours late) "Sorry boss, there was a bug in the new firmware I flashed in my car, it just drove me all around in circles!"  >:D
« Last Edit: June 20, 2017, 03:27:19 pm by X »
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7377
  • Country: nl
  • Current job: ATEX product design
I doubt that we'll see any kind of hard cutover from human-driven to AI-driven cars. If the AI cars can't co-exist with the human cars, who will buy an AI car?
I would. I throw away 2x30 min a day doing things that a machine can do. I'm not one of those "my car is my life" kinda people. I have a Prius, and while I like driving, I don't like driving when there are other people on the road, because they are inbred morons who put my life at risk on a daily basis. If I want to drive, I rent a Caterham and go to the Nürburgring.
BTW, I recently read an article about a "self driving" Peugeot. Guess what it was doing? Driving in zig-zags and randomly braking for no good reason. Just like the drivers who buy any new Peugeot.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
But why not just take lightrail? When I visited Los Angeles I didn't bother to rent a car, I just got a hotel room near the light rail line and rode it everywhere. It worked great, the trains were fast, quiet and clean, ran every 15 minutes on predetermined routes, maps prominently displayed in each car and stops called out automatically as they approached. It worked really well, felt great to blast down the middle of a jammed up freeway at 70mph.

A hard cutover simply won't work, at least not in the US. People like to drive cars, people are attached to their cars, and a large portion of the population can't afford a brand new car. I could afford one, but I would never buy one; I refuse to spend so much on a depreciating asset. It just won't happen unless they can integrate seamlessly with manually driven cars, and I don't think they will in practice, at least not for a long time. Once they are out there I look forward to the first unexpected snowstorm; I'll park somewhere well away from the streets and find a safe perch to sip a beer and watch the show.
« Last Edit: June 21, 2017, 03:36:07 pm by james_s »
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
This is so strange to read because for most of my life LA has been the city that just couldn't bring itself to have a decent public transport system. Unlike all the other big cities I know well.  The RTD buses were uniformly horrible.

Way back in the 30s and 40s (way before my time) they had a great trolley system that went everywhere, but like the others, a company controlled by the evil National City Lines bought it up and junked it.

Quote from: james_s on Today at 15:57:28
But why not just take lightrail? When I visited Los Angeles I didn't bother to rent a car, I just got a hotel room near the light rail line and rode it everywhere. It worked great, the trains were fast, quiet and clean, ran every 15 minutes on predetermined routes, maps prominently displayed in each car and stops called out automatically as they approached. It worked really well, felt great to blast down the middle of a jammed up freeway at 70mph.

A hard cutover simply won't work, at least not in the US. People like to drive cars, people are attached to their cars, and a large portion of the population can't afford a brand new car. I could afford one, but I would never buy one; I refuse to spend so much on a depreciating asset. It just won't happen unless they can integrate seamlessly with manually driven cars, and I don't think they will in practice, at least not for a long time. Once they are out there I look forward to the first unexpected snowstorm; I'll park somewhere well away from the streets and find a safe perch to sip a beer and watch the show.
"What the large print giveth, the small print taketh away."
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
I doubt that we'll see any kind of hard cutover from human-driven to AI-driven cars. If the AI cars can't co-exist with the human cars, who will buy an AI car?
I would. I throw away 2x30 min a day doing things that a machine can do. I'm not one of those "my car is my life" kinda people. I have a Prius, and while I like driving, I don't like driving when there are other people on the road, because they are inbred morons who put my life at risk on a daily basis. If I want to drive, I rent a Caterham and go to the Nürburgring.
But if they can't co-exist with human driven cars, you still have to drive it manually or at least monitor it, right? You don't get back that 60 minutes/day unless it's fully autonomous AI, IMO.

I don't see a parallel road network as likely to spring up. How does the AI car help you?
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7377
  • Country: nl
  • Current job: ATEX product design
But if they can't co-exist with human driven cars, you still have to drive it manually or at least monitor it, right? You don't get back that 60 minutes/day unless it's fully autonomous AI, IMO.
I don't see a parallel road network as likely to spring up. How does the AI car help you?
There are 6 levels of autonomous cars. I have level 1 now; level 5 is "steering wheel optional". Tesla is level 2; people die when they treat it as level 3.
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
Exactly. That's my point in this sub-thread. If level 5 is only useful if you get all the human-driven cars off the road, who will buy the level 5 car in the first 10, or even 5, years? Sure, some gadget freaks might, but it won't deliver the significant practical advantage of "take me to work while I nap".
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
... or indeed, "take me home after I've been out for a few drinks". Or perhaps, "take me somewhere after I've had a stroke, and I can't ever drive again for myself".

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7377
  • Country: nl
  • Current job: ATEX product design
You don't need to get rid of the human drivers (although you should; it would speed up the road network). You cannot get rid of the cyclists or pedestrians on the road. I know, these things are unknown in the USA, but a level 5 car for sure needs to be prepared for them. Even some level 2 systems handle them. Why would you need a separate network?
Volvo claims that nobody will die or get injured in their self-driving car, and that they can do this as soon as 2020.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
But if they can't co-exist with human driven cars, you still have to drive it manually or at least monitor it, right? You don't get back that 60 minutes/day unless it's fully autonomous AI, IMO.

I don't see a parallel road network as likely to spring up. How does the AI car help you?

It will take a while to get rid of the non-automated cars, but we'll do it through legislation, just as we got rid of the gross polluters, and we'll use the same logic: the older cars prevent efficient use of the roadway and lead to increased pollution, so we'll just make the emissions standards increasingly difficult to meet until only electric cars with a barn full of electronics can pass.

Or we will restrict non-automated vehicles to off-peak hours.

Seems simple to me!

We already have an example of this:  In California, early adopters of the Prius were allowed to drive in the HOV lane with no passengers.  Now that privilege is restricted to just a certain number of hybrids and will soon disappear entirely.  And it should!  One of the latest Prius cars has a 1.3 kWh battery that lasts just a couple of miles.  The gasoline engine is running all the time.  Even calling it a hybrid is a joke.  The battery is just there to absorb the regenerative braking.

The 2017 Chevy Bolt has a 60 kWh battery - nearly 50 times the size of the Prius battery.  And it doesn't even have a gasoline engine!  And no semiannual smog inspection either!  And darn little scheduled maintenance.

 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7377
  • Country: nl
  • Current job: ATEX product design
We already have an example of this:  In California, early adopters of the Prius were allowed to drive in the HOV lane with no passengers.  Now that privilege is restricted to just a certain number of hybrids and will soon disappear entirely.  And it should!  One of the latest Prius cars has a 1.3 kWh battery that lasts just a couple of miles.  The gasoline engine is running all the time.  Even calling it a hybrid is a joke.  The battery is just there to absorb the regenerative braking.
It has a more efficient engine (an Atkinson cycle that wouldn't work without the hybrid system), a 0.25 drag coefficient, more efficient air conditioning, etc. On pure electric, in city use, mine goes 2 km. Sure, that is not a lot, and you cannot even charge it, but it is not only brake regen; it will switch the petrol engine on and off depending on speed. I got 56 MPG while not driving like a moving roadblock. That car is about getting the most energy from the fuel and making the smallest pollution possible while doing so, which is a lot more than the average car company does. Take, for example, Ford, who dares to call a 3.5L turbocharged engine "eco".
 

Offline Johnny10

  • Frequent Contributor
  • **
  • Posts: 899
  • Country: us
What type of algorithm could account for a reckless driver in another car?
Or 70 other drivers during rush hour with everyone wanting to do 80MPH.
Something I see every few minutes on I4 Orlando, FL during rush periods.

I would like a look at that predictive AI.
« Last Edit: June 21, 2017, 03:32:30 pm by Johnny10 »
Tektronix TDS7104, DMM4050, HP 3561A, HP 35665, Tek 2465A, HP8903B, DSA602A, Tek 7854, 7834, HP3457A, Tek 575, 576, 577 Curve Tracers, Datron 4000, Datron 4000A, DOS4EVER uTracer, HP5335A, EIP534B 20GHz Frequency Counter, TrueTime Rubidium, Sencore LC102, Tek TG506, TG501, SG503, HP 8568B
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
You don't need to get rid of the human drivers (although you should; it would speed up the road network). You cannot get rid of the cyclists or pedestrians on the road. I know, these things are unknown in the USA, but a level 5 car for sure needs to be prepared for them. Even some level 2 systems handle them. Why would you need a separate network?
Volvo claims that nobody will die or get injured in their self-driving car, and that they can do this as soon as 2020.

Cyclists and pedestrians are all over the place in the US, at least out in my area when the weather is pleasant.

As much as I love Volvo's older products, they're smoking something if they think they'll have practical fully autonomous cars by 2020. Maybe by 2040 if the idea really catches on and isn't a passing fad that gets killed by disappointment or spectacular failures.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
What type of algorithm could account for a reckless driver in another car?
Or 70 other drivers during rush hour with everyone wanting to do 80MPH.
Something I see every few minutes on I4 Orlando, FL during rush periods.

I would like a look at that predictive AI.

In every case, the reaction time of the AI will be superior to a human's.  The algorithms have no ego; they'll just let the other car slide in.  Addle-brained drivers don't always respond well to reckless drivers either.  Oh, and algorithms never drift when a favorite song comes up on the radio.  Nor do they reach down to pick up a tape and wipe out 5 bicyclists riding on the edge of the pavement.

Nearly 40 years ago, a VP of a major semiconductor company told me "Never bet against technology!"  So far, he's been right.  The discussion at the time concerned the power requirements of mainframe computers and whether kW per megaflop could be radically diminished.  Obviously it can; look at what we're using today!

 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us

Cyclists and pedestrians are all over the place in the US, at least out in my area when the weather is pleasant.


Darwin had a theory that basically says that cyclists and pedestrians will evolve to avoid cars rather than the other way around.

Many areas are creating dedicated bike lanes.  It's a good bet the AI will be able to distinguish these at some point.  That won't help the "Critical Mass" folks in San Francisco because it is their intent to create traffic problems.  Here, Darwin will be the solution.

http://www.sfchronicle.com/bayarea/nevius/article/Critical-Mass-is-dying-of-self-inflicted-wounds-6481511.php

I haven't followed the chaos since I retired in 2003 but it used to be a rolling riot (not the fun kind).  I didn't work in SF but I sure sympathized with the poor schmucks just trying to get home from work.
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7377
  • Country: nl
  • Current job: ATEX product design
Cyclists and pedestrians are all over the place in the US, at least out in my area when the weather is pleasant.

As much as I love Volvo's older products, they're smoking something if they think they'll have practical fully autonomous cars by 2020. Maybe by 2040 if the idea really catches on and isn't a passing fad that gets killed by disappointment or spectacular failures.
I just mock you Americans a little bit  ;) - don't take it too hard.

Volvo does not claim a fully autonomous level 5 car. They claim that driver assist will be so advanced that major accidents can be totally avoided - things like pre-crash braking and such.

Come to think of it, right now I pay more for insurance than petrol. If they significantly reduced insurance on level 3-4 self-driving cars, the extra systems would pay for themselves in a few years. Plus, you know, not dying in the car is a big plus. They made seatbelts, airbags, ABS and ESP mandatory; it's not sci-fi to make other systems mandatory too.
 

Offline Johnny10

  • Frequent Contributor
  • **
  • Posts: 899
  • Country: us
I still think the problems are being oversimplified.
Sensors don't exist that can laterally see through four metallic cars in parallel lanes on, say, a five-lane thruway.
How is the car's sensor going to know where a car is that is blocked by other cars?
How is an algorithm going to avoid a car being pushed into my car by an accident two lanes over?
Braking is of no use against a side collision.

I am not saying it is not going to happen sometime in the future; I just don't see it soon.
I live and breathe technology.
I guess my curiosity is about the details.
Tektronix TDS7104, DMM4050, HP 3561A, HP 35665, Tek 2465A, HP8903B, DSA602A, Tek 7854, 7834, HP3457A, Tek 575, 576, 577 Curve Tracers, Datron 4000, Datron 4000A, DOS4EVER uTracer, HP5335A, EIP534B 20GHz Frequency Counter, TrueTime Rubidium, Sencore LC102, Tek TG506, TG501, SG503, HP 8568B
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
I still think the problems are being oversimplified.
Sensors don't exist that can laterally see through four metallic cars in parallel lanes on, say, a five-lane thruway.


Neither can a human driver if they're paying attention to their child in the rear seat.  One thing about AI and cameras, they aren't distracted.  But if the scene is blocked, whether camera or Mk I eyeballs, it won't be seen.

Quote

How is the car sensor going to know where a car is going to be that is blocked by other cars?
How is an algorithm going to avoid a car being pushed into my car by an accident two lanes over?
Braking is of no use from side collision.


Can a distracted human do any better?  It wouldn't seem so.

We can already see a reduction in parking lot accidents (I'm just guessing) due to backup alarms and cameras.  We should be seeing a reduction in lane change accidents due to lane change cameras and proximity warnings.  For certain, there should be a reduction in rear-end collisions at stop lights/signs.  My Bolt will even apply the brakes if I fail to do so.  This will also work where right lane traffic stops when a slow driver tries to turn into a mall parking lot.

We're making great strides in reducing the most common accidents.  We are not ever going to anticipate a tractor/trailer crashing over the side of a bridge onto a multi-lane freeway.  There will still be accidents.  Just the fact that a small percentage of cars have collision avoidance features will be a great help.  After all, it takes at least 2 cars to have a collision.

What is going to happen is that the slight injury accidents will decline to near zero.  The fatalities will remain unchanged.  Head-on collisions from drunk drivers driving the wrong way on a freeway will be difficult to overcome.  All the AI can do is try to change lanes and reduce speed.  But at least it won't be distracted or confused.
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
It will take a while to get rid of the non-automated cars, but we'll do it through legislation, just like we got rid of the gross polluters, and we'll use the same logic: the older cars prevent efficient use of the roadway and lead to increased pollution, so we'll just make the emissions tests increasingly difficult to pass until only electric cars with a barn full of electronics can pass them.
I admit to not being familiar with the rules in California, but I have two classic Mustangs and know a lot of classic Mustang owners in California. My understanding is that those cars are not subject to tailpipe emissions testing anymore, and even when they had been, they were only required to meet the emissions standards present at the time of vehicle manufacture (which for 1965 and 1966 were quite loose, no catalytic converters, spec'd to run on leaded fuel of the time, etc.)

I'm very familiar with the laws of my home state (Massachusetts) and my prior home state (New Hampshire). Neither state requires emissions testing (both permit but do not practically require a visual inspection); MA now requires OBD2-equipped (1996 and later, generally) cars/light trucks to be in the "ready" and "no emissions faults" state.
Or we will restrict non-automated vehicles to off-peak hours.

Seems simple to me!
Seizing (or substantially economically impairing) the property of others often does.

Telling poor people that they are no longer allowed to commute in their cars during commuting hours without purchasing a new car is unlikely to carry the vote when the payoff is letting rich people drive level 5 cars without hindrance.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
It will take a while to get rid of the non-automated cars, but we'll do it through legislation, just like we got rid of the gross polluters, and we'll use the same logic: the older cars prevent efficient use of the roadway and lead to increased pollution, so we'll just make the emissions tests increasingly difficult to pass until only electric cars with a barn full of electronics can pass them.
I admit to not being familiar with the rules in California, but I have two classic Mustangs and know a lot of classic Mustang owners in California. My understanding is that those cars are not subject to tailpipe emissions testing anymore, and even when they had been, they were only required to meet the emissions standards present at the time of vehicle manufacture (which for 1965 and 1966 were quite loose, no catalytic converters, spec'd to run on leaded fuel of the time, etc.)


All of the above is true, but I'm not sure about the dates.  As you say, different vehicle years have different requirements, usually based on the year of the engine.  A '32 Ford with a modern V8 will need to meet the 'modern' requirements.
Quote
Or we will restrict non-automated vehicles to off-peak hours.

Seems simple to me!
Seizing (or substantially economically impairing) the property of others often does.

Telling poor people that they are no longer allowed to commute in their cars during commuting hours without purchasing a new car is unlikely to carry the vote when the payoff is letting rich people drive level 5 cars without hindrance.

It certainly won't carry a vote in California, but those old vehicles will die off sooner or later.  Getting even 10 years out of a vehicle is pretty tough!  Sure, there are some very old cars around, but they are rare and unlikely to be much of a problem compared to the tens of thousands of newer cars commuting to work.

We did have a federally funded "Cash For Clunkers" program but, unless renewed, it will expire in August:
http://www.bankrate.com/auto/6-steps-to-cash-for-clunkers/

California has a separate program funded by the Bureau of Automotive Repair that has several options and the truly needy get a higher cap ($1500) versus the standard $1000.  But, essentially, the car has to run long enough to get to the collection point.

Sooner or later, the clunkers will be off the road.  It might take 10 years or so but eventually the 2017 cars will be considered 'old'.

There is an example in Germany where the .gov is telling owners of certain diesel cars that they can't drive them into the cities during certain times of the year.

https://cleantechnica.com/2017/02/23/stuttgart-germany-begin-selective-banning-diesel-cars-high-pollution-periods-2018/

Basically, if you live in Germany and buy a German-made diesel car, you can't drive it in certain towns in Germany because the EU in Brussels thinks it's a bad idea.  It probably is, but still...  What about the poor schmucks who put a lot of money into buying the German-made diesel car?

« Last Edit: June 21, 2017, 06:53:19 pm by rstofer »
 

Offline nfmax

  • Super Contributor
  • ***
  • Posts: 1560
  • Country: gb
An autonomous vehicle would have been unlikely to bring about the Great Heck rail crash https://en.m.wikipedia.org/wiki/Great_Heck_rail_crash, in which 10 people died when a driver fell asleep. No other (road) vehicles were involved. The resulting insurance claims are used as a textbook example of why reinsurance is essential.
« Last Edit: June 21, 2017, 07:05:21 pm by nfmax »
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7377
  • Country: nl
  • Current job: ATEX product design
Basically, if you live in Germany and buy a German-made diesel car, you can't drive it in certain towns in Germany because the EU in Brussels thinks it's a bad idea.  It probably is, but still...  What about the poor schmucks who put a lot of money into buying the German-made diesel car?
It has nothing to do with the EU; it is a local decision to ban cars that emit 50 times the NOx of regular, nice EURO 6 cars. I wish they would do it here also.
And you probably don't realise it, but poor schmucks can barely afford to run even a shitty car here. Mandatory yearly inspection + tax + insurance sets you back about 1500 USD per year, and fuel costs twice what it does in the US. And if you break your shitty car, even just a little bit, it usually gets sold to Eastern Europe or goes to the junkyard, because the repair will cost more than the value of the car. And you need the repairs, otherwise you don't pass the mandatory yearly inspection.
So yeah, not a lot of rusty, big, smoking, ugly diesel trucks driving around in Western Europe. And that's OK, we like it that way.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
I still think the problems are being oversimplified.
Sensors don't exist that can laterally see through four metallic cars in parallel lanes on, say, a five-lane thruway.
How is the car's sensor going to know where a car is going to be when it is blocked by other cars?

I guess my curiosity is in the knowledge of the details.
The technology currently in development will allow cars to communicate with each other, determine their position far more accurately, and also communicate with the infrastructure itself. In the end this will allow cars to use each other's sensors to get a far more accurate picture of the road than human drivers can, and to spread traffic flows more evenly across cities. The self-driving car as we currently know it is just an intermediate stage.
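To make the sensor-sharing idea concrete, here's a toy sketch (my own invention, not any real V2V stack): if each car broadcasts a position estimate together with a variance, inverse-variance weighting is a textbook way to fuse the reports into one better estimate.

```python
# Toy sketch: fuse noisy position reports from several cars into one
# estimate via inverse-variance weighting (a minimal Kalman-style fusion).
def fuse_positions(reports):
    """reports: list of (position_m, variance_m2) tuples from different cars.
    Returns (fused_position, fused_variance)."""
    if not reports:
        raise ValueError("need at least one report")
    weights = [1.0 / var for _, var in reports]
    total_w = sum(weights)
    fused = sum(w * pos for (pos, _), w in zip(reports, weights)) / total_w
    return fused, 1.0 / total_w

# Three cars see the same obstacle; the closer car (lower variance) dominates.
reports = [(100.0, 4.0), (102.0, 1.0), (98.0, 9.0)]
pos, var = fuse_positions(reports)
```

The fused variance comes out below the best single sensor's, which is the whole point of pooling observations.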
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Johnny10

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
I still think the problems are being oversimplified.
Sensors don't exist that can laterally see through four metallic cars in parallel lanes on, say, a five-lane thruway.
How is the car's sensor going to know where a car is going to be when it is blocked by other cars?

I guess my curiosity is in the knowledge of the details.
The technology currently in development will allow cars to communicate with each other, determine their position far more accurately, and also communicate with the infrastructure itself. In the end this will allow cars to use each other's sensors to get a far more accurate picture of the road than human drivers can, and to spread traffic flows more evenly across cities. The self-driving car as we currently know it is just an intermediate stage.


It will be hacked almost immediately. There are going to be countless incidents of people messing with the system for fun & profit, just look at how many data breaches we're having involving large companies that are supposed to know better. Now watch what happens when it's a bunch of cars rushed out by arrogant and idealistic tech dweebs. I guarantee there will be loads of people with devices to make other cars slow, shut down or otherwise misbehave. Any system *will* be abused.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us

Cyclists and pedestrians are all over the place in the US, at least out in my area when the weather is pleasant.


Darwin had a theory that basically says that cyclists and pedestrians will evolve to avoid cars rather than the other way around.

Many areas are creating dedicated bike lanes.  It's a good bet the AI will be able to distinguish these at some point.  That won't help the "Critical Mass" folks in San Francisco because it is their intent to create traffic problems.  Here, Darwin will be the solution.

http://www.sfchronicle.com/bayarea/nevius/article/Critical-Mass-is-dying-of-self-inflicted-wounds-6481511.php

I haven't followed the chaos since I retired in 2003 but it used to be a rolling riot (not the fun kind).  I didn't work in SF but I sure sympathized with the poor schmucks just trying to get home from work.

We have a lot of bike lanes out here, but the damn cyclists almost never use them. Pretty much every day I see bicycles riding right on the white line between the car lane and the bike lane, too close for me to pass them safely. Or they're riding in the car lane next to a perfectly good and unoccupied bike lane. They go on about sharing the road, but then they blast through stop signs, dart in and out of traffic as they please, ride on the sidewalk, and ride on the line. Frankly, I hate cyclists, and I say that as a cyclist myself. I'm not one of the spandex-wearing fanatics, but I do bike around frequently in the summer.

Ah yes, Critical Mass, we call those idiots "Critical Massholes", they're just idiotic hooligans who ride around with a chip on their shoulder looking for an excuse to get in a fight. Bunch of morons, sometimes I wish it was legal to run them over.
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
We have a lot of bike lanes out here, but the damn cyclists almost never use them.

The problem with bike lanes is that they tend to accumulate all the crap, rubber dust, small stones etc. etc. When you've got four tyres all at least 4 inches wide you don't notice this kind of cruft; if you have two tyres that have a contact patch 1/2" wide you do notice it and it's uncomfortable at least, highly dangerous at worst. Cycle lanes are fundamentally a poor idea unless they are routinely swept. The main carriageway gets cleared of all this junk, thrown aside by the tyres of passing vehicles, so it makes a much more inviting surface.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
I've heard that argument; I don't know how true it is, but if cyclists are not going to use the bike lanes, then we should stop wasting money putting them in and use that space for cars, or to provide a shoulder in case someone needs to pull over, etc. The bike fanatics are always demanding more bike lanes, and then when countless millions are spent installing them, nobody uses them. It's a complete waste.
 

Offline JoeNTopic starter

  • Frequent Contributor
  • **
  • Posts: 991
  • Country: us
  • We Buy Trannies By The Truckload
We have a lot of bike lanes out here, but the damn cyclists almost never use them.

The problem with bike lanes is that they tend to accumulate all the crap, rubber dust, small stones etc. etc. When you've got four tyres all at least 4 inches wide you don't notice this kind of cruft; if you have two tyres that have a contact patch 1/2" wide you do notice it and it's uncomfortable at least, highly dangerous at worst. Cycle lanes are fundamentally a poor idea unless they are routinely swept. The main carriageway gets cleared of all this junk, thrown aside by the tyres of passing vehicles, so it makes a much more inviting surface.

I have a theory that roadways with cars tend to be self-cleaning if the cars go over a certain speed, probably as little as 30 miles an hour.  When cars hit the crap on the road, the crap tends to move.  It will either stay on the road or move off it.  If it moves off, it no longer has cars hitting it, so it tends to stay off the roadway.  If it stays on the roadway, it will probably lose the next time, or soon after, and end up off the roadway anyway.  That's why a roadway that is rarely used tends to be dirty and busy roadways tend to be clean.  Also, those cars are almost certainly moving crap onto bike lanes, which probably don't self-clean at a hundredth the rate of the roadway.  I had not thought of that before; too bad that is a problem.
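Just for fun, that self-cleaning mechanism can be mocked up as a one-dimensional random walk with an absorbing boundary: every car strike kicks a piece of debris sideways, and once it's off the travelled lane it stops being hit. All the parameters below (kick size, strike counts, lane width) are invented round numbers, not measurements.

```python
import random

def debris_left_on_road(n_debris=1000, lane_half_width=1.8,
                        kick=0.5, strikes=50, seed=1):
    """Fraction of debris still inside the travelled lane after up to
    `strikes` passes. Each strike moves a piece +/- `kick` metres; once a
    piece is outside the lane it is no longer hit (absorbing boundary)."""
    rng = random.Random(seed)
    left = 0
    for _ in range(n_debris):
        x = rng.uniform(-lane_half_width, lane_half_width)
        for _ in range(strikes):
            if abs(x) > lane_half_width:
                break  # off the lane: no more strikes
            x += rng.choice((-kick, kick))
        if abs(x) <= lane_half_width:
            left += 1
    return left / n_debris

busy = debris_left_on_road(strikes=50)   # busy road: many strikes
quiet = debris_left_on_road(strikes=2)   # quiet road: few strikes
```

With these made-up numbers the busy road ends up with noticeably less debris left on it than the quiet one, which at least matches the intuition.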
Have You Been Triggered Today?
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
I've heard that argument, I don't know how true it is, but if cyclists are not going to use the bike lanes then we should stop wasting money putting them in and use that space for cars, or to provide a shoulder in case one needs to pull over, etc. The bike fanatics are always demanding more bike lanes and then when countless millions are spent installing them nobody uses them. It's a complete waste.
You don't say which jurisdiction you are in, but universally around the world even where bicycle lanes are required to be used they still have exemptions:
http://animalnewyork.com/2013/fuck-what-you-heard-nyc-cyclists-are-not-bound-by-bike-lanes/
If you'd like to provide examples of cyclists riding the way you say, perhaps that would be illustrative, rather than just asserting that there were no reasons for the cyclists to be away from the kerb (try cycling toward a light post and see how close you would pass by it, amongst traffic).

It's a complete waste.
Yes, installing bicycle lanes that aren't suitable for purpose is a complete waste and only increases the tension between road users. But it keeps going on around the world from planners who have never ridden a bicycle on their own designs.

Some reading for you:
http://www.ohiobikelawyer.com/bike-law-101/2016/05/three-foot-bill-passes-ohio-house/
http://www.executivestyle.com.au/why-bike-riders-sometimes-eschew-their-lanes-gjxy30
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
Avoiding and braking is all I hear about.  What if flooring the gas, going above the speed limit you may already be traveling at, is the only way to avoid an accident, potentially a fatal one?  This happened to me once at an intersection.  Someone running a red light really late hit the rear of my car as I floored it: I was midway through the intersection when I noticed him coming toward me at full speed.  If I had kept my speed or braked, his car would have hit my door dead on, killing me, instead of hitting my rear bumper and spinning my car around.

What would an auto-driver have done in this situation?
Brake: I'm dead or in the hospital.
Keep the same speed: most likely the same.
Would the AI have chosen to floor the gas here?  Can the processor and the mechanics of the gas pedal (not the brake pedal) instantly floor it if it needs to?  Will it even be programmed to accelerate as fast as possible to save the driver, or will braking be the default?
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Avoiding and braking is all I hear about.  What if flooring the gas, going above the speed limit you may already be traveling at, is the only way to avoid an accident, potentially a fatal one?  This happened to me once at an intersection.  Someone running a red light really late hit the rear of my car as I floored it: I was midway through the intersection when I noticed him coming toward me at full speed.  If I had kept my speed or braked, his car would have hit my door dead on, killing me, instead of hitting my rear bumper and spinning my car around.

What would an auto-driver have done in this situation?
Brake: I'm dead or in the hospital.
Keep the same speed: most likely the same.
Would the AI have chosen to floor the gas here?  Can the processor and the mechanics of the gas pedal (not the brake pedal) instantly floor it if it needs to?  Will it even be programmed to accelerate as fast as possible to save the driver, or will braking be the default?
Does your car accelerate faster than it brakes? Almost certainly not, so you could have moved a bigger delta position at impact by braking than by accelerating, or added some steering input too and turned a T-bone type collision into a side-to-side one, vectoring the energy away from the vehicles. There are occasions where a collision to the rear panel could be a better outcome, but the usual anecdotes about speeding being necessary for safety are clearly outliers; if the other driver had had some level of autonomous driving assistance, they wouldn't have run the red light and there would never have been any issue.
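To put rough numbers on that: with constant-acceleration kinematics, the position change relative to doing nothing is ½at². Assuming a full braking deceleration of 9 m/s² and a brisk acceleration of 4 m/s² (invented round figures, not any particular car's), braking shifts your position at impact time by more than the throttle does:

```python
def position_delta(accel_ms2, t_s):
    """Extra distance travelled (negative accel gives distance given up)
    relative to holding constant speed, after t seconds: 1/2 * a * t^2."""
    return 0.5 * accel_ms2 * t_s ** 2

T_IMPACT = 1.0   # seconds until the other car crosses your path (assumed)
BRAKE = -9.0     # m/s^2, roughly full ABS braking (assumed round figure)
THROTTLE = 4.0   # m/s^2, brisk passenger-car acceleration (assumed)

moved_back = -position_delta(BRAKE, T_IMPACT)     # metres short of "no action"
moved_ahead = position_delta(THROTTLE, T_IMPACT)  # metres past "no action"
```

With these figures, over one second braking gives up 4.5 m of travel while flooring it only gains 2.0 m, so purely on car capability, braking changes where you are at impact time by more.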
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
Avoiding and braking is all I hear about.  What if flooring the gas, going above the speed limit you may already be traveling at, is the only way to avoid an accident, potentially a fatal one?  This happened to me once at an intersection.  Someone running a red light really late hit the rear of my car as I floored it: I was midway through the intersection when I noticed him coming toward me at full speed.  If I had kept my speed or braked, his car would have hit my door dead on, killing me, instead of hitting my rear bumper and spinning my car around.

What would an auto-driver have done in this situation?
Brake: I'm dead or in the hospital.
Keep the same speed: most likely the same.
Would the AI have chosen to floor the gas here?  Can the processor and the mechanics of the gas pedal (not the brake pedal) instantly floor it if it needs to?  Will it even be programmed to accelerate as fast as possible to save the driver, or will braking be the default?
Does your car accelerate faster than it brakes? Almost certainly not, so you could have moved a bigger delta position at impact by braking than by accelerating, or added some steering input too and turned a T-bone type collision into a side-to-side one, vectoring the energy away from the vehicles. There are occasions where a collision to the rear panel could be a better outcome, but the usual anecdotes about speeding being necessary for safety are clearly outliers; if the other driver had had some level of autonomous driving assistance, they wouldn't have run the red light and there would never have been any issue.

I was already in motion.  Here is a photo of the area where it happened...

Agreed: if the other driver hadn't just had driver assistance but had been under enforced fully automatic driving, then he would never have run the light in the first place and there would have been no accident.  But since he chose to speed through the intersection, I got screwed.  Note that he was traveling at around 70 km/h where the speed limit was only 50 km/h.

« Last Edit: June 22, 2017, 05:53:58 am by BrianHG »
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Does your car accelerate faster than it brakes? Almost certainly not, ...

Yes, but what about the car/driver system? You have to add the time taken to remove the foot from the accelerator, move it across to the brake and depress it. Compare with the time required to merely further depress the accelerator.

When you consider the car/driver system rather than just the capabilities of the car, I suspect you might find that accelerating, when already travelling forward with the accelerator partially depressed, creates a higher |delta V|.

Now consider the question posed about what an autonomous system would do. In this case the hypothetical self-driving car is most probably electric and thanks to the torque characteristics of electric motors can have innate acceleration capacities that far exceed internal combustion engine vehicles. Although in this case there is no foot to be moved, the mechanical actuation time of the braking system will still come into play. Even here it's not impossible that acceleration might yield higher |deltaV|.

I'm not going to try and quantify either system as I don't do differential equations on less than 3 cups of coffee, and I'm only on my second.
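No differential equations needed, actually: charge the braking option a pedal-move latency (say 0.4 s, an invented figure) and the throttle option none, since the foot is already on the accelerator. With made-up round accelerations (9 m/s² braking, 4 m/s² throttle), which option shifts your position more now depends on the time to impact:

```python
def distance_changed(rate_ms2, t_impact_s, latency_s):
    """|position change| vs. holding speed, with the manoeuvre only
    starting after latency_s seconds: 1/2 * a * (t - latency)^2."""
    t = max(0.0, t_impact_s - latency_s)
    return 0.5 * rate_ms2 * t * t

BRAKE, THROTTLE = 9.0, 4.0   # m/s^2, invented round figures
PEDAL_MOVE = 0.4             # s to move the foot to the brake (assumed)

results = {}
for t_impact in (0.5, 1.0, 2.0):
    results[t_impact] = (distance_changed(BRAKE, t_impact, PEDAL_MOVE),
                         distance_changed(THROTTLE, t_impact, 0.0))
```

With these figures the throttle wins below roughly 1.2 s to impact and braking wins above it, so both camps in this thread can be right depending on the timing.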
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7377
  • Country: nl
  • Current job: ATEX product design
I would say it would:
A) Notice the car faster, and avoid it completely.
B) Based on numerous simulations and real-world examples, have known better than you what to do when you are T-boned.
This is what people don't realize. You are in a car accident hopefully never, but say only a few times in your life (I have had two, one my mistake). A self-driving car system is trained on thousands of hours of real and simulated traffic every day. Maybe the software you have has already taken into account a dozen T-boning accidents that happened with the exact same model, same sensors and everything. And after each accident, the engineers can spend as much time as they please coming up with a better response to the situation. How much time do you have? A split second.
 
The following users thanked this post: Someone

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
In this case the hypothetical self-driving car is most probably electric and thanks to the torque characteristics of electric motors can have innate acceleration capacities that far exceed internal combustion engine vehicles. Although in this case there is no foot to be moved, the mechanical actuation time of the braking system will still come into play. Even here it's not impossible that acceleration might yield higher |deltaV|.
Electric motors have maximum torque from rest (0 RPM) which makes them much quicker off the line than an internal combustion engine. That advantage drops off quite rapidly at highway cruise RPM as the IC engine picks up torque vs idle and the electric motor torque drops off with RPM.
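That shape is easy to sketch: constant torque up to a base speed, then roughly constant power above it, so torque falls off as 1/RPM. The 400 Nm and 6000 RPM figures below are invented for illustration, not any particular motor's:

```python
def motor_torque_nm(rpm, t_max_nm=400.0, base_rpm=6000.0):
    """Idealised EV traction-motor curve: flat torque up to base_rpm,
    constant power (torque proportional to 1/rpm) above it."""
    if rpm <= base_rpm:
        return t_max_nm
    return t_max_nm * base_rpm / rpm  # constant-power region

low = motor_torque_nm(1000)    # full torque off the line
high = motor_torque_nm(12000)  # half torque at twice base speed
```

An IC engine's curve is roughly the opposite at the low end: little torque near idle, with the peak somewhere in the mid-range, which is why the comparison flips at cruise RPM.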
 

Offline idpromnut

  • Supporter
  • ****
  • Posts: 613
  • Country: ca
I was already in motion.  Here is a photo of the area where it happened...

Good day fellow Montrealer :)
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Avoiding and braking is all I hear about.  What if flooring the gas, going above the speed limit you may already be traveling at, is the only way to avoid an accident, potentially a fatal one?  This happened to me once at an intersection.  Someone running a red light really late hit the rear of my car as I floored it: I was midway through the intersection when I noticed him coming toward me at full speed.  If I had kept my speed or braked, his car would have hit my door dead on, killing me, instead of hitting my rear bumper and spinning my car around.

What would an auto-driver have done in this situation?
Brake: I'm dead or in the hospital.
Keep the same speed: most likely the same.
Would the AI have chosen to floor the gas here?  Can the processor and the mechanics of the gas pedal (not the brake pedal) instantly floor it if it needs to?  Will it even be programmed to accelerate as fast as possible to save the driver, or will braking be the default?

Are you suggesting that the AI wouldn't do the same thing under the strategy of avoidance?  It will probably also weigh collateral damage and choose a strategy that results in the least damage to its own car while considering damage to other cars and people.

Anything a human can do, a decent AI can do faster.  It's going to take time to develop all the strategies but it will happen.

As stated elsewhere, the acceleration of battery vehicles can be breathtaking.  My old Spark EV had 400 ft-lbs of torque.  That's right up there with muscle cars.

At the moment, driverless cars are just gathering data for the AI to consider.  They're mere infants.  Give them a few years to mature and things should get really interesting.

Oh, and a decent AI in the other car wouldn't have run the red light in the first place.  Only ego causes these kinds of accidents, and AIs don't have egos.
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
In this case the hypothetical self-driving car is most probably electric and thanks to the torque characteristics of electric motors can have innate acceleration capacities that far exceed internal combustion engine vehicles. Although in this case there is no foot to be moved, the mechanical actuation time of the braking system will still come into play. Even here it's not impossible that acceleration might yield higher |deltaV|.
Electric motors have maximum torque from rest (0 RPM) which makes them much quicker off the line than an internal combustion engine. That advantage drops off quite rapidly at highway cruise RPM as the IC engine picks up torque vs idle and the electric motor torque drops off with RPM.

Depends on the exact type of electric motor and there are lots of types, some are even constant torque.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
Looking at torque curves for current electric cars, I found none that were constant torque over their operating range. Each had max torque at 0 RPM, steady through a low RPM limit (town/side road driving) and then a declining torque curve with RPM.

There are no doubt electric motors that are torque limited/constant torque over their full operating range. I couldn't find any that made their way into electric cars.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
You don't say which jurisdiction you are in, but universally around the world even where bicycle lanes are required to be used they still have exemptions:


I didn't say they are or necessarily should be bound to using bike lanes, but if they're not going to use bike lanes then we should stop wasting money putting them in. If you're going to lobby to spend my tax dollars on an infrastructure project, I expect to see you using the project you begged to have done.
 

Online helius

  • Super Contributor
  • ***
  • Posts: 3642
  • Country: us
Looking at torque curves for current electric cars, I found none that were constant torque over their operating range. Each had max torque at 0 RPM, steady through a low RPM limit (town/side road driving) and then a declining torque curve with RPM.

There are no doubt electric motors that are torque limited/constant torque over their full operating range. I couldn't find any that made their way into electric cars.
That's a feature of transmissionless BLDC motors, which are the only kind that really makes sense in electric cars. Really, the kind of torque curve you get from an ICE is pretty peculiar, with the available power increasing with crankshaft speed.
 
The following users thanked this post: Someone

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
You don't say which jurisdiction you are in, but universally around the world even where bicycle lanes are required to be used they still have exemptions:
....
It's a complete waste.
Yes, installing bicycle lanes that aren't suitable for purpose is a complete waste and only increases the tension between road users. But it keeps going on around the world from planners who have never ridden a bicycle on their own designs.

I didn't say they are or necessarily should be bound to using bike lanes, but if they're not going to use bike lanes then we should stop wasting money putting them in. If you're going to lobby to spend my tax dollars on an infrastructure project, I expect to see you using the project you begged to have done.
You'll probably find the cyclists not using the bike lane didn't want it (in that form, or at all) either. But again, this doesn't match your narrative.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Does your car accelerate faster than it brakes? Almost certainly not, so you could have moved a bigger delta position at impact by braking than by accelerating

Yes, but what about the car/driver system? You have to add the time taken to remove the foot from the accelerator, move it across to the brake and depress it. Compare with the time required to merely further depress the accelerator.

When you consider the car/driver system rather than just the capabilities of the car, I suspect you might find that accelerating, when already travelling forward with the accelerator partially depressed, creates a higher |delta V|.
Seems it's selective quoting day. Cars are optimised for stopping; some hypothetical future high-performance electric car might reverse that, but it would be incredibly dangerous to unleash on the public. Even the Tesla S brakes faster than it accelerates:
 

Online helius

  • Super Contributor
  • ***
  • Posts: 3642
  • Country: us
Cars are optimised for stopping; some hypothetical future high-performance electric car might reverse that, but it would be incredibly dangerous to unleash on the public. Even the Tesla S brakes faster than it accelerates:

As would always be expected for a vehicle with both regenerative braking and disc brakes. If friction brakes are required for safety reasons, braking power will always be greater than accelerating power, which must rely on the electric motor force alone.

However, that doesn't mean that braking is better than accelerating for avoiding every accident. In addition to simple rigid-body mechanics, there is also the issue of control. Standing on the brakes significantly reduces the degree of control you have over the vehicle's direction for dynamic stability reasons.
When the gates close around you at a RR grade crossing, do you really think you are safer slamming on the brakes than on the accelerator? :P
« Last Edit: June 23, 2017, 03:46:38 am by helius »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Cars are optimised for stopping; some hypothetical future high-performance electric car might reverse that, but it would be incredibly dangerous to unleash on the public. Even the Tesla S brakes faster than it accelerates:

As would always be expected for a vehicle with both regenerative braking and disc brakes. If friction brakes are required for safety reasons, braking power will always be greater than accelerating power, which must rely on the electric motor force alone.

However, that doesn't mean that braking is better than accelerating for avoiding every accident. In addition to simple rigid-body mechanics, there is also the issue of control. Standing on the brakes significantly reduces the degree of control you have over the vehicle's direction for dynamic stability reasons.
When the gates close around you at a RR grade crossing, do you really think you are safer slamming on the brakes than on the accelerator? :P
Have you driven cars beyond their limits? A typical car has much more control under loss of traction when braking compared to loss of traction when under power; it's just how they're designed. I'm lucky enough to have driven a variety of cars in a variety of situations.

There will be occasions where accelerating is the better choice, but its value for safety is overblown. For the railroad crossing example, you can easily leave the area before the train arrives without speeding or accelerating, unless you are foolish enough to enter when you can't exit. As above, an automated car will be able to make better decisions about the best choice in each particular situation than a human could. Speed limiters, automated braking, autonomy, etc. will certainly decrease the overall road toll and accident rate; yes, they might introduce a very small number of previously avoidable accidents, but I'm yet to see any analysis showing they'd be a worse choice than not including them.
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Does your car accelerate faster than it brakes? Almost certainly not, so you could have moved a bigger delta position at impact by braking than by accelerating

Yes, but what about the car/driver system? You have to add the time taken to remove the foot from the accelerator, move it across to the brake and depress it. Compare with the time required to merely further depress the accelerator.

When you consider the car/driver system rather than just the capabilities of the car, I suspect you might find that accelerating, when already travelling forward with the accelerator partially depressed, creates a higher |delta V|.
Seems it's selective quoting day. Cars are optimised for stopping; some hypothetical future high-performance electric car might reverse that, but it would be incredibly dangerous to unleash on the public. Even the Tesla S brakes faster than it accelerates:

You miss the point. Yes, with the exception of ridiculous examples like drag cars, all cars brake better than they accelerate. It's not the car that needs to be considered in isolation, it's the complete car/driver system. It doesn't matter how good the car is at stopping until the driver has commanded it to stop.

The case in point is one where the car was in steady motion, with the driver's foot already depressing the accelerator. That foot takes time to move, and we were talking about avoiding an accident in a few-hundred-millisecond time frame. It will take longer to move that foot off the accelerator and onto the brake than it will to just depress the accelerator further; my guess is at least 200 ms. s = ut + ½at² does the rest.
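A rough back-of-the-envelope sketch of this point, using s = ut + ½at² relative to a coasting car. All numbers here (0.9 g braking, 0.3 g net acceleration, a 200 ms pedal-transfer delay, a 0.5 s horizon) are illustrative assumptions, not measurements:

```python
# Back-of-envelope comparison: displacement change at a fixed time horizon
# when braking (with a pedal-transfer delay) vs. accelerating immediately.
# All constants are illustrative assumptions, not measured data.

G = 9.81                # m/s^2
BRAKE_DECEL = 0.9 * G   # assumed peak braking deceleration
ACCEL = 0.3 * G         # assumed net acceleration at speed
PEDAL_DELAY = 0.2       # assumed time to move the foot to the brake (s)

def position_delta(t, a, delay=0.0):
    """Displacement change relative to coasting at constant speed after
    time t, with constant acceleration a applied only after `delay`
    seconds: 0.5 * a * (t - delay)^2."""
    dt = max(t - delay, 0.0)
    return 0.5 * a * dt * dt

horizon = 0.5  # s: the "few hundred millisecond" window under discussion
gained_by_accel = position_delta(horizon, ACCEL)                  # immediate
lost_by_braking = position_delta(horizon, BRAKE_DECEL, PEDAL_DELAY)

print(f"accelerating moves you {gained_by_accel:.2f} m further ahead")
print(f"braking (after pedal transfer) puts you {lost_by_braking:.2f} m behind")
```

With these assumed numbers the two options come out within a few centimetres of each other over half a second, which is the point: the pedal-transfer delay eats most of the car's braking advantage in very short windows.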

I don't know which will be truly more effective in the scenario BrianHG described, accelerating or braking; none of us do without some controlled experiments. But appealing to the capabilities of the car alone and not considering the complete car/driver system is definitely not going to get to the correct answer.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Have you driven cars beyond their limits? A typical car has much more control under loss of traction when braking compared to loss of traction when under power; it's just how they're designed. I'm lucky enough to have driven a variety of cars in a variety of situations.

Eh? Loss of traction implies loss of control, period. That's essentially Newton's first law.

Ask any motorcyclist. Car drivers will believe all sorts of strange things because a car can slide all over the place and generally stay upright. Motorcyclists, on the other hand, firmly believe in Newtonian Mechanics, some times firmly enough that 'ouch' doesn't do it credit.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
Have you driven cars beyond their limits? A typical car has much more control under loss of traction when braking compared to loss of traction when under power; it's just how they're designed. I'm lucky enough to have driven a variety of cars in a variety of situations.

Eh? Loss of traction implies loss of control, period. That's essentially Newton's first law.
Exceeding traction limits under braking is generally a straight-line affair, leaving the car oriented in the travel direction, more easily recovered, and in the event of an impact, oriented as to best absorb the energy and protect the occupants.

This is not a simple "All loss of traction events are equal because of Newton" case.
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Eh? Loss of traction implies loss of control, period. That's essentially Newton's first law.

Not at all, though I have heard this stated as though it were a "fact" by persons with a vested interest in being able to (legally) define someone as being "in control" vs "not in control" of a vehicle. If you're in any doubt about the difference, watch Russ Swift do one of his parallel parking demonstrations, and consider at what specific times you think he's not in control of his car.

Quote
Ask any motorcyclist.

Motorcyclist here. It's not a valid comparison.

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Have you driven cars beyond their limits? A typical car has much more control under loss of traction when braking compared to loss of traction when under power; it's just how they're designed. I'm lucky enough to have driven a variety of cars in a variety of situations.

Eh? Loss of traction implies loss of control, period. That's essentially Newton's first law.
Exceeding traction limits under braking is generally a straight-line affair, leaving the car oriented in the travel direction, more easily recovered, and in the event of an impact, oriented as to best absorb the energy and protect the occupants.

And in what way are you 'in control' if the car is sliding in a straight line forward with no traction? Can you decide to take a control action, such as turning or stopping, and expect it to be effective?

This is not a simple "All loss of traction events are equal because of Newton" case.

What you have in direct quotes there is not something I said. Please do not put words into my mouth.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Eh? Loss of traction implies loss of control, period. That's essentially Newton's first law.

Not at all, though I have heard this stated as though it were a "fact" by persons with a vested interest in being able to (legally) define someone as being "in control" vs "not in control" of a vehicle. If you're in any doubt about the difference, watch Russ Swift do one of his parallel parking demonstrations, and consider at what specific times you think he's not in control of his car.

The first point you're making, I suspect, alludes to wheelies and standies; both involve rolling rubber on the road, not loss of traction. The rider can apply controlled forces to the road (turning, braking, accelerating), so by my definition they qualify as 'in control'.

As to Russ Swift: to be pedantic, he was in control, and is now in the grip of physics.

Let's not get sidetracked too far here or we're going to end up in the 'general motoring opinions' thread.

Quote
Ask any motorcyclist.

Motorcyclist here. It's not a valid comparison.

Ex London dispatch rider here and one time (briefly) user of an ACU licence, I'd disagree.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
What you have in direct quotes there is not something I said. Please do not put words into my mouth.
I never said that you said that, nor have I altered your quotes. Please do not accuse me of things that I've not done.
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
The first point you're making I suspect alludes to wheelies and standies

I was thinking of the case where, in a car, the force between the road surface and the tyre is less than the friction limit given by μR (the coefficient of friction times the normal reaction).

Consider, for example, how a rally car is driven along a track with a loose surface; it may spend the majority of its time in a condition where it has "lost traction", i.e. there is relative motion between the road surface and the lowest point of the wheel. To assert that the driver is not in control, though, is plainly untrue.
 
The following users thanked this post: Someone

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
What you have in direct quotes there is not something I said. Please do not put words into my mouth.
I never said that you said that, nor have I altered your quotes. Please do not accuse me of things that I've not done.

You used double quotation marks, in standard English usage that implies a direct quote. If that is not what you intended, please use single quotation marks.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
The first point you're making I suspect alludes to wheelies and standies

I was thinking of the case where, in a car, the force between the road surface and the tyre is less than the friction limit given by μR (the coefficient of friction times the normal reaction).

Consider, for example, how a rally car is driven along a track with a loose surface; it may spend the majority of its time in a condition where it has "lost traction", i.e. there is relative motion between the road surface and the lowest point of the wheel. To assert that the driver is not in control, though, is plainly untrue.

Has it 'lost traction', or is it experiencing 'reduced traction'? If it has 'lost traction', inertia applies and control inputs will have no effect. If it has 'reduced traction', then control inputs will have a reduced effect. In the former, no amount of compensation will allow for effective control; in the latter, control is possible by compensating for the reduced traction.

Perhaps others aren't taking such an absolutist view of the phrase 'lost traction' as I am ("lost" = ?past perfect? = 'it has been lost'). If you think 'lost traction' includes 'reduced traction' or 'losing traction' ("losing" = ?present continuous? = 'it is being lost') then we're probably arguing at cross purposes.

[Errors of grammatical type naming are all my own, never could remember which was which; so much for a grammar school education, but at least it taught me where to use a semicolon.]

I have to make it clear that I'm most definitely talking about the physics of 'in control'. We're talking about autonomous vehicles and thereby control systems. So we're talking about when systems are 'controlled' (by a control loop, AI or driver) and 'uncontrolled' (the system is in the grip of inertia and external forces).

Your mention of being 'legally' in control reminded me of some prosecutions over wheelies and standies back in the day (mid eighties or nineties, I think). You seem to be implying there have been cases around being 'in control' during deliberate drifting? Frankly, anybody who indulges in deliberate four-wheel drifts on the public road outside of a formal rally deserves to get prosecuted, no matter how in control they are or are not in physical fact.

Can we get back to the actual case in point now, accident avoidance by autonomous vehicles (with a side helping of accident avoidance by humans)? I'm conscious of having side-tracked this thread too far. I'm not going to indulge in any more commenting on this side issue.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Does your car accelerate faster than it brakes? Almost certainly not, so you could have moved a bigger delta position at impact by braking than by accelerating

Yes, but what about the car/driver system? You have to add the time taken to remove the foot from the accelerator, move it across to the brake and depress it. Compare with the time required to merely further depress the accelerator.

When you consider the car/driver system rather than just the capabilities of the car, I suspect you might find that accelerating, when already travelling forward with the accelerator partially depressed, creates a higher |delta V|.
Seems it's selective quoting day. Cars are optimised for stopping; some hypothetical future high-performance electric car might reverse that, but it would be incredibly dangerous to unleash on the public. Even the Tesla S brakes faster than it accelerates:

You miss the point. Yes, with the exception of ridiculous examples like drag cars, all cars brake better than they accelerate. It's not the car that needs to be considered in isolation, it's the complete car/driver system. It doesn't matter how good the car is at stopping until the driver has commanded it to stop.

The case in point is one where the car was in steady motion, with the driver's foot already depressing the accelerator. That foot takes time to move, and we were talking about avoiding an accident in a few-hundred-millisecond time frame. It will take longer to move that foot off the accelerator and onto the brake than it will to just depress the accelerator further; my guess is at least 200 ms. s = ut + ½at² does the rest.

I don't know which will be truly more effective in the scenario BrianHG described, accelerating or braking; none of us do without some controlled experiments. But appealing to the capabilities of the car alone and not considering the complete car/driver system is definitely not going to get to the correct answer.
You're compressing time down to unrealistically small scales; BrianHG showed an intersection and estimated speeds of 70 km/h and 30 km/h.
Avoiding and braking is all I heard about. What if flooring the gas, going above the speed limit you may already be travelling at, is the only way to avoid an accident, potentially a fatal one? This happened to me once at an intersection. Someone running a red light really late hit the rear of my car as I floored it: I was midway through the intersection when I noticed him coming toward me at full speed. If I had kept my speed or braked, his car would have hit my door dead on, killing me, instead of hitting my rear bumper and spinning my car around.

What would an auto-driver have done in this situation?
Brake: I'm dead or in the hospital.
Keep the same speed: most likely the same.
Would the AI have chosen to floor the gas here? Can the processor and the mechanics of the gas pedal (not the brake pedal) instantly floor it if it needs to? Will it even be programmed to accelerate as fast as possible to save the driver, or will braking be the default?
Does your car accelerate faster than it brakes? Almost certainly not, so you could have moved a bigger delta position at impact by braking than by accelerating; or you could have added some steering input and changed a T-bone collision into a side-to-side one, vectoring the energy away from the vehicles. There are occasions where a collision to the rear panel could be a better outcome, but the usual anecdotes about speeding being necessary for safety are clearly outliers. If the other driver had had some level of autonomous driving assistance, they wouldn't have run the red light and there would never have been any issue.

I was already in motion.  Here is a photo of the area where it happened...

Agreed that if the other driver hadn't just had driver assistance, but had been forced into fully automatic driving, then he would never have run the light in the first place and there would have been no accident. But since he chose to speed through the intersection, I got screwed. Note that he was also traveling around 70 km/h where the speed limit was only 50 km/h.
If you look at the specific intersection in question, there is 40 m of visibility from the car entering from the left to the point of possible collision; at a steady 70 km/h that's about 2 seconds to recognise the threat and take action, so situational awareness and braking is far and away the safest option. Safe cars need to be predictable; slow acceleration and fast stopping is the direction to head in.
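The 2-second figure above is easy to sanity-check from the stated sight distance and approach speed (40 m and 70 km/h, both taken from the post):

```python
# Time available to react, given sight distance and a steady approach speed.
# The 40 m / 70 km/h values come from the intersection discussed above.

def time_to_conflict_s(distance_m, speed_kmh):
    """Seconds until the approaching car covers the sight distance."""
    return distance_m / (speed_kmh / 3.6)  # km/h -> m/s

print(round(time_to_conflict_s(40, 70), 2))  # ~2.06 s
```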
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4530
  • Country: au
    • send complaints here
Your mention of being 'legally' in control reminded me of some prosecutions over wheelies and standies back in the day (mid eighties or nineties, I think). You seem to be implying there have been cases around being 'in control' during deliberate drifting? Frankly, anybody who indulges in deliberate four-wheel drifts on the public road outside of a formal rally deserves to get prosecuted, no matter how in control they are or are not in physical fact.
I've drifted cars with both front and rear wheel drive for fun and pleasure; then, when you have an idiot pull out in front of you on the highway and you get sideways while dumping on the brakes, it all comes naturally and the car remains under control (ABS is a wondrous thing) despite being very much beyond the limits of adhesion. The fact that the tires can only put 0.8 g or so against the road simply limits you to that 0.8 g of control, even if you'd like more and are sliding along with the full 0.8 g in use.
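The 0.8 g figure above is the standard "friction circle" picture: the tyre delivers at most μg of total horizontal acceleration, shared between braking and cornering. A minimal sketch, assuming μ = 0.8 as in the post (the function name and the sharp cutoff at the limit are illustrative simplifications):

```python
import math

# Friction-circle sketch: total horizontal grip is capped at MU (in g),
# split between braking and cornering. MU = 0.8 is the assumed figure
# from the post above; real tyres vary.

MU = 0.8

def lateral_budget(brake_g):
    """Cornering acceleration (in g) still available while braking at
    brake_g, from sqrt(MU^2 - brake_g^2)."""
    if abs(brake_g) > MU:
        return 0.0  # past the limit: sliding, no cornering authority left
    return math.sqrt(MU * MU - brake_g * brake_g)

print(round(lateral_budget(0.0), 2))  # full cornering grip available
print(round(lateral_budget(0.8), 2))  # all grip spent on braking
```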

P.S. It is currently an offence in my state to lose traction on any wheel of a vehicle:
http://www.austlii.edu.au/au/legis/vic/consol_act/rsa1986125/s65a.html
But that doesn't stop people spinning their wheels pulling away at traffic lights, because it's never enforced.
 

Offline tronde

  • Frequent Contributor
  • **
  • Posts: 307
  • Country: no
I look forward to when the Silicon Valley guys enter the real world of traffic.





« Last Edit: June 24, 2017, 01:54:45 am by tronde »
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
Fully autonomous cars are a pipe dream until we have human level AI. Blocking all traffic on an edge case is not acceptable, yet inevitable without a human to take over.
« Last Edit: June 24, 2017, 02:45:25 am by Marco »
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
I've spent a large part of my career working primarily in software companies, and one thing that never changes is that no matter how well you test in house, customers inevitably find all kinds of bugs as soon as the product goes out into the field. People do unexpected things; a software developer simply cannot predict everything that real users will do. I'm quite confident that the same thing will happen with these cars: they will get hastily released into the field, and within a short time all hell will break loose as all kinds of crazy things happen once ordinary people try using them instead of engineers with thousands of hours on them and an intimate understanding of the technology. It will get much worse if any of these contraptions last long enough that parts start to wear out, sensors get clogged with road grime, faulty connections develop in wet or salty climates, damage accumulates from minor accidents, etc.
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
I've spent a large part of my career working primarily in software companies, and one thing that never changes is that no matter how well you test in house, customers inevitably find all kinds of bugs as soon as the product goes out into the field. People do unexpected things; a software developer simply cannot predict everything that real users will do. I'm quite confident that the same thing will happen with these cars: they will get hastily released into the field, and within a short time all hell will break loose as all kinds of crazy things happen once ordinary people try using them instead of engineers with thousands of hours on them and an intimate understanding of the technology. It will get much worse if any of these contraptions last long enough that parts start to wear out, sensors get clogged with road grime, faulty connections develop in wet or salty climates, damage accumulates from minor accidents, etc.

+1

Don't forget what may happen when Google-brained cars have to deal not only with human drivers, but with Tesla-brained cars which may react at slightly different speeds and respond at different distances. Or when one brand has a better ability to see icy road conditions and adapts, expecting the other self-driving cars to be as good as itself...
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
I worked on self-driving cars (well, small buses at the time due to the size of computing and sensor equipment of the time) back in 1992 for Daimler-Benz. I was an intern working on the computer vision system (it was a team of 2, including me).

During a test session at an abandoned airfield near Munich, we had my boss's great-grand-boss (3 levels higher) in the bus for a demo lap where we were to follow the car ahead of us at a safe distance and appropriate speed. Our vision system tracked cars frame-to-frame and looked for highly symmetric prominent horizontal lines. We used a pattern of horizontal lines with a trapezoidal overall shape and frame-to-frame consistency as a strong signal of "likely car" and the size of the bottom line (closest to the camera) as the signal for the distance to the car ahead.

The early tests and demos had gone well; we had little tuning work to do while some of the other teams were fiddling with their code and systems for a higher-speed demo after lunch. It had rained lightly in the morning before we arrived, but the sun was coming out and it was turning into a nice day in the late summer. Late summer, warm, sun breaking out meant for drying runway and taxiway conditions.

As the demo started, I was in the back of the bus, seated and belted in front of my rack of equipment and embedded display. Driver, front pax, and two other engineers were belted in their stations, and great-grand-boss was standing between and behind the two front seats to witness the demo. Initially, it was going well and the car we were following was gradually picking up the pace as we drove around the field.

The monitor in front of me showed the live black and white video annotated with output from the horizontal line finding algorithm and distance/closure rate estimates. At around 35-40 kph, I saw my system start to misbehave. As the car accelerated away, the bus was matching the acceleration. All of a sudden, the drying pavement presented a pattern of horizontal lines that our vision system locked onto as a trapezoidal pattern of horizontal lines associated with a car. It had frame-to-frame consistency and so our system decided it was a stopped car ahead and commanded emergency braking. The GGBoss was watching the car accelerate away and the bus matching its pace, so he wasn't on guard, and I couldn't even shout out an effective warning (as my German was weak and slow to find the words). Bus slams on maximum braking, sending GGBoss forward into the console and dash as the bus groans to a halt. Even after the halt, the "car" pattern on the roadway was still in front of us, so like a recalcitrant horse, the bus refused to move forward until we hit the big red emergency disconnect button and took over manual control to drive back to the paddock area and let the GGB collect himself.

Not my best demo ever...  :palm:
 

Offline tronde

  • Frequent Contributor
  • **
  • Posts: 307
  • Country: no
It will get much worse if any of these contraptions last long enough that parts start to wear out, sensors get clogged with road grime, faulty connections develop in wet or salty climates, damage accumulates from minor accidents, etc.

You must be joking. Errors that can't be fixed with a software update?

I think they will learn the term "factory recall" just as the other car manufacturers have done. Some of them will maybe experience a Takata moment too.
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
I feel all these self-driving cars are relying too much on multiple sensors. They should have stereo cameras in the cabin above the driver, center, and passenger positions on the windshield, but low enough to stay within the wiper blades' sweep, as the primary visual cues, and get the image interpretation as good as or better than a human's. External sensors / lidar should just be supplements, not the primary focus. The cameras should be at least 480 Hz with zero-frame-drop 480 Hz interpretation, and they should be able to see in the dark and not get blinded. An easy, cheap feat today for the amount of $$$ involved in the R&D for smart cars.
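For what a calibrated stereo pair buys you: depth falls out of the disparity between the two views as z = f·B/d. A minimal sketch; the focal length and baseline below are made-up numbers, not values from any production system:

```python
# Depth from stereo disparity for a calibrated, rectified camera pair:
#   depth z = f * B / d
# where f is the focal length in pixels, B the baseline between the two
# cameras in metres, and d the disparity in pixels. Constants are
# illustrative assumptions.

FOCAL_PX = 1000.0   # assumed focal length in pixels
BASELINE_M = 0.3    # assumed separation between the two cameras

def depth_from_disparity(disparity_px):
    """Metric depth of a feature matched across the stereo pair."""
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: object at infinity or bad match")
    return FOCAL_PX * BASELINE_M / disparity_px

print(depth_from_disparity(10.0))  # a 10 px disparity at these assumptions
```

Note the design tension this exposes: disparity shrinks as 1/z, so range resolution degrades quadratically with distance, which is one reason production systems supplement cameras with lidar or radar rather than the other way around.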
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5231
  • Country: us
Fully autonomous cars are a pipe dream until we have human level AI. Blocking all traffic on an edge case is not acceptable, yet inevitable without a human to take over.

You may be right in your conclusion, but why is blocking traffic a disqualifying criterion?  Human drivers do this regularly.

I have twice seen the classic case of four cars from four directions edge forward until the nose of each car is blocked by the rear end of the car coming from the right (this would be the one from the left for those who drive on that side of the road).  Then the drivers behind those cars edged forward until no one can move in any direction until someone far back in line leaves enough room for those in front to reverse until there is room to clear.

I have seen inexperienced or elderly drivers pull into traffic accidentally and then freeze in place rather than clearing the path more times than I can count.

Perhaps we should instead say that human driven cars are a pipe dream until rational behavior can be assured, even in edge cases.
 
The following users thanked this post: Someone

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
With news such as this I have zero confidence that self-driving cars will improve road safety.

The possibility of a virus infecting a self-driving car with such complex software is very real, and seems to have been glossed over in this whole debate. Once your car is infected it doesn't even matter if you have perfected the necessary algorithms to make this work, and manufacturers will be weaseling their way out of liability whenever they can.

If a population cull is what you want, self-driving cars are capable of achieving this.
« Last Edit: June 25, 2017, 03:32:54 am by X »
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
You may be right on your conclusion, but why is blocking traffic a disabling criteria?  Human drivers do this regularly.

Shouting at them will generally get them moving. There is no way for the artificially stupid cars to communicate and cooperate with the rest of the road users.

Quote
Then the drivers behind those cars edged forward until no one can move in any direction until someone far back in line leaves enough room for those in front to reverse until there is room to clear.

Let's say there's an artificially stupid car in the line, and let's say it's a one-way street to make it more interesting: is it going to reverse? Of course not.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5231
  • Country: us
You may be right on your conclusion, but why is blocking traffic a disabling criteria?  Human drivers do this regularly.

Shouting at them will generally get them moving. There is no way for the artificially stupid cars to communicate and cooperate with the rest of the road users.

Quote
Then the drivers behind those cars edged forward until no one can move in any direction until someone far back in line leaves enough room for those in front to reverse until there is room to clear.

Let's say there's an artificially stupid car in the line, and let's say it's a one-way street to make it more interesting: is it going to reverse? Of course not.

Shouting at a car with a microphone may get a useful response, just as shouting at other drivers may. Other times it just activates the shout-back mechanism.

The key to your attitude shows in your wording. An artificially stupid car (one that is dumber than it needs to be) will clearly have trouble. Perhaps the human rider, when he takes over, can resolve the problem. Or perhaps a merely accidentally stupid autopilot would not let frustration and other emotions drive it into a locked intersection in the first place. Or, God forbid, the programmers of said vehicle will have at some time in their lives also encountered this stupidity and included it in the admittedly massive decision trees.

Once it was thought that computers couldn't beat people at chess, which turned out to be wrong, even though we still haven't figured out how to make computers play like people. Most of the ad hoc objections to self-driving cars sound similar to the arguments against computer chess success.
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
Shouting at them will generally get them moving.

The same liability concerns which would not let the car reverse down a one way street wouldn't let it just obey third party commands.

Quote
Most of the ad hoc objections to self driving cars sound similar to the arguments against computer chess success.

No, it's not on the same level of difficulty. Urban driving is a fully general, creative, and cooperative problem-solving exercise; if we solve it, we'll have solved strong AI. We have no clue how to do that.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Shouting at them will generally get them moving.

The same liability concerns which would not let the car reverse down a one way street wouldn't let it just obey third party commands.

Quote
Most of the ad hoc objections to self driving cars sound similar to the arguments against computer chess success.

No, it's not on the same level of difficulty. Urban driving is a fully general creative and cooperative problem solving exercise, if we solve it we'll have solved strong AI. We have no clue how to do that.

And yet Google is having some success at self-driving cars in urban environments.  There have been some crashes but not nearly as many as I would have thought.  The good news about urban environments is that the speeds are typically fairly low.  Sure, there are moving obstacles (pedestrians, etc) but nothing is happening at 80 MPH (160 MPH closing speed).

I'm not sure I want to trust my life to a low-bid programmer in some foreign country but I do admire the efforts.
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
I never mentioned safety. It's the frequent gridlocks which would make it unacceptable.
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7377
  • Country: nl
  • Current job: ATEX product design
And yet Google is having some success at self-driving cars in urban environments.  There have been some crashes but not nearly as many as I would have thought.  The good news about urban environments is that the speeds are typically fairly low.  Sure, there are moving obstacles (pedestrians, etc) but nothing is happening at 80 MPH (160 MPH closing speed).

I'm not sure I want to trust my life to a low-bid programmer in some foreign country but I do admire the efforts.
EU and US urban environments are not the same thing.  Imagine the same roads, just half the size, and people parking on the sides. In fact, they leave their cars wherever they want: T junctions, before pedestrian crossings, roundabouts. I frequently see idiots leaving their cars in a big roundabout, in front of the bus station. Guess what, buses cannot pass, and it blocks the traffic until the idiot comes back after buying his senility pills from the shop. And the best thing is, people who have been driving for a decade still have no concept of the "priority to the right" rule. Or stop signs. Meaning, you need to stop, not just slow down. It's like rocket science for some.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us

And yet Google is having some success at self-driving cars in urban environments.  There have been some crashes but not nearly as many as I would have thought.  The good news about urban environments is that the speeds are typically fairly low.  Sure, there are moving obstacles (pedestrians, etc) but nothing is happening at 80 MPH (160 MPH closing speed).

Google cars have gone a lot of miles, but it's a lot of miles around the same carefully chosen and extremely well documented routes, and in a region with an extremely mild climate where surprise snow and heavy rain are very rare. The difference in difficulty between driving those routes and being able to punch in an arbitrary address anywhere in the country in any weather condition is orders of magnitude.
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb

Google cars have gone a lot of miles, ...

Not as much as you might think. The Google/Waymo odometer hit 3,000,000 miles in May 2017. That's only about 15 car lifetimes. Suddenly that doesn't sound very much at all.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline f5r5e5d

  • Frequent Contributor
  • **
  • Posts: 349
training data shouldn't just be limited to self driving cars - I would expect learning algorithms to use the full sensor suites, data collection on hundreds (thousands?) of human drivers
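For illustration, the simplest version of that idea is behavioural cloning: fit a model that maps logged sensor features to the control outputs a human driver actually produced, then copy the learned policy to the whole fleet. Here is a toy sketch in Python with purely synthetic data; the feature layout, the linear model and the `steer` function are all invented for the example and stand in for a far richer real pipeline.

```python
import numpy as np

# Hypothetical logged data: each row is a sensor snapshot from a human-driven
# car (speeds, distances to lane markings, etc.); labels are the steering
# angles the human actually applied at that moment.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                      # 1000 snapshots, 8 features
true_w = rng.normal(size=8)                         # the "human policy" we sample from
y = X @ true_w + rng.normal(scale=0.1, size=1000)   # recorded steering angles

# Behavioural cloning in its simplest form: fit a regressor that maps
# sensor features to the human's control output (here, least squares).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The learned policy can now be copied to every car in the fleet at once,
# which is the point being made above about transferable machine skill.
def steer(features):
    return features @ w
```

Real systems replace the linear fit with large neural networks and millions of logged miles, but the structure (demonstrations in, shared policy out) is the same.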
 

Offline StillTrying

  • Super Contributor
  • ***
  • Posts: 2850
  • Country: se
  • Country: Broken Britain
UK TV 9pm tonight.

Horizon: Dawn of the Driverless Car

http://www.bbc.co.uk/programmes/b006mgxf/broadcasts/upcoming
That took much longer than I thought it would.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5231
  • Country: us

Google cars have gone a lot of miles, ...

Not as much as you might think. The Google/Waymo odometer hit 3,000,000 miles in May 2017. That's only about 15 car lifetimes. Suddenly that doesn't sound very much at all.

Compares very favorably to the amount of training most human drivers receive.  Few who are not professionals log much over a million miles in a lifetime.  Humans aren't too efficient at transferring training from one to another.
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb

Google cars have gone a lot of miles, ...

Not as much as you might think. The Google/Waymo odometer hit 3,000,000 miles in May 2017. That's only about 15 car lifetimes. Suddenly that doesn't sound very much at all.

Compares very favorably to the amount of training most human drivers receive.  Few who are not professionals log much over a million miles in a lifetime.  Humans aren't too efficient at transferring training from one to another.

Yes, but unlike the autonomous cars we start off with the ability to go "That's a tree, it doesn't move", "That's a dog, it does move, it can be erratic, at the moment it is running away from me", "That's the centre line", "That's a kerb", "That barrel that has fallen off the back of the lorry in front and is bouncing towards me is bad news" and so on. That's the hard stuff, the training that humans get before they learn to drive. Learning to drive comes at the end of 16 years or so of learning to recognise and assess the physical world and learning 'simple' skills like catching and throwing balls in a cross wind; by comparison, learning to drive is trivial.

It took me about 15-20 hours of training and practice to pass my driving test and become licensed to drive unsupervised anywhere. How good was the first Google car after 15 hours? Up to passing a driving test on its own? I think not. Are they yet up to doing that after 3×10⁶ miles? Still not, I think.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 
The following users thanked this post: Someone

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1890
  • Country: us
    • KE5FX.COM
How good was the first Google car after 15 hours? Up to passing a driving test on its own? I think not. Are they yet up to doing that after 3×10⁶ miles? Still not, I think.

It doesn't matter in the least.  What matters is that they're getting better, and we're not.
 
The following users thanked this post: cdev

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
>It doesn't matter in the least.  What matters is that they're getting better, and we're not.

Also, each one of us has to learn on our own and when we die, the knowledge we amass dies with us. With computers it's different: when one learns a process (say, by observing an expert human do their job for a while), all can gain that knowledge.

I could imagine a world, not that far into the future, where only the very best individuals will be employed, and then only to teach a single robot or computer their skill, once.

Then it's over.

But there's one question: who will buy all the now-useless products of industry, without any incomes?

I know the developing countries are expected to grow for a few years longer, but what about after that?
 
"What the large print giveth, the small print taketh away."
 

Online helius

  • Super Contributor
  • ***
  • Posts: 3642
  • Country: us
Also, each one of us has to learn on our own and when we die, the knowledge we amass dies with us. With computers it's different: when one learns a process (say, by observing an expert human do their job for a while), all can gain that knowledge.
Machines process data. As presently constituted, they do not possess knowledge at all. Not having any knowledge to begin with, it's obviously impossible for them to transmit what they do not have. They also do not "observe" anything. The poetic fallacy is deep in these AI discussions.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5231
  • Country: us
The language is confusing and probably not well suited to describing what may be going on.  The real question is whether driving requires intelligence - artificial or otherwise.  Not clear to me.  Certainly not clear that creating a new solution to the current problem on the fly is the right approach, or that it is the approach used in most human driving.  Also not clear that the processes that humans use to drive are the only or the best processes.

The claim that humans on average currently drive better than the automatons is a combination of factual observation and opinion about how to interpret that data. I tend to agree that self-driving cars currently aren't too good. But it isn't clear to me that they aren't already better than some portion of the human driving population.

The claim that automatons will never, or not for a long, long time, be able to compete with average or better humans in general driving is purely opinion with little factual backing. Much like the opinions in the late nineties that it would be a decade or more before LCD monitors took over from CRTs. The CRT died far faster than most expected. We will have to wait a few years to see how self-driving cars come along. Whatever the answer, I am sure it will surprise someone.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1890
  • Country: us
    • KE5FX.COM
Machines process data. As presently constituted, they do not possess knowledge at all. Not having any knowledge to begin with, it's obviously impossible for them to transmit what they do not have. They also do not "observe" anything. The poetic fallacy is deep in these AI discussions.

That argument worked fine, too, right up until Google wiped the floor with a 9-dan Go master.
 
The following users thanked this post: rs20

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Machines process data. As presently constituted, they do not possess knowledge at all. Not having any knowledge to begin with, it's obviously impossible for them to transmit what they do not have. They also do not "observe" anything. The poetic fallacy is deep in these AI discussions.
It's deeply ironic that you're accusing us of logical fallacies when your argument is pure pedantic nomenclature: "AIs don't have real knowledge" and "AIs don't really observe". What cdev is correctly saying, by example, is that if an AI encounters a traffic cone and subsequently gains the ability to drive around it, that update is shared and persisted across all AIs of that model. If your main objection to that point is the naming of these phenomena as "observing" a cone and gaining the "knowledge" of how to deal with it, I'd rather not get bogged down in that and just continue discussing AI in a pragmatic and reasonable fashion.
 
The following users thanked this post: donotdespisethesnake

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Also, each one of us has to learn on our own and when we die, the knowledge we amass dies with us.

What a, frankly, silly thing to say. The fine points of epistemology and philosophy are one thing, but there's also what, least-ways around here in the East End of London, we call the bleedin' obvious. As soon as our species evolved language the problem of lost knowledge largely went away. All of us here have benefited from the accumulated knowledge of Isaac Newton (dead), Albert Einstein (dead), James Clerk Maxwell (dead) and many others.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Also, each one of us has to learn on our own and when we die, the knowledge we amass dies with us.
What a, frankly, silly thing to say. The fine points of epistemology and philosophy are one thing, but there's also what, least-ways around here in the East End of London, we call the bleedin' obvious. As soon as our species evolved language the problem of lost knowledge largely went away. All of us here have benefited from the accumulated knowledge of Isaac Newton (dead), Albert Einstein (dead), James Clerk Maxwell (dead) and many others.
Of course, what cdev said is silly. However, I think what he is really talking about is amassed skill. That does die with us. In fact, it generally goes downhill badly well before we die.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5231
  • Country: us
Also, each one of us has to learn on our own and when we die, the knowledge we amass dies with us.

What a, frankly, silly thing to say. The fine points of epistemology and philosophy are one thing, but there's also what, least-ways around here in the East End of London, we call the bleedin' obvious. As soon as our species evolved language the problem of lost knowledge largely went away. All of us here have benefited from the accumulated knowledge of Isaac Newton (dead), Albert Einstein (dead), James Clerk Maxwell (dead) and many others.

I like Coppice's point.  Knowledge and skill are different things.  Knowledge is saying "rotate the steering wheel counterclockwise to turn left", or "apply carefully modulated pressure to the brakes".  Skill is the combination of that knowledge with muscle memory, vision, and other sensory input to do those things in a useful manner.  Knowledge is easy to transfer.  The knowledge part of self driving cars is a done deal.  The skill part is that tough problem that is only partly solved - by humans or robots.   Skill is harder to transfer between humans, but whatever passes for skills in robots should transfer pretty easily.  Robot skills are likely to be quite different from human skills.   Different data sets, different accuracies, different actuators and most likely different logic.

One thing that will lead to differences between robot skills and human skills is different goals and metrics.  "Don't kill or injure anybody" should be a common goal for both, but "have fun" will have no part in a robot's goals.  One difference that is obvious from some comments in this thread is that robot drivers may have a goal of minimizing total human travel time, while most human drivers have a goal of minimizing their personal travel time.  Similarly, the overall fleet of robot drivers will likely share the same or similar goals in how they weight fuel efficiency against speed, while these goals vary widely among human drivers.
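To make the goals-and-weightings point concrete: two agents facing the same two routes can pick differently purely because of how they weight time against fuel. A toy sketch in Python, with made-up route numbers and weights (nothing here comes from any real navigation system):

```python
# Two candidate routes, described as (travel_time_minutes, fuel_litres).
# The numbers are illustrative only.
routes = {"highway": (22.0, 2.4), "surface": (30.0, 1.6)}

def route_cost(time_min, fuel_l, w_time, w_fuel):
    # A simple weighted cost. The weights encode the driver's (or fleet's)
    # priorities and convert both terms onto a common "cost" scale.
    return w_time * time_min + w_fuel * fuel_l

# A hurried human driver weights time heavily and picks the fast route...
hurried = min(routes, key=lambda r: route_cost(*routes[r], w_time=1.0, w_fuel=0.5))

# ...while a fleet tuned for efficiency weights fuel heavily and picks the other.
frugal = min(routes, key=lambda r: route_cost(*routes[r], w_time=0.2, w_fuel=3.0))
```

Same roads, same data, different objective function, different behaviour; that, rather than raw skill, may be what most distinguishes robot drivers from human ones.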
 

Offline donotdespisethesnake

  • Super Contributor
  • ***
  • Posts: 1093
  • Country: gb
  • Embedded stuff
Also, each one of us has to learn on our own and when we die, the knowledge we amass dies with us.

What a, frankly, silly thing to say. The fine points of epistemology and philosophy are one thing, but there's also what, least-ways around here in the East End of London, we call the bleedin' obvious. As soon as our species evolved language the problem of lost knowledge largely went away. All of us here have benefited from the accumulated knowledge of Isaac Newton (dead), Albert Einstein (dead), James Clerk Maxwell (dead) and many others.

That is also a very silly thing to say, in the context of driverless cars i.e. the topic of the thread, and not the "how many angels fit on a pinhead" debate.

"Hello, this is your Captain speaking, welcome aboard. I've never flown a plane before, but rest assured I have read every book on the subject."

REALLY?
Bob
"All you said is just a bunch of opinions."
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Also, each one of us has to learn on our own and when we die, the knowledge we amass dies with us.

What a, frankly, silly thing to say. The fine points of epistemology and philosophy are one thing, but there's also what, least-ways around here in the East End of London, we call the bleedin' obvious. As soon as our species evolved language the problem of lost knowledge largely went away. All of us here have benefited from the accumulated knowledge of Isaac Newton (dead), Albert Einstein (dead), James Clerk Maxwell (dead) and many others.

That is also a very silly thing to say, in the context of driverless cars i.e. the topic of the thread, and not the "how many angels fit on a pinhead" debate.

"Hello, this is your Captain speaking, welcome aboard. I've never flown a plane before, but rest assured I have read every book on the subject."

Not the same, and you know it.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
For the moment these systems train, they don't learn. When they start being able to learn, they'll be able to drive a car.

I found the philosophizing aspect of AI research a bit annoying in the past, but at least the philosophers used terms like strong AI and human-level intelligence and drove research to chase them. Modern "AI" research mostly pretends that its little toy systems deserve to be called intelligent, while trying to manipulate the language of the field so that the words suggesting something better should be the aim become unacceptable to utter.

AI research was more interesting without all the billions of VC bucks spent creating slightly better toy systems.
« Last Edit: July 01, 2017, 03:46:04 pm by Marco »
 

Offline stj

  • Super Contributor
  • ***
  • Posts: 2155
  • Country: gb
Anybody who wants to see where this leads should watch an anime called Ex-Driver.

A squad of people who go after out-of-control cars in a time when nobody else is allowed to drive!!
http://www.anime-planet.com/anime/ex-driver

Given that it was made 17 years ago, the Japanese sure have a way of seeing what's coming!!
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
https://www.theguardian.com/technology/2017/jul/01/volvo-admits-its-self-driving-cars-are-confused-by-kangaroos

Probably they need to add more cameras and use Lidar.

About the lane markings: during winter there won't necessarily be any visible markings anyway. Using a lidar for environmental feature detection could provide a solution, as roadside features typically stay in place no matter what the weather conditions are, so basically no visible road markings would be needed. The car could download new feature data and navigation data from the net.
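As a rough sketch of what localising against a stored feature map involves: match the landmarks the lidar sees against their surveyed map positions and solve for the vehicle's position. The toy below assumes known correspondences and a pure translation; real scan matching (ICP and friends) must also handle rotation, noise and outliers, and all the coordinates here are invented.

```python
import numpy as np

# Stored map: 2-D positions of roadside features (poles, signs, kerb edges)
# surveyed earlier in good weather. Illustrative coordinates only.
map_landmarks = np.array([[0.0, 5.0], [10.0, 5.0], [20.0, 6.0], [30.0, 5.5]])

# What the lidar sees now: the same features, expressed relative to the
# car's (unknown) position. Noise and mismatches are omitted in this sketch.
car_position = np.array([12.0, 1.5])
scan = map_landmarks - car_position

# With known correspondences and translation only, the least-squares
# estimate of the car's position is simply the difference of centroids.
estimated_position = map_landmarks.mean(axis=0) - scan.mean(axis=0)
```

Because the roadside features survive snow cover, this kind of matching keeps working when painted lane markings do not, which is the point made above.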
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1890
  • Country: us
    • KE5FX.COM
https://www.theguardian.com/technology/2017/jul/01/volvo-admits-its-self-driving-cars-are-confused-by-kangaroos

Probably they need to add more cameras and use Lidar.

About the lane markings: during winter there won't necessarily be any visible markings anyway. Using a lidar for environmental feature detection could provide a solution, as roadside features typically stay in place no matter what the weather conditions are, so basically no visible road markings would be needed. The car could download new feature data and navigation data from the net.

Ground-penetrating radar may prove helpful there.  Subsurface features of the road bed are unique and stable over time and changing weather conditions.  Some people at MIT have done some interesting work on that.
 

Online BrianHG

  • Super Contributor
  • ***
  • Posts: 7733
  • Country: ca
UK TV 9pm tonight.

Horizon: Dawn of the Driverless Car

http://www.bbc.co.uk/programmes/b006mgxf/broadcasts/upcoming
Just watched it.  OK, simple explanations, but it covers the goals; one of the companies shown was crap while the others are advancing.
The biggest problem I see is that these self-driving cars (with too many sensors, in my opinion; they should get at least as good as a human at purely visual interpretation first, then worry about extra improved sensing as a powerful plus) only seem to be worried about the car's own software being aware of itself exclusively.  I.e., stay on the road.  Avoid accidents.  Navigate to the set destination.  The software isn't going anywhere near putting itself in the mind of the other cars and drivers on the road.  If you want proper avoidance, the software should look out for things like staying extra far away from someone who appears to be driving slightly drunk, or making driving errors like a student driver, or oncoming drivers making bad choices in poor weather, or a different self-driving AI in another vehicle that appears to be not as good at merging or might not see you properly.
Also, here in Montreal during rush hour, just to make some highway exits or left turns at some intersections, you must drive with forceful attitude or intent, otherwise you will never get anywhere.  How will self-driving cars express this, or take advantage of another driver's slightly slow advancing habits and forcefully squeeze themselves into the merging lane?  Human drivers will learn very quickly never to give the 'safe space' the self-driving cars require before they will perform a lane change, and the self-driving cars will always lose out, miss intersections, or stop in one lane waiting for an opening that will never come during rush hour, never getting where they were instructed to go, blocking the lane they are in and making rush hour even worse for everyone else behind them.  These cars will be despised under these circumstances.
« Last Edit: July 02, 2017, 11:31:45 am by BrianHG »
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1890
  • Country: us
    • KE5FX.COM
Human drivers will learn very quickly never to give the 'safe space' the self-driving cars require before they will perform a lane change, and the self-driving cars will always lose out, miss intersections, or stop in one lane waiting for an opening that will never come during rush hour, never getting where they were instructed to go, blocking the lane they are in and making rush hour even worse for everyone else behind them.  These cars will be despised under these circumstances.

This is indeed a good point.  The hardest part of designing a self-driving car will be making it compatible with human drivers.  If I have reason to believe that the person who's signaling to move in front of me will fanatically adhere to a speed limit which has been set below the 85th-percentile goal for revenue-collection purposes, of course I'll try to keep them from merging.  And obviously no one will build the capacity to break traffic laws into a self-driving car.  It'd be professional suicide.
 


