Author Topic: Self Driving Cars: How well do they work in areas with haphazard driving rules?


Offline BrianHG

  • Super Contributor
  • ***
  • Posts: 7789
  • Country: ca
I've spent a large part of my career working primarily in software companies, and one thing that never changes is that no matter how well you test it in-house, customers inevitably find all kinds of bugs as soon as it goes out into the field. People do unexpected things; a software developer simply cannot predict everything that real users will do. I'm quite confident that the same thing will happen with these cars: they will get hastily released into the field, and within a short time all hell will break loose as all kinds of crazy things happen once ordinary people try using them instead of engineers with thousands of hours working with them and intimate understanding of the technology. It will get much worse if any of these contraptions last long enough that parts start to wear out, sensors get clogged up with road grime, faulty connections develop from wet or salty climates, damage accumulates from minor accidents, etc.

+1

Don't forget what may happen when Google-brained cars have to deal not only with human drivers, but with Tesla-brained cars which may react at slightly different speeds and respond at different distances.  Or when one brand has a better ability to see icy road conditions and adapts, expecting the other self-driving cars to be as good as itself...
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
I worked on self-driving cars (well, small buses, given the size of the computing and sensor equipment of the era) back in 1992 for Daimler-Benz. I was an intern working on the computer vision system (it was a team of 2, including me).

During a test session at an abandoned airfield near Munich, we had my boss's great-grand-boss (3 levels higher) in the bus for a demo lap where we were to follow the car ahead of us at a safe distance and appropriate speed. Our vision system tracked cars frame-to-frame and looked for highly symmetric prominent horizontal lines. We used a pattern of horizontal lines with a trapezoidal overall shape and frame-to-frame consistency as a strong signal of "likely car" and the size of the bottom line (closest to the camera) as the signal for the distance to the car ahead.
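
For the curious, the logic was roughly along these lines. This is a present-day Python sketch, not the code we actually ran in 1992; the helper names, calibration constants and thresholds are illustrative assumptions, not the real values.

Code: [Select]
from dataclasses import dataclass

@dataclass
class HLine:
    y: float         # vertical position in the image (pixels; larger = lower in frame)
    x_center: float  # horizontal centre of the segment (pixels)
    width: float     # segment length (pixels)

# Assumed calibration constants (illustrative values only)
FOCAL_LENGTH_PX = 800.0    # camera focal length expressed in pixels
ASSUMED_CAR_WIDTH_M = 1.8  # assumed real-world width of a car's rear

def looks_like_car(lines, centre_tol=10.0, min_lines=3):
    """Crude test: several horizontal segments stacked vertically, sharing a
    rough vertical axis of symmetry, and widest at the bottom (a trapezoid)."""
    if len(lines) < min_lines:
        return False
    lines = sorted(lines, key=lambda l: l.y)              # top of frame to bottom
    centres = [l.x_center for l in lines]
    symmetric = max(centres) - min(centres) < centre_tol
    widening = all(a.width <= b.width + 2.0 for a, b in zip(lines, lines[1:]))
    return symmetric and widening

def distance_to_car_m(bottom_line_width_px):
    """Pinhole-camera estimate: the apparent width of the bottom line
    shrinks in proportion to distance."""
    return FOCAL_LENGTH_PX * ASSUMED_CAR_WIDTH_M / bottom_line_width_px

def track(candidates_per_frame, min_frames=5):
    """Report a 'car ahead' distance only when a matching candidate persists
    over several consecutive frames (the frame-to-frame consistency check)."""
    streak = 0
    for lines in candidates_per_frame:
        if looks_like_car(lines):
            streak += 1
            if streak >= min_frames:
                bottom = max(lines, key=lambda l: l.y)    # line closest to the camera
                yield distance_to_car_m(bottom.width)
        else:
            streak = 0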

The early tests and demos had gone well; we had little tuning work to do while some of the other teams were fiddling with their code and systems for a higher-speed demo after lunch. It had rained lightly in the morning before we arrived, but the sun was coming out and it was turning into a nice late-summer day. Warm weather and the sun breaking out meant drying runway and taxiway conditions.

As the demo started, I was in the back of the bus, seated and belted in front of my rack of equipment and embedded display. Driver, front pax, and two other engineers were belted in their stations, and great-grand-boss was standing between and behind the two front seats to witness the demo. Initially, it was going well and the car we were following was gradually picking up the pace as we drove around the field.

The monitor in front of me showed the live black and white video annotated with output from the horizontal line finding algorithm and distance/closure rate estimates. At around 35-40 kph, I saw my system start to misbehave. As the car accelerated away, the bus was matching the acceleration. All of a sudden, the drying pavement presented a pattern of horizontal lines that our vision system locked onto as a trapezoidal pattern of horizontal lines associated with a car. It had frame-to-frame consistency, and so our system decided it was a stopped car ahead and commanded emergency braking. The GGBoss was watching the car accelerate away and the bus matching its pace, so he wasn't on guard, and I couldn't even shout out an effective warning (as my German was weak and slow to find the words). The bus slams on maximum braking, sending GGBoss forward into the console and dash as it groans to a halt. Even after the halt, the "car" pattern on the roadway was still in front of us, so like a recalcitrant horse, the bus refused to move forward until we hit the big red emergency disconnect button and took over manual control to drive back to the paddock area and let the GGB collect himself.

Not my best demo ever...  :palm:
 

Offline tronde

  • Frequent Contributor
  • **
  • Posts: 307
  • Country: no
It will get much worse if any of these contraptions last long enough that parts start to wear out, sensors get clogged up with road grime, faulty connections develop from wet or salty climates, damage accumulates from minor accidents, etc.

You must be joking. Errors that can't be fixed with a software update?

I think they will learn the term "factory recall" just as the other car manufacturers have done. Some of them may experience a Takata moment too.
 

Offline BrianHG

  • Super Contributor
  • ***
  • Posts: 7789
  • Country: ca
I feel all these self-driving cars are relying too much on multiple sensors.  They should have stereo cameras in the cabin above the driver, center and passenger positions on the windshield, but low enough to be swept by the wiper blades, as the primary visual cues, and get the image interpretation as good as or better than a human's.  External sensors / lidar should just be supplements, not the primary focus.  The cameras should be at least 480 Hz with zero dropped frames and 480 Hz interpretation, and they should be able to see in the dark and not get blinded.  An easy, cheap feat today for the amount of $$$ involved in the R&D for smart cars.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5258
  • Country: us
Fully autonomous cars are a pipe dream until we have human level AI. Blocking all traffic on an edge case is not acceptable, yet inevitable without a human to take over.

You may be right in your conclusion, but why is blocking traffic a disabling criterion?  Human drivers do this regularly.

I have twice seen the classic case of four cars from four directions edging forward until the nose of each car is blocked by the rear end of the car coming from the right (the one from the left for those who drive on that side of the road).  Then the drivers behind those cars edge forward until no one can move in any direction, until someone far back in the line leaves enough room for those in front to reverse and clear.

More times than I can count, I have seen inexperienced or elderly drivers pull into traffic accidentally and then freeze in place rather than clearing the path.

Perhaps we should instead say that human driven cars are a pipe dream until rational behavior can be assured, even in edge cases.
 
The following users thanked this post: Someone

Offline X

  • Regular Contributor
  • *
  • Posts: 179
  • Country: 00
    • This is where you end up when you die...
With news such as this I have zero confidence that self-driving cars will improve road safety.

The possibility of a virus infecting a self-driving car with such complex software is very real, and seems to have been glossed over in this whole debate. Once your car is infected it doesn't even matter if you have perfected the necessary algorithms to make this work, and manufacturers will be weaseling their way out of liability whenever they can.

If a population cull is what you want, self-driving cars are capable of achieving this.
« Last Edit: June 25, 2017, 03:32:54 am by X »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6726
  • Country: nl
You may be right in your conclusion, but why is blocking traffic a disabling criterion?  Human drivers do this regularly.

Shouting at them will generally get them moving. There is no way for the artificially stupid cars to communicate and cooperate with the rest of the road users.

Quote
Then the drivers behind those cars edge forward until no one can move in any direction, until someone far back in the line leaves enough room for those in front to reverse and clear.

Let's say there's an artificially stupid car in the line, and let's say it's a one-way street to make it more interesting. Is it going to reverse? Of course not.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5258
  • Country: us
You may be right in your conclusion, but why is blocking traffic a disabling criterion?  Human drivers do this regularly.

Shouting at them will generally get them moving. There is no way for the artificially stupid cars to communicate and cooperate with the rest of the road users.

Quote
Then the drivers behind those cars edge forward until no one can move in any direction, until someone far back in the line leaves enough room for those in front to reverse and clear.

Let's say there's an artificially stupid car in the line, and let's say it's a one-way street to make it more interesting. Is it going to reverse? Of course not.

Shouting at a car with a microphone may get a useful response, just as shouting at other drivers may.  Other times it just activates the shout-back mechanism.

The key to your attitude shows in your choice of words.  An artificially stupid car (one that is dumber than it needs to be) will clearly have trouble.  Perhaps the human rider can resolve the problem when he takes over.  Or perhaps an autopilot that is only accidentally stupid would not let frustration and other emotions drive it into a locked-intersection situation in the first place.  Or, God forbid, the programmers of said vehicle will at some time in their lives have encountered this stupidity themselves and included it in the admittedly massive decision trees.

Once it was thought that computers couldn't beat people at chess.  That turned out to be wrong, even though we still haven't figured out how to make computers play like people.  Most of the ad hoc objections to self-driving cars sound similar to the arguments against computer chess success.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6726
  • Country: nl
Shouting at them will generally get them moving.

The same liability concerns which would not let the car reverse down a one-way street wouldn't let it just obey third-party commands.

Quote
Most of the ad hoc objections to self-driving cars sound similar to the arguments against computer chess success.

No, it's not on the same level of difficulty. Urban driving is a fully general creative and cooperative problem-solving exercise; if we solve it, we'll have solved strong AI. We have no clue how to do that.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9893
  • Country: us
Shouting at them will generally get them moving.

The same liability concerns which would not let the car reverse down a one-way street wouldn't let it just obey third-party commands.

Quote
Most of the ad hoc objections to self-driving cars sound similar to the arguments against computer chess success.

No, it's not on the same level of difficulty. Urban driving is a fully general creative and cooperative problem-solving exercise; if we solve it, we'll have solved strong AI. We have no clue how to do that.

And yet Google is having some success at self-driving cars in urban environments.  There have been some crashes but not nearly as many as I would have thought.  The good news about urban environments is that the speeds are typically fairly low.  Sure, there are moving obstacles (pedestrians, etc) but nothing is happening at 80 MPH (160 MPH closing speed).

I'm not sure I want to trust my life to a low-bid programmer in some foreign country but I do admire the efforts.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6726
  • Country: nl
I never mentioned safety. It's the frequent gridlocks which would make it unacceptable.
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7414
  • Country: nl
  • Current job: ATEX product design
And yet Google is having some success at self-driving cars in urban environments.  There have been some crashes but not nearly as many as I would have thought.  The good news about urban environments is that the speeds are typically fairly low.  Sure, there are moving obstacles (pedestrians, etc) but nothing is happening at 80 MPH (160 MPH closing speed).

I'm not sure I want to trust my life to a low-bid programmer in some foreign country but I do admire the efforts.
EU and US urban environments are not the same thing.  Imagine the same roads, just half the size, and people parking on the sides. In fact, they leave their cars wherever they want: T junctions, before pedestrian crossings, roundabouts. I frequently see stupid idiots leaving their cars in a big roundabout, in front of the bus station. Guess what, buses cannot pass, and it blocks the traffic until the idiot comes back after buying his senility pills from the shop. And the best thing is, people who have been driving for a decade still have no concept of the "priority to the right" rule. Or stop signs. Meaning you need to stop, not just slow down. It's like rocket science for some.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us

And yet Google is having some success at self-driving cars in urban environments.  There have been some crashes but not nearly as many as I would have thought.  The good news about urban environments is that the speeds are typically fairly low.  Sure, there are moving obstacles (pedestrians, etc) but nothing is happening at 80 MPH (160 MPH closing speed).

Google cars have gone a lot of miles, but it's a lot of miles around the same carefully chosen and extremely well-documented routes, and in a region with an extremely mild climate where surprise snow and heavy rain are very rare. Being able to punch in an arbitrary address anywhere in the country in any weather condition is orders of magnitude harder than driving those routes.
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb

Google cars have gone a lot of miles, ...

Not as much as you might think. The Google/Waymo odometer hit 3,000,000 miles in May 2017. That's only about 15 car lifetimes. Suddenly that doesn't sound very much at all.
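
(For scale: 3,000,000 miles divided by roughly 200,000 miles for a typical car's lifetime gives about 15; the 200,000-mile lifetime is my assumption, not a Waymo figure.)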
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

Offline f5r5e5d

  • Frequent Contributor
  • **
  • Posts: 349
Training data shouldn't just be limited to self-driving cars. I would expect the learning algorithms to use the full sensor suites for data collection on hundreds (thousands?) of human drivers.
 

Offline StillTrying

  • Super Contributor
  • ***
  • Posts: 2850
  • Country: se
  • Broken Britain
UK TV 9pm tonight.

Horizon: Dawn of the Driverless Car

http://www.bbc.co.uk/programmes/b006mgxf/broadcasts/upcoming
.  That took much longer than I thought it would.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5258
  • Country: us

Google cars have gone a lot of miles, ...

Not as much as you might think. The Google/Waymo odometer hit 3,000,000 miles in May 2017. That's only about 15 car lifetimes. Suddenly that doesn't sound very much at all.

Compares very favorably to the amount of training most human drivers receive.  Few who are not professionals log much over a million miles in a lifetime.  Humans aren't too efficient at transferring training from one to another.
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb

Google cars have gone a lot of miles, ...

Not as much as you might think. The Google/Waymo odometer hit 3,000,000 miles in May 2017. That's only about 15 car lifetimes. Suddenly that doesn't sound very much at all.

Compares very favorably to the amount of training most human drivers receive.  Few who are not professionals log much over a million miles in a lifetime.  Humans aren't too efficient at transferring training from one to another.

Yes, but unlike the autonomous cars we start off with the ability to go "That's a tree, it doesn't move", "That's a dog, it does move, it can be erratic, at the moment it is running away from me", "That's the centre line", "That's a kerb", "That barrel that has fallen off the back of the lorry in front and is bouncing towards me is bad news" and so on. That's the hard stuff, the training that humans get before they learn to drive. Learning to drive comes at the end of 16 years or so of learning to recognise and assess the physical world and learning 'simple' skills like catching and throwing balls in a crosswind; by comparison, learning to drive is trivial.

It took me about 15-20 hours of training and practice to pass my driving test and become licensed to drive unsupervised anywhere. How good was the first Google car after 15 hours? Up to passing a driving test on its own? I think not. Are they yet up to doing that after 3×10^6 miles? Still not, I think.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 
The following users thanked this post: Someone

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1905
  • Country: us
    • KE5FX.COM
How good was the first Google car after 15 hours? Up to passing a driving test on its own? I think not. Are they yet up to doing that after 3×10^6 miles? Still not, I think.

It doesn't matter in the least.  What matters is that they're getting better, and we're not.
 
The following users thanked this post: cdev

Offline cdev

  • Super Contributor
  • ***
  • Posts: 7350
  • Country: 00
>It doesn't matter in the least.  What matters is that they're getting better, and we're not.

Also, each one of us has to learn on our own, and when we die, the knowledge we amass dies with us. With computers, when one learns a process (say, by observing an expert human do their job for a while), all can gain that knowledge.

I could imagine a world, not that far into the future, where only the very best individuals will be employed, and then only to teach a single robot or computer their skill, once.

Then it's over.

But there's one question: who will buy all the now-useless products of industry, without any incomes?

I know the developing countries are expected to keep growing for a few years longer, but what about after that?
 
"What the large print giveth, the small print taketh away."
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3644
  • Country: us
Also, each one of us has to learn on our own, and when we die, the knowledge we amass dies with us. With computers, when one learns a process (say, by observing an expert human do their job for a while), all can gain that knowledge.
Machines process data. As presently constituted, they do not possess knowledge at all. Not having any knowledge to begin with, it's obviously impossible for them to transmit what they do not have. They also do not "observe" anything. The poetic fallacy is deep in these AI discussions.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5258
  • Country: us
The language is confusing and probably not well suited to describing what may be going on.  The real question is whether driving requires intelligence - artificial or otherwise.  Not clear to me.  Certainly not clear that creating a new solution to the current problem on the fly is the right approach, or that it is the approach used in most human driving.  Also not clear that the processes that humans use to drive are the only or the best processes.

The claim that humans on average currently drive better than the automatons is a combination of factual observations and opinions about how to interpret that data.  I tend to agree that self-driving cars currently aren't too good.  But it isn't clear to me that they aren't already better than some portion of the human driving population.

The claim that automatons will never, or not for a long, long time, be able to compete with average or better humans in general driving is pure opinion with little factual backing.  Much like the opinions in the late nineties that it would be a decade or more before LCD monitors took over from CRTs.  The CRT died far faster than most expected.  We will have to wait a few years to see how self-driving cars come along.  Whatever the answer, I am sure it will surprise someone.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1905
  • Country: us
    • KE5FX.COM
Machines process data. As presently constituted, they do not possess knowledge at all. Not having any knowledge to begin with, it's obviously impossible for them to transmit what they do not have. They also do not "observe" anything. The poetic fallacy is deep in these AI discussions.

That argument worked fine, too, right up until Google wiped the floor with a 9-dan Go master.
 
The following users thanked this post: rs20

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2320
  • Country: au
Machines process data. As presently constituted, they do not possess knowledge at all. Not having any knowledge to begin with, it's obviously impossible for them to transmit what they do not have. They also do not "observe" anything. The poetic fallacy is deep in these AI discussions.
It's deeply ironic that you're accusing us of logical fallacies when your argument is pure pedantic nomenclature: "AIs don't have real knowledge" and "AIs don't really observe". What cdev is correctly saying, by example, is that if an AI encounters a traffic cone and subsequently gains the ability to drive around it, that update is shared and persisted across all AIs of that model. If your main objection is the naming of these phenomena as observing a cone and gaining the knowledge of how to deal with it, I'd rather not get bogged down in that and just continue discussing AI in a pragmatic and reasonable fashion.
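
To pin the idea down, here is a minimal sketch of what that fleet-wide sharing could look like. It's a hypothetical toy API in Python, not any manufacturer's actual system; the class and method names are made up.

Code: [Select]
class Car:
    def __init__(self, model_version=0):
        self.model_version = model_version

class FleetModel:
    """Toy central model: one car's corrective experience becomes an update
    that every car of the same model receives."""
    def __init__(self, fleet):
        self.fleet = fleet
        self.version = 0
        self.training_examples = []

    def report_incident(self, sensor_log, corrective_action):
        # A single car uploads the situation it mishandled (e.g. the traffic
        # cone) together with how it was eventually resolved.
        self.training_examples.append((sensor_log, corrective_action))

    def retrain_and_publish(self):
        # Central retraining would happen here; the resulting update is then
        # pushed to the whole fleet, so the "knowledge" persists beyond the
        # car that originally encountered the problem.
        self.version += 1
        for car in self.fleet:
            car.model_version = self.version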
 
The following users thanked this post: donotdespisethesnake

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Also, each one of us has to learn on our own, and when we die, the knowledge we amass dies with us.

What a, frankly, silly thing to say. The fine points of epistemology and philosophy are one thing, but there's also what, leastways around here in the East End of London, we call the bleedin' obvious. As soon as our species evolved language, the problem of lost knowledge largely went away. All of us here have benefited from the accumulated knowledge of Isaac Newton (dead), Albert Einstein (dead), James Clerk Maxwell (dead) and many others.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 

