EEVblog Electronics Community Forum

EEVblog => EEVblog Specific => Topic started by: EEVblog on March 22, 2018, 03:52:16 am

Title: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 03:52:16 am
How did the Uber autonomous car fatality in Tempe Arizona happen?
It basically shouldn't have.
A look at the newly released camera footage of the accident, the location, and the car LIDAR, RADAR, and camera sensor suites available to prevent such an accident.
Video footage: https://twitter.com/TempePolice/status/976585098542833664 (https://twitter.com/TempePolice/status/976585098542833664)
Location of accident: https://www.google.com.au/maps/@33.4369934,-111.9429875,3a,75y,115.16h,92.53t/data= (https://www.google.com.au/maps/@33.4369934,-111.9429875,3a,75y,115.16h,92.53t/data=)!3m6!1e1!3m4!1scUyILaxFs5z63AL2SupCJw!2e0!7i13312!8i6656
Inside Uber’s self-driving car mess:
https://www.recode.net/2017/3/24/14737438/uber-self-driving-turmoil-otto-travis-kalanick-civil-war (https://www.recode.net/2017/3/24/14737438/uber-self-driving-turmoil-otto-travis-kalanick-civil-war)

https://www.youtube.com/watch?v=HjeR13u74Mg (https://www.youtube.com/watch?v=HjeR13u74Mg)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: BravoV on March 22, 2018, 04:18:49 am
Trust the "Car" Autonomous System with your life ?  :scared:

Even "train" autonomous systems, which have been evolving for a century, still keep crashing (sometimes into each other) almost every week all over the world, including in so-called advanced developed countries. And by the way, a train runs on a "fixed" rail; it doesn't roam freely like a car on a road.

 -> List of rail accidents (2010–present) (https://en.wikipedia.org/wiki/List_of_rail_accidents_(2010%E2%80%93present)#2017)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 04:29:50 am
LIDAR footage from Google self driving car:

https://www.youtube.com/watch?v=tiwVMrTLUWg&feature=youtu.be&t=9m5s (https://www.youtube.com/watch?v=tiwVMrTLUWg&feature=youtu.be&t=9m5s)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: bitseeker on March 22, 2018, 04:37:27 am
Although we still need more data, specifically what the car did or didn't know about its surroundings during the time in question, after scrubbing through the exterior and interior footage, it seems highly unusual that the car behaved as though it picked up nothing at all.

The driver wasn't jostled by any change in G-forces during the moments prior to impact, there were no other distracting objects on the roadway, and the orientation of the pedestrian with bicycle made for a very large obstruction. If a fence had been placed across the lanes, it's as if the car would've just driven right through it as well.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 04:38:47 am
I personally do not think autonomous vehicles should hit the road this early; instead we should look at creating or dividing roads into separate driving territories. This will be very difficult to implement, though, given the sheer number of cars. I would hate to be the authority who has to decide this.

This is another reason why I don't think fully autonomous cars will be put into mainstream use any time soon. Some politician has to sign off on that, and they like covering their arse. Look at how they virtually never want to raise speed limits or remove speed humps etc. once they have lowered or installed them; no one wants to take responsibility.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 04:39:51 am
Although we still need more data, specifically what the car did or didn't know about its surroundings during the time in question, after scrubbing through the exterior and interior footage, it seems highly unusual that the car behaved as though it picked up nothing at all.
The driver wasn't jostled by any change in G-forces during the moments prior to impact, there were no other distracting objects on the roadway, and the orientation of the pedestrian with bicycle made for a very large obstruction. If a fence had been placed across the lanes, it's as if the car would've just driven right through it as well.

Yep, it's almost a perfect test case. Uber may be in trouble over this.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 04:51:10 am
Looking at it as a human viewer, there is still conjecture. I know that my dash-cam cannot pick up as much detail as my eyes, so the video footage is not entirely conclusive.

Indeed.  I tried pulling a still from that footage, just before the pedestrian was visible.  Unsurprisingly, there was not enough information to get any sense of there being something there.  Human vision is far superior to this footage.

If you will notice - even though the safety driver was obviously distracted by something in their hands - probably a mobile - they DID, in fact, glance up just before impact - as the direction of their attention was slightly to the left of the vehicle.  It was as though something caught their eye.  I might go so far as to say that if they had been driving normally, with full attention on the road, they might have seen enough to take some sort of evasive action.  I doubt it would have avoided the accident completely, but it may have turned out to be non-fatal.  Just my thoughts.

BUT, as Dave has said - that is not the point.  The point is the technology's ability to detect (and respond to) the hazard.

In this case, it is hard to see how the technology would have found the environment difficult.  It was open road with ample clearance and an unobstructed view.  It's a scenario I would have expected in the early days of machine vision, when simple recognition scenarios were being explored at varying velocities.


I am at a loss.


I can only think something really dumb has happened - like someone forgot to plug in a cable or there was a bad solder joint that gave way.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: tpowell1830 on March 22, 2018, 05:08:07 am
I have said many, many times that the technology is too immature to be viable. I live in a very high-traffic area, and as I drive down the freeway with 5 lanes full of traffic at road speeds (60-70 mph), I marvel at how all of the vehicles merge into other lanes and how vehicles at varying speeds do not crash into each other, because humans can anticipate and adjust. Even when there are traffic cones and construction, the number of fender benders is minimal compared to how many vehicles are on a given street. The statistics tell the story of vehicle fatalities, and, given the sheer number of vehicles on the road, the fatalities IMHO are extremely low.

I dare say that many may not grasp this concept, because in city centers like where I live, the traffic is horrifying for a driver unskilled in such conditions. The roads, streets and highways in the US are huge compared to most countries, so I can't even imagine how autonomous driving could work in countries where the roads and streets are tiny.

Although this film has been around a long time, when an autonomous car can do this, it might be ready for public release.

If you are a bit squeamish about fast driving, do not watch this film.

https://vimeo.com/34039780

Admittedly, the streets were fairly empty in this film.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Monkeh on March 22, 2018, 05:17:17 am
Even if a human driver could've seen her from further away, how much further?

As far as I'm aware, the US white lines are 10 feet long with 30-foot gaps - so a very generous estimate of range at the moment of visibility on the camera is about 75 feet - let's call it 25 metres. How far do you think the driver could've seen her? 30? 35?

It's a 45mph road. UK guidance on thinking + braking distance for 40mph is 36 metres. Okay, a lot of cars will brake better, and some drivers are faster. But this is also at night, in the dark, which will slow reactions further.

I'm pretty sure I'd have hit her.

I'm absolutely sure the car shouldn't have.
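The stopping-distance arithmetic above can be sketched with the standard reaction-plus-braking formula. The 0.67 s reaction time and ~6.6 m/s² deceleration below are assumptions, chosen to roughly reproduce the UK Highway Code figures:

```python
def stopping_distance(speed_mph, reaction_s=0.67, decel=6.6):
    """Reaction distance + braking distance, in metres (assumed figures)."""
    v = speed_mph * 0.44704            # mph -> m/s
    thinking = v * reaction_s          # distance covered before braking starts
    braking = v ** 2 / (2 * decel)     # from v^2 = 2*a*d
    return thinking + braking

print(round(stopping_distance(40), 1))  # ~36 m, matching the UK guidance
print(round(stopping_distance(45), 1))  # ~44 m for the 45 mph road in question
```

With those assumptions, the 45 mph stopping distance comes out well beyond the ~25 m of camera visibility, which is the point being made.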

given the sheer number of vehicles on the road, the fatalities, IMHO are extremely low.

IMHO the number of fatalities on your roads is distressingly high. You realise your roads kill more than three times as many people as ours, right? I saw more dangerous driving in three weeks in NC than I've seen in the last few years here. Hell, I saw three crashes in as many miles immediately after leaving the airport..
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 05:31:12 am
If you will notice - even though the safety driver was obviously distracted by something in their hands - probably a mobile - they DID, in fact, glance up just before impact - as the direction of their attention was slightly to the left of the vehicle. 

I'd just like to re-affirm that I was the one who synced up these videos, and really didn't have a solid reference for that. So there is no real evidence the driver noticed anything before the actual impact.

Quote
It was as though something caught their eye.  I might go so far as to say that if they had been driving normally, with full attention on the road, they might have seen enough to take some sort of evasive action.  I doubt it would have avoided the accident completely, but it may have turned out to be non-fatal.  Just my thoughts.

Possibly.

Quote
BUT, as Dave has said - that is not the point.  The point is the technology's ability to detect (and respond to) the hazard.

Indeed, and I don't think that any discussion should lead down that path.

Quote
I can only think something really dumb has happened - like someone forgot to plug in a cable or there was a bad solder joint that gave way.

Yes, that bike and person should have been a really good target for the LIDAR, and as a researcher pointed out in the video comments, such systems can resolve 2cm detail at 100m.
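A back-of-envelope check of that 2 cm at 100 m figure: it implies an angular resolution of roughly a hundredth of a degree, while a pedestrian-with-bicycle target (assumed here to be ~1.5 m wide, an illustrative figure) at the same range spans many times that:

```python
import math

# Angular resolution implied by "2 cm detail at 100 m" (small-angle approx.)
detail_m, range_m = 0.02, 100.0
angle_deg = math.degrees(detail_m / range_m)
print(f"{angle_deg:.4f} deg")           # ~0.0115 degrees

# A ~1.5 m wide pedestrian-plus-bike target at 100 m spans far more than that
target_deg = math.degrees(1.5 / range_m)
print(f"{target_deg:.2f} deg")          # ~0.86 deg, dozens of beam widths wide
```

In other words, at the quoted resolution the target would span dozens of laser returns, which is why it should have been an easy detection.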
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ataradov on March 22, 2018, 05:36:32 am
I'm absolutely sure the car shouldn't have.
And they won't, as they get better. If this had been a human-driver accident, we would not even have known about it. And in the same time span, human drivers have killed far more people than self-driving cars have.

Those cars are everywhere in the Phoenix area, run by many companies. It is not like this was some one-off test that went wrong; they have been driving there for a long time, so an accident like this is kind of surprising.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: pb2k on March 22, 2018, 05:41:57 am
The person crossing the road wins a Darwin award, full stop. The saving grace of this situation is we get to examine the logs to figure out what went wrong and tweak the algorithm and hardware to make it better tomorrow. I suspect there was still some sort of serious failure here, be it code or hardware, but we'll have to wait on the official report.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: taemun on March 22, 2018, 05:42:57 am
Would a LIDAR be able to 'see' the pedestrian's black jumper? I'm not sure if it would be reflective to IR, or if the diffuse/soft surface would reflect enough energy anyway. Regardless, I suppose you'd be able to 'see' a hole - a lack of other reflections from beyond that surface.

This does pose an interesting question - what standard should we be holding autonomous car systems to? Human level? The best possible, with the sensors available? No fatalities at all?

Clearly the Tempe police chief was holding it to a human standard when he said that Uber wasn't at fault.

How would you confirm "the best possible, with the sensors available"? That sort of firmware/software qualification sounds verging on impossible.

It won't happen, but I'd like to see a full sensor dump from Uber for this event.
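The "see a hole" idea above can be sketched in a toy form: the scan data here is invented for illustration, but a run of missing returns flanked by valid background returns is itself a detectable signature of a low-reflectivity obstruction:

```python
import math

# Invented example scan: 20 beams returning the road background at 60 m,
# with beams 8-11 giving no return (a dark, absorbing object in the way).
background = 60.0
scan = [background] * 20
for i in range(8, 12):
    scan[i] = math.inf                 # no return on these beams

def find_holes(scan, min_width=2):
    """Return (start, end) index spans of contiguous no-return beams."""
    holes, start = [], None
    for i, r in enumerate(scan):
        if math.isinf(r):
            start = i if start is None else start
        elif start is not None:
            if i - start >= min_width:
                holes.append((start, i - 1))
            start = None
    if start is not None and len(scan) - start >= min_width:
        holes.append((start, len(scan) - 1))
    return holes

print(find_holes(scan))                # [(8, 11)]
```

A real system works on full point clouds rather than a 1-D range list, but the principle is the same: absence of expected returns is information, not blindness.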
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 05:54:23 am
I'm absolutely sure the car shouldn't have.
And they won't, as they get better. If this had been a human-driver accident, we would not even have known about it. And in the same time span, human drivers have killed far more people than self-driving cars have.

Accident statistics are an entirely different argument to what we are having here.

Quote
Those cars are everywhere in Phoenix area, by many companies. It is not like some one-off test that went wrong, they have been driving there for along time. And this rate of accident is kind of surprising.

Because most human pedestrians are pretty good at avoiding these accidents to begin with. So given the relatively low percentage of autonomous cars on the road, the amount of time the driver uses that mode while not paying attention, and the humans doing the avoiding, it's not surprising that it hasn't happened much. But now it has happened, and it looks like the tech failed in an almost ideal practical test case, which raises serious questions.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: tpowell1830 on March 22, 2018, 05:56:43 am
LIDAR footage from Google self driving car:

https://www.youtube.com/watch?v=tiwVMrTLUWg&feature=youtu.be&t=9m5s (https://www.youtube.com/watch?v=tiwVMrTLUWg&feature=youtu.be&t=9m5s)

The little Google cars are driving in city street conditions at under 30 mph; in highway/freeway speed conditions, no. In my mind, cars could safely drive at high speeds (100 mph or more) on open highways, but under normal large-city traffic conditions, again, no. Vehicles will need to be able to communicate with each other from a distance in order to be safe, and unless we put communication devices on foot/bicycle/skateboard traffic, autonomous vehicles will not work.

The examples given in the TED video are a tiny tip of the iceberg for problems to overcome on roadways.

With all of that said, when autonomous driverless vehicles were first proposed/discussed, my thought at the time was, again, that there were too many challenges to overcome for them to be safe.

The only way I thought this could become a reality is to create a train-type roadway with redundant communication posts set up every few hundred meters, where all vehicles communicate with each other so that the onboard computer can fall in line with other vehicles moving in the same direction, at the same speed, like a train. The 'train' of vehicles could then achieve high speeds, and the onboard, pre-programmed destination computer could coordinate with other vehicles in the 'train' to allow the vehicle to navigate to its destination. As one commenter above said, this roadway would be strictly for autonomous use, and once the vehicle breaks out of this autonomous roadway, the human driver would need to take over.

The logistics for this type of roadway/vehicle are also a very large problem. Vehicle maintenance would have to be regulated and would be so expensive that it becomes non-viable. The roadway itself would need a computer every few hundred meters as a traffic regulator to communicate with all the vehicles on its roads. This means tax dollars, which escalates the cost of the system, and there would need to be certified safety technicians/engineers to maintain all of this equipment, which escalates the cost further.

In my humble opinion, the autonomous vehicle is too costly and premature. In fact, unless there are tremendous breakthroughs in technology that will lower costs and raise the AI to much higher levels, autonomous vehicles are not ready for public use.

The airplane industry in its beginnings struggled with this problem, and eventually airplanes became safe enough for transporting people. That took 50 years to accomplish before planes were carrying large numbers of travellers. Planes today are much safer, statistically, for travel than any ground vehicle (not sure about trains). The problem with airplanes is that when one falls out of the sky, hundreds of people die in one fell swoop, which makes flying seem unsafe to some, but when you look at the numbers, planes win the safety race, hands down. In 50 years, some may look back to now and marvel at the crude methods that we are currently using to navigate our roads, but much more work will have to be done in order to get to that point.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 05:57:34 am
This does pose an interesting question - what standard should we be holding autonomous car systems to? Human level? The best possible, with the sensors available? No fatalities at all?

We expect better than humans when we take humans out of the loop. All this extra tech means that it should perform better than a human in the areas where the sensors excel against a human, e.g. at night with LIDAR.

Quote
It won't happen, but I'd like to see a full sensor dump from Uber for this event.

I suspect that may never be released unless there is some legal requirement to do so. Uber will probably claim it's proprietary data and will only release it to a coronial inquest etc.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ataradov on March 22, 2018, 06:02:47 am
What is the point you want to make by saying this?
That it is not a huge deal that a self-driving car had an accident. It will be investigated and resolved.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ataradov on March 22, 2018, 06:03:32 am
We expect better than humans when we take humans out of the loop. All this extra tech means that it should perform better than a human in the areas where the sensors excel against a human, e.g. at night with LIDAR.
Even if the accident rate is exactly the same, if we have gained the ability to do other stuff while commuting, it is a win overall.

And things will only get better from there.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Xenon Photon on March 22, 2018, 06:07:38 am
So sad to hear about this accident.
Huge research and industry efforts are going into autonomous vehicles, with a lot of hype everywhere!

AI research is great but unfortunately not perfect (yet!). State-of-the-art object detection and semantic segmentation networks still miss some obstacles, even across consecutive frames, and relying on tracking fails in some of these cases too. The network might have been confused by the cyclist-and-pedestrian instance in this unusual orientation; probably there was not enough training data - if any - covering a similar situation.
That is to say nothing of existing hardware limitations, for example the latency of data buffering and encoding from camera/LIDAR to the GPUs.

However, the safety and behavior planning algorithms should have detected that and at the very least issued a warning to the driver. Did this happen?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: tpowell1830 on March 22, 2018, 06:15:29 am
Even if a human driver could've seen her from further away, how much further?

As far as I'm aware, the US white lines are 10 feet long with 30-foot gaps - so a very generous estimate of range at the moment of visibility on the camera is about 75 feet - let's call it 25 metres. How far do you think the driver could've seen her? 30? 35?

It's a 45mph road. UK guidance on thinking + braking distance for 40mph is 36 metres. Okay, a lot of cars will brake better, and some drivers are faster. But this is also at night, in the dark, which will slow reactions further.

I'm pretty sure I'd have hit her.

I'm absolutely sure the car shouldn't have.

given the sheer number of vehicles on the road, the fatalities, IMHO are extremely low.

IMHO the number of fatalities on your roads is distressingly high. You realise your roads kill more than three times as many people as ours, right? I saw more dangerous driving in three weeks in NC than I've seen in the last few years here. Hell, I saw three crashes in as many miles immediately after leaving the airport..

I am not sure that you can compare our roads and traffic to your little country, and I am not sure about your point. I was talking about my experience on my roads in my area. If you want to make this a country comparison, get your facts together and make your case. As I commented, my area is a very high-traffic area and there are fender benders all over, but rarely do I ever see a fatal accident; for that matter, I have seen but a few fatal vehicle accidents in my 65 years. Yet I encounter thousands of vehicles on my short 7-mile drive to my office every day, which is why I say that it is in my experience. Please don't make this about country comparisons.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: lgarbo on March 22, 2018, 06:32:13 am
I've been lucky enough to hear an in-depth talk about the physics of the specific Waymo/Velodyne LIDAR. I also had an opportunity to talk to an Uber ATG (their AV wing) engineer at a career fair. Going from memory, I think some of the fused lidar and radar sensors had a range of at least ~50m even in poor conditions (i.e. fog or rain).

I'd wager this to be a software problem, not a sensing/hardware problem. As someone in the video comments mentioned, it's likely Uber has fewer layers of failsafes than other companies, and instead relies on a small number of high-level programs running various SLAM and signal-processing algorithms that handle all situations, emergency or not. This is in contrast to having a dedicated emergency braking system that looks for a precise but limited set of circumstances. Based on my (very) limited knowledge of what Uber has been doing, it would not surprise me one bit to find out that in a rush to market they had put all their eggs in a single control system.
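The "dedicated emergency braking" contrast above can be sketched as a simple, independent time-to-collision check on raw range data, separate from the main planner. The threshold and interface here are invented for illustration; real AEB systems are far more involved:

```python
# Hypothetical last-ditch check: brake hard if the time-to-collision to the
# closest object ahead drops below a fixed threshold, regardless of what the
# main planner thinks. Threshold value is an assumption for illustration.
TTC_BRAKE_S = 1.5

def emergency_brake_needed(range_m, closing_speed_ms):
    """True if time-to-collision is below the hard-braking threshold."""
    if closing_speed_ms <= 0:          # object stationary or receding
        return False
    ttc = range_m / closing_speed_ms
    return ttc < TTC_BRAKE_S

# Object 25 m ahead, closing at 20 m/s (~45 mph): TTC = 1.25 s -> brake
print(emergency_brake_needed(25.0, 20.0))   # True
print(emergency_brake_needed(80.0, 20.0))   # False (TTC = 4 s)
```

The design point is that a check this simple has very few ways to fail, which is exactly why layering it under the complex planner adds safety.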

Obviously this is all speculation, but I will be looking out for the reports from the NTSB (the US federal investigators known mostly for excellent air-crash investigations). I strongly believe we have 100% of the tech needed for safe self-driving, but there is a huge rush to get it to market, and I think that is leading to cut corners.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: iaso on March 22, 2018, 06:32:42 am
I work at a large T1 automotive supplier, and as expected this is a hot topic at the water cooler. We are quite experienced with self-driving cars; most of our management drives a Tesla, and our pool cars are also Teslas.

We have some very anti-self driving car employees here in the office and this was the thing they were waiting for to push their archaic agenda. "A human would not have made this error!",  "If a person was driving that poor person would be alive now!".

I took the video that was released and just asked them before playing the video to say "BRAKE" whenever they saw the person. I might have converted a few people.

Sad as it is, this incident will help shape legal conundrums that we have been talking about since self-driving cars were first envisioned. Hopefully, this starts shaping a legal framework for liability opening the way for more innovation.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: orin on March 22, 2018, 06:35:24 am
That view of stepping out from the shadows may be misleading. A human may have had better vision than the vision shown by the camera footage.


I really doubt it.

To put things in perspective, I rode my bicycle to/from work today; 13 miles each way.  When the weather is good, I ride maybe two days out of five.  I see more poor driving than you'd believe, both driving and bicycling, but I really cannot fault the driving here.  There are idiots in cars not paying attention and there are idiots on bicycles.

The big problem here for a human driver is the crossing from deep shadow to the area illuminated by the street light to another area of deep shadow.  The same happens with dappled shadows from trees in the summer and I've witnessed a bicycle rider go head on into another on a trail in such lighting conditions... at a speed much lower than we have here.  You should really slow down when there are such abrupt changes in lighting, but no-one ever does and I'd get harassed if I did.

Certainly in WA, even if a crosswalk was involved, by law, you have to give traffic time to stop before crossing.  You can't just cross the street and say to hell with the cars.  For sure, as a pedestrian, you have to be aggressive at stepping out at times or they'll never stop for you, but you always have to be prepared to stop and let a car go by once you do enter a crosswalk.  But in this case, there was no crosswalk so even that doesn't apply.

Now Dave's argument is that the sensors on the car should have done better than the average driver's vision and I think that has to be true before we let autonomous cars loose on their own.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 06:52:16 am
If you will notice - even though the safety driver was obviously distracted by something in their hands - probably a mobile - they DID, in fact, glance up just before impact - as the direction of their attention was slightly to the left of the vehicle. 

I'd just like to re-affirm that I was the one who synced up these videos, and really didn't have a solid reference for that. So there is no real evidence the driver noticed anything before the actual impact.

My comment had nothing to do with your syncing of the two videos.  It is purely on my observations of the face of the "safety driver".  They look to the left of the vehicle and there is only something of interest in that direction a second before the impact.
(https://www.eevblog.com/forum/blog/eevblog-1066-uber-autonomous-car-fatality-how/?action=dlattach;attach=405850;image)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: iaso on March 22, 2018, 07:09:52 am
If you will notice - even though the safety driver was obviously distracted by something in their hands - probably a mobile - they DID, in fact, glance up just before impact - as the direction of their attention was slightly to the left of the vehicle. 

I'd just like to re-affirm that I was the one who synced up these videos, and really didn't have a solid reference for that. So there is no real evidence the driver noticed anything before the actual impact.

My comment had nothing to do with your syncing of the two videos.  It is purely on my observations of the face of the "safety driver".  They look to the left of the vehicle and there is only something of interest in that direction a second before the impact.
(https://www.eevblog.com/forum/blog/eevblog-1066-uber-autonomous-car-fatality-how/?action=dlattach;attach=405850;image)
Imagine this is the picture of you that will be seen by millions. Yowza!
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: itdontgo on March 22, 2018, 07:27:08 am
How?  Seriously? Cars are the world's greatest violent killer. War is responsible for about 3% of violent deaths, homicide about 11% and the products of the automobile industry; 24%.

Every day the products of that industry and their users violently kill 3500 people. Remember that when you're crying over the next tragic news event.  What you do will kill more people this century than all the wars of the previous century managed.

If you have massive fast cars moving at speed by where there are people you will kill people. You can have as many sensors and laws as you like. People will die. When it's someone like you doing the killing as it is with the 3500 daily fatalities and 50,000 severe injuries you won't give a crap. It will be a tragic accident which could not possibly be avoided. You won't be able to imagine a way in which that life could have been saved apart from perhaps the moron who got killed should not have been walking around by a road.  This case is something out of the ordinary so all of a sudden there is a story.

We've become accustomed to the speed cars go and have settled on it. It's not been decided upon by research or evidence it's what people will accept so they can have the cars they want. Our society lives in a fantasy world when it comes to cars. They poison everyone, they violently injure people, they wreck the planet and you don't care because you're in love with them. You're as much a True Believer in cars as people are in religion. I'm sure you're objecting to what I'm saying despite the evidence in the same way a religious nut cannot hear criticism of their beloved religion. But these are facts. The cars we accept as a society being used by people will kill in vast numbers. And we put having a bit of go in our cars above people's lives.

This rant is not aimed at any one individual by the way.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 07:36:06 am
How?  Seriously? Cars are the world's greatest violent killer. War is responsible for about 3% of violent deaths, homicide about 11% and the products of the automobile industry; 24%.

Every day the products of that industry and their users violently kill 3500 people. Remember that when you're crying over the next tragic news event.  What you do will kill more people this century than all the wars of the previous century managed.

If you have massive fast cars moving at speed by where there are people you will kill people. You can have as many sensors and laws as you like. People will die. When it's someone like you doing the killing as it is with the 3500 daily fatalities and 50,000 severe injuries you won't give a crap. It will be a tragic accident which could not possibly be avoided. You won't be able to imagine a way in which that life could have been saved apart from perhaps the moron who got killed should not have been walking around by a road.  This case is something out of the ordinary so all of a sudden there is a story.

We've become accustomed to the speed cars go and have settled on it. It's not been decided upon by research or evidence it's what people will accept so they can have the cars they want. Our society lives in a fantasy world when it comes to cars. They poison everyone, they violently injure people, they wreck the planet and you don't care because you're in love with them. You're as much a True Believer in cars as people are in religion. I'm sure you're objecting to what I'm saying despite the evidence in the same way a religious nut cannot hear criticism of their beloved religion. But these are facts. The cars we accept as a society being used by people will kill in vast numbers. And we put having a bit of go in our cars above people's lives.

This rant is not aimed at any one individual by the way.

You missed the entire point of this video.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 07:39:04 am
That view of stepping out from the shadows may be misleading. A human may have had better vision than the vision shown by the camera footage.

Which is why I give no value judgement on that and do not hold him responsible for the impact in that respect.

Quote
This could easily have been a non fatal accident. I doubt it could have been avoided entirely by a human driver but I also don't think the camera footage is clear enough to be sure of that.

And that's why it's not the point of the video, and really shouldn't be discussed here.
This is a video about autonomous car safety and sensor tech etc.; it is not about the kind of car accident that happens a thousand times a day.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 07:46:01 am
This is a video about autonomous car safety and sensor tech etc.; it is not about the kind of car accident that happens a thousand times a day.

Exactly.

Our interest here is understanding why the tech didn't respond.  That's it.

And further discussion will be about the tech ... nothing else.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 07:46:26 am
We expect better than humans when we take humans out of the loop. All this extra tech means that it should perform better than a human in the areas that the sensor excels against a human. e.g. at night with LIDAR.
Even if accident rate is exactly the same, but we have gained ability to do other stuff while commuting, it is a win overall.

Unless you are the person that gets hit because of a LIDAR system that didn't work properly when it could have worked properly. And that that same issue could effect other people in future because it's a fixable problem. In that case you might start to re-think your position.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 07:53:47 am
Indeed.  How absurd would it be to accept a compromised technology simply because of something fixable?

The range of what is "fixable" is a broad one, true, and some things might be rather challenging - but what if the fix is a simple solder joint?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: itdontgo on March 22, 2018, 08:17:50 am
"Uber Autonomous Car Fatality - How"

I answered it. Cars are too fast to be used by people. You cannot help but kill people if the cars go that speed. There is no solution but slower cars.

Engineers think they can engineer away road fatalities without slowing down cars. But you can't because you're engineering the car and the road and not the people who are fixed. They're there and they're easily killable at over 20mph.

This I admit is a diversion from your video but I think what I am saying is entirely relevant
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: CNe7532294 on March 22, 2018, 08:19:59 am
Honestly, technology no matter how good isn't a substitute for brains and actions. |O :palm: It a tool. A tool to be used to enhance human activity. Doesn't matter if things are visually terrible. A smart and alert human being would slow down and be cautious in darken areas. If you need guidance on self driving cars then look no further than what is required of commercial airlines. Making the driver a passenger is a terrible idea. :palm: Hopefully lessons will be learned. Sad its going to be the hard way.

(https://i.ytimg.com/vi/u9RNLaajCes/maxresdefault.jpg)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 08:31:29 am
Doesn't matter if things are visually terrible. A smart and alert human being would slow down and be cautious in darken areas.
That's a rather interesting statement.  More holes than a cheese grater, but interesting (or should I say naive?).

Quote
If you need guidance on self driving cars then look no further than what is required of commercial airlines.
Really?  TCAS would go apeshit!!
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: EEVblog on March 22, 2018, 08:31:58 am
"Uber Autonomous Car Fatality - How"
I answered it.

No you didn't, because you refuse to acknowledge what the video and question is actually about.

Quote
This I admit is a diversion from your video but I think what I am saying is entirely relevant

And this is not the place to discuss that stuff.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: HKJ on March 22, 2018, 09:05:25 am
It do not look like she goes out in front of the car, more like she already is there, but because the head light is very limited in range, she is first seen to late.
This may mean the car is driving too fast compare to the headlight range (I know that is very common), but I do not see how lidar or radar could miss her.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: CNe7532294 on March 22, 2018, 09:27:36 am
Doesn't matter if things are visually terrible. A smart and alert human being would slow down and be cautious in darken areas.
That's a rather interesting statement.  More holes than a cheese grater, but interesting (or should I say naive?).

Quote
If you need guidance on self driving cars then look no further than what is required of commercial airlines.
Really?  TCAS would go apeshit!!

Please do point out the cheese holes.

TCAS is to avoid plane collisions. Not detect other objects in the way (ie. birds, people, truck, cars, etc.). Its not even turned on yet while taxiing from the gate to the runaway.

Overall you misunderstood my post. I'm not for shutting down the tech. I laugh at anyone thinking this is the end of automated driving. However, I'm against shutting down the extra layer of protection called human action. A quick search on piloting will turn up pilots work in tandem with the autopilot. Tech is not there to make people lazy. As I said, only there to make things easier. There is a difference between those things. Also if you notice, planes try to have 2 or more things for a reason (ie. two engines, two pilots along with the autopilot, two sets of gauges, 2 radio sets, 2-3 hydraulic systems, 2-3 fuel lines with crossfeed valves, etc.). Its called being redundant. It works. :palm:

I have no proof in saying this exact part here but there is no doubt in my mind yet uber reps told the driver everything is fine. This car drives itself. Causes her to lower her guard. Pays attention to the phone. We all know the rest. :palm: If you're by yourself and need to sleep, drunk, or pay more attention to your phone, take a real taxi or bus please. Basically saying err on the side of caution. This is yet another case to build on this wise saying. Certainly won't be the last. :rant:
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: firewalker on March 22, 2018, 10:00:53 am
Companies on the autonomous car industry exchange infos for their systems? Should they forced to do so?

Alexander.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 10:53:49 am
Overall you misunderstood my post. I'm not for shutting down the tech. I laugh at anyone thinking this is the end of automated driving. However, I'm against shutting down the extra layer of protection called human action. A quick search on piloting will turn up pilots work in tandem with the autopilot. Tech is not there to make people lazy. As I said, only there to make things easier. There is a difference between those things. Also if you notice, planes try to have 2 or more things for a reason (ie. two engines, two pilots along with the autopilot, two sets of gauges, 2 radio sets, 2-3 hydraulic systems, 2-3 fuel lines with crossfeed valves, etc.). Its called being redundant. It works. :palm:

I have no proof in saying this exact part here but there is no doubt in my mind yet uber reps told the driver everything is fine. This car drives itself. Causes her to lower her guard. Pays attention to the phone. We all know the rest. :palm: If you're by yourself and need to sleep, drunk, or pay more attention to your phone, take a real taxi or bus please. Basically saying err on the side of caution. This is yet another case to build on this wise saying. Certainly won't be the last. :rant:

To put it simply, your implication is impractical.  Put anyone in a car that drives itself and after a while, EVERYbody is going to drop their attention at times.  AS IT IS, we can't even be assured that people who have full driving responsibilities will pay attention.  Just look at the issues arising from mobile phones.

Anyway, that's not the topic of this thread.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: G7PSK on March 22, 2018, 10:56:40 am
What I would like to know is for how long the car had been driving autonomously, was the driver in the act of handing over to the car or taking over from it. Also how does roadside clutter affect the lidar and radar, it was not so long ago that military airborne radar systems were confusing things like helicopters with trucks and even trees. That cyclist was pushing the bike across the road so would have been on the open road for some seconds, certainly should have been enough time for the lidar and or the radar to pick up.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: AF6LJ on March 22, 2018, 11:42:49 am
The car didn't even slow down.
This technology has much further to go before it is ready for prime time.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 11:56:28 am
I think it's very complicated, I don't think we can say "It should have seen that on the LIDAR!", or "Shadows don't affect robots!".

LIDAR can see shapes but it can't see things like white lines painted on the road, you need visible light for that. The vision system has to be a complex composition of all the inputs. Shadows will definitely cause problems at night.

It's a two lane road, and an object in the other lane as a car is driving along isn't unusual. I'm sure the LIDAR saw it but how is the car supposed to know the object is about to move in front of an approaching car? It makes no sense. Somewhere in the software there has to be some assumptions that other road users simply don't do that. The car would be constantly braking otherwise and could never overtake anything.

The real question is: How could the pedestrian not see the approaching car? They just walked straight across the road without even looking. No human would have expected that either.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 12:03:37 pm
The car didn't even slow down.

Should it have? Should it brake every time there's an object in the other lane?

This technology has much further to go before it is ready for prime time.

I disagree. The time to switch over is when cars are safer than humans, not when cars are 100% perfect (which they can never be).

The question to ask is therefore: Is it likely a human have done better in that situation? I say "no".

https://en.wikipedia.org/wiki/Nirvana_fallacy
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: AF6LJ on March 22, 2018, 12:08:31 pm
I agree on the LIDAR issue....
I would also add there is so much input a human driver receives under the same circumstances, I find myself wondering if there is enough parallel processing taking place for the car to react as fast as a human.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: AF6LJ on March 22, 2018, 12:12:18 pm
The car didn't even slow down.

Should it have? Should it brake every time there's an object in the other lane?

This technology has much further to go before it is ready for prime time.

I disagree. The time to switch over is when cars are safer than humans, not when cars are 100% perfect (which they can never be).

The question to ask is therefore: Is it likely a human have done better in that situation? I say "no".

https://en.wikipedia.org/wiki/Nirvana_fallacy

The car should have seen the pedestrian in the shadows.
The car should have calculated the path the pedestrian was taking.
The car should have  slowed down.

Granted...
The Pedestrian was jay walking...
In The People's Republic of Kalifornia you must be within 30' of an intersection (about 9 meters) to be considered to be making a lawful street crossing. The pedestrian should have walked to the intersection, even though it was 500' away.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 12:15:32 pm
The pedestrian should have walked to the intersection, even though it was 500' away.

If only they had a bicycle...

I find myself wondering if there is enough parallel processing taking place for the car to react as fast as a human.

Humans are fast?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 12:26:56 pm
The car should have seen the pedestrian in the shadows.

I'm sure it did see the pedestrian.

(and that a human driver wouldn't have)

The car should have calculated the path the pedestrian was taking.
The car should have  slowed down.

That part I'm not so sure about.

It's easy to say in when you only look at this one data point in isolation.

In general: Not so much. I bet you can find examples where this exact set of inputs is perfectly Ok (right up until it's too late).

eg. How do you propose cars should deal with motorcyclists in the other lane? Hitting the brakes every time they see one move slightly sideways? That would be dangerous.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: orion242 on March 22, 2018, 12:35:08 pm
So does Uber have a case to go back on the judgement over stealing Google's AV tech?  Certainly it wasn't as good and valuable as G made it out to be LOL.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: AF6LJ on March 22, 2018, 12:41:29 pm
The car should have seen the pedestrian in the shadows.

I'm sure it did see the pedestrian.

(and that a human driver wouldn't have)

The car should have calculated the path the pedestrian was taking.
The car should have  slowed down.

That part I'm not so sure about.

It's easy to say in when you only look at this one data point in isolation.

In general: Not so much. I bet you can find examples where this exact set of inputs is perfectly Ok (right up until it's too late).

eg. How do you propose cars should deal with wobbly motorcyclists in the other lane? Hitting the brakes every time they see one? That would be dangerous.

That video is far from one data point in isolation; that video is evidence the autonomous driving system is far from ready. This proves to me that UBER has no idea what the hell they are doing, no software QC, no dynamic testing with actual moving targets that need to be tracked.
While that video may not be all you need to know the system is not anywhere near ready for prime time, the video is proof Uber along with what I have seen come out of Google are nowhere near ready to deploy these cars on the street.

Don't even get me started on how stupidly foolish it is to solely rely on GPS as a means of navigation.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 01:10:11 pm
The car should have seen the pedestrian in the shadows.

I'm sure it did see the pedestrian.
One would hope the technology should have - but that's not been demonstrated as yet.

Quote
(and that a human driver wouldn't have)
I'm not so sure about that.

Quote
The car should have calculated the path the pedestrian was taking.
The car should have  slowed down.

That part I'm not so sure about.
I am.

Quote
It's easy to say in when you only look at this one data point in isolation.
Careful how you use the term "data point".  I interpret it as you meaning "this one example".

Quote
In general: Not so much. I bet you can find examples where this exact set of inputs is perfectly Ok (right up until it's too late).
This is where I would ask for examples - but I cannot see you succeeding.

The car knows its speed and direction.  It detects the location of the potential hazard at a specific point in time.  It should also detect the approximate size.  A fraction of a second later, these measurements are repeated.  The change in position can be used to calculate a movement vector, the paths of the potential hazard and the vehicle can be assessed and collision risk determined.  Subsequent measurements can be used to confirm this - or show changes to that risk.

These - and a lot of other measurements - are captured and processed in real time ... and if you think computing power is a problem, then you really need to update your understanding.

Bottom line - the intersecting paths of the pedestrian and the vehicle should have been easily determined.

Quote
eg. How do you propose cars should deal with motorcyclists in the other lane? Hitting the brakes every time they see one move slightly sideways? That would be dangerous.
Using exactly the same process as I outlined above.  A bike wandering towards a lane is not a risk.  Crossing into might be.

The same for a bike zipping through traffic, ducking in and out of lanes.  As long as the bike's vector does not indicate a collision course with the vehicle, no action needs to be taken.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 01:14:38 pm
Google are nowhere near ready to deploy these cars on the street.

Not even Google says they are. Google is currently aiming at "2020".

no software QC, no dynamic testing with actual moving targets that need to be tracked.

Really?  :palm:
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Monkeh on March 22, 2018, 01:26:18 pm
I am not sure that you can compare our roads and traffic to your little country

Uh, why not? They're roads, with supposedly educated adult drivers using them. Ours kill fewer people per capita and per car.. and per mile, too. Why can't we compare them, because we're too 'little'?

Quote
and I am not sure about your point

I was disagreeing with your opinion that there are few fatalities. There are many more than ours and ours are higher than they need to be. You seem to have taken this personally.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 22, 2018, 01:40:41 pm
Google are nowhere near ready to deploy these cars on the street.
Not even Google says they are. Google is currently aiming at "2020".
So you consider 21 months to be long term?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 01:45:22 pm
In general: Not so much. I bet you can find examples where this exact set of inputs is perfectly Ok (right up until it's too late).
This is where I would ask for examples - but I cannot see you succeeding.

That's because you're only thinking of ordinary examples of everyday traffic. You need to think of really mad things, just like what happened here (seriously, what was going through her head?)

My thoughts on what happened are that somewhere in the software there has to be a variable:

float probabilityOfPedestrianWalkingAcrossTheRoadWithoutLooking = 0.5;

In the city you set it to quite a high value.

On a highway/freeway/motorway? A lower value - you don't dozens of cars to be emergency braking from 55mph every time a shrub grows towards the road.

Bottom line:

This sort of thing was bound to happen. This is very complex software, it was 100% likely that there would be a death (or three) through software bugs.

Let's not kid ourselves though: The road death rate is going to drop sharply in a few years time and it will be thanks to these cars.

(no, it won't ever be 0%, but that's no reason not to do it (https://en.wikipedia.org/wiki/Nirvana_fallacy)).

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 22, 2018, 01:46:19 pm
In the video the cyclist appears from the darkness at the very last moment, as though the dipped headlights were dipped way too far. If a human were driving on that dark road they would have had their high beams on, and it looks like they would have seen the cyclist reasonably early, when it was still realistic to stop. I don't think the "human driver would have hit this cyclist anyway" argument, which I have seen, holds water.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 01:51:07 pm
Google are nowhere near ready to deploy these cars on the street.
Not even Google says they are. Google is currently aiming at "2020".
So you consider 21 months to be long term?

a) Nobody said it was a fixed date.

b) The testing is increasing exponentially:
(https://www.eevblog.com/forum/blog/eevblog-1066-uber-autonomous-car-fatality-how/?action=dlattach;attach=405917;image)

c) The simulated testing is increasing at an even faster rate than that! Google drove 2.7 billion miles in simulators last year.

(Simulators are where people dream up scenarios like this one and feed the input to virtual cars)

d) Maybe they meant December 2020, that's nearly 33 months!!
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: mdijkens on March 22, 2018, 01:52:28 pm
The fact that this DASHCAM video shows such short headlight-beams is proof that this camera has auto-adjusted the exposure for that.
The headlight-beams of an XC90 are much further but not visible here because of that auto-exposure.

It makes me feel much more confident that a person would have seen much more, much sooner than this video shows...
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 01:55:05 pm
It makes me feel much more confident that a person would have seen much more, much sooner than this video shows...

...assuming they weren't looking down at their phone or something.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: AF6LJ on March 22, 2018, 02:10:54 pm
In the video the cyclist appears from the darkness at the very last moment, as though the dipped headlights were dipped way too far. If a human were driving on that dark road they would have had their high beams on, and it looks like they would have seen the cyclist reasonably early, when it was still realistic to stop. I don't think the "human driver would have hit this cyclist anyway" argument, which I have seen, holds water.

Agreed...
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Monkeh on March 22, 2018, 02:20:47 pm
The video is deceiving.

It's a lit road in a semi-built area immediately approaching a 4-way light-controlled junction. Are you sure you'd be using full beam?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 22, 2018, 02:32:28 pm
The video is deceiving.

It's a lit road in a semi-built area immediately approaching a 4-way light-controlled junction. Are you sure you'd be using full beam?
Someone else said the dynamic range of the camera might be distorting the situation, and making a well lit spot look dark. Whether lit by the street lighting or lit by the car's headlamps, a human driver should have been able to see the cyclist long before we see them in the video. This is a perfectly normal night driving with careless cross traffic scenario, and we rarely have a problem with it.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: AF6LJ on March 22, 2018, 02:34:50 pm
The video is deceiving.

It's a lit road in a semi-built area immediately approaching a 4-way light-controlled junction. Are you sure you'd be using full beam?
Yes.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: CNe7532294 on March 22, 2018, 02:59:32 pm
Overall you misunderstood my post. I'm not for shutting down the tech. I laugh at anyone thinking this is the end of automated driving. However, I'm against shutting down the extra layer of protection called human action. A quick search on piloting will turn up pilots work in tandem with the autopilot. Tech is not there to make people lazy. As I said, only there to make things easier. There is a difference between those things. Also if you notice, planes try to have 2 or more things for a reason (ie. two engines, two pilots along with the autopilot, two sets of gauges, 2 radio sets, 2-3 hydraulic systems, 2-3 fuel lines with crossfeed valves, etc.). Its called being redundant. It works. :palm:

I have no proof in saying this exact part here but there is no doubt in my mind yet uber reps told the driver everything is fine. This car drives itself. Causes her to lower her guard. Pays attention to the phone. We all know the rest. :palm: If you're by yourself and need to sleep, drunk, or pay more attention to your phone, take a real taxi or bus please. Basically saying err on the side of caution. This is yet another case to build on this wise saying. Certainly won't be the last. :rant:

To put it simply, your implication is impractical.  Put anyone in a car that drives itself and after a while, EVERYbody is going to drop their attention at times.  AS IT IS, we can't even be assured that people who have full driving responsibilities will pay attention.  Just look at the issues arising from mobile phones.

Anyway, that's not the topic of this thread.

Full automation of anything when lives are on the line is impractical in and of itself. Actually let me rephrase this. Having the computer bare all the work load is impractical. I also say again, having the best tools, like a computer, doesn't excuse one from using their brain. Dropping attention is negligence at its finest. :palm:

Automation will continue to exist just not in full form. At least not until we develop a system that separates people from vehicles. Actually thats exactly why we have trains. We also have ABS. But these are besides the point. Onto the my main point. Again. |O

As for topic, this is highly related plus Dave clearly titled it "How?" not specifically "How did those fancy sensors fail?". |O It would be very unwise not to look at something that already has experience before. Its disappointing that you would ignore this. You might as well just ignore what the NTSB, BEA, AAIB, ATSB and other transportation boards has to say. As the saying goes, "ignore history, history repeats". Again, the airline industry already has the answers working with flight computers.

I highly encourage you to talk to a Quantas pilot, flight engineer, or mechanic at the very least. I also would like to see Dave follow thru on this topic by having a discussion with Quantas. If you can't meet up with one I suggest you at least look and compare air disaster cases. I'll give you a start. Air France Flight 447. This is just one of many where either the sensor or flight computer failed while the pilot put in full trust into a faulty auto system. End result, many people died.

As for solutions and recommendations, I have one. Make not only a detection system for the computer but an independent detection system for the driver. Have it vibrate (like a stick shaker during a stall) or alarm (like TCAS). Put it on the phone even. Have it activate when the cross-sectional heat of a poodle comes across a separate FLIR camera feed. At least make the driver aware. It wouldn't stop the accident but she could have swerved to the left or slowed down. Could have left the victim live with injuries instead of I assume flying several yards along with her bike. Redundancy works.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Decoman on March 22, 2018, 03:09:54 pm
The driver of the car (heh) is OBVIOUSLY to blame for hitting this pedestrian walking across the road. It really does look like the car had NOT long range lights enabled (unsure what this is called in English).

Here in Norway, it is required that you expertly keep switching between short range lights and long range lights to actually get to see where you are driving at night (because you must not end up blinding other cars passing you in the other lane, opposite of your direction).

The type of driving seen in the footage, seem to be as reckless as the hilarious car accidents in Hollywood movies where cars collide into things at speed, or failing to lower speed when spotting a potential obstacle ahead.

This death is imo something that would have been prevented by a moderately skilled driver.

When learning to drive a car, it is taught that one must always adjust the speed to suit the environment. Obviously, no driver should ever be allowed to drive at speed across a dark road:
1) When being unable to see well ahead of the car
2) When being unable to come to a stop within a reasonable range relative to the conditions of the environment

This is imo obviously the driver's fault. This kind of collision is precisely what is taught as what will happen if you are at speed in a car, while having limited view ahead. If you can't see far ahead (and also the entire width of the road), you must drive more slowly to suit the conditions of the environment. Anything else is just crazy, if you are not willing to maintain the minimum amount of control over a vehicle that is otherwise required to safely and responsibly operate a motorized vehicle.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Kalvin on March 22, 2018, 03:19:00 pm
It would be most interesting to see the sensor data from the car. What did the car detect? Hopefully they will release the data at some point, not just this video footage. Without sensor data this video is quite useless.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Decoman on March 22, 2018, 03:20:29 pm
Without sensor data this video is quite useless.

The video shows the car to be driving on a road having no visibility ahead, and with no long range lights enabled, can we agree on this?
I also wonder if you drive a car yourself. :| Bonus question. I can drive most large vehicles myself.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: mdijkens on March 22, 2018, 03:22:50 pm
The video shows the car to be driving on a road having no visibility ahead, and with no long range lights enabled, can we agree on this?
I also wonder if you drive a car yourself. :| Bonus question.

No  8)

The fact that this DASHCAM video shows such short headlight-beams is proof that this camera has auto-adjusted the exposure for that.
The headlight-beams of an XC90 are much further but not visible here because of that auto-exposure.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 22, 2018, 03:25:00 pm
Without sensor data this video is quite useless.

The video shows the car to be driving on a road having no visibility ahead, and with no long range lights enabled, can we agree on this?
I also wonder if you drive a car yourself. :| Bonus question.
If you've read through the comments the answer is clearly no, we can't agree on that. The darkness may be entirely due to the exposure setting on the camera, while the road was well lit with street lamps.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Decoman on March 22, 2018, 03:28:02 pm
I wonder if these cars have a diagnostic system running, because if a sensor fails gracefully and stealthily, then that would be bad I think.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: mdijkens on March 22, 2018, 03:29:28 pm
It might even be that there was so much light that any human driver could have seen the victim from 100ft/300m

We can't tell based on the DASHCAM footage...
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Decoman on March 22, 2018, 03:30:51 pm
Protip: When using long range lights on the road, you actually see where you are driving. To think that a camera could even auto adjust that away in a recording seems bizarre imho.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: CatalinaWOW on March 22, 2018, 03:35:18 pm
The only thing that is clear here is that a lot of technical people are willing to speculate on facts and develop firm opinions based on those speculations.  If this thread's authors are representative of those writing the self driving software it does not bode well for the results.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Kalvin on March 22, 2018, 03:38:40 pm
Without sensor data this video is quite useless.

The video shows the car to be driving on a road having no visibility ahead, and with no long range lights enabled, can we agree on this?
I also wonder if you drive a car yourself. :| Bonus question.
Sure, we can agree on that, of course. Long range lights may have helped in this situation, but in the city area using long lights is not always possible due to other traffic or pedestrians. In this situation human vision might have been somewhat better than the video camera (cannot know how much better, this is just speculation), but nevertheless the self-driving cars are using sensor data to detect obstacles and other vehicles / pedestrians / just name it, so the sensor data is the key here.

Sure, I drive. We have dark falls/winters when there is no snow yet, and driving in poorly lit areas is pretty demanding, especially when the roads are wet and absorb all the light, or when someone is driving towards me with headlights reflecting off the wet asphalt straight into my eyes. Hitting a moose on dark country roads is not uncommon either. Quite often I do wish I had some sensor fusion available.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: mdijkens on March 22, 2018, 03:40:22 pm
Compare with this:
https://www.youtube.com/watch?v=XAhyQC7fai0 (https://www.youtube.com/watch?v=XAhyQC7fai0)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: CNe7532294 on March 22, 2018, 03:44:32 pm
Here's an interesting read related to any automated vehicle. Worth a look over. It's from the Australian Transport Safety Bureau: the incident of Qantas Flight 72.

https://www.atsb.gov.au/media/3532398/ao2008070.pdf (https://www.atsb.gov.au/media/3532398/ao2008070.pdf)

Start with the "Executive Summary". You're welcome to read the rest of the 300+ pages though.  >:D
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 04:13:49 pm
Here's an interesting read related to any automated vehicle. Worth a look over. It's from the Australian Transport Safety Bureau: the incident of Qantas Flight 72.

https://www.atsb.gov.au/media/3532398/ao2008070.pdf (https://www.atsb.gov.au/media/3532398/ao2008070.pdf)

Start with the "Executive Summary". You're welcome to read the rest of the 300+ pages though.  >:D


Sorry, not going to read it - because it's totally irrelevant.  You can throw words around as much as you like, but when it comes to average people driving cars, they are NOT going to be anywhere NEAR as involved as you are suggesting they should be.  To do what you are saying would require a whole new level of training and accountability ... and autonomous vehicles are moving us further away from such imposts.

It's just NOT GOING TO HAPPEN.

Also, I strongly feel that your analogy with the airline industry is an insult to everybody involved.  For one thing, one of THE main dangers of aviation comes from the fact that when something goes wrong, a pilot cannot simply "pull over" to the side of the road, like you can in a car.  Running out of fuel is a life and death matter in the air.  On the ground, it's a nuisance.  I could go on and on....
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 04:22:46 pm
Protip: When using long range lights on the road, you actually see where you are driving. To think that a camera could even auto adjust that away in a recording seems bizarre imho.

You haven't worked with many cameras, have you?

Dynamic range is one of the greatest challenges and things don't get much more dynamic than night driving situations.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 22, 2018, 04:26:17 pm
Protip: When using long range lights on the road, you actually see where you are driving. To think that a camera could even auto adjust that away in a recording seems bizarre imho.

You haven't worked with many cameras, have you?

Dynamic range is one of the greatest challenges and things don't get much more dynamic than night driving situations.
Most dash cameras do a pretty good job at night. This Uber one seems to have done a rather poor job.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 04:29:02 pm
Agreed.  This one does an absolutely crap job, which is all the more reason not to assign too much importance to the information it gives - or more importantly, does NOT give.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 22, 2018, 04:31:38 pm
Moving along from matters that this thread was never intended to debate, I would really like more discussion about how the technology failed to respond.

I'm pretty sure we can discount the possibility of an iced-up angle of attack sensor, so, what's next?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: TheDane on March 22, 2018, 04:50:40 pm
Sad news!
Hopefully something good will come of this, and security research and development won't get butchered, but improved, in the future.

It seems to me that CEO's of various CORPiSES get away with too much, not even apologizing if/when something goes wrong:

Ledger has a security flaw, it seems. The CEO hides behind  :=\
https://saleemrashid.com/2018/03/20/breaking-ledger-security-model/ (https://saleemrashid.com/2018/03/20/breaking-ledger-security-model/)

"The vulnerability reported by Saleem requires physical access to the device BEFORE setup of the seed, installing a custom version of the MCU firmware, installing a malware on the target’s computer and have him confirm a very specific transaction."
"I am puzzled as to where this claim could have originated from. From later contact with Ledger, I was informed that the CEO had not at all been briefed on the security vulnerability when they made these comments on Reddit."


Then there's the recent Facebook data incident, where Mark finally seems to be doing something - and apologizing. Guess being regulated isn't fun  :wtf:
https://www.theverge.com/2018/3/21/17150158/mark-zuckerberg-cnn-interview-cambridge-analytica (https://www.theverge.com/2018/3/21/17150158/mark-zuckerberg-cnn-interview-cambridge-analytica)

https://www.theverge.com/2018/3/19/17140962/facebook-chief-security-officer-leaving-alex-stamos (https://www.theverge.com/2018/3/19/17140962/facebook-chief-security-officer-leaving-alex-stamos)
"As part of Stamos leaving, Facebook has reportedly broken down and reassigned his security team.
Almost all of the 120 employees have now been reassigned to product and infrastructure teams" 

120 people, wow. https://newsroom.fb.com/company-info/ (https://newsroom.fb.com/company-info/) reports 25,105 employees as of December 31, 2017  :scared:

Added - Full disclosure of all telemetry data from the vehicle would be needed in order to 'get a clear picture of what happened', imho.
And don't cross any road without looking left and right to make sure it is clear. Speeding, however, is a criminal offense in most places, and only makes any incident worse  :rant:
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: orion242 on March 22, 2018, 05:07:38 pm
Speeding however, is a criminal offense in most places, and only makes any incident worse

The speed limit was 45 mph and the car was going 40, according to the reports.  You can see the sign right before the overpass on Street View.
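For a sense of what those speeds mean, here is a back-of-the-envelope stopping-distance sketch. The 1.5 s perception-reaction time and 0.7 g braking deceleration are assumed typical figures, not anything from the investigation:

```python
# Rough stopping-distance estimate at the reported speed.
# Assumed figures: 1.5 s perception-reaction time, 0.7 g peak
# braking deceleration on dry asphalt. Real values vary.

MPH_TO_MS = 0.44704   # miles per hour -> metres per second
G = 9.81              # gravitational acceleration, m/s^2

def stopping_distance(speed_mph, reaction_s=1.5, decel_g=0.7):
    """Return (reaction, braking, total) distances in metres."""
    v = speed_mph * MPH_TO_MS
    reaction = v * reaction_s              # distance covered before braking starts
    braking = v ** 2 / (2 * decel_g * G)   # v^2 / (2a)
    return reaction, braking, reaction + braking

reaction, braking, total = stopping_distance(40)
print(f"reaction {reaction:.1f} m, braking {braking:.1f} m, total {total:.1f} m")
```

Under those assumptions a car at 40 mph needs roughly 50 m to stop, about half of that covered before the brakes even bite.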
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: jonovid on March 22, 2018, 05:09:22 pm
I think autonomous vehicles will be part of the new so-called 5G/IoT "smart roads" technology:
roads with light poles carrying LiDAR or infrared sensors that see the warm bodies of pedestrians in a 360° view.
Autonomous vehicles will pull data from the road system, including over-the-horizon and around-the-corner data
on pedestrians, vehicles, etc. So autonomous vehicles will be part of an electronic fixed-rail-type 5G system: the smart road.
Dumb roads will be off limits to autonomous vehicles unless the vehicle has a steering wheel, pedals and a licensed driver behind it.
IMO do not expect anything too soon, but by 2025 autonomous vehicles will be married to smart roads like trains to railroads.
It makes sense to have sensors outside the vehicle itself.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: CNe7532294 on March 22, 2018, 05:42:48 pm
Here's an interesting read related to any automated vehicle. Worth a look over. It's from the Australian Transport Safety Bureau: the incident of Qantas Flight 72.

https://www.atsb.gov.au/media/3532398/ao2008070.pdf (https://www.atsb.gov.au/media/3532398/ao2008070.pdf)

Start with the "Executive Summary". You're welcome to read the rest of the 300+ pages though.  >:D


Sorry, not going to read it - because it's totally irrelevant.  You can throw words around as much as you like, but when it comes to average people driving cars, they are NOT going to be anywhere NEAR as involved as you are suggesting they should be.  To do what you are saying would require a whole new level of training and accountability ... and autonomous vehicles are moving us further away from such imposts.

It's just NOT GOING TO HAPPEN.

Also, I strongly feel that your analogy with the airline industry is an insult to everybody involved.  For one thing, one of THE main dangers of aviation comes from the fact that when something goes wrong, a pilot cannot simply "pull over" to the side of the road, like you can in a car.  Running out of fuel is a life and death matter in the air.  On the ground, it's a nuisance.  I could go on and on....

You're missing the main point. :palm: Just because one operates in 3D and the other in 2D doesn't mean they have absolutely zero relation to each other. I don't expect you to read all of it. You didn't even bother reading my previous response, in which I propose a solution to the thermal problem. :palm: I expect to connect the dots that caused the entire system to fail; one failure is not enough to bring the whole thing down, after all. It's more of an insult to the people who died, or will die, if you ignore all the possibilities and recommended preventative actions. Another thing: you really think that lowly of average people? Insulting much?

Also, since you mentioned training: yes, I do think people need to be educated and trained. Should it be as involved as for a pilot? Of course not, but it goes a long way towards preventing tragedies like this. Future drivers would also learn something new instead of slowly becoming dumb and lazy. Most people these days don't even know how to change a tire and straight away call AAA. Worse, I wouldn't be surprised if people covered the sensor locations they are uneducated about with decals, flags or whatever. Who knows, maybe that contributed here? |O

One more thing: you really haven't made any case for why we should take a human being completely out of the loop (my main point!). Really, why should we take human beings out? If you could go on and on, please enlighten us. I have absolutely no doubt in my mind that having a human out of the loop partly caused this accident's outcome. It's definitely part of the "how". I highly doubt this question is irrelevant. At least try to say why it's irrelevant instead of just saying "totally irrelevant" while being so focused on the airplane part... :palm: You can improve the cameras all you want, but something else will just fail anyway. Don't be shocked if we come back to this same topic talking about yet another "superficial" failure if they end up fixing the camera only.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: AF6LJ on March 22, 2018, 06:34:05 pm
I am sorry, but AI isn't far enough along for a car to be able to drive itself and have a perfect driving record.

Problems that need solutions.

1. Get enough computing power and software to react to potentially dangerous situations for the car, its passengers and anybody on the road. Make that reaction time equal to or better than human reaction time. The video shown in this thread shows the car doesn't have enough AI to "look ahead to potential accident situations, and act."

2. When someone gets harmed by one of these robot cars, who pays the medical costs? I think the software developers should pay the medical costs incurred by injured parties. Every software developer should be responsible for the actions of their software, no exceptions.

This was as stupid as space elevators. A human driver, even a marginally skilled human driver, is much better at avoiding accidents than the state of the art in autonomous vehicles.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 22, 2018, 06:53:22 pm
1. get enough computing power and enough software to react to potentially dangerous situations for the car, it's passengers and anybody on the road. Make that reaction time equal to or better than human reaction time. The video shown in this thread shows the car doesn't have enough AI to "look ahead to potential accident situations, and act.".
Here the autopilot detected that shit was going to happen way before a human would have.
https://www.youtube.com/watch?v=FadR7ETT_1k (https://www.youtube.com/watch?v=FadR7ETT_1k)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 22, 2018, 06:59:32 pm
1. get enough computing power and enough software to react to potentially dangerous situations for the car, it's passengers and anybody on the road. Make that reaction time equal to or better than human reaction time. The video shown in this thread shows the car doesn't have enough AI to "look ahead to potential accident situations, and act.".
Here autopilot detected that shit will happen way before human would expect.
https://www.youtube.com/watch?v=FadR7ETT_1k (https://www.youtube.com/watch?v=FadR7ETT_1k)
The Tesla was keeping a very sensible distance from the car ahead, so a human driver would have been able to easily stop in time. This is a nice demonstration of why tailgating is dumb, not a demonstration of where machines have an edge.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 07:04:15 pm
The Tesla was keeping a very sensible distance from the car ahead, so a human driver would have been able to easily stop in time. This is a nice demonstration of why tailgating is dumb, not a demonstration of where machines have an edge.

Whoosh!

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 22, 2018, 07:08:43 pm
The Tesla was keeping a very sensible distance from the car ahead, so a human driver would have been able to easily stop in time. This is a nice demonstration of why tailgating is dumb, not a demonstration of where machines have an edge.
The fact is, it started braking before a human would. So there was still quite a distance left once the crash happened.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 07:15:45 pm
The Tesla was keeping a very sensible distance from the car ahead, so a human driver would have been able to easily stop in time. This is a nice demonstration of why tailgating is dumb, not a demonstration of where machines have an edge.
The fact is, it started breaking before human would. So there was still quiet a distance once crash happened.

The fact is, the Tesla car beeped long before the accident even happened.

PS: The English word is "brake".
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 22, 2018, 07:15:50 pm
This is a nice demonstration of why tailgating is dumb, not a demonstration of where machines have an edge.
No, this is a demonstration that people are stupid, and the reason why autonomous cars would be a good thing: they don't act dumb, unlike people.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: orion242 on March 22, 2018, 07:24:28 pm
Because they don't act dumb unlike people.

Perhaps we should refer back to the video of this AV plowing over a pedestrian again?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 22, 2018, 07:27:45 pm
The Tesla was keeping a very sensible distance from the car ahead, so a human driver would have been able to easily stop in time. This is a nice demonstration of why tailgating is dumb, not a demonstration of where machines have an edge.
The fact is, it started breaking before human would. So there was still quiet a distance once crash happened.
As soon as you see the car that gets hit over the top of the red one, you can see that the Tesla is closing rapidly on it. I think I would have backed off the power even before the beeps, although I probably wouldn't have been braking hard before them. There's a strong luck element here, as the car that gets hit is quite tall; a lower car might not have been visible so early. It looks like the Tesla probably beeps as it picks up the Doppler from the car in front of the one that gets hit. Again, there is a luck element in that being detected so early, as it's a little to the left of the car that gets hit. If it had been a little further to the right, the beeps would probably have been delayed a little.
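For reference, the Doppler shift such a radar sees is proportional to closing speed. A quick sketch of the arithmetic, assuming a 77 GHz automotive radar band and an illustrative 20 m/s closing speed (neither figure is from Tesla's specs):

```python
# Doppler shift seen by a radar from a target closing at speed v:
#   f_d = 2 * v * f0 / c
# Assumes the 77 GHz automotive radar band; the 20 m/s closing
# speed is an illustrative figure only.

C = 299_792_458.0    # speed of light, m/s
F0 = 77e9            # radar carrier frequency, Hz (77 GHz band)

def doppler_shift_hz(closing_speed_ms):
    """Doppler frequency shift for a target closing at the given speed."""
    return 2 * closing_speed_ms * F0 / C

# A car ahead suddenly moving ~20 m/s slower than us shows up as
# a shift on the order of 10 kHz, an easy thing to pick out quickly.
print(f"{doppler_shift_hz(20.0):.0f} Hz")
```

The point is that a sudden speed difference jumps out of the radar return almost immediately, which is consistent with the early beeps in the video.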
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 22, 2018, 07:33:47 pm
The Tesla was keeping a very sensible distance from the car ahead, so a human driver would have been able to easily stop in time. This is a nice demonstration of why tailgating is dumb, not a demonstration of where machines have an edge.
The fact is, it started breaking before human would. So there was still quiet a distance once crash happened.

The fact is, the Tesla car beeped long before the accident even happened.

PS: The English word is "brake".
The Tesla can't detect accidents. It simply detects speed. As soon as the red car moved to the right, the Doppler system detected the much slower speed of the cars in front of it, and the alert occurred. A human driver could have recognised the same thing.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 22, 2018, 07:38:25 pm
A human driver could have recognised the same thing.

I think you need to go and type "Russian dash cam" into youtube.

Come back in a few hours and see if you still have the same faith in human drivers.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: sunnyhighway on March 22, 2018, 07:44:28 pm
Protip: When using long range lights on the road, you actually see where you are driving. To think that a camera could even auto adjust that away in a recording seems bizarre imho.

You haven't worked with many cameras, have you?

Dynamic range is one of the greatest challenges and things don't get much more dynamic than night driving situations.
Most dash cameras do a pretty good job at night. This Uber one seems to have done a rather poor job.

Uber must have used a potato instead of a dashcam.

Below is footage from a decent dashcam at the same spot at night.

https://youtu.be/CRW0q8i3u6E?t=30 (https://youtu.be/CRW0q8i3u6E?t=30)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: mdijkens on March 22, 2018, 07:47:38 pm
Really enlightening  :-+
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: tpowell1830 on March 22, 2018, 07:48:14 pm
Quote

Uber must have used a potato instead of a dashcam.

Below is footage from a decent dashcam at the same spot at night.

https://youtu.be/CRW0q8i3u6E?t=30


This footage is obviously from a camera that someone is holding, so it is probably not a dashcam. Although I do agree that the video from the dashcam of the Uber car sucked.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Monkeh on March 22, 2018, 07:51:35 pm
If the lighting was more like that video suggests, the incident would've been a non-incident with a human driver - and simple computer vision alone should've been adequate to prevent it, without needing lidar or radar.

Mind you, if it were that well lit the human driver likely could have seen her during one of the several glances up prior to the incident and paid attention.

Here's hoping they release more footage and sensor data, because what we have is simply not good enough.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: tszaboo on March 22, 2018, 08:13:34 pm
I suspect that the lidar system is much lower resolution than they let us believe, which gets worse the faster the car goes. And it would need to not only detect the cyclist, but also heading and speed. And while a cyclist is "big", it is full of holes. I guess it cannot even detect a bicycle reliably, because it is getting confusing info: some parts are 20 m away, others are 200...
The Tesla was keeping a very sensible distance from the car ahead, so a human driver would have been able to easily stop in time. This is a nice demonstration of why tailgating is dumb, not a demonstration of where machines have an edge.
Tailgaters should be shot on sight by the police. Also the people who start overtaking when they clearly have no idea what is ahead on the road, and the ones who think "Now I'm going to teach you a lesson".

Since none of the autonomous cars do these things, they are infinitely better drivers than most humans. I suspect BMW is holding on to their self-driving software because they need to program the car to drive like the usual * buying it..
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: timgiles on March 22, 2018, 08:20:55 pm
Seemed to me the person walked out in front of the car. I doubt the car with headlights on was hard to see / notice - they chose to cross at the point where there was no light, in front of a moving vehicle...

So long as automated cars can be shown to drive as safely as your average human driver, I don't see why we should hold them more accountable. The insurance companies will point us in the direction of how safe they are, and it is likely that in the not-too-distant future they will start pricing non-automated cars off the roads as the automated ones become significantly safer. Indeed, it will become almost impossible to justify driving a car yourself when the automated pilot can do so.

For now, it is a tragic accident, it needs investigating and lessons learnt, but every car company deciding to stop testing of autonomous cars seems extremely short sighted.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Bob Sava on March 22, 2018, 08:27:03 pm
Based on the lane markings (10 ft per line segment), the camera footage seems to show either that the car's lights are improperly adjusted (pointing down) or that the camera is not showing how dark it really was.

Either way, LIDAR does not depend on headlight illumination (or daylight), as it projects its own light (a laser beam).  Therefore this was definitively a system failure, and the program should be halted and the system retested on a closed circuit.
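The active-illumination point can be made concrete with the time-of-flight arithmetic a lidar actually performs. The 100 m detection range and 40 mph speed below are assumed round numbers for illustration, not Uber's or Velodyne's specs:

```python
# A lidar ranges by timing its own reflected laser pulse, so ambient
# darkness is irrelevant to it. Sketch of the time-of-flight maths and
# the warning time a given detection range buys at a given speed.
# The 100 m range and 40 mph speed are assumed round numbers.

C = 299_792_458.0      # speed of light, m/s
MPH_TO_MS = 0.44704    # miles per hour -> metres per second

def tof_distance(round_trip_s):
    """Target distance (m) from a pulse's round-trip time (s)."""
    return C * round_trip_s / 2

def warning_time(detection_range_m, speed_mph):
    """Seconds between first detection and reaching the target."""
    return detection_range_m / (speed_mph * MPH_TO_MS)

print(tof_distance(667e-9))    # ~100 m target: round trip of ~667 ns
print(warning_time(100, 40))   # ~5.6 s of warning at 40 mph
```

Under those assumptions the lidar would have had several seconds of warning regardless of the headlights, which is why many posters consider the non-reaction the real failure.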

Personally I think they should make all autonomous vehicles emit a warning sound or light so pedestrians know that this car will not behave or see like human drivers do (I bet this pedestrian expected the car to slow down, as visibility was good - to a human).



Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Monkeh on March 22, 2018, 08:30:36 pm
Personally I think they should make all autonomous vehicles emit a warning sound or light

It has an engine and the lights are on. If you're stupid enough to think it's going to stop for you in that situation you deserve it.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 22, 2018, 08:31:22 pm
(I bet this pedestrian expected the car to slow down as they did many times before).
I bet the only thing that could be expected from car driver is swearing at such dumbfuck.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Bob Sava on March 22, 2018, 09:02:05 pm
Without sensor data this video is quite useless.

The video shows the car to be driving on a road having no visibility ahead, and with no long range lights enabled, can we agree on this?
I also wonder if you drive a car yourself. :| Bonus question.
Sure, we can agree on that, of course. Long-range lights might have helped in this situation, but in the city using high beams is not always possible due to other traffic or pedestrians. In this situation human vision might have been somewhat better than the video camera (we cannot know how much better; this is just speculation), but nevertheless self-driving cars use sensor data to detect obstacles, other vehicles, pedestrians, you name it, so the sensor data is the key here.

Sure, I drive. We have dark falls/winters when there is no snow yet, and driving in poorly lit areas is pretty demanding, especially when the roads are wet and absorb all the light, or when someone is driving towards me with headlights reflecting off the wet asphalt straight into my eyes. Hitting a moose on dark country roads is not uncommon either. Quite often I do wish I had some sensor fusion available.

The reason why long-range lights are not needed in the city is that there are street lights and the speed limit is lower.  In this case the weather conditions were excellent and there were no external circumstances preventing the lidar from operating.

Bottom line: the system *probably* failed to detect the object on the road and *definitely* failed to react.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ez24 on March 22, 2018, 09:18:01 pm

Although this film has been around a long time, when an autonomous car can do this, then it might be ready for public release.

Exciting video - anyone know what car it was?  I hope the driver got 20 years in jail for this.  My bet is that in 200 years an autonomous car will be able to do this, but with electric motors - and I will also bet that I will not be around to see it.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 22, 2018, 09:43:05 pm
though this film has been around a long time, when an autonomous car can do this, then it might be ready for public release.

This was 7 years ago:
https://www.youtube.com/watch?v=e_lGPRIRG3Y (https://www.youtube.com/watch?v=e_lGPRIRG3Y)

https://www.youtube.com/watch?v=-cj375UZyaI (https://www.youtube.com/watch?v=-cj375UZyaI)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ez24 on March 22, 2018, 09:48:33 pm
FYI

From what I know, the victim was homeless - maybe this is why she was not at a crosswalk?

- The driver was a female ex-felon (not male), I think with a violent conviction, and I think she served 4 years.  This was the biggest surprise for me.  I thought Uber was using trained techs to do this job, not felons (in general I am not against felons, but I have met many (my tenants) and none would be qualified for this).

If Uber uses felons to test its cars then, in my opinion, it should not be in business.  I am sure this is going to set things back for many years.  Regulators will have a field day with this.

If anyone has a Toyota with TSS-P: would their car have avoided this?  TSS-P has pedestrian avoidance.
I had a distracted-driving accident (100% my fault) and someday I want to get a car with AEB, but Toyota seems to be the only one I can get someday.

If I need to correct my facts let me know and I will.

cheers
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Smokey on March 22, 2018, 10:33:40 pm

It's a two lane road, and an object in the other lane as a car is driving along isn't unusual. I'm sure the LIDAR saw it but how is the car supposed to know the object is about to move in front of an approaching car? It makes no sense. Somewhere in the software there has to be some assumptions that other road users simply don't do that. The car would be constantly braking otherwise and could never overtake anything.

The real question is: How could the pedestrian not see the approaching car? They just walked straight across the road without even looking. No human would have expected that either.

This... This, this, this, this!

This isn't about if the Lidar was working, it is about what predictions were made based on the available data from all systems and the information about the surroundings.
We ALL make predictions about the world around us based on the rules of reality.  How many times would you have been in a terrible accident, and likely killed, on your way to work this morning if the car next to you on the highway had gone crazy (broken the rules) and turned hard into the side of your car at 65 mph?  Likely 100% of the time.  What extra sensors would save you?  How many times were you actually in that possible situation in just this one drive to work?  Hundreds of times, if not more, in just this one ride.  With something that potentially deadly, why is that not the greatest ever-present fear in all of us, every time we get behind the wheel?  Because we predict that people will stay in their lane and follow the rules of the road.  Is it a good prediction?  Well, at least most of us reading this made it where we were going today (and the vast majority of days like it), but some small number of people actually did get hit or killed in that same time in exactly this way.  That is the fault of the person who broke the rules by which we make predictions, not the person expecting the rules to be followed.  Just because it's a computer and not a person, and just because it has more sensors, doesn't mean it isn't making the exact same predictions and relying on everything around it to fundamentally follow the rules.

Now in this particular case you need to look at the situation and the reasonable predictions being made.
The car does have an impressive set of sensors, and under optimal conditions those sensors all contribute to the total perception of the surroundings.  When multiple sensors give the same feedback [the camera sees and identifies a person, the lidar sees an object of some sort (lower res), the radar sees an object (again low res)], then confidence can be high about what something is and how you can expect it to act.
Based on the video, the visible-light cameras could not see the person in the shadows, at least not well.  Mark that off as one of the sensor contributions to the feedback.  The camera-based computer vision is probably the most capable of determining what something actually is, since it has the highest-resolution representation of the details of a thing.  Colors, faces, textures, etc. are all the domain of the computer vision that the other sensors can't "see".
The camera system, even multiple cameras in an array, is less good at distance and velocity determination.  That's where the lidar comes in: excellent quick distance measurement but terrible resolution.  It gives you a point cloud, which some computer has to put back together and... you guessed it... make a prediction of what that object could be.  Did the lidar detect that there was some sort of object over there?  We don't know yet, but I would guess it probably did.  The question now becomes: without the visual camera data, what did the lidar see, and what assumptions did the computer make based on that sensor data about the probable object?
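The multi-sensor confidence idea above can be sketched as a naive fusion of independent per-sensor detection probabilities. All the numbers are invented for illustration; real systems use far more sophisticated Bayesian, track-level fusion:

```python
# Naive illustration of the confidence point above: if each sensor
# independently reports a detection probability, the combined chance
# that *at least one* sensor is right is 1 minus the product of the
# individual miss probabilities. All numbers here are invented.

from functools import reduce

def fused_detection(probs):
    """P(at least one sensor detects), assuming independent sensors."""
    return 1 - reduce(lambda acc, p: acc * (1 - p), probs, 1.0)

# All three sensors working: camera, lidar, radar
print(round(fused_detection([0.9, 0.8, 0.7]), 3))   # 0.994

# Camera blinded by darkness -> lidar and radar only
print(round(fused_detection([0.0, 0.8, 0.7]), 3))   # 0.94
```

Even this toy model shows the shape of the problem: losing one sensor doesn't just subtract its contribution, it removes the cross-check that lets the system be confident about *what* the object is, not merely that something is there.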

Question1: What can be reasonably expected to be in the left lane of a two lane highway in the middle of the highway far between street crossings? 
Answer1: If you said a person with no regard for their own surroundings wandering between lanes at night, dragging a bike loaded up with stuff, with no lights on, then you are crazy.  That is a poor prediction by person or computer.  The probability of that is low.  Very low.  It would be a better prediction to think it was a couch or some other junk that fell off someone's truck and is blowing around in the street, which is more probable.
A much better prediction would be a motorcycle of some sort.  Probability of that is fairly high.  You would expect a motorcycle to be in the street, and expect it to be in the left lane, possibly stopped or moving slowly in the beginning of a left turn lane.  In the place where the accident took place, the highway opened up from 2 lanes into 4 lanes with 2 left turn lanes on the left.
Question2: What actions can you reasonably expect a motorcycle to take while in a left turn lane?
Answer2: If you said purposefully swerve into a car driving at normal highway speed in the right lane without any regard or hesitation, then again, you are crazy.  The probability is incredibly low.  A much better prediction would be merging back into the left through lane from the beginning of the left turn lane.  The probability of that is good.  This would give you the profile of a side view of a motorcycle moving at slow speed, which is something the lidar could tell on its own and matches what we see in the video.

Now I'm not saying this is a perfect exoneration of the computer.  For example: is it a good prediction for a motorcycle to have no lights on at night?  Probably not, but it's also not a terrible prediction.  Vehicles are routinely found driving at night with no lights on.

Should the computer have realized it had sub-optimal sensor information without the full camera detail, and that there was a questionable condition of an unlit vehicle in the adjacent lane, and therefore proceeded with more caution?  Maybe, but that doesn't automatically mean it needed to take evasive action.  Maybe it did reduce its speed to what it considered optimal, based on its predictions, in the seconds before the video.  I'm sure we will hear from the Uber team eventually, but if the prediction was a vehicle such as a motorcycle (which I think is a reasonable prediction by human or computer standards given the conditions and probabilities), that doesn't warrant the follow-up prediction people are making here, that the object will try to jump in front of the car and demands panic braking.  Why the brakes didn't lock up once it was clear the object was going to be hit is another question, but it might come out that they did and it's just not obvious from the video.  I'm willing to give them the benefit of the doubt on that one since it's not clear.  Dave was the one that synced the driver footage with the front video, and that sync could be off.

Here is a completely incomplete list of things we (or a computer) would never attempt if we had to be 100% certain of every prediction or be able to take 100% successful evasive action:
1) Never enter a building of any kind.  It could collapse, it could light on fire, an airplane could crash into it, etc. 
2) Never turn on a light switch.  You could be electrocuted, you could start a fire, etc...
3) Never walk on the sidewalk.  A car could hop the curb, you could fall in a hole, a tree could fall on you, etc...
4) Never drive on a highway.  65MPH is too fast to react to all possible situations.
5) Never drive on a city street.  35MPH is too fast to react to all possible situations.
6) Never get in your car at all.  Brakes could fail, the steering wheel could pop off, the gas tank could explode, etc.

All of these things WILL happen, but that doesn't mean we need to account for them in everyday predictions.  Self-driving cars will kill people.  Period.  It will happen fairly often once the technology is more prevalent.  There is no way around that with two-ton steel boxes moving at high speed.  The question is whether they kill fewer people than human drivers do, which will almost certainly be the case.

TL;DR
Don't break the rules and walk in front of moving cars.  More sensors don't ensure perfect predictions 100% of the time.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Bud on March 22, 2018, 10:37:56 pm
It is difficult to read such long posts. Chances are people will skip at least 50% of it.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Smokey on March 22, 2018, 10:40:26 pm
Ya, that didn't start out that way.  Sorry.  Slow day at work :)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: tpowell1830 on March 22, 2018, 11:28:29 pm

Although this film has been around a long time, when an autonomous car can do this, then it might be ready for public release.

Exciting video - anyone know what car it was?  I hope the driver got 20 years in jail for this.  My bet is in 200 years an autonomous car will be able to do this but with electric motors, but I will also bet I will not be around to see it.

 Ferrari 275 GTB
Agreed on the electric motor part, but I think 50 years we will see autonomous vehicles (they may not be cars as we know them).
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 22, 2018, 11:47:28 pm

Although this film has been around a long time, when an autonomous car can do this, then it might be ready for public release.

Exciting video - anyone know what car it was?  I hope the driver got 20 years in jail for this.  My bet is in 200 years an autonomous car will be able to do this but with electric motors, but I will also bet I will not be around to see it.
With the same (i.e. high) probability of an accident, they could do it right now.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ez24 on March 22, 2018, 11:51:32 pm
Ferrari 275 GTB

Thanks for the video - exciting.  Since there is a red car in your profile and you knew what car it was  - is it your car ?

If not, do you know if the driver was a professional?  Now to start a new topic - How to make my 2000 Corolla sound like a Ferrari 275 GTB.    Uber needs to make their cars sound like a 275, then I would use them, felons or not.

FYI  I once had a taxi drive like this across Taipei in the 70s.  I told him I was in a big hurry,  the ride was just like the video so it brought back memories.  But it was not a 275.



Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: tpowell1830 on March 23, 2018, 12:02:38 am
Ferrari 275 GTB

Thanks for the video - exciting.  Since there is a red car in your profile and you knew what car it was  - is it your car ?

If not, do you know if the driver was a professional?  Now to start a new topic - How to make my 2000 Corolla sound like a Ferrari 275 GTB.    Uber needs to make their cars sound like a 275, then I would use them, felons or not.

FYI  I once had a taxi drive like this across Taipei in the 70s.  I told him I was in a big hurry,  the ride was just like the video so it brought back memories.  But it was not a 275.

I wish I had a Ferrari 275 GTB, but the pic is my 1966 Ford Mustang GT.
This was done in 1976 and the driver was a professional Formula 1 driver; however, it is rumored that the actual vehicle was a Mercedes 450 SEL 6.9 with the Ferrari soundtrack dubbed over it. Who knows?

http://www.marotprodriving.com/film-ferrari-275-gtb-paris/ (http://www.marotprodriving.com/film-ferrari-275-gtb-paris/)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 23, 2018, 12:51:30 am
This isn't about if the Lidar was working, it is about what predictions were made based on the available data from all systems and the information about the surroundings.

You are on to something here, it seems to me.  The software maintains a model of what it thinks the reality around the car looks like.  Perhaps it mistook the sideways facing pedestrian/bicycle combination for a larger vehicle with its front pointing towards the autonomous vehicle...  perhaps the absence of lights made it think it was a stationary/parked object...  suddenly, the vehicle started to move sideways...  does not compute.

What the system seems to be missing is a "gut feeling that something is not right" (sensor inputs are becoming inconsistent with the model of the surrounding world) which should have led it to slow down, or at least be prepared to slow down very quickly.
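That "gut feeling" could be implemented as a simple consistency check. A sketch of the idea (entirely my own invention, not anyone's actual system): slow down whenever the tracker's residuals blow up.

```python
def confidence_speed(v_current, residuals, threshold=1.0, factor=0.5):
    """Crude "gut feeling" check: residuals are the distances (m)
    between where the world model predicted a tracked object would
    be and where the sensors actually measured it.  A spike means
    the model is suspect, so back off to a fraction of current speed."""
    if max(residuals) > threshold:
        return v_current * factor
    return v_current

# Track was consistent for two frames, then jumped 2.5 m off-model.
print(confidence_speed(17.0, [0.2, 0.3, 2.5]))  # 8.5 m/s: slow down
```

The hard part in practice is picking the threshold so the car isn't constantly spooked by sensor noise, but the principle is just "inconsistency implies caution".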

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: tpowell1830 on March 23, 2018, 02:21:45 am
This isn't about if the Lidar was working, it is about what predictions were made based on the available data from all systems and the information about the surroundings.

You are on to something here, it seems to me.  The software maintains a model of what it thinks the reality around the car looks like.  Perhaps it mistook the sideways facing pedestrian/bicycle combination for a larger vehicle with its front pointing towards the autonomous vehicle...  perhaps the absence of lights made it think it was a stationary/parked object...  suddenly, the vehicle started to move sideways...  does not compute.

What the system seems to be missing is a "gut feeling that something is not right" (sensor inputs are becoming inconsistent with the model of the surrounding world) which should have led it to slow down, or at least be prepared to slow down very quickly.

Yes, to put it another way, the friggin crap failed and killed a human... this is no way to move forward. And yet, the corporate engine will prevail and the testing will continue. The marketing fucks will spin it positive and the arsehole board will say (in their own minds) 'We can profit from a positive spin'. WTF!

Sorry about rant, but the wheel keeps turning. We do need a different mode of ground travel that is expedient, cost effective and safe (not necessarily in that order). The autonomous car could be the answer (and electric EZ24), but not now.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 23, 2018, 02:31:51 am
Yes, to put it another way, the friggin crap failed and killed a human... this is no way to move forward.
That stupid human (who actually deserved it through their own recklessness) would have been killed regardless of whether it was an autonomous car or one driven by a human. How about banning vehicles as such? So many people die under the tires. Also, let's ban elevators and trains, as people get killed there too. Heck, let's ban electricity; it sometimes causes fires or electric shocks with human casualties.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Monkeh on March 23, 2018, 02:33:14 am
Yes, to put it another way, the friggin crap failed and killed a human...

The friggin' crap failed to protect the human from itself.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Rasz on March 23, 2018, 03:00:56 am
I took the video that was released and just asked them before playing the video to say "BRAKE" whenever they saw the person. I might have converted a few people.

good thing drivers aren't observing the road through $10 chinese dashcraps then, right?

That view of stepping out from the shadows may be misleading. A human may have had better vision than what the camera footage shows.


I really doubt it.

this is what a human sees at the same spot at the same time of night:
(https://discourse-cdn.freetls.fastly.net/boingboing/uploads/default/original/4X/d/8/4/d843b795d4028d3694d9f763d38ad36b1bd4c4b8.jpg)


Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: james_s on March 23, 2018, 03:10:04 am
This is another reason why I don't think fully autonomous cars will be put into mainstream use any time soon. Some politician has to sign off on that, and they like covering their arses. Look at how they virtually never want to raise a speed limit or remove a speed hump once it's in place; no one wants to take responsibility.

As I've said for a long time, I suspect fully autonomous cars are at least 20 years behind where a lot of enthusiasts think. The companies making these things are going to have to watch out for arrogance and use an abundance of caution when deploying them, because what can go wrong, will. Even if they cause fewer deaths overall than individual drivers do, it's easier to assign blame to a specific individual, and it will only take a few incidents like this before the whole concept of the autonomous car has a reputation for killing people. It could turn out to be a classic textbook case like the Therac-25 incidents.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 23, 2018, 03:13:58 am
this is what a human sees at the same spot at the same time of night:
This is what a good camera sees if a long exposure is used, not the human eye. I did not notice anything like this when streets are lit with similar lights.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Rasz on March 23, 2018, 03:28:43 am
this is what a human sees at the same spot at the same time of night:
This is what a good camera sees if a long exposure is used, not the human eye. I did not notice anything like this when streets are lit with similar lights.

it was taken by someone who specifically selected the exposure to produce the same picture he saw with his own eyes, plus you have a YT clip of that road at night and someone else's pictures on twatter https://twitter.com/thekaufaz/status/976686336877871104

_this is a well lit road_
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 23, 2018, 04:32:29 am
Off topic, but...


Although this film has been around a long time, when an autonomous car can do this, then it might be ready for public release.

Exciting video - anyone know what car it was?

It was a Mercedes S-class. The tire squeals and sound of a Ferrari engine were dubbed on afterwards (listen for the occasional mismatch/crossfade in the sound).

If you take away the camera angle and sound of a Ferrari going through the gears then it's not as fast as you think (mute the sound and look at the objects going past, not at the road surface...)

People have calculated real speeds:

https://en.wikipedia.org/wiki/C%27%C3%A9tait_un_rendez-vous#Route

Ferrari 275 GTB

Nope.

Admittedly, the streets were fairly empty in this film.

It was the 1970s, very early Sunday morning. It would be impossible to do these days - there's a lot more cars in the world now.

FYI  I once had a taxi drive like this across Taipei in the 70s. 

I've done 150mph/250kph in a Taxi in Germany.

I don't know what the fuel consumption was, I wonder if it was profitable?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: hendorog on March 23, 2018, 05:07:52 am
So:

Failure of the LIDAR
and failure of the system which detects failures in the LIDAR
and failure of the human backup driver
and failure of the camera to capture the scene
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ez24 on March 23, 2018, 05:19:04 am
this is what a human sees at the same spot at the same time of night:

There will be a lot of accident recreations made to prove this.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: hendorog on March 23, 2018, 05:27:16 am
Failure of the LIDAR

Says who?

More likely a failure to figure out that somebody would be stupid enough to cross the road like that.

Based on the reports and observations of the video, the car didn't brake, despite the pedestrian being in the lane. The driver did not look up either, which they would have done if the car had braked.

Therefore the hazard was not detected. Therefore the LIDAR failed.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 23, 2018, 06:40:01 am
@CNe7532294
I have not missed any of your points at all.  It's just that a lot of it is irrelevant.

I will address this specific point in the hope that you will see my point - which you continue to ignore...

One more thing, you really haven't made any case to say why we should take a human being completely out of the loop (my main point!).

AT NO TIME have I ever directly stated - or even suggested - that we take a human being out of the loop.

What I AM saying is that people will take themselves out of the loop.

The fundamental expectation is that the vehicle will drive itself - and after hours and hours of this happening (quite probably including some successful collision avoidance), the human "driver" will become confident in the AV system and they will lessen their focus.  This is human nature and there is no way to overcome it.  In fact, the legal system is going to be on their side since "self driving" is the very claim being made for these systems.


Your continued reference to the aviation industry is just annoying.  The correlation between the two is extremely poor.  If I just take your reference to QF72, by the time the pilots took the correct action, a car on the road would have had the collision and, quite possibly, emergency services would have been contacted and on their way.  No amount of training could have enabled a timely response.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 23, 2018, 06:49:03 am
There is, however, one parallel with the aircraft industry that I can see having a perfect correlation - accident investigation and subsequent recommendations.

Even with highly trained pilots, people die - but the industry doesn't abandon technology.  It simply works out what went wrong and then takes steps to fix it.  Autonomous vehicle tech will go the same way.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: pickle9000 on March 23, 2018, 07:09:28 am
So the vehicle must have a safe area, the size of which is determined by speed. I can't see the system needing to determine the type of object, only its size and direction, when it is inside the safe area. If it's far away, steer clear; otherwise brake to reduce the impact. The only other problem would be avoiding other vehicles.
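The size of that safe area falls out of basic stopping-distance arithmetic. A rough sketch, where the deceleration and latency figures are assumptions rather than anyone's real numbers:

```python
def safe_radius(v_mps, decel=7.0, latency=0.5):
    """Minimum clear distance ahead of the vehicle: ground covered
    during sensing/actuation latency plus the braking distance
    v^2 / (2a).  7 m/s^2 is roughly a hard stop on dry asphalt."""
    return v_mps * latency + v_mps ** 2 / (2 * decel)

# ~17 m/s is about the 38 mph the Uber car was reportedly doing.
print(round(safe_radius(17.0), 1))  # 29.1 m of road must be clear
```

So at that speed, anything inside roughly 30 m that the system can't steer around has to trigger braking, regardless of what it's classified as.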

If the car was being tailgated, would that have prevented braking?

 
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: oldway on March 23, 2018, 11:08:13 am
This police inquiry and its conclusions are staggering... I wonder how it is possible that people do not notice this...

The video presented does not have the weight of formal proof, because it was provided by the main interested party, who could very well have manipulated it in their own interest... (reduced brightness, cuts to suggest that reaction and braking were impossible, etc.)

Remember that this is not a surveillance camera video.

Several pieces of evidence seem to show that there is a problem with this video.

1) The road was lit, yet there is no sign of that lighting in the video.

2) The efficiency and range of the headlights do not correspond to the headlights of a modern car. (They look like the headlights of a Ford Model T...!!!)

3) There is no gradual appearance of the pedestrian with her bike; it seems that portions of the video are missing.

4) The sensitivity of the camera seems totally abnormal. Digital cameras are usually more sensitive to light than the human eye... There are even cameras capable of shooting at very low light levels.
Here, the video seems to have been made with the sensitivity of an old Super 8 camera.

Under such conditions, a reconstruction was essential, and the video made during that reconstruction should have been published so that the two can be compared.

Publishing only a video for which there is no proof of authenticity is biased and abnormal.

The result of the police investigation implies that the pedestrian committed suicide by knowingly crossing the road in a dark place in front of a vehicle that could not see and avoid her... this makes no sense.

The reality is more likely that the pedestrian thought she was perfectly visible and that a motorist would certainly have braked to avoid her... Of course, she was careless, but not suicidal.
Unfortunately, there was no driver, only an automatic system that failed to detect her and did not brake...

Uber wants to pass this off as unavoidable (and has succeeded), but to me it is clear that it was not inevitable.

A pedestrian crossing a lighted road is visible from far enough away for the driver to brake, or at least to reduce speed enough not to kill the pedestrian.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Decoman on March 23, 2018, 11:11:04 am
Looking at the video again, I find it odd that the passenger looks up and looks startled, but his eyes are looking to the left of center, with the passenger being seated on the left side of the car. The exterior shot (whatever the true video quality really is) seems to show the pedestrian only when the pedestrian is at the center. This makes me think the passenger perhaps saw the pedestrian to the left of the car some time before the car hit her, which doesn't match the exterior recording (or whatever it is). To speculate wildly: if we allow the possibility of a faked exterior view, then maybe the interior view is fake as well (though I guess that would be easy to find out, since it should be possible to correlate a longer interior recording with a longer exterior recording).

I am not a technologist or an expert, but I couldn't help finding it a little odd that the interior video looks to be all in gray, making me wonder if the interior camera is based on FLIR (an infrared camera). Apologies if something like this has already been commented on.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: onesixright on March 23, 2018, 11:38:19 am
I have seen that some cars (like Audis) already have thermal imaging (TI) cameras on board.

Does anybody know if this technology is already implemented in these autonomous test cars?

Volvo already has automatic braking for pedestrians. Is this switched off during these tests?






Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 23, 2018, 11:57:14 am
I have seen that some cars (like Audis) already have thermal imaging (TI) cameras on board.

Does anybody know if this technology is already implemented in these autonomous test cars?

Volvo already has automatic braking for pedestrians. Is this switched off during these tests?
Automated braking is quite widespread on new models now. A car can't get a 5-star NCAP rating now without some of these collision avoidance mechanisms. If the cyclist had been moving very slowly, you would expect a standard Volvo to have avoided the accident. With a faster-moving cyclist it would depend how quickly they moved into the view of the sensors, but the brakes would have been applied well before the impact.
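For reference, production AEB systems typically key off time-to-collision. A minimal sketch (the 1.5 s threshold is an assumed placeholder; real systems tune this per speed and scenario):

```python
def time_to_collision(range_m, closing_speed_mps):
    """Seconds until impact if neither party changes velocity."""
    if closing_speed_mps <= 0:
        return float("inf")
    return range_m / closing_speed_mps

def aeb_should_brake(range_m, closing_speed_mps, ttc_threshold=1.5):
    """Fire the automatic emergency brake when time-to-collision
    drops below the threshold (1.5 s is an assumed placeholder)."""
    return time_to_collision(range_m, closing_speed_mps) < ttc_threshold

# Object 20 m ahead, closing at 17 m/s: about 1.18 s to impact.
print(aeb_should_brake(20.0, 17.0))  # True
```

Even this crude rule brakes for anything directly in the path, with no need to classify the object first, which is why a stock Volvo AEB arguably should have fired here.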
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 23, 2018, 12:08:28 pm
Yes, the lidar and radar would have seen that pedestrian. The car should have been able to detect, predict and prevent this accident. This accident shouldn't have happened; it was ultimately a software failure. Even if the sensors were malfunctioning and sending false data to the computer, the software should have detected the malfunction and acted accordingly. Well, I suppose there is a theoretical possibility that the sensors were sending false data that looked real enough that a hardware error couldn't be detected, but I find that highly unlikely. A thorough technical analysis will tell eventually. But I find it very likely that Uber really **** up and a person died because of it.

I'm sure a lot of naysayers who don't know how the tech and software work were waiting for this to happen, but having followed the progress of Google's self-driving car over the years, this accident really surprised me. I can say with 99% certainty* that it would NOT have happened to any of the Google cars. Please realise that all software is NOT created equal.
(* 1% to allow for some really weird and unlikely technical problem that couldn't possibly have been predicted or avoided.)

But to say this proves self-driving cars are bad and should never be allowed is really disingenuous. What it proves is that Uber's technology is bad and shouldn't be driving on public roads. I hope Uber is held accountable if they are found to have made an error that would have been avoided had they followed good engineering practices. I can't imagine how the software could fail in such a predictable scenario; surely they would have tested the software against thousands of test cases of exactly this kind of situation before allowing it to drive on public streets.

It is humans who create the software, and humans make mistakes (which is why it's a good idea to try to replace human drivers in the first place). But once you have a system that is a provably better driver than the average human, that system is the better option; that's a simple fact.

And all evidence to date shows that self-driving cars have the potential to be far superior to humans in almost every regard. (I'll admit Dave's point that they will probably not be good at determining if a parked car is about to leave, at least for now, but even so, the advantages far outweigh the disadvantages.)

Take a look at this presentation from 2011 on how the Google cars work (and keep in mind, this is NOT the same as the Uber system). Clearly, avoiding pedestrians and cyclists is a rudimentary task that their system solves successfully all the time:

https://www.youtube.com/watch?v=YXylqtEQ0tk (https://www.youtube.com/watch?v=YXylqtEQ0tk)

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 23, 2018, 12:15:34 pm
... industry doesn't abandon technology.  It simply works out what went wrong and then takes steps to fix it.

That is probably the rational response to this incident.

We know that air crashes typically have more than one cause.  If the NTSB is investigating, they are not going to be fooled by low quality video etc. -  and they are not going to be afraid of saying that 'driverless babysitter' inattention was a contributory cause in this particular accident, if that is what the evidence shows. 

We might end up with a situation where the technology gets most of the blame for failing to spot the bicyclist,  while contributory causes might include the driverless babysitter not paying sufficient attention, having perhaps not been sufficiently trained. 

For the technology part, it won't be enough to say "it failed".  They will look at the nitty gritty details of how the software was developed and tested to make sure this kind of thing wouldn't happen.  The development and QA process obviously failed - why did it fail? - Uber is vulnerable to blame on that point, as is any development or QA manager that hasn't dotted his i's and crossed his t's.

The driverless babysitter failed to prevent the accident too.  Was he paying attention and doing his job responsibly in this situation?  If the babysitter is found to have not been suitable for the task (right personality, right training), this is an area where Uber is vulnerable too.

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 23, 2018, 12:24:00 pm
Don't even get me started on how stupidly foolish it is to solely rely on GPS as a means of navigation.
Do you have any source that can corroborate that statement? Self driving cars do not solely rely on GPS. Well, at least the google cars don't (it's mentioned briefly in the video in my previous post).
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 23, 2018, 12:34:37 pm
But to say this proves self-driving cars are bad and should never be allowed is really disingenuous. What it proves is that Uber's technology is bad and shouldn't be driving on public roads. I hope Uber is held accountable if they are found to have made an error that would have been avoided had they followed good engineering practices. I can't imagine how the software could fail in such a predictable scenario; surely they would have tested the software against thousands of test cases of exactly this kind of situation before allowing it to drive on public streets.
Spot on. We see these figures of millions of actual miles and billions of simulated miles safely driven by autonomous cars, but every time I learn a little of what lies behind those numbers it's quite disturbing. Most of the real miles seem to be spent endlessly trundling over the same small number of routes, not exposing the system to an ever-widening range of real-world scenarios. Simulated miles at this stage are basically worthless. Simulation is a fantastic way to get to a prototype, but real-world testing is about finding the stimuli your simulations missed. Continuing to build miles on the simulators is more of a publicity stunt than engineering.

What is needed now is genuinely independent testing of these systems, complementing the in-house testing. The group that builds anything is too highly motivated to gloss over problems to be trusted on its own.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 23, 2018, 12:39:35 pm
Don't even get me started on how stupidly foolish it is to solely rely on GPS as a means of navigation.
Do you have any source that can corroborate that statement? Self driving cars do not solely rely on GPS. Well, at least the google cars don't (it's mentioned briefly in the video in my previous post).
From what I have seen, Google has relied solely on GPS and maps for what most of us would consider navigation, i.e. "what route do I take?". They then use radar and lidar for short-range fine tuning, and obvious things like collision avoidance. I assume by this time they have started working temporary road-sign recognition into the mix, to deal with roadworks. The signing at roadworks tends to be pretty weak and haphazard, though.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 23, 2018, 12:53:23 pm
Simulated miles at this stage are basically worthless. Simulation is a fantastic way to get to a prototype, but real-world testing is about finding the stimuli your simulations missed. Continuing to build miles on the simulators is more of a publicity stunt than engineering.

What is needed now is genuinely independent testing of these systems, complementing the in-house testing. The group that builds anything is too highly motivated to gloss over problems to be trusted on its own.
Yes, the "miles" spent in simulation are a pretty useless metric unless you know exactly what happens in the simulation. Miles spent on real streets are useful though, in order to be able to compare accident statistics with those of other drivers.
I agree they should do independent testing before allowing a car to drive on public roads; I'm not sure what the requirements for a licence are now. Such an independent test would have been able to show this car wasn't ready for the streets.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 23, 2018, 01:02:54 pm
Don't even get me started on how stupidly foolish it is to solely rely on GPS as a means of navigation.
Do you have any source that can corroborate that statement? Self driving cars do not solely rely on GPS. Well, at least the google cars don't (it's mentioned briefly in the video in my previous post).
From what I have seen, Google has relied solely on GPS and maps for what most of us would consider navigation, i.e. "what route do I take?". They then use radar and lidar for short-range fine tuning, and obvious things like collision avoidance. I assume by this time they have started working temporary road-sign recognition into the mix, to deal with roadworks. The signing at roadworks tends to be pretty weak and haphazard, though.
No, they mention specifically in the video I linked that they do not rely solely on GPS. That wouldn't be possible, since GPS isn't accurate enough and the cars have to be able to drive even when there is poor GPS reception (which happens often in cities, e.g. in "urban canyons" and tunnels), or if the sensor fails. Of course, it would be stupid not to use GPS as one additional sensor, but the GPS data is used together with all the other sensor data to determine where the car is on the internal maps, and then the internal maps are used for navigation and route planning. (GPS can only give you a location; it can't help you find a route. For that you need a map, and using a map to plan your route is exactly what humans do as well, so I don't see how that could be considered stupid.)
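The "GPS as one sensor among many" point can be illustrated with a toy 1D inverse-variance fusion (real localisation uses full Kalman or particle filters against the map, but the principle is the same):

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two position estimates,
    e.g. noisy GPS vs. a precise lidar-to-map match.  The more
    certain sensor dominates, but neither is relied on alone."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# GPS: 105.0 m with 5.0 m^2 variance; map match: 100.0 m with 0.1.
pos, var = fuse(105.0, 5.0, 100.0, 0.1)
print(round(pos, 2))  # 100.1: the precise map match dominates
```

Note the fused variance is smaller than either input's, which is exactly why combining sensors beats any single one, GPS included.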
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 23, 2018, 01:24:59 pm
Don't even get me started on how stupidly foolish it is to solely rely on GPS as a means of navigation.
Do you have any source that can corroborate that statement? Self driving cars do not solely rely on GPS. Well, at least the google cars don't (it's mentioned briefly in the video in my previous post).
From what I have seen, Google has relied solely on GPS and maps for what most of us would consider navigation, i.e. "what route do I take?". They then use radar and lidar for short-range fine tuning, and obvious things like collision avoidance. I assume by this time they have started working temporary road-sign recognition into the mix, to deal with roadworks. The signing at roadworks tends to be pretty weak and haphazard, though.
No, they mention specifically in the video I linked that they do not rely solely on GPS; that wouldn't be possible since GPS isn't accurate enough and the cars have to be able to drive even when there is poor GPS reception (which happens often in cities, e.g. "urban canyons" and tunnels), or if the sensor fails. Of course, it would be stupid not to use GPS as one additional sensor, but the GPS data is used together with all other sensor data to determine where the car is on the internal maps, and then you use the internal maps for navigation and route planning. (GPS can only give you a location of course, it can't help you find a route, for that you need a map, and using a map to plan your route is exactly what humans do as well, so I don't see how that could be considered stupid).
To be clear, do you think anything you said is in conflict with what I said?

The latest videos I've seen from Google still seem to focus on their high res maps being correlated with lidar and radar images to fine tune the GPS guidance. I still can't find much about how they deal with a world they didn't expect, like roadworks or a massive crash blocking the way.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 23, 2018, 01:39:22 pm
... industry doesn't abandon technology.  It simply works out what went wrong and then takes steps to fix it.

That is probably the rational response to this incident.

Yep. Find out what happened, fix it, make it so it can't happen again. Continue as before.

The only problem with this is the lawyers. If 1000 ordinary cars kill people then it's just a thing. If 1 (one) autonomous car kills somebody then it's a potential class-action lawsuit!

We've already seen this with Teslas. Hundreds of cars go up in flames every day but if one Tesla catches fire then it's headline news.

Batteries are dangerous!!!  :scared:

We know that air crashes typically have more than one cause.

Not in the early days.  :popcorn:

The airline failure model isn't really the same as cars: if there's a mechanical failure on an airliner you can't simply pull over and call a tow truck. It has to continue flying and land safely.

With cars it's all about the software. It's complex software but fortunately for us it's very simulatable. Inside a computer we can generate bad situations for autonomous cars all day long and I'm sure the people at google are going wild dreaming up crazy situations to test the car's ability.

Simulated miles at this stage are basically worthless. Simulation is a fantastic way to get to a prototype, but real world testing is about finding what stimuli your simulations missed. To continue building miles on the simulators is more of a publicity stunt than engineering.

Strongly disagree. Simulation is the only way the engineering of these cars can progress.

The most important part of the "simulation" will be regression testing. Every change to the car's software will create a trip to the simulator to verify it still works in all their test situations. This is completely impractical with real cars.

You can also create situations far more extreme than in real life. If I were Google I'd have it set up like a video game where people could go at lunchtime to try and create a situation that beats the software.

(random clowns falling from the sky!)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 23, 2018, 01:44:08 pm
Simulated miles at this stage are basically worthless. Simulation is a fantastic way to get to a prototype, but real world testing is about finding what stimuli your simulations missed. To continue building miles on the simulators is more of a publicity stunt than engineering.

Strongly disagree. Simulation is the only way the engineering of these cars can progress.

The most important part of the "simulation" will be regression testing. Every change to the car's software will create a trip to the simulator to verify it still works in all their test situations. This is completely impractical with real cars.

You can also create situations far more extreme than in real life. If I were Google I'd have it set up like a video game where people could go at lunchtime to try and create a situation to beat the software.

(clowns falling from the sky!)
I agree with what you said. When I said simulated miles I meant the endless clocking up of miles is worthless. Steadily adding missed test cases to the simulation suite, and then simulating for regression testing is definitely the right approach for practically any kind of engineering development.
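The regression-testing workflow described above can be put in concrete (if toy) terms. Everything below is a hypothetical sketch, not any real AV test suite: each scenario is a recorded or invented situation, and every software change must re-pass the whole accumulated suite before release.

```python
# Illustrative sketch only: a toy scenario-regression harness for a driving
# controller. All names, numbers, and the pass criterion are hypothetical.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    obstacle_distance_m: float   # distance to obstacle when first sensed
    speed_mps: float             # vehicle speed at that moment

def brakes_in_time(s: Scenario, decel_mps2: float = 6.0,
                   latency_s: float = 0.5) -> bool:
    """True if full braking, begun after `latency_s` of system latency,
    stops the car before the obstacle: d = v*t + v^2 / (2a)."""
    stopping = s.speed_mps * latency_s + s.speed_mps ** 2 / (2 * decel_mps2)
    return stopping < s.obstacle_distance_m

# The suite only ever grows: each missed real-world case becomes a new entry.
SUITE = [
    Scenario("pedestrian_far", obstacle_distance_m=60.0, speed_mps=17.0),
    Scenario("pedestrian_near", obstacle_distance_m=30.0, speed_mps=17.0),
]

def run_suite(suite):
    """Run every scenario; any previously-passing case that now fails
    is a regression that blocks the release."""
    return {s.name: brakes_in_time(s) for s in suite}
```

Running thousands of these per code change is trivial on a computer and completely impractical with real cars, which is the point being made.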
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: FrankBuss on March 23, 2018, 01:47:38 pm
Someone who is an adviser for the robotics cars team at Google says, there is a rumour that LIDAR was turned off (search for "rumour", but the rest of the analysis is very good as well) :

http://ideas.4brad.com/it-certainly-looks-bad-uber (http://ideas.4brad.com/it-certainly-looks-bad-uber)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: onesixright on March 23, 2018, 01:52:58 pm
Someone who is an adviser for the robotics cars team at Google says, there is a rumour that LIDAR was turned off (search for "rumour", but the rest of the analysis is very good as well) :

Don't these cars perform a self test, and simply don't drive (at-least warn) when systems are turned-off / not working ?

Checklist comes to mind  ::)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 23, 2018, 02:06:58 pm
Looking at the video again, I find it odd that the passenger looks up and looks startled, but his eyes are looking to the left of center, with the passenger being seated on the left side of the car. The exterior shot (whatever the true video quality really is) seems to show the pedestrian only when the pedestrian is at the center, making me think that the passenger perhaps saw the pedestrian to the left of the car some time before the car hit the pedestrian.

This is precisely the point I made in reply 26 where I took this still from the interior video.

(https://www.eevblog.com/forum/blog/eevblog-1066-uber-autonomous-car-fatality-how/?action=dlattach;attach=405850;image)

It does seem their reaction is consistent with noticing the pedestrian well before impact.  I am not at all surprised by their lack of evasive action because of the points I outlined here:

 a) The "safety driver" would need to have observed the hazard before impact (which I think they may have). 
 b) They would immediately expect the AV to have responded and they would hesitate.
 c) When the AV hadn't responded, they then would have had to remember that they had the ability to take control.
 d) They then would have been challenged by taking the decision to do so.
 e) Once they had done that, they would have had to "find" the controls - the steering wheel and brake pedal as a minimum, since their hands (certainly) and feet (possibly) were not in normal driver position.
 f) They then would apply whatever action they chose.

For a normal "hands on" driver, only steps a) and f) are involved - the others are not automatic steps and they will take a finite time to be processed.


This really makes me want to find out what happened with the tech.
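The latency argument above is easy to put in rough numbers. All stage times below are invented purely for illustration; the point is that the extra steps b) to e), which an engaged driver never performs, translate into tens of metres travelled at road speed.

```python
# Hypothetical stage times for the a)-f) chain above. Only the a) and f)
# stages apply to a normal "hands on" driver; b)-e) are the safety-driver
# extras. All values are made up for illustration.

STAGES_S = {
    "a_observe_hazard":       0.7,  # normal driver has this one too
    "b_wait_for_av":          1.0,  # expecting the car to react first
    "c_recall_can_take_over": 0.5,
    "d_decide_to_intervene":  0.5,
    "e_find_controls":        0.8,  # hands/feet not in driving position
    "f_apply_action":         0.3,  # normal driver has this one too
}

def extra_distance_m(speed_kmh: float) -> float:
    """Distance travelled during the extra steps b)-e) alone,
    i.e. the penalty versus an engaged driver."""
    extra_s = sum(t for k, t in STAGES_S.items() if k[0] in "bcde")
    return speed_kmh / 3.6 * extra_s
```

With these invented figures the extra steps add 2.8 s, which at 60 km/h is roughly 47 m of additional travel before any control input even begins.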
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 23, 2018, 02:15:57 pm
This is precisely the point I made in reply 26 where I took this still from the interior video.

(https://www.eevblog.com/forum/blog/eevblog-1066-uber-autonomous-car-fatality-how/?action=dlattach;attach=405850;image)

It does seem their reaction is consistent with noticing the pedestrian well before impact.

I'd say that reaction is either from emergency braking or from the impact. Unfortunately the video is cut off right at that frame so I can't decide which (maybe both?)

The whole way the 'interior' video is cut makes me think Uber is hiding something, eg. Did the car brake at all? :popcorn:
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 23, 2018, 02:17:34 pm
It does seem their reaction is consistent with noticing the pedestrian well before impact.  I am not at all surprised by their lack of evasive action because of the points I outlined here:

 a) The "safety driver" would need to have observed the hazard before impact (which I think they may have). 
 b) They would immediately expect the AV to have responded and they would hesitate.
 c) When the AV hadn't responded, they then would have had to remember that they had the ability to take control.
 d) They then would have been challenged by taking the decision to do so.
 e) Once they had done that, they would have had to "find" the controls - the steering wheel and brake pedal as a minimum, since their hands (certainly) and feet (possibly) were not in normal driver position.
 f) They then would apply whatever action they chose.

For a normal "hands on" driver, only steps a) and f) are involved - the others are not automatic steps and they will take a finite time to be processed.

This really makes me want to find out what happened with the tech.
I agree. A safety driver who is supposed to be the measure of last resort is really in a horrible position. In a standard Volvo XC90 they would have applied the brakes early, whether the autonomous braking system had started to or not.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 23, 2018, 02:19:30 pm
I have a question for the LIDAR experts:

What happens when there's many cars using LIDAR simultaneously?

The world will be flooded with LIDAR dots, do LIDAR scanners interfere with each other?

I'm assuming the answer is "no" otherwise it wouldn't be considered for self-driving cars but I fail to see how it can't interfere. The real world isn't made of perfect retroreflecting surfaces.
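One back-of-envelope way to look at it: a scanning lidar only listens for a very short range-gated window after each pulse, and only in the narrow direction its receiver is currently pointing, so a foreign pulse has to coincide in both time and angle to corrupt a return. The numbers below are assumed round figures for a generic spinning unit, not any real product's specs.

```python
# Back-of-envelope estimate of lidar-on-lidar interference. All figures
# are assumptions for a generic spinning lidar, not a real device.

C = 3.0e8  # speed of light, m/s

def expected_stray_hits(range_m: float, other_pulse_rate_hz: float,
                        rx_fov_deg: float) -> float:
    """Expected number of foreign pulses per shot that (a) arrive inside
    our listening window and (b) come from the narrow azimuth our receiver
    is looking at, assuming the other unit's timing and pointing are
    uncorrelated with ours."""
    listen_window_s = 2 * range_m / C          # round-trip time we wait
    timing_overlap = other_pulse_rate_hz * listen_window_s
    angular_overlap = rx_fov_deg / 360.0       # crude: uniform in azimuth
    return timing_overlap * angular_overlap
```

With, say, a 200 m range gate, a neighbour firing 300k pulses/s, and a 0.2° receive field, the expected corruption is on the order of 2e-4 per shot: rare, isolated bad points that downstream filtering can reject as noise rather than systematic jamming. That may be part of why it's considered workable, though real mitigations (pulse coding, timing jitter) go further.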

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: FrankBuss on March 23, 2018, 02:21:50 pm
Someone who is an adviser for the robotics cars team at Google says, there is a rumour that LIDAR was turned off (search for "rumour", but the rest of the analysis is very good as well) :

Don't these cars perform a self test, and simply don't drive (at-least warn) when systems are turned-off / not working ?

Checklist comes to mind  ::)

If the rumour is right, they turned it off intentionally to test operation with just the camera and radar.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 23, 2018, 02:28:09 pm
I agree with what you said. When I said simulated miles I meant the endless clocking up of miles is worthless.

Checking for slowly accumulating memory leaks/fragmentation...?  :popcorn:

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 23, 2018, 02:29:08 pm
I'd say that reaction is either from emergency braking or from the impact. Unfortunately the video is cut off right at that frame so I can't decide which (maybe both?)

What happens after the cut is irrelevant to the point I was making.

Two points:
 1. The person is clearly looking to the left of the car.  The impact had not yet happened.
 2. There is no body movement in the seat that braking or evasive action would cause.  The car's velocity did not change during the video.


The whole way the 'interior' video is cut makes me think Uber is hiding something,
The cut may not be for that reason.  It may have been to prevent further stress on the safety driver.

Quote
eg. Did the car brake at all? :popcorn:
THAT, however, is a very good question.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Rasz on March 23, 2018, 02:32:21 pm
If the rumour is right, they turned it off intentionally to test operation with just the camera and radar.

Tesla drove for over a year on that combination using repurposed mobileye line following tech, with occasional driver decapitation :'(
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 23, 2018, 02:36:18 pm
To be clear, do you think anything you said is in conflict with what I said?

The latest videos I've seen from Google still seem to focus on their high res maps being correlated with lidar and radar images to fine tune the GPS guidance. I still can't find much about how they deal with a world they didn't expect, like roadworks or a massive crash blocking the way.
When I hear GPS, I think of the satellite system and a receiver providing coordinates and time, not one of those talking navigator units with built-in maps etc., although they might colloquially be called simply "GPS". Those units rely solely on GPS for localisation and often have outdated maps and crappy software. That is not how these cars work, although no doubt the cars have path planning algorithms and maps of course.

There really isn't anything "solely" about the use of the GPS [in google's self driving cars]. The cars can use the other sensors to precisely predict their position and drive for long periods without GPS [they have to in order to be able to drive in tunnels and cities where GPS reception is not always available]. (If the other sensors alone are more precise than the GPS, that in itself indicates they don't need the GPS.) I'm pretty sure they can drive entirely without GPS, but they might have problems finding their initial location after a reboot without the GPS, if the car was moved from its last known position while turned off, or something like that. I don't know enough details to speculate about that though.

Being able to read road signs, plan a new route if a road is blocked, or handle a situation where the map doesn't agree with reality doesn't really have anything to do with GPS. I don't know how they handle that actually, but it's my impression they do not rely only on static data (in fact it's hinted at in that video as well, although indirectly). If you know of a source with details about that it would be much appreciated. Finding alternative routes is something all route planning software does, though, so I can't imagine it would be a deal-breaker.
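The fusion idea described above (dead-reckon continuously, blend GPS in only when a fix is available) can be sketched in one dimension. This is a toy illustration with made-up weights, not how any real AV localiser works; real systems fuse lidar/map matching as well and estimate full pose with a Kalman-style filter.

```python
# Toy 1-D localisation step: predict from odometry, correct toward GPS
# when a fix exists. The blend weight is arbitrary, for illustration only.

from typing import Optional

def fuse(position: float, velocity: float, dt: float,
         gps_fix: Optional[float], gps_weight: float = 0.2) -> float:
    """One update step of a crude complementary filter."""
    predicted = position + velocity * dt      # dead reckoning from odometry
    if gps_fix is None:                       # tunnel / urban canyon: no GPS
        return predicted                      # keep driving on other sensors
    # GPS available: nudge the dead-reckoned estimate toward the fix,
    # trusting it only partially because it is noisy.
    return (1 - gps_weight) * predicted + gps_weight * gps_fix
```

The key property, matching the post above, is that losing GPS degrades nothing immediately: the estimate just drifts slowly until the next fix (or, in a real car, the next lidar/map correction) pulls it back.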
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 23, 2018, 02:58:22 pm
The whole way the 'interior' video is cut makes me think Uber is hiding something,
The cut may not be for that reason.  It may have been to prevent further stress on the safety driver.

a) Is somebody forcing them to watch it in a loop?

b) Having that face posted all over the Internet isn't stressful?

I suspect that the only reason they posted the interior shot at all was to show that the "safety driver" wasn't watching the road. What other reason is there for releasing it?


Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: AF6LJ on March 23, 2018, 03:44:40 pm
I wouldn't get near Uber, they don't do background checks on their drivers.
Some of us have standards.
 :popcorn:
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 23, 2018, 05:32:57 pm

The airline failure model isn't really the same as cars: if there's a mechanical failure on an airliner you can't simply pull over and call a tow truck.

Well, what is being investigated here is the reasons that a fatal accident occurred.  There were multiple factors that enabled this accident to happen, including the bicyclist crossing the road unsafely...  everyone had a part to play here. 

The issues around safety are reminiscent of the issues around data security: adaptive security controls at various layers are essential to securing systems in today's environment.

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: darrellg on March 23, 2018, 06:01:35 pm
If anyone has a Toyota with TSS-P - would their car have avoided this?  TSS-P has pedestrian avoidance.
I had a distracted-driving accident (100% my fault) and someday I want to get a car with AEB, but Toyota seems to be the only one I'll be able to get.
Some Honda models are available with Honda Sensing, which has the Collision Mitigation Braking System and is supposed to brake when it senses pedestrians. I haven't "tested" mine yet.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ez24 on March 23, 2018, 06:06:00 pm
Was he paying attention and doing his job responsibly in this situation?  If the babysitter is found to have not been suitable for the task (right personality, right training), this is an area where Uber is vulnerable too.

FYI  Both the victim and Uber driver were female.  The victim was homeless and the driver was a felon.  I think both of these facts will make the investigation more difficult, especially if the victim does not have any family to sue.  I feel really sorry for her.

Does anyone know of any official site that would be good to go to for the ongoing facts?  Who will be the lead in the investigation - I hope it is not the police department.



Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 23, 2018, 06:08:32 pm
Some Honda models are available with Honda Sensing, which has the Collision Mitigation Braking System and is supposed to brake when it senses pedestrians. I haven't "tested" mine yet.
For the current European Honda CR-V the sensing package is a costly option, which you can only add to the top spec version. With the new CR-V, appearing in the summer, sensing will be standard across the range. I find that an interesting data point for how much collision avoidance features are working their way into the market. I guess the NCAP rating system is a significant factor pushing this. They can't keep those 5 star ratings on new models if they don't have at least some automated safety kit.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 23, 2018, 06:13:12 pm
Was he paying attention and doing his job responsibly in this situation?  If the babysitter is found to have not been suitable for the task (right personality, right training), this is an area where Uber is vulnerable too.

FYI  Both the victim and Uber driver were female.  The victim was homeless and the driver was a felon.  I think both of these facts will make the investigation more difficult, especially if the victim does not have any family to sue.  I feel really sorry for her.

Does anyone know of any official site that would be good to go to for the ongoing facts?  Who will be the lead in the investigation - I hope it is not the police department.
Who but the desperate would take a job babysitting an autonomous car? It has to be one of the most boring, soul destroying jobs around. It is implausible that whoever you put behind that wheel, with nothing much to do all day but look ahead, is going to maintain full attention hour after hour.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 23, 2018, 06:18:26 pm
If anyone has a Toyota with TSS-P - would their car have avoided this?  TSS-P has pedestrian avoidance.
I had a distracted-driving accident (100% my fault) and someday I want to get a car with AEB, but Toyota seems to be the only one I'll be able to get.
Some Honda models are available with Honda Sensing, which has the Collision Mitigation Braking System and is supposed to brake when it senses pedestrians. I haven't "tested" mine yet.
Found a video of such a system being tested:
http://www.bosch-presse.de/pressportal/de/en/emergency-braking-in-two-blinks-of-an-eye-121600.html (http://www.bosch-presse.de/pressportal/de/en/emergency-braking-in-two-blinks-of-an-eye-121600.html)
It's not clear at what speed the car is driving and the max speed it's able to stop in time, but visibility clearly doesn't matter to the radar here.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 23, 2018, 06:25:56 pm
Was he paying attention and doing his job responsibly in this situation?  If the babysitter is found to have not been suitable for the task (right personality, right training), this is an area where Uber is vulnerable too.

FYI  Both the victim and Uber driver were female.  The victim was homeless and the driver was a felon.  I think both of these facts will make the investigation more difficult, especially if the victim does not have any family to sue.  I feel really sorry for her.

Does anyone know of any official site that would be good to go to for the ongoing facts?  Who will be the lead in the investigation - I hope it is not the police department.
Who but the desperate would take a job babysitting an autonomous car? It has to be one of the most boring, soul destroying jobs around. It is implausible that whoever you put behind that wheel, with nothing much to do all day but look ahead, is going to maintain full attention hour after hour.
Yeah, and one reason you are sitting there is just to take the blame if something goes wrong... pretty thankless job.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 23, 2018, 06:38:28 pm
Found a video of such a system being tested:
http://www.bosch-presse.de/pressportal/de/en/emergency-braking-in-two-blinks-of-an-eye-121600.html (http://www.bosch-presse.de/pressportal/de/en/emergency-braking-in-two-blinks-of-an-eye-121600.html)
It's not clear at what speed the car is driving and the max speed it's able to stop in time, but visibility clearly doesn't matter to the radar here.

Here's another one:
https://www.youtube.com/watch?v=BYY7OfQ4-5A (https://www.youtube.com/watch?v=BYY7OfQ4-5A)

And another:
https://www.youtube.com/watch?v=PzHM6PVTjXo (https://www.youtube.com/watch?v=PzHM6PVTjXo)

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ez24 on March 23, 2018, 06:38:34 pm
Some Honda models are available with Honda Sensing, which has the Collision Mitigation Braking System and is supposed to brake when it senses pedestrians. I haven't "tested" mine yet.

In my area (So Calif) the Honda Fit EV also has AEB (about $40k!). I believe Edmunds rates Honda Sensing as poor. One thing about AEB is that it is not a standard. For example, the Sentra's AEB tops out at 50 MPH. Toyota offers the most models with full AEB. In general you have to read the owner's manual to get the facts. The dumbest people to ask are the salesmen; the technology is so new the salespeople cannot keep up with it.

Consumer Reports is the best place to start looking for AEB. My bet is a Toyota with TSS-P would not have hit the victim; TSS-P is full-range AEB with pedestrian avoidance. Someday I hope to get a car with AEB.

https://www.consumerreports.org/car-safety/cars-with-advanced-safety-systems/ (https://www.consumerreports.org/car-safety/cars-with-advanced-safety-systems/)

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: darrellg on March 23, 2018, 06:52:41 pm
Some Honda models are available with Honda Sensing, which has the Collision Mitigation Braking System and is supposed to brake when it senses pedestrians. I haven't "tested" mine yet.
For the current European Honda CR-V the sensing package is a costly option, which you can only add to the top spec version. With the new CR-V, appearing in the summer, sensing will be standard across the range. I find that an interesting data point for how much collision avoidance features are working their way into the market. I guess the NCAP rating system is a significant factor pushing this. They can't keep those 5 star ratings on new models if they don't have at least some automated safety kit.

I have a US 2018 Honda Civic Hatchback, and it's available at all trim levels as a $1000 option. I didn't buy it for the emergency braking, but for the adaptive cruise control.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 23, 2018, 07:06:20 pm
Found a video of such a system being tested:
http://www.bosch-presse.de/pressportal/de/en/emergency-braking-in-two-blinks-of-an-eye-121600.html (http://www.bosch-presse.de/pressportal/de/en/emergency-braking-in-two-blinks-of-an-eye-121600.html)
It's not clear at what speed the car is driving and the max speed it's able to stop in time, but visibility clearly doesn't matter to the radar here.

Here's another one:
[snip]

And another:
[snip]
The second one is pretty interesting: the Volvo is doing great but it uses both a camera and a single radar sensor, while the Mercedes one uses three radar sensors but still fails miserably. According to this http://ideas.4brad.com/it-certainly-looks-bad-uber (http://ideas.4brad.com/it-certainly-looks-bad-uber) that FrankBuss found, radar sensors don't have enough resolution to be reliable on their own. And apparently Uber might have turned off the lidar to test how their system worked without it.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: tpowell1830 on March 23, 2018, 08:19:17 pm
Found a video of such a system being tested:
http://www.bosch-presse.de/pressportal/de/en/emergency-braking-in-two-blinks-of-an-eye-121600.html (http://www.bosch-presse.de/pressportal/de/en/emergency-braking-in-two-blinks-of-an-eye-121600.html)
It's not clear at what speed the car is driving and the max speed it's able to stop in time, but visibility clearly doesn't matter to the radar here.

Here's another one:
[snip]

And another:
[snip]

One of these videos is from 2010 and the other is from 2013. I wonder if there are more recent improvements? The fellow with the Tesla video on one of the previous pages shows the Tesla several hundred feet back when the alarm goes off.

The video of the Uber accident seems to show that the car did not alarm or brake at all, but it would be nice to see video from the actual system camera rather than the dinky dashcam video. If the vehicle had even started braking, the nose would have plunged down, as you see in these videos.
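For a sense of why even last-moment braking matters, here is a toy stopping calculation. The speed and deceleration figures are assumptions for illustration, not figures from the investigation.

```python
# Illustrative only: impact speed remaining after braking over a given
# distance, from v^2 = v0^2 - 2*a*d (floored at zero). The deceleration
# default is an assumed dry-road figure, not a measured one.

import math

def impact_speed_mps(v0_mps: float, brake_distance_m: float,
                     decel_mps2: float = 7.0) -> float:
    """Speed at the obstacle after full braking over `brake_distance_m`."""
    v_sq = v0_mps ** 2 - 2 * decel_mps2 * brake_distance_m
    return math.sqrt(max(v_sq, 0.0))
```

With these assumed numbers, a car at 17 m/s that brakes hard for even the last 10 m arrives at roughly 12 m/s instead of 17, and with 30 m of braking it stops entirely: any braking at all should be visible as nose dive and a large reduction in impact energy, which is why its apparent absence in the video is so telling.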
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: gearshredder on March 23, 2018, 08:54:45 pm
In the video the cyclist appears from the darkness at the very last moment, as though the dipped headlights were dipped way too far. If a human were driving on that dark road they would have had their high beams on, and it looks like they would have seen the cyclist reasonably early, when it was still realistic to stop. I don't think the "human driver would have hit this cyclist anyway" argument, which I have seen, holds water.

It probably wouldn't have mattered due to that extra reaction time by the driver, but I agree, the headlights look poorly adjusted. You can't even see the guy's feet before it's too late to stop. It's a HID system with projectors; they'd have no problems seeing them if the lights were properly set. Looks like the factory has them aimed too low. I have mine set a lot higher, just below the mirrors on a small car. I retrofitted some Morimoto Mini d2s projectors into my 93 Impreza.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: taydin on March 23, 2018, 09:12:58 pm
As a safety critical system, that car MUST HAVE and probably does have multiple means of detecting an obstruction (LIDAR, LASER, thermal camera, maybe others). And the design of the sensor grid would be such that a fault in one sensor won't bring down the other sensors. Based on these assumptions, I'm concluding that AT LEAST ONE sensor system has reacted to the pedestrian's presence and reported it to the main computer.

Whatever happened must have happened in the algorithm of the main computer that looks at all the sensor data and makes car control decisions.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 23, 2018, 09:16:23 pm
It probably wouldn't have mattered due to that extra reaction time by the driver but I agree, the headlights look poorly adjusted.

If only there was a way to automatically select high beams based on ambient lighting and oncoming traffic.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: r3bers on March 23, 2018, 09:26:02 pm
All kangaroos in AU must have GPS now...
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: maginnovision on March 23, 2018, 09:27:43 pm
It probably wouldn't have mattered due to that extra reaction time by the driver but I agree, the headlights look poorly adjusted.

If only there was a way to automatically select high beams based on ambient lighting and oncoming traffic.

Pfft, maybe in 100 years. If Uber and the safety driver get off scot-free, something bad is happening with this situation. I can't believe the POLICE are the ones releasing such a deceptive video. Also the driver is a man; long hair and manboobs don't change that because you add an A to your name.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: pickle9000 on March 23, 2018, 09:56:40 pm
I'd like to see some laws giving the cars some minimum standards. First, LIDAR is a critical system: no LIDAR, pull over and stop the vehicle. Then decision times on critical events, and so on. This could be a big selling point as well.

As for the driver: sitting behind a wheel for, say, 2-3 hours with nothing to do and expecting him to react to a situation like this? Can't see it. I for one would not expect him to have any kind of reaction time close to a driver operating a real vehicle. He has little situational awareness.

If there was a critical system being tested (disabled) was the driver made aware so he could pay more attention?

There is a great deal unknown.

 
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 24, 2018, 03:27:20 am
The whole way the 'interior' video is cut makes me think Uber is hiding something,
The cut may not be for that reason.  It may have been to prevent further stress on the safety driver.

a) Is somebody forcing them to watch it in a loop?

b) Having that face posted all over the Internet isn't stressful?

I suspect that the only reason they posted the interior shot at all was to show that the "safety driver" wasn't watching the road. What other reason is there for releasing it?

We could go around and around in circles over this - so I'm not going to discuss the motives in releasing of the interior video or when it was cut.


What I AM interested in addressing is the actual, defined, specific role of the "safety driver".

There have been FAR TOO MANY suggestions that this "safety driver" should have prevented this accident from happening and because they didn't, they are a contributor to fault.  Such opinions are, in my opinion (and the opinion of some others) absolutely flawed, completely unfair and totally wrong.  It is completely unrealistic to expect someone who is disengaged from the active role of driving to have been able to respond in time...
... would take a job babysitting an autonomous car? It has to be one of the most boring, soul destroying jobs around. It is implausible that whoever you put behind that wheel, with nothing much to do all day but look ahead, is going to maintain full attention hour after hour.

So ... if that be the case, then what IS the role of the "safety driver"?


In response to that question, I would like to ask the following - noting that NOBODY has raised this question so far:  As we have determined, the car DID STOP at some point - but why? 
 - Was it because an impact sensor triggered that response? 
 - Was it because a delay in the sensor processing had caught up and commanded the vehicle to stop?
 -OR-
 - Was it because the vehicle continued driving as if nothing had happened and the safety driver took control and brought it to a stop?

Considering the vehicle did not take ANY identifiable action (from the video supplied) who's to say that it was ever going to stop because of the collision?

This, I believe, is a quite reasonable expectation for a safety driver and if that is the case, they did exactly what was expected of them, preventing any further danger.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 24, 2018, 03:52:59 am
So ... if that be the case, then what IS the role of the "safety driver"?

Scapegoat?  :popcorn:

(and that's why they employ felons)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Rasz on March 24, 2018, 07:02:05 am
It probably wouldn't have mattered given the extra reaction time needed by the driver, but I agree, the headlights look poorly adjusted. You can't even see the guy's feet.

we have been over this: nothing wrong with the car's lights, nothing wrong with the road lighting. you are basing your opinion on a bad $30 dashcam mounted by a company that hires criminals to drive people around.
what the road actually looks like to humanoids: https://twitter.com/thekaufaz/status/976686336877871104


So ... if that be the case, then what IS the role of the "safety driver"?

rubber stamped bureaucratic requirement
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Marco on March 24, 2018, 08:44:53 am
So ... if that be the case, then what IS the role of the "safety driver"?

AFAICS risk minimization (preventing run away due to bugs, ramming firetrucks, killing pedestrians) and handling edge cases the autopilot can't handle.

You could argue that they can't always handle the risks when low latency intervention is necessary, but that's no reason for Uber not to make them try their best. For that they need strict vigilance control systems. Not the pussy stuff Tesla&Co put in their cars, but ones which start being annoying immediately. Might give false positives when the driver scratches his nose or something, but false positives aren't really a problem.

If Uber doesn't want to make partial low latency accident prevention part of their role then someone needs to make them. Making the safety driver more comfortable isn't worth lives, there will always be takers for that job regardless. There's worse jobs, far worse.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: TheDane on March 24, 2018, 09:29:15 am
(https://media.rs-online.com/t_large/R7467154-01.jpg)

If the driver can't (or won't) use their feet to brake the vehicle - it should be mandatory to be able to stop it using 'only' hands, and the switch(es) should be placed accordingly.
Naturally it costs money, and takes a brain and time to implement.
Perhaps the people in charge should be the ones doing the walk-in-front-of-car-when-testing to ensure proper care has been taken  :o

Maybe also a pressure switch under the driver's seat, so a person jumping in the seat will engage the safety system (or at least arm it further). I know this is already mandatory on lawnmowers - those moving blades have cut people into small pieces when the machine tipped over on a slope and fell upon them.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: JoeN on March 24, 2018, 09:44:26 am
Do these self-driving cars record and store all sensor data over reasonable timeframes so the raw data can be played back against the software to see where the algorithms went wrong and to prove or disprove the quality of the data collection capabilities of the sensors themselves?  This is a teachable moment, I hope they have all that data.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 24, 2018, 10:32:43 am
So do I.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Towger on March 24, 2018, 10:49:11 am
LIDAR is useless in the real world; for several years I have been wondering why Google etc rely on it so much.  Maybe useless is too strong a word, but it does not work in the rain, which is why all the testing is done in very dry areas.


There was recent interview on Embedded FM with a firmware developer (day job) of  LIDAR systems.  I'll try and add the link when I get to a proper computer.  It does not appear to be in any show notes.


I also have to say the released video looks doctored.  Way too dark.  Street lighting?  A basic dash cam is better.  An XC90 is a high-end luxury car with HID lighting, probably self-levelling.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: BrianHG on March 24, 2018, 11:59:08 am
LIDAR is useless in the real world, for several years I
That's just crummy single beam laser scanning lidar.
Newer 3D lidar is coming which scans the 3D environment in front of you like a high-speed 2D camera.  But current costs, and the owners of the patents, make it prohibitive for today; it's cheaper just to insure up and gamble with existing tech.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: BrianHG on March 24, 2018, 12:10:48 pm
As for all the comments and arguments here: forget the BS dashcam footage, which may have been deliberately degraded, until the accident scene is replicated by forensics. That means an equivalently dressed pedestrian, in the same place, walking at the same speed; an equivalent car with equivalent headlights, travelling at the same speed, in the same location; and both a human-eye-quality camera and the same camera used by the self-driving car's AI, re-filmed without heavy compression by an investigating third party with no ties to Uber or any self-driving car firm. Until then, absolutely everything observed to date, and every other driver doing their own filming on the same road and posting it on YouTube, only serves to spread BS, and we will never be sure of anything about the true visibility at that accident.

If proper third party replication isn't done, this accident will get buried away with time, which is what I expect to happen.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 24, 2018, 12:15:19 pm
Do these self-driving cars record and store all sensor data over reasonable timeframes so the raw data can be played back against the software to see where the algorithms went wrong and to prove or disprove the quality of the data collection capabilities of the sensors themselves?  This is a teachable moment, I hope they have all that data.
Yes, that's the main task of the test cars: to collect data.

LIDAR is useless in the real world; for several years I have been wondering why Google etc rely on it so much.  Maybe useless is too strong a word, but it does not work in the rain, which is why all the testing is done in very dry areas.
Lidar is the best sensor at the moment, without competition. It gives you accurate 3D data which is easier to analyse and interpret, and it works well regardless of lighting conditions. All types of sensors have problems. Vehicle radar has very poor resolution. Cameras (eyes) don't work in the dark or in fog, and work poorly in heavy rain and snow. Lidar works poorly in heavy rain and snow and has limited range. But if you combine data from many different types of sensors they complement each other, so combining many types of sensors is the best option from a technical and safety perspective.

The problem with lidar is that it's expensive, which is why many companies want to get rid of it. After all, humans manage with only two cameras on a stick, so it should be possible for a robot car to do the same... in theory. But computer vision algorithms aren't reliable enough yet, so you still really need lidar today. Anyone who's played around with computer vision should have realised that, imho, and sadly it's evident from the Tesla decapitation accident and now possibly this Uber accident, if nothing else.

Waymo is already driving cars completely autonomously (no human in the car) so clearly their lidar equipped cars are very useful in the real world. Just because you can't use it in every situation and weather condition doesn't mean it's useless in the real world. Like everything else the tech will improve gradually and be able to handle more and more situations and locations.

In the beginning we will probably see them used in dry climates with good road infrastructure, working as taxis, buses or transports, i.e. where the vehicles operate on well-defined routes or within a well-defined area, and where you can have a command central that can quickly handle any unexpected situation and send out a tech if necessary. And as long as the safety statistics for the autonomous cars are better than the average human driver, that is a win for everyone.

I also have to say the released video looks doctored.  Way too dark.  Street lighting?  A basic dash cam is better.  An XC90 is a high-end luxury car with HID lighting, probably self-levelling.
Yes, it is a well lit street according to people who live there.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Uwe Quast on March 24, 2018, 12:25:05 pm
Viewing the last moment before the crash, it looks to me as if this was recorded from a camera at roof level.
It's a bit strange to see the white shoes (the lowest parts) suddenly appearing in the middle of the street, almost as if those yellowish lamp lights drastically reduced the dynamic range of the camera as long as they were visible.
As a German car driver, this dark field in the upper middle of the headlight illumination field looks to me like bad headlight alignment.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 24, 2018, 12:34:13 pm
I'm just speculating now, but assuming they have several cameras with different exposure levels (to get better dynamic range), this might be footage from the camera with the shortest exposure.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 24, 2018, 01:09:29 pm
FYI  Both the victim and Uber driver were female. 

Accepted, no disrespect to the women involved intended.

Who will be the lead in the investigation - I hope it is not the police department.

Apparently, the NTSB (National Transportation Safety Board), the organisation that normally investigates air accidents, will investigate this incident.
Those people conduct the highest level and most professional investigations...   but don't expect quick results.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 24, 2018, 01:16:53 pm
Who but the desperate would take a job babysitting an autonomous car? It has to be one of the most boring, soul destroying jobs around. It is implausible that whoever you put behind that wheel, with nothing much to do all day but look ahead, is going to maintain full attention hour after hour.

There are people that do boring, soul-destroying jobs...   for example, an electronics engineer trying to eke the last bit of performance out of an analog circuit...  so you are right, most people are probably not suitable for babysitting an autonomous car, just like most people are not interested in electronics.

The problem here might be that Uber did not take the babysitting job as seriously as they perhaps could have, and so ended up hiring someone who was not suitably resistant to boredom.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Decoman on March 24, 2018, 01:27:46 pm
I wonder if *maybe* there is some unethical testing going on with autonomous cars.

I remember a local news story from years ago about a paramedic who supposedly decided to go against a testing regime then in progress for a drug used in ambulances, injected into people in cardiac arrest (as I remember it). When performing life-saving tasks, the paramedic was supposed to simply open an envelope they carried with them, which would state whether to administer the drug that time or not. Obviously, the idea was to get statistical data on the efficacy of the drug, instead of doing the best to save the patient.

So, this makes me wonder if maybe they test autonomous cars, with limited capabilities to see what works best.

I have also been wondering whether manufacturers of vaccines are able to develop regional variants that, in a similar way, can be used for testing, which would be unethical if experimental, with people taking such a drug in good faith that it works as expected.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 24, 2018, 01:29:34 pm
The problem with lidar is that it's expensive, which is why many companies want to get rid of it. After all, humans manage with only two cameras on a stick, so it should be possible for a robot car to do the same... in theory. But computer vision algorithms aren't reliable enough yet, so you still really need lidar today. Anyone who's played around with computer vision should have realised that, imho, and sadly it's evident from the Tesla decapitation accident and now possibly this Uber accident, if nothing else.
FWIW, there are around 250 decapitation deaths under the truck in the US annually. And more than 1000 people seriously injured. All because in US they don't use underride side guards unlike in Europe.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Decoman on March 24, 2018, 01:37:13 pm
Btw, I think that if the passenger in an autonomous car under test is really supposed to react and, at best, try to prevent accidents by interfering with the car's movement, then presumably it would make the most sense for such an individual to have the experience of a driving instructor, who afaik can slow down the vehicle with the secondary set of pedals found in driving-school cars. I wonder if such autonomous cars are compatible with this type of interaction, with a human being able to slow the vehicle down on demand.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 24, 2018, 01:51:02 pm
LIDAR is useless in the real world; for several years I have been wondering why Google etc rely on it so much.  Maybe useless is too strong a word, but it does not work in the rain, which is why all the testing is done in very dry areas.
Lidar is the best sensor at the moment, without competition. It gives you accurate 3D data which is easier to analyse and interpret, and it works well regardless of lighting conditions. All types of sensors have problems.

I asked before but nobody replied so I'll ask again:

If there's several LIDAR scanners in the same place, do they interfere with each other?

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 24, 2018, 02:02:14 pm
Interesting article:

https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html (https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: nixxon on March 24, 2018, 03:37:23 pm
I have said many, many times that the technology is too immature to be viable. I live in a very high-traffic area, and as I drive down the freeway with 5 lanes full of traffic at road speeds (60-70 mph), I marvel at how all of the vehicles merge into other lanes, and how vehicles at varying speeds do not crash into each other, because humans can anticipate and adjust. Even when there are traffic cones and construction, the number of fender benders compared to the number of vehicles on a given street is minimal. The statistics tell the story of vehicle fatalities and, given the sheer number of vehicles on the road, the fatalities IMHO are extremely low.

I dare say that many may not grasp this concept because in city centers like where I live, the traffic is horrifying for the unskilled driver in such conditions. The roads, streets and highways in the US are huge compared to most countries, so I can't even imagine how autonomous driving could even work in countries where the roads and streets are tiny.

Although this film has been around a long time, when an autonomous car can do this, then it might be ready for public release.

If you are a bit squeamish about fast driving, do not watch this film.

https://vimeo.com/34039780

Admittedly, the streets were fairly empty in this film.

HAHA! Is this some 1960's footage taken from a 32 HP Citroën 2CV?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: jonovid on March 24, 2018, 03:49:43 pm
smart roads will save autonomous vehicles in my opinion.
this is NOT about solar roadways to melt snow  :bullshit: , but about smart road technology that works!  :-+
magnetic peg tracking or magnetic lane markers help autonomous vehicles find the lanes in fog
or snow - in bad weather. the magnetic pegs are set into the road surface at a known spacing.
passive infrared sensors for the monitoring of pedestrians, animals, cyclists, using street pole lights.
smart road street poles will not only support street lights, but also infrared sensors and tracking cameras.
only when autonomous vehicles are married to smart roads will vehicles be truly autonomous, & truly safe!
at the present time many smart road proposals incorporate solar panels underneath the road surface,  |O
but in my opinion electricity generation is not what smart roads are about! it's about supporting autonomous vehicles.
secondly, smart roads are about monitoring & conveying information to their users: wifi, dot matrix display signs
on street poles and/or embedded in the road surface where it makes sense to do so, and lane markers.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: nixxon on March 24, 2018, 04:06:24 pm
How did the Uber autonomous car fatality in Tempe Arizona happen?

A relatively stationary object of significant size and mass was positioned in the path (lane) of the Uber car traveling at a modest 65 km/h. The Uber car failed to detect and brake before hitting the obstacle.

The LIDAR system did not seem to work as it was supposed to.
The radar system did not seem to work as it was supposed to.

Or maybe the brakes didn't work?

If the car was driven by a prudent human being, he or she would have applied the headlights' high beams (night time with no oncoming traffic), and probably would have visually detected the obstacle at a distance of 100-150 meters. The driver would probably have had no problem stopping before hitting the obstacle, even with a 1-second reaction time (18 meters rolling at 65 km/h) before the brakes were fully applied.

On top of that, a real person could easily have steered the vehicle past the obstacle, even while applying full braking power (if driving a car less than 15 years old, with ABS).
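The stopping-distance argument above can be sanity-checked in a few lines. The 7 m/s² braking deceleration is an assumed typical dry-road figure, not something from the post:

```python
# Sanity check of the stopping-distance argument: reaction distance plus
# braking distance at 65 km/h, against a 100-150 m visual detection range.
# Assumption (not from the post): ~7 m/s^2 deceleration on dry asphalt.

def stopping_distance_m(speed_kmh, reaction_s=1.0, decel_ms2=7.0):
    """Distance covered during the reaction time plus the braking distance."""
    v = speed_kmh / 3.6                   # km/h -> m/s
    reaction = v * reaction_s             # rolling before brakes are applied
    braking = v ** 2 / (2 * decel_ms2)    # v^2 / (2a)
    return reaction + braking

print(round(65 / 3.6, 1))                 # ~18 m/s, i.e. ~18 m per second of reaction
print(round(stopping_distance_m(65), 1))  # ~41 m, well under 100-150 m
```

Under these assumptions the car stops in roughly 41 m, so a driver who spotted the pedestrian at 100 m would indeed have had a comfortable margin.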
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Marco on March 24, 2018, 05:37:58 pm
If there's several LIDAR scanners in the same place, do they interfere with each other?
With rotating LIDARs it's extremely unlikely. The receiver almost certainly has a lens and will only "look" at a narrow view, where the beam is currently being transmitted. Even when two LIDARs do line up, it will be for a small vertical section of the frame, and that section would also move between frames. Easy to filter.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 24, 2018, 05:51:42 pm

It will hopefully be a long time before autonomous vehicles get the same level of situational awareness and precise vehicle control of a good human driver...

https://www.youtube.com/watch?v=gpAzXqIcWzc (https://www.youtube.com/watch?v=gpAzXqIcWzc)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: StillTrying on March 24, 2018, 05:56:54 pm
Perhaps the car was busy at the time.
"Updating Firmware, Do not switch off, brake or change lanes."
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 24, 2018, 06:17:35 pm

It will hopefully be a long time before autonomous vehicles get the same level of situational awareness and precise vehicle control of a good human driver...

https://www.youtube.com/watch?v=gpAzXqIcWzc (https://www.youtube.com/watch?v=gpAzXqIcWzc)
That human driver retard died in a car crash (in the same car as in the video) 4 months after this video was put on YouTube :horse:. It wasn't him driving that time, but it's pretty safe to assume the car was driven in the same style. Yet another Darwin award.

http://www.dailymail.co.uk/news/article-2348365/Infamous-Georgian-street-drifter-raced-packed-city-streets-viral-videos-killed-car-crash.html (http://www.dailymail.co.uk/news/article-2348365/Infamous-Georgian-street-drifter-raced-packed-city-streets-viral-videos-killed-car-crash.html)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: mikerj on March 24, 2018, 10:02:49 pm
HAHA! Is this some 1960's footage taken from a 32 HP Citroën 2CV?

1976.  It's a fairly infamous short film called "C'était un rendez-vous".  The soundtrack is from a Ferrari and was dubbed onto the film; the actual car was a Mercedes.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: G7PSK on March 24, 2018, 10:16:51 pm
Everyone is talking about lidar, but this car also had radar, and the pedestrian was pushing a good radar target, unless of course the bike was a fancy racing bike made from carbon fibre. Surely the radar should have picked up the bike; if not, why not?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: taydin on March 24, 2018, 10:18:07 pm
That human driver retard died in a car crash

Good riddance :) The most favorable outcome, he could have caused others to die with his reckless driving.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 24, 2018, 10:44:29 pm
Here's another traffic situation that might cause "issues" for autonomous vehicles....  Got to love how they don't even bother painting lines on the road, those are for amateurs.

https://youtu.be/pLUm3Q-7iZA
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 24, 2018, 11:13:00 pm
Everyone is talking about lidar, but this car also had radar, and the pedestrian was pushing a good radar target, unless of course the bike was a fancy racing bike made from carbon fibre. Surely the radar should have picked up the bike; if not, why not?
The radar on most autonomous cars is quite crude. I assume the Uber one is no more sophisticated than others which are well documented. The simplest ones use a fixed (no scanning) broad forward facing beam. The better ones use a fixed broad beam for close in detection, and a fixed narrower beam for further out detection. In both cases they will pick up a lot of echo from stationary objects outside the car's lane. Therefore they have to be made fairly insensitive to near-zero doppler objects. If you look at videos of radar guided emergency braking systems on production cars, they only stop when approaching a stationary target if the car is fairly slow. If the car approaches a much slower car, the slower car creates enough doppler shift to stand out from the stationary objects. You will find videos of, say, a car at 100kph approaching one at 50kph, and slowing nicely to keep a proper distance from the 50kph car.
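The near-zero-doppler filtering described above can be sketched roughly. The 77 GHz carrier and the 2 m/s threshold are illustrative assumptions (77 GHz is a common automotive radar band), not figures from Uber's system:

```python
# Rough sketch of the doppler gating described above: echoes from objects
# whose ground speed is near zero (closing speed ~= own speed) look like
# roadside clutter and are suppressed, which is why stationary obstacles
# are hard for these radars. 77 GHz carrier and 2 m/s threshold assumed.

C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # common automotive radar band

def doppler_shift_hz(closing_speed_ms):
    """Two-way doppler shift for a target closing at the given radial speed."""
    return 2 * closing_speed_ms * F_CARRIER / C

def is_clutter(closing_speed_ms, own_speed_ms, threshold_ms=2.0):
    """Targets whose ground speed is near zero are indistinguishable from
    stationary roadside objects (signs, barriers) and get filtered out."""
    ground_speed = own_speed_ms - closing_speed_ms
    return abs(ground_speed) < threshold_ms

own = 100 / 3.6                           # own car at 100 km/h
print(is_clutter(own, own))               # stationary pedestrian: filtered -> True
print(is_clutter(own - 50 / 3.6, own))    # car ahead doing 50 km/h: kept -> False
```

This matches the behaviour described: a 100 km/h car sees a 50 km/h car clearly, but a stationary target falls into the same doppler bin as the scenery.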
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: grumpydoc on March 24, 2018, 11:44:10 pm
If you are a bit squimish about fast driving, do not watch this film.

https://vimeo.com/34039780

Admittedly, the streets were fairly empty in this film.
It probably looks faster than it is in reality - for instance, the section between the Arc de Triomphe and the Place de la Concorde, a distance of almost exactly 2 km, is covered in about 1 minute 6 seconds (1:29 to 2:35 in the video), or roughly 110 km/h (68 mph). A bit reckless, and not likely to be viewed kindly by the authorities should you try to re-create the stunt, but given the very light traffic conditions encountered, not that extreme. I have to say the driver seems to have an impressive disregard for red lights.

I'm not sure it would be possible at all these days though.
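The average-speed estimate above checks out with simple arithmetic:

```python
# Checking the average-speed estimate: roughly 2 km covered between
# 1:29 and 2:35 in the video.
distance_m = 2000
elapsed_s = (2 * 60 + 35) - (1 * 60 + 29)   # 66 seconds
speed_kmh = distance_m / elapsed_s * 3.6
speed_mph = speed_kmh / 1.609344
print(round(speed_kmh))   # -> 109
print(round(speed_mph))   # -> 68
```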
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Marco on March 25, 2018, 12:37:36 am
The radar on most autonomous cars is quite crude.

60 GHz scanning imaging radar using the same concept as LIDAR would be nice (for a HUD display in a car, not autonomous murder machines). Neither fog nor rain would really bother it.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 25, 2018, 12:39:48 am
It will hopefully be a long time before autonomous vehicles get the same level of situational awareness and precise vehicle control

Why "hopefully"?

of a good human driver...

I wouldn't hold that up as an example of skillful driving.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 25, 2018, 12:41:29 am
It probably looks faster than it is in reality

...as noted in a previous post (https://www.eevblog.com/forum/blog/eevblog-1066-uber-autonomous-car-fatality-how/msg1459750/#msg1459750).
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: nctnico on March 25, 2018, 01:12:13 am
Here's another traffic situation that might cause "issues" for autonomous vehicles....  Got to love how they don't even bother painting lines on the road, those are for amateurs.
https://youtu.be/pLUm3Q-7iZA
That traffic moves much slower. In such situations (been there, done that) driving is much easier. Point the car in the direction you want to go and don't hit anything. If something comes up you can brake in time because you are only travelling at 10km/h.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 25, 2018, 01:35:32 am
of a good human driver...

I wouldn't hold that up as an example of skillful driving.

Neither would I.

Certainly it is an exercise in accurate vehicle positioning - but it is based on a comparatively predictable environment.  Such driving reduces the margin of safety; add just ONE unexpected motion of another object and you are in trouble.

AV technology could be trained to drive like this - but it would not be received too warmly.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: nctnico on March 25, 2018, 01:45:19 am
It will hopefully be a long time before autonomous vehicles get the same level of situational awareness and precise vehicle control of a good human driver...
Most (99.9%) human drivers have ZERO situational awareness and ZERO predictability skills. Most modern cars already have ESP which does the precise vehicle control for the driver when necessary.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 25, 2018, 02:08:57 am

That traffic moves much slower. In such situations (been there, done that) driving is much easier. Point the car in the direction you want to go and don't hit anything. If something comes up you can brake in time because you are only travelling at 10km/h.

Think how many objects the computer has to track, and predict, in that scenario...  it almost amounts to a denial of service attack...
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: pantagruel on March 25, 2018, 02:11:31 am
There was an engineer of sensors for autonomous cars on the radio the other day - very interesting discussion about this accident . Here is the link if anyone wants to listen to it .
https://www.wpr.org/shows/effects-and-ethics-driverless-vehicles (https://www.wpr.org/shows/effects-and-ethics-driverless-vehicles)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 25, 2018, 02:15:09 am
It will hopefully be a long time before autonomous vehicles get the same level of situational awareness and precise vehicle control

Why "hopefully"?


Today, young guys hop up their cars by tuning engines, suspensions, brakes, etc...

In the future, the youngsters will download illegal software that makes their autonomous cars drive like our skilled, but misguided, (and dead) Georgian driver... in addition to tuning the engine, suspension, brakes, etc. ...   

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ez24 on March 25, 2018, 03:26:38 am
My two cents from reading links posted by members:

- I read that Arizona does not allow employers to ask if the person was a  felon -  so the felon issue is off the books.  But I bet it will be back on because of all the damage this has caused.

-  I think the police video is "fishy".    I think the police chief called the governor first, they discussed the video, and it was "released" in an effort to show it was not Uber's fault, because of the "warm" relationship between the governor and Uber.  I think the video is a misrepresentation of the lighting, and the police knew this.  Money talks.

-  Even though Arizona is in the desert, there are fishy things going on there.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Bud on March 25, 2018, 04:36:27 am
Quote from: nctnico
Most (99.9%) human drivers have ZERO situational awareness and ZERO predictability skills.
What do you guys smoke  there in Netherlands?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Monkeh on March 25, 2018, 04:53:12 am
Quote from: nctnico
Most (99.9%) human drivers have ZERO situational awareness and ZERO predictability skills.
What do you guys smoke  there in Netherlands?

I have no idea, but again, they kill almost half as many people on the roads, so they're doing something right.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: james_s on March 25, 2018, 05:20:49 am
Quote from: nctnico
Most (99.9%) human drivers have ZERO situational awareness and ZERO predictability skills.
What do you guys smoke  there in Netherlands?

I have no idea, but again, they kill almost half as many people on the roads, so they're doing something right.

Seems I read that cars are extremely expensive there, so it's quite possible that a much smaller percentage of the population owns a car. Those that do are likely to be better trained and drive more carefully.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Monkeh on March 25, 2018, 05:33:13 am
Quote from: nctnico
Most (99.9%) human drivers have ZERO situational awareness and ZERO predictability skills.
What do you guys smoke  there in Netherlands?

I have no idea, but again, they kill almost half as many people on the roads, so they're doing something right.

Seems I read that cars are extremely expensive there, so it's quite possible that a much smaller percentage of the population owns a car. Those that do are likely to be better trained and drive more carefully.

Households owning cars:
Canada: 84.4%
USA: 95%
UK: 73%
Netherlands: Approximately 75%

Fatalities per 100,000 population:
Canada: 6.0
USA: 10.4
UK: 2.9
Netherlands: 3.4

So you have a mild point, but it doesn't make up the numbers.

Perhaps Americans and Canadians could learn to drive more carefully? Again, we all seem to be doing something right.

ps. my car is worth less than a new iPhone.
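One crude way to test whether ownership rates explain the gap is to normalise the fatality figures above by the ownership fraction. This ignores miles driven, vehicle age, road design and all the other factors, so it's only a rough adjustment:

```python
# Crude check: divide road deaths per 100k population by the household
# car-ownership fraction, using the figures quoted in the post above.

data = {  # country: (fraction of households owning a car, fatalities per 100k)
    "USA":         (0.95,  10.4),
    "Canada":      (0.844,  6.0),
    "UK":          (0.73,   2.9),
    "Netherlands": (0.75,   3.4),
}

adjusted = {country: f / own for country, (own, f) in data.items()}
for country, rate in adjusted.items():
    print(f"{country}: {rate:.1f} ownership-adjusted fatalities per 100k")
```

Even after this adjustment the US figure (~10.9) is still about 2.4 times the Dutch one (~4.5), consistent with the point that ownership differences alone don't make up the numbers.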
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: james_s on March 25, 2018, 06:12:40 am
Those numbers alone don't tell the whole story. 75% of households own cars, but how many have multiple cars and how many miles do they drive? Do teenagers drive cars around? Are drivers there better trained? Are the roads different? Cultural aspects? There are countless possible factors.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: maginnovision on March 25, 2018, 08:14:09 am
Using the above numbers, it is interesting to note the correlation between fatalities and % household ownership. However, no cause is established: population density isn't mentioned, nor the age of vehicles, nor the % of vehicles with currently required safety equipment... As a former BMW technician, and before that working on all makes/models, I can say a lot of people drive cars that feel like death traps. It's also important to note that smart roads will never happen on a large scale, due to price and the ease with which they could be tampered with.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Decoman on March 25, 2018, 08:26:07 am
There was an engineer who works on sensors for autonomous cars on the radio the other day - a very interesting discussion about this accident. Here is the link if anyone wants to listen to it.
https://www.wpr.org/shows/effects-and-ethics-driverless-vehicles (https://www.wpr.org/shows/effects-and-ethics-driverless-vehicles)

Wow, I stopped listening after a minute. I couldn't stand the attitude I thought this guy had: he sort of implies that one would normally think this death is "devastating" to the industry. But you guys go ahead.
"Q: (...) What are you thinking about it?"
"Well first of all.. the loss of life is.. a.. it is a.. devastating thing.. and not just to the autonomous vehicle industry, but to everybody.. and ah.. we really need to scrutinize why something like this has happened.. (...)"

This opening statement has elements which are the reverse of what I would expect as a sensible reaction in the aftermath of a traffic death, regardless of it being about new technology. The victim's family would have to come first (unless you don't care about human life), then anyone else directly impacted by this traffic death, and only well after that the autonomous vehicle industry; however this guy seemed to imply that one would be thinking about the car industry by mentioning it first, which I found very odd. Doing so is also suggestive, as if he were an authority on how to react to things. Also, the very idea of this traffic-related death being a 'devastating thing' to "everybody" is another red flag in my brain; to me it sounds like what one would hear from someone being insincere, making exaggerations and avoiding responding in a sensible manner. It is of course entirely possible that the guy is simply not used to talking, or writing, but I find that difficult to believe given that this podcast is supposed to be discussing the ethics of driverless vehicles with someone who is apparently professionally involved in the field, and I would have expected more precise statements. There are other flags for me too: the phrase "we really need to scrutinize" is another type of exaggeration that relies on an emotional reaction rather than logic, as this "need" is never explained at all. And the phrase "why something like this has happened" seems, yet again, an aversion to making sensible arguments. This might of course just be another North American idiom, like when they say "you need to do this" in movies, or variations of such expressions.
Such points are not only illogical, because the meaning of 'need' is suspect (needs tend to be a very personal thing, especially when attributed to a specific person); in real life they would be potentially insulting or intimidating/commanding, because such expressions can only be understood as an abbreviation of some longer argument, which is never made unless stated explicitly in the same sentence, immediately after, or specifically referred back to later on. Another objection to this use of "need" is that whenever a point is made about anyone's "need", and that "need" is implied to be the very same for everybody, it becomes something deeply insincere and in a way offensive: it dehumanizes people by making such needs non-personal, as if you had no say in them, or as if a particular 'need' were imparted onto you by others, which again would be unreasonable on the merit of it being non-personal in the first place.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on March 25, 2018, 08:44:21 am
Think how many objects the computer has to track, and predict, in that scenario...  it almost amounts to a denial of service attack...

Most of them will move out of the way if you just drive slowly through the crowd.

Anybody here ever cross the street in Rome? If you wait for traffic you'll be there all day. After a while you spot that the residents don't wait, they just cross. The traffic responds.

It's scary the first few times you do it but it definitely works.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: G7PSK on March 25, 2018, 09:10:42 am
It will hopefully be a long time before autonomous vehicles get the same level of situational awareness and precise vehicle control of a good human driver...
Most (99.9%) human drivers have ZERO situational awareness and ZERO predictability skills. Most modern cars already have ESP which does the precise vehicle control for the driver when necessary.

I live in the UK and drive a VW which has ESP, but it certainly does not mean extra-sensory perception; it is "electronic skid protection", i.e. anti-lock brakes with a fancy name. It also has a button to turn it off, which got used for the first time recently with all the snow we had. Road safety would be greatly enhanced by two things: phones which turn off automatically when inside a car, and the removal of radios and music systems.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: nctnico on March 25, 2018, 11:04:33 am
Quote from: nctnico
Most (99.9%) human drivers have ZERO situational awareness and ZERO predictability skills.
What do you guys smoke there in the Netherlands?
I have no idea, but again, they kill almost half as many people on the roads, so they're doing something right.
I'm not writing about the Netherlands but more in general. I've driven in many countries and it seems that most people just tail the car in front of them and don't look ahead at all. There just isn't any planning involved. It is logical too: doing the driving yourself is a necessary evil and people don't like to pay attention.

@james_s: the roads in the NL are usually in very good condition and well laid out to offer safe driving conditions. The NL government is very active in reducing the number of car-accident-related fatalities, but it isn't easy in one of the most densely populated countries in the world. Getting a driver's license means passing a theoretical and a practical exam, and passing these exams requires some serious training (usually it takes someone 6 months to a year). Cars are expensive to own, so people usually have the smallest number of cars they need. There are around 17 million people in the NL and around 8 million cars. Together those cars drove 188 billion kilometers (2016), so that makes an average of around 23.5k km per car.

@Fungus: driving in Italy is quite different indeed  :scared:

Edit: I'm convinced self-driving cars will be safer overall compared to people driving their cars themselves. Trying to think about single scenarios where a self-driving car may not work is rather moot, because I see some human drivers also struggle with some situations. The basic rule is not to hit anything. I'm also convinced more self-driving-car-related lethal accidents (*) will happen, but when you look at the accident statistics (the big picture), the self-driving car will beat the human driver hands down.

* People do stupid things like crossing the street without looking. This happened to me too when I was a teenager on a dark-ish morning during the winter. Somehow I overlooked a car and it hit me when I crossed the street. I still don't get why I didn't see it. Perhaps the driver didn't switch the lights on. My father (strongly) blamed the hood on the jacket I was wearing so ever since I wear a cap during the winter.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Marco on March 25, 2018, 04:53:10 pm
Trying to think about single scenarios where a self driving car may not work is rather moot
It's more a question of where they will work. I think until we have human-level AI the best bet is to use them as slightly more flexible trams, but outside of Japan I don't really see the point ... labour costs aren't really much of an issue for trams.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: nctnico on March 25, 2018, 05:19:41 pm
Trying to think about single scenarios where a self driving car may not work is rather moot
It's more of a question where will they work.
From the experiments so far it seems the self driving cars work in cities and on highways. That is close to 100% of what cars are used for. I do wonder how well an EV handles ice & snow.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Marco on March 25, 2018, 06:19:43 pm
From the experiments using cars driving in a perfectly mapped environment, with teams actively engaged in surveying roadworks to make the cars avoid them, and with human backup for when their programming breaks. Also, they aren't double-parking in stupid situations yet, which they will be if they actually start running a taxi service.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: gearshredder on March 25, 2018, 06:50:17 pm
I do wonder how well an EV handles ice & snow.

Well, that depends on whether it's a DC or AC motor and its controller. A modern AC induction motor and controller is going to have amazing anti-slip capabilities.
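For illustration of why a fast motor controller helps, here is a toy slip-limiting torque controller. All names, thresholds and gains are invented for this sketch, not taken from any real EV:

```python
def slip_ratio(wheel_speed, vehicle_speed):
    """Longitudinal slip ratio under acceleration: 0 = pure rolling."""
    if wheel_speed <= 0:
        return 0.0
    return (wheel_speed - vehicle_speed) / wheel_speed

def traction_torque(requested_torque, wheel_speed, vehicle_speed,
                    slip_limit=0.15, gain=4.0):
    """Scale back requested motor torque when the driven wheel spins up.

    Values are placeholders; the shape of the logic is the point.
    """
    slip = slip_ratio(wheel_speed, vehicle_speed)
    if slip <= slip_limit:
        return requested_torque
    # Proportional torque cut beyond the slip limit, clamped at zero torque.
    reduction = min(1.0, gain * (slip - slip_limit))
    return requested_torque * (1.0 - reduction)
```

The point is latency: an inverter can re-command torque every few milliseconds, far faster than an engine-based traction control that has to cut throttle or ignition.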
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: wraper on March 25, 2018, 08:18:02 pm
I do wonder how well an EV handles ice & snow.

https://www.youtube.com/watch?v=Fr4-JZxFJ2s (https://www.youtube.com/watch?v=Fr4-JZxFJ2s)

But seriously:

https://www.youtube.com/watch?v=Pcn01JPAnpo (https://www.youtube.com/watch?v=Pcn01JPAnpo)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: nctnico on March 25, 2018, 08:47:46 pm
From the experiments using cars driving in a perfectly mapped environment, with teams actively engaged in surveying roadworks to make the cars avoid them
I strongly doubt they do mapping of roadworks because that would invalidate the experimental results. It would also mean that the autonomous car can't handle obstacles like broken down vehicles, accidents and delivery trucks which can happen at any time. Perhaps mapping roadworks was done in very early (more controlled) tests with trained drivers but I can't imagine they still do it with 'generic' drivers while the car is supposed to do all the work.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: nctnico on March 25, 2018, 08:49:21 pm
I do wonder how well an EV handles ice & snow.
Well, that depends on whether it's a DC or AC motor and its controller. A modern AC induction motor and controller is going to have amazing anti-slip capabilities.
Crap  :palm: User I/O error. I meant an autonomous car of course!
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 25, 2018, 08:59:11 pm
From the experiments using cars driving in a perfectly mapped environment, with teams actively engaged in surveying roadworks to make the cars avoid them
I strongly doubt they do mapping of roadworks because that would invalidate the experimental results. It would also mean that the autonomous car can't handle obstacles like broken down vehicles, accidents and delivery trucks which can happen at any time. Perhaps mapping roadworks was done in very early (more controlled) tests with trained drivers but I can't imagine they still do it with 'generic' drivers while the car is supposed to do all the work.
Google's presentations emphasise how much they rely on their 11 cm resolution map of the area, and on correlation between the lidar input and the map, to work out exactly where the car is. I haven't seen a presentation where they address what happens when the world doesn't match the map. If they really had good solutions for this, surely they would be bragging about them.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: nctnico on March 25, 2018, 09:09:53 pm
From the experiments using cars driving in a perfectly mapped environment, with teams actively engaged in surveying roadworks to make the cars avoid them
I strongly doubt they do mapping of roadworks because that would invalidate the experimental results. It would also mean that the autonomous car can't handle obstacles like broken down vehicles, accidents and delivery trucks which can happen at any time. Perhaps mapping roadworks was done in very early (more controlled) tests with trained drivers but I can't imagine they still do it with 'generic' drivers while the car is supposed to do all the work.
Google's presentations emphasise how much they rely on their 11 cm resolution map of the area, and on correlation between the lidar input and the map, to work out exactly where the car is. I haven't seen a presentation where they address what happens when the world doesn't match the map. If they really had good solutions for this, surely they would be bragging about them.
That may be, but there is a difference between comparing against stationary objects like buildings and needing to map temporary things like roadblocks. I think they need the accurate maps for accurate positioning, so the car doesn't drive in a lane in the wrong direction. Unfortunately accurate positioning is very hard to do and GPS really isn't up to the task.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: mikerj on March 26, 2018, 12:01:16 pm
I live in the UK and drive a VW which has ESP, but it certainly does not mean extra-sensory perception; it is "electronic skid protection", i.e. anti-lock brakes with a fancy name. It also has a button to turn it off, which got used for the first time recently with all the snow we had. Road safety would be greatly enhanced by two things: phones which turn off automatically when inside a car, and the removal of radios and music systems.

ESP is "Electronic Stability Program", and is far more than just anti-lock brakes.  These systems detect loss of steering control by comparing the demanded yaw rate (derived from steering angle input and speed) against the measured yaw rate (from a gyro), and apply braking to individual wheels to try to correct the discrepancy.
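A minimal sketch of that demanded-vs-measured comparison, using a kinematic bicycle model for the demanded yaw rate. The wheelbase, thresholds and wheel-selection rule are placeholders, not any production ESC calibration:

```python
import math

WHEELBASE_M = 2.7  # placeholder wheelbase

def demanded_yaw_rate(speed_mps, steering_angle_rad):
    """Kinematic bicycle model: the yaw rate the driver is asking for."""
    return speed_mps * math.tan(steering_angle_rad) / WHEELBASE_M

def esp_intervention(speed_mps, steering_angle_rad, measured_yaw_rate,
                     threshold=0.1):
    """Compare demanded vs gyro-measured yaw rate and pick a wheel to brake.

    Returns which wheel to brake, or None if the car is tracking the
    driver's input closely enough.
    """
    error = demanded_yaw_rate(speed_mps, steering_angle_rad) - measured_yaw_rate
    if abs(error) < threshold:
        return None
    # Understeer (car turns less than demanded): brake the inner rear wheel.
    # Oversteer (car turns more than demanded): brake the outer front wheel.
    return "inner_rear" if error > 0 else "outer_front"
```

Real systems run this loop continuously and modulate individual wheel pressures; the sketch only shows which way the intervention goes.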
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: G7PSK on March 26, 2018, 01:05:35 pm
I live in the UK and drive a VW which has ESP, but it certainly does not mean extra-sensory perception; it is "electronic skid protection", i.e. anti-lock brakes with a fancy name. It also has a button to turn it off, which got used for the first time recently with all the snow we had. Road safety would be greatly enhanced by two things: phones which turn off automatically when inside a car, and the removal of radios and music systems.

ESP is "Electronic Stability Program", and is far more than just anti-lock brakes.  These systems detect loss of steering control by comparing the demanded yaw rate (derived from steering angle input and speed) against the measured yaw rate (from a gyro), and apply braking to individual wheels to try to correct the discrepancy.
And totally useless as soon as conditions get tricky, which is why there is a button to turn it off in icy or muddy conditions, with a recommendation in the handbook to do so under such conditions. I.e., it's just more electronic clutter on the car and cannot in reality offer much, if any, protection or help to the driver.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: nctnico on March 26, 2018, 01:53:51 pm
It probably depends on the brand of the car as well. People I know are very happy with ESP and it did help them to keep control over the car on icy roads.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: james_s on March 26, 2018, 04:57:27 pm
For the average person these systems would likely be hard pressed to do worse. Several years ago there was a snow storm and the hill I live on became covered in ice. An older gentleman got his Prius stuck against the curb trying to go down the hill so I helped get it un-stuck. He then proceeded to lock up all 4 wheels and slide sideways then backwards down the hill, uselessly trying to steer. I repeatedly yelled to let off the brakes so the wheels could turn but he didn't and the car just kept sliding completely out of control, miraculously not hitting anything else on the way down. Had he let off the brakes the car would likely have remained straight and controllable, but as far as the ABS system was concerned the car was stationary. Quite a few drivers have virtually no understanding of physics or automotive systems.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Koen on March 26, 2018, 05:44:38 pm
and cannot in reality offer much if any protection or help to the driver.
I'm very thankful for my car's ESP. It's absolutely brilliant.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: thm_w on March 26, 2018, 08:39:42 pm
And totally useless as soon as conditions get tricky, which is why there is a button to turn it off in icy or muddy conditions, with a recommendation in the handbook to do so under such conditions. I.e., it's just more electronic clutter on the car and cannot in reality offer much, if any, protection or help to the driver.

I think you've got that wrong, according to my manual:
Quote
The vehicle should be driven with the Vehicle Dynamic Control (VDC) system on for most driving conditions. If the vehicle is stuck in mud or snow, the VDC system reduces the traction motor output to reduce wheel spin. The traction motor speed will be reduced even if the accelerator is depressed to the floor. If maximum traction motor power is needed to free a stuck vehicle, turn the VDC system off

They only recommend turning it off IF you are stuck. Turning it off when it's muddy or icy is completely counter to the additional safety and traction it will provide you. There is a reason it's mandated in all newer cars: it's useful.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 26, 2018, 09:37:59 pm

[ESP is] totally useless as soon as conditions get tricky, which is why there is a button to turn it off in icy or muddy conditions, with a recommendation in the handbook to do so under such conditions. I.e., it's just more electronic clutter on the car and cannot in reality offer much, if any, protection or help to the driver.

Once, on the motorway, I got the passenger side of the car too far over to the side, into the soft snow.  As if by magic, ESP applied the brakes on the opposite side of the car, which had the effect of straightening things up quickly.   That's the only time I've seen ESP activate, but it worked in that situation...
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: janoc on March 26, 2018, 09:54:09 pm
Not sure whether this has been posted already, but it is a pretty good analysis of how these technologies work and what can realistically be expected (e.g. LIDAR may not have saved the day even if it was installed and was working):

https://medium.com/@rebane/could-ai-have-saved-the-cyclist-had-i-programmed-the-uber-car-6e899067fefe

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Rasz on March 26, 2018, 10:10:39 pm
Not sure whether this has been posted already, but it is a pretty good analysis of how these technologies work and what can realistically be expected (e.g. LIDAR may not have saved the day even if it was installed and was working):

https://medium.com/@rebane/could-ai-have-saved-the-cyclist-had-i-programmed-the-uber-car-6e899067fefe

Don't know what background the author has, but I stopped reading after a few paragraphs of nonsense.
Lidar doesn't "think" and doesn't do classification; it just provides additional data points to the classification algorithm. Plus this dude keeps using dashcam footage as proof :/
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: G7PSK on March 27, 2018, 07:45:13 am
Not sure whether this has been posted already, but it is a pretty good analysis of how these technologies work and what can realistically be expected (e.g. LIDAR may not have saved the day even if it was installed and was working):

https://medium.com/@rebane/could-ai-have-saved-the-cyclist-had-i-programmed-the-uber-car-6e899067fefe
Having read that article it looks like all pedestrians will need to be fitted with a radar transponder and a strobe light.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: janoc on March 27, 2018, 10:18:18 am
Not sure whether this has been posted already, but it is a pretty good analysis of how these technologies work and what can realistically be expected (e.g. LIDAR may not have saved the day even if it was installed and was working):

https://medium.com/@rebane/could-ai-have-saved-the-cyclist-had-i-programmed-the-uber-car-6e899067fefe (https://medium.com/@rebane/could-ai-have-saved-the-cyclist-had-i-programmed-the-uber-car-6e899067fefe)

Don't know what background the author has, but I stopped reading after a few paragraphs of nonsense.
Lidar doesn't "think" and doesn't do classification; it just provides additional data points to the classification algorithm. Plus this dude keeps using dashcam footage as proof :/


Next time read with comprehension, mate. He explicitly talks about the AI algorithms doing the "thinking" and that LIDAR only feeds the data in! Most of the text is about how the system needs to balance data from multiple sensors.

And re his background - if I am not mistaken, it is this dude:
https://www.researchgate.net/profile/Martin_Rebane (https://www.researchgate.net/profile/Martin_Rebane)

I would say he has more clue about what he is talking about than most, given that he is working directly in the field.  What is your background to judge the article as "nonsense"?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: janoc on March 27, 2018, 10:45:08 am
Having read that article it looks like all pedestrians will need to be fitted with a radar transponder and a strobe light.

Possibly.

But the fact is that the current tech is massively overrated and often ascribed almost magical capabilities. I don't work in autonomous vehicles but I can certainly relate to what he is saying about the LIDAR data processing - e.g. that LIDAR scans 10x per second doesn't automatically mean you have the data 10x per second!

I have been working on SLAM and point cloud fusion (mostly for 3D scanning and virtual reality navigation - the algorithms used are very much the same) and this stuff requires enormous computing power, especially when you want dense and accurate point clouds; otherwise you can literally hide an elephant (or a cyclist, in this case!) in the holes in the data. E.g. merging a measurement campaign's worth of high-res scans from a laser scanner (which is basically a slower but much more accurate LIDAR) can take a few hours on a PC, even with GPU acceleration.

Now in a car, even at 10 Hz, that's 0.1 s at the 60 km/h (16.7 m/s) the car was doing = ~1.7 m traveled between every scan, during which it needs to rely on other sensors only. If the car is fusing the data more slowly (which is likely, due to the complexity of the process), you do the math. E.g. at a 1 s effective data rate you have almost 17 m traveled between map updates. Then you typically need two or more "frames" for the AI system to evaluate the new data as an obstacle (as opposed to e.g. a spurious reflection or some transient noise - e.g. a car moving nearby). So, in the worst case, during those 2-3 s the car is effectively driving "blind" (or rather, with outdated information) as far as LIDAR is concerned - that's some 50 m traveled! You can literally have an elephant step in front of the car during that time and the LIDAR-based system wouldn't see it.
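The arithmetic above can be sketched as follows (speed and update rates are the ones from the post; the frames-to-confirm multiplier is my assumption about how such a pipeline might behave):

```python
def blind_distance_m(speed_kmh, effective_update_s, frames_to_confirm=1):
    """Distance covered while waiting for the next confirmed LIDAR update."""
    speed_mps = speed_kmh / 3.6
    return speed_mps * effective_update_s * frames_to_confirm

# ~1.7 m between raw 10 Hz scans at 60 km/h
per_scan = blind_distance_m(60, 0.1)
# ~17 m if fused map updates only arrive once a second
per_fused = blind_distance_m(60, 1.0)
# ~50 m if the system needs ~3 frames at that rate to confirm an obstacle
worst_case = blind_distance_m(60, 1.0, frames_to_confirm=3)
print(per_scan, per_fused, worst_case)
```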

Of course, a real car would likely do this faster and also have other systems covering that "blind" time that could prevent the collision as well (and the author actually says that explicitly) but the point is that LIDAR is far from the miracle it is often "sold" as.

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 27, 2018, 11:34:07 am
But the fact is that the current tech is massively overrated and often ascribed almost magical capabilities.

Wouldn't some of the "lower tech" collision avoidance technology (the type you can already get on production cars) have noticed the bicyclist at all, and at least have started to brake?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 27, 2018, 12:30:50 pm
But the fact is that the current tech is massively overrated and often ascribed almost magical capabilities.

Wouldn't some of the "lower tech" collision avoidance technology (the type you can already get on production cars) have noticed the bicyclist at all, and at least have started to brake?
If you look at the performance specs of current collision avoidance systems fitted to cars, you'll find they are far from generic "stop before we hit the object in front" systems. Their crude designs (needed to keep costs from exploding) mean they will only properly react when the car AND the object being approached fit certain speed profiles. They tend to have big problems when driving rapidly towards a broken-down vehicle on a motorway, but will slow down nicely to follow a car which is rapidly decelerating for some reason. Travelling around a city at 50 km/h, the same car would probably slow down and stop smoothly behind a broken-down vehicle.
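To make the "speed profile" point concrete, here is a deliberately crude toy of such an envelope. The thresholds are entirely made up; only the shape of the logic matters:

```python
def aeb_reacts(ego_speed_kmh, closing_speed_kmh, target_moving):
    """Toy envelope: a crude AEB only fires inside certain speed profiles.

    Invented thresholds, just to illustrate why such a system can follow
    a decelerating car yet plow into a stationary one at motorway speed.
    """
    if target_moving:
        # Lead-vehicle scenarios: handled up to a moderate closing speed.
        return closing_speed_kmh < 60
    # Stationary targets only handled at city speeds to limit false alarms.
    return ego_speed_kmh < 50
```

This is why the same car can follow a hard-braking lead vehicle gracefully at motorway speed, yet do nothing about a stationary obstacle at that same speed.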
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Wollvieh on March 27, 2018, 12:38:13 pm
Wouldn't some of the "lower tech" collision avoidance technology (the type you can already get on production cars) have noticed the bicyclist at all, and at least have started to brake?

According to this article, Mobileye would have detected the pedestrian even from the shitty dashcam footage, one second before the impact.
https://techcrunch.com/2018/03/26/mobileye-chastises-uber-by-detecting-struck-pedestrian-in-footage-well-before-impact/
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Rasz on March 27, 2018, 12:49:23 pm
Next time read with comprehension, mate. He explicitly talks about the AI algorithms doing the "thinking" and that LIDAR only feeds the data in!

Does he now? What's this then: "now one of them thinks it’s a dog", as if the sensor did image classification, or as if someone used object classification on lidar output alone.

Most of the text is about how the system needs to balance data from multiple sensors.

The first paragraphs I read seemed to be from the distant past: not current state-of-the-art AI, but 30-year-old, expert-systems-level thinking. I guess in a sense the author is right - AI couldn't save this pedestrian if HE was the one who programmed it.

I would say he has more clue about what he is talking about than most, given that he is working directly in the field.  What is your background to judge the article as "nonsense"?
1. He is not working in the field, he is a researcher at a university.
2. It's not my sole opinion; many people on Hacker News had the same sentiment.
3. Btw, did Uber recruit in Tallinn?

btw "Intel Corp.’s Mobileye, which makes chips and sensors used in collision-avoidance systems and is a supplier to Aptiv, said Monday that it tested its own software after the crash by playing a video of the Uber incident on a television monitor. Mobileye said it was able to detect Herzberg one second before impact in its internal tests, despite the poor second-hand quality of the video relative to a direct connection to cameras equipped to the car."

Even Mobileye, famous Tesla decapitation platform, would be able to at least initiate braking with a shitty Chinese dashcam as a data source.



Wouldn't some of the "lower tech" collision avoidance technology (the type you can already get on production cars) have noticed the bicyclist at all, and at least have started to brake?

Dont worry, Uber was on top of things: https://www.sfgate.com/business/article/Uber-Disabled-Volvo-SUV-s-Standard-Safety-System-12782878.php (https://www.sfgate.com/business/article/Uber-Disabled-Volvo-SUV-s-Standard-Safety-System-12782878.php)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 27, 2018, 12:52:58 pm
Not sure whether this has been posted already, but it is a pretty good analysis of how these technologies work and what can realistically be expected (e.g. LIDAR may not have saved the day even if it was installed and was working):

https://medium.com/@rebane/could-ai-have-saved-the-cyclist-had-i-programmed-the-uber-car-6e899067fefe (https://medium.com/@rebane/could-ai-have-saved-the-cyclist-had-i-programmed-the-uber-car-6e899067fefe)

Don't know what background the author has, but I stopped reading after a few paragraphs of nonsense.
Lidar doesn't "think" and doesn't do classification; it just provides additional data points to the classification algorithm. Plus this dude keeps using dashcam footage as proof :/


Next time read with comprehension, mate. He explicitly talks about the AI algorithms doing the "thinking" and that LIDAR only feeds the data in! Most of the text is about how the system needs to balance data from multiple sensors.

And re his background - if I am not mistaken, it is this dude:
https://www.researchgate.net/profile/Martin_Rebane (https://www.researchgate.net/profile/Martin_Rebane)

I would say he has more clue about what he is talking about than most, given that he is working directly in the field.  What is your background to judge the article as "nonsense"?
The article reads like a series of excuses, rather than an analysis. The car wasn't slowing, even as it hit the bicycle. It looks like the car's awareness of its situation was so weak that it would have continued on its journey with the cyclist and cycle stuck to the front grill, as if nothing had happened. It seems only the "driver" caused the car to react. Situational awareness issues are one of the major reasons people fail driving tests, and there are sound reasons for that.  A vehicle with such poor situational awareness doesn't belong on public roads.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: janoc on March 27, 2018, 02:16:37 pm
But the fact is that the current tech is massively overrated and often ascribed almost magical capabilities.

Wouldn't some of the "lower tech" collision avoidance technology (the type you can already get on production cars) have noticed the bicyclist at all, and at least have started to brake?

It possibly could - except Uber had disabled it so that it wouldn't interfere with their testing. Volvo's subcontractor, which supplies their automatic braking systems, said that today.

https://www.sfgate.com/business/article/Uber-Disabled-Volvo-SUV-s-Standard-Safety-System-12782878.php#item-85307-tbla-5 (https://www.sfgate.com/business/article/Uber-Disabled-Volvo-SUV-s-Standard-Safety-System-12782878.php#item-85307-tbla-5)

Now these systems aren't infallible either - e.g. Volvo's pedestrian detection works only at low speed, not the 60 km/h the car was doing. And even then it is not fool-proof:

https://tv.sme.sk/v/16594/test-zastavi-volvo-s60-bez-zasahu-vodica-pred-chodcom.html (https://tv.sme.sk/v/16594/test-zastavi-volvo-s60-bez-zasahu-vodica-pred-chodcom.html)

(The video is in Slovak and shows an S60 instead of the XC90 SUV Uber uses, but it should give you an idea.)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: janoc on March 27, 2018, 02:22:34 pm
The article reads like a series of excuses, rather than an analysis. The car wasn't slowing, even as it hit the bicycle. It looks like the car's awareness of its situation was so weak that it would have continued on its journey with the cyclist and cycle stuck to the front grill, as if nothing had happened. It seems only the "driver" caused the car to react. Situational awareness issues are one of the major reasons people fail driving tests, and there are sound reasons for that.  A vehicle with such poor situational awareness doesn't belong on public roads.

You are reading waay too much into it. Rebane's article is only meant to demystify for the general public how these systems work and make decisions, nothing else. I don't see why an Estonian machine learning PhD student would want to "make excuses" for the poor performance of the Uber car they are not involved with (which is not in dispute, btw - Arizona has already banned further tests for that reason).

Also, the safety driver was evidently distracted (looking at a phone?) and stopped the car only after the impact had occurred - if that (the car could have detected the impact and stopped by itself, too).
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Marco on March 27, 2018, 02:33:24 pm
He has a commercial interest in a field that doesn't really deliver all that much commercial value staying as hot as it is.

The market only needs so many PhDs to data-mine for advertising and security services.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: janoc on March 27, 2018, 02:37:34 pm
Next time read with comprehension, mate. He explicitly talks about the AI algorithms doing the "thinking" and that LIDAR only feeds the data in!

Does he now? What's this then: "now one of them thinks it's a dog" - as if the sensor did image classification, or as if someone used object classification on lidar output alone.


Selective quoting without understanding, yay.

Here is the full quote:
Quote
Modern CA systems are operating with memory, they have maps and registers of what they have seen. They keep track of recorded objects from image to image. If two seconds ago both sensors (or more precisely, the algorithms that interprets the sensor readings) agreed the object to be a lorry and now one of them thinks it’s a dog,
(emphasis mine).


Most of the text is about how the system needs to balance data from multiple sensors.

The first paragraphs I read seemed to be from the distant past - not current state-of-the-art AI, but 30-year-old, expert-systems-level thinking. I guess in a sense the author is right - AI couldn't have saved this pedestrian if HE was the one who programmed it.

Because the first paragraph does talk about a deterministic, expert-system kind of thing! Namely the collision avoidance you may have in your car today - aka automatic braking. The immediately following paragraph speaks about how modern AI differs from this.

Jeeze, did you actually even make a token effort to understand the text?


1 he is not working in the field, he is a researcher at a university

So working in machine learning research and directly with the tech involved (e.g. LIDAR mapping) is not working in the field?  Where do you think companies like Waymo or Uber got their machine learning algorithms from? :palm:


2 it's not my sole opinion, many people on hackernews had the same sentiment.
3 Btw did Uber recruit in Tallinn?

How is that relevant?

btw "Intel Corp.’s Mobileye, which makes chips and sensors used in collision-avoidance systems and is a supplier to Aptiv, said Monday that it tested its own software after the crash by playing a video of the Uber incident on a television monitor. Mobileye said it was able to detect Herzberg one second before impact in its internal tests, despite the poor second-hand quality of the video relative to a direct connection to cameras equipped to the car."

Even Mobileye, the famous Tesla decapitation platform, would be able to at least initiate braking with a shitty Chinese dashcam as a data source.

Yes, and Aptiv has also explicitly said that Uber had disabled their stuff on the Volvo so as not to interfere with their own systems (Aptiv doesn't want to be tainted by the scandal). The poor performance of the Uber car is not in dispute; get off your high horse.

Rebane only explains how these systems work so that even mere mortals can understand it; he is not defending the poor performance of Uber's tech (which is apparently not news and was known even before this accident).

But given that you neither read nor understood his article, I don't think you are in a position to judge it either.
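The "memory" idea in the quoted passage - keeping a short per-object history of sensor classifications and letting the majority outvote a single odd reading - can be sketched in a few lines of Python. This is a toy illustration only; the class, sensor names, and labels are made up, not how any real collision-avoidance stack is structured:

```python
from collections import Counter, deque

class Track:
    """Toy object track: remembers recent per-sensor labels and votes."""
    def __init__(self, history=6):
        # Keep only the last few (sensor, label) readings for this object.
        self.readings = deque(maxlen=history)

    def update(self, sensor, label):
        self.readings.append((sensor, label))

    def consensus(self):
        # Majority vote over the recent history: one frame where a sensor
        # "thinks it's a dog" is outvoted by the accumulated "lorry" votes.
        votes = Counter(label for _, label in self.readings)
        return votes.most_common(1)[0][0]

track = Track()
for _ in range(2):
    track.update("lidar", "lorry")
    track.update("camera", "lorry")
track.update("camera", "dog")   # one conflicting reading
print(track.consensus())        # -> lorry
```

The point of the sketch: neither sensor classifies alone; the interpretation layer fuses readings over time, exactly as the quoted article describes.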
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: janoc on March 27, 2018, 02:42:03 pm
He has a commercial interest in a field that doesn't really deliver all that much commercial value staying as hot as it is.

The market only needs so many PhDs to data-mine for advertising and security services.

 :palm: Seriously?

So someone with "a commercial interest in the field" is going to spend 3 years** in Tallinn in Estonia, slaving away for next to nothing (and likely accruing debt in the process) doing a PhD at uni instead of joining some startup today, right? You don't need a PhD to apply a bunch of existing tools like Keras or Tensorflow to a problem such as data mining, you know.

Holy crap that logic ...

** Corrected before someone attacks me on it, missing the point - but 5-year PhDs are quite common, especially when done in external form, i.e. alongside a regular job outside the uni.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: coppice on March 27, 2018, 02:45:51 pm
I don't see why an Estonian machine learning PhD student would want to "make excuses" for the poor performance of the Uber's car they are not involved with
That's human nature, and you'll see it when flaws are pointed out with any technology being developed. You generally get two types of response:
  • "Their system is flawed, but we have a robust fix for that flaw in our version."
  • Replies like the one from the PhD student, because an attack on one of us is an attack on all of us.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: janoc on March 27, 2018, 02:55:54 pm
That's human nature, and you'll see it when flaws are pointed out with any technology being developed. You generally get two types of response:
  • "Their system is flawed, but we have a robust fix for that flaw in our version."
  • Replies like the one from the PhD student, because an attack on one of us is an attack on all of us.

You forgot the third type of response - anyone posting something trying to provide context and not right away joining the mob calling for lynching is obviously defending the indefensible and needs to be tarred and feathered. Ideally by folks who didn't even bother to read (and understand) what they have said. The good old - "if you aren't with us, you are against us" mentality.

I fail to see where he is providing any "defense" or "responding to an attack". But I rest my case; I have better things to do than to argue with people who are not even attempting to understand what others have to say before attacking them.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ez24 on March 27, 2018, 05:12:29 pm
Wouldn't some of the "lower tech" collision avoidance technology (the type you can already get on production cars) have noticed the bicyclist at all, and at least have started to brake?

Toyota is coming out with what they call TSS-P 2.0 next year and in the specs is "bicycle avoidance".
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Rasz on March 27, 2018, 07:38:04 pm
Selective quoting without understanding, yay.

Here is the full quote:
Quote
Modern CA systems are operating with memory, they have maps and registers of what they have seen. They keep track of recorded objects from image to image. If two seconds ago both sensors (or more precisely, the algorithms that interprets the sensor readings) agreed the object to be a lorry and now one of them thinks it’s a dog,
(emphasis mine).

This directly suggests running object classification on lidar output alone; this is where I stopped reading.

So working in machine learning research and directly with the tech involved (e.g. LIDAR mapping) is not in the field.  Where do you think companies like Waymo or Uber got their machine learning algorithms from? :palm:

By this measure I could claim to be working in the field as well; after all, I do have a depth camera on my desk and did 3 semesters of machine learning.
For me, working in the field actually means deploying the tech, not learning about it.


2 it's not my sole opinion, many people on hackernews had the same sentiment.
3 Btw did Uber recruit in Tallinn?

How is that relevant?
- other ML people also didn't quite agree with his conclusions
- self-explanatory

Yes and Aptiv has also explicitly said that Uber has disabled their stuff on the Volvo as to not interfere with their own systems (Aptiv doesn't want to be tainted by the scandal). The poor performance of the Uber's car is not in dispute, get off your high horse.

The whole thing is a list of excuses, with the conclusion that it's so hard it was basically unavoidable. Meanwhile, the pretty crappy Mobileye has no problem even with a shitty copy of a copy of the dashcam footage - the same footage the author deemed too bad for image classification (and implied this was the picture the actual system would use!?!).


So someone with "a commercial interest in the field" is going to spend 3 years** in Tallinn in Estonia, slaving away for next to nothing (and likely accruing debt in the process) doing a PhD

Higher ed is mostly tuition free in Estonia.

Basically, people who actually work with the technology all agreed this was a big fat WTF from Uber, and then this dude pops up with "it's really hard guys, mmmkay". This was the PERFECT setup for an autonomous car: a slow-moving person, dragging a metal reflector, crossing 3-4 lanes of road at a 90° angle, directly under two street lamps, on an empty road with good visibility (the actual visibility, not the lol dashcam's).


Edit:

https://www.theverge.com/2018/3/27/17168606/nvidia-suspends-self-driving-test-uber (https://www.theverge.com/2018/3/27/17168606/nvidia-suspends-self-driving-test-uber)

"NVIDIA Titan V Reportedly Producing Errors in Scientific Simulations" https://wccftech.com/nvidia-titan-v-error/ (https://wccftech.com/nvidia-titan-v-error/)

Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: alho on March 27, 2018, 08:12:24 pm
Producer of No Agenda Show had this to say.
Quote
 
Boots on the ground from Brian Kaufman

I live right by where the Uber self-driving accident occurred, and the Uber released video showing that apparently no driver could have avoided hitting that woman in a million years is totally misleading. See my YouTube video below where I drive the same path as the Uber autonomous car was on. The attached photos were taken from the point of the accident.

Link to video in question
https://youtu.be/CRW0q8i3u6E?t=22s (https://youtu.be/CRW0q8i3u6E?t=22s)
What kind of camera are these Uber folks using? The cheapest that can be found in Shenzhen?

Never heard of the No Agenda Show, The Best Podcast In The Universe? Take a few minutes to listen; maybe you'll find it interesting.
http://naplay.it/1018/2-13-31 (http://naplay.it/1018/2-13-31)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ez24 on March 27, 2018, 08:43:32 pm

https://www.theverge.com/2018/3/27/17168606/nvidia-suspends-self-driving-test-uber (https://www.theverge.com/2018/3/27/17168606/nvidia-suspends-self-driving-test-uber)

From this link is the attached picture of the "Brain" used by AVs.   Does anyone know anything about the connectors?

thanks
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 27, 2018, 11:42:48 pm
And re his background - if I am not mistaken, it is this dude:
https://www.researchgate.net/profile/Martin_Rebane (https://www.researchgate.net/profile/Martin_Rebane)

I would say he has more clue about what he is talking about than most, given that he is working directly in the field.  What is your background to judge the article as "nonsense"?
He probably knows a bit about AI but he doesn't seem to know anything about how these cars work in practice.

His conclusion is nonsense at least: "The internals of modern AI system are far too complex to assess without having exactly the same data as was available for the Uber car". While it is technically true that without all the data it is hard to tell what happened, that is true of each and every system, AI or not. He seems to think these programs are black boxes, though, which they most definitely are not (at least not Google's; I don't know much about Uber's cars). It's not like you take all the sensor data, feed it to a neural net, and then feed the output to the car's controls.

The car's cameras would have had a large dynamic range, which means it shouldn't have had any problem with the lighting conditions (which he doesn't seem to realise either). There should also have been lidar and radar, which are independent of lighting. At the very least the lidar is certain to have detected that there was a large object directly in front of the car, and the program should have noticed that and activated the brakes in time. There really is no excuse for the software not handling this situation; it's such a common and predictable scenario. So unless the brakes suddenly failed for some other reason, this really shouldn't have happened. Hopefully a proper investigation will eventually tell us exactly what went wrong.
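To put a number on "in time", here is a back-of-the-envelope stopping-distance estimate. The 7 m/s² deceleration and 0.25 s system latency are assumed typical values, not figures from the investigation:

```python
def stopping_distance(v_ms, decel=7.0, latency=0.25):
    """Distance (m) to stop from speed v_ms (m/s), with a short latency
    before braking and constant deceleration thereafter."""
    return v_ms * latency + v_ms ** 2 / (2.0 * decel)

v = 60 / 3.6                           # 60 km/h in m/s
print(round(stopping_distance(v), 1))  # -> 24.0 (metres)
```

A roof-mounted automotive lidar has a range on the order of 100 m, i.e. several stopping distances, so a large object straight ahead should have been visible with room to spare.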
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: maginnovision on March 28, 2018, 02:16:26 am

https://www.theverge.com/2018/3/27/17168606/nvidia-suspends-self-driving-test-uber (https://www.theverge.com/2018/3/27/17168606/nvidia-suspends-self-driving-test-uber)

From this link is the attached picture of the "Brain" used by AVs.   Does anyone know anything about the connectors?

thanks

Not sure on exact series but pretty basic automotive stuff. Some LVDS, coax, and a big rugged connector. Probably for dlc/dme. Hard to say for sure though. A TCU looks pretty similar but fewer connectors. Could actually be fiber optic for the blue? Just can't be sure. Also I don't think that board is out yet.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: G7PSK on March 28, 2018, 07:29:06 am
There is one thing that has not been brought up yet. Suppose it was not an accident, but someone was hacking Uber for whatever reason and deliberately either made the car blind to the pedestrian or just overrode the controls and drove the car over her.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 28, 2018, 09:41:03 am
Conspiracy theories are such a fascinating waste of time.    JMHO
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on March 28, 2018, 02:46:29 pm
If it was hacked that would be relatively easy to tell afterwards. Doubt it's the case here but if autonomous cars are networked they will likely be susceptible to hacking, which is a big problem imho. You can build unhackable systems in theory but it seems very hard to do in practice.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Decoman on March 28, 2018, 04:39:01 pm
Somehow, I suspect that autonomous vehicles with mediocre hardware/software (whatever is current, really) might be as "impractical" (as in directly hazardous, because of poor operational security) and as dangerous as having people in boxes go ballistic through pipes just to travel as fast as possible that way.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: TheDane on March 28, 2018, 05:39:42 pm
If it was hacked that would be relatively easy to tell afterwards. Doubt it's the case here but if autonomous cars are networked they will likely be susceptible to hacking, which is a big problem imho. You can build unhackable systems in theory but it seems very hard to do in practice.

You could say it was 'hacked' - literally - if removing almost all of the LIDARs is equal to 'hacking a system into defective pieces'  :-//
https://www.reuters.com/article/us-uber-selfdriving-sensors-insight/ubers-use-of-fewer-safety-sensors-prompts-questions-after-arizona-crash-idUSKBN1H337Q (https://www.reuters.com/article/us-uber-selfdriving-sensors-insight/ubers-use-of-fewer-safety-sensors-prompts-questions-after-arizona-crash-idUSKBN1H337Q)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: CatalinaWOW on March 29, 2018, 01:02:51 am
Maybe this is a case of someone "Muntzing" themselves to the lowest cost solution.  And going at least one step too far.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Marco on March 29, 2018, 09:29:28 am
Everything is built down to a price. I don't necessarily see the problem with using LIDAR + stereo.

Stereo cameras in IR should be able to get a good depth picture near the car.
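For scale, depth from a stereo pair follows the pinhole relation Z = f·B/d. A quick sketch with made-up but plausible numbers (700 px focal length, 30 cm baseline - illustrative assumptions, not the Uber car's actual rig):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# A 10-pixel disparity with f = 700 px and B = 0.3 m:
print(stereo_depth(700, 0.3, 10))   # -> 21.0 (metres)
```

Because Z is inversely proportional to disparity, a fixed ±1 px matching error costs roughly Z²/(f·B) metres of depth accuracy - which is why stereo gives a good depth picture near the car and degrades quickly with range.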
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: CatalinaWOW on March 29, 2018, 01:45:01 pm
Everything is built down to a price. I don't necessarily see the problem with using LIDAR + stereo.

Stereo cameras in IR should be able to get a good depth picture near the car.

The car had an adequate set of sensors installed.  But various reports indicate one or more of the sensor systems was disabled.  No comments have been made on the processors, but there is always the possibility that some processing capacity was also disabled.

I agree, everything is built to a price.  But not everyone agrees on the minimum required performance, and there are certainly mistakes made on the route to whatever performance level a particular group is trying to achieve.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on March 30, 2018, 01:09:34 am
Everything is built down to a price.

I will challenge this statement (but not too harshly).

Certainly, when it comes to mass production, this statement is true, but we are looking at research and development at this stage - and cost per unit considerations are not as important as establishing reliable solutions.

Once those reliable solutions are found, then the cost pressures come to the fore.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Stray Electron on March 30, 2018, 02:11:00 am
  That's a great point. If this can happen on what is arguably the best system available, regardless of the costs, then what happens when the bean counters are placed in control and the system is designed to be mass produced at the absolute minimum costs?   Every mass production car on the road is filled with countless examples of systems that were designed for minimum production cost at the expense of reliability, maintainability, repairability and performance. The radios, the carpet material, the seats, the ultra thin wiring, the poor quality connectors, plastic head lamp covers, cheap plastic body parts, the list goes on and on.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: SilverSolder on March 30, 2018, 03:14:57 pm
  Every mass production car on the road is filled with countless examples of systems that were designed for minimum production cost at the expense of reliability, maintainability, repairability and performance. The radios, the carpet material, the seats, the ultra thin wiring, the poor quality connectors, plastic head lamp covers, cheap plastic body parts, the list goes on and on.

Cars (and every component in them) are built to a spec: 100,000 miles or 10 years. Most cars beat that spec; if well taken care of, you can double or even triple it.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: emptech on March 31, 2018, 11:14:10 pm
There is no law against stupidity; there will always be stupid people. Unfortunately this person paid the ultimate penalty. If it was a normal vehicle, I'd say the driver was over-driving his/her headlights, according to what I saw in the video. I know that when police use LIDAR, shiny objects give the best signal; I suppose there was nothing reflective on the bicyclist. I didn't say victim - not sure if that would be the automobile or the bicycle.

Jim
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Fungus on April 01, 2018, 12:01:55 pm
I wonder how much a company would have to pay somebody to take a hit from an autonomous car?

It seems a good way to put a competitor out of business.  :popcorn:

https://www.youtube.com/watch?v=MHNL49Qemd0 (https://www.youtube.com/watch?v=MHNL49Qemd0)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on May 24, 2018, 10:52:25 pm
National Transportation Safety Board (NTSB) preliminary report:
https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf (https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf)

Apparently the car saw the woman and realised it had to brake. But apparently it was programmed neither to brake on its own nor to warn the safety driver (because the system is so bad it generates too many false positives). The safety driver wasn't looking at her phone; instead she was looking at a monitor with diagnostics information (as instructed):

Quote
According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review

I don't understand how Uber expects their safety drivers to stare at a diagnostics monitor and at the same time keep track of what happens on the road.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: orion242 on May 24, 2018, 11:09:02 pm
For facks sake, the NTSB has an invalid SSL cert on their page?
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: ez24 on May 24, 2018, 11:10:09 pm
National Transportation Safety Board (NTSB) preliminary report:
https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf (https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf)


Another quote which I find shocking: the car was designed NOT to stop!  Surprised the regulators allowed this  :--
Quote
According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: orion242 on May 24, 2018, 11:27:49 pm
I'd be more surprised if the locals don't grab their torches and pitchforks and camp out in front of the politicians that welcomed that bullshit in. An alpha test on public roadways!
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: james_s on May 25, 2018, 12:37:09 am
Most of the locals are probably excited about the concept; having very little technical understanding, they have bought into the hype. I suspect most people have absolutely no idea just how challenging it is to make a sufficiently reliable fully autonomous car. I also worry that partially autonomous cars will breed complacency by making it possible to get away with not paying attention to what's happening, most of the time.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on May 25, 2018, 01:53:28 am
National Transportation Safety Board (NTSB) preliminary report:
https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf (https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf)

Apparently the car saw the woman and realised it had to brake. But apparently it was programmed neither to brake on its own nor to warn the safety driver (because the system is so bad it generates too many false positives). The safety driver wasn't looking at her phone; instead she was looking at a monitor with diagnostics information (as instructed):

Quote
According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review

I don't understand how Uber expects their safety drivers to stare at a diagnostics monitor and at the same time keep track of what happens on the road.
The poor operator was set up to fail - even doing the "right" thing.

"The system is not designed to alert the operator."  MAJOR fail, IMO.  The system started detecting something and didn't notify the operator straight away, which one can understand - to a point - but it would have been nice to have some sort of "imminent danger" warning.  Even a couple of seconds might have provided enough warning to lessen the severity of the impact.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: r3bers on June 22, 2018, 04:12:07 pm
They found the main reason!
It's the Hulu show... "The Voice"
https://www.cnbc.com/2018/06/22/uber-driver-streamed-the-voice-before-self-driving-car-crash.html (https://www.cnbc.com/2018/06/22/uber-driver-streamed-the-voice-before-self-driving-car-crash.html)
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: chris_leyson on June 22, 2018, 04:28:43 pm
Posted this under another topic, oops.
A police report released Thursday would seem to imply that the driver was looking at her mobile phone and only looked up 0.5s before the crash. Wouldn't be surprised if the driver faces charges of vehicle manslaughter. It's illegal in the UK to drive whilst using a mobile phone and it's probably illegal in a lot of other countries. Probably illegal in the state of Arizona as well, but people still do it.
https://eu.azcentral.com/story/news/local/tempe-breaking/2018/06/21/uber-self-driving-car-crash-tempe-police-elaine-herzberg/724344002/
Looking back at the dash cam footage, I initially thought the driver must have been looking at some Uber instrumentation screen; obviously not, and your cellphone provider knows exactly what you're uploading or downloading at any given time. Definitely vehicular manslaughter; you can't blame the AI.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Rasz on June 24, 2018, 09:44:44 pm
They found the main reason!
It's the Hulu show... "The Voice"
https://www.cnbc.com/2018/06/22/uber-driver-streamed-the-voice-before-self-driving-car-crash.html (https://www.cnbc.com/2018/06/22/uber-driver-streamed-the-voice-before-self-driving-car-crash.html)

I will totally trust US police over NTSB investigation findings!!1 It's not like police in the US lie all the time, and the Tempe Chief of Police didn't run to the papers proclaiming driver blame 30 minutes after the accident, or release a misleading $10 dash cam recording.

Yes, this was the main reason - not the software coded to plow through obstacles even when it recognizes a live person with a bicycle on the road.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Brumby on June 25, 2018, 01:33:14 am
I agree.  Focussing on the female occupant for blame is a distraction from the real issues.

A fundamental question about which I haven't seen any information is: How many hours had she been doing this kind of work?  This is critically important in understanding the human factor.

Put someone who is a real driver into such a vehicle for the first time and, for the first hour, they are going to be hovering over the controls like a nervous nellie. After 3 hours of the vehicle driving itself like it was on a licence test, they are going to start to relax. After 5 hours they will be properly relaxed, and after 10 hours they are going to become rather complacent and their mind will start wandering, if it hasn't already. Some people may take longer to get to that point ... and some may get there much quicker.

IMO, the female occupant simply cannot be held culpable because the whole vehicular environment was set up to compromise her ability to act in the same way as a driver of an ordinary vehicle ... and I would expect any half-decent lawyer to get her off if charges were brought.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: apis on June 25, 2018, 10:53:20 pm
Watching TV is pretty inexcusable, but Uber's problems are still there. They designed the safety driver protocols and procedures that were used. If they weren't so cheap they could have had two drivers in the car, for example; then no one would even be tempted to watch TV while working. They could have audible alarms when the car detects something odd. They could have systems that monitor safety driver alertness. Maybe the safety driver is guilty of negligence, but so is Uber.
Title: Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
Post by: Reianami on March 30, 2021, 10:18:16 am
I agree with this quote. Now almost everything depends on price.