Author Topic: EEVblog #1066 - Uber Autonomous Car Fatality - How?  (Read 37098 times)


Offline ez24

  • Super Contributor
  • ***
  • Posts: 3082
  • Country: us
  • L.D.A.
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #125 on: March 23, 2018, 05:19:04 am »
This is what a human sees at the same spot, at the same time of night:

There will be a lot of accident re-creations made to prove this.
YouTube and Website Electronic Resources ------>  https://www.eevblog.com/forum/other-blog-specific/a/msg1341166/#msg1341166
 

Offline hendorog

  • Super Contributor
  • ***
  • Posts: 1583
  • Country: nz
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #126 on: March 23, 2018, 05:27:16 am »
Failure of the LIDAR

Says who?

More likely a failure to figure out that somebody would be stupid enough to cross the road like that.

Based on the reports and observations of the video, the car didn't brake, despite the pedestrian being in the lane. The driver did not look up either, which they would have done if the car braked.

Therefore the hazard was not detected. Therefore the LIDAR failed.
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 11154
  • Country: au
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #127 on: March 23, 2018, 06:40:01 am »
@CNe7532294
I have not missed any of your points at all.  It's just that a lot of them are irrelevant.

I will address this specific point in the hope that you will see my point - which you continue to ignore...

One more thing, you really haven't made any case to say why we should take a human being completely out of the loop (my main point!).

AT NO TIME have I ever directly stated - or even suggested - that we take a human being out of the loop.

What I AM saying is that people will take themselves out of the loop.

The fundamental expectation is that the vehicle will drive itself - and after hours and hours of this happening (quite probably including some successful collision avoidance), the human "driver" will become confident in the AV system and they will lessen their focus.  This is human nature and there is no way to overcome it.  In fact, the legal system is going to be on their side since "self driving" is the very claim being made for these systems.


Your continued reference to the aviation industry is just annoying.  The correlation between the two is extremely poor.  If I just take your reference to QF72, by the time the pilots took the correct action, a car on the road would have had the collision and, quite possibly, emergency services would have been contacted and on their way.  No amount of training could have enabled a timely response.
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 11154
  • Country: au
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #128 on: March 23, 2018, 06:49:03 am »
There is, however, one parallel with the aircraft industry that I can see having a perfect correlation - accident investigation and subsequent recommendations.

Even with highly trained pilots, people die - but the industry doesn't abandon technology.  It simply works out what went wrong and then takes steps to fix it.  Autonomous vehicle tech will go the same way.
 

Offline pickle9000

  • Super Contributor
  • ***
  • Posts: 2212
  • Country: ca
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #129 on: March 23, 2018, 07:09:28 am »
So the vehicle must have a safe area, the size of which is determined by speed. I can't see the system needing to determine the type of object, only its size and direction, once it's inside the safe area. If it's far away, steer clear; otherwise brake to reduce the impact. The only other problem would be avoiding other vehicles.

If the car was being tailgated, would that have prevented braking?
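
That speed-dependent safe-area idea can be sketched roughly. Everything here is an illustrative assumption (the deceleration, reaction time, and zone thresholds are invented, not taken from any real AV stack):

```python
def stopping_distance_m(speed_ms, reaction_s=0.5, decel_ms2=7.0):
    """Distance covered during system reaction time plus braking distance."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

def action_for_object(object_range_m, speed_ms):
    """Speed-dependent safe area: react to range only, not object type."""
    safe = stopping_distance_m(speed_ms)
    if object_range_m > 2 * safe:
        return "track"            # far away: keep watching, steer clear if needed
    elif object_range_m > safe:
        return "slow"             # approaching the safe area: shed speed early
    else:
        return "emergency_brake"  # inside the safe area: brake to reduce impact
```

At about 17 m/s (roughly the reported ~38 mph) the safe area works out to about 29 m, so anything closer than that should trigger hard braking regardless of what the object is.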

 
 

Offline oldway

  • Super Contributor
  • ***
  • !
  • Posts: 2174
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #130 on: March 23, 2018, 11:08:13 am »
This police inquiry and its conclusions are staggering... I wonder how it is possible that people do not notice it...

The video presented does not count as formal proof, because it was provided by the main interested party, who could very well have manipulated it in its own interest (reduced brightness, cuts to suggest that reaction and braking were impossible, etc.).

Remember that this is not a surveillance camera video.

Several pieces of evidence suggest there is a problem with this video.

1) The road was lit, yet no street lighting appears in the video.

2) The efficiency and range of the headlights do not correspond to the headlights of a modern car. (They look like the headlights of a Ford Model T!)

3) The pedestrian and her bike do not appear gradually; it seems that portions of the video are missing.

4) The sensitivity of the camera seems totally abnormal. Digital cameras are usually more sensitive to light than the human eye... there are even cameras capable of filming at very low light levels.
Here, the video looks as if it was shot with the sensitivity of an old Super 8 camera.

Under such conditions, a reconstruction was essential, and the video made during that reconstruction should have been published so the two could be compared.

Publishing only a video whose authenticity cannot be proven is biased and abnormal.

The result of the police investigation implies that the pedestrian effectively committed suicide by knowingly crossing the road in a dark place in front of a vehicle that could not see and avoid her... that makes no sense.

The reality is more likely that the pedestrian thought she was perfectly visible and that a motorist would certainly have braked to avoid her. Of course, she was imprudent, but not suicidal.
Unfortunately, there was no driver, only an automatic system that failed to detect her and did not brake.

Uber wants to pass this off as unavoidable (and has succeeded), but for me it is clear that it was not inevitable.

A pedestrian crossing a lit road is visible from far enough away for the driver to brake, or at least to reduce speed enough not to kill them.
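
That last claim can be put in rough numbers. A hedged sketch (all figures are textbook assumptions — ~1.5 s human reaction time, ~7 m/s² braking — not data from this case):

```python
import math

def impact_speed_ms(initial_ms, sight_m, reaction_s=1.5, decel_ms2=7.0):
    """Speed remaining at the pedestrian's position if braking starts
    only after a human-like reaction time; 0.0 means a full stop in time."""
    braking_m = sight_m - initial_ms * reaction_s  # distance left for braking
    if braking_m <= 0:
        return initial_ms  # pedestrian reached before the brakes even engage
    v_squared = initial_ms ** 2 - 2 * decel_ms2 * braking_m
    return math.sqrt(v_squared) if v_squared > 0 else 0.0
```

With these assumptions, a car at 17 m/s (~38 mph) that sees the pedestrian 80 m away stops completely; seen only 30 m away, it still hits at about 15 m/s. The whole argument hinges on whether the lit road really gave that sight distance.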
 

Offline Decoman

  • Regular Contributor
  • *
  • Posts: 161
  • Country: no
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #131 on: March 23, 2018, 11:11:04 am »
Looking at the video again, I find it odd that the passenger looks up and looks startled, but his eyes are looking to the left of center, with the passenger being seated on the left side of the car. The exterior shot (whatever the true video quality really is) seems to show the pedestrian only when she is at the center. That makes me think the passenger perhaps saw the pedestrian to the left of the car some time before the car hit her, which doesn't match the exterior recording (or whatever it is). Just to speculate wildly: if we allow the possibility that the exterior view is fake, then maybe the interior view is fake as well (though I guess it would be easy to find out if that were the case, since a longer interior recording could probably be correlated with a longer exterior recording).

I am not a technologist or an expert, but I couldn't help finding it a little odd that the interior video looks to be all in gray, making me wonder if the interior camera is infrared (FLIR-style). Apologies if something like this has already been commented on.
« Last Edit: March 23, 2018, 11:17:46 am by Decoman »
 

Offline onesixright

  • Frequent Contributor
  • **
  • Posts: 604
  • Country: nl
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #132 on: March 23, 2018, 11:38:19 am »
I have seen that some cars (like Audi) already have thermal-imaging (TI) cameras on board.

Does anybody know if this technology is already implemented in these autonomous test cars?

Volvo already has automatic braking for pedestrians. Is this switched off during these tests?

 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 6504
  • Country: gb
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #133 on: March 23, 2018, 11:57:14 am »
I have seen that some cars (like Audi) already have thermal-imaging (TI) cameras on board.

Does anybody know if this technology is already implemented in these autonomous test cars?

Volvo already has automatic braking for pedestrians. Is this switched off during these tests?
Automated braking is quite widespread on new models now. A car can't get a 5-star NCAP rating without some of these collision avoidance mechanisms. If the cyclist had been moving very slowly you would expect a standard Volvo to have avoided the accident. With a faster-moving cyclist it would depend on how quickly they moved into the view of the sensors, but the brakes would have been on well before the impact.
 

Offline apis

  • Super Contributor
  • ***
  • Posts: 1667
  • Country: se
  • Hobbyist
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #134 on: March 23, 2018, 12:08:28 pm »
Yes, the lidar and radar would have seen that pedestrian. The car should have been able to detect, predict and prevent this accident. It shouldn't have happened, and it was ultimately a software failure. Even if the sensors were malfunctioning and sending false data to the computer, the software should have detected the malfunction and acted accordingly. Well, I suppose there is a theoretical possibility the sensors were sending false data that looked real enough that a hardware error couldn't be detected, but I find that highly unlikely. A thorough technical analysis will tell eventually. But I find it very likely that Uber really **** up and a person died because of it.

I'm sure a lot of naysayers who don't know how the tech and software work were waiting for this to happen, but having followed the progress of Google's self-driving car over the years, this accident really surprised me. I can say with 99% certainty* that it would NOT have happened to any of the Google cars. Please realise that all software is NOT created equal.
(* 1% to allow for some really weird and unlikely technical problem that couldn't possibly have been predicted or avoided.)

But to say this proves self-driving cars are bad and should never be allowed is really disingenuous. What it proves is that Uber's technology is bad and shouldn't be driving on public roads. I hope Uber are held accountable if they are found to have made an error that would have been avoided had they followed good engineering practices. I can't imagine how the software could fail in such a predictable scenario; surely they would have tested the software with thousands of test cases of exactly this kind of situation before allowing it to drive on public streets.

It is humans who create the software, and humans make mistakes (which is why it's a good idea to try to replace human drivers to begin with). But once you have a system that is a provably better driver than the average human, that system is the better option; that's a simple fact.

And all evidence to date shows that self-driving cars have the potential to be far superior to humans in almost every regard. (I'll admit Dave's point that they will probably not be good at determining if a parked car is about to leave, at least for now, but even so, the advantages far outweigh the disadvantages.)

Take a look at this presentation from 2011 of how the google cars work (and keep in mind, this is NOT the same as the Uber system). Clearly, avoiding pedestrians and cyclists is a rudimentary task their system solves successfully all the time:



 

Offline SilverSolder

  • Super Contributor
  • ***
  • Posts: 5141
  • Country: 00
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #135 on: March 23, 2018, 12:15:34 pm »
... industry doesn't abandon technology.  It simply works out what went wrong and then takes steps to fix it.

That is probably the rational response to this incident.

We know that air crashes typically have more than one cause.  If the NTSB is investigating, they are not going to be fooled by low quality video etc. -  and they are not going to be afraid of saying that 'driverless babysitter' inattention was a contributory cause in this particular accident, if that is what the evidence shows. 

We might end up with a situation where the technology gets most of the blame for failing to spot the bicyclist,  while contributory causes might include the driverless babysitter not paying sufficient attention, having perhaps not been sufficiently trained. 

For the technology part, it won't be enough to say "it failed".  They will look at the nitty gritty details of how the software was developed and tested to make sure this kind of thing wouldn't happen.  The development and QA process obviously failed - why did it fail? - Uber is vulnerable to blame on that point, as is any development or QA manager that hasn't dotted his i's and crossed his t's.

The driverless babysitter failed to prevent the accident too.  Was he paying attention and doing his job responsibly in this situation?  If the babysitter is found to have not been suitable for the task (right personality, right training), this is an area where Uber is vulnerable too.

« Last Edit: March 23, 2018, 12:28:29 pm by SilverSolder »
 

Offline apis

  • Super Contributor
  • ***
  • Posts: 1667
  • Country: se
  • Hobbyist
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #136 on: March 23, 2018, 12:24:00 pm »
Don't even get me started on how stupidly foolish it is to solely rely on GPS as a means of navigation.
Do you have any source that can corroborate that statement? Self driving cars do not solely rely on GPS. Well, at least the google cars don't (it's mentioned briefly in the video in my previous post).
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 6504
  • Country: gb
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #137 on: March 23, 2018, 12:34:37 pm »
But to say this proves self-driving cars are bad and should never be allowed is really disingenuous. What it proves is that Uber's technology is bad and shouldn't be driving on public roads. I hope Uber are held accountable if they are found to have made an error that would have been avoided had they followed good engineering practices. I can't imagine how the software could fail in such a predictable scenario; surely they would have tested the software with thousands of test cases of exactly this kind of situation before allowing it to drive on public streets.
Spot on. We see these figures of millions of actual miles and billions of simulated miles safely driven by autonomous cars, but every time I learn a little of what lies behind those numbers it's quite disturbing. Most of the real miles seem to be endlessly trundling over the same small number of paths, not exposing the system to an ever-widening range of real-world scenarios. Simulated miles at this stage are basically worthless. Simulation is a fantastic way to get to a prototype, but real-world testing is about finding what stimuli your simulations missed. To continue building miles on the simulators is more of a publicity stunt than engineering.

What is needed now is genuinely independent testing of these systems, complementing the in-house testing. The group that builds anything is too highly motivated to gloss over problems to be trusted on their own.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 6504
  • Country: gb
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #138 on: March 23, 2018, 12:39:35 pm »
Don't even get me started on how stupidly foolish it is to solely rely on GPS as a means of navigation.
Do you have any source that can corroborate that statement? Self driving cars do not solely rely on GPS. Well, at least the google cars don't (it's mentioned briefly in the video in my previous post).
From what I have seen, Google have relied solely on GPS and maps for what most of us would consider navigation - i.e. "what route do I take?". They then use radar and lidar for short range fine tuning, and obvious things like collision avoidance. I assume by this time they have started working temporary road sign recognition into the mix, to deal with road works. The signing at road works tends to be pretty weak, and haphazard, though.
« Last Edit: March 23, 2018, 12:41:31 pm by coppice »
 

Offline apis

  • Super Contributor
  • ***
  • Posts: 1667
  • Country: se
  • Hobbyist
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #139 on: March 23, 2018, 12:53:23 pm »
Simulated miles at this stage are basically worthless. Simulation is a fantastic way to get to a prototype, but real-world testing is about finding what stimuli your simulations missed. To continue building miles on the simulators is more of a publicity stunt than engineering.

What is needed now is genuinely independent testing of these systems, complementing the in-house testing. The group that builds anything is too highly motivated to gloss over problems to be trusted on their own.
Yes, the "miles" spent in simulation is a pretty useless metric unless you know exactly what happens in the simulation. Miles spent on real streets are useful, though, in order to compare accident statistics with those of other drivers.
I agree they should do independent testing before allowing a car to drive on public roads; I'm not sure what the requirements for a licence are now. Such an independent test would have been able to show this car wasn't ready for the streets.
 

Offline apis

  • Super Contributor
  • ***
  • Posts: 1667
  • Country: se
  • Hobbyist
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #140 on: March 23, 2018, 01:02:54 pm »
Don't even get me started on how stupidly foolish it is to solely rely on GPS as a means of navigation.
Do you have any source that can corroborate that statement? Self driving cars do not solely rely on GPS. Well, at least the google cars don't (it's mentioned briefly in the video in my previous post).
From what I have seen, Google have relied solely on GPS and maps for what most of us would consider navigation - i.e. "what route do I take?". They then use radar and lidar for short range fine tuning, and obvious things like collision avoidance. I assume by this time they have started working temporary road sign recognition into the mix, to deal with road works. The signing at road works tends to be pretty weak, and haphazard, though.
No, they mention specifically in the video I linked that they do not rely solely on GPS. That wouldn't be possible, since GPS isn't accurate enough, and the cars have to be able to drive even when there is poor GPS reception (which happens often in cities, e.g. "urban canyons" and tunnels) or when the sensor fails. Of course, it would be stupid not to use GPS as one additional sensor, but the GPS data is used together with all the other sensor data to determine where the car is on the internal maps, and then you use the internal maps for navigation and route planning. (GPS can only give you a location; it can't help you find a route. For that you need a map, and using a map to plan your route is exactly what humans do as well, so I don't see how that could be considered stupid.)
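
The "GPS together with all other sensor data" point can be sketched as a simple inverse-variance fusion of position estimates. This is a toy 1-D version with made-up numbers; real cars use full Kalman or particle filters over many dimensions:

```python
def fuse_position(estimates):
    """Fuse 1-D position estimates given as (position_m, variance_m2) pairs,
    e.g. from GPS, lidar map matching and wheel odometry.  Low-variance
    (trusted) sensors dominate; a dropped sensor is simply left out."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    position = sum(w * p for w, (p, _) in zip(weights, estimates)) / total
    return position, 1.0 / total  # fused position and fused variance
```

With GPS at ±5 m saying 103.0 m and a lidar map match at ±0.2 m saying 100.1 m, the fused estimate lands at about 100.1 m: the map match dominates, which is why losing GPS in an urban canyon isn't fatal to localization.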
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 6504
  • Country: gb
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #141 on: March 23, 2018, 01:24:59 pm »
Don't even get me started on how stupidly foolish it is to solely rely on GPS as a means of navigation.
Do you have any source that can corroborate that statement? Self driving cars do not solely rely on GPS. Well, at least the google cars don't (it's mentioned briefly in the video in my previous post).
From what I have seen, Google have relied solely on GPS and maps for what most of us would consider navigation - i.e. "what route do I take?". They then use radar and lidar for short range fine tuning, and obvious things like collision avoidance. I assume by this time they have started working temporary road sign recognition into the mix, to deal with road works. The signing at road works tends to be pretty weak, and haphazard, though.
No, they mention specifically in the video I linked that they do not rely solely on GPS. That wouldn't be possible, since GPS isn't accurate enough, and the cars have to be able to drive even when there is poor GPS reception (which happens often in cities, e.g. "urban canyons" and tunnels) or when the sensor fails. Of course, it would be stupid not to use GPS as one additional sensor, but the GPS data is used together with all the other sensor data to determine where the car is on the internal maps, and then you use the internal maps for navigation and route planning. (GPS can only give you a location; it can't help you find a route. For that you need a map, and using a map to plan your route is exactly what humans do as well, so I don't see how that could be considered stupid.)
To be clear, do you think anything you said is in conflict with what I said?

The latest videos I've seen from Google still seem to focus on their high res maps being correlated with lidar and radar images to fine tune the GPS guidance. I still can't find much about how they deal with a world they didn't expect, like roadworks or a massive crash blocking the way.
 

Online Fungus

  • Super Contributor
  • ***
  • Posts: 12583
  • Country: 00
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #142 on: March 23, 2018, 01:39:22 pm »
... industry doesn't abandon technology.  It simply works out what went wrong and then takes steps to fix it.

That is probably the rational response to this incident.

Yep. Find out what happened, fix it, make it so it can't happen again. Continue as before.

The only problem with this is the lawyers. If 1000 ordinary cars kill people then it's just a thing. If 1 (one) autonomous car kills somebody then it's a potential class-action lawsuit!

We've already seen this with Teslas. Hundreds of cars go up in flames every day but if one Tesla catches fire then it's headline news.

Batteries are dangerous!!!  :scared:

We know that air crashes typically have more than one cause.

Not in the early days.  :popcorn:

The airline failure model isn't really the same as for cars: if there's a mechanical failure on an airliner you can't simply pull over and call a tow truck. It has to continue flying and land safely.

With cars it's all about the software. It's complex software, but fortunately for us it's very simulatable. Inside a computer we can generate bad situations for autonomous cars all day long, and I'm sure the people at Google are going wild dreaming up crazy situations to test the car's ability.

Simulated miles at this stage are basically worthless. Simulation is a fantastic way to get to a prototype, but real-world testing is about finding what stimuli your simulations missed. To continue building miles on the simulators is more of a publicity stunt than engineering.

Strongly disagree. Simulation is the only way the engineering of these cars can progress.

The most important part of the "simulation" will be regression testing. Every change to the car's software will create a trip to the simulator to verify it still works in all their test situations. This is completely impractical with real cars.

You can also create situations far more extreme than in real life. If I were Google I'd have it set up like a video game where people could go at lunchtime to try and create a situation that beats the software.

(random clowns falling from the sky!)
« Last Edit: March 23, 2018, 01:43:30 pm by Fungus »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 6504
  • Country: gb
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #143 on: March 23, 2018, 01:44:08 pm »
Simulated miles at this stage are basically worthless. Simulation is a fantastic way to get to a prototype, but real-world testing is about finding what stimuli your simulations missed. To continue building miles on the simulators is more of a publicity stunt than engineering.

Strongly disagree. Simulation is the only way the engineering of these cars can progress.

The most important part of the "simulation" will be regression testing. Every change to the car's software will create a trip to the simulator to verify it still works in all their test situations. This is completely impractical with real cars.

You can also create situations far more extreme than in real life. If I were Google I'd have it set up like a video game where people could go at lunchtime to try and create a situation to beat the software.

(clowns falling from the sky!)
I agree with what you said. When I said simulated miles were worthless, I meant the endless clocking up of miles. Steadily adding missed test cases to the simulation suite, and then simulating for regression testing, is definitely the right approach for practically any kind of engineering development.
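
That "add missed cases, then regression-test" loop might look like this in miniature. The planner and scenario names are hypothetical placeholders, not anyone's real stack; a real suite would replay recorded sensor logs through the actual driving software:

```python
def plan(scenario):
    """Stand-in for the driving stack: returns the action it would take."""
    return "brake" if scenario["obstacle_in_path"] else "continue"

# Every real-world miss gets captured as a new scenario and kept forever.
REGRESSION_SCENARIOS = [
    {"name": "jaywalker_at_night", "obstacle_in_path": True,  "expect": "brake"},
    {"name": "empty_road",         "obstacle_in_path": False, "expect": "continue"},
    {"name": "cyclist_crossing",   "obstacle_in_path": True,  "expect": "brake"},
]

def run_regression():
    """Return the names of scenarios the current software now gets wrong."""
    return [s["name"] for s in REGRESSION_SCENARIOS if plan(s) != s["expect"]]
```

Every software change reruns the whole suite; an empty failure list is the gate before the next real-world drive.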
 

Offline FrankBuss

  • Supporter
  • ****
  • Posts: 2348
  • Country: de
    • Frank Buss
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #144 on: March 23, 2018, 01:47:38 pm »
Someone who is an adviser to Google's robotic-car team says there is a rumour that the LIDAR was turned off (search the page for "rumour"; the rest of the analysis is very good as well):

http://ideas.4brad.com/it-certainly-looks-bad-uber
So Long, and Thanks for All the Fish
Electronics, hiking, retro-computing, electronic music etc.: https://www.youtube.com/c/FrankBussProgrammer
 

Offline onesixright

  • Frequent Contributor
  • **
  • Posts: 604
  • Country: nl
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #145 on: March 23, 2018, 01:52:58 pm »
Someone who is an adviser to Google's robotic-car team says there is a rumour that the LIDAR was turned off (search the page for "rumour"; the rest of the analysis is very good as well):

Don't these cars perform a self-test, and simply refuse to drive (or at least warn) when systems are turned off / not working?

A checklist comes to mind  ::)
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 11154
  • Country: au
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #146 on: March 23, 2018, 02:06:58 pm »
Looking at the video again, I find it odd that the passenger looks up and looks startled, but his eyes are looking to the left of center, with the passenger being seated on the left side of the car. The exterior shot (whatever the true video quality really is) seems to show the pedestrian only when she is at the center. That makes me think the passenger perhaps saw the pedestrian to the left of the car some time before the car hit her

This is precisely the point I made in reply 26 where I took this still from the interior video.



It does seem their reaction is consistent with noticing the pedestrian well before impact.  I am not at all surprised by their lack of evasive action because of the points I outlined here:

 a) The "safety driver" would need to have observed the hazard before impact (which I think they may have). 
 b) They would immediately expect the AV to have responded and they would hesitate.
 c) When the AV hadn't responded, they then would have had to remember that they had the ability to take control.
 d) They then would have been challenged by taking the decision to do so.
 e) Once they had done that, they would have had to "find" the controls - the steering wheel and brake pedal as a minimum, since their hands (certainly) and feet (possibly) were not in normal driver position.
 f) They then would apply whatever action they chose.

For a normal "hands on" driver, only steps a) and f) are involved - the others are not automatic steps and they will take a finite time to be processed.
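
To put illustrative numbers on steps a) to f) — the per-step times below are pure assumptions for the sake of the argument, not measurements:

```python
# Assumed time for each step in the chain above, in seconds.
STEP_TIMES_S = {
    "a_observe": 0.5, "b_expect_av": 0.8, "c_remember": 0.5,
    "d_decide": 0.4, "e_find_controls": 0.8, "f_act": 0.3,
}

def response_delay_s(steps):
    """Total time to respond, given which steps the driver must process."""
    return sum(STEP_TIMES_S[s] for s in steps)

safety_driver_s = response_delay_s(STEP_TIMES_S)        # all six steps
hands_on_s = response_delay_s(["a_observe", "f_act"])   # a) and f) only

speed_ms = 17.0  # roughly 38 mph
extra_distance_m = (safety_driver_s - hands_on_s) * speed_ms
```

Even with these generous guesses the safety driver needs about 3.3 s against 0.8 s for a hands-on driver, and at 17 m/s those extra 2.5 s are over 40 m of travel.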


This really makes me want to find out what happened with the tech.
« Last Edit: March 23, 2018, 02:09:07 pm by Brumby »
 

Online Fungus

  • Super Contributor
  • ***
  • Posts: 12583
  • Country: 00
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #147 on: March 23, 2018, 02:15:57 pm »
This is precisely the point I made in reply 26 where I took this still from the interior video.



It does seem their reaction is consistent with noticing the pedestrian well before impact.

I'd say that reaction is either from emergency braking or from the impact. Unfortunately the video is cut off right at that frame so I can't decide which (maybe both?)

The whole way the 'interior' video is cut makes me think Uber is hiding something, eg. Did the car brake at all? :popcorn:
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 6504
  • Country: gb
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #148 on: March 23, 2018, 02:17:34 pm »
It does seem their reaction is consistent with noticing the pedestrian well before impact.  I am not at all surprised by their lack of evasive action because of the points I outlined here:

 a) The "safety driver" would need to have observed the hazard before impact (which I think they may have). 
 b) They would immediately expect the AV to have responded and they would hesitate.
 c) When the AV hadn't responded, they then would have had to remember that they had the ability to take control.
 d) They then would have been challenged by taking the decision to do so.
 e) Once they had done that, they would have had to "find" the controls - the steering wheel and brake pedal as a minimum, since their hands (certainly) and feet (possibly) were not in normal driver position.
 f) They then would apply whatever action they chose.

For a normal "hands on" driver, only steps a) and f) are involved - the others are not automatic steps and they will take a finite time to be processed.

This really makes me want to find out what happened with the tech.
I agree. A safety driver who is supposed to be the measure of last resort is really in a horrible position. In a standard Volvo XC90 they would have applied the brakes early, whether the autonomous braking system had started to or not.
 

Online Fungus

  • Super Contributor
  • ***
  • Posts: 12583
  • Country: 00
Re: EEVblog #1066 - Uber Autonomous Car Fatality - How?
« Reply #149 on: March 23, 2018, 02:19:30 pm »
I have a question for the LIDAR experts:

What happens when there's many cars using LIDAR simultaneously?

The world will be flooded with LIDAR dots, do LIDAR scanners interfere with each other?

I'm assuming the answer is "no", otherwise it wouldn't be considered for self-driving cars, but I fail to see how they can avoid interfering. The real world isn't made of perfect retroreflecting surfaces.

« Last Edit: March 23, 2018, 02:25:07 pm by Fungus »
 

