Author Topic: Tesla’s Autopilot disengages itself before crash?

Offline madires (Topic starter)

  • Super Contributor
  • ***
  • Posts: 8481
  • Country: de
  • A qualified hobbyist ;)
Tesla’s Autopilot disengages itself before crash?
« on: March 19, 2025, 04:29:24 pm »
Tesla fans expose Tesla’s own shadiness in attempt to defend Autopilot crash, https://electrek.co/2025/03/17/tesla-fans-exposes-shadiness-defend-autopilot-crash/
 

Offline mtwieg

  • Frequent Contributor
  • **
  • Posts: 600
  • Country: us
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #1 on: March 24, 2025, 12:38:10 pm »
Pretty sure there have been many confirmed examples over the years of Autopilot/FSD disengaging just before an incident occurs; not sure why anyone would be surprised at this point.

The question of why this occurs is more speculative. There may be valid reasons to disable Autopilot in the case of an imminent crash. Tesla probably doesn't want to get caught in trolley problem scenarios (do I plow into the clearly visible fire engine 50m ahead of me, or swerve into the Subaru next to me?). But the fact that it apparently doesn't even attempt to brake at all is basically indefensible.
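To sketch what "disengage without disarming the safety net" could look like (a toy supervisor, not Tesla's actual architecture; every name and threshold here is invented):

Code:
# Illustrative only: emergency braking is evaluated before, and
# independently of, the driver-assist layer, so a disengagement
# cannot also disarm it. All names and thresholds are invented.

AEB_TTC_THRESHOLD_S = 1.6  # assumed time-to-collision trigger, in seconds

def time_to_collision(range_m, closing_speed_mps):
    """Naive TTC: distance to the obstacle / closing speed."""
    if closing_speed_mps <= 0.0:
        return float("inf")  # not closing on the obstacle
    return range_m / closing_speed_mps

def control_step(autopilot_engaged, confident, range_m, closing_speed_mps):
    ttc = time_to_collision(range_m, closing_speed_mps)
    if ttc < AEB_TTC_THRESHOLD_S:
        return "FULL_BRAKE"          # safety layer wins, engaged or not
    if autopilot_engaged and not confident:
        return "HANDOVER_TO_DRIVER"  # alert the driver, AEB stays armed
    return "NORMAL"

# 20 m to a stopped fire engine, closing at 25 m/s -> TTC = 0.8 s
assert control_step(True, False, 20.0, 25.0) == "FULL_BRAKE"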

Reminds me of these tests with child-size dummies from 2022: [embedded video]
« Last Edit: March 24, 2025, 12:44:27 pm by mtwieg »
 

Online jbb

  • Super Contributor
  • ***
  • Posts: 1281
  • Country: nz
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #2 on: March 24, 2025, 08:33:26 pm »
Personally, I wouldn’t use a self driving system that may request I take over without warning. I know that it would be hard for me to keep focus on the road when not actively driving; to then have control handed back to me when something is already wrong seems like A Bad Plan.

On the videos: there’s a chance it’s ‘natural’ behaviour from the autopilot system; I imagine there must be some kind of ‘confidence’ score generated by the software, and if it goes low enough then human driving is required. So the car realises something isn’t quite right and requests manual control quite late in the scenario.
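Nobody outside Tesla knows the real logic, but the guess above can be sketched as a threshold on a smoothed confidence signal (all names and values invented):

Code:
# Hypothetical sketch: the planner emits a per-frame confidence score;
# a smoothed copy of it dropping below a threshold triggers a takeover
# request. Values are invented for illustration.

TAKEOVER_THRESHOLD = 0.6
ALPHA = 0.2  # smoothing factor for an exponential moving average

def takeover_requests(confidences):
    """Yield True on frames where manual control should be requested."""
    smoothed = None
    for c in confidences:
        smoothed = c if smoothed is None else ALPHA * c + (1 - ALPHA) * smoothed
        yield smoothed < TAKEOVER_THRESHOLD

# Confidence decays as the scene stops making sense to the planner;
# the smoothing delays the request - "quite late in the scenario".
frames = [0.9, 0.85, 0.8, 0.6, 0.4, 0.3, 0.2]
print(list(takeover_requests(frames)))
# [False, False, False, False, False, False, True]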

On automatic emergency braking: I imagine that the Tesla engineers are trying really hard on this front. Ideally, they want emergency braking to be both sensitive (i.e. always brake in a real emergency) and selective (i.e. never brake when it’s not an emergency). It’s a trade-off, especially when you consider that some drivers tailgate.
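That trade-off is easy to demonstrate with toy numbers: when the detector's scores for real threats and for harmless clutter overlap, no single threshold zeroes both error rates.

Code:
# Toy illustration: one score threshold cannot give both zero missed
# emergencies and zero phantom braking when the two score
# distributions overlap. All numbers are invented.
import random

random.seed(0)
emergencies = [random.gauss(0.7, 0.15) for _ in range(1000)]  # real threats
clutter     = [random.gauss(0.4, 0.15) for _ in range(1000)]  # bridges, signs...

for threshold in (0.3, 0.5, 0.7):
    missed  = sum(s < threshold for s in emergencies) / 1000
    phantom = sum(s >= threshold for s in clutter) / 1000
    print(f"threshold {threshold}: missed {missed:.1%}, phantom {phantom:.1%}")
# Lowering the threshold cuts missed emergencies but raises phantom
# braking (nasty with a tailgater behind), and vice versa.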
 
The following users thanked this post: thm_w, voltsandjolts


Online thm_w

  • Super Contributor
  • ***
  • Posts: 8079
  • Country: ca
  • Non-expert
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #4 on: March 24, 2025, 11:58:18 pm »
Discussed to death on other sites and videos, but:
- Autopilot and FSD can disengage before a crash; that is normal behavior. If the crash happens within 5 s of the disengagement, it's still recorded as a fault of the software.
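That counting rule is simple enough to state as code (a sketch of the rule as described above; the field names are invented, and the real telemetry formats are unknown):

Code:
ATTRIBUTION_WINDOW_S = 5.0

def attributed_to_adas(crash_time_s, disengage_time_s, engaged_at_crash):
    """True if the crash counts against Autopilot/FSD: either it was
    still engaged at impact, or it disengaged within the window."""
    if engaged_at_crash:
        return True
    if disengage_time_s is None:  # never engaged on this drive
        return False
    return 0.0 <= crash_time_s - disengage_time_s <= ATTRIBUTION_WINDOW_S

# Disengaging 1 s before impact does not shift the blame to the driver:
assert attributed_to_adas(100.0, 99.0, engaged_at_crash=False)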

There is another guy who showed that an HW3 vehicle would have crashed into the fake image while an HW4 vehicle did not. Maybe due to the extra detail and processing power, or maybe just the angle of the light at the time of filming. It's not clear: https://www.youtube.com/watch?v=9KyIWpAevNs

The video would have been fine if Rober hadn't misrepresented things up front and had instead stated specifically "I'm testing Autopilot on an older HW3 car for these examples". The implication was that it was the latest and greatest, which it is not. Autopilot is probably five years behind tech-wise. They likely don't care about updating Autopilot's abilities, a free feature, because that would just eat into their FSD sales.

Quote
I would disengage such a system, too.

So do many others: https://www.which.co.uk/policy-and-insight/article/car-safety-tech-driving-motorists-round-the-bend-which-finds-avra32B7heXH

Not really that related: you turn on Autopilot like cruise control, it's not on by default. Useful for long drives or slow-moving traffic.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 22215
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #5 on: March 25, 2025, 11:10:10 am »
Quote
I would disengage such a system, too.

So do many others: https://www.which.co.uk/policy-and-insight/article/car-safety-tech-driving-motorists-round-the-bend-which-finds-avra32B7heXH

Not really that related: you turn on Autopilot like cruise control, it's not on by default. Useful for long drives or slow-moving traffic.

Did you read the article? It doesn't mention cruise control, but mentions ADAS systems that
  • modify what the car is doing, e.g. change direction dangerously, or change speed unpredictably and for no reason
  • distract the driver at critical moments (my experience in a hire van), e.g. blind spot monitoring. Drowsiness monitoring and alerting may well have similar problems
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 7162
  • Country: nl
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #6 on: March 25, 2025, 11:33:00 am »
Quote
Did you read the article? It doesn't mention cruise control, but mentions ADAS systems that
  • modify what the car is doing, e.g. change direction dangerously, or change speed unpredictably and for no reason
  • distract the driver at critical moments (my experience in a hire van), e.g. blind spot monitoring. Drowsiness monitoring and alerting may well have similar problems

Which describes Level 2 in every car that has it.
 

Online thm_w

  • Super Contributor
  • ***
  • Posts: 8079
  • Country: ca
  • Non-expert
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #7 on: March 25, 2025, 10:11:56 pm »
Quote
Did you read the article? It doesn't mention cruise control, but mentions ADAS systems that
  • modify what the car is doing, e.g. change direction dangerously, or change speed unpredictably and for no reason
  • distract the driver at critical moments (my experience in a hire van), e.g. blind spot monitoring. Drowsiness monitoring and alerting may well have similar problems

This thread is more about Autopilot/FSD specifically in a crash scenario, not ADAS in general.

But as jbb says, it's a tough balance with auto braking: to guarantee that you never get a false positive and yet always brake in emergency scenarios.
They've seemingly almost eliminated phantom braking in the latest FSD versions, though there were still a few situations where the driver had to take over or the car would have crashed.

I just don't see it being possible, at this time, without additional sensors.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 22215
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #8 on: March 25, 2025, 11:24:46 pm »
Quote
This thread is more about Autopilot/FSD specifically in a crash scenario, not ADAS in general.

But as jbb says, it's a tough balance with auto braking: to guarantee that you never get a false positive and yet always brake in emergency scenarios.
They've seemingly almost eliminated phantom braking in the latest FSD versions, though there were still a few situations where the driver had to take over or the car would have crashed.

I just don't see it being possible, at this time, without additional sensors.

If companies can't get nominally simpler systems working reliably, they have no chance of getting a more complex system working reliably, e.g. Tesla's FSD-beta.

If you think Tesla FSD is remotely trustworthy, then you haven't been paying attention to the wealth of well-documented experiences to the contrary.

Now that Musk is best pals with President Chamberlain, expect all the US Transport Department investigations to be terminated before they can produce a report. If they do produce a report, expect it to be removed from all web sites and other records. I know that sounds implausible and ridiculous, but that is what is happening in many spheres in the US this year.
 

Online thm_w

  • Super Contributor
  • ***
  • Posts: 8079
  • Country: ca
  • Non-expert
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #9 on: March 25, 2025, 11:59:24 pm »
Quote
If companies can't get nominally simpler systems working reliably, they have no chance of getting a more complex system working reliably, e.g. Tesla's FSD-beta.

If you think Tesla FSD is remotely trustworthy, then you haven't been paying attention to the wealth of well-documented experiences to the contrary.

Not sure where I said or implied that it was trustworthy, just that v13 is a large improvement over Autopilot, and seems to no longer have phantom-braking issues while also reducing the number of interventions. You can find example videos of hundreds of miles of driving with no interventions.

Of course it still does dangerous and dumb things regularly, which is why it's Level 2: a human must be able to take over. Though, I'd take FSD as a pedestrian over most human drivers; it seems fairly cautious and always aware of where people are walking.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 22215
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #10 on: March 26, 2025, 12:39:21 am »
Quote
Not sure where I said or implied that it was trustworthy, just that v13 is a large improvement over Autopilot, and seems to no longer have phantom-braking issues while also reducing the number of interventions. You can find example videos of hundreds of miles of driving with no interventions.

That is, of course, irrelevant. There are hundreds of photos of me not snapping my tendon, and of me remembering to lower the undercarriage before landing.

Quote
Of course it still does dangerous and dumb things regularly, which is why it's Level 2: a human must be able to take over. Though, I'd take FSD as a pedestrian over most human drivers; it seems fairly cautious and always aware of where people are walking.

It reliably fails to recognise children - or at least objects that look and move like children. The ethics of putting a child in a Tesla FSD test are highly suspect :) For further info see the videos at https://dawnproject.com/

The problem with LLM/NN/ML systems is that they cannot be designed; they can merely be trained and tested. Hence it is not possible to predict what decision they will make in a given circumstance. The examples of that problem are legion, going back to Igor Aleksander's WISARD in the early 1980s. Regrettably, youngsters don't remember history, and so repeat mistakes.
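The point is easy to demonstrate: two models can pass the same tests and still behave arbitrarily differently outside them (a toy illustration, not a claim about Tesla's stack):

Code:
# Two tiny models fit the same data about equally well, yet diverge
# on an input unlike anything they were tested on. numpy only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)          # training inputs
y = (x > 0.5).astype(float)         # label: is x past the midpoint?

m_lo = np.polynomial.Polynomial.fit(x, y, deg=3)
m_hi = np.polynomial.Polynomial.fit(x, y, deg=15)

x_test = rng.uniform(0, 1, 200)     # test set from the SAME range
for name, m in (("deg-3", m_lo), ("deg-15", m_hi)):
    acc = np.mean((m(x_test) > 0.5) == (x_test > 0.5))
    print(f"{name}: test accuracy {acc:.0%}")

# Off-distribution input: each model extrapolates however its innards
# happen to, and no spec exists to say which (if either) is right.
print("outputs at x=2.0:", m_lo(2.0), m_hi(2.0))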
« Last Edit: March 26, 2025, 12:43:28 am by tggzzz »
 

Offline m98

  • Frequent Contributor
  • **
  • Posts: 643
  • Country: de
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #11 on: March 26, 2025, 02:50:55 am »
Dan O’Dowd, someone whose marketing claims are theoretically impossible, is not to be taken seriously.
Autonomous cars and robots will necessarily use neural networks unless someone comes up with a fundamental revolution, as current rule-based algorithms are entirely unsuitable to work in diverse, real-world environments.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 22215
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #12 on: March 26, 2025, 10:12:44 am »
Quote
Dan O’Dowd, someone whose marketing claims are theoretically impossible, is not to be taken seriously.
Autonomous cars and robots will necessarily use neural networks unless someone comes up with a fundamental revolution, as current rule-based algorithms are entirely unsuitable to work in diverse, real-world environments.

No, and no.

The videos have to be taken seriously, as do the reports that have led to multiple investigations of the cars' misbehaviour. If you don't like those specific videos, have a look on yootoob; there are many examples there.

As for marketing claims, Musk is the one who makes excessive claims - and is being investigated for them.

Dan O'Dowd has serious credibility w.r.t. software. His safety critical software products have made him a billionaire, which means Musk cannot use legal threats to silence him.

Just because no other technique works doesn't mean that NN works, can ever work, should be used or must be used. Sometimes tasks are just not possible with the available technology.
« Last Edit: March 26, 2025, 12:22:50 pm by tggzzz »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 7162
  • Country: nl
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #13 on: March 26, 2025, 12:03:49 pm »
Quote
Autonomous cars and robots will necessarily use neural networks unless someone comes up with a fundamental revolution, as current rule-based algorithms are entirely unsuitable to work in diverse, real-world environments.

All of the current systems have an outer layer of rule-based algorithms doing boundary checks, some of them working on nearly raw sensor data.
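A generic sketch of such an envelope (no vendor's actual code; all limits invented):

Code:
# The learned policy proposes; simple rule-based checks against
# near-raw sensor data dispose. All limits are invented.

MAX_STEER_RATE = 0.1   # max steering change per cycle, radians
MIN_CLEARANCE_M = 2.0  # hard floor on the closest range return ahead

def safe_command(nn_steer, nn_accel, prev_steer, min_range_ahead_m):
    # Check 1: rate-limit steering so the NN cannot jerk the wheel.
    delta = max(-MAX_STEER_RATE, min(MAX_STEER_RATE, nn_steer - prev_steer))
    steer = prev_steer + delta
    # Check 2: hard brake on raw range data, whatever the NN says.
    if min_range_ahead_m < MIN_CLEARANCE_M:
        return steer, -1.0  # full brake
    return steer, nn_accel

print(safe_command(nn_steer=0.5, nn_accel=0.3,
                   prev_steer=0.0, min_range_ahead_m=1.5))
# -> (0.1, -1.0): steering clamped, braking forced by the raw-data check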

They are indeed unsuitable for diverse real-world environments, which is why Waymo cars have remote operators who can push them through the safety checks. It is also why the only Level 3 self-driving cars can merely follow the duckling in slow traffic on highways.

This is unlikely to change until AI decides to make decisions for us.
 

Offline m98

  • Frequent Contributor
  • **
  • Posts: 643
  • Country: de
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #14 on: March 26, 2025, 02:11:37 pm »
Quote
Just because no other technique works doesn't mean that NN works, can ever work, should be used or must be used. Sometimes tasks are just not possible with the available technology.
Are you living in some parallel reality where we aren't already massively utilizing NNs and the halting problem was solved by Dan O'Dowd?
Demanding that a system be free of failure modes is demanding something that is fundamentally impossible.
 
The following users thanked this post: Siwastaja

Offline Marco

  • Super Contributor
  • ***
  • Posts: 7162
  • Country: nl
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #15 on: March 26, 2025, 03:16:39 pm »
It doesn't have to be free of failures; it has to present acceptable liability to the beancounters. It won't, and self-driving car companies are unlikely to get handouts to waive the liability.

Slow urban driving in LIDAR-mapped areas with frequent stops and human-handhold requests, plus follow-the-duckling driving in highway jams, is the best we'll get.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 22215
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #16 on: March 26, 2025, 05:33:30 pm »
Quote
Are you living in some parallel reality where we aren't already massively utilizing NNs and the halting problem was solved by Dan O'Dowd?
Demanding that a system be free of failure modes is demanding something that is fundamentally impossible.

Wrong, irrelevant and a strawman argument.

Now please try to address the other points. Here they are again, for ease of reference...

Quote
Dan O’Dowd, someone whose marketing claims are theoretically impossible, is not to be taken seriously.
Autonomous cars and robots will necessarily use neural networks unless someone comes up with a fundamental revolution, as current rule-based algorithms are entirely unsuitable to work in diverse, real-world environments.

No, and no.

The videos have to be taken seriously, as do the reports that have led to multiple investigations of the cars' misbehaviour. If you don't like those specific videos, have a look on yootoob; there are many examples there.

As for marketing claims, Musk is the one who makes excessive claims - and is being investigated for them.

Dan O'Dowd has serious credibility w.r.t. software. His safety critical software products have made him a billionaire, which means Musk cannot use legal threats to silence him.

Just because no other technique works doesn't mean that NN works, can ever work, should be used or must be used. Sometimes tasks are just not possible with the available technology.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 7162
  • Country: nl
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #17 on: March 26, 2025, 10:24:28 pm »
Quote
Dan O'Dowd has serious credibility w.r.t. software.

He's got some profitable defence contracts, but he does oversell himself a bit. "Never Fails and Can't Be Hacked".
 

Online thm_w

  • Super Contributor
  • ***
  • Posts: 8079
  • Country: ca
  • Non-expert
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #18 on: March 26, 2025, 10:55:34 pm »
Quote
That is, of course, irrelevant. There are hundreds of photos of me not snapping my tendon, and of me remembering to lower the undercarriage before landing.

Of course it's relevant to the topic at hand, Rober's test.
Either try to stick to that main discussion point, or continue to irritate people by pushing the discussion off topic.

Quote
It reliably fails to recognise children - or at least objects that look and move like children. The ethics of putting a child in a Tesla FSD test are highly suspect :) For further info see the videos at https://dawnproject.com/

Here is an older v12.5 version where it stops and sees the children OK (the current version is 13; pointing out flaws in older versions isn't too relevant):

[embedded video, from 2:00 in]

It doesn't "reliably fail to recognize children" here.
Does that mean I would trust it to drive through a school zone unsupervised? No, not even close.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 22215
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #19 on: March 26, 2025, 10:57:52 pm »
Quote
He's got some profitable defence contracts, but he does oversell himself a bit. "Never Fails and Can't Be Hacked".

A buffer overflow possibility doesn't mean something has been hacked, and there might be other defense mechanisms to prevent it causing a problem. I suspect it hasn't been hacked, but personally - without knowing more details - I'd be unhappy about claiming it can't be hacked. Nonetheless, O'Dowd does have serious credibility in the software field. Note that O'Dowd has Teslas and likes them (apart from the FSD-beta, of course). He has nothing (significant) to gain or lose by forcing Tesla to acknowledge the FSD-beta's misbehaviour.

More importantly, Tesla's FSD-beta is demonstrably defective, despite Musk's wild claims. Only a fanboi, paid shill, or similar could attempt to claim otherwise.

Personally, I have serious doubts that it can be made reliable. The problems appear to stem from the NN/ML system classifying some groups of pixels as "immovable background that can be ignored". If that filter is made too loose, then too many background objects will have to be actively avoided (e.g. braking for non-existent crossroads). If it is made too tight, then relevant objects will be ignored; there are many examples of that happening, from children to parked lorries, school buses, and emergency vehicles stopped at the roadside.

AIUI, other manufacturers don't rely solely on cameras, but have other sensors (e.g. LIDAR) with complementary advantages and disadvantages. Sensor fusion is, in general, a good technique.
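As a sketch of why complementary sensors help (purely illustrative; real fusion is far more involved):

Code:
# Conservative fusion: act on the nearest obstacle claimed by ANY
# sensor, so one sensor's blind spot (a camera fooled by a painted
# wall, say) is less likely to be fatal. Purely illustrative.

def fused_obstacle_distance(camera_m, lidar_m):
    """Each argument is a distance estimate in metres, or None if that
    sensor reports no obstacle ahead."""
    estimates = [d for d in (camera_m, lidar_m) if d is not None]
    if not estimates:
        return None        # both sensors agree the road is clear
    return min(estimates)  # favour the safer (nearer) estimate

# Camera sees "open road" painted on a wall; LIDAR sees the wall:
print(fused_obstacle_distance(camera_m=None, lidar_m=38.0))  # -> 38.0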
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 22215
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #20 on: March 26, 2025, 11:10:04 pm »
Quote
Here is an older v12.5 version where it stops and sees the children OK (the current version is 13; pointing out flaws in older versions isn't too relevant):

[embedded video, from 2:00 in]

It doesn't "reliably fail to recognize children" here.
Does that mean I would trust it to drive through a school zone unsupervised? No, not even close.

Are you seriously suggesting that because it notices a child in one test that it doesn't ignore them in others?!

Show us the design equations (or equivalent) that will now prevent it ignoring children. That's impossible, of course, since there are no design equations or principles - merely a few tests (few relative to the number of situations that will be encountered in the real world).

What's to prevent the new system failing in ways that the old system didn't? Nothing, because there is no design, merely tests.

What's to prevent the next update reverting to previous bad behaviour? Nothing.

Remember the old engineering adage "You can't inspect quality into a product; it has to be designed in". Modern software weenies seriously believe that their product works because it passes the tests. They have difficulty comprehending that if the tests are inadequate then all bets are off. Idiocracy in action :(
 

Offline vk6zgo

  • Super Contributor
  • ***
  • Posts: 7933
  • Country: au
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #21 on: March 26, 2025, 11:59:53 pm »
Many years ago, the Public Works Dept in Western Australia decided it would be a good idea to put a reflective warning on the rear of their "utes" so that people wouldn't run into the back of them.

To that end, they used a smaller version of the pattern used to warn drivers on a terminating road that they were approaching a "T" junction, so that they didn't happily continue & sail off into the bush at night.

Following a PWD "ute" one night as we both slowed, the 'ute' suddenly disappeared: its warning pattern lined up perfectly with the one on the other side of the "T" junction.
The reflections were so strong that the tail lights didn't spoil the illusion, until he applied his brakes.

I'm not sure what "FSD" would make of that. Maybe, not having the instinctive pattern recognition of a human brain, its "dumber" reaction would not have been fooled, but maybe it still would have been.
A human can see the illusion but also knows that cars don't just disappear; perhaps a machine might assume that the original "ute" was a false image.
 

Offline Andy Chee

  • Super Contributor
  • ***
  • Posts: 1512
  • Country: au
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #22 on: March 27, 2025, 05:20:00 am »
FWIW, Mark Rober's contribution to the issue: [embedded video]

 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 22215
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #23 on: March 27, 2025, 10:43:49 am »
Quote
Following a PWD "ute" one night as we both slowed, the 'ute' suddenly disappeared: its warning pattern lined up perfectly with the one on the other side of the "T" junction.
The reflections were so strong that the tail lights didn't spoil the illusion, until he applied his brakes.

I'm not sure what "FSD" would make of that. Maybe, not having the instinctive pattern recognition of a human brain, its "dumber" reaction would not have been fooled, but maybe it still would have been.
A human can see the illusion but also knows that cars don't just disappear; perhaps a machine might assume that the original "ute" was a false image.
- and concerning in some real-life situations.

Ah yes, illusions.

NN/ML vision systems are known to have significant issues, e.g. changing one pixel can completely change the interpretation. Having said that, human vision is far from perfect; some of the "dots" and "motion induced" illusions here are astounding: https://michaelbach.de/ot/index.html
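A toy version of the single-pixel problem (a trivial linear stand-in for the classifier; real attacks on real networks work the same way, just with a smarter search):

Code:
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 8))          # toy model weights
img = rng.uniform(0.0, 1.0, (8, 8))  # toy "image"
# Nudge the input close to the decision boundary, as a genuinely
# ambiguous scene would be (the score becomes exactly 0.05):
img -= w * (np.sum(w * img) - 0.05) / np.sum(w * w)

def label(image):
    return int(np.sum(w * image) > 0.0)

original = label(img)
flips = []
for i in range(8):
    for j in range(8):
        tweaked = img.copy()
        tweaked[i, j] = 1.0 - tweaked[i, j]  # alter a single pixel
        if label(tweaked) != original:
            flips.append((i, j))
print(f"{len(flips)} of 64 single-pixel edits flip the decision")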

I wonder how NN/ML vision copes with contrast reversal, where white road lines appear black (after a shower, with the sun reflecting off the water film); I'm sure there are other cases.

I also wonder how students will "prank" NN/ML systems, in the vein of putting out traffic cones to divert all traffic through the campus car park. I'd start with painting a pothole on the road :) Any such susceptibility would be highly specific, so it might be possible to confuse just one brand of car.

One day I'm going to go to a local Tesla saleroom, pretend to be interested, and have a test drive with the salesman in control using FSD-beta. I'd get him to go to my house, along a motorway and narrow roads with potholes, missing verges and blind bends.
« Last Edit: March 27, 2025, 10:52:35 am by tggzzz »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 22215
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Tesla’s Autopilot disengages itself before crash?
« Reply #24 on: March 27, 2025, 11:14:02 am »
Quote
FWIW, Mark Rober's contribution to the issue: [embedded video]

I haven't seen that before; thanks. It is a good use of yootoob: short and requires moving pictures :) The only quibble I would have is that I don't think the "painted wall" illusion is the most damning. I'm sure similar things will exist, but they will be much rarer than the other everyday problems demonstrated.

Real-world example on a highway in Taiwan: [embedded video]


I've seen similar videos of a Tesla hitting an inflatable lorry parked on a runway. Why on earth did the Tesla NN/ML decide to ignore it?

Amusing Tesla accessory, which illustrates the company's attitudes: https://electrek.co/2023/06/30/tesla-gives-new-owners-uk-grabbing-stick-after-forcing-left-hand-drive/

I suspect "Consumer Reports" is the US version of "Which?". https://www.consumerreports.org/cars/car-safety/tesla-autopilot-recall-fix-does-not-address-safety-problems-a5133751100/
 

