The situation in Denmark is a little special, though: they have very high car sales taxes (something like 180%), so any discount really matters! I recall seeing a Mercedes in Copenhagen with a personalised number plate: "PAID 3X"
In the Netherlands the tax breaks on EVs have been partly removed (no more tax breaks on EVs over €50k). Sales of the Tesla Model S and Model X have dropped to insignificant numbers; the only model selling in reasonable numbers is the much cheaper Model 3.
Mercedes tried that with the 190 in the 1980s. It didn't go over too well.
...but with the added dimension of staking your life on the software controlling a high-speed vehicle.
* The nature of the problem prevents 'perfection.' Road driving is a highly complex task, with an effectively infinite number of obscure, marginal, and tricky cases. Even alert, skilled humans get fooled sometimes and end up in accidents, some of which are high speed and fatal. No level of AI is going to completely eliminate all situational f*ck-ups. Self-driving cars ultimately involve a statistical risk evaluation: what added probability of death or maiming do you accept for the 'convenience' of not being in control of the vehicle yourself?
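To make the "statistical risk evaluation" concrete, here is a back-of-envelope sketch. The human baseline is roughly the published US figure (around 1.2 fatalities per 100 million vehicle-miles), and the average annual mileage is an approximation; the "2x safer" AV rate is a purely hypothetical assumption for illustration, not a measured figure.

```python
# Illustrative numbers only: baseline is the approximate US average fatality
# rate; the AV rate below is an assumed multiplier, not real data.
HUMAN_FATALITIES_PER_MILE = 1.2e-8   # ~1.2 deaths per 100M vehicle-miles
MILES_PER_YEAR = 13_500              # approx. average annual mileage, US driver

def annual_fatality_risk(rate_per_mile, miles=MILES_PER_YEAR):
    """Expected fatalities per driver-year at a given per-mile rate."""
    return rate_per_mile * miles

human_risk = annual_fatality_risk(HUMAN_FATALITIES_PER_MILE)
# A hypothetical AV that is 2x safer per mile would halve the expected risk:
av_risk = annual_fatality_risk(HUMAN_FATALITIES_PER_MILE / 2)
print(f"human: {human_risk:.2e}/yr, AV (assumed 2x safer): {av_risk:.2e}/yr")
```

The point is that the comparison is per mile and per year, not "zero vs. non-zero": the question is whose per-mile rate is lower, and by how much.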
FWIW, the 190 series became the C-Class, which is not only the most popular model Mercedes sells in the US, it's the most popular model of any luxury brand.
Waymo's disengagement rate for 2018 was one disengagement per 11,017 miles, out of 1.28 million self-driven miles in California. That corresponds to about one disengagement per year for an average US driver.
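The arithmetic behind the claim checks out. The 13,476 miles/year figure is the approximate FHWA average for US drivers (an assumption on my part, not from the report itself):

```python
# Back-of-envelope check of the parent comment's numbers.
miles_driven = 1_280_000        # Waymo self-driven miles in CA, 2018
miles_per_disengage = 11_017    # reported disengagement interval
avg_annual_miles = 13_476       # approx. FHWA average for US drivers

disengagements = miles_driven / miles_per_disengage       # total in 2018
per_driver_year = avg_annual_miles / miles_per_disengage  # per "average driver"
print(round(disengagements), round(per_driver_year, 2))   # ~116 total, ~1.2/yr
```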
Anything above zero means you need a steering wheel and an attentive driver. Either no steering wheel or no automation: a hybrid mix is the worst of all possibilities, because it means you can't trust the automation or you can't blame the driver.
The family of someone struck by an auto needs to seek a just outcome, and lawyers at 20 paces with a large corporation is going to be an issue. You can't treat it the same as a plane crash due to a firmware error: you can't take the cars off the road, and you can't spend millions on a crash investigation.
The only solution I can see is for each vehicle to carry a designated driver who accepts legal responsibility for the firmware's choices. Perhaps they insert a card to load ethical parameters that customise the firmware's programming. Which obviously means you can't set the car to circle the block whilst you pick up your dry cleaning. Nor can you use it to freight children to school or Saturday morning sports. Nor can you get dropped off at the door and have the car go and find a parking spot.
If autos are the solution, then I don't think the problem is well understood. Buses and trains will solve congestion better; clustering people closer to work and community facilities will work better. Children will be better off walking to school close to home, or socialising with friends on a bus or train.
For the most part, safety means not crashing into something or driving off the road, which you can do deterministically quite well using good sensors (lidar), good maps, and traditional control theory. That's what Waymo is doing. Tesla has taken a very different approach: they use only cheap sensors (cameras) and then use machine learning algorithms to convert the video feed into a 3D representation of the world. Such machine-learning-based computer vision systems have made really amazing progress over the last decades, but as we have seen, the Tesla system sometimes makes fatal mistakes. The old-school, but more expensive, Waymo approach is much more robust. Using lidar means they don't have to convert a 2D representation into 3D; they get a very accurate 3D representation directly from the lidar sensor (and on top of that they add sensor information from cameras and radar).
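A toy sketch of what a "deterministic" safety layer can look like: given a range reading to the nearest obstacle (e.g. from lidar) and the current speed, the stopping distance v²/2a tells you whether braking is required. The deceleration and margin values below are illustrative assumptions, not anyone's real parameters:

```python
# Minimal sketch of a deterministic braking check, assuming a lidar-style
# range measurement. Parameters are illustrative, not production values.
def must_brake(range_m: float, speed_mps: float,
               decel_mps2: float = 6.0, margin_m: float = 5.0) -> bool:
    """True if stopping distance (v^2 / 2a) plus a margin reaches the obstacle."""
    stopping_distance = speed_mps ** 2 / (2 * decel_mps2)
    return stopping_distance + margin_m >= range_m

print(must_brake(range_m=35.0, speed_mps=20.0))  # 33.3 m + 5 m >= 35 m -> True
print(must_brake(range_m=80.0, speed_mps=20.0))  # 38.3 m < 80 m -> False
```

The appeal of this style is that it is analysable: you can bound its behaviour mathematically, which you cannot easily do for a learned perception stack.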
I might add that Waymo also uses machine learning, and Tesla has radar sensors as well as cameras (just no expensive lidar). But from what I know there is a large difference in their approaches, and Tesla relies much more on machine learning.
In the last question of this talk, the Waymo engineer also mentions they use a hybrid approach, only using machine learning when it improves safety (even if they have to use the more expensive sensors).
"You want to be safe in the environment, so you don't want to make errors in perception, prediction and planning. And the state of machine learning is not at the point where it never makes errors."
https://youtu.be/Q0nGo2-y0xY?t=3774
I am beginning to think that all Tesla-related material should be moved to the Dodgy technology page.
I'm pretty sure it has mmWave radar that can give you the size, speed, and direction of travel of detected objects.
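The speed part comes from the Doppler shift. The relation v = f_d·c / (2·f_c) is standard radar physics; 77 GHz is the common automotive band, but the sample shift below is made up for illustration:

```python
# Radial speed of a target from its measured Doppler shift, for a
# 77 GHz automotive mmWave radar. Sample numbers are illustrative.
C = 299_792_458    # speed of light, m/s
F_CARRIER = 77e9   # 77 GHz carrier frequency

def radial_speed(doppler_shift_hz: float) -> float:
    """Radial velocity (m/s) from Doppler shift: v = f_d * c / (2 * f_c)."""
    return doppler_shift_hz * C / (2 * F_CARRIER)

# A 10 kHz shift corresponds to about 19.5 m/s (~70 km/h) closing speed.
print(round(radial_speed(10_000), 1))
```

Note this gives only the radial component; direction across the beam has to come from angle-of-arrival estimation over the antenna array.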
The Waymo approach is frankly garbage. They may have a very small number of disengagements when driving in areas with their high-detail maps, but at large scale it's a dead-end approach. You will never have precise high-detail maps for everything, let alone error-free ones. Changes happen all the time, and keeping the maps up to date is a huge challenge on its own; it's simply unfeasible to maintain up-to-date high-detail maps for every area. Not to mention that in bad weather, lidar becomes an erratic piece of garbage.
Google has already demonstrated they are able to map the parts of the world where they would want to drive in the foreseeable future, so the need for maps is not much of a limitation. Using maps is safer, so why not use them if you can? Clearly they are able to deal with some changes in the mapped environment, or else they wouldn't be able to drive as much as they do.
I have nothing against Musk (what he achieved with SpaceX is amazing!) or Tesla. But if we keep the debate in this style, where every criticism is brushed away with "I don't hear you, go away, Luddite!", where heads go into the sand, and where hype about machine learning is mindlessly repeated as some magic mantra that will fix everything given enough data, then people will die in completely avoidable and preventable accidents.