Driverless taxi service getting approved in SF
rstofer:

--- Quote from: Stray Electron on June 09, 2022, 02:25:14 am ---
--- Quote from: eti on June 09, 2022, 01:30:06 am ---If a human driver kills people, there is accountability. If SOFTWARE does so, it is waved away as "It was an unfortunate accident, <blah blah rehearsed corporate 'apology' (ass-covering)> and we will take extra steps..." blah blah blah.

They want ALL the profit, but none of the accountability, and a robot can't be sent to prison.

--- End quote ---


  This pretty much sums up the whole issue.  Ask yourself: has anyone gone to jail yet over the two Boeing 737 MAX crashes?  And how many people died in those accidents?

--- End quote ---

You can only go to jail if you are convicted of a CRIME, and that can only occur if there was intent or gross negligence.  An accident is just that: an accident.  In most cases, there are so many contributing factors that it is impossible to determine negligence, and proving intent is even harder.

Now, that doesn't mean there won't be civil lawsuits; that's a given.  But nobody goes to jail over a civil judgment.

There is still plenty of work to do on the AI driving the car.  Right now it has a propensity for driving into police cars and getting confused by traffic cones.

https://www.autoweek.com/news/green-cars/a37425353/another-tesla-hits-police-car/




When you ride in a self-driving car, your life is not only in the hands of a programmer but also those of a math major.  The day won't come when I let a car drive me around.  Not happening!

It is estimated that it takes 70,000 GPU-hours to rebuild the Tesla neural network.  I'm still trying to find a definition of "GPU-hour", since GPU apparently refers to the entire device rather than to its highly variable number of CUDA cores.  I have one graphics card with a couple of hundred CUDA cores and another with nearly 6,000.  Both are GPUs (of a sort), but they certainly aren't equivalent.
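
To make the ambiguity concrete: with made-up throughput numbers (the TFLOPS figures below are assumptions on my part, not specs for any actual card), the same 70,000 GPU-hour budget works out to wildly different amounts of compute depending on the device. A minimal Python sketch:

--- Code: ---
# Illustrative sketch: a bare "GPU-hour" count only becomes comparable
# across devices once you weight it by each card's throughput.
# All TFLOPS figures below are placeholder assumptions, not measured specs.

def peak_tflop_hours(gpu_hours: float, peak_tflops: float) -> float:
    """Convert device-hours into peak-TFLOP-hours for a given card."""
    return gpu_hours * peak_tflops

BUDGET_GPU_HOURS = 70_000  # the retraining figure quoted above

# Two hypothetical cards: a few hundred CUDA cores vs. nearly 6,000.
cards = {
    "small card (~200 cores)": 0.5,    # assumed peak FP32 TFLOPS
    "large card (~6000 cores)": 30.0,  # assumed peak FP32 TFLOPS
}

for name, tflops in cards.items():
    work = peak_tflop_hours(BUDGET_GPU_HOURS, tflops)
    print(f"{name}: {BUDGET_GPU_HOURS:,} GPU-hours = {work:,.0f} peak-TFLOP-hours")
--- End code ---

On those assumed numbers the two cards differ by a factor of 60, which is why a GPU-hour count says very little unless the device model is stated alongside it.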


SiliconWizard:
We've had a number of threads about AI and liability; those points are definitely valid, and we could already see that this is a very polarizing topic.
Yes, AI in everything is a great way of abolishing the concept of liability.

But what got me even more concerned in this particular case is how authorities are dealing with this.
As I said above, it would appear that the "new normal" is to put the precautionary principle aside and approve things that aren't formally proven safe, just because there is now a sense of emergency that supposedly justifies it and that we are all expected to embrace.
jonpaul:
Consider the situation versus the investigation of airline crashes.

Aviation has radar, ground-based radio altimeters, ILS, control towers, and extensive weather and runway-condition reporting.

Still, every year there are some failures and crashes.

Expect much worse for autonomous vehicles.

Jon
Bud:

--- Quote from: SiliconWizard on June 09, 2022, 06:08:47 pm ---We've had a number of threads about AI and liability; those points are definitely valid, and we could already see that this is a very polarizing topic.
Yes, AI in everything is a great way of abolishing the concept of liability.

But what got me even more concerned in this particular case is how authorities are dealing with this.
As I said above, it would appear that the "new normal" is to put the precautionary principle aside and approve things that aren't formally proven safe, just because there is now a sense of emergency that supposedly justifies it and that we are all expected to embrace.

--- End quote ---
What kind of emergency are you referring to with regard to taxi service, or to autonomous passenger vehicles in general, for that matter?  :-//
Bassman59:

--- Quote from: jonpaul on June 09, 2022, 01:50:56 pm ---"No 9000 computer has ever made a mistake or distorted information..."

HAL 9000, in Stanley Kubrick's classic 1968 film "2001: A Space Odyssey"

--- End quote ---

That statement is correct in the universe of the film.

HAL did not make a mistake when it removed the life support from the sleeping astronauts. HAL did not make a mistake when it killed Frank and when it tried to kill David.

HAL's primary goal was to ensure the success of the mission to Jupiter. As part of its continuing operation and calculation, it determined that the fallibility of the humans put that mission in jeopardy. Thus the only logical action was to eliminate the human element.

You might remember that HAL had a twin on Earth, and a test was run comparing HAL to that twin; it showed no evidence of malfunction. The test they didn't run was to put the Earth-bound twin "in HAL's shoes on the Discovery" and see how it responded. Would the twin have made the same fatal decision? Obviously yes.

Remember that Clarke was clearly aware of Asimov's Three Laws of Robotics. But HAL was not a robot -- specifically, HAL was not sentient. Though maybe Clarke's fourth law should have been "Any sufficiently advanced processing power/programming is indistinguishable from sentience"?