General > General Technical Chat
Driverless taxi service getting approved in SF
pcprogrammer:

--- Quote from: SiliconWizard on June 11, 2022, 02:36:58 am ---All I'm saying is that they seem to be acting as though it was urgent to deploy those taxi services, and thus seem to be taking some unusual shortcuts.
The underlying reasons, I do not know, although we could think of a few potential causes that would be likely to trigger political action.

--- End quote ---

Money 8)
pcprogrammer:

--- Quote from: jpanhalt on June 10, 2022, 08:16:12 pm ---News from Missouri.  Will driverless taxis be liable for any STD one might contract while a passenger? 
https://www.foxbusiness.com/economy/geico-std-lawsuit-settlement-car
OMG   :palm:

--- End quote ---

What is wrong with people? Having consensual sex is your own responsibility. Contracting a disease from it is then a risk you can't blame on an insurance company.
sleemanj:

--- Quote from: jpanhalt on June 10, 2022, 08:16:12 pm ---https://www.foxbusiness.com/economy/geico-std-lawsuit-settlement-car

--- End quote ---

For people who didn't want to enable ads: a woman had sex in the back of a car with some dude and caught an STD, then filed a claim with the company that insured the car. They refused the claim, it went to arbitration, and the woman was awarded 5.2 million dollars.

America is weird.
RoGeorge:

--- Quote from: Bassman59 on June 10, 2022, 08:10:19 pm ---
--- Quote from: jonpaul on June 09, 2022, 01:50:56 pm ---"No 9000 computer has ever made a mistake or distorted information....."

HAL 9000 in Stanley Kubrick's classic 1968 film, "2001: A Space Odyssey"

--- End quote ---

That statement is correct in the universe of the film.

HAL did not make a mistake when it removed the life support from the sleeping astronauts. HAL did not make a mistake when it killed Frank and when it tried to kill David.

HAL's primary goal was to ensure the success of the mission to Jupiter. As part of its continuing operation and calculation, it made a determination that the fallibility of the humans put that mission in jeopardy. Thus the only logical action was to eliminate the human element.

--- End quote ---

That's yet another illustration of a much deeper fact: there is no such thing as a standalone good/evil, or right/wrong.  These notions only make sense in relation to a given goal.  If something suits the given goal, then it's good/right; otherwise it's evil/wrong.

Same with the HAL computer.  From the stated goal's perspective (completing the mission) it was good to kill the crew; from the human goal's perspective (staying alive) it was wrong.


Once we realize nothing can be classified as good or evil without first stating the goals' context in which we want to judge it, all the ethics debates become meaningless.  Ethics seems complicated only because we do not state clearly enough what the goal is in the first place.

We each drag with us a big bag of goals and tacit assumptions, and we expect/want everybody else to carry the same set of goals and principles as ours.  The sets of goals from any two such bags are never the same.  They never perfectly overlap.  So we point fingers and say that's wrong, or evil, and get upset or angry about it, instead of recognizing that it's simply a different bag of goals.
pcprogrammer:
Found this older video that near the end made me laugh :-DD



It is about AI taking over, and how we all need to behave because otherwise it will kill us all. The planet and nature would benefit from it greatly.