Author Topic: "Godfather" of AI Leaves Google, Warns of Danger Ahead  (Read 2891 times)


Offline Rick Law (Topic starter)

  • Super Contributor
  • ***
  • Posts: 3470
  • Country: us
"Godfather" of AI Leaves Google, Warns of Danger Ahead
« on: May 01, 2023, 05:43:52 pm »
From NY Times article:

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Quoted from NY Times article "‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead", by Cade Metz, 5/1/2023
« Last Edit: May 01, 2023, 05:46:07 pm by Rick Law »
The following users thanked this post: MK14, RJSV

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1284
  • Country: pl
Re: "Godfather" of AI Leaves Google, Warns of Danger Ahead
« Reply #1 on: May 02, 2023, 09:38:00 am »
Imagine spending ten years playing an important role in supporting a company whose business model is based on, and inseparable from, privacy violation, manipulation, and every shady trick in the arsenal for circumventing regulations and protections; a company where humans are deprived of any rights, which will happily lead people to harm as long as it brings in $0.001 of profit, and which will reinforce and exaggerate biases and tensions on a global scale.

After all this, one retires at an old age and starts talking about how dangerous machine learning is. Maybe it’s not ML that is dangerous, but everything “somebody” actively supported with his work for a decade, with smortnets at best giving a slight advantage? Maybe it’s not ML, but the reality somebody spent half a century shaping, with any potential ML abuse simply filling its voids?

« Last Edit: May 02, 2023, 09:43:43 am by golden_labels »
People imagine AI as T1000. What we got so far is glorified T9.
