
People are falling in love with AI voices


xrunner:

--- Quote ---People are falling in love with — and getting addicted to — AI voices
Vox
Aug 18, 2024

“This is our last day together.”

It’s something you might say to a lover as a whirlwind romance comes to an end. But could you ever imagine saying it to… software?

Well, somebody did. When OpenAI tested out GPT-4o, its latest generation chatbot that speaks aloud in its own voice, the company observed users forming an emotional relationship with the AI — one they seemed sad to relinquish.

In fact, OpenAI thinks there’s a risk of people developing what it called an “emotional reliance” on this AI model, as the company acknowledged in a recent report.

“The ability to complete tasks for the user, while also storing and ‘remembering’ key details and using those in the conversation,” OpenAI notes, “creates both a compelling product experience and the potential for over-reliance and dependence.”

That sounds uncomfortably like addiction. And OpenAI’s chief technology officer Mira Murati straight-up said that in designing chatbots equipped with a voice mode, there is “the possibility that we design them in the wrong way and they become extremely addictive and we sort of become enslaved to them.”

What’s more, OpenAI says that the AI’s ability to have a naturalistic conversation with the user may heighten the risk of anthropomorphization — attributing humanlike traits to a nonhuman — which could lead people to form a social relationship with the AI. And that in turn could end up “reducing their need for human interaction,” the report says.

Nevertheless, the company has already released the model, complete with voice mode, to some paid users, and it’s expected to release it to everyone this fall.

...

3 reasons to worry about relationships with AI companions

First, chatbots make it seem like they understand us — but they don’t. Their validation, their emotional support, their love — it’s all fake, just zeros and ones arranged via statistical rules.

At the same time it’s worth noting that if the emotional support helps someone, then that effect is real even if the understanding is not.

Second, there’s a legitimate concern about entrusting the most vulnerable aspects of ourselves to addictive products that are, ultimately, controlled by for-profit companies from an industry that has proven itself very good at creating addictive products. These chatbots can have enormous impacts on people’s love lives and overall well-being, and when they’re suddenly ripped away or changed, it can cause real psychological harm (as we saw with Replika users).

Some argue this makes AI companions comparable to cigarettes. Tobacco is regulated, and maybe AI companions should come with a big black warning box as well. But even with flesh-and-blood humans, relationships can be torn asunder without warning. People break up. People die. That vulnerability — that awareness of the risk of loss — is part of any meaningful relationship.

Finally, there’s the worry that people will get addicted to their AI companions at the expense of getting out there and building relationships with real humans. This is the worry that OpenAI flagged. But it’s not clear that many people will out-and-out replace humans with AIs. So far, reports suggest that most people use AI companions not as a replacement for, but as a complement to, human companions. Replika, for example, says that 42 percent of its users are married, engaged, or in a relationship.

More -

https://www.vox.com/future-perfect/367188/love-addicted-ai-voice-human-gpt4-emotion
--- End quote ---

rhodges:
Can "the AI" give me useful information that real people have given on CH32V003 recently? Can "the AI" give me information on soil boron levels in the US? NO!

The term "AI" implies artificial intelligence, and this does not exist... today... on Earth.

I remember the claim, forty years ago, that automatic code generators would take over programming. So where are those claims now?

So the claimed "AI" can respond to questions about literature and maybe other soft subjects. That can be useful.
What would be really useful is an AI that could take in a patient's dialog and actually give a good diagnosis. Doctors today are still just highly trained monkeys. They have memorized a thousand lists and a thousand prescriptions, but most of them do not actually think. This is my observation. I welcome any suggestions on doctors who do think.

SiliconWizard:
Artificial Idiocy.

Kim Christensen:
Sounds like the 2013 movie, "Her".

golden_labels:
Artificial companions are evil, because they are just a shallow imitation of real, deeper understanding. They can’t provide a profound exchange of thoughts or supply new ideas: they are just parroting content they have seen elsewhere. They offer simplistic messages that can be easily consumed, because any reference to the vast body of human thought would require effort from the recipient, and that produces discomfort. Instead, they evoke emotional responses that keep you attached. A perfect drug, like cigarettes: it makes you feel good and you come back for more, even if subconsciously you feel there is nothing there and it’s harmful to you. What’s worse: they are designed to keep you engaged. They are built to learn as much about you as possible and to sell that information to whoever pays the most. Each interaction is recorded and instantly sent to a few customers eager to sell you further down the line. Imagine the dystopian world where algorithms follow every step of your life, decide what you should consume, and slowly replace real-world interaction with humans, keeping you in a golden cage you aren’t even willing to escape, because it’s so comfortable.

Oh, I made a mistake above. Please replace “artificial companions” with “media”.
