why should i be afraid of AI?
Zero999:

--- Quote from: aeberbach on March 30, 2023, 10:29:30 pm ---Large language models like ChatGPT and Bard and all the ones in the news lately?

They are dangerous because you probably can't tell them apart from a real person and because they are great at presenting inaccurate information authoritatively. Garbage in, garbage out! They hoover up any and all text, and provided the source isn't invalidated by some rule, it is accepted. Another danger is to human learning. Anti-plagiarism tools claim they can detect LLM output, but I doubt it very much. You have the prospect of someone holding a degree in a subject about which they know very little.
--- End quote ---
The problem is the poor method of assessing students. Many courses just teach students how to write up and regurgitate information. I see this as a positive thing because it might actually force the education system to bring back academic rigour and put an end to the numerous pointless degrees, which should never have existed in the first place.
magic:

--- Quote from: aeberbach on March 30, 2023, 10:29:30 pm ---They are dangerous because you probably can't tell them apart from a real person and because they are great at presenting inaccurate information authoritatively.

--- End quote ---
And this is the exact reason why it's hard to tell them from humans.
The only difference is that they generate bullshit somewhat faster and maybe cheaper.
When the generation time and cost are hidden from you, that's when it may fool you.

All the worries about propaganda, low-quality products and so on: we already have them without AI.
daqq:
For nefarious purposes, I'd say the biggest problem is that you can't tell whether there's a person on the other end. Among other things, this allows automating online arguments and finishing off the little bit of democracy we have. If an automated entity can convince people of something, you've won at democracy, or at least severely skewed it. A mass of people is not exactly known for careful examination and logical analysis, but at least some effort used to be needed to convince them. Now you could have an AI "argue" with people 24/7 under any number of sock puppet accounts. I'm sure there are troll farms out there already, but they are still staffed by humans: to achieve X influence you need to invest constant*X resources. AI would give you much better value for money. This application would basically break online discussions.

The same goes for scams, spam, pretty much any nefarious activity that requires talking to a person. And let's be honest: an AI fed on the sum of human knowledge and every bit of literature available will be a better scammer than a random Nigerian/Indian fellow with iffy English claiming to be from the government and wanting payment in Tesco coupons for the naughty stuff they know you did on your computer.

Yes, in these terms it doesn't really bring anything new to the table; it just amps up the existing stuff by a factor of a gajillion.

Then there are the jobs. Let's be honest here: there are a lot of jobs that could be automated, and the only reason they haven't been is that it's expensive to have the software written and cheaper to have a bunch of interns/entry-level people do the work. Accounting, for one (I personally know at least one person in accounting who says her job could be automated by a proper Excel spreadsheet), miscellaneous bureaucracy, and I'm sure plenty of other things.

Publishing or creative stuff? I have a fairly large text that I'm working on, and I tried asking the OpenAI chatbot to suggest improvements, find errors, etc. The results were really good, something I'd expect from a lower/middle-end editor. Obviously I needed to filter through them, but still, the text improved by quite a lot. Actually useful images can be generated without any talent required. Decent essays/articles as well.

It's an awesome tool, but as with all tools, there's potential for abuse, and there's the question of what will happen to the people in positions that will be made obsolete by it.

And I fed the text above into ChatGPT and it gave:


--- Quote ---Me: What is your opinion of the following text:
...
AI:As an AI language model, I don't possess personal opinions or emotions. However, I can analyze and provide insights into the text you provided.

The text discusses the potential drawbacks and negative consequences of AI technology, specifically in the context of nefarious activities and the displacement of certain jobs. It highlights the possibility that AI can amplify existing issues like scams, spam, and online manipulation. Furthermore, it acknowledges that AI can automate many jobs, which may lead to unemployment or the need for people to adapt to new roles.

The text also points out some positive aspects of AI, such as its ability to improve text quality, generate useful images, and produce decent essays or articles. It emphasizes that AI is a tool with potential for both good and bad outcomes, depending on how it is used.

In summary, the text presents a balanced view of AI technology, discussing its potential for abuse while acknowledging the benefits it can provide. The concerns raised are legitimate and need to be addressed through a combination of policy, regulation, and responsible development and deployment of AI technologies.
Me:
Would you say that it is factually correct?
AI:
The text contains a mix of factual observations and opinions. Here's a breakdown of the key points:

AI can be used for nefarious purposes, such as automating online arguments and undermining democracy: This is a valid concern. AI technology, like any tool, can be used for both good and harmful purposes. The potential for AI to spread disinformation and manipulate public opinion is a genuine risk.

AI could potentially be more cost-effective than human troll farms: This is plausible. As AI technology becomes more advanced, it could be used to amplify the scale and efficiency of online manipulation campaigns, although the extent of its impact depends on the specific situation.

AI can be used for scams and spam: This is true. AI technology is increasingly being used for malicious purposes, including scams and spamming activities.

Many jobs can be automated, and the only reason they haven't is the cost of software development: This statement is partly true. Cost is one factor, but there are also other reasons why jobs have not been automated yet, such as the limitations of current AI technology and the social, ethical, and legal considerations involved in displacing human workers.

AI can improve text quality, generate useful images, and produce decent essays or articles: This is true. AI has made significant strides in natural language processing, image generation, and other creative fields, enabling it to produce high-quality content.

In conclusion, the text contains a mixture of factual observations and opinions, most of which are grounded in the current state of AI technology and its potential impact on society. Some statements might require further context or clarification, but overall, the text presents a reasonably accurate picture of the potential benefits and drawbacks of AI.
--- End quote ---
That's a decent summary that I'd expect from a human.
Wallace Gasiewicz:
Human question to AI:
Is there a God?
AI answer:
Yes, you just met me.
Infraviolet:
I'd agree with the general outlook Psi mentioned.
I've posted the same link before, but when it comes to what to fear "from" AI, fear this: fear what human leaders who have already got a taste of totalitarianism and liked it would use AI for. Fear particularly what Chinese Communist Party style regimes would do with AI, using it to add extra manpower to their societal control systems without needing to divert anyone away from productive work to become social credit administrators. The only limit on totalitarianism's reach comes when the proportion of the population a tyrant wants working as enforcers becomes so high that there isn't enough population left to do everything else. It isn't rogue AIs people should fear; it is AIs doing exactly what rogue rulers want them to.

The "So are we safe then" final paragraphs at:
https://dailysceptic.org/2023/02/21/dont-believe-the-hype-there-is-not-a-sentient-being-trapped-in-bing-chat/