For nefarious purposes, I'd say the biggest problem is that you can't tell if there's a person on the other end. This enables, among other things, automating online arguments and finishing off what little democracy we have. If an automated entity can convince people of something, then you've won at democracy, or at least severely skewed it. While a mass of people is not exactly known for careful examination and logical analysis, at least some effort used to be needed to convince them. Now you could have an AI "argue" with people 24/7 under any number of sock puppet accounts. I'm sure there are troll farms out there already, but they're still staffed by humans: to achieve X influence you need to invest constant*X resources. AI would give you much better value for money. This application would basically break online discussions.
The same goes for scams, spam, and pretty much any nefarious activity that requires talking to a person - and let's be honest, an AI fed on the sum of human knowledge and every bit of literature available will be a better scammer than a random Nigerian/Indian fellow with iffy English claiming to be from the government and demanding payment in Tesco coupons for the naughty stuff they know you did on your computer.
Yes, in these terms it doesn't really bring anything new to the table; it just amps up the existing stuff by a factor of a gajillion.
Then there are the jobs. Let's be honest here: there are a lot of jobs that could be automated, and the only reason they haven't been is that it's expensive to have the software written and cheaper to have a bunch of interns/entry-level people do it - in accounting (I personally know at least one person in accounting who says her job could be automated by a proper Excel spreadsheet), miscellaneous bureaucracy, and I'm sure plenty of other areas.
Publishing or creative stuff? I have a fairly large text that I'm working on, and I tried asking the OpenAI chatbot to suggest improvements, find errors, etc. The results were really good - something I'd expect from a lower/middle-tier editor. Obviously I needed to filter through them, but still, the text improved quite a lot. Actually useful images can be generated without any talent required. Decent essays/articles as well.
It's an awesome tool, but as with all tools, there's potential for abuse - and there's the question of what will happen to the people whose positions will be made obsolete by it.
And when I fed the text above into ChatGPT, it gave:
Me: What is your opinion of the following text:
...
AI: As an AI language model, I don't possess personal opinions or emotions. However, I can analyze and provide insights into the text you provided.
The text discusses the potential drawbacks and negative consequences of AI technology, specifically in the context of nefarious activities and the displacement of certain jobs. It highlights the possibility that AI can amplify existing issues like scams, spam, and online manipulation. Furthermore, it acknowledges that AI can automate many jobs, which may lead to unemployment or the need for people to adapt to new roles.
The text also points out some positive aspects of AI, such as its ability to improve text quality, generate useful images, and produce decent essays or articles. It emphasizes that AI is a tool with potential for both good and bad outcomes, depending on how it is used.
In summary, the text presents a balanced view of AI technology, discussing its potential for abuse while acknowledging the benefits it can provide. The concerns raised are legitimate and need to be addressed through a combination of policy, regulation, and responsible development and deployment of AI technologies.
Me:
Would you say that it is factually correct?
AI:
The text contains a mix of factual observations and opinions. Here's a breakdown of the key points:
AI can be used for nefarious purposes, such as automating online arguments and undermining democracy: This is a valid concern. AI technology, like any tool, can be used for both good and harmful purposes. The potential for AI to spread disinformation and manipulate public opinion is a genuine risk.
AI could potentially be more cost-effective than human troll farms: This is plausible. As AI technology becomes more advanced, it could be used to amplify the scale and efficiency of online manipulation campaigns, although the extent of its impact depends on the specific situation.
AI can be used for scams and spam: This is true. AI technology is increasingly being used for malicious purposes, including scams and spamming activities.
Many jobs can be automated, and the only reason they haven't is the cost of software development: This statement is partly true. Cost is one factor, but there are also other reasons why jobs have not been automated yet, such as the limitations of current AI technology and the social, ethical, and legal considerations involved in displacing human workers.
AI can improve text quality, generate useful images, and produce decent essays or articles: This is true. AI has made significant strides in natural language processing, image generation, and other creative fields, enabling it to produce high-quality content.
In conclusion, the text contains a mixture of factual observations and opinions, most of which are grounded in the current state of AI technology and its potential impact on society. Some statements might require further context or clarification, but overall, the text presents a reasonably accurate picture of the potential benefits and drawbacks of AI.
That's a decent analysis that I'd expect from a human.