Author Topic: ChatGPT-4: Video [Fran] about various mishaps with new version 4, partly/fully worrying!  (Read 3455 times)


Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 5015
  • Country: gb
This video goes into various details (both eye-opening and worrying) about what can go wrong, as ChatGPT-4 has gone through its reasonably extensive test programme.

« Last Edit: March 18, 2023, 06:32:06 pm by MK14 »
 

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 5015
  • Country: gb
Without putting too many spoilers into this second post (i.e. I'll let those who want to watch the video(s) fresh do so): how dangerous is this apparently imminent ChatGPT-4 release, and how worried should we be about it?

The various things Fran outlined, which occurred during testing, could cause serious harm to some people, either by potentially getting round any restrictions and/or through other such issues.  People out in the real world may start to take up this advice and use it, in some cases very destructively.

I've also heard something about Microsoft (who are a significant part of ChatGPT, or so I understand) having disbanded one of their AI safety teams, or something along those lines.  That doesn't sound terribly encouraging.

If the 'handlers' of these upcoming AI systems are NOT careful enough, we could see governments making knee-jerk, all-encompassing, ill-conceived new laws, which could hold up and damage AI releases and development, at least in the short to medium term.

What Fran has been outlining could just be the tip of an iceberg.  Maybe these AI releases are too early for mass adoption in the wider (unsuspecting) world.

On the other hand, Google (and other companies') search has, in some cases, already been able to do partially or fully what Fran was discussing, until such things were shut down, blocked, tacitly allowed, or just absorbed into the wider 'crime' problem.
I.e. we've had something a bit like this (via online searches) for decades, and society hasn't collapsed, crime hasn't spiralled out of control, etc.

She didn't seem to touch on (or I missed it) the other big elephant in the room: that it could be used to spread mass misinformation, e.g. for political-engineering reasons, a bit like Cambridge Analytica.

https://en.wikipedia.org/wiki/Cambridge_Analytica
« Last Edit: March 18, 2023, 09:21:21 pm by MK14 »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 7084
  • Country: nl
She didn't seem to touch on (or I missed it) the other big elephant in the room: that it could be used to spread mass misinformation, e.g. for political-engineering reasons, a bit like Cambridge Analytica.

https://en.wikipedia.org/wiki/Cambridge_Analytica

Mostly a witch hunt. It was an insignificant pebble thrown into an ocean of dreck.

Which is not to say automated chatbots online trained to get replies and retweets couldn't be used to make propaganda more efficient, but no one will be able to use GPT4 for that.

PS: If Harlan Ellison were still alive, he'd already be suing.
 
The following users thanked this post: MK14

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15943
  • Country: fr
She didn't seem to touch on (or I missed it) the other big elephant in the room: that it could be used to spread mass misinformation, e.g. for political-engineering reasons, a bit like Cambridge Analytica.

https://en.wikipedia.org/wiki/Cambridge_Analytica

Mostly a witch hunt. It was an insignificant pebble thrown into an ocean of dreck.

Which is not to say automated chatbots online trained to get replies and retweets couldn't be used to make propaganda more efficient, but no one will be able to use GPT4 for that.

Oh, really! :-DD
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 7084
  • Country: nl
You need API access to be able to do supervised fine-tuning; if you're trying to start some Putin/Trump propaganda bot, they will be on you like flies on shit.

Even for more politically correct propaganda, they are going to detect you trying to run a social media bot and cut you off immediately, and they certainly aren't going to run one themselves.  The political liability of such actions is existential, for negligible monetary gain.

I believe in capitalism.
 
The following users thanked this post: MK14

Offline MK14Topic starter

  • Super Contributor
  • ***
  • Posts: 5015
  • Country: gb
She didn't seem to touch on (or I missed it) the other big elephant in the room: that it could be used to spread mass misinformation, e.g. for political-engineering reasons, a bit like Cambridge Analytica.

https://en.wikipedia.org/wiki/Cambridge_Analytica

Mostly a witch hunt. It was an insignificant pebble thrown into an ocean of dreck.

Which is not to say automated chatbots online trained to get replies and retweets couldn't be used to make propaganda more efficient, but no one will be able to use GPT4 for that.

PS: If Harlan Ellison were still alive, he'd already be suing.

That makes sense, sort of.  It's unlike something that can affect tens of millions of people or more in a relatively short space of time, e.g. Facebook being connected to the Cambridge Analytica scandal, and hence potentially swinging elections the wrong way (i.e. the genuine majority choice of voters didn't get through).

The likelihood of someone asking just the right question on ChatGPT (or similar) and then getting misinformation would probably affect a phenomenally tiny proportion of people, probably many orders of magnitude smaller, which would be considerably less likely to swing any elections.

The ChatGPT system, on the face of it, seems far too big and complicated, with massive databases (or whatever their data structures are called), for it to be thoroughly infected with propaganda.  Also, if people started noticing and then complaining on large social media (and other) websites, the message that the platform was spreading false/misleading propaganda should soon start spreading.

So, in short, I agree: it shouldn't be a big problem, or perhaps any problem at all.  The other big exception would be if a big, 'bad' government (I don't want to start a political debate, so I'll just use the word 'dictatorship' to convey what I mean) created its own equivalent of ChatGPT, filled mainly or only with its own political lines and propaganda.
On the one hand, that wouldn't be a good thing.  But on the other hand, such governments are already doing similar things with media outlets, such as their news, TV and internet, so it shouldn't change the status quo that much anyway.
 

