Without putting too many spoilers into this second post (i.e. I'll let those who want to watch the video(s) fresh do so): how dangerous is this apparently imminent ChatGPT 4.0 release, and how worried should we be?
The various things Fran outlined, which occurred during testing, could cause serious harm to some people. If users manage to get around the restrictions, or find other such loopholes, people out in the real world may start to take up this advice and act on it, in some cases very destructively.
I've also heard something about Microsoft (who are a significant backer of ChatGPT, or so I understand) having disbanded one of their AI safety teams, or something along those lines. That doesn't sound terribly encouraging.
If the 'handlers' of these upcoming AI systems are NOT careful enough, we could see governments making knee-jerk, all-encompassing, ill-conceived new laws that hold up and damage AI releases and development, at least in the short to medium term.
What Fran has been outlining could be just the tip of the iceberg. Maybe these AI releases are coming too early for mass adoption by the wider (unsuspecting) world.
On the other hand, Google search (and other companies' search engines) has in some cases been able to do, partially or fully, the sort of things Fran was discussing, at least until they were shut down, blocked, tacitly allowed, or simply absorbed into the wider 'crime' problem.
I.e. we've had something a bit like this (via online searches) for decades, and society hasn't collapsed, crime hasn't spiraled out of control, etc.
She didn't seem to touch (or I missed it) on the other big elephant in the room: that it could be used to spread mass misinformation, e.g. for political engineering purposes, a bit like Cambridge Analytica.
https://en.wikipedia.org/wiki/Cambridge_Analytica