Why does OpenAI ChatGPT, Possibly Want to disagree/annoy and change my eating...
Zero999:
It's inevitable it's going to be overly PC. Heck that's often the case with the automatic moderation AI used on YouTube comments. Ideally it should be neutral when it comes to politicians, even if they're a little controversial, unless they're objectively bad i.e. have been responsible for mass killings such as Hitler, Stalin, Mao etc.
It's obviously been taught that vaccine = good, which is normally true, but isn't always the case. There have been vaccines made in the past, which have been discontinued because they were deemed to be unsafe, such as the one developed against swine flu.
MK14:
--- Quote from: Zero999 on December 29, 2022, 06:58:32 pm ---It's inevitable it's going to be overly PC. Heck that's often the case with the automatic moderation AI used on YouTube comments. Ideally it should be neutral when it comes to politicians, even if they're a little controversial, unless they're objectively bad i.e. have been responsible for mass killings such as Hitler, Stalin, Mao etc.
It's obviously been taught that vaccine = good, which is normally true, but isn't always the case. There have been vaccines made in the past, which have been discontinued because they were deemed to be unsafe, such as the one developed against swine flu.
--- End quote ---
You're right, I agree.
I suppose that as the various AI developments, such as ChatGPT itself, progress over time, their ability to moderate things intelligently should improve.
So even if no deliberate action is taken to limit the (annoying, to some of us humans) over-moderation and post-nannying effects, it should hopefully become less cumbersome and annoying as the AI gets much better at detecting and moderating the genuinely problematic things, while affecting the things that don't need moderating less and less.
At times, being a bit NON-politically correct seems to be a useful tool for keeping things under control. E.g. a beginner insists on building an uncased, 230/240 V AC mains-powered device when they clearly don't have any real clue what they are doing, and the exposed bare wires at full mains potential could end up causing a serious incident.
So strong messages (without resorting to swearing, or belittling the original poster in the beginners' section of this forum) may be required, because otherwise they may not realize how potentially (pun accidental) dangerous it can be.
MK14:
--- Quote from: RJHayward on December 29, 2022, 06:50:27 pm --- The problem, which likely has a nasty persistence, is that 'humans' can still exert their bias, for good or not.
Example would be, using the convenient COVID examples, saying "That guy over there keeps saying that he doesn't trust the vaccination requirements".
Now, a fully open-minded BOT would be cautious, about someone who 'questions whether some questions are meant to be disinformation...'.
THIS, last sentence I've deliberately made in a self-contradictory form. I guess I'm saying that this structure and approach will NEVER be reconciled, under truly open discussion dynamics. That there then must be a (corrupting) mechanism, to bypass the mis-handling of logic...remembering that these dynamics, of conversation flow have very high and real stakes, in real-world society. The free-flow dynamic therefore MUST be corrupted...I.E. some sort of 'Information Policing' is necessitated, and so the (players) scramble to justify various proposals, Information Czar, Information Bureau, etc. and lots of claims using terms like 'dangerous', 'harmful', etc. often accompanied with exaggerated references to events.
Witness the following 'facts', stated in MSNBC recently:
They said: 'Racism, on Twitter, under Musk, is now up significantly'. Really? I'm saying that statement, about Twitter's current state, has a use, and the use is to justify some formal agency, or persons, needed to 'screen' out harmful content.
This is why it does not matter if there is hypocrisy, as the other side issues their own lies and 'disinformation'. That inconsistency does not matter, because the info censorship, and who controls it, is the real game being played.
--- End quote ---
Really, arguably, Twitter is a very big and massive news outlet (as part of its functionality), much like the big newspapers were a rather long time ago, before common TV ownership, the internet, etc. changed things.
So allowing it to simply be bought by someone with certain, possibly strong, political and other views could be problematic.
In fairness, going back to the days when newspapers were one of the most influential news mediums, particular newspapers (and their groups) were owned by various owners with various political leanings.
But these days there is really only ONE Twitter as such. So it is as if, in days gone by, there had only been one very big, worldwide newspaper: care should be taken (in an ideal world) as to who owns and runs it.
So in theory, it could cause issues.
Back on topic: what if these ChatGPT and similar AI systems fall into company and/or individual ownership, to a person or entity that will happily use them to try to influence which political parties get into power?
A bit like Cambridge Analytica apparently did.
https://en.wikipedia.org/wiki/Cambridge_Analytica
Which some blame for being a significant cause (via unhealthy social media manipulation/advertising) of political upheaval in the US, and which seemed to lead to Cambridge Analytica's downfall.
I.e. could a bad player, in the future, take ownership of most/all of the then-current ChatGPT-type AI systems, and get them programmed to influence people to vote in certain ways, believe certain things, and maybe even buy a company's products? Simply because a ChatGPT-like thing was made to influence people into buying things, and not necessarily for the right reasons.
SiliconWizard:
--- Quote from: MK14 on December 29, 2022, 07:37:21 pm ---So, allowing it to just simply be bought by someone, with certain, possibly strong political and other views. Could be problematic.
--- End quote ---
Let us know how else it could ever be.
Any organization is either privately owned or public. I don't think being public would be any better. To me, it would be much worse.
MK14:
--- Quote from: SiliconWizard on December 29, 2022, 07:54:43 pm ---
--- Quote from: MK14 on December 29, 2022, 07:37:21 pm ---So, allowing it to just simply be bought by someone, with certain, possibly strong political and other views. Could be problematic.
--- End quote ---
Let us know how else it could ever be.
Any organization is either privately owned or public. I don't think being public would be any better. To me, it would be much worse.
--- End quote ---
Well, in the UK (I'm less familiar with the situation in the US and the rest of the world), and I suspect the EU is similar, there are many strict rules and regulations keeping careful control of the media, to make sure one entity can't take too much adverse control of the news.
E.g. (I've only very quickly skimmed the first bit of this, but have a basic idea of what it says):
https://www.ofcom.org.uk/__data/assets/pdf_file/0030/127929/Media-ownership-rules-report-2018.pdf
Because, in real terms, social media (and hence Twitter, and potentially ChatGPT and other similar AI systems, as I would expect them to become much more prevalent and commonplace, especially amongst the general population, though it might not happen) will in effect become the new mass-market news and influence mediums.
So if future developments meant there was a crazy, war-mongering country that wanted to influence the West and the rest of the world, it could buy such a system, and/or pay some third party to buy it, then secretly pull its strings behind the scenes.
E.g. it could hide information about the bad things that wars are causing, help unsuitable country leaders get into power, keep secret various bad things and warning signs that are happening, etc.
So, in summary, it could lead to bad things in the future.