The problem, which likely has a nasty persistence, is that 'humans' can still exert their bias, for good or not.
An example would be, using the convenient COVID case, someone saying "That guy over there keeps saying he doesn't trust the vaccination requirements".
Now, a fully open-minded BOT would be cautious about someone who 'questions whether some questions are meant to be disinformation...'.
THIS last sentence I've deliberately made self-contradictory. I guess I'm saying that this structure and approach will NEVER be reconciled under truly open discussion dynamics. That there must then be a (corrupting) mechanism to bypass the mis-handling of logic...remembering that these dynamics of conversation flow have very high and real stakes in real-world society. The free-flow dynamic therefore MUST be corrupted, i.e. some sort of 'Information Policing' is necessitated, and so the (players) scramble to justify various proposals (an Information Czar, an Information Bureau, etc.), with lots of claims using terms like 'dangerous' and 'harmful', often accompanied by exaggerated references to events.
Witness the following 'facts', stated on MSNBC recently:
They said: 'Racism on Twitter, under Musk, is now up significantly'. Really? I'm saying that statement about Twitter's current state has a use, and the use is to justify some formal agency, or persons, needed to 'screen' out harmful content.
This is why it does not matter if there is hypocrisy, as the other side issues its own lies and 'disinformation'. That inconsistency does not matter, because the info censorship, and who controls it, is the real game being played.
Really/arguably, Twitter is a very big, massive news outlet (as part of its functionality), much like big newspapers used to be a rather long time ago, before common TV ownership, the internet, etc. changed things.
So, allowing it to simply be bought by someone with certain, possibly strong, political and other views could be problematic.
In fairness, going back to the days when newspapers were one of the most influential news mediums, particular newspapers (and their groups) were owned by various owners with various political leanings.
But these days there is really only ONE Twitter as such. So, it is as if in days gone by there had been only one very big, world-wide newspaper. Care should be taken (in an ideal world) as to who owns and runs it.
So in theory, it could cause issues.
Back on topic. What if ChatGPT and similar AI systems fall into company and/or individual ownership, to a person or entity that will happily use them to attempt to influence which political parties get into power?
A bit like Cambridge Analytica apparently did.
https://en.wikipedia.org/wiki/Cambridge_Analytica
Which some blame for being a significant part of the cause (via unhealthy social media manipulation/advertising) of political upheaval in the US, which in turn seemed to cause Cambridge Analytica's downfall.
I.e. could a bad player take ownership, in the future, of most/all of the then-current ChatGPT-like AI things, and get them programmed to influence people to vote in certain ways, believe certain things, and maybe even buy a company's products? Simply because a ChatGPT-like thing was made to influence people into buying things, and not necessarily for the right reasons.