This is one of the things that greatly worries me about ANY AI software: how much absolute fiction is it going to insert into its instructions, replies, and everything else in order to meet the political-correctness objectives of its software authors?
The national news today reported that AI is now being used to rate the resumes of job applicants, and that New York State has passed a law allowing applicants to opt out of AI screening. But will that mean anyone who objects is automatically rejected for the job? Probably, IMO. And if AI is allowed to screen job applicants, what is to stop the AI's authors from creating rules that screen applicants by religious views, political views, zip code, and so on, and rejecting them on that basis? When real people make bad decisions, they can be held accountable. But how are you going to hold AI responsible when your application for a home mortgage is rejected, or your resume gets "lost", or you get pulled out of line at every airport for an in-depth screening, or worse, you get repeatedly stopped and held by law enforcement because you fit a "profile"?
I think it would be extremely naive of anyone to believe that AI will not reflect the political, religious, and cultural biases of its authors, or that it will not make decisions on that basis.