If a law cannot be policed and enforced, it holds little value. Can you identify the usage of ChatGPT? It might be straightforward to detect if ChatGPT is generating all of a user's output, but could you tell that this very paragraph was composed by ChatGPT?
I don't follow. The idea is to ban accounts that post identifiable ChatGPT-generated content. Users who post identifiable ChatGPT-generated content without mentioning it will be told to stop, because it is annoying. If you use ChatGPT in ways that make it unidentifiable, then you have obviously filtered its output (or modified it enough), and as a member you take responsibility for its content, just as you do for content you actually created yourself. It is no different from, say, copying the key points of your answer from somewhere else, is it? That is kinda fake, and not okay, but not something we can police.
While usage of ChatGPT as such is not really identifiable, reliance on ChatGPT for content is identifiable, just as it is with those who spout the most common rules of thumb as the full and complete truth. Hopefully, other members will take such members to task. To me it does not matter how content is generated; bad advice should always be pointed out and corrected.
As I see it, the ban on ChatGPT bots is a line set in the rules for interaction. We all agree that theft is wrong, yet not all thieves will ever be caught. By making theft illegal and punishable, we define the rules we try to operate by. Similarly, by banning ChatGPT bots we are stating that we are not interested in bot-generated posts. (Members such as myself will, however, ignore any member who starts relying too heavily on ChatGPT in their posts, even if they clearly state that they do, simply because such posts contain little of interest to me; I can find the same content with a few quick online searches.)
So, while I do agree that selective application of a law is definitely unfair, I do not think it quite applies here, because all rules here are mediated by explicit human action by Dave or one of the moderators he trusts. It is a softer, "behavioural rule", subject to discussion.