I wasn't posing a rhetorical question. Is AI really intelligence?
If so, I believe I answered it. If that post was your reply to what I wrote, then it seems my entire point was missed. Either that or I fail to see how my words are reflected in your response.
I will repeat what I wrote above, just with more detail.
The word “intelligence” has no actual meaning when used as a generic term. Like everybody, myself not excluded, you may feel you know what this “intelligence” is. But ask yourself: have you ever attached any precise properties to it? Properties that would either constrain it or be usable in distinguishing “intelligence” from other things? The way you could for a cat, for a house, for speech, for crying, for smell, even for concepts as abstract and variable as love, or things as ephemeral as “negation”? Did you? Can you?
There is an irony in the whole situation of talking about “intelligence” in machines, and in generative LLMs in particular. In the majority of such debates and monologues, humans’ treatment of the term “intelligence” is no different from what GPT networks do. Which, perhaps even more ironically, is a kind of answer too.

Of course there are stricter uses of the word “intelligence.” And if those are being referenced, the question is also much easier to answer. The problem is that the answer would likely not satisfy anybody.
Perhaps the word “ignosticism” was the problem. I now see that the Wikipedia article on it may be confusing to a reader who didn’t already know the concept. There is a question about the existence of god(s). The stance an ignostic takes is to refuse to answer the question until the term “god” is properly defined. It isn’t an attempt to avoid dealing with the question; rather, it’s a reasonable reaction to how statements about the existence of god(s) are made. It should also not be assumed to be an inherently atheistic view.
Well, I see a small difference though: while most of these guys probably do believe that whatever they promote will benefit everyone, I don’t think that’s the case for those currently in the “AI” field in particular. Most of them have clearly stated, on the contrary, that AI is potentially very dangerous to humanity. So, no. They know it’s not for the good of humanity. It’s for something else.
You missed one part of their statement. Yes, it’s dangerous to humanity. But they don’t say it has to be stopped; they say it has to be tightly controlled. And that “tight control” means preventing “the bad guys” from acquiring it, which is achieved by keeping the technology in “trusted hands.”
Those hands are, what a surprise, their own. But that can’t be taken as acting in bad faith. No, once again Hanlon’s razor does its job well. It’s a common and simple human folly: the same thinking that makes dictators believe they’re doing good.