Getting back to the discussion a bit, a few points.
Determinism: I don't think that's exactly the issue here. Not just because it isn't really what matters, but also because current AI systems actually ARE deterministic. For a given set of inputs, a given trained NN will give the same output(s). Likewise, for a given training dataset, a given NN structure will end up with the same coefficients (assuming the training procedure itself is seeded and run deterministically). It may be a complex system, but it's still a deterministic one. Now, for two sets of inputs that seem very close to *us*, NNs can sometimes give completely different outputs. That doesn't make them non-deterministic, if that's what those who mentioned the term meant. But it certainly makes them incomprehensible to us.
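To make that concrete, here's a minimal sketch (a toy NumPy network with made-up weights, nothing from a real system): a fixed network is a pure function, so identical inputs always agree, yet a small perturbation can still push an input across a decision boundary.

    # Toy fixed-weight network: deterministic, yet sensitive to tiny input changes.
    import numpy as np

    rng = np.random.default_rng(0)            # fixed seed stands in for "trained" weights
    W1, b1 = rng.normal(size=(2, 16)), rng.normal(size=16)
    W2, b2 = rng.normal(size=(16, 2)), rng.normal(size=2)

    def net(x):
        h = np.maximum(x @ W1 + b1, 0)        # ReLU hidden layer
        return np.argmax(h @ W2 + b2)         # predicted class

    x = np.array([0.5, -1.2])
    assert net(x) == net(x)                   # deterministic: identical runs always agree

    # Inputs that look "the same" to us can still land in a different class:
    for eps in (1e-3, 1e-2, 1e-1):
        print(eps, net(x), net(x + eps))      # the class may flip as eps grows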
Comparing to human intelligence: it's kind of a lost cause here. Especially regarding the ability to explain a given decision. Sure, humans aren't perfect and can also make bogus decisions. But the key difference is that people in charge of critical decisions affecting others usually must document their decision before making it effective. That's how it's done in a lot of areas such as justice, medicine, etc. At the moment, we somehow don't expect AI to provide the decision process (mostly because we're unable to do that technically for now), so it's a completely different situation. Being able to explain a decision is a key part of any safety-critical process. It's even more important than just "being correct" per se.
Now that part may not be a completely lost cause with AI. We could design systems that are made to output the decision process in an understandable form before giving the decision itself. Yes, I've seen attempts at doing that in a couple of papers. But so far, this is mostly just research. And it's not just about being able to implement this technically: it's also about being willing to *enforce* it, and I haven't seen anything like that so far. That may change, and regulations may be put in place over time.
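Just to illustrate the shape of the idea, here's a hedged sketch of "explain first, decide second" with a trivially interpretable rule list. Everything here (the rules, field names, and outcomes) is hypothetical, not from any real framework or paper:

    # Sketch: the system must emit a human-readable rationale BEFORE it
    # is allowed to return the decision. All names are made up.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        feature: str
        threshold: float
        outcome: str

    RULES = [                                  # stand-in for an interpretable model
        Rule("blood_pressure", 180.0, "refer_to_specialist"),
        Rule("age", 65.0, "extra_screening"),
    ]

    def decide(patient: dict) -> str:
        trace = []                             # the decision process, built as we go
        for r in RULES:
            fired = patient[r.feature] > r.threshold
            trace.append(f"{r.feature}={patient[r.feature]} > {r.threshold}? {fired}")
            if fired:
                print("RATIONALE:", "; ".join(trace))   # explanation comes first...
                return r.outcome                        # ...decision only afterwards
        print("RATIONALE:", "; ".join(trace))
        return "routine_care"

    print("DECISION:", decide({"blood_pressure": 190, "age": 40}))

Obviously a rule list is a far cry from a deep NN, and getting a faithful rationale out of an NN is exactly the hard research part; the point is only what the interface would have to look like if we wanted to enforce it.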
snarkysparky made a good point: there definitely are applications for which all this is NOT a problem, and for which a success rate above a certain threshold is perfectly good, whatever the reasons for the failing cases. But as he said, we seem to insist on applying AI to a lot of applications for which this is fundamentally not acceptable.
Then comes again the question of accountability. If a human adult makes a mistake with consequences, they'll be held accountable (unless they're considered mentally incompetent or something like that). If some AI system makes a mistake with bad consequences, who the heck is going to be accountable, exactly? It's still a major question for which I haven't really seen a proper and definitive answer. Will it be the company directly providing the system that uses AI? Will it be the company which designed the AI subsystem itself? Will it be the company which designed the datasets and trained the AI subsystem? Or will it be the end user? It's all a big fuzzy mess, but I'll be glad to hear about some progress on this, maybe there is some!
Also, if we think about AI as a tool - which it is - it's quite normal that we expect it to perform in a predictable and understandable way. To make a fun parallel, imagine you buy a hammer that goes down when you swing it downwards 99% of the time, but 1% of the time goes up and hits whatever else it happens to reach. Does that sound like a decent tool? Same for the laws of physics: we may not understand them fully yet, for sure, and we still have a lot to learn. But in a given context, the laws we have determined still hold 100% of the time. Quantum gravity is a complex matter, for sure, but if I jump off a bridge, there's a 100% probability that I'll fall down and 0% that I'll magically go up and end up orbiting the Earth. What's interesting is the question of why some of us seem willing to consider AI not as a tool, but as something else.