Yes. As a thought, I find it interesting how "AI" has become so popular while being, in a way, the antithesis of what computer science is usually about - for instance, proving correctness, defining and analysing algorithms, etc.
We should not confuse different levels of concepts here: while the algorithms used for ML and NNs are well understood, and relatively simple, they, by themselves, do not do anything "useful" regarding the task we're interested in - for instance, image recognition. The "networks" they create are what does the interesting part, and so they kind of implement "hidden" algorithms that we are unable to analyze. Since we can't really analyze them, is this still really science? tggzzz said it's a bit of "magic", and in a way, that's exactly it as far as we're concerned.
Just because we have formalized ways to "induce" this magic doesn't really make it less magic.
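To make the "simple algorithm, opaque result" point concrete, here's a minimal sketch (a hypothetical toy example, not anything from the thread): a single perceptron trained with the classic perceptron learning rule on the AND function. The training algorithm itself is a few lines and fully understood; what it produces is just a bag of numbers.

```python
# Toy example: the *training algorithm* is trivial and well understood,
# but the result it produces is just numbers encoding the behavior.
# (Hypothetical sketch: a single perceptron learning the AND function.)

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 1.0        # learning rate

def predict(x):
    # step activation: fire if the weighted sum exceeds zero
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The entire "learning algorithm": nudge weights toward the target.
for _ in range(10):
    for x, target in data:
        err = target - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print(w, b)                            # the learned "program": just numbers
print([predict(x) for x, _ in data])   # reproduces AND
```

The final weights reproduce AND correctly, but nothing in them reads as "AND" - you only find out what they do by running them. Scale that up to millions of weights and you get exactly the "hidden algorithm" situation described above.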
Another thought is that it's, at first, all based on trying to mimic neural-based living intelligence. Thing is, we have captured only a part of what that is. Maybe just a small part. It's not just about the number of neurons and the amount of learning data, IMO. AFAIK, neural structures in living beings do not merely form based on inputs. We know there are certain structures that form, almost always in the same way and at the same locations (in the brain for instance), "coding" specific things at specific levels of abstraction. That part of "intelligence" mostly eludes us in AI at the moment, and that's only just scratching the surface.