Again, I disagree on both. Google developed an AI that learns a game from nothing but the rules and self-play, rather than by being programmed with strategies or fed large datasets. This AI beat the pants off the previous AlphaGo, which in turn had beaten the pants off one of the best human players. It learnt to play at this level within three days. More importantly, it should be able to teach itself other games without much manual intervention either. You could argue about whether this AI understands the concepts of the game, but you can't argue with the results. It simply works.
Humans also learn by brute force. We literally need to repeat something over and over to refine and hone our skills, and that's exactly how this AI does it. Obviously it can play far more matches than any human, so it learns faster. There isn't a human who could have the rules of chess explained to them and suddenly be a chess master. They need to go through the motions.
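To make that concrete, here's a minimal self-play sketch in Python (my own toy example, not anything from DeepMind): tabular Q-learning on one-pile Nim, where players alternate taking 1-3 stones and whoever takes the last stone wins. The pile size and hyperparameters are invented for illustration. The program is given nothing but the rules, yet after enough games against itself it converges on the well-known "leave a multiple of four" strategy. It's a far cry from AlphaGo Zero's neural networks and tree search, but the learn-by-repetition idea is the same in miniature.

import random

PILE, ACTIONS = 21, (1, 2, 3)               # toy Nim: take 1-3 stones, taking the last stone wins
ALPHA, EPSILON, EPISODES = 0.5, 0.1, 50_000 # made-up hyperparameters

# Q[s][a] = value of removing `a` stones when `s` remain, from the mover's perspective
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, PILE + 1)}

def pick(s):
    # epsilon-greedy: mostly play the best known move, occasionally explore
    if random.random() < EPSILON:
        return random.choice(list(Q[s]))
    return max(Q[s], key=Q[s].get)

for _ in range(EPISODES):
    s = PILE
    while s > 0:                            # one game of pure self-play
        a = pick(s)
        nxt = s - a
        # Taking the last stone wins (+1); otherwise the position is worth
        # minus whatever it's worth to the opponent, who moves next.
        target = 1.0 if nxt == 0 else -max(Q[nxt].values())
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = nxt

for s in (5, 6, 7, 10):                     # learned policy leaves a multiple of 4
    print(s, "->", max(Q[s], key=Q[s].get))

Nobody tells it the multiple-of-four trick; it just falls out of tens of thousands of repetitions, which is the whole point.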
Solving engineering problems isn't much different. You have a limited set of constraints, and a computer can optimize within them. It's probably already quite feasible to have an AI design a bridge that's both as strong as needed and as cheap as possible.
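That mapping is standard constrained optimization, and for simple cases you don't even need AI. As a sketch (with invented toy numbers, not real engineering values), here's SciPy sizing a rectangular beam cross-section: minimize material use while keeping the peak bending stress under an allowable limit.

from scipy.optimize import minimize

M_LOAD = 100e3        # applied bending moment [N*m] (toy number)
SIGMA_ALLOW = 20e6    # allowable bending stress [Pa] (toy number)

def cost(x):
    w, h = x
    return w * h                          # cross-sectional area as a stand-in for material cost

def strength_margin(x):
    w, h = x                              # peak bending stress of a rectangle: 6M / (w*h^2)
    return SIGMA_ALLOW - 6 * M_LOAD / (w * h**2)   # must stay >= 0

result = minimize(
    cost,
    x0=[0.3, 0.5],                                  # initial guess: width, height [m]
    method="SLSQP",
    bounds=[(0.05, 0.5), (0.05, 1.0)],              # fabrication limits (invented)
    constraints=[{"type": "ineq", "fun": strength_margin}],
)
w, h = result.x
print(f"width {w:.3f} m, height {h:.3f} m, area {w*h:.4f} m^2")

A real bridge has vastly more constraints (deflection, buckling, fatigue, building codes), but they all slot into the same objective-plus-constraints shape; the hard part is writing down the physics, not running the optimizer.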
"Silver explained that as Zero played itself, it rediscovered Go strategies developed by humans over millennia. “It started off playing very naively like a human beginner, [but] over time it played games which were hard to differentiate from human professionals,” he said. The program hit upon a number of well-known patterns and variations during self-play, before developing never-before-seen stratagems. “It found these human moves, it tried them, then ultimately it found something it prefers,” he said. As with earlier versions of AlphaGo, DeepMind hopes Zero will act as an inspiration to professional human players, suggesting new moves and stratagems for them to incorporate into their game. "
https://www.theverge.com/2017/10/18/16495548/deepmind-ai-go-alphago-zero-self-taught