I have noticed that the 'fears' about AI, specifically about creating an 'intelligence greater than us' as opposed to fears about job losses, tend to come from the older generation and from those who don't really understand what we call artificial intelligence. Either that, or they still see it in sci-fi terms rather than as the modern 'pattern recognition' algorithms of today, as the article alludes to.
Algorithms are everywhere and have been for decades, and they really can determine life-or-death situations. Think about modern aircraft or railway control systems; I'm sure most people don't see these as examples of AI, but as simple control systems and electronics. We put our trust in these systems every day and they very rarely let us down (and when they do, it tends to be human error anyway). Modern 'AI' is pretty impressive and has already moved into many areas of life, but mostly in voice recognition, face recognition, and using 'big data' to try to tease out hidden information (like political persuasion from Facebook likes).
There is a major concern that we're automating the human race out of employment, and for good reason: we are. You never mention how all those people will eat, or find water, or whether they even will. Nobody does, actually, and it's because we're setting up a system to deny it to them, which has nothing to do with automation or AI except that it's a reaction to the impending changes.
The world of the future will be one of abundance of all kinds, but those currently in power are trying to create a false scarcity by triggering a race to the bottom for the remaining jobs.
The labor-saving tools we build now save enormous amounts of labor. Should the wealth they create be shared rather than concentrated, or not?
Right now, that's being prevented by reducing democracy to existence in name only.
That's because knowledge and technology are naturally generous, and that generosity is seen as a threat by an insanely hierarchical system whose core principle, if it could be said to have one, is the concentration of wealth and power in those already wealthy and powerful. (So it won't ever rationally handle what's happening. That's not its way.)
Yes, we're saving lots of labor. Sure, a relatively tiny number of jobs are created building things like web applications that are designed to run themselves.
We build more of them, and better and better tools, and those tools get segmented according to the global-value-chain ideology, which leaves low-skill workers in high-skill countries in an impossible situation. Before we know it, billions of people are going to be out of work in a way that has never happened before. They won't be able to just take less money and get another job at lower pay; they just won't be needed.
Also, much of this is about predicting the far future - something we are notoriously bad at.
Yes, and we keep getting worse at it as the rate of change increases. It's human nature to assume that the rate of change in the future will be proportionate to the rate of change in the past, when in fact the two curves are better approximated as perpendicular to one another.
The biggest problem, as I see it, is the reaction that's being forced on the world in response to the positive changes of the late 20th century. There is a project now, a global one, that's trying to push us back to a form of feudalism. Think "divine right of kings" feudalism. That's because the corporations who are writing the rules for countries now made a deal with countries that requires treating all countries the same, as if all governments were legitimate, when they are not. And around 23 years ago they started putting agreements into place that make democracy more and more impossible.
They are very sophisticated in their methods but what they are creating is still feudalism. With all that implies.
Its goal is keeping the planet divided.
As with any discussion of 'evolution' or technological advancement, there is a tendency to assume we know what counts as 'advanced', which is really an assumption about what the future will be. Ask people what they think an 'advanced' human will be like and you'll get all sorts of bizarre answers, like bigger brains or telekinesis, as if evolution follows what we imagine rather than environmental pressure.
The only 'danger' I can see in AI is humanity giving it more power without appreciating what it actually is, or without fail-safes. Even today there are stories where AI fails terribly, often because of human error, such as training face recognition on only a certain race and getting 'racist' results. Ironically, it is those who see AI as 'magic' or 'dangerous' who will allow it to be used in critical situations without controls.
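That failure mode is easy to demonstrate in miniature. The sketch below uses entirely made-up numbers (nothing here comes from a real face-recognition system): a trivial one-feature "classifier" fits a decision threshold to training data that is about 96% group A, and because group B's positives and negatives sit where group A's threshold can't separate them, the model that looks excellent overall quietly performs worse on the underrepresented group.

```python
# Hypothetical, synthetic illustration of dataset bias: a 1-D threshold
# classifier is fit on training data dominated by "group A", so the
# threshold it learns serves group A well and group B poorly.

# Each sample is (feature_value, label). Group A: negatives cluster low,
# positives cluster high. Group B's clusters interleave differently.
group_a = [(0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1)] * 25   # 100 samples
group_b = [(0.35, 0), (0.4, 1), (0.6, 0), (0.65, 1)]      # only 4 samples

train = group_a + group_b  # group B is ~4% of the training set

def accuracy(data, t):
    """Fraction of samples where 'feature > t' matches the label."""
    return sum((x > t) == bool(y) for x, y in data) / len(data)

def fit_threshold(data):
    """Pick the midpoint threshold that maximizes overall training accuracy."""
    best_t, best_acc = 0.0, 0.0
    xs = sorted(x for x, _ in data)
    for lo, hi in zip(xs, xs[1:]):
        t = (lo + hi) / 2
        acc = accuracy(data, t)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

t = fit_threshold(train)
print(f"group A accuracy: {accuracy(group_a, t):.0%}")  # → 100%
print(f"group B accuracy: {accuracy(group_b, t):.0%}")  # → 75%
```

The overall training accuracy looks near-perfect (103 of 104 samples), which is exactly why the disparity goes unnoticed: the majority group swamps the aggregate metric, and only a per-group breakdown reveals the problem.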