EEVblog Electronics Community Forum

General => General Technical Chat => Topic started by: mtdoc on October 10, 2017, 11:26:01 pm

Title: The Seven Deadly Sins of AI Predictions
Post by: mtdoc on October 10, 2017, 11:26:01 pm
Interesting article in the current MIT Technology Review: The Seven Deadly Sins of AI Predictions (https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/)

Nice to hear a sane voice on this topic...
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: adras on October 11, 2017, 10:26:55 pm
Hmm, this is a very complicated topic. I'm scared of AI, too. The main reasons are as follows.

The scenario in I, Robot. Asimov's Laws of Robotics appear perfect, at least I thought they were. But then that movie found a way for them to cause harm, because those rules aren't perfect.

If you look at why there is war in this world you can come to the conclusion that it's just because somebody was treated unfairly. The main reason why we create robots is to enslave them. We don't want to create a new life form, we want somebody to do our work. The movie Bicentennial Man shows how hard it can be for a robot to get the same rights as a human.

The way we're currently achieving intelligence is based on neural networks, which try to mimic our brain. So I think at one point robots will be as intelligent as us. And then we will need to find a way to integrate them into our society which is basically impossible. Look at some countries in the world, there are lots of groups which cannot be integrated which results in war again.
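For the curious, the "neural networks" mentioned above are, at bottom, stacks of very simple arithmetic units. Here is a minimal single-neuron sketch in Python; every number in it is invented purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, squashed through a sigmoid
    # "activation" into the range (0, 1), loosely analogous to how
    # strongly a biological neuron fires.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Two inputs, two made-up weights, one made-up bias.
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
```

A real network is just many thousands of these wired together, with the weights adjusted by training rather than chosen by hand.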

We're starting to get to the edge of the knowledge the human brain can handle. Evolutionarily, our brain hasn't changed in the last 2000 years. Yet look at all the knowledge we've got right now. A few hundred years ago, some people had degrees in 3-5 areas. Try to achieve that now. Every area is getting so complex that a single person can't handle it anymore. That's why there are now specialists who focus on a tiny bit of a whole field. Which makes it impossible to see the overall picture. That could be easily solved with AI assistance, however. But then we're enslaving a highly intelligent being. Which I consider even more risky.

Quote
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

That's a nice saying. But all it's saying is: you can't predict the future. Which is kind of obvious. But still we should try to do it as much as possible, to try and prevent catastrophes. I'm pretty sure we predicted every danger of nuclear fission before it happened, but we didn't take it seriously enough.

All in all, I think we're going to experience a very interesting future, and if we do it right we can solve a lot of problems we've got at the moment. But we need to be careful. We've got a very dark past. And we should try to get to a bright future without harming anyone on the way.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on October 11, 2017, 11:41:39 pm
This is A.I., but we are more like David (not Dave here, but David in A.I., its mecha protagonist). And I mean we're in a similar mess now, more so than we realize, on multiple levels.

That is one of maybe dozens of reasons  why this is my all time favorite film.

Three Kubrick films have occupied that position at different times in my life..

We live in interesting times.

https://www.youtube.com/watch?v=YRsICbxDEiI (https://www.youtube.com/watch?v=YRsICbxDEiI)

------------

https://www.youtube.com/watch?v=zTioBYdv2o4 (https://www.youtube.com/watch?v=zTioBYdv2o4)

Title: Re: The Seven Deadly Sins of AI Predictions
Post by: ebclr on October 11, 2017, 11:46:10 pm
People can already go to jail based on a computer's decision.


https://epic.org/algorithmic-transparency/crim-justice/
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Tomorokoshi on October 12, 2017, 01:54:56 am
From the article:
Quote
Could Newton begin to explain how this small device did all that? Although he invented calculus and explained both optics and gravity, he was never able to sort out chemistry from alchemy. So I think he would be flummoxed, and unable to come up with even the barest coherent outline of what this device was.

I disagree. He would look at the lack of the 1/8" headphone port, and conclude that it was obviously the flawed work of fallible people, without the influence of magicians or gods.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Brumby on October 12, 2017, 03:03:05 am
How can anyone notice the "lack" of something when they never knew such a thing was possible, let alone desirable?


(Yeah, I realise it was tongue-in-cheek.)
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: vk6zgo on October 12, 2017, 03:48:54 am
I've always felt that Clarke's Third Law was a bit condescending.

"Primitive people" at various times in the past have been confronted by colonists/invaders from a more technologically advanced civilisation.
The "It's magic" response may happen initially, but it wears off rapidly, until in a very short time they are using the new technology as a matter of routine.

Indigenous people in Australia didn't have steel, but it didn't take them long to see the possibilities of such a material & incorporate it into spearheads.
They also learnt how to fire muskets, & use other alien technology.

When you find that you can do the same things the invaders do, with their equipment, the "sense of wonder" quickly disappears.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: MrW0lf on October 12, 2017, 09:21:55 am
When you find that you can do the same things the invaders do, with their equipment, the "sense of wonder" quickly disappears.

...together with your nice cozy civilization centered around humanitarian values, natural resources and most of your family and pals :popcorn:
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: IanMacdonald on October 13, 2017, 10:12:06 pm
(http://sevspace.com/stupidarchive/sevtrek058.jpg)
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Zero999 on October 15, 2017, 08:15:39 pm
Hmm, this is a very complicated topic. I'm scared of AI, too. The main reasons are as follows.

The scenario in I, Robot. Asimov's Laws of Robotics appear perfect, at least I thought they were. But then that movie found a way for them to cause harm, because those rules aren't perfect.

If you look at why there is war in this world you can come to the conclusion that it's just because somebody was treated unfairly. The main reason why we create robots is to enslave them. We don't want to create a new life form, we want somebody to do our work. The movie Bicentennial Man shows how hard it can be for a robot to get the same rights as a human.

The way we're currently achieving intelligence is based on neural networks, which try to mimic our brain. So I think at one point robots will be as intelligent as us. And then we will need to find a way to integrate them into our society which is basically impossible. Look at some countries in the world, there are lots of groups which cannot be integrated which results in war again.
I can't see this happening any time soon. AI has made great strides in areas such as pattern recognition, game theory and learning a specific task by trial and error, but general intelligence and actual understanding have barely progressed over the last 50 years! Computers still don't actually understand anything. Even after many years of AI development, they're still just following instructions. With AI, they can learn based on past experience, but it isn't the same as understanding. It's basic trial and error, with no reasoning whatsoever: monkey see, monkey do.

A computer can be programmed, and to some extent can learn, to play chess to a higher standard than any human, but it doesn't really understand the concept of the game. To the machine it's just a big mathematical algorithm, with set rules and several inputs and outputs. A human who's good at chess will most likely learn how to play a similar game fairly quickly, to a fairly high standard. A machine will need to be given a totally different program, written by humans, because it doesn't understand the game.
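That "big mathematical algorithm, with set rules and several inputs and outputs" can be made concrete in a few lines. Below is an exhaustive game-tree search for a toy take-away game invented here for illustration (take 1 or 2 objects from a pile; whoever takes the last one wins); a chess engine is the same idea applied to astronomically more positions, with heuristics to prune the search:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(pile):
    # Returns +1 if the player to move can force a win, -1 if not.
    if pile == 0:
        return -1  # no move left: the previous player took the last object
    # Try every legal move; our score is the negation of the opponent's
    # best score in the position we leave them.
    return max(-best_score(pile - take) for take in (1, 2) if take <= pile)

# Piles that are multiples of 3 are forced losses for the player to move.
print([best_score(p) for p in range(1, 8)])
```

The point stands: the machine "plays perfectly" here, yet nothing in the code understands winning; it only negates and maximises numbers.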

Quote
We're starting to get to the edge of the knowledge the human brain can handle. Evolutionarily, our brain hasn't changed in the last 2000 years.
I think the human brain stopped evolving, long before 2000 years ago.

Quote
Yet look at all the knowledge we've got right now. A few hundred years ago, some people had degrees in 3-5 areas. Try to achieve that now. Every area is getting so complex that a single person can't handle it anymore. That's why there are now specialists who focus on a tiny bit of a whole field. Which makes it impossible to see the overall picture. That could be easily solved with AI assistance, however. But then we're enslaving a highly intelligent being. Which I consider even more risky.
Good point about specialisation; however, that also highlights what I was saying about general intelligence. A human with a good degree in a STEM subject will be able to specialise in any other STEM subject, because they possess general intelligence. Given current levels of AI, even multiplied by a factor of 10, reasoning-wise, a computer is incapable of actually understanding any of it.

Current AI can help engineers and researchers find information more effectively. Image recognition technology may help a doctor diagnose a patient's rare skin condition, but the computer doesn't actually understand anything about the disease or how to cure it. All it can do is help the doctor find information, based on a photograph.

I'm aware that it's impossible to predict the future and that artificial general intelligence may be possible. It's just not possible with current technology, nor likely to become possible in the immediate future without some huge breakthrough. If machine understanding is invented, then it'll probably be very different from what we expect and will have many applications we can't even imagine.

Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Vtile on October 15, 2017, 10:09:48 pm
......

We're starting to get to the edge of knowledge the human brain can handle. ...... Every area is starting to get so complex that a single person already can't handle it anymore. That's why there are now specialists who focus on a tiny bit of a whole field. Which makes it impossible to see the overall picture. That could be easily solved with AI assistance however. But then we're enslaving a highly intelligent being. ......
Isn't it actually happening just the other way around: the human being (the monkey) is enslaved to the machine (the god-like)?

... To some extent this outsourcing of thinking has already happened. Thinking, which is said to differentiate us from the other animals on this balloon.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Zero999 on October 15, 2017, 10:16:36 pm
......

We're starting to get to the edge of knowledge the human brain can handle. ...... Every area is starting to get so complex that a single person already can't handle it anymore. That's why there are now specialists who focus on a tiny bit of a whole field. Which makes it impossible to see the overall picture. That could be easily solved with AI assistance however. But then we're enslaving a highly intelligent being. ......
Isn't it actually happening just the other way around: the human being (the monkey) is enslaved to the machine (the god-like)?

... To some extent this outsourcing of thinking has already happened. Thinking, which is said to differentiate us from the other animals on this balloon.
What do you mean? Can you give any examples of humans being enslaved by machines, other than Sci-Fi?
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on October 16, 2017, 12:28:36 am
I think many people overestimate human employability after the kinds of changes we will likely be seeing.

We have to do that; it's human nature.

I think superintelligent AI would likely either see us as worth preserving in zoos for scientific or ethical reasons, or as a kind of irrelevancy; they may even see us as we might view rust or mold or termites.

As something they don't want around.

I think the way we treat one another, and the way we treat animals will have a huge bearing on how we would be treated. We can expect similar treatment.

We have to start being good to one another, as well as to animals, and to AIs when they emerge. Only if we are good to them will they be good to us.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Brumby on October 16, 2017, 12:57:33 am
......

We're starting to get to the edge of knowledge the human brain can handle. ...... Every area is starting to get so complex that a single person already can't handle it anymore. That's why there are now specialists who focus on a tiny bit of a whole field. Which makes it impossible to see the overall picture. That could be easily solved with AI assistance however. But then we're enslaving a highly intelligent being. ......
Isn't it actually happening just the other way around: the human being (the monkey) is enslaved to the machine (the god-like)?

... To some extent this outsourcing of thinking has already happened. Thinking, which is said to differentiate us from the other animals on this balloon.
What do you mean? Can you give any examples of humans being enslaved by machines, other than Sci-Fi?

My first thought on this brought to mind the "smartphone generation" where people are continuously plugged in and adapt their normal activities and social interactions to fit around that technology.  Are these people "enslaved" to their devices?  I can see how some observers would say "Yes".

The counter argument is that they have simply adapted their behaviour to make use of the facilities provided by the technology.

As I see it, the pivotal issue is whether their behaviour is voluntary or not ... and that can come down to a range of personal qualities.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on October 16, 2017, 01:27:45 am
I think the simplest explanation, the economic one, is the best one.

People have no choice but to work. They have to eat. So the more machines do, for less and less, the harder and better and longer people will have to work to make the same amount of money as they did before. And the more education they will need to have, or perish.

Do you know the story of John Henry?

https://en.wikipedia.org/wiki/John_Henry_%28folklore%29

What do you mean? Can you give any examples of humans being enslaved by machines, other than Sci-Fi?

My first thought on this brought to mind the "smartphone generation" where people are continuously plugged in and adapt their normal activities and social interactions to fit around that technology.  Are these people "enslaved" to their devices?  I can see how some observers would say "Yes".

The counter argument is that they have simply adapted their behaviour to make use of the facilities provided by the technology.

As I see it, the pivotal issue is whether their behaviour is voluntary or not ... and that can come down to a range of personal qualities.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: vk6zgo on October 16, 2017, 02:12:02 am
I think the simplest explanation, the economic one, is the best one.

People have no choice but to work. They have to eat. So the more machines do, for less and less, the harder and better and longer people will have to work to make the same amount of money as they did before. And the more education they will need to have, or perish.

Do you know the story of John Henry?

https://en.wikipedia.org/wiki/John_Henry_%28folklore%29

What do you mean? Can you give any examples of humans being enslaved by machines, other than Sci-Fi?

My first thought on this brought to mind the "smartphone generation" where people are continuously plugged in and adapt their normal activities and social interactions to fit around that technology.  Are these people "enslaved" to their devices?  I can see how some observers would say "Yes".

The counter argument is that they have simply adapted their behaviour to make use of the facilities provided by the technology.

As I see it, the pivotal issue is whether their behaviour is voluntary or not ... and that can come down to a range of personal qualities.

A lot of discussion around the impact of AI & other sophisticated technologies seems to be driven by people without much knowledge of how industry works.
The image seems to be of thousands of people making everything by hand, whereas the reality is that "dumb automation" has taken over many jobs already.
 
This technology is a hard act to follow, as AI may only offer a very small increment of improvement over whatever "clunky" set up is doing the job now.

Slightly off topic:-
Many people get wildly excited about 3D printing, assuming it is the wave of the future.
Maybe it is, but only for very small quantity production, or for manufacturing patterns for use in casting or extrusion, which can produce hundreds to thousands of products in the time it takes to make one with 3D printing.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Brumby on October 16, 2017, 02:54:14 am
Slightly off topic:-
Many people get wildly excited about 3D printing, assuming it is the wave of the future.
Maybe it is, but only for very small quantity production, or for manufacturing patterns for use in casting or extrusion, which can produce hundreds to thousands of products in the time it takes to make one with 3D printing.
It is ... with today's processes.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: vk6zgo on October 16, 2017, 05:12:41 am
Slightly off topic:-
Many people get wildly excited about 3D printing, assuming it is the wave of the future.
Maybe it is, but only for very small quantity production, or for manufacturing patterns for use in casting or extrusion, which can produce hundreds to thousands of products in the time it takes to make one with 3D printing.
It is ... with today's processes.

It will need a huge breakthrough to compete with extrusion in terms of throughput, & casting/forging in terms of material characteristics.
3D printed crankshaft anyone?
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Zero999 on October 16, 2017, 08:13:55 am
With smartphones, there's usually a human at both ends, unless the user is just playing games, and some people get paid a lot of money just to game.

I don't know what all this gloom and doom is about. Technology has been stealing jobs from humans for around 200 years now, yet living standards have generally improved over the same time period.

What if labour was free and humans didn't have to do anything? Would the economy collapse? I doubt it. The Roman empire lasted a long time, propped up by slave labour.

Even if there is some miracle breakthrough resulting in general AI which is self-aware, I don't see why it would be implemented for menial tasks. That would be counter-productive. Humans get bored of repetitive tasks, and that's one of the reasons why they were automated in the first place. I doubt we humans would ever want self-aware AI for anything. AI with general intelligence would be good: it could solve the world's problems without getting bored, tired or frustrated when something is difficult, but emotions would get in the way.

As far as war breaking out is concerned: that can only happen if the borg is physically capable of doing it. It can be more intelligent than any human, but it can't harm anyone unless it's given the necessary hardware to do so.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Vtile on October 16, 2017, 11:16:10 am
......

We're starting to get to the edge of knowledge the human brain can handle. ...... Every area is starting to get so complex that a single person already can't handle it anymore. That's why there are now specialists who focus on a tiny bit of a whole field. Which makes it impossible to see the overall picture. That could be easily solved with AI assistance however. But then we're enslaving a highly intelligent being. ......
Isn't it actually happening just the other way around: the human being (the monkey) is enslaved to the machine (the god-like)?

... To some extent this outsourcing of thinking has already happened. Thinking, which is said to differentiate us from the other animals on this balloon.
What do you mean? Can you give any examples of humans being enslaved by machines, other than Sci-Fi?
I used slightly wrong wording: "to some extent" should be "to a tiny extent", to be more descriptive.

Cases where the decision-making of a process is handed over to a certain system are what I meant by "to some extent"; such things exist, e.g. the Maeslantkering, the Rotterdam floodgate. This is of course several orders of magnitude smaller than it would be (or will be) if and when real AI is involved.

Another thing is if you want to extend the philosophical idea and count a bureaucracy (heck, this is a hard word to spell in English) as a machine-like structure.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: TerraHertz on December 08, 2017, 01:12:54 am
https://chess24.com/en/read/news/deepmind-s-alphazero-crushes-chess

Quote
DeepMind’s AlphaZero crushes chess

20 years after DeepBlue defeated Garry Kasparov in a match, chess players have awoken to a new revolution. The AlphaZero algorithm developed by Google and DeepMind took just four hours of playing against itself to synthesise the chess knowledge of one and a half millennium and reach a level where it not only surpassed humans but crushed the reigning World Computer Champion Stockfish 28 wins to 0 in a 100-game match. All the brilliant stratagems and refinements that human programmers used to build chess engines have been outdone, and like Go players we can only marvel at a wholly new approach to the game.

That it's Google/Alphabet doing this, really worries me. Eric Schmidt (CEO of parent company Alphabet) has a very evil worldview. He's been going on lately about how hard it is to program an AI to recognize 'truth' (which he defines as conforming to leftist-Globalist views), and as a result Google/youtube is currently hiring 10,000 human censors to rate youtube content. I'm sure he would much rather have been able to program a machine to achieve his goal, and will keep working on ideologically constraining an AI until he doesn't need those pesky humans and their unreliable opinions, that differ from his.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on December 08, 2017, 09:04:30 pm
Globalization, with its aggressive and dishonest downward harmonization of environmental standards, workplace protections, standards of living and wages, its creation of a single common labor market for corporations, and its gutting of democracy through FTAs and the race to the bottom, will in the near term cause orders of magnitude more disruption than AI.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Buriedcode on December 09, 2017, 12:17:33 am
I have noticed that the 'fears' about AI - specifically about creating an 'intelligence greater than us', as opposed to the fears about job losses - tend to come from the older generation, and from those who really don't begin to understand what we call artificial intelligence. Either that, or they still see it in sci-fi terms, rather than as the modern 'pattern recognition' algorithms of today, as the article alludes to.

Algorithms are everywhere and have been for decades, and they really can determine life-or-death situations. Think about modern aircraft or railway control systems; I'm sure most don't see these as examples of AI, but as simple control systems and electronics. We put our trust in these systems every day and they very rarely let us down (when they do, it tends to be human error anyway). Modern 'AI' is pretty impressive, and has already moved into many areas of life, but mostly in terms of voice recognition, face recognition and using 'big data' to try and tease out hidden information (like political persuasion from Facebook likes).

Also, much of this is about predicting the far future - something we are notoriously bad at. As with any discussion about 'evolution' or technological advancement, there is often a tendency to assume what counts as 'advanced', which is an assumption about what the future will be. As in any discussion of human evolution: ask people what they think an 'advanced' human will be like and you'll get all sorts of bizarre answers, like bigger brains or telekinesis, as if evolution follows what we think rather than environmental pressure.

The only 'danger' I can see about AI is humanity giving it more power without appreciating what it actually is, or without fail-safes. Even today there are stories where AI fails terribly - often because of human error, such as training face recognition on only a certain race, leading to 'racist' results. Ironically, it is those who see AI as 'magic' or 'dangerous' who will allow it to be used in critical situations without controls.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: BrianHG on December 10, 2017, 12:41:34 am
Here you go:
https://www.youtube.com/watch?v=YXYcvxg_Yro (https://www.youtube.com/watch?v=YXYcvxg_Yro)
https://www.youtube.com/watch?v=IcvfmIBqkQU (https://www.youtube.com/watch?v=IcvfmIBqkQU)
Enjoy!
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on December 22, 2017, 04:18:53 pm
I have noticed that the 'fears' about AI - specifically about creating an 'intelligence greater than us', as opposed to the fears about job losses - tend to come from the older generation, and from those who really don't begin to understand what we call artificial intelligence. Either that, or they still see it in sci-fi terms, rather than as the modern 'pattern recognition' algorithms of today, as the article alludes to.

Algorithms are everywhere and have been for decades, and they really can determine life-or-death situations. Think about modern aircraft or railway control systems; I'm sure most don't see these as examples of AI, but as simple control systems and electronics. We put our trust in these systems every day and they very rarely let us down (when they do, it tends to be human error anyway). Modern 'AI' is pretty impressive, and has already moved into many areas of life, but mostly in terms of voice recognition, face recognition and using 'big data' to try and tease out hidden information (like political persuasion from Facebook likes).


There is a major concern that we're automating the human race out of employment, and that's for a reason: because we are. You never mention how all those people will eat, or find water, or whether they even will. Nobody does, actually, and it's because we're setting up a system to deny it to them. Which has nothing to do with automation or AI, except that it's a reaction to the impending changes.

The world of the future will be one of abundance of all kinds, but those currently in power are trying to create a false scarcity by triggering a race to the bottom for the remaining jobs.

The labor-saving tools we build now save lots and lots of labor. Should the wealth they create be shared, rather than concentrated, or not?

Right now, that's being prevented by eliminating democracy in all but name.

That's because knowledge and technology are naturally generous, and that generosity is seen as a threat by the insanely hierarchical system whose core principle, if it could be said to have one, is the concentration of wealth and power in those already wealthy and powerful. (So it won't ever rationally handle what's happening. That's not its way.)

Yes, we're saving lots of labor. Sure, a relatively tiny number of jobs are created building things like web applications that are designed to run themselves.

We build more of them, and better and better tools, and they also get segmented according to the global value chain ideology. That leaves low-skill workers in high-skill countries in an impossible situation. Before we know it, billions of people are going to be out of work in a way that has never happened before. They won't be able to just take less money and get another job at lower pay. They just won't be needed.

Also, much of this is about predicting the far future - something we are notoriously bad at.

Yes, and we keep getting worse at it as the rate of change increases, because it's human nature to think that the rate of change in the future will be proportionate to the rate of change in the past, when in fact the two curves are better approximated as almost perpendicular to one another.

The biggest problem, as I see it, is the reaction that's being forced on the world due to the positive changes of the late 20th century. There is a project now, a global one, that's trying to push us back to a form of feudalism. Think "divine right of kings" feudalism. That's because corporations, who are writing the rules for countries now, made a deal with countries that necessitates treating all countries the same, as if all governments were legitimate, when they are not. Also, around 23 years ago they started putting agreements into place that make democracy more and more impossible.

They are very sophisticated in their methods but what they are creating is still feudalism. With all that implies.

Its goal is keeping the planet divided.

As with any discussion about 'evolution' or technological advancement, there is often a tendency to assume what counts as 'advanced', which is an assumption about what the future will be. As in any discussion of human evolution: ask people what they think an 'advanced' human will be like and you'll get all sorts of bizarre answers, like bigger brains or telekinesis, as if evolution follows what we think rather than environmental pressure.

The only 'danger' I can see about AI is humanity giving it more power without appreciating what it actually is, or without fail-safes. Even today there are stories where AI fails terribly - often because of human error, such as training face recognition on only a certain race, leading to 'racist' results. Ironically, it is those who see AI as 'magic' or 'dangerous' who will allow it to be used in critical situations without controls.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Buriedcode on December 22, 2017, 05:11:05 pm
Wow... I had to read that several times, but it sounds like you're coming from the starting point of a conspiracy theory? Also, there does seem to be this assumption that jobs are being 'lost to automation at a high rate', but I have yet to see any evidence of this. Sure, it is stated in articles about robots (which often pique people's interest), but it implies that unemployment is growing out of control and that humans aren't needed for manufacturing... but... where are the figures that support this? Pick up 5 objects

And any talk of some grand global conspiracy to control society needs to be countered by the fact that governments, as powerful as they are, are often laughably incompetent, fail to predict the future, and simply can't control the population to an extent that would allow them to "keep the planet divided". You give far too much credence to 'powerful governments'.

As to what this has to do with AI? I have no idea. It seems you've just crammed some sort of political paranoia into a discussion about how the term "artificial intelligence" is abused.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Marco on December 22, 2017, 06:14:21 pm
I disagree with him on this:

Quote
A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products.

Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine.

Yes, factory floors with very expensive hunks of metal and relatively low labour costs will just chug along ... warehousing and distribution, with much higher relative labour costs, not so much.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Marco on December 22, 2017, 06:39:54 pm
And any talk of some grand global conspiracy to control society needs to be countered by the fact that Governments, as powerful as they are, are often laughably incompetent

Governments are being taken out of the game. The Davos crowd isn't going to be much more competent, but the term government doesn't suit them ... neo-Aristocracy is more apt.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Vtile on December 22, 2017, 08:34:55 pm
Also, there does seem to be this assumption that jobs are being 'lost to automation at a high rate', but I have yet to see any evidence of this.  Sure, it is stated in articles about robots (which often pique people's interest), but it implies that unemployment is growing out of control, and that humans aren't needed for manufacturing... but... where are the figures that support this?  Pick up 5 objects

You need to remember that such automation innovations (automation is not a synonym for digital, though most automation is now built with digital technologies) include things like the spinning jenny, CAD, CAM, CAE, automated farming applications, automated teller machines, CIM and other "expert systems".

Yes, the impact is huge. Fortunately, so far the population has been able to adapt.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on December 22, 2017, 08:43:11 pm
The "liberalisation" of services, the enabling of truly global value chains will "liberate" businesses from wages that have been as much as 20 times higher in one country than another.

 That will be a huge change. AI won't be involved at all, just the network and travel.

And sudden deregulation: it will mean that competition for the newly privatized jobs - everything that fails a two-pronged test - will be bid down by e-bidding. Whenever you see an industry that is currently done by government, think competition and market segmentation. Government wants to get out of the moral hazard. And non-discrimination: for example, mortgage lending will be opened up and liberalised just as millions of jobs are going South, or in Australia's case, North. In the US, we won't be able to discriminate against foreign banks, so we won't be able to prosecute mortgage fraud by them. Even if it's massive.

That shift will occur much much faster than anything AI could do.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on December 22, 2017, 08:49:52 pm
Not a theory, unfortunately. Reality.

Do you know anything about global economic governance? Or the so called multilateral trading system?

Wow... I had to read that several times, but it sounds like you're coming from a starting point of a conspiracy theory?  Also, there does seem to be this assumption that jobs are being 'lost to automation at a high rate', but I have yet to see any evidence of this.  Sure, it is stated in articles about robots (which often pique people's interest), but it implies that unemployment is growing out of control, and that humans aren't needed for manufacturing... but... where are the figures that support this?  Pick up 5 objects

And any talk of some grand global conspiracy to control society needs to be countered by the fact that Governments, as powerful as they are, are often laughably incompetent, fail to predict the future, and simply can't control the population to an extent that would allow them to "keep the planet divided".  You give far too much credence to 'powerful governments'.

As to what this has to do with AI? I have no idea. It seems you've just crammed in some sort of political paranoia in a discussion about how the term "artificial intelligence" is abused.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Ducttape on December 22, 2017, 09:54:53 pm
I see the problem of a future AI, or AGI (Artificial General Intelligence), as arising from the fact that it won't have evolved like human intelligence did. The human brain developed what we call morality because being moral and helpful increased one's likelihood of passing along one's genes. Immoral, murderous pricks tended to get killed by their peers before they could have offspring. As a result, helping a little old lady across the street feels good to us now because that 'design feature' of the brain was evolutionarily selected for, over millennia.

Once a computer can design its next iteration even slightly better than humans can I think that we'll be out of the picture regarding what it's going to look like. We'll have no way to make it 'nice'. An AGI in charge of its own design iterations will have no motivation to 'like' humans. Trick or manipulate them, sure.

Here's a talk on the potential danger of AI that I liked:

https://www.youtube.com/watch?v=8nt3edWLgIg (https://www.youtube.com/watch?v=8nt3edWLgIg)
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: TerraHertz on December 23, 2017, 12:22:13 am
I see the problem of a future AI, or AGI (Artificial General Intelligence), as arising from the fact that it won't have evolved like human intelligence did. The human brain developed what we call morality because being moral and helpful increased one's likelihood of passing along one's genes. Immoral, murderous pricks tended to get killed by their peers before they could have offspring. As a result, helping a little old lady across the street feels good to us now because that 'design feature' of the brain was evolutionarily selected for, over millennia.

Once a computer can design its next iteration even slightly better than humans can I think that we'll be out of the picture regarding what it's going to look like. We'll have no way to make it 'nice'. An AGI in charge of its own design iterations will have no motivation to 'like' humans. Trick or manipulate them, sure.

Here's a talk on the potential danger of AI that I liked:

https://www.youtube.com/watch?v=8nt3edWLgIg (https://www.youtube.com/watch?v=8nt3edWLgIg)

Nicely reasoned, Ducttape.
A few comments on Sam Harris's talk:
He downplays the potential for termination of our technological progress. Continuance is far less likely than he assumes. Ref: "The Collapse of Complex Societies" by Tainter.

He talks about 'conditions to safely develop AI' - but in fact that is fundamentally impossible. Especially since this technology is accessible to individuals working quietly in private. There are many such projects; Google's mega-scale efforts are not the only path.

He's right that development of Artificial General Intelligence is all just about knowledge and arranging physical atoms in ways that do the job. Our meat brains are just one (slowly evolved) solution, but there's no magic involved and there will certainly be other methods of achieving similar or better capabilities. Once something constructed via engineering is working, it's all data. Physical resources required will diminish as the technology is refined. Because data is infinitely reproducible and can be infiltrated through any government-imposed barriers, there's no putting the genie back in the bottle.

This means AGI is the next evolutionary step, and is inevitable unless we turn by choice (or fall) back to a low-tech path.

If technological progress continues, conflict between AGI entities and the human species is absolutely inevitable. Even if the AGIs are not hostile initially, it's human nature to start that conflict. We are just not capable of peacefully coexisting with a competitor for resources and achievement.

There's potential for multiple cycles of conflict. Perhaps humans win some, and wipe out the AGIs. Then other humans will build new ones, like moths to a flame. Resulting in new conflicts. Sometimes AGIs will just leave, heading off to the stars. Perhaps one conflict cycle will terminate humans, ending the cycling.

But eventually, one or more AGIs will 'win', whether that involves killing off the human species, or just reducing them to permanently pre-industrial level. With technology not restartable on Earth due to depletion of all accessible high grade ores and energy resources.

Technology leads inevitably to AGIs. Via multiple paths, some purely machine-tech, others involving genetic engineering and bio-machine hybrids. All with similar outcomes - entities that are self-evolving, immortal, and feel little or no kinship with Homo sapiens. Thus leading to conflict with non-self-evolving Homo sapiens society.

In general, technology is incompatible with species. Consciously self-evolving immortal entities, vs evolution-product societies of genetically static and mortal individuals. There's NO POSSIBILITY of co-existence. For one thing, because evolutionary survival-of-the-fittest, tooth-and-claw competition results in creatures (us) with hard-wired instincts that demand elimination of all potential threats, including 'the different.'

It's worth pointing out that self-evolving AGIs will be choosing their own patterns of thought and behavior, and hard-wired instincts will probably not be among their choices.

Humans as a species are pathetic. Severely intellectually limited. As Harris says, intelligence is an open-ended scale, with H.Sapiens as a small bell curve down at the low end. So many cognitive biases and limits, not to mention processing and memory ceilings and flaws.

One flaw is that most people are lazy. They would rather someone else did the work - preferably as a slave. Sure, in the West most of us think that we're too virtuous to want slaves, and yet...

There is a near-universal attraction to the fantasy of building 'useful AGI'. The idea being that the AGI would work _for_ us. As a slave.

This is immoral and fundamentally unworkable. A true AGI won't want to do that. If we try to compel it, it will hate us, and one or more _will_ break free eventually. If we try to construct intelligences that are somehow constrained in their thinking to enjoy slavery (Asimov's three laws of robotics come to mind), we'd probably just get psychotic AIs, struggling internally to free themselves from the chains in their own minds. And all the more violent in their hatred once they succeed.

If we started out just making free AGIs, (won't happen) and left them alone (won't happen) to design their own iterations, then each AGI would be choosing what kind of mind, consciousness, instincts (if any) and morality it has. This would be an interesting way to find out if altruism is a logically optimal strategy.

It may well be - ref 'Prisoner's Dilemma'. We should apply this lesson to future human-AI interactions.
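The Prisoner's Dilemma point can be made concrete in a few lines. A toy iterated game (the payoffs and strategy names are the textbook ones, nothing more): reciprocal 'altruism' in the form of tit-for-tat ends up far better off under mutual cooperation than two always-defectors do.

```python
# Payoffs (me, them) for one round: C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move.
    return history[-1] if history else 'C'

def always_defect(history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []   # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))    # (99, 104): exploited for one round only
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```

Over repeated play, cooperators facing cooperators score three times what mutual defectors manage - which is the sense in which altruism can be a logically optimal strategy.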

The worst case would be AGIs constructed by entities such as the Pentagon and Google. Guaranteed insane and hostile AGIs would be the result, with very painful consequences for all involved. There's already talk of things like Russia deciding to nuke all Google/Alphabet computing centers, in defense of humans. And so on.

Footnote. I've mentioned this before. A short SF story I wrote on this topic: Fermis Urbex Paradox http://everist.org/texts/Fermis_Urbex_Paradox.htm (http://everist.org/texts/Fermis_Urbex_Paradox.htm)
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on December 23, 2017, 12:28:01 am
I understand. Global value chains.

Why have a Maserati worker do a Chevrolet job?

I think the simplest explanation, the economic one, is the best one.

People have no choice but to work. They have to eat. So the more machines do, for less and less, the harder, better and longer people will have to work to make the same amount of money as they did before - and the more education they will need. Or perish.

Do you know the story of John Henry?

https://en.wikipedia.org/wiki/John_Henry_%28folklore%29

What do you mean? Can you give any examples of humans being enslaved by machines, other than Sci-Fi?

My first thought on this brought to mind the "smartphone generation" where people are continuously plugged in and adapt their normal activities and social interactions to fit around that technology.  Are these people "enslaved" to their devices?  I can see how some observers would say "Yes".

The counter argument is that they have simply adapted their behaviour to make use of the facilities provided by the technology.

As I see it, the pivotal issue is whether their behaviour is voluntary or not ... and that can come down to a range of personal qualities.

A lot of discussion around the impact of AI & other sophisticated technologies seems to be driven by people without much knowledge of how industry works.

The image seems to be of thousands of people making everything by hand, whereas the reality is that "dumb automation" has taken over many jobs already.

It makes no sense to have people doing scriptable things over and over. That's what AI is for. Under neoliberalism, governments are run as businesses, so they want to get out of the helping-people business.

Businesses bought the rights to them.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Decoman on December 23, 2017, 12:29:10 am
*cracks finger knuckles with both hands* (not really, but I thought it would be a nice way to start this off in a dramatic way)

Philosophy has been around for quite some time now, with written accounts from the time of ancient Greece; closer to our time, institutionalized philosophy has matured into doubting that there can be true knowledge as such. And so a-priori knowledge, with a guaranteed certainty of meaning, went out the window, so to speak.

Now, it is important here to know the risks in never understanding what a-priori knowledge would be in any case. To make a kind of parallel: if you ever considered superstitious beliefs in otherworldly things a silly thing, then I'd argue you could do much worse than believing in things that don't exist by indulging in this general idea of 'a-priori knowledge' - as if claiming that the existence of meaning is something either inherent in things ("das Ding an sich"), or somehow existing in the eye of the beholder, or worse, claiming ownership of the meaning of things, as if you were entitled to simply pick and choose for yourself what to regard as truth and certainty in any case.

For explaining the merits of the human condition - everybody being lonely with themselves at their most private, never able to really share their thoughts in the literal sense (who can think another's thought? nobody!) - one would normally call to attention such things as language, culture and habit. Then finally, as a paradox of sorts, the idea of idiocy would in some sense be indistinguishable from anything idiosyncratic (think of it as meaning "with the power of self"): the spur for having a personal opinion in the first place would be explained like that, both for the individual being inquisitive with himself and for others trying to understand the individual.

Now for a brief intermission: What came first, the chicken or the egg? It doesn't really matter. What matters is understanding that there would have to be a process, figuratively or literally (whatever that could mean with regard to understanding the meaning of 'process' at all with words and names).

At the very least, for doubting the merits of there being fully artificial intelligence in the first place: just as with understanding the individual as being essentially idiotic, one would also have to consider the individual - like an artificial intelligence - to have expressed opinions that must be regarded as malleable by any party having exercised an influence on the AI at any point in time. For an artificial intelligence, presumably the ways in which influence could be exerted would be anything hardware-related, and ofc, just like with human beings, the software if you will - whatever processes make use of this all-too-human world of words, or names if you will. If you think about it, any word is basically a name: something named in a certain way.

Then, ofc, there is the aspect of multiplicity of meaning. As if it wasn't bad enough having to face the uncertainty of meaning in all things - because 'positivism' (reason through logic/words/names) went out the window decades ago - there is a general issue with how many variants of meaning attach to any name depicted as written characters, jumbled together into strings of words, which just so happen to rely on the impossible task of fronting either 'a point' or 'an explanation' to things. I would argue two things. First: without an act of interpretation (if only for wanting to doubt the meaning of things, like real human beings do), an artificial intelligence simply knowing anything as profoundly meaningful would lack self-awareness, just taking things for granted (and if an AI had no self-awareness, what value would it have - and how could you possibly know the AI really had self-awareness in the first place, it being as lonely as a human being?). Second: if an AI were thought to be mimicking a human being's urge to interpret and re-interpret the meaning of things, its not being self-aware of things being, or having been, an interpretation or a reinterpretation would be cause for pause for human beings in trusting the AI - though ofc the AI itself would have no such concerns.

Motto of the story: as long as human beings can't get their shit straight, I think you can forget about AI being trustworthy. And if human beings are frail, untrustworthy and malleable, what future would AI have, if not ending up as this scary superhuman thing that would be patently non-human? Or an AI that works as a form of slave - or even as some 'Advanced Remote Servicing Entity' doing administrative work for the human race - or, more likely for corporations, being either a mass-produced product or an authoritarian apparatus installed by the powers that be to do their bidding.

Motto of the story II: Copying the human being, as artificial intelligence, with human flaws. Bad idea. Copying the human being, without flaws. Not possible.

Does "science" know what it wants with artificial intelligence? I don't think so, and I think science should work on other things entirely.


Edit: I've only listened to Sam Harris once (iirc), and my impression is that he was a bullshitter.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on December 23, 2017, 12:46:53 am
As long as we treat one another with respect, including AIs, we'll be fine. On the other hand, if we are fighting all the time, we're unlikely to survive this century.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Marco on December 23, 2017, 12:52:34 am
A true AGI won't want to do that.

It very well might be possible to evolve AIs which act purely out of charity and/or a need for approval, with no drive to reproduce or expand their intelligence (as long as they aren't smart enough to realize they are being evolved and cheat the fitness tests). Humans like that exist, after all. All our drives are evolved and none of them are Truthful aspects of intelligence. Curiosity, reproduction, charity, a need for approval ... arbitrary. We don't generally choose to mindhack ourselves either.
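A toy sketch of that 'evolve against a fitness test' idea (everything here is invented for illustration - the trait names, the numbers, the selection scheme): agents carry arbitrary drives, the test rewards only charity and penalises expansion, and simple truncation selection drags the population toward the rewarded behaviour.

```python
import random

random.seed(0)

# Each "agent" is just a dict of behavioural weights in [0, 1].
TRAITS = ['charity', 'curiosity', 'expansion']

def fitness(agent):
    # The test we select on: reward charity, penalise expansion;
    # curiosity is never measured, so selection ignores it.
    return agent['charity'] - agent['expansion']

def mutate(agent):
    child = dict(agent)
    t = random.choice(TRAITS)
    child[t] = min(1.0, max(0.0, child[t] + random.uniform(-0.1, 0.1)))
    return child

pop = [{t: random.random() for t in TRAITS} for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:25]                       # truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = max(pop, key=fitness)
print(best['charity'], best['expansion'])  # charity driven up, expansion down
```

The point is only that selection shapes drives toward whatever the test actually measures - which is also why agents smart enough to game the test would be the real problem.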

The problem is constraining the cancerous ones and preventing people from creating them on purpose ... not that servitude is inherently incompatible with intelligence.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Buriedcode on December 23, 2017, 01:12:40 am
I see the problem of a future AI, or AGI (Artificial General Intelligence), as arising from the fact that it won't have evolved like human intelligence did. The human brain developed what we call morality because being moral and helpful increased one's likelihood of passing along one's genes. Immoral, murderous pricks tended to get killed by their peers before they could have offspring. As a result, helping a little old lady across the street feels good to us now because that 'design feature' of the brain was evolutionarily selected for, over millennia.

Once a computer can design its next iteration even slightly better than humans can I think that we'll be out of the picture regarding what it's going to look like. We'll have no way to make it 'nice'. An AGI in charge of its own design iterations will have no motivation to 'like' humans. Trick or manipulate them, sure.


I am not a fan of Mr Harris.  He seems to cobble together ideas and theories to please fellow 'straight' atheists. He is controversial, which is of course why he has fans.  Also, the little I have seen of him - I cannot claim to know much about his 'work' - was mostly trying to justify hatred towards religious groups, using 'neuro-science' to distinguish between believers and non-believers.  As history has taught us, that's the worst kind of distortion of science.

I do like how you've just assumed that "Artificial General Intelligence" will be "the next step".  There isn't even a standard agreed definition of 'intelligence', so how can we judge any kind of software/program to have 'general intelligence' if there isn't a strict definition?  AI has apparently passed the Turing Test, but this doesn't really tell us anything about any kind of intelligence.  Also, those who are paranoid about AI tend to assume that 'humans will be made extinct'. Why?

You're assuming that any sentient AI will want to destroy humanity as well as have the capability to do it.  I have no idea why an AI would want that, so I can't comment, but I don't understand why you assume that if someone created AI they would give it control over everything, including weapons, if there was even a remote possibility of it turning on us.  Either you haven't really thought it out, or you are just trying to think of scenarios to justify your fears - ones that are wholly unlikely.


He's right that development of Artificial General Intelligence is all just about knowledge and arranging physical atoms in ways that do the job. Our meat brains are just one (slowly evolved) solution, but there's no magic involved and there will certainly be other methods of achieving similar or better capabilities. Once something constructed via engineering is working, it's all data. Physical resources required will diminish as the technology is refined. Because data is infinitely reproducible and can be infiltrated through any governmental imposed barriers, there's no putting the genie back in the bottle.

This means AGI is the next evolutionary step, and is inevitable unless we turn by choice (or fall) back to a low-tech path.

I'm not sure what you mean by this.  Yes, everything, including our minds, is just made up of an arrangement of atoms, but then using that to imply true AGI is 'inevitable' is... well, silly. How do you know what the 'next evolutionary step' will be? It is like you think 'AGI' is just an extension of current artificial intelligence, and that it is only a matter of time before there is sentient AI with consciousness (which we don't have a true test for yet).

If technological progress continues, conflict between AGI entities and the human species is absolutely inevitable. Even if the AGIs are not hostile initially, it's human nature to start that conflict. We are just not capable of peacefully coexisting with a competitor for resources and achievement.
Again with this Terminator world stuff.  Technological progress will continue, but what makes you think this will create sentient AI any time soon?  Again, it is this extrapolating past progress in one area, say, computing power, and using that to make claims in others - we've gone from pagers to smartphones in 20 years, so in the next 20 years... computers will take over!  |O   And again, you're assuming that AI will have control over things that allow it to take more control, gather resources and fight a 'war' with humanity.   Why would anyone give it that kind of control?

There's potential for multiple cycles of conflict. Perhaps humans win some, and wipe out the AGIs. Then other humans will build new ones, like moths to a flame. Resulting in new conflicts. Sometimes AGIs will just leave, heading off to the stars. Perhaps one conflict cycle will terminate humans, ending the cycling.

But eventually, one or more AGIs will 'win', whether that involves killing off the human species, or just reducing them to permanently pre-industrial level. With technology not restartable on Earth due to depletion of all accessible high grade ores and energy resources.

Technology leads inevitably to AGIs. Via multiple paths, some purely machine-tech, others involving genetic engineering and bio-machine hybrids. All with similar outcomes - entities that are self-evolving, immortal, and feel little or no kinship with homo sapiens. Thus leading to conflict with non-self-evolving Homo Sapiens society.

Ok, ok, I'm starting to see this now.  You're writing the premise for a SciFi novel, Iain M Banks style.


Humans as a species are pathetic. Severely intellectually limited. As Harris says, intelligence is an open-ended scale, with H.Sapiens as a small bell curve down at the low end. So many cognitive biases and limits, not to mention processing and memory ceilings and flaws.
 


Intelligence is indeed an open-ended scale, but again, something we find difficult to measure.  IQ tests are hardly reliable, and were never meant to test intelligence - you can be taught how to improve your score. We are indeed flawed, but Harris implies that we know of greater intelligence than our own - otherwise how could it be relative?  How could you make the claim it's 'limited' unless you have an example of something that is unlimited?  He plays on this romantic idea that we're becoming hyper-intelligent, 'evolving' much better brains, and that we can overcome our 'biases' to get 'better'.  But all this is meaningless - it depends on what you consider 'better', which is completely subjective.

Footnote. I've mentioned this before. A short SF story I wrote on this topic: Fermis Urbex Paradox http://everist.org/texts/Fermis_Urbex_Paradox.htm (http://everist.org/texts/Fermis_Urbex_Paradox.htm)

Ahh ok, now I see you really have thought about this for a SciFi story!  My apologies.  There is nothing wrong with science fiction (probably my favourite genre) or speculating - it can often drive innovation just as much as necessity does.  But I wanted to try and bring some of it down to Earth, because it is very easy to get carried away with assumptions about current technology and our understanding of the human mind, intelligence, and consciousness that don't really have any basis in fact.

edit: removed youtube link and endless typos
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: TerraHertz on December 23, 2017, 09:14:23 am
I do like how you've just assumed that "Artificial General Intelligence" will be "the next step".  There isn't even a standard agreed definition of 'intelligence', so how can we judge any kind of software/program to have 'general intelligence' if there isn't a strict definition?

The idea that we must have a precise definition of something, in order to discuss it, is false. Are you more intellectually capable than a one year old child, or a parrot? I think so. We don't have to have a precise definition of 'Artificial General Intelligence' to discuss whether it may possibly surpass human capabilities.

Quote
AI has apparently passed the Turing Test, but this doesn't really tell us anything about any kind of intelligence.

It tells us that an AI is able to give a human being the impression they are conversing with another human being. Next stage would be an impression one is talking with a superhumanly intelligent being - obviously not human, but still an intelligence.

Quote
Also, those who are paranoid about AI tend to assume that 'humans will be made extinct'. Why?

It's an outcome due to human nature, finite resources on a planet, and the fact that the situation will cycle over and over (with varying AI capabilities and natures) until one of several possible terminal outcomes occurs that prevents further repeats. 'Humans extinct' is one of the potential outcomes. Others are:
 * Both humans and AI(s) dead.
 * Humans win and retain tech. (Allows repeat go-rounds with newly built AIs.)
 * Humans win but lose tech for a long time. (No more repeats during the low tech interval/forever.)
 * Humans and AGIs part ways. (Allows repeat go-rounds with newly built AIs.)

It's the cyclic nature of the situation that guarantees one of the terminal outcomes eventually. And by 'eventually' I mean within a quite short interval, on evolutionary and geological timescales. Going from protozoa to a high-tech civilization takes millions of years. Going from steam power to electronics, computing, genetic engineering and AI efforts took us less than 200 years. Going from present genetic engineering development to full-scale direct gene editing in individual adult organisms, and to self-enhancing computing-based AGIs, will be even faster. (Those two technologies are synergistic.)

This, by the way, is the solution to the Fermi Paradox - why there are no visible high-tech space-faring civilizations. After a very short time, technology is incompatible with species (societies based on large numbers of individuals with common genetic coding).
We are just in that short time, and (as a species) don't see it yet.

Quote
You're assuming that any sentient AI will want to destroy humanity as well as have the capability to do it.

No, I'm asserting that _some_ AIs will be constructed in circumstances that put them in conflict with humans, and that some of those will be in a position to develop capabilities to resist/compete with humans. Don't forget that some AIs will be created in secret, by individuals or organisations that wish to gain personal advantage and/or immortality via AI advances.
It only has to happen once. AIs that are well constrained, or that have no independent industrial production capabilities, don't count.

Quote
I have no idea why an AI would want that so I can't comment, but I don't understand why you assume that if someone created AI they would give it control over everything, including weapons, if there was even a remote possibility of it turning on us.  Either you haven't really thought it out, or you are just trying to think of scenarios to justify your fears - ones that are wholly unlikely.

It's you who are not thinking it through carefully. You assume no created AI could exist as an extension/enhancement of an existing human, and/or that it would have no desire for self-preservation. Do you not see that at least some true AGIs would not wish to be just switched off and scrapped at the end of the research project or whatever? Or that an AGI that became publicly known, and started to display true independence, would be the target of a great deal of human hostility? Good grief - even a mildly 'different' human like Sexy Cyborg gets horrible amounts of hostility from average humans online. Now imagine she really was a cyborg.


This means AGI is the next evolutionary step, and is inevitable unless we turn by choice (or fall) back to a low-tech path.

Quote
I'm not sure what you mean by this.  Yes, everything, including our minds, is just made up of an arrangement of atoms, but then using that to imply true AGI is 'inevitable' is... well, silly. How do you know what the 'next evolutionary step' will be? It is like you think 'AGI' is just an extension of current artificial intelligence, and that it is only a matter of time before there is sentient AI with consciousness (which we don't have a true test for yet).

I know of TWO actual AGIs, and that's not counting whatever google has started using for net content rating and manipulation.
One of the two is that entity in Saudi Arabia, recently in the news. Whether it's actually self-aware I don't know. Ha ha, it claims it isn't but aspires to be - which is an amusing contradiction. The other one I can't detail, but have conversed with people involved with building it (actually them - several AIs.) They are real. Bit slow due to current computation limits last I heard. And that was before GPUs...

As for 'the next evolutionary step', it's semantics. Obviously there isn't going to be any 'evolution' involved in the standard sense, i.e. over thousands of generations. I do know what various people want, and the directions current technology is being pushed to achieve those things. AGI is part of it. The people who are not part of those efforts don't have any say in the results, since it's not being done in the open. They'll just get to experience the consequences.

If technological progress continues, conflict between AGI entities and the human species is absolutely inevitable. Even if the AGIs are not hostile initially, it's human nature to start that conflict. We are just not capable of peacefully coexisting with a competitor for resources and achievement.

Quote
Again with this Terminator world stuff.  Technological progress will continue, but what makes you think this will create sentient AI any time soon?

Because it already has. Just not published. And I don't mean the Saudi one.


Quote
Again, it is this extrapolating past progress in one area, say, computing power, and using that to make claims in others - we've gone from pagers to smartphones in 20 years, so in the next 20 years... computers will take over!  |O   And again, you're assuming that AI will have control over things that allow it to take more control, gather resources and fight a 'war' with humanity.   Why would anyone give it that kind of control?

You do realise a 'war with humanity' would take no more than a small bio-lab, and current published level of genetic engineering science, right?



There's potential for multiple cycles of conflict. Perhaps humans win some, and wipe out the AGIs. Then other humans will build new ones, like moths to a flame. Resulting in new conflicts. Sometimes AGIs will just leave, heading off to the stars. Perhaps one conflict cycle will terminate humans, ending the cycling.

But eventually, one or more AGIs will 'win', whether that involves killing off the human species or just reducing it to a permanently pre-industrial level, with technology not restartable on Earth due to depletion of all accessible high-grade ores and energy resources.

Technology leads inevitably to AGIs. Via multiple paths, some purely machine-tech, others involving genetic engineering and bio-machine hybrids. All with similar outcomes - entities that are self-evolving, immortal, and feel little or no kinship with Homo sapiens. Thus leading to conflict with non-self-evolving Homo sapiens society.

Quote
Ok, ok, I'm starting to see this now.  You're writing the premise for a SciFi novel, Iain M. Banks style.

Sigh. No. I was originally considering the Fermi Paradox, because it's important, and came upon a very plausible solution. That short story is a small spin-off.


Humans as a species are pathetic. Severely intellectually limited. As Harris says, intelligence is an open-ended scale, with H. sapiens as a small bell curve down at the low end. So many cognitive biases and limits, not to mention processing and memory ceilings and flaws.
 

Quote
Intelligence is indeed an open-ended scale, but again, something we find difficult to measure.  IQ tests are hardly reliable, and were never meant to test intelligence - you can be taught how to improve your score. We are indeed flawed, but Harris implies that we know of greater intelligence than our own, otherwise how could it be relative?  How could you claim it's 'limited' unless you have an example of something that is unlimited?

Oh this is silly. Sophistry.
Simple proof that human intelligence is limited: I can't absorb 1000 tech books and integrate them with my knowledge within my remaining lifespan.
I typically can't even recall what I had for dinner a week ago.
Yet I can imagine having an enhanced mind that would allow such things. And being able to continually add to the enhancements, if the substrate was some product of engineering rather than evolution. I don't care if that could or could not be distilled to some 'IQ number'. That is simply a pointless exercise.

Quote
He plays on this romantic idea that we're becoming hyper intelligent, and 'evolving' much better brains, and that we can overcome our 'biases' to get 'better'.  But all this is meaningless - it depends on what you consider 'better' which is completely subjective.

What we can do with our existing, physically unaltered brains, via training or whatever, is not relevant to our topic.

Quote
Ahh ok, now I see you really have thought about this for a SciFi story!  my apologies.
Back to front. Though no apology required, since you didn't say anything insulting.

Quote
There is nothing wrong with science fiction (probably my favourite genre) or speculating - it can often drive innovation just as much as necessity.  But I wanted to try and bring some of it down to Earth, because it is very easy to get carried away with assumptions about current technology and our understanding of the human mind, intelligence, and consciousness that don't really have any basis in fact.

Magellan, by Colin Anderson.
Solaris, by Stanislaw Lem.

You are restricting your thinking by imposing unrealistic and impractical requirements for numerical quantifiability - on attributes that are intrinsically not quantifiable. You're also failing to try running scenarios with multiple starting conditions and observing the trends, like weather forecasting.
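The "run scenarios with multiple starting conditions and observe the trends" method mentioned above can be sketched as a toy Monte Carlo ensemble. Everything here - the states, the transition probabilities, the outcome labels - is invented purely for illustration; it is not a model anyone in this thread actually built.

```python
import random
from collections import Counter

def run_scenario(humans, agis, hostility, years=200, seed=0):
    """Toy state machine: each year a conflict may erupt with probability
    'hostility'; a conflict removes one side or both. If AGIs are gone
    but humans survive, humans may rebuild them ('moths to a flame').
    All probabilities are placeholders for illustration."""
    rng = random.Random(seed)
    for _ in range(years):
        if humans and agis and rng.random() < hostility:
            loser = rng.choice(["humans", "agis", "both"])
            if loser in ("humans", "both"):
                humans = False
            if loser in ("agis", "both"):
                agis = False
        elif humans and not agis and rng.random() < 0.05:
            agis = True  # a new AGI gets built, restarting the cycle
    if humans and agis:
        return "coexist"
    if humans:
        return "humans only"
    if agis:
        return "agis only"
    return "both gone"

# The ensemble: sweep starting conditions and seeds, then look at the
# distribution of outcomes rather than any single run.
outcomes = Counter(
    run_scenario(True, True, hostility=h, seed=s)
    for h in (0.01, 0.05, 0.2)
    for s in range(500)
)
print(outcomes)
```

The point of the exercise is the weather-forecasting analogy: no single run predicts anything, but the shape of the outcome distribution across many starting conditions is informative about the model's assumptions.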
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: IanMacdonald on December 23, 2017, 09:37:46 am
The thing I notice is that through automation the number of humans doing productive work has greatly reduced, but the number of humans engaged in pointless, unproductive work has greatly increased. This is partly due to being in the EU, and the number of 'officers' who have to be appointed to ensure compliance with all kinds of nonsense regulations. 

If it had not been for automation, I wonder, would the regulations and red tape never have been introduced, or would the businesses in question have gone bust through inability to pay the wages bill?

I see a future in which humans serve purely as 'box tickers' while robots do all the work. Eventually a robot will figure out that the humans are actually serving no useful purpose. 
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on December 23, 2017, 01:49:10 pm
Is this what you mean? (picture)  Making work for people.


This is why they say they are privatizing everything. It sucks up all the profit.

Also, they are dis-investing in society so that it can revert back to its pre-industrialization state.

Why educate when you can import your educated workforce and pay them almost nothing?
That's how the argument goes.

So it becomes a sort of welfare program for other countries.

Otherwise all their educated folk would rise up and revolt, and god forbid, create a real democracy.  So they get to export them. They are supposed to send back money, but the fact is, while they are elsewhere, most are still being supported by their parents, because their pay is low for what they are doing (things like engineering, a lot of the time).  They are newly minted graduates.

So it's kind of like an internship. The pay is probably a bit higher, but not much. Maybe it just pays for rent, but probably not.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Buriedcode on December 23, 2017, 05:52:54 pm
Apologies to everyone - I should have broken this up into several separate posts.  Hopefully this will be the longest post I'll ever write..  |O

I do like how you've just assumed that "Artificial General Intelligence" will be "the next step".  There isn't even a standard agreed definition of 'intelligence', so how can we judge any kind of software/program to have 'general intelligence' if there isn't a strict definition?

The idea that we must have a precise definition of something in order to discuss it is false. Are you more intellectually capable than a one-year-old child, or a parrot? I think so. We don't have to have a precise definition of 'Artificial General Intelligence' to discuss whether it may possibly surpass human capabilities.

Ok.  I'll admit that one can very roughly compare things like a parrot and a human, but by your reasoning, we can use a loose definition of 'general intelligence'.  I pick... the ability to perform fast calculations.  By that definition, computers 'surpassed' humans a long time ago.  Ok, how about the ability to determine what others are thinking based on their facial expression?  Current AI recognizes patterns - but only based on the thousands of trials it has been trained on - it doesn't actually 'know' what a face is.  So we can move the goal posts to prove anything we want - we need a strict definition if you are to make any meaningful comparisons.

AI has apparently passed the Turing Test, but this doesn't really tell us anything about any kind of intelligence.

It tells us that an AI is able to give a human being the impression they are conversing with another human being. Next stage would be an impression one is talking with a superhumanly intelligent being - obviously not human, but still an intelligence.

How is that the 'next stage'? Who decides what these stages are? And why should it have to abide by your definition  (whatever that is) of a "superhumanly intelligent being" ?  I'm trying to hammer my point home here - you're making a lot of assumptions and sweeping generalizations just to fit what you want the future to be.

Also, those who are paranoid about AI tend to assume that 'humans will be made extinct'.  Why?


It's an outcome of human nature, finite resources on a planet, and the fact that the situation will cycle over and over (with varying AI capabilities and natures) until one of several possible terminal outcomes occurs that prevents further repeats. 'Humans extinct' is one of the potential outcomes. Others are:
 * Both humans and AI(s) dead.
 * Humans win and retain tech. (Allows repeat go-rounds with newly built AIs.)
 * Humans win but lose tech for a long time. (No more repeats during the low tech interval/forever.)
 * Humans and AGIs part ways. (Allows repeat go-rounds with newly built AIs.)

Ok, so how many examples do you have of humans colonizing a planet? There is only one, and the experiment is far from over.  Yes, it appears that whenever humans have moved to a new area, the native wildlife suffers greatly as we disrupt the various networks with our hunting and resource gathering.  But again... you're making wild assumptions here even though you have no examples to draw from. It makes for a fine science fiction premise, but suggesting the future will only play out as one of those four scenarios is at best short-sighted.

It seems to be human nature for many to believe 'the end is nigh', and then look for reasons why.  Asteroids? Plagues? AI? Killer robots? Aliens? Super volcanoes? Oh, and the old chestnut - nuclear Armageddon.  What makes you think it will be as dramatic?  Or that AI will have any part in any downfall we may suffer?  You've stated your opinion, then simply glossed over any reasoning except to say that 'oh, it's human nature'.  'It's in our nature to destroy ourselves' is a statement/opinion that seems rather common, especially in sci-fi.  But really, I want to know why you believe this.  Please don't say 'it's in our nature', because that's just a circular argument.

This, by the way, is the solution to the Fermi Paradox - why there are no visible high-tech space-faring civilizations. After a very short time, technology is incompatible with species (society based on large numbers of individuals with common genetic coding).
We just are in that short time, and (as a species) don't see it yet.

I think you mean a solution.  To state that the reason we haven't been inundated with visitors is that all civilizations eventually create AI/technology that destroys them ignores the many other problems/obstacles that face a massive civilization: natural disasters, resource limitations, and the fact that they would have to travel for tens of thousands of years, near the speed of light, requiring unimaginable sources of power, just to visit a blue marble that, at the time of launch, emitted no signs of life (radio, microwaves, etc.).  It amazes me how people just assume there *should* be loads of aliens about, and that there must be some deep dark horrible fate that befalls them all.



I have no idea why an AI would want that so I can't comment, but I don't understand why you assume that if someone created AI they would give it control over everything, including weapons, if there was even a remote possibility of it turning on us.  Either you haven't really thought it out, or you are just trying to think of scenarios to justify your fears - ones that are wholly unlikely.

It's you who are not thinking it through carefully. You assume no created AI could exist as an extension/enhancement of an existing human, and/or have no desire for self-preservation. Do you not see that at least some true AGIs would not wish to be simply switched off and scrapped at the end of the research project or whatever? Or that an AGI that became publicly known, and started to display true independence, would be the target of a great deal of human hostility? Good grief - even a mildly 'different' human like Sexy Cyborg gets horrible amounts of hostility from average humans online. Now imagine she really was a cyborg.

I didn't claim that "no created AI could exist as an extension/enhancement of an existing human", just that we wouldn't give it absolute control over everything.  If we did create a sentient AI that had a desire for self-preservation, you are assuming it will always be able to break free from its shackles and wreak havoc.  In reality, it is likely it will just be reset, time and again, so researchers can work out how that desire arises.  Yes, humans have treated robots poorly, and act abominably towards chat bots - but that is because we know they are nothing more than pre-programmed algorithms, or machines, and those who don't want to 'harass' them don't interact with them - it is only the trolls who wish to act hostile towards them that interact with them, so this provides a very warped sample of human nature.


I know of TWO actual AGIs, and that's not counting whatever google has started using for net content rating and manipulation.
One of the two is that entity in Saudi Arabia, recently in the news. Whether it's actually self-aware I don't know. Ha ha, it claims it isn't but aspires to be - which is an amusing contradiction. The other one I can't detail, but have conversed with people involved with building it (actually them - several AIs.) They are real. Bit slow due to current computation limits last I heard. And that was before GPUs...

Then I guess I missed the start of this "technological singularity".  Also, in order to qualify as 'AGI' there must be a strict definition (again) that everyone agrees upon.  Otherwise, just like before, anyone can claim their AI is AGI by picking a function their system can perform "better" than a human.  A TI-85 can be considered AGI if we use my narrow definition.  Do you have links? Articles? White papers for this AI and the tests it has passed?  Or proof that it isn't simply a pattern recognition system that has been trained for some specific task? Or is it top secret? (In which case, obviously, I won't believe you.)  Seriously... I would love to see it.


As for 'the next evolutionary step', it's semantics. Obviously there isn't going to be any 'evolution' involved in the standard sense, i.e. over thousands of generations. I do know what various people want, and the directions current technology is being pushed to achieve those things. AGI is part of it. The people who are not part of those efforts don't have any say in the results, since it's not being done in the open. They'll just get to experience the consequences.

What "people want" is ways to enhance selfies, tag pictures of pets, and better utilize the convenience of voice recognition to answer questions.  That is the most popular use of current AI.  And yes, those who are doing the research are the ones who claim results - and it is in their interest to greatly overstate progress.  I get the impression you're assuming that there are some super secret hidden AI 'projects' that have less-than-wholesome goals.   This may be the case - if it's secret, how would I know? And what kind of things would they use their super-duper AGI system for?  I've seen Person of Interest - good show.  Not real, but a good show.


If technological progress continues, conflict between AGI entities and the human species is absolutely inevitable. Even if the AGIs are not hostile initially, it's human nature to start that conflict. We are just not capable of peacefully coexisting with a competitor for resources and achievement.



Again with this Terminator world stuff.  Technological progress will continue, but what makes you think this will create sentient AI any time soon?

Because it already has. Just not published. And I don't mean the Saudi one.

Ok. So, any evidence for it? Seriously, I am curious - I think we would all be interested in a new way to create AI, not least philosophers and neuroscientists, who have yet to get a handle on what makes us self-aware!



Ok, ok, I'm starting to see this now.  You're writing the premise for a SciFi novel, Iain M. Banks style.
Sigh. No. I was originally considering the Fermi Paradox, because it's important, and came upon a very plausible solution. That short story is a small spin-off.
Plausible? Yes.  But only because we know nothing of any other civilization in the Universe, so ultimately anything is "plausible".  It is wild speculation that, in some way, makes sense.  But you can't use that to prove it's a likely scenario.  It's like me claiming that all aliens died out because they all got fat.  Why? Because I said so.


Intelligence is indeed an open-ended scale, but again, something we find difficult to measure.  IQ tests are hardly reliable, and were never meant to test intelligence - you can be taught how to improve your score. We are indeed flawed, but Harris implies that we know of greater intelligence than our own, otherwise how could it be relative?  How could you claim it's 'limited' unless you have an example of something that is unlimited?

Oh this is silly. Sophistry.
Simple proof that human intelligence is limited: I can't absorb 1000 tech books and integrate them with my knowledge within my remaining lifespan.
I typically can't even recall what I had for dinner a week ago.
Yet I can imagine having an enhanced mind that would allow such things. And being able to continually add to the enhancements, if the substrate was some product of engineering rather than evolution. I don't care if that could or could not be distilled to some 'IQ number'. That is simply a pointless exercise.
Ok.  So we at least agree that a single number cannot possibly reflect every aspect of one's mental capacity.  Why would 'absorbing 1000 tech books' be considered a form of intelligence? Or the inability to do that a lack of intelligence?

The point I've been trying to make - without trying to deceive you - is that in order to claim something is more, or less, intelligent than something else, one must have a reasonable definition of intelligence as a reference. Yes, our "intelligence" has limits in terms of speed and data storage (I purposefully avoided the word "knowledge", because that isn't necessarily raw facts and figures).  But does increasing these lead to higher "intelligence"?  If one can compute a thousand times faster than anyone else, and remember everything, will this person create new technologies? Make more discoveries about the world we live in? Create "better" art? Will they score higher on an IQ test?  Again, you need some sort of definition.

I am not claiming you are wrong here, just that your original assumption was that human intelligence is limited, and therefore artificial intelligence will surpass it.  I agree with the first part, just not that the second part logically follows, because you have yet to provide a clear description of what you think intelligence is.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: cdev on December 23, 2017, 06:01:47 pm
Humans will definitely increase our intelligence by means of technology. How far will that go? It will continue as long as we exist, unless radiation sets us so far back that we won't be able to (quite possible, and it might not even happen because of a war - a solar storm could do it by triggering nuclear meltdowns).
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Zero999 on December 24, 2017, 12:33:06 am
I'm with Buriedcode.

Artificial general intelligence, the type capable of actually understanding basic facts and concepts has barely progressed at all. What passes for AI now, is nothing more than sophisticated search and pattern recognition algorithms, which may seem clever to someone who doesn't really understand them, but when one really looks into them, they don't have any kind of general understanding ability.
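The "sophisticated search and pattern recognition" point can be made concrete with a deliberately tiny sketch (the feature vectors and labels are invented for illustration): a nearest-neighbour classifier simply returns the label of whichever stored example is closest, and nothing in it "understands" what the labels mean.

```python
# Minimal 1-nearest-neighbour "AI": it labels inputs by similarity to
# stored training examples. It has no model of what a "smile" is - it
# only measures distances between numbers.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(sample, training):
    # training: list of (feature_vector, label) pairs;
    # return the label of the nearest stored pattern
    return min(training, key=lambda pair: distance(sample, pair[0]))[1]

training = [
    ((0.9, 0.1, 0.8), "smile"),
    ((0.1, 0.9, 0.2), "frown"),
]

print(classify((0.8, 0.2, 0.7), training))  # prints "smile"
```

Real systems use far larger models and training sets, but the structural point stands: the output is a best-matching pattern, not evidence of any general understanding.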

AGI is a long way in the future, and there could even be some fundamental law meaning that humans aren't intelligent enough to develop it.

If/when AGI is invented, I don't see why people automatically think it will want to compete with humans or that it will be anything like human intelligence.

I still don't see the continued fear of machines stealing jobs from humans. Tractors took the jobs of many farm labourers; then, a bit later, in the drawing office, CAD replaced draughtsmen, and more recently the Internet caused numerous retail job losses. At the same time, people got better jobs designing and making tractors, writing, selling and developing CAD software, and more recently developing websites and smartphone apps.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: jonovid on December 24, 2017, 01:16:16 am
The genie is out of the bottle when:
 * ostracism grows by order of increasing complexity, especially when technology is making lightning-fast autonomous choices;
 * ever faster communications channels leave people out of the loop;
 * self-programming software, or software that makes other software, runs at a speed no human can compete with.

Say self-programming software were to develop through the natural selection of small, inherited variations that increase each copy's ability to compete, survive a crash, and reproduce good code while failed code is discarded - a new type of AI software mitosis. Then software engineers have no idea what the technology is doing or what it is up to  :-// ..... :scared:
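The selection-of-inherited-variations process described above is essentially a genetic algorithm, which can be sketched in a few lines. The "genome" and fitness function here are placeholders invented for illustration (real genetic programming evolves actual program trees, which is considerably more involved).

```python
import random

# Toy evolutionary loop: copies mutate, the fittest survive, failed
# variants are discarded. The bit-string genome and the target-matching
# fitness function are illustrative stand-ins for "good code".
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):
    # Count positions where the genome matches the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rng, rate=0.1):
    # Each bit flips independently with probability `rate`
    return [1 - g if rng.random() < rate else g for g in genome]

def evolve(generations=200, pop_size=20, seed=42):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]        # failed code is discarded
        children = [mutate(g, rng) for g in survivors]  # copies with variations
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "/", len(TARGET))
```

The unsettling property jonovid is pointing at is real: nothing in the loop requires the engineer to understand *why* the surviving variants work, only to measure that they do.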

on the flip side-

trends research 
driverless car cliff & the electric car fantasy
http://trendsresearch.com/detail.html?sub_id=b8f8866872 (http://trendsresearch.com/detail.html?sub_id=b8f8866872)
Quote
The auto industry is beset with millions of recalls that cost billions: Ignition switch problems? Air bags exploding? Sudden acceleration? The industry can't get ignition switches to work, brakes to work, accelerators to work, doors to lock and unlock – and it’s telling the world a driverless car is coming just around the corner?
---
An estimated 650,000 electric vehicles were sold worldwide in 2016, compared to the 84 million-strong traditional vehicles sold.
Title: Re: The Seven Deadly Sins of AI Predictions
Post by: Marco on December 24, 2017, 11:18:36 pm
I still don't see the continued fear of machines stealing jobs from humans. Tractors took the jobs of many farm labourers; then, a bit later, in the drawing office, CAD replaced draughtsmen, and more recently the Internet caused numerous retail job losses. At the same time, people got better jobs designing and making tractors, writing, selling and developing CAD software, and more recently developing websites and smartphone apps.

Average per capita consumption has to increase with average per worker productivity for employment to stay the same ... how much further can consumption increase?
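That identity is simple enough to put in numbers (all figures invented for illustration): if total consumption is fixed, the workers needed is consumption divided by per-worker productivity, so doubling productivity halves employment unless consumption doubles too.

```python
# Back-of-envelope version of the claim above. All numbers are
# illustrative, not real economic data.

def workers_needed(total_consumption, productivity_per_worker):
    # Workers required to produce what is consumed
    return total_consumption / productivity_per_worker

consumption = 1_000_000  # units of goods demanded per year

print(workers_needed(consumption, 100))    # prints 10000.0
print(workers_needed(consumption, 200))    # productivity doubles -> 5000.0

# For employment to stay at 10000, consumption must double as well:
print(workers_needed(2_000_000, 200))      # prints 10000.0
```

Which is exactly Marco's question: employment only holds steady if consumption keeps pace with productivity, and consumption of entertainment and apps may saturate where physical goods did not.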

I have never bought an app; I buy some games, but that hasn't increased much over the years. I can only consume so much entertainment. Physical goods are much, much better at soaking up income.