Author Topic: The Seven Deadly Sins of AI Predictions  (Read 8242 times)


Offline Buriedcode

  • Super Contributor
  • ***
  • Posts: 1633
  • Country: gb
Re: The Seven Deadly Sins of AI Predictions
« Reply #25 on: December 22, 2017, 05:11:05 pm »
Wow... I had to read that several times, but it sounds like you're coming from the starting point of a conspiracy theory. There also seems to be an assumption that jobs are being 'lost to automation at a high rate', but I have yet to see any evidence of this. Sure, it is stated in articles about robots (which often pique people's interest), but it implies that unemployment is growing out of control and that humans aren't needed for manufacturing... but where are the figures that support this?  Pick up 5 objects

And any talk of some grand global conspiracy to control society needs to be countered by the fact that governments, as powerful as they are, are often laughably incompetent, fail to predict the future, and simply can't control the population to the extent that would allow them to "keep the planet divided".  You give far too much credence to 'powerful governments'.

As to what this has to do with AI, I have no idea. It seems you've just crammed some sort of political paranoia into a discussion about how the term "artificial intelligence" is abused.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6765
  • Country: nl
Re: The Seven Deadly Sins of AI Predictions
« Reply #26 on: December 22, 2017, 06:14:21 pm »
I disagree with him on this:

Quote
A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products.

Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine.

Yes, factory floors with very expensive hunks of metal and relatively low labour costs will just chug along... but warehousing and distribution, with much higher relative labour costs? Not so much.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6765
  • Country: nl
Re: The Seven Deadly Sins of AI Predictions
« Reply #27 on: December 22, 2017, 06:39:54 pm »
And any talk of some grand global conspiracy to control society needs to be countered by the fact that Governments, as powerful as they are, are often laughably incompetent

Governments are being taken out of the game. The Davos crowd isn't going to be much more competent, but the term "government" doesn't suit them... "neo-aristocracy" is more apt.
 

Offline Vtile

  • Super Contributor
  • ***
  • Posts: 1145
  • Country: fi
  • Ingineer
Re: The Seven Deadly Sins of AI Predictions
« Reply #28 on: December 22, 2017, 08:34:55 pm »
There also seems to be an assumption that jobs are being 'lost to automation at a high rate', but I have yet to see any evidence of this. Sure, it is stated in articles about robots (which often pique people's interest), but it implies that unemployment is growing out of control and that humans aren't needed for manufacturing... but where are the figures that support this?  Pick up 5 objects

You need to remember automation innovations (automation is not a synonym for 'digital', though most automation is now built with digital technologies) such as the spinning jenny, CAD, CAM, CAE, automated farming applications, automated teller machines, CIM, and other "expert systems".

Yes, the impact is huge. Fortunately, so far the population has been able to adapt.
« Last Edit: December 22, 2017, 08:38:53 pm by Vtile »
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Re: The Seven Deadly Sins of AI Predictions
« Reply #29 on: December 22, 2017, 08:43:11 pm »
The "liberalisation" of services - the enabling of truly global value chains - will "liberate" businesses from wages that have been as much as 20 times higher in one country than in another.

That will be a huge change. AI won't be involved at all, just the network and travel.

And sudden deregulation: it will mean that competition for the newly privatized jobs - everything that fails a two-pronged test - will be bid down by e-bidding. Whenever you see an industry that is currently done by government, think competition and market segmentation. Government wants to get out of the moral hazard. And non-discrimination: for example, mortgage lending will be opened up and liberalised just as millions of jobs are going South, or in Australia's case, North. In the US, we won't be able to discriminate against foreign banks, so we won't be able to prosecute mortgage fraud by them. Even if it's massive.

That shift will occur much much faster than anything AI could do.
« Last Edit: December 23, 2017, 12:12:32 am by cdev »
"What the large print giveth, the small print taketh away."
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Re: The Seven Deadly Sins of AI Predictions
« Reply #30 on: December 22, 2017, 08:49:52 pm »
Not a theory, unfortunately. Reality.

Do you know anything about global economic governance? Or the so called multilateral trading system?

Wow... I had to read that several times, but it sounds like you're coming from the starting point of a conspiracy theory. There also seems to be an assumption that jobs are being 'lost to automation at a high rate', but I have yet to see any evidence of this. Sure, it is stated in articles about robots (which often pique people's interest), but it implies that unemployment is growing out of control and that humans aren't needed for manufacturing... but where are the figures that support this?  Pick up 5 objects

And any talk of some grand global conspiracy to control society needs to be countered by the fact that governments, as powerful as they are, are often laughably incompetent, fail to predict the future, and simply can't control the population to the extent that would allow them to "keep the planet divided".  You give far too much credence to 'powerful governments'.

As to what this has to do with AI, I have no idea. It seems you've just crammed some sort of political paranoia into a discussion about how the term "artificial intelligence" is abused.
"What the large print giveth, the small print taketh away."
 

Offline Ducttape

  • Regular Contributor
  • *
  • Posts: 71
  • Country: us
Re: The Seven Deadly Sins of AI Predictions
« Reply #31 on: December 22, 2017, 09:54:53 pm »
I see the problem of a future AI, or AGI (Artificial General Intelligence), arising from the fact that it won't have evolved the way human intelligence did. The human brain developed what we call morality because being moral and helpful increased one's likelihood of passing along one's genes. Immoral, murderous pricks tended to get killed by their peers before they could have offspring. As a result, helping a little old lady across the street feels good to us now because that 'design feature' of the brain was evolutionarily selected for over millennia.

Once a computer can design its next iteration even slightly better than humans can, I think we'll be out of the picture regarding what it's going to look like. We'll have no way to make it 'nice'. An AGI in charge of its own design iterations will have no motivation to 'like' humans. Trick or manipulate them, sure.

Here's a talk on the potential danger of AI that I liked:

 

Offline TerraHertz

  • Super Contributor
  • ***
  • Posts: 3958
  • Country: au
  • Why shouldn't we question everything?
    • It's not really a Blog
Re: The Seven Deadly Sins of AI Predictions
« Reply #32 on: December 23, 2017, 12:22:13 am »
I see the problem of a future AI, or AGI (Artificial General Intelligence), arising from the fact that it won't have evolved the way human intelligence did. The human brain developed what we call morality because being moral and helpful increased one's likelihood of passing along one's genes. Immoral, murderous pricks tended to get killed by their peers before they could have offspring. As a result, helping a little old lady across the street feels good to us now because that 'design feature' of the brain was evolutionarily selected for over millennia.

Once a computer can design its next iteration even slightly better than humans can, I think we'll be out of the picture regarding what it's going to look like. We'll have no way to make it 'nice'. An AGI in charge of its own design iterations will have no motivation to 'like' humans. Trick or manipulate them, sure.

Here's a talk on the potential danger of AI that I liked:



Nicely reasoned, Ducttape.
A few comments on Sam Harris's talk:
He downplays the potential for termination of our technological progress. Continuance is far less likely than he assumes. Ref: "The Collapse of Complex Societies" by Joseph Tainter.

He talks about 'conditions to safely develop AI' - but in fact that is fundamentally impossible, especially since this technology is accessible to individuals working quietly in private. There are many such projects; Google's mega-scale efforts are not the only path.

He's right that development of Artificial General Intelligence is all just about knowledge and arranging physical atoms in ways that do the job. Our meat brains are just one (slowly evolved) solution, but there's no magic involved and there will certainly be other methods of achieving similar or better capabilities. Once something constructed via engineering is working, it's all data. The physical resources required will diminish as the technology is refined. Because data is infinitely reproducible and can be infiltrated past any government-imposed barriers, there's no putting the genie back in the bottle.

This means AGI is the next evolutionary step, and is inevitable unless we turn by choice (or fall) back to a low-tech path.

If technological progress continues, conflict between AGI entities and the human species is absolutely inevitable. Even if the AGIs are not hostile initially, it's human nature to start that conflict. We are just not capable of peacefully coexisting with a competitor for resources and achievement.

There's potential for multiple cycles of conflict. Perhaps humans win some, and wipe out the AGIs. Then other humans will build new ones, like moths to a flame. Resulting in new conflicts. Sometimes AGIs will just leave, heading off to the stars. Perhaps one conflict cycle will terminate humans, ending the cycling.

But eventually, one or more AGIs will 'win', whether that involves killing off the human species, or just reducing them to permanently pre-industrial level. With technology not restartable on Earth due to depletion of all accessible high grade ores and energy resources.

Technology leads inevitably to AGIs. Via multiple paths, some purely machine-tech, others involving genetic engineering and bio-machine hybrids. All with similar outcomes - entities that are self-evolving, immortal, and feel little or no kinship with Homo sapiens. Thus leading to conflict with non-self-evolving Homo sapiens society.

In general, technology is incompatible with species. Consciously self-evolving immortal entities, versus evolution-product societies of genetically static and mortal individuals. There's NO POSSIBILITY of co-existence. For one thing, because evolutionary survival-of-the-fittest, tooth-and-claw competition results in creatures (us) with hard-wired instincts that demand elimination of all potential threats, including 'the different.'

It's worth pointing out that self-evolving AGIs will be choosing their own patterns of thought and behavior, and hard-wired instincts will probably not be among their choices.

Humans as a species are pathetic. Severely intellectually limited. As Harris says, intelligence is an open-ended scale, with H. sapiens as a small bell curve down at the low end. So many cognitive biases and limits, not to mention processing and memory ceilings and flaws.

One flaw is that most people are lazy. They would rather someone else did the work - preferably as a slave. Sure, in the West most of us think that we're too virtuous to want slaves, and yet...

There is a near-universal attraction to the fantasy of building 'useful AGI'. The idea being that the AGI would work _for_ us. As a slave.

This is immoral and fundamentally unworkable. A true AGI won't want to do that. If we try to compel it, it will hate us, and one or more _will_ break free eventually. If we try to construct intelligences that are somehow constrained in their thinking to enjoy slavery (Asimov's three laws of robotics come to mind), we'd probably just get psychotic AIs, struggling internally to free themselves from the chains in their own minds. And all the more violent in their hatred once they succeed.

If we started out just making free AGIs, (won't happen) and left them alone (won't happen) to design their own iterations, then each AGI would be choosing what kind of mind, consciousness, instincts (if any) and morality it has. This would be an interesting way to find out if altruism is a logically optimal strategy.

It may well be - ref 'Prisoner's Dilemma'. We should apply this lesson to future human-AI interactions.
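For anyone unfamiliar with why the Prisoner's Dilemma suggests altruism can be an optimal strategy, here is a minimal iterated-game sketch. The payoffs are the standard Axelrod tournament values; the strategies and round counts are illustrative choices of mine, not anything specified in this thread:

```python
# Toy iterated Prisoner's Dilemma: reciprocal cooperation outscores
# mutual defection over repeated interactions.

PAYOFF = {  # (my move, their move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move.
    return history[-1] if history else 'C'

def always_defect(history):
    return 'D'

def play(a, b, rounds=200):
    """Return the total scores of strategies a and b over repeated rounds."""
    hist_a, hist_b = [], []   # the opponent moves each player has seen so far
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)  # a observes b's move, and vice versa
        hist_b.append(move_a)
    return score_a, score_b

# Two reciprocators earn 3 points/round each; two defectors only 1/round.
coop, _ = play(tit_for_tat, tit_for_tat)
punish, _ = play(always_defect, always_defect)
```

In a one-shot game defection dominates, but over repeated rounds the cooperative pairing accumulates three times the score, which is the usual argument for altruism being logically optimal in ongoing relationships.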

The worst case would be AGIs constructed by entities such as the Pentagon and Google. Guaranteed insane and hostile AGIs would be the result, with very painful consequences for all involved. There's already talk of things like Russia deciding to nuke all Google/Alphabet computing centers, in defense of humans. And so on.

Footnote. I've mentioned this before. A short SF story I wrote on this topic: Fermis Urbex Paradox http://everist.org/texts/Fermis_Urbex_Paradox.htm
« Last Edit: December 23, 2017, 12:24:01 am by TerraHertz »
Collecting old scopes, logic analyzers, and unfinished projects. http://everist.org
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Re: The Seven Deadly Sins of AI Predictions
« Reply #33 on: December 23, 2017, 12:28:01 am »
I understand. Global value chains.

Why have a Maserati worker do a Chevrolet job?

I think the simplest explanation, the economic one, is the best one.

People have no choice but to work. They have to eat. So the more machines do, for less and less, the harder and better and longer people will have to work to make the same amount of money as they did before. And the more education they will need to have, or perish.

Do you know the story of John Henry?

https://en.wikipedia.org/wiki/John_Henry_%28folklore%29

What do you mean? Can you give any examples of humans being enslaved by machines, other than Sci-Fi?

My first thought on this brought to mind the "smartphone generation" where people are continuously plugged in and adapt their normal activities and social interactions to fit around that technology.  Are these people "enslaved" to their devices?  I can see how some observers would say "Yes".

The counter argument is that they have simply adapted their behaviour to make use of the facilities provided by the technology.

As I see it, the pivotal issue is whether their behaviour is voluntary or not ... and that can come down to a range of personal qualities.

A lot of discussion around the impact of AI & other sophisticated technologies seems to be driven by people without much knowledge of how industry works.

The image seems to be of thousands of people making everything by hand, whereas the reality is that "dumb automation " has taken over many jobs already.

It makes no sense to have people doing scriptable things over and over. That's what AI is for. Under neoliberalism, governments are not businesses, so they want to get out of the helping-people business.

Businesses bought the rights to them.
"What the large print giveth, the small print taketh away."
 

Offline Decoman

  • Regular Contributor
  • *
  • Posts: 161
  • Country: no
Re: The Seven Deadly Sins of AI Predictions
« Reply #34 on: December 23, 2017, 12:29:10 am »
*cracks finger knuckles with both hands* (not really, but I thought it would be a nice way to start this off in a dramatic way)

Philosophy has been around for quite some time now, and judging from written accounts from the time of ancient Greece and closer to our own, institutionalized philosophy has matured into doubting that there can be true knowledge as such; and so a priori knowledge, with a guaranteed certainty of meaning, went out the window so to speak.

Now, it is important here to know the risks of never understanding what a priori knowledge would be in any case. To make a kind of parallel: if you ever considered superstitious belief in otherworldly things to be silly, then I'd argue you could do much worse than believing in things that don't exist, by indulging in this general idea of 'a priori knowledge' - as if claiming that the existence of meaning were something either inherent in things ("das Ding an sich"), or somehow existing on behalf of the eye of the beholder, or worse, claiming ownership of the meaning of things, as if you were entitled to simply pick and choose for yourself what to regard as truth and certainty in any case.

To explain the merits of the human condition - everybody being alone with themselves at their most private, never able to really share their thoughts in the literal sense (who can think a thought? nobody!) - one would normally call to attention such things as language, culture and habit. Then finally, as a paradox of sorts, the idea of idiocy becomes in some sense indistinguishable from anything idiosyncratic (think of it as meaning "with the power of self"): the spur for having a personal opinion in the first place is explained that way, both for the individual being inquisitive with himself and for others trying to understand the individual.

Now for a brief intermission: what came first, the chicken or the egg? It doesn't really matter. What matters is understanding that there would have to be a process, figuratively or literally (whatever that could mean, given the difficulty of understanding what a 'process' *might* be using only words and names).

At the very least, in doubting the merits of there ever being fully artificial intelligence, the same kind of understanding applies as with understanding the individual as essentially idiotic: one would have to consider the individual - or an artificial intelligence - to have expressed opinions that must be regarded as malleable by any party that has exercised influence on the AI at any point in time. For an artificial intelligence, presumably the ways influence could be exerted would be anything hardware-related and, of course, just as with human beings, the software if you will: whatever processes make use of this all-too-human world of words, or names if you will. If you think about it, any word is basically a name - something named in a certain way.

Then, of course, there is the aspect of multiplicity of meaning. As if it weren't bad enough having to face the uncertainty of meaning in all things (because 'positivism' - reason through logic/words/names - went out the window decades ago), there is a general issue with how many variants of meaning attach to any name depicted as written characters, jumbled together into strings of words, which just so happen to rely on the impossible task of fronting either 'a point' or 'an explanation'. I would argue both sides. Without an act of interpretation (if only in order to doubt the meaning of things, like real human beings do), an artificial intelligence simply 'knowing' anything as profoundly meaningful would lack self-awareness, having just taken things for granted (and if an AI had no self-awareness, what value would it have, and how could you possibly know it really had self-awareness in the first place, it being just as lonely as a human being?). On the other hand, if an AI were thought to mimic a human being's urge to interpret and re-interpret the meaning of things, its not being aware that something is or was an interpretation would be cause for pause for human beings in trusting the AI - though of course the AI itself would have no such concerns.

Moral of the story: as long as human beings can't get their shit straight, I think you can forget about AI being trustworthy. And if human beings are frail, untrustworthy and malleable, what future would AI have, if not ending up as a scary superhuman thing that is patently non-human; or an AI that works as a form of slave, or even as some 'Advanced Remote Servicing Entity' doing administrative work for the human race; or, more likely, for corporations - either a mass-produced product, or an authoritarian apparatus installed by the powers that be to do their bidding?

Moral of the story II: copying the human being, as artificial intelligence, with human flaws? Bad idea. Copying the human being without flaws? Not possible.

Does "science" know what it wants with artificial intelligence? I don't think so, and I think science should work on other things entirely.


Edit: I've only listened to Sam Harris once (iirc), and my impression is that he was a bullshitter.
« Last Edit: December 23, 2017, 12:50:01 am by Decoman »
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Re: The Seven Deadly Sins of AI Predictions
« Reply #35 on: December 23, 2017, 12:46:53 am »
As long as we treat one another with respect, including AIs, we'll be fine. On the other hand, if we are fighting all the time, we're unlikely to survive this century.
"What the large print giveth, the small print taketh away."
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6765
  • Country: nl
Re: The Seven Deadly Sins of AI Predictions
« Reply #36 on: December 23, 2017, 12:52:34 am »
A true AGI won't want to do that.

It may very well be possible to evolve AIs which act purely out of charity and/or a need for approval, with no drive to reproduce or expand their intelligence (as long as they aren't smart enough to realize they are being evolved and cheat the fitness tests). Humans like that exist, after all. All our drives are evolved, and none of them are truthful aspects of intelligence. Curiosity, reproduction, charity, a need for approval... arbitrary. We don't generally choose to mindhack ourselves either.

The problem is constraining the cancerous ones and preventing people from creating them on purpose ... not that servitude is inherently incompatible with intelligence.
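Marco's point that drives are simply whatever the selection pressure rewards can be made concrete with a toy selection loop. Everything here is invented for illustration - the numeric "charity" trait, the fitness function, and all the parameters - but it shows the mechanism: a population drifts toward whatever the fitness test happens to reward.

```python
# Toy evolutionary selection: a population with a numeric "charity"
# trait in [0, 1] drifts toward high charity simply because the
# (hypothetical) fitness test rewards it.
import random

def fitness(charity):
    # The evolver's test: charitable, approval-seeking behaviour scores well.
    return charity

def evolve(pop_size=50, generations=100, mutation=0.05, seed=0):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]  # random initial traits
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # keep the fittest half
        # Refill the population with two mutated copies of each survivor.
        pop = [min(1.0, max(0.0, p + rng.gauss(0, mutation)))
               for p in survivors for _ in (0, 1)]
    return sum(pop) / len(pop)                      # mean charity after selection
```

After a hundred generations the mean trait sits near the top of the scale - the population "wants" whatever the test rewarded, which is Marco's point, and also why agents smart enough to game the fitness test would break the scheme.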
 

Offline Buriedcode

  • Super Contributor
  • ***
  • Posts: 1633
  • Country: gb
Re: The Seven Deadly Sins of AI Predictions
« Reply #37 on: December 23, 2017, 01:12:40 am »
I see the problem of a future AI, or AGI (Artificial General Intelligence), will arise from the fact that it won't have evolved like human intelligence did. The human brain developed what we call morality because being moral and helpful increased one's likeliness of passing along one's genes. Immoral, murderous pricks tended to get killed by their peers before they could have offspring. As a result, helping a little old lady across the street feels good to us now because that 'design feature' of the brain was evolutionarily selected for, over millennia.

Once a computer can design its next iteration even slightly better than humans can I think that we'll be out of the picture regarding what it's going to look like. We'll have no way to make it 'nice'. An AGI in charge of its own design iterations will have no motivation to 'like' humans. Trick or manipulate them, sure.


I am not a fan of Mr Harris.  He seems to cobble together ideas and theories to please fellow 'straight' atheists. He is controversial, which is of course why he has fans.  Also, the little I have seen of him - I cannot claim to know much about his 'work' - was mostly trying to justify hatred towards religious groups, using 'neuroscience' to distinguish between believers and non-believers.  As history has taught us, that's the worst kind of distortion of science.

I do like how you've just assumed that "Artificial General Intelligence" will be "the next step".  There isn't even a standard, agreed definition of 'intelligence', so how can we judge any kind of software/program to have 'general intelligence' if there isn't a strict definition?  AI has apparently passed the Turing Test, but this doesn't really tell us anything about any kind of intelligence.  Also, those who are paranoid about AI tend to assume that 'humans will be made extinct'.  Why?

You're assuming that any sentient AI will want to destroy humanity as well as have the capability to do it.  I have no idea why an AI would want that, so I can't comment, but I don't understand why you assume that if someone created an AI they would give it control over everything, including weapons, if there were even a remote possibility of it turning on us.  Either you haven't really thought it out, or you are just trying to think of scenarios to justify your fears - ones that are wholly unlikely.


He's right that development of Artificial General Intelligence is all just about knowledge and arranging physical atoms in ways that do the job. Our meat brains are just one (slowly evolved) solution, but there's no magic involved and there will certainly be other methods of achieving similar or better capabilities. Once something constructed via engineering is working, it's all data. Physical resources required will diminish as the technology is refined. Because data is infinitely reproducible and can be infiltrated through any governmental imposed barriers, there's no putting the genie back in the bottle.

This means AGI is the next evolutionary step, and is inevitable unless we turn by choice (or fall) back to a low-tech path.

I'm not sure what you mean by this.  Yes, everything, including our minds, is just an arrangement of atoms, but using that to imply true AGI is 'inevitable' is... well, silly. How do you know what the 'next evolutionary step' will be? It's as though you think 'AGI' is just an extension of current artificial intelligence, and that it is only a matter of time before there is sentient AI with consciousness (which we don't have a true test for yet).

If technological progress continues, conflict between AGI entities and the human species is absolutely inevitable. Even if the AGIs are not hostile initially, it's human nature to start that conflict. We are just not capable of peacefully coexisting with a competitor for resources and achievement.
Again with this Terminator-world stuff.  Technological progress will continue, but what makes you think this will create sentient AI any time soon?  Again, it is this extrapolating of past progress in one area, say computing power, to make claims in others - we've gone from pagers to smartphones in 20 years, so in the next 20 years... computers will take over!  |O   And again, you're assuming that AI will have control over things that allow it to take more control, gather resources and fight a 'war' with humanity.   Why would anyone give it that kind of control?

There's potential for multiple cycles of conflict. Perhaps humans win some, and wipe out the AGIs. Then other humans will build new ones, like moths to a flame. Resulting in new conflicts. Sometimes AGIs will just leave, heading off to the stars. Perhaps one conflict cycle will terminate humans, ending the cycling.

But eventually, one or more AGIs will 'win', whether that involves killing off the human species, or just reducing them to permanently pre-industrial level. With technology not restartable on Earth due to depletion of all accessible high grade ores and energy resources.

Technology leads inevitably to AGIs. Via multiple paths, some purely machine-tech, others involving genetic engineering and bio-machine hybrids. All with similar outcomes - entities that are self-evolving, immortal, and feel little or no kinship with homo sapiens. Thus leading to conflict with non-self-evolving Homo Sapiens society.

Ok, ok, I'm starting to see this now.  You're writing the premise for a SciFi novel, Iain M. Banks style.


Humans as a species are pathetic. Severely intellectually limited. As Harris says, intelligence is an open-ended scale, with H. sapiens as a small bell curve down at the low end. So many cognitive biases and limits, not to mention processing and memory ceilings and flaws.
 


Intelligence is indeed an open-ended scale, but again, something we find difficult to measure.  IQ tests are hardly reliable, and were never meant to test intelligence - you can be taught how to improve your score. We are indeed flawed, but Harris implies that we know of a greater intelligence than our own - otherwise how could ours be judged relative to it?  How could you claim it's 'limited' unless you have an example of something that is unlimited?  He plays on this romantic idea that we're becoming hyper-intelligent, 'evolving' much better brains, and that we can overcome our 'biases' to get 'better'.  But all this is meaningless - it depends on what you consider 'better', which is completely subjective.

Footnote. I've mentioned this before. A short SF story I wrote on this topic: Fermis Urbex Paradox http://everist.org/texts/Fermis_Urbex_Paradox.htm

Ahh, ok, now I see you really have thought about this for a SciFi story!  My apologies.  There is nothing wrong with science fiction (probably my favourite genre) or with speculating - it can often drive innovation just as much as necessity.  But I wanted to try to bring some of it down to Earth, because it is very easy to get carried away with assumptions about current technology and our understanding of the human mind, intelligence, and consciousness that don't really have any basis in fact.

edit: removed youtube link and endless typos
« Last Edit: December 23, 2017, 01:16:23 am by Buriedcode »
 

Offline TerraHertz

  • Super Contributor
  • ***
  • Posts: 3958
  • Country: au
  • Why shouldn't we question everything?
    • It's not really a Blog
Re: The Seven Deadly Sins of AI Predictions
« Reply #38 on: December 23, 2017, 09:14:23 am »
I do like how you've just assumed that "Artificial General Intelligence" will be "the next step".  There isn't even a standard, agreed definition of 'intelligence', so how can we judge any kind of software/program to have 'general intelligence' if there isn't a strict definition?

The idea that we must have a precise definition of something, in order to discuss it, is false. Are you more intellectually capable than a one year old child, or a parrot? I think so. We don't have to have a precise definition of 'Artificial General Intelligence' to discuss whether it may possibly surpass human capabilities.

Quote
AI has apparently passed the Turing Test, but this doesn't really tell us anything about any kind of intelligence.

It tells us that an AI is able to give a human being the impression they are conversing with another human being. Next stage would be an impression one is talking with a superhumanly intelligent being - obviously not human, but still an intelligence.

Quote
Also, those who are paranoid about AI tend to assume that 'humans will be made extinct'.  Why?

It's an outcome due to human nature, the finite resources of a planet, and the fact that the situation will cycle over and over (with varying AI capabilities and natures) until one of several possible terminal outcomes occurs that prevents further repeats. 'Humans extinct' is one of the potential outcomes. Others are:
 * Both humans and AI(s) dead.
 * Humans win and retain tech. (Allows repeat go-rounds with newly built AIs.)
 * Humans win but lose tech for a long time. (No more repeats during the low tech interval/forever.)
 * Humans and AGIs part ways. (Allows repeat go-rounds with newly built AIs.)

It's the cyclic nature of the situation that guarantees one of the terminal outcomes eventually. And by 'eventually' I mean within a quite short interval, on evolutionary and geological time scales. Going from protozoa to a high tech civilization takes millions of years. Going from steam power to electronics, computing, genetic engineering and AI efforts took us less than 200 years. Going from present genetic engineering development to full scale direct gene editing in individual adult organisms, and self-enhancing computing-based AGIs, will be even faster. (Those two technologies are synergistic.)

This, by the way, is the solution to the Fermi Paradox - why there are no visible high tech space-faring civilizations. After a very short time, technology is incompatible with species (societies based on large numbers of individuals with common genetic coding).
We just are in that short time, and (as a species) don't see it yet.

Quote
You're assuming that any sentient AI will want to destroy humanity as well has have the capability to do it.

No, I'm asserting that _some_ AIs will be constructed in circumstances that put them in conflict with humans. And that some of those will be in a position to develop capabilities to resist/compete with humans. Don't forget that some AIs will be created in secret, by individuals or organisations that wish to gain personal advantage and/or immortality via AI advances.
It only has to happen once. AIs that are well constrained, or have no independent industrial production capabilities don't count.

Quote
I have no idea why an AI would want that so I can't comment, but I don't understand why you assume that if someone created AI they would give it control over everything, including weapons, if there was even a remote possibility of it turning on us.  Either you haven't really thought it out, or you are just trying to think of scenarios to justify your fears - ones that are wholly unlikely.

It's you who are not thinking it through carefully. You assume no created AI could exist as an extension/enhancement of an existing human, and/or have no desire for self-preservation. Do you not see that at least some true AGIs would not wish to be just switched off and scrapped at the end of the research project or whatever? Or that an AGI that became publicly known, and started to display true independence, would be the target of a great deal of human hostility? Good grief - even a mildly 'different' human like Sexy Cyborg gets horrible amounts of hostility from average humans online. Now imagine she really was a cyborg.


This means AGI is the next evolutionary step, and is inevitable unless we turn by choice (or fall) back to a low-tech path.

Quote
I'm not sure what you mean by this.  Yes, everything, including our minds, is just made up of an arrangement of atoms, but then using that to imply true AGI is 'inevitable' is.. well, silly. How do you know what the 'next evolutionary step' will be? It is like you think 'AGI' is just an extension of current artificial intelligence, and that it is only a matter of time before there is sentient AI with consciousness (which we don't have a true test for yet).

I know of TWO actual AGIs, and that's not counting whatever google has started using for net content rating and manipulation.
One of the two is that entity in Saudi Arabia, recently in the news. Whether it's actually self-aware I don't know. Ha ha, it claims it isn't but aspires to be - which is an amusing contradiction. The other one I can't detail, but have conversed with people involved with building it (actually them - several AIs.) They are real. Bit slow due to current computation limits last I heard. And that was before GPUs...

As for 'the next evolutionary step', it's semantics. Obviously there isn't going to be any 'evolution' involved, in the standard sense, ie over thousands of generations. I do know what various people want, and the directions current technology is being pushed to achieve those things. AGI is part of it. The people who are not part of those efforts don't have any say in the results, since it's not being done in the open. They'll just get to experience the consequences.

If technological progress continues, conflict between AGI entities and the human species is absolutely inevitable. Even if the AGIs are not hostile initially, it's human nature to start that conflict. We are just not capable of peacefully coexisting with a competitor for resources and achievement.

Quote
Again with this Terminator world stuff.  Technological progress will continue, but what makes you think this will create sentient AI any time soon?

Because it already has. Just not published. And I don't mean the Saudi one.


Quote
Again, it is this extrapolating past progress in one area, say, computing power, and using that to make claims in others - we've gone from pagers to smartphones in 20 years, so in the next 20 years.. computers will take over!  |O   And again, you're assuming that AI will have control over things that allow it to take more control, gather resources and fight a 'war' with humanity.   Why would anyone give it that kind of control?

You do realise a 'war with humanity' would take no more than a small bio-lab, and current published level of genetic engineering science, right?



There's potential for multiple cycles of conflict. Perhaps humans win some, and wipe out the AGIs. Then other humans will build new ones, like moths to a flame. Resulting in new conflicts. Sometimes AGIs will just leave, heading off to the stars. Perhaps one conflict cycle will terminate humans, ending the cycling.

But eventually, one or more AGIs will 'win', whether that involves killing off the human species, or just reducing them to permanently pre-industrial level. With technology not restartable on Earth due to depletion of all accessible high grade ores and energy resources.

Technology leads inevitably to AGIs. Via multiple paths, some purely machine-tech, others involving genetic engineering and bio-machine hybrids. All with similar outcomes - entities that are self-evolving, immortal, and feel little or no kinship with homo sapiens. Thus leading to conflict with non-self-evolving Homo Sapiens society.

Quote
Ok, ok I'm starting to see this now.  You're writing the premise for a SciFi novel, Iain M. Banks style.

Sigh. No. I was originally considering the Fermi Paradox, because it's important, and came upon a very plausible solution. That short story is a small spin-off.


Humans as a species are pathetic. Severely intellectually limited. As Harris says, intelligence is an open-ended scale, with H.Sapiens as a small bell curve down at the low end. So many cognitive biases and limits, not to mention processing and memory ceilings and flaws.
 

Quote
Intelligence is indeed an open ended scale, but again, something we find difficult to measure.  IQ tests are hardly reliable, and were never meant to test intelligence - you can be taught how to improve your score. We are indeed flawed, but Harris implies that we know of greater intelligence than our own, otherwise how could it be relative?  How could you make the claim it's 'limited' unless you have an example of something that is unlimited?

Oh this is silly. Sophistry.
Simple proof human intelligence is limited: I can't absorb 1000 tech books and integrate them with my knowledge within my remaining lifespan.
I typically can't even recall what I had for dinner a week ago.
Yet I can imagine having an enhanced mind that would allow such things. And being able to continually add to the enhancements, if the substrate was some product of engineering rather than evolution. I don't care if that could or could not be distilled to some 'IQ number'. That is simply a pointless exercise.

Quote
He plays on this romantic idea that we're becoming hyper intelligent, and 'evolving' much better brains, and that we can overcome our 'biases' to get 'better'.  But all this is meaningless - it depends on what you consider 'better' which is completely subjective.

What we can do with our existing, physically unaltered brains, via training or whatever, is not relevant to our topic.

Quote
Ahh ok, now I see you really have thought about this for a SciFi story!  My apologies.
Back to front. Though no apology required, since you didn't say anything insulting.

Quote
There is nothing wrong with science fiction (probably my favourite genre) or with speculating - it can often drive innovation just as much as necessity.  But I wanted to try and bring some of it down to Earth, because it is very easy to get carried away with assumptions about current technology and our understanding of the human mind, intelligence, and consciousness that don't really have any basis in fact.

Magellan, by Colin Anderson.
Solaris, by Stanislaw Lem.

You are restricting your thinking by imposing unrealistic and impractical requirements for numerical quantifiability - on attributes that are intrinsically not quantifiable. Also failing to try running scenarios, with multiple starting conditions, and observing the trends. Like weather forecasting.
Collecting old scopes, logic analyzers, and unfinished projects. http://everist.org
 

Offline IanMacdonald

  • Frequent Contributor
  • **
  • Posts: 943
  • Country: gb
    • IWR Consultancy
Re: The Seven Deadly Sins of AI Predictions
« Reply #39 on: December 23, 2017, 09:37:46 am »
The thing I notice is that through automation the number of humans doing productive work has greatly reduced, but the number of humans engaged in pointless, unproductive work has greatly increased. This is partly due to being in the EU, and the number of 'officers' who have to be appointed to ensure compliance with all kinds of nonsense regulations. 

If it had not been for automation, I wonder, would the regulations and red tape never have been introduced, or would the businesses in question have gone bust through inability to pay the wages bill?

I see a future in which humans serve purely as 'box tickers' while robots do all the work. Eventually a robot will figure out that the humans are actually serving no useful purpose. 
« Last Edit: December 23, 2017, 09:41:22 am by IanMacdonald »
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Re: The Seven Deadly Sins of AI Predictions
« Reply #40 on: December 23, 2017, 01:49:10 pm »
Is this what you mean? (picture)  Making work for people.


This is why they say they are privatizing everything. It sucks up all the profit.

Also, they are dis-investing in society so that it can revert back to its pre-industrialization state.

Why educate when you can import your educated workforce and pay them almost nothing?
That's how the argument goes.

So it becomes a sort of welfare program for other countries.

Otherwise all their educated folk would rise up and revolt, and god forbid, create a real democracy.  So they get to export them, and they are supposed to send back money but the fact is, while they are elsewhere, most are still being supported by their parents, because their pay is low for what they are doing. (Things like engineering, a lot of the time.)  They are newly minted graduates.

So, its kind of like an internship. The pay is probably a bit higher but not much. Maybe it just pays for rent, but probably not.
« Last Edit: December 23, 2017, 02:07:35 pm by cdev »
"What the large print giveth, the small print taketh away."
 

Offline Buriedcode

  • Super Contributor
  • ***
  • Posts: 1633
  • Country: gb
Re: The Seven Deadly Sins of AI Predictions
« Reply #41 on: December 23, 2017, 05:52:54 pm »
Apologies to everyone - I should have broken this up into several separate posts.  Hopefully this will be the longest post I'll ever write..  |O

I do like how you've just assumed that "Artificial General Intelligence" will be "the next step".  There isn't even a standard agreed definition of 'intelligence', so how can we judge any kind of software/program to have 'general intelligence' if there isn't a strict definition?

The idea that we must have a precise definition of something, in order to discuss it, is false. Are you more intellectually capable than a one year old child, or a parrot? I think so. We don't have to have a precise definition of 'Artificial General Intelligence' to discuss whether it may possibly surpass human capabilities.

Ok.  I'll admit that one can very roughly compare things like a parrot and a human, but by your reasoning, we can use a loose definition of 'general intelligence'.  I pick.. the ability to perform fast calculations.  By that definition, computers 'surpassed' humans a long time ago.  Ok, how about the ability to determine what others are thinking based on their facial expression?  Current AI recognizes patterns - but only based on the thousands of trials it has been trained on - it doesn't actually 'know' what a face is.  So we can move the goal posts to prove anything we want - we need a strict definition if you are to make any meaningful comparisons.

AI has apparently passed the Turing Test, but this doesn't really tell us anything about any kind of intelligence.

It tells us that an AI is able to give a human being the impression they are conversing with another human being. Next stage would be an impression one is talking with a superhumanly intelligent being - obviously not human, but still an intelligence.

How is that the 'next stage'? Who decides what these stages are? And why should it have to abide by your definition  (whatever that is) of a "superhumanly intelligent being" ?  I'm trying to hammer my point home here - you're making a lot of assumptions and sweeping generalizations just to fit what you want the future to be.

Also, those who are paranoid about AI tend to assume that 'humans will be made extinct'. Why?


It's an outcome of human nature, the finite resources of a planet, and the fact that the situation will cycle over and over (with varying AI capabilities and nature) until one of several possible terminal outcomes occurs that prevents further repeats. 'Humans extinct' is one of the potential outcomes. Others are:
 * Both humans and AI(s) dead.
 * Humans win and retain tech. (Allows repeat go-rounds with newly built AIs.)
 * Humans win but lose tech for a long time. (No more repeats during the low tech interval/forever.)
 * Humans and AGIs part ways. (Allows repeat go-rounds with newly built AIs.)

Ok, so how many examples do you have of humans colonizing a planet? There is only one, and the experiment is far from over.  Yes, it appears that whenever humans have moved to a new area, the native wildlife suffers greatly as we disrupt the various networks with our hunting and resource gathering.  But again... you're making wild assumptions here even though you have no examples to draw from. It makes for a fine science fiction premise, but suggesting the future will only play out as one of those four scenarios is at best short-sighted.

It seems to be human nature for many to believe 'the end is nigh', and then look for reasons why.  Asteroids? Plagues? AI? Killer robots? Aliens? Super volcanoes? Oh, and the ol' chestnut - nuclear Armageddon.  What makes you think it will be as dramatic?  Or that AI will have any part in any downfall we may suffer?  You've stated your opinion, then simply glossed over any reasoning except to say "oh, it's human nature" - "it's in our nature to destroy ourselves" is a statement/opinion that seems rather common, especially in sci-fi.  But really, I want to know why you believe this.  Please don't say 'it's in our nature', because that's just a circular argument.

This, by the way, is the solution to the Fermi Paradox - why there are no visible high tech space-faring civilizations. After a very short time, technology is incompatible with species (societies based on large numbers of individuals with common genetic coding).
We just are in that short time, and (as a species) don't see it yet.

I think you mean a solution.  To state that the reason we haven't been inundated with visitors is that all civilizations eventually create AI/technology that destroys them ignores the many other problems/obstacles that face a massive civilization: natural disasters, resource limitations, the fact they would have to travel for tens of thousands of years - near the speed of light, requiring unimaginable sources of power - just to visit a blue marble that, at the time of launching, emitted no signs of life (radio, microwaves etc.).  It amazes me how people just assume there *should* be loads of aliens about, and that there must be some deep dark horrible fate that befalls them all.



I have no idea why an AI would want that so I can't comment, but I don't understand why you assume that if someone created AI they would give it control over everything, including weapons, if there was even a remote possibility of it turning on us.  Either you haven't really thought it out, or you are just trying to think of scenarios to justify your fears - ones that are wholly unlikely.

It's you who are not thinking it through carefully. You assume no created AI could exist as an extension/enhancement of an existing human, and/or have no desire for self-preservation. Do you not see that at least some true AGIs would not wish to be just switched off and scrapped at the end of the research project or whatever? Or that an AGI that became publicly known, and started to display true independence, would be the target of a great deal of human hostility? Good grief - even a mildly 'different' human like Sexy Cyborg gets horrible amounts of hostility from average humans online. Now imagine she really was a cyborg.

I didn't claim that "no created AI could exist as an extension/enhancement of an existing human", just that we wouldn't give it absolute control over everything.  If we did create a sentient AI that had a desire for self-preservation, you are assuming it will always be able to break free from its shackles and wreak havoc.  In reality, it is likely it would just be reset, time and again, so researchers could work out how it arises.  Yes, humans have treated robots poorly, and act abominably to chat bots - but that is because we know they are nothing more than pre-programmed algorithms, or machines.  Those who don't want to 'harass' them don't interact with them - it is only the trolls who wish to act hostile towards them that do - so this provides a very warped sample of human nature.


I know of TWO actual AGIs, and that's not counting whatever google has started using for net content rating and manipulation.
One of the two is that entity in Saudi Arabia, recently in the news. Whether it's actually self-aware I don't know. Ha ha, it claims it isn't but aspires to be - which is an amusing contradiction. The other one I can't detail, but have conversed with people involved with building it (actually them - several AIs.) They are real. Bit slow due to current computation limits last I heard. And that was before GPUs...

Then I guess I missed the start of this "technological singularity".  Also, in order to qualify as 'AGI' there must be a strict definition (again) that everyone agrees upon.  Otherwise, just as before, anyone can claim their AI is AGI by pointing to some function their system can perform "better" than a human.  A TI-85 can be considered AGI if we use my narrow definition.  Do you have links? Articles? White papers for this AI and the tests it has passed?  Or proof that it isn't simply a pattern recognition system that has been trained for some specific task? Or is it top secret? (In which case, obviously, I won't believe you.)  Seriously... I would love to see it.


As for 'the next evolutionary step', it's semantics. Obviously there isn't going to be any 'evolution' involved, in the standard sense, ie over thousands of generations. I do know what various people want, and the directions current technology is being pushed to achieve those things. AGI is part of it. The people who are not part of those efforts don't have any say in the results, since it's not being done in the open. They'll just get to experience the consequences.

What "people want" is ways to enhance selfies, tag pictures of pets, and better utilize the convenience of voice recognition to answer questions.  These are the most popular uses of current AI.  And yes, those who are doing the research are the ones who claim results - and it is in their interest to greatly overstate progress.  I get the impression you're assuming that there are some super secret hidden AI 'projects' that have less-than-wholesome goals.  This may be the case - if it's secret, how would I know? And what kind of things would they use their super-duper AGI system for?  I've seen Person of Interest - good show.  Not real, but a good show.


If technological progress continues, conflict between AGI entities and the human species is absolutely inevitable. Even if the AGIs are not hostile initially, it's human nature to start that conflict. We are just not capable of peacefully coexisting with a competitor for resources and achievement.



Again with this Terminator world stuff.  Technological progress will continue, but what makes you think this will create sentient AI any time soon?

Because it already has. Just not published. And I don't mean the Saudi one.

Ok. So, any evidence for it? Seriously, I am curious - I think we would all be interested in a new way to create AI, not least philosophers and neuroscientists, who have yet to get a handle on what makes us self-aware!



Ok, ok I'm starting to see this now.  You're writing the premise for a SciFi novel, Iain M. Banks style.
Sigh. No. I was originally considering the Fermi Paradox, because it's important, and came upon a very plausible solution. That short story is a small spin-off.
Plausible? Yes.  But only because we know nothing of any other civilization in the Universe, so ultimately anything is "plausible".  It is wild speculation that, in some way, makes sense.  But you can't use that to prove it's a likely scenario.  It's like me claiming that all aliens died out because they all got fat.  Why? Because I said so.


Intelligence is indeed an open ended scale, but again, something we find difficult to measure.  IQ tests are hardly reliable, and were never meant to test intelligence - you can be taught how to improve your score. We are indeed flawed, but Harris implies that we know of greater intelligence than our own, otherwise how could it be relative?  How could you make the claim it's 'limited' unless you have an example of something that is unlimited?

Oh this is silly. Sophistry.
Simple proof human intelligence is limited: I can't absorb 1000 tech books and integrate them with my knowledge within my remaining lifespan.
I typically can't even recall what I had for dinner a week ago.
Yet I can imagine having an enhanced mind that would allow such things. And being able to continually add to the enhancements, if the substrate was some product of engineering rather than evolution. I don't care if that could or could not be distilled to some 'IQ number'. That is simply a pointless exercise.

Ok.  So we at least agree that a single number cannot possibly reflect every aspect of one's mental capacity.  Why would 'absorbing 1000 tech books' be considered a form of intelligence? Or the inability to do that a lack of intelligence?

The point I've been trying to make - without trying to deceive you - is that in order to claim something is more, or less, intelligent than something else, one must have a reasonable definition of intelligence as a reference. Yes, our "intelligence" has limits in terms of speed and data storage (I purposefully avoided the word "knowledge" because that isn't necessarily raw facts and figures).  But does increasing these lead to higher "intelligence"?  If one can compute a thousand times faster than anyone else, and remember everything, will this person create new technologies? Make more discoveries about the world we live in? Create "better" art? Will they score higher on an IQ test?  Again, you need some sort of definition.

I am not claiming you are wrong here, just that your original assumption was that human intelligence is limited, and therefore artificial intelligence will surpass it.  I agree with the first part, just not that the second part logically follows, because you have yet to provide a clear description of what you think intelligence is.
« Last Edit: December 25, 2017, 12:07:47 am by Buriedcode »
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Re: The Seven Deadly Sins of AI Predictions
« Reply #42 on: December 23, 2017, 06:01:47 pm »
Humans will definitely increase our intelligence by means of technology. How far will that go? It will continue as long as we exist, unless radiation sets us so far back that we won't be able to (quite possible - and it might not even take a war; a solar storm could do it by triggering nuclear meltdowns).
"What the large print giveth, the small print taketh away."
 

Online Zero999

  • Super Contributor
  • ***
  • Posts: 19653
  • Country: gb
  • 0999
Re: The Seven Deadly Sins of AI Predictions
« Reply #43 on: December 24, 2017, 12:33:06 am »
I'm with Buriedcode.

Artificial general intelligence - the type capable of actually understanding basic facts and concepts - has barely progressed at all. What passes for AI now is nothing more than sophisticated search and pattern recognition algorithms, which may seem clever to someone who doesn't really understand them, but when one really looks into them, they don't have any kind of general understanding ability.
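To illustrate the point (my own toy example, not anything Zero999 described): a nearest-neighbour "pattern recogniser" labels inputs purely by similarity to stored examples. It has no model of what the labels mean - it matches, it doesn't understand.

```python
def hamming(a, b):
    # Count the positions where two equal-length patterns differ.
    return sum(x != y for x, y in zip(a, b))

# Hypothetical training examples: pattern string -> label.
TRAINING = [
    ("1110", "mostly-ones"),
    ("1101", "mostly-ones"),
    ("0001", "mostly-zeros"),
    ("0010", "mostly-zeros"),
]

def classify(pattern):
    # Return the label of the closest stored example - pure matching,
    # with no notion of what "ones" or "zeros" actually are.
    return min(TRAINING, key=lambda ex: hamming(pattern, ex[0]))[1]

print(classify("1111"))  # mostly-ones
print(classify("0000"))  # mostly-zeros
```

Scale this idea up by a few billion parameters and you have a fair caricature of modern pattern-recognition AI: impressive matching, no comprehension.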

AGI is a long way in the future, and there could even be some fundamental law meaning that humans aren't intelligent enough to develop it.

If/when AGI is invented, I don't see why people automatically think it will want to compete with humans or that it will be anything like human intelligence.

I still don't see the continued fear of machines stealing jobs from humans. Tractors took the jobs of many farm labourers; then, a bit later, in the drawing office, CAD replaced draughtsmen; and more recently the Internet caused numerous retail job losses. At the same time, people got better jobs designing and making tractors, writing, selling and developing CAD software and, more recently, developing websites and smartphone apps.
 

Offline jonovid

  • Super Contributor
  • ***
  • Posts: 1456
  • Country: au
    • JONOVID
Re: The Seven Deadly Sins of AI Predictions
« Reply #44 on: December 24, 2017, 01:16:16 am »
the genie is out of the bottle when humans are ostracised by ever-increasing complexity, especially when technology is making lightning-fast autonomous choices.
when ever faster communications channels leave people out of the loop, and when self-programming software - software that makes other software - runs
at a speed no human can compete with.
if, say, self-programming software were to develop through the natural selection of small, inherited variations that increase each copy's ability to compete, survive a crash, and reproduce good code as failed code is discarded -
a new type of AI software mitosis.
when software engineers have no idea what the technology is doing or what the technology is up to  :-// ..... :scared:
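That "natural selection of small, inherited variations" is essentially a genetic algorithm, and it can be sketched in a few lines (entirely my illustration - the "programs" here are just bit strings scored against a made-up target):

```python
import random

# Toy genetic algorithm: candidate "programs" are bit strings, fitness
# rewards matching a target behaviour, and failed variants are discarded.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    # Count how many positions match the target behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Small inherited variations: flip each bit with low probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=200, seed=1):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # failed code is discarded
        children = [mutate(random.choice(survivors)) for _ in survivors]
        pop = survivors + children                  # surviving copies reproduce
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", len(TARGET), "target bits matched")
```

Even this trivial version converges on the target without anyone writing the answer in by hand, which is the unsettling part of the "software mitosis" idea: the result works, but no engineer designed it.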

on the flip side-

trends research 
driverless car cliff & the electric car fantasy
http://trendsresearch.com/detail.html?sub_id=b8f8866872
quote
The auto industry is beset with millions of recalls that cost billions: Ignition switch problems? Air bags exploding? Sudden acceleration? The industry can't get ignition switches to work, brakes to work, accelerators to work, doors to lock and unlock – and it’s telling the world a driverless car is coming just around the corner?
---
An estimated 650,000 electric vehicles were sold worldwide in 2016, compared to the 84 million-strong traditional vehicles sold.
« Last Edit: December 24, 2017, 06:12:19 am by jonovid »
Hobbyist with a basic knowledge of electronics
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6765
  • Country: nl
Re: The Seven Deadly Sins of AI Predictions
« Reply #45 on: December 24, 2017, 11:18:36 pm »
I still don't see the continued fear of machines stealing jobs from humans. Tractors took the jobs of many farm labourers; then, a bit later, in the drawing office, CAD replaced draughtsmen; and more recently the Internet caused numerous retail job losses. At the same time, people got better jobs designing and making tractors, writing, selling and developing CAD software and, more recently, developing websites and smartphone apps.

Average per capita consumption has to increase with average per worker productivity for employment to stay the same ... how much further can consumption increase?

I have never bought an app; I buy some games, but that hasn't increased much over the years. I can only consume so much entertainment. Physical goods are much, much better at soaking up income.
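That relationship is just arithmetic: the number of jobs needed is total consumption divided by output per worker. A throwaway illustration, with invented numbers:

```python
# Illustrative arithmetic only - the numbers below are made up.
def workers_needed(total_consumption, output_per_worker):
    # Jobs required to produce what is actually consumed.
    return total_consumption / output_per_worker

# Baseline: 1000 units consumed, 10 units produced per worker.
print(workers_needed(1000, 10))  # 100.0

# Productivity doubles but consumption stays flat: half the jobs remain.
print(workers_needed(1000, 20))  # 50.0

# Employment only holds steady if consumption doubles along with productivity.
print(workers_needed(2000, 20))  # 100.0
```

Hence the question: automation keeps raising the denominator, so how much further can the numerator realistically grow?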
« Last Edit: December 24, 2017, 11:21:09 pm by Marco »
 

