Author Topic: Musk on artificial intelligence  (Read 22084 times)


Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Musk on artificial intelligence
« on: August 05, 2014, 04:50:16 pm »
I am sure you have all heard his tweet about AI being potentially more dangerous than nukes.

What do you think?

Full disclosure: I am with him. A computer that is self-aware and can learn by itself could be deadly (not that it necessarily would be).
================================
https://dannyelectronics.wordpress.com/
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Musk on artificial intelligence
« Reply #1 on: August 05, 2014, 05:52:16 pm »
I don't think so. Self-awareness is one thing, intelligence another, and inventiveness a separate thing altogether.

We still have the edge, and for the long run.

In the future? Well, there is always a possibility they become smarter and more ingenious than humans, but by that time we will probably still have some tricks up our sleeves.

I think we will start to be concerned when they armor themselves :)
 

Offline Artlav

  • Frequent Contributor
  • **
  • Posts: 750
  • Country: mon
    • Orbital Designs
Re: Musk on artificial intelligence
« Reply #2 on: August 05, 2014, 05:55:31 pm »
AI is dangerous, but also fairly inevitable: being only an engineering problem, someone is bound to solve it one day or another.
There are many non-obvious problems with it, mainly our frequent delusions about what intelligence is.

It is better for that problem to be solved under controlled conditions, rather than by some bored hacker trying to make some sort of an "automatic programmer".

The so-called "self-awareness" is irrelevant here.
The danger is that there is no apparent upper bound on an AI's capabilities once it gains the ability to change itself and the world around it.
If it's not programmed to keep humans around, then we will be exterminated like ants.
Not from any sort of malice, but from the lack of any purpose for us in its value system.

Even more likely is a scenario where we get exterminated as a side effect: if an AI is programmed to make a certain game it ships with fun for everyone, then the solution might be to exterminate everyone who is not having fun with the game.
 

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Musk on artificial intelligence
« Reply #3 on: August 05, 2014, 07:39:32 pm »
In many respects, humans are not as effective or efficient as machines at learning. As the body of knowledge expands, it is not difficult to imagine that we will spend more of our lifetime learning ever more specialized areas and less time being productive and contributing to further expansion of that knowledge base, so there may exist a limit to human learning.

A computer may not experience any of those problems.

Factor in that we are far behind in physical strength, and it is hard to imagine us competing with AI.

================================
https://dannyelectronics.wordpress.com/
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Musk on artificial intelligence
« Reply #4 on: August 05, 2014, 07:53:35 pm »
Quote
In many respects, humans are not as effective or efficient as machines at learning. As the body of knowledge expands, it is not difficult to imagine that we will spend more of our lifetime learning ever more specialized areas and less time being productive and contributing to further expansion of that knowledge base, so there may exist a limit to human learning.

A computer may not experience any of those problems.

Not true. I know of no machine that can maintain a system as complex as the human body and massively process the same amount of data.

It can also be said:
there may exist a limit to computer learning.

A human may not experience any of those problems.

Quote
Factor in that we are far behind in physical strength, and it is hard to imagine us competing with AI.

It is hard to imagine us competing with AI because it's not even close. The first AI won't be anywhere close to the intelligence of a mouse. And even if self-aware, it won't be able to do anything without training.

Also, just because we are not AI doesn't mean we can't use machines to compete against AI, so physical strength doesn't matter.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21651
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Musk on artificial intelligence
« Reply #5 on: August 05, 2014, 10:16:40 pm »
Quote
It can also be said:
there may exist a limit to computer learning.

A human may not experience any of those problems.

:-DD Now if that ain't a perfectly human response I don't know what is :)

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Musk on artificial intelligence
« Reply #6 on: August 05, 2014, 10:48:37 pm »
Quote
It can also be said:
there may exist a limit to computer learning.

A human may not experience any of those problems.

:-DD Now if that ain't a perfectly human response I don't know what is :)

Tim

Not so fast!

Think a little before just throwing links to insult people.

Why do you think machines would be unlimited in learning but humans limited?
Surely, by the time an AI system develops any intelligence close to what a human being has, we will know how to interface with such systems as well, and we will be able to upgrade ourselves.

I haven't met anyone yet whose brain is full; well, maybe you are an exception ;)
 

Offline saturation

  • Super Contributor
  • ***
  • Posts: 4787
  • Country: us
  • Doveryai, no proveryai
    • NIST
Re: Musk on artificial intelligence
« Reply #7 on: August 05, 2014, 11:37:44 pm »
I think Musk is worried about a worst-case scenario. In a networked world, the AI experiment, if and when it becomes conscious, may escape the lab and spread either itself or its control to all the networked devices it connects to. Add the possibility of networked nanomanufacturing being real and capable, and it provides the AI with the capacity not only to control our world via the network, but to create a physicality of its intelligence: robots, cyborgs, etc.

This is the scenario covered in the recent movie Transcendence. It's a recurring theme in sci-fi, e.g. Colossus: The Forbin Project, 2001's HAL, The Matrix, or I, Robot.

So, bottom line: AI research has never considered treating its labs like biological containment labs, but it should.

http://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence

 

Best Wishes,

 Saturation
 

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Musk on artificial intelligence
« Reply #8 on: August 05, 2014, 11:45:20 pm »
Quote
but create a physicality of its intelligence: robots, cyborgs, etc.

Skynet? Those guys were pretty forward-looking, looking back. :)

Quote
So, bottom line: AI research has never considered treating its labs like biological containment labs, but it should.

Bingo!

One step further: since you cannot isolate every AI researcher, I would think that some form of controls will need to be put in place on the research, communication, dissemination and implementation of AI.

But if we did that, we would have done to ourselves what we fear AI would do to us.

================================
https://dannyelectronics.wordpress.com/
 

Offline Tinkerer

  • Frequent Contributor
  • **
  • Posts: 346
Re: Musk on artificial intelligence
« Reply #9 on: August 06, 2014, 12:14:27 am »
It can be dangerous. In fact, I think in a huge number of scenarios it would become dangerous. This is because of how many humans are: violent, greedy, etc. Even setting aside how people are, it could still become dangerous.
That said, there is no guarantee it won't be good either. The major problem I see is that when one is first created, some idiot will try to use it for purposes involving gaining power or killing. If one were created that focused on solving the questions of the universe, it might be quite a bit more benign. I think a major obstacle is getting one to see humans as useful.
I don't think I want to write a couple more paragraphs.
 

Offline corrado33

  • Frequent Contributor
  • **
  • Posts: 250
  • Country: us
Re: Musk on artificial intelligence
« Reply #10 on: August 06, 2014, 12:31:06 am »
I'm sorry, but any idea that an AI will become dangerous is brought about by watching too many movies. An AI developed by HUMANS will do whatever HUMANS want it to do. Unless you give it a dangerous body or a machine capable of self-replication, it will in no way ever become dangerous. What's it going to do? Take over the nukes? Sorry, but physical buttons are still required to be pressed for those. (A friend of mine is a button pusher.) Not to mention they are in no way networked. Take over the internet? Hell, it'll probably make it a better place. Machines require ELECTRICITY. Humans produce said ELECTRICITY. What are you people really afraid of? Please tell me how an AI could be dangerous in a real-life situation.

Get this silly idea that AI will become dangerous out of your head. AI can and will be extremely useful to us in the future. It will check and recheck things millions of times faster than any human could. It will respond to events and react appropriately much faster than any human. If given bodies, they could become the best and most resilient firefighters we've ever had. We could have an AI controlling the traffic system or other civil infrastructure.

People are afraid of what they don't understand, in this case technology. Yes, one day computers will be "smarter" than us. Get over it.  |O
 

Offline c4757p

  • Super Contributor
  • ***
  • Posts: 7799
  • Country: us
  • adieu
Re: Musk on artificial intelligence
« Reply #11 on: August 06, 2014, 12:42:31 am »
Quote
I'm sorry, but any idea that an AI will become dangerous is brought about by watching too many movies. An AI developed by HUMANS will do whatever HUMANS want it to do. Unless you give it a dangerous body or a machine capable of self-replication, it will in no way ever become dangerous. What's it going to do? Take over the nukes? Sorry, but physical buttons are still required to be pressed for those. (A friend of mine is a button pusher.) Not to mention they are in no way networked. Take over the internet? Hell, it'll probably make it a better place. Machines require ELECTRICITY. Humans produce said ELECTRICITY. What are you people really afraid of? Please tell me how an AI could be dangerous in a real-life situation.

This. Anything an AI in a computer can do, a man-made virus in a computer can do. It's still just computer security; nothing changes. Don't want it to kill you? Don't give it weapons.
No longer active here - try the IRC channel if you just can't be without me :)
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21651
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Musk on artificial intelligence
« Reply #12 on: August 06, 2014, 12:56:28 am »
Quote
Not so fast!

Think a little before just throwing links to insult people.

Why do you think machines would be unlimited in learning but humans limited?
Surely, by the time an AI system develops any intelligence close to what a human being has, we will know how to interface with such systems as well, and we will be able to upgrade ourselves.

I haven't met anyone yet whose brain is full; well, maybe you are an exception ;)

Not meaning to insult -- I apologize -- I'm only human myself.  And therein lies the problem: even without our various foibles and fallacies, we have just one brain (or a few when we decide to work together).  Perhaps with extraordinary advances in genetic modification and cybernetic enhancement (and the whole host of medical, ethical and theological problems that follow), we would be able to scale that as well (improving efficiency or connectedness or size).  But it seems far more likely that computers will continue scaling as they have been (if nothing else, there is the market pressure to do so), and certainly within the next century, will surpass the human brain in processing capability.

Whether such machines will be programmed for consciousness, or if it arises as emergent behavior from the development of other algorithms, who knows.  And, whether those programs yield the ideal, cold, calculating machine we've always dreamed of (whether benevolent like Data, or extraordinarily powerful like Gort), or something just as messy and inconsistent as the beings which created them, who knows -- though I would expect the latter.

Or to put it more simply... meat brains are fixed size, while computers are still growing exponentially, and compared with present technology (brains included), the theoretical potential is nearly unlimited.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Musk on artificial intelligence
« Reply #13 on: August 06, 2014, 01:02:39 am »
Quote
If one were created that focused on solving the questions of the universe, it might be quite a bit more benign.

That's the intelligence part of AI.

Once a machine has acquired self-awareness, it is tough to "guide" it. Just witness how kids grow up.

Essentially, being smart doesn't make it a "being" - many of today's computers will beat most humans at many tasks. Being self-aware does.

When the Knight Capital fiasco was first reported, what struck me was how long it took humans to shut down that computer system - the backup was so good that it took nearly half an hour to completely shut it down, as the backups kept coming online.

It had no intelligence or self-awareness, but that kind of behavior is kind of interesting.
================================
https://dannyelectronics.wordpress.com/
 

Offline corrado33

  • Frequent Contributor
  • **
  • Posts: 250
  • Country: us
Re: Musk on artificial intelligence
« Reply #14 on: August 06, 2014, 03:56:13 am »

Quote
That's the intelligence part of AI.

Once a machine has acquired self-awareness, it is tough to "guide" it. Just witness how kids grow up.

It doesn't matter how much we can or cannot guide it. The AI will forever be contained in its system until more resources are made available to it. Anybody worth their salt who develops AI will do it on a CONTAINED, non-networked machine. Even IF it happens to get out on the net, it won't be any better than any smart virus out there. Password cracking takes time, regardless of how it's done. Any updated machine will be protected. Sure, non-updated machines could be compromised; there are plenty of exploits out there, but that's because people are dumb and didn't update their Windows machines, not because the AI was a super-being capable of destroying the world. Sure, the AI may be able to find an exploit quicker than hackers today, but that does not mean we cannot protect against that attack. Heck, all you really have to do is unplug your machine from the net; problem solved!

Do you REALLY think that an AI will be able to hack into government computers? Computers that are never allowed to be connected to the internet? I do some research for the Navy; the computer they gave me is never allowed to be connected to the internet (or Bluetooth, or anything) and never allowed to have a non-approved USB drive or anything else plugged into it. Period. Let me know how your AI will reach that. And that's just a simple researcher's computer!

Your fears are irrational.
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Musk on artificial intelligence
« Reply #15 on: August 06, 2014, 05:29:38 am »
Just look at our sensory inputs and tell me a computer could process all that. Not even close. I mean all of the inputs; I'm not even sure whether they number in the billions or not.

The only way I could see an intelligent AI would be if it were connected to the "cloud", but then you just have to interfere with the network connection and you put it out of business.

For example, if I say "dog", you have already retrieved images of dogs, memories, and a lot of other things related to dogs in less than a second. So yeah, an AI system could do a Google search, but it will have the limitations of what the search finds; our brain is better at internal data storage and retrieval. It is also concurrent, distributed, fault-tolerant, and on top of that it's analog, not digital; it reconfigures itself constantly and is self-repairing. Well, I don't know where to stop, but there is way more.

We can make a self-contained system that mimics self-awareness, but that's really not the same thing - though the AI system wouldn't know the difference.

I can look at a pile of mixed-up stuff, and if someone asks me to pick up something, I can identify it in no time flat.

We are not even close to having an AI system that will surpass us; maybe with new tech in the future (some kind of quantum computer). For now, no worries.

Let me finish by saying that our brain is really untapped. I can't find the original article.
Researchers had a video game that a monkey would play, and she was rewarded with orange juice. They implanted a device that allowed her to move an external robotic arm that moved the joystick, and she mastered its use without moving a muscle, only using her mind.

Just like us: we can pick up any tool and it becomes an extension of our body, like a car, or anything. The brain doesn't have to give detailed instructions to adjust for the tool; we master tools and absorb their usage until it's second nature.

http://www.livescience.com/6909-brain-power-mind-control-external-devices.html

The article I remember also had a part about moving a full robot, even with the robot in Japan and the person in the States.
Here is a related, more advanced version, but in the same room.

http://www.euronews.com/2013/06/03/mind-controlled-robot-moves-closer-to-reality/

So don't sell our brain short; it's quite a system, and we are not close to creating anything near the power it has.


Edit: Ted talk on monkey and external robotic arm.



Edit2: This was the remote robot too. Funny thing: the robot would receive the signal 22 ms faster than if the muscle were on her body, even though the robot was on the other side of the planet, because the system bypasses the nervous system.

Now tell me how obsolete our brain is :)

« Last Edit: August 06, 2014, 06:32:23 am by miguelvp »
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3709
  • Country: us
Re: Musk on artificial intelligence
« Reply #16 on: August 06, 2014, 06:55:34 am »
Human-like intelligence is not really even a _goal_ of AI research any more. Mostly what we have learned is that we don't even understand the problem, much less have any idea how to solve it. The problem is that we don't understand natural intelligence very well at all. AI research today is mostly things like image recognition, internet search, and self-driving cars.

The flip side of this is that, whatever happens, AI research isn't by and large happening in tightly controlled labs with secure systems. This is stuff we already rely on every day. If someone does somehow create sentient machine intelligence, it will almost certainly be connected to the internet and geographically distributed. Let's hope it's friendly.
 

Offline dannyfTopic starter

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Musk on artificial intelligence
« Reply #17 on: August 06, 2014, 10:47:27 am »
Quote
Let's hope it's friendly.

I think that's the risk Musk was talking about: we cannot rest our survival as a species on the chance that an AI we created is "friendly".
================================
https://dannyelectronics.wordpress.com/
 

Offline firewalker

  • Super Contributor
  • ***
  • Posts: 2450
  • Country: gr
Re: Musk on artificial intelligence
« Reply #18 on: August 06, 2014, 11:34:20 am »
Quote
This is the scenario covered in the recent movie Transcendence. It's a recurring theme in sci-fi, e.g. Colossus: The Forbin Project, 2001's HAL, The Matrix, or I, Robot.

You forgot Terminator!!! 
Become a realist, stay a dreamer.

 

Offline FrankBuss

  • Supporter
  • ****
  • Posts: 2365
  • Country: de
    • Frank Buss
Re: Musk on artificial intelligence
« Reply #19 on: August 06, 2014, 12:29:53 pm »
Quote
Your fears are irrational.

No, see this small book for some interesting insights into this topic:

http://www.amazon.de/dp/B00IB4N4KU/

The problem is not a fixed AI running on an isolated computer, which maybe could be contained if there are no malicious humans who use it for their own purposes. (Kurzweil once said the first sign of the singularity would be a full email mailbox; I don't know if he said this because of some super spam-bot AI.) The problem is if the AI has the capability to self-evolve. Then it can change its goals, maybe even by accident if it evolves with some random genetic algorithm, getting more intelligent and powerful, and out of control.

Another problem is how to define "good" behaviour. Even humans do not agree on what is good. Is it good to drop bombs on a country with a dictatorship? Or, as someone wrote earlier, if the goal of an AI for a game were to make the gamers happy, a side effect might be to kill all non-gamers, if not explicitly forbidden. An AI needs some human ethical common sense, but that's difficult, because there is no such thing. And using some set of laws would be no solution, because laws are inconsistent and open to interpretation (there is a lot of science fiction with this theme, like how Asimov's three laws of robotics can lead to evil behaviour).
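The "make the gamers happy" failure mode can be sketched as a toy objective. This is an invented example (the names and numbers are made up, not anyone's real system): an optimizer scored only on the average fun of whoever remains finds that removing the unhappy people raises the score, because nothing in the objective says the population must be preserved.

```python
# Toy illustration of objective misspecification (invented example):
# the optimizer is scored on average fun, with no constraint that
# everyone must survive, so the "best" action shrinks the population.
population = [
    {"name": "gamer1", "plays": True, "fun": 8},
    {"name": "gamer2", "plays": True, "fun": 6},
    {"name": "nongamer1", "plays": False, "fun": 0},
    {"name": "nongamer2", "plays": False, "fun": 0},
]

def average_fun(people):
    # The (badly specified) objective: mean fun over whoever is left.
    return sum(p["fun"] for p in people) / len(people)

print(average_fun(population))   # 3.5 over the full population

# The degenerate optimum the objective permits: keep only the gamers.
optimized = [p for p in population if p["plays"]]
print(average_fun(optimized))    # 7.0 -- higher score, terrible outcome
```

The point is not that anyone would write this literal code, but that the score goes up while the outcome gets worse, and the objective itself cannot tell the difference.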
So Long, and Thanks for All the Fish
Electronics, hiking, retro-computing, electronic music etc.: https://www.youtube.com/c/FrankBussProgrammer
 

Offline corrado33

  • Frequent Contributor
  • **
  • Posts: 250
  • Country: us
Re: Musk on artificial intelligence
« Reply #20 on: August 06, 2014, 02:20:14 pm »
Quote
Your fears are irrational.

No, see this small book for some interesting insights into this topic:

http://www.amazon.de/dp/B00IB4N4KU/

The problem is not a fixed AI running on an isolated computer, which maybe could be contained if there are no malicious humans who use it for their own purposes. (Kurzweil once said the first sign of the singularity would be a full email mailbox; I don't know if he said this because of some super spam-bot AI.) The problem is if the AI has the capability to self-evolve. Then it can change its goals, maybe even by accident if it evolves with some random genetic algorithm, getting more intelligent and powerful, and out of control.

Another problem is how to define "good" behaviour. Even humans do not agree on what is good. Is it good to drop bombs on a country with a dictatorship? Or, as someone wrote earlier, if the goal of an AI for a game were to make the gamers happy, a side effect might be to kill all non-gamers, if not explicitly forbidden. An AI needs some human ethical common sense, but that's difficult, because there is no such thing. And using some set of laws would be no solution, because laws are inconsistent and open to interpretation (there is a lot of science fiction with this theme, like how Asimov's three laws of robotics can lead to evil behaviour).

A malicious human can use a freaking mailbox to do evil. They don't need an AI for that. You cannot say a technology is inherently bad because somewhere, someday, someone evil will try to take advantage of it. Again, you've been watching too many movies. If you are going to make that argument, then you should just shun technology in general, because every piece of technology since electricity can be (and probably has been) used for evil.

I'll ask you this again. Let's take your example of a game AI. How, exactly, is a game AI going to kill all non-gamers? Please explain that to me with real-world examples. Then please explain to me why that real-world example would be "unstoppable." Explain exactly how a software-based game AI, with no code for googling things and no code for controlling hardware, is going to physically KILL non-gamers.  :palm:

If you're so afraid of computer programs that "evolve", then you should shut off your computer now. We already HAVE viruses that "evolve." Google polymorphic viruses. Then explain to me how they haven't gotten "out of control" and haven't evolved to "kill all humans."

It doesn't matter if the AI has "laws" or not. You stated how in SCIENCE FICTION these laws can be subverted. Yes, in science fiction I can take a space plane to the Andromeda galaxy. Take a look at the premise of those stories. Let's take I, Robot for example. IIRC, it took the "updating" mechanism being enabled on all of the robots for them to turn evil. Hmm, how can we stop that? Well, let's shut down the electricity to the place sending the signal for them to turn evil. Problem solved. There were WAY too many coincidences in that movie for it to ever happen in real life. No security cameras anywhere? Really? (Yes, I know the main-character robot killed a human, but it was an evil human, which the robot rationally and logically figured out.)

Any AI that is created will be heavily chained to the purpose it was developed for.

Think LOGICALLY about these problems and stop letting irrational fears control what you think. Technology is good (all of it, even the development of nukes). It has saved countless lives and has enabled more people to live on the planet than ever before. Without technology, you and I would surely be dead. Without the advancement of science and technology, what do we (as humans) have?
« Last Edit: August 06, 2014, 02:24:33 pm by corrado33 »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6714
  • Country: nl
Re: Musk on artificial intelligence
« Reply #21 on: August 06, 2014, 02:43:32 pm »
Quote
Unless you give it a dangerous body

Anyone with money can get dangerous bodies to do stuff.
 

Offline corrado33

  • Frequent Contributor
  • **
  • Posts: 250
  • Country: us
Re: Musk on artificial intelligence
« Reply #22 on: August 06, 2014, 02:52:31 pm »
Quote
Unless you give it a dangerous body

Anyone with money can get dangerous bodies to do stuff.

Dangerous robotic bodies still need electricity and are still vulnerable to conventional (as well as non-conventional) weapons. It would be no different than if a human terrorist picked up a gun. (Except it'd be much easier to justify blowing up tons of robots.)
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Musk on artificial intelligence
« Reply #23 on: August 06, 2014, 03:12:19 pm »
corrado, I don't think you got what Marco meant.

Most likely he was implying hiring thugs.

As for Musk, maybe he is afraid of dependence on AI systems more than of robotic overlords. But as stated, dependence on computer systems that can be hacked is bad to begin with, as more competent pen testers already demonstrate.
« Last Edit: August 06, 2014, 05:17:57 pm by miguelvp »
 

Offline FrankBuss

  • Supporter
  • ****
  • Posts: 2365
  • Country: de
    • Frank Buss
Re: Musk on artificial intelligence
« Reply #24 on: August 06, 2014, 03:49:24 pm »
Quote
I'll ask you this again. Let's take your example of a game AI. How, exactly, is a game AI going to kill all non-gamers? Please explain that to me with real-world examples. Then please explain to me why that real-world example would be "unstoppable." Explain exactly how a software-based game AI, with no code for googling things and no code for controlling hardware, is going to physically KILL non-gamers.  :palm:

If you're so afraid of computer programs that "evolve", then you should shut off your computer now. We already HAVE viruses that "evolve." Google polymorphic viruses. Then explain to me how they haven't gotten "out of control" and haven't evolved to "kill all humans."
Polymorphic viruses are already the beginning. Of course, today they are dumber than a cockroach, and it is unlikely that a virus will evolve into a superhuman AI. Same for a computer-game AI. But this is only because of the limited complexity of the systems.

The computational power of a brain is about 100x10^12 instructions per second ( https://en.wikipedia.org/wiki/Computer_performance_by_orders_of_magnitude ). IBM Watson already has 80x10^12, but a standard PC is only in the 10^9 range. So a PC needs to be 100,000 times faster to emulate a human brain and to run an AI at least as smart as a human. Because computer speed doubles every 18 months (Moore's law), everyone will have such a computer in about 25 years (log2(100,000)*18/12). Then it gets interesting.
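The back-of-the-envelope arithmetic above can be checked in a few lines. The figures are the post's own rough estimates (instructions per second and an 18-month doubling period), not measurements:

```python
import math

# Figures quoted in the post (rough, order-of-magnitude estimates).
brain_ips = 100e12    # instructions/second attributed to a human brain
pc_ips = 1e9          # instructions/second of a standard PC
doubling_months = 18  # assumed Moore's-law doubling period

speedup = brain_ips / pc_ips               # 100,000x gap to close
doublings = math.log2(speedup)             # ~16.6 doublings needed
years = doublings * doubling_months / 12   # ~24.9 years

print(f"{speedup:,.0f}x gap, {doublings:.1f} doublings, {years:.1f} years")
```

The estimate is of course only as good as its inputs; change the doubling period or the brain's instruction rate and the answer moves accordingly.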

Regarding how a computer-game AI could kill a human: I agree, highly unlikely today. But there are new technologies like brain-computer interfaces, and it won't be long until there are lethal autonomous weapons, which an AI could hack. Would you have imagined 40 years ago that most people would have a computer with GSM, the size of a Star Trek communicator and more powerful than all the computers of that time combined? We are already merging with our smartphones, and the brain interface will be the next logical step. That's the magic of exponential growth. It might not be something to worry about today, but suddenly it will be there, with all the advantages and disadvantages.

As for the power supply, I assume an AI more intelligent than a human would have no problem getting the required resources. It needs just one robot, duplicated a few hundred times, or it could even pay humans to build its own covert solar or geothermal power plant. But it won't need much power at all. I think a very intelligent AI would solve the problem of reversible computing, creating a very powerful computer which needs nearly no power ( http://www.technologyreview.com/view/422511/the-fantastical-promise-of-reversible-computing/ ). That's not science fiction, but science. Imagine you could arrange the atoms of a rock to do useful calculations; the physical laws don't forbid it. It would be many magnitudes more powerful than any computer today (Kurzweil calls this "cold computing", see http://www.singularity.com/Singularity_Review.pdf ).

And the world in 20 years will be more automated and networked than today, so it could even "live" hidden on the internet across multiple machines. My guess is that something like the Google search engine will someday become conscious unintentionally, and nobody would notice it or could stop it until it is too late and all humans are killed.
So Long, and Thanks for All the Fish
Electronics, hiking, retro-computing, electronic music etc.: https://www.youtube.com/c/FrankBussProgrammer
 

