Author Topic: A skeptics view on the AI Hype  (Read 5376 times)

0 Members and 1 Guest are viewing this topic.

Offline notadaveTopic starter

  • Contributor
  • Posts: 49
  • Country: de
A skeptics view on the AI Hype
« on: September 11, 2019, 03:19:31 pm »
Over the past few years there has been a lot of talk about the chances and risks of AI, but, as with any hype, too little about the limits and costs.
Historically, the limits under discussion were technical, rooted in our lack of understanding of what intelligence is and how it comes about.
I would like to discuss three other areas of limits that I have found, neither technical nor mystical, which I summarize as follows:
   * Endogenous / Problems of the learning process
   * Exogenous / General limits of understanding
   * Influence, Competition, utility, cost
I would also like to address some assumptions and conclusions that are treated as (self-)evident but that I consider false. They boil down to the claim that, once we have AI, it will be almighty and unstoppable because:
   * It can bribe and blackmail anyone with impunity
   * It is enough to be more intelligent and have much more information to "win"
   * There is no limit to the utility of intelligence.
   * The utility will grow as fast or faster than the needed resources.
   * Machines are not subject to mental illness.
   * No machine will have to compete with other machines.
   * To take over the world the AI only has to be smarter than anything else.
   * Morality is not instrumental to all long-term goals.

PART I: Limits of Mind: Endogenous / Problems of the learning process
Some extrapolate current progress and assume that AGI will have all the advantages of current neural networks, training by experts, human-made challenges, scaling supercomputers and constant technological progress, but none of the disadvantages of being more human, having no equal, and having to learn from real-world data.
Becoming more like a human will bring many of the same problems.
An AGI will be prone to the full range of mental disorders, with the exception of social anxiety and purely biological issues: addiction, depression, delusion, possibly dissociative disorder, ...
Depending on the reward function, the built-in rules/limits and the hardware implementation, novel disorders might occur.
Humans try to hack and change their reward function all the time; with AI we have only changed the perspective on a range of mental/cognitive issues.
The reward-function problem does not only drive individuals crazy, it drives organizations crazy. Goodhart's law, Campbell's law, the McNamara fallacy, perverse incentives, the cobra effect and surrogation are all examples of our own struggle with quantifying success and directing our efforts.
Unless you know what you want, how much of it, and how to measure it, you will not get it.
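To make the reward-function point concrete, here is a toy Python sketch (the ticket-closing scenario and all numbers are invented, not from anything above): an agent that optimizes the measured proxy picks a different strategy than one that optimizes what we actually want.

```python
# Toy sketch of Goodhart's law: once the proxy metric becomes the target,
# optimizing it diverges from the real goal.

def true_value(tickets_closed, fix_quality):
    # What we actually want: problems that stay solved.
    return tickets_closed * fix_quality

def proxy_metric(tickets_closed, fix_quality):
    # What gets measured and rewarded: raw ticket count.
    return tickets_closed

# Hypothetical strategies: (tickets closed per week, fraction properly fixed)
strategies = {
    "careful": (10, 0.9),
    "rushed": (30, 0.2),
}

best_by_proxy = max(strategies, key=lambda s: proxy_metric(*strategies[s]))
best_by_value = max(strategies, key=lambda s: true_value(*strategies[s]))
```

The agent rewarded on the proxy chooses "rushed"; by the true objective, "careful" wins. That gap is the whole problem.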
Unless it is programmed in, the AGI might just come up with a new and better religion or conspiracy theory, answering every question with an answer that cannot be proven wrong and has no predictive power. Humans got stuck in one big anthropomorphism for thousands of years, and the majority are still stuck today. Should an AGI conclude that a conspiracy or a single other AGI is most likely behind something, that might even be a more plausible hypothesis in a world of super-intelligence than it is today, since a super-AGI would be more able to pull it off than any human organization.
No one will program the AGI line by line, so how will it be forced to recognize contradictions, and how will it identify them? A contradiction might be a real mistake, or it might be that the matter is not fully understood and both models, hypotheses or rules are true under different circumstances. Most humans do not even bother to resolve or notice contradictions; they file them in different domains, or never use them to make predictions but only for post-hoc explanations.
Overfitting and underfitting are problems that remain, and that humans suffer from just as much. Without a teacher/supervisor, the AI will have to do the validation to assess its predictive accuracy itself.
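The overfitting point can be shown with a deliberately extreme sketch (Python; the linear relation and noise level are arbitrary choices of mine): a model that memorizes its training data scores perfectly on it and terribly on held-out data, which only a self-run validation step would reveal.

```python
import random

random.seed(0)

# Noisy samples of an (arbitrarily chosen) underlying relation y = 2x.
data = [(x, 2 * x + random.gauss(0, 0.5)) for x in range(40)]
train, valid = data[::2], data[1::2]   # hold out every second point

def mse(model, points):
    # Mean squared prediction error over a set of (x, y) points.
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

# Extreme overfit: memorize every training point, guess 0.0 elsewhere.
lookup = dict(train)
def memorizer(x):
    return lookup.get(x, 0.0)

def linear(x):
    # The simple underlying rule, assumed known for this sketch.
    return 2.0 * x

train_err = mse(memorizer, train)   # exactly 0: perfect recall
valid_err = mse(memorizer, valid)   # huge: no generalization at all
```

Only the held-out score exposes the memorizer; its training score looks ideal.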
A neural-net-based AI makes implicit assumptions just like humans do, and is therefore just as unaware of them and just as unable to test or verify them.

PART II: Limits of Intelligence: Exogenous / General limits of understanding
I would like to question the possibility of super-intelligence in general. It may well be beyond question that, given technology and power, one could provide enormous computational resources, and that AI will progress to the point of using them effectively to combine data into understanding and knowledge. But will that scale indefinitely, far beyond what a hierarchy of many humans with computers could do? How much more intelligence does one get per watt? If you do get more AGI, will it make better predictions, or will it soon run into more fundamental issues that cannot be over-powered by intelligence?
When discussing AGI we make extrapolations. I hope everyone doing so is aware that nothing extrapolates to infinity. It is a common misconception, a cousin of Zeno's paradox, that something that never stops moving forward must eventually cover any given distance. Just because a system gets ever more intelligent does not mean that it will ever break a particular limit.
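A two-liner makes the point (Python; the sequence is of course just a stand-in for "capability over time"):

```python
# A strictly increasing sequence need not cross every bound:
# capability_n = 2 - 1/2**n improves at every step yet never reaches 2.
capability = [2 - 1 / 2 ** n for n in range(50)]

always_improving = all(b > a for a, b in zip(capability, capability[1:]))
never_reaches_two = all(c < 2 for c in capability)
```

Both flags come out true: perpetual improvement and a hard ceiling are perfectly compatible.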
There are systematic limits to/of understanding/science:
   * The problem of induction: forming hypotheses and falsifying them is hard. Finding and producing good data, and judging how good the data is, is hard. Rare events provide little statistical information but might be very significant in consequence.
   * The problem of causality: as soon as the observed system forms expectations and has feedback, it becomes hard to establish causality, and thus what you need to do to cause what you want to happen.
   * Computationally intractable problems: even with heuristics and models, some problems that must be computed (e.g. NP-hard ones) are simply too expensive.
The world is non-deterministic at its root, and, what is worse, at most levels it is deterministic yet chaotic.
Most relevant systems are made up of people and machines, which makes them:
   * Non-linear
   * Non-causal (expectations)
   * Time-variant, adaptive (memory, learning)
and thus often impossible to predict.
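The "deterministic yet chaotic" point can be demonstrated with the logistic map, a standard textbook example (my choice of illustration, not anything from the systems above):

```python
# The logistic map x' = r*x*(1-x) with r = 4 is fully deterministic yet
# chaotic: two states that differ by one part in ten billion end up on
# completely different trajectories, so the prediction horizon stays
# short no matter how much computation is thrown at the model.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-10   # "perfect" vs. almost-perfect state knowledge
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
```

After 60 steps the tiny initial error has been amplified by many orders of magnitude; a perfect model plus almost-perfect data still yields a useless forecast.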
Just because a smart person can predict a dumb person's behavior does not mean a very smart person can predict a smart person's; that is extrapolation again. Think of the theory-of-mind limit of bluff, double-bluff, ...
Once there is artificial general intelligence, it will be much more capable of handling correlation than Big Data is today. With an understanding of the world, the AI will know that women do not become pregnant because they bought too few tampons, and will not fall for the limitless correlations without direct causation. Another type of causality issue is circular causation that was once put in place but, now that it is in place, obscures the original cause. Think of the chicken-and-egg question.
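The tampon example is about confounded correlation. A self-contained Python sketch with invented numbers (hot weather driving both ice-cream sales and pool accidents, a classic illustration) shows how strongly two causally unrelated series can correlate through a hidden common cause:

```python
import random

random.seed(1)

# A hidden common cause (temperature) drives both series; neither causes
# the other, yet they correlate strongly. All coefficients are made up.
temps = [random.uniform(0, 35) for _ in range(200)]
ice_cream_sales = [3.0 * t + random.gauss(0, 5) for t in temps]
pool_accidents = [0.2 * t + random.gauss(0, 1) for t in temps]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, pool_accidents)   # strong, yet not causal
```

A correlation-only learner would happily "predict" drownings from ice-cream sales; an agent with a causal model of the world would not intervene on the wrong variable.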
Historians and the serious press cannot even explain the past. Are they just inept, or is it too hard? If it is hard to explain the past, how can we hope to predict the future?
Often we know what is going to happen but we do not know when. The next recession, the earthquake, the AI singularity.
Causation expands backwards exponentially. Everything has multiple triggers and preconditions. Thresholds are functions that change with the conditions. Even in retrospect it is hard to find the conditions that were necessary. The day before, other conditions might just barely have failed to make something similar happen. A super-AGI would have to constantly ingest and process vast amounts of data, keep track of and generate many probable outcomes, and invest in them all in real time.
Being connected to the internet will be of great value for the creation of AGI, but more data is not always helpful. To solve problems you need specific data. Data must be reliable, of known quality and representative. Often getting more data is easy but getting the data you need to update your belief significantly is by definition unlikely. It is more valuable to find one reliable rare instance than to find more of what you already know.
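"Getting the data you need to update your belief significantly is by definition unlikely" is just Bayes' rule. A small sketch (all probabilities invented) of why yet another familiar observation barely moves a belief, while one rare, diagnostic observation moves it a lot:

```python
# Bayes' rule for a binary hypothesis: the posterior moves in proportion
# to how differently the two hypotheses predicted the observation, so
# "more of what you already know" barely updates the belief.

def update(prior, p_obs_if_true, p_obs_if_false):
    evidence = prior * p_obs_if_true + (1 - prior) * p_obs_if_false
    return prior * p_obs_if_true / evidence

prior = 0.5
# A common observation, nearly as likely under either hypothesis:
after_common = update(prior, 0.80, 0.78)
# A rare observation that only one hypothesis really predicts:
after_rare = update(prior, 0.05, 0.0005)
```

The common observation leaves the belief essentially at 50%; the rare diagnostic one pushes it near certainty. More data is not the same as more information.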
Big-Data and "The Hitchhiker’s Guide to the Galaxy" show us that asking the right question and looking for relevant data is important. There is a difference between data, information, knowledge, wisdom, understanding.
We have data, but asking "What can we do with that data?" is dangerous: it is too tempting not to ask "What information do I need for this decision?" That an AI can process data does not solve the availability and reliability issues.

PART III: Limits of Influence, Competition, Cost
So there are two types of limits: those that might come as a cost of using neural nets and autonomous learning with real data, and those that are just general epistemology. But there is more.
There are limits to what one can learn by passive observation, and limits to what experiments can be performed. Both cost and ethics limit what you can try; maybe some things should not even be tried, because the trial might have catastrophic consequences.
I know the following is a bit of a straw-man argument: what I did is compile a set of assumptions that I think others have made. I would have to find the sources and then take the arguments apart, but here I will just address these insinuated assumptions out of context.
I see some assumptions going around that lead people to believe that AI will "win", that I would like to question:
   * Being smarter makes you the "winner".
   * Being smarter by any margin is enough.
   * There is no upper bound to how smart you can be.
   * Being the winner is worth the cost.
I think: There are diminishing returns to being smarter. Competition and winning is not only about being the smarter one.
In a game like chess or Go, with perfect information, no noise, no chance and no externalities, being consistently smarter by some margin will make you the winner. The game of life is nothing like chess or Go. The real world is a die whose number of sides one does not know. Being a bit smarter does not make one the winner who takes all.
There is no perfect representation of the world that allows mathematically precise logic on it. That a computer-based AGI will be as superior to a human as a computer running a spreadsheet is to a human simply does not follow.
We already have true and false experts today. The trouble is that we do not listen to them, and we do not even try to find and test them to learn how much we can trust them. Many commentators and advisers are not experts with above-random skill, but known frauds/hacks. That is not going to change with AGI. With every new technology history repeats itself: someone always claims to be able to tell the future / model randomness, and someone always falls for it.
We are already at the point where, thanks to computers, human experts know things that even they do not understand. With AGI we will have to take the AGI's word for it. Depending on its level of self-inspection, the AGI itself might not even know why it thinks the way it does. And even if the AGI has understood the matter well enough to explain it, it is questionable whether it can make a human understand.
Our age is defined by: "Yes, we can in principle, but no, we can't." We have mastered fossil fuels and nuclear power, but found out that they will destroy the basis of our life, the habitability of our only home. We can use fission and fusion power, but it might never be economically feasible. Might AI be another case of this? I think not, but in the short term it will be. It will take a long time until our computers have closed the gap to human-brain efficiency, and even then they will consume much more power to produce the results we are hoping for. Let's hope we still have that kind of power available. We are approaching a major step in computing power, but it will be just that: a step. Many companies are readying their silicon and APIs to provide unprecedented memory size and bandwidth, combined with the computing power and network bandwidth to make use of it, but after that step we are back to post-Dennard, post-Moore, post-Koomey scaling.
In a world of many AIs, there is nothing that any single AI alone can offer. The paperclip maximizer and the stamp collector cannot both win their endgame.
A dictator can already offer his followers something that keeps them from "turning him off". Humans are very sensitive to unacceptable behavior, and we already expect to see it in AGI. Blackmail, bribery, manipulation and lying will be no more accepted in an AI than in a child, and will stop being cute much faster.
Morality is an instrumental goal: it helps in the long run. Therefore a super-AI will consider behavior that we call moral.
I see three ways to exert influence and have impact; only the third would be a game changer that could be an enabler for AGI:
   * Brute force: positive or negative incentives
   * Manipulation: controlling the "reality" people perceive
   * Ultimate control: understanding the causations and amplifications of the world and knowing its state; for people, that state is their hopes, biases, fallacies and cheap desires/needs. In a mechanical system it would mean finding, redirecting and exploiting potential energy to act upon the state you wish to change via a domino effect. Chaotic systems can be utilized if the state is sufficiently well known and a major state change is imminent. In that case one could possibly delay, accelerate or redirect the state change with little effort. One "only" has to know the rules, states and thresholds.
 

Offline edy

  • Super Contributor
  • ***
  • Posts: 2387
  • Country: ca
    • DevHackMod Channel
Re: A skeptics view on the AI Hype
« Reply #1 on: September 11, 2019, 03:53:38 pm »
What is fascinating to me is that there is already a lot of pre-programmed algorithm/intelligence in the biology of the brain. Even something as simple as a fruit fly can do stuff we can barely model, let alone program with millions of lines of code on massive hardware requiring huge power to run.

If only we could crack even the code of the measly fruit-fly brain: discover how it knows how to master basic flying to and from what it perceives as sugar sources, avoid being swatted by people, land and take off, walk around... never mind all the behavioural characteristics of the mating ritual... or any insect for that matter, with its puny tiny brain (look at a bee colony, for example).

I think we have a fundamentally flawed understanding (or no understanding at all) of how actual brains work. Sure, we can poke and prod areas, and we know there are geographic locations for certain processes and that many neurochemicals are involved, but how is data actually stored... and what are the algorithms, and how does processing work in the neural net?

To me, AI for now is simply "simulated" intelligence: a sophisticated display of smoke and mirrors to trick people into thinking there is intelligence there. Some people will say even we are nothing but smoke and mirrors too... just orders of magnitude more complex, so what is the difference? I'm sure there is a difference in the fundamental approach we are taking.
YouTube: www.devhackmod.com LBRY: https://lbry.tv/@winegaming:b Bandcamp Music Link
"Ye cannae change the laws of physics, captain" - Scotty
 

Offline Raj

  • Frequent Contributor
  • **
  • Posts: 701
  • Country: in
  • Self taught, experimenter, noob(ish)
Re: A skeptics view on the AI Hype
« Reply #2 on: September 11, 2019, 03:59:24 pm »
Specialized artificial intelligence is nothing but a marketing term... the techniques have existed for a long time; we only hear about them now because we finally have enough hardware to run them. It's super useful if you are already sure that this thing relates to that thing but don't know how. It fails when you just randomly input stuff.

General artificial intelligence might need a full understanding of brains and new material developments, I guess... We are really a long way from it.

When we do create artificial general intelligence, it'll just be a legally kill-able human / better-than-human animal.

People will always find a way to break any kind of machine, be it physically or data-wise.

Bad correlations can be resolved by simulating the same scientific method we use in our research.
That way, if it sees your women-tampon example, it'll say it predicts a certain percentage chance of her having a child, but it won't be 100% sure about it...
It looks at the economy and says a recession has a chance of occurring between such-and-such dates.

Inputs can sometimes be faulty. When that happens, current humans/machines take faulty actions, and it will remain so in the future too, unless the AI has an ungodly number of systems to check it all.

 

Offline notadaveTopic starter

  • Contributor
  • Posts: 49
  • Country: de
Hardcoded
« Reply #3 on: September 11, 2019, 05:02:31 pm »
Wow! I had not anticipated much of a reaction, and certainly not one so soon.

What is fascinating to me is that there is a lot of already pre-programmed algorithms/intelligence in the biology of the brain. Even something as simple as a fruit fly can do stuff we can barely model, let alone program with millions of lines of code on massive hardware, requiring huge power to run.
Absolutely, it is amazing how much of our behavior and competence is clearly inherited. Making that point was not within the scope of my post, but it is certainly something that does not get the attention it deserves.

Every "fresh"/"empty" brain comes preformatted with many millions of years of evolution. Humans already have a protoform of speech built in. Every somewhat evolved creature has ways to orient itself and map its habitat.
Clearly there is no point in starting with randomly initialized neural networks and expecting that magic will happen.

Quote
If we can crack even the code of the measly fruit fly brain, discover how it knows how to master basic flying to and fro from what it perceives as sugar sources, avoid being swatted and hit by people, know how to land and take off, walk around... never mind all the behavioral characteristics of the mating ritual... any insect for that matter, with a puny tiny brain (look at a bee colony for example).
Even though I suspect that most of that is simpler than you think, it is clear that it is all there without fruit-fly school.

Quote
I think we have a fundamentally flawed understanding (or no understanding at all) of how actual brains work. Sure we can poke and prod areas and we know there are geographic locations for certain processes, and that many neurochemicals are involved
We have moved beyond that in the last couple of years, but for a long time things stood still at that point.

Quote
how is data is actually stored
There is recent progress on that front; we are getting there.

Quote
what are the algorithms
There are none; it does not work that way, but that has been known for a long time.

Quote
To me AI for now is simply "simulated" intelligence. [...] Some people will say even we are nothing but smoke and mirrors too... just a magnitude of orders more complex one, so what is the difference?
The difference is just gradual, more of the same.

Quote
I'm sure there is a difference in the fundamental approach we are taking.
No, unless you mean biology vs. electronics.
 

Offline notadaveTopic starter

  • Contributor
  • Posts: 49
  • Country: de
Re: A skeptics view on the AI Hype
« Reply #4 on: September 11, 2019, 05:07:57 pm »
It's super useful if you are already sure that this thing relates to that thing but don't know how. It fails when you just randomly input stuff.
bad correlations can be resolved by simulating the same scientific methods as what we do with our research.
this way, if it sees your example of women-tampon relation...it'll say, it predicts certain percentage of chances of her having a child, but it won't be 100% sure about it...
it looks at the economy and says rescission will have chances of occurring between date-so and so
inputs can sometimes be faulty. when it happens current humans/machines take faulty actions and it'll remain the same for future too unless it has ungodly amount of systems to check it all
Please elaborate and take time to write better English.
 

Offline tooki

  • Super Contributor
  • ***
  • Posts: 12960
  • Country: ch
Re: A skeptics view on the AI Hype
« Reply #5 on: September 11, 2019, 05:29:12 pm »
As I have observed, AI is one of those things that can never exist, in that as soon as we start using an AI technique in the real world, we raise the bar as to what counts as AI.

For example, AI research is where neural networks and stuff came from, but once we use that in real products, we call it "machine learning" and not AI. Tons of algorithms we use routinely now are/were technically AI.

(It's like the way that demonstrably efficacious alternative medicine cannot exist: as soon as we prove empirically that an "alternative" medicine works, it becomes part of mainstream medicine, and thus ceases to be "alternative"!)
 

Offline notadaveTopic starter

  • Contributor
  • Posts: 49
  • Country: de
Re: A skeptics view on the AI Hype
« Reply #6 on: September 11, 2019, 06:31:00 pm »
As I have observed, AI is one of those things that can never exist, in that as soon as we start using an AI technique in the real world, we raise the bar as to what counts as AI. [...] once we use that in real products, we call it "machine learning" and not AI. Tons of algorithms we use routinely now are/were technically AI.
I disagree. The problem is that AI is not well defined; today it is overused as a marketing term.
I differentiate three things that have been called AI:
algorithmic solutions, machine learning, and true artificial intelligence.
Only the difference between the last two is gradual, and only those two are AI. My 1990s chess computer was not intelligent.
If you know of sources that discuss the potential of neural networks compared to other ML methods, please share. My guess is that a deep NN can do things that, say, multiple SVMs cannot.
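The classic example behind that guess is XOR: no single linear threshold unit can compute it, while a tiny two-layer net can. A sketch, with hand-picked (not trained) weights, purely to show representational capacity:

```python
# XOR is not linearly separable, but one hidden layer suffices.

def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # hidden unit 1: OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: AND
    return step(h_or - h_and - 0.5)  # OR and not AND = XOR

cases = [(0, 0), (0, 1), (1, 0), (1, 1)]
net_out = [xor_net(a, b) for a, b in cases]   # [0, 1, 1, 0]

# Brute-force sanity check: no single linear unit over a small grid of
# weights and biases reproduces XOR (and none exists for any weights).
grid = [w / 2 for w in range(-8, 9)]
linear_can_do_xor = any(
    [step(w1 * a + w2 * b + b0) for a, b in cases] == [0, 1, 1, 0]
    for w1 in grid for w2 in grid for b0 in grid
)
```

The hidden layer buys the non-linearity that the single unit lacks; the open question is how far that kind of advantage extends.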
 

Online xrunner

  • Super Contributor
  • ***
  • Posts: 7829
  • Country: us
  • hp>Agilent>Keysight>???
Re: A skeptics view on the AI Hype
« Reply #7 on: September 11, 2019, 07:00:04 pm »
I disagree. The problem is that AI is not well defined. Today it is overused as a marketing term.
I differentiate three different things that have been called AI:
algorithmic solutions, machine learning, true artificial intelligence
Only the difference between the last two is gradual and only those are AI. My 1990's chess computer was not intelligent.

You can't design an artificial intelligence without defining what "intelligent" means and how to prove that what you designed is in fact "intelligent".
So, how do you wish to define "intelligent"? Or is it just "Trust me, I'll know it's intelligent when I see it (i.e. talk to it or interact with it)"?  :-//

If your 1990s chess program wasn't intelligent (you indicated you could tell that, but you didn't say how), then how would you tell that a future computer AI chess program was intelligent without looking in the black box? If they both beat humans (and I know the one I had could beat me in the 1990s), what is the true difference between an intelligent and a non-intelligent chess program?

For example here's two definitions - which definition (or pick another from elsewhere) fits your idea of intelligent?

Quote
Intelligent
adjective

1. having good understanding or a high mental capacity; quick to comprehend, as persons or animals: an intelligent student.
2. displaying or characterized by quickness of understanding, sound thought, or good judgment: an intelligent reply.
3. having the faculty of reasoning and understanding; possessing intelligence: intelligent beings in outer space.
4. Computers. pertaining to the ability to do data processing locally; smart: An intelligent terminal can edit input before transmission to a host computer.
5. Archaic. having understanding or knowledge (usually followed by of).

https://www.dictionary.com/browse/intelligent

Quote
Definition of intelligent

1a : having or indicating a high or satisfactory degree of intelligence and mental capacity
b : revealing or reflecting good judgment or sound thought : skillful
2a : possessing intelligence
b : guided or directed by intellect : rational
3a : guided or controlled by a computer especially : smart sense 7c — compare dumb sense 7
b : able to produce printed material from digital signals an intelligent copier

https://www.merriam-webster.com/dictionary/intelligent
I told my friends I could teach them to be funny, but they all just laughed at me.
 
The following users thanked this post: tooki

Offline Benta

  • Super Contributor
  • ***
  • Posts: 6375
  • Country: de
Re: A skeptics view on the AI Hype
« Reply #8 on: September 11, 2019, 07:48:58 pm »
The AI hype reminds me a lot of the hype around "fuzzy logic" in the 90s. Same thing, but at a lower level: self-learning/adjusting control systems. That died out again pretty quickly.
 
The following users thanked this post: tooki, Jacon

Offline notadaveTopic starter

  • Contributor
  • Posts: 49
  • Country: de
Re: A skeptics view on the AI Hype
« Reply #9 on: September 12, 2019, 05:15:44 am »
Or, is it just "Trust me - I'll know it's intelligent when I see it (i.e. talk to it or interact with it).
I have a well-defined idea of intelligence; I only did not write it out because it is long.
Wikipedia has articles on that topic:
https://en.wikipedia.org/wiki/Intelligence
https://en.wikipedia.org/wiki/Intellect
https://en.wikipedia.org/wiki/Human_intelligence
https://en.wikipedia.org/wiki/Understanding

Quote
If your 1990's chess program wasn't intelligent
It could not learn, therefore it failed to fulfill one of many criteria.
It also did not understand: it had no concept of chess. Its algorithm was chess-specific.

Quote
how would you tell that a future computer A.I. chess program was intelligent without looking in the Black Box?
Looking into the box is possible and helpful.

Quote
For example here's two definitions - which definition (or pick another from elsewhere) fits your idea of intelligent?
If I go down that rabbit hole, you might just pick at the words I use, and even though I could give definitions, I do not think it would help either of us.
 

Offline Raj

  • Frequent Contributor
  • **
  • Posts: 701
  • Country: in
  • Self taught, experimenter, noob(ish)
Re: A skeptics view on the AI Hype
« Reply #10 on: September 12, 2019, 03:56:20 pm »
It's super useful if you are already sure that this thing relates to that thing but don't know how. It fails when you just randomly input stuff.
bad correlations can be resolved by simulating the same scientific methods as what we do with our research.
this way, if it sees your example of women-tampon relation...it'll say, it predicts certain percentage of chances of her having a child, but it won't be 100% sure about it...
it looks at the economy and says rescission will have chances of occurring between date-so and so
inputs can sometimes be faulty. when it happens current humans/machines take faulty actions and it'll remain the same for future too unless it has ungodly amount of systems to check it all
Please elaborate and take time to write better English.
Well, current AI is just a tool to find a mathematical relation between two variables. You need to know what those variables are and be sure that they really have a hidden relation between them.

General AI (which simulates biology) will need to learn the scientific method, the same method we use in our research.
The original post mentions an example of two variables being wrongly correlated, namely women's tampon-buying habits and their chances of being pregnant.
With the scientific method, the machine will derive that the chance of a woman being pregnant when she stops buying tampons is high, but not 100%.
It's kind of similar to Renaissance humans thinking that rats 'spawn out of nowhere' near filth. With the scientific method, we discovered that rats are likely to be near filth, but don't just spawn there; there are other variables involved.

Another example the OP gave is that a machine will look at the economy and predict a recession with 100% certainty, and be wrong.
In reality, it'll predict a non-100% probability of a recession happening at various times.
 :phew: This was tiring.
 

Offline Raj

  • Frequent Contributor
  • **
  • Posts: 701
  • Country: in
  • Self taught, experimenter, noob(ish)
Re: A skeptics view on the AI Hype
« Reply #11 on: September 12, 2019, 03:59:49 pm »
As I have observed, AI is one of those things that can never exist, in that as soon as we start using an AI technique in the real world, we raise the bar as to what counts as AI.

For example, AI research is where neural networks and stuff came from, but once we use that in real products, we call it "machine learning" and not AI. Tons of algorithms we use routinely now are/were technically AI.

(It's like the way that demonstrably efficacious alternative medicine cannot exist: as soon as we prove empirically that an "alternative" medicine works, it becomes part of mainstream medicine, and thus ceases to be "alternative"!)

It's because language is changing faster than ever. Meanings of words used to change over thousands of years; now it's down to mere months.
Compared to other terms (especially those relating to PC culture), the definition of AI is stable.

Funny that AI could also mean outsourcing your workforce, as if contractors were born artificially... Test tube perhaps?  :-DD
 

Offline Raj

  • Frequent Contributor
  • **
  • Posts: 701
  • Country: in
  • Self taught, experimenter, noob(ish)
Re: A skeptics view on the AI Hype
« Reply #12 on: September 12, 2019, 04:04:07 pm »
The AI hype reminds me a lot of the hype around "fuzzy logic" in the 90s. Same thing, but at a lower level: self-learning/adjusting control systems. That died out again pretty quickly.
Do you mean a circuit similar to this? http://www.electronixandmore.com/projects/pong/index.html
 

Offline Benta

  • Super Contributor
  • ***
  • Posts: 6375
  • Country: de
Re: A skeptics view on the AI Hype
« Reply #13 on: September 12, 2019, 06:01:06 pm »
The AI hype reminds me a lot of the hype around "fuzzy logic" in the 90s. Same thing, but at a lower level: self-learning/adjusting control systems. That died out again pretty quickly.
Do you mean, a circuit similar to this?-http://www.electronixandmore.com/projects/pong/index.html

No way, where did you get that from?

I mean this:

https://en.wikipedia.org/wiki/Fuzzy_logic
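For those who never met it, a minimal sketch of the idea in Python (the temperature ranges and fan speeds are made up): truth becomes a degree in [0, 1] rather than a hard true/false, and rules blend those degrees.

```python
# Toy fuzzy-logic controller: membership functions plus a simple
# weighted-average defuzzification.

def tri(x, lo, peak, hi):
    # Triangular membership function: 0 outside [lo, hi], 1 at peak.
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def fan_speed(temp_c):
    # Two rules: IF warm THEN 50%, IF hot THEN 100%,
    # blended by membership-weighted average.
    warm = tri(temp_c, 15, 25, 35)
    hot = tri(temp_c, 25, 40, 55)
    total = warm + hot
    return 0.0 if total == 0 else (50 * warm + 100 * hot) / total
```

At 30 °C the controller is partly "warm" and partly "hot", so the output lands smoothly between the two rules instead of snapping from one to the other — which is exactly the self-adjusting-control flavour of the 90s hype.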

 

Offline windsmurf

  • Frequent Contributor
  • **
  • !
  • Posts: 625
  • Country: us
Re: A skeptics view on the AI Hype
« Reply #14 on: September 12, 2019, 07:47:58 pm »
As I have observed, AI is one of those things that can never exist, in that as soon as we start using an AI technique in the real world, we raise the bar as to what counts as AI. [...] once we use that in real products, we call it "machine learning" and not AI. Tons of algorithms we use routinely now are/were technically AI.
I disagree. The problem is that AI is not well defined. Today it is overused as a marketing term.
I differentiate three different things that have been called AI:
algorithmic solutions, machine learning, true artificial intelligence
Only the difference between the last two is gradual and only those are AI. My 1990's chess computer was not intelligent.
If you know sources that discuss the potential of neural networks compared to other ML methods, plz share. My guess is that a deep NN can do things that say multiple SVMs can not.

I agree, the term "AI" is overused and ill-defined.
Until this is cleared up, we can't have "intelligent" discussions around things like the "chances and risks of AI" (the first sentence of this thread).


 

Offline Zero999

  • Super Contributor
  • ***
  • Posts: 20181
  • Country: gb
  • 0999
Re: A skeptics view on the AI Hype
« Reply #15 on: September 12, 2019, 08:41:17 pm »
As I have observed, AI is one of those things that can never exist, in that as soon as we start using an AI technique in the real world, we raise the bar as to what counts as AI. [...] once we use that in real products, we call it "machine learning" and not AI. Tons of algorithms we use routinely now are/were technically AI.
I disagree. The problem is that AI is not well defined. Today it is overused as a marketing term.
I differentiate three different things that have been called AI:
algorithmic solutions, machine learning, true artificial intelligence
Only the difference between the last two is gradual and only those are AI. My 1990's chess computer was not intelligent.
If you know sources that discuss the potential of neural networks compared to other ML methods, plz share. My guess is that a deep NN can do things that say multiple SVMs can not.
He's right, although you also have a point.

A capability which was previously unique to humans, such as recognising photographs containing cats, was considered to be AI until computers were able to do it; now it's no longer AI, just machine learning. Some people feel threatened by machines, as though a computer being more intelligent than a person would make the person less human, but this isn't true. Protecting beings who are intelligent over those who aren't is dangerous. A child with a severe learning disability might be less intelligent than a dog, so does that mean it's fine to treat them like an animal? Perhaps they should be shot?

On the other hand, AI is overhyped. People tend to extrapolate: a lot of progress in AI neural networks has been made in recent years, and some believe it will continue at the same rate for the foreseeable future. In reality, areas such as pattern recognition have advanced a lot recently, but general AI seems a long way off. A modern chess program definitely is intelligent, because it can modify its behaviour and the same game engine can be used for other strategy games after a period of training, but it's not general AI. For example, a human who is good at chess will be able to pick up a similar game such as draughts fairly easily, much more so than someone who's never played chess before, as they can apply similar strategies, but the AI neural network will need to be completely retrained with the new rules. Concepts such as correlation not equalling causation completely baffle the current generation of AI systems. At the moment it's possible for a machine to learn a pattern, but why the pattern holds is another thing altogether.

One of the problems with neural networks is that even the developers don't really understand how they work. Even though the underlying principles are known, decoding what the neural network has actually learnt is a completely different matter.
 

Offline dnwheeler

  • Regular Contributor
  • *
  • Posts: 86
  • Country: us
Re: A skeptics view on the AI Hype
« Reply #16 on: September 12, 2019, 09:23:00 pm »
"Fuzzy logic" just means comparing to (possibly dynamic) ranges, not specific values. AI is just a "marketing" term for a set of (admittedly complex) algorithms that are driven by data accumulated from very large numbers of examples.
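To make "comparing to ranges, not specific values" concrete: a fuzzy membership function replaces a hard threshold with a degree of membership between 0 and 1. A minimal sketch (the variable names and breakpoints are invented for illustration, not taken from any particular controller):

```python
# Minimal sketch of a fuzzy membership function: instead of a hard
# threshold ("hot if T >= 30"), a temperature belongs to the set "hot"
# to a degree between 0 and 1, ramping linearly between two breakpoints.

def membership_hot(temp_c, low=20.0, high=30.0):
    """Degree (0..1) to which temp_c is considered 'hot'."""
    if temp_c <= low:
        return 0.0
    if temp_c >= high:
        return 1.0
    return (temp_c - low) / (high - low)

print(membership_hot(15))  # 0.0 -> clearly not hot
print(membership_hot(25))  # 0.5 -> partially hot
print(membership_hot(35))  # 1.0 -> fully hot
```

A fuzzy controller then combines several such membership degrees with min/max rules and "defuzzifies" the result back into a single output value.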
 

Offline Zero999

  • Super Contributor
  • ***
  • Posts: 20181
  • Country: gb
  • 0999
Re: A skeptics view on the AI Hype
« Reply #17 on: September 13, 2019, 08:36:57 am »
a set of (admittedly complex) algorithms that are driven by data accumulated from very large numbers of examples.
Isn't that a definition of intelligence? How does one pick out a picture of an apple from a number of photographs of other fruit? When they were a child, they learned what an apple looks like by gathering data from a number of examples, in the form of memories. They would have seen plenty of apples in the supermarket, with their parents, in books and on TV. When they look at the pictures of fruit, the algorithms in their brain will be driven by past experience to identify the picture of an apple.

As with many areas of technology, machine vision and AI image processing are inspired by biological systems in humans and animals.
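The "classification driven purely by accumulated examples" idea can be sketched in a few lines as a nearest-neighbour classifier. The feature values below (a made-up "redness" and "roundness") are invented purely for illustration, not a real vision pipeline:

```python
# Toy 1-nearest-neighbour "fruit classifier": the only knowledge it has
# is a store of labelled examples, and it classifies a new input by
# finding the closest remembered example. Feature values are invented.
import math

examples = [
    ((0.9, 0.9), "apple"),
    ((0.8, 0.8), "apple"),
    ((0.9, 0.2), "banana"),  # low "roundness" stands in for elongated shape
    ((0.2, 0.9), "lime"),    # low "redness"
]

def classify(features):
    """Return the label of the stored example closest to `features`."""
    return min(examples, key=lambda ex: math.dist(ex[0], features))[1]

print(classify((0.85, 0.95)))  # -> apple
print(classify((0.88, 0.25)))  # -> banana
```

More examples make the classifier better, exactly in the sense described above: the "algorithm" is trivial, and all the competence lives in the accumulated data.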
 
The following users thanked this post: notadave

Offline notadaveTopic starter

  • Contributor
  • Posts: 49
  • Country: de
Re: A skeptics view on the AI Hype
« Reply #18 on: September 13, 2019, 08:59:20 am »
A capability which was previously unique to humans, such as recognizing photographs containing cats, was considered to be AI until computers were able to do it; now it's no longer AI, just machine learning.
That is clearly an argument for not defining AI by what it does but how it does it.

Quote
Some people feel threatened by machines, as though a computer being more intelligent than a person would make the person less human, but that isn't true.
The opposite is the problem: the machine will be as worthy of protection as the human.

Quote
Protecting beings who are intelligent, over those who aren't, is dangerous. A child with a severe learning disability might be less intelligent than a dog, so does that mean it's fine to treat them like an animal? Perhaps they should be shot?
You are playing devil's advocate. There are so many things wrong with that train of thought, and none of it belongs here, so I will not go into it.

Quote
A modern chess computer program definitely is intelligent because it can modify its behavior and the same game engine can be used for other strategy games
Are you not falling for the same mistake you pointed out earlier? Many algorithms may be applied to different games. Adaptive optimizing behavior is not restricted to any specific implementation.
The real issue is that we wish to draw the line somewhere, even though the change is gradual. Just because there is a big difference between two things does not mean that there are leaps, steps, gaps, or clusters.
 

Offline Dubbie

  • Supporter
  • ****
  • Posts: 1115
  • Country: nz
Re: A skeptics view on the AI Hype
« Reply #19 on: September 13, 2019, 09:18:25 am »
Thanks for kicking off this very interesting topic for discussion, notadave. I am enjoying reading everyone's thoughts.

Personally I swing between optimism and pessimism when it comes to the possibility of general AI any time soon. As a philosophical naturalist, I can imagine sort-of how general AI might be possible, but on the other hand, the neural nets we currently have seem very narrowly constrained in the tasks they are able to be useful for. I don’t know if this is a hard limit to the general technique, or just that they aren’t complex enough yet.
 

Offline tooki

  • Super Contributor
  • ***
  • Posts: 12960
  • Country: ch
Re: A skeptics view on the AI Hype
« Reply #20 on: September 13, 2019, 11:32:35 am »
It's because the language is changing faster than ever. meaning of words used to change in 1000s of years. Now it's down to mere months.
That couldn't be more untrue. As a semi-trained linguist, I assure you that the meanings of words changed all the time in the past. If anything, meanings will change somewhat more slowly now, due to widespread literacy. (Having a written form of a language slows change somewhat. But widespread literacy is a very recent innovation.)

I can assure you, there are NO languages* where speakers 1000 years apart could have understood each other, never mind longer. (Indeed, at that scale, we likely wouldn't even consider them to be the same language any more.) You'd be hard-pressed to have speakers just a few hundred years apart understand each other easily.

*The only exceptions, possibly, would be things like Latin, retained as scholarly languages alongside whatever everyday language was used at the time. This is why Latin is no longer spoken in Italy: the everyday ("vulgar") Latin of ancient Rome evolved into Italian. And the vulgar Latin of other areas of the Roman Empire evolved into all the other Romance languages (French, Spanish, Portuguese, etc.). But even scholarly Latin did not stay unchanged. It's gone through multiple revivals, each with its own adaptations. And of course the Roman Catholic church maintained its own liturgical variety, Ecclesiastical Latin. 

Compared to other stuff (specially relating to PC culture), definition of AI is stable.
You think so? I don't. Was kinda literally my point, that "AI" is a moving target.
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 7375
  • Country: pl
Re: A skeptics view on the AI Hype
« Reply #21 on: September 13, 2019, 03:37:17 pm »
I agree, the term "AI" is overused and ill-defined. 
Until this is cleared up we can't have "Intelligent" discussions around things like "chances and risks of AI," the 1st sentence of this thread.
It seems like the OP refers to the ideas of "superintelligence", "AI singularity", "intelligence explosion", etc. promoted by some futurists, notably Yudkowsky and his fanclub at LessWrong.

The idea is that one day somebody will create a machine smarter than man, whatever that means; we don't even really know that yet. Such a machine will likely be able to build even smarter machines, and do so faster than we did. An uncontrollable process will explode, outsmarting everybody and taking over the world to turn it into some dystopia or destroy it completely.

If you have never heard of those things before, you just don't know what is being talked about :)

That being said, I think there are bigger concerns and more immediate end-of-the-world scenarios out there.
« Last Edit: September 13, 2019, 03:43:05 pm by magic »
 


Offline Benta

  • Super Contributor
  • ***
  • Posts: 6375
  • Country: de
Re: A skeptics view on the AI Hype
« Reply #23 on: September 13, 2019, 06:20:34 pm »
"Fuzzy logic" just means comparing to (possibly dynamic) ranges, not specific values. AI is just a "marketing" term for a set of (admittedly complex) algorithms that are driven by data accumulated from very large numbers of examples.

I never compared AI to Fuzzy Logic. I compared the hype.

 

Offline FreddieChopin

  • Regular Contributor
  • *
  • !
  • Posts: 102
  • Country: ua
Re: A skeptics view on the AI Hype
« Reply #24 on: September 13, 2019, 07:04:32 pm »
I agree, the term "AI" is overused and ill-defined. 
Until this is cleared up we can't have "Intelligent" discussions around things like "chances and risks of AI," the 1st sentence of this thread.
It seems like the OP refers to the ideas of "superintelligence", "AI singularity", "intelligence explosion", etc. promoted by some futurists, notably Yudkowsky and his fanclub at LessWrong.

The idea is that one day somebody will create a machine smarter than man, whatever that means; we don't even really know that yet. Such a machine will likely be able to build even smarter machines, and do so faster than we did. An uncontrollable process will explode, outsmarting everybody and taking over the world to turn it into some dystopia or destroy it completely.

If you have never heard of those things before, you just don't know what is being talked about :)

That being said, I think there are bigger concerns and more immediate end-of-the-world scenarios out there.

No matter how smart a computer got, it wouldn't survive a 23 mm autocannon mounted on an old pickup truck.
 

