Author Topic: An 'Intelligent' robot/computer can not answer this!!


Offline GlennSprigg

  • Frequent Contributor
  • **
  • Posts: 489
  • Country: au
  • Medically retired Tech. Old School / re-learning !
An 'Intelligent' robot/computer can not answer this!!
« on: October 13, 2019, 11:55:36 am »
Me:...   Ignore all that I tell you, as all that I say to you is a lie........
Robot:...  (Goes back to making toast, until batteries flat...)
 

Offline xrunner

  • Super Contributor
  • ***
  • Posts: 4239
  • Country: us
  • hp>Agilent>Keysight>?
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #1 on: October 13, 2019, 12:54:26 pm »
Me:...   Ignore all that I tell you, as all that I say to you is a lie........


Robot: "I can't ignore what you told me, because yesterday you directed me to give you all your insulin shots from now on, and I cannot, through inaction, allow harm to come to a human. You cannot override this fundamental directive with any command."
I am a Test Equipment Addict (TEA) - by virtue of this forum signature, I have now faced my addiction
 

Online Rerouter

  • Super Contributor
  • ***
  • Posts: 4369
  • Country: au
  • Question Everything... Except This Statement
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #2 on: October 13, 2019, 12:56:45 pm »
Nah, programmers are lazy. It would hit a timeout on the parser after 3-4 seconds and say "Sorry, I can't understand that, could you please repeat the command?" And I doubt a programming team would not have some easy way to collapse the commands for common paradoxes, possibly even fitting Easter-egg responses.
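That timeout guard is easy to sketch. Below, `parse_command` is a stand-in stub, and both the function names and the 3-second budget are illustrative assumptions, not any real assistant's API:

```python
import concurrent.futures
import time

def parse_command(text: str) -> str:
    """Stand-in parser: a 'paradox' makes it spin far past any sane budget."""
    if "lie" in text:
        time.sleep(0.5)   # pretend the parser is chasing its own tail
    return f"OK: {text}"

def handle(text: str, budget_s: float = 3.0) -> str:
    """Run the parser with a fixed time budget, then bail out politely."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(parse_command, text).result(timeout=budget_s)
    except concurrent.futures.TimeoutError:
        return "Sorry, I can't understand that, could you please repeat the command?"
    finally:
        pool.shutdown(wait=False)
```

The paradox never gets "answered"; the watchdog simply gives up on it and asks the human to try again.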
« Last Edit: October 13, 2019, 12:59:40 pm by Rerouter »
 

Offline edy

  • Super Contributor
  • ***
  • Posts: 1956
  • Country: ca
    • DevHackMod Channel
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #3 on: October 13, 2019, 01:25:15 pm »
Isn't this how Captain Kirk destroyed a computer in one of the original Star Trek episodes?  :-DD

In fact, it happened in many instances:

https://memory-alpha.fandom.com/wiki/Induced_self-destruction

And I quote, just one example from the fascinating list of episodes in the above link:

Quote
KIRK: "Everything Harry tells you is a lie. Remember that! Everything Harry tells you is a lie!"
MUDD: "Now listen to this carefully, Norman: I AM LYING!"
NORMAN: "You say you are lying, but if everything you say is a lie then you are telling the truth, but you cannot tell the truth because everything you say is a lie, but... you lie, you tell the truth, but you cannot for you l... Illogical! Illogical! Please explain! You are Human! Only Humans can explain their behavior! Please explain!"
KIRK: (sarcastic) "I am not programmed to respond in that area!" (TOS: "I, Mudd")
« Last Edit: October 13, 2019, 01:29:51 pm by edy »
YouTube: www.devhackmod.com
"Ye cannae change the laws of physics, captain" - Scotty
 
The following users thanked this post: GlennSprigg, SiliconWizard

Offline soldar

  • Super Contributor
  • ***
  • Posts: 2601
  • Country: es
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #4 on: October 13, 2019, 01:54:32 pm »
It's the classical battle of wits!
All my posts are made with 100% recycled electrons and bare traces of grey matter.
 

Online BravoV

  • Super Contributor
  • ***
  • Posts: 6202
  • Country: 00
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #5 on: October 13, 2019, 01:57:20 pm »
... unable to compute .. re-directing your request to our help-desk, your credit card will be charged accordingly ... please wait ...

Offline lwatts666

  • Supporter
  • ****
  • Posts: 60
  • Country: au
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #6 on: October 14, 2019, 02:27:20 am »
Recursive Parser Stack Overflow - deleting offending token 'GlennSprigg'  >:D
 

Offline Rick Law

  • Super Contributor
  • ***
  • Posts: 2696
  • Country: us
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #7 on: October 14, 2019, 03:11:33 am »
Ever heard of Defensive Programming?  Do a web search and you will find a lot of references.

Programs written by someone practiced in Defensive Programming techniques will likely handle problems like that with ease.
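As a flavour of the idea, a defensively written command dispatcher validates everything and traps failures rather than trusting its input. All names here are invented for illustration:

```python
def execute(command: str, known_commands: dict) -> str:
    """Defensive dispatch: validate everything before acting, never crash."""
    if not isinstance(command, str) or not command.strip():
        return "error: empty command"
    verb, _, arg = command.strip().partition(" ")
    action = known_commands.get(verb.lower())
    if action is None:
        return f"error: unknown command '{verb}'"
    try:
        return action(arg)
    except Exception as exc:   # one bad command must not take the robot down
        return f"error: {exc}"

commands = {
    "echo": lambda a: a,
    "add": lambda a: str(sum(int(x) for x in a.split())),
}
```

A paradoxical or malformed sentence is just one more input that fails validation and produces an error message instead of a crash.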
« Last Edit: October 14, 2019, 03:13:10 am by Rick Law »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 1108
  • Country: fi
    • My home page and email address
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #8 on: October 14, 2019, 10:56:30 am »
The way I would implement an expert system interacting with humans is by using a graph, with nodes being known individuals and objects, edges describing interactions and options, and operational notes as tags attached to both nodes and edges.  "Everything I say is a lie" is then just a tag or modifier on the node corresponding to that person, noting their unreliability.

Questions involving introspection, like "Is everything I say a lie?", are answered by examining the edges connected to that person's node, after first examining the node itself to decide whether the question needs an answer at all.  (This also avoids answering rhetorical questions.)

Social games appear as rings in the graph, and are trivially detected during the answering phase; they occur during normal operation, and there should be no reason for them to cause any difficulties for such an expert system.

Just because a machine operates using strict logic does not mean it is necessarily subject to logic bombs.
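A toy version of that graph-plus-tags idea might look like the following. The class, tag, and character names are invented for illustration; a real expert system would be far richer:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Nodes are people/objects, edges are interactions; tags annotate nodes."""
    def __init__(self):
        self.node_tags = defaultdict(set)   # node -> set of tags
        self.edges = defaultdict(set)       # node -> set of neighbours

    def tag(self, node, tag):
        self.node_tags[node].add(tag)

    def connect(self, a, b):
        self.edges[a].add(b)

    def should_answer(self, node):
        """Examine the node before engaging with its question."""
        return "unreliable" not in self.node_tags[node]

    def find_cycle(self, start):
        """Detect 'social games': rings in the graph, via iterative DFS."""
        stack = [(start, iter(self.edges[start]))]
        on_path, visited = {start}, {start}
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                stack.pop()
                on_path.discard(node)
            elif nxt in on_path:
                return True                 # ring found
            elif nxt not in visited:
                visited.add(nxt)
                on_path.add(nxt)
                stack.append((nxt, iter(self.edges[nxt])))
        return False

g = KnowledgeGraph()
g.tag("Harry", "unreliable")    # "everything I say is a lie" becomes a tag
g.connect("Harry", "Norman")
g.connect("Norman", "Harry")    # the liar loop is just a ring in the graph
```

Marking Harry unreliable is a constant-time tag update, and the Harry-Norman loop is just a ring the DFS finds, without the system ever trying to "evaluate" the paradox.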
 
The following users thanked this post: GlennSprigg

Online Rerouter

  • Super Contributor
  • ***
  • Posts: 4369
  • Country: au
  • Question Everything... Except This Statement
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #9 on: October 14, 2019, 11:03:45 am »
https://www.youtube.com/embed/JR4H76SCCzY?start=13

We also cannot overlook that a learning system that knows paradoxes exist would then know to treat them as paradoxes  ^-^
 

Offline GlennSprigg

  • Frequent Contributor
  • **
  • Posts: 489
  • Country: au
  • Medically retired Tech. Old School / re-learning !
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #10 on: October 21, 2019, 10:16:51 am »
https://www.youtube.com/embed/JR4H76SCCzY?start=13

We also cannot overlook that a learning system that knows paradoxes exist would then know to treat them as paradoxes  ^-^

I think 'Nominal Animal' explained it in a 'logical'? way...
However, given what you just said, how would it go about "treating them as a paradox"?
Just ignoring that last 'order' you gave it? Or also then questioning OTHER commands/statements
you have made in the past? And then reviewing its own responses to date...
I think the future's autonomous cars are going to have to factor in a lot more than sensor inputs!!   :phew:
 

Online Rerouter

  • Super Contributor
  • ***
  • Posts: 4369
  • Country: au
  • Question Everything... Except This Statement
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #11 on: October 21, 2019, 10:28:47 am »
Well, that then falls under voting logic and conflict resolution: "Sorry, your last command conflicted with itself, could you please clarify?"

In reality, modern AI systems are steering towards values between 0 and 1 instead of either extreme. "This statement is false", if run through a natural-language processor, would just return an invalid statement, not try to follow the chain of true/false.

Equally, commands like "I want you to treat everything I say as a lie" would likely be toned down so these systems stay stable over short-term changes. Instead of following the command 100% as the new truth, the system might shift by 70%, so the AI now considers you an untrustworthy source of information, meaning it will have to work to cross-check you rather than just inverting a 0-or-1 option in the chain.

I suppose if you wanted to hurt an AI that was smart enough to handle paradoxes, you would ask it to solve NP-hard problems, or questions with a lot of recursion: something it can resolve, but that will take a lot of its resources and time.
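The fractional shift described above can be sketched in a few lines. The 0.7 gain and the function name are illustrative assumptions, not taken from any real system:

```python
def update_trust(current: float, commanded: float, gain: float = 0.7) -> float:
    """Move trust toward the commanded value by only a fraction of the gap.

    Values live in [0, 1].  A gain below 1 keeps the system stable: a single
    extreme command ('treat everything I say as a lie') shifts trust, but
    never slams it straight to 0 or 1.
    """
    new = current + gain * (commanded - current)
    return min(1.0, max(0.0, new))

trust = 1.0                        # fully trusted speaker
trust = update_trust(trust, 0.0)   # "everything I say is a lie" -> about 0.3
```

Repeated extreme commands still converge toward 0, so a persistent liar does get distrusted; they just cannot flip the state in one utterance.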
 
The following users thanked this post: GlennSprigg

Offline Ducttape

  • Regular Contributor
  • *
  • Posts: 59
  • Country: us
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #12 on: October 21, 2019, 06:45:00 pm »
Gene Roddenberry pro-actively agreed with you in 1960-something. :)

« Last Edit: October 21, 2019, 06:48:42 pm by Ducttape »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 1108
  • Country: fi
    • My home page and email address
Re: An 'Intelligent' robot/computer can not answer this!!
« Reply #13 on: October 21, 2019, 07:42:00 pm »
We also cannot overlook that a learning system that knows paradoxes exist would then know to treat them as paradoxes  ^-^
However, given what you just said, how would it go about "treating them as a paradox" ?
If you describe some logic using linear algebra (say, numerical values between 0 and 1 describing reliability or truthfulness), with a matrix M describing the relations, vector x the variables/unknowns/things to be decided, and vector y the results, the equation is Mx = y, and the solution is x = M⁻¹y.

When the logic is paradoxical, M⁻¹ does not exist; M is singular.  This is trivial to detect by any computer that can do such algebra; in fact, matrix inversion failing because the matrix is singular can happen even in computer graphics, when something glitches.  Rather than crash, you just get a no-result (or, if there are no error checks, garbage results) for that particular operation, without any other harm.
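That singularity check is easy to sketch without any library support. Here the "paradox" is encoded as two contradictory constraints on the same pair of truth values; this toy encoding is my own illustration, not the post author's actual system:

```python
def is_singular(M, eps=1e-12):
    """Gaussian elimination with partial pivoting; M is singular exactly
    when some pivot vanishes, so we detect that instead of crashing."""
    n = len(M)
    A = [row[:] for row in M]                  # work on a copy
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[pivot][col]) < eps:
            return True                        # no usable pivot: no inverse
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    return False

# 'x1 - x2 = 0' and 'x1 - x2 = 1' contradict each other: M has no inverse,
# and the check reports "no result" rather than looping forever.
M_paradox = [[1.0, -1.0], [1.0, -1.0]]
M_fine    = [[1.0, -1.0], [1.0,  1.0]]
```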

Logic bombs only work on entities that follow logic using brute force, without examining the set of conditions in any way, be that via linear algebra, context meshes (graphs), or whatever.  That is like finding square roots by repeated trial and error: guessing a value and comparing its square to the original value.  It works, but is terribly inefficient.  (Using, say, the Newton-Raphson method, with the initial guess half the original value, gives a handful of significant digits in just a few iterations.  The Babylonian method is mathematically the same (rearrange the terms in one and you get the other), but easier to do by hand.)
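The square-root aside can be made concrete; a minimal Babylonian iteration, starting from half the original value as described:

```python
def babylonian_sqrt(value: float, iterations: int = 6) -> float:
    """Babylonian method (Newton-Raphson on x^2 - value, rearranged):
    start from half the value, then repeatedly average guess and value/guess."""
    guess = value / 2.0
    for _ in range(iterations):
        guess = (guess + value / guess) / 2.0
    return guess
```

A handful of iterations already gives several significant digits, matching the claim above.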

If one expects logic bombs to work on computers, one should expect them to burst in flames if you ask for the square root of negative one.  (They don't; they either say the answer is one imaginary unit, or give an error because there is no such real number.)

Now, if I understood correctly, GlennSprigg's question is what an "intelligent system" (an expert system) should do when it detects a paradox.  The answer in my opinion is obvious: treat it as the conflict it is.

That is, humans are not rational infallible beings.  In a bad mood, a human can kick a robot without any real reason whatsoever.  The robot does not need to be in the way, and probably should not "learn" anything from that kick.  In particular, you do not want it to behave like an animal: getting scared of people, and neurotic from unprovoked negative feedback.  You do not want the robot to recycle itself just because a drunkard slobbered something it interpreted as an instruction to do so.  So, you essentially need a filtering "program" (in the traditional sense; i.e., a complex, but rather rigid categorization system) to "triage" conflicts: to choose which ones to really examine and integrate into itself (affecting its expert system state and/or rules), and which ones to just ignore with a proverbial shrug, :-//, or a snarky/humorous comment.

What such a "program" should be is an excellent question, and is an ongoing research topic in human-machine interaction, I believe.  I wonder how culture-dependent such filtering "programs" would be?
« Last Edit: October 21, 2019, 07:45:03 pm by Nominal Animal »
 
The following users thanked this post: GlennSprigg

