We also cannot overlook that a learning system that knows paradoxes exist would then know to treat them as a paradox.
However, given what you just said, how would it go about "treating them as a paradox"?
If you describe some logic using linear algebra (say, numerical values between 0 and 1 describing reliability or truthfulness), with a matrix M describing the relations, vector x the variables/unknowns/things to be decided, and vector y the results, the equation is M x = y, and the solution is x = M⁻¹ y.
When the logic is paradoxical, M⁻¹ does not exist; M is singular. This is trivial to detect by any computer that can do such algebra; in fact, it (matrix inversion being impossible because the matrix is singular) can happen even in computer graphics, when something glitches. Rather than crash, you just get a no-result (or, if there are no error checks, garbage results) for that particular operation, without any other harm.
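To make that concrete, here is a minimal sketch in Python/NumPy. The encoding of the "relations" is my own invention for illustration (two contradictory statements about the same quantity), but the detection mechanism is exactly the one described above: the solver refuses because M is singular, and the program simply reports the conflict instead of crashing.

import numpy as np

# Hypothetical encoding: each row of M is one relation over the unknown
# truth values x, and y holds the stated results.  Two contradictory
# statements about the same quantity make M singular -- the "paradox" case.
M = np.array([[1.0, -1.0],   # statement 1:  x1 - x2 = 1  ("they disagree")
              [1.0, -1.0]])  # statement 2:  x1 - x2 = 0  ("they agree")
y = np.array([1.0, 0.0])

try:
    x = np.linalg.solve(M, y)      # effectively x = M^-1 y
    print("consistent, solution:", x)
except np.linalg.LinAlgError:
    # No inverse exists, so flag the conflict and move on,
    # instead of bursting into flames.
    print("singular matrix: the relations are contradictory; skipping")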
Logic bombs only work on entities that follow logic using brute force, without examining the set of conditions in any way, be that via linear algebra, context meshes (graphs), or whatever. That is like finding square roots by repeated trial and error: guessing a value and comparing its square to the original value. It works, but is terribly inefficient. (Using, say, the Newton-Raphson method with the initial guess at half the original value gives a handful of significant digits in just a few iterations. The Babylonian method is mathematically the same (rearrange the terms in one and you get the other), but easier to do by hand.)
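For the curious, here is a small sketch of that Babylonian/Newton-Raphson iteration (Python assumed; the function name and iteration count are mine, and it assumes a positive input):

def babylonian_sqrt(value, iterations=6):
    # Babylonian / Newton-Raphson square root: start from half the
    # original value and average the guess with value/guess each round.
    guess = value / 2.0
    for _ in range(iterations):
        guess = (guess + value / guess) / 2.0
    return guess

print(babylonian_sqrt(2.0))   # ~1.414213562, already accurate after a few rounds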
If one expects logic bombs to work on computers, one should expect them to burst into flames if you ask for the square root of negative one. (They don't; they either say the answer is one imaginary unit, or give an error because there is no such real number.)
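Both behaviours are easy to demonstrate; a trivial Python example (my choice of language) shows the two outcomes side by side:

import math
import cmath

print(cmath.sqrt(-1))   # (0+1j): one imaginary unit, no flames involved
try:
    math.sqrt(-1)       # the real-valued version refuses instead of exploding
except ValueError as err:
    print("no such real number:", err)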
Now, if I understood correctly, GlennSprigg's question is what an "intelligent system" (an expert system) should do when it detects a paradox. The answer in my opinion is obvious: treat it as the conflict it is.
That is, humans are not rational, infallible beings. In a bad mood, a human can kick a robot without any real reason whatsoever. The robot does not need to be in the way, and probably should not "learn" anything from that kick. In particular, you do not want it to behave like an animal: getting scared of people and neurotic from unprovoked negative feedback. You do not want the robot to recycle itself just because a drunkard slobbered something it interpreted as an instruction to do so. So, you essentially need a filtering "program" (in the traditional sense, i.e. a complex but rather rigid categorization system) to "triage" conflicts: to choose which ones to really examine and integrate into itself (affecting its expert system state and/or rules), and which ones to just ignore with a proverbial shrug or a snarky/humorous comment.
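As a purely hypothetical sketch of what such a rigid triage "program" might look like (all field names and thresholds below are invented for illustration, not a real design):

from dataclasses import dataclass

@dataclass
class Conflict:
    source_reliability: float   # 0..1, how much the source is trusted
    severity: float             # 0..1, how strongly it contradicts current rules
    is_instruction: bool        # does it ask the system to act or change itself?

def triage(conflict: Conflict) -> str:
    # Self-modifying instructions from unreliable sources are dropped outright
    # (the "drunkard tells the robot to recycle itself" case).
    if conflict.is_instruction and conflict.source_reliability < 0.5:
        return "ignore"
    # Reliable, significant contradictions are worth examining and possibly
    # integrating into the expert system's state and/or rules.
    if conflict.source_reliability >= 0.5 and conflict.severity >= 0.5:
        return "examine_and_integrate"
    # Everything else gets the proverbial shrug or a snarky comment.
    return "shrug_or_quip"

print(triage(Conflict(source_reliability=0.2, severity=0.9, is_instruction=True)))   # ignore
print(triage(Conflict(source_reliability=0.9, severity=0.8, is_instruction=False)))  # examine_and_integrate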
What such a "program" should be is an excellent question, and is an ongoing research topic in human-machine interaction, I believe. I wonder how culture-dependent such filtering "programs" would be?