
simple logic question too hard for LLMs


Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models

The researchers asked the LLMs "Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?" (with varying values for N and M).
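For reference, the intended answer is M + 1: each of Alice's brothers has all of Alice's M sisters, plus Alice herself. A minimal Python sketch of this (the function name and the sample (N, M) pairs are just illustrative, not taken from the paper):

```python
# Sketch of the puzzle family the paper varies over:
# "Alice has N brothers and M sisters. How many sisters does
# Alice's brother have?"  (Function name is my own choice.)
def sisters_of_alices_brother(n_brothers: int, m_sisters: int) -> int:
    # The number of brothers is irrelevant; each brother shares
    # all of Alice's sisters and additionally has Alice as a sister.
    return m_sisters + 1

# A few illustrative (N, M) variations:
for n, m in [(3, 6), (2, 4), (4, 1)]:
    print(f"N={n}, M={m} -> {sisters_of_alices_brother(n, m)} sisters")
```

The point, of course, is that the calculation is trivial once the relationship is mapped correctly; that mapping step is what the models reportedly fail at.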

The reported failure is typical: mapping simple relationships and small numbers onto basic arithmetic. We learned this at school: how to apply a simple calculation to answer a natural-language question.
They have a long way to go, and considering the gigantic efforts already made, maybe the research is missing something fundamental. Pity the poor model that needs a neural network to estimate 1+1 as 2 because it can't grasp how counting differs from adding.

Regards, Dieter

It's just being politically correct; we don't know whether Alice is a sister or a brother ;)

I was suspicious, although I have found the TS to be reliable. So, I posted a more narrowly defined question to ChatGPT:

A lawyer whose client allows him to use ChatGPT has a fool for a client.

Remember that until recently these LLMs were being trained purely on text. If something is not widely documented, it won't end up in the model. Really obvious, common-sense everyday matters tend not to be written about much, other than when making fun of idiots, precisely because they are so obvious. This largely explains why these models do far better with complex questions than with many elementary ones. Some models, like GPT-4, are now also being trained on things like tokenised images. Everything gets photographed, so more of the obvious everyday understanding of how the world works should be getting into newer models.

