Challenging the Notion That LLMs Can’t Reason: A Case Study with Einstein’s Puzzle

When Apple declared that LLMs can't reason, they forgot one crucial detail: a hammer isn't meant to turn screws. In our case study of Einstein's classic logic puzzle, we discovered something fascinating. While language models initially stumbled with pure reasoning, making amusing claims like "Plumbers don't drive Porsches", they excelled at an unexpected task: generating code that solves the puzzle for them.
Image Credit: SmartR AI

Introduction to LLMs and the Reasoning Debate

A recent Apple publication argued that Large Language Models (LLMs) cannot effectively reason. While there is some merit to this claim regarding out-of-the-box performance, this article demonstrates that with proper application, LLMs can indeed solve complex reasoning problems.

The Initial Experiment: Einstein’s Puzzle


We set out to test LLM reasoning capabilities using Einstein’s puzzle, a complex logic problem involving 5 houses with different characteristics and 15 clues to determine who owns a fish. Our initial tests with leading LLMs showed mixed results:

  • OpenAI’s model correctly guessed the answer, but without clear reasoning
  • Claude provided an incorrect answer
  • When we modified the puzzle with new elements (cars, hobbies, drinks, colors, and jobs), both models failed significantly
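To make the modified puzzle concrete, one natural representation assigns each category a tuple indexed by house position. A minimal sketch, assuming Python and illustrative attribute values (the exact names used in the study are not given in this article):

```python
# Hypothetical representation of a candidate arrangement for the
# modified puzzle: five houses, five categories (cars, hobbies,
# drinks, colors, jobs). Index = house position, left to right.
from dataclasses import dataclass

@dataclass
class Arrangement:
    cars: tuple
    hobbies: tuple
    drinks: tuple
    colors: tuple
    jobs: tuple

candidate = Arrangement(
    cars=("Porsche", "Tesla", "Volvo", "Jeep", "Mini"),
    hobbies=("Music", "Chess", "Golf", "Reading", "Cooking"),
    drinks=("Tea", "Coffee", "Milk", "Juice", "Water"),
    colors=("Pink", "Orange", "Blue", "Green", "White"),
    jobs=("Plumber", "Doctor", "Teacher", "Chef", "Pilot"),
)

# A clue such as "the Plumber lives in the Pink house" then becomes a
# simple positional check:
plumber_house = candidate.jobs.index("Plumber")
clue_holds = candidate.colors[plumber_house] == "Pink"
```

Each clue reduces to a check over house indices, which is exactly the shape a constraint solver expects later on.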

Tree of Thoughts Approach and Its Challenges

We implemented our Tree of Thoughts approach, where the model would:

  1. Make guesses about house arrangements
  2. Use critics to evaluate rule violations
  3. Feed this information back for the next round
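The loop above can be sketched in a few lines. This is a minimal skeleton, not the production system: `propose` and `critique` stand in for the LLM guesser and the LLM critics, which in the real setup would each be a prompt.

```python
# Skeleton of the Tree of Thoughts loop described above.
# propose(feedback)   -> a candidate house arrangement (stub for an LLM call)
# critique(candidate) -> list of violated rules (stub for the LLM critics)

def tree_of_thoughts(propose, critique, max_rounds=10):
    """Iterate guess -> critique -> feedback until no rules are violated."""
    feedback = []
    for _ in range(max_rounds):
        candidate = propose(feedback)
        violations = critique(candidate)
        if not violations:
            return candidate      # all rules satisfied
        feedback = violations     # fed back into the next round
    return None                   # gave up within the round budget
```

The quality of the whole loop hinges on `critique` being logically sound, which is precisely where the failures described next appeared.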

However, this revealed several interesting failures in reasoning:

Logic Interpretation Issues

The critics often struggled with basic logical concepts. For example, when evaluating the rule “The Plumber lives next to the Pink house,” we received this confused response:

“The Plumber lives in House 2, which is also the Pink house. Since the Plumber lives in the Pink house, it means that the Plumber lives next to the Pink house, which is House 1 (Orange).”
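The confusion is avoidable once "next to" is pinned down precisely: two houses are neighbours if and only if their positions differ by exactly one, so a house is never next to itself. A two-line check makes this unambiguous:

```python
def next_to(i, j):
    """Houses i and j are neighbours iff their positions differ by exactly 1."""
    return abs(i - j) == 1

# If the Plumber lives in the Pink house (same position), the clue
# "the Plumber lives next to the Pink house" is NOT automatically satisfied:
print(next_to(2, 2))  # False: a house is not its own neighbour
print(next_to(1, 2))  # True: adjacent positions
```

This is the kind of crisp predicate the critics needed but failed to apply consistently.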

Bias Interference

The models sometimes inserted unfounded biases into their reasoning. For instance:

“The Orange house cannot be in House 1 because the Plumber lives there and the Plumber does not drive a Porsche.”

The models also made assumptions about what music Porsche drivers would listen to, demonstrating how internal biases can interfere with pure logical reasoning.

A Solution Through Code Generation

While direct reasoning showed limitations, we discovered that LLMs could excel when used as code generators. We asked SCOTi to write MiniZinc code to solve the puzzle, resulting in a well-formed constraint programming solution. The key advantages of this approach were:

  1. Each rule could be cleanly translated into code statements
  2. The resulting code was highly readable
  3. MiniZinc could solve the puzzle efficiently

Example of Clear Rule Translation

The MiniZinc code demonstrated elegant translation of puzzle rules into constraints. For instance:

% Statement 11: The man who enjoys Music lives next to the man who drives the Porsche
% Note: /\ means AND in MiniZinc
constraint exists(i,j in 1..5)(abs(i-j) == 1 /\ hobbies[i] = Music /\ cars[j] = Porsche);
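For readers without a MiniZinc installation, the same constraint can be mirrored in plain Python as a brute-force search. This is only an illustrative sketch over a reduced three-house instance with hypothetical values, not the study's solver:

```python
from itertools import permutations

# Reduced, hypothetical instance: 3 houses, 2 categories.
HOBBIES = ("Music", "Chess", "Golf")
CARS = ("Porsche", "Tesla", "Volvo")

def statement_11(hobbies, cars):
    """Some adjacent pair has Music in one house and the Porsche in the other."""
    return any(
        abs(i - j) == 1 and hobbies[i] == "Music" and cars[j] == "Porsche"
        for i in range(len(hobbies))
        for j in range(len(cars))
    )

# Enumerate every assignment of hobbies and cars to houses,
# keeping only those that satisfy the constraint.
solutions = [
    (h, c)
    for h in permutations(HOBBIES)
    for c in permutations(CARS)
    if statement_11(h, c)
]
```

A dedicated solver like MiniZinc prunes this search space intelligently instead of enumerating it, which is why it scales to the full five-house, fifteen-clue puzzle.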

If you would like to get the full MiniZinc code, please contact me.

Implications and Conclusions: Rethinking the Role of LLMs

This experiment reveals several important insights about LLM capabilities:

  1. Direct reasoning with complex logic can be challenging for LLMs
  2. Simple rule application works well, but performance degrades when multiple steps of inference are required
  3. LLMs excel when used as agents to generate code for solving logical problems
  4. The combination of LLM code generation and traditional constraint solving tools creates powerful solutions

The key takeaway is that while LLMs may struggle with certain types of direct reasoning, they can be incredibly effective when properly applied as components in a larger system. This represents a significant advancement in software development capabilities, demonstrating how LLMs can be transformative when used strategically rather than as standalone reasoning engines.

This study reinforces the view that LLMs are best understood as transformational software components rather than complete reasoning systems. Their impact on software development and problem-solving will continue to evolve as we better understand how to leverage their strengths while working around their limitations.

