MIT Researchers Teach LLMs to Solve Complex Planning Challenges with Novel Framework

Large language models (LLMs) are powerful, but they often struggle with complex planning problems. Imagine a coffee company optimizing its supply chain, balancing bean sourcing, roasting, and shipping to meet a demand increase. Asking ChatGPT for an optimal plan might seem logical, but LLMs often underperform in such scenarios.

Researchers at MIT have developed a novel framework that guides LLMs to break down complex problems like humans do, leveraging powerful software tools for automated solutions. This approach allows users to describe problems in natural language, eliminating the need for task-specific examples for LLM training or prompting.

The framework encodes user prompts into a format decipherable by optimization solvers designed for tough planning challenges. During formulation, the LLM checks its work at intermediate steps, correcting errors rather than abandoning the process.

In tests across nine complex challenges, including optimizing warehouse robot routes, the framework achieved an 85% success rate, significantly outperforming the best baseline at 39%. This versatile framework has potential applications in scheduling airline crews or managing factory machine time.

“Our research introduces a framework that essentially acts as a smart assistant for planning problems. It can figure out the best plan that meets all the needs you have, even if the rules are complicated or unusual,” says Yilun Hao, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS) and lead author of a paper on this research.

The team’s approach, named LLM-Based Formalized Programming (LLMFP), involves providing a natural language description of the problem, background information, and a query outlining the goal. LLMFP then guides the LLM to reason about the problem, identify key decision variables and constraints, and encode the information into a mathematical formulation suitable for an optimization solver.

The LLM details the requirements of each variable before encoding the information into a mathematical formulation of an optimization problem. It writes code that encodes the problem and calls the attached optimization solver, which arrives at an ideal solution.
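The formulate-then-solve step can be pictured with a toy version of the coffee supply-chain example. This is a hedged sketch, not the researchers' implementation: the facility names, capacities, costs, and demand figure are invented, and a brute-force search stands in for the optimization solver that LLMFP would actually call.

```python
from itertools import product

# Toy formulation of the coffee supply-chain example (all numbers hypothetical).
# Decision variables: batches roasted at each of two facilities.
# Constraints: per-facility capacity; total output must meet demand.
# Objective: minimize total roasting cost.

CAPACITY = {"facility_a": 10, "facility_b": 8}   # max batches per week
COST = {"facility_a": 120, "facility_b": 90}     # dollars per batch
DEMAND = 14                                      # batches needed

def solve():
    """Exhaustively search the (small) feasible region for the cheapest plan."""
    best_plan, best_cost = None, float("inf")
    for a, b in product(range(CAPACITY["facility_a"] + 1),
                        range(CAPACITY["facility_b"] + 1)):
        if a + b < DEMAND:            # demand constraint violated
            continue
        cost = a * COST["facility_a"] + b * COST["facility_b"]
        if cost < best_cost:
            best_plan, best_cost = {"facility_a": a, "facility_b": b}, cost
    return best_plan, best_cost

plan, cost = solve()
print(plan, cost)  # → {'facility_a': 6, 'facility_b': 8} 1440
```

A real pipeline would emit this formulation in a solver's input language rather than enumerating candidates, but the structure is the same: named decision variables, explicit constraints, and an objective to optimize.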

To ensure a viable plan, LLMFP analyzes the solution and corrects any errors in the formulation. This self-assessment module also allows the LLM to add any implicit constraints missed initially. For example, the system recognizes that a coffee shop can’t ship a negative amount of roasted beans.
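One way to picture this self-assessment step, purely as an illustrative sketch (the function and variable names here are invented, not LLMFP's actual code): validate a candidate plan against implicit constraints and report which constraints should be added back into the formulation.

```python
# Illustrative sketch of a self-assessment pass (names invented for this example).
# Given a candidate plan, flag violations of implicit constraints so the
# formulation can be repaired before re-solving.

def check_plan(plan):
    """Return a list of repair suggestions for constraints the plan violates."""
    repairs = []
    for variable, value in plan.items():
        if value < 0:  # a shop can't ship a negative amount of beans
            repairs.append(f"add constraint: {variable} >= 0")
    return repairs

# A formulation missing a non-negativity constraint may "optimize" by shipping
# negative beans; the check catches it and proposes the missing constraint.
bad_plan = {"beans_shipped": -3, "beans_roasted": 12}
print(check_plan(bad_plan))  # → ['add constraint: beans_shipped >= 0']
```

The point of the loop in LLMFP is that such findings feed back into the formulation, which is then re-solved, rather than the process being abandoned on the first faulty answer.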

In a series of tests, their framework achieved an average success rate between 83 and 87 percent across nine diverse planning problems using several LLMs. Unlike other approaches, LLMFP doesn’t require domain-specific examples for training and can be adapted for different optimization solvers by adjusting the prompts fed to the LLM.

The researchers plan to enhance LLMFP by enabling it to take images as input, supplementing natural language descriptions to solve tasks that are difficult to fully describe with text.
