MIT Researchers Teach LLMs to Tackle Complex Planning Challenges with Novel Framework

Large language models (LLMs) have demonstrated remarkable capabilities across various domains, but they often struggle with complex planning problems. Imagine a coffee company optimizing its supply chain, balancing supplier capacities, roasting costs, and shipping logistics to meet a demand increase. Directly asking an LLM like ChatGPT for an optimal plan often yields unsatisfactory results. To address this limitation, researchers at MIT have developed a novel framework that guides LLMs to break down complex planning problems and leverage powerful software tools for efficient solutions.
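To make the coffee-company scenario concrete, here is a minimal sketch of the kind of optimization problem involved: choose how much to source from each supplier, respecting capacities, while minimizing total cost. The suppliers, capacities, costs, and demand figures are all illustrative assumptions, and the brute-force search stands in for the dedicated solvers discussed below.

```python
from itertools import product

# Hypothetical coffee supply data: capacity and per-unit total cost
# (roasting + shipping) for each supplier. All numbers are illustrative.
suppliers = {"A": {"capacity": 60, "cost": 2.0},
             "B": {"capacity": 80, "cost": 3.5}}
demand = 100  # units of roasted beans needed

def best_allocation(suppliers, demand):
    """Exhaustively search integer allocations that exactly meet demand,
    returning the cheapest plan and its cost."""
    names = list(suppliers)
    ranges = [range(suppliers[n]["capacity"] + 1) for n in names]
    best, best_cost = None, float("inf")
    for alloc in product(*ranges):
        if sum(alloc) != demand:
            continue
        cost = sum(a * suppliers[n]["cost"] for a, n in zip(alloc, names))
        if cost < best_cost:
            best, best_cost = dict(zip(names, alloc)), cost
    return best, best_cost

plan, cost = best_allocation(suppliers, demand)
```

Even this toy version hints at the difficulty: the search space grows multiplicatively with each supplier and constraint, which is why realistic instances require specialized optimization solvers rather than enumeration.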

Instead of modifying LLMs to become better planners, the MIT team introduced a framework that mirrors human problem-solving approaches. This framework enables LLMs to dissect problems and utilize optimization solvers to efficiently tackle intricate planning challenges automatically. The user simply describes the problem in natural language, eliminating the need for task-specific examples for training or prompting the LLM. The model then translates the user’s text into a format compatible with an optimization solver designed to handle extremely tough planning scenarios.

During this formulation process, the LLM continuously validates its work at each step to ensure accurate translation for the solver. When errors are detected, the LLM attempts to correct the broken parts of the formulation instead of halting the process.

In tests across nine complex challenges, including minimizing warehouse robot travel distances, the MIT framework achieved an impressive 85% success rate, significantly outperforming the best baseline, which only attained a 39% success rate. This versatile framework has the potential to be applied in various multistep planning tasks, such as scheduling airline crews or managing machine time within a factory setting.

According to Yilun Hao, a graduate student at the MIT Laboratory for Information and Decision Systems (LIDS) and lead author of the research paper, their work introduces a smart assistant for planning problems. It can determine the best plan that satisfies all requirements, even with complex or unusual constraints. Hao’s co-authors include Yang Zhang, a research scientist at the MIT-IBM Watson AI Lab, and Chuchu Fan, an associate professor of aeronautics and astronautics and LIDS principal investigator. The research is set to be presented at the International Conference on Learning Representations.

The Fan group specializes in developing algorithms that automatically solve combinatorial optimization problems, which involve numerous interrelated decision variables and can have billions of potential choices. These algorithmic solvers apply well-established optimization principles to problems far too complex for a person to work through by hand.

Fan noted that LLMs could allow nonexperts to use these solving algorithms by translating a domain expert's description of a problem into a format the solver can handle. This insight led to the development of LLM-Based Formalized Programming (LLMFP), in which the user supplies a natural-language description of the problem, relevant background information, and a query outlining their goal.

LLMFP prompts an LLM to reason about the problem and determine the decision variables and key constraints needed to shape the optimal solution. The LLM details the requirements of each variable before encoding the information into a mathematical formulation of an optimization problem. It then writes code that encodes the problem and calls the attached optimization solver, which returns an optimal solution. Any mistakes in the solution come from errors in the formulation process, according to Fan.
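The separation the article describes, an LLM producing a structured formulation that a solver then consumes, can be sketched as follows. The dictionary layout, the tiny machine-scheduling numbers, and the brute-force `solve` routine are all illustrative assumptions, not LLMFP's actual intermediate format; the real system hands the encoded problem to an off-the-shelf optimization solver.

```python
from itertools import product

# Hypothetical intermediate formulation an LLM might produce for a tiny
# machine-scheduling problem: variable bounds (hours on each machine),
# a constraint (produce at least 180 units), and an objective (energy cost).
# All names and numbers are illustrative, not LLMFP's actual format.
formulation = {
    "variables": {"x_fast": (0, 8), "x_slow": (0, 8)},  # hours, bounded
    "constraints": [lambda v: 30 * v["x_fast"] + 12 * v["x_slow"] >= 180],
    "objective": lambda v: 5 * v["x_fast"] + 3 * v["x_slow"],  # minimize
}

def solve(formulation):
    """Stand-in solver: brute-force the integer grid defined by the bounds.
    A real pipeline would pass the formulation to a dedicated solver."""
    names = list(formulation["variables"])
    grids = [range(lo, hi + 1) for lo, hi in formulation["variables"].values()]
    best, best_val = None, float("inf")
    for point in product(*grids):
        v = dict(zip(names, point))
        if all(check(v) for check in formulation["constraints"]):
            val = formulation["objective"](v)
            if val < best_val:
                best, best_val = v, val
    return best, best_val
```

The key design point mirrored here is that the formulation is plain data: once the LLM has emitted it, any compatible solver can consume it, and errors can be traced back to the formulation rather than the solving machinery.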

To ensure a working plan, LLMFP analyzes the solution and modifies any incorrect steps in the problem formulation. The self-assessment module allows the LLM to add any implicit constraints it missed initially. For instance, the system would flag an error if optimizing a coffee shop supply chain suggests shipping a negative amount of roasted beans.
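The self-assessment idea, auditing a candidate plan against explicit constraints plus implicit ones the initial formulation may have missed, can be sketched like this. The function name, rules, and numbers are illustrative, not LLMFP's actual checking module.

```python
# Sketch of a self-assessment pass: audit a candidate shipment plan against
# the demand requirement plus an implicit constraint (nonnegative shipments)
# that a first-draft formulation might have omitted. Illustrative only.
def audit_plan(plan, demand):
    """Return a list of human-readable violations for a shipment plan."""
    issues = []
    for supplier, qty in plan.items():
        if qty < 0:
            issues.append(f"negative shipment from {supplier}: {qty}")
    if sum(plan.values()) < demand:
        issues.append("total shipments fall short of demand")
    return issues

# A formulation that forgot nonnegativity can yield a nonsensical plan:
bad_plan = {"A": 130, "B": -30}  # "ships" -30 units from supplier B
violations = audit_plan(bad_plan, demand=100)
# In LLMFP, flagged issues like these are fed back so the LLM can repair
# the formulation, e.g. by adding the missing nonnegativity constraint.
```

A plan that passes the audit returns an empty list, so the same routine doubles as the green light to stop iterating.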

Furthermore, LLMs can adapt to user preferences, suggesting changes that align with their needs. In a series of tests, the framework achieved an average success rate between 83 and 87 percent across nine diverse planning problems using several LLMs. Unlike other approaches, LLMFP does not require domain-specific examples for training. In the future, the researchers aim to enable LLMFP to take images as input to supplement the descriptions of a planning problem.
