
MIT Researchers Develop AI Framework to Solve Complex Planning Problems Using LLMs
Researchers at MIT have introduced a framework that enables Large Language Models (LLMs) to tackle complex planning problems by mirroring the way people break a problem into parts. The approach guides an LLM to deconstruct a problem and then hands the formalized pieces to powerful optimization software for an automated solution.
The framework, detailed in a paper presented at the International Conference on Learning Representations, enables users to describe problems in natural language without needing task-specific examples for training or prompting the LLM. The model then encodes the prompt into a format decipherable by an optimization solver, effectively addressing complex planning scenarios.
Yilun Hao, a graduate student at MIT’s Laboratory for Information and Decision Systems (LIDS) and lead author of the paper, explains, “Our research introduces a framework that essentially acts as a smart assistant for planning problems. It can figure out the best plan that meets all the needs you have, even if the rules are complicated or unusual.”
The LLM meticulously checks its work at intermediate stages, ensuring accurate problem formulation for the solver. When an error is detected, the LLM attempts to rectify the specific issue rather than abandoning the process.
In tests involving nine complex challenges, including optimizing warehouse robot routes, the framework achieved an 85 percent success rate, significantly outperforming the best baseline, which only reached 39 percent. The framework’s versatility makes it suitable for various multistep planning tasks, such as airline crew scheduling or factory machine time management.
The research was conducted in collaboration with Yang Zhang, a research scientist at the MIT-IBM Watson AI Lab, and Chuchu Fan, an associate professor of aeronautics and astronautics and LIDS principal investigator.
The team’s approach, named LLM-Based Formalized Programming (LLMFP), involves the user providing a natural language description of the problem, background information, and a query outlining their goal. LLMFP then prompts the LLM to reason about the problem, identifying key decision variables and constraints to formulate an optimal solution.
The LLM details the requirements of each variable before encoding the information into a mathematical formulation and uses an attached optimization solver to arrive at a solution. LLMFP also includes a self-assessment module to analyze the solution, modify incorrect steps, and incorporate implicit constraints, enhancing the accuracy and practicality of the generated plans.
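The pipeline described above can be sketched in miniature. The code below is an illustrative mock-up, not the authors' implementation: the `formalize` function stands in for the LLM's reasoning step (identifying decision variables, constraints, and an objective for a toy factory-scheduling problem), a brute-force search stands in for the attached optimization solver, and `self_assess` plays the role of the self-assessment module that re-checks the solution against the constraints.

```python
# Hypothetical sketch of an LLMFP-style pipeline. All names and the toy
# problem are illustrative assumptions, not the paper's actual code.
from itertools import product

def formalize(description):
    """Stand-in for the LLM step: turn a natural-language problem
    description into decision variables, constraints, and an objective."""
    # Toy problem: choose integer quantities of two products (0..10 each),
    # subject to a shared machine-time budget, maximizing profit.
    variables = {"x": range(11), "y": range(11)}
    constraints = [lambda v: 2 * v["x"] + 3 * v["y"] <= 12]  # machine hours
    objective = lambda v: 3 * v["x"] + 5 * v["y"]            # profit
    return variables, constraints, objective

def solve(variables, constraints, objective):
    """Stand-in for the attached optimization solver: exhaustively
    search the (small) variable domains for the best feasible plan."""
    names = list(variables)
    best = None
    for combo in product(*(variables[n] for n in names)):
        candidate = dict(zip(names, combo))
        if all(c(candidate) for c in constraints):
            if best is None or objective(candidate) > objective(best):
                best = candidate
    return best

def self_assess(solution, constraints):
    """Stand-in for the self-assessment module: verify that the
    returned plan actually satisfies every constraint."""
    return solution is not None and all(c(solution) for c in constraints)

variables, constraints, objective = formalize(
    "schedule two products on one machine to maximize profit")
plan = solve(variables, constraints, objective)
assert self_assess(plan, constraints)
print(plan, "profit:", objective(plan))
```

In the real framework the formalization is produced by the LLM and the solver is an off-the-shelf optimizer; the point of the sketch is the division of labor, with the language model handling problem formulation and a dedicated solver handling the combinatorial search.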
According to Fan, the self-assessment module also lets the framework adapt to user preferences. For instance, if a user prefers not to change a travel plan's departure time or budget, the model can suggest alternative adjustments that better align with those needs.
The researchers aim to enhance LLMFP further by enabling it to process images as input, complementing natural language descriptions and facilitating the solution of tasks that are challenging to describe fully in words.