Integrating Classical Planners with GPT-Based Planning Policies
Tummolo, Massimiliano; Rossetti, Nicholas; Gerevini, Alfonso Emilio; Olivato, Matteo; Putelli, Luca; Serina, Ivan
2025-01-01
Abstract
Recent work on Large Language Models (LLMs) has demonstrated their effectiveness in learning general policies in automated planning. In particular, a system called PlanGPT has achieved impressive performance in terms of coverage in various domains. However, it may produce invalid plans that either satisfy only some goal fluents of the corresponding planning problem or violate the planned actions’ preconditions. To overcome this limitation, we propose a novel neuro-symbolic approach that combines PlanGPT with a planner capable of repairing (or completing) the plan generated by PlanGPT, thereby leveraging model-based reasoning. When PlanGPT generates a candidate plan for a specific planning problem, we validate it using a symbolic validator. If the generated plan is invalid, we execute the repair procedure of the planner LPG to obtain a valid solution plan from it. In this paper, we empirically evaluate the effectiveness of our approach and demonstrate its performance across various planning domains. Our results show significant improvements in the performance of both PlanGPT and LPG, highlighting the effectiveness of combining learning methods with traditional planning techniques.

| File | Size | Format | |
|---|---|---|---|
| Integrating classical planners with gpt-based planning policies.pdf (open access; Type: Full Text; License: publisher's copyright) | 310.8 kB | Adobe PDF | View/Open |
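The generate–validate–repair pipeline described in the abstract can be sketched in miniature. This is an illustrative toy, not the authors' implementation: `validate` plays the role of the symbolic validator on a tiny STRIPS-like domain, the candidate plan stands in for PlanGPT's output, and the repaired plan stands in for the result of LPG's repair procedure; all names and the example domain are assumptions.

```python
# Hypothetical sketch of the generate-validate-repair loop from the abstract.
# Actions are (preconditions, add effects, delete effects) over sets of fluents.

def validate(init, goal, plan, actions):
    """Simulate the plan from the initial state; return True iff every
    action's preconditions hold when applied and the final state
    satisfies all goal fluents."""
    state = set(init)
    for name in plan:
        pre, add, delete = actions[name]
        if not pre <= state:          # a precondition is violated
            return False
        state = (state - delete) | add
    return goal <= state              # all goal fluents achieved?

# Toy logistics domain: move a package from A to B by truck.
actions = {
    "load":   ({"pkg_at_A", "truck_at_A"}, {"pkg_in_truck"}, {"pkg_at_A"}),
    "drive":  ({"truck_at_A"}, {"truck_at_B"}, {"truck_at_A"}),
    "unload": ({"pkg_in_truck", "truck_at_B"}, {"pkg_at_B"}, {"pkg_in_truck"}),
}
init = {"pkg_at_A", "truck_at_A"}
goal = {"pkg_at_B"}

candidate = ["load", "unload"]          # an invalid LLM-style plan: "unload"
assert not validate(init, goal, candidate, actions)   # precondition fails

repaired = ["load", "drive", "unload"]  # plan after a repair step is inserted
assert validate(init, goal, repaired, actions)
```

In the approach the paper describes, the validation step is performed by a symbolic validator and the invalid candidate is handed to LPG's repair procedure rather than fixed by hand as above.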
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


