The research focused on improving human-aware task planning (HATP) by addressing false beliefs that could hinder human-robot collaboration. The team extended the HATP/EHDA planner developed at LAAS-CNRS, Toulouse, to anticipate incorrect human beliefs and generate implicitly coordinated plans involving both human and robot actions.
To achieve this, the researchers modeled situation assessment processes based on co-presence and co-location, ensuring that each agent's perception of the environment was accurately represented. This allowed the robot to assess the negative impact of false human beliefs and adapt its planning accordingly. If a misconception stemmed from an action the human had not observed, the robot could delay execution until the human could observe the resulting change, preventing misunderstandings. The improved planner was tested in three domains and presented at the ICAPS PlanRob workshop and at RO-MAN 2023.
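To make the mechanism concrete, the following minimal Python sketch illustrates a co-location-based assessment rule of this kind; the Fact class and the update_human_beliefs and has_false_belief functions are hypothetical illustrations, not the planner's actual code or model.

    from dataclasses import dataclass

    @dataclass
    class Fact:
        """A world fact plus the location where changes to it are observable."""
        name: str
        value: object
        location: str

    def update_human_beliefs(human_location, facts, human_beliefs):
        # Co-location rule (illustrative, not the planner's exact model):
        # the human's belief about a fact is refreshed only when they are
        # at the place where that fact can be perceived.
        for fact in facts:
            if fact.location == human_location:
                human_beliefs[fact.name] = fact.value
        return human_beliefs

    def has_false_belief(facts, human_beliefs):
        # A false belief: the human's stored value diverges from the world.
        return any(human_beliefs.get(f.name) != f.value for f in facts)

    # Example: the robot moved a cube while the human was in another room.
    facts = [Fact("cube_at", "table_2", location="room_A")]
    beliefs = {"cube_at": "table_1"}            # stale human belief
    update_human_beliefs("room_B", facts, beliefs)
    assert has_false_belief(facts, beliefs)     # still believes table_1
    update_human_beliefs("room_A", facts, beliefs)
    assert not has_false_belief(facts, beliefs) # repaired by observation

Under this rule, delaying an action until the human reaches the relevant location is enough to dissolve the false belief without any verbal explanation.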
Despite these advances, the approach faced limitations, particularly in evaluating environmental attributes in dynamic settings. For instance, a mobile robot's movement could implicitly alter the conditions under which objects' orientations could be assessed, complicating situation evaluation. Additionally, the system struggled to actively refute false beliefs once they had formed. To address these issues, refinements were made to the knowledge representation, the planning algorithm, and the belief management strategies, leading to an improved framework published at SGAI 2023.
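The observability difficulty can be pictured with a small sketch in which assessability is a geometric function of the agents' current poses, so that any motion can change what the human is able to evaluate and the belief update must be re-run at every state transition; the function name, ranges, and thresholds below are illustrative assumptions, not the system's parameters.

    import math

    def assessable(observer_pose, target_xy, max_range=2.0, fov_deg=120.0):
        """Illustrative observability test: a property (e.g. an object's
        orientation) can be assessed only when the target is within range
        and inside the observer's field of view. Thresholds are made up."""
        dx = target_xy[0] - observer_pose[0]
        dy = target_xy[1] - observer_pose[1]
        if math.hypot(dx, dy) > max_range:
            return False
        bearing = math.degrees(math.atan2(dy, dx)) - observer_pose[2]
        bearing = (bearing + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
        return abs(bearing) <= fov_deg / 2.0

    human = (0.0, 0.0, 0.0)                # x, y, heading in degrees
    print(assessable(human, (1.0, 0.5)))   # True: close and in view
    print(assessable(human, (3.0, 0.0)))   # False: too far away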
Recognizing that excessive communication could be frustrating for humans, the study explored epistemic planning, incorporating dynamic epistemic logic (DEL). Instead of managing a single world with false beliefs, the framework maintained multiple possible worlds representing human perspectives. This allowed the planner to better model how humans anticipated changes in their environment, improving collaboration efficiency.
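A minimal sketch of this possible-worlds representation, assuming a simple propositional encoding; the EpistemicState class and its operations are illustrative stand-ins, not the framework's actual data structures.

    from dataclasses import dataclass
    from typing import FrozenSet, Set

    World = FrozenSet[str]   # a world = the set of propositions true in it

    @dataclass
    class EpistemicState:
        actual: World                # the world as it really is
        human_possible: Set[World]   # worlds the human cannot rule out

        def human_knows(self, prop: str) -> bool:
            # The human knows p iff p holds in every world they hold possible.
            return all(prop in w for w in self.human_possible)

        def observe(self, prop: str) -> None:
            # DEL-style public-observation update: worlds incompatible
            # with what was just observed are deleted.
            self.human_possible = {w for w in self.human_possible if prop in w}

    # The human is initially unsure whether the robot switched the light on.
    s = EpistemicState(
        actual=frozenset({"light_on"}),
        human_possible={frozenset(), frozenset({"light_on"})},
    )
    print(s.human_knows("light_on"))   # False: an off-world is still possible
    s.observe("light_on")              # the human sees the lit lamp
    print(s.human_knows("light_on"))   # True: the off-world was eliminated

Keeping several worlds at once is what lets the planner distinguish "the human believes the wrong thing" from "the human is merely uncertain", a distinction a single annotated world cannot express.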
The updated HATP/EHDA framework integrated epistemic planning principles, enhancing state transitions, input representation, and belief management. A major step forward was the development of an interface to GRAAL, an external reasoner that dynamically updated human beliefs based on observable robot actions, enabling richer inference capabilities than the existing framework offered.
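The shape of such a bridge might look as follows; the ExternalReasoner interface, the PlannerBeliefBridge class, and the toy rule are hypothetical stand-ins chosen for illustration and do not reflect GRAAL's real API.

    from abc import ABC, abstractmethod

    class ExternalReasoner(ABC):
        @abstractmethod
        def infer(self, facts: set) -> set:
            """Return the extra facts derivable from the given base facts."""

    class ToyRuleReasoner(ExternalReasoner):
        # Stand-in reasoner with one hard-coded rule, for illustration only.
        def infer(self, facts):
            derived = set()
            if "holding(robot, cup)" in facts:
                derived.add("clear(table)")   # a simple ramification
            return derived

    class PlannerBeliefBridge:
        """Feeds each observable robot action through the reasoner so the
        human's beliefs gain the inferred consequences of the action, not
        just the literal effects encoded in the planning operators."""
        def __init__(self, reasoner: ExternalReasoner):
            self.reasoner = reasoner

        def on_observed_action(self, human_beliefs: set, effects: set) -> set:
            base = human_beliefs | effects
            return base | self.reasoner.infer(base)

    bridge = PlannerBeliefBridge(ToyRuleReasoner())
    beliefs = bridge.on_observed_action({"at(robot, table)"},
                                        {"holding(robot, cup)"})
    print("clear(table)" in beliefs)   # True: derived, never stated directly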
The refined system allowed robots to act strategically, deciding when to communicate and when to wait for human queries. Where direct observation sufficed, the robot could delay an action rather than explain it, reducing unnecessary verbal interaction. Preliminary tests showed that this approach produced more natural and efficient robot behaviors.
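The communicate-or-wait choice can be sketched as a simple cost comparison; the cost values and the expected-wait estimate below are illustrative placeholders, not the planner's tuned parameters or decision rule.

    def choose_coordination(human_can_observe, verbal_cost=1.0,
                            wait_cost_per_step=0.2, expected_wait_steps=3):
        """Pick the cheaper of explaining an action and delaying it until
        the human can observe the resulting change directly."""
        if not human_can_observe:
            return "communicate"   # waiting can never resolve the belief
        wait_cost = wait_cost_per_step * expected_wait_steps
        return "wait" if wait_cost < verbal_cost else "communicate"

    print(choose_coordination(human_can_observe=True))    # wait (0.6 < 1.0)
    print(choose_coordination(human_can_observe=False))   # communicate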