Cautious Markov Games as a Framework for Human-Robot Interaction

By Sanjay Lall and Rohan Sinha.

To view the extended version of this work, please click here.

Abstract:

Safe interaction between autonomous vehicles (AVs) and humans requires decision-making strategies that intelligently reason about the effect an AV’s decisions have on actors in its environment. As a result, recent years have seen much progress towards interaction-aware methods that consider the non-cooperative nature of human-robot interactions in autonomous driving. In contrast with previous work, we propose to exploit the fact that humans and autonomous agents typically interact in environments with rules and conventions that all agents should follow, such as in traffic. To do this, we express the rules as linear temporal logic constraints on the joint state trajectory and model the multi-agent interaction as a stochastic game. Rather than implicitly encoding it in the agents’ preference structure, we fix the likelihood that other agents make decisions that violate the rules. This formulation results in decision-making agents that account for the likelihood of others breaking the rules in an interpretable manner, based on quantities that are straightforward to estimate. We dub this framework the cautious Markov game (CMG), for which we efficiently construct policies using robust dynamic programming. We also show that classic results on Nash equilibria in the two-player zero-sum setting extend to the CMG. On simple illustrative examples, we show that exploiting the rule-based structure of the game significantly reduces the conservatism of robust policies.
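
To make the idea of "fixing the likelihood of rule violations" concrete, the sketch below shows one possible robust value-iteration backup in the spirit of the abstract: the other agent is assumed to break the rules with a fixed probability eps and to act worst-case within each of the rule-compliant and rule-violating action sets. This is a minimal illustration under our own assumptions (tabular state and action spaces, the array layout, and the function name cautious_backup are all hypothetical), not the authors' implementation.

```python
import numpy as np

def cautious_backup(P, R, V, compliant, eps, gamma=0.95):
    """One illustrative robust value-iteration step for the ego agent.

    P[s, a_ego, a_other] -> next-state distribution (array over states)
    R[s, a_ego]          -> ego reward for taking a_ego in state s
    V                    -> current value estimate over states
    compliant[s, a_other]-> True if the other agent's action obeys the rules in s
    eps                  -> assumed probability that the other agent breaks a rule
    """
    n_states, n_ego, _ = P.shape[:3]
    V_new = np.empty(n_states)
    for s in range(n_states):
        q = np.empty(n_ego)
        for a in range(n_ego):
            # Expected continuation value for each action of the other agent.
            cont = P[s, a] @ V  # shape: (n_other,)
            ok = compliant[s]
            # Worst case over rule-compliant actions...
            worst_ok = cont[ok].min() if ok.any() else cont.min()
            # ...and worst case over rule-violating actions, mixed with weight eps.
            worst_bad = cont[~ok].min() if (~ok).any() else worst_ok
            q[a] = R[s, a] + gamma * ((1 - eps) * worst_ok + eps * worst_bad)
        V_new[s] = q.max()
    return V_new
```

Setting eps = 0 recovers a game that only guards against rule-compliant opponents, while eps = 1 recovers a fully adversarial robust backup; the interpretable parameter eps interpolates between the two, which is the intuition behind the reduced conservatism described above.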