Coordinating multi-agent navigation by learning communication

Dalton Hildreth, Stephen J. Guy

Research output: Contribution to journal › Article › peer-review


Abstract

This work presents a decentralized multi-agent navigation approach that allows agents to coordinate their motion through local communication. Agents develop their own emergent communication "language" through an optimization process that simultaneously determines what agents say in response to their spatial observations and how agents interpret communication from others to update their motion. We apply our communication approach together with the TTC-Forces crowd simulation algorithm (a recent, high-performing, anticipatory collision-avoidance technique) and show a significant decrease in congestion and bottlenecking of agents, especially in scenarios where agents benefit from close coordination. In addition to reaching their goals faster, agents using our approach show coordinated behaviors including greeting, flocking, following, and grouping. Furthermore, we observe that communication strategies optimized for one scenario often continue to provide time-efficient, coordinated motion between agents when applied to different scenarios. This suggests that the agents are learning to generalize strategies for coordination through their communication "language".
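The abstract's underlying motion model, TTC-Forces, steers agents with repulsive forces that grow as a predicted collision becomes more imminent. The sketch below illustrates the general idea only: the function names, the `k`/`horizon` parameters, and the simple `k / tau` force law are assumptions for illustration, not the paper's exact formulation.

```python
import math

def time_to_collision(xi, vi, xj, vj, R):
    """Smallest positive time at which two disc agents (combined radius R)
    would collide under linear motion; math.inf if no collision is predicted."""
    wx, wy = xj[0] - xi[0], xj[1] - xi[1]          # relative position
    dvx, dvy = vj[0] - vi[0], vj[1] - vi[1]        # relative velocity
    a = dvx * dvx + dvy * dvy
    b = 2.0 * (wx * dvx + wy * dvy)
    c = wx * wx + wy * wy - R * R
    if a < 1e-12:                                  # no relative motion
        return math.inf
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                                 # paths never intersect
        return math.inf
    t = (-b - math.sqrt(disc)) / (2.0 * a)         # earlier quadratic root
    return t if t > 0.0 else math.inf

def avoidance_force(xi, vi, xj, vj, R, k=1.0, horizon=5.0):
    """Repulsive force on agent i; magnitude grows as the predicted
    collision nears (k / tau is an illustrative choice of force law)."""
    tau = time_to_collision(xi, vi, xj, vj, R)
    if tau >= horizon:
        return (0.0, 0.0)
    # Direction from the predicted contact point back toward agent i.
    dx = (xj[0] + vj[0] * tau) - (xi[0] + vi[0] * tau)
    dy = (xj[1] + vj[1] * tau) - (xi[1] + vi[1] * tau)
    norm = math.hypot(dx, dy) or 1.0
    mag = k / tau
    return (-mag * dx / norm, -mag * dy / norm)

# Two unit-radius-sum agents approaching head-on at 1 m/s each,
# starting 4 m apart: collision is predicted 1.5 s ahead.
tau = time_to_collision((0, 0), (1, 0), (4, 0), (-1, 0), 1.0)
```

The paper's contribution layers a learned communication signal on top of such anticipatory forces, so that agents coordinate (e.g., yield or group) rather than merely react to predicted collisions individually.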

Original language: English (US)
Article number: a20
Journal: Proceedings of the ACM on Computer Graphics and Interactive Techniques
Volume: 2
Issue number: 2
State: Published - Jul 2019

Bibliographical note

Publisher Copyright:
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.

Keywords

  • Crowd simulation
  • Language discovery
  • Learning for animation
  • Local motion planning
  • Multi-agent communication

