Situational awareness is the key to safe navigation and operation in autonomous air and ground systems. Multi-sensor scene segmentation can facilitate situational awareness for autonomous navigation. In this work, we introduce an alternative to learning-based approaches for scene segmentation based on dynamical systems theory. The streaming dynamic mode decomposition (DMD) algorithm is tailored to isolate moving foreground objects from stationary background objects in the scene. We show that LiDAR point cloud data can be transformed into a cylindrical depth map that is amenable to analysis within the DMD framework. We further extend the streaming DMD algorithm to adapt to changes in the scene by introducing a forgetting factor that weights the relative importance of past versus present information in the data stream. The proposed streaming DMD methods are applied for on-the-fly scene segmentation of video and LiDAR data streams from the KITTI Dataset. We find that the approach is able to distinguish between stationary background and moving foreground objects in a consistent and reliable manner across various scenes.
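The cylindrical depth-map representation mentioned in the abstract can be illustrated with a minimal sketch: each LiDAR return (x, y, z) is mapped to an azimuth column and elevation row, with range as the pixel value. The function name, image resolution, and vertical field-of-view defaults below are assumptions for illustration (roughly matching a Velodyne HDL-64E-like sensor), not values taken from the paper.

```python
import numpy as np

def cylindrical_depth_map(points, n_rows=64, n_cols=1024,
                          v_fov_deg=(-24.9, 2.0)):
    """Project an (N, 3) LiDAR point cloud onto a cylindrical depth map.

    Azimuth indexes the column, elevation indexes the row, and the
    pixel value is the range to the point. FOV and resolution are
    illustrative defaults, not parameters from the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                # range to each point
    azimuth = np.arctan2(y, x)                     # in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9)) # angle above horizon

    v_lo, v_hi = np.radians(v_fov_deg[0]), np.radians(v_fov_deg[1])
    cols = ((azimuth + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    rows = ((v_hi - elevation) / (v_hi - v_lo) * n_rows).astype(int)

    depth = np.full((n_rows, n_cols), np.nan)      # NaN where no return
    valid = (rows >= 0) & (rows < n_rows)
    depth[rows[valid], cols[valid]] = r[valid]
    return depth
```

Each depth-map frame can then be flattened into a column vector and fed to the streaming DMD update, exactly as a video frame would be.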
|Original language||English (US)|
|Title of host publication||AIAA Scitech 2020 Forum|
|Publisher||American Institute of Aeronautics and Astronautics Inc, AIAA|
|State||Published - 2020|
|Event||AIAA Scitech Forum, 2020 - Orlando, United States|
Duration: Jan 6 2020 → Jan 10 2020
|Name||AIAA Scitech 2020 Forum|
|Conference||AIAA Scitech Forum, 2020|
|Period||1/6/20 → 1/10/20|
Bibliographical note
Funding Information: This material is based upon work supported by Honeywell Aerospace and MnDRIVE. MSH thanks Dr. Matthew O. Williams for initial discussions related to streaming DMD for video processing applications.
© 2020, American Institute of Aeronautics and Astronautics Inc, AIAA. All rights reserved.