In this work we present a moving-target segmentation technique and apply it to a vision-based robot-following problem. The capability for autonomous multi-robot following is useful for many robot-team applications; however, the problem becomes challenging when the robots can carry only a small camera or when they exhibit unpredictable motion. The ability to segment a moving target while the camera itself is in motion is critical to solving this problem and is the focus of our work. Our contributions include: (i) matching targets using feature-based covariance matrices; (ii) enhancing matching performance with Fourier-transform-based features; and (iii) initializing the target model when no model is known in advance. We compare the proposed method with the scale-invariant feature transform (SIFT) and existing covariance-matching methods, and we validate the proposed segmentation method through real-robot experiments.
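To make the covariance-matching contribution concrete, the following is a minimal sketch of region-covariance matching in the style commonly used for such descriptors. The particular feature set (pixel coordinates, intensity, gradient magnitudes) and the generalized-eigenvalue (Förstner) distance are standard choices assumed here for illustration; they are not necessarily the exact features or metric used in this work.

```python
import numpy as np
from scipy.linalg import eigh

def region_covariance(patch):
    """Covariance descriptor of a grayscale image patch.

    Assumed (illustrative) per-pixel feature vector:
    [x, y, intensity, |dI/dx|, |dI/dy|].
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    Ix = np.gradient(patch, axis=1)
    Iy = np.gradient(patch, axis=0)
    F = np.stack([xs.ravel().astype(float), ys.ravel().astype(float),
                  patch.ravel(), np.abs(Ix).ravel(), np.abs(Iy).ravel()])
    # Small ridge keeps the matrix positive definite for degenerate patches.
    return np.cov(F) + 1e-9 * np.eye(F.shape[0])

def covariance_distance(C1, C2):
    """Förstner metric: sqrt of the sum of squared log generalized
    eigenvalues of the pair (C1, C2). Zero iff C1 == C2; symmetric."""
    lam = eigh(C1, C2, eigvals_only=True)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```

A matcher would slide `region_covariance` over candidate windows and pick the window minimizing `covariance_distance` to the target model; covariance descriptors are attractive here because they are compact (d×d for d features) and fuse heterogeneous features naturally.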