This paper presents an autonomous docking system based on visual cues placed on a docking station. Autonomous docking is essential for large-scale robotic teams that must be delivered by larger robots, recovered, recharged, and redeployed for continuous operation. Using a computer-vision approach, we identify docking cues by extracting corner pixels and combining them with color information. Potential target points are then extracted and clustered by Euclidean distance in the image plane. From these clusters of points, the appropriate motion behavior is selected to reposition the robot into the desired position and orientation. We present examples of this implementation on an eROSI robot, which navigates using vision alone.
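The clustering step described above can be sketched as follows. This is a minimal illustration, assuming a greedy single-link strategy with a pixel-distance threshold; the threshold value, the merge strategy, and the function names are assumptions for illustration, not the paper's exact method.

```python
import math

def cluster_points(points, max_dist=10.0):
    """Greedily cluster 2-D image points by Euclidean distance.

    A point joins a cluster if it lies within max_dist pixels of any
    member of that cluster; clusters bridged by the new point are
    merged. Both the threshold and the greedy single-link strategy
    are illustrative assumptions.
    """
    clusters = []
    for p in points:
        merged = None
        for c in clusters:
            if any(math.dist(p, q) <= max_dist for q in c):
                if merged is None:
                    c.append(p)       # join the first nearby cluster
                    merged = c
                else:
                    merged.extend(c)  # bridge: merge clusters
                    c.clear()
        clusters = [c for c in clusters if c]
        if merged is None:
            clusters.append([p])      # start a new cluster
    return clusters

# Example: two tight groups of candidate corner pixels
candidates = [(0, 0), (1, 1), (50, 50), (51, 52)]
groups = cluster_points(candidates, max_dist=5.0)
```

In a full pipeline, each cluster's centroid would serve as one candidate docking target for the behavior-selection stage.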