We present a novel algorithm that enables an autonomous underwater robot to visually detect and follow its companion human diver. Using both spatial-domain and frequency-domain features of human swimming patterns, we devise an algorithm that visually detects the position and swimming direction of the diver. Our algorithm is unique in that it detects arbitrary motion directions while also tracking the diver's position through the image sequence over time. A Hidden Markov Model (HMM)-based approach prunes the search space of potential trajectories using image intensities in the spatial domain. The diver's motion signature is subsequently detected in a sequence of non-overlapping image subwindows exhibiting human swimming patterns. The pruning step ensures efficient computation by avoiding exponentially large search spaces, while the frequency-domain detection allows us to localize the diver's position and motion direction accurately. We experimentally validate the proposed approach on datasets collected in open-water and closed-water environments.
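To illustrate the frequency-domain idea, the following is a minimal sketch (not the paper's implementation) of detecting a periodic swimming signature in one image subwindow: the mean pixel intensity of the subwindow is tracked over consecutive frames, and a dominant peak in the amplitude spectrum within a plausible flipper-kick frequency band signals periodic motion. The frame rate, frequency band, and peak-ratio threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def has_swimming_signature(intensity_series, fps=15.0,
                           band=(0.5, 2.5), peak_ratio=3.0):
    """Return True if a subwindow's mean-intensity time series shows a
    periodic, swimming-like motion signature.

    intensity_series : 1-D array of mean pixel intensities of one
        subwindow across consecutive frames.
    band : assumed flipper-kick frequency range in Hz (hypothetical).
    peak_ratio : dominant in-band peak must exceed this multiple of the
        mean spectral amplitude (hypothetical threshold).
    """
    x = np.asarray(intensity_series, dtype=float)
    x = x - x.mean()                              # remove the DC component
    amp = np.abs(np.fft.rfft(x))                  # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)  # bin frequencies in Hz
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not in_band.any() or amp.mean() == 0:
        return False                              # no usable spectral content
    return bool(amp[in_band].max() > peak_ratio * amp.mean())

# Usage: a 1.5 Hz intensity oscillation sampled at 15 fps (periodic motion)
# versus a static subwindow (no motion).
t = np.arange(64) / 15.0
periodic = 120 + 10 * np.sin(2 * np.pi * 1.5 * t)
static = np.full(64, 120.0)
```

In this sketch, `has_swimming_signature(periodic)` fires while `has_swimming_signature(static)` does not; in the actual method such per-subwindow decisions would be combined across the non-overlapping subwindows selected by the HMM pruning step.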