A robot scrub nurse (RSN) is an example of a robotic assistant for surgical environments. Ideally, by taking over instrument management, it would lower the cost of an operation and reduce errors. How such robots interface with their environment is of vital importance: a scrub nurse robot must be able to sense the human operators before it can assist, and computer vision offers a number of advantages here over other sensing modalities. In this paper we examine a visual tracking system for a robot scrub nurse. The system estimates the position and orientation of the lead surgeon's hand; this information guides the robot in delivering instruments directly to the surgeon. We describe the entire visual tracking pipeline and evaluate its robustness and accuracy. The result is a working, re-implementable application suitable for surgical environments that offers a degree of robustness in operation.
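To make the pose-estimation step concrete, the sketch below shows one common way to recover a hand position and orientation once three landmarks on the hand have been tracked in the camera frame. The landmark choice (wrist and the bases of the index and little fingers) and the geometric construction are illustrative assumptions, not the pipeline described in this paper.

```python
# Illustrative sketch (not the paper's method): derive a hand pose from
# three tracked landmarks. The palm centre is the centroid of the
# triangle; the palm normal is the cross product of two of its edges.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    m = sum(x * x for x in v) ** 0.5
    return tuple(x / m for x in v)

def hand_pose(wrist, index_base, pinky_base):
    """Return (palm centre, unit palm normal) in the camera frame."""
    centre = tuple(sum(c) / 3.0 for c in zip(wrist, index_base, pinky_base))
    normal = normalize(cross(sub(index_base, wrist), sub(pinky_base, wrist)))
    return centre, normal

# Example: a hand lying flat in the z = 0 plane (coordinates in metres).
centre, normal = hand_pose((0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0))
# → centre near (0.033, 0.033, 0.0), normal (0.0, 0.0, 1.0)
```

The centre gives the robot a delivery target, while the normal indicates which way the open palm faces, so the instrument can be placed into it rather than against its back.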