Purpose. Human observers are surprisingly good at finding smooth contours in clutter. Here we use a new ideal observer formulation for computationally intractable problems to measure the efficiency of contour grouping and to investigate the underlying perceptual mechanisms.

Methods. Human observers are presented with a sequence of two images composed of randomly arranged, oriented elements. One of the images also contains an element sequence generated by a stochastic contour process whose parameters are derived from natural image statistics (Elder & Goldberg, JOV 2002). QUEST is used to estimate the complexity (number of elements in the display) at threshold performance for contour detection. A double-pass technique is also used to estimate observer consistency.

Results. The computational complexity of the problem precludes direct simulation of the ideal observer. Instead, we use two sub-optimal machine observers to derive rigorous, tight bounds on ideal observer performance. Human efficiency, defined as the ratio of display complexity at threshold for human and ideal observers, lies in the 25-50% range. Inefficiency can arise from three sources: 1) internal noise, 2) systematic error in the internal model of the contour process, and 3) failure to consider all possible paths (algorithm error). We model internal noise as additive Gaussian error in the perceived location and orientation of the local line elements, and use the double-pass data to estimate these two noise parameters. Our main result is that this local noise can account for nearly all of the inefficiency in contour grouping.

Discussion. Attribution of the majority of the error to internal noise suggests that the underlying grouping algorithm is close to optimal. In particular, strictly local or greedy sub-optimal strategies can be ruled out.
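
The two quantities at the core of the Results section can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the function and parameter names (`perturb_elements`, `sigma_pos`, `sigma_ori`) are assumptions, and the element representation (2-D positions plus orientations) is chosen only to show how additive Gaussian internal noise and the efficiency ratio are defined.

```python
import numpy as np

def perturb_elements(positions, orientations, sigma_pos, sigma_ori, rng):
    """Apply additive Gaussian internal noise to the perceived location
    and orientation of each local line element. sigma_pos and sigma_ori
    are the two noise parameters estimated from the double-pass data."""
    noisy_pos = positions + rng.normal(0.0, sigma_pos, positions.shape)
    noisy_ori = orientations + rng.normal(0.0, sigma_ori, orientations.shape)
    return noisy_pos, noisy_ori

def efficiency(human_threshold_complexity, ideal_threshold_complexity):
    """Human efficiency: ratio of display complexity (number of elements)
    at threshold for the human vs. the ideal observer. The ideal observer
    tolerates more clutter, so the ratio is below 1."""
    return human_threshold_complexity / ideal_threshold_complexity

# Illustrative values only: a human threshold of 40 elements against an
# ideal threshold of 120 gives an efficiency of 1/3, inside the reported
# 25-50% range.
eff = efficiency(40, 120)

rng = np.random.default_rng(0)
positions = np.zeros((5, 2))      # five elements at the origin
orientations = np.zeros(5)        # all initially horizontal
noisy_pos, noisy_ori = perturb_elements(positions, orientations,
                                        sigma_pos=1.0, sigma_ori=0.1,
                                        rng=rng)
```

Because the ideal observer itself is intractable here, `ideal_threshold_complexity` would in practice be bracketed between the thresholds of the two sub-optimal machine observers mentioned above.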