This paper addresses the classification of human activities from video as one way of performing activity monitoring. Our approach uses motion features that are computed very efficiently and subsequently projected into a lower-dimensional space where matching is performed. Each action is represented as a manifold in this lower-dimensional space, and matching is done by comparing these manifolds. To demonstrate the effectiveness of this approach, we applied it to a large data set of similar actions, each performed by many different actors. Classification results are accurate and show that the approach can handle challenges such as variations in performers' physical attributes, clothing color, and style of motion. An important result is that neither the recovery of a moving person's three-dimensional properties nor even two-dimensional tracking of the person's limbs is a necessary step preceding action recognition.
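The pipeline described above (efficient motion features, projection into a low-dimensional space, and nearest-manifold matching) can be sketched in a few lines. The sketch below is illustrative only and makes several assumptions not stated in the abstract: absolute inter-frame differences stand in for the paper's motion features, PCA stands in for its projection, a symmetric mean nearest-neighbour distance stands in for its manifold comparison, and the synthetic "videos" are random arrays.

```python
import numpy as np

def motion_features(frames):
    # Stand-in motion feature: absolute inter-frame differences, flattened
    # to one feature vector per frame pair. (The paper's actual features
    # are not specified in the abstract.)
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.reshape(len(diffs), -1)

def manifold_distance(A, B):
    # Symmetric mean nearest-neighbour distance between two point sets,
    # one simple way to compare manifolds sampled as point sequences.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(0)
frames_A = rng.random((20, 16, 16))   # synthetic "video" of action A
frames_B = rng.random((20, 16, 16))   # synthetic "video" of action B
query = frames_A + 0.01 * rng.standard_normal((20, 16, 16))  # noisy repeat of A

feats = {name: motion_features(f)
         for name, f in [("A", frames_A), ("B", frames_B), ("query", query)]}

# Learn one low-dimensional (here 3-D) PCA projection over all features,
# then map every sequence into that shared space; each projected sequence
# traces out the corresponding action's manifold.
stacked = np.vstack(list(feats.values()))
mean = stacked.mean(axis=0)
_, _, Vt = np.linalg.svd(stacked - mean, full_matrices=False)
proj = {name: (X - mean) @ Vt[:3].T for name, X in feats.items()}

# Classify the query action by its nearest reference manifold.
scores = {name: manifold_distance(proj["query"], proj[name])
          for name in ("A", "B")}
best = min(scores, key=scores.get)
print(best)
```

Because the query is a lightly perturbed copy of action A, its projected points lie close to A's manifold, so the nearest-manifold rule assigns it the label "A".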