In this paper, we present the analysis and experimental validation of a vision-aided inertial navigation algorithm for planetary landing applications. The system employs tight integration of inertial and visual feature measurements to compute accurate estimates of the lander's terrain-relative position, attitude, and velocity in real time. Two types of features are considered: mapped landmarks, i.e., features whose global 3D positions can be determined from a surface map, and opportunistic features, i.e., features that can be tracked in consecutive images but whose 3D positions are not known. Both types of features are processed in an extended Kalman filter (EKF) estimator and are optimally fused with measurements from an inertial measurement unit (IMU). Results from a sounding rocket test, covering the dynamic profile of typical planetary landing scenarios, show estimation errors of magnitude 0.16 m/s in velocity and 6.4 m in position at touchdown. These results vastly improve on the current state of the art for non-vision-based EDL navigation, and meet the requirements of future planetary exploration missions.
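To make the fusion architecture concrete, the following is a minimal, purely illustrative sketch of an EKF step that propagates with an IMU sample and updates with a mapped-landmark measurement. It is not the paper's filter: the state here is position and velocity only (the actual estimator also carries attitude and IMU biases and uses a nonlinear camera model), the linear position-difference measurement model and the function names `propagate` and `update_landmark` are assumptions made for this sketch.

```python
import numpy as np

def propagate(x, P, a_meas, Q, dt):
    """Propagate a position/velocity state with one IMU acceleration sample.

    Hypothetical 6-state layout: x = [p (3), v (3)]. Gravity compensation
    and bias correction are assumed to happen upstream.
    """
    F = np.eye(6)
    F[0:3, 3:6] = dt * np.eye(3)       # p <- p + v * dt
    x = F @ x
    x[3:6] += a_meas * dt              # v <- v + a * dt
    P = F @ P @ F.T + Q                # covariance propagation
    return x, P

def update_landmark(x, P, z, landmark_pos, R):
    """EKF update with an assumed linear measurement to a mapped landmark:
    z = p - landmark_pos + noise (a stand-in for the camera bearing model).
    """
    H = np.hstack([np.eye(3), np.zeros((3, 3))])
    y = z - (H @ x - landmark_pos)     # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # state correction
    P = (np.eye(6) - K @ H) @ P        # covariance update
    return x, P

# Toy usage with arbitrary numbers
x, P = np.zeros(6), np.eye(6)
Q, R = 1e-3 * np.eye(6), 1e-2 * np.eye(3)
x, P = propagate(x, P, a_meas=np.array([0.0, 0.0, -1.6]), Q=Q, dt=0.01)
x, P = update_landmark(x, P, z=np.array([0.1, -0.05, 2.0]),
                       landmark_pos=np.array([0.0, 0.0, -2.0]), R=R)
```

In the paper's setting, mapped landmarks yield absolute (map-relative) corrections of this kind, while opportunistic features, whose 3D positions are unknown, constrain only relative motion between consecutive images.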