We consider the problem of recovering a complete (i.e., square and invertible) dictionary A0 from Y = A0X0, with Y ∈ ℝ^{n×p}. This recovery setting is central to the theoretical understanding of dictionary learning. We give the first efficient algorithm that provably recovers A0 when X0 has O(n) nonzeros per column, under a suitable probability model for X0; prior results provide recovery guarantees only when X0 has O(√n) nonzeros per column. Our algorithm is based on nonconvex optimization with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. Our proofs give a geometric characterization of the high-dimensional objective landscape, which shows that with high probability there are no spurious local minima. This invited talk summarizes these results, presented in . It also presents numerical experiments demonstrating their implications for practical problems in representation learning and for the more general algorithmic problem of recovering matrix decompositions with structured factors.
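To make the setting concrete, the following is a minimal illustrative sketch (not the authors' exact algorithm or objective) of the kind of sphere-constrained nonconvex optimization described above: we plant a complete dictionary A0 and a sparse X0, form Y = A0X0, and run Riemannian gradient descent over the unit sphere on a smooth sparsity surrogate of q^T Y. All parameter values, the log-cosh surrogate, and the step-size schedule here are assumptions for illustration.

```python
# Illustrative sketch: recover one dictionary direction from Y = A0 X0 by
# minimizing a smooth sparsity surrogate over the unit sphere with
# Riemannian (projected) gradient descent. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p, theta, mu = 10, 5000, 0.15, 0.1  # dimension, samples, sparsity, smoothing

# Planted model: A0 orthogonal (hence complete), X0 Bernoulli(theta)-Gaussian.
A0, _ = np.linalg.qr(rng.standard_normal((n, n)))
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)
Y = A0 @ X0

def grad_f(q):
    """Euclidean gradient of f(q) = mean_i mu * log cosh(q^T y_i / mu)."""
    return Y @ np.tanh(Y.T @ q / mu) / p

q = rng.standard_normal(n)
q /= np.linalg.norm(q)
for _ in range(2000):
    g = grad_f(q)
    g -= (q @ g) * q           # project gradient onto the sphere's tangent space
    q -= 0.1 * g               # Riemannian gradient step ...
    q /= np.linalg.norm(q)     # ... followed by retraction to the sphere

# For an orthogonal A0, a good minimizer aligns with a column of A0 (up to sign).
alignment = np.max(np.abs(A0.T @ q))
print(f"max |<q, a_j>| = {alignment:.3f}")
```

The benign-landscape claim in the abstract is exactly what makes such a simple local method plausible here: with high probability there are no spurious local minima to trap it, so descent from a generic initialization lands near a signed dictionary column.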