Local approximations of functions based on values and derivatives at the nodes of a discretized grid are often used in solving problems numerically for which analytical solutions do not exist. In gradient dynamic programming (Foufoula‐Georgiou and Kitanidis, 1988) the use of such functions for the approximation of the cost‐to‐go function alleviates the “curse of dimensionality” by reducing the number of discretization nodes per state while obtaining high‐accuracy solutions. Also, efficient Newton‐type schemes can be used for the stage‐wise optimization, since the approximating functions now have continuous first derivatives. Our interest is in the case where the cost‐to‐go function is convex. The interpolants, however, may not always be convex, which introduces numerical problems. In this paper we address the problem of interpolating nodal values and derivatives of a one‐dimensional convex function with a convex interpolant, so that computational difficulties due to approximation‐induced nonconvexity are avoided and convergence to global rather than local optimal controls is guaranteed at every single‐stage optimization.
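The nonconvexity pitfall mentioned above is easy to exhibit. The following minimal sketch (not from the paper; the node data are an illustrative assumption) constructs the standard cubic Hermite interpolant for values and derivatives sampled from a convex function and shows that its second derivative can nevertheless become negative:

```python
def hermite_cubic(x0, x1, f0, f1, d0, d1):
    """Cubic Hermite interpolant p on [x0, x1] matching values f0, f1
    and first derivatives d0, d1 at the two endpoints."""
    h = x1 - x0

    def p(x):
        # Standard Hermite basis in the local variable t = (x - x0)/h.
        t = (x - x0) / h
        h00 = 2*t**3 - 3*t**2 + 1
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return f0*h00 + h*d0*h10 + f1*h01 + h*d1*h11

    def p2(x):
        # Second derivative of p; linear in t, so convexity on the
        # interval is equivalent to p2 >= 0 at both endpoints.
        t = (x - x0) / h
        return (f0*(12*t - 6) + h*d0*(6*t - 4)
                + f1*(-12*t + 6) + h*d1*(6*t - 2)) / h**2

    return p, p2

# Nodal data sampled from the convex function f(x) = max(0, 4x - 3)
# on [0, 1]: f(0) = 0, f(1) = 1, f'(0) = 0, f'(1) = 4.
p, p2 = hermite_cubic(0.0, 1.0, 0.0, 1.0, 0.0, 4.0)
print(p(0.0), p(1.0))  # interpolant matches the nodal values: 0.0 1.0
print(p2(0.0))         # -2.0: negative curvature, so p is nonconvex
```

Because the cubic's second derivative is linear on the interval, a sign change at an endpoint suffices to destroy convexity even though the underlying data come from a convex function; this is precisely the behavior a convexity‐preserving interpolant must rule out.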