Accurate localization of landmarks in the vicinity of a robot is a first step towards solving the SLAM problem. In this work, we propose algorithms that accurately estimate the 3D locations of landmarks relative to the robot from a single image taken by its on-board camera. Our approach differs from previous efforts in this domain in that it first accurately reconstructs the 3D environment from a single image, then defines a coordinate system over the environment, and finally performs the desired localization with respect to this coordinate system using the environment's features. The ground plane in the given image is accurately estimated, and this precedes segmentation of the image into ground and vertical regions. A Markov Random Field (MRF) based 3D reconstruction is performed to build an approximate depth map of the given image; this map is robust against texture variations caused by shadows, terrain differences, etc. A texture segmentation algorithm is also applied to determine the ground plane accurately. Once the ground plane is estimated, we use the camera's intrinsic and extrinsic calibration information to compute accurate 3D information about the features in the scene.
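The final step above, recovering a 3D point from calibration once the ground plane is known, can be sketched as a ray-plane intersection. The following is a minimal illustration, not the paper's implementation: it assumes the world ground plane is z = 0 and that the extrinsics map world coordinates to camera coordinates via x_cam = R @ x_world + t; the function name is hypothetical.

```python
import numpy as np

def pixel_to_ground_point(u, v, K, R, t):
    """Back-project pixel (u, v) onto the assumed world ground plane z = 0.

    K    : 3x3 camera intrinsic matrix
    R, t : extrinsics mapping world -> camera (x_cam = R @ x_world + t)
    Returns the 3D world point where the pixel's viewing ray meets the ground.
    """
    # Camera centre expressed in world coordinates
    C = -R.T @ t
    # Direction of the viewing ray through (u, v), in world coordinates
    d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Solve C + s * d for the plane z = 0
    s = -C[2] / d[2]
    return C + s * d

# Example: a camera 1 m above the ground, looking straight down.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.diag([1.0, -1.0, -1.0])      # camera z-axis points at the ground
C = np.array([0.0, 0.0, 1.0])       # camera centre, 1 m up
t = -R @ C
print(pixel_to_ground_point(320, 240, K, R, t))  # principal point -> origin
```

With a calibrated camera, every pixel classified as "ground" by the segmentation yields one such 3D point, which is what allows metric positions of scene features to be read off the estimated ground plane.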