The multi-agent distributed consensus optimization problem arises in many engineering applications. Recently, the alternating direction method of multipliers (ADMM) has been applied to distributed consensus optimization; the resulting method, referred to as consensus ADMM (C-ADMM), can converge much faster than conventional consensus subgradient methods. However, C-ADMM can be computationally expensive when the cost function to optimize has a complicated structure or when the problem dimension is large. In this paper, we propose an inexact C-ADMM (IC-ADMM) in which each agent performs only one proximal gradient (PG) update at each iteration. The PGs are often easy to obtain, especially for structured sparse optimization problems. Convergence conditions for IC-ADMM are analyzed. Numerical results based on a sparse logistic regression problem show that IC-ADMM, though it converges more slowly than the original C-ADMM, has considerably reduced computational complexity.
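To illustrate why a single PG update per iteration is cheap for structured sparse problems, the following is a minimal sketch (not the paper's algorithm) of one proximal-gradient step for an l1-regularized logistic regression cost; the function names, step size, and data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||x||_1 (closed-form soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def logistic_grad(x, A, b):
    """Gradient of the logistic loss sum_i log(1 + exp(-b_i * a_i^T x))."""
    z = -b * (A @ x)
    s = 1.0 / (1.0 + np.exp(-z))  # sigmoid(z)
    return A.T @ (-b * s)

def pg_step(x, A, b, lam, eta):
    """One PG update: x+ = prox_{eta*lam*||.||_1}(x - eta * grad f(x))."""
    return soft_threshold(x - eta * logistic_grad(x, A, b), eta * lam)

# Illustrative data: 20 samples, 5 features, labels in {-1, +1}.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = np.sign(rng.standard_normal(20))
x = pg_step(np.zeros(5), A, b, lam=0.1, eta=0.1)
```

Because the prox of the l1 norm has the closed form above, each such step costs only one gradient evaluation plus an elementwise shrinkage, in contrast to the exact subproblem solves of C-ADMM.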