Recommender system algorithms are typically evaluated on machine learning criteria such as recommendation accuracy or top-n precision. In this work, we evaluate six recommendation algorithms from a user-centric perspective, collecting both objective user activity data and subjective user perceptions. In a field experiment with 1508 users who each participated for at least a month, we compare six algorithms built with machine learning techniques ranging from supervised matrix factorization and contextual bandit learning to Q-learning. We find that the choice of optimization objective in a machine-learning-based recommender system significantly affects user experience. Specifically, when both are implemented with classical matrix factorization, a recommender that minimizes implicit action prediction error engages users more than one that minimizes explicit rating prediction error, which empirically explains the historical shift of recommender system research from modeling explicit feedback to modeling implicit feedback. However, the action-based recommender is less precise than the rating-based recommender: it increases not only positive engagement but also negative engagement, e.g., the negative action rate and user browsing effort, both of which are negatively correlated with user satisfaction. We show that blending explicit and implicit feedback from users through an online learning algorithm retains the engagement benefits while mitigating one of the possible costs (the increased browsing effort).
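
The blending idea in the final sentence can be sketched as a single latent-factor model updated online from both feedback channels. Everything below — the function names, the weighting `alpha`, and the squared-error losses on both signals — is an illustrative assumption, not the paper's actual algorithm:

```python
import numpy as np

# Hypothetical sketch: one shared matrix-factorization model receives online
# SGD updates from both explicit ratings and implicit actions (e.g., clicks).
# `alpha` (assumed) controls how much implicit feedback is blended in.

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 200, 8
U = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
V = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors

def update(u, i, rating=None, action=None, lr=0.05, alpha=0.5, reg=0.01):
    """One online update blending explicit and implicit feedback.

    rating: explicit score (e.g., normalized to [0, 1]), or None if unobserved.
    action: implicit signal in {0, 1} (e.g., a click), or None if unobserved.
    """
    pred = U[u] @ V[i]
    err = 0.0
    if rating is not None:          # explicit-feedback squared-error term
        err += pred - rating
    if action is not None:          # implicit-feedback term, weighted by alpha
        err += alpha * (pred - action)
    # Gradient step on both factor vectors with L2 regularization.
    gu = err * V[i] + reg * U[u]
    gv = err * U[u] + reg * V[i]
    U[u] -= lr * gu
    V[i] -= lr * gv
```

Either feedback channel can be absent for a given event, so the same model is trained from whichever signals arrive, rather than committing to a purely rating-based or purely action-based objective.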