TY - GEN
T1 - Explicit or implicit feedback? Engagement or satisfaction?
T2 - 33rd Annual ACM Symposium on Applied Computing, SAC 2018
AU - Zhao, Qian
AU - Harper, F. Maxwell
AU - Adomavicius, Gediminas
AU - Konstan, Joseph A.
PY - 2018/4/9
Y1 - 2018/4/9
N2 - Recommender systems algorithms are generally evaluated primarily on machine learning criteria such as recommendation accuracy or top-n precision. In this work, we evaluate six recommendation algorithms from a user-centric perspective, collecting both objective user activity data and subjective user perceptions. In a field experiment involving 1508 users who participated for at least a month, we compare six algorithms built using machine learning techniques, ranging from supervised matrix factorization and contextual bandit learning to Q-learning. We found that the objective design in machine-learning-based recommender systems significantly affects user experience. Specifically, a recommender optimizing for implicit action prediction error engages users more than one optimizing for explicit rating prediction error when modeled with the classical matrix factorization algorithms, which empirically explains the historical transition of recommender system research from modeling explicit feedback data to implicit feedback data. However, the action-based recommender is not as precise as the rating-based recommender in that it increases not only positive engagement but also negative engagement, e.g., negative action rate and user browsing effort, which are negatively correlated with user satisfaction. We show that blending both explicit and implicit feedback from users through an online learning algorithm can gain the benefits of engagement and mitigate one of the possible costs (i.e., the increased browsing effort).
KW - Contextual bandit
KW - Machine learning
KW - Q learning
KW - Recommender systems
KW - User experiment
KW - User-centric evaluation
UR - http://www.scopus.com/inward/record.url?scp=85050554283&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85050554283&partnerID=8YFLogxK
U2 - 10.1145/3167132.3167275
DO - 10.1145/3167132.3167275
M3 - Conference contribution
AN - SCOPUS:85050554283
T3 - Proceedings of the ACM Symposium on Applied Computing
SP - 1331
EP - 1340
BT - Proceedings of the 33rd Annual ACM Symposium on Applied Computing, SAC 2018
PB - Association for Computing Machinery
Y2 - 9 April 2018 through 13 April 2018
ER -