TY - GEN
T1 - Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders
T2 - 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019
AU - Cheng, Hao-Fei
AU - Wang, Ruotong
AU - Zhang, Zheng
AU - O'Connell, Fiona
AU - Gray, Terrance
AU - Harper, F. Maxwell
AU - Zhu, Haiyi
PY - 2019/5/2
Y1 - 2019/5/2
N2 - Increasingly, algorithms are used to make important decisions across society. However, these algorithms are usually poorly understood, which can reduce transparency and evoke negative emotions. In this research, we seek to learn design principles for explanation interfaces that communicate how decision-making algorithms work, in order to help organizations explain their decisions to stakeholders or to support users' "right to explanation". We conducted an online experiment in which 199 participants used different explanation interfaces to understand an algorithm for making university admissions decisions. We measured users' objective and self-reported understanding of the algorithm. Our results show that both interactive explanations and "white-box" explanations (i.e., explanations that show the inner workings of an algorithm) can improve users' comprehension. Although the interactive approach is more effective at improving comprehension, it comes at the cost of taking more time. Surprisingly, we also find that users' trust in algorithmic decisions is not affected by the explanation interface or by their level of comprehension of the algorithm.
KW - Algorithmic decision-making
KW - Explanation interfaces
UR - http://www.scopus.com/inward/record.url?scp=85067624181&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85067624181&partnerID=8YFLogxK
U2 - 10.1145/3290605.3300789
DO - 10.1145/3290605.3300789
M3 - Conference contribution
AN - SCOPUS:85067624181
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
Y2 - 4 May 2019 through 9 May 2019
ER -