Publication date: 01/10/2022
Authors: Tomás Reyes, Edgar Kausel, Álvaro Chacón
Research suggests that algorithms, whether based on artificial intelligence or on linear regression models, make better predictions than humans in a wide range of domains. Several studies have examined the degree to which people use algorithms, but these studies have been mostly cross-sectional and have therefore failed to address the dynamic nature of algorithm use. In the present paper, we examined algorithm use with a novel longitudinal approach outside the lab. Specifically, we conducted two ecological momentary assessment studies in which 401 participants made financial predictions in two tasks over 18 days. Relying on the judge-advisor system framework, we examined how time interacted with advice source (human vs. algorithm) and advisor accuracy to predict advice taking. Our results showed that when the advice was inaccurate, people tended to use algorithm advice less than human advice across the period studied. Inaccurate algorithms were penalized logarithmically; the effect was initially strong but tended to fade over time. This suggests that first impressions are crucial, producing large changes in advice taking at the beginning of the interaction that stabilize as the days go by. Inaccurate algorithms are therefore more likely than inaccurate humans to accrue a negative reputation, even when both perform at the same level.
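A note on measurement, using the standard formulation from the judge-advisor system literature (the abstract does not spell out the exact operationalization used in these studies): advice taking is commonly quantified as the weight of advice, WOA = (final estimate − initial estimate) / (advice − initial estimate), where WOA = 0 means the advisor's recommendation was ignored and WOA = 1 means the judge fully adopted it.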