
How we really judge AI
The rise of artificial intelligence has sparked a wide range of reactions, often sorted broadly into techno-optimism or Luddism. New research, however, reveals that human perception of AI is far more nuanced, shifting markedly with the specific context and the technology’s perceived capabilities.
Consider two scenarios: an AI offering accurate stock predictions for your portfolio versus an AI screening your resume for a job application. While both involve AI, a recent study indicates that comfort levels vary significantly. This isn’t a simple like or dislike; it’s a discerning evaluation of AI’s practical implications on a case-by-case basis.
According to MIT Professor Jackson Lu, co-author of the newly published paper, “AI appreciation occurs when AI is perceived as being more capable than humans and personalization is perceived as being unnecessary in a given decision context.” He adds, “AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are satisfied.” This insight underpins the “Capability–Personalization Framework,” detailed in the paper “AI Aversion or Appreciation? A Capability–Personalization Framework and a Meta-Analytic Review,” published in Psychological Bulletin.
This study aims to reconcile previously conflicting findings in the field. A 2015 study on “algorithm aversion” found that people are less forgiving of AI errors than of human errors, while a 2019 paper on “algorithm appreciation” found that people preferred AI advice to human advice. By conducting a meta-analysis of 163 prior studies comparing preferences for AI versus humans, Lu and his co-authors — including Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University — arrived at a unifying explanation.
Analyzing over 82,000 individual reactions across 93 distinct “decision contexts”—ranging from cancer diagnoses to fraud detection—the research team found support for their Capability–Personalization Framework. “Both dimensions are important: Individuals evaluate whether or not AI is more capable than people at a given task, and whether the task calls for personalization,” Professor Lu emphasizes. “People will prefer AI only if they think the AI is more capable than humans and the task is nonpersonal.” He clarifies that high perceived capability alone isn’t enough; personalization is equally critical.
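As a rough illustration of the framework’s two-condition logic (not the authors’ statistical model), the decision rule can be sketched as a simple function; the function name, inputs, and example contexts below are hypothetical.

```python
def predicted_attitude(ai_seen_as_more_capable: bool,
                       personalization_seen_as_needed: bool) -> str:
    """Sketch of the Capability-Personalization Framework's decision rule.

    Appreciation is predicted only when AI is perceived as more capable than
    humans AND the task is perceived as not requiring personalization;
    otherwise the framework predicts aversion.
    """
    if ai_seen_as_more_capable and not personalization_seen_as_needed:
        return "AI appreciation"
    return "AI aversion"


# Hypothetical decision contexts with illustrative perceptions only.
examples = {
    "fraud detection": dict(ai_seen_as_more_capable=True,
                            personalization_seen_as_needed=False),
    "job interview":   dict(ai_seen_as_more_capable=True,
                            personalization_seen_as_needed=True),
    "therapy session": dict(ai_seen_as_more_capable=False,
                            personalization_seen_as_needed=True),
}

for context, perceptions in examples.items():
    print(f"{context}: {predicted_attitude(**perceptions)}")
```

Note that in this sketch only the first context satisfies both conditions, matching the paper’s claim that appreciation requires high perceived capability and low need for personalization together.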
Illustrative examples abound: individuals readily embrace AI for tasks such as fraud detection or large dataset sorting, where AI’s speed and scale far surpass human capabilities and personal nuances are irrelevant. However, resistance surfaces in areas like therapy, job interviews, or medical diagnoses. In these sensitive contexts, people crave human interaction, believing a human can better understand and adapt to their unique circumstances. “People have a fundamental desire to see themselves as unique and distinct from other people,” Lu states, adding that AI is often seen as impersonal and rigid, unable to grasp personal situations even with extensive training data.
Beyond capability and personalization, the study uncovered other significant influencing factors. AI tends to be viewed more favorably when embodied in tangible robots rather than intangible algorithms. Economic conditions also play a role: AI appreciation is more pronounced in countries with lower unemployment rates, likely reflecting a reduced fear of job displacement.
While Professor Lu acknowledges this meta-analysis isn’t the last word, he believes the Capability–Personalization Framework offers a robust lens through which to understand the intricate and evolving public attitudes toward AI. “We’re not claiming perceived capability and personalization are the only two dimensions that matter, but according to our meta-analysis, these two dimensions capture much of what shapes people’s preferences for AI versus humans across a wide range of studies,” he concludes. The research was supported in part by grants to Qin and Wu from the National Natural Science Foundation of China.



