In a new study, participants tended to assign greater blame to artificial intelligences (AIs) involved in real-world moral transgressions when they perceived the AIs as having more human-like minds. Minjoo Joo of Sookmyung Women's University in Seoul, Korea, presents these findings in the open-access journal PLOS ONE on December 18, 2024.
Prior research has revealed a tendency for people to blame AI for various moral transgressions, such as in cases of an autonomous vehicle hitting a pedestrian or decisions that caused medical or military harm. Additional research suggests that people tend to assign more blame to AIs perceived as capable of awareness, thinking, and planning. People may be more likely to attribute such capacities to AIs they perceive as having human-like minds that can experience conscious feelings.
On the basis of that earlier research, Joo hypothesized that AIs perceived as having human-like minds may receive a greater share of blame for a given moral transgression.
To test this idea, Joo conducted several experiments in which participants were presented with various real-world instances of moral transgressions involving AIs, such as racist auto-tagging of photos, and were asked questions to evaluate their mind perception of the AI involved, as well as the extent to which they assigned blame to the AI, its programmer, the company behind it, or the government. In some cases, the perception of the AI's mind was manipulated by describing a name, age, height, and hobby for the AI.
Across the experiments, participants tended to assign significantly more blame to an AI when they perceived it as having a more human-like mind. In these cases, when participants were asked to distribute relative blame, they tended to assign less blame to the company involved. However, when asked to rate the level of blame for each agent independently, there was no reduction in the blame assigned to the company.
These findings suggest that AI mind perception is a critical factor contributing to blame attribution for transgressions involving AI. Additionally, Joo raises concerns about the potentially harmful consequences of misusing AIs as scapegoats and calls for further research on AI blame attribution.
The author adds: "Can AIs be held responsible for moral transgressions? This research shows that perceiving AI as human-like increases blame toward the AI while reducing blame on human stakeholders, raising concerns about the potential misuse of AI as a moral scapegoat."