People's Trust in AI Systems to Make Moral Decisions is Still Some Way Off

Psychologists warn that AI's perceived lack of human experience and genuine understanding may limit its acceptance to make higher-stakes moral decisions.

Artificial moral advisors (AMAs) are systems based on artificial intelligence (AI) that are being designed to assist humans in making moral decisions grounded in established ethical theories, principles, or guidelines. Prototypes are in development, but AMAs are not yet in use offering consistent, bias-free recommendations and rational moral advice. As machines powered by artificial intelligence grow in their technological capacities and move into the moral domain, it is critical that we understand how people think about such artificial moral advisors.

Research led by the University of Kent's School of Psychology explored how people would perceive these advisors and if they would trust their judgement, in comparison with human advisors. It found that while artificial intelligence might have the potential to offer impartial and rational advice, people still do not fully trust it to make ethical decisions on moral dilemmas.

Published in the journal Cognition, the research shows that people have a significant aversion to AMAs (vs humans) giving moral advice, even when the advice given is identical. This aversion was particularly strong when advisors - human and AI alike - gave advice based on utilitarian principles (actions that could positively impact the majority). Advisors who gave non-utilitarian advice (e.g. adhering to moral rules rather than maximising outcomes) were trusted more, especially in dilemmas involving direct harm. This suggests that people value advisors - human or AI - who align with principles that prioritise individuals over abstract outcomes.

Even when participants agreed with the AMA's decision, they still anticipated disagreeing with the AI in the future, indicating an inherent scepticism.

Dr Jim Everett led the research at Kent, alongside Dr Simon Myers at the University of Warwick.

Dr Everett said: "Trust in moral AI isn't just about accuracy or consistency - it's about aligning with human values and expectations. Our research highlights a critical challenge for the adoption of AMAs and for how to design systems that people truly trust. As technology advances, we might see AMAs become more integrated into decision-making processes, from healthcare to legal systems, so there is a major need to understand how to bridge the gap between AI capabilities and human trust."

Myers S, Everett JAC. People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors. Cognition. 2025 Mar;256:106028. doi: 10.1016/j.cognition.2024.106028
