People's Trust in AI Systems to Make Moral Decisions is Still Some Way Off

Psychologists warn that AI's perceived lack of human experience and genuine understanding may limit its acceptance for making higher-stakes moral decisions.

Artificial moral advisors (AMAs) are systems based on artificial intelligence (AI) that are being designed to assist humans in making moral decisions based on established ethical theories, principles, or guidelines. Although prototypes are being developed, AMAs are not yet in use to offer consistent, bias-free recommendations and rational moral advice. As AI-powered machines grow in their technological capacities and move into the moral domain, it is critical that we understand how people think about such artificial moral advisors.

Research led by the University of Kent's School of Psychology explored how people would perceive these advisors and if they would trust their judgement, in comparison with human advisors. It found that while artificial intelligence might have the potential to offer impartial and rational advice, people still do not fully trust it to make ethical decisions on moral dilemmas.

Published in the journal Cognition, the research shows that people have a significant aversion to AMAs (versus humans) giving moral advice, even when the advice given is identical. This aversion was particularly pronounced when advisors - human and AI alike - gave advice based on utilitarian principles (actions that could positively impact the majority). Advisors who gave non-utilitarian advice (e.g. adhering to moral rules rather than maximising outcomes) were trusted more, especially in dilemmas involving direct harm. This suggests that people value advisors - human or AI - who align with principles that prioritise individuals over abstract outcomes.

Even when participants agreed with the AMA’s decision, they still anticipated disagreeing with AI in the future, indicating inherent scepticism.

Dr Jim Everett led the research at Kent, alongside Dr Simon Myers at the University of Warwick.

Dr Everett said: "Trust in moral AI isn't just about accuracy or consistency - it's about aligning with human values and expectations. Our research highlights a critical challenge for the adoption of AMAs and how to design systems that people truly trust. As technology advances, we might see AMAs become more integrated into decision-making processes, from healthcare to legal systems; therefore, there is a major need to understand how to bridge the gap between AI capabilities and human trust."

Myers S, Everett JAC.
People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors.
Cognition. 2025 Mar;256:106028. doi: 10.1016/j.cognition.2024.106028
