Why Do Evaluations of eHealth Programs Fail? An Alternative Set of Guiding Principles

Much has been written about why electronic health (eHealth) initiatives fail. Less attention has been paid to why evaluations of such initiatives fail to deliver the insights expected of them. PLoS Medicine has published three papers offering a "robust" and "scientific" approach to eHealth evaluation. One recommended systematically addressing each part of a "chain of reasoning", at the centre of which were the program's goals. Another proposed a quasi-experimental step-wedge design, in which late adopters of eHealth innovations serve as controls for early adopters. Notably, the authors of the empirical study cited in those papers as an exemplary illustration of the step-wedge design subsequently abandoned it in favour of a largely qualitative case study, because they found it impossible to establish anything approaching a controlled experiment in the study's complex, dynamic, and heavily politicised context.

The approach to evaluation presented in the previous PLoS Medicine series rests on a set of assumptions that philosophers of science call "positivist": that there is an external reality that can be objectively measured; that phenomena such as "project goals", "outcomes", and "formative feedback" can be precisely and unambiguously defined; that facts and values are clearly distinguishable; and that generalisable statements about the relationship between input and output variables are possible.

Read on-line: Why Do Evaluations of eHealth Programs Fail? An Alternative Set of Guiding Principles

Download from eHealthNews.eu Portal's mirror: Why Do Evaluations of eHealth Programs Fail? An Alternative Set of Guiding Principles (.pdf, 99 KB).

Citation: Greenhalgh T, Russell J (2010) Why Do Evaluations of eHealth Programs Fail? An Alternative Set of Guiding Principles. PLoS Med 7(11): e1000360. doi:10.1371/journal.pmed.1000360

Copyright: © 2010 Greenhalgh, Russell. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
