Why Do Evaluations of eHealth Programs Fail? An Alternative Set of Guiding Principles

Much has been written about why electronic health (eHealth) initiatives fail. Less attention has been paid to why evaluations of such initiatives fail to deliver the insights expected of them. PLoS Medicine has published three papers offering a "robust" and "scientific" approach to eHealth evaluation. One recommended systematically addressing each part of a "chain of reasoning", at the centre of which was the program's goals. Another proposed a quasi-experimental step-wedge design, in which late adopters of eHealth innovations serve as controls for early adopters. Interestingly, the authors of the empirical study flagged by these authors as an exemplary illustration of the step-wedge design subsequently abandoned it in favour of a largely qualitative case study because they found it impossible to establish anything approaching a controlled experiment in the study's complex, dynamic, and heavily politicised context.

The approach to evaluation presented in the previous PLoS Medicine series rests on a set of assumptions that philosophers of science call "positivist": that there is an external reality that can be objectively measured; that phenomena such as "project goals", "outcomes", and "formative feedback" can be precisely and unambiguously defined; that facts and values are clearly distinguishable; and that generalisable statements about the relationship between input and output variables are possible.


Citation: Greenhalgh T, Russell J (2010) Why Do Evaluations of eHealth Programs Fail? An Alternative Set of Guiding Principles. PLoS Med 7(11): e1000360. doi:10.1371/journal.pmed.1000360

Copyright: © 2010 Greenhalgh, Russell. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
