How AI Bias Shapes Everything from Hiring to Healthcare

Generative AI tools like ChatGPT, DeepSeek, Google's Gemini and Microsoft's Copilot are transforming industries at a rapid pace. However, as these large language models become less expensive and more widely used for critical decision-making, their built-in biases can distort outcomes and erode public trust.

Naveen Kumar, an associate professor at the University of Oklahoma's Price College of Business, has co-authored a study emphasizing the urgent need to address bias by developing and deploying ethical, explainable AI. This includes methods and policies that ensure fairness and transparency while reducing stereotypes and discrimination in LLM applications.

"As international players like DeepSeek and Alibaba release platforms that are either free or much less expensive, there is going to be a global AI price race," Kumar said. "When price is the priority, will there still be a focus on ethical issues and regulations around bias? Or, since there are now international companies involved, will there be a push for more rapid regulation? We hope it’s the latter, but we will have to wait and see."

According to research cited in their study, nearly a third of those surveyed believe they have lost opportunities, such as financial or job prospects, because of biased AI algorithms. Kumar notes that AI systems have focused on removing explicit biases, but implicit biases remain, and as these LLMs grow more capable, detecting implicit bias will become even more challenging. That is why ethical policies are so important.

"As these LLMs play a bigger role in society, specifically in finance, marketing, human relations and even healthcare, they must align with human preferences. Otherwise, they could lead to biased outcomes and unfair decisions," he said. "Biased models in healthcare can lead to inequities in patient care; biased recruitment algorithms could favor one gender or race over another; or biased advertising models may perpetuate stereotypes."

While explainable AI and ethical policies are still being established, Kumar and his collaborators call on scholars to develop proactive technical and organizational solutions for monitoring and mitigating LLM bias. They also recommend a balanced approach to ensure AI applications remain efficient, fair and transparent.
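
The study itself does not prescribe a specific metric, but as a rough illustration of what such technical monitoring could look like, the short Python sketch below computes a demographic-parity gap over logged LLM-assisted decisions. The function names, group labels, and threshold are hypothetical assumptions for illustration, not methods taken from the paper.

```python
# Illustrative sketch only (assumed setup, not the authors' method):
# compare how often an LLM-assisted decision step returns a favorable
# outcome across demographic groups - a demographic-parity style audit.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs, where 'selected' is the
    boolean outcome of the LLM-assisted step (e.g., resume shortlisted)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def demographic_parity_gap(records):
    """Largest difference in selection rate between any two groups.
    A gap near zero suggests parity; a large gap flags the model for review."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log of LLM-assisted screening decisions.
audit_log = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

print(selection_rates(audit_log))         # per-group selection rates
print(demographic_parity_gap(audit_log))  # e.g., escalate if gap > 0.1
```

In practice, a check like this would run continuously over production logs, with gaps above an agreed threshold triggering human review, which is one concrete way to balance efficiency against fairness and transparency.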

"This industry is moving very fast, so there is going to be a lot of tension between stakeholders with differing objectives. We must balance the concerns of each player - the developer, the business executive, the ethicist, the regulator - to appropriately address bias in these LLM models," he said. "Finding the sweet spot across different business domains and different regional regulations will be the key to success."

Xiahua Wei, Naveen Kumar, Han Zhang.
Addressing bias in generative AI: Challenges and research opportunities in information management.
Information & Management, 2025. doi: 10.1016/j.im.2025.104103
