AI podcast: Artificial intelligence, accountability, and authenticity in global health
The Geneva Learning Foundation
Update: 2025-03-11

# Global Health Podcast Examines AI's Impact on Power Dynamics and Knowledge Production

Read Reda Sadki’s article: Artificial intelligence, accountability, and authenticity: knowledge production and power in global health crisis
https://redasadki.me/2025/03/09/artificial-intelligence-accountability-and-authenticity-knowledge-production-and-power-in-global-health-crisis/

Get the framework here:
https://redasadki.me/2025/01/24/a-global-health-framework-for-artificial-intelligence-as-co-worker-to-support-networked-learning-and-local-action/

A recent podcast offers a thought-provoking discussion of artificial intelligence's evolving role in global health, examining researcher Reda Sadki's new article on AI, authenticity, and power dynamics in knowledge production.

The conversational deep dive, featuring two health experts discussing Sadki's work, explores how AI tools could either exacerbate existing inequalities or help democratize knowledge in global health settings.

## Case study grounds theoretical concerns

The podcast hosts begin by discussing how Sadki's article uses a specific case study to illustrate broader concerns about AI in global health.

"What's really interesting about Reda Sadki's piece here is how he uses this really specific case study to ground all of these ideas," explains one host, describing how Sadki introduces Joseph, a Kenyan health leader known for thoughtful contributions to a global health learning program called Teach to Reach.

The hosts explain that according to Sadki's article, Joseph began using AI to respond to questions designed to elicit personal narratives. One speaker notes, "And that obviously caused a lot of confusion. People were so used to these really insightful personal stories from Joseph and then all of a sudden they're getting these like really generic AI generated responses."

## Accountability pressures in global health

The podcast delves into how Sadki connects Joseph's experience to broader accountability structures in global health, highlighting the pressures practitioners face from international donors.

"Health workers are under a ton of pressure from international donors. Their funding is tied to these performance evaluations, and there's this constant fear of getting penalized if they step outside the box or take any risks," says one of the hosts, summarizing a key point from Sadki's analysis.

This creates what the hosts describe as the "transparency paradox" discussed in Sadki's article – practitioners must choose between acknowledging AI use and potentially having their work devalued, or concealing it and risking accusations of dishonesty.

"If Joseph had just said, 'Hey, I'm using AI,' people might have just dismissed his work as not authentic, even if it was good," explains one of the speakers, adding, "But if he kept it a secret, he'd risk being accused of being dishonest and maybe even losing his funding."

## Economic considerations and emerging inequalities

The discussion turns to the economic dimensions of AI in global health, with the hosts referencing Sadki's mention of OpenAI's announcement of specialized AI agents.

"Sadki says a high level AI agent might cost $20,000 a month," notes one speaker, highlighting the financial aspects that might make AI attractive to organizations facing budget constraints.

The hosts examine how these developments intersect with existing labor inequalities in global health. "Sadki highlights how historically there's been this labor stratification in global health where people in the global north have often been paid way more than their colleagues in the global south," explains a speaker.

This raises concerns about potential new divides: "Will these AI agents just make those inequalities worse? Will it lead to a situation where the people who can afford these fancy tools have even more control over knowledge and decision making?"

## Framework for responsible AI integration

The podcast outlines what the hosts describe as Sadki's framework for integrating AI responsibly in global health contexts.

"Sadki has put forward this framework that sees AI as a coworker supporting networked learning and local action," explains one host. "It's not about replacing human expertise, it's about making it stronger."

This approach emphasizes local context and community empowerment, with one speaker noting, "You can't just take some tool that was developed in Silicon Valley and expect it to work perfectly in a rural village in Kenya."

The hosts connect Sadki's ideas to other researchers' work, mentioning "what Newton Lewis and her colleagues have written about performance management in complex adaptive systems," and highlighting the importance of flexible, participatory approaches that value local knowledge.

🤖 This podcast was generated by AI to explore the article. While the conversation is AI-generated, everything is based on the article.
