Vaccines and motherhood: Are AI-generated health messages working in Kenya and Nigeria?

Kenya and Nigeria are long-time pioneers in health tech innovation, continually evolving their communication strategies with each wave of new technology.
Yewande O. Addie, University of Florida
Picture this: an artificial intelligence (AI) system creates a bright, youth-focused social media post for young Kenyans, complete with local slang and the phrase “YOUNG, LIT, AND VAXXED!” This message tackles the fear that vaccination will affect fertility – a fear that has serious health consequences. But something feels off about an algorithm trying to sound cool while discussing reproductive health.
This scenario is one of dozens of health messages analysed in a recent study of health campaign communication in Nigeria and Kenya.
Our research team analysed and compared 120 health messages: 80 from traditional sources like health ministries and non-government organisations, and 40 generated by AI systems.
We focused on two critical health topics: vaccine hesitancy and maternal healthcare.
The results reveal a surprising twist in the global rush to use AI for health communication: neither approach proved superior. AI was more creative but error-prone.
Traditional campaigns were authoritative but rigid. This underscores the real challenge: designing health communication that is accurate and culturally responsive.
Health systems riding the technology wave
Kenya and Nigeria aren’t newcomers to health technology innovation. Both have consistently adapted their health communication as new technologies emerged.
In the 1980s and 1990s, health campaigns relied on printed posters, radio jingles and clinic-based education. By the 2010s, mobile phones and platforms like WhatsApp and Facebook were transforming health messaging.
Text message alerts and WhatsApp groups became essential tools for organisations and health ministries. They shared updates on HIV, maternal health and Covid-19. In Nigeria, Igbo-language radio campaigns like “Kill Mosquito, Stop Malaria” improved message understanding among rural women.
Now AI represents the next wave. The World Health Organisation has launched S.A.R.A.H. (Smart AI Resource Assistant for Health), designed specifically for health communication. Meanwhile, general AI tools like ChatGPT (Chat Generative Pre-trained Transformer) are being used to craft vaccination campaigns and maternal health advice.
Kenya has developed a national AI strategy for healthcare. Nigeria, too, is exploring AI tools to strengthen health systems.
The appeal is obvious: AI can produce messages quickly, in multiple languages, and at scale. This efficiency matters especially as global health funding faces uncertainty, pushing health systems to do more with potentially fewer resources.
So our research asks: when AI creates health messages for these contexts, does it understand what really matters to local communities?
What we discovered
The results surprised us. AI-generated messages actually included more cultural references than traditional campaigns. Where human-created materials often stuck to clinical, western medical language, AI systems attempted to use local metaphors, farming analogies and community-centred language.
But there’s a catch. These cultural references were often shallow and sometimes inaccurate. AI might reference local customs without truly understanding them.
It could also use agricultural metaphors that work for rural audiences but alienate urban readers. In some cases, AI-generated images produced warped, distorted faces. Distorted depictions of people of colour are a persistent problem in AI image generation, because these systems haven't been trained on enough diverse examples.
The WHO’s health-focused AI tool, S.A.R.A.H., often produced incomplete responses and sometimes required resets to function properly. Its use of a white female avatar also raises questions about representation in global health AI design.
Traditional health campaigns had their own problems, too. Despite being created by organisations with substantial resources and local presence, they often reinforced Western medical expertise. They gave limited space to community knowledge and traditional health practices.
This reflects a broader pattern. International organisations can inadvertently replicate colonial-era patterns of external “experts” telling local communities what to do.
We saw this during the Covid-19 pandemic, when high-income countries blocked efforts to waive intellectual property rules and hoarded vaccine doses. This left many low- and middle-income countries struggling to secure access. It reinforced a hierarchy of whose health and expertise mattered most in global decision-making.
Most striking was what both approaches missed: genuine community empowerment. Across nearly all the messages we analysed, people were positioned as passive recipients of expert knowledge rather than active participants in their own health decisions.
Why this matters now
These findings matter because AI adoption in African health systems is accelerating rapidly. In sub-Saharan Africa, surveys suggest that 31.7 per cent of AI deployments in health are in telemedicine, 20 per cent in sexual and reproductive health, and 16.7 per cent in operations.
Success stories are emerging, such as Kenya’s AI Consult platform, which is reducing diagnostic errors, and AI tools that are changing healthcare access in Nigeria.
But our research suggests that without careful attention to cultural context and community engagement, AI health messaging could run into the same old problems: outsiders creating messages for communities without genuinely understanding or involving them.
The stakes are particularly high for vaccine hesitancy and maternal health. These are two areas where trust, cultural sensitivity and community buy-in can literally mean the difference between life and death.
When people trust health guidance, more lives are saved. Vaccines protect communities from preventable diseases, and maternal health support lowers the risk of mothers and infants dying during childbirth.
A path forward
The solution isn’t to abandon AI in health communication.
Instead, these tools need to be developed with local communities from the ground up. This means training AI systems using locally relevant data and knowledge systems.
Health organisations should also build community feedback loops into AI message development. Test AI-generated health content with the communities it’s meant to serve. Include local health workers, traditional leaders and community members in validating accuracy, cultural appropriateness and emotional resonance.
There’s also an opportunity to invest in homegrown AI development. African-led platforms like the digital healthcare assistant AwaDoc demonstrate how locally developed AI can better understand cultural context while maintaining medical accuracy.

***
Yewande O. Addie, Adjunct Professor, University of Florida
This article is republished from The Conversation under a Creative Commons license. Read the original article.