When it comes to widespread global health problems, ChatGPT and its like may not be the first solution that comes to your mind.
But generative AI -- the class of artificial intelligence that includes the large language models underlying chatbots like ChatGPT -- could have a lot to offer in low- and middle-income countries, where access to reliable health care remains a hurdle for many. Eleni Linos, MD, DrPH, the director of the Stanford Center for Digital Health, spends a lot of time thinking about how digital tools, including generative AI, could tackle health problems that humans haven't been able to solve.
Recently, Linos and her research team at the center co-authored a report on generative AI's applications for health in low- and middle-income countries, in collaboration with Isabella de Vere Hunt of Oxford and Sarah Soule, the incoming dean of the Graduate School of Business, who was leading the Stanford Center for Advanced Study in Behavioral Science at the time of the research.
We spoke with Linos about how AI can be used for good to provide personalized, reliable health care and information to patients, especially in settings where high-quality medical care is difficult to access, or when people are hesitant to discuss topics such as HIV testing or reproductive health with their doctor.
Linos, the Ben Davenport and Lucy Zhang Professor in Medicine, also discussed the difficulty of reaching patients remotely in places where many households don't have access to online tools. This interview has been edited for length and clarity.

Related content
Read the full Stanford Center for Digital Health white paper, Generative AI for Health in Low & Middle Income Countries, on the potential of generative AI to improve health and health care in low- and middle-income countries.
How did this report come about?
As generative AI use grows rapidly, so does the opportunity to transform people's lives worldwide. The potential -- the optimism -- that we can finally provide high-quality medical care for everyone is here.
A lot of organizations are investing in generative AI health projects across the world, but we didn't know how these tools were being used. What's working and what's not? To find out, we conducted detailed interviews with health care workers, policymakers, funders, and technology developers. We also surveyed hundreds of stakeholders who work in this field and convened two round-table meetings, one at Stanford and one in Nairobi, Kenya. Our aim was to provide timely insights to inform the next stage of investment in, and use of, these technologies.
What's different about using generative AI in low- and middle-income countries than in high-income countries?
Access to basic health care infrastructure is very different. Finding care, especially specialist care, is difficult in many of these countries, particularly in rural communities, where people may face long travel distances to clinics, prohibitive costs, and shortages of treatments and trained health care professionals.
When it comes to these AI models, language is another big challenge. Many AI models are trained in English or other common languages, and translations into the thousands of languages spoken in Africa, for example, may not be accurate. Then there's the scale required to meet the health needs of billions of people living in lower-income settings. Finally, many people in these communities don't have access to the internet or digital tools.
With any AI model for health, there's a tension between the perfect and the good enough. Obviously we need to prioritize safety first. If your alternative is walking down the street to a trained physician who can provide excellent care with empathy and trust, your standards for an AI chatbot may be pretty high. But if you don't have that alternative, or if your alternative is waiting nine months, what counts as good enough is different. In many settings, especially where people are suffering, we may not have time to wait for the perfect AI model.
What's an example of how generative AI is being used in these settings?
One of the most widely scaled examples we highlight in the report is Jacaranda Health's PROMPTS system in Kenya. PROMPTS is a two-way SMS-based maternal health service that provides timely, AI-generated responses to questions from pregnant and postpartum patients. Since integrating a custom-trained AI model in Swahili and English, the system has significantly improved response times -- from hours or days to just minutes.
By combining AI with human oversight, PROMPTS has reached over 500,000 users in 2024 alone. The system flags high-risk cases for immediate human follow-up, ensuring that AI enhances, rather than replaces, human expertise. This is a game-changer in maternal health care, particularly in regions where pregnancy-related complications remain a leading cause of death.
What are problems that still need to be overcome?
In addition to the known challenges of AI in health care -- data quality, ethical considerations, privacy, algorithmic bias, and the guardrails needed to address them -- our research identified challenges specific to low- and middle-income settings.
Many of the developers working on these problems feel that everyone is working in parallel without talking to each other, in part because the technology moves so quickly. We need a reliable way for everyone in this field to learn from each other's successes and struggles. Relatedly, we need consistent metrics for measuring success so we can compare like with like -- something our team is helping to establish. We also need methods and standards for evaluating these models that keep pace with innovation.
Models must also be fine-tuned for local dialects and slang, varying cultural contexts, and different medical realities -- which requires large quantities of high-quality textual data. Luckily, many companies in this space are already working on this problem, as is our team.
Lastly, we need to improve basic health infrastructure. No matter how optimistic we are about AI's potential, or how advanced the AI models are, how well they improve someone's health depends on the environment and resources that are available where they live. Imagine if an AI model diagnosed you perfectly and correctly recommended a particular surgery or antibiotic -- if there's no surgeon in your community, or no antibiotics, it doesn't actually help.
Linos and her team hope the report sparks collaboration across borders and disciplines, ensuring that AI doesn't just replicate existing health systems but reshapes them, transforming the way people worldwide live healthier lives. In a fast-moving field like generative AI, they hope the report and ongoing efforts will help teams make steadier headway toward improving health in these settings.
The Center for Digital Health aims to bridge disciplines to help answer some of the most crucial questions related to technology and health. True to this mission, the center is planning a follow-up convening with funders and tech companies in April to present the report and align on solutions.
"The road ahead is filled with challenges, but with the right values, partnerships, and ethical guardrails, AI can be the great equalizer for health," Linos said. "This is just the beginning."
Illustration by Emily Moskal / Stanford Medicine