Using artificial intelligence to improve health

Nikhil Vytla, SM ’25, builds AI software with an emphasis on trustworthy results
Nikhil Vytla was in high school when he programmed his first mobile phone app. Along with his friend, he created an app that accessed news articles from different websites, translated them into any language, and then used text-to-speech technology to read them out loud. Vytla’s inspiration for the app was close to home.
“My grandfather loved to read the news, but over time, [he developed] age-related macular degeneration and lost his eyesight. He couldn’t read his favorite newspapers anymore,” he said. “That was my motivation to build technology that helped people in my communities—the people that I cared about.”
Since his first foray into the world of software development, Vytla has continued to pursue his passion for building technology throughout his educational and professional career. His focus on using artificial intelligence (AI) in health care grew from witnessing firsthand how technology could address real human challenges, particularly for vulnerable populations who might benefit most from more accurate and accessible medical tools. In March, he completed his master of science in health data science from the Department of Biostatistics at Harvard T.H. Chan School of Public Health.
Creating technology for social good
For his undergraduate degree, Vytla studied computer science and statistics at the University of North Carolina (UNC) at Chapel Hill. He became interested in applying his technical skills to projects for social good, particularly in the health care field. “Health care represents this incredible intersection where technical innovation can have immediate, life-changing impact,” he said. “When you’re building software that could help doctors save lives or improve patient outcomes, the work takes on a completely different meaning.” He is particularly proud of being a founding member of his college’s chapter of Computer Science + Social Good. In the organization, teams of students partnered with local nonprofits and startups to develop mobile apps and other technologies.
For example, Vytla and his teammates worked with the hospital system UNC Health to make virtual reality (VR) software for immunocompromised children, who had to stay in isolated hospital wings due to their susceptibility to disease. Using VR headsets, the children could take virtual field trips to museums, explore underwater environments, or play interactive games. “Seeing a child’s face light up as they virtually swam with dolphins while confined to a hospital bed—that’s when you realize technology isn’t just about algorithms and code,” he said. “It’s about restoring joy and possibility to people when they need it most.”
After college, Vytla took a job as a software engineer for the startup TruEra, where he developed methods to better explain how AI models work.
“AI is a black box,” he said. “How can you know what features or influences are most important to the model in terms of making a decision—like the prediction of a certain outcome, for example [whether a patient has a] disease or not?”
According to Vytla, AI explainability is also important in determining fairness in how models use protected characteristics—traits like race and gender that are legally protected from discrimination—to make decisions such as loan or credit card approvals or health diagnoses.
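For readers curious what that looks like in practice, here is a minimal sketch of one common explainability technique, permutation feature importance: shuffle one input at a time and watch how much the model's accuracy drops. The model, dataset, and feature names below are hypothetical stand-ins, not TruEra's actual tooling.

```python
# Minimal sketch: permutation feature importance for a disease-prediction
# model. The dataset and feature names are hypothetical; this illustrates
# one generic explainability technique, not TruEra's proprietary methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular patient data (e.g., labs and vitals).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = ["age", "blood_pressure", "glucose",
                 "bmi", "heart_rate", "cholesterol"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for name, mean in sorted(zip(feature_names, result.importances_mean),
                         key=lambda t: -t[1]):
    print(f"{name}: {mean:.3f}")
```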
“My goal isn’t just to make AI smarter—it’s to make AI that works equitably for everyone. I want to bridge the gap between cutting-edge AI research and practical tools that could actually improve patient care in real clinical settings,” he said. Vytla noted that the need for trustworthy AI in health care is particularly urgent: studies have shown that some medical AI systems exhibit racial bias, such as algorithms that underestimate pain levels in Black patients and diagnostic tools trained primarily on data from white populations, which produce less accurate diagnoses for people of color. With the aim of working on these types of issues, he set his sights on studying at Harvard Chan School.
Improving diagnoses for traumatic injuries
In the health data science program, Vytla became part of a tight-knit cohort of students. “We were able to build a close network and rely on and support each other through pretty tough classes, coursework, and complex problems,” he said. “I think that camaraderie was really important. I don’t know if I would have made it through the program without that kind of support.”
In addition to classes, he completed his capstone project at the Surgical Informatics Lab at Harvard Medical School and Beth Israel Deaconess Medical Center. The project focused on the process of diagnosing patients who come to the emergency room with traumatic injuries from, for example, car accidents or falls. Vytla worked on developing an AI model to help surgeons make more accurate diagnoses.
“Clinical decision-making in trauma care is highly subjective. There’s a lot of variability in how different surgeons treat patients, which might lead to missed injuries and delayed treatments, or potentially inconsistent outcomes,” he explained.
Vytla’s AI model used several factors as inputs for analysis. The main input was imaging reports: clinicians’ written interpretations of X-rays, CT scans, and other medical imaging. According to Vytla, different clinicians rarely use exactly the same terminology to describe the same observations, so the model’s first step was to analyze the report text and convert it into standardized diagnostic terms. The model then incorporated additional information, including patient demographics and physical exam results. Based on all of that data, it produced a list of potential missed diagnoses and recommendations for follow-up medical tests.
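In rough outline, a pipeline like the one Vytla describes might look like the sketch below. The term mappings and feature names are illustrative assumptions, not the actual model from the Surgical Informatics Lab.

```python
# Simplified sketch of the kind of pipeline described above. The term
# mapping and feature names are illustrative assumptions, not the
# Surgical Informatics Lab's actual model.

# Step 1: map free-text report language to standardized diagnostic terms.
# Real systems typically use clinical NLP and ontologies (e.g., SNOMED CT);
# a keyword lookup stands in for that here.
TERM_MAP = {
    "pneumothorax": "pneumothorax",
    "collapsed lung": "pneumothorax",
    "rib fx": "rib_fracture",
    "rib fracture": "rib_fracture",
    "free fluid": "hemoperitoneum",
}

def standardize_report(report_text: str) -> set:
    """Convert a clinician's free-text imaging report into standard terms."""
    text = report_text.lower()
    return {term for phrase, term in TERM_MAP.items() if phrase in text}

# Step 2: combine standardized imaging findings with structured data
# (demographics, physical exam) into one feature vector for a model
# that flags potential missed diagnoses for the surgeon to review.
def build_features(report_text, age, systolic_bp, exam_abnormal):
    findings = standardize_report(report_text)
    return {
        **{f"finding_{t}": 1 for t in findings},
        "age": age,
        "systolic_bp": systolic_bp,
        "exam_abnormal": int(exam_abnormal),
    }

features = build_features("CXR: small left pneumothorax, rib fx noted",
                          age=42, systolic_bp=95, exam_abnormal=True)
print(features)
```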
“It’s not necessarily that the tool is filling in the diagnoses, but it’s providing suggestions to the surgeon or to the resident [a surgeon in training] of things to check out and potentially test for. It’s designed to complement the surgeon’s expertise, not override it,” Vytla said.
When the model was evaluated using a test dataset, it predicted more injuries than patients had, erring on the side of safety. “In trauma care, false positives are far preferable to false negatives. An extra CT scan might be inconvenient, but a missed internal injury could be fatal. We deliberately designed the system to be cautious—better to be thorough than to miss something critical,” Vytla said. Researchers at the Surgical Informatics Lab are continuing to refine Vytla’s model for future use.
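One standard way to build in that kind of caution is to tune the model's decision threshold for high sensitivity, accepting extra false alarms so that few real injuries slip through. A minimal sketch, using made-up validation scores rather than the lab's actual data or target:

```python
# Sketch of one way to make a classifier "err on the side of safety":
# lower the decision threshold until recall (sensitivity) meets a target,
# accepting more false positives. Scores and target are illustrative.
import numpy as np
from sklearn.metrics import precision_recall_curve

# Hypothetical validation-set labels and scores from an injury model.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.3, 0.35, 0.4, 0.45, 0.6, 0.65, 0.7, 0.8, 0.9])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)

# Pick the highest threshold whose recall is still >= 0.95, so very few
# true injuries are missed even if extra follow-up tests get ordered.
target_recall = 0.95
ok = recall[:-1] >= target_recall  # recall has one more entry than thresholds
chosen = thresholds[ok].max() if ok.any() else thresholds.min()
print(f"decision threshold: {chosen:.2f}")
```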
Making AI more trustworthy
Since finishing his degree, Vytla has been working as a software engineer for the company Snowflake (which acquired TruEra last year) to improve the trustworthiness of AI models—specifically, large language models such as ChatGPT and Claude. For example, he is working on methods to trace how the models reach conclusions and help them cite sources and express uncertainty—making AI responses more transparent and verifiable.
Vytla is also looking at whether an answer calculated by an AI model matches what the model actually tells the user. “Do models say what they’re really thinking—and what does it actually mean for a model to think? These are the types of questions I’m interested in answering,” he said.
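One simple, generic way to surface that kind of uncertainty is self-consistency: sample the model several times on the same question and measure how often its answers agree. The sketch below uses a canned stand-in for the model call; it illustrates the idea only and is not Snowflake's or TruEra's actual method.

```python
# Generic sketch of surfacing uncertainty via self-consistency: ask the
# same question several times and measure how often the answers agree.
import random
from collections import Counter

def ask_model(question: str) -> str:
    """Hypothetical stand-in for a sampled LLM call (temperature > 0).
    Here it just returns a random canned answer for demonstration."""
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def answer_with_confidence(question: str, n_samples: int = 10):
    answers = [ask_model(question) for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    # Agreement rate across samples is a crude proxy for confidence.
    return best, count / n_samples

answer, confidence = answer_with_confidence("What is the capital of France?")
print(f"{answer} (agreement: {confidence:.0%})")
# Low agreement is a signal the model should hedge rather than state
# its answer flatly.
```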
Quick hits
Hobbies: Bouldering [a type of rock climbing], surfing, playing soccer, and making bracelets with paracord [rope used in parachuting]
Book recommendation: “Unmasking AI” by Joy Buolamwini, which looks at AI from the perspective of race and gender