Harvard researchers weigh AI’s climate and health impact

As artificial intelligence (AI) technology rapidly expands, experts are taking a closer look at both its capacity to address climate and health challenges and the new risks it may create. During Harvard Climate Action Week, the Department of Environmental Health and the Center for Climate, Health, and the Global Environment (Harvard Chan C-CHANGE) at Harvard T.H. Chan School of Public Health convened experts for a panel discussion that explored the complex relationship between AI, climate, and public health.
Opportunities and Concerns
Panelists were enthusiastic about the advances AI can bring across disciplines but expressed concern about the social and environmental impacts of its development and expansion.
Amruta Nori-Sarma, Harvard Chan C-CHANGE Deputy Director and Assistant Professor of Environmental Health and Population Science, highlighted her enthusiasm for AI’s potential to address environmental injustice. She described how AI could help researchers identify populations disproportionately affected by pollution and climate change and guide the strategic allocation of resources. However, she also called for transparency and careful oversight in AI development, cautioning that “if we use historical data, there’s a possibility for some of the biases that are inherent in those data to also get perpetuated.” The challenge, she said, is to use AI to facilitate discovery and improve public health without perpetuating inequities, which means asking who is building the models, whose data is being used, and who gets access to these tools.
Claudio Battiloro, Postdoctoral Research Fellow, stressed that while current AI systems are highly effective at extracting and learning from large and complex datasets, he is concerned about the rapid growth of AI and its potential long-term environmental impacts. Drawing a parallel with the history of plastics, he observed that, much like plastics became deeply embedded in society before their environmental impacts were widely understood, AI is at a critical stage where decisions made now could have lasting consequences. But Battiloro argued that “AI is not yet plastic,” and that proactive regulation to ensure its sustainable and equitable integration can prevent it from becoming as pervasive and difficult to address as plastics.
Dr. Nick Nassikas, Assistant Professor at Harvard Medical School, saw significant promise for AI in health care, particularly in areas such as drug discovery and reducing physician burnout, noting, “I’m also really excited about the future and what it holds…there’s this thought that what we could have accomplished in the next 50 to 100 years, we’re now going to be able to accomplish in the next 5 to 10 years.” He cited AI-powered tools that transcribe patient visits and help with documentation, allowing physicians to focus more on their patients. AI is also finding broader uses in analyzing new medical research, interpreting medical images, and identifying new approaches to cancer treatment and vaccine design.
Francesca Dominici, Professor of Biostatistics, Population and Data Science, emphasized both AI’s potential to accelerate discovery and the need to address its environmental and equity impacts. She pointed to the large energy requirements of AI infrastructure: data centers now account for about 4% of all U.S. electricity use, with much of this power coming from fossil fuels. She also noted that recent data shows AI infrastructure often relies on power sources with higher carbon intensity than the national average, increasing greenhouse gas emissions as well as other pollutants with known public health risks.
Real-World Impacts and the Importance of Equity
The discussion also covered the risk that AI could deepen health inequities. Dominici shared a recent example from a town in Virginia, where developers proposed a natural gas power plant to supply electricity to a large new data center, assuring community members that the health impacts would be minimal. However, an independent analysis by Dominici’s team found that the plant’s emissions would expose 1.2 million people to increased health risks from air pollution and lead to $625 million in additional health care costs; the town objected to the plant, and the developer withdrew the project. “I think this is another element of democratizing information so that the communities that are affected are not fed misinformation that only supports the business interests of the developer,” said Dominici. She emphasized that such experiences show how critical it is for communities to have access to accurate and transparent information about the potential environmental and health impacts of AI’s growing energy demands.
A Path Forward
The panelists agreed that AI offers vast potential benefits for addressing climate and health challenges, but that policies are needed to encourage sustainable energy use and transparency in AI operations. They emphasized that broader policy, regulatory, and cultural changes are required to ensure AI’s development is equitable and environmentally sustainable, and that this is a critical moment to make choices that shape AI’s role in society for the better.