Lessons learned: Transparency around using AI needs to be specific to maintain trust

You’re reading Lessons Learned, which distills practical takeaways from standout campaigns and peer-reviewed research in health and science communication.
The rapid development of AI technologies has raised many ethical, legal, economic and environmental concerns. On top of these general concerns, every industry must also reckon with how AI will affect its own line of work. For health communicators, one essential consideration is understanding how using AI affects community trust.
One suggested strategy for earning trust is to prioritize transparency. In other words, communicators could openly disclose when and how they use AI through content labels and publicly accessible organizational guidelines. However, many communicators fear that using AI, and disclosing that use, could backfire and decrease trust. That leaves us with a critical tension: does transparency about using AI hurt or help trust?
A recent study published in The International Journal of Press/Politics explored this tension by evaluating how content labels affect trust in AI-generated news. The researchers randomly assigned 1,483 participants to read an AI-generated news article that carried a label identifying it as AI-generated, a list of the sources the AI used to generate the article, both, or neither (the control condition). After reading the article, participants rated how accurate and fair they thought the story was and how trustworthy they found the publishing organization.
What they learned: AI labels decrease people’s trust in the publishing organization. How much trust drops depends on how much the reader trusted news in the first place: the higher their existing trust in news, the more the label hurt. Interestingly, these effects on trust occurred even though the labels had no impact on people’s beliefs about the accuracy or fairness of the article itself. Importantly, the negative impact of AI labels on trust can be counteracted by including a source list.
Why it matters: Many health communicators are grappling with whether and how they can use AI responsibly. These tough questions are set against a backdrop of competing pressures to innovate and move faster (often amid funding cuts), maintain community trust, and stay true to personal and industry ethical values. Science-based techniques for transparency, like the ones this study highlights, can help communicators navigate these complicated challenges.
➡️ Idea worth stealing: If you choose to use AI, be specific when disclosing how you used it. People are more likely to trust AI-based content when you specify how it was generated (e.g., which sources the AI used to generate the material).
What to watch: How health communicators and AI experts continue to work together to develop guidelines for using AI ethically.