Chicago Health Magazine guidance and policies on using AI in our work
Last updated: February 2, 2026
Generative artificial intelligence is the use of large language models to create something new, such as text, images, graphics, or interactive media. Although generative AI has the potential to improve news gathering, it also has the potential to harm journalists’ credibility, integrity, and our unique relationship with our audience. With that in mind, the following core values will guide our work.
Core Values
1) Transparency
We value both internal and external transparency.
Externally, if we use AI in our journalism, we will document and describe the tools in order to disclose our use and educate news consumers. Editors and designers will create disclosures that are precise in language without being onerous to our audience. This may be a short tagline, a caption or credit, or something more substantial (e.g., an editor’s note).
Communication and disclosure create opportunities to get feedback from the audience, as well as educate consumers. As journalists, part of our job is to empower our audience with news literacy skills. AI literacy — understanding how generative AI works, ways it changes the information ecosystem, and how to avoid AI-generated misinformation — is a subset of news literacy.
Internally, team members will make clear to everyone on our team when they have used generative AI. This will help us create applicable policies as the technologies evolve, and it will provide an understanding of how much of the work humans are doing versus how much AI is doing.
2) Accuracy and human work
Reporters, designers, and editors create and review all of our editorial work: stories, podcasts, photographs, social media posts, illustrations, and videos. We do not use AI to write stories; however, it may be used in the editorial process, including to transcribe interviews, catch grammatical errors, and power search tools. Our writers sign contracts agreeing not to use AI to write their stories. And if we ever use AI in a way that differs from what we have described above, we will disclose that use. Everything we publish will live up to our verification standards. Across all of our work, it is increasingly important to be explicit about how we know that facts are facts.
3) Audience service
We serve our audience, and we have made a promise to our audience to provide them with information that helps them navigate their health decisions. Historically, human journalists have researched, written, and produced the information in our magazine and on our website, in communication with other humans. We will continue along that trajectory to ensure truth and integrity in the work we produce, and to protect people’s livelihoods and connection with one another.
4) Privacy and security
Our relationship with our audience and with our sources is rooted in trust and respect. We will protect their data in accordance with our newsroom’s privacy policies. Our privacy policy forbids entering sensitive or identifying information about readers, sources, or our own staff into any generative AI tools.
————
Logistics
The point person on generative AI in our newsroom is Editor-in-Chief Katie Scarlett Brandt. She will seek input regularly from the core team. In addition, Katie’s responsibilities include:
- Monitoring our content management systems, word processing software, photo editing software, and other business software for updates that may include AI tools. Because software changes quickly and AI is being added to nearly every technology product, she may assign appropriate team members to stay current on updates.
- Writing clear guidance about AI in content generation.
- Editing and finalizing our AI policy, ensuring that it is both internally available and, where appropriate, publicly available (with our other standards and ethics guidelines).
- Seeking input from our audience, through surveys and other feedback mechanisms.
- Understanding our privacy policies and explaining how they apply to AI and other product development. This includes consulting with editors, lawyers, or other privacy experts whose advice shapes newsroom policies.
Editorial use:
Research – AI can be very helpful in scraping and analyzing large amounts of data. Our writers are permitted to use AI to search for information, mine public databases, or calculate statistics that would be useful to our audience. Any data analysis should be checked by an editor and subsequently a fact checker. Our writers also may ask a publicly available large language model to research a topic. However, the writer must independently verify every fact. It is fairly common for AI to hallucinate information, including statistics, biographical details, and even citations to articles that do not exist.
Writing – We do not use AI to generate articles, article summaries, or newsletter copy. Additionally, we do not enter our content into any large language model.
Illustrations – All illustrations created with AI contain the following credit: Artificial intelligence contributed to the creation of this illustration.
Photos – We do not use AI-created “photos.” We also do not use AI to manipulate photos, unless the results are for illustration purposes and clearly labeled as such. Visual journalists need to stay aware of updates to photo-processing tools to ensure any AI enhancement is used according to our policies.
Reader-submitted content – We do not publish any reader-submitted content without first verifying its authenticity.
Fact-checking – Use of AI alone is not sufficient for independent fact checking. Facts should be verified against multiple authoritative sources that have been created, edited, or curated by human beings; a single source is generally not sufficient.
Social media – Posting verbatim output from large language models is not permitted on our social channels.
————
Commitment to Audience AI Literacy
Along with this AI policy, we are developing a page to help our audience understand our approach to reporting health news. This material will be regularly updated. We will link to resources, articles, and other materials in order to:
- Help our audience understand the basics of generative AI
- Explain how we might use AI in our work
- Build a more robust vocabulary for describing AI
- Offer strategies for avoiding AI-generated misinformation
- Show how to use chatbots responsibly when seeking factual information