Technology evolves rapidly, and as it does, we often ask ourselves, “What does this mean for us?” When ChatGPT ushered in a new era of accessible artificial intelligence (AI) tools in 2023, our staff here at the International Center for Journalists (ICFJ) was full of questions about what this meant for our work, our mission and journalism in general.
To support our staff, we embarked on a project to develop a policy guiding how the organization will use AI tools. And because we know we’re not alone in grappling with these big questions, we want to share the lessons we learned along the way to help other organizations that are creating their own policies.
Why we decided to write an AI policy for our organization
Here at ICFJ, AI was a hotly debated issue. Some people were extremely reluctant to use it, while others dove right in.
The most hesitant had serious concerns about AI: How would it affect their work? Would it harm the journalism community? Would it accelerate the rise of disinformation? Was it safe to use? Others saw the potential: How could AI help streamline our organization’s processes? How could it help us analyze the impact of ICFJ’s programs and tell the story of the organization? Could it help hold journalists accountable?
To answer these questions, we convened a working group to create an AI Use Policy that sets a standard for how ICFJ uses, adopts, and engages with AI tools. The group was made up of staff from different levels and departments to ensure the new policy would be an effective tool for everyone in the organization.
How we wrote the policy
The first goal of the AI Working Group was to establish principles that would guide us through the process. Here is what we decided:
- Do no harm
- Protect others’ rights, privacy and original work
- Use content and data with consent
- Be transparent
After identifying these principles, we began our research. We started with the Partnership on AI’s guidelines. We read articles, listened to podcasts, talked to colleagues, and attended webinars and in-person discussions. Some of the resources we used are linked below.
Once we were confident in our knowledge, we began writing the policy. And where better to start than ChatGPT? We asked the tool to draft an AI use policy for a nonprofit organization. Ultimately, we didn’t use its draft, but it helped us lay the groundwork and decide which sections to include in our own policy.
Once we had a draft, we went through an intensive review process. We asked IT consultants, AI experts, editors, an attorney, and ICFJ senior management to review the policy.
After six months of work, we had a final version of the ICFJ AI Use Policy.
What we learned from this process
- Be ready to learn. AI research can be daunting, and it’s difficult to know where to start. There are far more resources available today than when we began. (See the list below.)
Our suggestion is to create a space (we used a Slack channel) where you and the rest of your organization can collaborate on research. Attend conferences and webinars, too. Reach out to your professional community to see if anyone is willing to help, or work with your IT service provider if you have one.
- Start simple. People who work at nonprofits typically wear multiple hats, and your organization may hand you the role of in-house AI expert. Start by explaining why your organization is creating the policy and what its goals are.
- Don’t agonize over the writing. Policies typically include some form of Overview, Purpose, Scope, Compliance, and Review sections. Write these standard sections first; it will put you in the right headspace to draft the substance of the policy. Poynter has a practical template to help you get started.
- Keep everyone informed. This policy will affect all of your staff. Regularly communicate the why behind the policy and invite people to contribute to its development.
- Optimize the review process. As mentioned above, this policy will affect everyone in your organization, so don’t treat the review as a formality. Build a review team that understands how the new policy could affect every department in your organization.
- AI is constantly evolving. Your first policy won’t cover every AI tool available, and that’s okay. Build a process into the policy to review and update it regularly as new AI technologies emerge.
How ICFJ Helps Journalists Navigate AI
Journalists’ justified distrust of AI rivals their fear of being left behind. ICFJ hosts programs that candidly address this tension, tackling questions of ethics and fairness while exploring how newsrooms can use AI tools safely and effectively to improve their processes, research and workflows. Some media outlets block AI scrapers on their sites; others license their work to AI companies to train their models.
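For outlets that choose to block, the usual mechanism is the site’s robots.txt file, which asks crawlers to stay out by user-agent name. Here is a minimal sketch (a generic example, not drawn from ICFJ’s policy; GPTBot, CCBot and Google-Extended are real crawler tokens, but publishers should check each AI company’s current documentation for the crawlers they want to exclude):

```
# robots.txt — ask common AI crawlers not to fetch any pages
User-agent: GPTBot            # OpenAI's web crawler
Disallow: /

User-agent: CCBot             # Common Crawl, whose archives feed many training datasets
Disallow: /

User-agent: Google-Extended   # opts pages out of Google AI training, not out of Search
Disallow: /
```

Note that robots.txt is a request, not an enforcement mechanism: compliant crawlers honor it, but it does not technically prevent scraping.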
ICFJ works with journalists to explore these options and, in some cases, to use AI tools to counter the harms of other AI tools, such as the spread of misinformation. The tools and frameworks are constantly evolving.
- ICFJ Knight Fellows as Thought Leaders: We have two AI-focused Knight Fellows, with one joining our team soon. Nikita Roy creates courses that build AI literacy and works with newsrooms to implement AI tools thoughtfully. Newsroom Robots, her weekly podcast of conversations with industry leaders, ranks among the top technology podcasts on Apple Podcasts in more than 30 countries. Mattia Peretti, one of the pioneers of AI in journalism, takes a step back to consider how this watershed moment lets us reimagine how journalism can better serve communities, with AI as an aid to change, not its engine. He also compiled a directory of AI and journalism consultants and trainers for newsrooms.
- Leap Solutions Challenge: Leap, ICFJ’s innovation lab, challenged eight teams of journalists to explore how AI tools can defuse AI-powered misinformation. The solutions included AP’s Verify dashboard, which helps its journalists verify information; Rolli Information Tracer, which tracks the origin and spread of misinformation across platforms; several chatbots that respond instantly to queries about suspicious claims, including ChatVE, which supports several African languages; and Snap Audit from Serendipia, which helps Mexican journalists quickly analyze documents to expose corruption and disinformation.
- Media Party: This gathering of technologists and journalists, launched by an ICFJ Knight Fellow years ago and now supported by ICFJ, hosts workshops, discussions and a hackathon. Last year’s theme was “How can AI serve journalism?” and this year’s is “AI and elections.”
- Disarming Disinformation: This ICFJ program investigates who is behind disinformation and how it spreads, with a particular focus on AI.
- AI Literacy Program: Roy and Peretti are partnering to develop an AI literacy program and learning experiences for journalists.
- ICFJ’s International Journalists’ Network (IJNet) oversees the Pamela Howard Forum on Global Crisis Reporting, which has hosted numerous webinars on topics such as the ethics of using AI and building AI tools for journalists. You can find the webinar trainings here.
What the ICFJ will do next
AI will continue to be a disruptive force for the foreseeable future, and we must be ready for whatever changes it brings. To stay current, we plan to regularly review our AI Use Policy to ensure our staff are always informed, protected and empowered when using AI tools, now and in the future.
ICFJ Resources on All Things AI
Additional Resources for Creating Your Own Policy