Abstract / Description of output
This rapid review outlines a range of existing and potential risks that generative AI poses if incorporated into journalism, written with newsroom leaders and journalists in mind. It is intended as a quick entry point into live and rapidly evolving discussions of the issues, with links and references to useful resources – some academic and peer-reviewed, some journalistic. It is not a comprehensive analysis, nor an exploration of applications or benefits (of which there is a growing number of resources). For ease of navigation, the document is structured into three broad risk categories: editorial, legal, and societal. The report was created as an output of a collaboration between the University of Edinburgh and the BBC R&D Responsible Innovation team, as part of the PETRAS Building Public Value via Intelligible AI project. The work underpinning it includes a review of existing research and grey literature, expert workshops with BBC staff, and interviews and focus groups with BBC journalists.
Why have we produced this? Generative AI is a branch of general-purpose AI (also referred to as foundation models) that can create media content of varied types, including text, images, audio and code. Generative AI systems such as Large Language Models (LLMs) have pushed the boundaries of what is possible in content generation and created new challenges and risks for society. They are likely to have significant impacts on news organisations, journalists and audiences/news users, changing how news is gathered, produced, distributed and consumed. However, the news media industry currently lacks an advanced understanding of exactly how these systems work, when and how they fail, and what mitigations are required to ensure they work in the public interest.
| Original language | English |
| --- | --- |
| Type | Industry-facing report |
| Number of pages | 12 |
| Publication status | Published - 6 Jun 2023 |
Keywords
- Generative AI
- journalism
- news production
- Large Language Models
Projects
- PubVIA: Building Public Value via Intelligible AI (Finished)
  Jones, B., Luger, E. & Elsden, C.
  1/10/22 → 31/03/23
  Project: Research
Research output
- Futures Thinking with Journalists: Resource Pack for Researchers and Innovators in the News Industry
  Jones, B. & Jones, R., 2023, 32 p.
  Research output: Book/Report › Other report
  Open Access