Artificial Intelligence and journalism: Critical considerations for public service media

Research output: Contribution to conference › Abstract › peer-review

Abstract

Public Service Media (PSM) - predominantly those that are well-resourced and in developed economies - are increasingly deploying artificial intelligence (AI) in news production (EBU 2019, Jones, Jones & Luger 2022), as are counterparts across the news industry (Beckett 2019, Diakopoulos 2019, Marconi 2020). However, PSM organisations' approach to technology innovation has to reflect their distinct role, remit, and values (EBU 2020). This raises complex questions about what it means to responsibly procure, develop, deploy, and report on AI, and how core PSM commitments to principles such as independence, impartiality, and accuracy can be maintained in an era of AI. Our study draws on a series of workshops conducted as part of three years' collaborative research with Britain's largest public service broadcaster, the BBC. The overarching research question we address here is: what do PSM need to consider when integrating AI into news production? We broke this down into three sub-questions. RQ1: What are the core PSM values and how do they shape journalism? RQ2: How is AI being applied in news operations and what risks and opportunities does this raise? RQ3: Given the answers to RQ1 and RQ2, what value tensions and associated challenges arise that PSM should address?

We conducted a) desktop research and a literature review to identify key themes pertaining to the application of AI in journalism, b) six two-hour workshops bringing together selected industry representatives from the BBC and the European Broadcasting Union with academic experts in diverse fields (e.g. law, ethics, business, AI, and computer science), and c) six further workshops with BBC journalists and decision-makers, focusing on the specific topics of generative AI and synthetic media. Drawing together frameworks for responsible AI (Floridi 2021) and public service values (EBU 2012, Donders 2021) enabled us to explore ways of thinking about the benefits and risks of AI that extend beyond, and provide an alternative to, dominant commercial market-based approaches that prioritise profit generation.

Our analysis finds unresolved tensions between long-held values, including independence, universality, diversity, impartiality, and accuracy, and emerging logics of datafication, personalisation, choice, scale, and efficiency. We identify critical questions regarding AI that are not yet adequately addressed by PSM, concerning sustainability (e.g. the climate impacts of energy-hungry AI models), (in)equality (e.g. mitigation of bias and harms, the digital divide, and data literacy), labour relations and working conditions (e.g. exploitation in supply chains, hidden or 'ghost' labour, and human displacement via automation), and regulation and governance (e.g. oversight of mutable and inscrutable AI systems). We then use the example of generative AI (e.g. large language models such as GPT-3) to illustrate some of these issues and the conflicting priorities at play, before suggesting potential mitigations and responses. We contend that these core considerations are important for any public interest news organisation looking to introduce, or already incorporating, AI-based systems.

Conference

Conference: International Association for Media & Communications Research (IAMCR) Annual Conference
Country/Territory: France
City: Lyon
Period: 9/07/23 – 13/07/23

Keywords

  • Public service media (PSM)
  • Artificial intelligence
  • Journalism
