TY - JOUR
T1 - 'Frontier AI,' power, and the public interest
T2 - Who benefits, who decides?
AU - Leslie, David
AU - Ashurst, Carolyn
AU - Menéndez González, Natalia
AU - Griffiths, Frances
AU - Jayadeva, Smera
AU - Jorgensen, Mackenzie
AU - Katell, Michael
AU - Krishna, Shyam
AU - Kwiatkowski, Doschmund
AU - Iglésias Martins, Carolina
AU - Mahomed, Sabeehah
AU - Mougan, Carlos
AU - Pandit, Shachi
AU - Richey, Mark
AU - Sakshaug, Joseph W.
AU - Vallor, Shannon
AU - Vilain, Luke
PY - 2024
Y1 - 2024
AB - As the rapid industrialization of generative AI (GenAI) reached a crescendo in the fall of 2023, a series of international AI policy initiatives, such as the UK AI Safety Summit and the G7’s Hiroshima Process, emerged in response to the corresponding global AI governance challenges. The policymakers and government officials who drove these initiatives emphasized that the rise of ‘frontier AI’ technologies was bringing humankind to a historical inflection point, placing humanity at a crossroads and situating the present generation at an axis of choice that would determine whether the evolution of AI innovation moves toward the exponential advancement of the public good or toward possibilities for potentially irreparable harm to people, society, and the planet. And yet, despite the inflationary rhetoric of historical transformation around which these policy initiatives were framed, their actual results (high-level voluntary commitments, non-binding codes of conduct, the formation of light-touch national AI safety institutes, etc.) seem vastly out of sync with the scope and scale of the problem to which such initiatives claimed to respond. In this paper, we argue that, if anything, this framing of the GenAI moment as a historical pivot point raises fundamental questions about the thorny relationship of ‘frontier AI,’ power, and the public interest, namely: Who actually has their hands on the wheel? Who defines the agenda of ‘frontier AI’ innovation? Who controls the means of producing it, and thus ultimately its influence on humanity’s broader fate? These questions cut much deeper than those surrounding the risks arising from unforeseen advances in ‘frontier AI’ capability, or those around pre-deployment safety testing, which took up much of the oxygen at the UK AI Safety Summit and related international AI policy discussions. They have to do with who possesses agenda-setting power, who decides when, where, and how these technologies are developed and deployed, and who stands to benefit from or be harmed by them. They also raise further questions about how affected members of society can harness this control over the direction of AI to serve the common good and thereby exercise agency over the trajectories of their own collective futures. We claim that, ultimately, an effective response to these questions demands a fundamental rethinking of the broader political economy of AI and of the global innovation ecosystem that drives its forward progress: a rethinking that recasts this technology as a global public utility subject to democratic control, community-led agenda-setting, and society-centered regulation.
KW - generative AI
KW - frontier models
KW - AI ethics and governance
KW - public interest
KW - power
KW - public utility
DO - 10.1162/99608f92.4a42495c
M3 - Article
SN - 2688-8513
JO - Harvard Data Science Review
JF - Harvard Data Science Review
IS - Special Issue 5
ER -