How AI narratives shape policy and public opinion
25 June 2024
An interview with Dr Kerry McInerney, Research Fellow at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge, by James Boyd-Wallis.
As anyone in communications will attest, narratives matter. So, how we talk about artificial intelligence (AI) and its risks and benefits can significantly influence its development, regulation and place in public opinion. And from Homer to HAL, we have been sharing stories about AI since long before we had computers.
Given the increasing exposure and adoption of the technology, and as policymakers consider legislation, I spoke to Dr Kerry McInerney to explore some of these narratives, metaphors, and analogies and their impact on policy.
Photo credit: Jason Sheldon / Junction 10 photography
“Currently, some people talk about artificial intelligence as a revolutionary technology, representing a complete break from the past. We see that in many national AI strategies. However, many are pushing back on this narrative and are instead drawing comparisons to previous technologies,” explains Dr McInerney, who co-leads the Global Politics of AI project at the CFI investigating how artificial intelligence is reshaping international relations.
“Metaphors can be a good strategy for drawing people towards legislation and governance mechanisms. For instance, one of the analogies we hear among policymakers and the media is whether AI is the new nuclear. But we need to probe the limits of this comparison,” says Dr McInerney.
“We could argue that the governance of nuclear weapons has been a success story. We have reached global agreements and alliances preventing their proliferation.”
Such successes may suggest a parallel path for managing the societal impact of AI. “However, while this analogy may be useful for policymakers seeking to create international governance regimes for AI, this narrative of success often overshadows the terrible impact of nuclear testing on people and the environment worldwide,” argues Dr McInerney. So, the situation is more nuanced than a simplistic comparison.
Other historical analogies are also gaining traction. “We see the rhetoric of an AI arms race being put forward by the defence establishment, policy officials and Big Tech,” says Dr McInerney, also an AHRC/BBC New Generation Thinker.
This narrative is already shaping policy: the U.S. government, for instance, has banned the sale of semiconductor chips used for artificial intelligence to China.
However, global governance is crucial to managing the societal impact of artificial intelligence, and this ‘arms race’ analogy could entrench international competition over collaboration, so we should challenge its assumptions. Are tech companies amplifying this argument to stifle global competition, push for lighter domestic regulation, and develop and release new products at speed and scale?
Alongside the global politics of AI, Dr McInerney also explores the intersection between race, gender and artificial intelligence.
She recently co-edited the book The Good Robot: Why Technology Needs Feminism with Dr Eleanor Drage, which looks at how feminism can help us work towards ‘good’ technology.
Through the voices of leading feminist thinkers, activists and technologists, the book demonstrates the efforts of academics and researchers such as Dr McInerney to cultivate a critical and informed public conversation about technological developments.
Given the importance of charting the right path for AI, we should heed what Dr McInerney and her colleagues say about how narratives shape policy, public opinion and the technology’s development.
James Boyd-Wallis is co-founder of the Appraise Network.