AI SAFETY:

Unlocking the policy debate

May 2024

EXECUTIVE SUMMARY

The second AI Safety Summit in South Korea is upon us, and the hope among many experts is that it will start to answer some of the questions raised by the first summit in the UK.

It is still early days – and while a degree of consensus emerged at Bletchley Park in November 2023, the devil will be in the detail. Already, battle lines are being drawn between those who would like to see authorities take firmer action to ensure the safety of citizens and society, and those who warn against inhibiting the potential of a truly transformative technology.

The ambition with this report

Here at Appraise, we believe in the potential for artificial intelligence (AI) to achieve great things, from decoding the genome to unlocking nuclear fusion. However, we are alive to the risks of disruption about to be unleashed on society.

Our ambition with this report is not to offer answers or solutions to AI safety policy. There are others better qualified to do so. Rather, we hope to encourage a positive debate. The complexities at play are vast – from the diversity of interests to account for, to the technology itself, to the questions it raises.

As a society, we need a constructive policy debate. As policy communications specialists ourselves, with decades of experience in public affairs and digital advocacy, we have a sense of what it takes to generate support for a policy, to build consensus, to enable compromise. And we know what derails conversations.

The ingredients for a constructive debate

This is a call to think not just about the content of the conversation, but also about creating the right context for a constructive debate.

First, you need trust and goodwill. While adoption of the technology has soared, trust in AI companies has plummeted in most developed markets, according to the latest Edelman Trust Barometer. This was particularly the case in regions of AI innovation, where the public was more likely to believe innovation was being mismanaged. In the UK, 54% of the general population rejected the growing use of AI (compared to 16% who embraced it), and 66% doubted those in government had the understanding to regulate AI effectively, even after the AI Safety Summit at Bletchley Park.

Second, all stakeholders need to feel heard. One major criticism of the UK summit was the narrow range of participants. Civil society in particular was poorly represented (though private-sector companies outside Big Tech and the AI industry also felt ignored). The challenge, of course, was in part logistical. How do you gather a representative group of civil society organisations from such a fragmented ecosystem? How do you then account for the differences between countries? Or the imbalances between Global North and Global South? COP negotiations and the Internet Governance Forum (IGF) show it can be done. But they also give us a sense of the scale of the task at hand. International governance has to be a key objective for Seoul.

Third, you need a common frame of reference. Our research shows how much of a challenge this is going to be. All of the experts we interviewed – drawn from Parliament, the private sector, academia, civil society and the media – agreed that some kind of action was required. But then the conversation diverged. Those inclined to talk about AI safety in terms of policy may find the premise of the conversation rejected and their plans derailed, as they discover they are having a different conversation from their interlocutors: while one conversation is taking place at a technical level, the other is taking place at a philosophical level.

If we are going to navigate our way through the hugely complex question of AI safety, we need to find some common ground. You cannot offer a policy answer to a philosophical question. We need to be having the same conversation. We need a common frame of reference. 

We need to have the right conversations

Our research shows that AI safety is tapping into broader debates that have been raging for decades, from industrial relations to inequality to climate change. The explosion of AI will fuel these debates further over the coming decade. It will pick at barely healed wounds, and – if left unchecked – will drive the wedge deeper between groups that are already divided.

It will ask hard questions of the relationships between many groups. Employers and employees. Old and young. Digitally literate and digitally illiterate. Rich and poor. Educated and uneducated. Companies and society. Global North and Global South.

Short term, we can – and should – talk about policy and regulation. As our research shows, there is plenty to discuss on that front. But long term, this needs to be a society-wide conversation about values. What kind of relationship do we want between companies and society? What does a competitive market really look like?

Yes, we need to hear from AI engineers. From entrepreneurs. From data scientists. From policy-makers. From academics. From philosophers and ethicists. But we also need to hear from those impacted by the technology. From teachers and schoolchildren. From doctors and nurses. From ethnic minorities. From those across the Global North and Global South. 

The roads ahead

We need to agree a common frame of reference. Failure to do so will result in participants talking at cross-purposes. And when that happens, the conversation breaks down, the status quo prevails, and inertia sets in – along with a bagful of resentment.

AI is a transformational technology, and inertia won’t do. For all its imperfections, the safety debate is inviting us to redefine the terms of the conversation, to reflect on our values as a society, and to tackle some of the fundamental questions we have failed to address.

This is not to say we shouldn’t talk policy and regulation – the realities on the ground mean that, to some extent, we have to build the plane while it flies. However, the safety summits give us an opportunity to have the conversations we need, and to set us up for decades of development that is equitably distributed and sustainable.