Rishi Sunak borrows from the strategic communicator’s playbook to cement the UK’s position on AI

31 October 2023

Last week, the Prime Minister announced a range of measures focused on AI safety, including the launch of a UK-based AI Safety Institute and a £100 million investment in a taskforce to develop the UK government's AI capability.

This is smart positioning. Appraise's research on MPs' attitudes towards AI, published in June, found that only 23% of MPs felt they understood the implications of AI for citizens and society, and that 60% felt safety should be prioritised over innovation and growth.

While the US and China are leading the way in AI development, the UK knows it can't compete pound for pound.

By focusing on safety, Rishi Sunak believes he has found a meaningful part of the debate that the UK can make its own. To drive it home, he took the unprecedented step of publishing the government's assessment of the risks associated with AI, including declassified material from UK intelligence.

In strategic communications, this is called identifying the ‘white space’ – a space that you can major on, become known for, and be called upon to speak to. This is what thought leadership programmes are built on.

On paper, this seems a shrewd approach. The UK is bidding to attract AI companies from the two superpowers to establish their European headquarters on its shores. By becoming a beacon for responsible AI, the UK is looking to leverage its strategic position as a base off the coast of mainland Europe.

The EU remains the largest economic bloc in the world, and AI companies will want to tap into that. But the EU has also been quickest off the mark to regulate against the risks of AI.

By promising a low-regulation environment while shining a spotlight on the risks associated with the technology, Mr Sunak is helping himself to a double slice of the AI cake.

This is appealing for those who – like us – are calling for a calm, measured and evidence-based approach to AI policy. Pro-investment. Pro-innovation. Pro-safety.

The risk with this strategy, however, is twofold.

First, Mr Sunak needs to be careful not to be ‘too successful’ in hyping the risks of AI. While these are undoubtedly real, turning public opinion against the technology would be self-defeating. As my colleague James Boyd-Wallis argued in PR News, focusing on the existential threat to humanity is unhelpful; it is better to talk about the immediate challenges (of which there are plenty).

Second, while a measured approach focused on evidence-gathering is sensible, the technology is already out there – and we can't afford to spend 18 months building a knowledge base before taking action.

With a UK General Election and a US presidential election expected next year, disinformation and data misuse will put a strain on our information environments like never before. And we know from the experience of social media that once the genie is out of the bottle, there is no going back.

Luckily, perhaps, the looming General Election in the UK means Mr Sunak will be in no mood to waste any time himself.

Aidan Muller is co-founder of the Appraise Network.