In conversation with Ben Lyons, Director of Policy and Public Affairs at Darktrace
17 January 2025
Founded in 2013 by experts in AI and cyber defence, Darktrace is a global leader in cybersecurity AI, delivering the essential cybersecurity platform to protect organizations today and for an ever-changing future.
We talk to the firm's Director of Policy and Public Affairs, Ben Lyons, about his role and the AI policy landscape.
Tell us about your role at Darktrace.
I lead our policy and public affairs function. Darktrace is a global AI cybersecurity company operating at the intersection of several important policy debates. My role involves bridging the gap between technologists, policymakers, and cyber analysts across different regulatory jurisdictions. That means translating between technical and policy communities, and sharing lessons from different countries, to help build a more secure future.
What attracted you to cybersecurity and AI?
My career has been split between tech and telecoms in the private sector, and I have worked on AI and data in the UK government's Department for Science, Innovation, and Technology (DSIT). Joining Darktrace was the perfect opportunity to bring those experiences together for a high-growth, innovative company addressing a crucial global challenge. Cybersecurity is critical to ensuring institutions, businesses, and people can trust technology to deliver its potential for economic growth and better public services. AI can play a transformative role in improving cyber resilience, enabling trust, and unlocking opportunities for citizens and consumers. Supporting these goals is what makes my role so exciting.
Darktrace operates globally. How do you navigate competing regulatory regimes?
It's about striking the right balance between tracking regulatory developments and engaging proactively. On the one hand, we've built robust tracking processes to monitor formal developments in our priority markets. On the other, beyond reacting, we engage with governments, academia, and civil society to anticipate and contribute to emerging debates. This dual approach – short-term responsiveness and medium-term foresight – helps us navigate the evolving regulatory landscape.
What are the key AI policy priorities for the UK?
The UK government is taking a largely sector-led approach, although it is also planning to pursue a separate regime for the most powerful AI models. This approach might include strengthening the AI Safety Institute.
Building state capability will be essential. The Labour Government's proposal to introduce the Regulatory Innovation Office may help streamline regulatory processes. Upskilling regulators and fostering AI assurance frameworks are other critical areas. Effective regulation isn’t just what’s written down in legislation – it’s also about ensuring the regulators themselves have the mandate and skills to adapt with technology and partner with industry to drive responsible innovation.
How would you like to see AI policy develop in the UK?
The UK has a massive opportunity to be an AI leader. AI can drive economic growth, improve public services, and tackle long-standing challenges like sluggish productivity.
The UK has many strengths, including brilliant universities, a strong venture funding ecosystem, and accomplished developers. It can be one of the best places to build an innovative AI company today.
On the economic front, AI should be at the centre of the government's upcoming industrial strategy. This isn’t just about creating a bigger AI industry, but also about ensuring that companies in other industries are able to harness AI to become more productive. In the context of industrial strategy, this means tackling the barriers to AI use across the UK, and supporting identified priority sectors to help them drive adoption.
For the public sector, DSIT’s Digital Centre for Government has an opportunity to use tech to make services more effective, accessible and focused on people. This will require strong senior-level support and commitment across departments.
What role does Darktrace play in this landscape?
We’re AI optimists, and we’ll bang the drum for policy that supports innovation and adoption. But we’re only going to realise the potential of technology if it’s secure. Cybersecurity is a prerequisite for reliable, privacy-preserving AI use that is ultimately trustworthy.
And specifically within the cyber domain, AI is a double-edged sword. On the attacker side, generative AI is being weaponised for reconnaissance and to mount sophisticated social engineering attacks. On the defender side, companies like Darktrace are helping organisations fight back with multi-layered AI to enable continuous monitoring for threats, anomaly detection, and autonomous response.
Often we find ourselves acting as a bridge between the AI and cyber policy communities. We can’t think of these domains in isolation!
James Boyd-Wallis is co-founder of the Appraise Network.