AI policy leaders series: Julia Mykhailiuk
03 September 2025
Julia focuses on the intersection of EU and international AI governance, institutional engagement and multilateral policy development. She recently participated in the EU General Purpose Code of Practice discussions.
We spoke to Julia about her role, the Code of Practice, the EU AI Act more widely, and the role civil society can play in helping shape AI policy.
What's your role, and why is AI policy relevant to you?
In my most recent role, I co-led a Brussels-based think tank, driving work to operationalise high-level EU regulatory frameworks such as the AI Act by shaping compliance obligations for providers of GPAI/GPAISR models and high-risk AI systems.
A significant part of my work involved direct engagement with EU institutions, including the Commission, the Parliament and the European AI Office. I provided policy analysis and contributed to shaping implementation tools such as the Code of Practice and the general-purpose AI guidelines. I also support the ongoing development of international AI governance and policy frameworks in the US, UK and Indo-Pacific, contributing to the work of the OECD AI expert group and holding research and policy fellowships at CAIDP, fp21 and GMF.
What led me to AI policy was the realisation that technology was rapidly outpacing the institutional capacity to govern it or set any guardrails around it. With AI, we need to build regulatory frameworks for something highly technical and fast-evolving, while also ensuring that democratic values and public interests such as transparency, accountability and public oversight do not get sidelined.
You were involved in helping draft the EU General Purpose Code of Practice. What's your view on the final wording?
The final version, as it stands, is a solid foundation, especially considering the diversity of stakeholders involved in the process. It was a challenging and lengthy process, but it strikes a reasonable balance between regulatory ambition and technical feasibility. It outlines clear, testable expectations around key areas, including transparency, safety, evaluation criteria, and governance processes for AI model providers.
That said, it leaves open some questions around enforcement and alignment with national implementation mechanisms. Going forward, the key will be to ensure the Code is more than a symbolic document: it will need to become a standard that can evolve with industry practices and institutional learning.
What's your view on the EU AI Act? What works? What could be improved?
The European AI Act is a landmark regulation. It's one of the first comprehensive AI regulations globally, and it introduces a scalable risk-based approach to regulating AI. It is built around a risk classification system that emphasises the importance of robust safety requirements and accountability mechanisms, which are particularly valuable in establishing baseline expectations across diverse industry sectors integrating AI systems.
It could evolve further in its operational capacity for both general-purpose AI models and high-risk systems, especially by clarifying compliance responsibilities across the value chain. Policymakers also need to pay close attention to how the Act interacts with adjacent digital policies, as well as the GDPR and sector-specific regulations, to avoid fragmented implementation and conflicting obligations.
What role do you see for civil society in shaping AI policy?
Civil society's primary role is holding the policy-making process accountable to public interest outcomes. One of the risks in tech regulation is that the loudest voices are often also the best-resourced. Civil society acts as a counterweight, bringing attention to the public interest, whether by surfacing specific incidents, making the case for stronger safety guardrails, or translating policy impacts so the public understands what they mean in practice. Civil society organisations help keep the regulatory conversation grounded. The challenge is ensuring meaningful participation beyond the consultation stage, which requires engaging directly with policymakers so that different voices can be heard. Making sure the right information reaches the right policymaker at the right time is both the key challenge and the key objective.
AI providers are understandably involved in the legislative and regulatory process. They often argue that their input is necessary given the complexity and highly technical nature of AI development. However, they are not the only ones who can understand these models, test them, or determine whether they are transparent and safe, or whether they introduce biases and risks. If we rely solely on internally generated test results and safety information provided by these companies, we are essentially asking them to check their own homework. That is something we do not generally allow companies to do in other safety-critical regulated industries such as pharmaceuticals or engineering.
What's next for AI policy?
Within the EU/EEA, it will be the implementation of the AI Act for GPAI/GPAISR models and high-risk AI systems: their oversight, regulatory enforcement and wider guidance for public and private sector adoption. At the same time, AI governance does not stop at the borders of the EU member states. As we are seeing, AI development and integration is not just a regulatory challenge but a geopolitical one. The tension between how the US and China approach AI governance is already shaping global standards through measures such as export controls and industrial policies. Europe is in a unique position here. It does not compete on model development or raw compute power, but it has normative power to influence global AI governance. Used wisely, that power can be harnessed not just as a political extension of domestic regulation, but also to ensure that democratic resilience remains a priority in an increasingly competitive, multipolar AI landscape.
James Boyd-Wallis is co-founder of the Appraise Network.