EUROSAI PRESIDENCY FOREWORD
Artificial Intelligence is no longer a distant prospect — it is rapidly becoming part of the everyday machinery of European governments. Public authorities across Europe are moving from pilot initiatives to full-scale implementation, embedding AI in public services, internal workflows, and decision-support systems. This momentum can deliver better outcomes for citizens and more effective public administration — but it also brings significant responsibilities concerning legality, security, privacy, transparency, and public trust.
At this pivotal moment, the EUROSAI Presidency is proud to present this Parallel Audit on Artificial Intelligence — a clear demonstration that Supreme Audit Institutions (SAIs) choose cooperation over fragmentation. Coordinated by SAI Israel, this initiative brought together twelve participating SAIs: Albania, Estonia, France, Israel, Italy, Latvia, Lithuania, North Macedonia, Poland, Romania, Slovakia, and Switzerland. Together, they examined governmental preparedness for a technology that permeates every layer of the state, from national strategic planning to concrete cross-sectoral projects. This initiative reflects EUROSAI Strategic Goal 1 in action: supporting effective, innovative, and relevant audits by promoting and strengthening professional cooperation.
As AI systems expand across audited entities, auditors will increasingly encounter these technologies “from within” — in procurement processes, data governance frameworks, cybersecurity controls, human resources management, and frontline service delivery. A key added value of this Parallel Audit lies in the shared understanding it has fostered: SAIs must be equipped to audit AI with confidence, possessing the skills, methodologies, and tools necessary to assess how such systems are designed and deployed, how risks are identified and mitigated, and how public value is created and safeguarded.
The technological landscape ahead will be faster, more automated, and increasingly language-driven. European governments will require clear strategies, robust governance frameworks, and skilled professionals to keep pace with rapid technological change. By learning together, developing common approaches, and strengthening our collective capabilities, SAIs can help guide AI adoption toward transparency, resilience, accountability, and tangible results for citizens.
On behalf of the EUROSAI Presidency, I extend my sincere appreciation to the Members of EUROSAI, colleagues, and experts who contributed to this important project. Your dedication and expertise have been instrumental in shaping a shared vision for AI adoption that is transparent, accountable, and beneficial to society. Your collective effort has further strengthened cooperation among EUROSAI members and will continue to support responsible AI governance across Europe.
Matanyahu Englman
EUROSAI President
State Comptroller and
Ombudsman of Israel
March 2026
EXECUTIVE SUMMARY
AI is no longer a “future policy” topic; it is already changing how governments work, how services are delivered, and how public trust is earned or lost. This Parallel Audit shows that many countries are moving quickly from ambition to action, but that readiness is still uneven: progress accelerates where strategy, funding, governance, data, skills, and controls move together, and it stalls where they advance separately.
Led by the Office of the State Comptroller and Ombudsman of Israel (SAI Israel) under EUROSAI Strategic Goal 1, this multinational Parallel Audit brought together 12 SAIs (Albania, Estonia, France, Israel, Italy, Latvia, Lithuania, North Macedonia, Poland, Romania, Slovakia, and Switzerland). Conducted between May 2024 and December 2025, the audit used a shared analytical framework of 9 topics and more than 92 structured questions to compare national preparedness across strategic, infrastructural, and implementation dimensions.
National Strategic Plan. The audit found that countries are pursuing different paths: some have government-approved AI strategies, while others rely on broader digital strategies, standalone initiatives, or draft strategies without formal adoption. Where governance ownership and cross-ministry coordination are clear, strategies translate more effectively into action and public-facing trust measures, including strong emphasis on public awareness. The chapter concludes that strategic direction matters most when it is paired with implementation ownership and measurable goals. It recommends periodic strategy reviews to ensure the chosen model still supports an ecosystem approach, real coordination, and sustained delivery.
National AI Budgets. Funding is a decisive test of seriousness, and the audit found that many countries still struggle with visibility. Fewer than half reported a clearly defined AI budget, while others embed AI spending in broader digital or sectoral envelopes, making it harder to track whether resources match strategic priorities. The chapter concludes that fragmented budgeting weakens oversight and slows coherent scaling. It recommends improving transparency by distinguishing direct AI project funding from enabling investments (especially infrastructure), consolidating visibility across ministries, and coordinating external funding streams through clear ownership and multi-year planning.
Regulatory Guidelines. Regulatory readiness is developing, but not consistently. Roughly half of countries reported published AI guidelines, even though all reported a dedicated body responsible for oversight. The EU AI Act is already acting as a powerful catalyst, yet countries anticipate heavy implementation demands and capacity constraints. The chapter concludes that institutional ownership is ahead of operational guidance, and that ethics often remains a principles-level commitment without consistent, testable assurance. It recommends publishing practical guidelines, treating EU AI Act preparation as whole-of-government execution (not only legal transposition), and strengthening enforceable mechanisms such as defined accountability, traceability, and pre-deployment checks where appropriate.
Infrastructure. Many countries are investing in AI infrastructure, especially compute capacity, but implementation is still in progress and cross-country comparability is limited by uneven measurement. National cloud environments are common, yet every country relies on third-party providers, reinforcing that hybrid delivery is the norm. The chapter concludes that infrastructure enables everything else, but it can also become a bottleneck when governance, demand forecasting, and accountability in hybrid environments are unclear. It recommends mapping capacity and forecasting demand across national and contracted resources, and strengthening hybrid governance to manage security, cost, resilience, and supplier concentration risks.
Information Security. The audit found strong awareness of AI security risks, especially data leakage and unauthorized access, but weaker baselines for enforceable practice. Mandatory cybersecurity protocols and AI-specific privacy policies were not consistently reported, and incident experience remains limited, which makes prevention and preparedness even more critical. The chapter concludes that AI security is as much a governance challenge as a technical one. It recommends establishing baseline requirements and role-based training, improving traceability and documentation, and adopting lifecycle security practices that apply consistently across ministries and suppliers.
Digital Maturity. Data foundations remain a decisive constraint on scalable AI. While all countries reported some form of data sharing policy, operational barriers persist, especially regulatory and governance friction, interoperability gaps, and uneven data readiness. External benchmarking reinforces a recurring pattern: policy and platforms can advance faster than proven impact. The chapter concludes that governments often have “rules to share data,” but not always the operational conditions to share it efficiently, safely, and at scale. It recommends strengthening governance clarity, streamlining processes, improving interoperability and data quality practices, and building auditability so lawful reuse can be demonstrated, not just declared.
Government Projects. AI is already producing practical use cases across government, particularly in high-volume operational domains, and many countries report productivity improvements. Yet monitoring and evaluation mechanisms are not consistently embedded, and KPIs are often concentrated on efficiency rather than a balanced view of service quality, model performance, sustainability, reuse, and risk. The chapter concludes that implementation is advancing faster than governments’ ability to prove and compare impact. It recommends establishing consistent portfolio visibility, adopting balanced evaluation frameworks, and standardizing minimum reporting so scaling decisions are evidence-based and risk remains visible.
Human Capital. Talent is the most universal constraint: every country reported a shortage of AI experts, and many also reported a shortage of researchers. Upskilling is underway in many places, but not yet universal, and retention of critical stewardship roles remains a vulnerability that can deepen reliance on external providers. The chapter concludes that AI readiness rises or falls on people, not just technology. It recommends integrated workforce strategies that combine education pipelines, role-based training, enablement structures (centers of expertise and knowledge-sharing), and stronger recruitment and retention for key oversight and delivery roles.
Natural Language Processing (NLP). Most countries are developing local-language NLP capability, but the dominant delivery model is external or hybrid, which raises long-term dependency and lifecycle governance questions. The chapter concludes that language capability is foundational for scalable AI in government services and internal operations, but it must be sustainable and governable over time. It recommends treating NLP as a reusable shared capability, and where vendors are central, enforcing clear responsibilities for maintenance, monitoring, documentation, updates, and risk management.
Overall, the audit’s message is clear: governments are building momentum, and many of the right building blocks are already in motion. The next leap forward is to connect them: turning strategies into governed delivery, turning investment into transparent portfolios, turning principles into enforceable safeguards, and turning pilots into measurable public value at scale.