
Artificial intelligence is no longer a theoretical risk or a future opportunity. It is already influencing decision‑making, operations and oversight across organisations — and boards are feeling the pressure to respond.
At our AI Governance Breakfast Briefing, held in London in partnership with Board Agenda, senior governance leaders came together to explore a critical question:
How do boards move from high‑level AI principles to governance frameworks they can actually oversee?
Here’s a summary of the key discussions and practical takeaways from the morning — structured around the event’s two core sessions.
The opening discussion focused on the growing governance gap many boards are facing: AI adoption is accelerating, but oversight structures are often unclear, fragmented or informal. That gap is already visible in boardrooms: while 66% of directors report using AI for board work, only 22% say they have AI governance processes in place to guide that use.
A consistent theme was that AI governance cannot sit solely with technology teams. Boards are increasingly expected to understand not just what AI systems do, but how decisions are made, what data is used and where accountability sits.
Ethical oversight was framed not as an abstract values exercise, but as a practical safeguard — protecting organisational trust, reputation and resilience in an environment of heightened scrutiny.
Panellists discussed how unclear accountability creates risk. When AI‑related decisions span functions — legal, compliance, IT, risk, operations — boards need clarity on who owns those decisions and who answers for their outcomes.
Without defined accountability frameworks, organisations struggle to demonstrate control to regulators, stakeholders and their own boards. Compounding the challenge, only 8% of directors say their board has strong AI expertise — increasing the importance of clear accountability, assurance and reporting structures.
The discussion also explored the EU AI Act and emerging global standards, with a clear message: regulation is setting the minimum bar, not the ceiling.
Boards were encouraged to view regulation as a catalyst for better governance — prompting earlier investment in transparency, documentation and controls, rather than reactive compliance once enforcement begins.
Importantly, the conversation emphasised that boards don’t need to master technical detail. What they do need is confidence that appropriate guardrails, reporting and assurance mechanisms are in place.
The second session shifted from why AI governance matters to how organisations can embed it into existing governance structures.
That shift is timely: 44% of directors say AI and digital risks and opportunities are now among the most pressing board agenda topics, yet 40% say technological developments, including AI, are among the hardest issues to oversee.
Many organisations already have AI principles or ethical statements. The challenge is operationalising them.
The panel discussed the importance of translating those principles into day‑to‑day governance practice.
Rather than creating entirely new governance models, speakers emphasised aligning AI oversight with existing corporate governance codes, risk frameworks and committee structures.
Company secretaries and governance teams were highlighted as playing a critical role in making AI governance work in practice.
From coordinating oversight across committees to embedding AI considerations into board agendas, risk reporting and assurance processes, governance professionals are often the connective tissue between strategy, regulation and execution.
This role is becoming increasingly important as boards look for structured, defensible approaches to AI oversight — without adding unnecessary complexity.
The session concluded with a forward‑looking discussion on resilience.
Rather than governing individual AI tools in isolation, boards were encouraged to focus on:
In an environment where AI capabilities will continue to change rapidly, good governance was positioned as an enabler of innovation — not a blocker.
Several themes stood out across both sessions.
For boards, company secretaries and governance professionals, the message was clear: the question is no longer whether to govern AI, but how to do so in a way that is practical, proportionate and defensible.
If you’re exploring how to operationalise AI governance within your organisation, DiligentAI helps boards and governance teams apply oversight, accountability and assurance in practice — without adding complexity.