The AI Maturity Framework for Boards
The Conversation Has Moved On
Five Dimensions. Three Levels. A Practical Diagnostic.
Every time a new technology enters the boardroom, directors ask the same questions. Is it secure? Can we trust it with sensitive material? Will it change how we work in ways we cannot control? These questions were asked about email. They were asked about board portals. They are now being asked about AI. But there is a difference this time: the organisation is not waiting for the answer.
The pattern is worth naming, because it clarifies where we are. When email arrived in professional life, senior people questioned whether it was appropriate for anything confidential. They ran parallel systems for years: email for routine matters, paper or phone calls for anything that mattered. The concerns were legitimate. They also faded completely once the practical benefits became undeniable. Nobody remembers the resistance now. Email is infrastructure.
Board portals followed the same curve on a shorter timeline. Early scepticism about security, usability, and whether directors would read papers on a screen gave way to near-universal adoption. Today, a board without a portal is an outlier.
AI is at the beginning of that same curve. Directors are asking whether it can be trusted with board papers, strategy documents, CEO performance assessments, M&A deliberations, succession data. Where does the data go? Who can access it? Is the AI summarising accurately, or introducing distortions that time-poor directors will not catch? These are the right questions. But while boards are deliberating, their organisations are not. AI adoption inside companies is accelerating. The pace at which executives, management teams and operational functions are deploying AI is outrunning the pace at which boards are developing the oversight capability to govern it. That gap is the defining challenge of this moment.
A recent Global Directors Council roundtable brought together chairs and NEDs from listed companies, government bodies and not-for-profits across Australia, the UK, Asia and beyond. What came through clearly is that AI adoption at board level is being driven by individual directors, not by collective board policy. Some directors use AI extensively in their preparation. Others do not use it at all. Meanwhile, the management teams they oversee are embedding AI into operations, workflows and decision-making. In many boardrooms, neither conversation is happening. That opacity is itself a governance risk, and it is compounding every month that passes without a framework for the conversation.
This is where a framework becomes useful. Boards need a way to assess where they actually stand on AI readiness, not as a single score, but across the dimensions that matter. Without that clarity, the conversation stays abstract: vague concern on one side, vague enthusiasm on the other, and no mechanism for the board to move collectively. A practical diagnostic gives boards a shared language for the conversation they need to have and a way to direct effort where it will make the most difference.
The data reinforces the urgency. Across approximately 150 skills matrix processes conducted over the past two years, there was a swing of close to nine points as directors moved from general to advanced ratings on technology-related skills, driven largely by AI. Directors are individually upskilling. But not a single skills framework we reviewed included AI as a standalone director competency. The individual appetite is running well ahead of institutional readiness. And institutional readiness at board level is running behind the pace of change inside the organisations boards are meant to govern. This is a three-way divergence, and the framework below is designed to close it.
The AI Maturity Framework
The AI Maturity Framework provides a practical diagnostic for boards and company secretaries to assess where they stand on AI readiness and where to focus next.
It is built around five dimensions and three levels of maturity. Most boards are at different levels on different dimensions. That unevenness is normal and expected. The value of the framework is not in producing a single score but in making the unevenness visible so that effort can be directed where it will have the most effect.
The Five Dimensions
1. AI Governance. How the board oversees AI risk, policy and regulatory compliance across the organisation. This covers whether AI has a formal place in the governance architecture, whether it sits on the risk register with clear ownership, and whether the board receives structured reporting or reacts only when something goes wrong.
2. Active AI Use. Whether the board and the company secretary are actually using AI in their governance work, and if so, whether they are using generic tools that start from zero each time or tools that draw on the board’s own data, history and dynamics. This is the dimension where the gap between intent and practice is widest. Many boards express interest in AI but have not moved beyond informal, individual use.
3. Director Capability. AI literacy, education programmes, and whether AI fluency is treated as a board composition criterion. The data is striking here: directors are self-educating, but boards have not made AI literacy a formal requirement for new appointments or a named dimension in capability assessments.
4. Strategic Integration. Whether AI implications are a standing lens on major decisions. Most boards still treat AI primarily as a risk issue. Fewer are asking AI questions about M&A targets, capital allocation, or competitive positioning. At the leading edge, boards see AI governance as a source of competitive advantage and long-term resilience, not a compliance requirement.
5. The Company Secretary Function. The shift from administrative custodian of board data to the intelligence hub that knows what that data means. This is the dimension with the highest leverage. The company secretary sits at the centre of the board’s information ecosystem. They are often more AI-aware than the directors they serve but lack a mandate to act on it. At maturity, the company secretary becomes the person who connects evaluations, skills data, governance history and board papers into a coherent picture that the board can act on.
The Three Levels
Level 1: Emerging. Aware but unstructured. AI is on the radar but not yet embedded in governance or working practice. Most boards globally sit here, often without knowing it. Discussions are reactive, triggered by media coverage or incidents rather than proactive agenda-setting. There is no formal policy, no systematic use of AI tools, and the company secretary handles AI matters ad hoc alongside everything else.
Level 2: Developing. Structured but uneven. Governance foundations are in place and AI tools are entering board practice. A formal policy exists. AI sits on the risk register. The board receives structured updates. But progress is inconsistent across the board, with some dimensions well advanced and others lagging. The company secretary has a dedicated AI brief and is beginning to connect board data to AI tools, though without a formal framework for doing so.
Level 3: Leading. Embedded and active. AI governance is fully integrated into the board’s constitution and committee terms of reference. Directors use AI as a working tool across the board cycle. AI fluency is a named recruitment criterion and a standing dimension in capability assessments. The company secretary operates as the board’s intelligence hub, using AI tools that draw on the board’s own evaluation history, skills matrix and governance data to surface insights proactively.
Five Dimensions at a Glance
| Dimension | Level 1: Emerging | Level 2: Developing | Level 3: Leading |
| --- | --- | --- | --- |
| AI Governance | Reactive discussion, no policy, no AI inventory | Policy in place, AI on risk register, structured quarterly reporting | Embedded in board constitution, named NED lead, public disclosure |
| Active AI Use | Board data sits inert and unused; generic tools only | Selective use for summaries and research; beginning to distinguish generic from board-specific AI | AI advisor trained on board's own data; systematic use across full board cycle |
| Director Capability | No literacy programme; AI buried in broader tech skills | At least one structured session; uneven capability; no individual assessment | Ongoing curriculum; AI fluency as recruitment criterion and skills matrix item |
| Strategic Integration | AI considered only as risk; board relies on executive framing | AI considered in major decisions but not systematic; some peer benchmarking | AI as standing lens on all major decisions; board independently assesses competitive position |
| CoSec Function | Ad hoc, no dedicated brief; data managed but not used analytically | Dedicated AI brief and governance calendar; producing tailored briefings | Board's intelligence hub; AI tools surface succession risks, capability gaps, dynamics proactively |
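For company secretaries who want to track the diagnostic systematically rather than on paper, the per-dimension profile can be captured as simple structured data. The sketch below is illustrative only: the dimension and level names follow the framework, but the scoring approach and example ratings are hypothetical, not part of the framework itself. The point it preserves is that the output is a profile, not a single score.

```python
# Illustrative sketch of a board's maturity profile as structured data.
# Dimension and level names follow the framework; the ratings below are
# a hypothetical self-assessment, not real board data.

LEVELS = {1: "Emerging", 2: "Developing", 3: "Leading"}

def profile_summary(ratings: dict[str, int]) -> list[str]:
    """Return one line per dimension, lowest-rated first, so the
    unevenness stays visible instead of collapsing into one score."""
    ordered = sorted(ratings.items(), key=lambda item: item[1])
    return [f"{dim}: Level {lvl} ({LEVELS[lvl]})" for dim, lvl in ordered]

# Hypothetical example: stronger on governance than on active use,
# the pattern the framework most often reveals.
example = {
    "AI Governance": 2,
    "Active AI Use": 1,
    "Director Capability": 1,
    "Strategic Integration": 1,
    "CoSec Function": 2,
}

for line in profile_summary(example):
    print(line)
```

Sorting by level puts the weakest dimensions first, which is where the framework says effort should be directed.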
What the Framework Reveals
When boards use this framework honestly, a consistent pattern appears. Most boards are stronger on governance than on active AI use. They have, or believe they have, the oversight architecture in place. Policies exist. Risk registers have been updated. Someone is tracking regulatory developments. But the picture inside the organisation looks different. AI is being adopted by management teams, operational functions and frontline teams at a pace that the board’s governance architecture was not designed to track. The framework does not just reveal where a board sits on five dimensions. It reveals the distance between what the board can see and what is actually happening.
The active use of AI in the board’s own work is almost always behind. Directors are individually curious and increasingly willing, but the board as an institution has not made the shift. The company secretary, who typically sees the opportunity most clearly, lacks the mandate or the tools to act on it.
The framework also reveals a gap that most boards have not yet confronted: the difference between generic AI and board-specific AI.
A generic AI tool starts from zero every time. It has no knowledge of the board’s history, dynamics, or priorities. It processes whatever it is given without context. It can summarise a board pack, but it cannot tell you how the board’s own assessment of CEO performance has evolved over three years. It cannot connect a skills gap flagged in last year’s evaluation to a succession risk that is now six months from becoming urgent. It cannot surface the question that the board’s own data suggests is not receiving sufficient attention.
A board-specific AI advisor does all of this. It draws on the board’s own evaluation history, skills matrix, director profiles and governance papers. Every insight it surfaces is anchored in the specific history, dynamics and priorities of that board. It knows the room.
This distinction matters enormously for the trust question. When directors ask where their data goes, a board-specific AI advisor can give a clear answer: it stays within the board’s own environment. When they ask how the AI knows what it is talking about, the answer is that it has been built on the board’s own history.
Once directors experience AI that knows their board, the trust question resolves itself. Not because the concern about confidentiality goes away, but because the value proposition becomes concrete and the safeguards are visible. Directors stop asking whether they should use AI and start asking what else it can show them.
Two Challenges, Not One
There is a persistent conflation in board conversations about AI. Directors talk about AI as a single challenge, when it is two distinct ones that require different responses.
Governing AI in the organisation means overseeing how the company develops, deploys and manages AI systems. It is about risk, ethics, regulatory compliance, competitive positioning and strategic opportunity. It demands that the board can ask the right questions about the organisation’s AI exposure, even if directors themselves are not technical practitioners. This is the oversight challenge.
Governing with AI means using AI tools within the boardroom itself. It is about preparation, analysis, institutional memory, and decision quality. It demands that the board agrees on what tools are appropriate, what data they can process, and what safeguards are non-negotiable. This is the practice challenge.
Both challenges appear across the five dimensions, but they show up differently. A board can be sophisticated at governing AI in the organisation while having no collective agreement on whether directors are using ChatGPT to prepare for meetings. Conversely, a board might have adopted AI tools for its own use while lacking any structured oversight of the organisation’s AI deployments.
The framework is designed to surface both. Dimensions 1, 3 and 4 lean toward the oversight challenge. Dimensions 2 and 5 lean toward the practice challenge. But none sits cleanly in only one category, which is why the framework treats them as interconnected rather than separate.
The Company Secretary as the Highest-Leverage Point
If there is a single role that determines the pace at which a board moves through this framework, it is the company secretary.
This is not because the company secretary has the most authority. It is because they have the most information. They sit at the centre of the board’s data ecosystem. They manage evaluations, skills matrices, governance calendars, board papers, minutes, action trackers, succession data, committee reporting. They know where the gaps are. They often see risks that have not yet reached the formal agenda.
At Level 1, the company secretary handles AI matters ad hoc, alongside everything else. Board data is managed administratively but not used analytically. The company secretary may be more AI-aware than the board itself, but without a mandate to act on it, that awareness stays dormant.
At Level 2, the company secretary has a dedicated AI brief and a governance calendar aligned to the board’s annual workplan. They produce regular AI risk briefings tailored for non-technical audiences. They track regulatory developments and provide advance compliance briefings. They are actively working to connect board data to AI tools, though without a formal framework.
At Level 3, the company secretary is the custodian of the board’s data ecosystem. They use AI tools to surface succession risks, capability gaps and dynamics proactively. They give every board member access to the institutional knowledge of the room, available on demand. They lead horizon scanning and commission external AI assurance reviews.
The shift from Level 1 to Level 3 is not primarily about technology. It is about the role. When the company secretary moves from administrative custodian to intelligence hub, the board gains something it has never had before: a continuous, data-grounded view of its own effectiveness. Not a snapshot taken once a year in an evaluation. A living picture, updated as the board’s own data changes.
Where Most Boards Actually Sit
If one were forced to place most boards on this framework today, the honest answer for the majority would be Level 1 across most dimensions, with pockets of Level 2 on governance.
That is not a criticism. It reflects the speed at which AI has moved and the legitimate caution that boards bring to any technology that touches their most sensitive material. But it does mean that the gap between board oversight capability and organisational AI adoption is widening, not narrowing. Every quarter that passes without deliberate board action is a quarter in which the organisation moves further ahead without meaningful governance in place.
The boards that are pulling ahead are not the ones with the most technical directors. They are the ones that have done three things.
First, they have activated their existing data. Most boards already generate the data that an AI advisor needs: evaluations, skills assessments, governance reviews, committee reports, CEO performance data. That data typically sits inert. The boards that are advancing have connected it to tools that can make it usable.
Second, they have run the diagnostic. They have used a framework like this one to score themselves honestly across all five dimensions. Most find they are stronger on governance than on active use, and weaker on both than they expected.
Third, they have empowered the company secretary to lead. They have given the role a mandate, the tools, and the authority to raise maturity across all five dimensions. This is the highest-leverage intervention because it creates a single point of accountability for progress.
The Real Risk
The risk for the sector is not that directors will reject AI. Most are curious and increasingly willing. The risk is that the conversation stays at the level of tools, and the harder question, how boards close the gap between their oversight capability and the pace of AI adoption inside their organisations, goes unaddressed.
Board portals have evolved considerably. Many now hold years of board papers, committee minutes and, in some cases, outputs from governance processes. The question is not whether that data exists. It is whether it is being connected, synthesised and surfaced in ways that support board judgment. Storing data and making it work are different problems.
Think about everything a board is responsible for beyond reading the papers. When a board runs an evaluation, that data should compound over time, informing how the board sees its own development and composition. When a skills matrix is completed, it should connect to succession planning and recruitment criteria. When a CEO performance review is conducted, the board should be able to see how its own assessment of leadership has evolved across years. Most of this data already exists somewhere in the board’s ecosystem. The problem is not its absence. It is that it is rarely connected, rarely interrogated and rarely used to inform what happens next. That is exactly the gap AI, properly deployed, is built to close.
The boards that understand this are already working differently. They are asking different questions. And they are finding that once AI knows the room, the trust question takes care of itself.