AI has leapt from the lab to the boardroom, but most governance conversations remain stuck in abstraction. Too often, boards discuss AI as a distant trend, a technology horizon they’ll one day need to prepare for, rather than a present-tense governance imperative already reshaping risk, opportunity, and accountability.
This is not just a missed opportunity; it is a governance failure.
This article cuts through the hype to address what AI literacy really entails for Non-Executive Directors (NEDs). It’s about board readiness: the ability to ask the right questions, set clear expectations, define boundaries, and ultimately understand where responsibility lies when algorithms take the wheel.
For NEDs, this is not a future-facing skillset; it is a current and critical boardroom responsibility.
AI Governance – What’s Actually at Stake
When implemented responsibly, AI has the power to reshape industries. It improves efficiency, sharpens operational insight, and unlocks new business capabilities. From advanced analytics in financial services to predictive modelling in logistics and personalised diagnostics in healthcare, AI is not some abstract idea – it’s already here, actively redefining competitive advantage.
In the right hands, AI enables faster, smarter, and more personalised decision-making. It can help an organisation pivot swiftly in times of crisis, optimise underused resources, and anticipate future demand with startling accuracy. AI can even augment human judgment – flagging anomalies, spotting patterns, and surfacing unseen risks.
But when implemented poorly, AI systems can have the opposite effect. They can erode trust, entrench bias, and introduce opaque processes that no one fully understands or can explain. AI gone wrong doesn’t just create operational issues; it can provoke legal scrutiny, spark public backlash, and trigger a reputational tailspin from which it’s hard to recover.
The potential for systemic harm is real – and rising. We’ve already seen examples: recruitment algorithms that discriminate against women or minority groups, financial tools that penalise certain postcodes, and content moderation systems that reinforce harmful narratives. Each case reveals the same blind spot – the absence of effective governance and oversight at the highest levels.
Boards that treat AI as merely an “IT issue” are missing the point. AI is not just a technical shift; it is a strategic, ethical, and governance shift. And it demands a proportionate response.
The Three Questions Every NED Must Be Able to Ask
Whether or not you sit on a technology-heavy board, these are the foundational questions that every NED should be asking today:
1. What decisions are currently being influenced or made by AI?
Too many boards can’t get a clear answer to this. That alone is a red flag. AI systems are now routinely used across business functions – recruitment, dynamic pricing, fraud detection, risk assessment, customer service, logistics planning, content delivery, and more. If the board doesn’t know where algorithms are active in the organisation, it certainly can’t govern their impact.
Boards must demand visibility into where and how AI is being applied. This includes third-party tools as well as proprietary systems. It’s not enough to assume that AI use is limited to the data science or marketing teams. The reality is that if your organisation handles large volumes of data or makes frequent decisions at scale, it’s probably already using AI, formally or informally.
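To make that visibility concrete, some boards ask management for a living AI system register. Below is a minimal sketch of what one entry might look like, in Python; the fields, field names, and sample record are illustrative assumptions, not a standard schema.

```python
# A minimal sketch of an AI system register entry. The fields and the
# sample record below are illustrative assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    name: str               # internal name of the system
    business_function: str  # e.g. recruitment, pricing, fraud detection
    decisions_affected: str # what the system decides or influences
    third_party: bool       # vendor tool rather than proprietary system
    human_oversight: str    # who can review or override its outputs
    last_reviewed: str      # date of the last governance review

register = [
    AISystemEntry(
        name="CV screening model",
        business_function="recruitment",
        decisions_affected="which applicants reach interview",
        third_party=True,
        human_oversight="hiring manager reviews all rejections",
        last_reviewed="2025-01-15",
    ),
]

# A board pack can then summarise the register: how many systems exist,
# how many are third-party, and which lack a recent review.
print(f"{len(register)} AI system(s) on the register")
```

Even a one-page register like this turns “where are the algorithms?” from an unanswerable question into a standing agenda item.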
2. Where is the human oversight?
It’s common to hear that a human remains “in the loop” on major AI decisions. But what does that really mean? Oversight must be meaningful, not symbolic. Are humans truly in a position to understand, challenge, and override automated outputs? Or are they simply rubber-stamping decisions made by systems they don’t fully grasp?
Board members must ask for clarity:
- What does human intervention look like in practice?
- How often do overrides happen?
- What level of training and authority do those individuals have?
Without this understanding, claims of oversight are little more than wishful thinking.
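One way to move past wishful thinking is to ask for hard numbers. The sketch below shows how an override rate might be computed from a decision log; the log format and field names are assumptions made for illustration, not a real system’s schema.

```python
# Illustrative only: computing how often humans override automated outputs
# from a simple decision log. The log format here is an assumed example.

decision_log = [
    {"system": "rostering", "automated": "exclude", "final": "exclude"},
    {"system": "rostering", "automated": "exclude", "final": "include"},
    {"system": "rostering", "automated": "include", "final": "include"},
]

overrides = sum(1 for d in decision_log if d["automated"] != d["final"])
rate = overrides / len(decision_log)

# An override rate near zero can mean the system is excellent, or that
# reviewers are rubber-stamping. Either way, it gives the board a number
# to probe rather than a reassurance to accept.
print(f"Override rate: {rate:.0%} ({overrides} of {len(decision_log)})")
```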
3. How are we mitigating bias, opacity, and harm?
This is where governance meets ethics. It’s not enough for executives to promise that their systems are responsible. Boards should require evidence: bias testing protocols, explainability reports, third-party audits, post-deployment impact reviews, and simulation testing.
Ask the same questions you would ask of financial or cyber risk:
- What controls are in place?
- How are they verified?
- What reporting reaches the board, and how often?
Don’t settle for platitudes like “we take ethics seriously.” Insist on tangible, auditable safeguards. Responsible AI use must be more than an aspiration; it should be an operational standard.
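What might a tangible, auditable safeguard look like? One common form of bias testing compares a model’s positive-outcome rates across groups. Below is a minimal sketch assuming a “four-fifths rule” style disparate impact check; the group labels, sample data, and 0.8 threshold are illustrative assumptions, not prescribed values.

```python
# A minimal sketch of a disparate impact check: compare each group's
# positive-outcome rate against the most-favoured group. The data and
# the 0.8 threshold (the common "four-fifths rule") are illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, positive_outcome: bool) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_report(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    benchmark = max(rates.values())  # rate of the most-favoured group
    return {
        group: {
            "rate": round(rate, 3),
            "ratio": round(rate / benchmark, 3),
            "flagged": rate / benchmark < threshold,
        }
        for group, rate in rates.items()
    }

# Hypothetical audit data: (group, shift_awarded)
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)

print(disparate_impact_report(sample))
# Group B's ratio (0.55 / 0.80 = 0.688) falls below the 0.8 threshold,
# so it is flagged for review: exactly the kind of auditable evidence
# a board can ask to see.
```

The point is not this particular metric; it is that bias testing produces evidence a board can inspect, challenge, and track over time.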
The Risk of AI-Washing
We’ve seen this pattern before with ESG. Ambitious strategies and glossy brochures often mask a lack of depth. This gap eventually leads to accusations of greenwashing and erodes stakeholder trust.
AI is heading the same way. “AI-washing” refers to the gap between what organisations say about AI and what they actually do. Boards must be alert to this emerging reputational and compliance risk.
Warning signs of AI-washing include:
- AI risk delegated solely to the CTO or data science team, with no meaningful board engagement.
- Strategy decks full of buzzwords – “transformational AI,” “machine learning edge,” “AI-powered insights” – but no alignment with a documented risk register.
- AI ethics policies that exist on paper but are not enforced, reviewed, or resourced.
- Lack of board-level assurance on how AI models are trained, validated, and monitored.
As regulators move fast across the EU, UK, and globally, companies that lack robust AI governance will find themselves scrambling to retrofit credibility. Retrospective compliance is always more expensive than proactive governance.
Board Responsibilities – Not Just Oversight, But Framing
NEDs are not expected to write code or understand the intricacies of neural networks. But they are expected to ensure proper governance around the tools built with these technologies.
At a minimum, boards must:
- Know where and how AI is being used across the organisation, including third-party tools
- Set clear expectations for meaningful human oversight of automated decisions
- Require evidence of bias testing, explainability, and ongoing monitoring
- Ensure AI risk appears in the risk register and in regular board reporting
This is not about slowing innovation. It’s about making innovation safe, sustainable, and aligned with the organisation’s long-term goals. Without this balance, the board becomes not an enabler of innovation but a bystander to risk.
When AI Goes Wrong – A Case Study
In 2024, a global logistics firm faced a public and reputational crisis after its AI-powered shift rostering system disproportionately excluded disabled workers from high-paying shifts. The algorithm hadn’t been designed to discriminate. But nor had it been assessed for indirect bias. When the issue came to light, it wasn’t through internal audit or executive review; it was exposed by a whistleblower who went to the press.
The fallout was swift and costly: a regulatory probe, a sharp decline in employee engagement scores, loss of a major government contract, and the resignation of a board member who had chaired the ethics subcommittee.
This wasn’t a failure of the data science team; it was a failure of board-level governance.
The board had signed off on the use of AI without requesting adequate oversight mechanisms. No one had asked where human judgment entered the process or what controls were in place to detect bias. In hindsight, the questions were obvious. But by then, it was too late.
What Does Good Look Like?
In well-governed organisations, AI oversight is already embedded in board routines, not treated as an annual review topic or a one-off agenda item. High-performing NEDs:
- Ask where and how AI is influencing decisions, and keep asking until the answer is clear
- Probe whether human oversight is meaningful rather than symbolic
- Insist on tangible, auditable safeguards instead of verbal assurances
- Keep AI risk on the board agenda as a standing item
Literacy Is Not Optional
AI is already influencing key decisions in most organisations. It is not a theoretical frontier; it is an active, strategic force shaping real-world outcomes. And like any powerful force, it requires guardrails.
Just as boards have upskilled in cybersecurity, climate risk, and digital transformation, they must now do the same for AI. The biggest boardroom failure in 2025? Believing AI is someone else’s problem.
AI literacy doesn’t require you to become a technologist. But it does require you to become the kind of board leader who knows which questions to ask – before the consequences ask them for you.
Your role as a NED is not to fear AI. It is to frame it, challenge it, and govern it – before it governs you.
Want to Deepen Your AI Oversight Skills?
The article above introduces the critical questions every board should be asking, but if you’re ready to go deeper, our Navigating the AI Revolution Responsibly course module is for you.
This practical, board-ready module is part of the Certified NED Programme, and can be taken as a standalone or alongside other modules to build toward full certification.
You’ll gain:
- A clear framework for AI governance and oversight
- Real-world case studies and dilemmas
- Tools to lead confidently through complexity and hype
Arrange a short call to find out more.