Artificial intelligence is no longer confined to tech labs or innovation teams. It is now embedded in everyday business operations, influencing customer decisions, underwriting processes, hiring practices, product recommendations and more. As AI technologies become more deeply woven into strategic workflows, the associated risks have become immediate and material.
Yet many boards remain behind the curve when it comes to AI oversight. Some treat it as a purely technical issue, some delegate governance wholesale to management, and others wait for regulators to set clearer expectations. Each approach risks significant blind spots, not only operationally but reputationally and ethically.
Boards must now treat AI as a standing risk issue, not an emerging trend. This article sets out how board members, and particularly non-executive directors (NEDs), can take a more structured, informed role in overseeing AI across the organisation.
AI has evolved from an experimental capability to a core driver of business efficiency and competitive advantage. Organisations are increasingly using it to automate decisions, optimise pricing, detect fraud, allocate resources and personalise customer experiences.
But with increased reliance comes increased exposure. AI systems can produce harmful outputs, reinforce bias, malfunction in high-stakes environments or operate without sufficient human oversight. Even when deployed with the best of intentions, they can create regulatory, legal and reputational liabilities if boards are not asking the right questions.
In short, AI no longer sits in the future risk category. It is a present-tense governance priority.
While many organisations have technical teams experimenting with or deploying AI tools, fewer have integrated AI oversight into their enterprise risk and assurance frameworks. This leaves gaps that boards may not fully appreciate.
A clear oversight structure, grounded in governance principles and ethical guardrails, is needed to address these risks.
The board does not need to be fluent in algorithms or data science. But it does need to understand the strategic use of AI, its implications for risk and accountability, and how management is exercising control.
NEDs, in particular, bring a crucial external lens. Their independence allows them to probe areas where management may be too close to the technology to see the full picture.
To perform their role effectively, NEDs should challenge management on the strategic, operational and ethical implications of AI. Questions worth asking include: Where is AI currently used in decisions that affect customers, employees or the public? Who is accountable when a system produces a harmful or erroneous output? How is bias tested for before deployment, and how are outcomes monitored afterwards? What would trigger escalation to the board? Questions like these can surface issues that may otherwise remain hidden.
Boards should not treat AI oversight as a separate initiative. It should be embedded into existing governance, risk and assurance frameworks. The key is integration — ensuring that AI does not sit outside the standard processes for managing enterprise risk.
The board committees responsible for risk and assurance, typically the audit and risk committees, should receive regular reporting on AI risks, incidents and audit findings. External expertise can be brought in where technical interpretation is required.
An AI policy should cover procurement, development, deployment, monitoring and incident response. It should define principles such as transparency, explainability, proportionality and accountability.
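As a concrete illustration, here is a minimal sketch of how one entry in the system register behind such a policy might look, written in Python. The schema and field names are assumptions made for this example, not a standard; the point is that every lifecycle stage and principle the policy names should map to something a board or auditor can inspect.

```python
# A minimal sketch of a machine-readable AI system register entry.
# The schema and field names are illustrative assumptions, not a
# standard; each lifecycle stage and principle named in the policy
# should correspond to a field someone can inspect and challenge.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    accountable_owner: str       # a named executive, not a team
    lifecycle_stage: str         # procurement / development / deployment / monitoring
    decision_impact: str         # e.g. "employment", "customer pricing"
    human_oversight: bool        # is a human in or over the loop?
    explainability_notes: str    # how outputs are explained to affected parties
    last_outcome_audit: str      # date of the most recent fairness/outcome audit
    incident_log: list[str] = field(default_factory=list)

# Hypothetical entry for a CV-screening tool.
record = AISystemRecord(
    name="cv-screening-model",
    accountable_owner="Chief People Officer",
    lifecycle_stage="deployment",
    decision_impact="employment",
    human_oversight=True,
    explainability_notes="Recruiters see the main factors behind each score.",
    last_outcome_audit="2025-01-15",
)
print(record.name, record.lifecycle_stage, record.last_outcome_audit)
```

A register of this kind also gives committee reporting a natural source: the packs described above can be drawn from it rather than assembled ad hoc.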
Board members may require training to engage confidently with AI topics. This does not mean becoming technical experts, but being equipped to understand the implications and challenge appropriately.
Reputation can be damaged not just by what AI systems do, but by how organisations respond when things go wrong. The board plays a crucial role in shaping that response — both before and after the event.
Several companies have faced public scrutiny after using AI-based hiring tools that systematically disadvantaged certain candidates. In many cases, the board was unaware of the tool’s deployment or had assumed it was neutral.
Lesson: Boards must ensure that fairness testing is built into any AI process that impacts customers, employees or the public — and that outcomes are regularly audited.
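To illustrate what "regularly audited" can look like at its simplest, the sketch below computes an adverse impact ratio over hypothetical selection rates from a hiring tool. The data and the 0.8 threshold, the familiar "four-fifths" rule of thumb, are assumptions for the example; a real audit would use larger samples and more robust statistics.

```python
# A minimal sketch of an outcome audit, using hypothetical data:
# selection rates per demographic group from an AI-assisted hiring tool.

def adverse_impact_ratio(selection_rates):
    """Ratio of the lowest to the highest group selection rate.

    Under the widely used "four-fifths" rule of thumb, a ratio below
    0.8 is commonly treated as a signal of potential adverse impact
    that warrants investigation.
    """
    rates = list(selection_rates.values())
    return min(rates) / max(rates)

# Hypothetical snapshot: share of applicants in each group that the
# tool advanced to interview.
rates = {"group_a": 0.42, "group_b": 0.30}

ratio = adverse_impact_ratio(rates)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: escalate for review.")
```

The value for the board is not the arithmetic but the discipline: a recurring, documented check whose results flow into committee reporting.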
Investors, regulators and the public expect organisations to explain how they use AI and what safeguards are in place. Boards must ensure the organisation is transparent and accountable in both policy and practice.
AI oversight is no longer a niche concern. It is a board-level responsibility, sitting at the intersection of ethics, risk, strategy and accountability.
Boards that wait for a crisis, a regulatory intervention or public backlash before engaging with AI governance are missing the opportunity to lead. Those that act now — by embedding oversight into governance structures, equipping themselves to ask the right questions, and insisting on ethical guardrails — will be better placed to harness the benefits of AI while avoiding its most damaging risks.