Artificial intelligence is no longer a specialist tool confined to research labs or experimental projects. It is embedded in daily operations, shaping decision-making, customer interactions, and product design across multiple industries. With this integration comes a deeper and more complex governance challenge. The conversation is moving from whether AI should be adopted to how it should be governed, and more specifically, how boards can ensure that ethical considerations are built into every stage of AI deployment.
The stakes are high. Misuse of AI or failure to anticipate ethical risks can result in reputational damage, regulatory penalties, and erosion of stakeholder trust. The accountability gap emerges when responsibility for AI outcomes is unclear or fragmented across teams. Boards must address this gap proactively, setting clear expectations for ethical oversight and ensuring that accountability is defined from the top.
AI ethics is not simply about preventing harm. It encompasses fairness, transparency, privacy, accountability, and the responsible use of data. As AI systems influence everything from recruitment to risk scoring and customer service, ethical considerations can no longer be treated as a peripheral compliance exercise.
Ethical risks in AI often fall into five broad categories:

- Bias and discrimination, where systems reproduce or amplify unfairness in decisions such as recruitment or risk scoring
- Lack of transparency, where decisions cannot be explained to those affected
- Privacy violations and irresponsible use of personal data
- Unclear accountability for outcomes and impacts
- Unintended consequences that emerge only once systems are deployed
Boards must ensure these risks are addressed early in AI adoption, not retrofitted in response to problems.
The accountability gap occurs when it is unclear who is responsible for AI’s decisions and impacts. This can arise when:

- Responsibility for AI outcomes is fragmented across technical, legal, and operational teams
- Systems operate as “black boxes” whose decisions cannot be fully explained
- No single owner is assigned to an AI system from design through deployment and beyond
When accountability is not clearly assigned, ethical lapses can slip through unnoticed until they become crises. Regulators are increasingly unwilling to accept “black box” explanations, and stakeholders expect organisations to know exactly how their AI systems operate and why.
Boards cannot delegate away responsibility for AI governance. Even when technical expertise resides elsewhere, directors have a duty to ensure the right frameworks, controls, and reporting mechanisms are in place. Key responsibilities include:

- Setting clear expectations for ethical oversight from the top
- Assigning explicit ownership of AI outcomes before systems are deployed
- Ensuring frameworks, controls, and reporting mechanisms keep pace with how AI is actually used
- Demanding transparency about how AI systems operate and why
Boards can use the following approaches to strengthen ethical oversight:
Ethical Impact Assessments
Just as environmental and social impact assessments are standard in certain sectors, ethical impact assessments can be applied to AI projects before launch. These should identify potential biases, privacy risks, and unintended consequences.
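To make this concrete, here is a minimal sketch, in Python, of one check that might sit inside such an assessment: comparing favourable-outcome rates across groups before a system goes live. The function, the sample data, and the suggested tolerance are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of one element of a pre-launch ethical impact
# assessment: checking whether favourable outcomes are distributed
# evenly across groups. All names and thresholds are illustrative.

from collections import defaultdict

def demographic_parity_gap(decisions):
    """Return the largest difference in favourable-outcome rates
    between any two groups, plus the per-group rates.

    `decisions` is an iterable of (group_label, favourable) pairs.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            favourable[group] += 1
    rates = {g: favourable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    gap, rates = demographic_parity_gap(sample)
    print(f"Favourable-outcome rates by group: {rates}")
    # A gap above an agreed tolerance (say 0.10) would be flagged
    # to the ethics committee before launch.
    print(f"Parity gap: {gap:.2f}")
```

A check like this is only one input to an assessment; the board-level point is that such evidence exists and is reviewed before launch, not after.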
AI Ethics Committees
A dedicated committee, including both internal leaders and external experts, can provide independent scrutiny of AI projects, reporting findings directly to the board.
Scenario Testing
Boards should ask management to present “what if” scenarios exploring the ethical risks of AI failure or misuse, along with mitigation strategies.
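As an illustration of what this can look like beneath the board pack, the sketch below runs a toy decision function against a small library of “what if” cases and reports where behaviour departs from expectations. The `credit_decision` function and the scenarios are hypothetical stand-ins for a real system and a board-agreed scenario set.

```python
# A minimal sketch of "what if" scenario testing: exercise a decision
# system against curated edge cases and report departures from the
# expected behaviour. The decision function and scenarios are toys.

def credit_decision(applicant):
    """Hypothetical stand-in for a production AI decision system."""
    return applicant.get("income", 0) > 30_000

SCENARIOS = [
    # (title, inputs, expected decision)
    ("Missing income data", {"name": "X"}, False),
    ("Borderline income", {"income": 30_000}, False),
    ("Clearly qualifying income", {"income": 90_000}, True),
]

def run_scenarios(decide, scenarios):
    results = []
    for title, inputs, expected in scenarios:
        actual = decide(inputs)
        results.append((title, actual == expected, actual))
    return results

if __name__ == "__main__":
    for title, passed, actual in run_scenarios(credit_decision, SCENARIOS):
        status = "PASS" if passed else "FAIL: needs a mitigation plan"
        print(f"{title}: decision={actual} [{status}]")
```

Each failing scenario should arrive at the board paired with a named owner and a mitigation strategy, not just a red flag.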
Continuous Monitoring
Ethical oversight is not a one-time sign-off. AI systems evolve, and so should their monitoring. Boards should expect regular performance and ethics reports, supported by meaningful data.
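The sketch below shows one simple form that monitoring might take: comparing a live favourable-outcome rate against the baseline signed off at launch and raising an alert when it drifts beyond an agreed tolerance. The baseline figure, the window, and the 10% tolerance are assumptions for illustration.

```python
# A minimal sketch of continuous ethics monitoring: flag when a live
# metric drifts beyond an agreed tolerance from the launch baseline.
# Baseline, window, and tolerance are illustrative assumptions.

def drift_alert(baseline_rate, recent_outcomes, tolerance=0.10):
    """Return (drifted, recent_rate); `recent_outcomes` is a list of
    1/0 flags for favourable decisions in the latest window."""
    recent_rate = sum(recent_outcomes) / len(recent_outcomes)
    return abs(recent_rate - baseline_rate) > tolerance, recent_rate

if __name__ == "__main__":
    baseline = 0.45                               # rate signed off at launch
    this_month = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # 30% favourable
    drifted, rate = drift_alert(baseline, this_month)
    print(f"Recent rate {rate:.0%} vs baseline {baseline:.0%}")
    if drifted:
        print("ALERT: behaviour has drifted; escalate in the next ethics report")
```

The mechanism matters less than the discipline: boards should expect drift to be measured against an agreed baseline and escalated automatically, not noticed by accident.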
Non-executive directors are not expected to be AI engineers, but they must be informed, sceptical, and prepared to challenge. This means:

- Asking probing questions about how AI systems reach their decisions
- Requesting “what if” scenario testing and evidence of mitigation plans
- Insisting on regular, data-backed performance and ethics reporting
- Escalating concerns when answers are vague or incomplete
NEDs have a unique vantage point. They can see patterns across industries and bring external perspectives that help prevent groupthink around AI adoption.
To avoid the accountability gap, boards should regularly ask:

- Who owns the outcomes of each AI system we deploy?
- Can we explain how our AI systems reach their decisions, and why?
- Were ethical risks assessed before launch, and how are they monitored now?
- How would we detect an ethical failure, and who would respond?
The answers to these questions should be clear, documented, and tested over time.
Global regulators are moving towards stricter oversight of AI, with proposals for mandatory risk classification, transparency obligations, and penalties for non-compliance. Reputational risk, however, often moves faster than regulation: in an age of rapid information sharing, a single ethical misstep can become a public crisis within hours.
Boards that take proactive ownership of AI ethics will not only reduce legal exposure but also position their organisations as trustworthy innovators. Those that neglect this responsibility risk finding themselves in the spotlight for all the wrong reasons.
AI governance is not just a technical challenge; it is a leadership responsibility. Boards must ensure that accountability is defined, ethical risks are anticipated, and transparency is non-negotiable. The accountability gap closes only when there is clarity about who owns AI outcomes and how those outcomes align with organisational values.
For boards willing to engage deeply with AI ethics, the opportunity is not only to avoid harm but to lead with integrity in an era where trust will be the ultimate competitive advantage.