Stator AI

What boards should know about bias audits before they need one

A guest post from the FairGap team on the questions a board should be asking about algorithmic bias audits, written for the executives Stator AI typically briefs.

The conversation about algorithmic bias has, in most companies, been an engineering and legal one. The board sees the work as compliance, signs off on the budget, and trusts that the team handling it knows what it is doing. The pattern works fine until the company has an incident, at which point the board discovers that it did not have the level of understanding the situation now requires.

The companies that come out of bias-related incidents in the best shape tend to have boards that asked four or five clarifying questions before the incident, kept asking them at each board meeting, and built a familiarity with the topic that paid off when the situation needed it. The companies that come out worst tend to have boards that delegated the topic entirely until it became urgent.

We work with the engineering and product teams that run these audits. The board-level questions we keep recommending to our customers are short. We thought it would be useful to share them in a context where executives are likely to read them.

The first question is whether the company has identified, in writing, which of its automated decisions are subject to bias risk. Most companies have automated decisions that, ten years ago, a human made. Pricing, eligibility, ranking, scheduling, prioritization. The list is longer than executives often realize, because the automation accreted gradually. A board that asks for the written list, and reviews it annually, has a much better view of where the risk lives.

The second question is which of those automated decisions have been audited for disparate impact, when, and by whom. The honest answer at most companies is "very few, never, by no one." That answer is not necessarily a problem if the decisions are low-stakes. It is a problem if the decisions are high-stakes and the board has not been told. The board's job is to know which case applies and to ensure the company's investment is sized accordingly.
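To make "audited for disparate impact" concrete: one common screening metric compares the rate at which each group receives the favorable outcome and flags large gaps. The sketch below, with illustrative group names and counts (not real data), applies the "four-fifths rule" threshold used in US employment contexts; a real audit involves more than this one ratio.

```python
# Minimal sketch of a selection-rate comparison for disparate impact.
# Groups and counts are hypothetical; the 0.8 threshold is the
# four-fifths rule used as a screening heuristic, not a legal test.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group that received the favorable outcome."""
    return selected / total

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Illustrative outcomes of an automated eligibility decision, by group.
rates = {
    "group_a": selection_rate(480, 1000),  # 0.48
    "group_b": selection_rate(300, 1000),  # 0.30
}

ratio = disparate_impact_ratio(rates)
print(f"disparate impact ratio: {ratio:.3f}")  # 0.30 / 0.48 = 0.625
if ratio < 0.8:  # four-fifths screening threshold
    print("flag for review: ratio below 0.8")
```

A board does not need to run this calculation itself; the point is that "audited" should mean numbers like these exist, per decision and per group, with a date attached.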

The third question is what the company would do if a regulator, a customer, or a journalist asked for evidence that a particular automated decision is fair. The right answer is a pre-prepared response that points to the audit, the data, and the remediation history. The wrong answer is the team being asked to assemble that response in real time during an incident. Boards that have asked for the dry-run version of this answer tend to find that the team can produce it for one or two priority decisions and not for others, which is useful information.

The fourth question is whether the audit cadence is keeping up with the rate of model and data change. Many companies do an audit once and treat it as durable. Models drift, training data shifts, customer populations change, and the audit becomes stale within a year. The board should know whether the company has a recurring audit schedule for its high-stakes decisions or whether it is relying on point-in-time audits that are no longer current.
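The cadence question can be reduced to a mechanical check the team can run at any time: flag any decision whose last audit predates its most recent model or data change, or is older than the agreed cadence. The sketch below is hypothetical (decision names, dates, and the annual cadence are assumptions), but it shows how little machinery the check requires.

```python
# Hypothetical staleness check for point-in-time bias audits.
# An audit is stale if the model/data changed after it ran, or if
# it is older than the assumed cadence (here, one year).
from datetime import date, timedelta

AUDIT_CADENCE = timedelta(days=365)  # assumed annual cadence

# Illustrative high-stakes decisions and their audit history.
decisions = [
    {"name": "pricing",
     "last_audit": date(2024, 1, 10),
     "last_model_change": date(2024, 6, 2)},
    {"name": "eligibility",
     "last_audit": date(2024, 9, 1),
     "last_model_change": date(2024, 3, 15)},
]

def stale(decision: dict, today: date) -> bool:
    """True if the audit no longer covers the current model."""
    return (decision["last_model_change"] > decision["last_audit"]
            or today - decision["last_audit"] > AUDIT_CADENCE)

today = date(2025, 2, 1)
for d in decisions:
    if stale(d, today):
        print(f"{d['name']}: audit predates latest change, re-audit needed")
```

With the illustrative dates above, the pricing audit is flagged (the model changed after it ran) and the eligibility audit is not. A board asking the fourth question is, in effect, asking whether anyone runs this check and acts on its output.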

The fifth question is who owns remediation. An audit that finds disparate impact and is followed by no clear remediation owner produces worse outcomes than no audit at all, because the company now has documented knowledge of a problem it did not act on. The board should know which executive owns the remediation budget and the authority to ship the changes the audit recommends.

The five questions are not exhaustive, and the company's specific situation will surface others. They are the minimum set we suggest as starting points. A board that has asked all five and received credible answers has done significantly more diligence than the median, and has positioned the company to handle the next incident with composure rather than improvisation.

For executives who would like a structured way to bring this conversation to their board, the working pattern is to add a recurring agenda item, ten or fifteen minutes per quarter, in which the executive responsible for ML or analytics walks the board through the answers to the five questions. The conversation is brief, the memory is shared across the board, and the inevitable specific question becomes a discussion the board is prepared for rather than one it is having for the first time.


This is a guest post from the team at FairGap, who run third-party algorithmic bias audits for organizations operating regulated or consumer-facing automated decisions.