A flood of businesses is looking to AI to solve business challenges, automate processes and differentiate their solutions in highly competitive markets. But with the pressure to bring these solutions to market quickly comes the need to adopt good processes from the outset.
The first key process is understanding your dataset. Previous algorithm failures in other industries (insurance and e-commerce, for example) have stemmed from bias in the gender and socioeconomic makeup of their training datasets.
It is important to understand how bias may occur naturally in your own customer base and how it may affect your inferences; both need checks and balances to guard against sampling bias.
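One simple check along these lines is comparing the demographic makeup of a training sample against the population the model is meant to serve. The sketch below is a minimal illustration, assuming a pandas DataFrame with a hypothetical gender column and illustrative reference proportions; a real check would use your own attributes and baselines.

```python
import pandas as pd
from scipy.stats import chisquare  # chi-square goodness-of-fit test

# Hypothetical training sample; the column and values are illustrative.
train = pd.DataFrame({"gender": ["M"] * 9 + ["F"]})

# Assumed reference proportions for the population you intend to serve.
reference = {"F": 0.5, "M": 0.5}

observed = train["gender"].value_counts()
expected = [reference[g] * len(train) for g in observed.index]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square={stat:.2f}, p={p_value:.3f}")
if p_value < 0.05:
    print("Warning: training sample deviates from the reference population.")
```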
Second, for real-time automated decision systems, it would be wise to create circuit breakers. These raise alerts when the proportion of decisions or the distribution of outcomes skews beyond predefined fail-safe values. This is a good proactive measure for monitoring the performance of algorithmic decision making against expectations.
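A minimal sketch of such a circuit breaker might track decisions over a sliding window and trip when the approval rate leaves a fail-safe band. The class name, window size and thresholds below are all illustrative assumptions, not a prescribed design.

```python
from collections import deque

class DecisionCircuitBreaker:
    """Sliding-window monitor for automated decisions.

    Trips when the share of positive outcomes drifts outside
    fail-safe bounds; all thresholds here are illustrative.
    """

    def __init__(self, window: int = 500, low: float = 0.20, high: float = 0.80):
        self.window = deque(maxlen=window)
        self.low, self.high = low, high
        self.tripped = False

    def record(self, approved: bool) -> bool:
        """Record one decision; return True once the breaker has tripped."""
        self.window.append(approved)
        if len(self.window) == self.window.maxlen:
            rate = sum(self.window) / len(self.window)
            if not (self.low <= rate <= self.high):
                self.tripped = True  # raise an alert, route to human review
        return self.tripped
```

Each automated decision feeds record(); once the breaker trips, the system can pause automation, page a reviewer and log the window for audit.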
AI responsibility is twofold. We’re looking at the strict implementation of governance processes fortified by a culture of open communication and scrutiny. Data governance best practices are critical, but players implementing AI across the business ecosystem also need to be vigilant about engaging a diverse range of internal contributors and stakeholders.
This could mean sharing private and public proofs of concept, working with the academic and research communities on peer review, or even involving consenting members of the public to help validate their AI models.
In Europe, GDPR actually encourages this culture of algorithm scrutiny. Failing to have the appropriate checks in place can have repercussions for data-processing compliance and opens lines of recourse for automated decision making: in particular, data subjects may contest automated decisions and request human intervention and review to ensure fairness. An inability to explain a black-box score, or to reach a similar conclusion under manual review, will invite further scrutiny.
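Explaining a score need not require exotic tooling. As one hedged illustration of a common approach, the sketch below uses scikit-learn's permutation importance to surface which inputs a model's decisions actually depend on; the synthetic data and feature names are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a scoring dataset; feature names are hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Global explanation: how much each feature drives the model's predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["income", "tenure", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```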
Trust is inherently tied to AI, so I urge any business in this space to constantly push for transparency, both with stakeholders and in how they communicate data collection. It is this environment of open discussion and scrutiny, in addition to holistically constructed governance practices, that will help deter bias in AI.