The risk of bias in artificial intelligence (AI) has been the source of much concern and debate. Numerous high-profile examples demonstrate that AI is not a "neutral" technology by default and can reflect or exacerbate bias encoded in human data. The consequences of unmitigated, even unintentional, bias in AI can be far-reaching, with troubling cases in healthcare, judicial sentencing, hiring, and credit evaluations, to name a few.
As the technology and procedures underlying AI mature, a standardized approach to identifying, diagnosing, and mitigating bias is becoming a pivotal addition to the enterprise AI and machine learning (ML) workflow. The following guide tackles AI bias pragmatically and in technical depth, with recommendations and procedures to help practitioners build equitable, fair models that reflect their organizational values and minimize the potential for harm.
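To make the diagnosis step concrete, one common first check is the demographic parity difference: the gap in positive-prediction rates between groups defined by a sensitive attribute. The sketch below is an illustrative example, not a method prescribed by this guide; the data and group labels are invented for demonstration.

```python
# Minimal sketch of one bias diagnostic: demographic parity difference.
# Data and group labels below are hypothetical toy values.

def positive_rate(preds, groups, group):
    """Fraction of positive (1) predictions among members of one group."""
    selected = [p for p, g in zip(preds, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_difference(preds, groups):
    """Gap between the highest and lowest group-level positive rates."""
    rates = {g: positive_rate(preds, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy example: binary hiring-style predictions for two groups.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value near zero suggests similar selection rates across groups; larger values flag a disparity worth investigating before deciding on a mitigation strategy.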