This resource is no longer available


Industry leaders and policymakers have begun to converge on shared requirements for trustworthy, accountable AI. Microsoft calls for fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. IBM calls for explainability, fairness, robustness, transparency, and privacy. In the United States, House bill H.R. 2231, the Algorithmic Accountability Act of 2019, would establish requirements for algorithmic accountability, addressing bias and discrimination, risk-benefit analysis and impact assessment, and issues of security and privacy. The government of Canada has published its own Algorithmic Impact Assessment tool, available online. These proposals have far more principles in common than they have divergent ones. At DataRobot, we share this vision and have defined our 13 dimensions of AI Trust to pragmatically inform the responsible use of AI.

The challenge now is to translate those guiding principles and aspirations into practice, making implementation accessible, reproducible, and achievable for everyone who engages with the design and use of AI systems. This is a tall order, but far from an insurmountable obstacle. This document is not a statement of principles for trustworthy AI; rather, it takes a deep dive into the practical concerns and considerations, and the frameworks and tools, that can empower you to address them. We approach these principles through the dimensions of AI Trust, which we detail in the sections that follow.

Vendor: DataRobot
Posted: Jun 3, 2021
Published: May 17, 2021
Format: PDF
Type: White Paper
