The White House recently introduced the AI Bill of Rights. What is it, why is it needed, and what does it do? In this post, I offer a technology and business perspective on this document and what it can or should mean for organizations.
Context – why is this necessary?
First, some context. As many know, AI is now being developed or deployed in virtually every business context, from finance (think credit card approval) to healthcare (think disease diagnosis and risk assessment) and beyond. While many advanced technologies remain far removed from consumers – when was the last time you thought about how the latest database research was used by your hospital to treat you? – AI is not like that. AI advances touch humans directly, whether by using human information, by making decisions that affect humans, or both. AI is also ubiquitous: anyone from a big bank to a high school student can now use the most advanced AIs available and demonstrate the resulting application to anyone. How do we ensure this rapid pace of technological advancement is safe?
Enter AI ethics
AI ethics is the field of AI that focuses on the ethical application of AI, particularly in relation to humans and society. It includes areas such as AI bias – ensuring AIs treat all humans fairly – and AI privacy – ensuring humans can understand and control how their information is used. AI ethics is a critical area, as explained here. This brings us to the AI Bill of Rights, which is a blueprint for AI ethics in practice. The AI Bill of Rights outlines the government’s view of which human rights should be protected by organizations building and deploying AI.
What does the AI Bill of Rights say?
A detailed document on the AI Bill of Rights can be found here. The document outlines five fundamental rights. I have listed them below:
- Security. The key point here is that automated systems can (and do!) make mistakes. In AI, these errors can arise in many ways – see the article here on how COVID-19 broke many AIs around the world. While not all errors are predictable, operational machine learning (MLOps) techniques can be used to detect and mitigate AI errors before they cause further damage.
- Privacy. AI lives on information. Combined with the proliferation of sensors, video cameras and recordings of online activity, it is now possible for large amounts of personal information to be used by organizations without the knowledge of the individual. This element focuses on the need for individuals to have methods in place to access, understand, and control how their personal information is being used.
- Justice. AIs learn patterns from data. Without proper data validation, AIs can (and do!) learn biases and treat people unequally. This element focuses on the need to design and test AIs for fairness.
- Explanation. Data protection law focuses on individuals' ability to understand what information about them an algorithm uses. The right of explanation is complementary: it states that individuals also have the right to understand how algorithms use the data they are permitted to use. For example, if an individual has consented to a bank using their personal information (in accordance with data protection requirements), the right of explanation entitles them to know whether their age, gender, or other attributes were used to determine their loan interest rate.
- Alternatives. This element focuses on the need to provide choices to the individual. Choices may include opting out of a system that makes automated decisions, or having access to people or processes that can remedy problems caused by an automated system.
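The fairness testing mentioned under the justice element can be made concrete with a simple metric. Below is a minimal sketch of a demographic parity check in Python; the function name, the toy data, and the choice of metric are illustrative assumptions on my part, not something prescribed by the AI Bill of Rights itself.

```python
# Minimal sketch of a group-fairness check, assuming binary loan-approval
# predictions and a single protected attribute. All names and data here
# are illustrative.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in approval rates between groups.

    predictions: list of 0/1 model decisions (1 = approved)
    groups: list of group labels, aligned with predictions
    """
    counts = {}
    for pred, group in zip(predictions, groups):
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + pred, total + 1)
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: group "A" is approved 3 times out of 4 (rate 0.75),
# group "B" only 1 time out of 4 (rate 0.25) – a gap of 0.50.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
```

In practice, a check like this would run as part of an MLOps pipeline, with an alert raised whenever the gap for a deployed model exceeds an agreed threshold; richer fairness metrics exist, but the operational pattern is the same.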
The AI Bill of Rights, I believe, outlines a set of interrelated principles that can be applied at every stage of the AI lifecycle through a combination of AI ethics and MLOps techniques. How they are applied is highly domain specific – healthcare applications, for example, impose different privacy constraints than web-based retail applications – but the principles themselves apply everywhere. It is worth examining each of these pillars and understanding how they should fit into the operational AI practices in your organization.