With a name like the AI Bill of Rights, you’d be forgiven for thinking that robots and machines are being endowed with the same ethical protection as human beings.
In reality, however, the AI Bill of Rights aims to protect the public from the harm that automated systems can produce through their various algorithms—a phenomenon known as artificial intelligence bias or AI bias.
What is AI bias?
Thanks to advances in computer science, developers have been creating algorithms powerful enough to help us make decisions more efficiently, from loan approvals, hiring, and parole eligibility to patient care. However, what some of these creators didn't anticipate is that many of those machine-made decisions would reflect human biases.
Picture this:
- A woman applies for a job, but her application is rejected automatically because a recruiting algorithm is set to favor men's résumés.
- A Latino couple's offer on their dream home is turned down repeatedly by a mortgage-approval algorithm, even though they are high earners with a hefty down payment.
- A black teen caught stealing is labeled a high-risk future offender by an algorithm used in courtroom sentencing, while a white man who steals something of the same value is rated low-risk.
The above are real-life examples of AI bias found embedded in the algorithmic systems of a Big Tech company, the country's largest mortgage lenders, and the judicial system, respectively.
Why does AI bias occur?
While bias in AI is usually not deliberate, it is very much a reality. And although there's no single, definitive answer as to what causes it, sources known to contribute include:
Creator bias: Because algorithms and software are designed to mimic humans by uncovering certain patterns, they can sometimes adopt the unconscious prejudices of their creators.
Data-driven bias: Some AI learns by observing patterns in data. If a particular dataset is biased, then the AI, being a good learner, will be too (a minimal sketch after this list shows how).
Bias through interaction: ‘Tay,’ Microsoft’s Twitter-based chatbot, is a prime example. Designed to learn from its interactions with users, Tay lasted a mere 24 hours before being shut down, having turned aggressively racist and misogynistic.
Latent bias: This occurs when an algorithm incorrectly correlates ideas with gender and race stereotypes, for example, associating the term “doctor” with men simply because men appear in the majority of stock imagery.
Selection bias: If the data used to train the algorithm over-represents one population, it’s likely it will operate more effectively for that population at the expense of other demographic groups (as seen with the Latino couple above).
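To make the data-driven and selection bias mechanisms above concrete, here's a minimal, hypothetical sketch in Python. The skewed "hiring records," the frequency-counting "model," and the threshold are all invented for illustration; no real system is this simple, but the failure mode is the same: a model trained on biased history reproduces that bias in its decisions.

```python
# A minimal, hypothetical sketch of data-driven and selection bias:
# a toy "hiring model" trained by frequency counting on skewed records.
# All data and numbers below are invented for illustration only.
from collections import Counter

# Hypothetical historical hiring records as (gender, hired?) pairs.
# Men are over-represented and were hired far more often, mirroring
# a biased past process (selection bias in the training data).
history = (
    [("male", True)] * 80 + [("male", False)] * 20
    + [("female", True)] * 10 + [("female", False)] * 40
)

def train(records):
    """Learn P(hired | gender) by simple frequency counting."""
    hires, totals = Counter(), Counter()
    for gender, hired in records:
        totals[gender] += 1
        hires[gender] += hired  # True counts as 1, False as 0
    return {group: hires[group] / totals[group] for group in totals}

def predict(model, gender, threshold=0.5):
    """Recommend a candidate if their group's historical hire rate
    clears the threshold: the bias in the data becomes the decision."""
    return model[gender] >= threshold

model = train(history)
print(model)                     # {'male': 0.8, 'female': 0.2}
print(predict(model, "male"))    # True: recommended
print(predict(model, "female"))  # False: rejected on group statistics alone
```

The same mechanics underlie latent bias: a model that mostly sees "doctor" alongside images of men will encode that association unless the training data is rebalanced or the correlation is explicitly corrected.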
Over the past few years, it's become clear that the machines created to streamline human decision-making are also contributing to widespread ethical problems.
Not surprisingly, this has resulted in calls for the U.S. government to adopt an algorithmic bill of rights that protects the civil rights and liberties of the American people, a call the government has finally heeded.
How will the AI Bill of Rights combat bias?
In a big win for those who sounded the alarm over AI bias, the White House Office of Science and Technology Policy (OSTP) recently released what it calls a blueprint for the Bill.
After gathering input from Big Tech companies, AI auditing startups, technology experts, researchers, civil rights groups, and the general public over a one-year period, the OSTP laid out five categories of protection, along with steps that creators should take when developing their AI technology:
- AI algorithms should be safe and effective. How: By thoroughly testing and monitoring systems to ensure they aren't misused.
- Humans should not be discriminated against by unfair algorithms. How: By implementing proactive measures with continuous and transparent reporting.
- AI should allow people the right to control how their data is used. How: By giving citizens access to information about how their data is collected and used.
- Everyone deserves to know when an AI is being used and when it’s making a decision about them. How: By providing accompanying documents that outline the exact impact these systems have on citizens.
- People should be able to opt out of automated decision-making and talk to a human when encountering a problem. How: By ensuring the option to opt out is made clearly available.
When can we expect these laws to protect us?
Unfortunately, the answer isn't clear-cut. Unlike the better-known Bill of Rights, which comprises the first ten amendments to the U.S. Constitution, the AI version has yet to become binding legislation (hence the term "blueprint"). This is because the OSTP is a White House body that advises the president but can't enact laws.
This means that adhering to the recommendations laid out in the nonbinding white paper (as the blueprint is described) is completely optional. For now, the AI Bill of Rights is better seen as an educational tool that outlines how government agencies and technology companies should make their AI systems safe and their algorithms free of bias.
So, will AI ever be completely unbiased?
An AI system is only as good as its input data. If creators follow the recommendations set out in the Blueprint and consciously develop AI systems with responsible AI principles in mind, AI bias could, in principle, become a thing of the past.
However, while the Blueprint is a step in that direction, experts emphasize that until the AI Bill of Rights is enforced as law, there will be too many loopholes allowing AI bias to go undetected.
“Although this Blueprint does not give us everything we have been advocating for, it is a roadmap that should be leveraged for greater consent and equity,” says the Algorithmic Justice League, an organization dedicated to advocating against AI-based discrimination. “Next, we need lawmakers to develop government policy that puts this blueprint into law.”
At this stage, it's anyone's guess how long that will take. Meredith Broussard, a data journalism professor at NYU and author of Artificial Unintelligence, believes it's going to be a “carrot-and-stick situation.”
She explains, “There’s going to be a request for voluntary compliance. And then we’re going to see that that doesn’t work—and so there’s going to be a need for enforcement.”
We hope she’s on to something. Humanity deserves technology that protects our human rights.
Comments
I just read the article on the “Artificial Intelligence Bill of Rights” on ExpressVPN’s blog, and I must say it is a thought-provoking piece. It highlights the growing importance of ethical considerations in AI development and the need to safeguard fundamental rights. It’s reassuring to see discussions around AI’s impact on privacy, fairness, and accountability.
I am really glad this is not an AI chosen article that it deemed ok to publish. This article clearly shows the author’s bias on racial matters. Why did the author forget to mention political bias as a major form of bias as practiced by folks like twitter, Facebook, Instagram and big tech of that nature mmmmmmmm? Just a thought