The Outsourcing of Discrimination: how A.I. is being used to undermine human rights

by DL



In a season 3 episode of Netflix's dystopian show "Black Mirror," released in 2016, the protagonist, Lacie Pound, lives in a world where one's socioeconomic status is inextricably linked to a ubiquitous app used to rate fellow citizens. She seeks out a better apartment, but her hopes are dashed because her rating is too low. She hopes to improve her rating by speaking at the wedding of her higher-rated friend, but a disagreement with an airline employee lowers her score to the point that she can no longer book a flight or even rent a decent car. Ultimately, she lashes out, lowering her score even further, and is finally imprisoned for the crime of having too low a rating.


While "Black Mirror" and its warnings about technology fall under the category of science fiction, advances in AI and predictive technology have brought reality much closer to dystopia than most people realize. Imagine a world where an algorithm decides where you don't get to live; imagine a world where an algorithm decides how likely you are to be imprisoned; imagine a world where an algorithm can track you based on your ethnicity. You don't have to imagine that world, because you're already living in it.


Many would argue that AI, algorithms, and predictive technologies are neutral tools that, by definition, are unable to discriminate, but that simply isn't true. Even seemingly innocent oversights in technology can lead to discrimination, and the consequences can range from mundane to deadly. In 2017, a company known as Technical Concepts faced backlash after creating soap dispensers that only worked for white people, because lighter skin was the only skin tone the dispensers had been tested on. In 2019, researchers at the Georgia Institute of Technology found that the cameras and sensors in detection systems used for self-driving cars were much better at detecting people with white skin tones, making those vehicles more likely to strike Black people. Whether intended or not, technology can inherit the biases of its creators.


So how much more damaging to society can artificial intelligence become when that discrimination is intentional? The answer can be found all around the world, from China to America and, yes, even Canada.


Canada holds itself in high esteem when it comes to matters of human rights. When directly compared to the U.S. or China, that may even seem earned, but simply being better than the absolute worst doesn't necessarily translate to good. In 2017, Canadian Prime Minister Justin Trudeau acknowledged housing as a human right. "Housing rights are human rights," the PM said, "and everyone deserves a safe and affordable place to call home. One person on the streets in Canada is too many."


Now, in 2020, not only is Canada still facing a housing crisis made even worse by Covid-19, but one Ontario-based company is developing and using technology that could put that right even further out of people's reach. Naborly is a service that uses AI and algorithms to vet potential tenants, but unlike the average credit check and verification of employment, Naborly takes the screening process several steps further. These steps include using its algorithm to comb through an applicant's social media in an effort to predict behavior. The prospective tenant is then given a score, which is shared among landlords, who can also review tenants and share information with one another to build a database.


In a YouTube video released on April 1, 2020, Naborly CEO Dylan Lenz, who is himself a landlord in San Francisco (a city widely known for its gentrification and displacement of the poor), encouraged landlords to report tenants who missed payments, even as Covid-19 caused record job losses. "The more data we have on what happens over the next few days is going to really accelerate our ability to retrain our AI systems and then help all of you accurately understand tenant risk moving forward into this new world," he told his audience.


This quickly led to widespread accusations of blacklisting tenants for falling behind financially in a pandemic. In an attempt to reassure his clients, Lenz emailed the landlords the next day, promising that "This database helps other landlords know in the future if a tenant has been delinquent in the past, while also helping Naborly continue to deliver the most accurate and up-to-date tenant screening service in the market." He ended the email by assuring landlords, "Please note that we keep reporting fully confidential and DO NOT notify your tenant that you have reported to our system."


Of course, this is a tacit admission that the company's screening process not only raises valid privacy concerns but also operates without the consent of the applicants being screened. In fact, Canada's Personal Information Protection and Electronic Documents Act (PIPEDA) has strict rules limiting the collection of personal information by businesses. It also mandates that people have the right to see what information is being collected about them and the opportunity to challenge that information if they feel it's inaccurate. This seems to be in direct conflict with Naborly's methods of operating, but like many Silicon Valley companies, Naborly is no stranger to circumventing the law.


After the company's initial "bad tenants list" landed it on the Canadian Civil Liberties Association's (CCLA) radar in 2016, Naborly was forced to scrap it. However, the company has seemingly found a way around that by getting landlords to build the list themselves using Naborly's platform. Michael Bryant, executive director of the CCLA, noted, "I fear that Naborly is trying to split hairs and exploit loopholes in order to do obtusely what they cannot do directly under privacy laws."


Housing discrimination isn't discussed much in Canadian media, but it is happening around us all the time. According to the Centre for Equality Rights in Accommodation, Black single parents and South Asian households have a one-in-four chance of experiencing moderate to severe discrimination when trying to rent an apartment in Toronto. Allowing a start-up to potentially streamline that discrimination, with no accountability whatsoever, is unacceptable. With a once-in-a-century pandemic ravaging lives and livelihoods alike and housing insecurity sky-high, it is absolutely immoral that a company could be allowed to profit by putting housing further out of people's reach, especially in a country that claims housing is a human right.


Housing discrimination isn't the only kind of discrimination predictive technologies are helping propagate; they can also outright rob people of their right to due process. With just four percent of the world's population but a quarter of its incarcerated people, America's justice system would be easy to dismiss, at first glance, as broken. But a closer look at the profit motives behind mass incarceration reveals that it may be working exactly as intended.


The US bail bond and private prison industries are worth an estimated $2 billion and $7.4 billion per year respectively, so it's no surprise that artificial intelligence companies have come for their piece of the pie, much to the detriment of the already marginalized. There are many ways in which AI is being used by the American justice system, but by far the most controversial is the use of criminal risk assessment algorithms. So, what is a criminal risk assessment algorithm? A program that analyzes a defendant's profile and then assigns a score estimating how likely they are to reoffend. That score is then used by a judge to help determine a suitable sentence. What could go wrong?
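To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of what such a scoring tool might look like. It is not COMPAS's actual model, which is proprietary; the features, weights, and cutoffs are invented purely for illustration.

```python
# Hypothetical illustration only: a toy "risk score" in the spirit of
# criminal risk assessment tools. The features and weights are invented;
# real products like COMPAS use proprietary models and questionnaires.

def toy_risk_score(prior_arrests: int, age: int, unemployed: bool) -> int:
    """Return a 1-10 'risk of reoffending' decile, where 10 is highest risk."""
    score = 0.6 * prior_arrests          # arrest history reflects policing intensity,
                                         # not just behavior -- this is where bias enters
    score += 1.5 if age < 25 else 0.0    # penalty for being young
    score += 1.0 if unemployed else 0.0  # penalty for unemployment
    return max(1, min(10, round(score))) # squash into a 1-10 decile

# Two defendants who behave identically but live in differently policed
# neighborhoods (and so have different arrest counts) get different scores:
print(toy_risk_score(prior_arrests=1, age=30, unemployed=False))  # -> 1
print(toy_risk_score(prior_arrests=6, age=30, unemployed=False))  # -> 4
```

Notice that nothing in the sketch mentions race, yet the outputs diverge because inputs like arrest counts already encode how different communities are policed. That, at scale, is the dynamic researchers have documented in real tools.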


Well, plenty already has. A ProPublica study of the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm looked at 10,000 cases where it was used and found that "Black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk."


The maker of this program, Equivant, describes its product as "software for justice" that, according to the company's website, is "used nationwide for evidence-based decision making, helping to remove biases by equipping justice professionals with the research and rationale they need for making informed, defensible decisions." Unfortunately, the only true part of that statement is that the tool is used nationwide. Not only has Equivant faced no consequences for its part in the unjust sentencing of thousands of Black defendants, it has been rewarded with $1.67 billion in gross revenue, much of it coming from tax dollars.


Anyone curious as to where that road leads if the AI industry is left unchecked and unchallenged need look no further than China, where AI has become inescapable. China has the largest video surveillance network in the world, which it plans to expand to over 600 million cameras by the end of 2021. This network is being used to implement the country's social credit program, in which Chinese citizens are tracked with facial recognition systems, rated according to their behavior, and then given a public score that determines their mobility on the socioeconomic ladder. According to the Communist Party of China, the goal of this system is to "make trustworthy people benefit everywhere and untrustworthy people restricted everywhere." People with high scores are rewarded with things such as discounts on their heating bills, access to better-paying jobs, or even access to better healthcare, while those with low scores can be barred from booking plane tickets and publicly shamed for things such as defaulting on their debts.


All of this, of course, is eerily reminiscent of dystopian science fiction, but it gets worse. The Chinese government has been using its omnipresent surveillance and facial recognition apparatus not only to crush dissent but also to track and persecute its ethnic and religious minorities, namely the Muslim Uyghurs. Over one million Uyghurs are currently being held in government-run "reeducation" (read: concentration) camps, making it the largest imprisonment of people based on religion since the Holocaust. AI is used to look for 75 behavioral indicators of what the Chinese government considers religious extremism, but these indicators are completely arbitrary: simply growing a beard or fasting can land someone in these camps, where forced labor, forced sterilization, and even forced abortion have become the norm. Meanwhile, the Chinese AI companies behind this are expected to reach a combined net worth of $165 billion within this decade.


As it stands today, world governments are either woefully unprepared to effectively regulate the artificial intelligence industry or actively complicit in its unfettered violation of human rights. We pride ourselves on the progress we've made as a society and we celebrate the civil rights leaders who stood against systemic racism in the past, but right now the AI industry is helping carry systemic racism into the future, and making a killing, both figuratively and literally, in doing so. We all need to push back against what is undoubtedly the automation of racism and the outsourcing of discrimination by raising awareness and collectively demanding more accountability on this issue from our elected officials at every level. If we fail to do so, the future will end up looking a lot less like Martin Luther King's dream, and a lot more like an episode of "Black Mirror."