How an algorithm can keep you from a job — or benefits if you lose your job.

Published: Tuesday, July 26, 2022
By Anne Paxton, Policy Director, Unemployment Law Project and Jennifer Lee, Technology and Liberty Manager, ACLU-WA

If you’re looking for a job today or are one of the millions of people who rely on unemployment assistance, there’s a chance you’ll encounter and be affected by an algorithm or automated decision-making system (ADS) somewhere in the process, whether in hiring, performance tracking, or the delivery of unemployment benefits, even if you don’t realize it.

While automated decision systems may allow employers to increase the number of candidates interviewed and save on the cost of reading through applications, using them in hiring raises several concerns that should prompt further scrutiny. Not only can using an ADS lead to inaccuracies, but these systems may also entrench and worsen existing biases, making them harder to uncover and challenge. Faulty or misguided algorithms have been shown to discriminate against applicants on many bases, including gender, race, and disability status.

Many hiring algorithms use data about past successful and unsuccessful job candidates and look for commonalities in that data, often filtering out candidates deemed to lack the right fit. However, if past hiring was shaped by processes that produced biased outcomes, an ADS trained on that data can automate those biases. In the private sector, for example, Amazon once developed a recruiting tool it ultimately abandoned after discovering the algorithm discriminated against women. The company’s technology analyzed a decade’s worth of resumes to rank applicants, but because those resumes came mostly from men, its results skewed heavily in favor of men.
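To make that mechanism concrete, here is a minimal sketch of how a screening model trained on historical hiring decisions can absorb the bias baked into those decisions. Everything in it is synthetic and hypothetical: the features, the labels, and the numbers. It is not a reconstruction of Amazon’s tool or any vendor’s system.

# A minimal, hypothetical sketch of how a screening model trained on
# historical hiring decisions can absorb the bias in those decisions.
# All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicant features: years of experience plus a proxy feature
# (think: attendance at a particular school) that correlates with group membership.
experience = rng.normal(5, 2, n)
group = rng.integers(0, 2, n)             # 0/1 stand in for a protected attribute
proxy = group + rng.normal(0, 0.3, n)     # the proxy leaks group membership

# Historical "hired" labels reflect qualifications AND past bias against group 1.
qualified = experience + rng.normal(0, 1, n)
hired = (qualified - 1.5 * group + rng.normal(0, 0.5, n)) > 4

# The model never sees `group` directly, only the proxy.
X = np.column_stack([experience, proxy])
model = LogisticRegression().fit(X, hired)

# Both groups are equally qualified by construction, yet the learned model
# passes group 1 at a noticeably lower rate, reproducing the historical bias.
for g in (0, 1):
    pass_rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted pass rate = {pass_rate:.2f}")

The point is not the specific numbers but the pattern: removing the protected attribute from the inputs is not enough, because a correlated proxy lets the model recreate whatever bias shaped the historical decisions it learned from.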

Automated hiring tools are also criticized for their lack of transparency about how they affect applicants. Some employers, for example, use HireVue, a face and voice-scanning application, to conduct video interviews. HireVue analyzes a job candidate’s voice and word choice to rank them against other candidates, often without informed written consent. Candidates who have accents or give shorter answers to questions, for instance, may be disproportionately flagged for review. HireVue previously analyzed facial expressions but announced last year that it would stop in response to widespread criticism. The system also doesn’t share candidates’ assessment scores with them or explain its rankings, potentially allowing companies to discriminate against applicants based on national origin and against applicants of color and applicants with disabilities.

Automated hiring tools are used in the public sector, too. A 2020 public records request by the ACLU of Washington found that several Washington state and local agencies use some sort of automated hiring tool or software to manage their employees and job candidates. The city of Kennewick, for example, uses Caliper, a company whose algorithms measure personality traits such as “sociability” or “energy” in assessing whether a job candidate is a fit for the role. The city doesn’t base hiring decisions on Caliper, but the system aids in developing focus areas for reference checks and creating interview questions.

In May 2022, the U.S. Department of Justice and Equal Employment Opportunity Commission issued a warning to employers that rely on algorithms or artificial intelligence for hiring or for monitoring the productivity of their employees. Use of these tools, the agencies said, may lead to violations of the Americans with Disabilities Act. Specifically, these technologies could “potentially screen out people with speech impediments, severe arthritis that slows typing, or a range of other physical or mental impairments.” The agencies also warned that tools that monitor employee behavior at work might discriminate against those with legally mandated workplace accommodations (i.e., modifications to work conditions that enable them to perform their job successfully).

Automated decision systems affect not only current employees and job candidates, but also those who rely on unemployment assistance. During the pandemic, at least 20 states turned to ID.me, a third-party company that uses automated facial recognition technology to verify a person’s identity, purportedly to help prevent fraud in unemployment benefits claims. But the ID.me platform, which is being considered for use in Washington, has been criticized for its negative impacts on already disadvantaged groups, including low-income individuals, people living with disabilities, and residents of rural communities. The Washington State Employment Security Department is attempting to remedy its inadequate handling of sensitive information and lax privacy measures, which recently led to a series of fraudulent claims costing Washington state taxpayers more than $300 million. But turning to an invasive and biased technology like ID.me risks increasing the damage already done to the state and its residents.

Facial recognition tools like ID.me are often inaccurate and rife with race and gender biases. For example, in a 2018 study of three commercial facial recognition technologies, error rates in determining the gender of light-skinned men were never higher than 0.8 percent. In contrast, two of the technologies had error rates greater than 34 percent when evaluating darker-skinned women. Similarly, a 2019 study funded by the federal government examined 189 software algorithms from 99 developers, making up most of the industry, and found false-positive rates for Asian and African American faces to be 10 to 100 times higher than those for white faces in one-to-one matching (i.e., where the technology matches one photo of a person to a different photo of the same person in a database). The central reason for these biases is non-diverse training datasets. In other words, “Human bias and data availability affect the racial distribution of faces used to train the algorithm, usually with lighter skin tones predominating.” For example, one popular open-source (publicly available) facial image dataset, Labeled Faces in the Wild, is 83.5% white.
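To make “false positives in one-to-one matching” concrete, here is a minimal, hypothetical sketch of how an audit might measure false-match rates by group. The similarity scores, group labels, and threshold are all synthetic, and the disparity is deliberately built into the simulated data; a real audit would use labeled image pairs scored by an actual matcher.

# A minimal, hypothetical sketch of auditing one-to-one (verification) matching
# for disparate false-match rates. All scores below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 0.8  # hypothetical "same person" decision threshold

# Similarity scores for impostor pairs (photos of two DIFFERENT people).
# For illustration only, the simulated matcher is noisier for group_B.
impostor_scores = {
    "group_A": rng.normal(0.45, 0.10, 10_000),
    "group_B": rng.normal(0.55, 0.15, 10_000),
}

# A false positive here means two different people are wrongly accepted
# as the same person.
for group, scores in impostor_scores.items():
    false_match_rate = np.mean(scores >= THRESHOLD)
    print(f"{group}: false-match rate = {false_match_rate:.4f}")

Disparities like these matter for benefits access: when a verification system errs more often for some groups than others, the people it misjudges are the ones flagged, delayed, or locked out of the benefits they have earned.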

Even if it worked accurately across race and gender, this type of automated facial recognition technology can be a barrier to accessing benefits for those who lack the necessary technology, such as a smartphone or high-speed internet. All these factors combine to bar people from vital benefits when that support is most needed, benefits they have earned through their work.

At the ACLU of Washington, we are working to pass legislation that would require transparency and accountability for government use of automated decision systems, like those used in hiring, performance tracking, and providing unemployment benefits. Senate Bill 5116 would create requirements for state government agencies to follow in procuring, developing, and using an ADS and would prohibit discrimination via automated decision systems. Agencies would be required to train employees who use these automated systems and to notify people impacted by an ADS. Agencies would also be required to monitor their systems on an ongoing basis and to produce a public accountability report on how the systems are used. While the state legislature didn’t pass SB 5116 in 2022, we continue to work in the interim to critically examine automated decision systems used by both companies and government agencies. Gov. Jay Inslee unfortunately vetoed $100,000 initially allocated from the state’s budget to assess the use of ADSs in state agencies, but we hope that this important work will be funded in the upcoming session.

People deserve to know if an algorithm prone to bias and error is coming between them and their paycheck.