Automated Decision Systems, and Where They Appear

Published: Friday, January 21, 2022
Algorithms affect people’s lives every day, even when they are invisible to the public. Below are descriptions of how algorithmically driven automated decision systems are used in several sectors, and how a lack of transparency and accountability can cause harm. Subsequent blog posts will examine a few of these areas more closely.
 
Housing: Cities nationwide are using automated decision systems to screen applicants for apartments, process mortgage loan applications, and assign individuals to public housing. Cities feed these algorithmic systems large amounts of historical data, from which the systems draw assumptions about individuals. Applicants generally must provide their past few addresses, so the system uses previous zip codes to map an individual’s relationship to certain neighborhoods and their associated characteristics. Additionally, ProPublica found that by feeding social media data into automated decision-making systems, companies can discriminate by advertising certain housing opportunities to individuals based on race.[1] Automating decisions in both screening and advertising exacerbates the existing racial inequities created by the discriminatory policies at the root of the current housing and homeownership landscape.
 
Banking: Banks make or inform credit and loan decisions with automated decision systems that mine data from people’s online and offline activities. While these systems use seemingly neutral data points such as shopping habits, zip codes, and social media usage to render a decision,[2] those inputs serve as proxies for race.[3] The systems thus draw patterns from supposedly neutral inputs that mimic the patterns they would draw if race were an input, and in doing so circumvent existing anti-discrimination laws. Berkeley researchers found that lenders using automated decision systems with purportedly neutral inputs to set loan interest rates discriminated against borrowers of color by consistently charging them higher rates.[4] These decisions bear on creditworthiness, perpetuating racial inequities in other banking outcomes and limiting opportunities to build generational wealth and prosperity.
 
Employment: Employers use automated decision systems to screen job applicants’ resumes for specific terms, experiences, and education. This process removes the holistic consideration a human reviewer would apply, and because institutional hiring values have historically favored white, male candidates, these systems can discriminate against applicants of color and other "diverse" candidates. Because automated decision systems apply highly particularized screening without human discretion or intervention, their use reinforces and compounds the lack of workforce diversity and obstructs applicants of color from gaining equal employment.
 
Medical Care: Some industries use automated decision systems for internal decisions rather than consumer-facing ones. Care providers, such as hospitals, use these systems to schedule employee hours and allocate staff. However, even systems that serve an operational purpose are capable of substantive discrimination. In Idaho and Arkansas, automated decision systems drastically reduced Medicaid attendant hours for low-income participants living with disabilities, leaving patients with bedsores and in inhumane living conditions. Because able-bodied patients did not suffer the same decrease in care, the systems discriminated de facto on the basis of disability.[5] Lawsuits ultimately led to the removal of both systems, but automated decision systems are still used elsewhere in the medical care industry.
 
Policing: Police use a type of automated decision system called “predictive policing” that analyzes data about where criminal conduct has occurred in the past to predict where it will occur in the future and to deploy officers accordingly. However, that input data reflects law enforcement’s long history of over-policing neighborhoods of color,[6] so because police have collected more data from those neighborhoods, the systems’ predictions direct still more policing toward them.
 
Sentencing: Judges use sentencing risk assessments (also called “pretrial risk assessments”), a type of automated decision system, to recommend sentences for people convicted of crimes. Like other automated decision systems, these assessments are fed seemingly neutral but relevant data, such as whether the person was convicted of a violent crime and whether they have prior convictions.[7] Yet these inputs serve as proxies for race,[8] so in practice the assessments make sentencing recommendations along discriminatory lines. ProPublica found that one widely used sentencing risk assessment consistently recommended longer sentences for Black people than for white people convicted of the same crime.[9]
 
Public Benefits: Some states use automated decision systems to process, deny, and allocate public benefits more efficiently than human employees can. For example, Michigan used a system to implement a recent rule disqualifying those with outstanding felony warrants from food assistance. However, the system incorrectly flagged more than 19,000 people who had no outstanding felony warrants, causing them to lose their benefits.[10] Another Michigan system wrongfully accused people of unemployment fraud and cut off their benefits in 85% of cases.[11] Federal courts invalidated the systems in both instances and restored the lost benefits.[12]
 
[1] Ava Kofman & Ariana Tobin, Facebook Ads Can Still Discriminate Against Women and Older Workers, Despite a Civil Rights Settlement, ProPublica, Dec. 13, 2019, https://www.propublica.org/article/facebook-ads-can-still-discriminate-against-women-and-older-workers-despite-a-civil-rights-settlement.
[2] Big Data: A Tool for Inclusion or Exclusion?, Federal Trade Commission, Jan. 2016, at ii, https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf.
[3] Id. at 25.
[4] Laura Counts, Minority homebuyers face widespread statistical lending discrimination, study finds, Berkeley Newsroom, Nov. 13, 2018, https://newsroom.haas.berkeley.edu/minority-homebuyers-face-widespread-statistical-lending-discrimination-study-finds.
[5] K.W. v. Armstrong, No. 1:12-cv-00022-BLW (D. Idaho 2014); Ark. Dep’t of Human Services v. Ledgerwood, 530 S.W.3d 336 (Ark. 2017).
[6] Rashida Richardson et al., Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, 94 N.Y.U. L. Rev. 192, 218 (2019).
[7] Sarah L. Desmarais & Evan M. Lowder, Pretrial Risk Assessment Tools: A Primer for Judges, Prosecutors, and Defense Attorneys, Safety and Justice Challenge (in the Vera Institute of Justice), Feb. 2019, at 4, http://www.safetyandjusticechallenge.org/wp-content/uploads/2019/02/Pretrial-Risk-Assessment-Primer-February-2019.pdf.
[8] Julia Angwin et al., Machine Bias: Risk Assessments in Criminal Sentencing, ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[9] Id.
[10] Barry v. Lyon, 834 F.3d 706 (6th Cir. 2016).
[11] Robert N. Charette, Michigan’s MiDAS Unemployment System: Algorithm Alchemy Created Lead, Not Gold, IEEE Spectrum, Jan. 24, 2018, https://spectrum.ieee.org/riskfactor/computing/software/michigans-midas-unemployment-system-algorithm-alchemy-that-created-lead-not-gold.
[12] Barry v. Lyon, 834 F.3d 706 (6th Cir. 2016).