Automating tasks has long been an industry goal, whether driven by necessity (to remain competitive and conserve human resources) or by broader ambitions. When it comes to automating hiring, however, training artificial intelligence (AI) systems on past decisions will merely automate the biases that have previously discriminated against older workers, people of color, women, people with disabilities, and other groups.
In other words, if the data used to design and develop a model are not diverse, then the model’s output will likely be biased.
AI, the configuration of computer systems to perform tasks that generally require human intelligence, is one of the most exciting technological advances, impacting all sectors of society, including but not limited to employment, health care, legal services, education, finance, national security, criminal justice, and transportation.
Today, most businesses, and even state and local government employers, rely heavily on AI, machine learning, algorithms, and other automated systems (AI technologies) to streamline various stages of the employment process. The goals of using AI technologies include hiring the most qualified employees more quickly and efficiently, making workers more productive by monitoring their performance, determining pay and promotions, terminating poor performers, and establishing the terms and conditions of employment. Resumé scanners prioritize applications containing certain keywords; video interviewing software evaluates applicants by their facial expressions and speech patterns; and testing software assesses prospective candidates on their personalities, aptitudes, and cognitive skills.
Bias in AI Tools
ChatGPT is the most recent mind-boggling innovation. It is a type of AI technology that allows users to communicate naturally with machines, designed to mimic human conversation based on chat-style inputs from users. While still in its infancy, ChatGPT has the potential to revolutionize the recruitment industry by enabling a more efficient, personalized, and diverse hiring process. It can also, unfortunately, be used by applicants to generate resumés, writing samples, and articles for publication based on search criteria, without substantive input from the person performing the search.
Like all new AI products, though, ChatGPT carries the same old biases. Algorithmic decision-making tools, such as chatbots, could “screen out” an individual because of a disability. For example, a chatbot might be programmed with a simple algorithm that rejects all job applicants who, during their communications with the chatbot, indicate that they have significant gaps in their employment history. If a particular applicant had such a gap due to her disability, such as time spent undergoing treatment, then the chatbot may function to screen out that person because of the disability.
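To make that mechanism concrete, here is a minimal sketch of such a screening rule, assuming a simple list of employment periods and an arbitrary six-month cutoff; the function names, data format, and threshold are hypothetical, not drawn from any real product. The rule never asks about disability, yet it rejects the applicant whose gap was spent in treatment.

```python
from datetime import date

# Assumed cutoff for illustration only; real tools vary.
MAX_GAP_MONTHS = 6

def months_between(earlier: date, later: date) -> int:
    """Whole months from `earlier` to `later` (approximate, for illustration)."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def passes_gap_screen(employment_periods: list[tuple[date, date]]) -> bool:
    """Reject any applicant whose history shows a gap longer than
    MAX_GAP_MONTHS between consecutive jobs. The rule is facially
    neutral (it never mentions disability), yet it screens out
    applicants whose gaps stem from treatment or other
    disability-related reasons."""
    periods = sorted(employment_periods)  # order jobs by start date
    for (_, prev_end), (next_start, _) in zip(periods, periods[1:]):
        if months_between(prev_end, next_start) > MAX_GAP_MONTHS:
            return False
    return True

# An applicant with a 10-month, treatment-related gap is rejected outright.
history = [(date(2015, 1, 1), date(2019, 3, 1)),
           (date(2020, 1, 1), date(2023, 1, 1))]
print(passes_gap_screen(history))  # False
```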
In another example, an algorithm could be developed from the resumés of past successful candidates. The algorithm could then be trained to learn word patterns in those resumés in order to gauge a job applicant’s suitability for a company. Theoretically, the algorithm simplifies a company’s hiring process by identifying individuals whose scanned resumés share attributes with the benchmark resumés, suggesting that these top-choice new candidates are likely to succeed at the company. However, the risk of reproducing past discriminatory effects arises when the benchmark resumés used to train the AI come predominantly from candidates of one gender, age, national origin, race, or other group, and thus may exclude words that are commonly found in the resumés of a minority group.
Similarly, men, for example, are more likely to use assertive words like “leader,” “competitive,” and “dominant,” while women are more apt to use words like “support,” “understand,” and “interpersonal.” By replicating the gendered ways in which hiring managers judge applicants, the AI may conclude that men are more qualified than their female counterparts based on the active language in their resumés. Women also tend to downplay their skills on resumés, whereas men frequently include phrases tailored to the position, making their resumés stand out to an algorithm.
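As a simplified illustration of this training dynamic, the sketch below “learns” word frequencies from benchmark resumés and scores new ones by keyword overlap. The resumés, weights, and scoring rule are invented for illustration and depict no real product; real systems are far more complex, but the failure mode is the same: language common among a homogeneous pool of past hires is rewarded, and everything else is invisible to the model.

```python
from collections import Counter

def train_keyword_weights(benchmark_resumes: list[str]) -> Counter:
    """'Learn' word frequencies from past hires' resumes. If the benchmark
    pool skews toward one group, its word patterns become the yardstick."""
    weights = Counter()
    for text in benchmark_resumes:
        weights.update(text.lower().split())
    return weights

def score(resume: str, weights: Counter) -> int:
    """Score a resume by how often its words appeared in the benchmarks.
    Words common in other groups' resumes but absent from the benchmarks
    contribute nothing."""
    return sum(weights[word] for word in resume.lower().split())

# Benchmarks drawn (hypothetically) from a male-dominated pool of past hires.
benchmarks = ["competitive dominant leader drove results",
              "leader dominant competitive growth"]
weights = train_keyword_weights(benchmarks)

print(score("competitive dominant leader", weights))       # 6: rewarded
print(score("interpersonal support understand", weights))  # 0: invisible
```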
In these cases, the diversity of the applicant pool can be affected before the employer even has a chance to evaluate job candidates. Despite the surge in unconscious bias training and diversity initiatives, machines are only as good as the input provided to them.
Reliance on these fast-evolving algorithmic decision-making technologies may, consciously or unconsciously, lead to unlawful discrimination against groups of applicants, ultimately harming society as a whole. The problem with algorithms is that they carry an appearance of neutrality because they do not apply human judgment directly, yet the historical data fed into them already embeds extensive human judgment.
Federal Government Responses
It was nearly six decades ago that Title VII of the Civil Rights Act of 1964 prohibited discrimination on the basis of race, color, religion, sex, and national origin. The Age Discrimination in Employment Act (ADEA) similarly prohibits, among other things, job advertisements indicating a preference based on age. Moreover, the Americans with Disabilities Act (ADA) prohibits employers from using tests or selection criteria “that screen out or tend to screen out” individuals with disabilities unless the test or criterion is job-related and consistent with business necessity. Individuals covered by the ADA also have the right to request accommodations during the hiring process, rights enforced by the Equal Employment Opportunity Commission (EEOC).
However, unregulated algorithmic screening tools cannot always comply with these mandates. Given the risks of discriminatory outcomes, the growing use of AI tools in the workplace raises a number of legal concerns, and federal, state, and local governments are racing to develop standards to address AI’s proliferation in the workplace. In the meantime, employers should take action. Amazon’s experience is a cautionary tale: it was an early adopter of AI in hiring, yet despite its efforts to remain neutral toward protected groups, it reportedly scrapped the program in 2018 after discovering bias against women in the AI’s hiring recommendations.
On the federal level, in February 2022, Sen. Ron Wyden (D-Oregon) introduced the Algorithmic Accountability Act, which would direct the U.S. Federal Trade Commission to require companies to conduct “impact assessments of automated decision systems and augmented critical decision processes, and for other purposes.” In May 2022, the EEOC and the Department of Justice (DOJ) Civil Rights Division released guidance warning employers that the use of algorithmic screening tools could violate the ADA.
The EEOC is also stepping up its enforcement efforts on AI- and machine learning-driven hiring tools to ensure compliance with federal civil rights laws. In fact, the EEOC filed its first age discrimination lawsuit involving the use of AI technologies against three integrated companies providing English language tutoring services to students in China, alleging that they encoded their online recruitment software to automatically reject more than 200 qualified U.S.-based applicants: female applicants age 55 or older and male applicants age 60 or older were excluded from potential job opportunities. [See EEOC v. iTutorGroup, Inc., et al., Case No. 1:22-cv-02565 (E.D.N.Y.)].
State and Local Actions
Additionally, a number of state and local legislators have, in recent years, either introduced or passed legislation regulating AI or established task forces to evaluate its use. In August 2019, Illinois led the way with one of the country’s first AI workplace laws, enacting the Artificial Intelligence Video Interview Act, which took effect in January 2020. When employers use AI video interview technology during the hiring process, the law requires them to disclose how the AI works and what types of general characteristics it uses to evaluate applicants, and to obtain applicants’ consent.
The law, as amended in 2022, further requires employers that rely solely on AI to make certain interview decisions to maintain records of demographic data, including applicants’ race and ethnicity. Employers must submit that data on an annual basis to the state, which must conduct an analysis to determine if there was racial bias in the use of the AI. Employers also may not share applicant videos unnecessarily, and they must delete an applicant’s interview within 30 days of an applicant’s request.
Illinois also enacted the Illinois Future of Work Act, which created the Illinois Future of Work Task Force in August 2021 to identify and assess new and emerging technologies, including artificial intelligence, that affect employment, wages, and skill requirements.
Maryland enacted a law in 2020 restricting employers’ use of facial recognition services during preemployment interviews unless the employer first obtains the applicant’s consent. Of note, research conducted in 2020 found that facial-analysis technology performed more accurately on lighter-skinned subjects and on men.
Most notably, New York City passed a local law, effective Jan. 1, 2023, that specifically focuses on regulating AI associated with typical human resources technology. The law prohibits employers from using “automated employment decision tools” to screen candidates or employees for employment decisions unless the tool has undergone a “bias audit” not more than a year prior to the use of the tool. Before such a tool is used to screen a candidate or employee for an employment decision, the employer must first notify the individual that the tool will be used, identify the job qualifications and characteristics that the tool will use in its assessment, and make publicly available on its website a summary of the bias audit and the distribution date of the tool. The candidate also has the right to request an alternative selection process or accommodation upon notification of use of the tool.
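The “bias audit” is defined in detail by the law and its implementing rules, but its core arithmetic resembles a standard disparate-impact calculation: selection rates by demographic category, compared as ratios against the most-selected group. The sketch below illustrates that calculation with invented numbers; the 0.8 flag reflects the EEOC’s informal “four-fifths” rule of thumb and is not a threshold set by the New York City law.

```python
def impact_ratios(selected: dict[str, int],
                  applicants: dict[str, int]) -> dict[str, float]:
    """Selection rate per group, divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical applicant and selection counts for two demographic categories.
applicants = {"group_a": 400, "group_b": 300}
selected   = {"group_a": 120, "group_b": 45}

for group, ratio in impact_ratios(selected, applicants).items():
    flag = "review" if ratio < 0.8 else "ok"  # 0.8: EEOC four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# group_a: impact ratio 1.00 (ok)
# group_b: impact ratio 0.50 (review)
```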
While the Illinois and New York City laws touch on regulating notice of the use of AI technology and examining its impact on hiring, California draft regulations would go further. Under the draft rules, companies or third-party agencies that use or sell services employing AI, supervised machine learning, or an automated-decision system (ADS) that screens, evaluates, categorizes, recommends, or otherwise makes or facilitates employment decisions affecting job applicants could face liability under state anti-discrimination laws, regardless of discriminatory intent, unless the “selection criteria” used “are shown to be job-related for the position in question and are consistent with business necessity.” The draft regulations would establish specific restrictions on hiring practices such as preemployment inquiries, applications, interviews, selection devices, and background checks. They would also expand employers’ recordkeeping requirements, directing companies to retain machine-learning data as part of their records and requiring employers or agencies using an ADS to retain records of the assessment criteria it uses. At publication, the regulations are in the pre-rulemaking phase.
Pending legislation in the District of Columbia, the Stop Discrimination By Algorithms Act, goes a step further than California by permitting a private right of action for individual plaintiffs, with the potential for punitive damages and attorney’s fees. If enacted, the legislation would bar covered entities from making an algorithmic eligibility determination on the basis of an individual’s, or class of individuals’, actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability in a manner that segregates, discriminates against, or otherwise makes important employment opportunities unavailable to an individual or class of individuals.
To avoid inadvertently encoding past intentional or unintentional human biases, the program designers who build AI systems, and the businesses that use them, will often need to take affirmative steps to counter discriminatory effects that might otherwise occur, thereby creating AI systems that embrace the full spectrum of inclusion. The key is to involve a broader sampling of women, minorities, and other diverse individuals in the design, development, deployment, and governance of AI. Of note, discrimination in the workplace is unlawful and carries legal consequences for the employer, even when technology automates the discrimination.
Employers should exercise caution when implementing hiring practices involving AI technologies. They should take steps to evaluate and mitigate any potential discriminatory impact of these tools, including by investigating whether the technology can pass a “bias audit” conducted by an independent auditor. For compliance purposes, employers should closely monitor and stay abreast of developments in federal laws and guidance, as well as state and local laws that may in the future affect the legality of AI technological tools in the workplace.
For transparency, employers should inform candidates, in readily understood terms, about what the evaluation entails by explaining the knowledge, skill, ability, education, experience, quality, or trait that will be measured with the AI tool. Employers should similarly describe how testing will be conducted and what it will require, such as verbally answering questions or interacting with a chatbot.
Finally, employers should give job applicants the opportunity to request, in advance, an accommodation for any disability.