Labor and Employment Law Section


Artificial intelligence and discrimination: Combating the risk of bias

By NJSBA Staff posted 06-21-2021 01:40 PM


This is an edited excerpt from an article written by Stephanie Wilson, Diane A. Bettino and Kimberly Jeffries Leonard in the May 2021 edition of Labor and Employment Law Quarterly. Read the full article and issue here (login required).

Human resources managers across industries are turning to artificial intelligence to assist with employment-related tasks such as recruiting, hiring, compensation analysis, employee retention and promotion decisions. They contend that AI is a critically important tool because it reduces the risks associated with human error in decision-making; expands the universe of potential applicants whom employers can interview; and evaluates extensive and complicated compensation and other employment-related data in a quicker, more accurate, and more cost-effective manner.

AI and Bias

Proponents of AI’s use argue that it reaches quick and efficient decisions by analyzing huge amounts of data while eliminating factors that can negatively affect human decision-making, such as lack of objectivity, explicit and implicit bias, and mental fatigue. Arguably, AI produces neutral outcomes because algorithms belong to no race, gender or other protected status and are exempt from the problems that can impair human decision-making. Although this argument has initial facial appeal, research shows that, like human judgments, AI’s conclusions can be susceptible to unlawful bias.

Bias can infiltrate the algorithmic process at a number of access points. One potential access point is the data input stage. Flawed outcomes can occur at this point if the algorithm relies on data that is incomplete or inaccurate; under-inclusive (e.g., data that contains information concerning only one gender); or reflects historical patterns of discrimination. If input and output data are flawed and left unaudited and uncorrected, studies show that an algorithm’s output will continue to perpetuate inequitable information that could provide the underlying foundation for a disparate impact discrimination claim. This “garbage-in/garbage-out” argument can arise regardless of the industry.
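To make the “garbage-in/garbage-out” concern concrete, the minimal Python sketch below audits a historical training set before it is fed to a model, reporting how well each group is represented and what its historical hire rate was. The record layout and field names (such as `gender` and `hired`) are hypothetical and not drawn from any particular system.

```python
from collections import Counter

# Hypothetical historical hiring records that would be used to train a model.
# Field names ("gender", "hired") are illustrative only.
records = [
    {"gender": "female", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    # ... in practice, thousands of rows
]

def audit_training_data(rows, group_field="gender"):
    """Report each group's share of the data and its historical hire rate."""
    totals = Counter(r[group_field] for r in rows)
    hires = Counter(r[group_field] for r in rows if r["hired"])
    for group, count in totals.items():
        share = count / len(rows)
        hire_rate = hires[group] / count
        print(f"{group}: {share:.1%} of training data, "
              f"historical hire rate {hire_rate:.1%}")
        # The 20% threshold is arbitrary; it simply flags thin representation.
        if share < 0.20:
            print(f"  WARNING: {group} may be under-represented")

audit_training_data(records)
```

An audit of this kind does not prove bias, but a group that is barely present in the training data, or whose historical hire rate reflects past discrimination, is exactly the kind of flaw described above.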

Data Integrity Issues in Employment Matters

“Algorithmic bias” issues can arise in the employment context. For example, when a company decides to use AI in making certain employment-related decisions, such as determining whom to interview, hire or fire, it has to consider both the nature of the inputted data and whether the outputs have a disparate impact on any protected status. A case in point occurred in 2015, when Amazon discontinued use of a recruiting algorithm once it realized that the tool failed to identify potential candidates for an interview in a gender-neutral manner because it showed a preference for men. Upon review, it was determined that this occurred because the data on which the algorithm was trained focused on certain recurring terms contained in the resumes of applicants Amazon had hired over a 10-year period.

As it turned out, the resumes used as part of the training data were predominantly from male applicants and contained verbs such as “executed” and “captured” to describe the applicants or what they had done. In selecting candidates for interviews, the algorithm favored resumes that used these types of action verbs, penalized resumes that included the word “women’s,” and downgraded applicants who had graduated from two all-women’s colleges. In a similar vein, recruiting algorithms are more likely to show advertisements for higher-paying jobs to men than to women.

Steps to Minimize Algorithmic Bias

  • Evaluate Data Input and Output with a Diverse Team

Data integrity is key to the proper functioning of machine learning models. Therefore, an algorithm’s input data and its output must be audited for potential disparate impact that could give rise to discrimination claims. Companies must examine the sources of the data for potential bias and have a thorough understanding of the data they use and how the algorithm works. Human audits of the AI’s dataset should be conducted to determine whether any objective factors are proxies for discrimination that impact individuals in a protected class.

Recent studies show that algorithmic bias is reduced both by having a diverse team build and test the AI and by training the individuals involved in the development and testing processes. Human review of the final decision, before it is made and implemented, reduces the risk of discriminatory outcomes because inconsistencies and red flags in the algorithm’s output can be spotted, as illustrated in the sketch below.
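One concrete way a human audit of the algorithm’s output can surface potential disparate impact is to compare selection rates across groups, for example using the familiar “four-fifths” rule of thumb, under which a group’s selection rate below 80% of the highest group’s rate merits scrutiny. The Python sketch below assumes the screening tool produces a simple list of (group, selected) outcomes; the data and names are hypothetical.

```python
from collections import Counter

# Hypothetical screening outcomes produced by an AI tool: (group, was_selected).
outcomes = [
    ("female", True), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False),
]

def selection_rates(results):
    """Selection rate per group: number selected divided by group size."""
    totals = Counter(group for group, _ in results)
    selected = Counter(group for group, picked in results if picked)
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_check(results, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the top group's rate."""
    rates = selection_rates(results)
    top_rate = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / top_rate
        flag = "  <-- review for possible disparate impact" if ratio < threshold else ""
        print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f}{flag}")

four_fifths_check(outcomes)
```

A failing ratio is not itself proof of unlawful discrimination, but it is precisely the kind of red flag a human reviewer would want to investigate before the output is acted upon.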

  • Document Compliance and Risk Management Steps

Additionally, companies that use AI should document all steps and decisions taken in order to manage discrimination risk. This entails a continuing audit of their processes, including determining whether the original input data requires updating.

  • Review Applicable Laws

Companies should review the laws of the applicable states to ensure compliance, including laws governing privacy. While Congress has yet to pass legislation regarding the use of AI, federal regulators that oversee financial services companies recently issued a request for information, encouraging interested parties to submit written comments in response to inquiries regarding how financial institutions use AI and machine learning. Additionally, the Equal Employment Opportunity Commission has begun to explore the implications of algorithms for fair employment.

At the state level, Illinois became the first state to regulate an employer’s use of AI, effective Jan. 1, 2020. Given the rise of AI’s use across industries, other states, such as New Jersey, have pending legislation, and the New York City Council is considering a bill that would, among other things, prohibit the sale of AI technology unless it has been audited for bias and has passed anti-bias testing in the year before the sale.
