Artificial intelligence (AI) is arguably the most transformative technology since the birth of the internet, and it has already played a significant role in the workplace. Many applicants never even connect with a human being at a hiring company until after AI has screened them. If your company is using AI screening tools, it is important for you to be aware of the legal risks and potential liabilities.
How Businesses Use AI Tools
Not that long ago, the key to getting a job was an impressive resume printed on premium bond paper, a nice interview outfit, and a firm handshake.
Today, resumes are sent via email, and an applicant’s first contact with the hiring company is likely to be an Applicant Tracking System (ATS). In fact, an estimated 97 percent of Fortune 500 companies now use an ATS such as Greenhouse, Jobvite, Lever, or Taleo to write job descriptions, perform background checks, conduct online tests, measure personality traits, communicate with candidates, and schedule interviews.
But before those interviews are scheduled, AI filters the flood of applicants to quickly surface those whose resumes include particular certifications, degrees, skills, or former job titles.
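To make the mechanics concrete, here is a deliberately naive sketch of the kind of keyword screen an ATS might apply. The job requirements, resumes, and candidate names are hypothetical, and real platforms such as Greenhouse or Lever are far more sophisticated; the point is simply that a mechanical filter passes or rejects people on wording alone.

```python
# Illustrative only: a toy version of a keyword-based resume screen.
# The required terms and resumes below are hypothetical placeholders.

REQUIRED_TERMS = {"python", "cpa", "bachelor"}  # hypothetical job requirements

def passes_screen(resume_text: str) -> bool:
    """Advance a resume only if every required term appears somewhere in it."""
    text = resume_text.lower()
    return all(term in text for term in REQUIRED_TERMS)

resumes = {
    "Candidate A": "CPA with a Bachelor of Science; built reports in Python.",
    "Candidate B": "Certified Public Accountant, B.S. Accounting, Python.",
}

for name, text in resumes.items():
    print(name, "->", "advance" if passes_screen(text) else "filtered out")
```

Note that Candidate B is arguably the stronger fit but is filtered out because the resume spells out "Certified Public Accountant" rather than "CPA." Wording, not qualification, decided the outcome.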
Hastening the hiring process by weeding out unqualified candidates is generally seen as a good thing, but these waters are murky and filled with things that bite.
Legal Risks of AI Hiring
Dependence on AI and its time-saving capabilities can develop quickly, which is why it is so important to remember that AI should not replace human intelligence in the hiring process. Yes, AI has the potential to reduce hiring bias, but it can just as easily violate anti-discrimination laws enforced by the Equal Employment Opportunity Commission (EEOC).
Federal law makes it illegal to discriminate against job seekers and workers on the basis of the following characteristics: race, color, national origin, religion, sex (gender identity, sexual orientation, and pregnancy), age (40 and over), disability, and genetic information. Minnesota state law goes on to add creed, marital status, public assistance status, and familial status to the list of protected classes.
While companies are responsible for their hiring decisions (including decisions based on AI, even when the AI software is administered by a third-party vendor), the 2023 Hiring Benchmark Report notes that “some of the earliest uses of AI in recruitment have led to major misses that have had the opposite of the intended effect, increasing bias and legal liability.” For example, in September 2023, iTutorGroup paid $365,000 to settle an EEOC discriminatory-hiring lawsuit arising from its use of an AI candidate selection tool.
Because employers remain 100 percent responsible for the decisions they make, however those decisions are reached, those who rely on AI algorithms to screen candidates and make other hiring decisions should take action to eliminate bias from the equation and thereby prevent lawsuits, damages, fines, and penalties. For example, conduct ongoing self-analysis to determine whether your AI software selects individuals in a protected class substantially less frequently than individuals outside that class, as sketched below.
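One widely used self-audit is the EEOC's "four-fifths" rule of thumb from the Uniform Guidelines on Employee Selection Procedures: if one group's selection rate is less than 80 percent of the highest group's rate, that disparity is generally regarded as evidence of adverse impact. The sketch below shows the arithmetic with hypothetical counts; it is a screening heuristic, not a legal standard, and your own applicant data and group definitions would go in its place.

```python
# Hedged sketch: adverse-impact check using the EEOC "four-fifths" guideline.
# All group labels and counts below are hypothetical; substitute real
# pass/fail numbers from each stage of your screening process.

def selection_rate(selected: int, applied: int) -> float:
    """Fraction of applicants in a group who advanced past the screen."""
    return selected / applied

# Hypothetical screening outcomes by group (e.g., age 40 and over vs. under 40)
groups = {
    "age_40_and_over": {"applied": 200, "selected": 30},
    "under_40":        {"applied": 300, "selected": 90},
}

rates = {name: selection_rate(g["selected"], g["applied"]) for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    impact_ratio = rate / highest
    # Under the four-fifths rule of thumb, a ratio below 0.80 is generally
    # regarded as evidence of adverse impact and warrants closer review.
    flag = "REVIEW" if impact_ratio < 0.80 else "ok"
    print(f"{name}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} [{flag}]")
```

A flagged ratio does not by itself prove discrimination, but it is exactly the kind of result that should prompt a closer look at the tool, ideally with counsel involved.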
Employers should also develop best practices that let candidates give informed consent to the AI processes being used and allow them to opt out or request an ADA accommodation, such as specialized equipment or alternative tests.
In conclusion, AI presents novel issues that can be difficult for businesses to navigate. Even unintentional noncompliance with federal or state anti-discrimination laws can result in lawsuits and substantial penalties. For help crafting legally compliant hiring policies, whether you use AI or not, contact Sjoberg & Tebelius, PA to schedule an appointment (651-738-3433).