FCRA Litigation Challenges Employers’ Use of AI Hiring Platforms

A January 20, 2026, class action filed against Eightfold AI, Inc. in California is sending shockwaves through the employer and AI community.  Kistler v. Eightfold AI, Inc., Superior Court of the State of California, County of Contra Costa.  The lawsuit alleges that more than 100 employers, including Microsoft, Morgan Stanley, Starbucks, BNY, PayPal, Chevron, and Bayer, use hidden AI technology to collect sensitive and often inaccurate information about job applicants and to score them from 0 to 5 for potential employers based on their supposed “likelihood of success” on the job.  Using its evaluation tools, Eightfold allegedly provides prospective employers with reports that assess job applicants not only as individuals but also relative to one another; employers then use these reports to sift through applications, typically reviewing only highly ranked candidates.  According to the lawsuit, low-ranked candidates are often discarded before a human being ever looks at their applications.

Plaintiffs contend that the Fair Credit Reporting Act (FCRA), 15 U.S.C. § 1681 et seq., enacted in 1970, includes a broad definition of a consumer report as “any written, oral, or other communication of any information by a consumer reporting agency bearing on a consumer’s . . . character, general reputation, personal characteristics, or mode of living which is used or expected to be used or collected in whole or in part for the purpose of serving as a factor in establishing the consumer’s eligibility for . . . employment purposes.”  The FCRA requires consumer reporting agencies to make certain disclosures, obtain certain certifications, and ensure that consumers (here, job applicants) have a mechanism to review and correct reports provided to prospective employers for purposes of determining eligibility for employment.  Further, in 2024, the Consumer Financial Protection Bureau, the federal agency that administers the FCRA, published a guidance document in the Federal Register entitled “Background Dossiers and Algorithmic Scores for Hiring, Promotion, and Other Employment Decisions.”  According to the plaintiffs, this guidance explains how the FCRA’s longstanding protections for job applicants and employees apply to new AI employment technologies.

It should be noted that the 2024 FCRA guidance was rescinded in 2025 under the current Administration, but it still serves as legal support for the plaintiffs’ theory.  If the plaintiffs prevail, the ruling will trigger the FCRA’s full list of obligations, including stand-alone written disclosures, applicant authorization, and pre-adverse action and adverse action notice requirements.  The FCRA provides a private right of action, statutory damages of $100 to $1,000 per violation, punitive damages for willful violations, and the right to bring class actions, with substantial case law precedent favoring plaintiffs.

Note also that while this particular case was brought against a vendor, a similar cause of action can be brought against the employer using the vendor’s services; both the vendor and the employer have potential liability.  While some vendor contracts include AI-specific provisions, it is rare for a vendor contract to include full indemnification for regulatory fines and class action exposure.  In addition, employers using any type of hiring criteria, including an AI hiring tool, have legal exposure to adverse-impact discrimination claims, in which a plaintiff alleges that an employer’s selection tools are discriminatory in effect, even if applied neutrally and without discriminatory motive.  Although such claims are not currently being processed by the Equal Employment Opportunity Commission (EEOC), the theory remains available to plaintiffs in private litigation.  Employers in such situations must show that the selection tool is a job-related business necessity or otherwise “valid” for its use.

Other AI Uses for Employment Purposes Raise Similar Issues

Employers use selection tools like AI not only in the hiring process but also in performance appraisals, promotions, lay-off decisions, and even the immigration process, and this information may be used not only for current decisions but also for future ones, without fresh consent or disclosure.  For example, suppose an employer, as part of the employment verification process, uses some type of “connect the dots” AI platform to determine employment authorization.  The same types of legal issues could arise.

Editor’s Note:  The growth of AI affects Human Resources (HR) as much as any other area of work.  It offers massive opportunities for productivity improvements, but concerns about potential legal issues are emerging almost weekly.  A company should first review how it is currently using AI in order to determine what safeguards are necessary.  The no-brainer steps include adopting an AI usage policy and carefully reviewing vendor contracts.  One approach to the FCRA issue is an in-house company AI system, because the FCRA applies only to “consumer reports” furnished by a “consumer reporting agency,” a third party.

This article is part of our May 2026 Newsletter. 
