Navigating the New Legal Minefield of Automated (AI Driven) HR

Artificial Intelligence is changing how businesses hire, manage, and evaluate employees—but it is also creating a new frontier for employment litigation. In this webinar, we explored some of the legal risks AI poses to your business. Whether you are using automated software to screen resumes, monitor productivity, or assist in performance reviews, you will learn some of the ways plaintiffs are targeting AI-driven HR tools for discrimination claims, and you'll walk away with tips on how to use this technology without inviting lawsuits or government audits.

The speakers are Sheri Oluyemi and James W. Wimberly (J. Larry Stine had to step out at the last minute).

Watch This Webinar

Webinar Key Insights

The webinar aims to educate employers and HR professionals on the legal and practical risks of integrating Artificial Intelligence (AI) into the workplace, focusing on compliance with long-standing employment laws and emerging litigation trends.

Historical Legal Principles and Discrimination

  • Persistent Legal Standards: While AI is a modern tool, it remains subject to decades-old legal principles regarding workplace discrimination.
  • Disparate Impact vs. Treatment: Discrimination can be intentional (disparate treatment) or unintentional (disparate impact/effect), where neutral criteria—like AI filtering—disproportionately exclude protected groups.
  • The Four-Fifths Rule: The EEOC uses this benchmark to identify discriminatory effects; if a disadvantaged group is selected at a rate less than 80% of the most-favored group's rate, an inference of discriminatory effect arises.
  • Business Necessity: If an AI system has a discriminatory effect, the employer must prove it is a "job-related business necessity" through validation studies.
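The four-fifths comparison above is simple arithmetic on selection rates, and can be sketched in a few lines. This is a minimal illustration, not a compliance tool; the function name and the example counts are hypothetical.

```python
def four_fifths_check(selected_a, total_a, selected_b, total_b):
    """Compare the selection rate of a disadvantaged group (a) against
    the most-favored group (b) using the EEOC four-fifths benchmark."""
    rate_a = selected_a / total_a   # e.g. minority applicants passing the AI screen
    rate_b = selected_b / total_b   # most-favored group's pass rate
    impact_ratio = rate_a / rate_b
    # A ratio below 0.8 suggests adverse (disparate) impact,
    # shifting the burden to the employer to show business necessity.
    return impact_ratio, impact_ratio < 0.8

# Hypothetical example: 30 of 100 applicants in one protected group pass
# the AI screen, versus 60 of 100 in the most-favored group.
ratio, flagged = four_fifths_check(30, 100, 60, 100)
# ratio is 0.5, well below the 0.8 threshold, so the tool is flagged
```

A flagged result does not make the tool unlawful by itself; as the webinar notes, it triggers the employer's obligation to demonstrate job-related business necessity through validation.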

Risks of "Hallucination" and Inaccuracy

  • AI Errors: AI models can "hallucinate," meaning they invent facts, case citations, and legal analysis.
  • Human Oversight: Experts strongly suggest human oversight to prevent AI systems from making unchecked or biased decisions.

Data Privacy and Discovery Risks

  • Loss of Privilege: A recent New York federal ruling found that legal inquiries made to AI by non-lawyers are not protected by attorney-client privilege and are discoverable in court.
  • Confidentiality Breaches: Disclosing information to "open" AI platforms is legally comparable to publishing it in a newspaper, potentially waiving trade secret protections.
  • Recording Risks: AI-enabled meeting tools create records that are discoverable in litigation and may require explicit employee consent.

Vendor Liability and "Shadow AI"

  • Employer Accountability: Using a third-party vendor does not absolve the employer of legal responsibility for discriminatory outcomes.
  • Shadow AI: Unsanctioned use of AI tools by staff—known as "shadow AI"—remains the legal responsibility of the organization.

Productivity Monitoring and Labor Relations

  • Hyper-Precision Monitoring: AI tools used for monitoring can lead to litigation regarding privacy and compensable time for minor breaks.

Action Items for Employers

  1. Conduct a Workplace AI Audit: Identify all AI models currently in use, including built-in enterprise tools and "Shadow AI".
  2. Review Vendor Terms and Compliance: Examine vendor contracts for indemnification and request validation studies to ensure tools are legally compliant.
  3. Implement an AI Acceptable Use Policy: Create written guidelines specifying approved platforms, data restrictions, and the obligation to verify results.
  4. Perform Bias Testing: Audit AI-driven decisions periodically to ensure no protected categories are adversely impacted.
  5. Establish Data Retention Policies: Define how long AI records and recordings are kept and ensure compliance with "litigation hold" requirements.
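Action item 5 pairs a retention schedule with litigation-hold compliance: records past their window may be purged, but anything under a hold must be preserved. A minimal sketch of that logic follows; the record types, retention windows, and field names are all hypothetical, and actual retention periods should come from counsel.

```python
from datetime import datetime, timedelta

# Hypothetical retention windows per record type (illustration only):
RETENTION = {
    "meeting_recording": timedelta(days=90),
    "camera_video": timedelta(days=30),
    "ai_chat_log": timedelta(days=180),
}

def records_to_purge(records, litigation_holds, now=None):
    """Return the IDs of records past their retention window,
    skipping anything covered by a litigation hold."""
    now = now or datetime.now()
    purge = []
    for rec in records:  # rec: {"id": ..., "type": ..., "created": datetime}
        if rec["id"] in litigation_holds:
            continue  # never delete records under a hold
        window = RETENTION.get(rec["type"])
        if window and now - rec["created"] > window:
            purge.append(rec["id"])
    return purge
```

The key design point is that the hold check comes first: a routine purge that destroys held records can draw court sanctions even when it follows the written retention policy.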

Contact us if you'd like to work with Wimberly Lawson to knock out these action items.

FAQ

What is AI brain fry and which professions are most affected by it?

AI brain fry describes the mental and physical overwhelm caused by the rapid implementation of complex artificial intelligence systems. According to the Harvard Business Review, human resources and marketing are the two professional fields most significantly impacted by this condition as they navigate integrating these tools into their daily workflows.

How does the EEOC's Four-Fifths Rule apply to AI hiring tools?

The Four-Fifths Rule is a statistical benchmark used to determine whether a neutral hiring criterion, including AI screening, has a discriminatory effect. If a protected group's selection rate is less than 80% of the majority group's rate, it creates a "discriminatory effect," shifting the legal burden to the employer to prove the tool is a job-related business necessity.

Can an employer be held legally responsible for discriminatory AI provided by a vendor?

Yes, employers remain legally responsible for any discriminatory outcomes resulting from a vendor’s AI product. Utilizing a reputable third-party system is not a valid legal defense; both the vendor and the employer can be sued for adverse impacts. Organizations should review a vendor's validation studies and seek indemnity provisions in their service contracts.

Is seeking legal advice from AI protected by attorney-client privilege?

No, communications with AI to seek legal advice or develop strategy are generally not protected by attorney-client privilege because the information is not being exchanged with a lawyer. A recent New York federal court ruling determined that such inquiries are discoverable and admissible as evidence in court, allowing plaintiffs to access an employer's specific AI prompts and advice.

What are the risks of using open AI platforms like ChatGPT for company data?

Disclosing trade secrets, confidential company information, or sensitive legal questions to open AI platforms is legally comparable to publishing them in a newspaper. Such disclosures can lead to the loss of confidentiality and intellectual property rights. Furthermore, many AI vendors reserve the right to share stored data to respond to legal processes or improve their systems.

What steps should employers take to audit AI systems for bias?

Employers should perform regular audits by reviewing data, such as rejected resumes, to identify trends where specific protected categories are being adversely impacted. If internal technical capacity is lacking, third-party vendors can be hired to conduct these bias audits. Constant human oversight is essential to catch inadvertent legal violations that precise AI monitoring might trigger.
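The audit described above amounts to grouping screening decisions by protected category, computing each group's selection rate, and flagging any group that falls below four-fifths of the best-faring group. Here is a minimal sketch; the record layout and field names are assumptions for illustration.

```python
from collections import defaultdict

def selection_rates(decisions, category_field="category"):
    """Tally screening outcomes per group and compute selection rates.
    Each decision is assumed to look like {"category": ..., "selected": bool}."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for d in decisions:
        group = d[category_field]
        totals[group] += 1
        if d["selected"]:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag any group whose rate is below 4/5 of the best-faring group's rate."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}
```

Run this periodically over a representative window of decisions (the webinar suggests, for example, the past three months of screened-out resumes); any flagged group warrants human review before the tool stays in production.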

Are AI-driven productivity monitoring tools legal under the FLSA?

While monitoring is common, AI systems that track productivity down to the second can inadvertently violate the Fair Labor Standards Act (FLSA) by docking pay for compensable time, such as short bathroom breaks or coffee runs. Courts have allowed lawsuits to proceed against companies where AI-adjusted timekeeping records failed to pay employees for legally required compensable portions of the workday.
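Because DOL guidance generally treats short rest breaks (roughly 20 minutes or less) as compensable time, one practical safeguard is to scan AI-adjusted timekeeping output for short breaks that were deducted as unpaid. The sketch below assumes a hypothetical record layout; the 20-minute cutoff is a rule of thumb, not a bright legal line.

```python
def flag_docked_short_breaks(time_entries, max_compensable_min=20):
    """Flag unpaid break deductions short enough that FLSA guidance
    generally treats them as paid working time."""
    flagged = []
    for entry in time_entries:  # assumed: {"employee": ..., "break_min": ..., "paid": bool}
        if entry["break_min"] <= max_compensable_min and not entry["paid"]:
            flagged.append(entry)
    return flagged
```

Entries flagged by a check like this are exactly the kind of records that have supported FLSA suits over AI-adjusted timekeeping, so they merit correction before payroll runs.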

Webinar Transcript

James W. Wimberly (00:00):
This is Jim Wimberly. I wanna welcome you to our Friday webinar on AI. And you expected to hear Larry Stine for this. I'm sitting at Larry Stine's desk. He had some medical issues this week, but he is recovering and I'm taking his place. And I'm with Sheri Oluyemi, and we're together conducting the program today. Sheri, do you wanna say anything about yourself before we get started?

Sheri Oluyemi (00:31):
Well, good afternoon to everyone. My name is Sheri Oluyemi and I have been an attorney with Wimberly Lawson going on five years now. I cover the full gamut of employment defense issues. Today we'll be talking about AI and HR.

James W. Wimberly (00:45):
Okay. I can't tell you the broadness of how much we would like to discuss with you today about this important subject. Don't know if we'll finish it all. So I'm gonna jump right into it. Let me just give you a little history of AI from my personal perspective. I first heard a presentation on AI probably 20 years ago, before the term AI was developed. It was a presentation by a Fortune 250 company that had just developed and implemented an electronic hiring system for its massive workforce. And they talked about how much time and effort went into developing this system, and how the system was designed to anticipate workers who would stay with the company, follow policies, and in hopes of that, lower turnover. And my memory was, and this was 20 years ago, that they emphasized how much effort they went to to develop this program because they wanted it to be legally compliant, but at the same time produce good results. And they felt they had achieved both. Now, that was 20 years ago. That was before the term AI came around. And now over the last two years, it seems like it's all we hear: AI, AI, AI. I'm also interested in reporting to you that the Harvard Business Review has reported a new term that's now being used. It's called AI Brain Fry <laugh>.

James W. Wimberly (02:32):
And that's when the use of these new systems just overwhelms certain persons and causes even medical issues. And it named two professions as being most affected by AI brain fry. And guess what the two were: human resources and marketing. So for those of you that are victims of this malaise, I'm gonna talk you through some of the issues and hopefully we'll get around to offering some suggestions. First, let me say this. AI is new, but it's not so new, as I demonstrated in discussing the program that Fortune 500 or 250 company put in about 20 years ago. But one thing to keep in the back of your mind is that the legal principles we've been dealing with for many years are still with us concerning AI. And so, at the outset, I want to discuss one of the issues that will always come up with AI use and how it affects the programs we're developing.

James W. Wimberly (03:42):
Many years ago, in a Supreme Court case that's really about 50 years old now, called Griggs versus Duke Power, the Supreme Court developed the law that there are two forms of discrimination. And I'll be brief. One is disparate treatment, where one person is favored over another similarly situated person, and the implication is that it must be based on race, sex, age, or some other protected characteristic. This is also known as intentional discrimination, where there was an intent to favor one ethnic, racial, or sexual group over another. But in this Griggs case, the Supreme Court developed a second theory of discrimination. And it said simply that there are two forms of discrimination. One is discriminatory intent, which is disparate treatment. The other is discriminatory effect. It was in the context of an employer in North Carolina that required a high school education for entry-level positions that were more of a manual nature.

James W. Wimberly (04:47):
And the result of requiring a high school education was that African Americans and some other minority groups were not eligible for employment to the same extent others were, because they were disqualified by this neutral, supposedly non-discriminatory factor of the requirement of having a high school education. So from that case, the law ultimately developed that an employment criterion or requirement can be discriminatory in effect, even though it was neutrally applied to everybody with no discriminatory intent. And to some extent these types of cases were based on statistics, and the courts and EEOC needed a way to understand the statistical basis of what was a discriminatory effect. So the EEOC for many years came up with a handy tool called the Four-Fifths Rule. And the Four-Fifths Rule said that if a disadvantaged group is eligible less than 80% of the time the majority group is eligible, that creates a discriminatory effect.

James W. Wimberly (06:07):
Now, just because some hiring criteria has a discriminatory effect doesn't mean per se that it's unlawful. It means that it shifts the burden to the employer to show a job-related business necessity. And another term for that is validity of the hiring criteria; that is, the hiring criteria actually works in producing the effect of having a more qualified workforce. So when we use AI, and let's take it in terms of hiring criteria, examination of applications, whatever testing (this even goes back to psychological testing that many employers have used for many years), if these tools that we use have this discriminatory effect, because one protected group is disqualified at a significantly higher level than another, there has to be validity to it, or some kind of job-related business necessity. So the first thing that comes to mind in AI situations is whether the AI produces a valid result, a job-related business necessity, or alternatively, whether it doesn't adversely affect one protected group any more than another.

James W. Wimberly (07:32):
So that's lesson number one about AI: if it's used in the employment context for hiring, promotions, layoffs, whatever, any employment decision, it either shouldn't have an adverse impact on one protected group more than another, or, if it does, it's a valid system that accurately determines more qualified employees in a broad sense. Maybe they're gonna stay with the employer longer, maybe they're better workers, whatever. So that's lesson number one. But AI introduces so many other issues that are mind-boggling, and as we get into this program, I think you will see some of the other mind-boggling issues. But I wanna start with the premise that any AI that we use for employment decisions has to deal with the issue of adverse impact and whether the criteria results in a valid test or job-related business necessity for conducting it.

James W. Wimberly (08:40):
Now, how can adverse impacts result? Well, I'm gonna mention some additional problems that AI has as far as the adverse impact situation. If a company sets up a system like this Fortune 250 company did, presumably it sets up a system based on certain data that it's examined in the past. You've heard the old saying, garbage in, garbage out. If the data being inputted to whatever AI system we're setting up shows or reflects some sort of past bias, then that bias is possibly, or likely, gonna be reflected in the outcome. So certain filtering terms or selection points we use (let's just talk about examining applicants for a minute) can disadvantage certain groups because the data's bad, and it means the employer's not gonna be able to show it's a valid system. So this may be totally unintentional, but it may result in a screening process that creates serious legal issues of adverse impact type discrimination.

James W. Wimberly (09:50):
Now I wanna mention another type of AI system, 'cause there's so many of them. This is monitoring productivity. And when I think of this, I think of Amazon. You might say Amazon is the poster boy of using AI as a monitoring tool. The Amazon warehouses are known for having all kinds of monitoring devices, so they know what each employee's doing at each time, whether they're working or not working, and so forth. That raises other issues besides discrimination-related issues. It creates HR issues. Certain groups contend that's inappropriate, that they're working people like crazy in these warehouses with all this electronic monitoring. Truck drivers are making some of the same complaints, because they have cabs now in which cameras keep track of what the truck driver's doing. And so: I'm being looked at, I'm being monitored, you know, I don't like all these people looking at what I'm doing.

James W. Wimberly (10:59):
Suppose I have to stop to go to the bathroom. You know, privacy. So those are human resource issues and labor relations issues more than legal issues. But they can result in claims that somebody's privacy is being violated. The previous administration was getting into some of that. They were going to set up certain national guidelines of some type, where certain types of monitoring must be limited and must be disclosed to all employees, must be explained to employees, et cetera. There must be a bona fide purpose for it. That's a whole other issue of somebody's purported privacy interest. I will say those privacy claims haven't had much of a legal bite at this point. It's more of an argument standpoint, a human resource standpoint. Occasional legal attacks are made. Now, Sheri, we don't need to be changing PowerPoints yet, 'cause we aren't on those yet.

James W. Wimberly (12:06):
I changed the agenda this morning <laugh>. Didn't even tell Sheri. We'll come back to these PowerPoints in just a minute. So now I'll mention another thing that gets some companies in trouble in certain states, though it shouldn't concern most of us. I'm gonna be brief: facial recognition. You know, over in China, supposedly they know what everybody in the country's doing, and if somebody gets out of their normal routine or into an area they weren't supposed to be in, there's an investigation and they get in trouble. Some states in this country, particularly Illinois, are very sensitive to that, and they have set up certain guidelines or legal mandates with big penalties. So I'm not gonna spend much time on that. But facial recognition is one type of employment device that, if used in Illinois, and to a much lesser extent in Texas, has to meet certain additional requirements.

James W. Wimberly (13:07):
Now, what are some things that the courts are looking into? First of all, if you are relying on some kind of device to determine hiring, promotions, layoffs, and so forth, it needs to be valid, or workable, or produce what it's supposed to produce. And that usually comes from what's called a validation study. And we don't have enough time for validation study training here, but there are essentially three types of validation studies, and I'll just mention two of them. One is something that's on its face valid. And you might say, isn't that subjective? Not really. If something is so directly related to the job, and that's what it's testing, it usually passes under what's called content validity. Then there's a second type of validity to support the use of a test or device, called criterion-related.

James W. Wimberly (14:14):
And that means that statistically the test produces the best candidate. I can't go into a lot of details on that, but you can run comparisons on whether the test does what it's supposed to do. What are some other things that might be looked at with AI? Something that's always in the background: is there any human oversight to it? And I can't say to you that that's a legal requirement at this point, but if you look at the literature and what the experts are saying, most say that to design a good AI system, there should be some human oversight rather than just blindly relying on the AI results. And I can't really define it any better than that, but everybody seems to be suggesting, even though it's not reflected in the case law, that we gotta give human oversight to these systems.

James W. Wimberly (15:19):
'Cause they may get outta control in some way. Now, some other thoughts. We use vendors to provide some AI, and, you know, let's go back to an old example of psychological testing or pre-employment tests or what have you. Vendors may supply us the AI that we use to make the selection. I wanna say several things about vendors. Number one, we remain responsible for the use of the vendor's product. So don't ever get the idea that we're off the hook because we use some product that a reputable vendor provides. It's no legal defense to say, oh, I paid a lot of money for such and such to supply this system to me, so I'm off the hook; if you have a problem with the system, go after them. Well, that ain't the way it works <laugh>. Both the vendor and the employer can be in trouble.

James W. Wimberly (16:18):
There's a current case pending just against the vendor. I believe the vendor is Workday, which provides various employment programs to employers all over the country. One plaintiff went after them alone, without even bringing in the employers. But that doesn't mean the employers can't be sued without suing the vendor. So what should we do in our reliance on vendors? Number one, we need to look at their validation studies. What evidence do you have that this system you're selling us does what it's supposed to do? What validity tests have you run and considered? And I would like to see the results. So that is one thing you can do. Secondly, you can try to get indemnity provisions. You know, those of us that use staffing companies sometimes try to get indemnity provisions from staffing companies or other contractors, so that if they violate the terms of the agreement, don't produce valid tests, or don't provide legal workers, and we get sued for it.

James W. Wimberly (17:27):
They have to come in and defend the case and/or reimburse us for our loss. So those are some things we need to look at in dealing with vendors. Don't think that just because we use a vendor, we're off the hook and aren't legally responsible. So let's find out if the vendor's tools have been tested and what the results have been, and let's receive the results. We wanna know if they've been in litigation and what the results have been, and so forth. So how do we start on some of this? I'm now getting to some of these PowerPoints. Alright, Sheri, you got the right PowerPoint up. My fear right now is that without company guidance, people are on their own to use AI as they choose. So we possibly don't know what use our management staff is making of AI, how they're using it, whether they're taking precautions. And there's a term called shadow AI; that's the unsanctioned AI tools that persons in our organization are using, but yet we remain legally responsible for their use.

James W. Wimberly (18:37):
Some companies don't like AI. They're saying there are too many unknowns out there: we don't wanna dig into it yet. Others embrace it. Some require their employees to use AI and offer rewards or promotions for using AI. SHRM reports that 13% of organizations are using AI in performance reviews. But of course AI will unfortunately do something called hallucination, and hallucination is, they make up stuff. I don't understand why this happens, but having used AI many times myself in legal research, they invent case cites, case discussions, analysis, and conclusions. How often does this happen? A lot of people say it happens 20% of the time. I mean, they're pulling that number out of the air, but I would say to you that that's probably a pretty good number. In my personal use of AI, I have found it on a given project to be a hundred percent wrong.

James W. Wimberly (19:46):
Because I've been around so long, I know, in the cases where it was a hundred percent wrong, I can understand a little bit how it was misled. And ironically, it was misled the same way a lawyer would be misled by doing non-AI research <laugh>. The law deals in analogies, you know, situations alike others, so they must have similar outcomes. But it's one thing to give a wrong answer; it's another thing to make things up, make up cases and everything. So we have to check every case that's cited. I mean, we have to do this to do a good job, but secondly, we have to do it for ethical reasons. Lawyers are getting in all kinds of trouble with courts today, because, and I may be exaggerating this, but let's just put it simplistically: a plaintiff's lawyer can push a button and print a lawsuit, print a motion in support of that lawsuit, print discovery questions to ask the employer, and all of these kinds of things. Something that would take a lawyer a week or two to do, AI does in 10 minutes. Problem is, it's full of errors. And if a lawyer goes and files these things in court and the judge realizes the lawyer has used AI without checking everything, serious sanctions can be in order; there can be ethical issues. So, you know, I'm at a loss to explain why AI makes things up and is wrong a portion of the time, although some of its mistakes on legal questions I understand. But also, AI is influenced by our prompts. Apparently AI wants to please <laugh>.

James W. Wimberly (21:38):
And you know the old saying, garbage in, garbage out. That applies also. Next slide, Sheri. (Yes sir.) Now this is mind-boggling. Hold your seats when I tell you this. You're a human resource director. You have a legal issue come up. I don't want to call my lawyer; I'm going to do this myself. I'm an HR professional, I'm sure I'm certified and all this kind of stuff. So I do this research myself. Maybe the question is, can I fire an employee because the employee did such and such? Tell me the answer, the applicable law to rely on, and the words to use to carry out the termination. That type of inquiry is possible with AI. Well, in a case of first impression, meaning it never happened before <laugh>, a New York federal judge ruled two weeks ago, this is new, that when a client, an employer or HR, whatever, communicates with AI to seek legal advice and develop strategy, that's discoverable. The plaintiff can say, did the company consult AI on this issue?

James W. Wimberly (22:52):
And we want to see the inquiry and the advice. And the federal judge says that's discoverable; the evidence is admissible in court. So think about that. The HR person says, I'm just getting legal advice. The judge says, well, you are not getting it from a lawyer, you're getting it from AI, so that's not protected by the attorney-client privilege. Now, that case has just shaken the legal world, and in two cases where I've seen discovery in the last couple of weeks, guess what the plaintiff's lawyers are routinely asking now: was AI consulted on such and such? And they want to get access to that AI, what was requested, what was stated to AI, what the advice was, the whole shebang. So this is significant. Next slide, Sheri, please. Here's another. This is not so mind-boggling in the legal community, because this word's been out a long time.

James W. Wimberly (23:58):
Whatever confidentiality our company requires, let's say it's a trade secret. You know, once we disclose trade secrets, it's, you might say, public information that we don't have a right to protect. Same thing with confidential company information. Same thing with legal questions. All of these doctrines generally say, whether it's a trade secret, a confidential matter, or legal advice, that if they're to be privileged, they have to be kept private and only disclosed to people that need to know. Well, if you disclose these things on AI, it's the same as publishing 'em in the newspaper as far as the law is concerned. So if we use, and the terms are just developing, what I'll call open AI, like ChatGPT, the most popular one, or Claude, we're disclosing it to the world and therefore arguably have lost our confidentiality. Now, the law is just developing in this area.

James W. Wimberly (25:02):
So I'm not saying this is the way every court will look at it, but I want you to know what the risks are. So you can lose all sorts of rights by disclosing things to open AI. Now here's the second problem. Even if it's not open, you are signing some sort of an agreement with a vendor that you probably have not read, and it's probably long and in small print, and they reserve the right to share stored data when required by law to respond to legal processes, including Microsoft; they have that program. So you don't know what you're getting into by using AI. I mean, the lesson is, one thing you gotta do is study the privacy policies of the AI you're using. And my gosh, here's something that's mind-boggling to me. What about the note-taking technologies we use? What about this webinar?

James W. Wimberly (25:55):
What about the Teams meetings we conduct? We're now looking at that from an AI standpoint and realizing this has all sorts of implications. For example, the recording policy. Are we violating somebody's rights by recording this where they've not consented to the recording? Now, Teams puts up some kind of a warning about AI use being recorded. I assume you got some sort of a warning on this particular broadcast, but I think you get the point. Now, suppose we keep all our internal company Teams meetings. Can you imagine the discovery issues we're gonna get into? You know, in discovery, the other side can seek relevant information that you have in your possession. And I won't get into any particulars here, but in our industry there's been litigation over certain meetings, a lot of litigation over meetings, and the plaintiffs want everything known about those meetings.

James W. Wimberly (26:58):
Were they recorded? Who took notes? I wanna see the notes, what was said. So I hope I've sensitized you to looking into whether our company is keeping meeting records of very sensitive meetings and whether we should be keeping those records. Maybe we should not be keeping those records, because you have two other things to remember. How long are these records kept? Let me give you a common example I'm dealing with all the time right now: camera videos in a plant. Some plants keep those videos 30 days, some keep 'em six months, you can keep 'em a year. I don't know anybody that keeps 'em forever, because it clogs up their system. But what record retention policies do you have? If you're gonna record everything, you ought to have a retention program of how long you're gonna keep it. And I'm putting this under problem areas with AI usage because there's another thing; it's called litigation hold.

James W. Wimberly (28:02):
When you have reason to believe you're gonna be sued, and I'm not gonna be any more specific than that 'cause the law's a little vague in itself, you are legally obligated to keep relevant information. And if you destroy that relevant information, even if it's in the regular course of business due to a retention policy, you get sanctioned by the court in a big way; they can even find you guilty of having committed an illegal act by doing so. So I hope I've sensitized you: with AI, we're keeping information. Maybe the AI is in the form of recordings of meetings. We need to know what we're recording and how long we're gonna keep it. And we need to know about the litigation hold concept. Next slide. Next. Okay, well, you know, where as a company do I get some advice on this? Well, I'll get to that in just, well, I won't really get to it, but there's some things out there.

James W. Wimberly (29:05):
There's the National Institute of Standards and Technology AI Risk Management Framework. I've read that once; I didn't get a lot out of it <laugh>, but you might get something out of it. At least it's a source to be checked. Now, the administration believes that AI is wonderful, because AI has the potential of making our society much more productive. And the more productive our society is, the more money companies can make and the more everyone can enjoy the benefits of that productivity, all the way from new hourly hires to the CEO. So presumably the administration doesn't wanna see states enact impediments to AI use, but as of this minute, any state laws on AI remain enforceable. Next slide, please. So what should we do? Well, first of all, let's find out what we're currently doing. What policies do we currently have that address AI? It makes sense to me, unless somebody comes up with a better idea, that companies ought to develop an AI acceptable use policy. What platforms are approved? What settings are required? What types of data are off limits? The obligation to verify the accuracy of the results. What recording policies are we gonna have? What data retention policies? And a reminder of legal hold procedures. Now, this is just practical sense that to me would apply. Next slide, please, Sheri. Alright, Sheri, this is your turn, isn't it?

Sheri Oluyemi (30:37):
Yes. So I'd like to talk a little more about what strategies you as employers can implement to minimize and mitigate some of the AI risks that Jim has just discussed. Even if you're not certain what types of AI tools are in effect in your workplace, that's where you need to start. The first step is to really understand what models are in your workplace. Are you using large language models? Are you using OpenAI? Do you have any proprietary systems? Understand exactly what is already in your workplace. Now, a lot of you use enterprise word processing and email such as G Suite or Outlook, and a lot of those come bundled with AI: Microsoft has Copilot, and G Suite will give you Gemini. So a lot of your employees and your team members already have access to these tools within your enterprise system, which means you can exercise some sort of audit capacity to see exactly who is using them, what they are using them for, and what risks are associated with that use.

Sheri Oluyemi (31:46):
So that's really where you have to start. Look at all of your systems and go through all of your software. If you use Workday, Gusto, Paychex, or ADP, all of those platforms now include AI; even your practice management software, if you're using Clio, now has an AI component. So start there and understand exactly what is in your workplace. Look at the T's and C's, read those terms and conditions, and see whether there is any indemnification. See whether, for example, on Gemini, the history is turned on, which means in the case of litigation the history can go back to the beginning of that employee's use of Gemini. See whether you need to toggle those settings on or off, and whether you need to draft retention policies to manage those systems. These systems will come with a lot of automatic default settings.

Sheri Oluyemi (32:38):
So understand what those default settings are and determine actively whether they work for your workplace. That is really the first step: just understand what is being used and why. After you've done that, you really should then audit for risk. You can do this in-house or, if you don't have the technical capacity, you can have a third party audit for bias. This is what Jim was talking about earlier with disparate impact cases: what has been the result of using AI in a specific sphere? Going back to the example of resume screening, look at the resumes that have been screened out by your AI system over the past three months, or any period of time you feel is representative, and see whether any trends have emerged. Has a certain type of person, a certain protected category, been excluded?
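The screening audit described here can be sketched in code. This is a hypothetical illustration of the EEOC's four-fifths (80%) rule mentioned earlier in the webinar, applied to resume-screening pass rates; the group names and counts are invented for the example, and a real audit would use your own applicant data and appropriate statistical review.

```python
# Hypothetical four-fifths (80%) rule spot check on AI resume screening.
# Group labels and counts below are illustrative, not real data.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants the screener passed through."""
    return selected / applicants

def four_fifths_check(groups):
    """groups: dict mapping group name -> (selected, applicants).
    Returns {group: (rate, within_four_fifths_of_top_rate)}."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: (rate, rate / top >= 0.8) for g, rate in rates.items()}

results = four_fifths_check({
    "group_a": (48, 80),   # 60% pass rate
    "group_b": (12, 30),   # 40% pass rate -> below 80% of the top rate
})
for group, (rate, passes) in results.items():
    print(f"{group}: rate={rate:.0%}, within four-fifths of top rate: {passes}")
```

A group flagged `False` here is exactly the kind of trend Sheri says should prompt further work on how the AI is being used, not a legal conclusion by itself.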

Sheri Oluyemi (33:29):
If that's the case, then you might need to do some work on the way you're using AI, but you wouldn't know that unless you did an audit. And again, there are vendors who specialize in this type of audit testing. They go into companies and determine whether any protected categories are being adversely impacted by the way AI is being used in your workplace. The Workday litigation was the very first one, as Jim mentioned, and it was huge. Workday filed a motion to dismiss on two bases: first, that it was a vendor and, as such, wasn't actually responsible for all the resumes that were screened out because it wasn't the employer; and second, that its systems were random and were not focusing on or targeting any specific protected group. The court refused to throw out the case, reasoning instead that Workday did have some responsibility to make sure it was compliant with federal law.

Sheri Oluyemi (34:27):
So even though the use of AI is new and growing exponentially, becoming smarter every day, the laws remain the same. As old as they are, they are equally applicable. If there is something you cannot do as an individual or as an entity, you cannot use AI to do it for you, because those same rules apply, and you will have no grounds to argue there was a lack of agency. So the second step is to run those audits, run those tests, and see exactly what risks might be apparent from those reviews. The third step is human oversight. Jim mentioned this a little, but you can never remove yourself entirely from AI processes. There needs to be some human review of AI decisions. You cannot rely entirely on the results that AI pumps out. We as attorneys have learned that from reading cases, seeing that AI will sometimes have the case citation right but will misunderstand the standing of the case or the holding of the case.

Sheri Oluyemi (35:28):
We have to go and read the cases ourselves, and the same applies to you. If a resume is rejected, do a spot audit. If AI rejects ten resumes, spot-audit two of them and see whether the AI is missing something, so that you can adjust your settings. Always have human oversight of the system. Another set of cases that might interest you comes out of the Western District of Texas. In those cases, employees argued that the companies were using AI to adjust their timekeeping records, for example by tracking when the employee was actually working versus stepping away from the workstation. These lawsuits claimed that the employees were not paid for time that would otherwise be compensable, because if an employee is on the clock and stops to go to the bathroom for two minutes, they don't necessarily have to clock out for that.

Sheri Oluyemi (36:22):
If they go to HR, maybe they don't have to clock out for that. If they stop to grab a coffee, some of this time is part of the workday and is usually compensated, and sometimes required to be compensated under the law. But with AI being precise down to the second, the moment the employee stops typing, the moment the employee takes their hands off the controls of the machine, the timesheet logs them out, and logs them back in when they take hold of the machine again. We don't have a decision in that case because it settled, but the court did refuse to dismiss those cases on a motion to dismiss, so at least those plaintiffs got past the threshold, indicating that this could become a bigger problem. And I'm certain lawsuits are going to come based on the same facts.
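The kind of human oversight check this suggests can be sketched as a scan of AI-edited time logs for short deducted gaps. This is a hypothetical illustration only; the 20-minute threshold reflects the general U.S. Department of Labor position that short rest breaks are hours worked, the timestamps are invented, and whether any particular gap is compensable is a legal question for counsel.

```python
from datetime import datetime, timedelta

# Hypothetical oversight scan of AI-deducted time gaps. Short breaks
# (roughly 20 minutes or less) are generally treated as compensable
# hours worked under U.S. DOL guidance, so auto-deducting them is a risk.

def flag_deducted_breaks(gaps, threshold=timedelta(minutes=20)):
    """gaps: list of (clock_out, clock_in) pairs the AI deducted.
    Returns the gaps short enough that they likely should have been paid."""
    return [(out, back) for out, back in gaps if back - out <= threshold]

gaps = [
    (datetime(2025, 6, 2, 10, 14), datetime(2025, 6, 2, 10, 17)),  # 3-minute pause
    (datetime(2025, 6, 2, 12, 0),  datetime(2025, 6, 2, 12, 45)),  # 45-minute lunch
]
for out, back in flag_deducted_breaks(gaps):
    print(f"review: {out:%H:%M}-{back:%H:%M} deducted but possibly compensable")
```

Each flagged gap is a candidate for the human spot audit Sheri recommends before the timesheet is finalized.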

Sheri Oluyemi (37:07):
So we're going to have some decisions guiding us in the future, even though we do not have them now. There needs to be some oversight to see whether your AI systems are making these inadvertent FLSA or other violations. I was going to talk about vendor due diligence, but Jim has sort of covered that. You cannot assign or delegate your responsibility under any of these employment law statutes to a vendor. At best you may be jointly liable with the vendor; at worst you may be liable entirely, with the vendor completely off the hook, because you are the employer. So do not assume that you can rely entirely on your vendor. Those are four steps you can take in your workplace to minimize some of the risks here. We were also going to talk about validation studies; we've sort of covered that.

Sheri Oluyemi (38:01):
Tool selection: don't just default to whatever AI tool happens to fall into your lap. They are not all created equal, and trust me, I've tested most of them. When I do legal research and when I do drafting, they are very different; some are better for some things. For example, NotebookLM is good for audio, and Gemini is great for tasks that need deep internet research. So be selective with the tools that you choose. Then, based on all the information you've gathered, decide for your workplace how aggressive you'd like to be in preventing employees from using AI, or not. After you've performed the four steps I've just discussed, you can decide how restrictive you want to be. Once you've made those rules, put them in writing and disseminate the policies to your employees so that they have adequate notice, along with an open door policy to come and talk to you if they have any questions. Jim, anything else you'd like to add before we close for the afternoon?

James W. Wimberly (39:06):
No, except to point out that we're at a very early stage of the emerging legal issues under AI. The best example is the case I told you about, in which a New York federal judge ruled that even AI legal inquiries by non-lawyers seeking legal advice were discoverable and admissible in court. All these things about losing confidentiality, trademark, and legal privilege through open AI features, all of that is, you might say, largely theoretical. But I use an old saying: you hope for the best and you plan for the worst. So in our planning, we have to keep in mind that these things might open up a floodgate of loss of confidences in the company. And I also hope I've alerted you to the issues associated with keeping all kinds of recordings, team meetings, et cetera. What retention policies do we have? Should we be recording all this? It even gets down to, I used the example of video cameras in our plants: how long do we keep that footage? Do we know we have to hold those documents when litigation is anticipated? That summarizes it. Thank you for joining us today.

Sheri Oluyemi (40:23):
Thank you everyone. Have a good weekend.

Status: Available On-Demand
Webinar Date: Friday, April 03, 2026
Start Time: 12:00 PM
End Time: 12:45 PM
Venue: Zoom
