This writer cannot recall a subject that has dominated the news in recent months the way AI has. Although this writer is not especially attuned to modern technology, he nevertheless believes that AI is here to stay and will prove as important as computers and cell phones. There is a lot to learn, however, and this article is intended as an overview.
AI, in general, is software that searches through billions of books, articles, websites, and posts and uses that data to respond to requests. AI analyzes this vast body of data to find patterns and to make predictions and decisions. While it continuously improves as it ingests more data, it is also capable of making mistakes, including major ones, and it raises a host of other risks.
The most popular AI tools are chatbots such as ChatGPT, Google Bard, and Baidu's ERNIE. You can go to ChatGPT, explain in a few sentences what you want, and a few seconds later receive a response that might have taken you a month to produce on your own. You can even go back and ask for changes, and it will make them. To get started, try typing questions, giving it text to summarize, or providing data to analyze.
Most of these AI tools are easily accessible and free to use. You can go to Chat.OpenAI.com on a computer or mobile device and register for a free account by providing an email address and a mobile phone number and creating a password. Providers use submitted data to further train their models. For example, OpenAI provides that: "Data submitted through non-API consumer services ChatGPT or DALL·E may be used to improve our models." Because these tools may use input data to further train their models, there is a possibility that a user's input could be reproduced verbatim in output to another user.
There are many legal ramifications. Inputting a trade secret into an AI tool may amount to disclosing that trade secret to a third party, which could undermine trade secret protection or even result in a surrender of patent rights. In addition, a tool's output may contain portions of potentially trademarked, copyrighted, or otherwise protected material.
Because of the significance of the issues presented, federal regulators have jumped into the picture. On April 25, 2023, the Equal Employment Opportunity Commission (EEOC), the Consumer Financial Protection Bureau (CFPB), the Department of Justice (DOJ), and the Federal Trade Commission (FTC) issued a joint statement pledging to use existing laws to protect the public from bias in automated systems. Automated systems, as described in the joint statement, include "software and algorithmic processes that are used to automate workflows and help people complete tasks or make decisions." The regulators stress that automated systems have the potential to discriminate when they are trained on historical or skewed data sets, when the model lacks transparency, or when the system fails to account for context. Employers now use automated systems to help make decisions that have legally significant impacts, including hiring, promotions, and terminations.
The April 25, 2023 joint statement suggests that employers conduct an assessment, including documenting the data collected and how that data is processed by each automated system used to reach the ultimate decision. Understanding the methodology, which may require help from the software vendor, makes it easier to assess the system. Obviously, contracts with third-party AI vendors should be reviewed, with specific attention to representations and warranties, indemnification, and limitation of liability. Many employers will try to renegotiate those terms to shift the risk back to the vendor, which is in a better position to understand its own automated system. Employers may also wish to review their insurance policies to assess whether there is coverage for potential AI-related liabilities. Some states and localities are also regulating in this area, such as New York City's law that went into effect on April 15, 2023.
On May 18, 2023, the EEOC released a technical assistance document, "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964." In this document, the EEOC refers to its Uniform Guidelines on Employee Selection Procedures under Title VII, adopted in 1978. The Guidelines explain how employers can determine whether their tests and selection procedures are lawful for purposes of Title VII disparate impact analysis, and they apply to algorithmic decision-making tools when those tools are used to make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees. Under the Guidelines, employers are required to assess whether a selection procedure has an adverse impact on a particular protected group by checking whether use of the procedure causes a selection rate for individuals in that group that is "substantially" less than the selection rate for individuals in another group. Ultimately, the employer may be responsible under Title VII even if the tool was developed by an outside vendor. The guidance therefore recommends that employers considering whether to rely on a software vendor to develop or administer an algorithmic decision-making tool ask the vendor, at a minimum, whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII. If the vendor states that the tool should be expected to result in a substantially lower selection rate for individuals of a particular race, color, religion, sex, or national origin, then the employer should consider whether use of the tool is job related and consistent with business necessity, and whether there are alternatives that may meet the employer's needs with less of a disparate impact.
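For readers who want a concrete sense of the "substantially" lower selection rate test, the EEOC's Uniform Guidelines have long used a four-fifths (80%) rule of thumb: a group's selection rate below four-fifths of the highest group's rate is generally regarded as evidence of adverse impact. The following is a minimal sketch of that calculation; the applicant numbers and group names are invented for illustration only.

```python
# Sketch of the four-fifths (80%) rule of thumb from the EEOC's
# Uniform Guidelines on Employee Selection Procedures.
# All figures below are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_flags(groups, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    (four-fifths by default) of the highest group's rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: (r / top) < threshold for g, r in rates.items()}

# Hypothetical data: (number selected, number of applicants) per group
groups = {"Group A": (48, 80), "Group B": (12, 40)}
flags = adverse_impact_flags(groups)
# Group A's rate is 0.60; Group B's rate is 0.30. The ratio 0.30/0.60
# = 0.50 is below 0.80, so Group B is flagged for further review.
```

A flagged result under this rule of thumb is not itself a legal conclusion; as the guidance notes, the employer would then need to examine job relatedness, business necessity, and less discriminatory alternatives.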
Editor's Note: Employers may wish to conduct periodic, privileged reviews of AI algorithms, supervised by counsel, to test for potential bias in both input data and output results.