The Bottom Line
- Employers should make sure that the service agreements they have with employee benefit plan administrators (third-party administrators, or “TPAs”) include adequate protections against a TPA’s use (or misuse) of AI in providing its services. In turn, TPAs should ensure that they are protected from the vendors they rely on to provide AI-powered services.
- Companies must work diligently with their data security and ERISA counsel to ensure that their use of AI tools complies with applicable privacy, data security, and employee benefit laws, including HIPAA and ERISA, as well as newly enacted state laws regulating the use of AI technologies.
- AI tools should be carefully vetted for accuracy and reliability. For this purpose, an employer may want to allocate oversight responsibility to its employee benefit plan committee.
The emergence of generative artificial intelligence (AI) technology offers significant opportunities in the field of employee benefits, for both employers and their employees, including:
- Generating user-friendly summaries and infographics to clearly explain key details of plans to employees (e.g., in the form of summary plan descriptions (SPDs) and open enrollment materials);
- Adjudicating (approving or denying) benefit plan claims more accurately;
- Detecting fraud;
- Helping employers and employees make customized, data-based decisions regarding which employee benefit plans employers should provide and which elections employees might want to make; and
- Serving as a customer service tool in the form of AI-powered chatbots that can answer employee questions about cost and coverage details or provide updates regarding a claim dispute process.
Despite the promise that generative AI technology holds for benefits administration, third-party administrators (TPAs) and employers that engage them must remain cognizant of significant risks that these technologies can pose, including:
- Noncompliance with applicable federal and state privacy and data security laws;
- Fiduciary liability resulting from inaccurate outputs; and
- Discriminatory or harmful responses to participants.
Privacy and Data Security
Various federal and state privacy laws, including the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), the California Consumer Privacy Act (CCPA) and other newly enacted privacy laws, must be considered when TPAs and employers collect and process personal health and financial data. These laws govern who may access such data and how, as well as the security measures that must be implemented to protect individuals’ privacy and guard against data breaches. While many of these state laws have carveouts for data covered by HIPAA or may be preempted by federal law under ERISA, state laws may still play a role in how data held by an employer outside of an employee benefit plan (e.g., in employment files) must be handled.
For example, the CCPA is one of the newly enacted comprehensive consumer privacy laws that apply to certain employee data. Proposed regulations in California would extend the CCPA’s requirements, including those relating to transparency and consumer access, to automated decision-making technologies that rely on consumers’ personal information.
Enterprise-level AI tools and those developed specifically for employee benefits administration are more likely to have appropriate protections in place. Even so, TPAs and employers must properly vet these tools and the vendors that offer them to verify that appropriate measures are in fact in place to prevent sensitive personal information from leaking into a wider, more public data set or from otherwise being disclosed to those with no legal basis to possess it. Employers should work with their data security counsel to closely review the terms of use and privacy policies provided by AI vendors and to carefully negotiate vendor service agreements so that providers are subject to appropriate legal obligations and responsibilities, including entering into a business associate agreement (where necessary).
Reliability
Because many of the large language models on which generative AI tools are built rely on predictive text technology, a major drawback is their propensity to “hallucinate,” i.e., to fabricate a response to a user’s query that sounds convincing but has no factual basis. Hallucinations can occur in ways large and small, and while they are sometimes obvious, other times they go undetected. Consequently, employers and employees who rely on these tools to guide their selection of benefit plans and coverage elections face significant risks: recommendations made by an AI tool may be unreliable or inaccurate and may not be supported by data or factual information.
This phenomenon also presents significant risk to TPAs, which act on behalf of the employers they serve. If a TPA improperly relies on AI to support a recommendation (e.g., whether a prescription drug is covered under a group health plan), to prepare an SPD, or to process claims, and the output is inaccurate, the TPA or the employer it serves may have breached a fiduciary duty.
Accordingly, employers should make sure that their employee benefit plan committees carefully assess the technologies, methodologies and training data powering these AI tools to minimize the risk of hallucination and to ensure that the results generated are reliable. These committees should consider engaging their ERISA counsel to review current vendor agreements for AI-related warranties, representations, disclaimers and liability-shifting provisions. Additionally, committees may wish to include appropriate AI-specific questions in their RFPs when seeking a new TPA or other vendors (e.g., COBRA administrators).
Discrimination
Generative AI tools have been trained on vast amounts of information available across the internet, much of which reflects various human biases. As a result, many AI tools are inherently biased and often generate output that incorporates these biases. The more generative AI is used to make meaningful decisions that could significantly affect people’s lives, the more important it becomes to address these biases and guard against what has become known as algorithmic discrimination. Human oversight not only serves as a check on the accuracy of AI-generated output, but also helps correct any inherent bias or discrimination embedded in that output.
Some legislatures are beginning to take concrete steps to enact laws prohibiting algorithmic discrimination in AI. The recently passed Colorado Artificial Intelligence Act is the first law in the U.S. to establish such regulations. This law requires AI developers and those using high-risk AI systems to use reasonable care to ensure that their AI tools are not used to make any consequential decisions impacting an individual’s rights and opportunities – particularly around employment, financial services, healthcare services and insurance – in a manner that unlawfully discriminates based on a protected class or category. Whether a particular state law applies to an employee benefit plan will depend on several factors, including whether ERISA preempts such laws.
Trust and Disclosure
Employers that use generative AI tools for benefits administration should (and in some instances may be required to) ensure that the use of such AI technology is clearly disclosed to employees. This is particularly important when using an AI-powered chatbot. Various state laws, including California’s Autobot Law and Utah’s Artificial Intelligence Policy Act, require companies that use chatbots to interact with consumers to clearly and conspicuously disclose to those consumers that they are interacting with an automated technology, not a human. Again, whether a particular state law applies to an employee benefit plan will depend on several factors, including whether ERISA preempts such laws.