The Bottom Line
- Since the beginning of 2023, nearly 200 bills have been introduced across dozens of states to regulate AI technology.
- Utah’s new AI transparency law requires companies to disclose when their AI systems are used to interact with consumers.
- A new law passed in Colorado, now awaiting the Governor’s signature, would be the first to regulate the use of AI in making consequential decisions about individuals.
- Other states may regulate AI through existing consumer privacy legislation, and numerous states have enacted laws regulating deepfakes, including for political advertising.
With the rapid introduction of increasingly powerful artificial intelligence (AI) technologies, regulators, consumers and even industry participants are seeking to establish a clear regulatory framework.
While the European Union recently enacted the AI Act, which both regulates AI systems and prohibits certain uses of AI, the United States Congress has not adopted any federal laws addressing AI technologies. In the absence of such federal legislation, many U.S. states are taking matters into their own hands. Since the beginning of 2023, nearly 200 AI-related bills have been introduced across dozens of states.
Utah and Colorado are now among the first states to pass broad consumer protection and transparency laws regulating the use of AI tools by private organizations, with many others close behind. In addition, over a dozen states have recently passed laws regulating the use of generative AI to create deepfakes imitating people or scenarios, particularly in the context of political advertising.
Utah: The First to Require Transparency and Disclosure Obligations
Utah’s Artificial Intelligence Policy Act took effect on May 1, 2024. The Act defines generative AI as “an artificial system that: (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.”
Utah’s AI Policy Act clarifies that the State’s consumer protection laws apply to companies using generative AI in the same manner as they apply to companies in any other business, and that companies are responsible for the actions of the generative AI tools they use. It also amends the Utah Consumer Privacy Act to specify that deidentified data includes “synthetic data,” defined as “data that has been generated by computer algorithms or statistical models and does not contain personal data.”
Importantly, Utah’s AI Policy Act is the first law in the U.S. to impose disclosure requirements on private companies that use generative AI to interact with consumers. Non-regulated companies and service providers must disclose that a consumer is interacting with generative AI rather than a human, but only if the consumer asks or prompts the question. Regulated companies and service providers (those in occupations for which a state license or certification is required to practice) have an affirmative obligation to make this disclosure whenever a consumer interacts with a generative AI tool as part of the regulated service, prominently and at the outset of the communication.
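To make the two disclosure tiers concrete, the following is a minimal sketch of the decision logic in Python. The names used here (Interaction, utah_disclosure_duty, is_regulated_occupation, consumer_asked) are illustrative assumptions, not terms from the statute.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """Hypothetical model of a consumer interaction with a generative AI tool."""
    is_regulated_occupation: bool  # service requires a state license or certification
    consumer_asked: bool           # consumer asked whether they are talking to AI

def utah_disclosure_duty(interaction: Interaction) -> str:
    """Sketch of the Utah AI Policy Act's two disclosure tiers, as described above."""
    if interaction.is_regulated_occupation:
        # Regulated occupations: affirmative duty to disclose prominently at the
        # outset of the interaction, regardless of whether the consumer asks.
        return "disclose prominently at the outset of the communication"
    if interaction.consumer_asked:
        # Non-regulated businesses: the duty is triggered only when the
        # consumer asks or prompts the question.
        return "disclose that the consumer is interacting with generative AI"
    return "no disclosure obligation triggered"

print(utah_disclosure_duty(Interaction(is_regulated_occupation=True, consumer_asked=False)))
print(utah_disclosure_duty(Interaction(is_regulated_occupation=False, consumer_asked=True)))
```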
Utah’s Consumer Protection Division is responsible for enforcing these disclosure obligations, and violators may be subject to a fine of up to $2,500 per violation.
Colorado: The First to Establish Regulations to Combat Algorithmic Discrimination
Colorado’s Artificial Intelligence Act, SB 24-205, was passed on May 8, 2024. If signed into law by the Governor, it will go into effect on Feb. 1, 2026. The Act requires AI developers and deployers of high-risk AI systems to use reasonable care to protect consumers from risks of algorithmic discrimination, meaning “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”
Colorado’s AI Act applies to “developers,” defined as persons or business entities doing business in Colorado that develop, or intentionally and substantially modify, an artificial intelligence system, and to “deployers,” meaning persons or business entities doing business in Colorado that deploy a high-risk artificial intelligence system. An artificial intelligence system is considered “high-risk” if it makes, or is a substantial factor in making, a consequential decision, which is one that “has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.”
In short, businesses in Colorado must use reasonable care to ensure that the AI systems they develop or deploy do not make, and are not used to make, consequential decisions affecting a consumer’s rights and opportunities in a manner that unlawfully discriminates against the consumer on the basis of a protected class or category.
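As a rough, non-authoritative illustration of how the statute is structured, the “high-risk” test can be modeled as a simple check against the enumerated categories of consequential decisions. The type and function names below are invented for illustration and do not appear in the bill.

```python
from enum import Enum, auto
from typing import Optional

class ConsequentialDecision(Enum):
    """Decision categories enumerated in Colorado SB 24-205 (paraphrased)."""
    EDUCATION = auto()
    EMPLOYMENT = auto()
    FINANCIAL_OR_LENDING_SERVICE = auto()
    ESSENTIAL_GOVERNMENT_SERVICE = auto()
    HEALTH_CARE = auto()
    HOUSING = auto()
    INSURANCE = auto()
    LEGAL_SERVICE = auto()

def is_high_risk(makes_decision: bool,
                 substantial_factor: bool,
                 category: Optional[ConsequentialDecision]) -> bool:
    """A system is 'high-risk' if it makes, or is a substantial factor in
    making, a consequential decision in an enumerated category."""
    return category is not None and (makes_decision or substantial_factor)

# Example: a resume-screening model that heavily influences hiring decisions
# would be high-risk because employment is an enumerated category.
print(is_high_risk(makes_decision=False, substantial_factor=True,
                   category=ConsequentialDecision.EMPLOYMENT))  # True
```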
Assuming the bill is signed into law as written, developers and deployers who follow its requirements are presumed to have exercised such reasonable care. The requirements for companies deploying high-risk AI systems include the following (see the illustrative sketch after this list):
- Implementing a risk management policy governing use of high-risk AI systems.
- Completing impact assessments for high-risk AI systems.
- Notifying consumers if a high-risk AI system is being used to make, or to be a substantial factor in making, consequential decisions concerning a consumer and disclosing to consumers the purpose of the AI system and the nature of the consequential decision, along with information regarding the right to opt out of consumer data profiling under the Colorado Privacy Act (if applicable).
- Providing consumers a right to appeal the consequential decision (and the appeal must be reviewed by a human, to the extent feasible).
- Including a statement on the company’s website summarizing the types of high-risk AI systems being deployed and how the company manages the risks of algorithmic discrimination.
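For illustration only, a deployer’s duties under the bill could be tracked with a simple checklist structure like the one below. The field names are shorthand of our own, not statutory language, and the sketch is not a substitute for the bill’s actual requirements.

```python
from dataclasses import dataclass

@dataclass
class DeployerChecklist:
    """Illustrative checklist mirroring the deployer duties summarized above."""
    risk_management_policy: bool = False        # policy governing high-risk AI use
    impact_assessments_completed: bool = False  # one per high-risk system
    consumers_notified: bool = False            # purpose, nature of decision, opt-out info
    appeal_with_human_review: bool = False      # right to appeal, human review where feasible
    website_statement_published: bool = False   # systems deployed and risk management summary

    def outstanding_duties(self) -> list[str]:
        """Return the duties not yet satisfied."""
        return [name for name, done in vars(self).items() if not done]

checklist = DeployerChecklist(risk_management_policy=True)
print(checklist.outstanding_duties())  # remaining items to address
```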
To encourage companies to mitigate risks, developers and deployers are also given certain legal protections for uncovering and curing violations in compliance with a recognized AI risk mitigation framework.
This law will be enforceable only by the Colorado Attorney General. There is no private cause of action.
States May Regulate AI Based on Existing Privacy Legislation
In addition, existing comprehensive consumer privacy laws, such as the California Consumer Privacy Act (CCPA), may serve as a backdoor to AI regulation. The California Privacy Protection Agency (CPPA) has already proposed draft regulations regarding “automated decision-making,” which may be finalized in the coming months. The CPPA defines automated decision-making as “any system, software, or process — including one derived from machine-learning, statistics, or other data-processing or artificial intelligence — that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision making. Automated decision-making technology includes profiling.”
Once finalized, these regulations will extend the CCPA’s obligations and rights, such as transparency requirements and consumer access rights, to automated decision-making technologies.
Numerous States Are Regulating Deepfakes
Tennessee’s ELVIS Act expands the state’s existing right of publicity law to prohibit the use of AI to mimic a person’s photograph, voice, or likeness without permission; violators may be subject to civil and criminal penalties. The law takes effect on July 1, 2024.
As of May 3, 2024, 13 states have passed laws regulating the use of AI in political advertising, and at least 18 more have bills under consideration. Broadly speaking, these laws prohibit political campaigns or advertisers promoting a candidate for elective office from using AI to create deepfake look-alikes or sound-alikes of real people, or to depict scenes or scenarios that did not actually take place, unless the advertisement includes a clear and conspicuous disclaimer stating that it was produced using AI. Some states, including Texas and Minnesota, prohibit such deepfakes outright when they are made with the intent to injure a candidate or influence the results of an election.