California Employees in the Artificial Intelligence (AI) Sector Gain Whistleblower Protection

Andrew M. DeLucia

December 3, 2025

“Catastrophe and creation are twins.” – Barbara Marx Hubbard

On January 1, 2026, the Transparency in Frontier Artificial Intelligence Act (the “Act”) takes effect in California.  The Act is designed to help prevent “catastrophic” risks created by advanced Artificial Intelligence (AI) systems by setting requirements for large developers working with “frontier models.”

A key element of the Act establishes whistleblower protections for covered employees, protecting them from retaliation when reporting violations or disclosing information about critical safety threats.  The Act itself explains the importance of including such provisions, stating: “Whistleblower protections and public-facing information sharing are key instruments to increase transparency.”

Employers must provide clear notice to all employees about their rights under the Act, including annual written notice (with acknowledgement of receipt), posting notices in the workplace, notifying new hires, and confirming remote employees receive equivalent information.

California is the first state to impose transparency, reporting, and whistleblower protections for developers of advanced frontier AI models.

Key Points of the Whistleblower Protections

Who is a “covered employee” under the Act?
The definition of a “covered employee” is narrow and includes only employees responsible for assessing, managing, or addressing risk of “critical safety incidents.”

What is a “frontier model” and a “catastrophic risk”?
Whether a model qualifies as a “frontier model” is determined by the amount of computing power used during its training, fine-tuning, or modification.

The Act defines “catastrophic risk” as a foreseeable and material risk that a frontier developer’s development, storage, use, or deployment of a foundation model will materially contribute to the death of, or serious injury to, more than 50 people, or to more than one billion dollars in damage to, or loss of, property, arising from a single incident involving a foundation model doing any of the following:

(1) Providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon;

(2) Engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense; or

(3) Evading the control of its frontier developer or user.

The relatively high threshold of what constitutes a “catastrophic risk” and who is a “covered employee” appears aimed at ensuring that the whistleblower protections are focused on significant safety concerns.

What conduct does the Act prohibit regarding whistleblowing?
Employers are prohibited from adopting or enforcing any rule, regulation, policy, or contract that prevents employees from disclosing information about activities that pose a catastrophic risk or otherwise violate the law.  Employees may report concerns to the attorney general, federal authorities, persons with authority over the employee at the company, or authorized investigators.

How can a covered employee report a suspected violation?
Employers must implement a reasonable internal process for covered employees to anonymously report concerns about catastrophic risks or violations of the Act.  Employers are required to provide monthly updates to the employee who reported the potential violation on the status of any investigations or responsive actions.  On a quarterly basis, summaries of disclosures and responses must be shared with company officers and directors, except when those individuals may be implicated in the alleged wrongdoing.

What are the consequences of a violation?
The attorney general may bring a civil action seeking penalties of up to one million dollars per violation.  Covered employees who experience retaliatory adverse employment actions also may sue, with remedies including monetary damages for any harm suffered, injunctive relief, and recovery of attorney’s fees.

What should impacted employers do now?
Employers involved in developing AI should review and update their internal rules, regulations, policies, and contracts to ensure protected reporting of critical safety incidents, whether the report is made within the organization or externally.  Employers also should consider consulting with their service providers and contractors to ensure that those entities adopt compliant practices.

Of course, employers should review any adverse employment action taken against a covered employee to ensure it is not the product of retaliation.

 

The author of this article, Andrew DeLucia, is a member of the Bars of Pennsylvania and New Jersey.  This article is designed to provide one perspective regarding recent legal developments, and is not intended to serve as legal advice in Pennsylvania, New Jersey, California or any other jurisdiction, nor does it establish an attorney-client relationship with any reader of the article where one does not exist.  Always consult an attorney with specific legal issues.
