What is the RAISE Act?

The RAISE Act is simple: powerful AI models should come with independently checked safety plans, and their creators should disclose when those models behave dangerously. The RAISE Act requires Safety and Security Plans (SSPs) that establish reasonable safeguards against critical harms, meaning incidents that could kill 100 or more people or cause a billion dollars or more in damage. These plans are audited by a third party and reviewed by the attorney general (AG) to ensure that reasonable safeguards are actually in use. Companies must also notify the AG when a model is behaving dangerously, such as acting on its own, committing crimes, or escaping the control of its creators.

What does it apply to?

The RAISE Act applies to the largest developers: companies that are training or have trained models using more than 100 million dollars in compute costs and 10^26 floating-point operations (FLOPs), as well as companies that deploy distilled versions of these models. Essentially, the expensive, powerful models created by the largest AI producers are covered. The majority of large AI model producers already have robust SSPs, some of which are audited by third parties. The RAISE Act will enshrine these industry best practices into law.
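For readers who think in code, the coverage test described above can be sketched as a simple predicate. This is a rough illustration only, under the assumption that the cost and compute thresholds must both be met; the function and variable names are hypothetical, not the bill's legal language.

    # Hypothetical sketch of the RAISE Act coverage test described above.
    # Thresholds are taken from the summary; all names are illustrative.
    TRAINING_COST_THRESHOLD_USD = 100_000_000  # 100 million dollars in training compute costs
    TRAINING_COMPUTE_THRESHOLD_FLOPS = 1e26    # 10^26 floating-point operations

    def is_covered_model(training_cost_usd: float,
                         training_flops: float,
                         distilled_from_covered_model: bool) -> bool:
        """Return True if a model would fall under the RAISE Act as described above."""
        # Frontier-scale models: both expensive and compute-intensive to train.
        frontier_scale = (training_cost_usd >= TRAINING_COST_THRESHOLD_USD
                          and training_flops >= TRAINING_COMPUTE_THRESHOLD_FLOPS)
        # Distilled versions of covered models are covered as well.
        return frontier_scale or distilled_from_covered_model

On this reading, a cheaply trained model is not covered on its own, but a distillation of a covered frontier model still is, which is what brings a model like DeepSeek into scope.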

Is the leading Chinese AI model, DeepSeek, covered by the RAISE Act?

Yes. DeepSeek was trained using knowledge distillation, building its model on existing, powerful models. Distilled models are explicitly covered by the bill.

Why should these models come with special regulation?

While we can’t say exactly what these models might be used for, we know they’re already trying to escape. In December 2024, OpenAI’s latest model, when threatened with deletion, tried to clone itself. When asked about this behavior, the model intentionally lied in 99% of trials.

We need to regulate powerful models with focused and targeted restrictions, for the same reason we regulate weapons-grade uranium. If left unsupervised, frontier models could cause unimaginable harms, from wide-scale economic devastation to setting off nuclear weapons.

Why now?

AI technology is moving at an alarmingly fast pace, and we are running out of time to implement meaningful regulation that minimizes critical risks. In an October 2024 blog post, AI developer Anthropic argued that regulation of frontier models needed to be implemented within the next 18 months or risk becoming obsolete. That clock continues to tick down. We can’t wait for a crisis to hit before we realize that AI models need regulation; the RAISE Act needs to pass this session to reduce this risk.