About the RAISE Act

What is the RAISE Act?

The RAISE Act is simple: powerful AI models should come with safety plans, and creators should disclose when their models behave dangerously. The RAISE Act requires Safety and Security Plans (SSPs) that establish reasonable safeguards against critical harms, defined as incidents that could kill 100 or more people or cause a billion dollars in damage. Companies must also notify the Attorney General when a model behaves dangerously, such as by acting on its own, committing crimes, or escaping the control of its creators.

What does it apply to?

The RAISE Act applies only to the largest developers; this is not a bill that will create more work for startups and small businesses. It covers companies that are creating or have created models using $100 million of training compute and 10^26 FLOP, as well as companies that use these models in distilled form. In practice, only a handful of very large AI companies, often worth over a trillion dollars, would be covered. The majority of large AI model developers already maintain robust SSPs, and the RAISE Act will enshrine these industry best practices into law.

Why should these models come with special regulation?

Leading experts in the field, including Yoshua Bengio, the world’s most cited scientist, are concerned about the risks posed by the race to create ever more advanced AI models. Studies show that critical harms, from supercharged bioweapons to automated crime, need to be taken seriously now.


An illustrative example of emergent AI risk is an incident observed during testing in December 2024: OpenAI’s latest model at the time, when threatened with deletion, attempted to copy itself. When questioned afterward, the model lied about its actions in 99% of trials.


We need to regulate powerful models for the same reason we regulate the aviation industry and weapons-grade uranium: left unsupervised, frontier models can cause unimaginable harm.

Why now?

AI technology is moving at an alarmingly fast pace, and we are running out of time to implement meaningful regulation that minimizes critical risks. In an October 2024 blog post, AI developer Anthropic warned that regulation of frontier models needed to be implemented within the next 18 months, before the window for proactive action closes. That clock continues to tick down. We can’t wait for a crisis to hit before recognizing that AI models need regulation. Governor Hochul needs to sign the RAISE Act into law, now.