New York State passes RAISE Act for frontier AI models

Synopsis
New York lawmakers have approved the Responsible AI Safety and Education (RAISE) Act, mandating transparency and safety measures for frontier AI models. Supported by AI experts, the Act requires AI labs to release safety reports and report incidents, with penalties for non-compliance. India's AI adoption is growing, prompting increased demand for AI trust and safety professionals.
The legislation is a reworking of a previous AI safety bill that was ultimately vetoed. That earlier bill targeted only large-scale models and did not address high-risk deployments or smaller but potentially dangerous models.
The Act now awaits the approval of New York Governor Kathy Hochul, who can sign it, send it back for amendments, or veto it.
The key provisions of the proposed RAISE Act include:
- Requiring AI labs to release safety and security reports on their frontier AI models
- Mandating labs to report safety incidents, such as concerning AI model behaviour or bad actors gaining access to AI systems
- Imposing civil penalties of up to $30 million for failure to comply
A recent global survey by IBM revealed that AI adoption in India is higher than in other countries. However, much of this is experimentation, while adoption at scale still lags.
Hiring in this space has surged 36% year-on-year, and demand for AI trust and safety professionals is expected to grow by 25-30% in 2025, data from TeamLease Digital showed.