Lummis Proposes RISE Act for AI Transparency
Source: coindesk.com
RISE Act Aims for AI Transparency
Senator Cynthia Lummis (R-WY) has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, legislation intended to clarify liability when professionals use artificial intelligence (AI). The bill would push AI developers toward transparency, though it stops short of requiring open-source models.
According to a press release from Lummis, professionals such as doctors, lawyers, engineers, and financial advisers would remain legally responsible for their advice under the RISE Act, even when AI systems inform that advice. AI developers, for their part, can avoid civil liability when problems arise if they have publicly shared model cards.
Model Cards Explained
The proposed bill defines model cards as technical documents that detail an AI system's training data sources, intended uses, performance metrics, limitations, and potential failure modes. This documentation is meant to help professionals judge whether a given tool is suitable for their work.
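To make the concept concrete, a model card covering the fields the bill describes might look something like the sketch below. This is a hypothetical illustration serialized as a Python dictionary; the field names, model name, and values are assumptions for demonstration, not language from the legislation.

```python
# Hypothetical sketch of a RISE Act-style model card.
# All field names and values are illustrative assumptions,
# not text drawn from the bill itself.
model_card = {
    "model_name": "example-clinical-assistant",  # hypothetical model
    "training_data_sources": [
        "licensed medical literature",
        "de-identified clinical notes",
    ],
    "intended_uses": [
        "drafting differential-diagnosis summaries for physician review",
    ],
    "performance_metrics": {
        "evaluation_set": "internal QA benchmark",  # assumed evaluation
        "accuracy": 0.91,
    },
    "limitations": [
        "not validated for pediatric cases",
    ],
    "known_failure_modes": [
        "may cite retracted studies",
    ],
    # The bill requires documentation updates within 30 days of
    # new versions or newly discovered failure modes.
    "last_updated": "2025-06-01",
}

if __name__ == "__main__":
    # Print the card so a professional could review each disclosure.
    for field, value in model_card.items():
        print(f"{field}: {value}")
```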
In the press release, Lummis said Wyoming values both innovation and accountability, and that the RISE Act establishes predictable standards to encourage safer AI development while protecting professional autonomy. She added that the legislation does not create blanket immunity for AI.
Limits to Immunity
The immunity provided by the Act comes with clear limits. Developers lose protection in cases of recklessness, willful misconduct, fraud, knowing misrepresentation, or uses outside the scope of professional practice. The RISE Act also imposes ongoing accountability: AI documentation and specifications must be updated within 30 days of deploying a new version or discovering a significant failure mode, ensuring continuous transparency.
The RISE Act does not mandate fully open-source AI models. Developers may withhold proprietary information that is unrelated to safety, but each omission must be accompanied by a written justification for the trade-secret exemption.
Simon Kim, CEO of Hashed, has previously warned about the dangers of centralized, closed-source AI. Kim argued that OpenAI is not open and is controlled by a few people, which he considers dangerous. He likened building this kind of closed-source foundation model to creating a 'god' without understanding how it works.