Lummis Proposes RISE Act for AI Transparency

Source: coindesk.com

Published on June 13, 2025

Updated on June 13, 2025

Senator Lummis proposing the RISE Act for AI transparency

RISE Act Pushes for Greater AI Transparency

Senator Cynthia Lummis (R-WY) has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a bill designed to address AI transparency and accountability. The legislation aims to clarify liability for professionals using AI tools, ensuring they remain responsible for their advice even when informed by AI systems. While the bill promotes transparency, it does not mandate open-source AI models.

The RISE Act requires AI developers to publicly share model cards, which are detailed documents outlining an AI system's training data, intended uses, performance metrics, and potential limitations. This transparency is intended to help professionals determine whether an AI tool is suitable for their work. In return for complying with these disclosure requirements, developers can avoid civil liability.

Model Cards: A Key Component of the RISE Act

Model cards are central to the RISE Act’s transparency goals. These technical documents provide critical information about an AI system, including its training data sources, performance metrics, and potential failure modes. By making this information public, the Act aims to empower professionals to make informed decisions about the AI tools they rely on. Developers are required to update these documents within 30 days of deploying new versions or discovering significant issues, ensuring ongoing transparency.
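To make the idea concrete, a model card can be thought of as a structured disclosure record. The RISE Act does not prescribe any file format, so the sketch below is purely illustrative; every field name and value is a hypothetical stand-in for the categories of information the Act describes (training data sources, intended uses, performance metrics, known limitations, and the 30-day update obligation).

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: the RISE Act describes what must be disclosed,
# not how it must be encoded. All field names here are hypothetical.
@dataclass
class ModelCard:
    model_name: str
    version: str
    training_data_sources: list[str]       # where the training data came from
    intended_uses: list[str]               # professional contexts the model targets
    performance_metrics: dict[str, float]  # e.g. accuracy on evaluation sets
    known_limitations: list[str]           # potential failure modes
    last_updated: date                     # to be refreshed within 30 days of a
                                           # new version or a discovered issue

# Hypothetical example of a completed disclosure
card = ModelCard(
    model_name="ExampleAdvisorModel",
    version="2.1",
    training_data_sources=["licensed legal corpora", "public regulatory filings"],
    intended_uses=["drafting support for licensed attorneys"],
    performance_metrics={"benchmark_accuracy": 0.91},
    known_limitations=["may cite outdated statutes"],
    last_updated=date(2025, 6, 13),
)
```

The point of such a record is simply that a professional reviewing it can judge, from the stated sources, metrics, and limitations, whether the tool is appropriate for a given task.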

Balancing Accountability and Innovation

The RISE Act seeks to strike a balance between accountability and innovation. Senator Lummis emphasized that the legislation does not grant blanket immunity to AI developers. Instead, it establishes clear standards for liability, ensuring that developers remain accountable for their products while encouraging innovation. The Act also allows developers to withhold certain proprietary information, provided the withheld material is unrelated to safety and the omission is justified with a written explanation.

Limits to Immunity

While the RISE Act provides some immunity to AI developers, it sets specific limits. Developers are not protected in cases of recklessness, willful misconduct, fraud, or knowing misrepresentation. Additionally, the immunity applies only when AI is used in a professional context, leaving developers accountable for the broader impact of their AI systems. This approach aims to encourage responsible AI development while protecting professional autonomy.

Industry Perspectives on AI Transparency

The RISE Act has sparked discussions within the AI industry about the importance of transparency. Simon Kim, CEO of Hashed, previously highlighted the risks of centralized, closed-source AI models. He criticized OpenAI for its lack of transparency, comparing the creation of closed-source foundational models to building systems without understanding their inner workings. The RISE Act addresses these concerns by promoting transparency without mandating fully open-source models.

Future Implications

The RISE Act represents a significant step toward enhancing AI transparency and accountability. By establishing clear standards for liability and transparency, the legislation aims to build trust in AI systems among professionals and the public. As AI continues to play an increasingly important role in various industries, the RISE Act could set a precedent for future regulations, ensuring that AI development remains responsible and accountable.