
Trump's AI Executive Order: Streamlining Regulations, Not Employer Obligations

Source: jacksonlewis.com

Published on January 6, 2026

Updated on January 6, 2026


Background

President Trump's recent executive order (EO) on artificial intelligence (AI) aims to establish a unified national policy for AI, reducing regulatory fragmentation across states. The EO directs federal agencies to assess and challenge state AI laws that conflict with federal objectives, signaling a push for a streamlined regulatory framework. However, the EO does not alter the existing antidiscrimination statutes that govern employment decisions, leaving employer obligations unchanged.

The EO's central aim is a cohesive national approach to AI regulation that minimizes the patchwork of state-level laws the administration views as a barrier to innovation. By directing federal agencies to evaluate state AI regulations, the administration signals its willingness to use federal authority to advance that unified framework.

Despite this focus on regulatory streamlining, the EO does not impact the core laws that protect employees from discrimination. Title VII of the Civil Rights Act, the Americans with Disabilities Act, the Age Discrimination in Employment Act, and Section 1981 remain the central frameworks governing employer liability, regardless of whether decisions are made by humans or AI algorithms.

The Legal Landscape for AI in Employment

Two distinct bodies of law are at issue: AI-specific statutes that regulate how automated tools are built and deployed, and long-standing civil rights laws that govern the legality of employment decisions. The EO addresses the first category; employment liability continues to be determined by the second.

Courts are increasingly applying traditional civil rights principles to evaluate automated hiring and screening tools. Plaintiffs are asserting familiar theories, such as disparate impact and disparate treatment, and courts have allowed several cases to proceed under existing law. This trend underscores that the introduction of AI does not alter the fundamental legal doctrines that apply to employment decisions.

Employers must therefore evaluate AI-influenced decisions under traditional discrimination frameworks. This includes maintaining documentation supporting job-relatedness, keeping clear records of criteria and business rationale, and assessing validation evidence when tools influence employment outcomes. These obligations remain unchanged, regardless of the technology used.

Practical Implications for Employers

For employers, the practical focus remains compliance with existing antidiscrimination laws. Beyond the documentation and validation practices described above, this means building adaptable governance processes that can adjust as tools, regulations, and business needs evolve.

The EO may reshape certain AI-governance rules, but it does not alter the laws that most directly affect employers. Title VII and analogous state statutes continue to govern employment decisions, regardless of how those decisions are made. Employers should therefore ground their AI governance in long-standing antidiscrimination law, which will continue to guide compliance and legal obligations.

In summary, while the EO aims to streamline AI regulation at the national level, it leaves employer responsibilities under antidiscrimination laws intact. Employers must continue to prioritize compliance with these laws, regardless of the technology used in their decision-making processes.