Trump Administration Issues Executive Order on National AI Policy and Deregulation
On December 11, 2025, the White House issued an Executive Order (EO), “Ensuring a National Policy Framework for Artificial Intelligence,” that establishes a new federal policy on artificial intelligence (AI). The EO states that it is “the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” The EO asserts that leadership in AI is critical to U.S. national and economic security and dominance and that “AI companies must be free to innovate without cumbersome regulation.” To accomplish this, the EO seeks to challenge “onerous” state AI laws through various federal actions.
The EO is a revised draft of an EO that was circulated by the media in mid-November 2025 but was later put on hold.
Policy Direction
The EO raises concerns that “excessive” and “onerous” state AI laws “thwart” innovation by creating a patchwork of different regulatory regimes that make compliance challenging, particularly for start-ups, and that state laws are increasingly responsible for requiring entities to “embed ideological bias within models.” The EO specifically calls out concerns regarding Colorado law “banning algorithmic discrimination.” There is also concern raised that states impermissibly regulate beyond state borders, impinging on interstate commerce.
Federal Action
The EO provides for several approaches to challenge state AI laws or pressure states to withdraw legislation or roll back existing laws. These actions focus on legal challenge, limiting available funding, development of federal requirements, guidance on interpretation of federal law, and legislative proposals. Specifically, the EO provides that:
- AI Litigation Task Force to challenge state AI laws: Within 30 days, the U.S. Attorney General must establish an “AI Litigation Task Force” to challenge state AI laws inconsistent with the EO policy, including on grounds that they are “unconstitutionally regulating interstate commerce,” are preempted by federal law, or are otherwise unlawful;
- Commerce Secretary to publish state laws conflicting with the EO: Within 90 days, the U.S. Secretary of Commerce shall evaluate and publish existing state laws that conflict with the EO, including laws that “require AI models to alter their truthful outputs” or “that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution.” The evaluation may additionally identify State laws that promote AI innovation;
- Commerce Secretary to publish Policy Notice on state eligibility to receive BEAD funding: Within 90 days, the U.S. Secretary of Commerce shall issue a Policy Notice outlining the eligibility conditions for states to receive remaining Broadband Equity, Access, and Deployment (BEAD) funding, the $42.45 billion federal grant program created under the Infrastructure Investment and Jobs Act (IIJA) to expand high-speed broadband across the U.S. This approach follows Congress’s earlier attempt—and ultimate bipartisan rejection—to attach an “AI moratorium” on state AI laws to the tax-and-spending bill signed into law earlier this year. The EO nevertheless specifies that states identified as having “onerous” AI laws that conflict with federal policy may, to the maximum extent permitted by federal law, be deemed ineligible for non-deployment BEAD funds;
- Agencies to assess if grantee states enacted AI laws contradictory to the EO: Directs agencies to assess their discretionary grant programs for states and determine whether i) those states have enacted AI laws that are contradictory to the EO, and ii) agencies may condition such grants on states not enacting an AI law that conflicts with the policy outlined in the EO. The EO also directs agencies to determine whether, for those states that have enacted conflicting laws, the agencies may condition such grants on those states entering into a binding agreement with the relevant agency not to enforce any such laws during the performance period in which it receives the discretionary funding;
- Federal Communications Commission (FCC) to determine whether to adopt federal reporting and disclosure standards for AI models: Within 90 days, the FCC Chairman, in consultation with the Special Advisor for AI and Crypto, shall “initiate a proceeding to determine whether to adopt a federal reporting and disclosure standard for AI models that preempts conflicting State laws”;
- Federal Trade Commission (FTC) to issue a policy statement on the FTC Act’s prohibition on unfair and deceptive practices as applied to AI models: Within 90 days, the FTC Chairman shall issue a policy statement on the application of the prohibition on unfair and deceptive practices to AI models in order to preempt state laws “mandating deceptive conduct”; and
- Administration to develop a legislative proposal for a federal AI regulatory framework: The Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology shall develop a legislative proposal for a federal policy framework for AI that preempts state law. Legislative recommendations shall not propose preempting otherwise lawful state AI laws relating to: i) child safety protections; ii) AI compute and data center infrastructure, other than generally applicable permitting reforms; iii) state government procurement and use of AI; and iv) other topics as shall be determined.
Notably, while the EO directs the Administration to challenge certain AI-related state laws, it does not render them automatically void or unenforceable by states. The statutes remain in full force unless and until they are amended, repealed, or struck down through appropriate legal or administrative processes.
Impact on Frontier AI Developers and General AI Developers
Although the EO references “purely speculative suspicion that AI might ‘pose significant catastrophic risk,’” the safety debates among most frontier developers have rarely been focused on whether these types of risks exist, but rather how and to what extent government should be involved in mitigating them. The EO does not seem to target the various AI safety efforts of frontier model developers (individually and collectively) aimed at preventing the development and proliferation of biological, chemical, or nuclear weapons, the use of AI in cyberattacks, and the like. However, the EO’s clear skepticism regarding “algorithmic discrimination” and its potential to “embed DEI” into models suggests that the administration may take aim at safety efforts in this area.
Impact on AI Application Developers and Customers
Developers of AI applications are already subject to a patchwork of sector-specific state laws that are not specifically aimed at AI but often apply to their AI functionalities. It is unclear whether this EO deems those laws as “State AI laws” or whether the EO is truly focused on AI-specific laws like the named Colorado statute. Some sectors of specific interest include healthcare, fintech, and housing.
Impact on Healthcare
AI has increasingly been used in healthcare to streamline administrative functions, engage patients, provide clinical decision support, and for screening and triage. In addition to the broad state AI laws, there are a number of states that have passed laws specifically impacting use of AI in healthcare—see Wilson Sonsini’s recently published advisory, “Legal Framework for AI in Mental Healthcare.” The EO seems to be focused on comprehensive AI laws; however, it is likely that the directives in the EO would sweep up healthcare-specific AI laws as well.
Existing state laws, such as those related to the practice of medicine and privacy, would not likely be affected by this EO and would continue to play a role in digital health companies’ and healthcare organizations’ development and adoption of AI in healthcare.
As stated above, the EO directs the FCC to initiate a proceeding to determine whether to adopt a reporting and disclosure standard to preempt conflicting state laws. For healthcare AI companies, a uniform standard on disclosure and reporting could reduce compliance burdens, because existing healthcare AI laws impose a patchwork of requirements governing how disclosures must be made, how frequently, and by whom.
Impact on Fintech
The fintech industry (including “wealthtech,” “regtech,” “insurtech,” digital banking, payments, cryptocurrency and digital assets, lending, and personal finance management) has already integrated AI into most facets of the industry and related technology and platforms. AI is used heavily in fintech for fraud detection and prevention, credit scoring and underwriting, customer service, algorithmic trading and investment management, robo-advisors, KYC and AML compliance, cybersecurity, and more. AI adoption and integration in fintech products and services is showing no signs of slowing, and if anything is likely to increase over time and through the near future.
To date, the majority of U.S. state AI laws have not directly focused on key fintech applications and risks, in part because much of the fintech industry is subject to a patchwork of federal regulations and regulatory agencies that already preempt state law. It is therefore possible that the EO might not directly impact fintech as much as certain other sectors. However, as with healthcare, existing state laws, such as those related to consumer protection and privacy, would likely continue to apply (at least in part) to fintech companies’ use of AI, even after the implementation of the EO’s directives and recommendations.
Impact on Housing
State legislators, as well as antitrust enforcers at all levels of government, have focused on AI models and pricing recommendation algorithms in the housing sector that make use of data collected from competing landlords or property managers. A number of municipalities across the country have enacted ordinances prohibiting or restricting such software, and in October 2025 the first statewide bill was signed into law in New York. In the same month, California enacted a broader law clarifying the application of its antitrust law to common pricing algorithms, including those based on AI and those affecting housing pricing. The New York law has already been challenged as an unconstitutional infringement on First Amendment rights. In addition, a federal antitrust suit concerning a company’s price recommendation software was settled last month. The settlement differentiates between the use of data in AI model training and in runtime operation, permitting the use of certain pooled historical pricing data in the former. That distinction suggests that the Administration may view outright prohibitions like the New York law as inconsistent with its preferred national AI policy, potentially imperiling them under this EO.
Conclusion
The EO establishes a national policy framework for AI. Companies should continue to comply with all existing state and federal regulations as the EO is implemented. This EO does not render existing AI laws automatically void or unenforceable by states.
Contact Us
Wilson Sonsini works with clients developing, deploying, and using AI across the regulatory spectrum, and we are actively monitoring state and federal AI laws and announcements, including agency actions taken pursuant to this EO.
For more information, please contact Jodi Daniel, Andrea Linna, Maneesha Mithal, Scott McKinney, Barath Chari, Brian Smith, or any member of Wilson Sonsini’s Artificial Intelligence and Machine Learning, Communications, Antitrust and Competition, Data, Privacy, and Cybersecurity, and Digital Health practices.
Brad Tennis, Lidia Niecko-Najjum, Nawa Lodin, Seamus Taylor, and Sophia Galleher contributed to the preparation of this alert.
