Meta Refuses to Sign EU’s AI Code of Practice, Warns of Overreach as Europe Pushes Ahead with Landmark Regulation

Meta Platforms has formally declined to sign the European Union’s newly unveiled Code of Practice for General-Purpose Artificial Intelligence, escalating tensions between Big Tech and European regulators just weeks before key provisions of the EU’s landmark AI Act are set to take effect.
In a detailed statement shared on LinkedIn, Meta’s Chief Global Affairs Officer Joel Kaplan criticized the voluntary framework, warning it could “stunt innovation” and introduce legal uncertainty for AI developers across the bloc. Kaplan said the EU was “heading down the wrong path on AI,” describing the Code as regulatory overreach that “goes far beyond the scope” of the AI Act itself.
Although the Code of Practice is non-binding, companies that sign on voluntarily gain practical benefits, such as a simplified compliance pathway, lighter regulatory obligations, and greater legal clarity when the AI Act's rules for general-purpose AI take effect in August 2025. Non-signatories like Meta will have to navigate the full weight of the AI Act's compliance requirements without those easing measures.
Meta's refusal to sign is the latest in a growing wave of resistance from both global tech giants and European firms. A coalition of 44 European tech leaders, including Airbus, Siemens, Mistral, and ASML, had earlier called on the European Commission to delay the implementation of the AI Act's Code of Practice by two years, arguing that the framework is moving too quickly and could undermine Europe's competitiveness in the global AI race.
In their open letter, the companies warned that the current scope of the Code places an “undue burden” on developers of general-purpose AI models and risks “jeopardizing the entire European AI ecosystem.” They called for more time to test, refine, and adjust compliance mechanisms before they become operational.
Despite mounting opposition, the European Commission has refused to delay, insisting that the framework will proceed as scheduled and that the AI Act is essential for consumer safety and trust in emerging technologies. The first compliance obligations for general-purpose AI developers take effect on August 2, 2025, with full obligations for high-risk systems following a year later, in August 2026.
The Code of Practice itself outlines voluntary standards in several key areas:
- Companies are expected to document the data used to train AI models, disclose technical capabilities, and share summaries that explain how models behave.
- The Code also requires copyright compliance safeguards, transparent risk mitigation procedures, and governance frameworks to manage systemic AI threats.
- Signatories must additionally demonstrate how they address safety concerns and avoid discriminatory outputs.
The EU says the Code is not just about enforcement—it’s meant to provide legal certainty, especially for general-purpose models that fall into a gray area of the AI Act. However, critics like Meta argue that it essentially extends legal obligations under the guise of “voluntary” rules, setting a precedent for indirect enforcement.
“We are committed to building and deploying AI responsibly and have already published a number of transparency and safety resources for our generative AI models,” Meta said in a statement. “We’ll continue to engage with the Commission and look forward to supporting the goals of the AI Act in practice.”
As the deadline nears, other AI giants are staking out their positions: OpenAI has said it intends to sign the Code, while Google has yet to announce a decision. Meta says it may still join later if its concerns are addressed, but its current stance reflects a broader rift between regulators focused on precaution and tech companies emphasizing flexibility and speed in innovation.
Analysts say the rift could determine the pace and direction of AI development in Europe, where concerns about misuse, bias, and safety are being weighed against the continent’s lagging position in the AI race.
Ultimately, the EU appears determined to use its regulatory muscle to set global norms for artificial intelligence, regardless of whether Big Tech chooses to cooperate.