TL;DR:
- The US government is pushing back against the EU’s AI Code of Practice
- Big Tech companies are lobbying to water down the regulations
- The code is meant to promote transparent and copyright-conscious AI development
- Critics say the regulations stifle innovation and are burdensome
The European Union’s (EU) AI Code of Practice has become a subject of controversy, with Big Tech companies and the US government pushing back against it. According to a Computerworld report, the US government’s Mission to the EU sent a letter to the European Commission opposing adoption of the code in its current form. The code is meant to promote more transparent and copyright-conscious AI development, but critics argue that it is burdensome and stifles innovation.
Background on the AI Code of Practice
The AI Code of Practice is a voluntary code that aims to help providers of general-purpose AI models demonstrate compliance with the EU’s AI Act. The code is being drafted by a diverse group of stakeholders, including industry organizations, copyright holders, civil society representatives, and independent experts. However, Big Tech companies such as Meta and Google have been lobbying to water down the regulations, citing concerns that they are too restrictive and would stifle innovation.
Implications of the US Government’s Pushback
The US government’s pushback against the AI Code of Practice has significant implications for the future of AI regulation. If the code is watered down or scrapped, it could lead to a lack of transparency and accountability in AI development, with serious consequences, including the potential for AI systems to be used in ways that are harmful or discriminatory. As Thomas Randall, director of AI market research at Info-Tech Research Group, noted, “Any organization conducting business in Europe needs to have its own AI risk playbooks, including privacy impact checks, provenance logs, or red-team testing, to avoid contractual, regulatory, and reputational damages.”
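Randall’s mention of provenance logs is concrete enough to sketch. The snippet below is a minimal, hypothetical illustration (not drawn from the code of practice or the AI Act) of how an organization might record each model call so outputs can later be traced; all names, fields, and file paths are illustrative assumptions.

```python
# Illustrative sketch only: one way an enterprise might keep a provenance log
# for generative AI outputs. All names, fields, and paths are hypothetical.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_provenance_log.jsonl")  # hypothetical append-only log file

def log_ai_output(model_name: str, model_version: str, prompt: str, output: str) -> dict:
    """Append one provenance record for a single model call."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        # Hash rather than store raw text, to limit exposure of personal data
        # (one possible privacy-impact mitigation).
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    record = log_ai_output("example-model", "1.0", "Summarise this contract.", "Summary text...")
    print(record)
```

In practice such records would feed audit and red-team workflows, but the exact schema and storage would depend on each organization’s own risk playbook rather than on the code of practice itself.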
Conclusion
The controversy surrounding the EU’s AI Code of Practice highlights the challenges of regulating AI development. While the code is meant to promote transparency and accountability, critics maintain that it is burdensome and will hamper innovation. The US government’s pushback has significant implications for the future of AI regulation, and it remains to be seen how the situation will unfold. As the use of AI continues to grow and evolve, it is essential that regulators and industry leaders work together to develop rules that promote transparency, accountability, and innovation.
References
[^1]: Taryn Plumb (Apr 25, 2025). “US wants to nix the EU AI Act’s code of practice, leaving enterprises to develop their own risk standards”. Computerworld. Retrieved Apr 30, 2025.
[^2]: (Feb 26, 2025). “Tech Giants Push Back at a Crucial Time for the EU AI Act”. pymnts.com. Retrieved Apr 30, 2025.
[^3]: (Feb 21, 2025). “Google, Meta execs blast Europe over strict AI regulation as Big Tech ups the ante”. NBC Connecticut. Retrieved Apr 30, 2025.
[^4]: (Jan 7, 2025). “Setting the rules of their own game: how Big Tech is shaping AI standards”. Corporate Europe Observatory. Retrieved Apr 30, 2025.
[^5]: (Mar 25, 2025). “EU lawmakers warn against ‘dangerous’ moves to water down AI rules”. Financial Times. Retrieved Apr 30, 2025.