Every day, Artificial Intelligence (AI) continues to reshape economies and industries while reshuffling geopolitical power. But as the stakes rise, so does the tension between regulation and innovation. The dominant narrative suggests that regulation hinders AI’s progress, yet this simplistic framing is misleading. With this week’s Paris AI Action Summit convening industry leaders, policymakers, and researchers from around the world, now is the moment to challenge the assumption that rules necessarily constrain progress. 

At Edelman, we’ve seen time and again that trust is the foundation of adoption. Without it, even the most groundbreaking technologies struggle to gain traction. The question on everyone’s mind is: Does AI regulation stifle innovation, or can it help unlock its full potential in a responsible way? 

The AI Battlefield – Global Innovation Meets Regulatory Tensions 

The current AI landscape is shaped by rapid technological advancement, geopolitical tension, and evolving regulatory frameworks. Three forces dominate the debate. 

First, the European Union is pressing ahead with its AI Act, positioning itself as a leader in ethical AI governance. Second, Trump’s new administration has signaled resistance to stringent AI regulation, warning that heavy-handed policies could weaken U.S. tech dominance. 

Finally, against the backdrop of that mano a mano, a resounding wake-up call has come from China with DeepSeek’s disruptive model. Its unexpected breakthrough challenges U.S. AI supremacy, proving that cost-effective, high-performance AI is possible outside Silicon Valley. This raises critical questions: Is AI regulation in the West slowing competitive agility? Or is it a necessary safeguard against ethical pitfalls? 

The EU’s risk-based approach, embodied in the AI Act, has sparked industry concern: critics fear compliance costs will stifle start-ups while benefiting Big Tech. More broadly, there is a valid worry that excessive regulation could hinder innovation by discouraging entrepreneurship. 

So, where does this leave us? The conversation must move beyond a simple pro- or anti-regulation stance and instead focus on what kind of regulation fosters both trust and innovation. 

Debunking the Myth – Why Regulation and Innovation Can Coexist 

The common assumption is that regulation and innovation are opposing forces. History tells us a different story. In the automotive sector, early seat belt mandates were seen as an innovation killer but ultimately boosted consumer confidence and market growth.

In pharmaceuticals, while regulatory oversight slows drug development, it also ensures public trust in life-saving medicines, driving sustainable market expansion.

As for environmental policies, emission limits were seen as business constraints, yet they catalyzed the green energy revolution and the opening of new markets for clean tech that benefit both the economy and the planet. 

AI is no different. When rules are clear, companies can innovate with confidence. So instead of asking whether AI should be regulated, we should ask: How can we design regulations that enable responsible innovation? 

Making Regulation Adaptive, Not Restrictive 

Regulation must be not just a burden to comply with, but a framework that builds trust. AI evolves rapidly; the rules we write today may look outdated tomorrow. That is why we need adaptive, iterative regulation rather than static, one-size-fits-all policies.

Regulatory ‘sandboxes’ are a strong mechanism here, offering a controlled environment for AI innovators to test applications under real-world conditions – while policymakers observe and refine rules for maximum benefit and minimal harm. 

Trust is what makes innovation scalable, sustainable, and socially accepted. In that regard, Edelman’s 2025 Trust Barometer reveals a troubling paradox: trust in AI is declining, with growing concerns over job displacement, misinformation, and ethical misuse. Yet demand for AI-driven solutions is surging, particularly in enterprise applications, healthcare, and automation. 

At the Paris AI Summit, policymakers and industry leaders must move past ideological battles and co-create AI rules that build trust while driving competitiveness. The focus should be on adaptive, risk-based frameworks that don’t punish responsible innovation; regulatory sandboxes that let AI solutions be tested safely before scaling; and global coordination to prevent regulatory fragmentation.

Beyond the Regulation-Innovation Stalemate 

Given the current state of play, this may sound like a stretch. Nevertheless, as the AI ecosystem gathers in Paris, it is ever more critical to reject the false choice between regulation and innovation. Instead, we must embrace trust as the bridge between them – because in AI, as everywhere else, trust is never a constraint.

Yoni Lawson is Head of Tech at Edelman France.