Member Insights
Usman Javaid, Chief Product and Marketing Officer, Orange Business, explores what AI sovereignty means in practice to enterprises and how their requirements could play to Europe's strengths as AI deployment scales.

Europe’s AI advantage is trust. Its weakness is fragmentation
People keep saying Europe is behind in AI, but behind in what exactly? If the yardstick is the number of consumer apps launched at breakneck speed, perhaps there's a point. If it is the number of billion-dollar model builders, also fair. But if we mean capability, whether organisations can use AI safely, at scale, in the parts of the economy that matter, the answer is less clear-cut.
Europe is threaded through the global AI supply chain. A lot of the AI rush over the past couple of years gets credited to a handful of brands that depend on European engineering, European manufacturing, and European standards. We are not absent from AI's development.
Currently, we're struggling to turn that into global influence. This isn't because we can't innovate, but because we don't scale what we innovate. It's why sovereignty, a complex subject, matters more than ever. It isn't a border wall; it's a practical way for Europe to build scale with trust.
Sovereignty definitions blocking scale
Spend time with large European organisations and most leadership teams want to 'do AI'. Pilots kick off, start well, everyone gets excited, and then the real questions arrive.
Where does the data live? Who can access it? Can we prove what the system did, and why?
This is the moment when Europe should shine, because Europe has always been strong on three things: trust, resilience, and regulation, which is exactly what AI at scale needs.
The problem is that our market structure and our investment pathways don’t let those strengths travel far enough, fast enough. Multiple variations of the ‘same’ rules, inconsistent public procurement, and a scaling gap between start-up success and continental impact are creating obstacles.
This brings us to sovereignty. The word often gets heard as purity: build everything locally, own every layer, depend on no one. That isn't realistic, and it isn't what most enterprises are asking for.
When I speak to customers, they’re asking for choice, control, and the ability to change course. The ability to avoid lock-in. The ability to keep sensitive workloads under the right jurisdiction. The ability to demonstrate compliance without turning every project into a legal drill.
So, a better word for it is reversibility. Reversibility is like having a two-way door: it allows you to make decisions today without trapping your future self. You can adopt powerful technologies while keeping your options open. This turns sovereignty from an abstraction into a guiding principle: flexibility and choice by design.
Semantics are important
Europe needs a definition of sovereignty that businesses can implement. It needs to address whether businesses can control how their data is used, audit what their systems do, and move that data if needed.
It gets us out of the false choice between 'Europe builds everything itself' and 'Europe rents the future from elsewhere'. Most organisations are looking for a credible middle path, built around trust and optionality.
Making AI widespread is less about model selection and more about the machinery around it. Models matter, but the gap between "we tried it" and "we rely on it" is usually caused by basics. Governance that sets boundaries people understand. Security that controls access and records actions. Training and support that prevent every team from improvising its own tools in the shadows. Without those, an AI programme either stalls or spreads in an uncontrolled way.
You see this when organisations try to move beyond pilots. Suddenly the question is no longer whether the model produces good answers. It is whether the organisation can run it safely across hundreds of teams, thousands of users, and dozens of systems. That is where risk sits. It’s also where the real value is unlocked.
This is why the next phase of AI adoption will be decided inside regulated, operationally complex sectors. Banks and insurers. Hospitals and utilities. Manufacturers and public services. These are not edge cases. They are where European economic weight lives.
It also explains why operators are being misread in some AI commentary. Too often, telcos are treated as another vertical using AI, alongside retail or media. The more relevant question is how AI is delivered and governed across cloud, networks and security. In practice those layers behave like a single system. Weakness in one part can compromise the whole deployment.
That is why the conversation is shifting towards trusted AI services. Not as a marketing label, but as a description of what enterprises actually need when AI becomes part of the daily operating environment: secure connectivity, compliant infrastructure, cyber resilience, and the ability to run AI in a controlled way across distributed architectures.
Agentic AI will intensify this. Systems that can take actions, not just answer prompts, change the risk profile overnight. When software can act, permissions and traceability become the core of the product. Human oversight is non-negotiable, and it has to show up in design choices: who can do what, what gets logged, what can be reversed, and where responsibility sits when something goes wrong.
So, what should Europe do next?
Stop treating fragmentation as an inconvenience and start treating it as a competitiveness problem. A genuine digital single market is the basis for building companies that can scale across borders with speed and consistency.
Use public procurement strategically, not as protectionism, but as industrial policy with a purpose. If Europe wants trusted digital infrastructure and sovereign capabilities, it should buy them consistently. Procurement can turn standards into markets.
Invest in the parts of the stack that support reversibility: open-source ecosystems, skills pipelines, trusted cloud and security capabilities, and practical governance frameworks. These are the foundations that let organisations adopt AI without locking themselves into a single path.
Europe does not win by trying to do everything, but by doing the right things at meaningful scale, with trust built in from day one. That's what AI sovereignty should mean. Not a retreat from the world, but a confident way to participate in it, with control, choice, and resilience.