Artificial intelligence is moving rapidly into the operational core of industries from media to healthcare. But as adoption accelerates, a different kind of risk is emerging—one driven less by performance than by politics.
The question is no longer simply whether AI works. It is whether it will remain available.

As AI becomes infrastructure, it inherits the vulnerabilities of infrastructure. Previous technology cycles show that when political considerations intervene, a system can be displaced regardless of how well it performs.
Earlier enterprise AI cycles highlight the shift. IBM Watson, celebrated after its Jeopardy! win, struggled in real-world deployments due to high costs, fragmented data, and inconsistent outcomes. The risk was technical, and the consequences, though significant, stayed within corporate control.
Today’s generation of AI providers, including Anthropic, is different. Their systems are not merely tools but infrastructure: embedded in workflows, decision-making, and customer-facing applications. That depth of integration creates dependency.
Switching providers is neither simple nor cheap. It requires rebuilding systems, retraining staff, and reworking data pipelines around new models. The deeper the integration, the higher the cost of change.
A parallel can be drawn with Huawei. Its telecom equipment was widely adopted before geopolitical tensions triggered bans in several countries. Operators were forced to remove and replace infrastructure, often at substantial expense. While governments offered partial support in some cases, the bulk of the cost fell on those who had deployed the technology.
AI now faces a similar, if less visible, exposure. As governments introduce regulations, export controls, and security frameworks, access to certain systems could be restricted or reshaped. For enterprises reliant on external AI providers, such shifts would be disruptive.
Liability structures offer limited protection. Providers typically cap their exposure through contracts, while regulatory actions rarely include compensation for downstream users. The result is a familiar pattern: when systems must be replaced, the adopter pays.
For sectors such as media, healthcare, and finance, where AI is becoming embedded in critical operations, the implications are significant. Disruptions could affect workflows, compliance, and service delivery.
The evolution from IBM Watson to Anthropic reflects a broader shift in risk, from technical execution to geopolitical exposure. As AI becomes infrastructure, it inherits the vulnerabilities that come with that role.
When that infrastructure is challenged, the question is no longer whether it performs, but whether it can be used at all.

