At the recently concluded GITEX AI Asia conference, leaders from Nokia, AI chip specialist Blaize, and Indonesian telecom provider Datacomm shared a clear message about where AI infrastructure is headed: the future will be more decentralized, more local, and increasingly driven by the need for speed and data control.
In their discussion, the executives explained that AI workloads are splitting into two very different paths. AI training—the compute-heavy process of building and refining large models—still makes the most financial sense in centralized data centers, often located in remote regions where electricity, land, and operating costs are lower. Centralized training clusters can be scaled efficiently, packed with high-performance hardware, and optimized for nonstop operation, which helps organizations control costs while pushing model development forward.
But inference—the real-time moment when an AI model is actually used to generate results—has different priorities. Inference needs fast response times, consistent performance, and the ability to operate close to where users and devices are located. That’s why inference architectures are quickly moving away from purely centralized cloud setups and toward edge computing and decentralized deployments.
The executives emphasized two major forces accelerating this shift. The first is latency. Many AI applications, from enterprise assistants to telecom network optimization and real-time analytics, lose value when responses arrive too slowly. Running inference closer to the user—at the edge, within regional hubs, or inside telecom networks—reduces delays and improves reliability.
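To make the latency point concrete, here is a minimal sketch of how an inference gateway might choose between a central cloud endpoint, a regional hub, and an on-premises edge node based on measured round-trip time. The endpoint names, tiers, latency figures, and the simulated probe are illustrative assumptions, not details from the discussion.

```python
import random
import time
from dataclasses import dataclass


@dataclass
class Endpoint:
    name: str
    tier: str              # "edge", "regional", or "central"
    typical_rtt_ms: float   # placeholder used only to simulate network delay


# Hypothetical endpoints; in practice these would come from service discovery.
ENDPOINTS = [
    Endpoint("central-cloud", "central", 180.0),
    Endpoint("regional-hub", "regional", 45.0),
    Endpoint("on-prem-edge", "edge", 8.0),
]


def probe(ep: Endpoint, samples: int = 3) -> float:
    """Estimate round-trip latency in milliseconds.

    A real gateway would time small health-check or inference requests;
    here the delay is simulated so the sketch runs on its own.
    """
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(max(ep.typical_rtt_ms + random.uniform(-3.0, 3.0), 0.0) / 1000.0)
        rtts.append((time.perf_counter() - start) * 1000.0)
    return sum(rtts) / len(rtts)


def pick_endpoint(latency_budget_ms: float) -> Endpoint:
    """Route to the fastest endpoint that meets the latency budget,
    falling back to the fastest one measured if none qualifies."""
    measured = [(probe(ep), ep) for ep in ENDPOINTS]
    measured.sort(key=lambda pair: pair[0])
    for rtt, ep in measured:
        if rtt <= latency_budget_ms:
            return ep
    return measured[0][1]


if __name__ == "__main__":
    chosen = pick_endpoint(latency_budget_ms=50.0)
    print(f"Routing inference to {chosen.name} ({chosen.tier})")
```

With a tight budget, the edge node wins almost every time, which is exactly the dynamic the speakers described: the closer the inference runs to the user, the easier it is to stay inside the application's response-time budget.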
The second driver is data sovereignty. As regulations tighten and businesses become more protective of sensitive information, there’s growing pressure to keep data within specific geographic boundaries or under local control. Decentralized inference makes it easier to process data locally rather than sending it across borders to distant cloud data centers. For telecom providers and cloud service operators, this is becoming a strategic requirement, not just a technical preference.
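A sketch of the sovereignty side, assuming a similar catalogue of endpoints: a gateway would first filter candidates by where the data is legally allowed to be processed, and only then optimize for latency. The region codes, endpoint names, and policy below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Endpoint:
    name: str
    region: str    # jurisdiction where the endpoint physically processes data
    rtt_ms: float  # measured or estimated round-trip latency


# Hypothetical catalogue; regions and latencies are illustrative only.
ENDPOINTS = [
    Endpoint("central-cloud-us", "US", 180.0),
    Endpoint("regional-hub-sg", "SG", 45.0),
    Endpoint("jakarta-edge", "ID", 8.0),
]


def route(allowed_regions: set[str]) -> Endpoint:
    """Keep only endpoints in jurisdictions the data may reach,
    then pick the lowest-latency survivor."""
    candidates = [ep for ep in ENDPOINTS if ep.region in allowed_regions]
    if not candidates:
        raise RuntimeError("No compliant endpoint available for this data class")
    return min(candidates, key=lambda ep: ep.rtt_ms)


# Example: customer data that must stay in Indonesia.
print(route(allowed_regions={"ID"}).name)  # -> jakarta-edge
```

The order of the two checks is the point: residency is treated as a hard constraint, while latency is an optimization within whatever the policy permits.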
The conversation also pointed to a broader industry trend: cloud service providers and telecommunications companies are being pushed toward decentralized architectures that can support sovereign AI strategies. In practice, that means building infrastructure that can deliver AI results locally while still benefiting from centralized training pipelines. It's a "centralized training, distributed inference" model, designed to balance cost efficiency with responsiveness, compliance, and trust.
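One minimal way to picture the "centralized training, distributed inference" split, using only a toy linear model and the standard library: the compute-heavy training step runs once in a central environment, the resulting parameters are serialized into a small artifact, and lightweight edge processes load that artifact to serve predictions locally. The model, data, and artifact format are purely illustrative.

```python
import json
import random


# --- Central training site: the compute-heavy step runs here, once. ---
def train_toy_model(n: int = 1000, lr: float = 0.1, epochs: int = 300) -> dict:
    """Fit y = w*x + b to synthetic data with plain full-batch gradient descent."""
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    data = [(x, 2.0 * x + 1.0 + random.gauss(0.0, 0.1)) for x in xs]
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = sum(2.0 * (w * x + b - y) * x for x, y in data) / n
        gb = sum(2.0 * (w * x + b - y) for x, y in data) / n
        w, b = w - lr * gw, b - lr * gb
    return {"w": w, "b": b}


# --- Distribution: ship the small trained artifact, not the training data. ---
artifact = json.dumps(train_toy_model())


# --- Edge site: load the artifact and serve predictions locally. ---
def edge_predict(artifact_json: str, x: float) -> float:
    params = json.loads(artifact_json)
    return params["w"] * x + params["b"]


print(f"Local prediction at the edge for x=0.5: {edge_predict(artifact, 0.5):.2f}")
```

In production the artifact would be a full model checkpoint and the edge side a proper serving runtime, but the division of labour is the same: heavy training stays centralized, while the inference path lives next to the data and the user.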
As AI adoption expands across industries, this edge-first inference approach is likely to become a defining feature of next-generation AI infrastructure. For businesses, telecom operators, and public-sector organizations alike, the takeaway is simple: the most competitive AI systems won’t just be powerful—they’ll be closer, faster, and more in control of where data lives and how it’s used.