GITEX Asia 2026 ended in Singapore with a message that was hard to miss across the exhibition floor: the AI industry is moving into its next chapter, and it is less about pouring money into massive infrastructure buildouts and more about turning artificial intelligence into results that businesses can measure. Over two packed days on April 9–10 at Marina Bay Sands, conversations repeatedly circled one theme: AI spending is shifting toward inference and edge deployment, where real-world use cases and monetization are expected to accelerate.
The scale of the event reflected how quickly Southeast Asia is becoming a serious player in the global AI ecosystem. GITEX Asia welcomed participants from more than 110 countries, with over 550 enterprises and startups on-site. Investors managing roughly US$350 billion in assets were also in attendance, signaling that capital is actively looking for the next wave of growth—and that wave increasingly appears tied to deployment, not just development.
From AI buildout to AI deployment in the real world
A noticeable change at GITEX Asia 2026 was the focus on what happens after the model is built. Rather than centering on model development alone, discussions leaned into the operational reality of putting AI into production at scale—inside enterprises, industrial environments, and real workflows where performance, reliability, and cost determine whether projects succeed.
Several constraints are now shaping that shift. Compute availability is not unlimited, energy usage is rising as workloads grow, and hardware supply remains a practical concern. These pressures are pushing companies to prioritize efficiency and deployability instead of chasing model scale at any cost. The takeaway: the competitive edge is increasingly found in architectures that run effectively across both data centers and edge environments, especially where latency and cost control matter.
Data center growth meets physical limits
Singapore and nearby markets are racing to expand data center capacity to meet demand, but executives pointed to rising infrastructure constraints. Power availability, cooling requirements, and access to advanced hardware are becoming more prominent bottlenecks—issues that can slow expansion no matter how strong the AI demand signals are.
Those constraints are also changing how systems are designed. Instead of forcing everything into centralized data centers, more companies are adopting hybrid architectures that split workloads between the cloud or core data centers and edge locations closer to where data is generated. This approach helps reduce latency while optimizing resource usage, which is increasingly important when energy and hardware availability are under pressure.
On the show floor, Nokia and Blaize showcased joint work developed in Singapore that combines networking infrastructure with AI inference platforms designed for practical deployment, highlighting how edge inference is becoming a key area of innovation.
Why AI monetization is shifting toward inference and the edge
As investment in AI infrastructure matures, the focus is shifting to where returns actually come from. Multiple executives emphasized that the next phase of AI growth is expected to rely less on training ever-larger models and more on deploying AI into real applications that deliver tangible outcomes.
Stephen Patak framed the transition plainly: inference and edge environments are the next battleground for monetization. Training remains important, but the expectation is that revenue generation will increasingly come from AI systems running in production, supporting operations, improving efficiency, and enabling new services in real-world settings.
Industry participants also noted that inference is still at an early stage, but momentum is building. As power efficiency improves, costs fall, and capable hardware becomes more widely accessible, broader adoption across sectors is expected to follow, especially in enterprise and industrial scenarios where scalable deployment is the real test.
Southeast Asia’s growing strategic role in the global AI race
GITEX Asia 2026 also spotlighted Southeast Asia’s rising importance as AI companies localize infrastructure and adjust to shifting supply chain dynamics. Singapore, in particular, continues to strengthen its position as a regional hub connecting global technology providers with investors and enterprise demand. The presence of both Chinese and international firms reflected an industry-wide transition: deployment capability, efficiency, and infrastructure resilience increasingly matter as much as raw model performance.
Rather than focusing only on training bigger models, many companies are prioritizing optimization, cost control, and scalable deployment that works with the hardware they can actually obtain. It is a pragmatic shift, and one that suggests AI leadership will increasingly be defined by execution: who can deploy, operate, and monetize AI systems reliably at scale.
As GITEX Asia 2026 wrapped up, the direction of the market became clearer. The AI race is no longer driven solely by model size or infrastructure expansion. The next winners will be the ones who can move AI from promise to production—efficiently, at the edge, and in ways that deliver real business value.