NVIDIA’s CUDA platform is coming to RISC-V processors, marking a pivotal moment for the AI sector. This expansion opens new opportunities for RISC-V, an open, royalty-free instruction set architecture (ISA) that is gaining traction alongside the established x86 and Arm ecosystems.
Traditionally, AI data centers have relied on x86 host CPUs from Intel and AMD, or Arm-based designs from NVIDIA and other large vendors. With NVIDIA porting its CUDA software stack to RISC-V, RISC-V CPUs will be able to serve as host processors that orchestrate CUDA workloads on NVIDIA GPUs, and the landscape is poised to change. This could significantly boost RISC-V’s presence in the market.
CUDA, the dominant software platform for GPU-accelerated AI workloads, is central to much of the industry. CUDA support should broaden RISC-V’s adoption, and the architecture brings its own advantages, chief among them the absence of licensing fees. Because the ISA is an open standard, developers and companies can use, modify, and implement it without paying royalties, lowering the barrier for startups and smaller businesses.
RISC-V also brings scalability: its small base instruction set with optional extensions simplifies chip design and verification, which can accelerate development and testing. While Arm and x86 dominate large-scale AI clusters today, RISC-V has significant potential in edge AI applications. Though not yet widely adopted in the AI industry, companies such as Jim Keller’s Tenstorrent are leading the charge.
Tenstorrent is notable for cost-effective AI accelerators such as its Wormhole n150 and n300 cards. RISC-V’s open, royalty-free nature has made it especially attractive to Chinese developers, where it reduces dependence on licensed Western architectures. With NVIDIA’s CUDA support added to the mix, interest in RISC-V for AI applications is likely to grow substantially.