Two RISC-V development boards are shown under the text 'Compute Power Awakens: Your Ticket to RISC-V Desktop AI.'

Sipeed’s Tiny RISC‑V Powerhouse Packs 32GB LPDDR5 + 60 TOPS NPU, Pushing 15 Tokens/s on Qwen‑3.5 35B

Sipeed is stepping into the spotlight with a new RISC-V single-board computer lineup built for local AI workloads: the K3 series. Designed for edge computing, networking, and on-device inference, these compact boards target developers and builders who want serious AI performance without relying on the cloud. Prices start at $299, and the platform is positioned to handle large language models up to the 30B class smoothly, with Sipeed even showcasing performance running Qwen 3.5 35B.

At the heart of the K3 series is the SpacemiT Key Stone K3 AI processor, based on the RISC-V architecture. It combines 8 X100 high-performance CPU cores with 8 A100 AI cores in a fusion-style design that’s rated for up to 130,000 DMIPS of general compute. Clocked at up to 2.4 GHz, it’s pitched as roughly comparable to Arm Cortex-A76-class performance for everyday CPU tasks, while focusing heavily on AI acceleration.

For AI inference, the built-in NPU is rated at up to 60 TOPS (INT4) and supports common AI data types including BF16, FP16, FP8, INT8, and INT4. According to Sipeed, the K3 can run 30B parameter models locally at more than 10 tokens per second, and in its Qwen 3.5 35B demo it reportedly reaches around 15 tokens per second. The message is clear: this is a privacy-friendly “local intelligence” platform aimed at people who want LLM capability on a small machine rather than sending data to remote servers.
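The interplay between those data types and the 32GB memory ceiling is worth spelling out: weight precision determines whether a 30B-class model fits on the board at all. A rough rule-of-thumb calculation (generic bytes-per-parameter figures, not Sipeed-published numbers, and excluding KV cache and runtime overhead):

```python
# Approximate weight footprint of a model at the data types the K3's NPU
# supports. Bytes-per-parameter values are standard rules of thumb, not
# Sipeed-published figures; KV cache and runtime overhead are excluded.

BYTES_PER_PARAM = {
    "FP16": 2.0,
    "BF16": 2.0,
    "FP8": 1.0,
    "INT8": 1.0,
    "INT4": 0.5,
}

def weight_footprint_gb(params_billions: float, dtype: str) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

for dtype in ("FP16", "INT8", "INT4"):
    print(f"30B model at {dtype}: ~{weight_footprint_gb(30, dtype):.0f} GB")
```

By this estimate a 30B model needs roughly 60GB at FP16 but only about 15GB at INT4, which is why the NPU's INT4 rating and the 32GB RAM option go hand in hand for local LLM use.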

Memory is another major part of the pitch. The K3 supports up to 32GB of LPDDR5-6400, delivering up to 51GB/s of bandwidth, which is a key factor for keeping larger models fed with data during inference. Sipeed plans configurations with 8GB, 16GB, and 32GB RAM, making it easier to choose a board that matches your model size and performance expectations.
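To see why bandwidth matters so much, note that LLM token generation is typically memory-bound: in the simplest case, every generated token streams all active weights from DRAM once, so bandwidth divided by active weight size gives a ceiling on tokens per second. A minimal sketch of that rule of thumb, using the article's 51GB/s figure (the MoE comparison is our assumption, not a Sipeed claim):

```python
def bandwidth_bound_tokens_per_s(bandwidth_gb_s: float,
                                 active_params_billions: float,
                                 bytes_per_param: float) -> float:
    """Upper bound on decode speed if each token must stream all active
    weights from DRAM once (ignores KV-cache traffic and compute limits)."""
    active_weight_gb = active_params_billions * bytes_per_param
    return bandwidth_gb_s / active_weight_gb

# Dense 30B model quantized to INT4 (0.5 bytes/param) on a 51 GB/s bus:
print(bandwidth_bound_tokens_per_s(51, 30, 0.5))
# A mixture-of-experts model activating only ~3B parameters per token:
print(bandwidth_bound_tokens_per_s(51, 3, 0.5))
```

A dense 30B INT4 model would be capped around 3 tokens per second by this bound, while a sparse model activating only a few billion parameters per token could go an order of magnitude faster, so the quoted 10–15 tokens/s figures plausibly depend on sparsity, caching, or other optimizations beyond this naive estimate.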

Storage and expansion are built around practical developer needs. The K3 series supports eMMC 5.1, SD card storage, and M.2 NVMe SSDs via PCIe Gen3 x4, giving it far more flexibility than typical hobby SBCs when you need fast local model storage and quick loading. Multimedia capabilities include 4K decode up to 120 fps (H.265/VP9) and 4K encode up to 60 fps, which can be useful for edge AI video workloads such as smart cameras or real-time analytics.

Form factor is where the K3 series gets especially interesting. Sipeed is offering both a compute module approach and a compact Pico-ITX style board. The K3 CoM260 Kit is a 69.6mm x 45mm module using a 260-pin SO-DIMM slot, and it’s designed to be hardware compatible with Jetson Orin Nano carrier boards. That means builders who already have cases, carrier boards, or embedded designs can potentially transition from an ARM-based setup to RISC-V with less friction than usual.

Meanwhile, the K3 Pico-ITX board targets small but capable embedded builds. It emphasizes strong connectivity and modern I/O, including onboard 10GbE plus an additional Gigabit LAN for high-throughput edge deployments. Power and connectivity are streamlined with dual USB Type-C ports supporting USB Power Delivery and DisplayPort Alt Mode, allowing for simpler setups that don’t require bulky adapters. The board is also offered with onboard unified LPDDR5 memory options (16GB or 32GB), tuned for AI throughput needs.

On the software side, the platform is listed with support for a Debian-based OS (Bianbu OS), Docker, and RISC-V KVM virtualization. Sipeed also mentions Ubuntu 26.04 and ROS support for the broader ecosystem, which will matter to robotics developers and teams building AI-powered edge systems.
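For developers who want to prepare container workloads before hardware arrives, Docker's multi-platform tooling can cross-build riscv64 images from an x86 host under QEMU emulation. A hypothetical sketch (the base image tag and packages are illustrative assumptions, not Sipeed-provided artifacts):

```dockerfile
# Hypothetical sketch: a riscv64 container image cross-built on an x86 host.
# Base image tag and package list are illustrative assumptions.
FROM --platform=linux/riscv64 debian:sid
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 && \
    rm -rf /var/lib/apt/lists/*
CMD ["python3", "-c", "import platform; print(platform.machine())"]
```

Built with something like `docker buildx build --platform linux/riscv64 .`, the resulting image should then run natively on the K3 under its Debian-based Bianbu OS, assuming Docker support works as listed.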

In terms of sizing, the K3 series remains compact even with cooling in place, with listed dimensions around 103mm x 90.5mm x 35mm when equipped with a heatsink (depending on configuration). That keeps it within the realm of small embedded boxes, industrial controllers, and space-constrained AI appliances.

Pricing lands in “premium SBC” territory. The 8GB models are listed at $299–$309, while the 32GB versions range from $629–$639, with about a $10 difference between the kit and the ITX-style board. For context, an 8GB Jetson Orin Nano system can be found for less, but Sipeed is betting that RISC-V fans and local LLM builders will value the K3’s AI positioning, memory bandwidth, and upgrade-friendly compatibility with existing carrier board ecosystems.

For anyone looking to build a small AI machine capable of running large language models locally—especially in edge deployments where privacy and low latency matter—the Sipeed K3 series stands out as an intriguing new option in the growing world of RISC-V AI hardware.