MSI’s palm‑sized AI workstation is finally rolling out, bringing serious datacenter‑class horsepower to a 1.19‑liter box. Built on Nvidia’s DGX Spark Grace Blackwell platform, the EdgeXpert series targets developers, startups, and teams that want on‑prem, low‑latency AI without a full rack.
After a staggered launch window, MSI is listing configurations starting at $2,999. While the original MS-C931 page still exists, the purchasable models now use updated product codes and a refreshed chassis. The current lineup includes:
– 99SUS base model: 1 TB NVMe SSD, $2,999
– 11SUS: 4 TB NVMe SSD
– 01SKUS X2: 8 TB NVMe SSD
What’s inside is the real story. The system is powered by Nvidia’s GB10 superchip, which pairs a Grace CPU with a Blackwell GPU for up to 1 petaFLOP of FP4 AI performance in a compact form factor. The Grace CPU combines ten Arm Cortex‑X925 and ten Cortex‑A725 cores, while the Blackwell GPU handles accelerated inference and training on modern model architectures.
Key hardware highlights:
– 128 GB LPDDR5X unified memory delivering 273 GB/s
– NVLink C2C interconnect between CPU and GPU with up to 5x the bandwidth of PCIe 5.0, letting both processors access the same memory pool
– NVMe storage options of 1 TB or 4 TB, plus an 8 TB variant in the lineup
– Four USB‑C ports (USB 3.2 Gen 2x2, 20 Gbps) with Power Delivery
– HDMI 2.1a video output with multichannel audio
– 10 GbE networking and dual Nvidia ConnectX‑7 ports for ultra‑fast node‑to‑node links
– Wi‑Fi 7 and Bluetooth 5.3 wireless (note: earlier materials listed BT 5.4; current models specify 5.3)
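Two back‑of‑the‑envelope numbers help put those specs in context: how many model weights fit in the 128 GB unified pool, and how fast batch‑1 decoding can go when it is bound by the 273 GB/s memory bandwidth. The sketch below uses illustrative figures, not MSI benchmarks, and counts weights only (real deployments also need room for KV cache and activations):

```python
# Back-of-the-envelope sizing for a 128 GB / 273 GB/s unified-memory system.
# Assumption: weight-only footprint; KV cache and activations are ignored.

BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def max_params_billion(memory_gb: float, precision: str) -> float:
    """Largest parameter count (in billions) whose weights fit in memory_gb."""
    return memory_gb / BYTES_PER_PARAM[precision]

def decode_tok_per_sec_ceiling(weights_gb: float, bandwidth_gb_s: float) -> float:
    """Batch-1 decode reads every weight once per token, so bandwidth divided
    by weight size is a hard ceiling on tokens/second (KV-cache traffic and
    compute are ignored, so real throughput lands below this)."""
    return bandwidth_gb_s / weights_gb

MEMORY_GB = 128       # unified LPDDR5X pool (from the spec sheet)
BANDWIDTH_GB_S = 273  # memory bandwidth (from the spec sheet)

print(f"FP4 capacity: ~{max_params_billion(MEMORY_GB, 'fp4'):.0f}B params")
print(f"FP8 capacity: ~{max_params_billion(MEMORY_GB, 'fp8'):.0f}B params")

# Example: a 70B model quantized to FP4 occupies roughly 35 GB of weights.
weights_gb = 70 * BYTES_PER_PARAM["fp4"]
ceiling = decode_tok_per_sec_ceiling(weights_gb, BANDWIDTH_GB_S)
print(f"70B @ FP4 decode ceiling: ~{ceiling:.1f} tok/s")
```

The takeaway: the 128 GB pool comfortably holds weights for models in the low‑hundreds‑of‑billions range at FP4, and for single‑stream generation the memory bandwidth, not the petaFLOP figure, is usually the binding constraint.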
Despite the speed, the box is tiny: 151 x 151 x 52 mm and about 1.2 kg. Cooling is handled by a precision dual‑fan setup with ultrawide fins and an advanced heat‑pipe design. Fan curves and airflow are user‑tunable to keep acoustics low even when workloads spike.
On the software front, the EdgeXpert models ship with Nvidia DGX OS (Ubuntu‑based), giving developers immediate access to the Nvidia AI software stack—CUDA, PyTorch, TensorFlow, Jupyter, NIM microservices, SDKs, deployment blueprints, and acceleration via TensorRT. It’s built to integrate smoothly with today’s leading model ecosystems, including popular open models such as DeepSeek R1 and Llama 3.1, alongside major frameworks from top AI providers.
Need more headroom? You can link two units via the ConnectX‑7 ports to effectively double capacity—up to 2 petaFLOPs of AI performance, 256 GB of unified memory, and as much as 8 TB of storage—ideal for scaling inference, fine‑tuning, or multi‑team environments.
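The same weight‑budget arithmetic shows what pairing two units buys. A hedged sketch, using FP4 weights and ignoring runtime overheads:

```python
# Rough weight-capacity gain from linking two units (illustrative, not a benchmark).
FP4_BYTES_PER_PARAM = 0.5  # 4-bit weights = half a byte per parameter

def fp4_capacity_billion(memory_gb: float) -> float:
    """Parameter count (billions) whose FP4 weights fill memory_gb."""
    return memory_gb / FP4_BYTES_PER_PARAM

single = fp4_capacity_billion(128)   # one unit: 128 GB unified memory
paired = fp4_capacity_billion(256)   # two units over ConnectX-7: 256 GB

print(f"one unit:  ~{single:.0f}B params of FP4 weights")
print(f"two units: ~{paired:.0f}B params of FP4 weights")
# In practice, reserve a healthy slice of memory (perhaps 20-30%) for KV cache
# and activations, so the raw ceiling overstates what a real deployment can host.
```

In other words, the doubled pool is what moves the largest hostable models from the low‑hundreds‑of‑billions of parameters toward the 400B class, with the ConnectX‑7 link carrying the inter‑node traffic.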
Initial stock appears tight, but regular restocks are expected as more DGX Spark kits land with partners. If you’re aiming to bring cutting‑edge AI workflows in‑house without dedicating a server rack, MSI’s EdgeXpert family makes an intriguing, portable alternative at an accessible entry price. Interested buyers can sign up for in‑stock alerts to secure a unit as availability improves.