An Apple Mac Mini is connected to an external GALAX GPU with dual fans on a wooden desk by a window.

How One AI Startup Supercharged Apple’s Mac Mini by Bolting On NVIDIA and AMD GPUs

Apple’s Mac mini has quickly become one of the most in-demand small-form-factor computers, especially as more people look for compact machines that can handle agentic AI and local large language model (LLM) workloads. Despite its strong CPU performance and efficient design, the Mac mini has a frustrating limitation: macOS offers no supported path to external GPUs (eGPUs) on Apple Silicon, a gap that stands out against similarly sized Windows PCs.

That’s why a new experiment from AI startup TinyCorp is turning heads. The team has demonstrated a Mac mini connected to what appears to be an NVIDIA consumer graphics card, reportedly a GALAX GeForce RTX 5060. While this setup isn’t meant to turn the Mac mini into a gaming powerhouse, it could significantly expand what the machine can do for local AI compute tasks—opening the door to faster on-device inference and more capable AI development on Apple’s compact desktop.

So how did they do it? TinyCorp’s approach relies on an ADT-Link adapter that converts a Thunderbolt 4 connection into a PCIe interface that a GPU can use. In practice, they’re seeing up to 40 Gbps (around 5 GB/s bidirectional) throughput. That’s a respectable number for many AI inference scenarios, where compute performance and VRAM matter more than ultra-high bandwidth. However, it’s not ideal for graphics-heavy use cases that demand more consistent, high-bandwidth data transfer—one of the reasons this solution is being positioned around compute rather than traditional GPU workloads like gaming or full desktop graphics acceleration.
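The bandwidth trade-off above is easy to quantify. The sketch below is simple back-of-the-envelope arithmetic, not a benchmark; the 8 GB model size is a hypothetical example, and real Thunderbolt links carry protocol overhead that lowers usable throughput.

```python
# Back-of-the-envelope: what a 40 Gbps Thunderbolt 4 link means for AI work.
link_gbps = 40                  # Thunderbolt 4 line rate, gigabits per second
link_gb_per_s = link_gbps / 8   # -> 5.0 GB/s, ignoring protocol overhead

# Hypothetical example: uploading 8 GB of model weights to GPU VRAM once at
# startup. After that, inference traffic over the link is comparatively small.
model_size_gb = 8
copy_seconds = model_size_gb / link_gb_per_s

print(f"Link throughput: {link_gb_per_s:.1f} GB/s")
print(f"One-time weight upload: {copy_seconds:.1f} s")
```

The point of the arithmetic: a one-time weight upload takes seconds, which is why a 5 GB/s link is tolerable for inference but a poor fit for graphics workloads that stream data continuously every frame.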

The bigger challenge is software. macOS doesn’t offer a simple path to use modern AMD or NVIDIA GPUs externally in the way many users expect. To work around that bottleneck, TinyCorp reportedly built dedicated Python userspace drivers for AMD and NVIDIA GPUs, enabling the system to recognize and utilize the external GPU for certain tasks without relying on conventional macOS driver support.

There are important limitations, though. TinyCorp says support focuses on modern consumer GPUs, and the current implementation is restricted to compute-only workloads. In other words, the GPU is treated like a raw accelerator accessed via its memory-mapped PCIe interface—useful for AI and compute, but not intended for full graphical output or standard GPU features people associate with desktop eGPUs.
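To make the "raw accelerator via a memory-mapped PCIe interface" idea concrete, here is a minimal sketch of the general technique: a userspace driver maps the GPU's register window (a PCIe BAR) into its own address space and reads and writes registers directly, with no kernel graphics driver involved. This is not TinyCorp's actual driver code; on real hardware the mapping would target a device resource file, and the register offset and value below are hypothetical. A temporary file stands in for the BAR so the sketch runs anywhere.

```python
import mmap
import struct
import tempfile

BAR_SIZE = 4096  # one page of the stand-in register window

# A plain temp file plays the role of the GPU's BAR for this illustration;
# a real userspace driver would mmap the device's PCIe resource instead.
f = tempfile.TemporaryFile()
f.truncate(BAR_SIZE)
bar = mmap.mmap(f.fileno(), BAR_SIZE)

def write_reg32(offset: int, value: int) -> None:
    """Write a 32-bit little-endian register in the mapped window."""
    bar[offset:offset + 4] = struct.pack("<I", value)

def read_reg32(offset: int) -> int:
    """Read a 32-bit little-endian register from the mapped window."""
    return struct.unpack("<I", bar[offset:offset + 4])[0]

DOORBELL = 0x10                 # hypothetical doorbell register offset
write_reg32(DOORBELL, 0x1)      # "ring the doorbell" to submit work
readback = read_reg32(DOORBELL)
print(f"doorbell readback: {readback:#x}")
```

Driving a GPU this way explains the compute-only restriction: command submission and memory copies can be done through mapped registers and buffers, but display output depends on the full driver stack the approach deliberately bypasses.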

Looking ahead, TinyCorp says it plans to ship a dedicated eGPU board in Q2. The goal is to make the setup more practical and reliable with built-in hardware controls, including smarter power delivery management so users can reduce energy use when the GPU isn’t needed. They also want a reset mechanism to recover quickly if a GPU hangs or behaves unpredictably—an issue that can happen in experimental or edge-case configurations.

While external GPUs aren’t a new concept on either Windows or macOS, the rapidly growing demand for edge AI and local LLM performance is giving eGPU-style solutions a new purpose. If TinyCorp’s approach continues to mature, it could make the Mac mini far more capable for AI experimentation and local inference—without forcing users to abandon Apple’s compact, efficient desktop for a larger workstation.