Open Up or Get Cooked: Hugging Face CEO’s Warning for U.S. AI

America may dominate many AI benchmarks, but China is quietly building an advantage that’s harder to quantify and even harder to catch: a fast-growing open-source AI ecosystem. While the United States argues over how open large language models should be, Chinese companies are releasing powerful systems and encouraging a culture of shared research that accelerates progress for everyone participating.

That tension is at the heart of a broader policy and industry debate. One of the stated goals in President Trump's AI plan is to widen access to LLMs so more researchers, startups, and institutions can build with them. Yet the biggest AI players in the U.S. have been slow to embrace open source. The economics are a major reason why. Training and serving frontier models cost billions, and companies like OpenAI and Anthropic rely on subscriptions and API usage to recoup those investments. In that environment, keeping cutting-edge models closed feels like the safer business decision, preserving differentiation and margins.

But there’s a growing chorus warning that a closed stance could backfire. Hugging Face CEO Clement Delangue recently argued that without a return to “open science,” the U.S. risks ceding momentum to countries that freely share methods, weights, and benchmarks. His comments followed the release of a massive open-source Mixture-of-Experts model from Meituan, a Chinese food delivery giant. The system reportedly boasts 560 billion total parameters and is aimed at automating tasks such as customer support while feeding a thriving community of developers. The symbolism matters: when non–Big Tech firms ship serious open models, they help seed a broader ecosystem of tools, datasets, and research that compounds over time.

The U.S. preference for closed models is understandable, but it comes with trade-offs:
– Slower collaborative progress, since fewer external researchers can audit, improve, and adapt models
– Higher duplication of effort across companies solving similar problems in isolation
– Less transparency, which can hamper safety work, evaluation, and trust
– Reduced opportunity for smaller labs and startups to innovate on top of shared foundations

By contrast, open-source AI can speed up discovery and skill-building across universities, startups, and independent researchers. It fuels rapid iteration, makes safety evaluations easier, and allows organizations to customize models for specialized domains without reinventing the wheel. Those advantages explain why China’s open-source ecosystem has scaled so quickly: more eyes on the code, faster feedback loops, and a steady stream of community-driven enhancements.

There are signs of movement in the U.S., including efforts to release more permissive models and tooling. Still, the overall posture remains cautious. A pragmatic path forward would balance competitiveness with openness:
– Open-source strong “previous-gen” models and research artifacts while keeping frontier models proprietary for a limited window
– Release detailed evaluations, training recipes, and safety methodologies even when full weights aren’t shared
– Support standardized benchmarks and red-teaming frameworks that invite external scrutiny
– Encourage government and academic partnerships that fund open datasets and compute grants for public-interest research

America doesn’t need to choose between leadership and openness—it needs to recognize that openness can be a catalyst for leadership. If the U.S. industry aligns around open science where it counts, it can harness its vast talent, research culture, and capital to build not just the most advanced systems, but the most widely beneficial ones. China’s momentum shows what a committed open-source strategy can unlock. The question now is whether the U.S. will match it with a clear, collaborative vision of its own.