BEIJING, Nov 13, 2025 – Chinese tech giant Baidu (9888.HK) announced the launch of two next-generation artificial intelligence (AI) semiconductors, aiming to provide domestic companies with high-performance, cost-effective, and locally controlled computing solutions. The announcement comes amid ongoing U.S.-China trade and technology tensions, which have restricted exports of advanced U.S. AI chips to Chinese firms, spurring a wave of homegrown innovation.
The chips were unveiled at Baidu World 2025, the company’s annual technology conference, highlighting China’s push for self-reliance in AI hardware as global competition over advanced computing intensifies.
New AI Chips: M100 and M300
Baidu introduced two key chips designed to support a range of AI workloads:
- M100: An inference-focused chip intended for processing AI model predictions and user requests efficiently. The M100 is scheduled for launch in early 2026.
- M300: A versatile chip capable of both training AI models and performing inference, set for release in early 2027.
In AI computing, training involves teaching models to recognize patterns in large datasets, while inference applies these trained models to real-world tasks, such as language processing, image recognition, or predictive analytics.
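To illustrate the distinction in general terms, the short sketch below fits a tiny linear model on synthetic data (training) and then applies the learned weights to new inputs (inference). It is a minimal, illustrative example using only NumPy and has no connection to Baidu’s chips or software stack.

```python
import numpy as np

# --- Training: learn model parameters from example data (illustrative only) ---
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))               # 200 samples, 3 features
true_w = np.array([1.5, -2.0, 0.7])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)
lr = 0.1
for _ in range(500):                        # gradient descent on squared error
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad

# --- Inference: apply the trained parameters to unseen inputs ---
new_inputs = rng.normal(size=(5, 3))
predictions = new_inputs @ w
print(predictions)
```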
Baidu has been developing proprietary chips since 2011, underscoring its long-term investment in domestic AI hardware to reduce dependence on foreign technology.
Supernode Systems: Scaling AI Performance
In addition to individual chips, Baidu unveiled two supernode products, designed to link multiple chips together to overcome the limitations of single-chip performance.
- Tianchi 256: Comprising 256 P800 chips, available in the first half of 2026.
- Tianchi 512: A more powerful version using 512 P800 chips, set for launch in the second half of 2026.
These supernodes leverage advanced networking and parallel processing, allowing AI workloads to scale efficiently for enterprise-level tasks and large language models.
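As a rough, hypothetical illustration of the data-parallel idea behind such multi-chip systems (not a description of Baidu’s Tianchi interconnect or scheduling), the NumPy sketch below splits a batch of work across several simulated devices, computes partial results independently, and gathers them back together.

```python
import numpy as np

NUM_DEVICES = 4                             # stand-in for chips in a node (hypothetical)

rng = np.random.default_rng(1)
weights = rng.normal(size=(512, 512))       # one shared weight matrix, replicated per device
batch = rng.normal(size=(1024, 512))        # a large batch of inputs

# Split the batch across simulated devices (data parallelism).
shards = np.array_split(batch, NUM_DEVICES, axis=0)

# Each "device" processes its shard independently; real hardware runs these in parallel.
partial_outputs = [shard @ weights for shard in shards]

# Gather the partial results into one output, as a high-speed interconnect would.
output = np.concatenate(partial_outputs, axis=0)
assert output.shape == (1024, 512)
```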
For comparison, Huawei has deployed the CloudMatrix 384, which uses 384 Ascend 910C chips and is regarded by some analysts as more powerful on certain metrics than Nvidia’s GB200 NVL72, one of the U.S. chipmaker’s most advanced system-level products. Huawei has also announced plans to release even more powerful supernodes in the near future.
Advancing AI Models: Ernie 4.0
Baidu also showcased an updated version of its Ernie large language model, capable of handling not just text but also images and videos. This advancement positions Ernie as a multimodal AI system, able to support a broad range of applications, from natural language understanding to complex multimedia analysis.
The combination of M100/M300 chips and Tianchi supernodes is expected to power the next generation of Ernie models, further strengthening Baidu’s position in China’s AI ecosystem.