Peking University Scientists Pioneer Carbon Nanotube Tensor Processing Unit

Researchers at Peking University have developed the world’s first tensor processing unit (TPU) built from carbon nanotube transistors, a significant stride toward energy-efficient AI hardware. The novel TPU combines high computational efficiency with low power consumption, suggesting that carbon nanotube technology could play a pivotal role in the future of AI computing.

The TPU’s architecture is built on a systolic array that supports parallel 2-bit integer multiply-accumulate operations. Approximately 3,000 carbon nanotube field-effect transistors are integrated into the chip, enabling it to perform tasks like convolution and matrix multiplication with minimal energy requirements.
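To make the idea concrete, here is a minimal software sketch of the computation such a systolic array performs: every cell executes one multiply-accumulate (MAC) per cycle on small integer operands, and the array as a whole produces a matrix product. This is an illustrative model only, not the chip's actual circuit design; the function name and array sizes are invented for the example.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-by-cycle model of an output-stationary systolic array
    computing C = A @ B, where each cell does one integer
    multiply-accumulate per cycle (the operation the nanotube TPU
    implements with 2-bit operands). Illustrative sketch only.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m), dtype=np.int64)
    # In hardware, A streams in from the left and B from the top,
    # skewed so matching operands meet in each cell; in software we
    # model the same accumulation one operand wavefront per cycle.
    for t in range(k):
        for i in range(n):
            for j in range(m):
                C[i, j] += int(A[i, t]) * int(B[t, j])  # one MAC
    return C

rng = np.random.default_rng(0)
A = rng.integers(0, 4, size=(4, 4))  # unsigned 2-bit operands: 0..3
B = rng.integers(0, 4, size=(4, 4))
assert np.array_equal(systolic_matmul(A, B), A @ B)
```

Convolution maps onto the same array because a convolution can be lowered to matrix multiplication, which is why one MAC array can serve both workloads.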

Zhiyong Zhang, a co-author of the study published in Nature Electronics, emphasized the motivation behind their work: the rapid growth of AI applications and the limitations of current silicon-based technology in handling vast amounts of data. The team’s approach to semiconductor manufacturing has yielded transistors with high on-current densities and reliable performance, thanks to carbon nanotube films of 99.9999% semiconducting purity.

System-level simulations of an 8-bit TPU built with these nanotube transistors show impressive results: the TPU can operate at 850 MHz with an energy efficiency of 1 tera-operation per second per watt (TOPS/W). When used to run a five-layer convolutional neural network, the TPU reached an accuracy of up to 88% on MNIST handwritten-digit recognition while consuming only 295 μW, surpassing current convolutional acceleration hardware on power efficiency.
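A quick back-of-the-envelope check shows what these figures imply together. One caveat, labeled here as an assumption: the 1 TOPS/W number comes from the simulated 8-bit TPU, while 295 μW is the power reported for the five-layer CNN demo, so combining them gives only an order-of-magnitude throughput estimate.

```python
# Order-of-magnitude estimate from the article's reported figures.
efficiency_ops_per_joule = 1e12   # 1 TOPS/W = 1e12 operations per joule
power_watts = 295e-6              # 295 microwatts
throughput_ops_per_s = efficiency_ops_per_joule * power_watts
print(throughput_ops_per_s)       # ~2.95e8 ops/s, i.e. roughly 0.3 GOPS
```

In other words, at that efficiency a sub-milliwatt power budget still sustains hundreds of millions of operations per second.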

While the current version of this TPU, fabricated with 180 nm-class processes, may have limited practical applications, the researchers believe their achievement represents a substantial leap toward next-generation, energy-efficient AI hardware enabled by carbon nanotube technology.