MIT engineers have unveiled a new chip-building technique that could make future AI hardware and high-performance computers far more energy efficient. The idea is simple in concept but difficult in manufacturing: place transistors and memory much closer together by stacking active electronics directly on the backside of a silicon chip. If it scales, this kind of “3D” approach could help cut the energy wasted every time data moves between computing logic and memory.
Why does that matter? As artificial intelligence workloads grow, energy use grows with them. Today’s chips typically keep logic and memory in separate areas, which forces constant back-and-forth data transfers. That movement doesn’t just slow things down—it consumes a large amount of power. By tightly integrating these components in a vertical stack, the distance data must travel shrinks dramatically, opening the door to faster processing with lower power draw.
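The scale of that energy cost can be illustrated with a back-of-envelope calculation. The figures below are purely illustrative assumptions, not measurements from the MIT work: off-chip memory traffic is commonly cited as costing orders of magnitude more energy per bit than a short on-chip hop, so shrinking the travel distance pays off directly.

```python
# Back-of-envelope estimate of data-movement energy for an AI workload.
# All energy-per-bit figures are ILLUSTRATIVE assumptions chosen for the
# example, not numbers reported by the MIT team.

def transfer_energy_joules(bytes_moved: int, picojoules_per_bit: float) -> float:
    """Energy to move `bytes_moved` bytes at a given cost per bit."""
    return bytes_moved * 8 * picojoules_per_bit * 1e-12

# Assume a model layer streams 1 GiB of weights per inference pass.
bytes_moved = 1 * 1024**3

# Assumed per-bit costs: a long path to separate memory vs. a short
# vertical hop in a tightly stacked design (both hypothetical values).
off_chip_pj_per_bit = 10.0
stacked_pj_per_bit = 0.5

e_far = transfer_energy_joules(bytes_moved, off_chip_pj_per_bit)
e_near = transfer_energy_joules(bytes_moved, stacked_pj_per_bit)

print(f"separate memory: {e_far:.4f} J per pass")
print(f"stacked memory:  {e_near:.4f} J per pass")
print(f"reduction:       {e_far / e_near:.0f}x")
```

Under these assumed numbers the stacked layout cuts transfer energy twentyfold per pass; the real ratio depends entirely on the actual wire lengths and interface circuits, which the article does not quantify.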
The main obstacle has always been heat. In standard manufacturing, delicate transistors are built on one side of the chip, while the other side is generally used for wiring and interconnections. Trying to add another functional transistor layer after the fact is risky because conventional fabrication steps can require temperatures high enough to damage the circuitry that’s already there.
The MIT team, led by researcher Yanjie Shao, addressed this by developing a low-temperature process that can build new transistor layers without frying what's underneath. Their method relies on amorphous indium oxide, a material that can form ultra-thin transistor layers at only about 150 °C (302 °F). Because that temperature is low enough to protect the existing circuitry, it becomes feasible to add active transistors to the backside, turning what was once mostly wiring real estate into a functional layer for computing.
In practical terms, this approach could enable chip designers to merge logic and memory more effectively within a compact vertical structure. That can translate into better energy efficiency and more functionality packed into smaller devices—especially important as everything from consumer electronics to data center accelerators pushes for more performance per watt.
The researchers also demonstrated further improvements using a ferroelectric material called hafnium zirconium oxide. With it, they built transistors measuring just 20 nanometers. During testing, the devices achieved switching times of about 10 nanoseconds, which the team noted is essentially the limit of their current measurement setup. Just as important for energy-conscious computing, the transistors operated at significantly lower voltage than comparable technologies.
While there’s still more work ahead to refine the architecture and push performance boundaries, this backside transistor stacking method highlights a promising direction for next-generation chip design. If future development brings it into mainstream manufacturing, it could help deliver longer-lasting devices, more efficient AI computing, and high-performance systems that don’t keep pushing energy demands to new extremes.