A team led by Qionghai Dai at Tsinghua University has developed a diffractive tensorized unit for million-TOPS general-purpose computing. The work was published in Nature Photonics on 27 August 2025.
Photonic computing has emerged as a promising technology for next-generation processors, with diffraction-based architectures showing particular potential for large-scale parallel processing. Unfortunately, the lack of on-chip reconfigurability has posed a significant obstacle to general-purpose computing, restricting the adaptability of these architectures to diverse advanced applications.
The team proposes a diffractive tensorized unit (DTU), a fully reconfigurable photonic processor supporting million-TOPS general-purpose computing. The DTU leverages a tensor factorization approach to perform complex matrix multiplication through clustered diffractive tensor cores, while each core employs a near-core modulation mechanism to activate dynamic temporal diffractive connections. Experiments confirm that the DTU overcomes the long-standing generality and scalability constraints of diffractive computing, realizing general computing with a 10⁻⁶ mean absolute error for arbitrary 1,024-size matrix multiplications.
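The article describes the DTU's factorization only at a high level: a large matrix product is decomposed so that fixed-size diffractive tensor cores can each handle a small block. A loosely analogous software sketch is block-tiled matrix multiplication, where a big product is accumulated from core-sized sub-products. All names and the core size below are illustrative assumptions, not the paper's actual scheme.

```python
# Hypothetical sketch: tile a large matrix product into fixed-size "core"
# multiplications, loosely analogous to dispatching blocks to clustered
# diffractive tensor cores. Pure Python; CORE and all names are illustrative.

CORE = 2  # assumed tile size; the actual DTU handles 1,024-size matrices


def core_matmul(a, b):
    """Multiply two CORE x CORE blocks (stand-in for one core's optical pass)."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]


def tiled_matmul(a, b, core=CORE):
    """Compute a @ b (square, size divisible by `core`) by accumulating
    core-sized block products, one core_matmul call per block triple."""
    n = len(a)
    out = [[0.0] * n for _ in range(n)]
    for bi in range(0, n, core):          # block-row of the output
        for bj in range(0, n, core):      # block-column of the output
            for bk in range(0, n, core):  # inner block index being summed
                ta = [row[bk:bk + core] for row in a[bi:bi + core]]
                tb = [row[bj:bj + core] for row in b[bk:bk + core]]
                tc = core_matmul(ta, tb)
                for i in range(core):
                    for j in range(core):
                        out[bi + i][bj + j] += tc[i][j]
    return out
```

The accumulation over the inner block index is the point: each core only ever sees a small, fixed-size problem, yet the sum of its outputs reconstructs the full large product exactly.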
Compared with state-of-the-art solutions, the DTU not only achieves competitive accuracy on challenging tasks such as natural language generation and cross-modal recognition, but also delivers a 1,000× improvement in computing throughput over conventional electronic processors. The proposed DTU represents a leap forward in general-purpose photonic computing, paving the way for further advances in large-scale artificial intelligence.
Appendix: original English abstract
Title: Diffractive tensorized unit for million-TOPS general-purpose computing
Author: Wang, Chao; Cheng, Yuan; Xu, Zhihao; Dai, Qionghai; Fang, Lu
Issue&Volume: 2025-08-27
Abstract: Photonic computing has emerged as a promising next-generation technology for processors, with diffraction-based architectures showing particular potential for large-scale parallel processing. Unfortunately, the lack of on-chip reconfigurability poses significant obstacles to realizing general-purpose computing, restricting the adaptability of these architectures to diverse advanced applications. Here we propose a diffractive tensorized unit (DTU), which is a fully reconfigurable photonic processor supporting million-TOPS general-purpose computing. The DTU leverages a tensor factorization approach to perform complex matrix multiplication through clustered diffractive tensor cores, while each diffractive tensor core employs a near-core modulation mechanism to activate dynamic temporal diffractive connections. Experiments confirm that the DTU overcomes the long-standing generality and scalability constraints of diffractive computing, realizing general computing with a 10⁻⁶ mean absolute error for arbitrary 1,024-size matrix multiplications. Compared with state-of-the-art solutions, the DTU not only achieves competitive accuracy on various challenging tasks, such as natural language generation and cross-modal recognition, but also delivers a 1,000× improvement in computing throughput over conventional electronic processors. The proposed DTU represents a leap forward in general-purpose photonic computing, paving the way for further advancements in large-scale artificial intelligence.
DOI: 10.1038/s41566-025-01749-3
Source: https://www.nature.com/articles/s41566-025-01749-3