Scientists propose a quantum sequential scattering model for quantum state learning
Author: 小柯机器人 | Published: 2024/6/20 16:20:40

Recently, Xin Wang and his research group at the Hong Kong University of Science and Technology (Guangzhou), in collaboration with Geng Liu and colleagues at the Chinese University of Hong Kong, Shenzhen, reported a new advance: a quantum sequential scattering model for quantum state learning. The results were published on June 18, 2024, in the journal Physical Review A.

In this work, the researchers devised the quantum sequential scattering model, inspired by the classical diffusion model, to overcome the scalability issue in learning high-dimensional quantum states. Training the model effectively circumvents the vanishing-gradient problem for a large class of high-dimensional target states whose Schmidt ranks scale polynomially.
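
The Schmidt rank mentioned above measures entanglement across a bipartition of a pure state: it is the number of nonzero singular values of the state's coefficient matrix. As a minimal illustration (our sketch, not the authors' code; the helper name schmidt_rank is ours), it can be computed with NumPy as follows:

import numpy as np

def schmidt_rank(state, dim_a, dim_b, tol=1e-10):
    # Number of nonzero Schmidt coefficients of |psi> in H_A (x) H_B.
    # Reshape the amplitude vector into a dim_a x dim_b coefficient matrix;
    # its singular values are the Schmidt coefficients.
    coeffs = np.asarray(state).reshape(dim_a, dim_b)
    singular_values = np.linalg.svd(coeffs, compute_uv=False)
    return int(np.sum(singular_values > tol))

# A product state has Schmidt rank 1 ...
zero = np.array([1.0, 0.0])
print(schmidt_rank(np.kron(zero, zero), 2, 2))  # |00>  -> 1

# ... while a maximally entangled Bell state has Schmidt rank 2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
print(schmidt_rank(bell, 2, 2))  # -> 2

States whose Schmidt ranks grow only polynomially with system size are the regime in which the paper's trainability guarantee applies.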

Theoretical analysis and numerical experiments provide evidence of the model's effectiveness in learning both physically and algorithmically meaningful quantum states, as well as its advantage over conventional approaches in training speed and learning accuracy. The work also indicates that increasing entanglement (a property of quantum states) in the target state necessitates a larger-scale model, which can reduce the model's learning performance and efficiency.

Learning probability distributions is an essential framework in classical learning theory. As its quantum counterpart, quantum state learning has spurred the exploration of quantum machine learning theory. However, as the dimension increases, learning a high-dimensional unknown quantum state via conventional quantum neural network approaches remains challenging due to trainability issues.
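
To make the trainability issue concrete, the sketch below (our illustration, not the paper's experiment; the ansatz and parameters are assumptions chosen for the demo) estimates the variance of a cost-function gradient over randomly initialized layered circuits. As the qubit count n grows, the variance concentrates toward zero, the "barren plateau" behavior behind the vanishing-gradient problem mentioned above:

import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    # Rotation about Y: exp(-i * theta * Y / 2); real-valued 2x2 matrix.
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit, n):
    # Apply a one-qubit gate to one qubit of an n-qubit statevector.
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [qubit]))
    return np.moveaxis(state, 0, qubit).reshape(-1)

def apply_cz(state, q1, q2, n):
    # Controlled-Z: flip the sign of amplitudes where both qubits are 1.
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1] = idx[q2] = 1
    state[tuple(idx)] *= -1.0
    return state.reshape(-1)

def cost(thetas, n, layers):
    # Run a layered ansatz (RY rotations + a chain of CZ entanglers)
    # from |0...0> and return the cost C = <Z> on qubit 0.
    state = np.zeros(2 ** n)
    state[0] = 1.0
    t = iter(thetas)
    for _ in range(layers):
        for q in range(n):
            state = apply_single(state, ry(next(t)), q, n)
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    probs = np.abs(state.reshape(2, -1)) ** 2
    return probs[0].sum() - probs[1].sum()

def grad_first_param(thetas, n, layers):
    # Exact parameter-shift rule for RY:
    # dC/dtheta = (C(theta + pi/2) - C(theta - pi/2)) / 2.
    plus, minus = thetas.copy(), thetas.copy()
    plus[0] += np.pi / 2
    minus[0] -= np.pi / 2
    return 0.5 * (cost(plus, n, layers) - cost(minus, n, layers))

layers, samples = 20, 200
for n in (2, 4, 6, 8):
    grads = [grad_first_param(rng.uniform(0.0, 2.0 * np.pi, n * layers), n, layers)
             for _ in range(samples)]
    print(f"n={n}: Var[dC/dtheta] ~ {np.var(grads):.2e}")  # shrinks as n grows

The gradient is evaluated exactly with the parameter-shift rule rather than finite differences, so the shrinking variance reflects the cost landscape itself, not numerical noise.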

Appendix: Original English abstract

Title: Quantum sequential scattering model for quantum state learning

Author: Mingrui Jing, Geng Liu, Hongbin Ren, Xin Wang

Issue&Volume: 2024/06/18

Abstract: Learning probability distribution is an essential framework in classical learning theory. As a counterpart, quantum state learning has spurred the exploration of quantum machine learning theory. However, as dimensionality increases, learning a high-dimensional unknown quantum state via conventional quantum neural network approaches remains challenging due to trainability issues. In this work we devise the quantum sequential scattering model, inspired by the classical diffusion model, to overcome this scalability issue. Training of our model could effectively circumvent the vanishing gradient problem for a large class of high-dimensional target states possessing polynomial-scaled Schmidt ranks. Theoretical analysis and numerical experiments provide evidence for our model's effectiveness in learning both physically and algorithmically meaningful quantum states and outperform the conventional approaches in training speed and learning accuracy. Our work indicates that an increasing entanglement (a property of quantum states) in the target states necessitates a larger-scaled model, which could reduce our model's learning performance and efficiency.

DOI: 10.1103/PhysRevA.109.062425

Source: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.109.062425

Journal information

Physical Review A: founded in 1970; published by the American Physical Society. Latest IF: 2.97
Official website: https://journals.aps.org/pra/
Submission link: https://authors.aps.org/Submissions/login/new