Academic Seminar: Trainability and Generalization in Quantum Neural Networks

Speaker: 朱成鸿

Time: October 7, 2025, 16:00–16:40

Venue: Room 715, School of Mathematics and Informatics

Abstract: Quantum neural networks (QNNs) promise near-term quantum advantage, yet they face interconnected obstacles in learning, trainability, and generalization. We show that, even with high-quality initial states, QNNs almost surely fall into unfavorable local minima once the loss dips below a critical value; the escape probability shrinks exponentially with qubit count, regardless of circuit architecture or initialization, and the curvature of these minima is set by the quantum Fisher information. To anticipate such failures without running full training, we introduce a lightweight learnability metric that compares landscape fluctuations against trusted benchmarks, unifying barren plateaus, poor expressibility, bad local minima, and overparameterization. This metric is efficiently estimable on classical hardware through Clifford sampling and accurately predicts performance across physical and random Hamiltonians. Finally, by analyzing stochastic gradient descent via uniform stability, we derive a practical generalization bound that depends on the number of parameters, data uploads, dataset dimension, and optimizer hyperparameters.

About the speaker: 朱成鸿 is a PhD student in the Artificial Intelligence thrust at the Hong Kong University of Science and Technology (Guangzhou). He received his bachelor's and master's degrees from the University of Melbourne. His research interests include quantum information, quantum machine learning, and quantum architecture. He has published more than ten papers in leading international journals and conferences, including IEEE Transactions on Information Theory, IEEE JSAC, Physical Review Letters, Nature Computational Science, Communications Physics, NeurIPS, ISCA, HPCA, and DAC.

All faculty and students are welcome to attend!