Lorenzo Leone, Salvatore F.E. Oliviero, Lukasz Cincio, M. Cerezo
{"title":"关于硬件高效解析的实用性","authors":"Lorenzo Leone, Salvatore F.E. Oliviero, Lukasz Cincio, M. Cerezo","doi":"10.22331/q-2024-07-03-1395","DOIUrl":null,"url":null,"abstract":"Variational Quantum Algorithms (VQAs) and Quantum Machine Learning (QML) models train a parametrized quantum circuit to solve a given learning task. The success of these algorithms greatly hinges on appropriately choosing an ansatz for the quantum circuit. Perhaps one of the most famous ansatzes is the one-dimensional layered Hardware Efficient Ansatz (HEA), which seeks to minimize the effect of hardware noise by using native gates and connectives. The use of this HEA has generated a certain ambivalence arising from the fact that while it suffers from barren plateaus at long depths, it can also avoid them at shallow ones. In this work, we attempt to determine whether one should, or should not, use a HEA. We rigorously identify scenarios where shallow HEAs should likely be avoided (e.g., VQA or QML tasks with data satisfying a volume law of entanglement). More importantly, we identify a Goldilocks scenario where shallow HEAs could achieve a quantum speedup: QML tasks with data satisfying an area law of entanglement. We provide examples for such scenario (such as Gaussian diagonal ensemble random Hamiltonian discrimination), and we show that in these cases a shallow HEA is always trainable and that there exists an anti-concentration of loss function values. Our work highlights the crucial role that input states play in the trainability of a parametrized quantum circuit, a phenomenon that is verified in our numerics.","PeriodicalId":20807,"journal":{"name":"Quantum","volume":null,"pages":null},"PeriodicalIF":5.1000,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the practical usefulness of the Hardware Efficient Ansatz\",\"authors\":\"Lorenzo Leone, Salvatore F.E. Oliviero, Lukasz Cincio, M. Cerezo\",\"doi\":\"10.22331/q-2024-07-03-1395\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Variational Quantum Algorithms (VQAs) and Quantum Machine Learning (QML) models train a parametrized quantum circuit to solve a given learning task. The success of these algorithms greatly hinges on appropriately choosing an ansatz for the quantum circuit. Perhaps one of the most famous ansatzes is the one-dimensional layered Hardware Efficient Ansatz (HEA), which seeks to minimize the effect of hardware noise by using native gates and connectives. The use of this HEA has generated a certain ambivalence arising from the fact that while it suffers from barren plateaus at long depths, it can also avoid them at shallow ones. In this work, we attempt to determine whether one should, or should not, use a HEA. We rigorously identify scenarios where shallow HEAs should likely be avoided (e.g., VQA or QML tasks with data satisfying a volume law of entanglement). More importantly, we identify a Goldilocks scenario where shallow HEAs could achieve a quantum speedup: QML tasks with data satisfying an area law of entanglement. We provide examples for such scenario (such as Gaussian diagonal ensemble random Hamiltonian discrimination), and we show that in these cases a shallow HEA is always trainable and that there exists an anti-concentration of loss function values. 
Our work highlights the crucial role that input states play in the trainability of a parametrized quantum circuit, a phenomenon that is verified in our numerics.\",\"PeriodicalId\":20807,\"journal\":{\"name\":\"Quantum\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2024-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Quantum\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://doi.org/10.22331/q-2024-07-03-1395\",\"RegionNum\":2,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PHYSICS, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Quantum","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.22331/q-2024-07-03-1395","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PHYSICS, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
On the practical usefulness of the Hardware Efficient Ansatz
Variational Quantum Algorithms (VQAs) and Quantum Machine Learning (QML) models train a parametrized quantum circuit to solve a given learning task. The success of these algorithms greatly hinges on appropriately choosing an ansatz for the quantum circuit. Perhaps one of the most famous ansatzes is the one-dimensional layered Hardware Efficient Ansatz (HEA), which seeks to minimize the effect of hardware noise by using native gates and connectivities. The use of this HEA has generated a certain ambivalence arising from the fact that while it suffers from barren plateaus at long depths, it can also avoid them at shallow ones. In this work, we attempt to determine whether one should, or should not, use a HEA. We rigorously identify scenarios where shallow HEAs should likely be avoided (e.g., VQA or QML tasks with data satisfying a volume law of entanglement). More importantly, we identify a Goldilocks scenario where shallow HEAs could achieve a quantum speedup: QML tasks with data satisfying an area law of entanglement. We provide examples of such a scenario (such as Gaussian diagonal ensemble random Hamiltonian discrimination), and we show that in these cases a shallow HEA is always trainable and that there exists an anti-concentration of loss function values. Our work highlights the crucial role that input states play in the trainability of a parametrized quantum circuit, a phenomenon that is verified in our numerics.
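For readers unfamiliar with the circuit structure the abstract refers to, the following is a minimal illustrative sketch (not taken from the paper) of a one-dimensional layered HEA, written against Qiskit's standard QuantumCircuit API. The specific gate choices (RY/RZ rotations with nearest-neighbour CNOTs) are assumptions standing in for whatever gate set and connectivity are native to a given device.

# Minimal sketch of a 1D layered Hardware Efficient Ansatz (illustrative only).
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def shallow_hea(num_qubits: int, num_layers: int) -> QuantumCircuit:
    """Build a 1D layered HEA with `num_layers` layers on `num_qubits` qubits."""
    params = ParameterVector("theta", 2 * num_qubits * num_layers)
    qc = QuantumCircuit(num_qubits)
    k = 0
    for _ in range(num_layers):
        # Parametrized single-qubit rotations (the native gate set is hardware dependent).
        for q in range(num_qubits):
            qc.ry(params[k], q)
            qc.rz(params[k + 1], q)
            k += 2
        # Nearest-neighbour entanglers along the 1D line (hardware connectivity).
        for q in range(0, num_qubits - 1, 2):
            qc.cx(q, q + 1)
        for q in range(1, num_qubits - 1, 2):
            qc.cx(q, q + 1)
    return qc

# Example: a shallow (constant-depth) ansatz on 8 qubits.
circuit = shallow_hea(num_qubits=8, num_layers=2)
print(circuit.num_parameters)  # 32

A "shallow" HEA in the sense of the abstract keeps num_layers constant (or logarithmic) in the number of qubits, which is what allows it to evade barren plateaus for area-law-entangled input data.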
Subject area: Physics and Astronomy (miscellaneous)
CiteScore: 9.20
Self-citation rate: 10.90%
Articles published: 241
Review time: 16 weeks
About the journal:
Quantum is an open-access peer-reviewed journal for quantum science and related fields. Quantum is non-profit and community-run: an effort by researchers and for researchers to make science more open and publishing more transparent and efficient.