Pub Date : 2024-07-25 DOI: 10.1103/prxquantum.5.030316
Iterative Assembly of 171Yb Atom Arrays with Cavity-Enhanced Optical Lattices
M. A. Norcia et al.
Assembling and maintaining large arrays of individually addressable atoms is a key requirement for continued scaling of neutral-atom-based quantum computers and simulators. In this work, we demonstrate a new paradigm for assembly of atomic arrays, based on a synergistic combination of optical tweezers and cavity-enhanced optical lattices, and the incremental filling of a target array from a repetitively filled reservoir. In this protocol, the tweezers provide microscopic rearrangement of atoms, while the cavity-enhanced lattices enable the creation of large numbers of optical traps with sufficient depth for rapid low-loss imaging of atoms. We apply this protocol to demonstrate near-deterministic filling (99% per-site occupancy) of 1225-site arrays of optical traps. Because the reservoir is repeatedly filled with fresh atoms, the array can be maintained in a filled state indefinitely. We anticipate that this protocol will be compatible with mid-circuit reloading of atoms into a quantum processor, which will be a key capability for running large-scale error-corrected quantum computations whose durations exceed the lifetime of a single atom in the system.
{"title":"Iterative Assembly of 171Yb Atom Arrays with Cavity-Enhanced Optical Lattices","authors":"M. A. Norciaet al.","doi":"10.1103/prxquantum.5.030316","DOIUrl":"https://doi.org/10.1103/prxquantum.5.030316","url":null,"abstract":"Assembling and maintaining large arrays of individually addressable atoms is a key requirement for continued scaling of neutral-atom-based quantum computers and simulators. In this work, we demonstrate a new paradigm for assembly of atomic arrays, based on a synergistic combination of optical tweezers and cavity-enhanced optical lattices, and the incremental filling of a target array from a repetitively filled reservoir. In this protocol, the tweezers provide microscopic rearrangement of atoms, while the cavity-enhanced lattices enable the creation of large numbers of optical traps with sufficient depth for rapid low-loss imaging of atoms. We apply this protocol to demonstrate near-deterministic filling (99% per-site occupancy) of 1225-site arrays of optical traps. Because the reservoir is repeatedly filled with fresh atoms, the array can be maintained in a filled state indefinitely. We anticipate that this protocol will be compatible with mid-circuit reloading of atoms into a quantum processor, which will be a key capability for running large-scale error-corrected quantum computations whose durations exceed the lifetime of a single atom in the system.","PeriodicalId":501296,"journal":{"name":"PRX Quantum","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141783990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-24 DOI: 10.1103/prxquantum.5.030315
High-Fidelity, Multiqubit Generalized Measurements with Dynamic Circuits
Petr Ivashkov, Gideon Uchehara, Liang Jiang, Derek S. Wang, Alireza Seif
Generalized measurements, also called positive operator-valued measures (POVMs), can offer advantages over projective measurements in various quantum information tasks. Here, we realize a generalized measurement of one and two superconducting qubits with high fidelity and in a single experimental setting. To do so, we propose the “Naimark-terminated binary tree,” a hybrid method that combines Naimark’s dilation and binary-tree techniques and leverages emerging hardware capabilities for midcircuit measurements and feed-forward control. Furthermore, we showcase a highly effective use of approximate compiling to enhance POVM fidelity in noisy conditions. We argue that our hybrid method scales better toward larger system sizes than its constituent methods and demonstrate its advantage by performing detector tomography of a symmetric, informationally complete POVM (SIC-POVM). Detector fidelity is further improved through a composite error-mitigation strategy that incorporates twirling and a newly devised conditional readout-error mitigation. Looking forward, we expect improvements in approximate compilation and reductions in hardware noise for dynamic circuits to enable generalized measurements of larger multiqubit POVMs on superconducting qubits.
{"title":"High-Fidelity, Multiqubit Generalized Measurements with Dynamic Circuits","authors":"Petr Ivashkov, Gideon Uchehara, Liang Jiang, Derek S. Wang, Alireza Seif","doi":"10.1103/prxquantum.5.030315","DOIUrl":"https://doi.org/10.1103/prxquantum.5.030315","url":null,"abstract":"Generalized measurements, also called positive operator-valued measures (POVMs), can offer advantages over projective measurements in various quantum information tasks. Here, we realize a generalized measurement of one and two superconducting qubits with high fidelity and in a single experimental setting. To do so, we propose a hybrid method, the “Naimark-terminated binary tree,” based on a hybridization of Naimark’s dilation and binary tree techniques that leverages emerging hardware capabilities for midcircuit measurements and feed-forward control. Furthermore, we showcase a highly effective use of approximate compiling to enhance POVM fidelity in noisy conditions. We argue that our hybrid method scales better toward larger system sizes than its constituent methods and demonstrate its advantage by performing detector tomography of symmetric, informationally complete POVM (SIC POVM). Detector fidelity is further improved through a composite error-mitigation strategy that incorporates twirling and a newly devised conditional readout error mitigation. Looking forward, we expect improvements in approximate compilation and hardware noise for dynamic circuits to enable generalized measurements of larger multiqubit POVMs on superconducting qubits.","PeriodicalId":501296,"journal":{"name":"PRX Quantum","volume":"95 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141783991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-23 DOI: 10.1103/prxquantum.5.030314
Symmetry Breaking in Geometric Quantum Machine Learning in the Presence of Noise
Cenk Tüysüz, Su Yeon Chang, Maria Demidik, Karl Jansen, Sofia Vallecorsa, Michele Grossi
Geometric quantum machine learning based on equivariant quantum neural networks (EQNNs) has recently emerged as a promising research direction. Despite encouraging progress, studies are still limited to theory, and the role of hardware noise in EQNN training has never been explored. This work studies the behavior of EQNN models in the presence of noise. We show that certain EQNN models can preserve equivariance under Pauli channels, while this is not possible under the amplitude damping channel. We claim that the symmetry breaking grows linearly with the number of layers and the noise strength. We support our claims with numerical data from simulations as well as hardware experiments on up to 64 qubits. Furthermore, we provide strategies to enhance the symmetry protection of EQNN models in the presence of noise.
{"title":"Symmetry Breaking in Geometric Quantum Machine Learning in the Presence of Noise","authors":"Cenk Tüysüz, Su Yeon Chang, Maria Demidik, Karl Jansen, Sofia Vallecorsa, Michele Grossi","doi":"10.1103/prxquantum.5.030314","DOIUrl":"https://doi.org/10.1103/prxquantum.5.030314","url":null,"abstract":"Geometric quantum machine learning based on equivariant quantum neural networks (EQNNs) recently appeared as a promising direction in quantum machine learning. Despite encouraging progress, studies are still limited to theory, and the role of hardware noise in EQNN training has never been explored. This work studies the behavior of EQNN models in the presence of noise. We show that certain EQNN models can preserve equivariance under Pauli channels, while this is not possible under the amplitude damping channel. We claim that the symmetry breaks linearly in the number of layers and noise strength. We support our claims with numerical data from simulations as well as hardware up to 64 qubits. Furthermore, we provide strategies to enhance the symmetry protection of EQNN models in the presence of noise.","PeriodicalId":501296,"journal":{"name":"PRX Quantum","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141754192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-22 DOI: 10.1103/prxquantum.5.030313
Quantum Lego Expansion Pack: Enumerators from Tensor Networks
ChunJun Cao, Michael J. Gullans, Brad Lackey, Zitao Wang
We provide the first tensor-network method for computing quantum weight enumerator polynomials in the most general form. If a quantum code has a known tensor-network construction of its encoding map, our method is far more efficient, and in some cases exponentially faster, than the existing approach. As a corollary, it produces decoders and an algorithm that computes the code distance. For non-(Pauli)-stabilizer codes, this constitutes the current best algorithm for computing the code distance. For degenerate stabilizer codes, it can be substantially faster than current methods. We also introduce novel weight enumerators and their applications. In particular, we show that these enumerators can be used to compute logical error rates exactly and thus construct (optimal) decoders for any independent and identically distributed single-qubit or single-qudit error channel. The enumerators also provide a more efficient method for computing nonstabilizerness in quantum many-body states. As the power of these speedups relies on a quantum Lego decomposition of quantum codes, we further provide systematic methods for decomposing quantum codes and graph states into a modular construction for which our technique applies. As a proof of principle, we perform exact analyses of the deformed surface codes, the holographic pentagon code, and the two-dimensional Bacon-Shor code under (biased) Pauli noise and limited instances of coherent error at sizes that are inaccessible by brute force.
{"title":"Quantum Lego Expansion Pack: Enumerators from Tensor Networks","authors":"ChunJun Cao, Michael J. Gullans, Brad Lackey, Zitao Wang","doi":"10.1103/prxquantum.5.030313","DOIUrl":"https://doi.org/10.1103/prxquantum.5.030313","url":null,"abstract":"We provide the first tensor-network method for computing quantum weight enumerator polynomials in the most general form. If a quantum code has a known tensor-network construction of its encoding map, our method is far more efficient, and in some cases exponentially faster than the existing approach. As a corollary, it produces decoders and an algorithm that computes the code distance. For non-(Pauli)-stabilizer codes, this constitutes the current best algorithm for computing the code distance. For degenerate stabilizer codes, it can be substantially faster compared to the current methods. We also introduce novel weight enumerators and their applications. In particular, we show that these enumerators can be used to compute logical error rates exactly and thus construct (optimal) decoders for any independent and identically distributed single qubit or qudit error channels. The enumerators also provide a more efficient method for computing nonstabilizerness in quantum many-body states. As the power for these speedups rely on a quantum Lego decomposition of quantum codes, we further provide systematic methods for decomposing quantum codes and graph states into a modular construction for which our technique applies. As a proof of principle, we perform exact analyses of the deformed surface codes, the holographic pentagon code, and the two-dimensional Bacon-Shor code under (biased) Pauli noise and limited instances of coherent error at sizes that are inaccessible by brute force.","PeriodicalId":501296,"journal":{"name":"PRX Quantum","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141754193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-19 DOI: 10.1103/prxquantum.5.030312
Quadrature Squeezing Enhances Wigner Negativity in a Mechanical Duffing Oscillator
Christian A. Rosiek, Massimiliano Rossi, Albert Schliesser, Anders S. Sørensen
Generating macroscopic nonclassical quantum states is a long-standing challenge in physics. Anharmonic dynamics is an essential ingredient to generate these states, but for large mechanical systems, the effect of the anharmonicity tends to become negligible compared with the effect of decoherence. As a possible solution to this challenge, we propose using a motional squeezed state as a resource to effectively increase the anharmonicity. We analyze the production of negativity in the Wigner distribution of a quantum anharmonic resonator initially in a squeezed state. We find that initial squeezing increases the rate at which negativity is generated. We also analyze the effect of two common sources of decoherence—namely, energy damping and dephasing—and find that the detrimental effects of energy damping are suppressed by strong squeezing. In the limit of large squeezing, which is needed for state-of-the-art systems, we find good approximations for the Wigner function. Our analysis is significant for current experiments attempting to prepare macroscopic mechanical systems in genuine quantum states. We provide an overview of several experimental platforms featuring nonlinear behaviors and low levels of decoherence. In particular, we discuss the feasibility of our proposal with carbon nanotubes and levitated nanoparticles.
{"title":"Quadrature Squeezing Enhances Wigner Negativity in a Mechanical Duffing Oscillator","authors":"Christian A. Rosiek, Massimiliano Rossi, Albert Schliesser, Anders S. Sørensen","doi":"10.1103/prxquantum.5.030312","DOIUrl":"https://doi.org/10.1103/prxquantum.5.030312","url":null,"abstract":"Generating macroscopic nonclassical quantum states is a long-standing challenge in physics. Anharmonic dynamics is an essential ingredient to generate these states, but for large mechanical systems, the effect of the anharmonicity tends to become negligible compared with the effect of decoherence. As a possible solution to this challenge, we propose using a motional squeezed state as a resource to effectively increase the anharmonicity. We analyze the production of negativity in the Wigner distribution of a quantum anharmonic resonator initially in a squeezed state. We find that initial squeezing increases the rate at which negativity is generated. We also analyze the effect of two common sources of decoherence—namely, energy damping and dephasing—and find that the detrimental effects of energy damping are suppressed by strong squeezing. In the limit of large squeezing, which is needed for state-of-the-art systems, we find good approximations for the Wigner function. Our analysis is significant for current experiments attempting to prepare macroscopic mechanical systems in genuine quantum states. We provide an overview of several experimental platforms featuring nonlinear behaviors and low levels of decoherence. In particular, we discuss the feasibility of our proposal with carbon nanotubes and levitated nanoparticles.","PeriodicalId":501296,"journal":{"name":"PRX Quantum","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141745906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-18 DOI: 10.1103/prxquantum.5.030311
Probing Postmeasurement Entanglement without Postselection
Samuel J. Garratt, Ehud Altman
We study the problem of observing quantum collective phenomena emerging from large numbers of measurements. These phenomena are difficult to observe in conventional experiments because, in order to distinguish the effects of measurement from dephasing, it is necessary to postselect on sets of measurement outcomes with Born probabilities that are exponentially small in the number of measurements performed. An unconventional approach, which avoids this exponential “postselection problem”, is to construct cross-correlations between experimental data and the results of simulations on classical computers. However, these cross-correlations generally have no definite relation to physical quantities. We first show how to incorporate classical shadows into this framework, thereby allowing for the construction of quantum information-theoretic cross-correlations. We then identify cross-correlations that both upper and lower bound the measurement-averaged von Neumann entanglement entropy, as well as cross-correlations that lower bound the measurement-averaged purity and entanglement negativity. These bounds show that experiments can be performed to constrain postmeasurement entanglement without the need for postselection. To illustrate our technique, we consider how it could be used to observe the measurement-induced entanglement transition in Haar-random quantum circuits. We use exact numerical calculations as proxies for quantum simulations and, to highlight the fundamental limitations of classical memory, we construct cross-correlations with tensor-network calculations at finite bond dimension. Our results reveal a signature of measurement-induced criticality that can be observed using a quantum simulator in polynomial time and with polynomial classical memory.
{"title":"Probing Postmeasurement Entanglement without Postselection","authors":"Samuel J. Garratt, Ehud Altman","doi":"10.1103/prxquantum.5.030311","DOIUrl":"https://doi.org/10.1103/prxquantum.5.030311","url":null,"abstract":"We study the problem of observing quantum collective phenomena emerging from large numbers of measurements. These phenomena are difficult to observe in conventional experiments because, in order to distinguish the effects of measurement from dephasing, it is necessary to postselect on sets of measurement outcomes with Born probabilities that are exponentially small in the number of measurements performed. An unconventional approach, which avoids this exponential “postselection problem”, is to construct cross-correlations between experimental data and the results of simulations on classical computers. However, these cross-correlations generally have no definite relation to physical quantities. We first show how to incorporate classical shadows into this framework, thereby allowing for the construction of quantum information-theoretic cross-correlations. We then identify cross-correlations that both upper and lower bound the measurement-averaged von Neumann entanglement entropy, as well as cross-correlations that lower bound the measurement-averaged purity and entanglement negativity. These bounds show that experiments can be performed to constrain postmeasurement entanglement without the need for postselection. To illustrate our technique, we consider how it could be used to observe the measurement-induced entanglement transition in Haar-random quantum circuits. We use exact numerical calculations as proxies for quantum simulations and, to highlight the fundamental limitations of classical memory, we construct cross-correlations with tensor-network calculations at finite bond dimension. Our results reveal a signature of measurement-induced criticality that can be observed using a quantum simulator in polynomial time and with polynomial classical memory.","PeriodicalId":501296,"journal":{"name":"PRX Quantum","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141740917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-17 DOI: 10.1103/prxquantum.5.030310
Symmetry-Enforced Many-Body Separability Transitions
Yu-Hsueh Chen, Tarun Grover
We study quantum many-body mixed states with a symmetry from the perspective of separability, i.e., whether a mixed state can be expressed as an ensemble of short-range-entangled symmetric pure states. We provide evidence for “symmetry-enforced separability transitions” in a variety of states, where in one regime the mixed state is expressible as a convex sum of symmetric short-range-entangled pure states, while in the other regime, such a representation is not feasible. We first discuss the Gibbs state of Hamiltonians that exhibit spontaneous breaking of a discrete symmetry, and argue that the associated thermal phase transition can be thought of as a symmetry-enforced separability transition. Next we study cluster states in various dimensions subjected to local decoherence, and identify several distinct mixed-state phases and associated separability phase transitions, which also provides an alternative perspective on recently discussed “average symmetry-protected topological order.” We also study decohered p + ip superconductors, and find that if the decoherence breaks the fermion parity explicitly, then the resulting mixed state can be expressed as a convex sum of nonchiral states, while a fermion parity–preserving decoherence results in a phase transition at a nonzero threshold that corresponds to spontaneous breaking of fermion parity. Finally, we briefly discuss systems that satisfy the no low-energy trivial state property, such as the recently discovered good low-density parity-check codes, and argue that the Gibbs state of such systems exhibits a temperature-tuned separability transition.
{"title":"Symmetry-Enforced Many-Body Separability Transitions","authors":"Yu-Hsueh Chen, Tarun Grover","doi":"10.1103/prxquantum.5.030310","DOIUrl":"https://doi.org/10.1103/prxquantum.5.030310","url":null,"abstract":"We study quantum many-body mixed states with a symmetry from the perspective of <i>separability</i>, i.e., whether a mixed state can be expressed as an ensemble of short-range-entangled symmetric pure states. We provide evidence for “symmetry-enforced separability transitions” in a variety of states, where in one regime the mixed state is expressible as a convex sum of symmetric short-range-entangled pure states, while in the other regime, such a representation is not feasible. We first discuss the Gibbs state of Hamiltonians that exhibit spontaneous breaking of a discrete symmetry, and argue that the associated thermal phase transition can be thought of as a symmetry-enforced separability transition. Next we study cluster states in various dimensions subjected to local decoherence, and identify several distinct mixed-state phases and associated separability phase transitions, which also provides an alternative perspective on recently discussed “average symmetry-protected topological order.” We also study decohered <math display=\"inline\" overflow=\"scroll\" xmlns=\"http://www.w3.org/1998/Math/MathML\"><mi>p</mi><mo>+</mo><mi>i</mi><mi>p</mi></math> superconductors, and find that if the decoherence breaks the fermion parity explicitly, then the resulting mixed state can be expressed as a convex sum of nonchiral states, while a fermion parity–preserving decoherence results in a phase transition at a nonzero threshold that corresponds to spontaneous breaking of fermion parity. Finally, we briefly discuss systems that satisfy the no low-energy trivial state property, such as the recently discovered good low-density parity-check codes, and argue that the Gibbs state of such systems exhibits a temperature-tuned separability transition.","PeriodicalId":501296,"journal":{"name":"PRX Quantum","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141740918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-16 DOI: 10.1103/prxquantum.5.030309
Dynamic Cooling on Contemporary Quantum Computers
Lindsay Bassman Oftelie, Antonella De Pasquale, Michele Campisi
We study the problem of dynamic cooling whereby a target qubit is cooled at the expense of heating up N − 1 further identical qubits by means of a global unitary operation. A standard back-of-the-envelope high-temperature estimate establishes that the target qubit temperature can be dynamically cooled by at most a factor of 1/√N. Here we provide the exact expression for the minimum temperature to which the target qubit can be cooled and reveal that there is a crossover from the high initial temperature regime, where the scaling is 1/√N, to a low initial temperature regime, where a much faster scaling of 1/N occurs. This slow, 1/√N scaling, which was relevant for early high-temperature NMR quantum computers, is the reason dynamic cooling was dismissed as ineffectual around 20 years ago; the fact that current low-temperature quantum computers fall in the fast, 1/N scaling regime reinstates the appeal of dynamic cooling today. We further show that the associated work cost of cooling is exponentially more advantageous in the low-temperature regime. We discuss the implementation of dynamic cooling in terms of quantum circuits and examine the effects of hardware noise. We successfully demonstrate dynamic cooling in a three-qubit system on a real quantum processor. Since the circuit size grows quickly with N, scaling dynamic cooling to larger systems on noisy devices poses a challenge. We therefore propose a suboptimal cooling algorithm, whereby relinquishing a small amount of cooling capability results in a drastically reduced circuit complexity, greatly facilitating the implementation of dynamic cooling on near-future quantum computers.
{"title":"Dynamic Cooling on Contemporary Quantum Computers","authors":"Lindsay Bassman Oftelie, Antonella De Pasquale, Michele Campisi","doi":"10.1103/prxquantum.5.030309","DOIUrl":"https://doi.org/10.1103/prxquantum.5.030309","url":null,"abstract":"We study the problem of dynamic cooling whereby a target qubit is cooled at the expense of heating up <math display=\"inline\" overflow=\"scroll\" xmlns=\"http://www.w3.org/1998/Math/MathML\"><mi>N</mi><mo>−</mo><mn>1</mn></math> further identical qubits by means of a global unitary operation. A standard back-of-the-envelope high-temperature estimate establishes that the target qubit temperature can be dynamically cooled by at most a factor of <math display=\"inline\" overflow=\"scroll\" xmlns=\"http://www.w3.org/1998/Math/MathML\"><mn>1</mn><mo>/</mo><msqrt><mi>N</mi></msqrt></math>. Here we provide the exact expression for the minimum temperature to which the target qubit can be cooled and reveal that there is a crossover from the high initial temperature regime, where the scaling is <math display=\"inline\" overflow=\"scroll\" xmlns=\"http://www.w3.org/1998/Math/MathML\"><mn>1</mn><mo>/</mo><msqrt><mi>N</mi></msqrt></math>, to a low initial temperature regime, where a much faster scaling of <math display=\"inline\" overflow=\"scroll\" xmlns=\"http://www.w3.org/1998/Math/MathML\"><mn>1</mn><mo>/</mo><mi>N</mi></math> occurs. This slow, <math display=\"inline\" overflow=\"scroll\" xmlns=\"http://www.w3.org/1998/Math/MathML\"><mn>1</mn><mo>/</mo><msqrt><mi>N</mi></msqrt></math> scaling, which was relevant for early high-temperature NMR quantum computers, is the reason dynamic cooling was dismissed as ineffectual around 20 years ago; the fact that current low-temperature quantum computers fall in the fast, <math display=\"inline\" overflow=\"scroll\" xmlns=\"http://www.w3.org/1998/Math/MathML\"><mn>1</mn><mo>/</mo><mi>N</mi></math> scaling regime, reinstates the appeal of dynamic cooling today. We further show that the associated work cost of cooling is exponentially more advantageous in the low-temperature regime. We discuss the implementation of dynamic cooling in terms of quantum circuits and examine the effects of hardware noise. We successfully demonstrate dynamic cooling in a three-qubit system on a real quantum processor. Since the circuit size grows quickly with <math display=\"inline\" overflow=\"scroll\" xmlns=\"http://www.w3.org/1998/Math/MathML\"><mi>N</mi></math>, scaling dynamic cooling to larger systems on noisy devices poses a challenge. We therefore propose a suboptimal cooling algorithm, whereby relinquishing a small amount of cooling capability results in a drastically reduced circuit complexity, greatly facilitating the implementation of dynamic cooling on near-future quantum computers.","PeriodicalId":501296,"journal":{"name":"PRX Quantum","volume":"54 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141717899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-15 DOI: 10.1103/prxquantum.5.030308
Lifetime Reduction of Single Germanium-Vacancy Centers in Diamond via a Tunable Open Microcavity
Rigel Zifkin, César Daniel Rodríguez Rosenblueth, Erika Janitz, Yannik Fontana, Lilian Childress
Coupling between a single quantum emitter and an optical cavity presents a key capability for future quantum networking applications. Here, we explore interactions between individual germanium-vacancy (GeV) defects in diamond and an open microcavity at cryogenic temperatures. Exploiting the tunability of our microcavity system to characterize and select emitters, we observe a Purcell-effect-induced lifetime reduction of up to 4.5 ± 0.3 and extract coherent-coupling rates up to 360 ± 20 MHz. Our results indicate that the GeV defect has favorable optical properties for cavity coupling, with a quantum efficiency of at least 0.34 ± 0.05 and likely much higher.
Pub Date : 2024-07-12 DOI: 10.1103/prxquantum.5.030307
Quantum Criticality Under Imperfect Teleportation
Pablo Sala, Sara Murciano, Yue Liu, Jason Alicea
Entanglement, measurement, and classical communication together enable teleportation of quantum states between distant parties, in principle, with perfect fidelity. To what extent do correlations and entanglement of a many-body wave function transfer under imperfect teleportation protocols? We address this question for the case of an imperfectly teleported quantum critical wave function, focusing on the ground state of a critical Ising chain. We demonstrate that imperfections, e.g., in the entangling gate adopted for a given protocol, effectively manifest as weak measurements acting on the otherwise pristinely teleported critical state. Armed with this perspective, we leverage and further develop the theory of measurement-altered quantum criticality to quantify the resilience of critical-state teleportation. We identify classes of teleportation protocols for which imperfection (i) preserves both the universal long-range entanglement and correlations of the original quantum critical state, (ii) weakly modifies these quantities away from their universal values, and (iii) obliterates long-range entanglement altogether while preserving power-law correlations, albeit with a new set of exponents. We also show that mixed states describing the average over a series of sequential imperfect teleportation events retain pristine power-law correlations due to a “built-in” decoding algorithm, though their entanglement structure measured by the negativity depends on errors similarly to individual protocol runs. These results may allow one to design teleportation protocols that optimize against errors—highlighting a potential practical application of measurement-altered criticality.
{"title":"Quantum Criticality Under Imperfect Teleportation","authors":"Pablo Sala, Sara Murciano, Yue Liu, Jason Alicea","doi":"10.1103/prxquantum.5.030307","DOIUrl":"https://doi.org/10.1103/prxquantum.5.030307","url":null,"abstract":"Entanglement, measurement, and classical communication together enable teleportation of quantum states between distant parties, in principle, with perfect fidelity. To what extent do correlations and entanglement of a many-body wave function transfer under <i>imperfect</i> teleportation protocols? We address this question for the case of an imperfectly teleported quantum critical wave function, focusing on the ground state of a critical Ising chain. We demonstrate that imperfections, e.g., in the entangling gate adopted for a given protocol, effectively manifest as weak measurements acting on the otherwise pristinely teleported critical state. Armed with this perspective, we leverage and further develop the theory of measurement-altered quantum criticality to quantify the resilience of critical-state teleportation. We identify classes of teleportation protocols for which imperfection (i) preserves both the universal long-range entanglement and correlations of the original quantum critical state, (ii) weakly modifies these quantities away from their universal values, and (iii) obliterates long-range entanglement altogether while preserving power-law correlations, albeit with a new set of exponents. We also show that mixed states describing the average over a series of sequential imperfect teleportation events retain pristine power-law correlations due to a “built-in” decoding algorithm, though their entanglement structure measured by the negativity depends on errors similarly to individual protocol runs. These results may allow one to design teleportation protocols that optimize against errors—highlighting a potential practical application of measurement-altered criticality.","PeriodicalId":501296,"journal":{"name":"PRX Quantum","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141610134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}