Quantum image processing (QIMP) was first introduced in 2003 by Venegas-Andraca et al. at the University of Oxford. This field attempts to overcome the limitations of classical computers and the potentially overwhelming complexity of classical algorithms by providing a more effective way to store and manipulate visual information. Over the past 20 years, QIMP has become an active area of research, experiencing rapid and vigorous development. However, these advancements have been uneven, as inherent critical issues have been largely ignored. In this paper, we review the original intentions for this field and analyze various unresolved issues from a new perspective, including QIMP algorithm design, potential advantages and limitations, technological debates, and potential directions for future development. We suggest that the 20-year milestone could serve as a new beginning, and we advocate for more researchers to focus their attention on this pursuit, helping to overcome bottlenecks and achieve more practical results in the future.
{"title":"Lessons from Twenty Years of Quantum Image Processing","authors":"Fei Yan, S. Venegas-Andraca","doi":"10.1145/3663577","DOIUrl":"https://doi.org/10.1145/3663577","url":null,"abstract":"Quantum image processing (QIMP) was first introduced in 2003, by Venegas-Andraca et al. at the University of Oxford. This field attempts to overcome the limitations of classical computers and the potentially overwhelming complexity of classical algorithms by providing a more effective way to store and manipulate visual information. Over the past 20 years, QIMP has become an active area of research, experiencing rapid and vigorous development. However, these advancements have suffered from an imbalance, as inherent critical issues have been largely ignored. In this paper, we review the original intentions for this field and analyze various unresolved issues from a new perspective, including QIMP algorithm design, potential advantages and limitations, technological debates, and potential directions for future development. We suggest the 20-year milestone could serve as a new beginning and advocate for more researchers to focus their attention on this pursuit, helping to overcome bottlenecks, and achieving more practical results in the future.","PeriodicalId":474832,"journal":{"name":"ACM transactions on quantum computing","volume":"29 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141022962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the past 27 years, quantum computing has seen a huge rise in interest from both academia and industry. Quantum computers are growing in size rapidly, backed by a corresponding increase in research in the field. Significant efforts are being made to improve the reliability of quantum hardware and to develop suitable software to program quantum computers. In contrast, the verification of quantum programs has received relatively little attention. Verifying programs is especially important in the quantum setting because of how difficult it is to program complex algorithms correctly on resource-constrained and error-prone quantum hardware. Research into creating verification frameworks for quantum programs has seen recent development, with a variety of tools implemented using a collection of theoretical ideas. This survey aims to be a short introduction to the area of formal verification of quantum programs, bringing together the theory and tools developed to date. Further, this survey examines some of the challenges that the field may face in the future, namely the development of complex quantum algorithms.
{"title":"Formal Verification of Quantum Programs: Theory, Tools and Challenges","authors":"Marco Lewis, Sadegh Soudjani, Paolo Zuliani","doi":"10.1145/3624483","DOIUrl":"https://doi.org/10.1145/3624483","url":null,"abstract":"Over the past 27 years, quantum computing has seen a huge rise in interest from both academia and industry. At the current rate, quantum computers are growing in size rapidly backed up by the increase of research in the field. Significant efforts are being made to improve the reliability of quantum hardware and to develop suitable software to program quantum computers. In contrast, the verification of quantum programs has received relatively less attention. Verifying programs is especially important in the quantum setting due to how difficult it is to program complex algorithms correctly on resource-constrained and error-prone quantum hardware. Research into creating verification frameworks for quantum programs has seen recent development, with a variety of tools implemented using a collection of theoretical ideas. This survey aims to be a short introduction into the area of formal verification of quantum programs, bringing together theory and tools developed to date. Further, this survey examines some of the challenges that the field may face in the future, namely the development of complex quantum algorithms.","PeriodicalId":474832,"journal":{"name":"ACM transactions on quantum computing","volume":"169 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136077515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scaling is the key bottleneck in building digital quantum computers, posing challenges from both the quantum and the classical components. We present a classical architecture that copes with a comprehensive list of the latter challenges all at once, and we implement it fully in an end-to-end system by integrating a multi-core RISC-V CPU with our in-house control electronics. Our architecture enables scalable, high-precision control of large quantum processors and accommodates the evolving requirements of quantum hardware. A central feature is a microarchitecture that executes quantum operations in parallel on arbitrary predefined qubit groups. Another key feature is a reconfigurable quantum instruction set that supports easy qubit re-grouping and instruction extensions. As a demonstration, we implement the surface code quantum computing workflow. Our design, for the first time, reduces instruction issuing and transmission costs to constants that do not scale with the number of qubits, without adding any overhead in decoding or dispatching. Our system uses a dedicated general-purpose CPU for both qubit control and classical computation, including syndrome decoding. Implementing recent theoretical proposals as decoding firmware that parallelizes general inner decoders, we achieve unprecedented decoding capabilities of up to distances 47 and 67 with currently available systems-on-chip, for physical error rates p = 0.001 and p = 0.0001, respectively, all within 1 µs.
{"title":"A Classical Architecture For Digital Quantum Computers","authors":"Fang Zhang, Xing Zhu, Rui Chao, Cupjin Huang, Linghang Kong, Guoyang Chen, Dawei Ding, Haishan Feng, Yihuai Gao, Xiaotong Ni, Liwei Qiu, Zhe Wei, Yueming Yang, Yang Zhao, Yaoyun Shi, Weifeng Zhang, Peng Zhou, Jianxin Chen","doi":"10.1145/3626199","DOIUrl":"https://doi.org/10.1145/3626199","url":null,"abstract":"Scaling bottlenecks the making of digital quantum computers, posing challenges from both the quantum and the classical components. We present a classical architecture to cope with a comprehensive list of the latter challenges all at once , and implement it fully in an end-to-end system by integrating a multi-core RISC-V CPU with our in-house control electronics. Our architecture enables scalable, high-precision control of large quantum processors and accommodates evolving requirements of quantum hardware. A central feature is a microarchitecture executing quantum operations in parallel on arbitrary predefined qubit groups. Another key feature is a reconfigurable quantum instruction set that supports easy qubit re-grouping and instructions extensions. As a demonstration, we implement the surface code quantum computing workflow. Our design, for the first time, reduces instruction issuing and transmission costs to constants, which do not scale with the number of qubits, without adding any overheads in decoding or dispatching. Our system uses a dedicated general-purpose CPU for both qubit control and classical computation, including syndrome decoding. Implementing recent theoretical proposals as decoding firmware that parallelizes general inner decoders, we can achieve unprecedented decoding capabilities of up to distances 47 and 67 with the currently available systems-on-chips for physical error rate p = 0.001 and p = 0.0001, respectively, all in just 1 µs.","PeriodicalId":474832,"journal":{"name":"ACM transactions on quantum computing","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135195624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce quantum utility, a new approach to evaluating quantum performance that aims to capture the user experience by considering the overhead costs associated with a quantum computation. A demonstration of quantum utility by a quantum processing unit (QPU) shows that the QPU can outperform classical solvers at some tasks of interest to practitioners when the costs of computational overheads are taken into account. A milestone is a test of quantum utility that is restricted to a specific subset of overhead costs and input types. We illustrate this approach with a benchmark study of a D-Wave annealing-based QPU versus seven classical solvers on a variety of problems in heuristic optimization. We consider overhead costs that arise in standalone use of the D-Wave QPU (as opposed to a hybrid computation). We define three early milestones on the path to broad-scale quantum utility. Milestone 0 is the purely quantum computation with no overhead costs, and it is demonstrated implicitly by positive results on the other milestones. We evaluate the performance of a D-Wave Advantage QPU with respect to milestones 1 and 2: for milestone 1, the QPU outperformed all classical solvers in 99% of our tests; for milestone 2, the QPU outperformed all classical solvers in 19% of our tests, and the scenarios in which the QPU found success correspond to cases where classical solvers most frequently failed. This approach of isolating subsets of overheads for separate analysis reveals distinct mechanisms in quantum versus classical performance, which explain the observed differences in patterns of success and failure. We present evidence-based arguments that these distinctions bode well for annealing quantum processors supporting demonstrations of quantum utility on ever-expanding classes of inputs, and with more challenging milestones, in the very near future.
{"title":"Milestones on the Quantum Utility Highway: Quantum Annealing Case Study","authors":"Catherine C. McGeoch, Pau Farré","doi":"10.1145/3625307","DOIUrl":"https://doi.org/10.1145/3625307","url":null,"abstract":"We introduce quantum utility , a new approach to evaluating quantum performance that aims to capture the user experience by considering the overhead costs associated with a quantum computation. A demonstration of quantum utility by the quantum processing unit (QPU) shows that the QPU can outperform classical solvers at some tasks of interest to practitioners, when considering the costs of computational overheads. A milestone is a test of quantum utility that is restricted to a specific subset of overhead costs and input types. We illustrate this approach with a benchmark study of a D-Wave annealing-based QPU versus seven classical solvers, for a variety of problems in heuristic optimization. We consider overhead costs that arise in standalone use of the D-Wave QPU (as opposed to a hybrid computation). We define three early milestones on the path to broad-scale quantum utility. Milestone 0 is the purely quantum computation with no overhead costs, and is demonstrated implicitly by positive results on other milestones. We evaluate performance of a D-Wave Advantage QPU with respect to milestones 1 and 2: For milestone 1, the QPU outperformed all classical solvers in 99% of our tests. For milestone 2, the QPU outperformed all classical solvers in 19% of our tests, and the scenarios in which the QPU found success correspond to cases where classical solvers most frequently failed. This approach isolating subsets of overheads for separate analysis reveals distinct mechanisms in quantum versus classical performance, which explain the observed differences in patterns of success and failure. We present evidence-based arguments that these distinctions bode well for annealing quantum processors to support demonstrations of quantum utility on ever-expanding classes of inputs and with more challenging milestones, in the very near future.","PeriodicalId":474832,"journal":{"name":"ACM transactions on quantum computing","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135924988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although NISQ devices are severely constrained, hardware- and algorithm-aware quantum circuit mapping techniques have been developed to enable successful algorithm executions. Less attention has been paid to mapping and compilation implementations for spin-qubit quantum processors, due to the scarce availability of experimental devices and their small sizes. However, given their high scalability potential and rapid progress, it is timely to start exploring solutions for such devices. In this work, we discuss the unique mapping challenges of a scalable crossbar architecture with shared control and introduce SpinQ, the first native compilation framework for scalable spin-qubit architectures. At the core of SpinQ is the Integrated Strategy, which addresses the unique operational constraints of the crossbar while considering compilation scalability and achieving O(n) computational complexity. To evaluate the performance of SpinQ on this novel architecture, we compiled a broad set of well-defined quantum circuits and performed an in-depth analysis based on multiple metrics, such as gate overhead, depth overhead, and estimated success probability, which in turn allowed us to derive unique mapping and architectural insights. Finally, we propose novel mapping techniques that could increase algorithm success rates on this architecture and potentially inspire further research on quantum circuit mapping for other scalable spin-qubit architectures.
{"title":"SpinQ: Compilation Strategies for Scalable Spin-Qubit Architectures","authors":"Nikiforos Paraskevopoulos, Fabio Sebastiano, Carmen G. Almudever, Sebastian Feld","doi":"10.1145/3624484","DOIUrl":"https://doi.org/10.1145/3624484","url":null,"abstract":"Despite NISQ devices being severely constrained, hardware- and algorithm-aware quantum circuit mapping techniques have been developed to enable successful algorithm executions. Not so much attention has been paid to mapping and compilation implementations for spin-qubit quantum processors due to the scarce availability of experimental devices and their small sizes. However, based on their high scalability potential and their rapid progress it is timely to start exploring solutions on such devices. In this work, we discuss the unique mapping challenges of a scalable crossbar architecture with shared control and introduce SpinQ , the first native compilation framework for scalable spin-qubit architectures. At the core of SpinQ is the Integrated Strategy that addresses the unique operational constraints of the crossbar while considering compilation scalability and obtaining a O(n) computational complexity. To evaluate the performance of SpinQ on this novel architecture, we compiled a broad set of well-defined quantum circuits and performed an in-depth analysis based on multiple metrics such as gate overhead, depth overhead, and estimated success probability, which in turn allowed us to create unique mapping and architectural insights. Finally, we propose novel mapping techniques that could increase algorithm success rates on this architecture and potentially inspire further research on quantum circuit mapping for other scalable spin-qubit architectures.","PeriodicalId":474832,"journal":{"name":"ACM transactions on quantum computing","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135153379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuri Alexeev (Argonne National Laboratory), Alex McCaskey (NVIDIA), Wibe de Jong (Lawrence Berkeley National Laboratory), "Introduction to the Special Issue on Software Tools for Quantum Computing: Part 2," ACM Transactions on Quantum Computing, Volume 4, Issue 1, Article 1, pp. 1–3, published 2023-02-14. https://doi.org/10.1145/3574160