
arXiv - CS - Emerging Technologies: Latest Publications

On the Limitations of Compute Thresholds as a Governance Strategy
Pub Date : 2024-07-08 DOI: arxiv-2407.05694
Sara Hooker
At face value, this essay is about understanding a fairly esoteric governance tool called compute thresholds. However, in order to grapple with whether these thresholds will achieve anything, we must first understand how they came to be. This requires engaging with a decades-old debate at the heart of computer science progress, namely, is bigger always better? Hence, this essay may be of interest not only to policymakers and the wider public but also to computer scientists interested in understanding the role of compute in unlocking breakthroughs. Does a certain inflection point of compute result in changes to the risk profile of a model? This discussion is increasingly urgent given the wide adoption of governance approaches that suggest greater compute equates with higher propensity for harm. Several leading frontier AI companies have released responsible scaling policies. Both the White House Executive Orders on AI Safety (EO) and the EU AI Act encode the use of FLOP, or floating-point operations, as a way to identify more powerful systems. What is striking about the choice of compute thresholds to date is that no models currently deployed in the wild fulfill the current criteria set by the EO. This implies that the emphasis is often not on auditing the risks and harms incurred by currently deployed models, but rather is based upon the belief that future levels of compute will introduce unforeseen new risks. A key conclusion of this essay is that compute thresholds as currently implemented are shortsighted and likely to fail to mitigate risk. Governance that is overly reliant on compute fails to understand that the relationship between compute and risk is highly uncertain and rapidly changing. It also overestimates our ability to predict what abilities emerge at different scales. This essay ends with recommendations for a better way forward.
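The FLOP thresholds discussed here can be made concrete with the widely used rule of thumb that training compute is roughly 6 FLOPs per parameter per training token. A minimal sketch, using the EO's 10^26-operation threshold and hypothetical model scales (the model sizes below are illustrative, not from the essay):

```python
def training_flop(params: float, tokens: float) -> float:
    """Standard rough estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * params * tokens

# Threshold value from the White House EO; model scales are hypothetical.
EO_THRESHOLD = 1e26

for params, tokens in [(70e9, 2e12), (1e12, 20e12)]:
    flop = training_flop(params, tokens)
    side = "above" if flop > EO_THRESHOLD else "below"
    print(f"{params:.0e} params, {tokens:.0e} tokens -> {flop:.1e} FLOP ({side})")
```

A 70B-parameter model trained on 2T tokens lands around 8.4e23 FLOP, orders of magnitude under the threshold, which illustrates the essay's point that currently deployed models do not meet the EO criteria.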
Citations: 0
Energy Efficient Knapsack Optimization Using Probabilistic Memristor Crossbars
Pub Date : 2024-07-05 DOI: arxiv-2407.04332
Jinzhan Li, Suhas Kumar, Su-in Yi
Constrained optimization underlies crucial societal problems (for instance, stock trading and bandwidth allocation), but is often computationally hard (complexity grows exponentially with problem size). The big-data era urgently demands low-latency and low-energy optimization at the edge, which cannot be handled by digital processors due to their non-parallel von Neumann architecture. Recent efforts using massively parallel hardware (such as memristor crossbars and quantum processors) employing annealing algorithms, while promising, have handled relatively easy and stable problems with sparse or binary representations (such as the max-cut or traveling salesman problems). However, most real-world applications embody three features, which are encoded in the knapsack problem and cannot be handled by annealing algorithms: dense and non-binary representations, with destabilizing self-feedback. Here we demonstrate a post-digital-hardware-friendly randomized competitive Ising-inspired (RaCI) algorithm performing knapsack optimization, experimentally implemented on a foundry-manufactured CMOS-integrated probabilistic analog memristor crossbar. Our solution outperforms digital and quantum approaches by over 4 orders of magnitude in energy efficiency.
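The paper's RaCI algorithm runs on probabilistic memristor hardware and is not specified in the abstract; as a software-only stand-in, a plain simulated-annealing solver illustrates the 0/1 knapsack objective that such hardware optimizes (problem instance and parameters are invented for illustration):

```python
import math
import random

def sa_knapsack(values, weights, capacity, steps=20000, t0=2.0, seed=0):
    """Simulated-annealing sketch for 0/1 knapsack; a software stand-in,
    not the paper's hardware RaCI algorithm."""
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n  # current selection (bit per item)

    def score(sel):
        w = sum(wi for wi, s in zip(weights, sel) if s)
        v = sum(vi for vi, s in zip(values, sel) if s)
        return v if w <= capacity else -1  # penalize infeasible states

    cur = score(x)
    best, best_x = cur, x[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1  # propose flipping one item in/out
        new = score(x)
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new
            if cur > best:
                best, best_x = cur, x[:]
        else:
            x[i] ^= 1  # reject: undo the flip
    return best, best_x

values = [10, 13, 7, 8, 2]
weights = [5, 6, 4, 3, 1]
best, sel = sa_knapsack(values, weights, capacity=10)
```

Note the features the abstract highlights: the knapsack objective is dense (every item interacts with the capacity constraint) and non-binary in its coefficients, which is what makes it harder for standard annealing formulations than max-cut-style problems.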
Citations: 0
Late Breaking Results: Fortifying Neural Networks: Safeguarding Against Adversarial Attacks with Stochastic Computing
Pub Date : 2024-07-05 DOI: arxiv-2407.04861
Faeze S. Banitaba, Sercan Aygun, M. Hassan Najafi
In neural network (NN) security, safeguarding model integrity and resilience against adversarial attacks has become paramount. This study investigates the application of stochastic computing (SC) as a novel mechanism to fortify NN models. The primary objective is to assess the efficacy of SC to mitigate the deleterious impact of attacks on NN results. Through a series of rigorous experiments and evaluations, we explore the resilience of NNs employing SC when subjected to adversarial attacks. Our findings reveal that SC introduces a robust layer of defense, significantly reducing the susceptibility of networks to attack-induced alterations in their outcomes. This research contributes novel insights into the development of more secure and reliable NN systems, essential for applications in sensitive domains where data integrity is of utmost concern.
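The core SC primitive the abstract relies on can be shown in a few lines: in unipolar stochastic computing, a value in [0, 1] is encoded as the probability of a 1 in a random bitstream, and a single AND gate multiplies two independent streams. A minimal sketch of the encoding (not the authors' experimental setup):

```python
import random

def to_bitstream(p, n, rng):
    """Encode a probability p in [0, 1] as a unipolar stochastic bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b):
    """Bitwise AND of two independent unipolar streams multiplies their values."""
    return [x & y for x, y in zip(a, b)]

rng = random.Random(42)
n = 100_000
sa = to_bitstream(0.5, n, rng)
sb = to_bitstream(0.4, n, rng)
prod = sum(sc_multiply(sa, sb)) / n  # estimates 0.5 * 0.4 = 0.2
```

The randomness of the encoding is also the intuition behind SC's robustness claim: small adversarial perturbations to individual bits average out over long streams.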
Citations: 0
Resistive Memory for Computing and Security: Algorithms, Architectures, and Platforms
Pub Date : 2024-07-04 DOI: arxiv-2407.03843
Simranjeet Singh, Farhad Merchant, Sachin Patkar
Resistive random-access memory (RRAM) is gaining popularity due to its ability to offer computing within the memory and its non-volatile nature. The unique properties of RRAM, such as binary switching, multi-state switching, and device variations, can be leveraged to design novel techniques and algorithms. This thesis proposes techniques for utilizing RRAM devices in three major directions: i) digital logic implementation, ii) multi-valued computing, and iii) hardware security primitive design. We proposed new algorithms and architectures and conducted experimental studies on each implementation. Moreover, we developed an electronic design automation framework and hardware platforms to facilitate these experiments.
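For the first direction, digital logic on RRAM: many memristive stateful-logic schemes build on material implication (IMPLY), which together with a FALSE (reset-to-0) operation is functionally complete. A hedged Boolean-level sketch of the standard IMPLY-based NAND construction (device physics omitted; not necessarily this thesis's specific circuits):

```python
def imply(p: int, q: int) -> int:
    """Material implication, the native stateful-logic primitive in many
    RRAM logic schemes: the result (NOT p) OR q is written into device q."""
    return (1 - p) | q

def nand(p: int, q: int) -> int:
    """NAND from two IMPLY steps plus a work memristor reset to 0;
    the textbook construction showing IMPLY + FALSE is complete."""
    s = 0                # work device initialized (FALSE operation)
    s = imply(p, s)      # s = NOT p
    return imply(q, s)   # (NOT q) OR (NOT p) = NAND(p, q)

truth = [(p, q, nand(p, q)) for p in (0, 1) for q in (0, 1)]
```

Since NAND is universal, any digital logic function can in principle be mapped onto sequences of such in-memory operations.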
Citations: 0
Quantum Serverless Paradigm and Application Development using the QFaaS Framework
Pub Date : 2024-07-03 DOI: arxiv-2407.02828
Hoa T. Nguyen, Bui Binh An Pham, Muhammad Usman, Rajkumar Buyya
Quantum computing has the potential to solve complex problems beyond the capabilities of classical computers. However, its practical use is currently limited due to early-stage quantum software engineering and the constraints of Noisy Intermediate-Scale Quantum (NISQ) devices. To address this issue, this chapter introduces the concept of serverless quantum computing with examples using QFaaS, a practical Quantum Function-as-a-Service framework. This framework utilizes the serverless computing model to simplify quantum application development and deployment by abstracting the complexities of quantum hardware and enhancing application portability across different quantum software development kits and quantum backends. The chapter provides comprehensive documentation and guidelines for deploying and using QFaaS, detailing the setup, component deployment, and examples of service-oriented quantum applications. This framework offers a promising approach to overcoming current limitations and advancing the practical software engineering of quantum computing.
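The portability idea can be sketched abstractly: a quantum function is a backend-agnostic circuit description that is bound to a concrete backend only at invocation time. Every name below is an illustrative stand-in, not QFaaS's actual API:

```python
# Hypothetical sketch of the function-as-a-service idea: the caller names a
# backend, not an SDK. All identifiers here are invented for illustration.
from typing import Callable, Dict, List, Tuple

CircuitSpec = List[Tuple]          # e.g. [("h", 0), ("cx", 0, 1)]
Backend = Callable[[CircuitSpec, int], Dict[str, int]]

def fake_simulator(circuit: CircuitSpec, shots: int) -> Dict[str, int]:
    """Stand-in backend: pretends every Bell-pair run yields '00' or '11'."""
    return {"00": shots // 2, "11": shots - shots // 2}

BACKENDS: Dict[str, Backend] = {"local-sim": fake_simulator}

def invoke(circuit: CircuitSpec, backend: str = "local-sim",
           shots: int = 1024) -> Dict[str, int]:
    """Serverless-style entry point: dispatch the same circuit description
    to whichever registered backend the caller requests."""
    return BACKENDS[backend](circuit, shots)

bell = [("h", 0), ("cx", 0, 1)]
counts = invoke(bell)
```

Swapping `"local-sim"` for another registered backend name would re-run the identical function on different hardware, which is the portability property the chapter describes.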
Citations: 0
Unified Anomaly Detection methods on Edge Device using Knowledge Distillation and Quantization
Pub Date : 2024-07-03 DOI: arxiv-2407.02968
Sushovan Jena, Arya Pulkit, Kajal Singh, Anoushka Banerjee, Sharad Joshi, Ananth Ganesh, Dinesh Singh, Arnav Bhavsar
With the rapid advances in deep learning and smart manufacturing in Industry 4.0, there is an imperative for high-throughput, high-performance, and fully integrated visual inspection systems. Most anomaly detection approaches using defect detection datasets, such as MVTec AD, employ one-class models that require fitting separate models for each class. On the contrary, unified models eliminate the need for fitting separate models for each class and significantly reduce cost and memory requirements. Thus, in this work, we experiment with considering a unified multi-class setup. Our experimental study shows that multi-class models perform at par with one-class models for the standard MVTec AD dataset. Hence, this indicates that there may not be a need to learn separate object/class-wise models when the object classes are significantly different from each other, as is the case of the dataset considered. Furthermore, we have deployed three different unified lightweight architectures on the CPU and an edge device (NVIDIA Jetson Xavier NX). We analyze the quantized multi-class anomaly detection models in terms of latency and memory requirements for deployment on the edge device while comparing quantization-aware training (QAT) and post-training quantization (PTQ) for performance at different precision widths. In addition, we explored two different methods of calibration required in post-training scenarios and show that one of them performs notably better, highlighting its importance for unsupervised tasks. Due to quantization, the performance drop in PTQ is further compensated by QAT, which yields at-par performance with the original 32-bit floating point in two of the models considered.
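The PTQ half of the comparison can be illustrated with a minimal symmetric int8 weight quantizer; this is a sketch of the general technique (per-tensor scaling, round-to-nearest), not the authors' pipeline:

```python
def quantize_int8(weights):
    """Minimal post-training quantization sketch: symmetric per-tensor
    int8 with a scale calibrated from the max absolute weight."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero case
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float values."""
    return [qi * scale for qi in q]

w = [0.31, -1.27, 0.05, 0.9]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))  # bounded by scale / 2
```

QAT differs in that this rounding is simulated during training so the network learns to compensate, which is why the abstract reports QAT recovering the accuracy PTQ loses; the calibration methods the authors compare correspond to different ways of choosing `scale` for activations.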
Citations: 0
DRLQ: A Deep Reinforcement Learning-based Task Placement for Quantum Cloud Computing
Pub Date : 2024-07-03 DOI: arxiv-2407.02748
Hoa T. Nguyen, Muhammad Usman, Rajkumar Buyya
The quantum cloud computing paradigm presents unique challenges in task placement due to the dynamic and heterogeneous nature of quantum computation resources. Traditional heuristic approaches fall short in adapting to the rapidly evolving landscape of quantum computing. This paper proposes DRLQ, a novel Deep Reinforcement Learning (DRL)-based technique for task placement in quantum cloud computing environments, addressing the optimization of task completion time and quantum task scheduling efficiency. It leverages the Deep Q-Network (DQN) architecture, enhanced with the Rainbow DQN approach, to create a dynamic task placement strategy. This approach is one of the first in the field of quantum cloud resource management, enabling adaptive learning and decision-making for quantum cloud environments and effectively optimizing task placement based on changing conditions and resource availability. We conduct extensive experiments using the QSimPy simulation toolkit to evaluate the performance of our method, demonstrating substantial improvements in task execution efficiency and a reduction in the need to reschedule quantum tasks. Our results show that utilizing the DRLQ approach for task placement can significantly reduce total quantum task completion time by 37.81% to 72.93% and prevent task rescheduling attempts compared to other heuristic approaches.
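A full Rainbow DQN is beyond a sketch, but the placement-as-RL framing can be shown with a tabular stand-in: states are task classes, actions are compute nodes, and a hypothetical reward is negative completion time. None of the numbers below come from the paper:

```python
import random

def train_placement(num_nodes=3, episodes=2000, alpha=0.2, eps=0.1, seed=1):
    """Tabular epsilon-greedy stand-in for the paper's Rainbow-DQN agent.
    State: task size class (small/large); action: node choice;
    reward: negative completion time on the chosen node (hypothetical)."""
    rng = random.Random(seed)
    node_speed = [1.0, 2.0, 4.0]               # invented node throughputs
    Q = [[0.0] * num_nodes for _ in range(2)]  # Q[state][action]
    for _ in range(episodes):
        s = rng.randrange(2)                   # random incoming task class
        if rng.random() < eps:                 # explore
            a = rng.randrange(num_nodes)
        else:                                  # exploit current estimate
            a = max(range(num_nodes), key=lambda i: Q[s][i])
        size = 1.0 if s == 0 else 8.0
        reward = -size / node_speed[a]         # faster node -> higher reward
        Q[s][a] += alpha * (reward - Q[s][a])  # one-step value update
    return Q

Q = train_placement()
best_for_large = max(range(3), key=lambda i: Q[1][i])
```

The learned policy routes tasks to the fastest node in this toy; the paper's contribution is doing this with a deep network over a realistic, changing quantum-cloud state, where no fixed heuristic stays optimal.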
Citations: 0
In-Memory Mirroring: Cloning Without Reading
Pub Date : 2024-07-03 DOI: arxiv-2407.02921
Simranjeet Singh, Ankit Bende, Chandan Kumar Jha, Vikas Rana, Rolf Drechsler, Sachin Patkar, Farhad Merchant
In-memory computing (IMC) has gained significant attention recently as it attempts to reduce the impact of memory bottlenecks. Numerous schemes for digital IMC are presented in the literature, focusing on logic operations. Often, an application's description has data dependencies that must be resolved. Contemporary IMC architectures perform read followed by write operations for this purpose, which results in performance and energy penalties. To solve this fundamental problem, this paper presents in-memory mirroring (IMM). IMM eliminates the need for read and write-back steps, thus avoiding energy and performance penalties. Instead, we perform data movement within memory, involving row-wise and column-wise data transfers. Additionally, the IMM scheme enables parallel cloning of an entire row (word) with a complexity of $\mathcal{O}(1)$. Moreover, we analyze the energy consumption of the proposed technique using a resistive random-access memory crossbar and the experimentally validated JART VCM v1b model. IMM increases energy efficiency and shows a 2$\times$ performance improvement compared to conventional data movement methods.
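The cloning primitive can be mimicked at the functional level: a whole row is copied in one logical step, with no per-cell read-then-write loop exposed to the controller. A toy model, where Python's list copy stands in for the crossbar's column-parallel transfer (what the hardware scheme actually improves is timing and energy, which this cannot show):

```python
class CrossbarSim:
    """Functional toy of the IMM idea: row cloning as a single parallel
    operation rather than a read followed by a write-back."""
    def __init__(self, rows, cols):
        self.cells = [[0] * cols for _ in range(rows)]

    def write_row(self, r, bits):
        self.cells[r] = list(bits)

    def clone_row(self, src, dst):
        # One logical O(1) step: every column copies src -> dst in parallel.
        self.cells[dst] = list(self.cells[src])

xbar = CrossbarSim(4, 8)
xbar.write_row(0, [1, 0, 1, 1, 0, 0, 1, 0])
xbar.clone_row(0, 2)
```

In a conventional IMC flow the controller would read row 0 out of the array and write it back into row 2; IMM's point is that the transfer never leaves the array.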
Citations: 0
Decentralized Intelligence Network (DIN)
Pub Date: 2024-07-02 DOI: arxiv-2407.02461
Abraham Nash
Decentralized Intelligence Network (DIN) addresses the significant challenges of data sovereignty and AI utilization caused by the fragmentation and siloing of data across providers and institutions. This comprehensive framework overcomes access barriers to scalable data sources previously hindered by silos by leveraging: 1) personal data stores as a prerequisite for data sovereignty; 2) a scalable federated learning protocol implemented on a public blockchain for decentralized AI training, where data remains with participants and only model parameter updates are shared; and 3) a scalable, trustless rewards mechanism to incentivize participation and ensure fair reward distribution. This framework ensures that no entity can prevent or control access to training on data offered by participants or determine financial benefits, as these processes operate on a public blockchain with an immutable record and without a third party. It supports effective AI training, allowing participants to maintain control over their data, benefit financially, and contribute to a decentralized, scalable ecosystem that leverages collective AI to develop beneficial algorithms.
{"title":"Decentralized Intelligence Network (DIN)","authors":"Abraham Nash","doi":"arxiv-2407.02461","DOIUrl":"https://doi.org/arxiv-2407.02461","url":null,"abstract":"Decentralized Intelligence Network (DIN) addresses the significant challenges\u0000of data sovereignty and AI utilization caused by the fragmentation and siloing\u0000of data across providers and institutions. This comprehensive framework\u0000overcomes access barriers to scalable data sources previously hindered by silos\u0000by leveraging: 1) personal data stores as a prerequisite for data sovereignty;\u00002) a scalable federated learning protocol implemented on a public blockchain\u0000for decentralized AI training, where data remains with participants and only\u0000model parameter updates are shared; and 3) a scalable, trustless rewards\u0000mechanism to incentivize participation and ensure fair reward distribution.\u0000This framework ensures that no entity can prevent or control access to training\u0000on data offered by participants or determine financial benefits, as these\u0000processes operate on a public blockchain with an immutable record and without a\u0000third party. 
It supports effective AI training, allowing participants to\u0000maintain control over their data, benefit financially, and contribute to a\u0000decentralized, scalable ecosystem that leverages collective AI to develop\u0000beneficial algorithms.","PeriodicalId":501168,"journal":{"name":"arXiv - CS - Emerging Technologies","volume":"137 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141522518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
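The federated-learning pattern the DIN abstract describes — raw data never leaves a participant's store; only model parameter updates are shared and aggregated — can be sketched in a few lines. The one-weight linear model, learning rate, and simple averaging rule are illustrative assumptions, not DIN's actual protocol, which additionally runs the aggregation and rewards on a public blockchain.

```python
# Illustrative federated-averaging round: each participant trains locally
# on its private data and shares only the updated weight.

def local_update(weights, data, lr=0.1):
    """One gradient step of a 1-D linear model y = w*x on private data."""
    w = weights
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, participants):
    # Each participant returns only its updated parameter, never the data.
    updates = [local_update(global_w, data) for data in participants]
    return sum(updates) / len(updates)  # aggregate by averaging

participants = [
    [(1.0, 2.1), (2.0, 4.0)],   # private data store A
    [(1.5, 2.9), (3.0, 6.2)],   # private data store B
]
w = 0.0
for _ in range(50):
    w = federated_round(w, participants)
# w converges near the slope shared by both data stores (~2)
```

The key property mirrored here is that `federated_round` sees only the scalar updates, which is what lets the real protocol keep data with participants while still training a shared model.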
Assistive Image Annotation Systems with Deep Learning and Natural Language Capabilities: A Review
Pub Date: 2024-06-28 DOI: arxiv-2407.00252
Moseli Mots'oehli
While supervised learning has achieved significant success in computer vision tasks, acquiring high-quality annotated data remains a bottleneck. This paper explores both scholarly and non-scholarly works in AI-assistive deep learning image annotation systems that provide textual suggestions, captions, or descriptions of the input image to the annotator. This potentially results in higher annotation efficiency and quality. Our exploration covers annotation for a range of computer vision tasks including image classification, object detection, regression, instance, semantic segmentation, and pose estimation. We review various datasets and how they contribute to the training and evaluation of AI-assistive annotation systems. We also examine methods leveraging neuro-symbolic learning, deep active learning, and self-supervised learning algorithms that enable semantic image understanding and generate free-text output. These include image captioning, visual question answering, and multi-modal reasoning. Despite the promising potential, there is limited publicly available work on AI-assistive image annotation with textual output capabilities. We conclude by suggesting future research directions to advance this field, emphasizing the need for more publicly accessible datasets and collaborative efforts between academia and industry.
{"title":"Assistive Image Annotation Systems with Deep Learning and Natural Language Capabilities: A Review","authors":"Moseli Mots'oehli","doi":"arxiv-2407.00252","DOIUrl":"https://doi.org/arxiv-2407.00252","url":null,"abstract":"While supervised learning has achieved significant success in computer vision\u0000tasks, acquiring high-quality annotated data remains a bottleneck. This paper\u0000explores both scholarly and non-scholarly works in AI-assistive deep learning\u0000image annotation systems that provide textual suggestions, captions, or\u0000descriptions of the input image to the annotator. This potentially results in\u0000higher annotation efficiency and quality. Our exploration covers annotation for\u0000a range of computer vision tasks including image classification, object\u0000detection, regression, instance, semantic segmentation, and pose estimation. We\u0000review various datasets and how they contribute to the training and evaluation\u0000of AI-assistive annotation systems. We also examine methods leveraging\u0000neuro-symbolic learning, deep active learning, and self-supervised learning\u0000algorithms that enable semantic image understanding and generate free-text\u0000output. These include image captioning, visual question answering, and\u0000multi-modal reasoning. Despite the promising potential, there is limited\u0000publicly available work on AI-assistive image annotation with textual output\u0000capabilities. 
We conclude by suggesting future research directions to advance\u0000this field, emphasizing the need for more publicly accessible datasets and\u0000collaborative efforts between academia and industry.","PeriodicalId":501168,"journal":{"name":"arXiv - CS - Emerging Technologies","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141508407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
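The suggest-then-review loop that the annotation systems surveyed above implement can be sketched briefly. Here `suggest_caption` is a hypothetical stand-in for any captioning model, and the reviewer callback simulates the human annotator accepting or correcting each draft.

```python
# Illustrative assistive-annotation loop: the model drafts a caption,
# the human decides. Names and behavior are hypothetical.

def suggest_caption(image_path):
    # Stand-in for a real image-captioning model.
    return f"a photo ({image_path})"

def annotate(images, review):
    """Pair each image with a model suggestion, then a human decision."""
    labels = {}
    for img in images:
        draft = suggest_caption(img)
        labels[img] = review(img, draft)  # accept or correct the draft
    return labels

# Simulated reviewer: accepts drafts for .jpg files, rewrites others.
decisions = annotate(
    ["cat.jpg", "scan.png"],
    lambda img, draft: draft if img.endswith(".jpg") else "document scan",
)
```

The efficiency claim in the review rests on this division of labor: the model does the first pass, so the human edits rather than writes every label from scratch.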