
Latest Publications from the International Journal of Intelligent Computing and Cybernetics

Ferroelectric Devices for Intelligent Computing
IF 4.3 Q1 Computer Science Pub Date: 2022-09-07 DOI: 10.34133/2022/9859508
G. Han, Yue Peng, Huan Liu, Jiuren Zhou, Zhengdong Luo, Bing Chen, R. Cheng, C. Jin, W. Xiao, Fenning Liu, Jiayi Zhao, Shulong Wang, Xiao Yu, Y. Liu, Yue Hao
Transistor scaling is approaching its physical limit, hindering further growth in computing capability. In the post-Moore era, emerging logic and storage devices have become the fundamental hardware for expanding the capability of intelligent computing. In this article, the recent progress of ferroelectric devices for intelligent computing is reviewed. The material properties and electrical characteristics of ferroelectric devices are elucidated, followed by a discussion of novel ferroelectric materials and devices that can be used for intelligent computing. Ferroelectric capacitors, transistors, and tunneling junction devices used for low-power logic, high-performance memory, and neuromorphic applications are comprehensively reviewed and compared. In addition, to provide useful guidance for developing high-performance ferroelectric-based intelligent computing systems, the key challenges in realizing ultrascaled ferroelectric devices for high-efficiency computing are discussed.
Citations: 4
Deep Learning in Cell Image Analysis
IF 4.3 Q1 Computer Science Pub Date: 2022-09-07 DOI: 10.34133/2022/9861263
Junde Xu, Donghao Zhou, Danruo Deng, Jingpeng Li, Cheng Chen, Xiangyun Liao, Guangyong Chen, P. Heng
Cell images, which are widely used in biomedical research and drug discovery, contain a great deal of valuable information encoding how cells respond to external stimuli and intentional perturbations. Meanwhile, to discover rarer phenotypes, cell imaging is frequently performed in a high-content manner. Consequently, manual interpretation of cell images becomes extremely inefficient. Fortunately, with the advancement of deep-learning technologies, an increasing number of deep learning-based algorithms have been developed to automate and streamline this process. In this study, we present an in-depth survey of the three most critical tasks in cell image analysis: segmentation, tracking, and classification. Despite impressive benchmark scores, a challenge remains: most algorithms verify their performance only in customized settings, leaving a performance gap between academic research and practical application. Thus, we also review more advanced machine learning technologies, aiming to make deep learning-based methods more useful and ultimately to promote the application of deep-learning algorithms.
Citations: 6
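As a concrete illustration of the segmentation task surveyed above, the following sketch runs one training step of a minimal encoder-decoder network on a dummy grayscale image. The architecture, channel counts, and the soft Dice loss are assumptions chosen for brevity, not any particular method from the survey.

```python
# Minimal sketch of deep-learning-based cell segmentation (illustrative only).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(                      # downsample 2x, learn local texture
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(                      # upsample back to input resolution
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                       # per-pixel foreground logit
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss, a common choice for imbalanced cell masks."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

# One training step on a dummy 64x64 grayscale image and mask.
model = TinySegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
img = torch.rand(1, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.8).float()
opt.zero_grad()
loss = dice_loss(model(img), mask)
loss.backward()
opt.step()
print(f"dice loss: {loss.item():.3f}")
```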
Queueing-Theoretic Performance Analysis of a Low-Entropy Labeled Network Stack
IF 4.3 Q1 Computer Science Pub Date: 2022-09-05 DOI: 10.34133/2022/9863054
Hongrui Guo, Wenli Zhang, Zishu Yu, Mingyu Chen
Theoretical modeling is a popular method for quantitative analysis and performance prediction of computer systems, including cloud systems. The low-entropy cloud (i.e., low interference among workloads and low system jitter) is becoming a new trend, and a server based on the Labeled Network Stack (LNS) is a good example, gaining orders-of-magnitude performance improvement over servers based on traditional network stacks. However, it is desirable to figure out 1) where the low tail latency and low entropy of LNS mainly come from, compared with mTCP, a typical user-space network stack from academia, and the Linux network stack, the mainstream stack in industry, and 2) how much further LNS can be optimized. Therefore, we propose a queueing-theory-based analytical method that defines a bottleneck stage to simplify the quantitative analysis of tail latency. Using this method, we establish models characterizing how the processing speed changes across stages for an LNS-based server, an mTCP-based server, and a Linux-based server, taking bursty traffic as an example. Under such traffic, the processing speed of each network service stage is obtained by non-intrusive basic tests, and the slowest stage is identified as the bottleneck according to traffic and system characteristics. Our models reveal that full-datapath prioritized processing and full-path zero-copy are the primary sources of the low tail latency and low entropy of the LNS-based server, with 0.8%-24.4% error for the 99th-percentile latency. In addition, the model of the LNS-based server can give the best number of worker threads querying a database, improving concurrency by 2.1×-3.5×.
Citations: 0
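To make the bottleneck-stage idea concrete, the sketch below treats each network-service stage as an M/M/1 queue, for which the sojourn time is exponential with rate (mu - lambda), so the 99th-percentile latency is -ln(0.01) / (mu - lambda); the slowest stage then dominates the tail. The stage names, rates, and the M/M/1 assumption are illustrative choices of ours, not the paper's model.

```python
# Illustrative bottleneck-stage tail-latency estimate under an M/M/1 assumption.
import math

def mm1_p99_latency(arrival_rate, service_rate):
    """99th-percentile sojourn time of an M/M/1 queue (requires service_rate > arrival_rate)."""
    assert service_rate > arrival_rate, "queue is unstable"
    return -math.log(0.01) / (service_rate - arrival_rate)

# Hypothetical per-stage service rates (requests/ms) obtained from basic tests.
stages = {"nic_rx": 2.0, "protocol": 1.2, "app_worker": 0.9, "nic_tx": 2.5}
arrival_rate = 0.6  # requests/ms of bursty traffic

bottleneck = min(stages, key=stages.get)          # slowest stage dominates tail latency
for name, mu in stages.items():
    print(f"{name:12s} p99 = {mm1_p99_latency(arrival_rate, mu):6.2f} ms")
print(f"bottleneck stage: {bottleneck}")
```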
Fractal Parallel Computing
IF 4.3 Q1 Computer Science Pub Date: 2022-09-05 DOI: 10.34133/2022/9797623
Yongwei Zhao, Yunji Chen, Zhiwei Xu
As machine learning (ML) becomes the prominent technology for many emerging problems, dedicated ML computers are being developed at a variety of scales, from clouds to edge devices. However, the heterogeneous, parallel, and multilayer characteristics of conventional ML computers concentrate the cost of development on the software stack, namely ML frameworks, compute libraries, and compilers, which limits the productivity of new ML computers. The fractal von Neumann architecture (FvNA) is proposed to address the programming productivity issue for ML computers. FvNA is scale-invariant to program, making the development of a family of scaled ML computers as easy as that of a single node. In this study, we generalize FvNA to the field of general-purpose parallel computing. We model FvNA as an abstract parallel computer, referred to as the fractal parallel machine (FPM), to demonstrate several representative general-purpose tasks that are efficiently programmable. FPM limits the entropy of programming by applying constraints on the control pattern of parallel computing systems. However, FPM is still general-purpose and cost-optimal. We establish some preliminary results showing that FPM is as powerful as many fundamental parallel computing models, such as BSP and the alternating Turing machine. Therefore, FvNA is also generally applicable to various fields other than ML.
Citations: 0
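The following sketch is a toy illustration of the scale-invariance idea: one program that a node reuses unchanged at every scale, either doing the work itself at leaf granularity or delegating identical copies of itself to child nodes. The fanout, leaf size, and the reduction task are assumptions for illustration only, not the FvNA instruction set.

```python
# Illustrative fractal-style reduction: the same code serves every scale of "node".
def fractal_sum(data, fanout=4, leaf_size=8):
    """Scale-invariant reduction: a node either computes directly or delegates to identical children."""
    if len(data) <= leaf_size:                 # leaf node: do the work directly
        return sum(data)
    chunk = (len(data) + fanout - 1) // fanout
    children = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Each child runs the identical program; only the problem size shrinks.
    return sum(fractal_sum(c, fanout, leaf_size) for c in children)

print(fractal_sum(list(range(1000))))  # 499500, regardless of fanout or leaf_size
```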
A Labeled Architecture for Low-Entropy Clouds: Theory, Practice, and Lessons
IF 4.3 Q1 Computer Science Pub Date: 2022-09-01 DOI: 10.34133/2022/9795476
Chuanqi Zhang, Sa Wang, Zihao Yu, Huizhe Wang, Yinan Xu, Luoshan Cai, Dan Tang, Ninghui Sun, Yungang Bao
Resource efficiency and quality of service (QoS) have both been long-pursued goals for cloud providers over the last decade. However, hardly any cloud platform achieves both perfectly even today. Improving resource efficiency or resource utilization often causes complicated resource contention between colocated cloud applications on different resources, spanning from the underlying hardware to the software stack, leading to unexpected performance degradation. The low-entropy cloud proposes a new software-hardware codesigned technology stack to holistically curb performance interference from the bottom up and obtain both high resource efficiency and high quality of application performance. In this paper, we introduce a new computer architecture for the low-entropy cloud stack, called the labeled von Neumann architecture (LvNA), which incorporates a set of label-powered control mechanisms that enable shared components and resources on chip to differentiate, isolate, and prioritize user-defined application requests when competing for hardware resources. With these mechanisms, LvNA is able to protect the performance of certain applications, such as latency-critical applications, from disorderly resource contention while improving resource utilization. We further build and tape out Beihai, a 1.2 GHz 8-core RISC-V processor based on the LvNA architecture. The evaluation results show that Beihai can drastically reduce the performance degradation caused by memory bandwidth contention, from 82.8% to 0.4%. While raising CPU utilization above 70%, Beihai reduces the 99th-percentile tail latency of Redis from 115 ms to 18.1 ms. Furthermore, Beihai can realize hardware virtualization, booting up two unmodified virtual machines concurrently without the intervention of any software hypervisor.
Citations: 0
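The sketch below illustrates, in simplified form, what label-powered prioritization can mean for a shared resource: requests tagged with a latency-critical label are served before best-effort ones when service slots are contended. The label names, slot model, and scheduling policy are assumptions for illustration and do not describe the Beihai hardware.

```python
# Illustrative label-aware arbiter for a contended shared resource (not the LvNA design).
import heapq
from collections import namedtuple

Request = namedtuple("Request", "arrival label")
PRIORITY = {"latency_critical": 0, "best_effort": 1}   # lower value = served first

def arbitrate(requests, slots_per_cycle=2):
    """Serve up to slots_per_cycle queued requests per cycle, ordered by label priority then arrival."""
    queue = [(PRIORITY[r.label], r.arrival, r) for r in requests]
    heapq.heapify(queue)
    schedule, cycle = [], 0
    while queue:
        for _ in range(min(slots_per_cycle, len(queue))):
            _, _, req = heapq.heappop(queue)
            schedule.append((cycle, req.label))
        cycle += 1
    return schedule

reqs = [Request(0, "best_effort"), Request(1, "latency_critical"),
        Request(2, "best_effort"), Request(3, "latency_critical")]
for cycle, label in arbitrate(reqs):
    print(f"cycle {cycle}: served {label}")
```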
Global-to-Local Design for Self-Organized Task Allocation in Swarms
IF 4.3 Q1 Computer Science Pub Date: 2022-08-03 DOI: 10.34133/2022/9761694
Gabriele Valentini, Heiko Hamann, M. Dorigo
Programming robot swarms is hard because system requirements are formulated at the swarm level (i.e., globally) while control rules need to be coded at the individual robot level (i.e., locally). Connecting the global to the local level, or vice versa, through mathematical modeling to predict system behavior is generally considered the grand challenge of swarm robotics. We propose to approach this problem by programming directly at the swarm level. Key to this solution is the use of heterogeneous swarms that combine appropriate subsets of agents whose hard-coded behaviors have known global effects. Our novel global-to-local design methodology allows us to compose heterogeneous swarms for the example application of self-organized task allocation. We define a large but finite number of local agent controllers and focus on the global dynamics of behaviorally heterogeneous swarms. The user inputs the desired global task allocation for the swarm as a stationary probability distribution of agents over tasks. We provide a generic method that implements the desired swarm behavior by mathematically deriving appropriate compositions of heterogeneous swarms that approximate these global user requirements. We investigate our methodology over several task allocation scenarios and validate our results with multiagent simulations. The proposed global-to-local design methodology is not limited to task allocation problems and can pave the way to formal approaches for designing other swarm behaviors.
Citations: 1
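One simple way to read the composition step is as a constrained fit: given the known stationary task distribution of each hard-coded agent type, choose mixture fractions so the swarm-level distribution approximates the user's target. The sketch below does this with nonnegative least squares; the agent-type distributions, the target, and the NNLS formulation are illustrative assumptions, not the paper's derivation.

```python
# Illustrative swarm composition by nonnegative least squares (not the paper's method).
import numpy as np
from scipy.optimize import nnls

# Columns: stationary distribution over 3 tasks for each of 4 agent behavior types (made up).
agent_types = np.array([[0.7, 0.1, 0.2, 0.4],
                        [0.2, 0.8, 0.2, 0.3],
                        [0.1, 0.1, 0.6, 0.3]])
target = np.array([0.5, 0.3, 0.2])           # desired global allocation over tasks

weights, _ = nnls(agent_types, target)       # nonnegative mixture of agent types
weights /= weights.sum()                     # normalize to swarm fractions
achieved = agent_types @ weights
print("swarm composition:", np.round(weights, 3))
print("achieved allocation:", np.round(achieved, 3), "target:", target)
```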
What Is Missing from Contemporary AI? The World
IF 4.3 Q1 Computer Science Pub Date: 2022-07-25 DOI: 10.34133/2022/9847630
In the past three years, we have witnessed the emergence of a new class of artificial intelligence systems: so-called foundation models, which are characterised by very large machine learning models (with tens or hundreds of billions of parameters) trained on extremely large and broad data sets. Foundation models, it is argued, have competence in a broad range of tasks, which can be specialised for specific applications. Large language models, of which GPT-3 is perhaps the best known, are the most prominent example of current foundation models. While foundation models have demonstrated impressive capabilities in certain tasks—natural language generation being the most obvious example—I argue that, because they are inherently disembodied, they are limited with respect to what they have learned and what they can do. Foundation models are likely to be very useful in many applications, but they are not the end of the road in artificial intelligence.
Citations: 3
Intelligent Computing – A Flagship Journal towards the New Frontier of Computing and Intelligence
IF 4.3 Q1 Computer Science Pub Date: 2022-07-23 DOI: 10.34133/2022/9801324
Shiqiang Zhu, Ninghui Sun
{"title":"Intelligent Computing – A Flagship Journal towards the New Frontier of Computing and Intelligence","authors":"Shiqiang Zhu, Ninghui Sun","doi":"10.34133/2022/9801324","DOIUrl":"https://doi.org/10.34133/2022/9801324","url":null,"abstract":"","PeriodicalId":45291,"journal":{"name":"International Journal of Intelligent Computing and Cybernetics","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2022-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83111697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Extrapolated Speckle-Correlation Imaging
IF 4.3 Q1 Computer Science Pub Date: 2022-06-19 DOI: 10.34133/2022/9787098
Yuto Endo, J. Tanida, M. Naruse, R. Horisaki
Imaging through scattering media is a longstanding issue in a wide range of applications, including biomedicine, security, and astronomy. Speckle-correlation imaging is promising for noninvasively seeing through scattering media by assuming shift invariance of the scattering process, known as the memory effect. However, the memory effect is known to be severely limited when the medium is thick. Under such a scattering condition, speckle-correlation imaging is not practical because the correlation of the speckle decays, reducing the field of view. To address this problem, we present a method for expanding the field of view of single-shot speckle-correlation imaging by extrapolating the correlation under a limited memory effect. We derive the imaging model under this scattering condition and its inversion for reconstructing the object. Our method simultaneously estimates both the object and the decay of the speckle correlation based on the gradient descent method. We numerically and experimentally demonstrate the proposed method by reconstructing point sources behind scattering media with a limited memory effect. In the demonstrations, our speckle-correlation imaging method with a minimal lensless optical setup realized a larger field of view compared with the conventional method. This study will make techniques for imaging through scattering media more practical in various fields.
Citations: 0
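For background on the mechanism the paper builds on: within the memory effect, the autocorrelation of a captured speckle image approximates the autocorrelation of the hidden object, which can be computed with the Wiener-Khinchin theorem as below. The random 128x128 frame stands in for a real measurement; the extrapolation of the decaying correlation, the paper's actual contribution, is not shown.

```python
# Illustrative speckle autocorrelation via the Wiener-Khinchin theorem.
import numpy as np

def autocorrelation(img):
    """2-D autocorrelation via FFT, with the mean removed and the zero-lag peak centered."""
    x = img - img.mean()
    spectrum = np.abs(np.fft.fft2(x)) ** 2        # power spectrum
    ac = np.fft.ifft2(spectrum).real              # Wiener-Khinchin: IFFT of the power spectrum
    return np.fft.fftshift(ac)                    # put the zero-lag peak at the center

speckle = np.random.rand(128, 128)                # stand-in for a measured speckle frame
ac = autocorrelation(speckle)
peak = np.unravel_index(np.argmax(ac), ac.shape)
print("zero-lag peak at", peak)                   # (64, 64) for a 128x128 frame
```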
Robust Phase Retrieval with Complexity-Guidance for Coherent X-Ray Imaging
IF 4.3 Q1 Computer Science Pub Date: 2022-05-09 DOI: 10.34133/2022/9819716
Mansi Butola, Sunaina Rajora, K. Khare
Reconstruction of a stable and reliable solution from noisy and incomplete Fourier intensity data is a challenging problem for iterative phase retrieval algorithms. The typical methodology employed in the coherent X-ray imaging (CXI) literature involves thousands of iterations of well-known phase retrieval algorithms, e.g., hybrid input-output (HIO) or relaxed averaged alternating reflections (RAAR), concluded with a smaller number of error reduction (ER) iterations. Since a single run of this methodology may not provide a reliable solution, hundreds of trial solutions are first obtained by initializing the phase retrieval algorithm with independent random guesses. The resulting trial solutions are then averaged with appropriate phase adjustment, and the resolution of the averaged reconstruction is assessed by plotting the phase retrieval transfer function (PRTF). In this work, we examine this commonly used RAAR-ER methodology from the perspective of the complexity parameter we introduced in recent years. We observe that a single run of the RAAR-ER algorithm provides a solution with undesirable grainy artifacts that persist to some extent even after averaging the multiple trial solutions. The grainy features are spurious in the sense that they are smaller than the resolution predicted by the PRTF curve. This inconsistency can be addressed by a novel methodology that we refer to as complexity-guided RAAR (CG-RAAR). The methodology is demonstrated with simulations and experimental data sets from the CXIDB database. In addition to providing a consistent solution, CG-RAAR is also observed to require a reduced number of independent trials for averaging.
Citations: 0
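For orientation, the sketch below runs plain error reduction (ER), the simplest member of the alternating-projection family that HIO, RAAR, and CG-RAAR refine: it alternates between enforcing the measured Fourier magnitude and enforcing a known support with nonnegativity. The object, support shape, and iteration count are made-up toy choices, and noise and global ambiguities are ignored.

```python
# Illustrative error-reduction (ER) phase retrieval on a synthetic object (not CG-RAAR).
import numpy as np

rng = np.random.default_rng(0)
n = 64
obj = np.zeros((n, n))
obj[20:40, 20:40] = rng.random((20, 20))           # hypothetical nonnegative object
obj[20:30, 40:50] = rng.random((10, 10))           # L-shaped support reduces twin-image ambiguity
support = obj > 0                                  # assume the support is known
measured_mag = np.abs(np.fft.fft2(obj))            # noiseless Fourier intensity data

x = rng.random((n, n))                             # random initial guess
for _ in range(500):
    F = np.fft.fft2(x)
    F = measured_mag * np.exp(1j * np.angle(F))    # Fourier-magnitude constraint
    y = np.fft.ifft2(F).real
    x = np.where(support, np.clip(y, 0, None), 0)  # support + nonnegativity constraint

err = np.linalg.norm(x - obj) / np.linalg.norm(obj)
print(f"relative reconstruction error: {err:.3f}")
```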