
arXiv - CS - Neural and Evolutionary Computing: Latest Publications

Federated Fairness Analytics: Quantifying Fairness in Federated Learning
Pub Date: 2024-08-15 | DOI: arxiv-2408.08214
Oscar Dilley, Juan Marcelo Parra-Ullauri, Rasheed Hussain, Dimitra Simeonidou
Federated Learning (FL) is a privacy-enhancing technology for distributed ML. By training models locally and aggregating updates, a federation learns together while bypassing centralised data collection. FL is increasingly popular in healthcare, finance and personal computing. However, it inherits fairness challenges from classical ML and introduces new ones, resulting from differences in data quality, client participation, communication constraints, aggregation methods and underlying hardware. Fairness remains an unresolved issue in FL, and the community has identified an absence of succinct definitions and metrics to quantify fairness; to address this, we propose Federated Fairness Analytics, a methodology for measuring fairness. Our definition of fairness comprises four notions with novel, corresponding metrics. They are symptomatically defined and leverage techniques originating from XAI, cooperative game theory and network engineering. We tested a range of experimental settings, varying the FL approach, ML task and data settings. The results show that statistical heterogeneity and client participation affect fairness, and that fairness-conscious approaches such as Ditto and q-FedAvg marginally improve fairness-performance trade-offs. Using our techniques, FL practitioners can uncover previously unobtainable insights into their system's fairness, at differing levels of granularity, in order to address fairness challenges in FL. We have open-sourced our work at: https://github.com/oscardilley/federated-fairness.
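The core FL loop the abstract describes (training locally, then aggregating updates) can be sketched with a minimal FedAvg-style weighted average. The function name and the weighting-by-sample-count convention are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fedavg_aggregate(client_updates, client_sizes):
    """Weighted average of client model updates (FedAvg-style sketch).

    client_updates: list of 1-D parameter arrays, one per client.
    client_sizes: number of local samples per client, used as weights.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_updates)           # (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Two clients with unequal data: the larger client dominates the average,
# which is one source of the client-participation fairness issues discussed.
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
global_update = fedavg_aggregate(updates, client_sizes=[30, 10])
```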
Citations: 0
Universality of Real Minimal Complexity Reservoir
Pub Date: 2024-08-15 | DOI: arxiv-2408.08071
Robert Simon Fong, Boyu Li, Peter Tiňo
Reservoir Computing (RC) models, a subclass of recurrent neural networks, are distinguished by their fixed, non-trainable input layer and dynamically coupled reservoir, with only the static readout layer being trained. This design circumvents the issues associated with backpropagating error signals through time, thereby enhancing both stability and training efficiency. RC models have been successfully applied across a broad range of application domains. Crucially, they have been demonstrated to be universal approximators of time-invariant dynamic filters with fading memory, under various settings of approximation norms and input driving sources.

Simple Cycle Reservoirs (SCR) represent a specialized class of RC models with a highly constrained reservoir architecture, characterized by uniform ring connectivity and binary input-to-reservoir weights with an aperiodic sign pattern. For linear reservoirs, given the reservoir size, the reservoir construction has only one degree of freedom: the reservoir cycle weight. Such architectures are particularly amenable to hardware implementations without significant performance degradation in many practical tasks. In this study we endow these observations with solid theoretical foundations by proving that SCRs operating in the real domain are universal approximators of time-invariant dynamic filters with fading memory. Our results supplement recent research showing that SCRs in the complex domain can approximate, to arbitrary precision, any unrestricted linear reservoir with a non-linear readout. We furthermore introduce a novel method to drastically reduce the number of SCR units, making such highly constrained architectures natural candidates for low-complexity hardware implementations. Our findings are supported by empirical studies on real-world time series datasets.
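The SCR construction described above (uniform ring connectivity governed by a single cycle weight, plus a binary +1/-1 input sign pattern) can be sketched as follows. The helper names and the tanh echo-state update are conventional assumptions, not the paper's exact formulation:

```python
import numpy as np

def make_scr(n, cycle_weight, sign_pattern):
    """Build a Simple Cycle Reservoir: a ring coupling matrix with one
    shared weight, and a binary +/-1 input vector with the given signs."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = cycle_weight      # uniform ring connectivity
    w_in = np.array(sign_pattern, dtype=float)  # entries in {+1, -1}
    return W, w_in

def scr_step(W, w_in, x, u, input_scale=0.5):
    # Conventional echo-state update with a tanh nonlinearity.
    return np.tanh(W @ x + input_scale * w_in * u)

W, w_in = make_scr(4, cycle_weight=0.9, sign_pattern=[1, -1, -1, 1])
x = scr_step(W, w_in, np.zeros(4), u=1.0)
```

Note the single degree of freedom: every nonzero entry of `W` is the same cycle weight.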
Citations: 0
Advancing Spatio-Temporal Processing in Spiking Neural Networks through Adaptation
Pub Date: 2024-08-14 | DOI: arxiv-2408.07517
Maximilian Baronig, Romain Ferrand, Silvester Sabathiel, Robert Legenstein
Efficient implementations of spiking neural networks on neuromorphic hardware promise orders of magnitude less power consumption than their non-spiking counterparts. The standard neuron model for spike-based computation on such neuromorphic systems has long been the leaky integrate-and-fire (LIF) neuron. As a promising advancement, a computationally light augmentation of the LIF neuron model with an adaptation mechanism has recently surged in popularity, driven by demonstrations of its superior performance on spatio-temporal processing tasks. The root of the superiority of these so-called adaptive LIF neurons, however, is not well understood. In this article, we thoroughly analyze the dynamical, computational, and learning properties of adaptive LIF neurons and networks thereof. We find that the frequently observed stability problems during training of such networks can be overcome by applying an alternative discretization method that results in provably better stability properties than the commonly used Euler-forward method. With this discretization, we achieved a new state-of-the-art performance on common event-based benchmark datasets. We also show that the superiority of networks of adaptive LIF neurons extends to the prediction and generation of complex time series. Our further analysis of the computational properties of networks of adaptive LIF neurons shows that they are particularly well suited to exploit the spatio-temporal structure of input sequences. Furthermore, these networks are surprisingly robust to shifts of the mean input strength and input spike rate, even when these shifts were not observed during training. As a consequence, high-performance networks can be obtained without any normalization techniques such as batch normalization or batch normalization through time.
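A minimal sketch of an adaptive LIF neuron under the commonly used Euler-forward discretization the abstract contrasts against (the paper's alternative discretization is not reproduced here; all parameter values and names are illustrative):

```python
def adlif_step(v, a, i_in, dt=1.0, tau_v=20.0, tau_a=100.0,
               beta=1.0, v_th=1.0):
    """One Euler-forward step of an adaptive LIF neuron (sketch).

    v: membrane potential, a: adaptation variable, i_in: input current.
    The adaptation current beta * a is subtracted from the drive, and
    a is incremented whenever the neuron spikes, slowing the firing rate.
    """
    v = v + (dt / tau_v) * (-v + i_in - beta * a)
    spike = float(v >= v_th)
    v = v * (1.0 - spike)                  # reset to 0 on spike
    a = a + (dt / tau_a) * (-a) + spike    # leaky decay + spike-triggered jump
    return v, a, spike

# Constant suprathreshold drive: the neuron spikes, then adaptation
# builds up and stretches the inter-spike intervals.
v, a = 0.0, 0.0
spikes = []
for _ in range(100):
    v, a, s = adlif_step(v, a, i_in=1.5)
    spikes.append(s)
```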
Citations: 0
Surrogate-Assisted Search with Competitive Knowledge Transfer for Expensive Optimization
Pub Date: 2024-08-13 | DOI: arxiv-2408.07176
Xiaoming Xue, Yao Hu, Liang Feng, Kai Zhang, Linqi Song, Kay Chen Tan
Expensive optimization problems (EOPs) have attracted increasing research attention over the decades due to their ubiquity in a variety of practical applications. Despite the many sophisticated surrogate-assisted evolutionary algorithms (SAEAs) that have been developed for solving such problems, most of them lack the ability to transfer knowledge from previously solved tasks and always start their search from scratch, leaving them troubled by the notorious cold-start issue. The few preliminary studies that integrate transfer learning into SAEAs still face issues such as defective similarity quantification, which is prone to underestimating promising knowledge, and surrogate dependency, which makes the transfer methods incoherent with the state of the art in SAEAs. In light of the above, a plug-and-play competitive knowledge transfer method is proposed in this paper to boost various SAEAs. Specifically, both the optimized solutions from the source tasks and the promising solutions acquired by the target surrogate are treated as task-solving knowledge, enabling them to compete with each other to elect the winner for expensive evaluation, thus boosting the search speed on the target task. Moreover, the lower bound of the convergence gain brought by the knowledge competition is mathematically analyzed, which is expected to strengthen the theoretical foundation of sequential transfer optimization. Experimental studies conducted on a series of benchmark problems and a practical application from the petroleum industry verify the efficacy of the proposed method. The source code of the competitive knowledge transfer is available at https://github.com/XmingHsueh/SAS-CKT.
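The competition step described above can be sketched as follows: transferred source-task solutions and surrogate-proposed candidates are ranked by the target surrogate, and only the winner receives the next expensive evaluation. Function names and the toy surrogate are hypothetical, not the SAS-CKT implementation:

```python
import numpy as np

def competitive_select(source_solutions, target_candidates, surrogate):
    """Pick the single solution to spend the next expensive evaluation on:
    transferred source solutions and surrogate-proposed target candidates
    compete on predicted target fitness (lower is better)."""
    pool = list(source_solutions) + list(target_candidates)
    preds = [surrogate(x) for x in pool]
    return pool[int(np.argmin(preds))]

# Toy surrogate for a minimization task centred at the origin.
surrogate = lambda x: float(np.sum(np.asarray(x) ** 2))

winner = competitive_select(
    source_solutions=[np.array([0.1, 0.1])],    # knowledge from a solved task
    target_candidates=[np.array([1.0, -1.0])],  # proposed by the target search
    surrogate=surrogate,
)
# Here the transferred solution wins, so the expensive evaluation budget
# goes to it; with poor source knowledge the target candidate would win.
```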
Citations: 0
The Potential of Combined Learning Strategies to Enhance Energy Efficiency of Spiking Neuromorphic Systems
Pub Date: 2024-08-13 | DOI: arxiv-2408.07150
Ali Shiri Sichani, Sai Kankatala
Ensuring energy-efficient design in neuromorphic computing systems necessitates a tailored architecture combined with algorithmic approaches. This manuscript focuses on enhancing brain-inspired perceptual computing machines through a novel combined learning approach for Convolutional Spiking Neural Networks (CSNNs). CSNNs present a promising alternative to traditional power-intensive and complex machine learning methods like backpropagation, offering energy-efficient spiking neuron processing inspired by the human brain. The proposed combined learning method integrates pair-based spike-timing-dependent plasticity (PSTDP) and power-law-dependent spike-timing-dependent plasticity (STDP) to adjust synaptic efficacies, enabling the utilization of stochastic elements like memristive devices to enhance energy efficiency and improve perceptual computing accuracy. By reducing learning parameters while maintaining accuracy, these systems consume less energy and have reduced area overhead, making them more suitable for hardware implementation. The research delves into neuromorphic design architectures, focusing on CSNNs to provide a general framework for energy-efficient computing hardware. Various CSNN architectures are evaluated to assess how fewer trainable parameters can maintain acceptable accuracy in perceptual computing systems, positioning them as viable candidates for neuromorphic architecture. Comparisons with previous work validate the achievements and methodology of the proposed architecture.
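A minimal sketch of the pair-based STDP rule the combined method builds on: the weight change depends on the pre/post spike-time difference, with exponentially decaying potentiation and depression windows. The time constants and amplitudes are illustrative defaults, not values from the manuscript:

```python
import math

def pair_stdp(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike-time difference
    delta_t = t_post - t_pre (ms): potentiation when the presynaptic
    spike precedes the postsynaptic one, depression otherwise."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

dw_pot = pair_stdp(5.0)    # pre before post -> positive weight change
dw_dep = pair_stdp(-5.0)   # post before pre -> negative weight change
```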
Citations: 0
Massive Dimensions Reduction and Hybridization with Meta-heuristics in Deep Learning
Pub Date: 2024-08-13 | DOI: arxiv-2408.07194
Rasa Khosrowshahli, Shahryar Rahnamayan, Beatrice Ombuki-Berman
Deep learning is mainly based on utilizing gradient-based optimization for training Deep Neural Network (DNN) models. Although robust and widely used, gradient-based optimization algorithms are prone to getting stuck in local minima. In this modern deep learning era, state-of-the-art DNN models have millions to billions of parameters, including weights and biases, making them huge-scale optimization problems in terms of search space. Tuning a huge number of parameters is a challenging task that causes vanishing/exploding gradients and overfitting; likewise, the loss functions used do not exactly represent our targeted performance metrics. A practical approach to exploring large and complex solution spaces is meta-heuristic algorithms. Since DNNs exceed thousands and millions of parameters, even robust meta-heuristic algorithms, such as Differential Evolution, struggle to efficiently explore and converge in such huge-dimensional search spaces, leading to very slow convergence and high memory demand. To tackle this curse of dimensionality, the concept of blocking was recently proposed as a technique that reduces the search-space dimensions by grouping them into blocks. In this study, we introduce Histogram-based Blocking Differential Evolution (HBDE), a novel approach that hybridizes gradient-based and gradient-free algorithms to optimize parameters. Experimental results demonstrate that HBDE can reduce the parameters in the ResNet-18 model from 11M to 3K during the training/optimizing phase, and that it outperforms baseline gradient-based and parent gradient-free DE algorithms evaluated on the CIFAR-10 and CIFAR-100 datasets, showcasing its effectiveness with reduced computational demands for the very first time.
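One way to read "histogram-based blocking" is grouping parameters into bins of similar value so the meta-heuristic searches one representative value per block instead of every weight. The sketch below is a hedged interpretation with hypothetical helper names, not the HBDE implementation:

```python
import numpy as np

def histogram_blocks(params, n_blocks):
    """Assign each parameter to a block by the histogram bin of its value,
    so the search operates on n_blocks values instead of len(params)."""
    edges = np.histogram_bin_edges(params, bins=n_blocks)
    # Interior edges only; clip keeps ids in [0, n_blocks - 1].
    block_id = np.clip(np.digitize(params, edges[1:-1]), 0, n_blocks - 1)
    return block_id

def expand(block_values, block_id):
    """Reconstruct a full parameter vector from per-block values."""
    return np.asarray(block_values)[block_id]

params = np.array([0.01, 0.02, 0.5, 0.55, 0.9])
bid = histogram_blocks(params, n_blocks=3)
# The gradient-free search now tunes 3 values rather than 5 parameters.
full = expand(np.array([0.0, 0.5, 1.0]), bid)
```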
Citations: 0
Overcoming the Limitations of Layer Synchronization in Spiking Neural Networks
Pub Date: 2024-08-09 | DOI: arxiv-2408.05098
Roel Koopman, Amirreza Yousefzadeh, Mahyar Shahsavari, Guangzhi Tang, Manolis Sifalakis
Currently, neural-network processing in machine learning applications relies on layer synchronization, whereby neurons in a layer aggregate incoming currents from all neurons in the preceding layer before evaluating their activation function. This is practiced even in artificial Spiking Neural Networks (SNNs), which are touted as consistent with neurobiology, in spite of processing in the brain being, in fact, asynchronous. A truly asynchronous system, however, would allow all neurons to evaluate their threshold concurrently and emit spikes upon receiving any presynaptic current. Omitting layer synchronization is potentially beneficial for latency and energy efficiency, but asynchronous execution of models previously trained with layer synchronization may entail a mismatch in network dynamics and performance. We present a study that documents and quantifies this problem in three datasets on our simulation environment that implements network asynchrony, and we show that models trained with layer synchronization either perform sub-optimally in the absence of the synchronization, or fail to benefit from any energy and latency reduction when such a mechanism is in place. We then "make ends meet" and address the problem with unlayered backprop, a novel backpropagation-based training method for learning models suitable for asynchronous processing. We train with it models that use different neuron-execution scheduling strategies, and we show that although their neurons are more reactive, these models consistently exhibit lower overall spike density (up to 50%), reach a correct decision faster (up to 2x) without integrating all spikes, and achieve superior accuracy (up to 10% higher). Our findings suggest that asynchronous event-based (neuromorphic) AI computing is indeed more efficient, but we need to seriously rethink how we train our SNN models to benefit from it.
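The asynchronous regime described above can be sketched with a toy event-driven simulator in which each neuron checks its threshold per incoming spike rather than once per layer, so it may fire before the rest of its inputs arrive. The data layout and names are illustrative assumptions, not the paper's simulation environment:

```python
import heapq

def run_async(events, weights, v_th=1.0):
    """Event-driven (unsynchronized) processing sketch.

    events: list of (time, source) presynaptic spike events.
    weights: dict mapping source -> (target neuron, synaptic weight).
    Each spike is integrated as it arrives; the threshold is checked
    per event, not after a whole layer has been aggregated.
    """
    v = {}        # membrane potential per target neuron
    out = []      # emitted (time, neuron) output spikes
    heap = list(events)
    heapq.heapify(heap)
    while heap:
        t, src = heapq.heappop(heap)
        tgt, w = weights[src]
        v[tgt] = v.get(tgt, 0.0) + w
        if v[tgt] >= v_th:
            out.append((t, tgt))
            v[tgt] = 0.0      # reset on spike
    return out

# Neuron "n1" crosses threshold on the second incoming spike and fires
# immediately at t=0.5, without any layer-wide synchronization barrier.
spikes = run_async([(0.0, "a"), (0.5, "b")],
                   {"a": ("n1", 0.6), "b": ("n1", 0.6)})
```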
Neuromorphic Keyword Spotting with Pulse Density Modulation MEMS Microphones
Pub Date : 2024-08-09 DOI: arxiv-2408.05156
Sidi Yaya Arnaud Yarga, Sean U. N. Wood
The Keyword Spotting (KWS) task involves continuous audio stream monitoring to detect predefined words, requiring low-energy devices for continuous processing. Neuromorphic devices effectively address this energy challenge. However, the general neuromorphic KWS pipeline, from microphone to Spiking Neural Network (SNN), entails multiple processing stages. Leveraging the popularity of Pulse Density Modulation (PDM) microphones in modern devices and their similarity to spiking neurons, we propose a direct microphone-to-SNN connection. This approach eliminates intermediate stages, notably reducing computational costs. The system achieved an accuracy of 91.54% on the Google Speech Command (GSC) dataset, surpassing the state of the art for the Spiking Speech Command (SSC) dataset, which is a bio-inspired encoded GSC. Furthermore, the observed sparsity in network activity and connectivity indicates potential for remarkably low energy consumption in a neuromorphic device implementation.
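The microphone-to-SNN similarity the abstract exploits is that a PDM microphone already emits a 1-bit density-modulated stream. A minimal sketch of the idea (function name and sample rate are our assumptions, not the paper's code): treat each '1' bit as a spike event directly, skipping the usual PDM-to-PCM decimation stage.

```python
def pdm_to_spike_times(pdm_bits, sample_rate_hz=3_072_000):
    """Map a 1-bit PDM stream directly to spike timestamps (in seconds):
    every '1' bit becomes one spike event, with no PCM decimation stage.
    The default rate is a typical PDM clock; both names are illustrative."""
    return [i / sample_rate_hz for i, bit in enumerate(pdm_bits) if bit == 1]

# Toy 8-bit stream at an artificial 8 Hz clock for readability.
bits = [1, 0, 1, 1, 0, 0, 0, 1]
print(pdm_to_spike_times(bits, sample_rate_hz=8))
# [0.0, 0.25, 0.375, 0.875]
```

Louder audio yields a denser run of '1' bits and hence a higher spike rate, which is what makes the stream directly consumable by spiking neurons.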
Sparse Spiking Neural-like Membrane Systems on Graphics Processing Units
Pub Date : 2024-08-08 DOI: arxiv-2408.04343
Javier Hernández-Tello, Miguel Ángel Martínez-del-Amor, David Orellana-Martín, Francis George C. Cabarle
The parallel simulation of Spiking Neural P systems is mainly based on a matrix representation, where the graph inherent to the neural model is encoded in an adjacency matrix. The simulation algorithm is based on a matrix-vector multiplication, an operation efficiently implemented on parallel devices. However, when the graph of a Spiking Neural P system is not fully connected, the adjacency matrix is sparse and hence many computing resources are wasted in both the time and memory domains. For this reason, two compression methods for the matrix representation were proposed in a previous work, but they were neither implemented nor parallelized in a simulator. In this paper, they are implemented and parallelized on GPUs as part of a new simulator for Spiking Neural P systems with delays. Extensive experiments are conducted on high-end GPUs (RTX 2080 and A100 80GB), and it is concluded that they outperform other solutions based on state-of-the-art GPU libraries when simulating Spiking Neural P systems.
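The waste the abstract describes comes from multiplying by stored zeros. A compressed format such as CSR (our illustration of the general idea, not the paper's specific compression methods or GPU kernels) stores only the non-zero entries of the adjacency matrix, so the matrix-vector product touches each edge of the graph exactly once:

```python
def dense_to_csr(matrix):
    """Compress a dense matrix into CSR arrays: non-zero values,
    their column indices, and per-row offsets into those arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x using only the stored non-zeros of A."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# A sparse 3x3 "adjacency" matrix: only 3 of 9 entries are non-zero,
# so the CSR product performs 3 multiplications instead of 9.
A = [[0, 2, 0],
     [0, 0, 3],
     [1, 0, 0]]
vals, cols, ptr = dense_to_csr(A)
print(csr_matvec(vals, cols, ptr, [1, 1, 1]))  # [2, 3, 1]
```

On a GPU, each row (or block of non-zeros) maps to a thread, which is why sparse formats recover both the time and memory wasted by the dense representation.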
The Ungrounded Alignment Problem
Pub Date : 2024-08-08 DOI: arxiv-2408.04242
Marc Pickett, Aakash Kumar Nain, Joseph Modayil, Llion Jones
Modern machine learning systems have demonstrated substantial abilities with methods that either embrace or ignore human-provided knowledge, but combining the benefits of both styles remains a challenge. One particular challenge involves designing learning systems that exhibit built-in responses to specific abstract stimulus patterns, yet are still plastic enough to be agnostic about the modality and exact form of their inputs. In this paper, we investigate what we call The Ungrounded Alignment Problem, which asks: How can we build in predefined knowledge in a system where we don't know how a given stimulus will be grounded? This paper examines a simplified version of the general problem, where an unsupervised learner is presented with a sequence of images for the characters in a text corpus, and this learner is later evaluated on its ability to recognize specific (possibly rare) sequential patterns. Importantly, the learner is given no labels during learning or evaluation, but must map images from an unknown font or permutation to the correct class labels. That is, at no point is our learner given labeled images, where an image vector is explicitly associated with a class label. Despite ample work on unsupervised and self-supervised loss functions, all current methods require a labeled fine-tuning phase to map the learned representations to the correct classes. Finding this mapping in the absence of labels may seem a fool's errand, but our main result resolves this seeming paradox. We show that leveraging only letter bigram frequencies is sufficient for an unsupervised learner both to reliably associate images with class labels and to reliably identify trigger words in the sequence of inputs. More generally, this method suggests an approach for encoding specific desired innate behaviour in modality-agnostic models.
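The bigram-frequency idea can be sketched in miniature (our toy reconstruction under stated assumptions, not the paper's algorithm): suppose an unsupervised clusterer has already mapped each character image to an anonymous cluster ID, leaving only the cluster-to-letter permutation unknown. Aligning the observed bigram counts of the cluster stream with the known bigram counts of a reference corpus recovers that permutation, with no labeled images.

```python
from collections import Counter
from itertools import permutations

def bigram_counts(seq):
    """Count adjacent pairs in a sequence (string or list)."""
    return Counter(zip(seq, seq[1:]))

def recover_mapping(cluster_seq, corpus, alphabet):
    """Brute-force the cluster->letter permutation whose bigram histogram
    best matches the corpus bigram histogram. Exhaustive search is only
    feasible for this toy alphabet; the principle is what matters."""
    target = bigram_counts(corpus)
    clusters = sorted(set(cluster_seq))
    best, best_score = None, None
    for perm in permutations(alphabet, len(clusters)):
        mapping = dict(zip(clusters, perm))
        observed = bigram_counts([mapping[c] for c in cluster_seq])
        # Total absolute disagreement between the two bigram histograms.
        score = sum(abs(observed[b] - target[b])
                    for b in set(observed) | set(target))
        if best_score is None or score < best_score:
            best, best_score = mapping, score
    return best

corpus = "abcabcabc"                        # known letter statistics
cluster_seq = [0, 1, 2, 0, 1, 2, 0, 1, 2]   # same text, unknown label names
print(recover_mapping(cluster_seq, corpus, "abc"))
# {0: 'a', 1: 'b', 2: 'c'}
```

No image is ever paired with a label; the sequential statistics alone pin down the grounding, which is the paradox-resolving observation in the abstract.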