Gegelati: Lightweight Artificial Intelligence through Generic and Evolvable Tangled Program Graphs

K. Desnos, Nicolas Sourbier, Pierre-Yves Raumer, Olivier Gesny, M. Pelcat
{"title":"Gegelati:通过通用和进化的纠结程序图的轻量级人工智能","authors":"K. Desnos, Nicolas Sourbier, Pierre-Yves Raumer, Olivier Gesny, M. Pelcat","doi":"10.1145/3441110.3441575","DOIUrl":null,"url":null,"abstract":"Tangled Program Graph (TPG) is a reinforcement learning technique based on genetic programming concepts. On state-of-the-art learning environments, TPGs have been shown to offer comparable competence with Deep Neural Networks (DNNs), for a fraction of their computational and storage cost. This lightness of TPGs, both for training and inference, makes them an interesting model to implement Artificial Intelligences (AIs) on embedded systems with limited computational and storage resources. In this paper, we introduce the Gegelati library for TPGs. Besides introducing the general concepts and features of the library, two main contributions are detailed in the paper: 1/ The parallelization of the deterministic training process of TPGs, for supporting heterogeneous Multiprocessor Systems-on-Chipss (MPSoCss). 2/ The support for customizable instruction sets and data types within the genetically evolved programs of the TPG model. The scalability of the parallel training process is demonstrated through experiments on architectures ranging from a high-end 24-core processor to a low-power heterogeneous MPSoCs. The impact of customizable instructions on the outcome of a training process is demonstrated on a state-of-the-art reinforcement learning environment.","PeriodicalId":398729,"journal":{"name":"Workshop on Design and Architectures for Signal and Image Processing (14th edition)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Gegelati: Lightweight Artificial Intelligence through Generic and Evolvable Tangled Program Graphs\",\"authors\":\"K. Desnos, Nicolas Sourbier, Pierre-Yves Raumer, Olivier Gesny, M. Pelcat\",\"doi\":\"10.1145/3441110.3441575\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Tangled Program Graph (TPG) is a reinforcement learning technique based on genetic programming concepts. On state-of-the-art learning environments, TPGs have been shown to offer comparable competence with Deep Neural Networks (DNNs), for a fraction of their computational and storage cost. This lightness of TPGs, both for training and inference, makes them an interesting model to implement Artificial Intelligences (AIs) on embedded systems with limited computational and storage resources. In this paper, we introduce the Gegelati library for TPGs. Besides introducing the general concepts and features of the library, two main contributions are detailed in the paper: 1/ The parallelization of the deterministic training process of TPGs, for supporting heterogeneous Multiprocessor Systems-on-Chipss (MPSoCss). 2/ The support for customizable instruction sets and data types within the genetically evolved programs of the TPG model. The scalability of the parallel training process is demonstrated through experiments on architectures ranging from a high-end 24-core processor to a low-power heterogeneous MPSoCs. 
The impact of customizable instructions on the outcome of a training process is demonstrated on a state-of-the-art reinforcement learning environment.\",\"PeriodicalId\":398729,\"journal\":{\"name\":\"Workshop on Design and Architectures for Signal and Image Processing (14th edition)\",\"volume\":\"31 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Workshop on Design and Architectures for Signal and Image Processing (14th edition)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3441110.3441575\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Workshop on Design and Architectures for Signal and Image Processing (14th edition)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3441110.3441575","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

The Tangled Program Graph (TPG) is a reinforcement learning technique based on genetic programming concepts. In state-of-the-art learning environments, TPGs have been shown to offer competence comparable to that of Deep Neural Networks (DNNs), for a fraction of their computational and storage cost. This lightness of TPGs, both for training and inference, makes them an interesting model for implementing Artificial Intelligences (AIs) on embedded systems with limited computational and storage resources. In this paper, we introduce the Gegelati library for TPGs. Besides presenting the general concepts and features of the library, the paper details two main contributions: 1/ the parallelization of the deterministic training process of TPGs, to support heterogeneous Multiprocessor Systems-on-Chips (MPSoCs); 2/ the support for customizable instruction sets and data types within the genetically evolved programs of the TPG model. The scalability of the parallel training process is demonstrated through experiments on architectures ranging from a high-end 24-core processor to a low-power heterogeneous MPSoC. The impact of customizable instructions on the outcome of a training process is demonstrated in a state-of-the-art reinforcement learning environment.
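To make the idea of customizable instruction sets concrete, the following C++ sketch shows how a user-defined set of primitive instructions over a chosen data type (here, double) can drive a simple linear genetic program. It is a minimal illustration of the concept only; the names and types used (Instruction, Line, Program, run) are assumptions introduced for this example and do not reproduce Gegelati's actual API.

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Illustrative sketch (not Gegelati's API): an "instruction set" is simply a
// list of named primitives over a user-chosen data type (here, double).
struct Instruction {
    std::string name;
    std::function<double(double, double)> op;
};

// A linear program is a sequence of lines; each line selects an instruction
// and two source registers, and writes the result to a destination register.
struct Line {
    std::size_t instr, src0, src1, dst;
};
using Program = std::vector<Line>;

// Interpret a program against an instruction set and an initial register file.
double run(const Program& prog, const std::vector<Instruction>& set,
           std::vector<double> regs) {
    for (const Line& l : prog) {
        regs[l.dst] = set[l.instr].op(regs[l.src0], regs[l.src1]);
    }
    return regs[0];  // Register 0 conventionally holds the program's output.
}

int main() {
    // The instruction set is fully user-defined: adding a domain-specific
    // primitive (e.g., a trigonometric function) only requires a new entry.
    std::vector<Instruction> set = {
        {"add", [](double a, double b) { return a + b; }},
        {"mul", [](double a, double b) { return a * b; }},
        {"cos", [](double a, double /*b*/) { return std::cos(a); }},
    };

    // A hand-written program standing in for a genetically evolved one:
    // r2 = r1 * r1; r0 = cos(r2)
    Program prog = {{1, 1, 1, 2}, {2, 2, 2, 0}};

    std::cout << run(prog, set, {0.0, 0.5, 0.0}) << "\n";  // cos(0.25)
    return 0;
}
```

Running the sketch prints cos(0.5 * 0.5) ≈ 0.969; in a genetic-programming setting, the Program would be produced by mutation and selection rather than written by hand.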
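Regarding the parallelized deterministic training, one way to keep results reproducible regardless of thread scheduling is to derive each candidate's random seed from the generation seed and the candidate's index, so that fitness values never depend on execution order. The sketch below illustrates that idea under these assumptions; it is not taken from Gegelati's implementation, and evaluateCandidate merely stands in for running episodes of a learning environment with a TPG root.

```cpp
#include <cstddef>
#include <cstdint>
#include <future>
#include <iostream>
#include <random>
#include <vector>

// Illustrative sketch of deterministic parallel evaluation (an assumption
// about the general approach, not Gegelati's implementation). Each candidate
// (e.g., a TPG root) gets its own RNG seeded from the generation seed and its
// own index, so its score is independent of thread scheduling.
double evaluateCandidate(std::size_t index, std::uint64_t generationSeed) {
    std::mt19937_64 rng(generationSeed ^ (index * 0x9E3779B97F4A7C15ULL));
    std::uniform_real_distribution<double> env(0.0, 1.0);
    // Stand-in for running several episodes of a learning environment.
    double score = 0.0;
    for (int episode = 0; episode < 8; ++episode) {
        score += env(rng);
    }
    return score / 8.0;
}

int main() {
    const std::size_t numCandidates = 16;
    const std::uint64_t generationSeed = 42;

    // Launch one asynchronous task per candidate; results are gathered in
    // candidate order, so the scores (and thus selection) are deterministic.
    std::vector<std::future<double>> futures;
    for (std::size_t i = 0; i < numCandidates; ++i) {
        futures.push_back(std::async(std::launch::async, evaluateCandidate,
                                     i, generationSeed));
    }
    for (std::size_t i = 0; i < numCandidates; ++i) {
        std::cout << "candidate " << i << ": " << futures[i].get() << "\n";
    }
    return 0;
}
```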