
Latest publications from the 2020 31st Irish Signals and Systems Conference (ISSC)

Genetic Algorithmic Parameter Optimisation of a Recurrent Spiking Neural Network Model
Pub Date : 2020-03-30 DOI: 10.1109/ISSC49989.2020.9180185
Ifeatu Ezenwe, Alok Joshi, KongFatt Wong-Lin
Neural networks are complex algorithms that loosely model the behaviour of the human brain. They play a significant role in computational neuroscience and artificial intelligence. The next generation of neural network models is based on the spike timing activity of neurons: spiking neural networks (SNNs). However, model parameters in SNNs are difficult to search and optimise. Previous studies using genetic algorithm (GA) optimisation of SNNs focused mainly on simple, feedforward, or oscillatory networks; little work has been done on optimising cortex-like recurrent SNNs. In this work, we investigated the use of GAs to search for optimal parameters in recurrent SNNs that reach targeted neuronal population firing rates, e.g. as in experimental observations. We considered a cortical-column-based SNN comprising 1000 Izhikevich spiking neurons, chosen for computational efficiency and biological realism. The model parameters explored were the neuronal bias input currents. First, we found, for this particular SNN, the optimal parameter values for targeted population-averaged firing activities, with the algorithm converging within ~100 generations. We then showed that the optimal GA population size was ~16-20, while the crossover rate that returned the best fitness value was ~0.95. Overall, we have successfully demonstrated the feasibility of implementing a GA to optimise model parameters in a recurrent cortical SNN.
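As a rough illustration of the approach summarised above, the sketch below runs a simple genetic algorithm over a single bias input current for a small population of Izhikevich neurons, scoring candidates by how closely the population-averaged firing rate matches a target. The network size, noise level, target rate and single-parameter search space are illustrative assumptions, far simpler than the paper's 1000-neuron cortical-column model.

```python
# Minimal sketch (not the authors' code): GA search for a neuronal bias input
# current so that the population-averaged firing rate of a small Izhikevich
# network matches a target value.
import numpy as np

rng = np.random.default_rng(0)

def simulate_firing_rate(bias, n_neurons=100, t_ms=1000, dt=1.0):
    """Simulate regular-spiking Izhikevich neurons driven by `bias` plus noise
    and return the population-averaged firing rate in Hz."""
    a, b, c, d = 0.02, 0.2, -65.0, 8.0           # regular-spiking parameters
    v = np.full(n_neurons, -65.0)                 # membrane potential (mV)
    u = b * v                                     # recovery variable
    spikes = 0
    for _ in range(int(t_ms / dt)):
        I = bias + 2.0 * rng.standard_normal(n_neurons)   # bias input + noise
        fired = v >= 30.0
        spikes += fired.sum()
        v[fired] = c
        u[fired] += d
        # Izhikevich (2003) update equations
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return 1000.0 * spikes / (n_neurons * t_ms)   # Hz

def fitness(bias, target_rate=10.0):
    """Negative absolute error between simulated and target population rate."""
    return -abs(simulate_firing_rate(bias) - target_rate)

def genetic_search(pop_size=16, generations=40, crossover_rate=0.95,
                   mutation_sd=0.5, bias_range=(0.0, 20.0)):
    pop = rng.uniform(*bias_range, size=pop_size)
    for gen in range(generations):
        fit = np.array([fitness(b) for b in pop])
        order = np.argsort(fit)[::-1]
        pop, fit = pop[order], fit[order]
        parents = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.choice(parents, 2)
            child = 0.5 * (p1 + p2) if rng.random() < crossover_rate else p1
            child += mutation_sd * rng.standard_normal()    # Gaussian mutation
            children.append(np.clip(child, *bias_range))
        pop = np.concatenate([parents, children])
        print(f"gen {gen:02d}  best bias {pop[0]:.2f}  fitness {fit[0]:.2f}")
    return pop[0]

if __name__ == "__main__":
    best_bias = genetic_search()
    print("best bias current:", best_bias,
          "rate (Hz):", simulate_firing_rate(best_bias))
```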
Citations: 1
Re-Training StyleGAN - A First Step Towards Building Large, Scalable Synthetic Facial Datasets
Pub Date : 2020-03-24 DOI: 10.1109/ISSC49989.2020.9180189
Viktor Varkarakis, S. Bazrafkan, P. Corcoran
StyleGAN is a state-of-the-art generative adversarial network architecture that generates random 2D high-quality synthetic facial data samples. In this paper we recap the StyleGAN architecture and training methodology and present our experiences of retraining it on a number of alternative public datasets. Practical issues and challenges arising from the retraining process are discussed. Test and validation results are presented, and a comparative analysis of several different re-trained StyleGAN weightings is provided. The role of this tool in building large, scalable datasets of synthetic facial data is also discussed.
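To give a feel for what generator retraining involves, the following is a deliberately simplified sketch, not the paper's pipeline: a pre-trained generator/discriminator pair (placeholder modules and checkpoints) is fine-tuned on a new image folder with the non-saturating logistic loss used in StyleGAN training. Retraining StyleGAN itself is normally done with NVIDIA's released training code; everything named below is an illustrative assumption.

```python
# Simplified GAN fine-tuning sketch; `generator` and `discriminator` are assumed
# to be pre-trained torch.nn.Module instances loaded by the caller.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def fine_tune(generator, discriminator, data_dir, z_dim=512,
              epochs=5, batch_size=16, lr=2e-4, device="cuda"):
    tfm = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor(),
        transforms.Normalize([0.5] * 3, [0.5] * 3),   # map images to [-1, 1]
    ])
    loader = DataLoader(datasets.ImageFolder(data_dir, tfm),
                        batch_size=batch_size, shuffle=True, drop_last=True)

    g_opt = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.0, 0.99))
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(0.0, 0.99))
    generator.to(device).train()
    discriminator.to(device).train()

    for epoch in range(epochs):
        for real, _ in loader:
            real = real.to(device)

            # Discriminator step: push real logits up, fake logits down.
            z = torch.randn(real.size(0), z_dim, device=device)
            fake = generator(z).detach()
            d_loss = (F.softplus(-discriminator(real)).mean()
                      + F.softplus(discriminator(fake)).mean())
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()

            # Generator step: non-saturating loss on fresh fakes.
            fake = generator(torch.randn(real.size(0), z_dim, device=device))
            g_loss = F.softplus(-discriminator(fake)).mean()
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        print(f"epoch {epoch}: d_loss {d_loss.item():.3f}  g_loss {g_loss.item():.3f}")
```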
Citations: 3
High-Accuracy Facial Depth Models derived from 3D Synthetic Data
Pub Date : 2020-03-13 DOI: 10.1109/ISSC49989.2020.9180166
Faisal Khan, Shubhajit Basak, Hossein Javidnia, M. Schukat, P. Corcoran
In this paper, we explore how synthetically generated 3D face models can be used to construct a high-accuracy ground truth for depth. This allows us to train Convolutional Neural Networks (CNNs) to solve facial depth estimation problems. These models provide sophisticated control over image variations including pose, illumination, facial expression and camera position. 2D training samples, typically in RGB format, can be rendered from these models together with depth information. Using synthetic facial animations, dynamic facial expression or facial action data can be rendered for a sequence of image frames, together with ground-truth depth and additional metadata such as head pose, light direction, etc. The synthetic data are used to train a CNN-based facial depth estimation system, which is validated on both synthetic and real images. Potential fields of application include 3D reconstruction, driver monitoring systems, robotic vision systems, and advanced scene understanding.
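As an illustration of the training stage described above (not the paper's actual network or data loader), the sketch below fits a tiny encoder-decoder CNN to regress a per-pixel depth map from an RGB face image with an L1 loss. The SyntheticFaceDepth dataset is a random-tensor stand-in for the rendered (RGB, ground-truth depth) pairs.

```python
# Illustrative facial depth regression sketch: RGB image in, depth map out.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class SyntheticFaceDepth(Dataset):
    """Placeholder dataset: random 3x128x128 'RGB' images and 1x128x128 depth."""
    def __len__(self):
        return 256
    def __getitem__(self, idx):
        return torch.rand(3, 128, 128), torch.rand(1, 128, 128)

class DepthNet(nn.Module):
    """Tiny encoder-decoder that maps RGB to a single-channel depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(epochs=3, device="cpu"):
    model = DepthNet().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loader = DataLoader(SyntheticFaceDepth(), batch_size=16, shuffle=True)
    loss_fn = nn.L1Loss()                     # mean absolute depth error
    for epoch in range(epochs):
        for rgb, depth in loader:
            rgb, depth = rgb.to(device), depth.to(device)
            loss = loss_fn(model(rgb), depth)
            opt.zero_grad(); loss.backward(); opt.step()
        print(f"epoch {epoch}: L1 depth loss {loss.item():.4f}")

if __name__ == "__main__":
    train()
```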
Citations: 4
Chairs Address
Pub Date : 2013-10-01 DOI: 10.1109/3dtv.2013.6676630
J. Watson
I am delighted that the first visit of 3DTV-CON to the United Kingdom is to the city of Aberdeen, Scotland, the heart of Europe's oil and gas industry. 3D technologies are beginning to play a crucial role in the energy industry, both in its traditional oil and gas activities and in the blossoming renewable energy sector.
Citations: 0