
Latest publications in Integrated Computer-Aided Engineering

Deep deterministic policy gradient with constraints for gait optimisation of biped robots
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-12-15 · DOI: 10.3233/ica-230724
Xingyang Liu, Haina Rong, Ferrante Neri, Peng Yue, Gexiang Zhang
In this paper, we propose a novel Reinforcement Learning (RL) algorithm for robotic motion control, namely a constrained Deep Deterministic Policy Gradient (DDPG) deviation learning strategy that assists biped robots in walking safely and accurately. Previous research on this topic highlighted the limitations in the controller's ability to accurately track foot placement on discrete terrains and the lack of consideration for safety concerns. In this study, we address these challenges by focusing on ensuring the overall system's safety. To begin with, we tackle the inverse kinematics problem by introducing constraints to the damped least-squares method. This enhancement not only addresses singularity issues but also guarantees safe ranges for joint angles, thus ensuring the stability and reliability of the system. Based on this, we propose the adoption of the constrained DDPG method to correct controller deviations. In constrained DDPG, we incorporate a constraint layer into the Actor network, taking joint deviations as state inputs. By conducting offline training within the range of safe angles, it serves as a deviation corrector. Lastly, we validate the effectiveness of our proposed approach by conducting dynamic simulations using the CRANE biped robot. Through comprehensive assessments, including singularity analysis, constraint effectiveness evaluation, and walking experiments on discrete terrains, we demonstrate the superiority and practicality of our approach in enhancing walking performance while ensuring safety. Overall, our research contributes to the advancement of biped robot locomotion by addressing gait optimisation from multiple perspectives, including singularity handling, safety constraints, and deviation learning.
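To make the constraint-layer idea concrete, here is a minimal sketch (assuming PyTorch; the layer sizes, joint count, and angle limits are illustrative assumptions, not the authors' implementation) of an actor network whose output is squashed into per-joint safe ranges, so every action it emits respects the joint-angle constraints by construction:

```python
import torch
import torch.nn as nn

class ConstrainedActor(nn.Module):
    """DDPG-style actor whose final layer maps actions into per-joint
    safe angle ranges, so emitted actions cannot leave [low, high]."""

    def __init__(self, state_dim, n_joints, low, high):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_joints),
        )
        # Safe joint-angle bounds (radians); buffers move with the module
        # but are not trained.
        self.register_buffer("low", torch.as_tensor(low, dtype=torch.float32))
        self.register_buffer("high", torch.as_tensor(high, dtype=torch.float32))

    def forward(self, state):
        raw = torch.tanh(self.net(state))        # in (-1, 1)
        mid = (self.high + self.low) / 2
        half = (self.high - self.low) / 2
        return mid + half * raw                  # inside [low, high] by construction

# Toy usage: a 12-joint biped whose state vector includes joint deviations.
actor = ConstrainedActor(state_dim=36, n_joints=12,
                         low=[-0.5] * 12, high=[0.5] * 12)
action = actor(torch.zeros(1, 36))
```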
Citations: 0
Look inside 3D point cloud deep neural network by patch-wise saliency map
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-12-15 · DOI: 10.3233/ica-230725
Linkun Fan, Fazhi He, Yupeng Song, Huangxinxin Xu, Bing Li
The 3D point cloud deep neural network (3D DNN) has achieved remarkable success, but its black-box nature hinders its application in many safety-critical domains. The saliency map technique is a key method to look inside the black box and determine where a 3D DNN focuses when recognizing a point cloud. Existing point cloud saliency methods illustrate the point-wise saliency for a given 3D DNN. However, such critical points are interchangeable and therefore unreliable. This finding is grounded in our experimental results, which show that a point becomes critical because it is responsible for representing one specific local structure; conversely, a local structure does not have to be represented by any specific set of points. As a result, discussing the saliency of the local structure represented by critical points (named patch-wise saliency) is more meaningful than discussing the saliency of specific points. Based on the above motivations, this paper designs a black-box algorithm to generate patch-wise saliency maps for point clouds. Our basic idea is to design the Mask Building-Dropping process, which adaptively matches the size of important/unimportant patches by clustering points with close saliency. Experimental results on several typical 3D DNNs show that our patch-wise saliency algorithm provides better visual guidance and detects where a 3D DNN is focusing more efficiently than a point-wise saliency map. Finally, we apply our patch-wise saliency map to adversarial attacks and backdoor defenses. The results show that the improvement is significant.
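As a rough illustration of why patch-level saliency is computable in a black-box setting, the sketch below clusters a cloud into patches and scores each patch by the confidence drop its removal causes. This is a simplified occlusion-style stand-in, not the paper's Mask Building-Dropping process (which clusters points by closeness of saliency rather than by geometry); `score_fn` and `n_patches` are assumed names:

```python
import numpy as np
from sklearn.cluster import KMeans

def patch_saliency(points, score_fn, n_patches=32):
    """Black-box patch-wise saliency: split the cloud into patches and
    score each patch by the confidence drop caused by removing it.

    points:   (N, 3) point cloud
    score_fn: callable returning the model's confidence for the true
              class of a given cloud (the model stays a black box)
    """
    labels = KMeans(n_clusters=n_patches, n_init=10).fit_predict(points)
    base = score_fn(points)
    saliency = np.zeros(n_patches)
    for k in range(n_patches):
        kept = points[labels != k]            # drop one patch at a time
        saliency[k] = base - score_fn(kept)   # large drop => salient patch
    return labels, saliency

# Toy usage: a fake "model" that prefers points close to the origin.
pts = np.random.default_rng(0).normal(size=(1024, 3))
score = lambda p: float(np.exp(-np.linalg.norm(p, axis=1).mean()))
labels, sal = patch_saliency(pts, score)
```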
Citations: 0
A broadcast sub-GHz framework for unmanned aerial vehicles clock synchronization
CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-11-16 · DOI: 10.3233/ica-230723
Niccolò Cecchinato, Ivan Scagnetto, Andrea Toma, Carlo Drioli, Gian Luca Foresti
Nowadays, sets of cooperative drones are commonly used as aerial sensors to monitor areas and track objects of interest (think, e.g., of border and coastal security and surveillance, crime control, disaster management, emergency first response, forest and wildlife protection, and traffic monitoring). The drones generate a large, continuous multimodal (audio, video and telemetry) data stream towards a ground control station with enough computing power and resources to store and process it. Hence, due to the distributed nature of this setting, further complicated by the movement and varying distances among drones and by possible interferences and obstacles compromising communications, a common clock between the nodes is of utmost importance to make feasible a correct reconstruction of the multimodal data stream from the single datagrams, which may be received out of order or with different delays. A framework architecture using sub-GHz broadcast communications is proposed to ensure time synchronization for a set of drones, allowing one to recover even in difficult situations where the usual time sources, e.g. GPS, NTP etc., are not available to all the devices. Such an architecture is then implemented and tested using LoRa radios and Raspberry Pi computers. However, other sub-GHz technologies can be used in place of LoRa, and other kinds of single-board computers can substitute for the Raspberry Pis, making the proposed solution easily customizable according to specific needs. Moreover, the proposal is low cost, since it does not require expensive hardware such as onboard rubidium-based atomic clocks. Our experiments indicate a worst-case skew of about 16 ms between drone clocks, using cheap components commonly available on the market. This is sufficient to deal with audio/video footage at 30 fps. Hence, it can be viewed as a useful and easy-to-implement architecture that helps maintain decent synchronization even when traditional solutions are not available.
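The core of such a scheme is estimating each node's clock offset and skew from a series of received broadcasts. A minimal sketch of that estimation step (radio I/O omitted; this is the generic broadcast-synchronization arithmetic, not the authors' exact protocol) could look like this:

```python
import numpy as np

def estimate_offset_skew(t_master, t_local):
    """Fit t_local ~ skew * t_master + offset by least squares.

    t_master: timestamps the reference node embeds in each broadcast (s)
    t_local:  local reception times of the same broadcasts (s)

    Propagation delay is ignored here: at sub-GHz ranges it is on the
    order of microseconds, negligible against the ~16 ms worst-case
    skew reported in the paper.
    """
    A = np.vstack([t_master, np.ones_like(t_master)]).T
    skew, offset = np.linalg.lstsq(A, t_local, rcond=None)[0]
    return offset, skew

def to_master_time(t_local_now, offset, skew):
    """Convert a local clock reading into the shared (master) timebase."""
    return (t_local_now - offset) / skew

# Toy usage with a simulated drifting clock.
t_m = np.arange(0.0, 10.0, 1.0)
t_l = 1.00002 * t_m + 0.35               # 20 ppm skew, 350 ms offset
offset, skew = estimate_offset_skew(t_m, t_l)
```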
Citations: 0
An exploratory design science research on troll factories
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-10-11 · DOI: 10.3233/ica-230720
Francisco S. Marcondes, José João Almeida, Paulo Novais
Private and military troll factories (facilities used to spread rumours in online social media) are currently proliferating around the world. By their very nature, they are obscure companies whose internal workings are largely unknown, apart from leaks to the press. They are even more concealed when it comes to their underlying technology. At least in a broad sense, it is believed that a troll factory performs two main tasks: sowing and spreading. The first is to create and, more importantly, maintain a social network that can be used for the spreading task. Sowing is thus a wicked long-term activity, subject to all sorts of problems. As an attempt to make this perspective a little clearer, this paper uses exploratory design science research to produce artefacts that could be applied to online rumour spreading in social media, under the hypothesis that it is possible to design a fully automated social media agent capable of sowing a social network on microblogging platforms. The expectation is that it will be possible to identify common opportunities and difficulties in the development of such tools, which in turn will allow an evaluation of the technology and, above all, of the level of automation of these facilities. The research is based on a general-domain Twitter corpus with 4M+ tokens and on ChatGPT, and discusses both knowledge-based and deep learning approaches for smooth tweet generation. These explorations suggest that, for the current, widespread and publicly available NLP technology, troll factories work like a call centre, i.e. humans assisted by more or less sophisticated computing tools (often called cyborgs).
Citations: 0
An explainable machine learning system for left bundle branch block detection and classification
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-08-28 · DOI: 10.3233/ica-230719
Beatriz Macas, Javier Garrigós, J. Martínez, J. M. Ferrández, M. P. Bonomini
Left bundle branch block is a cardiac conduction disorder that occurs when the electrical impulses that control the heartbeat are blocked or delayed as they travel through the left bundle branch of the cardiac conduction system, producing a characteristic electrocardiogram (ECG) pattern. A reduced set of biologically inspired features extracted from ECG data is proposed and used to train a variety of machine learning models for the LBBB classification task. Then, different methods are used to evaluate the importance of the features in the classification process of each model and to further reduce the feature set while maintaining the classification performance of the models. The performances obtained by the models under different metrics improve on those obtained by other authors in the literature on the same dataset. Finally, XAI techniques are used to verify that the predictions made by the models are consistent with the existing relationships between the data. This increases the reliability of the models and their usefulness in the diagnostic support process. These explanations can help clinicians better understand the reasoning behind diagnostic decisions.
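A minimal sketch of the importance-then-reduce loop described above (using scikit-learn's impurity-based importances and synthetic stand-in data; the real work uses biologically inspired ECG features and several model families) might look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in: X would hold ECG-derived features, y the LBBB labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 20)), rng.integers(0, 2, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Rank features by impurity-based importance, keep the top k, retrain.
k = 8
top = np.argsort(full.feature_importances_)[::-1][:k]
reduced = RandomForestClassifier(random_state=0).fit(X_tr[:, top], y_tr)

print("full set:", accuracy_score(y_te, full.predict(X_te)))
print("top-%d  :" % k, accuracy_score(y_te, reduced.predict(X_te[:, top])))
```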
Citations: 0
Neuro-distributed cognitive adaptive optimization for training neural networks in a parallel and asynchronous manner
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-08-06 · DOI: 10.3233/ica-230718
P. Michailidis, Iakovos T. Michailidis, Sokratis Gkelios, Georgios D. Karatzinis, Elias B. Kosmatopoulos
Distributed machine learning has delivered considerable advances in training neural networks by leveraging parallel processing, scalability, and fault tolerance to accelerate the process and improve model performance. However, training of large-size models has exhibited numerous challenges, due to the gradient dependence that conventional approaches integrate. To improve the training efficiency of such models, gradient-free distributed methodologies have emerged, fostering gradient-independent parallel processing and efficient utilization of resources across multiple devices or nodes. However, such approaches are usually restricted to specific applications, due to their conceptual limitations: computational and communicational requirements between partitions, partitioning limited solely to layers, limited sequential learning between the different layers, as well as training a potential model in solely synchronous mode. In this paper, we propose and evaluate the Neuro-Distributed Cognitive Adaptive Optimization (ND-CAO) methodology, a novel gradient-free algorithm that enables the efficient distributed training of arbitrary types of neural networks, in both a synchronous and an asynchronous manner. Contrary to the majority of existing methodologies, ND-CAO is applicable to any possible splitting of a potential neural network into blocks (partitions), with each of the blocks allowed to update its parameters fully asynchronously and independently of the rest of the blocks. Most importantly, no data exchange is required between the different blocks during training; the only information each block requires is the global performance of the model. Convergence of ND-CAO is mathematically established for generic neural network architectures, independently of the particular choices made, while four comprehensive experimental cases, considering different model architectures and image classification tasks, validate the algorithm's robustness and effectiveness in both synchronous and asynchronous training modes. Moreover, by conducting a thorough comparison between synchronous and asynchronous ND-CAO training, the algorithm is identified as an efficient scheme to train neural networks in a novel gradient-independent, distributed, and asynchronous manner, delivering similar – or even improved – results in loss and accuracy measures.
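To illustrate the block-wise, gradient-free flavour of this kind of training, here is a deliberately simplified sketch in which each block perturbs its own parameters and keeps the change only if the global loss improves, which is the only shared signal, as in the paper. The random-search rule and the sequential loop are assumptions made for brevity: ND-CAO's actual estimator is more elaborate, and a truly asynchronous run would execute each block's loop concurrently.

```python
import numpy as np

def blockwise_gradient_free_step(blocks, global_loss, sigma=0.05, rng=None):
    """One simplified gradient-free update over parameter blocks.

    Each block tries a random perturbation of its own parameters and keeps
    it only if the global loss improves; no gradients and no inter-block
    data exchange are involved, only the scalar global performance.
    """
    rng = rng or np.random.default_rng()
    for name in blocks:                      # blocks: dict name -> np.ndarray
        current = blocks[name]
        before = global_loss(blocks)
        blocks[name] = current + sigma * rng.standard_normal(current.shape)
        if global_loss(blocks) >= before:    # no improvement: roll back
            blocks[name] = current
    return blocks

# Toy usage: two "blocks" jointly minimising a separable quadratic.
blocks = {"b1": np.ones(4), "b2": np.ones(4)}
loss = lambda b: float((b["b1"] ** 2).sum() + ((b["b2"] - 2.0) ** 2).sum())
rng = np.random.default_rng(0)
for _ in range(500):
    blockwise_gradient_free_step(blocks, loss, rng=rng)
```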
Citations: 0
Improving landslide prediction by computer vision and deep learning
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-08-02 · DOI: 10.3233/ica-230717
B. Guerrero-Rodriguez, J. Garcia-Rodriguez, Jaime Salvador, Christian Mejia-Escobar, Shirley Cadena, Jairo Cepeda, Manuel Benavent-Lledó, David Mulero-Pérez
The destructive power of a landslide can seriously affect human beings and infrastructures. The prediction of this phenomenon is of great interest; however, it is a complex task in which traditional methods have limitations. In recent years, Artificial Intelligence has emerged as a successful alternative in the geological field. Most of the related works use classical machine learning algorithms to correlate the variables of the phenomenon and its occurrence. This requires large quantitative landslide datasets, collected and labeled manually, which is costly in terms of time and effort. In this work, we create an image dataset using an official landslide inventory, which we verified and updated based on journalistic information and interpretation of satellite images of the study area. The images cover the landslide crowns and the actual triggering values of the conditioning factors at the detail level (5 × 5 pixels). Our approach focuses on the specific location where the landslide starts and its proximity, unlike other works that consider the entire landslide area as the occurrence of the phenomenon. These images correspond to geological, geomorphological, hydrological and anthropological variables, which are stacked in a similar way to the channels of a conventional image to feed and train a convolutional neural network. Therefore, we improve the quality of the data and the representation of the phenomenon to obtain a more robust, reliable and accurate prediction model. The results indicate an average accuracy of 97.48%, which allows the generation of a landslide susceptibility map on the Aloag-Santo Domingo highway in Ecuador. This tool is useful for risk prevention and management in this area where small, medium and large landslides occur frequently.
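A minimal sketch of this patch construction and classifier (assuming PyTorch; the factor count, layer sizes, and names are illustrative assumptions, not the authors' architecture) would stack one channel per conditioning factor into a 5 × 5 patch and feed it to a small CNN:

```python
import torch
import torch.nn as nn

N_FACTORS = 10  # assumed number of conditioning factors (geology, slope, ...)

class PatchCNN(nn.Module):
    """Binary classifier over 5x5 multi-channel patches, one channel per
    conditioning factor, centred on the candidate landslide location."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_FACTORS, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 5 * 5, 64), nn.ReLU(),
            nn.Linear(64, 1),            # landslide / no-landslide logit
        )

    def forward(self, x):                # x: (batch, N_FACTORS, 5, 5)
        return self.head(self.features(x))

model = PatchCNN()
logits = model(torch.randn(8, N_FACTORS, 5, 5))
```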
Citations: 0
Improvement of small objects detection in thermal images
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-07-10 · DOI: 10.3233/ica-230715
Maxence Chaverot, Maxime Carré, M. Jourlin, A. Bensrhair, R. Grisel
Thermal images are widely used for various applications such as safety, surveillance, and Advanced Driver Assistance Systems (ADAS). However, these images typically have low contrast, a blurred appearance, and low resolution, making it difficult to detect distant and small-sized objects. To address these issues, this paper explores various preprocessing algorithms to improve the performance of already trained object detection networks. Specifically, mathematical morphology is used to favor the detection of small bright objects, while deblurring and super-resolution techniques are employed to enhance the image quality. The Logarithmic Image Processing (LIP) framework is chosen to perform mathematical morphology, as it is consistent with the human visual system. The efficacy of the proposed algorithms is evaluated on the FLIR dataset, with a sub-base focused on images containing distant objects. The mean Average Precision (mAP) score is computed to objectively evaluate the results, showing a significant improvement in the detection of small objects in thermal images using CNNs such as YOLOv4 and EfficientDet.
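For intuition, the LIP framework replaces ordinary grey-level arithmetic with operations that stay inside the bounded range [0, M). Below is a minimal sketch of a LIP-style white top-hat, one standard morphological way to make small bright objects stand out, using LIP subtraction f ⊖ g = M(f − g)/(M − g). The structuring-element size and the use of a classical grey-level opening are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np
from scipy.ndimage import grey_opening

M = 256.0  # grey-level bound of the LIP model for 8-bit images

def lip_subtract(f, g):
    """LIP subtraction f ⊖ g = M (f - g) / (M - g), defined for g <= f < M."""
    return M * (f - g) / (M - g)

def lip_white_tophat(image, size=5):
    """Image minus (in the LIP sense) its grey-level opening: structures
    brighter than the background and smaller than `size` are kept, while
    the large-scale background is suppressed."""
    img = image.astype(np.float64)
    opened = grey_opening(img, size=(size, size))  # opening <= img pointwise
    return lip_subtract(img, opened)

# Toy usage: a dim thermal frame with one small bright blob.
frame = np.full((64, 64), 40.0)
frame[30:33, 30:33] = 90.0
enhanced = lip_white_tophat(frame, size=7)   # the blob survives, flat background -> 0
```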
Citations: 0
Internet-of-Things framework for scalable end-of-life condition monitoring in remanufacturing
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-07-07 · DOI: 10.3233/ica-230716
Celia Garrido-Hidalgo, Luis Roda-Sanchez, A. Fernández-Caballero, T. Olivares, F. J. Ramírez
The worldwide generation of waste electrical and electronic equipment is continuously growing, with electric vehicle batteries reaching their end-of-life having become a key concern for both the environment and human health in recent years. In this context, the proliferation of Internet of Things standards and data ecosystems is advancing the feasibility of data-driven condition monitoring and remanufacturing. This is particularly desirable for the end-of-life recovery of high-value equipment towards sustainable closed-loop production systems. Low-Power Wide-Area Networks, despite being relatively recent, are starting to be conceived as key enabling technologies built upon the principles of long-range communication and negligible energy consumption. While LoRaWAN is considered the open standard with the highest level of acceptance from both industry and academia, its random access protocol (Aloha) limits its capacity in large-scale deployments to some extent. Although time-slotted scheduling has proved to alleviate certain scalability limitations, the constrained nature of end nodes and their application-oriented requirements significantly increase the complexity of time-slotted network management tasks. To shed light on this matter, a multi-agent network management system for the on-demand allocation of resources in end-of-life monitoring applications for remanufacturing is introduced in this work. It leverages LoRa's spreading factor orthogonality and network-wide knowledge to increase the number of nodes served in time-slotted monitoring setups. The proposed system is validated and evaluated for end-of-life monitoring with two emulated representative end-node distributions, achieving network capacity improvements ranging from 75.27% to 249.46% with respect to LoRaWAN's legacy operation. As a result, the suitability of different agent-based strategies has been evaluated and a number of lessons have been drawn according to different application and hardware constraints. While the presented findings can be used to further improve the explainability of the proposed models (in line with the concept of eXplainable AI), the overall framework represents a step forward in lightweight end-of-life condition monitoring for remanufacturing.
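To picture how spreading-factor orthogonality multiplies time-slot capacity, here is a toy allocator that hands each node a (slot, SF) pair, with six quasi-orthogonal SFs per slot. It is only a sketch: a real scheduler, including the one in the paper, would also weigh airtime (higher SFs transmit longer), duty-cycle limits, and link budget.

```python
from itertools import product

# LoRa spreading factors are quasi-orthogonal: transmissions at different
# SFs can share a time slot, so capacity grows to n_slots * len(SFs).
SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]

def allocate(node_ids, n_slots):
    """Assign each node a (slot, SF) pair, round-robin over the resource grid."""
    pairs = product(range(n_slots), SPREADING_FACTORS)
    schedule = {node: {"slot": slot, "sf": sf}
                for node, (slot, sf) in zip(node_ids, pairs)}
    if len(schedule) < len(node_ids):
        raise ValueError("not enough slot/SF resources for all nodes")
    return schedule

# Toy usage: 10 end nodes fit into just 2 slots thanks to SF orthogonality.
print(allocate([f"node{i}" for i in range(10)], n_slots=2))
```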
Citations: 1
A measured data correlation-based strain estimation technique for building structures using convolutional neural network
IF 6.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-06-17 · DOI: 10.3233/ica-230714
B. Oh, Sang Hoon Yoo, H. Park
A machine learning-based strain estimation method for structural members in a building is presented. The relationship between the strain responses of structural members is determined using a convolutional neural network (CNN). For accurate strain estimation, correlation analysis is introduced to select the optimal CNN model among responses from multiple structural members. The optimal CNN model, trained using the response of the structural member with a high degree of correlation with the response of the target structural member, is utilized to estimate the strain of the target structural member. The proposed correlation-based technique can also provide the next-best CNN model in case of defects in the sensors used to construct the optimal CNN. Validity is examined through the application of the presented technique to a numerical study on a three-dimensional steel structure and an experimental study on a steel frame specimen.
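The selection criterion itself is straightforward to sketch: rank candidate members by the absolute Pearson correlation between their measured strain histories and the target member's, then train the CNN on the top-ranked response. The function and variable names below are assumptions; the same ranking also yields the "next best" fallback mentioned above:

```python
import numpy as np

def rank_donors(target, candidates):
    """Rank candidate members by |Pearson r| between their measured strain
    histories and the target member's; the best feeds the CNN estimator,
    the runner-up is the fallback when that sensor is defective.

    target:     (T,) strain time series of the member to be estimated
    candidates: dict name -> (T,) measured strain series
    """
    scored = ((name, abs(np.corrcoef(target, series)[0, 1]))
              for name, series in candidates.items())
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

# Toy usage with synthetic strain histories.
t = np.linspace(0.0, 10.0, 500)
target = np.sin(t)
candidates = {"beam_A": np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=500),
              "beam_B": np.cos(t)}
print(rank_donors(target, candidates))   # beam_A ranks first, beam_B is the fallback
```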
Citations: 0