
Integrated Computer-Aided Engineering: Latest Publications

A parametric and feature-based CAD dataset to support human-computer interaction for advanced 3D shape learning
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-10 | DOI: 10.3233/ica-240744
Rubin Fan, Fazhi He, Yuxin Liu, Yupeng Song, Linkun Fan, Xiaohu Yan
3D shape learning is an important research topic in computer vision, in which datasets play a critical role. However, most existing 3D datasets use voxels, point clouds, meshes, and B-rep, which are not parametric and feature-based. Thus, they cannot support the generation of real-world engineering computer-aided design (CAD) models with complicated shape features. Furthermore, they are based on 3D geometry results without human-computer interaction (HCI) history. This work is the first to provide a fully parametric and feature-based CAD dataset with a selection mechanism to support HCI in 3D learning. First, unlike existing datasets, which are mainly composed of simple features (typically sketch and extrude), we devise complicated engineering features, such as fillet, chamfer, mirror, pocket, groove, and revolve. Second, in contrast to the monotonous combination of features, we introduce a selection mechanism to mimic how a human focuses on and selects a particular topological entity. The proposed mechanism establishes the relationships among complicated engineering features, which fully express the design intention and design knowledge of human CAD engineers. Therefore, it can process advanced 3D features for real-world engineering shapes. The experiments show that the proposed dataset outperforms existing CAD datasets in both reconstruction and generation tasks. In quantitative experiments, the proposed dataset demonstrates better prediction accuracy than other parametric datasets. Furthermore, CAD models generated from the proposed dataset comply with the semantics of human CAD engineers and can be edited and redesigned in mainstream industrial CAD software.
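To make the notion of a parametric, feature-based model with an explicit selection step concrete, the following is a minimal sketch in Python; the feature names, parameters, and entity identifiers are illustrative assumptions of mine, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Feature:
    """One parametric modelling operation, e.g. 'extrude' or 'fillet'."""
    kind: str                      # e.g. "sketch", "extrude", "fillet"
    params: Dict[str, float]       # feature parameters, e.g. {"radius": 2.0}
    selected_entities: List[str] = field(default_factory=list)  # topological entities picked by the user

@dataclass
class CADModel:
    """A CAD model as an ordered history of features plus selections (an HCI trace)."""
    features: List[Feature] = field(default_factory=list)

    def add(self, feature: Feature) -> None:
        self.features.append(feature)

# A toy modelling history: sketch -> extrude -> fillet applied to a selected edge.
model = CADModel()
model.add(Feature("sketch", {"width": 40.0, "height": 20.0}))
model.add(Feature("extrude", {"depth": 10.0}))
model.add(Feature("fillet", {"radius": 2.0}, selected_entities=["edge_3"]))

for f in model.features:
    print(f.kind, f.params, f.selected_entities)
```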
Citations: 0
A high-level simulator for Network-on-Chip
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-25 | DOI: 10.3233/ica-240743
Paulo Cesar Donizeti Paris, Emerson Carlos Pedrino
This study presents a high-level simulator for Network-on-Chip (NoC), designed for many-core architectures and integrated with the PlatEMO platform. The motivation for developing this tool arose from the need to evaluate the behavior of application mapping algorithms and of routing, both aspects being essential in the implementation and design of NoC architectures. This analysis underscored the importance of having effective NoC simulators as tools that allow studying and comparing various network technologies while ensuring a controlled simulation environment. During this investigation and evaluation, some simulators, such as Noxim, NoCTweak, and NoCmap, among others, offered configurable parameters for application traffic and options to synthetically define topology and packet traffic patterns. Additionally, they include mapping options that optimize latency and energy consumption, routing algorithms, technological settings such as the CMOS process, and measurement options for evaluating performance metrics such as throughput and power usage. However, while these simulators meet detailed technical demands, they are mostly restricted to analyzing the low-level elements of the architecture, thus hindering quick and easy understanding for non-specialists. This insight underscored the challenge of developing a tool that balances detailed analysis with a comprehensive learning perspective, considering the specific restrictions of each simulator analyzed. Experiments demonstrated the proposed simulator's efficacy in handling algorithms such as GA, PSO, and SA variants and, surprisingly, revealed limitations of the XY algorithm in mesh topologies, indicating the need for further investigation to confirm these findings. Future work will expand the simulator's functionalities, incorporating a broader range of algorithms and performance metrics, to establish it as an indispensable tool for research and development in NoCs.
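For readers unfamiliar with the XY routing mentioned in the results, the sketch below implements textbook dimension-ordered XY routing on a 2D mesh; it is a generic illustration, not code from the simulator under discussion.

```python
def xy_route(src, dst):
    """Dimension-ordered XY routing on a 2D mesh: move along X first, then along Y.
    src and dst are (x, y) router coordinates; returns the list of hops."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                 # route along X first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                 # then along Y
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# Example: route from router (0, 0) to router (2, 3) in a 4x4 mesh.
print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```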
Citations: 0
Efficient surface defect detection in industrial screen printing with minimized labeling effort
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-21 | DOI: 10.3233/ica-240742
Paul Josef Krassnig, Matthias Haselmann, Michael Kremnitzer, Dieter Paul Gruber
As part of the evolving Industry 4.0 landscape, machine learning-based visual inspection plays a key role in enhancing production efficiency. Screen printing, a versatile and cost-effective manufacturing technique, is widely applied in industries like electronics, textiles, and automotive. However, the production of complex multilayered designs is error-prone, resulting in a variety of defect appearances and classes. These defects can be characterized as small in relation to large sample areas and weakly pronounced. Sufficient defect visualization and robust defect detection methods are essential to address these challenges, especially considering the permitted design variability. In this work, we present a novel automatic visual inspection system for surface defect detection on decorated foil plates. Customized optical modalities, integrated into a sequential inspection procedure, enable defect visualization of production-related defect classes. The introduced patch-wise defect detection methods, designed to leverage less labeled data, prove effective for industrial defect detection, meeting the given process requirements. In this context, we propose an industry-applicable and scalable data preprocessing workflow that minimizes the overall labeling effort while maintaining the high detection performance known from supervised settings. Moreover, the presented methods, which do not rely on any labeled defective training data, outperformed a state-of-the-art unsupervised anomaly detection method in terms of defect detection performance and inference speed.
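The patch-wise idea (scoring small tiles of a large sample rather than the whole image) can be sketched as follows; the patch size, stride, and scoring function are placeholders of mine, not the paper's pipeline.

```python
import numpy as np

def iter_patches(image: np.ndarray, size: int = 64, stride: int = 64):
    """Yield (row, col, patch) tiles from a large grayscale image."""
    h, w = image.shape
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            yield r, c, image[r:r + size, c:c + size]

def anomaly_score(patch: np.ndarray) -> float:
    """Placeholder score; a trained per-patch model would be called here instead."""
    return float(patch.std())

# Toy image standing in for a scanned foil plate.
image = np.random.default_rng(0).integers(0, 255, size=(256, 256), dtype=np.uint8)
flagged = [(r, c) for r, c, p in iter_patches(image) if anomaly_score(p) > 70.0]
print(f"{len(flagged)} patches flagged for inspection")
```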
Citations: 0
Battery parameter identification for unmanned aerial vehicles with hybrid power system
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-19 | DOI: 10.3233/ica-240741
Zhuoyao He, David Martín Gómez, Pablo Flores Peña, Arturo de la Escalera Hueso, Xingcai Lu, José María Armingol Moreno
Unmanned aerial vehicles (UAVs) are gaining importance in many areas, such as agriculture and the military. A hybrid power system is a promising solution for the high energy density and power density demands of UAVs, as it integrates power sources such as an internal combustion engine (ICE), a fuel cell (FC), and low-capacity lithium-polymer (LiPo) batteries. For robust energy management, accurate state-of-charge (SOC) estimation is indispensable, which necessitates open circuit voltage (OCV) determination and parameter identification of the battery. The presented research demonstrates the feasibility of carrying out an incremental OCV test and even a dynamic stress test (DST) using the hybrid-powered UAV system itself. Based on battery relaxation terminal voltage as well as current-wave excitation, novel methods for OCV determination and parameter identification are proposed. Results of SOC estimation against the DST through an adaptive unscented Kalman filter (AUKF) algorithm show that parameters and OCV identified with longer relaxation times do not yield better SOC estimation accuracy. Besides, the study also shows that the OCV plays the vital role in affecting SOC estimation accuracy. A detailed analysis shows that the mean discharging rate and the current-wave amplitude are the major factors affecting the quality of the identified OCV with respect to SOC estimation accuracy.
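For context, a standard first-order (one-RC) equivalent-circuit battery model, which is an assumption on my part since the abstract does not specify the authors' exact model, gives the state-space form that filters such as the AUKF typically work with:

```latex
\begin{aligned}
\mathrm{SOC}_{k+1} &= \mathrm{SOC}_k - \frac{\eta\,\Delta t}{C_n}\, I_k,\\
U_{1,k+1} &= U_{1,k}\, e^{-\Delta t/(R_1 C_1)} + R_1\bigl(1 - e^{-\Delta t/(R_1 C_1)}\bigr) I_k,\\
U_{t,k} &= U_{\mathrm{OCV}}(\mathrm{SOC}_k) - U_{1,k} - R_0\, I_k,
\end{aligned}
```

where $I_k$ is the discharge current, $\eta$ the coulombic efficiency, $C_n$ the nominal capacity, $R_0$ the ohmic resistance, and $R_1$, $C_1$ the polarization pair; the filter then estimates SOC from the measured terminal voltage $U_{t,k}$ and the identified OCV curve.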
Citations: 0
Effectiveness of deep learning techniques in TV programs classification: A comparative analysis
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-04-10 | DOI: 10.3233/ica-240740
Federico Candela, Angelo Giordano, Carmen Francesca Zagaria, Francesco Carlo Morabito

In the application areas of streaming, social networks, and video-sharing platforms such as YouTube and Facebook, along with traditional television systems, program classification stands as a pivotal effort in multimedia content management. Despite recent advancements, it remains a scientific challenge for researchers. This paper proposes a novel approach for television monitoring systems and the classification of extended video content. In particular, it presents two distinct techniques for program classification. The first leverages a framework integrating the Structural Similarity Index Measurement and a Convolutional Neural Network, which operates on stacked frames to classify program initiation, conclusion, and contents. Notably, this versatile method can be seamlessly adapted across various systems. The second framework processes optical flow directly. Building upon a shot-boundary detection technique, it incorporates background subtraction to adaptively discern frame alterations. These alterations are subsequently categorized through the integration of a Transformer network, showcasing a potential advancement in program classification methodology. A comprehensive overview of the promising experimental results yielded by the two techniques is reported: the first technique achieved an accuracy of 95%, while the second reached 87% on multiclass classification. These results underscore the effectiveness and reliability of the proposed frameworks and pave the way for more efficient and precise content management in the ever-evolving landscape of multimedia platforms and streaming services.
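As a rough illustration of how SSIM between consecutive frames can flag a candidate program or shot boundary (the CNN classification stage is omitted), the sketch below uses scikit-image; the threshold value is an assumption, not the paper's setting.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def find_boundaries(frames, threshold=0.5):
    """Return indices where consecutive grayscale frames are dissimilar (candidate cuts)."""
    cuts = []
    for i in range(1, len(frames)):
        score = ssim(frames[i - 1], frames[i], data_range=255)
        if score < threshold:      # low similarity -> likely shot/program boundary
            cuts.append(i)
    return cuts

# Toy example: 10 frames with an abrupt visual change at index 5.
rng = np.random.default_rng(0)
base = rng.integers(0, 255, size=(64, 64), dtype=np.uint8)
frames = [base.copy() for _ in range(5)] + [255 - base for _ in range(5)]
print(find_boundaries(frames))     # expected to report a cut at index 5
```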

Citations: 0
Railway alignment optimization in regions with densely-distributed obstacles based on semantic topological maps
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-04-09 | DOI: 10.3233/ica-240739
Xinjie Wan, Hao Pu, Paul Schonfeld, Taoran Song, Wei Li, Lihui Peng

Railway alignment development in a study area with densely-distributed obstacles, in which regions favorable for alignments are isolated (termed an isolated island effect, i.e., IIE), is a computation-intensive and time-consuming task. To enhance search efficiency and solution quality, an environmental suitability analysis is conducted to identify alignment-favorable regions (AFRs), focusing the subsequent alignment search on these areas. Firstly, a density-based clustering algorithm (DBSCAN) and a specific criterion are customized to distinguish AFR distribution patterns: continuously-distributed AFRs, obstructed effects, and IIEs. Secondly, a study area characterized by IIEs is represented with a semantic topological map (STM), integrating between-island and within-island paths. Specifically, between-island paths are derived through a multi-directional scanning strategy, while within-island paths are optimized using a Floyd-Warshall algorithm. To this end, the intricate alignment optimization problem is simplified into a shortest path problem, tackled with conventional shortest path algorithms (of which Dijkstra’s algorithm is adopted in this work). Lastly, the proposed method is applied to a real case in a mountainous region with karst landforms. Numerical results indicate its superior performance in both construction costs and environmental suitability compared to human designers and a prior alignment optimization method.
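Since the alignment search is ultimately reduced to a shortest-path query over the semantic topological map, a minimal generic Dijkstra implementation (not the authors' cost model) illustrates that final step:

```python
import heapq

def dijkstra(graph, source):
    """Shortest path costs from source over a dict-of-dicts weighted graph."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue               # stale queue entry
        for nxt, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist

# Toy STM-like graph: nodes are candidate alignment waypoints, weights are path costs.
graph = {"A": {"B": 4.0, "C": 1.0}, "C": {"B": 2.0, "D": 5.0}, "B": {"D": 1.0}}
print(dijkstra(graph, "A"))        # {'A': 0.0, 'C': 1.0, 'B': 3.0, 'D': 4.0}
```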

Citations: 0
A weakly supervised active learning framework for non-intrusive load monitoring
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-04-08 | DOI: 10.3233/ica-240738
Giulia Tanoni, Tamara Sobot, Emanuele Principi, Vladimir Stankovic, Lina Stankovic, Stefano Squartini
Energy efficiency is at a critical point now, with rising energy prices and decarbonisation of the residential sector to meet the global NetZero agenda. Non-Intrusive Load Monitoring is a software-based technique for monitoring individual appliances inside a building from a single aggregate meter reading, and recent approaches are based on supervised deep learning. Such approaches are affected by practical constraints related to labelled data collection, particularly when a pre-trained model is deployed in an unknown target environment and needs to be adapted to the new data domain. In this case, transfer learning is usually adopted and the end user is directly involved in the labelling process. Unlike previous literature, we propose a combined weakly supervised and active learning approach to reduce the quantity of data to be labelled and the end-user effort in providing the labels. We demonstrate the efficacy of our method by comparing it to a transfer learning approach based on weak supervision. Our method reduces the quantity of weakly annotated data required by up to 82.6–98.5% in four target domains while improving the appliance classification performance.
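The active-learning half of such a pipeline usually amounts to selecting the aggregate windows the current model is least certain about and asking the end user to label only those; the sketch below shows a generic entropy-based selection, which is my own illustration rather than the authors' criterion.

```python
import numpy as np

def select_for_labeling(probabilities: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` samples with the highest predictive entropy.
    probabilities: (n_samples, n_classes) model outputs summing to 1 per row."""
    eps = 1e-12
    entropy = -np.sum(probabilities * np.log(probabilities + eps), axis=1)
    return np.argsort(entropy)[::-1][:budget]   # most uncertain first

# Toy example: 4 aggregate windows, 3 appliance classes; window 2 is the most ambiguous.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.60, 0.30, 0.10],
                  [0.34, 0.33, 0.33],
                  [0.80, 0.10, 0.10]])
print(select_for_labeling(probs, budget=2))     # e.g. [2 1]
```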
Citations: 0
Prediction of thrust bearing’s performance in Mixed Lubrication regime
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-03-26 | DOI: 10.3233/ica-240737
Konstantinos P. Katsaros, Pantelis G. Nikolakopoulos
A hydrodynamic thrust bearing can be forced to operate in a mixed lubrication regime under various circumstances. In this state, the tribological characteristics of the bearing can be affected significantly, and the phenomena that develop have a severe impact on the performance of the mechanism. Until recently, researchers modeled the hydrodynamic lubrication problem of thrust bearings with either analytical or numerical solutions. The analytical solutions are very simple and do not provide enough accuracy in describing the actual problem. In addition, relying only on computational methodologies can lead to time-consuming and complex algorithms that need to be repeated every time the operating conditions change in order to draw safe conclusions. Recent technological advances, especially in the field of computer science, have provided tools that enhance and accelerate the modeling of thrust bearings' operation. The aim of this study is to examine the application of Artificial Neural Networks as Machine Learning models trained to predict the coefficient of friction for lubricated pad thrust bearings in the mixed lubrication regime. The hydrodynamic analysis of the thrust bearing is performed by solving the average 2-D Reynolds equation numerically. In order to describe the roughness of the profiles, both the flow factors suggested by N. Patir and H.S. Cheng (1978) and the model of J.A. Greenwood and J.H. Tripp (1970) are taken into consideration. Three lubricants, SAE 0W30, SAE 10W40, and SAE 10W60, are tested and compared for a variety of operating velocities and applied coatings. The numerical analysis results are used as training datasets for the machine learning algorithms. Four different ML methods are applied in this investigation: Artificial Neural Networks (ANNs), Multi-Variable Quadratic Polynomial Regression, Quadratic SVM, and Regression Trees. The coefficient of determination, R², is calculated and used to determine the most accurate ML method for the current study. The results showed that ANNs provide very good accuracy in the prediction of the friction coefficient compared to the rest of the ML models discussed.
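To make the last step concrete, the sketch below fits a small ANN to synthetic (speed, load, viscosity) inputs and reports R² with scikit-learn; the feature names and the synthetic friction law are assumptions for illustration, not the paper's dataset.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic operating points: [sliding speed (m/s), specific load (MPa), viscosity (Pa*s)]
X = rng.uniform([0.5, 0.5, 0.01], [10.0, 5.0, 0.2], size=(500, 3))
# Hypothetical, Stribeck-like friction trend plus noise, purely for illustration.
y = 0.02 + 0.05 / (1.0 + X[:, 0] * X[:, 2] / X[:, 1]) + rng.normal(0.0, 0.002, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(r2_score(y_te, model.predict(X_te)), 3))
```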
Citations: 0
Multi-label classification with imbalanced classes by fuzzy deep neural networks
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-03-09 | DOI: 10.3233/ica-240736
Federico Succetti, Antonello Rosato, Massimo Panella
Multi-label classification is an advantageous technique for managing uncertainty in classification problems where each data instance is associated with several labels simultaneously. Such situations are frequent in real-world scenarios, where decisions rely on imprecise or noisy data and adaptable classification methods are preferred. However, the problem of class imbalance is a common characteristic of several multi-label datasets, in which the distribution of samples and their corresponding labels is non-uniform across the data space. In this paper, we propose a multi-label classification approach utilizing fuzzy logic in order to deal with the class imbalance problem. To eliminate the need for an expert to determine the logical rules of inference, deep neural networks are adopted, which have proven to be exceptionally effective for such problems. By combining fuzzy inference systems and deep neural networks, the strengths of each approach can be exploited and its weaknesses mitigated. As a further development, a symbolic representation of time series is put in place to reduce data dimensionality and speed up the training procedure. This allows for more flexibility in model application, in particular with respect to time constraints arising from the causality of observed time series. Tests carried out on a multi-label classification dataset related to the current and voltage profiles of several household appliances show that the proposed model outperforms four baseline models for time series classification.
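The "symbolic representation of time series" mentioned above is in the spirit of SAX-style discretization; whether the authors use SAX specifically is not stated, so the sketch below is a generic illustration with the standard breakpoints for a four-letter alphabet.

```python
import numpy as np

def sax(series: np.ndarray, n_segments: int = 8) -> str:
    """Minimal SAX-style discretization: z-normalize, piecewise-average, map to 'abcd'."""
    x = (series - series.mean()) / (series.std() + 1e-12)          # z-normalization
    segments = np.array_split(x, n_segments)                        # PAA segments
    means = np.array([seg.mean() for seg in segments])
    breakpoints = np.array([-0.6745, 0.0, 0.6745])                  # N(0,1) quartiles, alphabet size 4
    return "".join("abcd"[np.searchsorted(breakpoints, m)] for m in means)

# One period of a sine wave compressed into a short symbolic word.
t = np.linspace(0, 2 * np.pi, 64)
print(sax(np.sin(t)))   # e.g. 'cddcbaab'
```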
Citations: 0
Multi-agent simulation of autonomous industrial vehicle fleets: Towards dynamic task allocation in V2X cooperation mode
IF 6.5 | CAS Zone 2 (Computer Science) | Q1 Computer Science | Pub Date: 2024-03-01 | DOI: 10.3233/ica-240735
J. Grosset, A.-J. Fougères, M. Djoko-Kouam, J.-M. Bonnin
The smart factory leads to a strong digitalization of industrial processes and continuous communication between the systems integrated into the production, storage, and supply chains. One of the research areas in Industry 4.0 is the possibility of using autonomous and/or intelligent industrial vehicles. The optimization of the management of the tasks allocated to these vehicles with adaptive behaviours, as well as the increase in vehicle-to-everything communications (V2X) make it possible to develop collective and adaptive intelligence for these vehicles, often grouped in fleets. Task allocation and scheduling are often managed centrally. The requirements for flexibility, robustness, and scalability lead to the consideration of decentralized mechanisms to react to unexpected situations. However, before being definitively adopted, decentralization must first be modelled and then simulated. Thus, we use a multi-agent simulation to test the proposed dynamic task (re)allocation process. A set of problematic situations for the circulation of autonomous industrial vehicles in areas such as smart warehouses (obstacles, breakdowns, etc.) has been identified. These problematic situations could disrupt or harm the successful completion of the process of dynamic (re)allocation of tasks. We have therefore defined scenarios involving them in order to demonstrate through simulation that the process remains reliable. The simulation of new problematic situations also allows us to extend the potential of this process, which we discuss at the end of the article.
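One common decentralized pattern for dynamic task (re)allocation is a simple auction in which each free vehicle bids its estimated cost for a task and the lowest bid wins; the sketch below is a generic illustration of that idea, not the simulation model used in the paper.

```python
def auction_allocate(tasks, vehicles, cost):
    """Greedy auction: each task goes to the available vehicle with the lowest bid."""
    assignment = {}
    available = set(vehicles)
    for task in tasks:
        if not available:
            break                                     # no free vehicle; task stays queued
        bids = {v: cost(v, task) for v in available}  # each vehicle announces its bid
        winner = min(bids, key=bids.get)
        assignment[task] = winner
        available.remove(winner)                      # winner is busy until the task is done
    return assignment

# Toy example: cost = Manhattan distance between vehicle position and task pickup point.
positions = {"AGV1": (0, 0), "AGV2": (5, 5), "AGV3": (9, 1)}
pickups = {"T1": (1, 1), "T2": (8, 2), "T3": (4, 6)}

def manhattan(v, t):
    return abs(positions[v][0] - pickups[t][0]) + abs(positions[v][1] - pickups[t][1])

print(auction_allocate(["T1", "T2", "T3"], ["AGV1", "AGV2", "AGV3"], manhattan))
# {'T1': 'AGV1', 'T2': 'AGV3', 'T3': 'AGV2'}
```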
Citations: 0