
Latest publications in Expert Systems

How does energy transition improve energy utilization efficiency? A case study of China's coal‐to‐gas program
IF 3.3, CAS Region 4 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-09-03. DOI: 10.1111/exsy.13721
Zhixiang Zhou, Yifei Zhu, Yannan Li, Huaqing Wu
Improving energy efficiency by adjusting the structure of energy consumption types is of great significance for reducing carbon emissions in the short term. The present paper constructs new data envelopment analysis models for evaluating energy utilization under different structural conditions and calculating potential emissions reductions. We conducted empirical research on 30 provinces in China from 2003 to 2019—a time frame that coincides with the instituting of China's “coal‐to‐gas” program. Our results show that technological progress is the main way for China to reduce carbon emissions and that it is possible to reduce the total amount of carbon emissions by 35%. Additionally, optimizing the energy consumption structure following the coal‐to‐gas program guidelines could reduce the country's carbon emissions by a further 25%. Finally, this paper provides specific policy recommendations based on the efficiency analysis results to guide each province in reducing carbon emissions under the conditions of energy demand growth.
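The abstract does not specify the authors' new DEA models, but the core building block they extend — an input-oriented efficiency score obtained by solving one linear program per decision-making unit (here, a province) — can be sketched. The function name `ccr_efficiency` and the use of `scipy.optimize.linprog` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (envelopment form).
    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output matrix.
    Solves: min theta  s.t.  sum_j lam_j x_ij <= theta * x_io  (each input i)
                             sum_j lam_j y_rj >= y_ro          (each output r)
                             lam >= 0, theta >= 0."""
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                        # minimise theta
    A_in = np.hstack([-X[o][:, None], X.T])           # lam·x_i - theta*x_io <= 0
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # -lam·y_r <= -y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(X.shape[1]), -Y[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    return float(res.x[0])
```

A DMU with score 1.0 lies on the efficient frontier; scores below 1.0 indicate the proportional input reduction that the frontier admits.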
{"title":"How does energy transition improve energy utilization efficiency? A case study of China's coal‐to‐gas program","authors":"Zhixiang Zhou, Yifei Zhu, Yannan Li, Huaqing Wu","doi":"10.1111/exsy.13721","DOIUrl":"https://doi.org/10.1111/exsy.13721","url":null,"abstract":"Improving energy efficiency by adjusting the structure of energy consumption types is of great significance for reducing carbon emissions in the short term. The present paper constructs new data envelopment analysis models for evaluating energy utilization under different structural conditions and calculating potential emissions reductions. We conducted empirical research on 30 provinces in China from 2003 to 2019—a time frame that coincides with the instituting of China's “coal‐to‐gas” program. Our results show that technological progress is the main way for China to reduce carbon emissions and that it is possible to reduce the total amount of carbon emissions by 35%. Additionally, optimizing the energy consumption structure following the coal‐to‐gas program guidelines could reduce the country's carbon emissions by a further 25%. Finally, this paper provides specific policy recommendations based on the efficiency analysis results to guide each province in reducing carbon emissions under the conditions of energy demand growth.","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":null,"pages":null},"PeriodicalIF":3.3,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning‐based gesture recognition for surgical applications: A data augmentation approach
IF 3.3, CAS Region 4 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-09-02. DOI: 10.1111/exsy.13706
Sofía Sorbet Santiago, Jenny Alexandra Cifuentes
Hand gesture recognition and classification play a pivotal role in automating Human‐Computer Interaction (HCI) and have garnered substantial attention in research. In this study, the focus is placed on the application of gesture recognition in surgical settings to provide valuable feedback during medical training. A tool gesture classification system based on Deep Learning (DL) techniques is proposed, specifically employing a Long Short Term Memory (LSTM)‐based model with an attention mechanism. The research is structured in three key stages: data pre‐processing to eliminate outliers and smooth trajectories, addressing noise from surgical instrument data acquisition; data augmentation to overcome data scarcity by generating new trajectories through controlled spatial transformations; and the implementation and evaluation of the DL‐based classification strategy. The dataset used includes recordings from ten participants with varying surgical experience, covering three types of trajectories and involving both right and left arms. The proposed classifier, combined with the data augmentation strategy, is assessed for its effectiveness in classifying all acquired gestures. The performance of the proposed model is evaluated against other DL‐based methodologies commonly employed in surgical gesture classification. The results indicate that the proposed approach outperforms these benchmark methods, achieving higher classification accuracy and robustness in distinguishing diverse surgical gestures.
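The abstract does not give the model's equations; as a hedged illustration of the attention mechanism it describes — weighting the LSTM's hidden states by learned relevance scores before classification — here is a minimal numpy sketch (the dot-product scoring and the name `attention_pool` are assumptions, not the paper's architecture):

```python
import numpy as np

def attention_pool(H, w):
    """Attention pooling over a sequence of hidden states.
    H: (T, d) hidden states (e.g. from an LSTM); w: (d,) learned scoring vector.
    Returns the attention-weighted context vector and the weights themselves."""
    scores = H @ w                       # one relevance score per time step, (T,)
    scores -= scores.max()               # numerical stability for softmax
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ H, alpha              # context (d,), weights (T,)
```

The context vector then feeds a standard classification head; with an untrained (zero) scoring vector the pooling degenerates to a plain mean over time steps.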
{"title":"Deep learning‐based gesture recognition for surgical applications: A data augmentation approach","authors":"Sofía Sorbet Santiago, Jenny Alexandra Cifuentes","doi":"10.1111/exsy.13706","DOIUrl":"https://doi.org/10.1111/exsy.13706","url":null,"abstract":"Hand gesture recognition and classification play a pivotal role in automating Human‐Computer Interaction (HCI) and have garnered substantial attention in research. In this study, the focus is placed on the application of gesture recognition in surgical settings to provide valuable feedback during medical training. A tool gesture classification system based on Deep Learning (DL) techniques is proposed, specifically employing a Long Short Term Memory (LSTM)‐based model with an attention mechanism. The research is structured in three key stages: data pre‐processing to eliminate outliers and smooth trajectories, addressing noise from surgical instrument data acquisition; data augmentation to overcome data scarcity by generating new trajectories through controlled spatial transformations; and the implementation and evaluation of the DL‐based classification strategy. The dataset used includes recordings from ten participants with varying surgical experience, covering three types of trajectories and involving both right and left arms. The proposed classifier, combined with the data augmentation strategy, is assessed for its effectiveness in classifying all acquired gestures. The performance of the proposed model is evaluated against other DL‐based methodologies commonly employed in surgical gesture classification. 
The results indicate that the proposed approach outperforms these benchmark methods, achieving higher classification accuracy and robustness in distinguishing diverse surgical gestures.","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":null,"pages":null},"PeriodicalIF":3.3,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CADICA: A new dataset for coronary artery disease detection by using invasive coronary angiography
IF 3.3, CAS Region 4 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-30. DOI: 10.1111/exsy.13708
Ariadna Jiménez‐Partinen, Miguel A. Molina‐Cabello, Karl Thurnhofer‐Hemsi, Esteban J. Palomo, Jorge Rodríguez‐Capitán, Ana I. Molina‐Ramos, Manuel Jiménez‐Navarro
Coronary artery disease (CAD) remains the leading cause of death globally, and invasive coronary angiography (ICA) is considered the gold standard of anatomical imaging evaluation when CAD is suspected. However, risk evaluation based on ICA has several limitations, such as visual assessment of stenosis severity, which has significant interobserver variability. This motivates the development of a lesion classification system that can support specialists in their clinical procedures. Although deep learning classification methods are well developed in other areas of medical imaging, ICA image classification is still at an early stage. One of the most important reasons is the lack of available, high‐quality open‐access datasets. In this paper, we report a new annotated ICA image dataset, CADICA, to provide the research community with a comprehensive and rigorous dataset of coronary angiography consisting of a set of acquired patient videos and associated disease‐related metadata. This dataset can be used by clinicians to train their skills in angiographic assessment of CAD severity, by computer scientists to create computer‐aided diagnostic systems to help in such assessment, and to validate existing methods for CAD detection. In addition, baseline classification methods are proposed and analysed, validating the functionality of CADICA with deep learning‐based methods and giving the scientific community a starting point to improve CAD detection.
{"title":"CADICA: A new dataset for coronary artery disease detection by using invasive coronary angiography","authors":"Ariadna Jiménez‐Partinen, Miguel A. Molina‐Cabello, Karl Thurnhofer‐Hemsi, Esteban J. Palomo, Jorge Rodríguez‐Capitán, Ana I. Molina‐Ramos, Manuel Jiménez‐Navarro","doi":"10.1111/exsy.13708","DOIUrl":"https://doi.org/10.1111/exsy.13708","url":null,"abstract":"Coronary artery disease (CAD) remains the leading cause of death globally and invasive coronary angiography (ICA) is considered the gold standard of anatomical imaging evaluation when CAD is suspected. However, risk evaluation based on ICA has several limitations, such as visual assessment of stenosis severity, which has significant interobserver variability. This motivates to development of a lesion classification system that can support specialists in their clinical procedures. Although deep learning classification methods are well‐developed in other areas of medical imaging, ICA image classification is still at an early stage. One of the most important reasons is the lack of available and high‐quality open‐access datasets. In this paper, we reported a new annotated ICA images dataset, CADICA, to provide the research community with a comprehensive and rigorous dataset of coronary angiography consisting of a set of acquired patient videos and associated disease‐related metadata. This dataset can be used by clinicians to train their skills in angiographic assessment of CAD severity, by computer scientists to create computer‐aided diagnostic systems to help in such assessment, and to validate existing methods for CAD detection. 
In addition, baseline classification methods are proposed and analysed, validating the functionality of CADICA with deep learning‐based methods and giving the scientific community a starting point to improve CAD detection.","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":null,"pages":null},"PeriodicalIF":3.3,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142226349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A code change‐oriented approach to just‐in‐time defect prediction with multiple input semantic fusion
IF 3.3, CAS Region 4 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-28. DOI: 10.1111/exsy.13702
Teng Huang, Hui‐Qun Yu, Gui‐Sheng Fan, Zi‐Jie Huang, Chen‐Yu Wu
Recent research found that fine‐tuning pre‐trained models is superior to training models from scratch in just‐in‐time (JIT) defect prediction. However, existing approaches using pre‐trained models have their limitations. First, the input length is constrained by the pre‐trained models. Second, the inputs are change‐agnostic. To address these limitations, we propose JIT‐Block, a JIT defect prediction method that combines multiple input semantics using the changed block as the fundamental unit. We restructure the JIT‐Defects4J dataset used in previous research. We then conducted a comprehensive comparison using eleven performance metrics, including both effort‐aware and effort‐agnostic measures, against six state‐of‐the‐art baseline models. The results demonstrate that on the JIT defect prediction task, our approach outperforms the baseline models in all six metrics, showing improvements ranging from 1.5% to 800% in effort‐agnostic metrics and 0.3% to 57% in effort‐aware metrics. For the JIT defect code line localization task, our approach outperforms the baseline models in three out of five metrics, showing improvements of 11% to 140%.
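The paper's exact definition of a "changed block" is not given in the abstract; one plausible reading — grouping consecutive added or removed lines of a unified diff into blocks, so each block can be embedded separately — can be sketched as follows (the helper `changed_blocks` is hypothetical, not the authors' code):

```python
def changed_blocks(diff_lines):
    """Group consecutive added/removed lines of a unified diff into blocks.
    File headers ('+++', '---'), context lines, and hunk markers end a block."""
    blocks, current = [], []
    for line in diff_lines:
        if line.startswith(('+', '-')) and not line.startswith(('+++', '---')):
            current.append(line)        # still inside a run of changed lines
        elif current:
            blocks.append(current)      # a non-change line closes the block
            current = []
    if current:
        blocks.append(current)
    return blocks
```

Each returned block would then be one input unit for the prediction model, sidestepping the whole-commit length limit of the pre-trained encoder.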
{"title":"A code change‐oriented approach to just‐in‐time defect prediction with multiple input semantic fusion","authors":"Teng Huang, Hui‐Qun Yu, Gui‐Sheng Fan, Zi‐Jie Huang, Chen‐Yu Wu","doi":"10.1111/exsy.13702","DOIUrl":"https://doi.org/10.1111/exsy.13702","url":null,"abstract":"Recent research found that fine‐tuning pre‐trained models is superior to training models from scratch in just‐in‐time (JIT) defect prediction. However, existing approaches using pre‐trained models have their limitations. First, the input length is constrained by the pre‐trained models.Secondly, the inputs are change‐agnostic.To address these limitations, we propose JIT‐Block, a JIT defect prediction method that combines multiple input semantics using changed block as the fundamental unit. We restructure the JIT‐Defects4J dataset used in previous research. We then conducted a comprehensive comparison using eleven performance metrics, including both effort‐aware and effort‐agnostic measures, against six state‐of‐the‐art baseline models. The results demonstrate that on the JIT defect prediction task, our approach outperforms the baseline models in all six metrics, showing improvements ranging from 1.5% to 800% in effort‐agnostic metrics and 0.3% to 57% in effort‐aware metrics. 
For the JIT defect code line localization task, our approach outperforms the baseline models in three out of five metrics, showing improvements of 11% to 140%.","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":null,"pages":null},"PeriodicalIF":3.3,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unlocking the potential: A review of artificial intelligence applications in wind energy
IF 3.3, CAS Region 4 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-28. DOI: 10.1111/exsy.13716
Safa Dörterler, Seyfullah Arslan, Durmuş Özdemir
This paper presents a comprehensive review of the most recent papers and research trends in the fields of wind energy and artificial intelligence. Our study aims to guide future research by identifying the potential application and research areas of artificial intelligence and machine learning techniques in the wind energy sector and the knowledge gaps in this field. Artificial intelligence techniques offer significant benefits and advantages in many sub‐areas, such as increasing the efficiency of wind energy facilities, estimating energy production, optimizing operation and maintenance, providing security and control, data analysis, and management. Our research focuses on studies indexed in the Web of Science library on wind energy between 2000 and 2023 using sub‐branches of artificial intelligence techniques such as artificial neural networks, other machine learning methods, data mining, fuzzy logic, meta‐heuristics, and statistical methods. In this way, current methods and techniques in the literature are examined to produce more efficient, sustainable, and reliable wind energy, and the findings are discussed for future studies. This comprehensive evaluation is designed to help academics and specialists acquire a current, broad perspective on how artificial intelligence is used in wind energy and to identify the research topics the field still needs.
{"title":"Unlocking the potential: A review of artificial intelligence applications in wind energy","authors":"Safa Dörterler, Seyfullah Arslan, Durmuş Özdemir","doi":"10.1111/exsy.13716","DOIUrl":"https://doi.org/10.1111/exsy.13716","url":null,"abstract":"This paper presents a comprehensive review of the most recent papers and research trends in the fields of wind energy and artificial intelligence. Our study aims to guide future research by identifying the potential application and research areas of artificial intelligence and machine learning techniques in the wind energy sector and the knowledge gaps in this field. Artificial intelligence techniques offer significant benefits and advantages in many sub‐areas, such as increasing the efficiency of wind energy facilities, estimating energy production, optimizing operation and maintenance, providing security and control, data analysis, and management. Our research focuses on studies indexed in the Web of Science library on wind energy between 2000 and 2023 using sub‐branches of artificial intelligence techniques such as artificial neural networks, other machine learning methods, data mining, fuzzy logic, meta‐heuristics, and statistical methods. In this way, current methods and techniques in the literature are examined to produce more efficient, sustainable, and reliable wind energy, and the findings are discussed for future studies. 
This comprehensive evaluation is designed to be helpful to academics and specialists interested in acquiring a current and broad perspective on the types of uses of artificial intelligence in wind energy and seeking what research subjects are needed in this field.","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":null,"pages":null},"PeriodicalIF":3.3,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Trust region based chaotic search for solving multi‐objective optimization problems
IF 3.3, CAS Region 4 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-27. DOI: 10.1111/exsy.13705
M. A. El‐Shorbagy
A numerical optimization technique used to address nonlinear programming problems is the trust region (TR) method. TR uses a quadratic model, which may represent the function adequately, to create a neighbourhood around the current best solution as a trust region in each step, rather than searching for the original function's objective solution. This allows the method to determine the next local optimum. The TR technique has been utilized by numerous researchers to tackle multi‐objective optimization problems (MOOPs). However, no publication has discussed applying a chaotic search (CS) with the TR algorithm to solve multi‐objective (MO) problems. Motivated by this gap, the main contribution of this study is to introduce a trust‐region (TR) technique based on chaotic search (CS) for solving MOOPs. First, the reference point interactive approach is used to convert the MOOP to a single‐objective optimization problem (SOOP). The search space is then randomly initialized with a set of initial points. Second, in order to supply locations on the Pareto boundary, the TR method solves the SOOP. Finally, all points on the Pareto frontier are obtained using CS. A range of MO benchmark problems have demonstrated the efficiency of the proposed algorithm (TR based CS) in generating Pareto optimum sets for MOOPs. Furthermore, a demonstration of the suggested algorithm's ability to resolve real‐world applications is provided through a practical implementation of the algorithm to improve an abrasive water‐jet machining process (AWJM).
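The abstract does not specify the chaotic map used; a common choice in the CS literature is the logistic map at r = 4, which is fully chaotic. The sketch below, with the hypothetical helper `chaotic_local_search`, shows how such a sequence can refine a candidate solution inside a trust-region-style box around the current best point (this is an illustration of the general technique, not the paper's algorithm):

```python
import numpy as np

def chaotic_local_search(f, x_best, radius, n_iter=200):
    """Refine `x_best` by evaluating candidates drawn from a logistic-map
    chaotic sequence, mapped into the box [x_best - radius, x_best + radius]."""
    # one chaotic variable per dimension, seeded to avoid symmetric collisions
    z = (0.123 * np.arange(1, x_best.size + 1)) % 1.0
    best, f_best = x_best.copy(), f(x_best)
    for _ in range(n_iter):
        z = 4.0 * z * (1.0 - z)                   # logistic map, chaotic at r = 4
        cand = x_best + radius * (2.0 * z - 1.0)  # map z in (0,1) into the box
        fc = f(cand)
        if fc < f_best:
            best, f_best = cand.copy(), fc
    return best, f_best
```

Because the logistic orbit densely covers (0, 1), the candidates probe the neighbourhood more thoroughly than a few random draws of the same budget, which is the usual argument for chaotic search as a local refiner.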
{"title":"Trust region based chaotic search for solving multi‐objective optimization problems","authors":"M. A. El‐Shorbagy","doi":"10.1111/exsy.13705","DOIUrl":"https://doi.org/10.1111/exsy.13705","url":null,"abstract":"A numerical optimization technique used to address nonlinear programming problems is the trust region (TR) method. TR uses a quadratic model, which may represent the function adequately, to create a neighbourhood around the current best solution as a trust region in each step, rather than searching for the original function's objective solution. This allows the method to determine the next local optimum. The TR technique has been utilized by numerous researchers to tackle multi‐objective optimization problems (MOOPs). But there is not any publication that discusses the issue of applying a chaotic search (CS) with the TR algorithm for solving multi‐objective (MO) problems. From this motivation, the main contribution of this study is to introduce trust‐region (TR) technique based on chaotic search (CS) for solving MOOPs. First, the reference point interactive approach is used to convert MOOP to a single objective optimization problem (SOOP). The search space is then randomly initialized with a set of initial points. Second, in order to supply locations on the Pareto boundary, the TR method solves the SOOP. Finally, all points on the Pareto frontier are obtained using CS. A range of MO benchmark problems have demonstrated the efficiency of the proposed algorithm (TR based CS) in generating Pareto optimum sets for MOOPs. 
Furthermore, a demonstration of the suggested algorithm's ability to resolve real‐world applications is provided through a practical implementation of the algorithm to improve an abrasive water‐jet machining process (AWJM).","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":null,"pages":null},"PeriodicalIF":3.3,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessing interpretability of data‐driven fuzzy models: Application in industrial regression problems
IF 3.3, CAS Region 4 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-27. DOI: 10.1111/exsy.13710
Jorge S. S. Júnior, Carlos Gaspar, Jérôme Mendes, Cristiano Premebida
Machine Learning (ML) has attracted great interest in the modeling of systems using computational learning methods, and is utilized in a wide range of advanced fields due to its ability to process large amounts of data efficiently and to make predictions or decisions with a high degree of accuracy. However, as models have grown more complex, ML methods have developed structures that are not always transparent to users. It is therefore important to study how to counteract this trend and to explore ways to increase the interpretability of these models, precisely where decision‐making plays a central role. This work addresses this challenge by assessing the interpretability and explainability of fuzzy‐based models. The structural and semantic factors that impact the interpretability of fuzzy systems are examined. Various metrics have been studied to address this topic, such as the Co‐firing Based Comprehensibility Index (COFCI), Nauck Index, Similarity Index, and Membership Function Center Index. These metrics were assessed across different datasets on three fuzzy‐based models: (i) a model designed with Fuzzy c‐Means and the Least Squares Method, (ii) the Adaptive‐Network‐based Fuzzy Inference System (ANFIS), and (iii) the Generalized Additive Model Zero‐Order Takagi‐Sugeno (GAM‐ZOTS). The study conducted in this work culminates in a new comprehensive interpretability metric that covers different domains associated with interpretability in fuzzy‐based models. When addressing interpretability, one of the challenges lies in balancing high accuracy with interpretability, as these two goals often conflict. In this context, experimental evaluations were performed in many scenarios using 4 datasets, varying the model parameters in order to find a compromise between interpretability and accuracy.
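The Similarity Index studied in the paper is not defined in the abstract; a standard fuzzy-set similarity that often underlies such metrics is the Jaccard-style ratio sum(min)/sum(max) over membership values sampled on a common grid (high similarity between neighbouring fuzzy sets usually counts against interpretability, since the sets are then hard to distinguish semantically). The sketch below is illustrative; the function names are assumptions:

```python
import numpy as np

def gaussian_mf(x, c, s):
    """Gaussian membership function with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def fuzzy_similarity(mu_a, mu_b):
    """Jaccard-style similarity of two fuzzy sets sampled on the same grid:
    sum of pointwise minima over sum of pointwise maxima; 1.0 = identical."""
    return float(np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum())
```

Scanning this score over all pairs of membership functions in a rule base gives one simple ingredient of an interpretability assessment.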
{"title":"Assessing interpretability of data‐driven fuzzy models: Application in industrial regression problems","authors":"Jorge S. S. Júnior, Carlos Gaspar, Jérôme Mendes, Cristiano Premebida","doi":"10.1111/exsy.13710","DOIUrl":"https://doi.org/10.1111/exsy.13710","url":null,"abstract":"Machine Learning (ML) has attracted great interest in the modeling of systems using computational learning methods, being utilized in a wide range of advanced fields due to its ability and efficiency to process large amounts of data and to make predictions or decisions with a high degree of accuracy. However, with the increase in the complexity of the models, ML's methods have presented complex structures that are not always transparent to the users. In this sense, it is important to study how to counteract this trend and explore ways to increase the interpretability of these models, precisely where decision‐making plays a central role. This work addresses this challenge by assessing the interpretability and explainability of fuzzy‐based models. The structural and semantic factors that impact the interpretability of fuzzy systems are examined. Various metrics have been studied to address this topic, such as the Co‐firing Based Comprehensibility Index (COFCI), Nauck Index, Similarity Index, and Membership Function Center Index. These metrics were assessed across different datasets on three fuzzy‐based models: (i) a model designed with Fuzzy c‐Means and Least Squares Method, (ii) Adaptive‐Network‐based Fuzzy Inference System (ANFIS), and (iii) Generalized Additive Model Zero‐Order Takagi‐Sugeno (GAM‐ZOTS). The study conducted in this work culminates in a new comprehensive interpretability metric that covers different domains associated with interpretability in fuzzy‐based models. When addressing interpretability, one of the challenges lies in balancing high accuracy with interpretability, as these two goals often conflict. 
In this context, experimental evaluations were performed in many scenarios using 4 datasets varying the model parameters in order to find a compromise between interpretability and accuracy.","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":null,"pages":null},"PeriodicalIF":3.3,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing task allocation with temporal‐spatial privacy protection in mobile crowdsensing
IF 3.3, CAS Region 4 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-27. DOI: 10.1111/exsy.13717
Yuping Liu, Honglong Chen, Xiaolong Liu, Wentao Wei, Huansheng Xue, Osama Alfarraj, Zafer Almakhadmeh
Mobile Crowdsensing (MCS) is considered to be a key emerging example of a smart city, which combines the wisdom of dynamic people with mobile devices to provide distributed, ubiquitous services and applications. In MCS, each worker tends to complete as many tasks as possible within the limited idle time to obtain higher income, while completing a task may require the worker to move to the specific location of the task and perform continuous sensing. Thus the time and location information of each worker is necessary for an efficient task allocation mechanism. However, submitting the time and location information of the workers to the system raises several privacy concerns, making it significant to protect both the temporal and spatial privacy of workers in MCS. In this article, we propose the Task Allocation with Temporal‐Spatial Privacy Protection (TASP) problem, aiming to maximize the total worker income to further improve the workers' motivation in executing tasks and the platform's utility, which is proved to be NP‐hard. We adopt differential privacy technology to introduce Laplace noise into the location and time information of workers, after which we propose the Improved Genetic Algorithm (SPGA) and the Clone‐Enhanced Genetic Algorithm (SPCGA), to solve the TASP problem. Experimental results on two real‐world datasets verify the effectiveness of the proposed SPGA and SPCGA with the required personalized privacy protection.
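The Laplace noise the authors add to worker location and time information follows the standard Laplace mechanism from differential privacy: each released value is perturbed with noise of scale sensitivity/ε, where smaller ε means stronger privacy and larger noise. A minimal sketch (the wrapper `laplace_mechanism` is illustrative, not the paper's code):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, seed=None):
    """Epsilon-differentially-private release of `value`: add i.i.d.
    Laplace(0, sensitivity / epsilon) noise to each coordinate."""
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale, size=np.shape(value))
```

Applied to a worker's (latitude, longitude) pair or reported idle-time window, the perturbed values can be submitted to the platform in place of the true ones, which is what makes the downstream task allocation a noisy optimization problem.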
Citations: 0
Underwater image enhancement using contrast correction
IF 3.3, Zone 4 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-26. DOI: 10.1111/exsy.13692
Nishant Singh, Aruna Bhat
Light-induced degradation of underwater images is caused by the physical properties of seawater. Light intensity is attenuated significantly as it travels through water, and the attenuation depends on the wavelength of the colour spectrum: the longest visible wavelengths are absorbed first, with red absorbed the most and blue the least. Because of this spectral attenuation, underwater images often exhibit poor contrast. As a result, the crucial information contained in these images cannot be effectively retrieved for later analysis. This research proposes a novel approach to enhance contrast while reducing noise in underwater images. The approach performs image histogram transformation in two colour spaces, Red-Green-Blue (RGB) and Hue-Saturation-Value (HSV). The histogram of the dominant colour channel (blue) in the RGB colour model is stretched towards the lower level, with a maximum limit of 95%, while the inferior red channel is stretched towards the upper side, with a minimum limit of 5%. The green channel, lying between the dominant and inferior channels, is expanded in both directions over the entire dynamic range. The Rayleigh distribution is used to shape the stretching operations within the RGB colour space. The image is then converted to the HSV colour space, where the S and V components are adjusted within 1% of their minimum and maximum values. The proposed approach is examined in both qualitative and quantitative analyses.
According to the qualitative analysis, the approach substantially boosts image contrast, reduces the blue-green cast, and minimizes over-enhanced and under-enhanced regions in the final underwater image. A quantitative examination of a dataset of 500 large-scale underwater images reveals that the proposed technique generates better results. The dataset images are grouped into small fish, blue coral, stone wall, and coral branch images, and the quantitative examination of all four groups is reported. The average mean square error, peak signal-to-noise ratio, underwater image quality measure, and underwater colour image quality evaluation values over the dataset are 76.69, 31.25, 3.85, and 0.64, respectively. These values outperform six previous methods.
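The abstract's Rayleigh-shaped stretching is not fully specified. A simplified sketch of the asymmetric per-channel scheme it describes — blue pushed towards the lower level (capped at 95% of the output range), red pushed towards the upper side (floored at 5%), green over the full dynamic range — could look like the following; the linear min-max stretch here is an assumption standing in for the paper's Rayleigh-distribution shaping:

```python
import numpy as np

def stretch_channel(ch, out_min, out_max):
    """Linearly stretch one colour channel to the output range [out_min, out_max]."""
    lo, hi = float(ch.min()), float(ch.max())
    scaled = (ch.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return np.clip(scaled * (out_max - out_min) + out_min, 0, 255).astype(np.uint8)

def enhance_rgb(img):
    """img: H x W x 3 uint8 array with channels in R, G, B order."""
    r = stretch_channel(img[..., 0], 0.05 * 255, 255)  # red pushed upward (5% floor)
    g = stretch_channel(img[..., 1], 0, 255)           # green over the full range
    b = stretch_channel(img[..., 2], 0, 0.95 * 255)    # blue pushed downward (95% cap)
    return np.dstack([r, g, b])
```

Constraining blue below the top of the range and red above the bottom counteracts the blue-green cast that spectral attenuation leaves in raw underwater frames.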
Citations: 0
Advancing anomaly detection in cloud environments with cutting-edge generative AI for expert systems
IF 3.3, Zone 4 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-08-26. DOI: 10.1111/exsy.13722
Umit Demirbaga
As artificial intelligence (AI) continues to advance, Generative AI emerges as a transformative force, capable of generating novel content and revolutionizing anomaly detection methodologies. This paper presents CloudGEN, a pioneering approach to anomaly detection in cloud environments that leverages the potential of Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs). Our research focuses on developing a state-of-the-art Generative AI-based anomaly detection system, integrating GANs, deep learning techniques, and adversarial training. We explore unsupervised generative modelling, multi-modal architectures, and transfer learning to enhance anomaly detection in expert systems. We illustrate our approach by dissecting anomalies in job performance, network behaviour, and resource utilization in cloud computing environments. The experimental results show a notable improvement in anomaly detection accuracy of approximately 11%.
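The abstract does not detail how the trained GAN is used for scoring. One common pattern — assumed here, not stated in the paper — is to treat the discriminator's "real" probability as a normality score and flag low-probability samples as anomalies; the toy discriminator below is a stand-in for a trained network:

```python
import numpy as np

def anomaly_scores(samples, discriminator, threshold=0.5):
    """Score samples with a (trained) GAN discriminator.

    `discriminator` maps an (n, d) batch to 'real' probabilities in [0, 1];
    a low probability of being real yields a high anomaly score."""
    p_real = np.asarray(discriminator(samples))
    scores = 1.0 - p_real
    return scores, scores > threshold

# Toy stand-in discriminator: points far from the training mean look 'fake'.
mu = np.zeros(3)
disc = lambda x: np.exp(-np.linalg.norm(np.asarray(x) - mu, axis=1))

normal = np.random.default_rng(0).normal(0.0, 0.1, size=(5, 3))  # metrics near baseline
outlier = np.array([[5.0, 5.0, 5.0]])                            # e.g. a resource-usage spike
scores, flags = anomaly_scores(np.vstack([normal, outlier]), disc)
```

In a real deployment the feature vectors would be job-performance, network, and resource-utilization metrics, and the threshold would be chosen on a validation set.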
{"title":"Advancing anomaly detection in cloud environments with cutting‐edge generative AI for expert systems","authors":"Umit Demirbaga","doi":"10.1111/exsy.13722","DOIUrl":"https://doi.org/10.1111/exsy.13722","url":null,"abstract":"As artificial intelligence (AI) continues to advance, Generative AI emerges as a transformative force, capable of generating novel content and revolutionizing anomaly detection methodologies. This paper presents CloudGEN, a pioneering approach to anomaly detection in cloud environments by leveraging the potential of Generative Adversarial Networks (GANs) and Convolutional Neural Network (CNN). Our research focuses on developing a state‐of‐the‐art Generative AI‐based anomaly detection system, integrating GANs, deep learning techniques, and adversarial training. We explore unsupervised generative modelling, multi‐modal architectures, and transfer learning to enhance expert systems' anomaly detection systems. We illustrate our approach by dissecting anomalies regarding job performance, network behaviour, and resource utilization in cloud computing environments. The experimental results underscore a notable surge in anomaly detection accuracy with significant development of approximately 11%.","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":null,"pages":null},"PeriodicalIF":3.3,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142206852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0