
Latest articles from Expert Systems with Applications

Vessel speed prediction using latent-invariant transforms in the presence of incomplete information
IF 7.5 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-04 | DOI: 10.1016/j.eswa.2024.125685
Xu Zhao, Yuhan Guo, Yiyang Wang, Meirong Wang
This paper presents a novel model designed to predict vessel speed, specifically tailored to tackle the challenges posed by incomplete information about relevant operating parameters in certain scenarios. In this method, a latent trend in the operating state of the marine power system is first identified from historical time-series data to approximate the calm-water speed information. The modeling of the remaining component, which corresponds to the met-ocean-induced speed loss, can then be targeted more precisely. Moreover, the elements situated at diverse temporal scales of the remaining component are disentangled, aiming to resolve the intricacies of coupled factor learning and thus improving the accuracy and validity of the model. For time series in a relatively steady state, an LSTM network with a global attention mechanism is proposed to effectively capture the temporal evolution, and a differencing operation is incorporated to mitigate potential data inconsistencies between voyages. Finally, using a 400,000 DWT ore carrier as an example, the proposed framework demonstrates superior speed-prediction capability compared with a variety of data-driven methods.
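The decomposition idea above (extract a slow latent trend as a proxy for calm-water speed, treat the residual as met-ocean speed loss, and difference within voyages to avoid level shifts) can be sketched roughly as follows. This is a minimal NumPy sketch, not the paper's learned model: the moving-average trend extractor, window size, and toy data are all assumptions.

```python
import numpy as np

def decompose_speed(speed: np.ndarray, window: int = 25):
    """Split an observed speed series into a slow latent trend (a proxy for the
    calm-water speed) and a residual attributed to met-ocean-induced speed loss.
    A centred moving average stands in for the learned latent-trend extractor."""
    kernel = np.ones(window) / window
    trend = np.convolve(speed, kernel, mode="same")
    residual = speed - trend
    return trend, residual

def difference_per_voyage(series: np.ndarray, voyage_ids: np.ndarray) -> np.ndarray:
    """First-order differencing applied within each voyage separately, so that
    level shifts between voyages do not leak into the differenced series."""
    out = np.zeros_like(series)
    for vid in np.unique(voyage_ids):
        mask = voyage_ids == vid
        seg = series[mask]
        out[mask] = np.concatenate([[0.0], np.diff(seg)])
    return out

# Toy usage: two voyages with different baseline speeds plus weather-like noise.
rng = np.random.default_rng(0)
speed = np.concatenate([14.0 + rng.normal(0, 0.4, 200), 12.5 + rng.normal(0, 0.4, 200)])
voyages = np.repeat([0, 1], 200)
trend, residual = decompose_speed(speed)
diffed = difference_per_voyage(residual, voyages)
print(trend.shape, residual.std().round(3), diffed.std().round(3))
```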
Citations: 0
AuthorNet: Leveraging attention-based early fusion of transformers for low-resource authorship attribution
IF 7.5 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-04 | DOI: 10.1016/j.eswa.2024.125643
Md. Rajib Hossain, Mohammed Moshiul Hoque, M. Ali Akber Dewan, Enamul Hoque, Nazmul Siddique
Authorship Attribution (AA) is crucial for identifying the author of a given text from a pool of suspects, especially with the widespread use of the internet and electronic devices. However, most AA research has primarily focused on high-resource languages like English, leaving low-resource languages such as Bengali relatively unexplored. Challenges in this domain include the absence of benchmark corpora, a lack of context-aware feature extractors, the limited availability of tuned hyperparameters, and out-of-vocabulary (OOV) issues. To address these challenges, this study introduces AuthorNet for authorship attribution using attention-based early fusion of transformer-based language models, i.e., concatenating the embedding outputs of two fine-tuned existing models. AuthorNet consists of three key modules: feature extraction, fine-tuning and selection of the best-performing models, and attention-based early fusion. To evaluate the performance of AuthorNet, a number of experiments were conducted on four benchmark corpora. The results demonstrated exceptional accuracy: 98.86 ± 0.01%, 99.49 ± 0.01%, 97.91 ± 0.01%, and 99.87 ± 0.01% on the four corpora. Notably, AuthorNet outperformed all foundation models, achieving accuracy improvements ranging from 0.24% to 2.92% across the four corpora.
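The fusion step described in the abstract, concatenating the embedding outputs of two fine-tuned transformers under an attention weighting, might look roughly like the PyTorch sketch below. The encoder embeddings are random stand-ins, and the dimensions, class count, and scoring layers are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Attention-weighted early fusion of two sentence embeddings (a sketch)."""
    def __init__(self, dim_a: int, dim_b: int, n_classes: int):
        super().__init__()
        self.score_a = nn.Linear(dim_a, 1)
        self.score_b = nn.Linear(dim_b, 1)
        self.classifier = nn.Linear(dim_a + dim_b, n_classes)

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
        # One scalar attention score per encoder, normalised with softmax.
        scores = torch.cat([self.score_a(emb_a), self.score_b(emb_b)], dim=1)
        weights = torch.softmax(scores, dim=1)                 # shape (batch, 2)
        fused = torch.cat([weights[:, :1] * emb_a,
                           weights[:, 1:] * emb_b], dim=1)     # concatenation
        return self.classifier(fused)

# Toy usage with random embeddings standing in for two fine-tuned transformers.
model = EarlyFusion(dim_a=768, dim_b=768, n_classes=6)
logits = model(torch.randn(4, 768), torch.randn(4, 768))
print(logits.shape)   # torch.Size([4, 6])
```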
Citations: 0
Learning face super-resolution through identity features and distilling facial prior knowledge
IF 7.5 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-04 | DOI: 10.1016/j.eswa.2024.125625
Anurag Singh Tomar, K.V. Arya, Shyam Singh Rajput
Deep learning techniques in electronic surveillance have shown impressive performance for super-resolution (SR) of captured low-quality face images. Most of these techniques adopt facial priors to improve the feature details in the resulting super-resolved images. However, the estimation of facial priors from captured low-quality images is often inaccurate in real-life situations because the images are tiny, noisy, and blurry. The fusion of such inaccurate priors therefore degrades the performance of these models. This work presents a teacher–student-based face SR framework that efficiently preserves personal facial structure information in the super-resolved faces. In the proposed framework, the teacher network exploits the ground-truth facial heatmap prior to learn the facial structure, which is then utilized by the student network. The student network is trained with an identity feature loss to maintain the identity and facial structure information in the reconstructed high-resolution (HR) face images. The performance of the proposed framework is evaluated through experiments on standard datasets, namely CelebA-HQ and LFW. The experimental results reveal that the proposed technique outperforms existing methods on the face SR task.
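A hedged sketch of the identity feature loss mentioned above, combining a pixel loss with a cosine distance between identity embeddings of the super-resolved and ground-truth faces, is given below. It assumes a frozen face-recognition encoder; the tiny CNN and the loss weight are illustrative stand-ins only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IdentityFeatureLoss(nn.Module):
    """Pixel (L1) loss plus an identity-embedding cosine-distance term."""
    def __init__(self, id_encoder: nn.Module, id_weight: float = 0.1):
        super().__init__()
        self.id_encoder = id_encoder.eval()
        for p in self.id_encoder.parameters():
            p.requires_grad_(False)            # identity network stays frozen
        self.id_weight = id_weight

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        pixel = F.l1_loss(sr, hr)
        emb_sr = F.normalize(self.id_encoder(sr), dim=1)
        emb_hr = F.normalize(self.id_encoder(hr), dim=1)
        identity = (1.0 - (emb_sr * emb_hr).sum(dim=1)).mean()   # cosine distance
        return pixel + self.id_weight * identity

# Toy usage: a tiny CNN stands in for a pretrained face-recognition encoder.
toy_encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 128),
)
loss_fn = IdentityFeatureLoss(toy_encoder)
print(loss_fn(torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128)).item())
```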
Citations: 0
Machine learning for predicting used car resale prices using granular vehicle equipment information
IF 7.5 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-04 | DOI: 10.1016/j.eswa.2024.125640
Svenja Bergmann, Stefan Feuerriegel
Millions of used cars are sold every year, and hence accurate estimates of resale values are needed. One reason is that under- and overestimating the value of used cars at the end of their leasing period directly affects the financial return of car retailers. However, in previous literature, granular vehicle equipment information (e.g., alloy rims, park assistance systems) has been largely overlooked as a predictor. To address this research gap, we assess the predictive power of granular information about vehicle equipment when forecasting the resale value of used cars. To achieve this, we first preprocess 50,000 equipment options through a tailored, end-to-end automated procedure. Subsequently, we employ machine learning on a comprehensive real-world dataset comprising 92,239 sales, where each vehicle is characterized by a unique equipment configuration. We find that including equipment information improves the prediction performance (i.e., mean absolute error) by 3.27%, at a statistically significant level. Altogether, car retailers can use information about the specific vehicle configuration to predict the prices of used vehicles more accurately, which, as an implication for businesses, may eventually increase returns.
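To illustrate the kind of pipeline the abstract describes, multi-hot encoding of granular equipment options alongside basic numeric features feeding a regression model, here is a hedged scikit-learn sketch on a tiny synthetic dataset. The feature names, prices, and choice of regressor are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Each sale: (age in years, mileage in 1000 km, set of equipment options).
sales = [
    (3, 45, {"alloy_rims", "park_assist", "navigation"}),
    (5, 90, {"alloy_rims"}),
    (2, 20, {"park_assist", "navigation", "leather_seats"}),
    (7, 130, set()),
    (4, 60, {"navigation"}),
    (6, 110, {"alloy_rims", "leather_seats"}),
]
prices = np.array([24500, 15800, 28900, 9300, 19700, 14200], dtype=float)

mlb = MultiLabelBinarizer()                        # multi-hot equipment encoding
equipment = mlb.fit_transform([s[2] for s in sales])
numeric = np.array([[s[0], s[1]] for s in sales], dtype=float)
X = np.hstack([numeric, equipment])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, prices)
pred = model.predict(X)
print("equipment columns:", list(mlb.classes_))
print("in-sample MAE:", round(mean_absolute_error(prices, pred), 1))
```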
Citations: 0
Bim-based Digital Twin development for university Campus management. Case study ETSICCP
IF 7.5 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-04 | DOI: 10.1016/j.eswa.2024.125696
Rubén Muñoz Pavón, Marcos García Alberti, Antonio Alfonso Arcos Álvarez, Jorge Jerez Cepa
Innovation and digitalization are topics of growing importance for local governments, especially in the Facility Management sector. Moreover, during the COVID-19 pandemic, new management needs emerged, especially in large public buildings. Building Information Modeling (BIM) is considered one of the emerging technologies used to reach a total digitalization of infrastructure. Nevertheless, BIM implementation carries important barriers, such as high software and hardware investments, initial BIM skills training, and low data interoperability. The objective of this project is to overcome those implementation barriers. For this purpose, the paper shows the creation of a BIM-based intelligent platform for infrastructure management that leads to the development of a Digital Twin (DT). To show the potential of the software developed, a real implementation was carried out in the Civil Engineering School at Universidad Politécnica de Madrid, obtaining significant results thanks to the feedback of infrastructure users and managers. The novelty of this project lies in the final results achieved: a complete DT for management functionalities such as space reservation, live sensor data, and asset management, linking BIM models with in-house software and hardware development using the Internet of Things and cloud computing. This multidisciplinary work provides the reader with the most relevant challenges detected in a real digitalization process.
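As a loose illustration of the management functionality the paper attributes to the digital twin (live sensor readings attached to BIM-identified spaces, and simple space reservation), the plain-Python sketch below uses data classes. It is not based on the authors' platform; every class, field, and identifier is an assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class SensorReading:
    sensor_id: str
    quantity: str          # e.g. "temperature_C", "co2_ppm"
    value: float
    timestamp: datetime

@dataclass
class Space:
    """A room taken from the BIM model, enriched with live data and bookings."""
    bim_guid: str          # the element's identifier in the BIM model
    name: str
    capacity: int
    readings: List[SensorReading] = field(default_factory=list)
    reservations: Dict[str, str] = field(default_factory=dict)  # slot -> holder

    def add_reading(self, reading: SensorReading) -> None:
        self.readings.append(reading)

    def reserve(self, slot: str, holder: str) -> bool:
        if slot in self.reservations:
            return False            # already booked
        self.reservations[slot] = holder
        return True

room = Space(bim_guid="placeholder-guid-A101", name="Lecture room A-101", capacity=80)
room.add_reading(SensorReading("s-17", "temperature_C", 21.4, datetime.now()))
print(room.reserve("2024-11-05 10:00", "Structures seminar"))   # True
print(room.reserve("2024-11-05 10:00", "BIM workshop"))         # False
```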
Citations: 0
A distribution-preserving method for resampling combined with LightGBM-LSTM for sequence-wise fraud detection in credit card transactions
IF 7.5 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-03 | DOI: 10.1016/j.eswa.2024.125661
Behnam Yousefimehr, Mehdi Ghatee
Fraud detection is a challenging task. To address these challenges, a comprehensive framework has been developed that includes a new resampling method combined with a data-dependent classifier to detect fraud effectively. The proposed framework uses two hybrid approaches that leverage the strengths of a One-Class Support Vector Machine (OCSVM) together with the Synthetic Minority Oversampling Technique (SMOTE) and random undersampling. The distribution of fraud instances is effectively preserved by this framework, as demonstrated by comparing the probability distributions of the fraud data before and after resampling. The outputs of the hybrid approaches are then analyzed using two distinct models, the Light Gradient-Boosting Machine (LightGBM) and the Long Short-Term Memory (LSTM) model. Our case study on European credit cards consistently demonstrates the effectiveness of these techniques over existing methods, achieving an F1 score of 87% with a corresponding AUC of 96% in non-sequential fraud detection, and an F1 score of 85% with an AUC of 87% in sequential fraud detection. Additionally, we developed an algorithm for determining optimal window sizes for sequence-wise fraud analysis, which recommends a window size of 3 for the European dataset, highlighting the efficacy of sequence-wise analysis. Overall, the proposed framework not only offers a promising solution to enhance fraud detection accuracy but also reduces false positives.
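One plausible reading of the resampling-plus-classifier part of the framework (SMOTE oversampling of the fraud class, random undersampling of the majority class, then LightGBM) is sketched below, assuming the imbalanced-learn and LightGBM packages are installed. The sampling ratios and synthetic data are arbitrary, and the OCSVM component and the LSTM branch are omitted.

```python
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

# Synthetic, heavily imbalanced stand-in for credit card transactions (~1% fraud).
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.99],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

pipeline = Pipeline(steps=[
    ("smote", SMOTE(sampling_strategy=0.10, random_state=42)),   # oversample fraud
    ("under", RandomUnderSampler(sampling_strategy=0.50, random_state=42)),
    ("clf", LGBMClassifier(n_estimators=200, random_state=42)),
])
pipeline.fit(X_tr, y_tr)

proba = pipeline.predict_proba(X_te)[:, 1]
print("F1 :", round(f1_score(y_te, proba > 0.5), 3))
print("AUC:", round(roc_auc_score(y_te, proba), 3))
```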
Citations: 0
Mitigating false negatives in imbalanced datasets: An ensemble approach
IF 7.5 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-03 | DOI: 10.1016/j.eswa.2024.125674
Marcelo Vasconcelos, Luís Cavique
Imbalanced datasets present a challenge in machine learning, especially in binary classification scenarios where one class significantly outweighs the other. This imbalance often leads to models favoring the majority class, resulting in inadequate predictions for the minority class, particularly in the form of false negatives. In response to this issue, this work introduces the MinFNR ensemble algorithm, designed to minimize the False Negative Rate (FNR) on imbalanced datasets. The new approach strategically combines data-level, algorithm-level, and hybrid-level approaches to enhance overall predictive capability while minimizing computational resources using a Set Covering Problem (SCP) formulation. Through a comprehensive evaluation on diverse datasets, MinFNR consistently outperforms individual algorithms, showing its potential for applications where the cost of false negatives is substantial, such as fraud detection and medical diagnosis. This work also contributes to ongoing efforts to improve the reliability and effectiveness of machine learning algorithms in real imbalanced scenarios.
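The set-covering idea can be illustrated very roughly as follows: treat each base classifier as covering the positive instances it detects, greedily select classifiers until the positives are covered, and flag an instance as positive if any selected classifier does. This hedged sketch shows the general principle with a greedy heuristic standing in for an exact SCP solution; it is not the MinFNR algorithm itself.

```python
import numpy as np

def greedy_cover_ensemble(detections: np.ndarray) -> list:
    """detections[m, i] is True when base model m flags positive instance i.
    Greedily select models so that every positive instance is flagged by at
    least one selected model (a heuristic for the set covering formulation)."""
    n_models, n_pos = detections.shape
    uncovered = np.ones(n_pos, dtype=bool)
    selected = []
    while uncovered.any():
        gains = (detections & uncovered).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] == 0:            # remaining positives covered by no model
            break
        selected.append(best)
        uncovered &= ~detections[best]
    return selected

# Toy example: 4 base models, 6 positive (minority-class) instances.
detections = np.array([
    [1, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1, 1],
], dtype=bool)
chosen = greedy_cover_ensemble(detections)
union = detections[chosen].any(axis=0)
print("selected models:", chosen, "| false negatives left:", int((~union).sum()))
```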
Citations: 0
Vehicle trajectory extraction with interacting multiple model for low-channel roadside LiDAR
IF 7.5 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-03 | DOI: 10.1016/j.eswa.2024.125662
Bowen Gong, Binwen Zhao, Yue Wang, Ciyun Lin, Hongchao Liu
High-precision and consistent vehicle trajectories encompass microscopic traffic parameters, mesoscopic traffic flow characteristics, and macroscopic traffic flow features, and together form the cornerstone of innovation in data-driven traffic management and control applications. However, occlusion and trajectory interruption remain challenging in multivehicle tracking under complex traffic environments using low-channel roadside LiDAR. To address this challenge, a novel framework for vehicle trajectory extraction using low-channel roadside LiDAR is proposed. First, the geometric features of each cluster and its L-shape bounding box are used to address the over-segmentation in vehicle detection arising from occlusion and point cloud sparsity. Then, objects in adjacent point cloud frames are associated by an improved Hungarian algorithm with an adaptive distance threshold, which solves the mismatching problem caused by objects entering and exiting between point cloud frames. Finally, an improved interacting multiple model that considers vehicle driving patterns is deployed to predict the locations of missing vehicles and connect interrupted trajectories. Experimental results show that the proposed methods achieve a vehicle detection accuracy of 98.76% and a data association precision of 97.40%. The mean absolute error (MAE) and mean square error (MSE) of the vehicle position estimation are 0.2252 m and 0.0729 m², respectively. The trajectory extraction precision outperforms most state-of-the-art algorithms.
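The frame-to-frame association step (the Hungarian algorithm on pairwise centroid distances, with a threshold that rejects implausible matches) can be sketched with SciPy as below. The fixed gate value is an illustrative assumption, whereas the paper uses an adaptive threshold, and the IMM prediction stage is omitted.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(prev_centroids: np.ndarray, curr_centroids: np.ndarray,
              gate: float = 3.0):
    """Match detections across frames by minimising total centroid distance,
    discarding assignments farther apart than `gate` (metres)."""
    cost = cdist(prev_centroids, curr_centroids)     # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)
    matches = [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    unmatched_prev = set(range(len(prev_centroids))) - {r for r, _ in matches}
    unmatched_curr = set(range(len(curr_centroids))) - {c for _, c in matches}
    return matches, unmatched_prev, unmatched_curr   # exits and entrances

prev_frame = np.array([[0.0, 0.0], [10.0, 2.0], [25.0, 1.0]])
curr_frame = np.array([[0.8, 0.1], [11.2, 2.3], [40.0, 0.0]])   # third vehicle new
print(associate(prev_frame, curr_frame))
```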
Citations: 0
A new look of dispatching for multi-objective interbay AMHS in semiconductor wafer manufacturing: A T–S fuzzy-based learning approach
IF 7.5 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-02 | DOI: 10.1016/j.eswa.2024.125615
Hua Li, Zhenghong Jin
Semiconductor wafer fabrication systems (SWFS) are among the most intricate discrete processing environments globally. Since the costs associated with automated material handling systems (AMHS) within fabs account for 20%–50% of manufacturing expenses, it is crucial to enhance the efficiency of material handling in semiconductor production lines. However, optimizing AMHS is difficult due to the complexities inherent in large-scale, nonlinear, dynamic, and stochastic production settings, as well as differing objectives and goals. To overcome these challenges, this paper presents a novel fuzzy-based learning algorithm that enhances the multi-objective dispatching model by incorporating both transportation and production aspects for interbay AMHS in wafer fabrication, aligning it more closely with real-world conditions. Furthermore, we formulate a new constrained nonlinear dispatching problem. To tackle the inherent nonlinearity, a Takagi-Sugeno (T–S) fuzzy modeling approach is developed, which transforms nonlinear terms into a fuzzy linear dispatching model and optimizes the weights in the multi-objective problem to obtain the optimal solution. The effectiveness and superiority of the proposed approach are demonstrated through extensive simulations and comparative analysis with existing methods. As a result, the proposed method significantly improves transport efficiency, increases wafer throughput, and reduces processing cycle times.
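The core Takagi-Sugeno idea, blending local linear models through normalized fuzzy membership degrees so that a nonlinear relationship is represented piecewise linearly, can be sketched in a few lines as below. The two rules, Gaussian membership functions, and coefficients are made-up illustrative values, not the paper's dispatching model.

```python
import numpy as np

def gaussian_membership(x: np.ndarray, centre: float, width: float) -> np.ndarray:
    return np.exp(-0.5 * ((x - centre) / width) ** 2)

def ts_fuzzy_output(x: np.ndarray) -> np.ndarray:
    """Two-rule Takagi-Sugeno model: IF x is 'low' THEN y = 2x + 1,
    IF x is 'high' THEN y = -x + 10; the rule outputs are blended by
    their normalised membership degrees (firing strengths)."""
    mu_low = gaussian_membership(x, centre=2.0, width=1.5)
    mu_high = gaussian_membership(x, centre=7.0, width=1.5)
    weights = np.stack([mu_low, mu_high])
    weights = weights / weights.sum(axis=0)          # normalise firing strengths
    local = np.stack([2.0 * x + 1.0, -1.0 * x + 10.0])
    return (weights * local).sum(axis=0)

x = np.linspace(0.0, 10.0, 5)
print(np.round(ts_fuzzy_output(x), 3))   # smoothly switches between the two rules
```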
Citations: 0
Development of A deep Learning-based algorithm for High-Pitch helical computed tomography imaging
IF 7.5 | Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-02 | DOI: 10.1016/j.eswa.2024.125663
Xiaoman Duan, Xiao Fan Ding, Samira Khoz, Xiongbiao Chen, Ning Zhu
High-pitch X-ray helical computed tomography (HCT) imaging has recently drawn considerable attention in biomedical fields due to its ability to reduce scanning time and thus lower the radiation dose that the objects being imaged may receive. However, the issue of compromised reconstruction quality caused by incomplete data in these high-pitch CT scans remains, limiting its applications. Addressing this issue, this paper presents our study on the development of a novel deep learning (DL)-based algorithm, ViT-U, for the reconstruction of high-pitch X-ray propagation-based imaging HCT (PBI-HCT). ViT-U consists of two key processing modules, a vision transformer (ViT) and a convolutional neural network (U-Net), where the ViT addresses the missing information in the data domain and the U-Net enhances the post-processing in the reconstruction domain. For verification, we designed and conducted simulations and experiments with both low-density biomaterial samples and biological tissue samples to exemplify biomedical applications, and then examined the performance of ViT-U at pitches of 3, 3.5, 4, and 4.5 for comparison in terms of radiation dose and reconstruction quality. Our results showed that high-pitch PBI-HCT allowed for dose reductions of 72% to 93%. Importantly, our results demonstrated that ViT-U exhibited outstanding performance by effectively removing the missing-wedge artifacts, thus enhancing the reconstruction quality of high-pitch PBI-HCT imaging. Our results also showed the superior capability of ViT-U to achieve high-quality reconstruction from high-pitch images with helical pitch values up to 4, which allowed for a substantial reduction of radiation dose. Taken together, our DL-based ViT-U algorithm not only enables high-speed imaging with low radiation dose but also maintains high-quality image reconstruction, thereby offering significant potential for biomedical imaging applications.
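To give a concrete though heavily simplified picture of the data-domain step described above (a network filling in projections missing from a sparse high-pitch acquisition), the sketch below masks columns of a sinogram-like tensor and passes it through a small convolutional completion module. A plain CNN stands in for the ViT, and the U-Net refinement and the actual CT reconstruction are omitted.

```python
import torch
import torch.nn as nn

class DataDomainCompletion(nn.Module):
    """Predict missing projections of a partially masked sinogram.
    The mask is passed as a second channel so the network knows which
    projections were actually measured."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, sino: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        completed = self.net(torch.cat([sino * mask, mask], dim=1))
        # Keep measured projections as-is; only fill in the missing ones.
        return sino * mask + completed * (1.0 - mask)

# Toy sinogram: batch of 1, 180 projection angles x 256 detector bins,
# with every fourth projection missing (a crude stand-in for high pitch).
sino = torch.rand(1, 1, 180, 256)
mask = torch.ones_like(sino)
mask[:, :, ::4, :] = 0.0
completed = DataDomainCompletion()(sino, mask)
print(completed.shape)   # torch.Size([1, 1, 180, 256])
```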
Citations: 0