
Latest Publications in Expert Systems

Imbalanced survival prediction for gastric cancer patients based on improved XGBoost with cost sensitive and focal loss
IF 3.0 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-02 | DOI: 10.1111/exsy.13666
Liangchen Xu, Chonghui Guo

Accurate prediction of gastric cancer survival status is a task of great significance for clinical decision-making. Many advanced machine learning classification techniques have been applied to predict the survival status of cancer patients at three or five years; however, many of them have low sensitivity because of class imbalance. This is a non-negligible problem given the poor prognosis of gastric cancer patients. Furthermore, models in the medical domain require strong interpretability to increase their applicability. Owing to the strong performance and interpretability of the XGBoost model, we design a loss function for XGBoost that incorporates cost sensitivity and focal loss at the algorithm level to deal with the imbalance problem. We apply the improved model to the prediction of the survival status of gastric cancer patients and analyse the important related features. We use two types of indicators to evaluate the model, and we also compare the confusion matrices of the two models' predictive results. The results show that the improved model performs better. Furthermore, we calculate the importance of survival-related features over three different time periods and analyse their evolution; the findings are consistent with existing clinical research or extend its conclusions. These results support clinically relevant decision-making, and the approach has the potential to be extended to survival prediction for other cancer patients.
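A cost-sensitive focal loss plugs into XGBoost as a custom objective that returns per-sample gradients and Hessians with respect to the raw margin. Below is a minimal NumPy sketch of such an objective; the `alpha`/`gamma` defaults and the finite-difference Hessian are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def focal_cost_grad(z, y, alpha=0.75, gamma=2.0):
    """Analytic gradient of a cost-sensitive focal loss w.r.t. the raw margin z.
    alpha up-weights the positive (minority) class; gamma focuses on hard examples.
    With gamma=0 this reduces to class-weighted logistic loss: grad = w_y * (p - y)."""
    p = np.clip(_sigmoid(z), 1e-7, 1.0 - 1e-7)
    grad_pos = alpha * gamma * p * (1 - p) ** gamma * np.log(p) - alpha * (1 - p) ** (gamma + 1)
    grad_neg = (1 - alpha) * p ** (gamma + 1) - (1 - alpha) * gamma * (1 - p) * p ** gamma * np.log(1 - p)
    return np.where(y == 1, grad_pos, grad_neg)

def focal_cost_objective(z, y, alpha=0.75, gamma=2.0, eps=1e-4):
    """Custom objective in the (grad, hess) form an XGBoost objective returns;
    the Hessian is approximated by central differences of the gradient."""
    grad = focal_cost_grad(z, y, alpha, gamma)
    hess = (focal_cost_grad(z + eps, y, alpha, gamma)
            - focal_cost_grad(z - eps, y, alpha, gamma)) / (2 * eps)
    return grad, np.maximum(hess, 1e-6)  # floor keeps split gains well-defined
```

With the `xgboost` package this could be passed as `xgb.train(params, dtrain, obj=lambda preds, dtrain: focal_cost_objective(preds, dtrain.get_label()))`.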

Citations: 0
Portfolio construction using explainable reinforcement learning
IF 3.0 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-02 | DOI: 10.1111/exsy.13667
Daniel González Cortés, Enrique Onieva, Iker Pastor, Laura Trinchera, Jian Wu

While machine learning's role in financial trading has advanced considerably, algorithmic transparency and explainability challenges still exist. This research enriches prior studies focused on high-frequency financial data prediction by introducing an explainable reinforcement learning model for portfolio management. This model transcends basic asset prediction, formulating concrete, actionable trading strategies. The methodology is applied in a custom trading environment mimicking the CAC-40 index's financial conditions, allowing the model to adapt dynamically to market changes based on iterative learning from historical data. Empirical findings reveal that the model outperforms an equally weighted portfolio in out-of-sample tests. The study offers a dual contribution: it elevates algorithmic planning while significantly boosting transparency and interpretability in financial machine learning. This approach tackles the enduring ‘black-box’ issue and provides a holistic, transparent framework for managing investment portfolios.
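The custom trading environment described above follows a standard observation/action/reward loop. A toy sketch, in which the class name, window size and reward definition are hypothetical (the paper's CAC-40 environment is certainly richer):

```python
import numpy as np

class ToyPortfolioEnv:
    """Hypothetical portfolio environment: the observation is a window of past
    asset returns, the action is an allocation vector, and the reward is the
    realized portfolio return at the next step."""

    def __init__(self, returns, window=5):
        self.returns = np.asarray(returns, dtype=float)  # shape (T, n_assets)
        self.window = window
        self.t = window

    def reset(self):
        self.t = self.window
        return self.returns[self.t - self.window:self.t]

    def step(self, weights):
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                       # normalize to a valid allocation
        reward = float(self.returns[self.t] @ w)
        self.t += 1
        done = self.t >= len(self.returns)
        obs = None if done else self.returns[self.t - self.window:self.t]
        return obs, reward, done
```

The equally weighted baseline used for comparison simply passes a vector of ones at every step.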

Citations: 0
An efficient object tracking based on multi‐head cross‐attention transformer
IF 3.3 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-07-01 | DOI: 10.1111/exsy.13650
Jiahai Dai, Huimin Li, Shan Jiang, Hongwei Yang
Object tracking is an essential component of computer vision and plays a significant role in various practical applications. Recently, transformer‐based trackers have become the predominant method for tracking due to their robustness and efficiency. However, existing transformer‐based trackers typically focus solely on the template features, neglecting the interactions between the search features and the template features during the tracking process. To address this issue, this article introduces a multi‐head cross‐attention transformer for visual tracking (MCTT), which effectively enhances the interaction between the template branch and the search branch, enabling the tracker to prioritize discriminative features. Additionally, an auxiliary segmentation mask head is designed to produce a pixel‐level feature representation, enhancing tracking accuracy by predicting a set of binary masks. Comprehensive experiments have been performed on benchmark datasets such as LaSOT, GOT‐10k, UAV123 and TrackingNet against various advanced methods, demonstrating that our approach achieves promising tracking performance. MCTT achieves an AO score of 72.8 on GOT‐10k.
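The core interaction between the two branches is cross-attention in which queries come from the search features and keys/values from the template features, so every search location aggregates template evidence. A dependency-free sketch; the learned projection matrices are omitted for brevity, and the head count and shapes are illustrative, not MCTT's actual configuration:

```python
import numpy as np

def _softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_cross_attention(search, template, num_heads=4):
    """search: (n_s, d) search-branch tokens; template: (n_t, d) template tokens.
    Each head attends from search queries to template keys/values."""
    n_s, d = search.shape
    assert d % num_heads == 0
    d_h = d // num_heads
    out = np.empty((n_s, d))
    for h in range(num_heads):
        q = search[:, h * d_h:(h + 1) * d_h]
        k = template[:, h * d_h:(h + 1) * d_h]
        v = k                                    # shared K/V for brevity
        attn = _softmax(q @ k.T / np.sqrt(d_h))  # (n_s, n_t), rows sum to 1
        out[:, h * d_h:(h + 1) * d_h] = attn @ v
    return out
```

In a real tracker each head would apply learned Q/K/V projections and an output projection, typically via a framework attention layer.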
Citations: 0
Crowdfunding performance prediction using feature-selection-based machine learning models
IF 3.0 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-27 | DOI: 10.1111/exsy.13646
Yuanyue Feng, Yuhong Luo, Nianjiao Peng, Ben Niu

Background

Crowdfunding is increasingly favoured by entrepreneurs for online financing. Predicting crowdfunding success can provide valuable guidance for stakeholders. This study is a new attempt to evaluate the relative performance of different machine learning algorithms for crowdfunding prediction.

Objectives

This study aims to identify the key factors of crowdfunding success and to compare the performance and applicability of machine learning algorithms for crowdfunding prediction.

Method

We crawled data from MoDian.com, a Chinese crowdfunding platform, and predicted crowdfunding performance using four machine learning algorithms, a new exploration in this area, as most of the existing literature focuses on empirical analysis. This work predicts crowdfunding performance from a dataset with a minimal number of highly contributive features, achieving higher accuracy than regression analysis.

Results

The experiment results show that feature-selection-based machine learning models are effective and beneficial in crowdfunding prediction.

Conclusion

Feature selection can significantly improve the prediction performance of the machine learning models. KNN achieved the best prediction results with five features: number of backers, target amount, number of project likes, number of project comments, and sponsor fans. The prediction accuracy was improved by 16%, the precision was improved by 13.23%, the recall was improved by 22.66%, the F-score was improved by 18.48%, and the AUC was improved by 14.9%.
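The pipeline above, filter-style feature selection followed by a KNN classifier, can be sketched in a few lines. The correlation-based scorer and the toy data in the test are illustrative stand-ins, not the paper's exact selector or the MoDian features:

```python
import numpy as np

def select_top_k(X, y, k):
    """Filter-style selection: rank features by |Pearson correlation| with the
    label and keep the indices of the top k."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-nearest-neighbours majority vote with Euclidean distance."""
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dist)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)
```

In practice features would also be standardized before the distance computation, since KNN is scale-sensitive.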

Citations: 0
AES software and hardware system co-design for resisting side channel attacks
IF 3.0 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-26 | DOI: 10.1111/exsy.13664
Liguo Dong, Xinliang Ye, Libin Zhuang, Ruidian Zhan, M. Shamim Hossain

The threat of side-channel attacks poses a significant risk to the security of cryptographic algorithms. To counter this threat, we have designed an AES system capable of defending against such attacks, supporting AES-128, AES-192, and AES-256 encryption standards. In our system, the CPU oversees the AES hardware via the AHB bus and employs true random number generation to provide secure random inputs for computations. The hardware implementation of the AES S-box utilizes complex domain inversion techniques, while intermediate data is shielded using full-time masking. Furthermore, the system incorporates double-path error detection mechanisms to thwart fault propagation. Our results demonstrate that the system effectively conceals key power information, providing robust resistance against CPA attacks, and is capable of detecting injected faults, thereby mitigating fault-based attacks.
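Full-time masking of intermediate data can be illustrated with first-order Boolean masking of a table lookup: the S-box table is remasked so the unmasked intermediate `sbox[x]` never appears in memory. A toy sketch, in which the permutation below is a placeholder, not the Rijndael S-box:

```python
# Toy S-box: an affine permutation of 0..255 standing in for the Rijndael S-box.
SBOX = [(7 * v + 3) % 256 for v in range(256)]

def masked_sbox_lookup(x_masked, m_in, m_out, sbox=SBOX):
    """First-order Boolean masking: precompute a remasked table so that only
    x ^ m_in and sbox[x] ^ m_out ever appear; the true value sbox[x] does not.
    table[x ^ m_in] == sbox[x] ^ m_out for every byte x."""
    table = [sbox[v ^ m_in] ^ m_out for v in range(256)]
    return table[x_masked]
```

In a hardened implementation fresh masks would be drawn for every encryption from a true random number generator, as in the system described above, so that power traces decorrelate from key-dependent values.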

Citations: 0
Facial emotion recognition: A comprehensive review
IF 3.0 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-26 | DOI: 10.1111/exsy.13670
Manmeet Kaur, Munish Kumar

Facial emotion recognition (FER) represents a significant outcome of the rapid advancements in artificial intelligence (AI) technology. In today's digital era, the ability to decipher emotions from facial expressions has evolved into a fundamental mode of human interaction and communication. As a result, FER has penetrated diverse domains, including but not limited to medical diagnosis, customer feedback analysis, the automation of automobile driver systems, and the evaluation of student comprehension. Furthermore, it has matured into a captivating and dynamic research field, capturing the attention and curiosity of contemporary scholars and scientists. The primary objective of this paper is to provide an exhaustive review of FER systems. Its significance goes beyond offering a comprehensive resource; it also serves as a valuable guide for emerging researchers in the FER domain. Through a meticulous examination of existing FER systems and methodologies, this review equips them with essential insights and guidance for their future research pursuits. Moreover, this comprehensive review contributes to the expansion of their knowledge base, facilitating a profound understanding of this rapidly evolving field. In a world increasingly dependent on technology for communication and interaction, the study of FER holds a pivotal role in human-computer interaction (HCI). It not only provides valuable insights but also unlocks a multitude of possibilities for future innovations and applications. As we continue to integrate AI and facial emotion recognition into our daily lives, the importance of comprehending and enhancing FER systems becomes increasingly evident. This paper serves as a stepping stone for researchers, nurturing their involvement in this exciting and ever-evolving field.

Citations: 0
A multi‐focus image fusion network deployed in smart city target detection
IF 3.3 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-26 | DOI: 10.1111/exsy.13662
Haojie Zhao, Shuang Guo, Gwanggil Jeon, Xiaomin Yang
In the global monitoring of smart cities, the demands of global object detection systems based on cloud and fog computing in intelligent systems can be satisfied by photographs with globally recognized properties. Nevertheless, conventional techniques are constrained by the imaging depth of field and can produce artefacts or indistinct borders, which can be disastrous for accurate object detection. In light of this, this paper proposes an artificial intelligence‐based gradient learning network that gathers and enhances domain information at different scales in order to produce globally focused fusion results. Gradient features, which provide a lot of boundary information, can eliminate the problem of border artefacts and blur in multi‐focus fusion. The multiple‐receptive module (MRM) facilitates effective information sharing and enables the capture of object properties at different scales. In addition, with the assistance of the global enhancement module (GEM), the network can effectively combine the scale features and gradient data from various receptive fields and reinforce the features to provide precise decision maps. Numerous experiments have demonstrated that our approach outperforms the seven most sophisticated algorithms currently in use.
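As a point of reference for how gradient features drive a decision map, here is a classical non-learned baseline: compute a Sobel gradient magnitude per source image as a focus measure and keep, pixel by pixel, the source that responds more strongly. This only illustrates the principle; it is not the paper's network:

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel x kernel
KY = KX.T                                                          # Sobel y kernel

def gradient_energy(img):
    """Per-pixel Sobel gradient magnitude, used as a focus measure."""
    pad = np.pad(img, 1, mode="edge")
    g = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            g[i, j] = np.hypot((patch * KX).sum(), (patch * KY).sum())
    return g

def fuse(img_a, img_b):
    """Per pixel, keep the source whose gradient response is stronger
    (a hard decision map; learned methods predict a soft, refined map)."""
    mask = gradient_energy(img_a) >= gradient_energy(img_b)
    return np.where(mask, img_a, img_b)
```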
Citations: 0
Dual resource constrained flexible job shop scheduling with sequence-dependent setup time
IF 3.0 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-25 | DOI: 10.1111/exsy.13669
Sasan Barak, Shima Javanmard, Reza Moghdani

This study addresses the imperative need for efficient solutions in the context of the dual resource constrained flexible job shop scheduling problem with sequence-dependent setup times (DRCFJS-SDSTs). We introduce a pioneering tri-objective mixed-integer linear mathematical model tailored to this complex challenge. Our model is designed to optimize the assignment of operations to candidate multi-skilled machines and operators, with the primary goals of minimizing operators' idleness cost and sequence-dependent setup time-related expenses. Additionally, it aims to mitigate total tardiness and earliness penalties while regulating maximum machine workload. Given the NP-hard nature of the proposed DRCFJS-SDST, we employ the epsilon constraint method to derive exact optimal solutions for small-scale problems. For larger instances, we develop a modified variant of the multi-objective invasive weed optimization (MOIWO) algorithm, enhanced by a fuzzy sorting algorithm for competitive exclusion. In the absence of established benchmarks in the literature, we validate our solutions against those generated by multi-objective particle swarm optimization (MOPSO) and the non-dominated sorting genetic algorithm II (NSGA-II). Through comparative analysis, we demonstrate the superior performance of MOIWO. Specifically, when compared with NSGA-II, MOIWO achieves success rates of 90.83% and shows similar performance in 4.17% of cases. Moreover, compared with MOPSO, MOIWO achieves success rates of 84.17% and exhibits similar performance in 9.17% of cases. These findings contribute significantly to the advancement of scheduling optimization methodologies.
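The epsilon constraint method mentioned above optimizes one objective while bounding the others, and sweeping the bound traces out Pareto-optimal trade-offs. A toy sketch over a finite candidate set; the example objectives in the test are hypothetical and far simpler than the paper's MILP:

```python
def epsilon_constraint(solutions, f1, f2, epsilons):
    """Minimize f1 subject to f2(s) <= eps, for each bound eps in epsilons.
    Over a finite solution set this enumerates Pareto-optimal points; in the
    paper each inner minimization is an exact MILP solve instead of min()."""
    front = []
    for eps in epsilons:
        feasible = [s for s in solutions if f2(s) <= eps]
        if feasible:
            best = min(feasible, key=f1)
            if best not in front:
                front.append(best)
    return front
```

Tightening the bound on the second objective forces the first objective up, which is exactly the trade-off curve a decision-maker inspects.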

引用次数: 0
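The epsilon-constraint method the abstract uses for small instances can be illustrated on a toy assignment problem: optimize one objective while bounding the other, then sweep the bound to trace a Pareto front. Everything below — the two-job/two-machine data, the cost tables, the helper names — is invented for this sketch and is not the paper's model.

```python
from itertools import product

# Toy bi-objective assignment data (invented): (job, machine) -> value.
setup_cost = {(0, 0): 4, (0, 1): 2, (1, 0): 3, (1, 1): 5}
tardiness  = {(0, 0): 1, (0, 1): 3, (1, 0): 2, (1, 1): 0}

def solve(eps):
    """Minimise f1 (setup cost) subject to f2 (tardiness) <= eps; None if infeasible."""
    best = None
    for m0, m1 in product((0, 1), repeat=2):      # machine chosen for job 0, job 1
        f1 = setup_cost[(0, m0)] + setup_cost[(1, m1)]
        f2 = tardiness[(0, m0)] + tardiness[(1, m1)]
        if f2 <= eps and (best is None or f1 < best[0]):
            best = (f1, f2, (m0, m1))
    return best

# Sweep epsilon to trace the Pareto front; keep a point only when f1 improves.
pareto = []
for eps in range(0, 6):
    sol = solve(eps)
    if sol and (not pareto or sol[0] < pareto[-1][0]):
        pareto.append(sol)
print(pareto)  # → [(9, 1, (0, 1)), (7, 3, (0, 0)), (5, 5, (1, 0))]
```

Tightening the bound on tardiness forces higher setup cost, which is exactly the trade-off the epsilon sweep exposes; an exact MILP solver plays the role of the brute-force enumeration here.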
ImageVeriBypasser: An image verification code recognition approach based on Convolutional Neural Network
IF 3 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-25 | DOI: 10.1111/exsy.13658
Tong Ji, Yuxin Luo, Yifeng Lin, Yuer Yang, Qian Zheng, Siwei Lian, Junjie Li

Recent years have witnessed automated crawlers designed to crack passwords automatically, which puts many aspects of our lives at risk. To prevent passwords from being cracked, image verification codes have been implemented to accomplish human–machine verification. It is important to note, however, that the most widely used image verification codes, especially the visual-reasoning Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs), are still susceptible to attacks by artificial intelligence. Taking visual-reasoning CAPTCHAs as representative image verification codes, this study introduces an enhanced approach for generating image verification codes and proposes an improved Convolutional Neural Network (CNN)-based recognition system. After we add a fully connected layer and briefly resolve the edge-of-stability issue, the accuracy of the improved CNN model smoothly approaches 98.40% within 50 epochs on four-digit image verification codes, using a large initial learning rate of 0.01. Compared with the baseline model, it is approximately 37.82% better in accuracy, without obvious curve oscillation. The improved CNN model also smoothly reaches an accuracy of 99.00% within 7500 epochs on six-character image verification codes containing digits, upper-case letters, lower-case letters, and symbols. A detailed comparison between our proposed approach and the baseline is presented, and the relationship between time consumption and seed length is compared theoretically. Subsequently, we derive threat assignments for visual-reasoning CAPTCHAs of different lengths based on four machine learning models, and from these threat assignments compute Kaplan-Meier (KM) curves.

Citations: 0
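A fixed-length code recogniser of the kind this abstract describes typically treats each character position as an independent classification target over a charset. The sketch below shows such a label encoding and a back-of-envelope check on the reported number; the charset (symbols omitted), helper names, and the independence assumption are ours, not the paper's.

```python
import string

# One class index per character position over a 62-symbol charset
# (digits + upper-case + lower-case; the paper's charset also has symbols).
CHARSET = string.digits + string.ascii_uppercase + string.ascii_lowercase
IDX = {c: i for i, c in enumerate(CHARSET)}

def encode(code):
    """Map a code string to per-position class indices for a multi-head classifier."""
    return [IDX[c] for c in code]

def decode(indices):
    """Inverse mapping: class indices back to the code string."""
    return "".join(CHARSET[i] for i in indices)

assert decode(encode("aB3x9Z")) == "aB3x9Z"

# If the reported 98.40% on 4-digit codes came from four independent
# per-position heads, the implied per-character accuracy p satisfies
# p**4 = 0.9840, i.e. p ≈ 0.996.
p = 0.9840 ** 0.25
print(round(p, 3))  # → 0.996
```

The product rule also explains why whole-code accuracy drops quickly as codes get longer: at p ≈ 0.996 per character, a six-character code is recognised with probability p**6 ≈ 0.976 under the same independence assumption.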
Marine predators optimization with deep learning model for video-based facial expression recognition
IF 3 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-24 | DOI: 10.1111/exsy.13657
Mal Hari Prasad, P. Swarnalatha

Video-based facial expression recognition (VFER) techniques aim to categorize an input video into one of several emotions. This remains a challenging problem because of the gap between visual features and emotions, the difficulty of handling subtle muscle movements, and limited datasets. One effective solution is to exploit efficient features that characterize facial expressions. VFER is useful in several areas, such as unmanned driving, venue management, urban safety management, and non-intrusive attendance tracking. Recent advances in computer vision and deep learning (DL) enable the design of automated VFER models. In this context, this study establishes a new Marine Predators Optimization with Deep Learning model for video-based facial expression recognition (MPODL-VFER). The presented technique mainly aims to classify different kinds of facial emotions in video. To accomplish this, it derives features using a densely connected deep convolutional neural network (DenseNet) and employs the marine predators optimization (MPO) algorithm to tune the DenseNet hyperparameters. Finally, an Elman Neural Network (ENN) is exploited for emotion recognition. To assess the recognition performance of the MPODL-VFER approach, a comparison study was conducted on a benchmark dataset; the comprehensive results show that the MPODL-VFER model significantly outperforms other approaches.

Citations: 0
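The pipeline's final classifier is an Elman network, whose defining feature is a context loop: the previous hidden state re-enters the hidden layer at the next time step. A minimal sketch of that recurrence, with random placeholder weights and sizes (nothing here is the trained model from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanCell:
    """Elman recurrence: h_t = tanh(Wx @ x_t + Wh @ h_{t-1} + b)."""

    def __init__(self, n_in, n_hidden):
        self.Wx = rng.normal(0.0, 0.1, (n_hidden, n_in))   # input weights
        self.Wh = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context weights
        self.b = np.zeros(n_hidden)

    def run(self, xs):
        """Process a sequence of feature vectors; return the final hidden state."""
        h = np.zeros_like(self.b)
        for x in xs:
            # Context loop: previous h feeds back into the hidden layer.
            h = np.tanh(self.Wx @ x + self.Wh @ h + self.b)
        return h

cell = ElmanCell(n_in=4, n_hidden=3)
frames = rng.normal(size=(5, 4))   # e.g. 5 per-frame feature vectors
h = cell.run(frames)
print(h.shape)  # → (3,)
```

In the pipeline described by the abstract, the per-frame inputs would be DenseNet features and the final hidden state would feed a classification layer; here the frames are random stand-ins.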