
Artificial Intelligence Review: Latest Publications

Artificial intelligence-based expert weighted quantum picture fuzzy rough sets and recommendation system for metaverse investment decision-making priorities
IF 10.7 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-05 · DOI: 10.1007/s10462-024-10905-0
Gang Kou, Hasan Dinçer, Dragan Pamucar, Serhat Yüksel, Muhammet Deveci, Serkan Eti

Improvements are needed to increase the performance of Metaverse investments. However, businesses need to focus on the most important actions to achieve cost effectiveness in this process. In short, a new study is needed that provides a priority analysis of the performance indicators of Metaverse investments. Accordingly, this study aims to evaluate the main determinants of the performance of metaverse investments. Within this context, a novel model is created that has four stages. The first stage prioritizes the experts with an artificial intelligence-based decision-making method. Secondly, missing evaluations are estimated by an expert recommendation system. Thirdly, the criteria are weighted with Quantum picture fuzzy rough sets-based (QPFR) M-Step-wise Weight Assessment Ratio Analysis (SWARA). Finally, investment decision-making priorities are ranked by QPFR VIKOR (Vlse Kriterijumska Optimizacija Kompromisno Resenje). The main contribution of this study is the integration of an artificial intelligence methodology into the fuzzy decision-making approach for computing the weights of the decision makers. In this way, the evaluations of the experts are weighted according to their qualifications, which supports more effective evaluations. Organizational effectiveness is found to be the most important factor in improving the performance of metaverse investments. Similarly, it is identified that it is important for businesses to ensure technological improvements in the development of Metaverse investments. On the other hand, the ranking results indicate that the regulatory framework is the most critical alternative in this regard.
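The final ranking step can be illustrated with a minimal sketch of classical (crisp) VIKOR in Python. The paper's QPFR variant operates on quantum picture fuzzy rough numbers, which this simplified version does not model; the scores, weights, and compromise coefficient `v` below are illustrative assumptions.

```python
def vikor_rank(scores, weights, v=0.5):
    """Rank alternatives with the crisp VIKOR compromise-ranking method.

    scores[i][j]: performance of alternative i on benefit criterion j
    (assumes each criterion has distinct best and worst values).
    weights[j]: criterion weight; v: compromise coefficient in [0, 1].
    Returns alternative indices sorted by Q (lower Q = better compromise).
    """
    m, n = len(scores), len(scores[0])
    best = [max(scores[i][j] for i in range(m)) for j in range(n)]
    worst = [min(scores[i][j] for i in range(m)) for j in range(n)]
    S, R = [], []  # group utility and individual regret per alternative
    for i in range(m):
        terms = [weights[j] * (best[j] - scores[i][j]) / (best[j] - worst[j])
                 for j in range(n)]
        S.append(sum(terms))
        R.append(max(terms))
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    Q = [v * (S[i] - s_star) / (s_minus - s_star)
         + (1 - v) * (R[i] - r_star) / (r_minus - r_star) for i in range(m)]
    return sorted(range(m), key=lambda i: Q[i])
```

For example, `vikor_rank([[9, 7], [6, 8], [3, 5]], [0.6, 0.4])` ranks three hypothetical alternatives on two criteria.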

Citations: 0
Learning to learn for few-shot continual active learning
IF 10.7 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-05 · DOI: 10.1007/s10462-024-10924-x
Stella Ho, Ming Liu, Shang Gao, Longxiang Gao

Continual learning strives to ensure stability in solving previously seen tasks while demonstrating plasticity in a novel domain. Recent advances in continual learning are mostly confined to a supervised learning setting, especially in the NLP domain. In this work, we consider a few-shot continual active learning setting in which labeled data are inadequate and unlabeled data are abundant but with a limited annotation budget. We exploit meta-learning and propose a method, called Meta-Continual Active Learning. This method sequentially queries the most informative examples from a pool of unlabeled data for annotation to enhance task-specific performance and tackles continual learning problems through a meta-objective. Specifically, we employ meta-learning and experience replay to address inter-task confusion and catastrophic forgetting. We further incorporate textual augmentations to avoid memory over-fitting caused by experience replay and sample queries, thereby ensuring generalization. We conduct extensive experiments on benchmark text classification datasets from diverse domains to validate the feasibility and effectiveness of meta-continual active learning. We also analyze the impact of different active learning strategies on various meta continual learning models. The experimental results demonstrate that introducing randomness into sample selection is the best default strategy for maintaining generalization in a meta-continual learning framework.
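The pool-based querying step can be sketched with standard entropy-based uncertainty sampling, one common notion of "most informative"; the paper's actual query strategy and meta-objective are not reproduced here.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def query_most_informative(pool_probs, budget):
    """Pick the `budget` pool indices whose predictions are most uncertain.

    pool_probs[i] is the model's predicted class distribution for
    unlabeled example i; higher entropy = more informative, mirroring
    the pool-based querying step under a fixed annotation budget.
    """
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]), reverse=True)
    return ranked[:budget]
```

With a pool of three predictions `[[0.5, 0.5], [0.9, 0.1], [0.99, 0.01]]` and a budget of 1, the near-uniform first example is queried.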

Citations: 0
Reinforcement learning-based drone simulators: survey, practice, and challenge
IF 10.7 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-05 · DOI: 10.1007/s10462-024-10933-w
Jun Hoong Chan, Kai Liu, Yu Chen, A. S. M. Sharifuzzaman Sagar, Yong-Guk Kim

Recently, machine learning has been very useful in solving diverse tasks with drones, such as autonomous navigation, visual surveillance, communication, disaster management, and agriculture. Among these machine learning approaches, two representative paradigms have been widely utilized in such applications: supervised learning and reinforcement learning. Researchers often prefer supervised learning, mostly based on convolutional neural networks, because of its robustness and ease of use, yet data labeling is laborious and time-consuming. On the other hand, when traditional reinforcement learning is combined with deep neural networks, it becomes a very powerful tool for high-dimensional inputs such as images and video. Along with the fast development of reinforcement learning, many researchers apply reinforcement learning in drone applications, where it often outperforms supervised learning. However, it usually requires the agent to explore the environment by trial and error, which is costly and unrealistic in the real environment. Recent advances in simulated environments allow an agent to learn by itself and overcome these drawbacks, although the gap between the real environment and the simulator must ultimately be minimized. In this sense, a realistic and reliable simulator is essential for reinforcement learning training. This paper investigates various drone simulators that work with diverse reinforcement learning architectures. The characteristics of reinforcement learning-based drone simulators are analyzed and compared for researchers who would like to employ them in their projects. Finally, we shed light on some challenges and potential directions for future drone simulators.
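The trial-and-error loop that makes simulators attractive can be illustrated with tabular Q-learning on a toy one-dimensional environment; this is a deliberately minimal stand-in for a drone simulator, and all parameters below are illustrative.

```python
import random

def q_learning_corridor(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a 1-D corridor: start in state 0, reward 1
    on reaching the last state; actions 0=left, 1=right. The inner loop
    is the costly trial-and-error exploration a simulator stands in for.
    Returns the learned Q-table."""
    random.seed(0)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection (ties broken toward "right")
            a = random.randrange(2) if random.random() < eps else \
                (0 if Q[s][0] > Q[s][1] else 1)
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy moves right from every non-terminal state, i.e. toward the reward.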

Citations: 0
AL-MobileNet: a novel model for 2D gesture recognition in intelligent cockpit based on multi-modal data
IF 10.7 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-05 · DOI: 10.1007/s10462-024-10930-z
Bin Wang, Liwen Yu, Bo Zhang

As the degree of automotive intelligence increases, gesture recognition is gaining more attention in human-vehicle interaction. However, existing gesture recognition methods are computationally intensive and perform poorly in multi-modal sensor scenarios. This paper proposes a novel network structure, AL-MobileNet (MobileNet with Attention and Lightweight Modules), which can quickly and accurately estimate 2D gestures in RGB and infrared (IR) images. The innovations of this paper are as follows: Firstly, to enhance multi-modal data, we created a synthetic IR dataset based on real 2D gestures and employed a coarse-to-fine training approach. Secondly, to speed up the model's computation on edge devices, we introduced a new lightweight computational module called the Split Channel Attention Block (SCAB). Thirdly, to ensure the model maintains accuracy in large datasets, we incorporated auxiliary networks and Angle-Weighted Loss (AWL) into the backbone network. Experiments show that AL-MobileNet requires only 0.4 GFLOPs of computational power and 1.2 million parameters. This makes it 1.5 times faster than MobileNet and allows for quick execution on edge devices. AL-MobileNet achieved a running speed of up to 28 FPS on the Ambarella CV28. On both general datasets and our dataset, our algorithm achieved an average PCK0.2 score of 0.95. This indicates that the algorithm can quickly generate accurate 2D gestures. The demonstration of the algorithm can be reviewed in gesturebaolong.
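The reported PCK0.2 metric counts a keypoint as correct when its error is within 20% of a reference scale. A minimal sketch follows; the reference scale the paper normalizes by (e.g. hand size vs. bounding-box diagonal) is an assumption here.

```python
import math

def pck(pred, gt, bbox_size, alpha=0.2):
    """Percentage of Correct Keypoints: a predicted keypoint counts as
    correct when its Euclidean distance to ground truth is within
    alpha * bbox_size.

    pred, gt: lists of (x, y) keypoints; bbox_size: reference scale,
    assumed here to be a hand bounding-box measure."""
    thresh = alpha * bbox_size
    hits = sum(1 for (px, py), (gx, gy) in zip(pred, gt)
               if math.hypot(px - gx, py - gy) <= thresh)
    return hits / len(gt)
```

For instance, with a reference scale of 20 pixels the threshold is 4 pixels, so a prediction 10 pixels off is counted as a miss.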

Citations: 0
Detection of Alzheimer’s disease using pre-trained deep learning models through transfer learning: a review
IF 10.7 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-05 · DOI: 10.1007/s10462-024-10914-z
Maleika Heenaye-Mamode Khan, Pushtika Reesaul, Muhammad Muzzammil Auzine, Amelia Taylor

Due to progress in image processing and Artificial Intelligence (AI), it is now possible to develop automated tools for the early detection and diagnosis of Alzheimer’s Disease (AD). Handcrafted techniques developed so far lack generality, which has led to the development of deep learning (DL) techniques that can extract more relevant features. To cater for limited labelled datasets and the requirement for high computational power, transfer learning models can be adopted as a baseline. In recent years, considerable research effort has been devoted to developing machine learning-based techniques for AD detection and classification using medical imaging data. This survey paper comprehensively reviews the existing literature on the methodologies and approaches employed for AD detection and classification, with a focus on neuroimaging techniques such as structural MRI, PET, and fMRI. The main objective of this survey is to analyse the different transfer learning models that can be used to deploy deep convolutional neural networks for AD detection and classification. The phases involved in development, namely image capture, pre-processing, feature extraction, and selection, are also discussed with a view to shedding light on the different phases and challenges that need to be addressed. The research perspectives may provide research directions for the development of automated applications for AD detection and classification.

Citations: 0
A systematic literature review for load balancing and task scheduling techniques in cloud computing
IF 10.7 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-05 · DOI: 10.1007/s10462-024-10925-w
Nisha Devi, Sandeep Dalal, Kamna Solanki, Surjeet Dalal, Umesh Kumar Lilhore, Sarita Simaiya, Nasratullah Nuristani

Cloud computing is an emerging technology composed of several key components that work together to create a seamless network of interconnected devices. These interconnected devices, such as sensors, routers, smartphones, and smart appliances, are the foundation of the Internet of Everything (IoE). Huge volumes of data generated by IoE devices are processed and accumulated in the cloud, allowing for real-time analysis and insights. As a result, there is a dire need for load-balancing and task-scheduling techniques in cloud computing. The primary objective of these techniques is to divide the workload evenly across all available resources and to handle other issues such as reducing execution time and response time, increasing throughput, and improving fault detection. This systematic literature review (SLR) aims to analyze various technologies, comprising optimization and machine learning algorithms, used for load balancing and task-scheduling problems in a cloud computing environment. To analyze the load-balancing patterns and task-scheduling techniques, we opted for a representative set of 63 research articles written in English from 2014 to 2024, selected using suitable exclusion-inclusion criteria. The SLR aims to minimize bias and increase objectivity by designing research questions about the topic. We have focused on the technologies used, the merits and demerits of diverse technologies, gaps within the research, insights into tools, forthcoming opportunities, performance metrics, and an in-depth investigation into ML-based optimization techniques.
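A minimal example of the "divide the workload evenly" objective is greedy least-loaded assignment, a common task-scheduling baseline rather than a technique from any specific surveyed paper; the task costs and worker count below are illustrative.

```python
import heapq

def schedule_least_loaded(task_costs, n_workers):
    """Greedy load balancing: assign each task to the currently
    least-loaded worker, using a min-heap keyed on worker load.

    Returns (assignment, makespan): assignment[i] is the worker index
    chosen for task i; makespan is the heaviest worker's total load."""
    heap = [(0.0, w) for w in range(n_workers)]  # (load, worker id)
    heapq.heapify(heap)
    assignment, loads = [], [0.0] * n_workers
    for cost in task_costs:
        load, w = heapq.heappop(heap)   # worker with the lightest load
        assignment.append(w)
        loads[w] = load + cost
        heapq.heappush(heap, (loads[w], w))
    return assignment, max(loads)
```

With tasks of cost `[4, 3, 2, 2, 1]` on two workers, both workers end up with a total load of 6, i.e. a makespan of 6.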

Citations: 0
Facial image analysis for automated suicide risk detection with deep neural networks
IF 10.7 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-03 · DOI: 10.1007/s10462-024-10882-4
Amr E. Eldin Rashed, Ahmed E. Mansour Atwa, Ali Ahmed, Mahmoud Badawy, Mostafa A. Elhosseini, Waleed M. Bahgat

Accurately assessing suicide risk is a critical concern in mental health care. Traditional methods, which often rely on self-reporting and clinical interviews, are limited by their subjective nature and may overlook non-verbal cues. This study introduces an innovative approach to suicide risk assessment using facial image analysis. The Suicidal Visual Indicators Prediction (SVIP) Framework leverages EfficientNetb0 and ResNet architectures, enhanced through Bayesian optimization techniques, to detect nuanced facial expressions indicating mental state. The models’ interpretability is improved using GRADCAM, Occlusion Sensitivity, and LIME, which highlight significant facial regions for predictions. Using datasets DB1 and DB2, which consist of full and cropped facial images from social media profiles of individuals with known suicide outcomes, the method achieved 67.93% accuracy with EfficientNetb0 on DB1 and up to 76.6% accuracy with a Bayesian-optimized Support Vector Machine model using ResNet18 features on DB2. This approach provides a less intrusive, accessible alternative to video-based methods and demonstrates substantial potential for early detection and intervention in mental health care.

Citations: 0
Handling imbalanced medical datasets: review of a decade of research
IF 10.7 Tier 2, Computer Science, Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-02 DOI: 10.1007/s10462-024-10884-2
Mabrouka Salmi, Dalia Atif, Diego Oliva, Ajith Abraham, Sebastian Ventura

Machine learning and medical diagnostic studies often struggle with the issue of class imbalance in medical datasets, complicating accurate disease prediction and undermining diagnostic tools. Despite ongoing research efforts, specific characteristics of medical data frequently remain overlooked. This article comprehensively reviews advances in addressing imbalanced medical datasets over the past decade, offering a novel classification of approaches into preprocessing, learning levels, and combined techniques. We present a detailed evaluation of the medical datasets and metrics used, synthesizing the outcomes of previous research to reflect on the effectiveness of the methodologies despite methodological constraints. Our review identifies key research trends and offers speculative insights and research trajectories to enhance diagnostic performance. Additionally, we establish a consensus on best practices to mitigate persistent methodological issues, assisting the development of generalizable, reliable, and consistent results in medical diagnostics.
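As a concrete instance of the preprocessing level in the review's taxonomy, here is a minimal, stdlib-only sketch of random oversampling, one of the simplest rebalancing techniques. The toy records and labels are illustrative, not drawn from any surveyed medical dataset.

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples at random until every class
    matches the size of the largest class."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [x for x, y in zip(samples, labels) if y == cls]
        for _ in range(target - n):
            out_x.append(rng.choice(pool))
            out_y.append(cls)
    return out_x, out_y

# Toy imbalanced dataset: 4 "healthy" records vs 1 "disease" record.
X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = ["healthy"] * 4 + ["disease"]
X_bal, y_bal = random_oversample(X, y)
# After oversampling, both classes contribute 4 samples each.
```

Plain duplication risks overfitting to repeated minority samples, which is why the surveyed literature also covers synthetic generation (e.g. SMOTE-style interpolation) and learning-level remedies such as cost-sensitive losses.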

Citations: 0
Deep learning-based authentication for insider threat detection in critical infrastructure
IF 10.7 Tier 2, Computer Science, Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-08-29 DOI: 10.1007/s10462-024-10893-1
Arnoldas Budžys, Olga Kurasova, Viktor Medvedev

In today’s cyber environment, data breaches, cyberattacks, and unauthorized access threaten national security, critical infrastructure, and financial stability. This research addresses the challenging task of protecting critical infrastructure from insider threats, given the high level of trust and access these individuals typically receive. Insiders may obtain a system administrator’s password through close observation or by deploying software to gather the information. To solve this issue, an innovative artificial intelligence-based methodology is proposed to identify a user by their password’s keystroke dynamics. This paper also introduces a new Gabor Filter Matrix Transformation method to transform numerical values into images by revealing the behavioral pattern of password typing. A Siamese neural network (SNN) with branches of convolutional neural networks is utilized for image comparison, aiming to detect unauthorized attempts to access critical infrastructure systems. The network analyzes the unique features of a user’s password timestamps transformed into images and compares them with previously submitted user passwords. The obtained results indicate that transforming the numerical values of keystroke dynamics into images and training an SNN leads to a lower equal error rate (EER) and higher user authentication accuracy than those previously reported in other studies. The methodology is validated on publicly available keystroke dynamics collections, the CMU and GREYC-NISLAB datasets, which together comprise over 30,000 password samples. It achieves the lowest EER of 0.04545 compared with state-of-the-art methods for transforming non-image data into images. The paper concludes with a discussion of findings and potential future directions.
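The transformation is built on Gabor filters; as a hedged sketch (the abstract does not specify the paper's actual construction or parameters), the following computes a standard 2-D Gabor kernel in pure Python. All parameter values here are illustrative assumptions.

```python
import math

def gabor_kernel(size=5, sigma=1.5, theta=0.0, lam=4.0, gamma=0.5, psi=0.0):
    """Return a size x size Gabor kernel (list of lists): a Gaussian
    envelope modulated by a sinusoidal carrier along orientation theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
            carrier = math.cos(2 * math.pi * xr / lam + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel()
# With psi=0 the kernel peaks at its centre: k[2][2] == 1.0.
```

Convolving such kernels with a matrix of keystroke-timing values would emphasize oriented, periodic structure, which is plausibly how a typing rhythm becomes a texture the downstream SNN can compare.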

Citations: 0
From rule-based models to deep learning transformers architectures for natural language processing and sign language translation systems: survey, taxonomy and performance evaluation
IF 10.7 Tier 2, Computer Science, Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-08-29 DOI: 10.1007/s10462-024-10895-z
Nada Shahin, Leila Ismail

With the growing Deaf and Hard of Hearing population worldwide and the persistent shortage of certified sign language interpreters, there is a pressing need for an efficient, signs-driven, integrated end-to-end translation system, from sign to gloss to text and vice versa. There is a wealth of research on machine translation and related reviews. However, few works address sign language machine translation, given the particularity of a language that is continuous and dynamic. This paper aims to fill this void, providing a retrospective analysis of the temporal evolution of sign language machine translation algorithms and a taxonomy of Transformer architectures, the most widely used approach in language translation. We also present the requirements of a real-time, Quality-of-Service sign language machine translation system underpinned by accurate deep learning algorithms, and we propose future research directions for sign language translation systems.

Citations: 0