
Latest publications from WIREs Data Mining and Knowledge Discovery

An overview of current developments and methods for identifying diabetic foot ulcers: A survey
Pub Date : 2024-10-09 DOI: 10.1002/widm.1562
L. Jani Anbarasi, Malathy Jawahar, R. Beulah Jayakumari, Modigari Narendra, Vinayakumar Ravi, R. Neeraja
Diabetic foot ulcers (DFUs) present a substantial health risk across diverse age groups and pose challenges for healthcare professionals in accurate classification and grading. DFU assessment plays a crucial role in automated health monitoring and diagnosis systems, where the integration of medical imaging, computer vision, statistical analysis, and gait information is essential for comprehensive understanding and effective management. Accurate DFU diagnosis is imperative, as it underpins diagnosis, treatment planning, and neuropathy research within such systems. To address this, various machine learning and deep learning-based methodologies have emerged in the literature to support healthcare practitioners in achieving improved diagnostic analyses for DFU. This survey investigates diagnostic methodologies for DFU, spanning traditional statistical approaches to cutting-edge deep learning techniques. It systematically reviews the key stages of diabetic foot ulcer classification (DFUC) methods, including preprocessing, feature extraction, and classification, explaining their benefits and drawbacks. The investigation extends to state-of-the-art convolutional neural network models tailored for DFUC, involving extensive experiments with data augmentation and transfer learning methods. The overview also outlines datasets commonly employed for evaluating DFUC methodologies. Recognizing that neuropathy and reduced blood flow in the lower limbs may be caused by atherosclerotic blood vessels, this paper provides recommendations to researchers and practitioners involved in routine medical therapy to prevent substantial complications. Beyond reviewing prior literature, this survey aims to shape the future of DFU diagnostics by outlining prospective research directions, particularly in personalized and intelligent healthcare. Finally, this overview aims to contribute to the continual evolution of DFU diagnosis toward more effective and customized medical care.
This article is categorized under:
Application Areas > Health Care
Technologies > Machine Learning
Technologies > Artificial Intelligence
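The data-augmentation step that the survey emphasizes for DFUC training can be sketched in miniature. The toy 2×2 patch and the flip/rotate transforms below are illustrative stand-ins only, not the survey's actual pipeline, which would operate on full wound photographs via an image library:

```python
# Hypothetical sketch of geometric data augmentation: a tiny grayscale
# patch (list of rows) is expanded into flipped and rotated variants,
# mimicking how DFU classification pipelines multiply scarce training images.

def hflip(img):
    """Mirror each row (horizontal flip)."""
    return [row[::-1] for row in img]

def vflip(img):
    """Reverse the row order (vertical flip)."""
    return img[::-1]

def rot90(img):
    """Rotate the patch 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img):
    """Return the original patch plus three simple geometric variants."""
    return [img, hflip(img), vflip(img), rot90(img)]

patch = [[1, 2],
         [3, 4]]
variants = augment(patch)
```

In a real pipeline these transforms would be sampled randomly per epoch; the point here is only that each variant preserves the wound content while changing its orientation.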
Citations: 0
Multimodal emotion recognition: A comprehensive review, trends, and challenges
Pub Date : 2024-10-09 DOI: 10.1002/widm.1563
Manju Priya Arthanarisamy Ramaswamy, Suja Palaniswamy
Automatic emotion recognition is a burgeoning research field with roots in psychology and cognitive science. This article comprehensively reviews multimodal emotion recognition, covering emotion theories, discrete and dimensional models, emotional response systems, datasets, and current trends. It reviews 179 multimodal emotion recognition papers published from 2017 to 2023 to reflect current trends in multimodal affective computing. The article covers the modalities used in emotion recognition, organized by the emotional response system into four categories: subjective experience, comprising text and self-report; peripheral physiology, comprising electrodermal, cardiovascular, facial muscle, and respiration activity; central physiology, comprising EEG, neuroimaging, and EOG; and behavior, comprising facial, vocal, and whole-body behavior as well as observer ratings. The review summarizes the measures and behavior of each modality under various emotional states and provides an extensive list of multimodal datasets and their unique characteristics. Recent advances in multimodal emotion recognition are grouped by research focus area: emotion elicitation strategy; data collection and handling; the impact of culture and modality on multimodal emotion recognition systems; feature extraction; feature selection; alignment of signals across modalities; and fusion strategies. Recent multimodal fusion strategies are detailed, as extracting shared representations of different modalities, removing redundant features, and learning critical features from each modality are crucial for multimodal emotion recognition. The article summarizes the strengths and weaknesses of multimodal emotion recognition based on the review outcome, along with challenges and future work, and aims to serve as a lucid introduction covering all aspects of multimodal emotion recognition for novices.
This article is categorized under:
Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction
Technologies > Cognitive Computing
Technologies > Artificial Intelligence
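One of the fusion strategies such reviews cover, decision-level (late) fusion, can be sketched very simply: each modality classifier emits a probability distribution over emotion labels, and the fused prediction averages them. The modality names and scores below are illustrative, not taken from the article:

```python
# Minimal late-fusion sketch: average per-modality probability vectors
# and pick the label with the highest fused score. Hypothetical inputs.

def late_fusion(modality_probs):
    """Average the probability vectors of all modalities; return (label, fused)."""
    labels = list(next(iter(modality_probs.values())).keys())
    n = len(modality_probs)
    fused = {lab: sum(p[lab] for p in modality_probs.values()) / n
             for lab in labels}
    return max(fused, key=fused.get), fused

probs = {
    "face":  {"happy": 0.7, "sad": 0.2, "neutral": 0.1},
    "voice": {"happy": 0.4, "sad": 0.5, "neutral": 0.1},
    "text":  {"happy": 0.6, "sad": 0.1, "neutral": 0.3},
}
label, fused = late_fusion(probs)
```

Early fusion (concatenating features before classification) and shared-representation learning differ from this sketch in where the modalities are combined, which is exactly the design axis the review organizes fusion strategies around.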
Citations: 0
Artificial intelligence in assessing cardiovascular diseases and risk factors via retinal fundus images: A review of the last decade
Pub Date : 2024-10-09 DOI: 10.1002/widm.1560
Mirsaeed Abdollahi, Ali Jafarizadeh, Amirhosein Ghafouri‐Asbagh, Navid Sobhi, Keysan Pourmoghtader, Siamak Pedrammehr, Houshyar Asadi, Ru‐San Tan, Roohallah Alizadehsani, U. Rajendra Acharya
Cardiovascular diseases (CVDs) are the leading cause of death globally. The use of artificial intelligence (AI) methods, in particular deep learning (DL), has been on the rise lately for the analysis of different CVD-related topics. The use of fundus images and optical coherence tomography angiography (OCTA) in the diagnosis of retinal diseases has also been extensively studied. To better understand heart function and anticipate changes based on microvascular characteristics and function, researchers are currently exploring the integration of AI with noninvasive retinal scanning. There is great potential to reduce the number of cardiovascular events and the financial strain on healthcare systems by utilizing AI-assisted early detection and prediction of cardiovascular diseases on a large scale. A comprehensive search was conducted across various databases, including PubMed, Medline, Google Scholar, Scopus, Web of Science, IEEE Xplore, and the ACM Digital Library, using specific keywords related to cardiovascular diseases and AI. The study included 87 English-language publications selected for relevance, and additional references were considered. This article provides an overview of the recent developments and difficulties in using AI and retinal imaging to diagnose cardiovascular diseases, offering insights for further exploration in this field. Researchers are trying to develop precise disease prognosis patterns in response to the aging population and the growing global burden of CVD. AI and DL are revolutionizing healthcare by potentially diagnosing multiple CVDs from a single retinal image. However, swifter adoption of these technologies in healthcare systems is required.
This article is categorized under:
Application Areas > Health Care
Technologies > Artificial Intelligence
Citations: 0
Continual learning and its industrial applications: A selective review
Pub Date : 2024-09-24 DOI: 10.1002/widm.1558
J. Lian, K. Choi, B. Veeramani, A. Hu, S. Murli, L. Freeman, E. Bowen, X. Deng
In many industrial applications, datasets are obtained in a sequence associated with a series of similar but different tasks. A machine-learning algorithm that performed well on a previous task may not perform as strongly on the current one. When the algorithm's architecture is retrained to adapt to new tasks, often the whole architecture must be revised, and the old modeling knowledge can be forgotten. Making the algorithm work for all relevant tasks can cost large amounts of computation and data storage. Continual learning, also called lifelong learning or continual lifelong learning, refers to the ability of such algorithms to keep learning without forgetting the information obtained from previous tasks. In this work, we provide a broad view of continual learning techniques and their industrial applications. Our focus is on reviewing current methodologies and existing applications, and on identifying the gap between current methodology and modern industrial needs.
This article is categorized under:
Technologies > Artificial Intelligence
Fundamental Concepts of Data and Knowledge > Knowledge Representation
Application Areas > Business and Industry
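Experience replay is one common recipe for the "learn without forgetting" idea described above: a small buffer of examples from earlier tasks is mixed into each new task's training data so old knowledge is rehearsed rather than overwritten. The sketch below is a toy illustration (the abstract does not commit to any specific method); the capacity, sampling rule, and example strings are all assumptions:

```python
import random

class ReplayBuffer:
    """Bounded memory of past-task examples, filled by reservoir sampling."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling: every example seen so far stays in the
        # bounded buffer with equal probability.
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def mix(self, new_task_data):
        # Training set for the current task: new data plus rehearsed old data.
        return list(new_task_data) + list(self.items)

buf = ReplayBuffer(capacity=2)
for ex in ["task1-a", "task1-b", "task1-c"]:
    buf.add(ex)
mixed = buf.mix(["task2-a"])
```

The appeal in industrial settings is that the buffer caps storage at a fixed size regardless of how many tasks arrive, which is exactly the resource trade-off the review discusses.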
Citations: 0
Lead–lag effect of research between conference papers and journal papers in data mining
Pub Date : 2024-09-24 DOI: 10.1002/widm.1561
Yue Huang, Runyu Tian
Examining the lead–lag effect between different publication types along a temporal dimension is highly significant for research assessment. In this article, we introduce a novel framework to quantify the lead–lag effect between the research topics of conference papers and journal papers. We first identify research topics via the text-embedding-based topic modeling technique BERTopic, then extract the research topics of each time slice, construct and visualize the topic similarity matrix to reveal the time-lag direction, and finally quantify the lead–lag effect through four proposed indicators and through average-influence topic-similarity comparison maps. We conduct a detailed analysis of 19,166 bibliographic records for top conference papers and journal papers from 2015 to 2019 in the data mining field, calculating the similarity of BERTopic-derived topics between quarterly time slices. The results show that journal paper topics lag behind conference paper topics in the data mining field. The most significant lead–lag effect is 2.5 years, with approximately 33.45% of topics affected by this lag. The methodology presented here holds potential for broader application in the analysis of lead–lag effects across diverse research areas, offering valuable insights into the state of research development and informing policy decisions.
This article is categorized under:
Application Areas > Science and Technology
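The similarity-matrix step of such a framework can be sketched as follows. The topic vectors here are toy term-weight vectors, not real BERTopic output, and the quarter labels are illustrative; rows index topics in the earlier (conference) slice and columns index topics in the later (journal) slice, so off-diagonal peaks would hint at which earlier topic a later topic follows:

```python
import math

def cosine(u, v):
    """Cosine similarity between two topic vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_matrix(earlier_topics, later_topics):
    """Rows: topics in the earlier time slice; columns: topics in the later one."""
    return [[cosine(u, v) for v in later_topics] for u in earlier_topics]

conf_q1 = [[1.0, 0.0, 0.2], [0.0, 1.0, 0.1]]  # hypothetical conference topics, earlier quarter
jour_q3 = [[0.9, 0.1, 0.2], [0.1, 0.9, 0.0]]  # hypothetical journal topics, later quarter
M = similarity_matrix(conf_q1, jour_q3)
```

Repeating this over every pair of quarterly slices and tracking where each later topic's best match sits in time is the kind of computation that would underlie lag-direction estimates like the 2.5-year figure reported in the abstract.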
Citations: 0
From 3D point‐cloud data to explainable geometric deep learning: State‐of‐the‐art and future challenges
Pub Date : 2024-09-17 DOI: 10.1002/widm.1554
Anna Saranti, Bastian Pfeifer, Christoph Gollob, Karl Stampfer, Andreas Holzinger
We present an exciting journey from 3D point-cloud data (PCD) to the state of the art in graph neural networks (GNNs) and their evolution with explainable artificial intelligence (XAI) and 3D geometric priors with the human-in-the-loop. We follow a simple definition of a "digital twin" as a high-precision, three-dimensional digital representation of a physical object or environment, captured, for example, by Light Detection and Ranging (LiDAR) technology. After a digression into transforming PCD into images, graphs, combinatorial complexes, and hypergraphs, we explore recent developments in geometric deep learning (GDL) and provide insight into the application of these network architectures for analyzing and learning from graph-structured data. We emphasize the importance of the explainability of these models and recognize that the ability to interpret and validate the results of complex models is a crucial aspect of their wider adoption.
This article is categorized under:
Technologies > Artificial Intelligence
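One of the PCD-to-graph transformations the article surveys can be sketched as a k-nearest-neighbor graph: each 3D point becomes a node connected to its k closest points. The brute-force search and the four-point toy cloud below are illustrative assumptions; LiDAR-scale clouds would need a spatial index such as a KD-tree:

```python
import math

def knn_graph(points, k):
    """Return a directed edge list (i, j) linking each point i to its k nearest neighbors j."""
    edges = []
    for i, p in enumerate(points):
        # Brute-force: sort all other points by Euclidean distance to p.
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        edges.extend((i, j) for _, j in dists[:k])
    return edges

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (5.0, 5.0, 5.0)]
edges = knn_graph(cloud, k=2)
```

The resulting edge list, together with per-point features, is the input format a GNN layer consumes, which is why this conversion is the usual first step from raw PCD toward the GDL architectures discussed above.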
Citations: 0
Digital twins in healthcare: Applications, technologies, simulations, and future trends
Pub Date : 2024-09-06 DOI: 10.1002/widm.1559
Mohamed Abd Elaziz, Mohammed A. A. Al‐qaness, Abdelghani Dahou, Mohammed Azmi Al‐Betar, Mona Mostafa Mohamed, Mohamed El‐Shinawi, Amjad Ali, Ahmed A. Ewees
The healthcare industry has witnessed significant interest in applying digital twins (DTs), due to technological advancements. DTs are virtual replicas of physical entities that adapt to real-time data, enabling predictions about their physical counterparts. DT technology enhances understanding of disease occurrence, enabling more accurate diagnoses and treatments. Integrating emerging technologies such as big data, cloud computing, virtual reality (VR), and the internet of things (IoT) provides a solid foundation for DT implementation in healthcare. However, defining DTs within the healthcare context has become increasingly challenging. Therefore, exploring the potential of DTs in healthcare contributes to research, emphasizing their transformative impact on personalized medicine and precision healthcare. In this study, we present diverse healthcare applications of DTs, including healthcare 4.0, cardiac analysis, monitoring and management, data privacy, socio-ethical issues, and surgery. Moreover, this paper discusses the software and simulations of DTs that can be used in these healthcare applications, as well as the future trends of DTs in healthcare.
This article is categorized under:
Application Areas > Health Care
Technologies > Computational Intelligence
A taxonomy of automatic differentiation pitfalls
Pub Date : 2024-09-03 DOI: 10.1002/widm.1555
Jan Hückelheim, Harshitha Menon, William Moses, Bruce Christianson, Paul Hovland, Laurent Hascoët
Automatic differentiation is a popular technique for computing derivatives of computer programs. While automatic differentiation has been successfully used in countless engineering, science, and machine learning applications, it can nevertheless sometimes produce surprising results. In this paper, we categorize problematic usages of automatic differentiation, and illustrate each category with examples such as chaos, time‐averages, discretizations, fixed‐point loops, lookup tables, linear solvers, and probabilistic programs, in the hope that readers may more easily avoid or detect such pitfalls. We also review debugging techniques and their effectiveness in these situations. This article is categorized under: Technologies > Machine Learning
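One of the pitfall categories the abstract lists — lookup tables — can be reproduced with a few lines of forward‐mode AD. The dual‐number implementation below is a generic textbook sketch, not code from the paper: the tabulated function is piecewise constant in the eyes of the differentiator, so AD reports a zero derivative even though the underlying function clearly varies.

```python
# Minimal forward-mode AD via dual numbers, illustrating the
# lookup-table pitfall: the table lookup is piecewise constant,
# so AD propagates a zero derivative through it.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f_exact(x):
    return x * x                                  # smooth: df/dx = 2x

TABLE = [(i * 0.1) ** 2 for i in range(101)]      # x**2 tabulated on [0, 10]

def f_table(x):
    # Pitfall: the integer index makes the lookup piecewise constant,
    # so no derivative information flows through it.
    v = x.val if isinstance(x, Dual) else x
    y = TABLE[int(v / 0.1)]
    return Dual(y, 0.0) if isinstance(x, Dual) else y

x = Dual(2.0, 1.0)        # seed dx/dx = 1
print(f_exact(x).dot)     # 4.0 — correct derivative of x**2 at x = 2
print(f_table(x).dot)     # 0.0 — the lookup hides the dependence on x
```

The usual fixes are to interpolate between table entries (restoring a nonzero local slope) or to register a custom derivative rule for the lookup.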
Advancements in Q‐learning meta‐heuristic optimization algorithms: A survey
Pub Date : 2024-08-19 DOI: 10.1002/widm.1548
Yang Yang, Yuchao Gao, Zhe Ding, Jinran Wu, Shaotong Zhang, Feifei Han, Xuelan Qiu, Shangce Gao, You‐Gan Wang
This paper reviews the integration of Q‐learning with meta‐heuristic algorithms (QLMA) over the last 20 years, highlighting its success in solving complex optimization problems. We focus on key aspects of QLMA, including parameter adaptation, operator selection, and balancing global exploration with local exploitation. QLMA has become a leading solution in industries like energy, power systems, and engineering, addressing a range of mathematical challenges. Looking forward, we suggest further exploration of meta‐heuristic integration, transfer learning strategies, and techniques to reduce state space. This article is categorized under: Technologies > Computational Intelligence; Technologies > Artificial Intelligence
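The operator‐selection aspect the abstract highlights can be sketched with a toy QLMA loop: a Q‐table chooses among mutation operators, and the reward is the fitness improvement each choice produces. The operator names, single‐state Q‐table, and reward scheme below are illustrative assumptions, not the survey's specific algorithm.

```python
# Toy QLMA: epsilon-greedy Q-learning selects which mutation operator
# a hill-climbing meta-heuristic applies at each iteration.
import random

random.seed(0)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
operators = ["small_step", "big_step"]
Q = {op: 0.0 for op in operators}         # single-state Q-table

def fitness(x):
    return -(x - 3.0) ** 2                # maximize: optimum at x = 3

def apply_op(op, x):
    step = 0.1 if op == "small_step" else 1.0
    return x + random.uniform(-step, step)

x = 10.0
for _ in range(500):
    # Epsilon-greedy operator selection.
    op = random.choice(operators) if random.random() < EPS \
         else max(Q, key=Q.get)
    x_new = apply_op(op, x)
    reward = fitness(x_new) - fitness(x)  # reward = fitness improvement
    # Q-learning update (one state, so max_a Q(s', a) = max(Q.values())).
    Q[op] += ALPHA * (reward + GAMMA * max(Q.values()) - Q[op])
    if reward > 0:
        x = x_new                         # greedy acceptance
print(round(x, 2))
```

Run with the fixed seed, the search typically settles near the optimum; the surveyed methods extend this pattern with richer state spaces (e.g., search‐progress features) and larger operator pools.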
Exploring the convergence of Metaverse, Blockchain, and AI: A comprehensive survey of enabling technologies, applications, challenges, and future directions
Pub Date : 2024-08-19 DOI: 10.1002/widm.1556
Mueen Uddin, Muath Obaidat, Selvakumar Manickam, Shams Ul Arfeen Laghari, Abdulhalim Dandoush, Hidayat Ullah, Syed Sajid Ullah
The Metaverse, distinguished by its capacity to integrate the physical and digital realms seamlessly, presents a dynamic virtual environment offering diverse opportunities for engagement across innovation, entertainment, socialization, and commercial endeavors. However, the Metaverse is poised for a transformative evolution through the convergence of contemporary technological advancements, including artificial intelligence (AI), Blockchain, Robotics, augmented reality, virtual reality, and mixed reality. This convergence is anticipated to revolutionize the global digital landscape, introducing novel social, economic, and operational paradigms for organizations and communities. To comprehensively elucidate the future potential of this technological fusion and its implications for digital innovation, this research endeavors to undertake a thorough analysis of scholarly discourse and research pertaining to the Metaverse, AI, Blockchain, and associated technologies. This survey delves into various critical facets of the Metaverse ecosystem, encompassing component analysis, exploration of digital currencies, assessment of AI utilization in virtual environments, and examination of Blockchain's role in enhancing digital content and data security. Leveraging articles retrieved from esteemed digital repositories including ScienceDirect, IEEE Xplore, Springer Nature, Google Scholar, and ACM, published between 2017 and 2023, this study adopts an analytical approach to engage with these materials. Through rigorous examination and discourse, this research aims to provide insights into the emerging trends, challenges, and future directions in the convergence of the Metaverse, Blockchain, and AI. This article is categorized under: Application Areas > Industry Specific Applications