L. Jani Anbarasi, Malathy Jawahar, R. Beulah Jayakumari, Modigari Narendra, Vinayakumar Ravi, R. Neeraja
Diabetic foot ulcers (DFUs) present a substantial health risk across diverse age groups, creating challenges for healthcare professionals in accurate classification and grading. DFU assessment plays a crucial role in automated health monitoring and diagnosis systems, where the integration of medical imaging, computer vision, statistical analysis, and gait information is essential for comprehensive understanding and effective management. Accurate diagnosis of DFU is imperative, as it underpins treatment planning and neuropathy research within such systems. To address this, various machine learning and deep learning‐based methodologies have emerged in the literature to support healthcare practitioners in achieving improved diagnostic analyses for DFU. This survey investigates diagnostic methodologies for DFU, spanning traditional statistical approaches to cutting‐edge deep learning techniques. It systematically reviews the key stages of diabetic foot ulcer classification (DFUC) methods, including preprocessing, feature extraction, and classification, explaining their benefits and drawbacks. The investigation extends to state‐of‐the‐art convolutional neural network models tailored for DFUC, involving extensive experiments with data augmentation and transfer learning methods. The overview also outlines datasets commonly employed for evaluating DFUC methodologies. Recognizing that neuropathy and reduced blood flow in the lower limbs may be caused by atherosclerotic blood vessels, the paper provides recommendations to researchers and practitioners involved in routine medical therapy to prevent substantial complications. Beyond reviewing prior literature, this survey aims to shape the future of DFU diagnostics by outlining prospective research directions, particularly in personalized and intelligent healthcare.
Finally, this overview aims to contribute to the continual evolution of DFU diagnosis toward more effective and customized medical care.
This article is categorized under:
- Application Areas > Health Care
- Technologies > Machine Learning
- Technologies > Artificial Intelligence
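The survey highlights data augmentation experiments in DFUC pipelines. A minimal sketch of flip/rotate augmentation is below, operating on nested lists so it stays self-contained; real pipelines would apply the same operations through PIL or torchvision, and the 2x2 "patch" is purely illustrative.

```python
# Minimal sketch of flip/rotate data augmentation, as commonly used in
# DFU classification pipelines. Images are nested lists (H x W) here.

def hflip(img):
    """Mirror an image left-right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror an image top-bottom."""
    return img[::-1]

def rot90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img):
    """Return the original image plus three flipped/rotated variants."""
    return [img, hflip(img), vflip(img), rot90(img)]

if __name__ == "__main__":
    patch = [[1, 2],
             [3, 4]]
    for variant in augment(patch):
        print(variant)
```

Each variant preserves the ulcer region's content while changing orientation, which is why such label-preserving transforms are a common way to stretch small medical datasets.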
"An overview of current developments and methods for identifying diabetic foot ulcers: A survey," WIREs Data Mining and Knowledge Discovery, published 2024-10-09. DOI: 10.1002/widm.1562
Automatic emotion recognition is a burgeoning field of research and has its roots in psychology and cognitive science. This article comprehensively reviews multimodal emotion recognition, covering various aspects such as emotion theories, discrete and dimensional models, emotional response systems, datasets, and current trends. This article reviewed 179 multimodal emotion recognition literature papers from 2017 to 2023 to reflect on the current trends in multimodal affective computing. This article covers various modalities used in emotion recognition based on the emotional response system under four categories: subjective experience comprising text and self‐report; peripheral physiology comprising electrodermal, cardiovascular, facial muscle, and respiration activity; central physiology comprising EEG, neuroimaging, and EOG; behavior comprising facial, vocal, whole‐body behavior, and observer ratings. This review summarizes the measures and behavior of each modality under various emotional states. This article provides an extensive list of multimodal datasets and their unique characteristics. The recent advances in multimodal emotion recognition are grouped based on the research focus areas such as emotion elicitation strategy, data collection and handling, the impact of culture and modality on multimodal emotion recognition systems, feature extraction, feature selection, alignment of signals across the modalities, and fusion strategies. The recent multimodal fusion strategies are detailed in this article, as extracting shared representations of different modalities, removing redundant features from different modalities, and learning critical features from each modality are crucial for multimodal emotion recognition. This article summarizes the strengths and weaknesses of multimodal emotion recognition based on the review outcome, along with challenges and future work in multimodal emotion recognition. 
This article aims to serve as a lucid introduction, covering all aspects of multimodal emotion recognition for novices.
This article is categorized under:
- Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction
- Technologies > Cognitive Computing
- Technologies > Artificial Intelligence
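The review stresses fusion strategies for combining modalities. One of the simplest is decision-level (late) fusion, sketched below: each modality outputs a probability distribution over emotion classes and a weighted average combines them. The class names, weights, and per-modality scores are illustrative assumptions, not taken from any system in the review.

```python
# A minimal sketch of decision-level (late) fusion for multimodal emotion
# recognition: average the per-modality class-probability vectors.

def late_fusion(predictions, weights=None):
    """Weighted average of per-modality probability vectors."""
    n = len(predictions)
    weights = weights or [1.0 / n] * n
    k = len(predictions[0])
    return [sum(w * p[i] for w, p in zip(weights, predictions))
            for i in range(k)]

if __name__ == "__main__":
    classes = ["happy", "sad", "angry"]
    face  = [0.7, 0.2, 0.1]   # facial-expression model (illustrative)
    voice = [0.5, 0.3, 0.2]   # vocal-prosody model (illustrative)
    eeg   = [0.6, 0.1, 0.3]   # EEG model (illustrative)
    fused = late_fusion([face, voice, eeg])
    print(classes[max(range(len(fused)), key=fused.__getitem__)])  # happy
```

Late fusion keeps each modality's model independent, in contrast to the shared-representation and redundancy-removal strategies the review also covers, which fuse at the feature level.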
Manju Priya Arthanarisamy Ramaswamy, Suja Palaniswamy, "Multimodal emotion recognition: A comprehensive review, trends, and challenges," WIREs Data Mining and Knowledge Discovery, published 2024-10-09. DOI: 10.1002/widm.1563
Cardiovascular diseases (CVDs) are the leading cause of death globally. The use of artificial intelligence (AI) methods, in particular deep learning (DL), has been on the rise lately for the analysis of different CVD‐related topics. The use of fundus images and optical coherence tomography angiography (OCTA) in the diagnosis of retinal diseases has also been extensively studied. To better understand heart function and anticipate changes based on microvascular characteristics and function, researchers are currently exploring the integration of AI with noninvasive retinal scanning. There is great potential to reduce the number of cardiovascular events and the financial strain on healthcare systems by utilizing AI‐assisted early detection and prediction of cardiovascular diseases on a large scale. A comprehensive search was conducted across various databases, including PubMed, Medline, Google Scholar, Scopus, Web of Science, IEEE Xplore, and the ACM Digital Library, using specific keywords related to cardiovascular diseases and AI. The study included 87 English‐language publications selected for relevance, and additional references were considered. This article provides an overview of recent developments and difficulties in using AI and retinal imaging to diagnose cardiovascular diseases, offering insights for further exploration in this field. Researchers are trying to develop precise disease prognosis patterns in response to the aging population and the growing global burden of CVD. AI and DL are revolutionizing healthcare by potentially diagnosing multiple CVDs from a single retinal image. However, swifter adoption of these technologies in healthcare systems is required.
This article is categorized under:
- Application Areas > Health Care
- Technologies > Artificial Intelligence
Mirsaeed Abdollahi, Ali Jafarizadeh, Amirhosein Ghafouri‐Asbagh, Navid Sobhi, Keysan Pourmoghtader, Siamak Pedrammehr, Houshyar Asadi, Ru‐San Tan, Roohallah Alizadehsani, U. Rajendra Acharya, "Artificial intelligence in assessing cardiovascular diseases and risk factors via retinal fundus images: A review of the last decade," WIREs Data Mining and Knowledge Discovery, published 2024-10-09. DOI: 10.1002/widm.1560
J. Lian, K. Choi, B. Veeramani, A. Hu, S. Murli, L. Freeman, E. Bowen, X. Deng
In many industrial applications, datasets are obtained in a sequence associated with a series of similar but different tasks. A machine‐learning algorithm that performed well on the previous task may not perform as strongly on the current one. When the architecture of the algorithm is trained to adapt to new tasks, the whole architecture often needs to be revised, and previously learned modeling knowledge can be forgotten. Making the algorithm work for all the relevant tasks can cost substantial computational resources and data storage. Continual learning, also called lifelong learning or continual lifelong learning, refers to the ability of such algorithms to keep learning without forgetting the information obtained from previous tasks. In this work, we provide a broad view of continual learning techniques and their industrial applications. Our focus is on reviewing current methodologies and existing applications, and on identifying the gap between current methodology and modern industrial needs.
This article is categorized under:
- Technologies > Artificial Intelligence
- Fundamental Concepts of Data and Knowledge > Knowledge Representation
- Application Areas > Business and Industry
"Continual learning and its industrial applications: A selective review," WIREs Data Mining and Knowledge Discovery, published 2024-09-24. DOI: 10.1002/widm.1558
Examining the lead–lag effect between different publication types, incorporating a temporal dimension, is significant for assessing research. In this article, we introduce a novel framework to quantify the lead–lag effect between the research topics of conference papers and journal papers. We first identify research topics via the text‐embedding‐based topic modeling technique BERTopic, then extract the research topics of each time slice, construct and visualize the similarity matrix of topics to reveal the time‐lag direction, and finally quantify the lead–lag effect through four proposed indicators as well as average influence topic similarity comparison maps. We conduct a detailed analysis of 19,166 bibliographic records for top conference papers and journal papers from 2015 to 2019 in the data mining field, calculating the similarity of topics obtained by BERTopic between quarterly time slices. The results show that journal paper topics lag behind conference paper topics in the data mining field. The most significant lead–lag effect is 2.5 years, with approximately 33.45% of topics affected by this lag. The methodology presented here holds potential for broader application in the analysis of lead–lag effects across diverse research areas, offering valuable insights into the state of research development and informing policy decisions.
This article is categorized under:
- Application Areas > Science and Technology
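The core of a lead–lag estimate can be sketched schematically: represent each time slice's topics as a vector, then find the offset at which journal slices best match earlier conference slices. The toy vectors below are fabricated for illustration and stand in for the paper's BERTopic output, which this sketch does not reproduce.

```python
# Schematic lead-lag estimation: pick the lag (in time slices) that
# maximizes average cosine similarity between conference topics at
# quarter t and journal topics at quarter t + lag.

import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def best_lag(conf, jour, max_lag):
    """Return (best lag, {lag: mean similarity}) over aligned slice pairs."""
    scores = {}
    for lag in range(max_lag + 1):
        pairs = [(conf[t], jour[t + lag]) for t in range(len(conf) - lag)]
        scores[lag] = sum(cosine(u, v) for u, v in pairs) / len(pairs)
    return max(scores, key=scores.get), scores

# Toy data: journal topics repeat conference topics two quarters later.
conf = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [0, 1, 1]]
jour = [[0, 0, 1], [1, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
lag, _ = best_lag(conf, jour, max_lag=3)
print("estimated lag (quarters):", lag)  # 2
```

The paper's four indicators refine this basic alignment idea; with quarterly slices, a lag of 10 slices would correspond to the reported 2.5-year maximum effect.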
Yue Huang, Runyu Tian, "Lead–lag effect of research between conference papers and journal papers in data mining," WIREs Data Mining and Knowledge Discovery, published 2024-09-24. DOI: 10.1002/widm.1561
Anna Saranti, Bastian Pfeifer, Christoph Gollob, Karl Stampfer, Andreas Holzinger
We present an exciting journey from 3D point‐cloud data (PCD) to the state of the art in graph neural networks (GNNs) and their evolution with explainable artificial intelligence (XAI), and 3D geometric priors with the human‐in‐the‐loop. We follow a simple definition of a "digital twin," as a high‐precision, three‐dimensional digital representation of a physical object or environment, captured, for example, by Light Detection and Ranging (LiDAR) technology. After a digression into transforming PCD into images, graphs, combinatorial complexes and hypergraphs, we explore recent developments in geometric deep learning (GDL) and provide insight into the application of these network architectures for analyzing and learning from graph‐structured data. We emphasize the importance of the explainability of these models and recognize that the ability to interpret and validate the results of complex models is a crucial aspect of their wider adoption.
This article is categorized under:
- Technologies > Artificial Intelligence
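One transformation the review discusses, turning point-cloud data into a graph, can be sketched as a k-nearest-neighbor construction. The brute-force version below is a simplification; production pipelines use spatial indexes such as k-d trees, and the four sample points are invented for illustration.

```python
# Minimal k-NN graph construction from a 3D point cloud: connect each
# point to its k nearest neighbors by Euclidean distance (brute force).

import math

def knn_graph(points, k):
    """Return directed edges (i, j): j is among the k nearest neighbors of i."""
    edges = []
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        edges.extend((i, j) for _, j in dists[:k])
    return edges

points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]
print(knn_graph(points, k=2))
```

The resulting edge list is exactly the input format GNN layers consume; richer variants attach edge features such as relative position or distance.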
"From 3D point‐cloud data to explainable geometric deep learning: State‐of‐the‐art and future challenges," WIREs Data Mining and Knowledge Discovery, published 2024-09-17. DOI: 10.1002/widm.1554
Mohamed Abd Elaziz, Mohammed A. A. Al‐qaness, Abdelghani Dahou, Mohammed Azmi Al‐Betar, Mona Mostafa Mohamed, Mohamed El‐Shinawi, Amjad Ali, Ahmed A. Ewees
The healthcare industry has witnessed significant interest in applying digital twins (DTs), driven by technological advancements. DTs are virtual replicas of physical entities that adapt to real‐time data, enabling predictions about their physical counterparts. DT technology enhances understanding of disease occurrence, enabling more accurate diagnoses and treatments. Integrating emerging technologies like big data, cloud computing, virtual reality (VR), and the internet of things (IoT) provides a solid foundation for DT implementation in healthcare. However, defining DTs within the healthcare context remains challenging. Exploring the potential of DTs in healthcare therefore contributes to research, emphasizing their transformative impact on personalized medicine and precision healthcare. In this study, we present diverse healthcare applications of DTs, including healthcare 4.0, cardiac analysis, monitoring and management, data privacy, socio‐ethical issues, and surgery. Moreover, this paper discusses the software and simulations of DTs that can be used in these healthcare applications, as well as future trends of DTs in healthcare.
This article is categorized under:
- Application Areas > Health Care
- Technologies > Computational Intelligence
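The definition above, a virtual replica that adapts to real-time data and makes predictions about its physical counterpart, can be reduced to a deliberately minimal sketch. The heart-rate scenario, alert threshold, and window size below are all invented for illustration and are far simpler than any clinical DT.

```python
# A minimal digital-twin sketch: the virtual replica ingests streamed
# sensor readings and offers a simple prediction about its counterpart.

class DigitalTwin:
    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.readings = []

    def sync(self, heart_rate):
        """Ingest a real-time reading from the physical entity."""
        self.readings.append(heart_rate)

    def predict_alert(self, threshold=100, window=3):
        """Flag if the recent average exceeds a (hypothetical) threshold."""
        recent = self.readings[-window:]
        return bool(recent) and sum(recent) / len(recent) > threshold

twin = DigitalTwin("patient-42")
for hr in [72, 75, 110, 118, 121]:
    twin.sync(hr)
print(twin.predict_alert())  # True: recent average exceeds the threshold
```

Real healthcare DTs replace the threshold rule with physiological or learned models and close the loop back to clinicians, but the sync-then-predict cycle is the same.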
"Digital twins in healthcare: Applications, technologies, simulations, and future trends," WIREs Data Mining and Knowledge Discovery, published 2024-09-06. DOI: 10.1002/widm.1559
Jan Hückelheim, Harshitha Menon, William Moses, Bruce Christianson, Paul Hovland, Laurent Hascoët
Automatic differentiation is a popular technique for computing derivatives of computer programs. While automatic differentiation has been successfully used in countless engineering, science, and machine learning applications, it can sometimes nevertheless produce surprising results. In this paper, we categorize problematic usages of automatic differentiation, and illustrate each category with examples such as chaos, time‐averages, discretizations, fixed‐point loops, lookup tables, linear solvers, and probabilistic programs, in the hope that readers may more easily avoid or detect such pitfalls. We also review debugging techniques and their effectiveness in these situations.
This article is categorized under:
- Technologies > Machine Learning
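The lookup-table pitfall mentioned above is easy to reproduce with a minimal forward-mode AD built on dual numbers (this sketch is our own illustration, not an example from the paper): a table-based approximation of x**2 is piecewise constant, so AD reports a zero derivative even where the true derivative is clearly nonzero.

```python
# Forward-mode AD via dual numbers, demonstrating the lookup-table pitfall:
# the primal values agree, but the propagated derivative is silently lost.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __mul__(self, other):
        # Product rule: d(u*v) = u*dv + du*v
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def square_exact(x):
    return x * x  # derivative propagates correctly: 2x

TABLE = {i: i * i for i in range(10)}

def square_table(x):
    # Rounding to an index discards the derivative information entirely.
    return Dual(TABLE[round(x.val)])

x = Dual(3.0, 1.0)          # seed dx/dx = 1
print(square_exact(x).dot)  # 6.0  (correct)
print(square_table(x).dot)  # 0.0  (pitfall: lookup kills the gradient)
```

Both versions return 9.0 as the function value at x = 3, which is precisely why such pitfalls are hard to detect without the debugging techniques the paper reviews.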
"A taxonomy of automatic differentiation pitfalls," WIREs Data Mining and Knowledge Discovery, published 2024-09-03. DOI: 10.1002/widm.1555
Advancements in Q‐learning meta‐heuristic optimization algorithms: A survey
Yang Yang, Yuchao Gao, Zhe Ding, Jinran Wu, Shaotong Zhang, Feifei Han, Xuelan Qiu, Shangce Gao, You‐Gan Wang
DOI: 10.1002/widm.1548 | Published 2024-08-19, WIREs Data Mining and Knowledge Discovery
This paper reviews the integration of Q‐learning with meta‐heuristic algorithms (QLMA) over the last 20 years, highlighting its success in solving complex optimization problems. We focus on key aspects of QLMA, including parameter adaptation, operator selection, and balancing global exploration with local exploitation. QLMA has become a leading solution in industries like energy, power systems, and engineering, addressing a range of mathematical challenges. Looking forward, we suggest further exploration of meta‐heuristic integration, transfer learning strategies, and techniques to reduce state space.
This article is categorized under: Technologies > Computational Intelligence; Technologies > Artificial Intelligence
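The operator-selection idea at the heart of QLMA can be sketched compactly. The toy below (all names and the single-state formulation are my own illustrative assumptions, not from the survey) keeps a Q-table over candidate search operators, picks operators epsilon-greedily, and rewards the ones that improve the incumbent solution of a trivial 1-D objective:

```python
import random

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
operators = ["mutate_small", "mutate_large"]
Q = {op: 0.0 for op in operators}  # Q-value per operator (single state)

def select_operator(rng):
    # epsilon-greedy: balance exploring operators with exploiting the best
    if rng.random() < EPS:
        return rng.choice(operators)
    return max(Q, key=Q.get)

def update(op, reward):
    # tabular Q-learning update, collapsed to one state
    best_next = max(Q.values())
    Q[op] += ALPHA * (reward + GAMMA * best_next - Q[op])

rng = random.Random(0)
x, fx = 0.0, abs(0.0 - 3.0)  # minimize |x - 3|, start at x = 0
for _ in range(200):
    op = select_operator(rng)
    step = rng.uniform(-0.1, 0.1) if op == "mutate_small" else rng.uniform(-1.0, 1.0)
    cand = x + step
    fc = abs(cand - 3.0)
    update(op, reward=1.0 if fc < fx else 0.0)  # reward improving operators
    if fc < fx:                                  # greedy acceptance
        x, fx = cand, fc

print(round(fx, 3))
```

Real QLMA methods replace the single state with features of the search (e.g., stagnation counters, population diversity), which is exactly the state-space growth the survey flags as a direction for reduction techniques.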
Exploring the convergence of Metaverse, Blockchain, and AI: A comprehensive survey of enabling technologies, applications, challenges, and future directions
Mueen Uddin, Muath Obaidat, Selvakumar Manickam, Shams Ul Arfeen Laghari, Abdulhalim Dandoush, Hidayat Ullah, Syed Sajid Ullah
DOI: 10.1002/widm.1556 | Published 2024-08-19, WIREs Data Mining and Knowledge Discovery
The Metaverse, distinguished by its capacity to integrate the physical and digital realms seamlessly, presents a dynamic virtual environment offering diverse opportunities for engagement across innovation, entertainment, socialization, and commercial endeavors. However, the Metaverse is poised for a transformative evolution through the convergence of contemporary technological advancements, including artificial intelligence (AI), Blockchain, Robotics, augmented reality, virtual reality, and mixed reality. This convergence is anticipated to revolutionize the global digital landscape, introducing novel social, economic, and operational paradigms for organizations and communities. To comprehensively elucidate the future potential of this technological fusion and its implications for digital innovation, this research undertakes a thorough analysis of scholarly discourse and research pertaining to the Metaverse, AI, Blockchain, and associated technologies. This survey delves into various critical facets of the Metaverse ecosystem, encompassing component analysis, exploration of digital currencies, assessment of AI utilization in virtual environments, and examination of Blockchain's role in enhancing digital content and data security. Leveraging articles retrieved from digital repositories including ScienceDirect, IEEE Xplore, Springer Nature, Google Scholar, and ACM, published between 2017 and 2023, this study adopts an analytical approach to engage with these materials. Through rigorous examination and discourse, this research aims to provide insights into the emerging trends, challenges, and future directions in the convergence of the Metaverse, Blockchain, and AI.
This article is categorized under: Application Areas > Industry Specific Applications