Haijun Zhang, Xiangyu Mu, Han Yan, Lang Ren, Jianghong Ma
With the spread of social media and the ubiquity of the Internet, recent years have witnessed the rapid growth of online video advertising among publishers and advertisers. Video advertising, as a new type of advertisement, has gained significant research attention from both academia and industry, coinciding with the ever-growing volume of online videos. In this research, we provide a comprehensive survey of online video advertising in the fields of social science and computer science. We investigate state-of-the-art articles from 1990 to the present and provide a new taxonomy of extant research topics based on these articles. We also highlight the factors through which advertising affects viewers and the most popular video advertising techniques used in computer science. Finally, on the basis of an analysis of the surveyed papers, future challenges are identified and potential solutions are discussed.
{"title":"A survey of online video advertising","authors":"Haijun Zhang, Xiangyu Mu, Han Yan, Lang Ren, Jianghong Ma","doi":"10.1002/widm.1489","DOIUrl":"https://doi.org/10.1002/widm.1489","url":null,"abstract":"With the development of social media and the ubiquity of the Internet, recent years have witnessed the rapid development of online video advertising among publishers and advertisers. Video advertising, as a new type of advertisement, has gained significant research attention from both academia and industry, coinciding with the ever‐growing volume of online videos. In this research, we provide a comprehensive survey of online video advertising in the fields of social science and computer science. We investigate state‐of‐the‐art articles from 1990 to the present and provide a new taxonomy of extant research topics based on these articles. We also highlight the factors that cause advertising to affect people and the most popular video advertising techniques used in computer science. Finally, on the basis of the analytics of the surveyed papers, future challenges are identified and potential solutions to these are discussed.","PeriodicalId":48970,"journal":{"name":"Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery","volume":"22 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2023-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73811414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated learning (FL) is a collaborative, decentralized, privacy-preserving method for tackling the challenges of data storage and data privacy. Artificial intelligence, machine learning, smart devices, and deep learning have strongly marked recent years, and two challenges arose in data science as a result. First, regulators moved to protect data through the General Data Protection Regulation (GDPR), under which organizations may not keep or transfer data without the owner's authorization. Second, the large volume of data generated in the era of big data makes keeping all of it on a single server increasingly difficult. Data is therefore distributed across different locations or generated directly on devices, creating the need to build models or perform computations without transferring the data to a single location. FL emerged as a subarea of machine learning that aims to solve the challenge of building distributed models under privacy constraints. This survey starts by describing relevant concepts, definitions, and methods, followed by an in-depth investigation of federated model evaluation. Finally, we discuss three promising applications for further research: anomaly detection, distributed data streams, and graph representation.
{"title":"Towards federated learning: An overview of methods and applications","authors":"Paula Raissa Silva, João Vinagre, João Gama","doi":"10.1002/widm.1486","DOIUrl":"https://doi.org/10.1002/widm.1486","url":null,"abstract":"Federated learning (FL) is a collaborative, decentralized privacy‐preserving method to attach the challenges of storing data and data privacy. Artificial intelligence, machine learning, smart devices, and deep learning have strongly marked the last years. Two challenges arose in data science as a result. First, the regulation protected the data by creating the General Data Protection Regulation, in which organizations are not allowed to keep or transfer data without the owner's authorization. Another challenge is the large volume of data generated in the era of big data, and keeping that data in one only server becomes increasingly tricky. Therefore, the data is allocated into different locations or generated by devices, creating the need to build models or perform calculations without transferring data to a single location. The new term FL emerged as a sub‐area of machine learning that aims to solve the challenge of making distributed models with privacy considerations. This survey starts by describing relevant concepts, definitions, and methods, followed by an in‐depth investigation of federated model evaluation. 
Finally, we discuss three promising applications for further research: anomaly detection, distributed data streams, and graph representation.","PeriodicalId":48970,"journal":{"name":"Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery","volume":"5 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2023-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74423107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
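The core FL idea summarized above, training on each client and exchanging only model parameters, can be sketched with a minimal federated averaging loop (a toy illustration in the spirit of FedAvg, not a method from the survey; the linear model and the `local_update` helper are assumptions for the example):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=10, dim=3):
    """Server loop: broadcast weights, collect local updates, average them.
    Raw data never leaves a client; only model weights are exchanged."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        # weight each client's update by its local data size
        global_w = np.average(updates, axis=0, weights=np.array(sizes, float))
    return global_w

# toy demo: three clients whose local data share one underlying linear rule
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    clients.append((X, X @ true_w))  # noise-free labels for the demo
w = federated_averaging(clients, rounds=20)
```

After a few rounds the averaged global model recovers the shared rule even though no client ever shared its raw `(X, y)` data.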
Use of conversational agents, such as chatbots, avatars, and robots, is increasing worldwide, yet their effectiveness in health care is largely unknown. The aim of this advanced review was to assess the use and effectiveness of conversational agents in various fields of health care. A literature search, analysis, and synthesis were conducted in February 2022 in PubMed and CINAHL. The included evidence was analyzed narratively by employing the principles of thematic analysis. We reviewed articles on artificial intelligence-based question-answering systems in health care. Most of the identified articles report on their effectiveness; less is known about their use. We outline study findings and explore directions for future research to provide evidence-based knowledge about artificial intelligence-based question-answering systems.
{"title":"Review of artificial intelligence‐based question‐answering systems in healthcare","authors":"Leona Cilar Budler, Lucija Gosak, G. Štiglic","doi":"10.1002/widm.1487","DOIUrl":"https://doi.org/10.1002/widm.1487","url":null,"abstract":"Use of conversational agents, like chatbots, avatars, and robots is increasing worldwide. Yet, their effectiveness in health care is largely unknown. The aim of this advanced review was to assess the use and effectiveness of conversational agents in various fields of health care. A literature search, analysis, and synthesis were conducted in February 2022 in PubMed and CINAHL. The included evidence was analyzed narratively by employing the principles of thematic analysis. We reviewed articles on artificial intelligence‐based question‐answering systems in health care. Most of the identified articles report its effectiveness; less is known about its use. We outlined study findings and explored directions of future research, to provide evidence‐based knowledge about artificial intelligence‐based question‐answering systems.","PeriodicalId":48970,"journal":{"name":"Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery","volume":"6 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2023-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75271708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Shaik, Xiaohui Tao, Niall Higgins, Lin Li, R. Gururajan, Xujuan Zhou, U. Rajendra Acharya
The adoption of artificial intelligence (AI) in healthcare is growing rapidly. Remote patient monitoring (RPM) is one of the most common healthcare applications, assisting doctors in monitoring patients with chronic or acute illness at remote locations, elderly people receiving in-home care, and even hospitalized patients. The reliability of manual patient monitoring systems depends on staff time management, which in turn depends on workload. Conventional patient monitoring also involves invasive approaches that require skin contact to track health status. This study provides a comprehensive review of RPM systems, covering the advanced technologies they adopt, the impact of AI on RPM, and the challenges and trends in AI-enabled RPM. The review explores the benefits and challenges of patient-centric RPM architectures built on Internet of Things wearable devices and sensors using cloud, fog, edge, and blockchain technologies. The role of AI in RPM ranges from physical activity classification to chronic disease monitoring and vital-signs monitoring in emergency settings. The results show that AI-enabled RPM architectures have transformed healthcare monitoring applications through their ability to detect early deterioration in patients' health, personalize individual health-parameter monitoring using federated learning, and learn human behavior patterns using techniques such as reinforcement learning. The review also discusses the challenges, trends, and implementation issues in adopting AI for RPM systems, and analyzes future directions of AI in RPM applications on that basis.
{"title":"Remote patient monitoring using artificial intelligence: Current state, applications, and challenges","authors":"T. Shaik, Xiaohui Tao, Niall Higgins, Lin Li, R. Gururajan, Xujuan Zhou, U. Rajendra Acharya","doi":"10.1002/widm.1485","DOIUrl":"https://doi.org/10.1002/widm.1485","url":null,"abstract":"The adoption of artificial intelligence (AI) in healthcare is growing rapidly. Remote patient monitoring (RPM) is one of the common healthcare applications that assist doctors to monitor patients with chronic or acute illness at remote locations, elderly people in‐home care, and even hospitalized patients. The reliability of manual patient monitoring systems depends on staff time management which is dependent on their workload. Conventional patient monitoring involves invasive approaches which require skin contact to monitor health status. This study aims to do a comprehensive review of RPM systems including adopted advanced technologies, AI impact on RPM, challenges and trends in AI‐enabled RPM. This review explores the benefits and challenges of patient‐centric RPM architectures enabled with Internet of Things wearable devices and sensors using the cloud, fog, edge, and blockchain technologies. The role of AI in RPM ranges from physical activity classification to chronic disease monitoring and vital signs monitoring in emergency settings. This review results show that AI‐enabled RPM architectures have transformed healthcare monitoring applications because of their ability to detect early deterioration in patients' health, personalize individual patient health parameter monitoring using federated learning, and learn human behavior patterns using techniques such as reinforcement learning. This review discusses the challenges and trends to adopt AI to RPM systems and implementation issues. 
The future directions of AI in RPM applications are analyzed based on the challenges and trends.","PeriodicalId":48970,"journal":{"name":"Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery","volume":"7 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2023-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80292672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. Epub Date: 2022-10-21. DOI: 10.1002/widm.1479
Fernando Marmolejo-Ramos, Mauricio Tejo, Marek Brabec, Jakub Kuzilek, Srecko Joksimovic, Vitomir Kovanovic, Jorge González, Thomas Kneib, Peter Bühlmann, Lucas Kook, Guillermo Briseño-Sánchez, Raydonal Ospina
The advent of technological developments allows researchers to gather large amounts of data in several fields. Learning analytics (LA)/educational data mining has access to big observational unstructured data captured from educational settings and relies mostly on unsupervised machine learning (ML) algorithms to make sense of such data. Generalized additive models for location, scale, and shape (GAMLSS) are a supervised statistical learning framework that allows modeling all the parameters of the distribution of the response variable with respect to the explanatory variables. This article overviews the power and flexibility of GAMLSS in relation to some ML techniques and briefly comments on GAMLSS's capability to be tailored toward causality via causal regularization. The overview is illustrated via a data set from the field of LA. This article is categorized under: Application Areas > Education and Learning; Algorithmic Development > Statistics; Technologies > Machine Learning.
{"title":"Distributional regression modeling via generalized additive models for location, scale, and shape: An overview through a data set from learning analytics.","authors":"Fernando Marmolejo-Ramos, Mauricio Tejo, Marek Brabec, Jakub Kuzilek, Srecko Joksimovic, Vitomir Kovanovic, Jorge González, Thomas Kneib, Peter Bühlmann, Lucas Kook, Guillermo Briseño-Sánchez, Raydonal Ospina","doi":"10.1002/widm.1479","DOIUrl":"10.1002/widm.1479","url":null,"abstract":"<p><p>The advent of technological developments is allowing to gather large amounts of data in several research fields. Learning analytics (LA)/educational data mining has access to big observational unstructured data captured from educational settings and relies mostly on unsupervised machine learning (ML) algorithms to make sense of such type of data. Generalized additive models for location, scale, and shape (GAMLSS) are a supervised statistical learning framework that allows modeling all the parameters of the distribution of the response variable with respect to the explanatory variables. This article overviews the power and flexibility of GAMLSS in relation to some ML techniques. Also, GAMLSS' capability to be tailored toward causality via causal regularization is briefly commented. This overview is illustrated via a data set from the field of LA. 
This article is categorized under:Application Areas > Education and LearningAlgorithmic Development > StatisticsTechnologies > Machine Learning.</p>","PeriodicalId":48970,"journal":{"name":"Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery","volume":"13 1","pages":"e1479"},"PeriodicalIF":6.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10369920/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9942054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tables are probably the most natural way to represent relational data in various media and formats. They store a large number of valuable facts that could be utilized for question answering, knowledge base population, natural language generation, and other applications. However, many tables are not accompanied by semantics that would allow automatic interpretation of the information they present. Table Understanding (TU) aims at recovering the missing semantics so that facts can be extracted from tables. This problem covers a range of issues, from table detection in document images to semantic table interpretation with the help of external knowledge bases. To date, TU research has been ongoing for 30 years. Nevertheless, there is no common point of view on the scope of TU, and the terminology still needs agreement and unification. Given the rapidly increasing scientific and technological interest in TU in recent years, it is especially important to revisit the definition of this research problem. This article gives a comprehensive characterization of the TU problem, including a description of its subproblems, tasks, subtasks, and applications. It also discusses the limitations common to existing problem statements and proposes directions for further research that would help overcome them.
{"title":"Table understanding: Problem overview","authors":"A. Shigarov","doi":"10.1002/widm.1482","DOIUrl":"https://doi.org/10.1002/widm.1482","url":null,"abstract":"Tables are probably the most natural way to represent relational data in various media and formats. They store a large number of valuable facts that could be utilized for question answering, knowledge base population, natural language generation, and other applications. However, many tables are not accompanied by semantics for the automatic interpretation of the information they present. Table Understanding (TU) aims at recovering the missing semantics that enables the extraction of facts from tables. This problem covers a range of issues from table detection in document images to semantic table interpretation with the help of external knowledge bases. To date, the TU research has been ongoing on for 30 years. Nevertheless, there is no common point of view on the scope of TU; the terminology still needs agreement and unification. In recent years, science and technology have shown a rapidly increasing interest in TU. Nowadays, it is especially important to check the meaning of this research problem once again. This article gives a comprehensive characterization of the TU problem, including a description of its subproblems, tasks, subtasks, and applications. 
It also discusses the common limitations used in the existing problem statements and proposes some directions for further research that would help overcome the corresponding limitations.","PeriodicalId":48970,"journal":{"name":"Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery","volume":"70 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2022-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89983094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
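The end goal the abstract describes, extracting facts once a table's semantics are known, can be illustrated with a toy sketch (the `table_to_facts` helper and the city table are hypothetical examples, not from the surveyed literature):

```python
import pandas as pd

def table_to_facts(df, key_col):
    """Flatten a relational table into (entity, attribute, value) facts,
    the kind of output a table-understanding pipeline ultimately produces.
    Assumes the semantics are already recovered: key_col names the column
    holding the entity each row describes."""
    facts = []
    for _, row in df.iterrows():
        entity = row[key_col]
        for col in df.columns:
            if col != key_col:
                facts.append((entity, col, row[col]))
    return facts

cities = pd.DataFrame({
    "city": ["Paris", "Rome"],
    "country": ["France", "Italy"],
    "population_m": [2.1, 2.8],
})
facts = table_to_facts(cities, key_col="city")
# e.g. ("Paris", "country", "France") is now a fact usable for KB population
```

The hard part of TU is everything this sketch takes for granted: locating the table, separating headers from data, and deciding that "city" denotes the entity column.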
Rohit Kaul, Chinedu I. Ossai, A. Forkan, P. Jayaraman, J. Zelcer, Stephen Vaughan, N. Wickramasinghe
The digital twin, succinctly described as the digital representation of a physical object, is a concept that has emerged relatively recently, with increasing application in the manufacturing industry. This article proposes applying the concept to the healthcare domain to provide enhanced clinical decision support and enable care that is more patient-centric and, simultaneously, more precise and individualized. Digital twins combined with advances in Artificial Intelligence (AI) have the potential to facilitate the integration and processing of vast amounts of heterogeneous data stemming from diverse sources; in healthcare, this can provide enhanced support for diagnosis and treatment decisions. Applying digital twins in combination with AI to complex healthcare contexts is also likely to address a key current challenge in healthcare: providing high-value, better-quality care that leads to better clinical outcomes and higher patient satisfaction. In this focus article, we examine this proposition through the case study of cancer care and present our conceptualization of a digital twin model combined with AI to address key current limitations in endometrial cancer treatment. We highlight the role of AI techniques in developing digital twins for cancer care and identify key barriers and facilitators of this process from both a healthcare and a technology perspective.
{"title":"The role of AI for developing digital twins in healthcare: The case of cancer care","authors":"Rohit Kaul, Chinedu I. Ossai, A. Forkan, P. Jayaraman, J. Zelcer, Stephen Vaughan, N. Wickramasinghe","doi":"10.1002/widm.1480","DOIUrl":"https://doi.org/10.1002/widm.1480","url":null,"abstract":"Digital twins, succinctly described as the digital representation of a physical object, is a concept that has emerged relatively recently with increasing application in the manufacturing industry. This article proposes the application of this concept to the healthcare domain to provide enhanced clinical decision support and enable more patient‐centric, and simultaneously more precise and individualized care to ensue. Digital twins combined with advances in Artificial Intelligence (AI) have the potential to facilitate the integration and processing of vast amounts of heterogeneous data stemming from diversified sources. Hence, in healthcare this can provide enhanced diagnosis and treatment decision support. In applying digital twins in combination with AI to complex healthcare contexts to assist clinical decision making, it is also likely that a key current challenge in healthcare; namely, providing better quality care which is of high value and can lead to better clinical outcomes and a higher level of patient satisfaction, can ensue. In this focus article, we address this proposition by focusing on the case study of cancer care and present our conceptualization of a digital twin model combined with AI to address key, current limitations in endometrial cancer treatment. 
We highlight the role of AI techniques in developing digital twins for cancer care and simultaneously identify key barriers and facilitators of this process from both a healthcare and technology perspective.","PeriodicalId":48970,"journal":{"name":"Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery","volume":"109 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2022-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74744437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a review of deep learning-based image steganography techniques. For completeness, recent traditional steganography techniques are also discussed briefly. Three key parameters for measuring the quality of an image steganographic technique (security, embedding capacity, and invisibility) are described, and the various techniques are reviewed with emphasis on these parameters. The steganography techniques are classified into three main categories: Traditional, Hybrid, and fully Deep Learning. The hybrid techniques are further divided into three subcategories: Cover Generation, Distortion Learning, and Adversarial Embedding. The fully Deep Learning techniques, based on the nature of the input, are further divided into three subcategories: GAN Embedding, Embedding Less, and Category Label. The main ideas of the important deep learning-based steganography techniques are described, and their strong and weak features are outlined. Results reported by researchers on the benchmark data sets CelebA, Bossbase, PASCAL-VOC12, CIFAR-100, ImageNet, and USC-SIPI are used to evaluate the performance of the various techniques. Analysis of the results shows that there is scope for new deep learning architectures that can improve the capacity and invisibility of image steganography.
{"title":"Deep learning based image steganography: A review","authors":"M. Wani, Bisma Sultan","doi":"10.1002/widm.1481","DOIUrl":"https://doi.org/10.1002/widm.1481","url":null,"abstract":"A review of the deep learning based image steganography techniques is presented in this paper. For completeness, the recent traditional steganography techniques are also discussed briefly. The three key parameters (security, embedding capacity, and invisibility) for measuring the quality of an image steganographic technique are described. Various steganography techniques, with emphasis on the above three key parameters, are reviewed. The steganography techniques are classified here into three main categories: Traditional, Hybrid, and fully Deep Learning. The hybrid techniques are further divided into three sub‐categories: Cover Generation, Distortion Learning, and Adversarial Embedding. The fully Deep Learning techniques, based on the nature of the input, are further divided into three sub‐categories: GAN Embedding, Embedding Less, and Category Label. The main ideas of the important deep learning based steganography techniques are described. The strong and weak features of these techniques are outlined. The results reported by researchers on benchmark data sets CelebA, Bossbase, PASCAL‐VOC12, CIFAR‐100, ImageNet, and USC‐SIPI are used to evaluate the performance of various steganography techniques. 
Analysis of the results shows that there is scope for new suitable deep learning architectures that can improve the capacity and invisibility of image steganography.","PeriodicalId":48970,"journal":{"name":"Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery","volume":"26 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2022-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74007487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
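As a baseline for the traditional techniques the review covers, least-significant-bit (LSB) embedding can be sketched in a few lines (an illustrative toy on a random grayscale "image", not one of the reviewed deep learning methods):

```python
import numpy as np

def embed_lsb(cover, message_bits):
    """Hide one message bit per pixel in the least-significant bit of a
    grayscale image. Changing only the LSB alters each pixel by at most 1,
    which is what gives LSB embedding its invisibility."""
    flat = cover.flatten()
    if len(message_bits) > len(flat):
        raise ValueError("message too long for this cover image")
    stego = flat.copy()
    stego[:len(message_bits)] = (stego[:len(message_bits)] & 0xFE) | message_bits
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits message bits from the stego image."""
    return stego.flatten()[:n_bits] & 1

cover = np.random.default_rng(2).integers(0, 256, size=(8, 8), dtype=np.uint8)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cover, bits)
recovered = extract_lsb(stego, len(bits))
```

This baseline makes the review's three quality parameters concrete: capacity is one bit per pixel, invisibility follows from the at-most-1 pixel change, and security is weak because LSB statistics are easily detected, which is precisely what the deep learning approaches try to improve.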
Parisa Moridian, A. Shoeibi, Marjane Khodatars, M. Jafari, R. B. Pachori, Ali Khadem, R. Alizadehsani, S. Ling
Apnea is a sleep disorder in which airflow stops or is reduced for a short time during sleep. A sleep apnea event may last only a few seconds but can occur many times per night. This reduction in breathing is associated with loud snoring and may awaken the person with a feeling of suffocation. A variety of methods have been introduced to diagnose sleep apnea, among which polysomnography (PSG) is considered the best; however, the analysis of PSG signals is very complicated. Many studies have therefore addressed the automatic diagnosis of sleep apnea from biological signals using artificial intelligence (AI), including machine learning (ML) and deep learning (DL) methods. This research reviews those studies. First, computer-aided diagnosis systems (CADS) for sleep apnea using ML and DL techniques are introduced, along with their parts, including data sets, preprocessing, and the ML and DL methods themselves. The article also summarizes the important specifications of the reviewed studies in a table, followed by a comprehensive discussion of the work carried out in this field. The challenges in diagnosing sleep apnea using AI methods are of paramount importance for researchers and are elaborately addressed, as are the most important directions for future work on sleep apnea detection from PSG signals with AI techniques. The essential findings of this study are provided in the conclusion.
{"title":"Automatic diagnosis of sleep apnea from biomedical signals using artificial intelligence techniques: Methods, challenges, and future works","authors":"Parisa Moridian, A. Shoeibi, Marjane Khodatars, M. Jafari, R. B. Pachori, Ali Khadem, R. Alizadehsani, S. Ling","doi":"10.1002/widm.1478","DOIUrl":"https://doi.org/10.1002/widm.1478","url":null,"abstract":"Apnea is a sleep disorder that stops or reduces airflow for a short time during sleep. Sleep apnea may last for a few seconds and happen for many while sleeping. This reduction in breathing is associated with loud snoring, which may awaken the person with a feeling of suffocation. So far, a variety of methods have been introduced by researchers to diagnose sleep apnea, among which the polysomnography (PSG) method is known to be the best. Analysis of PSG signals is very complicated. Many studies have been conducted on the automatic diagnosis of sleep apnea from biological signals using artificial intelligence (AI), including machine learning (ML) and deep learning (DL) methods. This research reviews and investigates the studies on the diagnosis of sleep apnea using AI methods. First, computer aided diagnosis system (CADS) for sleep apnea using ML and DL techniques along with its parts including dataset, preprocessing, and ML and DL methods are introduced. This research also summarizes the important specifications of the studies on the diagnosis of sleep apnea using ML and DL methods in a table. In the following, a comprehensive discussion is made on the studies carried out in this field. The challenges in the diagnosis of sleep apnea using AI methods are of paramount importance for researchers. Accordingly, these obstacles are elaborately addressed. In another section, the most important future works for studies on sleep apnea detection from PSG signals and AI techniques are presented. 
Ultimately, the essential findings of this study are provided in the conclusion section.","PeriodicalId":48970,"journal":{"name":"Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery","volume":"17 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2022-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77817445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
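The kind of airflow-reduction event described above can be illustrated with a crude rule-based detector on a synthetic airflow trace (a toy sketch, not one of the surveyed AI methods; the 2-second windows, 10% amplitude threshold, and 10-second minimum duration are assumptions for the example):

```python
import numpy as np

def detect_apnea_events(airflow, fs, drop_frac=0.1, min_dur=10.0):
    """Flag intervals where breathing amplitude falls below drop_frac of the
    baseline amplitude for at least min_dur seconds."""
    win = int(fs * 2)                      # 2-s windows for an amplitude envelope
    n = len(airflow) // win
    env = np.abs(airflow[:n * win]).reshape(n, win).max(axis=1)
    low = env < drop_frac * np.median(env)  # median amplitude as baseline
    events, start = [], None
    for i, flag in enumerate(low):          # group consecutive low windows
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * 2 >= min_dur:
                events.append((start * 2, i * 2))  # (start_s, end_s)
            start = None
    if start is not None and (n - start) * 2 >= min_dur:
        events.append((start * 2, n * 2))
    return events

# synthetic airflow: ~15 breaths/min, with a 15-s apnea from t=30s to t=45s
fs = 25
t = np.arange(0, 60, 1 / fs)
airflow = np.sin(2 * np.pi * 0.25 * t)
airflow[(t >= 30) & (t < 45)] *= 0.02      # near-total airflow reduction
events = detect_apnea_events(airflow, fs)
```

Real PSG signals are far noisier than this sine wave, which is why the surveyed work turns to ML and DL models instead of fixed thresholds.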
Computational phenotyping allows for the unsupervised discovery of subgroups of patients, together with their co-occurring medical conditions, from electronic health records (EHR). Typically, EHR data contain demographic information, diagnoses, and laboratory results. Discovering (novel) phenotypes has potential prognostic and therapeutic value. Providing medical practitioners with transparent and interpretable results is an important requirement and an essential part of advancing precision medicine. Low-rank data approximation methods such as matrix factorizations (e.g., nonnegative matrix factorization) and tensor decompositions (e.g., CANDECOMP/PARAFAC) have demonstrated that they can provide such transparent and interpretable insights. Recent developments have adapted these methods by incorporating constraints and regularizations that further facilitate interpretability, and they offer solutions for common challenges in EHR data such as high dimensionality, sparsity, and incompleteness. Extracting temporal phenotypes from longitudinal EHR, in particular, has received much attention in recent years. In this paper, we provide a comprehensive review of low-rank approximation-based approaches for computational phenotyping. The existing literature is categorized into temporal versus static phenotyping approaches based on matrix versus tensor decompositions. Furthermore, we outline different approaches for the validation of phenotypes, that is, the assessment of their clinical significance.
{"title":"Unsupervised EHR‐based phenotyping via matrix and tensor decompositions","authors":"Florian Becker, A. Smilde, E. Acar","doi":"10.1002/widm.1494","DOIUrl":"https://doi.org/10.1002/widm.1494","url":null,"abstract":"Computational phenotyping allows for unsupervised discovery of subgroups of patients as well as corresponding co‐occurring medical conditions from electronic health records (EHR). Typically, EHR data contains demographic information, diagnoses and laboratory results. Discovering (novel) phenotypes has the potential to be of prognostic and therapeutic value. Providing medical practitioners with transparent and interpretable results is an important requirement and an essential part for advancing precision medicine. Low‐rank data approximation methods such as matrix (e.g., nonnegative matrix factorization) and tensor decompositions (e.g., CANDECOMP/PARAFAC) have demonstrated that they can provide such transparent and interpretable insights. Recent developments have adapted low‐rank data approximation methods by incorporating different constraints and regularizations that facilitate interpretability further. In addition, they offer solutions for common challenges within EHR data such as high dimensionality, data sparsity and incompleteness. Especially extracting temporal phenotypes from longitudinal EHR has received much attention in recent years. In this paper, we provide a comprehensive review of low‐rank approximation‐based approaches for computational phenotyping. The existing literature is categorized into temporal versus static phenotyping approaches based on matrix versus tensor decompositions. 
Furthermore, we outline different approaches for the validation of phenotypes, that is, the assessment of clinical significance.","PeriodicalId":48970,"journal":{"name":"Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery","volume":"5 1","pages":""},"PeriodicalIF":7.8,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72810907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
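The matrix-factorization side of the approaches reviewed above can be sketched with nonnegative matrix factorization on a toy patient-by-diagnosis count matrix (an illustration on assumed synthetic data, not a method or data set from the paper):

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy EHR-like matrix: 100 patients x 6 diagnosis codes, generated from two
# latent "phenotypes" with disjoint diagnosis groups (e.g., cardiac-like codes
# 0-2 and metabolic-like codes 3-5; the grouping is a made-up example).
rng = np.random.default_rng(3)
H_true = np.array([[5, 4, 3, 0, 0, 0],
                   [0, 0, 0, 4, 5, 3]], dtype=float)   # phenotype definitions
W_true = rng.uniform(0, 1, size=(100, 2))               # patient memberships
X = rng.poisson(W_true @ H_true)                        # observed counts

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)   # rows: each patient's loading on each phenotype
H = model.components_        # rows: each phenotype's profile over diagnoses
```

Nonnegativity is what makes the result interpretable in the way the abstract emphasizes: each phenotype is an additive bundle of diagnoses, and each patient is an additive mix of phenotypes, with no cancelling negative terms.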