Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100060
Shawn Ming Song Toh, Ariyan Ashkanfar, Russell English, Glynn Rothwell
As the current obesity epidemic grows, an increased number of obese patients undergoing Total Hip Arthroplasty (THA) can be expected in the coming years. The UK National Health Service (NHS) recommends that an obese patient undergo weight loss before THA. It is understood that increased body weight increases wear rates on the prostheses; however, the extent of the increase and its impact on the longevity of the prosthesis are unclear. The NHS found that 45% of THA failures in 2019 were caused by wear, which leads to failure modes such as infection, aseptic loosening and dislocation, ultimately requiring revision surgery. In this study, a finite element model was created to simulate a walking cycle, and a newly developed wear algorithm was used to perform a series of computational wear analyses investigating the effect of different patient weights on the evolution of wear in THAs up to 5 million cycles. The wear rates found in this study are closely comparable to previous literature. The XLPE volumetric wear rates were between 15 and 35 mm3/yr (literature range: 1.5–57.6 mm3/yr) and the femoral head taper surface volumetric wear rates were between 0.174 and 0.225 mm3/yr (literature range: 0.01–3.15 mm3/yr). The results also showed that an increased body weight of 140 kg can increase metallic wear by 26% and polyethylene wear by 30% compared with a 100 kg body weight. As increased wear can lead to failure modes such as aseptic loosening, dislocation and metallosis, this study supports the recommendation that obese patients undergo weight loss and maintain the lower weight to reduce wear and prolong the life of the THA.
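The abstract does not give the authors' wear algorithm, but computational THA wear studies of this kind typically build on an Archard-type law, in which wear depth at each contact node accumulates with local contact pressure and sliding distance per gait cycle. The sketch below is a generic illustration of that idea with entirely hypothetical pressures, sliding distances and wear factor, not the paper's finite element model; it merely shows why wear scales roughly linearly with joint load in such a model.

```python
# Illustrative Archard-type wear update (a generic sketch, NOT the
# authors' finite-element wear algorithm). Incremental wear depth at a
# contact node per cycle: dh = k * p * s, where k is a wear factor,
# p the local contact pressure and s the sliding distance per cycle.

def accumulate_wear(pressures, sliding, k, cycles):
    """Return total wear depth (mm) per node after `cycles` gait cycles."""
    return [k * p * s * cycles for p, s in zip(pressures, sliding)]

# Hypothetical values: a heavier patient scales joint contact pressure,
# so wear depth scales roughly linearly with load in this simple model.
p_100kg = [2.0, 3.5, 5.0]             # MPa at three contact nodes
p_140kg = [1.4 * p for p in p_100kg]  # ~40% heavier patient
s = [10.0, 12.0, 8.0]                 # mm of sliding per cycle
k = 1.0e-9                            # illustrative wear factor

w100 = accumulate_wear(p_100kg, s, k, cycles=5_000_000)
w140 = accumulate_wear(p_140kg, s, k, cycles=5_000_000)
```

In this linear model the 140 kg wear depths are exactly 1.4x the 100 kg ones; the paper's smaller reported increases (26–30%) reflect the nonlinear contact mechanics a full finite element analysis captures.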
Title: The relation between body weight and wear in total hip prosthesis: A finite element study (Computer Methods and Programs in Biomedicine Update, vol. 2, Article 100060)
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100058
Wallaci P. Valentino, Michele C. Valentino, Douglas Azevedo, Natáli V.O. Bento-Torres
Background: The Clinical Dementia Rating (CDR) scale is a standard qualitative instrument, widely applied for staging the severity of dementia, that is based on information elicited through a semi-structured interview standardized in an assessment protocol. Although clinical skill is required to elicit the appropriate information, subjectivity remains in the administration of the protocol and in the CDR scoring process. In this paper we propose a fuzzy rule-based CDR instrument to stage dementia, built on the usual CDR, that aims to address the subjectivity of the usual scoring process. This is achieved by the F-CDR, our proposed expert system, which assigns scores continuously over the interval [0,3].
Methods: The fuzzy rule-based model for the CDR proposed in this paper is a fuzzy inference system (FIS) constructed in MATLAB with the aid of the Fuzzy Logic Designer app. The FIS was built from the CDR and specialist input and tested on real data provided by ADNI.
Results: To test the performance of the fuzzy model, we compare the outputs of the F-CDR approach against the outputs of a usual application of the CDR (U-CDR) given the same inputs. The dataset provided by ADNI, comprising more than eleven thousand CDR tests including both inputs and outputs (U-CDR), is the source for the comparisons.
Conclusion: The high rate of agreement between U-CDR and F-CDR on the same inputs, over random samples drawn from the ADNI dataset, suggests that the proposed fuzzy approach is a suitable extension of the usual CDR scoring process, since it allows scoring continuously over the interval [0,3].
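The authors implement their system as a Mamdani-style FIS in MATLAB's Fuzzy Logic Designer; their rule base and membership functions are not given in the abstract. As a minimal sketch of the underlying idea, the Python function below uses hypothetical triangular memberships centred on the five standard CDR stages (0, 0.5, 1, 2, 3) and a centroid-style defuzzification, which is what lets the output vary continuously over [0,3] rather than jumping between discrete stages.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_cdr(domain_score):
    """Map a domain score in [0, 3] to a continuous CDR-like value by
    weighting the usual CDR stages (0, 0.5, 1, 2, 3) with hypothetical
    overlapping memberships (NOT the authors' rule base)."""
    stages = [0.0, 0.5, 1.0, 2.0, 3.0]
    supports = [(-0.5, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 2.0),
                (1.0, 2.0, 3.0), (2.0, 3.0, 4.0)]
    weights = [tri(domain_score, *s) for s in supports]
    total = sum(weights)
    # Centroid-style defuzzification over the activated stages.
    return sum(w * c for w, c in zip(weights, stages)) / total if total else 0.0
```

With these particular memberships the mapping is the identity on [0,3]; in a full FIS, multiple domain inputs and rules shift the output between stages, which is where the continuous scoring becomes informative.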
Title: A fuzzy rule-based approach via MATLAB for the CDR instrument for staging the severity of dementia (Computer Methods and Programs in Biomedicine Update, vol. 2, Article 100058)
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100076
Arfan Ahmed, Marco Agus, Mahmood Alzubaidi, Sarah Aziz, Alaa Abd-Alrazaq, Anna Giannicchi, Mowafa Househ
Background: Big Data offers promise in the field of mental health, playing an important role in the automation, analysis and prediction of mental health disorders.
Objective: The purpose of this scoping review is to explore how Big Data has been used in mental health. The review specifically addresses the volume, velocity, veracity and variety of the collected data, as well as how the data were obtained, stored, managed, and kept private and secure.
Methods: Six databases were searched for relevant articles. The PRISMA Extension for Scoping Reviews (PRISMA-ScR) was followed as the guideline methodology for developing a comprehensive scoping review. General and Big Data features were extracted from the reviewed studies and analyzed with respect to data collection, protection, storage and processing, targeted disorder, and application purpose.
Results: A collection of 23 studies was analyzed, mostly targeting depression (n = 13) and anxiety (n = 4). The most common data sources were social media posts (n = 5), tweets (n = 7), and medical records (n = 6). Various Big Data technologies were used. Only 7 studies addressed data protection, with anonymization schemes for medical records and surveys (n = 4) and safe authentication methods for social media (n = 3). For data processing, Machine Learning (ML) models appeared in 22 studies, of which Random Forest (RF) was the most widely used (n = 5); Logistic Regression (LR) was used in 4 studies and Support Vector Machine (SVM) in 3.
Conclusion: A great deal of effort is still needed to utilize Big Data to mitigate mental health disorders and predict their onset. Integration and analysis of Big Data from different sources, such as social media and health records, and information exchange between multiple disciplines are also needed. Using Artificial Intelligence (AI) and Machine Learning (ML) techniques, doctors and researchers alike can find patterns in data that would otherwise be difficult to identify; similarly, AI and ML can automate the analytical process.
Title: Overview of the role of big data in mental health: A scoping review (Computer Methods and Programs in Biomedicine Update, vol. 2, Article 100076)
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100083
Nikolaus Börner, Markus B. Schoenberg, Philipp Pöschke, Benedikt Pöllmann, Dominik Koch, Moritz Drefs, Dionysios Koliogiannis, Christian Böhm, Jens Werner, Markus Guba
Background and Objectives
Data science methods have matured to the point of solving complex medical problems, but the data records they rely on are often incomplete. In this study we developed and validated a novel multidimensional medical combined imputation (MMCI) application to analyse multifaceted and segmented datasets such as those found in liver transplantation registries.
Methods
The multidimensional medical combined imputation (MMCI) application is a pipeline of interconnected methods for imputing segmented clinical data with the highest possible accuracy. Two complete datasets were used for testing: a transplantation dataset (TxData) and the multivariate Wisconsin breast cancer (diagnostic) dataset (BcData). For both datasets, the most common imputation methods were tested and their accuracy (ACC) compared with that of the novel MMCI (RF and LR variants).
Results
In the TxData, the MMCI RF and MMCI LR outperformed the other imputation algorithms in terms of ACC. In the BcData, overall performance was good: the MMCI LR was the most accurate algorithm for up to 10% missing values, with ACC = 91.9 at 5% missing and 90.6 at 10% missing, while the MMCI RF was the most accurate from 20% missing (ACC = 89.9) to 30% missing (ACC = 89.4). All other established imputation algorithms showed inferior ACC, with MF and MICE coming closest at around ACC = 90.
Conclusion
This study presents the MMCI as a novel imputation pipeline for handling segmented and multifaceted clinical data. The MMCI proved more accurate than the established imputation methods when analysing 5–30% missing data. Future studies should investigate the value of the MMCI for predicting missing values in other datasets.
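The MMCI pipeline itself is not specified in the abstract, but the evaluation protocol it describes (start from a complete dataset, hide 5–30% of the values, impute, and score accuracy against the hidden ground truth) can be sketched generically. The snippet below shows that masking-and-scoring loop with a simple column-mean baseline imputer standing in for MMCI; the masking fraction, seed, and imputer are illustrative assumptions, not the authors' setup.

```python
import random

def mask_values(rows, frac, seed=0):
    """Hide roughly `frac` of the entries (set them to None). Returns the
    masked copy and the list of (row, col, true_value) hidden cells so
    an imputer's output can later be scored against the ground truth."""
    rng = random.Random(seed)
    masked = [list(r) for r in rows]
    hidden = []
    for i, r in enumerate(masked):
        for j in range(len(r)):
            if rng.random() < frac:
                hidden.append((i, j, r[j]))
                r[j] = None
    return masked, hidden

def mean_impute(rows):
    """Baseline imputer: fill each missing cell with its column mean
    (a stand-in for the MMCI pipeline, for illustration only)."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) / sum(v is not None for v in c)
             for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(r)]
            for r in rows]
```

Scoring then reduces to comparing each imputed cell against its entry in `hidden`, exactly the ACC comparison reported for 5%, 10%, 20% and 30% missingness.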
Title: A custom build multidimensional medical combined imputation application for a transplantation dataset (Computer Methods and Programs in Biomedicine Update, vol. 2, Article 100083)
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100061
Rafiqul Islam, Fumihiko Yokota, Mariko Nishikitani, Kimiyo Kikuchi, Yoko Sato, Rieko Izukura, Md. Mahmudur Rahman, Md. Rajib Chowdhury, Ashir Ahmed, Naoki Nakashima
Background
A developing country like Bangladesh has suffered greatly from the sudden onset of the COVID-19 pandemic owing to the shortage of medical facilities for testing and follow-up treatment. The Portable Health Clinic (PHC) system therefore added a COVID-19 module with a triage function for detecting suspected COVID-19 cases and following up home-quarantined COVID-19 patients, reducing the workload on the limited medical facilities.
Methods
The PHC COVID-19 system provides a questionnaire-based triage function, informed by Japanese disease-management practice, for early detection of suspected COVID-19 patients who may need a confirmatory test. Only highly suspected patients are then sent for testing, keeping unnecessary crowds away from the confirmatory PCR test centers and hospitals. Like the basic PHC system, it also supports treatment and follow-up of home-quarantined COVID-19 positive and suspected patients through a telemedicine system. The COVID-19 service box contains four self-checking medical sensors, namely (1) a thermometer, (2) a pulse oximeter, (3) a blood pressure machine, and (4) a glucometer, for monitoring the patient's health, along with a tablet PC running the COVID-19 application for tele-consultancy between patient and doctor.
Results
This study conducted COVID-19 triage among 300 villagers and identified 220 green, 45 light-yellow, 2 yellow, 30 orange, and 3 red patients. In addition to the 3 red patients, the call-center doctors referred another 13 of the 30 orange patients to health facilities for PCR tests as suspected COVID-19 positives and placed them under follow-up. Of these 16 (3 + 13) patients, only 4 went for a PCR test, and 3 of them tested positive. The remaining orange, yellow and light-yellow patients were advised to quarantine at home under the follow-up of the PHC health workers and recovered within 1–2 weeks.
Conclusions
This system can contribute to community healthcare by ensuring quality service for suspected patients and for the 80% or more of confirmed COVID-19 positive patients who are in a moderate or mild state and do not need hospitalization. The PHC COVID-19 system delivers its services while maintaining social distance, preventing infection and ensuring clinical safety for both patients and health workers.
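The abstract names the five PHC triage colors but not the questionnaire logic behind them. As a purely hypothetical sketch of how such a color-coded triage rule might look, the function below maps a few vital signs to the five categories; every threshold here is an illustrative assumption, not the PHC system's actual criteria.

```python
def triage(temp_c, spo2, breathing_difficulty, symptom_days):
    """Hypothetical color-coded triage in the spirit of the PHC system.
    The real PHC questionnaire and thresholds are not published in the
    abstract; these cut-offs are illustrative only."""
    if spo2 < 90 or breathing_difficulty:
        return "red"           # urgent: refer to a health facility
    if temp_c >= 39.0 and symptom_days >= 4:
        return "orange"        # strong suspect: consider PCR referral
    if temp_c >= 38.0:
        return "yellow"        # moderate suspicion: monitor closely
    if temp_c >= 37.5 or symptom_days > 0:
        return "light-yellow"  # mild symptoms: home follow-up
    return "green"             # no current suspicion
```

Routing only the "red" and selected "orange" patients to PCR testing is what keeps the crowds away from test centers, as the Results describe (16 referrals out of 300 villagers).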
Title: Portable health clinic COVID-19 system for remote patient follow-up ensuring clinical safety (Computer Methods and Programs in Biomedicine Update, vol. 2, Article 100061)
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100075
Dominik Siekierski, Krzysztof Siwek
The paper describes the construction of a classifier based on the information contained in the MIT-BIH arrhythmia database. This data source contains electrocardiographic signals from two sensors, and both were used, which is not typical. In the learning process, the classifier uses only information with high certainty: the data are based on expert annotations, and the errors found have been corrected over the years. The specific heartbeat types were divided into groups according to the Association for the Advancement of Medical Instrumentation (AAMI) standard, which recommends splitting them into five separate groups by physiological origin. Rare heartbeats have a limited number of occurrences; for one group, data-modification (augmentation) methods were used to sufficiently increase the amount of data in the training sets, which had a beneficial impact on the results. The solution includes feature extraction, and the main module of the classifier is a deep neural network. Good results were obtained with tools supporting automatic hyperparameter selection. In ECG signal diagnostics, the most significant task is to properly separate the supraventricular and ventricular beat groups; the study achieved an exceptionally low error on this task and an overall accuracy of 98.37%.
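The five-group split the paper follows is the standard AAMI EC57 grouping of MIT-BIH beat annotation symbols by physiological origin, which is widely used in the ECG-classification literature. The mapping below reflects that standard grouping (this is background convention, not code from the paper):

```python
# AAMI EC57 groups the MIT-BIH beat annotation symbols into five
# classes by physiological origin (standard grouping in the ECG
# literature, independent of this particular paper).
AAMI_CLASSES = {
    "N": ["N", "L", "R", "e", "j"],  # normal and bundle-branch-block beats
    "S": ["A", "a", "J", "S"],       # supraventricular ectopic beats
    "V": ["V", "E"],                 # ventricular ectopic beats
    "F": ["F"],                      # fusion of ventricular and normal
    "Q": ["/", "f", "Q"],            # paced and unclassifiable beats
}
SYMBOL_TO_CLASS = {sym: cls
                   for cls, syms in AAMI_CLASSES.items()
                   for sym in syms}

def aami_class(mitbih_symbol):
    """Map a MIT-BIH annotation symbol to its AAMI class ('Q' if unknown)."""
    return SYMBOL_TO_CLASS.get(mitbih_symbol, "Q")
```

Separating the "S" and "V" groups well is the hard part the paper emphasises, since supraventricular ectopic beats are rare and morphologically close to normal beats.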
Title: Heart beats classification method using a multi-signal ECG spectrogram and convolutional neural network with residual blocks (Computer Methods and Programs in Biomedicine Update, vol. 2, Article 100075)
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100069
Guillaume Mestrallet
Background
The gold standard for managing patients with large-surface full-thickness burns is the autologous skin graft. When burns affect <40% of total body surface area (TBSA), meshed skin samples harvested from unaffected donor sites can be used as grafts. In more severe cases (burns affecting >40% TBSA), the donor-site surfaces are insufficient. The alternative grafting strategy uses bioengineered skin substitutes generated from the patient's own keratinocytes after ex vivo expansion. Although the technology for producing autografts is not new, there is currently no way to accurately assess burned areas and predict the number of cells necessary to produce the graft.
Methods
Optimal setup of the bioengineering process involves determining the required graft surface, adjusting cell quantities, and controlling the timing of production. Accordingly, tools that assist the design of personalized protocols will contribute to care quality and cost limitation.
Results
The article describes the principle of software-assisted calculation of burn size, the required graft surface, and the number of keratinocytes needed, according to the patient's specific clinical characteristics. The software also assists in estimating the Baux score, a method proposed to link the severity of burn injuries to the patient's prognosis.
Conclusion
This software illustrates a principle for assisted diagnosis of burned patients and for the skin-substitute bioengineering process. Its development may facilitate the design of personalized protocols for skin regenerative cell therapies.
{"title":"Software development for severe burn diagnosis and autologous skin substitute production","authors":"Guillaume Mestrallet","doi":"10.1016/j.cmpbup.2022.100069","DOIUrl":"10.1016/j.cmpbup.2022.100069","url":null,"abstract":"<div><h3>Background</h3><p>The gold-standard for the management of patients affected by large-surface full thickness burns is autologous skin graft. When burns affect <40% total body surface area (TBSA), meshed skin samples harvested from non-affected donor sites can be used as grafts. In more severe cases corresponding to burns affecting >40% TBSA), the donor site surfaces are insufficient. The alternative grafting strategy uses bioengineered skin substitutes that are generated using the own keratinocytes of the patient after ex vivo expansion. Today, although the technology for producing autografts is not new, there is no way to accurately assess burned areas and predict the number of cells necessary to produce the graft.</p></div><div><h3>Methods</h3><p>Optimal setup of the bioengineering process involved determination of the required graft surface, adjustment of cell quantities, and control of the timing necessary for production. Accordingly, tools to assist the design of personalized protocols will certainly contribute to care quality and cost limitation.</p></div><div><h3>Results</h3><p>The article describes the principle of a software-assisted calculation of the burn size, the required graft surface and keratinocyte numbers needed, according to specific patient clinical characteristics. The software also offers assistance to estimate the Baux score, a method that has been proposed to link the severity of burn injuries and the prognosis for the patient.</p></div><div><h3>Conclusion</h3><p>This software provides a principle of assisted burned patient diagnose and skin substitute bioengineering process. 
The software development may facilitate the design of personalized protocols for skin regenerative cell therapies.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100069"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000209/pdfft?md5=fe2e872e3ade1584887595d2a10392ee&pid=1-s2.0-S2666990022000209-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48254678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100077
Beatriz Nistal-Nuño
Background
This article proposes a prototype of a user-adaptive system that helps patients obtain their prescribed ambulatory medications when purchasing online, in a more convenient manner than traditional methods, and adopts artificial intelligence to achieve these improvements. The system simulates an online pharmacy with an introductory adaptive user interface that uses Bayesian user modeling to predict patients' medication needs. This program is used to show its step-by-step design and functioning.
Methods
The introductory adaptive user interface was developed in Visual C++ with Microsoft Visual Studio. Patient-model acquisition, learning, and inference were implemented with a Bayesian network, elaborated with the GeNIe Modeler software, version 2.3.R4, provided by BayesFusion, LLC. A synthetically generated dataset of anonymous patients was used. The performance of the system was evaluated through simulations using testing data from the synthetic dataset, and the accuracy of predictions was analyzed.
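To make the idea of Bayesian user modeling concrete, here is a hedged sketch of the general technique, not the GeNIe network from the paper: a naive Bayes classifier over synthetic, hypothetical patient attributes that predicts a medication category. The feature names and categories are invented for illustration.

```python
# Illustrative naive Bayes (a simple special case of a Bayesian network)
# predicting a medication category from discrete patient attributes.
from collections import Counter, defaultdict
import math


def train(records):
    """records: list of (features_dict, medication_category)."""
    priors = Counter(cat for _, cat in records)
    likelihoods = defaultdict(Counter)  # per-category counts of (feature, value)
    for feats, cat in records:
        for fv in feats.items():
            likelihoods[cat][fv] += 1
    return priors, likelihoods, len(records)


def predict(model, feats):
    priors, likelihoods, n = model
    best, best_lp = None, -math.inf
    for cat, prior in priors.items():
        lp = math.log(prior / n)
        for fv in feats.items():
            # Laplace smoothing avoids zero probabilities for unseen pairs
            lp += math.log((likelihoods[cat][fv] + 1) / (prior + 2))
        if lp > best_lp:
            best, best_lp = cat, lp
    return best


# Tiny synthetic training set (hypothetical attributes and categories).
data = [({"age": "senior", "dx": "hypertension"}, "antihypertensive"),
        ({"age": "senior", "dx": "hypertension"}, "antihypertensive"),
        ({"age": "adult", "dx": "diabetes"}, "antidiabetic")]
model = train(data)
print(predict(model, {"age": "senior", "dx": "hypertension"}))  # antihypertensive
```

A full Bayesian network, as built in GeNIe, would additionally model dependencies between the attributes rather than assuming conditional independence.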
Results
Average accuracy was estimated as the mean share of correct medication recommendations, for different numbers of purchased medications per session. Average accuracy increased with the number of purchased medications, from 86.3529% up to 92.5303%. The average number of wrong recommendations decreased as the number of purchased medications increased, from 3.4117 down to 1.5686.
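The evaluation metric described above can be sketched as follows; the session data here are hypothetical, not the paper's dataset:

```python
# Per-session accuracy = correct recommendations / purchased medications;
# sessions are then averaged by basket size, as in the reported metric.
from collections import defaultdict


def average_accuracy(sessions):
    """sessions: list of (n_purchased, n_correct) pairs."""
    by_size = defaultdict(list)
    for n_purchased, n_correct in sessions:
        by_size[n_purchased].append(n_correct / n_purchased)
    return {n: sum(accs) / len(accs) for n, accs in by_size.items()}


print(average_accuracy([(4, 3), (4, 4), (2, 2)]))  # {4: 0.875, 2: 1.0}
```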
Conclusion
The system quickly and consistently attained high accuracy in predicting the medication categories needed by patients, potentially saving patients time and effort when they rely on the system's recommendations.
{"title":"Medication recommendation system for online pharmacy using an adaptive user interface","authors":"Beatriz Nistal-Nuño","doi":"10.1016/j.cmpbup.2022.100077","DOIUrl":"10.1016/j.cmpbup.2022.100077","url":null,"abstract":"<div><h3>Background</h3><p>This article proposes a prototype of a user-adaptive system for helping patients to obtain their ambulatory prescribed medications when purchasing online in a more convenient manner than traditional methods, and the adoption of artificial intelligence to achieve improvements. The system developed simulates an online pharmacy with an introductory adaptive user interface using Bayesian user modeling for predicting the medication needs of patients. This program is used to show its step-by-step design and functioning.</p></div><div><h3>Methods</h3><p>The introductory adaptive user interface was developed on Visual C++ of Microsoft Visual Studio. The patient model acquisition and application implementing the learning and inference was performed with a Bayesian Network. The Bayesian network was elaborated with the GeNIe Modeler software, Version 2.3.R4, provided by BayesFusion, LLC. Synthetic data from a synthetically generated dataset of anonymous patients was used. The performance of the system was evaluated through simulations using testing data from the synthetic dataset. The Accuracy of predictions was analyzed.</p></div><div><h3>Results</h3><p>The Average accuracy was estimated with the average correct recommendations of medications, for different numbers of purchased medications per session. The Average accuracy increased with the number of purchased medications, from 86.3529% up to 92.5303%. 
The Average wrong recommendations decreased with the increase in the number of purchased medications, from an average of 3.4117 up to 1.5686.</p></div><div><h3>Conclusion</h3><p>The system quickly and consistently attained high accuracy in predicting the medication categories needed by the patients, potentially being able to save time and effort for the patients by relying on the system's recommendations.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100077"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000283/pdfft?md5=29de8819d69373b47177516bd3073d52&pid=1-s2.0-S2666990022000283-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"54050371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100049
Arfan Ahmed , Nashva Ali , Mahmood Alzubaidi , Wajdi Zaghouani , Alaa A Abd-alrazaq , Mowafa Househ
Background
Corpora play a vital role in training machine learning (ML) models and in building systems that use natural language processing (NLP). It can be challenging for researchers to access corpora in a language other than English, and even more so if the corpora are not available free of cost. The Arabic language is used by more than 1.5 billion Muslims, as the Quran, the core text of Islam, is written in Arabic, and it is the native language of over 250 million people.
Objective
To highlight peer-reviewed literature reporting free and accessible Arabic corpora. We aimed to benefit researchers by providing insights into freely available and accessible Arabic corpora, allowing them to achieve their research goals with ease.
Methods
Conducting a scoping review following PRISMA guidelines, we searched the most common information technology (IT) databases and identified Arabic corpora that are accessible and free of cost.
Results
We identified a total of 48 accessible corpora sources available free of cost in the Arabic language. We present our findings by category, with direct links where available, to further help readers understand the corpora; the results were classified by corpus type into five categories based on their primary purpose.
Conclusion
Arabic is underrepresented among freely available corpora, as most such corpora are available in English. Although previous studies have searched for corpora, ours is the first of its kind in that it follows PRISMA guidelines and covers peer-reviewed articles identified by searching the most common IT databases and through source recommendations from language experts.
{"title":"Freely Available Arabic Corpora: A Scoping Review","authors":"Arfan Ahmed , Nashva Ali , Mahmood Alzubaidi , Wajdi Zaghouani , Alaa A Abd-alrazaq , Mowafa Househ","doi":"10.1016/j.cmpbup.2022.100049","DOIUrl":"https://doi.org/10.1016/j.cmpbup.2022.100049","url":null,"abstract":"<div><h3>Background</h3><p>Corpora play a vital role when training machine learning (ML) models and building systems that use natural language processing (NLP). It can be challenging for researchers to access corpora in a language other than English, and even more so if the corpora are not available for free of cost. The Arabic language is used by more than 1.5 billion Muslims and is the native language of over 250 million people as the Quran, the core text of Islam, is written in Arabic.</p></div><div><h3>Objective</h3><p>To highlight peer-reviewed literature reporting free and accessible Arabic corpora. We aimed to benefit researchers by providing insights into freely available Arabic and accessible corpora, allowing them to achieve their research goals with ease.</p></div><div><h3>Methods</h3><p>By conducting a scoping review using PRISMA guidelines, we searched the most common information technology (IT) databases and identified free of cost and accessible Arabic corpora.</p></div><div><h3>Results</h3><p>We identified a total of 48 accessible corpora sources available free of cost in the Arabic language, we present our findings according to categories to further help readers understand the corpora with direct links where available. The results were classified by corpora type into five categories based on their primary purpose.</p></div><div><h3>Conclusion</h3><p>Arabic is underrepresented considering freely available corpora as most such corpora are available in English. 
Although previous studies have performed searches for corpora, ours is the first of its kind as it follows the PRISMA guidelines and includes peer-reviewed articles in the literature, obtained by searching the most common IT databases and source recommendations from language experts.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100049"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000015/pdfft?md5=831b01a961c72be6134c80b48ab89f71&pid=1-s2.0-S2666990022000015-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92038040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}