Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100071
Ibrahim Sadek , Josué Codjo , Shafiq Ul Rehman , Bessam Abdulrazak
Internet of things (IoT) technology can nowadays be used to track users' daily living activity and health-related quality of life. IoT healthcare sensors can play a great role in reducing health-related costs and help users assess their health progression. Nonetheless, these IoT solutions add security challenges because they have direct access to large amounts of personal information and are closely integrated into user activities. As such, IoT technology is always a viable target for cybercriminals. More importantly, an adversarial attack on a single IoT node undermines the overall security of the network concerned. In this study, we present the privacy and security issues of IoT healthcare devices. Moreover, we address possible attack models needed to verify the robustness of such devices. Finally, we present our deployed AMbient Intelligence (AMI) Lab architecture and compare its performance to current IoT solutions.
Title: Security and privacy in the internet of things healthcare systems: Toward a robust solution in real-life deployment (Computer methods and programs in biomedicine update)
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100072
Kin Weng Wong , Tai-Hua Yang , Shao-Fu Huang , Yi-Jun Liu , Chi-Sheng Chien , Chun-Li Lin
The current tibiotalocalcaneal (TTC) nails used in ankle arthrodesis surgery have shortcomings leading to unfavorable clinical failures. This study proposes a novel nail design, fabricated by metal 3D printing, that can enhance global implant stability, evaluated through finite element (FE) analysis and fatigue testing. A novel titanium nail was designed with a trilobular cross-section to increase anti-rotation stability. The nail has three leads with different, increasing pitches that enhance the self-compression effect in the fusion sites. Between the leads are two porous diamond-microstructure regions that act as a bone ingrowth scaffold. The nail was fabricated by metal 3D printing and implanted into an artificial ankle joint to evaluate the self-compression effects. Nonlinear FE analysis was performed to compare the anti-rotation stability of the trilobular nail (Tri-nail) and a conventional circular nail. Static and fatigue four-point bending tests were done to characterize the mechanical strength of the novel nail. The self-compression experiment showed that the three-lead design provides two stages of significant compression, with a pressurization rate as high as 40%. FE simulation results indicated that the Tri-nail provides a significant reduction in tangent displacement and in the surrounding bone stress, and that the stress distribution is more even in the Tri-nail group. The four-point bending test found a Tri-nail yield strength of 12,957 ± 577 N, much higher than the approved FDA reference (1026 N). One million cycles at 8% of the yield strength (1036 N) were completed without Tri-nail failure. The proposed metal 3D-printed Tri-nail provides sufficient mechanical strength and is mechanically stable, with superior anti-rotation ability and an excellent fusion-site self-compression effect.
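As a quick sanity check on the reported numbers, the cyclic fatigue load is simply a fixed fraction of the measured yield strength; the sketch below is ours (the helper name is not from the paper) and only reproduces that arithmetic.

```python
# Sketch: relating the reported fatigue test load to the measured yield
# strength. The 8% fraction and the 12,957 N mean come from the text;
# the function name is illustrative, not the authors'.

def fatigue_test_load(yield_strength_n: float, fraction: float = 0.08) -> float:
    """Cyclic load used in the fatigue test, as a fraction of yield strength."""
    return yield_strength_n * fraction

mean_yield = 12_957.0                 # N, mean four-point bending yield strength
load = fatigue_test_load(mean_yield)  # about 1036.6 N, consistent with the
print(load)                           # reported 1036 N test load
```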
Title: Biomechanical evaluation of a novel 3D printing tibiotalocalcaneus nail with trilobular cross-sectional design and self-compression effect
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2021.100047
Fazila Shams , Assad Abbas , Wasiq Khan , Umar Shahbaz Khan , Raheel Nawaz
Background
The SARS-CoV-2 virus, the cause of COVID-19, has resulted in substantial casualties in many countries. The first case of COVID-19 was reported in China towards the end of 2019. Cases started to appear in several other countries (including Pakistan) by February 2020. To analyze the spreading pattern of the disease, several researchers used the Susceptible-Infectious-Recovered (SIR) model. However, the classical SIR model cannot predict the death rate.
Objective
In this article, we present a Death-Infection-Recovery (DIR) model to forecast the virus spread over a window of one (minimum) to fourteen (maximum) days. Our model captures the dynamic behavior of the virus and can assist authorities in making decisions on non-pharmaceutical interventions (NPI), like travel restrictions, lockdowns, etc.
Method
The training dataset spanned 134 days. The Auto Regressive Integrated Moving Average (ARIMA) model was implemented in XLSTAT (an add-in for Microsoft Excel), whereas the SIR and the proposed DIR models were implemented in the Python programming language. We compared the performance of the DIR model with the SIR and ARIMA models by computing the percentage error and the Mean Absolute Percentage Error (MAPE).
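The two comparison metrics are standard; a minimal sketch (ours, not the authors' code, with illustrative numbers) of the percentage error and MAPE computed over a forecast window:

```python
# Minimal sketch of the error metrics used to compare DIR, SIR, and ARIMA.

def percentage_error(actual: float, predicted: float) -> float:
    """Absolute error as a percentage of the actual value."""
    return abs(actual - predicted) / actual * 100.0

def mape(actual, predicted) -> float:
    """Mean Absolute Percentage Error over a forecast window."""
    return sum(percentage_error(a, p) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative values only (not the study's data):
deaths_actual = [100.0, 110.0, 121.0]
deaths_pred   = [98.0, 112.0, 120.0]
print(round(mape(deaths_actual, deaths_pred), 2))  # → 1.55
```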
Results
Experimental results demonstrate that the maximum percentage error in predicting the number of deaths, infections, and recoveries over a period of fourteen days is only 2.33% using the DIR model, versus 10.03% using the ARIMA model and 53.07% using the SIR model.
Conclusion
The percentage error obtained in forecasting with the DIR model is significantly lower than that of the compared models. Moreover, the MAPE of the DIR model is well below that of the two compared models, indicating its effectiveness.
Title: A death, infection, and recovery (DIR) model to forecast the COVID-19 spread
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100057
Arfan Ahmed , Nashva Ali , Mahmood Alzubaidi , Wajdi Zaghouani , Alaa Abd-alrazaq , Mowafa Househ
Background
Chatbots have been widely used in many spheres of life from customer services to mental health companions. Despite the breakthroughs in achieving human-like conversations, Arabic language chatbots driven by AI and NLP are relatively scarce due to the complex nature of the Arabic language.
Objective
We aim to review the published literature on Arabic chatbots to gain insight into the technologies used and to highlight the gaps in this emerging field.
Methods
To identify relevant studies, we searched eight bibliographic databases and conducted backward and forward reference checking. Two reviewers independently performed study selection and data extraction. The extracted data was synthesized using a narrative approach.
Results
We included 18 of 1755 retrieved publications. Thirteen unique chatbots were identified from the 18 studies. ArabChat was the most common chatbot in the included studies (n = 5). The type of Arabic language in most chatbots (n = 13) was Modern Standard Arabic. The input and output modalities used in 17 chatbots were only text. Most chatbots (n = 14) were able to have long conversations. The majority of the chatbots (n = 14) were developed to serve a specific purpose (Closed domain). A retrieval-based model was used for developing most chatbots (n = 17).
Conclusion
Despite the large number of chatbots worldwide, there are relatively few Arabic language chatbots. Furthermore, the available Arabic language chatbots are less advanced than chatbots in other languages. Researchers should develop more Arabic language chatbots based on more advanced input and output modalities, generative-based models, and natural language processing (NLP).
Title: Arabic chatbot technologies: A scoping review
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100051
Seema Sandeep Redekar , Satishkumar L. Varma , Atanu Bhattacharjee
Background and Objective
Glioblastoma (GBM) is the most aggressive type of brain tumor. In spite of having various treatment options, GBM patients usually have a poor prognosis. Genetic markers play a vital role in the progression of the disease. Identification of these novel molecular biomarkers is essential to explain the mechanisms or improve the prognosis of GBM. Advances in high throughput genomic technologies enable the analysis of the varied types of omics data to find biomarkers in GBM. Although data repositories like The Cancer Genome Atlas (TCGA) are rich sources of such multi-omics data, integrating these different genomic datasets of varying quality and patient heterogeneity is challenging.
Methods
Multi-omics datasets from TCGA, consisting of DNA methylation, RNA sequencing, and copy number variation (CNV) data of GBM patients, are obtained to carry out the analysis. A Cox proportional hazards regression model is developed in R to identify, across the diverse datasets, significant genes associated with patient survival. The Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are used as estimators for the model. Validation is performed to determine the accuracy and the corresponding prediction error.
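For model selection, AIC and BIC are simple functions of the fitted model's log-likelihood; a hedged sketch of just those formulas (the Cox model itself is not reproduced here, and the numbers below are made up for illustration):

```python
# Sketch of the AIC and BIC model-selection criteria.
import math

def aic(log_likelihood: float, k: int) -> float:
    """Akaike Information Criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Bayesian Information Criterion: k ln(n) - 2 ln(L)."""
    return k * math.log(n) - 2 * log_likelihood

# Illustrative inputs: 5 covariates, 76 patients (as in the study),
# and an invented log-likelihood value.
ll = -210.4
print(round(aic(ll, 5), 2), round(bic(ll, 5, 76), 2))
```

Lower values of either criterion indicate a better trade-off between fit and model complexity, which is how the study uses them.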
Results
Five key genes each are identified from the DNA methylation and RNA sequencing datasets: ANK1, HOXA9, TOX2, CXCR6, and PIGZ from the former, and L3MBTL, KDM5B, CCDC138, NUS1P1, and ARHGAP42 from the latter. Higher expression values of these genes are associated with better survival of GBM patients, and Kaplan-Meier estimate curves show the corresponding correlation. Lower values of AIC and BIC confirm the suitability of the model. The prediction model is validated on the test set and shows a low error rate. Copy number variation data are also analysed to find significant chromosomal locations in GBM patients, associated with chromosomes 2, 5, 6, 7, 12, and 13. In all, nine CNV locations are found to influence the progression of GBM.
Conclusion
Integrated analysis of multiple omics datasets is carried out to identify significant genes from the DNA methylation and RNA sequencing profiles of 76 common individuals. The copy number variation dataset for the same patients is analyzed to recognize notable locations across the 22 chromosomes. The survival analysis determines the correlation of these biomarkers with the progression of the disease.
Title: Identification of key genes associated with survival of glioblastoma multiforme using integrated analysis of TCGA datasets
Pub Date: 2022-01-01 | DOI: 10.1016/j.cmpbup.2022.100079
Ana Filipa Rebelo , António M. Ferreira , José M. Fonseca
Epicardial Fat Volume (EFV) is a valuable predictor of cardio- and cerebrovascular events. However, the manual procedures for EFV calculation common in clinical practice are highly time-consuming for technicians or physicians and often involve significant intra- or inter-observer variance. To reduce the processing time and improve the repeatability of results, we propose a computer-assisted tool that automatically performs epicardial fat segmentation and volume quantification on non-contrast cardiac Computed Tomography (CT). The proposed algorithm prioritizes basic image techniques, keeping computational complexity low. The heart region is selected using Otsu's method, template matching, and connected component analysis. Then, to refine the pericardium delineation, a convex hull is applied. Lastly, epicardial fat is segmented by thresholding. In addition to the algorithm, an intuitive software tool (HARTA) was developed for clinical use, allowing human intervention for adjustments. A set of 878 non-contrast cardiac CT images was used to validate the method. Using HARTA, the average time to segment the epicardial fat on a CT was 15.5 ± 2.42 s, whereas the manual procedure required 10 to 26 min. The epicardial fat segmentation achieved an accuracy of 98.83% and a Dice Similarity Coefficient of 0.7730. Automatic EFV quantification yields Pearson and Spearman correlation coefficients of 0.9366 and 0.8773, respectively. The proposed tool has the potential to be used in clinical contexts, assisting cardiologists in obtaining faster and more accurate EFV estimates and leading towards personalized diagnosis and therapy. The human intervention component can also improve the automatic results and ensure the credibility of this diagnostic support system. The software presented here is publicly available on GitHub.
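The Dice Similarity Coefficient used here to score the segmentation is worth spelling out; a minimal sketch (ours, on toy flat 0/1 masks rather than CT volumes):

```python
# Sketch of the Dice Similarity Coefficient (DSC) between two binary masks,
# represented here as flat lists of 0/1 pixel labels (illustrative only).

def dice(pred, truth) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 by convention when both are empty."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(dice(pred, truth), 3))  # → 0.667
```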
Title: Automatic epicardial fat segmentation and volume quantification on non-contrast cardiac Computed Tomography
We propose a unified approach for the automatic and accurate segmentation of the pupil and iris from on-axis grayscale eye images. The segmentation of the pupil and iris is achieved with a Basis-spline-based active contour and a circular active contour, respectively. The circular active contour shape template has three free parameters, i.e., a pair of center coordinates and the radius. The Basis-spline has M knots in the shape template and five free parameters, i.e., a pair of center coordinates, scaling in the horizontal and vertical directions, and the rotation angle. The segmentation of the region of interest is done by minimization of a local energy function. Optimization of the local energy function of the circular and Basis-spline-based active contours is carried out using the gradient descent technique and Green's theorem. To achieve segmentation of the iris boundary, the circular active contour method is combined with our novel occlusion removal algorithm, which removes eyelid and eyelash occlusions for accurate iris segmentation. Automatic localization of the pupil is achieved by the sum of absolute difference method. The proposed algorithm is validated on three publicly available databases: the IIT Delhi Iris, CASIA Iris Interval V3, and CASIA Iris Interval V4 databases, consisting of 7518 grayscale iris images in total. For the segmentation of the pupil from the aforementioned databases, we attained Dice indices of 0.971, 0.950, and 0.960, respectively, and for the segmentation of the iris, we attained Dice indices of 0.905, 0.898, and 0.900, respectively. An exploratory data analysis was then done to visualize the distribution of the performance parameters throughout the databases. The segmentation performance of the proposed algorithm is on par with that of the reported state-of-the-art algorithms.
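The sum-of-absolute-difference localization step can be sketched in one dimension; this toy version (ours, not the paper's 2-D implementation) slides a template along a signal and returns the offset with minimum SAD:

```python
# Hedged 1-D sketch of sum-of-absolute-difference (SAD) template matching,
# the idea behind the pupil localization step; the real method operates on
# 2-D grayscale images.

def sad(window, template):
    """Sum of absolute differences between a window and the template."""
    return sum(abs(w - t) for w, t in zip(window, template))

def best_match(signal, template):
    """Offset at which the template best matches the signal (minimum SAD)."""
    n, m = len(signal), len(template)
    return min(range(n - m + 1), key=lambda i: sad(signal[i:i + m], template))

signal   = [9, 9, 2, 1, 2, 9, 9]   # a dark, pupil-like dip in a bright row
template = [2, 1, 2]
print(best_match(signal, template))  # → 2
```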
Title: A unified approach for automated segmentation of pupil and iris in on-axis images
Authors: Grissel Priyanka Mathias, J.H. Gagan, B. Vaibhav Mallya, J.R. Harish Kumar | DOI: 10.1016/j.cmpbup.2022.100084
Pub Date : 2022-01-01DOI: 10.1016/j.cmpbup.2022.100081
Irina Balzer , Malin Mühlemann , Moritz Jokeit , Ishaan Singh Rawal , Jess G. Snedeker , Mazda Farshad , Jonas Widmer
Background
This work evaluates the feasibility, development, and validation of a machine learning pipeline that includes all tasks from MRI input to the segmentation and grading of the intervertebral discs in the lumbar spine, offering multiple different radiological gradings of degeneration as quantitative objective output.
Methods
The pipeline's performance was analysed on 1,000 T2-weighted sagittal MRI scans. Binary outputs were assessed with the harmonic mean of precision and recall (DSC) and the area under the precision-recall curve (AUC-PR). Multi-class output scores were averaged and complemented by the Top-2 categorical accuracy. The processing success rate was evaluated on 10,053 unlabelled MRI scans of lumbar spines.
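The three metrics named above are standard in classification evaluation. A short scikit-learn sketch on toy labels (an illustration of the metrics, not the authors' evaluation code) showing the binary DSC (the F1 score, i.e. the harmonic mean of precision and recall), AUC-PR, and Top-2 categorical accuracy:

```python
import numpy as np
from sklearn.metrics import (average_precision_score, f1_score,
                             top_k_accuracy_score)

# Binary task (e.g., finding present/absent): toy labels and model scores
y_true = np.array([1, 0, 1, 1, 0, 1])
y_score = np.array([0.9, 0.2, 0.6, 0.4, 0.7, 0.8])
y_pred = (y_score >= 0.5).astype(int)

dsc = f1_score(y_true, y_pred)                      # harmonic mean of precision and recall
auc_pr = average_precision_score(y_true, y_score)   # area under the precision-recall curve

# Multi-class task (e.g., a grading scale): Top-2 categorical accuracy counts a
# prediction as correct if the true class is among the two highest-scored classes.
grades_true = np.array([0, 2, 1])
grade_probs = np.array([[0.5, 0.3, 0.2],
                        [0.1, 0.4, 0.5],
                        [0.2, 0.5, 0.3]])
top2 = top_k_accuracy_score(grades_true, grade_probs, k=2, labels=[0, 1, 2])
print(dsc, round(auc_pr, 3), top2)
```

Top-2 accuracy is a forgiving companion metric for ordinal gradings such as Pfirrmann scores, where adjacent grades are easily confused.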
Results
The midsagittal plane selection achieved a DSC of 74.80% ± 2.99% and an AUC-PR score of 81.71% ± 2.72% (96.91% Top-2 categorical accuracy). The segmentation network obtained a DSC of 91.80% ± 0.44%. The Pfirrmann grading of intervertebral discs in the midsagittal plane was classified with a DSC of 64.08% ± 3.29% and an AUC-PR score of 68.25% ± 6.00% (91.65% Top-2 categorical accuracy). Disc herniations achieved a DSC of 61.57% ± 3.39% and an AUC-PR score of 66.86% ± 5.03%. The cranial endplate defects reached a DSC of 49.76% ± 3.45% and an AUC-PR of 52.36% ± 1.98% (with slightly superior predictions for caudal endplate defects). The binary classifications for the caudal Schmorl's nodes obtained a DSC of 91.58% ± 2.25% with an AUC-PR metric of 96.69% ± 1.58% (similar performance for cranial Schmorl's nodes). Spondylolisthesis was classified with a DSC of 89.03% ± 2.42% and an AUC-PR score of 95.98% ± 1.82%. Annular fissures were predicted with a DSC of 78.09% ± 7.21% and an AUC-PR score of 86.31% ± 7.45%. Intervertebral disc classifications in the parasagittal plane achieved equivalent performance. The pipeline successfully processed 98.53% of the provided sagittal MRI scans.
Conclusions
The present deep learning framework has the potential to aid the quantitative evaluation of spinal MRI for an array of clinically established grading systems.
{"title":"A deep learning pipeline for automatized assessment of spinal MRI","authors":"Irina Balzer , Malin Mühlemann , Moritz Jokeit , Ishaan Singh Rawal , Jess G. Snedeker , Mazda Farshad , Jonas Widmer","doi":"10.1016/j.cmpbup.2022.100081","DOIUrl":"10.1016/j.cmpbup.2022.100081","url":null,"abstract":"<div><h3>Background</h3><p>This work evaluates the feasibility, development, and validation of a machine learning pipeline that includes all tasks from MRI input to the segmentation and grading of the intervertebral discs in the lumbar spine, offering multiple different radiological gradings of degeneration as quantitative objective output.</p></div><div><h3>Methods</h3><p>The pipelines’ performance was analysed on 1′000 T2-weighted sagittal MRI. Binary outputs were assessed with the harmonic mean of precision and recall (DSC) and the area under the precision-recall curve (AUC-PR). Multi-class output scores were averaged and complemented by the Top-2 categorical accuracy. The processing success rate was evaluated on 10′053 unlabelled MRI scans of lumbar spines.</p></div><div><h3>Results</h3><p>The midsagittal plane selection achieved an DSC of 74,80% ± 2,99% and an AUC-PR score of 81.71% ± 2.72% (96.91% Top-2 categorical accuracy). The segmentation network obtained a DSC of 91.80% ± 0.44%. The Pfirrmann grading of intervertebral discs in the midsagittal plane was classified with a DSC of 64.08% ± 3.29% and an AUC-PR score of 68.25% ± 6.00% (91.65% Top-2 categorical accuracy). Disc herniations achieved a DSC of 61.57% ± 3.39% and an AUC-PR score of 66.86% ± 5.03%. The cranial endplate defects reached a DSC of 49.76% ± 3.45% and 52.36% ± 1.98% AUC-PR (slightly superior predictions of caudal endplate defect). The binary classifications for the caudal Schmorl's nodes obtained a DSC of 91.58% ± 2.25% with an AUC-PR metric of 96.69% ± 1.58% (similar performance for cranial Schmorl's nodes). 
Spondylolisthesis was classified with a DSC of 89.03% ± 2.42% and an AUC-PR score of 95.98% ± 1.82%. Annular Fissures were predicted with a DSC of 78.09% ± 7.21% and an AUC-PR score of 86.31% ± 7.45%. Intervertebral disc classifications in the parasagittal plane achieved an equivalent performance. The pipeline successfully processed 98.53% of the provided sagittal MRI scans.</p></div><div><h3>Conclusions</h3><p>The present deep learning framework has the potential to aid the quantitative evaluation of spinal MRI for an array of clinically established grading systems.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000325/pdfft?md5=3d1660ccac365f091387c41c705eb11f&pid=1-s2.0-S2666990022000325-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49390635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01DOI: 10.1016/j.cmpbup.2022.100070
Pratheeba Jeyananthan
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a highly transmissible coronavirus that threatens the world with a deadly pandemic. The WHO reported that it spreads through contact, droplet, airborne, fomite, fecal-oral, bloodborne, mother-to-child, and animal-to-human transmission. Hence, viral shedding has a huge impact on this pandemic. This study uses transcriptome data of coronavirus disease 2019 (COVID-19) patients to predict prolonged viral shedding in the corresponding patient. The prediction starts with the transcriptome features, which give the lowest root mean squared error of 16.3±3.3 using the top 25 features selected with a forward feature selection algorithm and a linear regression model. Then, to assess the impact of a few non-molecular features on this prediction, they were added to the model one by one along with the selected transcriptome features. However, this study shows that those features have no impact on prolonged viral shedding prediction. Further, this study predicts the days since onset in the same way. Here, too, the top 25 transcriptome features selected with the forward feature selection algorithm give comparably good accuracy (0.74±0.1). However, the best accuracy was obtained using the 20 most important features ranked by an SVM (0.78±0.1). Moreover, adding non-molecular features shows a great impact on the features selected by mutual information in this prediction.
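The forward-selection-plus-linear-regression setup described above can be sketched with scikit-learn. This is an illustrative reconstruction on synthetic data (the sample counts, feature counts, and selected-feature number are placeholders, not the study's actual transcriptome matrix or pipeline):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a transcriptome matrix: 100 patients x 50 gene features
X, y = make_regression(n_samples=100, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)

# Forward feature selection: greedily add the feature that most improves
# the cross-validated score of the linear regression model
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=5,
    direction="forward", cv=3)
X_sel = selector.fit_transform(X, y)

# Cross-validated RMSE of linear regression on the selected features
rmse = -cross_val_score(LinearRegression(), X_sel, y, cv=3,
                        scoring="neg_root_mean_squared_error")
print(f"mean RMSE: {rmse.mean():.1f} ± {rmse.std():.1f}")
```

The same selector could be rerun with a classifier and an accuracy scorer for the days-since-onset task; the reported "root mean squared value of 16.3±3.3" corresponds to a mean ± spread over cross-validation folds as computed here.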
{"title":"Prolonged viral shedding prediction on non-hospitalized, uncomplicated SARS-CoV-2 patients using their transcriptome data","authors":"Pratheeba Jeyananthan","doi":"10.1016/j.cmpbup.2022.100070","DOIUrl":"10.1016/j.cmpbup.2022.100070","url":null,"abstract":"<div><p>Severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) is identified as a highly transmissible coronavirus which threatens the world with this deadly pandemic. WHO reported that it spreads through contact, droplet, airborne, formite, fecal-oral, bloodborne, mother-to-child and animal-to-human. Hence, viral shedding has a huge impact on this pandemic. This study uses transcriptome data of coronavirus disease 2019 (COVID-19) patients to predict the prolonged viral shedding of the corresponding patient. This prediction starts with the transcriptome features which gives the lowest root mean squared value of 16.3±3.3 using top 25 feature selected using forward feature selection algorithm and linear regression algorithm. Then to see the impact of few non-molecular features in this prediction, they were added to the model one by one along with the selected transcriptome features. However, this study shows that those features do not have any impact on prolonged viral shedding prediction. Further this study predicts the day since onset in the same way. Here also top 25 transcriptome features selected using forward feature selection algorithm gives a comparably good accuracy (accuracy value of 0.74±0.1). However, the best accuracy was obtained using the best 20 features from feature importance using SVM (0.78±0.1). 
Moreover, adding non-molecular features shows a great impact on mutual information selected features in this prediction.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9444307/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10322488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01DOI: 10.1016/j.cmpbup.2022.100082
Enny Rachmani , Haikal Haikal , Eti Rimawati
COVID-19 is a new disease in human life and has become a pandemic. The coronavirus disease (COVID-19) pandemic has been speeding up digital transformation in every sector. Implementation of digital technology in health should be supported by the community's readiness, such as digital health literacy, in order to achieve its goals, optimize health service performance, and block infodemics and misinformation. This study aims to develop a tool to measure digital health literacy in the community through three stages: expert review, pre-test, and field test. The DHLC instrument maps five digital competency areas into 18 questions and adds eight questions related to health literacy, for a total of 26 items. This study reveals that the scores in all digital competency areas were below 4; a DHLC score below 4 indicates that the community still needs guidance when acting in the digital environment. Elevating digital health literacy among citizens is urgent to control the spread of misinformation and disinformation that could worsen pandemics. Future studies are needed to test the validity and reliability of the DHLC in various settings.
{"title":"Development and validation of digital health literacy competencies for citizens (DHLC), an instrument for measuring digital health literacy in the community","authors":"Enny Rachmani , Haikal Haikal , Eti Rimawati","doi":"10.1016/j.cmpbup.2022.100082","DOIUrl":"10.1016/j.cmpbup.2022.100082","url":null,"abstract":"<div><p>COVID-19 is a new disease in human life and has become pandemic. Pandemic Coronavirus Disease (COVID-19) has been speeding up digital transformation in every sector. Implementation of digital technology in health should be supported by the community's readiness, such as digital health literacy to achieve the goals, optimize health service performance, and blockage infodemics and miss information. Implementation of digital technology in health should be supported by the community's readiness, such as digital health literacy to achieve the goals, optimize health service performance, and blockage infodemics and miss information. This study aims to develop a tool to measure digital health literacy in the community through three stages such as expert review, pre-test and field test. DHLC adopted the five competencies areas into 18 questions and put eight questions related to health literacy; the total items question of DHLC are 26 items questions. This study reveals that all of the score digital competencies areas below 4. Score 4 in DHLC indicates that the community still need guidance to doing activity in the digital environment. Elevating digital health literacy in the citizens is urgent to control the spreading misinformation and disinformation that could worsen pandemics. 
Future studies need to conduct to test the validity and reliability of DHLC in various settings.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9659361/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10672963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}