Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100066
Machine learning models to detect anxiety and depression through social media: A scoping review
Arfan Ahmed , Sarah Aziz , Carla T. Toro , Mahmood Alzubaidi , Sara Irshaidat , Hashem Abu Serhan , Alaa A. Abd-alrazaq , Mowafa Househ
Despite improvement in detection rates, the prevalence of mental health disorders such as anxiety and depression is on the rise, especially since the outbreak of the COVID-19 pandemic. Symptoms of mental health disorders have been noted and observed on social media forums such as Facebook. We explored machine learning models used to detect anxiety and depression through social media. Six bibliographic databases were searched, and the review was conducted following the PRISMA-ScR protocol. We included 54 of 2219 retrieved studies. In the reviewed studies, users suffering from anxiety or depression were identified through their online presence, their self-disclosed diagnoses, and patterns in their language and online activity. The majority of the studies (70%, 38/54) were conducted at the peak of the COVID-19 pandemic (2019–2020). The studies made use of social media data from a variety of platforms to develop predictive models for the detection of depression or anxiety. These included Twitter, Facebook, Instagram, Reddit, Sina Weibo, and combinations of posts from different social sites. We report the most common machine learning models identified. Those suffering from anxiety and depression disorders may be identified using prediction models applied to users' language on social media, which has the potential to complement traditional screening. Such analysis could also provide insights into the mental health of the public, especially when access to health professionals is restricted by lockdowns and temporary closure of services, as seen during the peak of the COVID-19 pandemic.
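As an illustration of the kind of text-based prediction model this review surveys, the sketch below trains a bag-of-words classifier on labeled posts. It is a minimal example under invented data, not any specific model from the reviewed studies; the toy posts and labels are placeholders, and real studies use large annotated corpora.

```python
# Minimal sketch of a social-media text classifier of the kind this review surveys.
# The example posts/labels are hypothetical, purely for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I can't sleep and everything feels hopeless",   # depressive language (toy)
    "had a great run this morning, feeling good",    # neutral (toy)
    "my heart races and I worry about everything",   # anxious language (toy)
    "excited for the weekend trip with friends",     # neutral (toy)
]
labels = [1, 0, 1, 0]  # 1 = screen-positive, 0 = screen-negative (toy labels)

# TF-IDF features plus logistic regression: a common baseline in this literature.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict_proba(["lately I feel so anxious and alone"])[0, 1])
```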
{"title":"Machine learning models to detect anxiety and depression through social media: A scoping review","authors":"Arfan Ahmed , Sarah Aziz , Carla T. Toro , Mahmood Alzubaidi , Sara Irshaidat , Hashem Abu Serhan , Alaa A. Abd-alrazaq , Mowafa Househ","doi":"10.1016/j.cmpbup.2022.100066","DOIUrl":"10.1016/j.cmpbup.2022.100066","url":null,"abstract":"<div><p>Despite improvement in detection rates, the prevalence of mental health disorders such as anxiety and depression are on the rise especially since the outbreak of the COVID-19 pandemic. Symptoms of mental health disorders have been noted and observed on social media forums such Facebook. We explored machine learning models used to detect anxiety and depression through social media. Six bibliographic databases were searched for conducting the review following PRISMA-ScR protocol. We included 54 of 2219 retrieved studies. Users suffering from anxiety or depression were identified in the reviewed studies by screening their online presence and their sharing of diagnosis by patterns in their language and online activity. Majority of the studies (70%, 38/54) were conducted at the peak of the COVID-19 pandemic (2019–2020). The studies made use of social media data from a variety of different platforms to develop predictive models for the detection of depression or anxiety. These included Twitter, Facebook, Instagram, Reddit, Sina Weibo, and a combination of different social sites posts. We report the most common Machine Learning models identified. Identification of those suffering from anxiety and depression disorders may be achieved using prediction models to detect user's language on social media and has the potential to complimenting traditional screening. Such analysis could also provide insights into the mental health of the public especially so when access to health professionals can be restricted due to lockdowns and temporary closure of services such as we saw during the peak of the COVID-19 pandemic.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100066"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9461333/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10326729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100080
Detection of ADHD cases using CNN and classical classifiers of raw EEG
Behrad TaghiBeyglou , Ashkan Shahbazi , Fatemeh Bagheri , Sina Akbarian , Mehran Jahed
Purpose:
This study proposes a novel convolutional neural network (CNN) structure in conjunction with classical machine learning models, utilizing the raw electroencephalography (EEG) signal as the input to diagnose attention deficit hyperactivity disorder (ADHD) in children. The proposed EEG-based approach does not require transformation or artifact rejection techniques.
Methods:
In the first step, the suggested method uses raw EEG to train a CNN to diagnose ADHD. Then, the feature maps from different layers of the trained CNN are extracted and used to train classical classifiers such as support vector machine (SVM), logistic regression (LR), and random forest (RF). This study benefits from an extended version of a dataset acquired from 61 participants diagnosed with ADHD and 60 individuals in a control group, aged 7 through 12 years.
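A minimal sketch of this hybrid scheme follows: a small 1D CNN is trained on raw EEG windows, an intermediate feature map is read out, and a classical classifier is fit on those features. The network shape, channel count, window length, and training loop are illustrative placeholders, not the authors' architecture.

```python
# Sketch: train a 1D CNN on raw EEG, then reuse an intermediate feature map
# to fit a classical classifier. Architecture/hyperparameters are illustrative only.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

n_channels, n_samples = 19, 512          # hypothetical montage and window length

class EEGNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),      # feature map later reused by the classical model
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8, 2))

    def forward(self, x):
        return self.head(self.features(x))

# Random tensors standing in for labeled EEG epochs (ADHD = 1, control = 0).
X = torch.randn(120, n_channels, n_samples)
y = torch.randint(0, 2, (120,))

net = EEGNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                        # a few full-batch epochs for illustration
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

# Extract the trained feature maps and hand them to logistic regression.
with torch.no_grad():
    feats = net.features(X).flatten(1).numpy()
clf = LogisticRegression(max_iter=1000).fit(feats, y.numpy())
print("train accuracy:", clf.score(feats, y.numpy()))
```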
Results:
The initial CNN structure (without further use of feature maps) achieved an accuracy of 86.33 ± 2.64% in a 5-fold cross-validation scheme on the training set, which is superior to results reported in previous studies. However, in order to increase the efficacy of the classifiers, we used various feature representations across different CNN layers; after a rigorous evaluation of candidate classifiers, logistic regression provided an accuracy of 91.16 ± 0.03% on training epochs using a 5-fold cross-validation scheme and 95.83% in ADHD identification on unseen epochs. Also, other metrics such as precision, sensitivity, F1-score, and receiver operating characteristic (ROC) were presented for better comparison of the different hybrid methods.
Conclusion:
The suggested method for detection of ADHD in children shows high performance in different metrics such as accuracy, sensitivity, and specificity, which is superior to previously reported results.
{"title":"Detection of ADHD cases using CNN and classical classifiers of raw EEG","authors":"Behrad TaghiBeyglou , Ashkan Shahbazi , Fatemeh Bagheri , Sina Akbarian , Mehran Jahed","doi":"10.1016/j.cmpbup.2022.100080","DOIUrl":"10.1016/j.cmpbup.2022.100080","url":null,"abstract":"<div><h3>Purpose:</h3><p>This study proposes a novel convolutional neural network (CNN) structure in conjunction with classical machine learning models, utilizing the raw electroencephalography (EEG) signal as the input to diagnose attention deficit hyperactivity disorder (ADHD) in children. The proposed EEG-based approach does not require transformation or artifact rejection techniques.</p></div><div><h3>Methods:</h3><p>In the first step, the suggested method uses raw EEG to train a CNN to diagnose ADHD. Then, the feature maps from different layers of the trained CNN are extracted and used to train some classical classifiers such as support vector machine (SVM), logistic regression (LR), random forest (RF), etc. This study benefits from an extended version of a dataset acquired from 61 participants diagnosed with ADHD and 60 individuals in control group, age 7 through 12 years old.</p></div><div><h3>Results:</h3><p>The initial CNN structure (without further use of feature maps) achieved an accuracy of <span><math><mrow><mn>86</mn><mo>.</mo><mn>33</mn><mspace></mspace><mo>±</mo><mspace></mspace><mn>2</mn><mo>.</mo><mn>64</mn><mtext>%</mtext></mrow></math></span> in 5-fold cross-validation scheme on training set, which is superior to results reported in previous studies. However, in order to increase the efficacy of the classifiers we used various feature representations across different CNN layers and after a rigorous evaluation of candidate classifiers, logistic regression provided an accuracy of <span><math><mrow><mn>91</mn><mo>.</mo><mn>16</mn><mspace></mspace><mo>±</mo><mspace></mspace><mn>0</mn><mo>.</mo><mn>03</mn><mtext>%</mtext></mrow></math></span> in training epochs using 5-fold cross-validation scheme and 95.83% in ADHD identification in unseen epochs, were achieved. Also, other metrics such as precision, sensitivity, F1-score and receiver of operating characteristic (ROC) were presented for better comparison of different hybrid methods.</p></div><div><h3>Conclusion:</h3><p>The suggested method for detection of ADHD in children shows high performance in different metrics such as accuracy, sensitivity, and specificity, which is superior to previously reported results.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100080"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000313/pdfft?md5=32c29b3e7f4dcd153d6ceb84d3839f07&pid=1-s2.0-S2666990022000313-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45701279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100071
Security and privacy in the internet of things healthcare systems: Toward a robust solution in real-life deployment
Ibrahim Sadek , Josué Codjo , Shafiq Ul Rehman , Bessam Abdulrazak
Internet of things (IoT) technology can nowadays be used to track user activity in daily living and health-related quality of life. IoT healthcare sensors can play a great role in reducing health-related costs and help users assess their health progression. Nonetheless, these IoT solutions add security challenges due to their direct access to abundant personal information and their close integration into user activities. As such, IoT technology is always a viable target for cybercriminals. More importantly, any adversarial attack on an individual IoT node undermines the overall security of the concerned network. In this study, we present the privacy and security issues of IoT healthcare devices. Moreover, we address possible attack models needed to verify the robustness of such devices. Finally, we present our deployed AMbient Intelligence (AMI) Lab architecture and compare its performance to current IoT solutions.
{"title":"Security and privacy in the internet of things healthcare systems: Toward a robust solution in real-life deployment","authors":"Ibrahim Sadek , Josué Codjo , Shafiq Ul Rehman , Bessam Abdulrazak","doi":"10.1016/j.cmpbup.2022.100071","DOIUrl":"10.1016/j.cmpbup.2022.100071","url":null,"abstract":"<div><p>The internet of things (IoT) technology can be nowadays used to track user activity in daily living and health-related quality of life. IoT healthcare sensors can play a great role in reducing health-related costs. It helps users to assess their health progression. Nonetheless, these IoT solutions add security challenges due to their direct access to numerous personal information and their close integration into user activities. As such, this IoT technology is always a viable target for cybercriminals. More importantly, any adversarial attacks on an individual IoT node undermine the overall security of the concerned networks. In this study, we present the privacy and security issues of IoT healthcare devices. Moreover, we address possible attack models needed to verify the robustness of such devices. Finally, we present our deployed <strong>AM</strong>bient <strong>I</strong>ntelligence (AMI) Lab architecture, and we compare its performance to current IoT solutions.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100071"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000222/pdfft?md5=65f01de6eef747d960cef3b3d9443d40&pid=1-s2.0-S2666990022000222-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47053453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100072
Biomechanical evaluation of a novel 3D printing tibiotalocalcaneus nail with trilobular cross-sectional design and self-compression effect
Kin Weng Wong , Tai-Hua Yang , Shao-Fu Huang , Yi-Jun Liu , Chi-Sheng Chien , Chun-Li Lin
The current tibiotalocalcaneal (TTC) nails used in ankle arthrodesis surgery have shortcomings leading to unfavorable clinical failures. This study proposes a novel nail design, fabricated by metal 3D printing, that can enhance global implant stability, evaluated through finite element (FE) analysis and fatigue testing. A novel titanium nail was designed with a trilobular cross-section to increase anti-rotation stability. The nail has three leads with different, increasing pitches that increase the self-compression effect at the fusion sites. Between the leads there are two porous diamond-microstructure regions that act as a bone ingrowth scaffold. The nail was fabricated by metal 3D printing and implanted into an artificial ankle joint to evaluate the self-compression effect. Nonlinear FE analysis models were built to compare the anti-rotation stability of the trilobular nail (Tri-nail) and a conventional circular nail. Static and fatigue four-point bending tests were done to establish the mechanical strength of the novel nail. The self-compression experiment showed that the three-lead design provides two stages of significant compression, with a pressurization rate as high as 40%. FE simulation results indicated that the Tri-nail provides a significant reduction in tangent displacement as well as in the surrounding bone stress, and the stress distribution is more even in the Tri-nail group. The four-point bending test found a Tri-nail yield strength of 12,957 ± 577 N, which is much higher than the approved FDA reference (1026 N). One million cycles at 8% of the yield strength (1036 N) were completed without Tri-nail failure. The proposed metal 3D-printed Tri-nail provides sufficient mechanical strength and is mechanically stable, with superior anti-rotation ability and an excellent fusion-site self-compression effect.
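The fatigue load quoted above is simply 8% of the measured mean yield strength; a quick arithmetic check, using only the values quoted in the abstract:

```python
# Quick check of the cyclic fatigue load quoted above (values from the abstract).
yield_strength_N = 12957          # measured Tri-nail yield strength (mean)
fda_reference_N = 1026            # FDA reference load, as quoted in the abstract

fatigue_load_N = 0.08 * yield_strength_N
print(fatigue_load_N)             # 1036.56 N — the ~1036 N cyclic test load
assert fatigue_load_N > fda_reference_N
```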
{"title":"Biomechanical evaluation of a novel 3D printing tibiotalocalcaneus nail with trilobular cross-sectional design and self-compression effect","authors":"Kin Weng Wong , Tai-Hua Yang , Shao-Fu Huang , Yi-Jun Liu , Chi-Sheng Chien , Chun-Li Lin","doi":"10.1016/j.cmpbup.2022.100072","DOIUrl":"10.1016/j.cmpbup.2022.100072","url":null,"abstract":"<div><p>The current tibiotalocalcaneal (TTC) nails used in ankle arthrodesis surgery have shortcomings leading to unfavorable clinical failures. This study proposes a novel nail design and fabricated by metal 3D printing that can enhance the global implant stability through finite element (FE) analysis and fatigue testing. A novel titanium nail was designed with trilobular cross-sectional design for increasing anti-rotation stability. This nail has three leads with different, increasing pitches that increase the self-compression effect in the fusion sites. Between the leads, there are two porous diamond microstructure regions that act as a bone ingrowth scaffold. The nail was fabricated by metal 3D printing and implanted into artificial ankle joint to evaluate the self-compression effects. The nonlinear FE analysis was performed models to compare the anti-rotation stability between trilobular nail (Tri-nail) and the conventional circular nail. The static and fatigue four-point bending tests were done to understand the mechanical strength of the novel nail. The experiment of self-compression effect showed that the three lead design provides two stages of significant compression effect, with a pressurization rate as high as 40%. FE simulated results indicated that the Tri-nail group provides significant tangent displacement reduction as well as reduction in the surrounding bone stress value and the stress distribution is more even in the Tri-nail group. Four-point test found that the Tri-nail yielding strength is 12,957 ± 577 N, which is much higher than the approved FDA reference (1026 N). One million cycles using 8% of the yielding strength (1036 N) were accomplished without Tri-nail failure. The proposed novel metal 3D printing Tri-nail can provide enough mechanical strength and is mechanically stable with superior anti-rotation ability and excellent fusion site self-compression effect.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100072"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000234/pdfft?md5=7290bb3f66a1b0ca24111971f2f63c37&pid=1-s2.0-S2666990022000234-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47106980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2021.100047
A death, infection, and recovery (DIR) model to forecast the COVID-19 spread
Fazila Shams , Assad Abbas , Wasiq Khan , Umar Shahbaz Khan , Raheel Nawaz
Background
The SARS-CoV-2 virus (the cause of COVID-19) has resulted in substantial casualties in many countries. The first case of COVID-19 was reported in China towards the end of 2019. Cases started to appear in several other countries (including Pakistan) by February 2020. To analyze the spreading pattern of the disease, several researchers used the Susceptible-Infectious-Recovered (SIR) model. However, the classical SIR model cannot predict the death rate.
Objective
In this article, we present a Death-Infection-Recovery (DIR) model to forecast the virus spread over a window of one (minimum) to fourteen (maximum) days. Our model captures the dynamic behavior of the virus and can assist authorities in making decisions on non-pharmaceutical interventions (NPIs), such as travel restrictions and lockdowns.
Method
The training dataset covered 134 days. The Auto Regressive Integrated Moving Average (ARIMA) model was implemented using XLSTAT (an add-in for Microsoft Excel), whereas the SIR and the proposed DIR models were implemented in the Python programming language. We compared the performance of the DIR model with the SIR and ARIMA models by computing the Percentage Error and the Mean Absolute Percentage Error (MAPE).
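For reference, the two error metrics can be computed as below; a minimal sketch with made-up numbers, not the paper's data.

```python
# Percentage Error and MAPE, as used to compare the DIR, SIR, and ARIMA models.
import numpy as np

actual = np.array([120.0, 135.0, 150.0])      # e.g., daily deaths (toy values)
forecast = np.array([118.0, 139.0, 146.0])    # model output (toy values)

pct_error = 100 * (actual - forecast) / actual   # per-day percentage error
mape = np.mean(np.abs(pct_error))                # mean absolute percentage error
print(pct_error, mape)
```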
Results
Experimental results demonstrate that the maximum percentage error in predicting the number of deaths, infections, and recoveries over a period of fourteen days is only 2.33% using the DIR model, versus 10.03% using the ARIMA model and 53.07% using the SIR model.
Conclusion
The percentage error obtained in forecasting with the DIR model is significantly lower than that of the compared models. Moreover, the MAPE of the DIR model is well below that of the two compared models, which indicates its effectiveness.
{"title":"A death, infection, and recovery (DIR) model to forecast the COVID-19 spread","authors":"Fazila Shams , Assad Abbas , Wasiq Khan , Umar Shahbaz Khan , Raheel Nawaz","doi":"10.1016/j.cmpbup.2021.100047","DOIUrl":"10.1016/j.cmpbup.2021.100047","url":null,"abstract":"<div><h3>Background</h3><p>The SARS-Cov-2 virus (commonly known as COVID-19) has resulted in substantial casualties in many countries. The first case of COVID-19 was reported in China towards the end of 2019. Cases started to appear in several other countries (including Pakistan) by February 2020. To analyze the spreading pattern of the disease, several researchers used the Susceptible-Infectious-Recovered (SIR) model. However, the classical SIR model cannot predict the death rate.</p></div><div><h3>Objective</h3><p>In this article, we present a Death-Infection-Recovery (DIR) model to forecast the virus spread over a window of one (minimum) to fourteen (maximum) days. Our model captures the dynamic behavior of the virus and can assist authorities in making decisions on non-pharmaceutical interventions (NPI), like travel restrictions, lockdowns, etc.</p></div><div><h3>Method</h3><p>The size of training dataset used was 134 days. The Auto Regressive Integrated Moving Average (ARIMA) model was implemented using XLSTAT (add-in for Microsoft Excel), whereas the SIR and the proposed DIR model was implemented using python programming language. We compared the performance of DIR model with the SIR model and the ARIMA model by computing the Percentage Error and Mean Absolute Percentage Error (MAPE).</p></div><div><h3>Results</h3><p>Experimental results demonstrate that the maximum% error in predicting the number of deaths, infections, and recoveries for a period of fourteen days using the DIR model is only 2.33%, using ARIMA model is 10.03% and using SIR model is 53.07%.</p></div><div><h3>Conclusion</h3><p>This percentage of error obtained in forecasting using DIR model is significantly less than the% error of the compared models. Moreover, the MAPE of the DIR model is sufficiently below the two compared models that indicates its effectiveness.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100047"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8713423/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10380408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100057
Arabic chatbot technologies: A scoping review
Arfan Ahmed , Nashva Ali , Mahmood Alzubaidi , Wajdi Zaghouani , Alaa Abd-alrazaq , Mowafa Househ
Background
Chatbots have been widely used in many spheres of life, from customer service to mental health companions. Despite breakthroughs in achieving human-like conversations, Arabic language chatbots driven by AI and NLP are relatively scarce due to the complex nature of the Arabic language.
Objective
We aim to review the published literature on Arabic chatbots to gain insight into the technologies used, highlighting the gaps in this emerging field.
Methods
To identify relevant studies, we searched eight bibliographic databases and conducted backward and forward reference checking. Two reviewers independently performed study selection and data extraction. The extracted data were synthesized using a narrative approach.
Results
We included 18 of 1755 retrieved publications. Thirteen unique chatbots were identified from the 18 studies. ArabChat was the most common chatbot in the included studies (n = 5). The type of Arabic used by most chatbots (n = 13) was Modern Standard Arabic. The input and output modality used by 17 chatbots was text only. Most chatbots (n = 14) were able to hold long conversations. The majority of the chatbots (n = 14) were developed to serve a specific purpose (closed domain). A retrieval-based model was used for developing most chatbots (n = 17), as sketched below.
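To make the retrieval-based/generative distinction concrete, the toy sketch below answers by returning the canned response whose stored question is closest to the user's input. The question–answer pairs are invented, and a real Arabic chatbot would need Arabic-specific preprocessing; this is only an illustration of the retrieval idea.

```python
# Toy retrieval-based chatbot: return the canned answer whose stored question
# is most similar to the user input. Q/A pairs below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

qa_pairs = [
    ("what are your opening hours", "We are open 9am-5pm, Sunday to Thursday."),
    ("how do I reset my password", "Use the 'forgot password' link on the login page."),
    ("where is the clinic located", "The clinic is on the 2nd floor, building A."),
]
questions = [q for q, _ in qa_pairs]

vectorizer = TfidfVectorizer()
question_vecs = vectorizer.fit_transform(questions)

def reply(user_input: str) -> str:
    sims = cosine_similarity(vectorizer.transform([user_input]), question_vecs)
    return qa_pairs[sims.argmax()][1]   # retrieve, never generate, a response

print(reply("when are you open"))
```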
Conclusion
Despite the large number of chatbots worldwide, there are relatively few Arabic language chatbots. Furthermore, the available Arabic language chatbots are less advanced than chatbots in other languages. Researchers should develop more Arabic language chatbots based on more advanced input and output modalities, generative models, and natural language processing (NLP).
{"title":"Arabic chatbot technologies: A scoping review","authors":"Arfan Ahmed , Nashva Ali , Mahmood Alzubaidi , Wajdi Zaghouani , Alaa Abd-alrazaq , Mowafa Househ","doi":"10.1016/j.cmpbup.2022.100057","DOIUrl":"10.1016/j.cmpbup.2022.100057","url":null,"abstract":"<div><h3>Background</h3><p>Chatbots have been widely used in many spheres of life from customer services to mental health companions. Despite the breakthroughs in achieving human-like conversations, Arabic language chatbots driven by AI and NLP are relatively scarce due to the complex nature of the Arabic language.</p></div><div><h3>Objective</h3><p>We aim to review published literature on Arabic chatbots to gain insight into the technologies used highlighting the gap in this emerging field.</p></div><div><h3>Methods</h3><p>To identify relevant studies, we searched eight bibliographic databases and conducted backward and forward reference checking. Two reviewers independently performed study selection and data extraction. The extracted data was synthesized using a narrative approach.</p></div><div><h3>Results</h3><p>We included 18 of 1755 retrieved publications. Thirteen unique chatbots were identified from the 18 studies. ArabChat was the most common chatbot in the included studies (<em>n</em> = 5). The type of Arabic language in most chatbots (<em>n</em> = 13) was Modern Standard Arabic. The input and output modalities used in 17 chatbots were only text. Most chatbots (<em>n</em> = 14) were able to have long conversations. The majority of the chatbots (<em>n</em> = 14) were developed to serve a specific purpose (Closed domain). A retrieval-based model was used for developing most chatbots (<em>n</em> = 17).</p></div><div><h3>Conclusion</h3><p>Despite a large number of chatbots worldwide, there is relatively a small number of Arabic language chatbots. Furthermore, the available Arabic language chatbots are less advanced than other language chatbots. Researchers should develop more Arabic language chatbots that are based on more advanced input and output modalities, generative-based models, and natural language processing (NLP).</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100057"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000088/pdfft?md5=c0cb5218dcb9a5a08acc663588170abe&pid=1-s2.0-S2666990022000088-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43191925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100051
Identification of key genes associated with survival of glioblastoma multiforme using integrated analysis of TCGA datasets
Seema Sandeep Redekar , Satishkumar L. Varma , Atanu Bhattacharjee
Background and Objective
Glioblastoma (GBM) is the most aggressive type of brain tumor. In spite of various treatment options, GBM patients usually have a poor prognosis. Genetic markers play a vital role in the progression of the disease, and identification of novel molecular biomarkers is essential to explain its mechanisms or improve the prognosis of GBM. Advances in high-throughput genomic technologies enable the analysis of varied types of omics data to find biomarkers in GBM. Although data repositories like The Cancer Genome Atlas (TCGA) are rich sources of such multi-omics data, integrating these different genomic datasets of varying quality and patient heterogeneity is challenging.
Methods
Multi-omics datasets from TCGA, consisting of DNA methylation, RNA sequencing, and copy number variation (CNV) data of GBM patients, are obtained to carry out the analysis. A Cox proportional hazards regression model is developed in R to identify significant genes, from the diverse datasets, associated with patient survival. The Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) are used as estimators for the model. Validation is performed to determine the accuracy and the corresponding prediction error.
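The study fits its Cox model in R; for illustration, the sketch below shows an equivalent fit in Python's lifelines, with AIC computed from the partial log-likelihood. The data frame columns and values are invented placeholders, not TCGA data.

```python
# Sketch of a Cox proportional hazards fit with AIC, using Python's lifelines
# (the study itself works in R). Columns/values are placeholders, not TCGA data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time": [310, 455, 120, 980, 210, 640, 150, 720],   # survival time in days (toy)
    "event": [1, 1, 0, 1, 0, 1, 1, 1],                  # 1 = death observed, 0 = censored
    "ANK1_expr": [0.2, 1.4, 0.7, 2.1, 0.3, 1.8, 0.5, 1.1],   # gene expression (toy)
    "HOXA9_expr": [1.1, 0.5, 0.9, 0.2, 1.6, 0.4, 1.2, 0.6],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")

k = len(cph.params_)                          # number of fitted coefficients
aic = -2 * cph.log_likelihood_ + 2 * k        # AIC from the partial log-likelihood
print(cph.summary[["coef", "p"]])
print("AIC:", aic)
```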
Results
Five key genes each are identified from the DNA methylation and RNA sequencing datasets: ANK1, HOXA9, TOX2, CXCR6, and PIGZ, and L3MBTL, KDM5B, CCDC138, NUS1P1, and ARHGAP42, respectively. Higher expression values of these genes indicate better survival of GBM patients, and Kaplan-Meier estimate curves show this correlation (see the sketch below). Lower values of AIC and BIC support the suitability of the model. The prediction model is validated on the test set and shows a low error rate. Copy number variation data are also analysed to find significant chromosomal locations in GBM patients, associated with chromosomes 2, 5, 6, 7, 12, and 13. In all, nine CNV locations are found to influence the progression of GBM.
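A minimal Kaplan-Meier comparison of the kind described, again in lifelines with invented values: patients are split at the median expression of one gene and the two survival curves are drawn.

```python
# Sketch: Kaplan-Meier curves for high vs. low expression of one gene
# (toy values, not the TCGA cohort).
import numpy as np
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

time = np.array([310, 455, 120, 980, 210, 640, 150, 720])   # days (toy)
event = np.array([1, 1, 0, 1, 0, 1, 1, 1])                  # 1 = death observed
expr = np.array([0.2, 1.4, 0.7, 2.1, 0.3, 1.8, 0.1, 1.9])   # gene expression (toy)

high = expr > np.median(expr)                 # median split into two groups
kmf = KaplanMeierFitter()
ax = None
for mask, label in [(high, "high expression"), (~high, "low expression")]:
    kmf.fit(time[mask], event[mask], label=label)
    ax = kmf.plot_survival_function(ax=ax)    # overlay both survival curves
plt.show()
```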
Conclusion
Integrated analysis of multiple omics datasets is carried out to identify significant genes from the DNA methylation and RNA sequencing profiles of 76 common individuals. The copy number variation dataset for the same patients is analyzed to recognize notable locations across the 22 chromosomes. The survival analysis determines the correlation of these biomarkers with the progression of the disease.
{"title":"Identification of key genes associated with survival of glioblastoma multiforme using integrated analysis of TCGA datasets","authors":"Seema Sandeep Redekar , Satishkumar L. Varma , Atanu Bhattacharjee","doi":"10.1016/j.cmpbup.2022.100051","DOIUrl":"10.1016/j.cmpbup.2022.100051","url":null,"abstract":"<div><h3>Background and Objective</h3><p>Glioblastoma (GBM) is the most aggressive type of brain tumor. In spite of having various treatment options, GBM patients usually have a poor prognosis. Genetic markers play a vital role in the progression of the disease. Identification of these novel molecular biomarkers is essential to explain the mechanisms or improve the prognosis of GBM. Advances in high throughput genomic technologies enable the analysis of the varied types of omics data to find biomarkers in GBM. Although data repositories like The Cancer Genome Atlas (TCGA) are rich sources of such multi-omics data, integrating these different genomic datasets of varying quality and patient heterogeneity is challenging.</p></div><div><h3>Methods</h3><p>Multi-omics gene expression datasets from TCGA consisting of DNA methylation, RNA sequencing, and copy number variation (CNV) of GBM patient is obtained to carry out the analysis. The Cox proportional hazards regression model is developed in R to identify significant genes from diverse datasets associated with the patient's survival. (Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) is used as an estimator for the model. Validation is performed to determine the accuracy and corresponding prediction error.</p></div><div><h3>Results</h3><p>Five key genes are identified from DNA Methylation and RNA sequencing datasets those are ANK1, HOXA9, TOX2, CXCR6, PIGZ, and L3MBTL, KDM5B, CCDC138, NUS1P1, and ARHGAP42, respectively. Higher expression values of these genes determine better survival of the GBM patients. Kaplan-Meier estimate curves show the exact correlation. Lower values of AIC and BIC determine the suitability of the model. The prediction model is validated on the test set and signifies a low error rate. Copy number variation data is also analysed to find the significant chromosomal location of GBM patients associated with chromosome 2,5,6,7,12,13, respectively. Among all nine CNV locations are found to be influencing the progression of GBM.</p></div><div><h3>Conclusion</h3><p>Integrated analysis of multiple omics dataset is carried out to identify significant genes from DNA Methylation and RNA sequencing profiles of 76 common individuals. Copy number variation dataset for the same patients is analyzed to recognize notable locations associated with 22 chromosomes. 
The survival analysis determines the correlation of these biomarkers with the progression of the disease.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100051"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000039/pdfft?md5=e56e6a85d26c6ce9044564a4722badb2&pid=1-s2.0-S2666990022000039-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46959410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100079
Automatic epicardial fat segmentation and volume quantification on non-contrast cardiac Computed Tomography
Ana Filipa Rebelo , António M. Ferreira , José M. Fonseca
Epicardial Fat Volume (EFV) is a valuable predictor of cardio- and cerebrovascular events. However, the manual procedures for EFV calculation that are widespread in clinical practice are highly time-consuming for technicians or physicians and often involve significant intra- or inter-observer variance. To reduce the processing time and improve the repeatability of results, we propose a computer-assisted tool that automatically performs epicardial fat segmentation and volume quantification on non-contrast cardiac Computed Tomography (CT). The proposed algorithm prioritizes basic image techniques, promoting lower computational complexity. The heart region is selected using Otsu's method, template matching, and connected component analysis. Then, to refine the pericardium delineation, a convex hull is applied. Lastly, epicardial fat is segmented by thresholding. In addition to the algorithm, an intuitive software tool (HARTA) was developed for clinical use, allowing human intervention for adjustments. A set of 878 non-contrast cardiac CT images was used to validate the method. Using HARTA, the average time to segment the epicardial fat on a CT was 15.5 ± 2.42 s, whereas 10 to 26 min were required manually. Epicardial fat segmentation achieved an accuracy of 98.83% and a Dice Similarity Coefficient of 0.7730. Automatic EFV quantification presents Pearson and Spearman correlation coefficients of 0.9366 and 0.8773, respectively. The proposed tool has the potential to be used in clinical contexts, assisting cardiologists in achieving faster and more accurate EFV, leading towards personalized diagnosis and therapy. The human intervention component can also improve the automatic results and ensure the credibility of this diagnostic support system. The software presented here is publicly available on GitHub.
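The named stages map onto standard OpenCV/NumPy primitives. Below is a heavily simplified single-slice sketch under assumptions not stated in the abstract (HU-valued input, a commonly used fat window of roughly -190 to -30 HU, and the largest connected component as a heart-region proxy); it is not the HARTA implementation.

```python
# Simplified single-slice sketch of the stages named above (Otsu thresholding,
# connected components, convex hull, fat-window thresholding). Not HARTA;
# the -190..-30 HU fat window and heart-selection heuristic are assumptions.
import numpy as np
import cv2

def epicardial_fat_mask(hu_slice: np.ndarray) -> np.ndarray:
    # 1) Otsu threshold on an 8-bit rendering to separate tissue from background.
    img8 = cv2.normalize(hu_slice, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 2) Keep the largest connected component as a crude heart-region proxy.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # skip background label 0
    region = (labels == largest).astype(np.uint8)

    # 3) Convex hull of the region as a stand-in for the pericardial contour.
    hull = cv2.convexHull(cv2.findNonZero(region))
    hull_mask = np.zeros_like(region)
    cv2.fillConvexPoly(hull_mask, hull, 1)

    # 4) Fat voxels inside the hull via HU thresholding (assumed -190..-30 HU).
    return (hu_slice > -190) & (hu_slice < -30) & (hull_mask == 1)

# Volume: fat voxel count times voxel volume, accumulated over all slices, e.g.
# volume_ml = sum(epicardial_fat_mask(s).sum() for s in slices) * voxel_mm3 / 1000
```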
{"title":"Automatic epicardial fat segmentation and volume quantification on non-contrast cardiac Computed Tomography","authors":"Ana Filipa Rebelo , António M. Ferreira , José M. Fonseca","doi":"10.1016/j.cmpbup.2022.100079","DOIUrl":"10.1016/j.cmpbup.2022.100079","url":null,"abstract":"<div><p>Epicardial Fat Volume (EFV) represents a valuable predictor of cardio- and cerebrovascular events. However, the manual procedures for EFV calculation, diffused in clinical practice, are highly time-consuming for technicians or physicians and often involve significant intra- or inter-observer variances. To reduce the processing time and improve results repeatability, we propose a computer-assisted tool that automatically performs epicardial fat segmentation and volume quantification on non-contrast cardiac Computed Tomography (CT). The proposed algorithm prioritizes the use of basic image techniques, promoting lower computational complexity. The heart region is selected using Otsu's Method, Template Matching and Connected Component Analysis. Then, to refine the pericardium delineation, convex hull is applied. Lastly, epicardial fat is segmented by thresholding. In addition to the algorithm, an intuitive software (HARTA) was developed for clinical use, allowing human intervention for adjustments. A set of 878 non-contrast cardiac CT images was used to validate the method. Using HARTA, the average time to segment the epicardial fat on a CT was 15.5 <span><math><mo>±</mo></math></span> 2.42 s, while manually 10 to 26 min were required. Epicardial fat segmentation was evaluated obtaining an accuracy of 98.83% and a Dice Similarity Coefficient of 0.7730. EFV automatic quantification presents Pearson and Spearman correlation coefficients of 0.9366 and 0.8773, respectively. The proposed tool presents potential to be used in clinical contexts, assisting cardiologists to achieve faster and accurate EFV, leading towards personalized diagnosis and therapy. The human intervention component can also improve the automatic results and insure the credibility of this diagnostic support system. The software hereby presented is available for public access at GitHub.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100079"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000301/pdfft?md5=485a26c19d2d44942860e5221943ea73&pid=1-s2.0-S2666990022000301-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42314270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100084
A unified approach for automated segmentation of pupil and iris in on-axis images
Grissel Priyanka Mathias , J.H. Gagan , B. Vaibhav Mallya , J.R. Harish Kumar
We propose a unified approach for the automatic and accurate segmentation of the pupil and iris from on-axis grayscale eye images. The segmentation of the pupil and iris is achieved with a Basis-spline-based active contour and a circular active contour, respectively. The circular active contour shape template has three free parameters, i.e., a pair of center coordinates and the radius. The Basis-spline has M knots in the shape template and five free parameters, i.e., a pair of center coordinates, scaling in the horizontal and vertical directions, and the rotation angle. The segmentation of the region of interest is done by minimization of the local energy function. Optimization of the local energy function of the circular and Basis-spline-based active contours is carried out using the gradient descent technique and Green's theorem. To achieve the segmentation of the iris boundary, the circular active contour method is combined with our novel occlusion removal algorithm, which removes eyelid and eyelash occlusions for accurate iris segmentation. Automatic localization of the pupil is achieved by the sum of absolute differences method. The proposed algorithm is validated on three publicly available databases: the IIT Delhi Iris, CASIA Iris Interval V3, and CASIA Iris Interval V4 databases, consisting of 7518 grayscale iris images in total. For the segmentation of the pupil on the aforementioned databases, we attained Dice indices of 0.971, 0.950, and 0.960, respectively, and for the segmentation of the iris, we attained Dice indices of 0.905, 0.898, and 0.900, respectively. An exploratory data analysis was then done to visualize the distribution of the performance parameters throughout the databases. The segmentation performance of the proposed algorithm is on par with that of the reported state-of-the-art algorithms.
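As a toy illustration of fitting a parameterized circular boundary by gradient descent (not the authors' exact local-energy formulation), the sketch below maximizes the intensity contrast between just outside and just inside a circle, using finite-difference gradients on the parameters (cx, cy, r). The synthetic image, step size, and iteration count are illustrative and may need tuning.

```python
# Toy circular-boundary fit by gradient ascent on an intensity-contrast energy.
# Illustrates the parameterized-active-contour idea only; not the paper's
# exact local energy function.
import numpy as np

def boundary_mean(img, cx, cy, r, n=180):
    """Mean image intensity sampled along a circle of radius r."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def energy(img, p, d=2.0):
    cx, cy, r = p
    # Dark pupil inside, brighter iris outside: reward outer-minus-inner contrast.
    return boundary_mean(img, cx, cy, r + d) - boundary_mean(img, cx, cy, r - d)

def fit_circle(img, p0, lr=0.5, steps=300, h=1.0):
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        grad = np.zeros(3)
        for i in range(3):                  # finite-difference gradient per parameter
            dp = np.zeros(3)
            dp[i] = h
            grad[i] = (energy(img, p + dp) - energy(img, p - dp)) / (2 * h)
        p += lr * grad                      # ascend the contrast energy
    return p

# Synthetic eye: dark disk (pupil) on a brighter background.
img = np.full((200, 200), 180.0)
yy, xx = np.mgrid[:200, :200]
img[(xx - 100) ** 2 + (yy - 104) ** 2 < 30 ** 2] = 40.0

print(fit_circle(img, p0=(95.0, 95.0, 20.0)))   # should move toward (100, 104, 30)
```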
{"title":"A unified approach for automated segmentation of pupil and iris in on-axis images","authors":"Grissel Priyanka Mathias , J.H. Gagan , B. Vaibhav Mallya , J.R. Harish Kumar","doi":"10.1016/j.cmpbup.2022.100084","DOIUrl":"10.1016/j.cmpbup.2022.100084","url":null,"abstract":"<div><p>We propose a unified approach for the automatic and accurate segmentation of the pupil and iris from on-axis grayscale eye images. The segmentation of pupil and iris is achieved with Basis-spline-based active contour and circular active contour, respectively. The circular active contour shape template has three free parameters, i.e., a pair of center coordinates and the radius. Basis-spline has <span><math><mi>M</mi></math></span> knots in the shape template and five free parameters i.e., a pair of center coordinates, scaling in the horizontal and vertical directions, and the rotation angle. The segmentation of the region of interest is done by minimization of the local energy function. Optimization of the local energy function of circular and Basis-spline-based active contour is carried out using gradient descent technique and Green’s theorem. To achieve the segmentation of iris boundary, the circular active contour method is combined with our novel occlusion removal algorithm. This helps in removing eyelid and eyelash occlusions for accurate iris segmentation. Automatic localization of the pupil is achieved by the sum of absolute difference method. The proposed algorithm is validated on three publicly available databases: IIT Delhi Iris, CASIA Iris Interval V3, and CASIA Iris Interval V4 databases consisting of 7518 grayscale iris images in total. For the segmentation of pupil from the aforementioned databases, we attained a Dice index of 0.971, 0.950, and 0.960, respectively, and for the segmentation of iris, we attained a Dice index of 0.905, 0.898, and 0.900, respectively. An exploratory data analysis was then done to visualize the distribution of the performance parameters throughout the databases. The segmentation performance of the proposed algorithm is on par with that of the reported state-of-the-art algorithms.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100084"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000350/pdfft?md5=9c96bed223fe43655d35fca5047b373d&pid=1-s2.0-S2666990022000350-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45574713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100081
A deep learning pipeline for automatized assessment of spinal MRI
Irina Balzer , Malin Mühlemann , Moritz Jokeit , Ishaan Singh Rawal , Jess G. Snedeker , Mazda Farshad , Jonas Widmer
Background
This work evaluates the feasibility, development, and validation of a machine learning pipeline that covers all tasks from MRI input to the segmentation and grading of the intervertebral discs in the lumbar spine, offering multiple radiological gradings of degeneration as quantitative, objective output.
Methods
The pipeline's performance was analysed on 1000 T2-weighted sagittal MRI scans. Binary outputs were assessed with the harmonic mean of precision and recall (DSC) and the area under the precision-recall curve (AUC-PR). Multi-class output scores were averaged and complemented by the Top-2 categorical accuracy. The processing success rate was evaluated on 10,053 unlabelled MRI scans of lumbar spines.
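For concreteness, the two binary metrics can be computed from a predicted mask and its scores as below; a sketch with toy arrays, not the study's evaluation code.

```python
# DSC (harmonic mean of precision and recall, i.e., F1 over voxels) and AUC-PR
# for a binary output. Toy arrays stand in for real masks/scores.
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

truth = np.array([1, 1, 0, 1, 0, 0, 1, 0])            # ground-truth labels (toy)
scores = np.array([.9, .8, .4, .6, .3, .7, .5, .2])   # model probabilities (toy)
pred = (scores >= 0.5).astype(int)                     # thresholded prediction

tp = np.sum((pred == 1) & (truth == 1))
fp = np.sum((pred == 1) & (truth == 0))
fn = np.sum((pred == 0) & (truth == 1))
dsc = 2 * tp / (2 * tp + fp + fn)                      # Dice similarity coefficient

precision, recall, _ = precision_recall_curve(truth, scores)
auc_pr = auc(recall, precision)                        # area under the PR curve
print(f"DSC={dsc:.3f}  AUC-PR={auc_pr:.3f}")
```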
Results
The midsagittal plane selection achieved a DSC of 74.80% ± 2.99% and an AUC-PR score of 81.71% ± 2.72% (96.91% Top-2 categorical accuracy). The segmentation network obtained a DSC of 91.80% ± 0.44%. The Pfirrmann grading of intervertebral discs in the midsagittal plane was classified with a DSC of 64.08% ± 3.29% and an AUC-PR score of 68.25% ± 6.00% (91.65% Top-2 categorical accuracy). Disc herniations achieved a DSC of 61.57% ± 3.39% and an AUC-PR score of 66.86% ± 5.03%. Cranial endplate defects reached a DSC of 49.76% ± 3.45% and an AUC-PR of 52.36% ± 1.98% (with slightly superior predictions for caudal endplate defects). The binary classification of caudal Schmorl's nodes obtained a DSC of 91.58% ± 2.25% with an AUC-PR metric of 96.69% ± 1.58% (similar performance for cranial Schmorl's nodes). Spondylolisthesis was classified with a DSC of 89.03% ± 2.42% and an AUC-PR score of 95.98% ± 1.82%. Annular fissures were predicted with a DSC of 78.09% ± 7.21% and an AUC-PR score of 86.31% ± 7.45%. Intervertebral disc classifications in the parasagittal plane achieved equivalent performance. The pipeline successfully processed 98.53% of the provided sagittal MRI scans.
Conclusions
The present deep learning framework has the potential to aid the quantitative evaluation of spinal MRI for an array of clinically established grading systems.
{"title":"A deep learning pipeline for automatized assessment of spinal MRI","authors":"Irina Balzer , Malin Mühlemann , Moritz Jokeit , Ishaan Singh Rawal , Jess G. Snedeker , Mazda Farshad , Jonas Widmer","doi":"10.1016/j.cmpbup.2022.100081","DOIUrl":"10.1016/j.cmpbup.2022.100081","url":null,"abstract":"<div><h3>Background</h3><p>This work evaluates the feasibility, development, and validation of a machine learning pipeline that includes all tasks from MRI input to the segmentation and grading of the intervertebral discs in the lumbar spine, offering multiple different radiological gradings of degeneration as quantitative objective output.</p></div><div><h3>Methods</h3><p>The pipelines’ performance was analysed on 1′000 T2-weighted sagittal MRI. Binary outputs were assessed with the harmonic mean of precision and recall (DSC) and the area under the precision-recall curve (AUC-PR). Multi-class output scores were averaged and complemented by the Top-2 categorical accuracy. The processing success rate was evaluated on 10′053 unlabelled MRI scans of lumbar spines.</p></div><div><h3>Results</h3><p>The midsagittal plane selection achieved an DSC of 74,80% ± 2,99% and an AUC-PR score of 81.71% ± 2.72% (96.91% Top-2 categorical accuracy). The segmentation network obtained a DSC of 91.80% ± 0.44%. The Pfirrmann grading of intervertebral discs in the midsagittal plane was classified with a DSC of 64.08% ± 3.29% and an AUC-PR score of 68.25% ± 6.00% (91.65% Top-2 categorical accuracy). Disc herniations achieved a DSC of 61.57% ± 3.39% and an AUC-PR score of 66.86% ± 5.03%. The cranial endplate defects reached a DSC of 49.76% ± 3.45% and 52.36% ± 1.98% AUC-PR (slightly superior predictions of caudal endplate defect). The binary classifications for the caudal Schmorl's nodes obtained a DSC of 91.58% ± 2.25% with an AUC-PR metric of 96.69% ± 1.58% (similar performance for cranial Schmorl's nodes). Spondylolisthesis was classified with a DSC of 89.03% ± 2.42% and an AUC-PR score of 95.98% ± 1.82%. Annular Fissures were predicted with a DSC of 78.09% ± 7.21% and an AUC-PR score of 86.31% ± 7.45%. Intervertebral disc classifications in the parasagittal plane achieved an equivalent performance. The pipeline successfully processed 98.53% of the provided sagittal MRI scans.</p></div><div><h3>Conclusions</h3><p>The present deep learning framework has the potential to aid the quantitative evaluation of spinal MRI for an array of clinically established grading systems.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":"2 ","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666990022000325/pdfft?md5=3d1660ccac365f091387c41c705eb11f&pid=1-s2.0-S2666990022000325-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49390635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}