
Computer methods and programs in biomedicine update: Latest publications

Security and privacy in the internet of things healthcare systems: Toward a robust solution in real-life deployment
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100071
Ibrahim Sadek , Josué Codjo , Shafiq Ul Rehman , Bessam Abdulrazak

Internet of things (IoT) technology can nowadays be used to track user activity in daily living and health-related quality of life. IoT healthcare sensors can play a great role in reducing health-related costs and help users assess their health progression. Nonetheless, these IoT solutions add security challenges because of their direct access to large amounts of personal information and their close integration into user activities. As such, IoT technology is a constant target for cybercriminals. More importantly, any adversarial attack on an individual IoT node undermines the overall security of the concerned network. In this study, we present the privacy and security issues of IoT healthcare devices. Moreover, we address the attack models needed to verify the robustness of such devices. Finally, we present our deployed AMbient Intelligence (AMI) Lab architecture and compare its performance to current IoT solutions.

Citations: 4
Biomechanical evaluation of a novel 3D printing tibiotalocalcaneus nail with trilobular cross-sectional design and self-compression effect
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100072
Kin Weng Wong , Tai-Hua Yang , Shao-Fu Huang , Yi-Jun Liu , Chi-Sheng Chien , Chun-Li Lin

The current tibiotalocalcaneal (TTC) nails used in ankle arthrodesis surgery have shortcomings that lead to unfavorable clinical failures. This study proposes a novel nail design, fabricated by metal 3D printing, that can enhance global implant stability, assessed through finite element (FE) analysis and fatigue testing. A novel titanium nail was designed with a trilobular cross-section to increase anti-rotation stability. The nail has three leads with different, increasing pitches that enhance the self-compression effect at the fusion sites. Between the leads, there are two porous diamond-microstructure regions that act as a bone ingrowth scaffold. The nail was fabricated by metal 3D printing and implanted into an artificial ankle joint to evaluate the self-compression effect. Nonlinear FE analysis was performed to compare the anti-rotation stability of the trilobular nail (Tri-nail) and a conventional circular nail. Static and fatigue four-point bending tests were done to determine the mechanical strength of the novel nail. The self-compression experiment showed that the three-lead design provides two stages of significant compression, with a pressurization rate as high as 40%. FE simulation results indicated that the Tri-nail provides a significant reduction in tangential displacement as well as in the surrounding bone stress, and the stress distribution is more even in the Tri-nail group. The four-point bending test found that the Tri-nail yield strength is 12,957 ± 577 N, much higher than the approved FDA reference (1026 N). One million loading cycles at 8% of the yield strength (1036 N) were completed without Tri-nail failure. The proposed metal 3D-printed Tri-nail provides sufficient mechanical strength and is mechanically stable, with superior anti-rotation ability and an excellent fusion-site self-compression effect.

Citations: 0
A death, infection, and recovery (DIR) model to forecast the COVID-19 spread
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2021.100047
Fazila Shams , Assad Abbas , Wasiq Khan , Umar Shahbaz Khan , Raheel Nawaz

Background

The SARS-CoV-2 virus, which causes COVID-19, has resulted in substantial casualties in many countries. The first case of COVID-19 was reported in China towards the end of 2019, and cases started to appear in several other countries (including Pakistan) by February 2020. To analyze the spreading pattern of the disease, several researchers used the Susceptible-Infectious-Recovered (SIR) model. However, the classical SIR model cannot predict the death rate.

Objective

In this article, we present a Death-Infection-Recovery (DIR) model to forecast the virus spread over a window of one (minimum) to fourteen (maximum) days. Our model captures the dynamic behavior of the virus and can assist authorities in making decisions on non-pharmaceutical interventions (NPI), like travel restrictions, lockdowns, etc.

Method

The training dataset covered 134 days. The Auto Regressive Integrated Moving Average (ARIMA) model was implemented using XLSTAT (an add-in for Microsoft Excel), whereas the SIR and the proposed DIR models were implemented in the Python programming language. We compared the performance of the DIR model with the SIR and ARIMA models by computing the Percentage Error and the Mean Absolute Percentage Error (MAPE).
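
As a quick illustration of these comparison metrics, the minimal Python sketch below (Python being the language the authors report using for the SIR and DIR implementations) computes the maximum percentage error and the MAPE over a 14-day forecast window; the daily counts in the example are invented for illustration and are not taken from the study.

    import numpy as np

    def mape(actual, predicted):
        # Mean Absolute Percentage Error over a forecast window.
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return np.mean(np.abs((actual - predicted) / actual)) * 100.0

    def max_percentage_error(actual, predicted):
        # Worst single-day percentage error over the window.
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return np.max(np.abs((actual - predicted) / actual)) * 100.0

    # Hypothetical 14-day series of daily infections (illustrative numbers only).
    actual = [900, 950, 1000, 1020, 1100, 1150, 1200, 1210, 1250, 1300, 1330, 1360, 1400, 1450]
    forecast = [910, 940, 1005, 1040, 1090, 1160, 1190, 1230, 1240, 1310, 1350, 1355, 1420, 1440]

    print("MAPE: %.2f%%" % mape(actual, forecast))
    print("Maximum percentage error: %.2f%%" % max_percentage_error(actual, forecast))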

Results

Experimental results demonstrate that the maximum percentage error in predicting the number of deaths, infections, and recoveries over a period of fourteen days is only 2.33% with the DIR model, compared with 10.03% for the ARIMA model and 53.07% for the SIR model.

Conclusion

The forecasting error of the DIR model is significantly lower than that of the compared models. Moreover, the MAPE of the DIR model is well below that of the two compared models, which indicates its effectiveness.

Citations: 1
Arabic chatbot technologies: A scoping review
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100057
Arfan Ahmed , Nashva Ali , Mahmood Alzubaidi , Wajdi Zaghouani , Alaa Abd-alrazaq , Mowafa Househ

Background

Chatbots have been widely used in many spheres of life, from customer service to mental health companionship. Despite breakthroughs in achieving human-like conversations, Arabic-language chatbots driven by AI and NLP are relatively scarce due to the complex nature of the Arabic language.

Objective

We aim to review the published literature on Arabic chatbots to gain insight into the technologies used, highlighting the gaps in this emerging field.

Methods

To identify relevant studies, we searched eight bibliographic databases and conducted backward and forward reference checking. Two reviewers independently performed study selection and data extraction. The extracted data was synthesized using a narrative approach.

Results

We included 18 of 1755 retrieved publications. Thirteen unique chatbots were identified from the 18 studies. ArabChat was the most common chatbot in the included studies (n = 5). Most chatbots (n = 13) used Modern Standard Arabic. Text was the only input and output modality in 17 chatbots. Most chatbots (n = 14) were able to have long conversations. The majority of the chatbots (n = 14) were developed to serve a specific purpose (closed domain). A retrieval-based model was used for developing most chatbots (n = 17).

Conclusion

Despite the large number of chatbots worldwide, there are relatively few Arabic-language chatbots. Furthermore, the available Arabic-language chatbots are less advanced than those in other languages. Researchers should develop more Arabic-language chatbots based on more advanced input and output modalities, generative models, and natural language processing (NLP).

Citations: 14
Identification of key genes associated with survival of glioblastoma multiforme using integrated analysis of TCGA datasets
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100051
Seema Sandeep Redekar , Satishkumar L. Varma , Atanu Bhattacharjee

Background and Objective

Glioblastoma (GBM) is the most aggressive type of brain tumor. Despite various treatment options, GBM patients usually have a poor prognosis. Genetic markers play a vital role in the progression of the disease. Identification of novel molecular biomarkers is essential to explain the mechanisms of GBM or improve its prognosis. Advances in high-throughput genomic technologies enable the analysis of varied types of omics data to find biomarkers in GBM. Although data repositories like The Cancer Genome Atlas (TCGA) are rich sources of such multi-omics data, integrating these different genomic datasets of varying quality and patient heterogeneity is challenging.

Methods

Multi-omics gene expression datasets from TCGA, consisting of DNA methylation, RNA sequencing, and copy number variation (CNV) data of GBM patients, are obtained to carry out the analysis. A Cox proportional hazards regression model is developed in R to identify significant genes associated with patient survival from the diverse datasets. The Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are used as estimators for the model. Validation is performed to determine the accuracy and the corresponding prediction error.
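
For readers who wish to reproduce the survival-modelling step, the sketch below shows an equivalent Cox proportional hazards fit with AIC computed from the fitted log-likelihood. Note that the authors developed their model in R; this illustration instead uses Python with the lifelines package, and the column names, gene expression values, and survival times are hypothetical placeholders rather than TCGA data.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical table: one row per patient, survival time in days, event indicator
    # (1 = death observed, 0 = censored), and expression values for two example genes.
    df = pd.DataFrame({
        "time":  [120, 340, 200, 95, 410, 275, 150, 380],
        "event": [1, 0, 1, 1, 0, 1, 1, 0],
        "ANK1":  [2.1, 3.4, 1.8, 0.9, 4.2, 2.7, 1.5, 3.9],
        "HOXA9": [5.3, 1.2, 4.8, 6.1, 0.7, 3.9, 5.8, 1.1],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()  # hazard ratio and p-value per gene

    # Akaike Information Criterion for model comparison (lower is better).
    k = len(cph.params_)                   # number of fitted coefficients
    aic = 2 * k - 2 * cph.log_likelihood_
    print("AIC:", aic)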

Results

Five key genes are identified from the DNA methylation dataset (ANK1, HOXA9, TOX2, CXCR6, and PIGZ) and five from the RNA sequencing dataset (L3MBTL, KDM5B, CCDC138, NUS1P1, and ARHGAP42). Higher expression values of these genes correspond to better survival of GBM patients. Kaplan-Meier estimate curves show this correlation. Lower values of AIC and BIC indicate the suitability of the model. The prediction model is validated on the test set and shows a low error rate. Copy number variation data are also analysed to find significant chromosomal locations in GBM patients, associated with chromosomes 2, 5, 6, 7, 12, and 13. Among these, nine CNV locations are found to influence the progression of GBM.

Conclusion

Integrated analysis of multiple omics datasets is carried out to identify significant genes from the DNA methylation and RNA sequencing profiles of 76 common individuals. The copy number variation dataset for the same patients is analyzed to recognize notable locations across the 22 chromosomes. The survival analysis determines the correlation of these biomarkers with the progression of the disease.

Citations: 6
Automatic epicardial fat segmentation and volume quantification on non-contrast cardiac Computed Tomography
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100079
Ana Filipa Rebelo , António M. Ferreira , José M. Fonseca

Epicardial Fat Volume (EFV) is a valuable predictor of cardio- and cerebrovascular events. However, the manual procedures for EFV calculation that are widespread in clinical practice are highly time-consuming for technicians or physicians and often involve significant intra- or inter-observer variance. To reduce processing time and improve the repeatability of results, we propose a computer-assisted tool that automatically performs epicardial fat segmentation and volume quantification on non-contrast cardiac Computed Tomography (CT). The proposed algorithm prioritizes basic image-processing techniques, keeping computational complexity low. The heart region is selected using Otsu's method, template matching, and connected component analysis. Then, to refine the pericardium delineation, a convex hull is applied. Lastly, epicardial fat is segmented by thresholding. In addition to the algorithm, an intuitive software tool (HARTA) was developed for clinical use, allowing human intervention for adjustments. A set of 878 non-contrast cardiac CT images was used to validate the method. Using HARTA, the average time to segment the epicardial fat on a CT was 15.5 ± 2.42 s, whereas 10 to 26 min were required manually. Epicardial fat segmentation achieved an accuracy of 98.83% and a Dice Similarity Coefficient of 0.7730. Automatic EFV quantification presents Pearson and Spearman correlation coefficients of 0.9366 and 0.8773, respectively. The proposed tool has the potential to be used in clinical contexts, assisting cardiologists in obtaining faster and more accurate EFV and supporting personalized diagnosis and therapy. The human-intervention component can also improve the automatic results and ensure the credibility of this diagnostic support system. The software presented here is available for public access on GitHub.
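
The classical pipeline described above (Otsu's method, connected component analysis, convex hull refinement, and thresholding) can be sketched roughly as follows in Python with OpenCV. This is an illustrative approximation, not the authors' HARTA implementation; in particular, the Hounsfield-unit window assumed for adipose tissue (-190 to -30 HU) and the soft-tissue display window are common choices rather than values taken from the paper.

    import cv2
    import numpy as np

    def segment_epicardial_fat(ct_slice_hu):
        # ct_slice_hu: 2-D NumPy array of Hounsfield units for one non-contrast CT slice.
        # Returns a boolean mask of candidate epicardial fat pixels.

        # 1. Coarse soft-tissue mask via Otsu's method on a windowed 8-bit image.
        windowed = np.clip(ct_slice_hu, -200, 400)
        img8 = cv2.normalize(windowed, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, binary = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # 2. Keep the largest connected component, assumed to contain the heart region.
        n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # label 0 is background
        heart = (labels == largest).astype(np.uint8)

        # 3. Approximate the pericardial boundary with the convex hull of that component.
        contours, _ = cv2.findContours(heart, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        hull = cv2.convexHull(np.vstack(contours))
        pericardium = np.zeros_like(heart)
        cv2.fillConvexPoly(pericardium, hull, 1)

        # 4. Inside the hull, keep pixels in an assumed adipose-tissue HU window.
        fat_hu = (ct_slice_hu >= -190) & (ct_slice_hu <= -30)
        return fat_hu & (pericardium > 0)

Volume quantification would then amount to summing such masks over all slices and multiplying by the voxel volume.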

Citations: 1
A unified approach for automated segmentation of pupil and iris in on-axis images
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100084
Grissel Priyanka Mathias , J.H. Gagan , B. Vaibhav Mallya , J.R. Harish Kumar

We propose a unified approach for the automatic and accurate segmentation of the pupil and iris from on-axis grayscale eye images. The segmentation of the pupil and iris is achieved with a Basis-spline-based active contour and a circular active contour, respectively. The circular active contour shape template has three free parameters, i.e., a pair of center coordinates and the radius. The Basis-spline shape template has M knots and five free parameters, i.e., a pair of center coordinates, scaling in the horizontal and vertical directions, and the rotation angle. The segmentation of the region of interest is done by minimizing a local energy function. Optimization of the local energy function of the circular and Basis-spline-based active contours is carried out using the gradient descent technique and Green's theorem. To achieve the segmentation of the iris boundary, the circular active contour method is combined with our novel occlusion removal algorithm, which removes eyelid and eyelash occlusions for accurate iris segmentation. Automatic localization of the pupil is achieved by the sum of absolute differences method. The proposed algorithm is validated on three publicly available databases, the IIT Delhi Iris, CASIA Iris Interval V3, and CASIA Iris Interval V4 databases, consisting of 7518 grayscale iris images in total. For the segmentation of the pupil in these databases, we attained Dice indices of 0.971, 0.950, and 0.960, respectively, and for the segmentation of the iris, Dice indices of 0.905, 0.898, and 0.900, respectively. An exploratory data analysis was then done to visualize the distribution of the performance parameters across the databases. The segmentation performance of the proposed algorithm is on par with that of the reported state-of-the-art algorithms.
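
Since the reported performance is expressed as a Dice index, the short Python sketch below shows how that metric is computed from two binary masks; the toy masks are invented for illustration and are unrelated to the IIT Delhi or CASIA data.

    import numpy as np

    def dice_index(pred_mask, gt_mask):
        # Dice similarity coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|).
        pred = np.asarray(pred_mask, dtype=bool)
        gt = np.asarray(gt_mask, dtype=bool)
        total = pred.sum() + gt.sum()
        if total == 0:
            return 1.0
        return 2.0 * np.logical_and(pred, gt).sum() / total

    # Toy example: a predicted pupil mask versus a ground-truth mask.
    pred = np.zeros((8, 8), dtype=bool)
    pred[2:6, 2:6] = True
    truth = np.zeros((8, 8), dtype=bool)
    truth[3:7, 3:7] = True
    print("Dice index: %.3f" % dice_index(pred, truth))  # two shifted 4x4 squares -> 0.562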

Citations: 0
A deep learning pipeline for automatized assessment of spinal MRI
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100081
Irina Balzer , Malin Mühlemann , Moritz Jokeit , Ishaan Singh Rawal , Jess G. Snedeker , Mazda Farshad , Jonas Widmer

Background

This work evaluates the feasibility, development, and validation of a machine learning pipeline that includes all tasks from MRI input to the segmentation and grading of the intervertebral discs in the lumbar spine, offering multiple different radiological gradings of degeneration as quantitative objective output.

Methods

The pipeline's performance was analysed on 1,000 T2-weighted sagittal MRI scans. Binary outputs were assessed with the harmonic mean of precision and recall (DSC) and the area under the precision-recall curve (AUC-PR). Multi-class output scores were averaged and complemented by the Top-2 categorical accuracy. The processing success rate was evaluated on 10,053 unlabelled MRI scans of lumbar spines.
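
The evaluation metrics named here are standard and straightforward to reproduce. The minimal Python sketch below uses scikit-learn, with synthetic labels and scores that are placeholders rather than outputs of the authors' pipeline; on binary decisions the DSC coincides with the F1 score, i.e., the harmonic mean of precision and recall.

    import numpy as np
    from sklearn.metrics import average_precision_score, f1_score

    # Synthetic binary output (e.g., defect present/absent) with predicted scores.
    y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])
    y_score = np.array([0.1, 0.8, 0.6, 0.3, 0.9, 0.2, 0.4, 0.7])

    auc_pr = average_precision_score(y_true, y_score)      # area under the precision-recall curve
    dsc = f1_score(y_true, (y_score >= 0.5).astype(int))   # F1 = harmonic mean of precision and recall

    # Top-2 categorical accuracy for a synthetic five-class grading task.
    probs = np.array([[0.10, 0.60, 0.20, 0.05, 0.05],
                      [0.30, 0.10, 0.40, 0.15, 0.05],
                      [0.05, 0.05, 0.10, 0.50, 0.30]])
    labels = np.array([1, 0, 4])                            # zero-based true grades
    top2 = np.argsort(probs, axis=1)[:, -2:]                # two highest-scoring classes per case
    top2_accuracy = float(np.mean([labels[i] in top2[i] for i in range(len(labels))]))

    print(auc_pr, dsc, top2_accuracy)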

Results

The midsagittal plane selection achieved a DSC of 74.80% ± 2.99% and an AUC-PR score of 81.71% ± 2.72% (96.91% Top-2 categorical accuracy). The segmentation network obtained a DSC of 91.80% ± 0.44%. The Pfirrmann grading of intervertebral discs in the midsagittal plane was classified with a DSC of 64.08% ± 3.29% and an AUC-PR score of 68.25% ± 6.00% (91.65% Top-2 categorical accuracy). Disc herniations achieved a DSC of 61.57% ± 3.39% and an AUC-PR score of 66.86% ± 5.03%. The cranial endplate defects reached a DSC of 49.76% ± 3.45% and an AUC-PR of 52.36% ± 1.98% (slightly superior predictions for caudal endplate defects). The binary classification of caudal Schmorl's nodes obtained a DSC of 91.58% ± 2.25% with an AUC-PR metric of 96.69% ± 1.58% (similar performance for cranial Schmorl's nodes). Spondylolisthesis was classified with a DSC of 89.03% ± 2.42% and an AUC-PR score of 95.98% ± 1.82%. Annular fissures were predicted with a DSC of 78.09% ± 7.21% and an AUC-PR score of 86.31% ± 7.45%. Intervertebral disc classifications in the parasagittal plane achieved equivalent performance. The pipeline successfully processed 98.53% of the provided sagittal MRI scans.

Conclusions

The present deep learning framework has the potential to aid the quantitative evaluation of spinal MRI for an array of clinically established grading systems.

Citations: 1
Prolonged viral shedding prediction on non-hospitalized, uncomplicated SARS-CoV-2 patients using their transcriptome data
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100070
Pratheeba Jeyananthan

Severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) is a highly transmissible coronavirus that threatens the world with this deadly pandemic. The WHO reported that it spreads through contact, droplet, airborne, fomite, fecal-oral, bloodborne, mother-to-child, and animal-to-human routes. Hence, viral shedding has a huge impact on the pandemic. This study uses transcriptome data of coronavirus disease 2019 (COVID-19) patients to predict prolonged viral shedding for the corresponding patient. The prediction starts with transcriptome features alone, which give the lowest root mean squared error of 16.3±3.3 using the top 25 features selected with a forward feature selection algorithm and a linear regression model. Then, to examine the impact of a few non-molecular features on this prediction, they were added to the model one by one along with the selected transcriptome features. However, this study shows that those features do not have any impact on prolonged viral shedding prediction. Furthermore, this study predicts the days since onset in the same way. Here, too, the top 25 transcriptome features selected using the forward feature selection algorithm give comparably good accuracy (an accuracy of 0.74±0.1). However, the best accuracy was obtained using the 20 best features ranked by feature importance with an SVM (0.78±0.1). Moreover, adding non-molecular features has a great impact on the mutual-information-selected features in this prediction.
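
The feature-selection-plus-regression step can be illustrated with scikit-learn's SequentialFeatureSelector, as in the Python sketch below; the synthetic gene matrix, target values, and train/test split are stand-ins invented for the example and are unrelated to the study's transcriptome data.

    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the transcriptome matrix: 60 patients x 200 gene features,
    # with a continuous viral-shedding duration target (all values invented).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 200))
    y = X[:, :5] @ np.array([3.0, -2.0, 1.5, 0.5, 2.5]) + rng.normal(scale=2.0, size=60)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Forward selection of the top 25 features, wrapping a linear regression model.
    selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=25,
                                         direction="forward", cv=5)
    selector.fit(X_train, y_train)

    model = LinearRegression().fit(selector.transform(X_train), y_train)
    predictions = model.predict(selector.transform(X_test))
    rmse = mean_squared_error(y_test, predictions) ** 0.5
    print("RMSE with 25 forward-selected features: %.2f" % rmse)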

严重急性呼吸综合征冠状病毒2型(SARS-CoV-2)被确定为一种高传染性冠状病毒,它以这种致命的大流行威胁着世界。世卫组织报告说,它通过接触、飞沫、空气传播、虫媒、粪口传播、血液传播、母婴传播和动物-人传播。因此,病毒脱落对此次大流行具有巨大影响。本研究利用2019冠状病毒病(COVID-19)患者的转录组数据预测相应患者的病毒脱落时间延长。该预测从转录组特征开始,使用前向特征选择算法和线性回归算法选择前25个特征,得到最低的均方根值16.3±3.3。然后,为了观察少数非分子特征在预测中的影响,我们将它们与选定的转录组特征一起逐一添加到模型中。然而,这项研究表明,这些特征对长期病毒脱落的预测没有任何影响。此外,这项研究以同样的方式预测了发病后的一天。在这里,使用正向特征选择算法选择的前25个转录组特征具有相当好的准确性(精度值为0.74±0.1)。而使用SVM从特征重要度中选取20个最优的特征,准确率最高(0.78±0.1)。此外,非分子特征的加入对预测中互信息选择特征的影响很大。
{"title":"Prolonged viral shedding prediction on non-hospitalized, uncomplicated SARS-CoV-2 patients using their transcriptome data","authors":"Pratheeba Jeyananthan","doi":"10.1016/j.cmpbup.2022.100070","DOIUrl":"10.1016/j.cmpbup.2022.100070","url":null,"abstract":"<div><p>Severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) is identified as a highly transmissible coronavirus which threatens the world with this deadly pandemic. WHO reported that it spreads through contact, droplet, airborne, formite, fecal-oral, bloodborne, mother-to-child and animal-to-human. Hence, viral shedding has a huge impact on this pandemic. This study uses transcriptome data of coronavirus disease 2019 (COVID-19) patients to predict the prolonged viral shedding of the corresponding patient. This prediction starts with the transcriptome features which gives the lowest root mean squared value of 16.3±3.3 using top 25 feature selected using forward feature selection algorithm and linear regression algorithm. Then to see the impact of few non-molecular features in this prediction, they were added to the model one by one along with the selected transcriptome features. However, this study shows that those features do not have any impact on prolonged viral shedding prediction. Further this study predicts the day since onset in the same way. Here also top 25 transcriptome features selected using forward feature selection algorithm gives a comparably good accuracy (accuracy value of 0.74±0.1). However, the best accuracy was obtained using the best 20 features from feature importance using SVM (0.78±0.1). Moreover, adding non-molecular features shows a great impact on mutual information selected features in this prediction.</p></div>","PeriodicalId":72670,"journal":{"name":"Computer methods and programs in biomedicine update","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9444307/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10322488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
Development and validation of digital health literacy competencies for citizens (DHLC), an instrument for measuring digital health literacy in the community
Pub Date : 2022-01-01 DOI: 10.1016/j.cmpbup.2022.100082
Enny Rachmani , Haikal Haikal , Eti Rimawati

COVID-19 is a new disease in human life and has become a pandemic. The coronavirus disease (COVID-19) pandemic has been speeding up digital transformation in every sector. Implementation of digital technology in health should be supported by the community's readiness, such as digital health literacy, to achieve its goals, optimize health service performance, and block infodemics and misinformation. This study aims to develop a tool to measure digital health literacy in the community through three stages: expert review, pre-test, and field test. The DHLC translates the five competency areas into 18 questions and adds eight questions related to health literacy, for a total of 26 items. This study reveals that all digital competency areas scored below 4; a score of 4 on the DHLC indicates that the community still needs guidance for activities in the digital environment. Raising citizens' digital health literacy is urgent to control the spread of misinformation and disinformation that could worsen pandemics. Future studies are needed to test the validity and reliability of the DHLC in various settings.

Citations: 2