Pub Date: 2023-12-01 | Epub Date: 2024-02-19 | DOI: 10.1109/bibe60311.2023.00013
Shiva Ebrahimi, Xuan Guo
Tandem mass spectrometry (MS/MS) stands as the predominant high-throughput technique for comprehensively analyzing protein content within biological samples. This methodology is a cornerstone driving the advancement of proteomics. In recent years, substantial strides have been made in Data-Independent Acquisition (DIA) strategies, facilitating impartial and non-targeted fragmentation of precursor ions. The DIA-generated MS/MS spectra present a formidable obstacle due to their inherent high multiplexing nature. Each spectrum encapsulates fragmented product ions originating from multiple precursor peptides. This intricacy poses a particularly acute challenge in de novo peptide/protein sequencing, where current methods are ill-equipped to address the multiplexing conundrum. In this paper, we introduce Casanovo-DIA, a deep-learning model based on transformer architecture. It deciphers peptide sequences from DIA mass spectrometry data. Our results show significant improvements over existing STOA methods, including DeepNovo-DIA and PepNet. Casanovo-DIA enhances precision by 15.14% to 34.8%, recall by 11.62% to 31.94% at the amino acid level, and boosts precision by 59% to 81.36% at the peptide level. Integrating DIA data and our Casanovo-DIA model holds considerable promise to uncover novel peptides and more comprehensive profiling of biological samples. Casanovo-DIA is freely available under the GNU GPL license at https://github.com/Biocomputing-Research-Group/Casanovo-DIA.
Title: Transformer-based de novo peptide sequencing for data-independent acquisition mass spectrometry
Journal: Proceedings. IEEE International Symposium on Bioinformatics and Bioengineering
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11044815/pdf/
Pub Date: 2020-10-01 | Epub Date: 2020-12-16 | DOI: 10.1109/bibe50027.2020.00082
Stephen Hansen, Daniel Schwartz, Jesse Stover, Md Abu Saleh Tajin, William M Mongan, Kapil R Dandekar
Future advances in the medical Internet of Things (IoT) will require sensors that are unobtrusive and passively powered. With the use of wireless, wearable, and passive knitted smart garment sensors, we monitor infant respiratory activity. We improve the utility of multi-tag Radio Frequency Identification (RFID) measurements via fusion learning across various features from multiple tags to determine the magnitude and temporal information of the artifacts. In this paper, we develop an algorithm that classifies and separates respiratory activity via a Regime Hidden Markov Model compounded with higher-order features of Minkowski and Mahalanobis distances. Our algorithm improves respiratory rate detection by increasing the Signal to Noise Ratio (SNR) on average from 17.12 dB to 34.74 dB. The effectiveness of our algorithm in increasing SNR shows that higher-order features can improve signal strength detection in RFID systems. Our algorithm can be extended to include more feature sources and can be used in a variety of machine learning algorithms for respiratory data classification, and other applications. Further work on the algorithm will include accurate parameterization of the algorithm's window size.
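The distance features and the SNR metric mentioned above are standard quantities; a minimal sketch (a diagonal-covariance simplification of Mahalanobis distance, not the paper's implementation):

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels, as reported in the paper's results."""
    return 10.0 * math.log10(signal_power / noise_power)

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance assuming a diagonal covariance (a simplification;
    the general form requires inverting the full covariance matrix)."""
    return math.sqrt(sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)))

def minkowski(x, y, p=3):
    """Minkowski distance of order p (p=1 Manhattan, p=2 Euclidean)."""
    return sum(abs(xi - yi) ** p for xi, yi in zip(x, y)) ** (1.0 / p)

print(round(snr_db(1000.0, 1.0), 2))  # 30.0
```

Features like these, computed per RFID tag, are what the fusion-learning step combines before the regime model separates respiration from artifacts.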
Title: Fusion Learning on Multiple-Tag RFID Measurements for Respiratory Rate Monitoring
Journal: Proceedings. IEEE International Symposium on Bioinformatics and Bioengineering
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8130190/pdf/nihms-1701065.pdf
Pub Date: 2020-10-01 | Epub Date: 2020-12-16 | DOI: 10.1109/BIBE50027.2020.00097
J Vince Pulido, Shan Guleria, Lubaina Ehsan, Matthew Fasullo, Robert Lippman, Pritesh Mutha, Tilak Shah, Sana Syed, Donald E Brown
One of the greatest obstacles in the adoption of deep neural networks for new medical applications is that training these models typically requires a large number of manually labeled training samples. In this work, we investigate the semi-supervised scenario where one has access to large amounts of unlabeled data and only a few labeled samples. We study the performance of MixMatch and FixMatch, two popular semi-supervised learning methods, on a histology dataset. More specifically, we study these models' behavior under a highly noisy and imbalanced setting. The findings here motivate the development of semi-supervised methods to ameliorate problems commonly encountered in medical data applications.
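The core of FixMatch is a confidence-thresholded pseudo-labelling rule; a minimal sketch of that rule alone (the full method also pairs weak and strong augmentations, omitted here):

```python
# Confidence threshold for accepting a pseudo-label (0.95 is the value
# commonly used in the FixMatch literature; treat it as a tunable).
CONF_THRESHOLD = 0.95

def pseudo_label(probs):
    """Return a hard pseudo-label for an unlabeled sample, or None if the
    model's predicted class probabilities are not confident enough."""
    p_max = max(probs)
    if p_max >= CONF_THRESHOLD:
        return probs.index(p_max)  # index of the winning class
    return None                    # sample contributes no loss this round

print(pseudo_label([0.01, 0.97, 0.02]))  # 1
print(pseudo_label([0.40, 0.35, 0.25]))  # None
```

Under label noise and class imbalance, as studied above, this thresholding behaves differently per class, which is part of what the paper probes.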
Title: Semi-Supervised Classification of Noisy, Gigapixel Histology Images
Journal: Proceedings. IEEE International Symposium on Bioinformatics and Bioengineering
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8144886/pdf/nihms-1696232.pdf
Pub Date: 2020-10-01 | Epub Date: 2020-12-16 | DOI: 10.1109/bibe50027.2020.00057
Yixue Feng, Kefei Liu, Mansu Kim, Qi Long, Xiaohui Yao, Li Shen
We present an effective deep multiview learning framework to identify population structure using multimodal imaging data. Our approach is based on canonical correlation analysis (CCA). We propose to use deep generalized CCA (DGCCA) to learn a shared latent representation of non-linearly mapped and maximally correlated components from multiple imaging modalities with reduced dimensionality. In our empirical study, this representation is shown to effectively capture more variance in original data than conventional generalized CCA (GCCA) which applies only linear transformation to the multi-view data. Furthermore, subsequent cluster analysis on the new feature set learned from DGCCA is able to identify a promising population structure in an Alzheimer's disease (AD) cohort. Genetic association analyses of the clustering results demonstrate that the shared representation learned from DGCCA yields a population structure with a stronger genetic basis than several competing feature learning methods.
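The objective (G)CCA and DGCCA optimize is correlation between projected views; a toy sketch of that quantity for two one-dimensional projections (illustrative data, not the study's):

```python
import math

def pearson(u, v):
    """Correlation between two projected views -- the quantity CCA maximizes
    over choices of projection."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Two toy imaging "views" projected to 1-D; good projections make this near 1.
view1 = [1.0, 2.0, 3.0, 4.0]
view2 = [2.1, 3.9, 6.2, 7.8]
print(round(pearson(view1, view2), 3))  # 0.998
```

DGCCA's contribution is that the projections feeding this correlation are learned non-linear maps (neural networks) rather than the linear transforms of conventional GCCA.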
Title: Deep Multiview Learning to Identify Population Structure with Multimodal Imaging
Journal: Proceedings. IEEE International Symposium on Bioinformatics and Bioengineering
Pub Date: 2019-10-01 | Epub Date: 2019-12-26 | DOI: 10.1109/bibe.2019.00020
Eleni Adam, Tunazzina Islam, Desh Ranjan, Harold Riethman
Human subtelomere regions are highly enriched in large segmental duplications and structural variants, leading to many gaps and misassemblies in these regions. We develop a novel method, NPGREAT (NanoPore Guided REgional Assembly Tool), which combines Nanopore ultralong read datasets and short-read assemblies derived from 10x linked-reads to efficiently assemble these subtelomere regions into a single continuous sequence. We show that with the use of ultralong Nanopore reads as a guide, the highly accurate shorter linked-read sequence contigs are correctly oriented, ordered, spaced and extended. In the rare cases where a linked-read sequence contig contains inaccurately assembled segments, the use of Nanopore reads allows for detection and correction of this error. We tested NPGREAT on four representative subtelomeres of the NA12878 human genome (10p, 16p, 19q and 20p). The results demonstrate that the final computed assembly of each subtelomere is accurate and complete.
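The guiding idea, ordering and orienting accurate short contigs along one ultralong read, can be caricatured as follows (a hypothetical toy, not NPGREAT's actual algorithm, which handles errors and spacing far more carefully):

```python
def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def place_contigs(guide: str, contigs: dict, k: int = 4):
    """Order and orient contigs by locating an anchor k-mer from each contig
    (or its reverse complement) on the ultralong guide read."""
    placements = []
    for name, contig in contigs.items():
        for orient, seq in (("+", contig), ("-", revcomp(contig))):
            pos = guide.find(seq[:k])  # naive exact anchor match
            if pos >= 0:
                placements.append((pos, name, orient))
                break
    return sorted(placements)  # left-to-right order along the guide

guide = "AAAACGTTTTGGGGCCCCATAT"
contigs = {"c1": "GGGGCCCC", "c2": "AAAACGTT"}
print(place_contigs(guide, contigs))  # [(0, 'c2', '+'), (10, 'c1', '+')]
```

Real Nanopore reads have high error rates, so NPGREAT's matching cannot be exact-string lookup like this; the sketch only shows why a long guide resolves order and orientation that short reads alone cannot.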
Title: Nanopore Guided Assembly of Segmental Duplications near Telomeres
Journal: Proceedings. IEEE International Symposium on Bioinformatics and Bioengineering
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8049597/pdf/nihms-1060068.pdf
Pub Date: 2019-10-01 | Epub Date: 2019-12-26 | DOI: 10.1109/bibe.2019.00044
Cyrus Tanade, Nathanael Pate, Elianna Paljug, Ryan A Hoffman, May D Wang
The Ebola virus disease (EVD) epidemic that occurred in West Africa between 2014-16 resulted in over 28,000 cases and 11,000 deaths - one of the deadliest to date. A generalized model of the spatiotemporal progression of EVD for Liberia, Guinea, and Sierra Leone in 2014-16 remains elusive. There is also a disconnect in the literature on which interventions are most effective in curbing disease progression. To solve these two key issues, we designed a hybrid agent-based and compartmental model that switches from one paradigm to the other on a stochastic threshold. We modeled disease progression with promising accuracy using WHO datasets.
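The paradigm switch can be sketched with a deterministic SIR update that hands off to a stochastic per-agent update below a threshold (assumed toy dynamics and parameters, not the paper's calibrated model):

```python
import random

def sir_step(S, I, R, beta=0.3, gamma=0.1, N=1000):
    """One deterministic compartmental update (frequency-dependent SIR)."""
    new_inf = beta * S * I / N
    new_rec = gamma * I
    return S - new_inf, I + new_inf - new_rec, R + new_rec

def hybrid_step(S, I, R, switch_at=10, rng=None):
    """Use the cheap compartmental regime while infections are numerous;
    switch to stochastic agent-level updates when they become scarce."""
    if I >= switch_at:
        return sir_step(S, I, R)
    rng = rng or random.Random(0)  # seeded only for reproducibility here
    # Agent-based regime: each infected agent recovers independently.
    recovered = sum(rng.random() < 0.1 for _ in range(int(I)))
    return S, I - recovered, R + recovered

S, I, R = hybrid_step(990.0, 50.0, 0.0)  # I=50 -> compartmental branch
print(round(I, 2))  # 59.85
```

Near the end of an outbreak, when case counts are small, the stochastic branch captures extinction events that a purely deterministic model cannot, which is the motivation for hybridizing.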
Title: Hybrid Modeling of Ebola Propagation
Journal: Proceedings. IEEE International Symposium on Bioinformatics and Bioengineering
Pub Date: 2017-10-01 | Epub Date: 2018-01-11 | DOI: 10.1109/BIBE.2017.00-61
Akm Sabbir, Antonio Jimeno-Yepes, Ramakanth Kavuluru
Biomedical word sense disambiguation (WSD) is an important intermediate task in many natural language processing applications such as named entity recognition, syntactic parsing, and relation extraction. In this paper, we employ knowledge-based approaches that also exploit recent advances in neural word/concept embeddings to improve over the state-of-the-art in biomedical WSD using the public MSH WSD dataset [1] as the test set. Our methods involve weak supervision - we do not use any hand-labeled examples for WSD to build our prediction models; however, we employ an existing concept mapping program, MetaMap, to obtain our concept vectors. Over the MSH WSD dataset, our linear time (in terms of numbers of senses and words in the test instance) method achieves an accuracy of 92.24% which is a 3% improvement over the best known results [2] obtained via unsupervised means. A more expensive approach that we developed relies on a nearest neighbor framework and achieves accuracy of 94.34%, essentially cutting the error rate in half. Employing dense vector representations learned from unlabeled free text has been shown to benefit many language processing tasks recently and our efforts show that biomedical WSD is no exception to this trend. For a complex and rapidly evolving domain such as biomedicine, building labeled datasets for larger sets of ambiguous terms may be impractical. Here, we show that weak supervision that leverages recent advances in representation learning can rival supervised approaches in biomedical WSD. However, external knowledge bases (here sense inventories) play a key role in the improvements achieved.
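The linear-time scoring step reduces to picking the sense whose concept vector best matches the context; a sketch with hypothetical embeddings (the paper derives its concept vectors via MetaMap, which is not reproduced here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def disambiguate(context_vecs, sense_vecs):
    """Pick the sense whose concept embedding is most similar to the
    centroid of the context word embeddings (linear in senses and words)."""
    n = len(context_vecs)
    centroid = [sum(col) / n for col in zip(*context_vecs)]
    return max(sense_vecs, key=lambda s: cosine(sense_vecs[s], centroid))

context = [[1.0, 0.1], [0.9, 0.2]]         # hypothetical context embeddings
senses = {"cold_temperature": [1.0, 0.0],  # hypothetical sense vectors
          "cold_illness": [0.0, 1.0]}
print(disambiguate(context, senses))  # cold_temperature
```

The more expensive nearest-neighbor variant in the paper replaces the single centroid comparison with lookups against many stored instance vectors per sense.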
Title: Knowledge-Based Biomedical Word Sense Disambiguation with Neural Concept Embeddings
Journal: Proceedings. IEEE International Symposium on Bioinformatics and Bioengineering
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5792196/pdf/nihms919324.pdf
Pub Date: 2017-10-01 | Epub Date: 2018-01-11 | DOI: 10.1109/BIBE.2017.00014
Anis Davoudi, Ashkan Ebadi, Parisa Rashidi, Tazcan Ozrazgat-Baslanti, Azra Bihorac, Alberto C Bursian
Electronic Health Records (EHR) are mainly designed to record relevant patient information during a hospital stay for administrative purposes. They additionally provide an efficient and inexpensive source of data for medical research, such as patient outcome prediction. In this study, we used preoperative Electronic Health Records to predict postoperative delirium. We compared the performance of seven machine learning models on delirium prediction: linear models, generalized additive models, random forests, support vector machines, neural networks, and extreme gradient boosting. Among the models evaluated in this study, random forests and the generalized additive model outperformed the other models in terms of overall performance metrics for prediction of delirium, particularly with respect to sensitivity. We found that age, alcohol or drug abuse, socioeconomic status, underlying medical issue, severity of medical problem, and attending surgeon can affect the risk of delirium.
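Since the comparison above ranks models largely on sensitivity, it is worth being precise about the metrics involved; a small helper with toy labels (illustrative data, not the study's evaluation code):

```python
def confusion_metrics(y_true, y_pred):
    """Standard binary-classification metrics from paired 0/1 labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),  # recall on delirium cases
        "specificity": tn / (tn + fp),  # recall on non-delirium cases
        "accuracy": (tp + tn) / len(y_true),
    }

y_true = [1, 1, 1, 0, 0, 0, 0, 1]  # 1 = postoperative delirium (toy labels)
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
m = confusion_metrics(y_true, y_pred)
print(m["sensitivity"])  # 0.75
```

Favoring sensitivity, as the study does, reflects the clinical cost asymmetry: missing a patient at risk of delirium is worse than a false alarm.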
Title: Delirium Prediction using Machine Learning Models on Preoperative Electronic Health Records Data
Journal: Proceedings. IEEE International Symposium on Bioinformatics and Bioengineering
Pub Date: 2017-10-01 | Epub Date: 2018-01-11 | DOI: 10.1109/BIBE.2017.00-11
Georgia S Karanasiou, Nikolaos S Tachos, Antonios Sakellarios, Lampros K Michalis, Claire Conway, Elazer R Edelman, Dimitrios I Fotiadis
Coronary stents are expandable scaffolds that are used to widen occluded diseased arteries and restore blood flow. Because of the strain stents are exposed to, the forces they must resist, and the importance of surface interactions, material properties are dominant. Indeed, a common differentiating factor amongst commercially available stents is their material. Several performance requirements relate to stent materials, including radial strength for adequate arterial support post-deployment. This study investigated the effect of the stent material in three finite element models using different stents made of: (i) Cobalt-Chromium (CoCr), (ii) Stainless Steel (SS316L), and (iii) Platinum Chromium (PtCr). Deployment was investigated in a patient-specific arterial geometry, created based on a fusion of angiographic data and intravascular ultrasound images. In silico results show that: (i) the maximum von Mises stress occurs for the CoCr stent, although the curved areas of the stent links present higher stresses than the straight stent segments for all stents, (ii) more areas of high inner arterial stress exist in the case of the CoCr stent deployment, and (iii) there is no significant difference in the percentage of arterial stress volume distribution among the models.
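The von Mises stress compared across the three materials is the standard equivalent stress from the finite element solution; in terms of principal stresses it is computed as below (toy values, not the study's results):

```python
import math

def von_mises(s1, s2, s3):
    """Equivalent (von Mises) stress from the three principal stresses."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2.0)

# Uniaxial tension: the equivalent stress equals the applied stress (MPa).
print(von_mises(200.0, 0.0, 0.0))  # 200.0
# Pure hydrostatic loading produces zero von Mises stress.
print(von_mises(100.0, 100.0, 100.0))  # 0.0
```

Because the measure collapses the full stress tensor to one scalar, it lets the study compare CoCr, SS316L, and PtCr deployments on a common yield-related axis.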
Title: In silico assessment of the effects of material on stent deployment
Journal: Proceedings. IEEE International Symposium on Bioinformatics and Bioengineering
Pub Date: 2013-11-01 | DOI: 10.1109/BIBE.2013.6701683
Ioannis N Melas, Douglas A Lauffenburger, Leonidas G Alexopoulos
Hepatocellular Carcinoma (HCC) is one of the leading causes of death worldwide, with only a handful of treatments effective in unresectable HCC. Most of the clinical trials for HCC using new generation interventions (drug-targeted therapies) have poor efficacy whereas just a few of them show some promising clinical outcomes [1]. This is amongst the first studies where the mode of action of some of the compounds extensively used in clinical trials is interrogated on the phosphoproteomic level, in an attempt to build predictive models for clinical efficacy. Signaling data are combined with previously published gene expression and clinical data within a consistent framework that identifies drug effects on the phosphoproteomic level and translates them to the gene expression level. The interrogated drugs are then correlated with genes differentially expressed in normal versus tumor tissue, and genes predictive of patient survival. Although the number of clinical trial results considered is small, our approach shows potential for discerning signaling activities that may help predict drug efficacy for HCC.
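A step in the integration described above is ranking genes by tumor-versus-normal differential expression before correlating them with drug effects; a toy sketch of that ranking (hypothetical expression values, not the study's data):

```python
import math

def log2_fold_change(tumor_expr: float, normal_expr: float) -> float:
    """Standard log2 fold change of tumor vs. normal expression."""
    return math.log2(tumor_expr / normal_expr)

def rank_genes(tumor: dict, normal: dict):
    """Rank genes by absolute tumor-vs-normal log2 fold change."""
    lfc = {g: log2_fold_change(tumor[g], normal[g]) for g in tumor}
    return sorted(lfc, key=lambda g: abs(lfc[g]), reverse=True)

# Hypothetical expression values; gene names are illustrative only.
tumor = {"MET": 8.0, "TP53": 1.0, "GAPDH": 4.1}
normal = {"MET": 2.0, "TP53": 4.0, "GAPDH": 4.0}
print(rank_genes(tumor, normal))  # ['MET', 'TP53', 'GAPDH']
```

Genes near the top of such a ranking are the ones the study correlates against drug-induced phosphoproteomic changes and survival-predictive signatures.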
Title: Identification of signaling pathways related to drug efficacy in hepatocellular carcinoma via integration of phosphoproteomic, genomic and clinical data
Journal: Proceedings. IEEE International Symposium on Bioinformatics and Bioengineering