Pub Date: 2023-07-22, DOI: 10.1186/s13040-023-00338-w
Zarif L Azher, Anish Suvarna, Ji-Qing Chen, Ze Zhang, Brock C Christensen, Lucas A Salas, Louis J Vaickus, Joshua J Levy
Background: Deep learning models can infer cancer patient prognosis from molecular and anatomic pathology information. Recent studies that leveraged information from complementary multimodal data improved prognostication, further illustrating the potential utility of such methods. However, current approaches 1) do not comprehensively leverage biological and histomorphological relationships and 2) do not make use of emerging strategies to "pretrain" models (i.e., train models on a slightly orthogonal dataset/modeling objective), which may aid prognostication by reducing the amount of information required to achieve optimal performance. In addition, model interpretation is crucial for facilitating the clinical adoption of deep learning methods by fostering practitioner understanding and trust in the technology.
Methods: Here, we develop an interpretable multimodal modeling framework that combines DNA methylation, gene expression, and histopathology (i.e., tissue slides) data, and we compare the performance of crossmodal pretraining, contrastive learning, and transfer learning against the standard procedure.
Results: Our models outperform the existing state-of-the-art method (average 11.54% C-index increase) and baseline clinically driven models (average 11.7% C-index increase). Model interpretations reveal that biologically meaningful factors are considered when making prognosis predictions.
Discussion: Our results demonstrate that the selection of pretraining strategies is crucial for obtaining highly accurate prognostication models, even more so than devising an innovative model architecture, and further emphasize the all-important role of the tumor microenvironment in disease progression.
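Both comparisons above are reported as C-index gains. As a reminder of what that metric measures, here is a minimal stdlib-only sketch of Harrell's concordance index; the toy cohort below is hypothetical and not from the study:

```python
from itertools import combinations

def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable patient pairs, the fraction in
    which the model assigns the higher risk to the patient who fails first.
    Ties in predicted risk get half credit; a pair is comparable only if the
    earlier time is an observed event (event flag 1, not censored)."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:
            i, j = j, i          # order the pair so patient i fails first
        if times[i] == times[j] or not events[i]:
            continue             # tied times or censored earlier time: skip
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1.0
        elif risks[i] == risks[j]:
            concordant += 0.5
    return concordant / comparable

# Toy cohort: survival times, event indicators (1 = death observed), risks.
print(concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.6, 0.4, 0.1]))  # 1.0
```

A C-index of 0.5 corresponds to random risk ordering and 1.0 to perfect concordance, so an 11.54% increase is a substantial gain.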
Title: Assessment of emerging pretraining strategies in interpretable multimodal deep learning for cancer prognostication. BioData Mining, 2023-07-22. DOI: 10.1186/s13040-023-00338-w. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10363299/pdf/
Background: The incidence of gastric cardiac cancer (GCC) has increased markedly in recent years, and its prognosis is poor. It is necessary to compare the prognosis of GCC with that of carcinomas at other gastric sites and to establish an effective neural network-based prognostic model to predict the survival of GCC patients.
Methods: In this population-based cohort study, we first collected clinical features from the Surveillance, Epidemiology, and End Results (SEER) database (n = 31,397) as well as public Chinese data from different hospitals (n = 1049). According to diagnosis date, the SEER data were divided into two cohorts: a training cohort (patients diagnosed with GCC in 2010-2014, n = 4414) and a test cohort (diagnosed in 2015, n = 957). Age, sex, pathology, tumor-node-metastasis (TNM) stage, tumor size, surgery status, radiotherapy status, chemotherapy status, and history of malignancy were chosen as predictive clinical features. The training cohort was used to build the neural network-based prognostic predictive model, which was validated internally and on the test cohort. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance.
Results: The prognosis of GCC patients in the SEER database was worse than that of non-GCC (NGCC) patients, whereas this was not the case in the Chinese data. A total of 5371 patients were used to build the model after applying the inclusion and exclusion criteria. The neural network-based prognostic predictive model performed satisfactorily for GCC overall survival (OS) prediction, with an AUC of 0.7431 in the training cohort (95% confidence interval, CI, 0.7423-0.7439) and 0.7419 in the test cohort (95% CI, 0.7411-0.7428).
Conclusions: GCC patients indeed have different survival from non-GCC patients, and the neural network-based prognostic predictive tool developed in this study is a novel and promising tool for the clinical outcome analysis of GCC patients.
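Model performance above is summarized by AUC. A minimal stdlib-only sketch of AUC via its Mann-Whitney interpretation may clarify what the reported 0.74 means; the labels and scores below are toy values, not the study's data:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U interpretation: the probability that a
    randomly chosen positive case is scored above a randomly chosen negative
    case, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = event observed within the horizon, scores = predicted risks.
print(auc([1, 1, 0, 0], [0.8, 0.6, 0.7, 0.2]))  # 0.75
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect discrimination, so 0.74 indicates moderately good separation of survivors and non-survivors.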
Title: Neural network-based prognostic predictive tool for gastric cardiac cancer: the worldwide retrospective study. BioData Mining, 2023-07-18. DOI: 10.1186/s13040-023-00335-z. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10353146/pdf/
Pub Date: 2023-07-18, DOI: 10.1186/s13040-023-00337-x
Marian Petrica, Ionel Popescu
In this paper, we propose a parameter identification methodology for the SIRD model, an extension of the classical SIR model that considers the deceased as a separate category. In addition, our model includes one parameter which is the ratio between the real total number of infected and the number of infected documented in the official statistics. Due to many factors, such as governmental decisions, the circulation of several variants, and the opening and closing of schools, the typical assumption that the parameters of the model stay constant for long periods of time is not realistic. Thus, our objective is to create a method that works for short periods of time. To this end, we estimate the parameters from the previous 7 days of data and then use the identified parameters to make predictions. To perform the estimation, we propose the average of an ensemble of neural networks. Each neural network is trained on a database built by solving the SIRD model for 7 days with random parameters; in this way, the networks learn the parameters from the solution of the SIRD model. Lastly, we use the ensemble to estimate the parameters from real COVID-19 data for Romania, and we illustrate predictions of the number of deaths over different time horizons, from 10 up to 45 days. The main goal was to apply this approach to the analysis of COVID-19 evolution in Romania, but it was also exemplified on other countries, such as Hungary, the Czech Republic, and Poland, with similar results. The results are backed by a theorem which guarantees that we can recover the parameters of the model from the reported data. We believe this methodology can be used as a general tool for short-term predictions of infectious diseases and for other compartmental models.
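The training database described above (7-day SIRD solutions under random parameters, paired with the parameters that generated them) can be sketched as follows; the forward-Euler integrator, parameter ranges, and initial population split are illustrative assumptions, not the paper's exact settings:

```python
import random

def simulate_sird(beta, gamma, mu, s0, i0, r0, d0, days, steps_per_day=100):
    """Forward-Euler integration of the SIRD model:
        S' = -beta*S*I, I' = beta*S*I - (gamma + mu)*I, R' = gamma*I, D' = mu*I
    Compartments are fractions of the population, so S+I+R+D stays 1."""
    s, i, r, d = s0, i0, r0, d0
    dt = 1.0 / steps_per_day
    trajectory = [(s, i, r, d)]
    for _ in range(days * steps_per_day):
        ds = -beta * s * i
        di = beta * s * i - (gamma + mu) * i
        dr = gamma * i
        dd = mu * i
        s, i, r, d = s + dt * ds, i + dt * di, r + dt * dr, d + dt * dd
        trajectory.append((s, i, r, d))
    return trajectory

# Hypothetical training database: each entry pairs a 7-day trajectory with the
# random parameters that generated it, so a network can learn the inverse map.
random.seed(0)
database = []
for _ in range(200):
    beta = random.uniform(0.1, 0.5)
    gamma = random.uniform(0.05, 0.2)
    mu = random.uniform(0.001, 0.05)
    traj = simulate_sird(beta, gamma, mu, s0=0.99, i0=0.01, r0=0.0, d0=0.0, days=7)
    database.append((traj, (beta, gamma, mu)))
```

An ensemble of regressors trained on `database` would then map observed 7-day trajectories back to (beta, gamma, mu) estimates, which is the inverse problem the paper solves with neural networks.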
Title: Inverse problem for parameters identification in a modified SIRD epidemic model using ensemble neural networks. BioData Mining, 2023-07-18. DOI: 10.1186/s13040-023-00337-x. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10354917/pdf/
Pub Date: 2023-07-13, DOI: 10.1186/s13040-023-00339-9
Jesse G Meyer, Ryan J Urbanowicz, Patrick C N Martin, Karen O'Connor, Ruowang Li, Pei-Chen Peng, Tiffani J Bright, Nicholas Tatonetti, Kyoung Jae Won, Graciela Gonzalez-Hernandez, Jason H Moore
The introduction in late 2022 of large language models (LLMs) that allow iterative "chat" is a paradigm shift that enables the generation of text often indistinguishable from that written by humans. LLM-based chatbots have immense potential to improve academic work efficiency, but the ethical implications of their fair use and inherent bias must be considered. In this editorial, we discuss this technology from the academic's perspective with regard to its limitations and utility for academic writing, education, and programming. We end with our stance on using LLMs and chatbots in academia, summarized as follows: (1) we must find ways to use them effectively, (2) their use does not constitute plagiarism (although they may produce plagiarized text), (3) we must quantify their bias, (4) users must be cautious of their poor accuracy, and (5) the future is bright for their application to research and as an academic tool.
Title: ChatGPT and large language models in academia: opportunities and challenges. BioData Mining, 2023-07-13. DOI: 10.1186/s13040-023-00339-9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10339472/pdf/
Background: Motor imagery brain-computer interfaces (BCIs) are a classic and promising BCI technology for achieving brain-computer integration. In motor imagery BCI, the operational frequency band of the EEG greatly affects the performance of the motor imagery EEG recognition model. However, because most algorithms use a single broad frequency band, the discriminative information in multiple sub-bands is not fully utilized. Thus, using convolutional neural networks (CNNs) to extract discriminative features from EEG signals of different frequency components is a promising approach for multisubject EEG recognition.
Methods: This paper presents a novel overlapping filter bank CNN to incorporate discriminative information from multiple frequency components in multisubject motor imagery recognition. Specifically, two overlapping filter banks with fixed low-cut frequency or sliding low-cut frequency are employed to obtain multiple frequency component representations of EEG signals. Then, multiple CNN models are trained separately. Finally, the output probabilities of multiple CNN models are integrated to determine the predicted EEG label.
Results: Experiments were conducted with four popular CNN backbone models and three public datasets. The results showed that the overlapping filter bank CNN was efficient and universal in improving multisubject motor imagery BCI performance: compared with the original backbone models, the proposed method improved average accuracy by 3.69 percentage points, F1 score by 0.04, and AUC by 0.03. In addition, the proposed method performed best in comparison with state-of-the-art methods.
Conclusion: The proposed overlapping filter bank CNN framework with fixed low-cut frequency is an efficient and universal method to improve the performance of multisubject motor imagery BCI.
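The two overlapping filter-bank designs and the probability-level fusion described in the Methods can be sketched as follows; the band edges, band widths, and class counts here are illustrative assumptions, not the paper's exact settings:

```python
def fixed_lowcut_bands(low=4.0, high=40.0, step=4.0):
    """Fixed low-cut filter bank: every band starts at `low` while the
    high-cut edge slides upward, so bands overlap (4-8, 4-12, ..., 4-40 Hz)."""
    bands, hi_edge = [], low + step
    while hi_edge <= high:
        bands.append((low, hi_edge))
        hi_edge += step
    return bands

def sliding_lowcut_bands(low=4.0, high=40.0, width=8.0, step=4.0):
    """Sliding low-cut filter bank: fixed-width bands whose low-cut edge
    slides upward (4-12, 8-16, ..., 32-40 Hz)."""
    bands, lo_edge = [], low
    while lo_edge + width <= high:
        bands.append((lo_edge, lo_edge + width))
        lo_edge += step
    return bands

def fuse_probabilities(per_band_probs):
    """Average the class-probability vectors produced by the per-band CNNs
    and return the index of the winning class."""
    n, k = len(per_band_probs), len(per_band_probs[0])
    mean = [sum(p[c] for p in per_band_probs) / n for c in range(k)]
    return max(range(k), key=mean.__getitem__)

print(len(fixed_lowcut_bands()))  # number of overlapping bands
print(fuse_probabilities([[0.6, 0.4], [0.2, 0.8], [0.1, 0.9]]))  # class 1 wins
```

In the full pipeline, each band's EEG would be band-pass filtered, fed to its own CNN, and the resulting softmax outputs combined with `fuse_probabilities`.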
Title: Overlapping filter bank convolutional neural network for multisubject multicategory motor imagery brain-computer interface. BioData Mining, 2023-07-11. DOI: 10.1186/s13040-023-00336-y. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10337209/pdf/
Pub Date: 2023-07-07, DOI: 10.1186/s13040-023-00334-0
JiYoon Park, Jae Won Lee, Mira Park
Background: Cancer subtype identification is important for the early diagnosis of cancer and the provision of adequate treatment. Prior to identifying the subtype of cancer in a patient, feature selection is also crucial for reducing the dimensionality of the data by detecting genes that contain important information about the cancer subtype. Numerous cancer subtyping methods have been developed, and their performance has been compared. However, combinations of feature selection and subtype identification methods have rarely been considered. This study aimed to identify the best combination of variable selection and subtype identification methods in single omics data analysis.
Results: Combinations of six filter-based feature selection methods and six unsupervised subtype identification methods were investigated using The Cancer Genome Atlas (TCGA) datasets for four cancers. The number of features selected was varied, and several evaluation metrics were used. Although no single combination showed distinctly better performance, Consensus Clustering (CC) and Neighborhood-Based Multi-omics Clustering (NEMO) used with variance-based feature selection tended to show lower p-values, and nonnegative matrix factorization (NMF) stably showed good performance in many cases unless the Dip test was used for feature selection. In terms of accuracy, the combinations of NMF and similarity network fusion (SNF) with Monte Carlo Feature Selection (MCFS) and Minimum-Redundancy Maximum Relevance (mRMR) showed good overall performance. NMF was consistently among the worst performers without feature selection in all datasets, but performed much better when used with various feature selection methods. iClusterBayes (ICB) had decent performance when used without feature selection.
Conclusions: Rather than a single method clearly emerging as optimal, the best methodology was different depending on the data used, the number of features selected, and the evaluation method. A guideline for choosing the best combination method under various situations is provided.
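As an illustration of the variance-based filter that paired well with CC and NEMO above, here is a minimal stdlib-only sketch; the expression matrix is a toy example with hypothetical values:

```python
def top_variance_features(matrix, k):
    """Variance-based filter: return indices of the k columns (genes) with
    the largest sample variance across rows (patients)."""
    n = len(matrix)
    def sample_var(j):
        col = [row[j] for row in matrix]
        mean = sum(col) / n
        return sum((x - mean) ** 2 for x in col) / (n - 1)
    return sorted(range(len(matrix[0])), key=sample_var, reverse=True)[:k]

# Toy expression matrix: rows = samples, columns = genes.
X = [[1.0, 5.0, 0.1],
     [1.1, 1.0, 0.1],
     [0.9, 9.0, 0.1]]
print(top_variance_features(X, 2))  # [1, 0]: gene 1 varies most, gene 2 not at all
```

The selected columns would then be handed to a subtype identification method such as CC, NEMO, or NMF, which is the pairing the study evaluates.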
Title: Comparison of cancer subtype identification methods combined with feature selection methods in omics data analysis. BioData Mining, 2023-07-07. DOI: 10.1186/s13040-023-00334-0. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10329370/pdf/
Pub Date: 2023-06-10, DOI: 10.1186/s13040-023-00333-1
Weiquan Pan, Faning Long, Jian Pan
Single-cell RNA-sequencing (scRNA-seq) data can serve as a good indicator of cell-to-cell heterogeneity and can aid in the study of cell growth by identifying cell types. Recently, advances in Variational Autoencoders (VAEs) have demonstrated their ability to learn robust feature representations for scRNA-seq. However, it has been observed that VAEs tend to ignore the latent variables when combined with a decoding distribution that is too flexible. In this paper, we introduce ScInfoVAE, a dimensional reduction method based on the mutual information variational autoencoder (InfoVAE), which can more effectively identify various cell types in scRNA-seq data from complex tissues. ScInfoVAE jointly designs an InfoVAE deep model and a zero-inflated negative binomial distribution model, reconstructing the objective function to denoise scRNA-seq data and learn an efficient low-dimensional representation of it. We use ScInfoVAE to analyze the clustering performance on 15 real scRNA-seq datasets and demonstrate that our method provides high clustering performance. In addition, we use simulated data to investigate the interpretability of feature extraction, and visualization results show that the low-dimensional representation learned by ScInfoVAE retains the local and global neighborhood structure of the data well. Finally, our model can significantly improve the quality of the variational posterior.
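InfoVAE's distinguishing ingredient is a Maximum Mean Discrepancy (MMD) penalty between the aggregated posterior over latent codes and the prior, which helps prevent the latent variables from being ignored. A minimal Gaussian-kernel MMD sketch (illustrative only; this is not the ScInfoVAE implementation, and the sample sets are synthetic):

```python
import math
import random

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

def mmd(xs, ys, sigma=1.0):
    """Biased estimator of squared Maximum Mean Discrepancy with a Gaussian
    kernel: small when the two sample sets come from the same distribution."""
    def avg_k(a, b):
        return sum(gaussian_kernel(u, v, sigma) for u in a for v in b) / (len(a) * len(b))
    return avg_k(xs, xs) + avg_k(ys, ys) - 2 * avg_k(xs, ys)

random.seed(0)
prior = [[random.gauss(0, 1) for _ in range(2)] for _ in range(100)]  # N(0, I) prior samples
close = [[random.gauss(0, 1) for _ in range(2)] for _ in range(100)]  # matching "posterior"
far = [[random.gauss(3, 1) for _ in range(2)] for _ in range(100)]    # mismatched "posterior"
print(mmd(prior, close) < mmd(prior, far))  # True: matching distributions give smaller MMD
```

In training, this penalty would be added to the reconstruction loss so that the encoder's latent codes, pooled over the batch, stay close to the prior.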
Title: ScInfoVAE: interpretable dimensional reduction of single cell transcription data with variational autoencoders and extended mutual information regularization. BioData Mining, 2023-06-10. DOI: 10.1186/s13040-023-00333-1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10257850/pdf/
Pub Date: 2023-05-05, DOI: 10.1186/s13040-023-00332-2
David N Nicholson, Faisal Alquaddoomi, Vincent Rubinetti, Casey S Greene
While we often think of words as having a fixed meaning that we use to describe a changing world, words are also dynamic and changing. Scientific research can also be remarkably fast-moving, with new concepts or approaches rapidly gaining mind share. We examined scientific writing, both preprint and pre-publication peer-reviewed text, to identify terms that have changed and examine their use. One particular challenge that we faced was that the shift from closed to open access publishing meant that the size of available corpora changed by over an order of magnitude in the last two decades. We developed an approach to evaluate semantic shift by accounting for both intra- and inter-year variability using multiple integrated models. This analysis revealed thousands of change points in both corpora, including for terms such as 'cas9', 'pandemic', and 'sars'. We found that the consistent change points between pre-publication peer-reviewed and preprinted text are largely related to the COVID-19 pandemic. We also created a web app for exploration that allows users to investigate individual terms ( https://greenelab.github.io/word-lapse/ ). To our knowledge, our research is the first to examine semantic shift in biomedical preprints and pre-publication peer-reviewed text, and it provides a foundation for future work to understand how terms acquire new meanings and how peer review affects this process.
{"title":"Changing word meanings in biomedical literature reveal pandemics and new technologies.","authors":"David N Nicholson, Faisal Alquaddoomi, Vincent Rubinetti, Casey S Greene","doi":"10.1186/s13040-023-00332-2","DOIUrl":"10.1186/s13040-023-00332-2","url":null,"abstract":"<p><p>While we often think of words as having a fixed meaning that we use to describe a changing world, words are also dynamic and changing. Scientific research can also be remarkably fast-moving, with new concepts or approaches rapidly gaining mind share. We examined scientific writing, both preprint and pre-publication peer-reviewed text, to identify terms that have changed and examine their use. One particular challenge that we faced was that the shift from closed to open access publishing meant that the size of available corpora changed by over an order of magnitude in the last two decades. We developed an approach to evaluate semantic shift by accounting for both intra- and inter-year variability using multiple integrated models. This analysis revealed thousands of change points in both corpora, including for terms such as 'cas9', 'pandemic', and 'sars'. We found that the consistent change-points between pre-publication peer-reviewed and preprinted text are largely related to the COVID-19 pandemic. We also created a web app for exploration that allows users to investigate individual terms ( https://greenelab.github.io/word-lapse/ ). 
To our knowledge, our research is the first to examine semantic shift in biomedical preprints and pre-publication peer-reviewed text, and provides a foundation for future work to understand how terms acquire new meanings and how peer review affects this process.</p>","PeriodicalId":48947,"journal":{"name":"Biodata Mining","volume":null,"pages":null},"PeriodicalIF":4.5,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10161184/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9426349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
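The core of change-point detection can be illustrated with a least-squares split on a term-frequency time series. A toy sketch (the paper integrates multiple models and accounts for intra- and inter-year variability; this hypothetical `change_point` helper only shows the basic idea of locating the year a term's usage shifts):

```python
import numpy as np

def change_point(series):
    """Return the index that best splits the series into two segments
    with different means (minimum total within-segment squared error)."""
    series = np.asarray(series, dtype=float)
    best_idx, best_cost = None, np.inf
    for i in range(1, len(series)):
        left, right = series[:i], series[i:]
        cost = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx

# Toy usage: yearly frequency of a term that surges mid-series,
# loosely mimicking a word like 'pandemic' around 2020.
freq = [2, 3, 2, 3, 2, 40, 42, 41, 39]
print(change_point(freq))  # 5: the index where usage jumps
```

Real corpora require normalizing for corpus size (the order-of-magnitude growth noted above) before any such split is meaningful.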
In many healthcare applications, datasets for classification may be highly imbalanced due to the rare occurrence of target events such as disease onset. The SMOTE (Synthetic Minority Over-sampling Technique) algorithm has been developed as an effective resampling method for imbalanced data classification that oversamples the minority class. However, samples generated by SMOTE may be ambiguous, low-quality, and non-separable from the majority class. To enhance the quality of generated samples, we propose a novel self-inspected adaptive SMOTE (SASMOTE) model that leverages an adaptive nearest-neighbor selection algorithm to identify the "visible" nearest neighbors, which are used to generate samples likely to fall into the minority class. To further enhance the quality of the generated samples, an uncertainty elimination via self-inspection approach is introduced in the proposed SASMOTE model. Its objective is to filter out generated samples that are highly uncertain and inseparable from the majority class. The effectiveness of the proposed algorithm is compared with existing SMOTE-based algorithms and demonstrated through two real-world case studies in healthcare: risk gene discovery and fatal congenital heart disease prediction. By generating higher-quality synthetic samples, the proposed algorithm helps achieve better prediction performance (in terms of F1 score) on average than the other methods, which promises to enhance the usability of machine learning models on highly imbalanced healthcare data.
{"title":"A self-inspected adaptive SMOTE algorithm (SASMOTE) for highly imbalanced data classification in healthcare.","authors":"Tanapol Kosolwattana, Chenang Liu, Renjie Hu, Shizhong Han, Hua Chen, Ying Lin","doi":"10.1186/s13040-023-00330-4","DOIUrl":"https://doi.org/10.1186/s13040-023-00330-4","url":null,"abstract":"<p><p>In many healthcare applications, datasets for classification may be highly imbalanced due to the rare occurrence of target events such as disease onset. The SMOTE (Synthetic Minority Over-sampling Technique) algorithm has been developed as an effective resampling method for imbalanced data classification by oversampling samples from the minority class. However, samples generated by SMOTE may be ambiguous, low-quality and non-separable with the majority class. To enhance the quality of generated samples, we proposed a novel self-inspected adaptive SMOTE (SASMOTE) model that leverages an adaptive nearest neighborhood selection algorithm to identify the \"visible\" nearest neighbors, which are used to generate samples likely to fall into the minority class. To further enhance the quality of the generated samples, an uncertainty elimination via self-inspection approach is introduced in the proposed SASMOTE model. Its objective is to filter out the generated samples that are highly uncertain and inseparable with the majority class. The effectiveness of the proposed algorithm is compared with existing SMOTE-based algorithms and demonstrated through two real-world case studies in healthcare, including risk gene discovery and fatal congenital heart disease prediction. 
By generating the higher quality synthetic samples, the proposed algorithm is able to help achieve better prediction performance (in terms of F1 score) on average compared to the other methods, which is promising to enhance the usability of machine learning models on highly imbalanced healthcare data.</p>","PeriodicalId":48947,"journal":{"name":"Biodata Mining","volume":null,"pages":null},"PeriodicalIF":4.5,"publicationDate":"2023-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10131309/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9361405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
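For context, classic SMOTE interpolates between a minority sample and one of its k nearest minority-class neighbors; SASMOTE's contribution is restricting and inspecting those neighbors. A NumPy sketch of the baseline step only (the adaptive "visible"-neighbor selection and the self-inspection filter are not implemented here, and `smote_sample` is a hypothetical helper name):

```python
import numpy as np

def smote_sample(minority, k=3, n_new=5, seed=0):
    """Classic SMOTE: interpolate between a random minority sample and
    one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    minority = np.asarray(minority, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        # distances from sample i to every minority sample
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbors)
        gap = rng.random()                   # interpolation factor in [0, 1)
        out.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(out)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote_sample(minority, k=2, n_new=3)
print(synthetic.shape)  # (3, 2); each row lies on a segment between two minority points
```

Because every synthetic point is a convex combination of two minority points, ambiguous samples arise exactly when that segment crosses into majority-class territory, which is what the self-inspection step screens for.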
Pub Date: 2023-04-10  DOI: 10.1186/s13040-023-00331-3
Philip J Freda, Attri Ghosh, Elizabeth Zhang, Tianhao Luo, Apurva S Chitre, Oksana Polesskaya, Celine L St Pierre, Jianjun Gao, Connor D Martin, Hao Chen, Angel G Garcia-Martinez, Tengfei Wang, Wenyan Han, Keita Ishiwari, Paul Meyer, Alexander Lamparelli, Christopher P King, Abraham A Palmer, Ruowang Li, Jason H Moore
Background: Quantitative Trait Locus (QTL) analysis and Genome-Wide Association Studies (GWAS) have the power to identify variants that capture significant levels of phenotypic variance in complex traits. However, effort and time are required to select the best methods and optimize parameters and pre-processing steps. Although machine learning approaches have been shown to greatly assist in optimization and data processing, applying them to QTL analysis and GWAS is challenging due to the complexity of large, heterogeneous datasets. Here, we describe a proof-of-concept for an automated machine learning approach, AutoQTL, with the ability to automate many complicated decisions related to the analysis of complex traits and generate solutions to describe relationships that exist in genetic data.
Results: Using a publicly available dataset of 18 putative QTL from a large-scale GWAS of body mass index in the laboratory rat, Rattus norvegicus, AutoQTL captures the phenotypic variance explained under a standard additive model. AutoQTL also detects evidence of non-additive effects including deviations from additivity and 2-way epistatic interactions in simulated data via multiple optimal solutions. Additionally, feature importance metrics provide different insights into the inheritance models and predictive power of multiple GWAS-derived putative QTL.
Conclusions: This proof-of-concept illustrates that automated machine learning techniques can complement standard approaches and have the potential to detect both additive and non-additive effects via various optimal solutions and feature importance metrics. In the future, we aim to expand AutoQTL to accommodate omics-level datasets with intelligent feature selection and feature engineering strategies.
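The "phenotypic variance explained under a standard additive model" can be made concrete with simulated genotypes: code each SNP as an allele count 0/1/2, fit ordinary least squares, and report R². A hypothetical NumPy sketch (simulated data, not the rat BMI QTL set, and effect sizes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_snps = 1000, 5
genotypes = rng.integers(0, 3, size=(n, n_snps)).astype(float)  # 0/1/2 allele counts
betas = np.array([0.5, 0.3, 0.0, 0.2, 0.0])                     # additive per-allele effects
phenotype = genotypes @ betas + rng.normal(scale=1.0, size=n)   # additive genetics + noise

# Ordinary least squares fit of the standard additive model
X = np.column_stack([np.ones(n), genotypes])                    # intercept + genotypes
coef, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
pred = X @ coef
r2 = 1 - ((phenotype - pred)**2).sum() / ((phenotype - phenotype.mean())**2).sum()
print(round(r2, 2))  # share of phenotypic variance captured by the additive fit
```

Non-additive effects such as dominance deviations or 2-way epistasis are, by construction, invisible to this fit; detecting them requires interaction terms or, as here, an automated search over model forms.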
{"title":"Automated quantitative trait locus analysis (AutoQTL).","authors":"Philip J Freda, Attri Ghosh, Elizabeth Zhang, Tianhao Luo, Apurva S Chitre, Oksana Polesskaya, Celine L St Pierre, Jianjun Gao, Connor D Martin, Hao Chen, Angel G Garcia-Martinez, Tengfei Wang, Wenyan Han, Keita Ishiwari, Paul Meyer, Alexander Lamparelli, Christopher P King, Abraham A Palmer, Ruowang Li, Jason H Moore","doi":"10.1186/s13040-023-00331-3","DOIUrl":"https://doi.org/10.1186/s13040-023-00331-3","url":null,"abstract":"<p><strong>Background: </strong>Quantitative Trait Locus (QTL) analysis and Genome-Wide Association Studies (GWAS) have the power to identify variants that capture significant levels of phenotypic variance in complex traits. However, effort and time are required to select the best methods and optimize parameters and pre-processing steps. Although machine learning approaches have been shown to greatly assist in optimization and data processing, applying them to QTL analysis and GWAS is challenging due to the complexity of large, heterogenous datasets. Here, we describe proof-of-concept for an automated machine learning approach, AutoQTL, with the ability to automate many complicated decisions related to analysis of complex traits and generate solutions to describe relationships that exist in genetic data.</p><p><strong>Results: </strong>Using a publicly available dataset of 18 putative QTL from a large-scale GWAS of body mass index in the laboratory rat, Rattus norvegicus, AutoQTL captures the phenotypic variance explained under a standard additive model. AutoQTL also detects evidence of non-additive effects including deviations from additivity and 2-way epistatic interactions in simulated data via multiple optimal solutions. 
Additionally, feature importance metrics provide different insights into the inheritance models and predictive power of multiple GWAS-derived putative QTL.</p><p><strong>Conclusions: </strong>This proof-of-concept illustrates that automated machine learning techniques can complement standard approaches and have the potential to detect both additive and non-additive effects via various optimal solutions and feature importance metrics. In the future, we aim to expand AutoQTL to accommodate omics-level datasets with intelligent feature selection and feature engineering strategies.</p>","PeriodicalId":48947,"journal":{"name":"Biodata Mining","volume":null,"pages":null},"PeriodicalIF":4.5,"publicationDate":"2023-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10088184/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9633507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}