Pub Date: 2022-04-26. eCollection Date: 2022-01-01. DOI: 10.34133/2022/9765307
Shuwei Shen, Mengjuan Xu, Fan Zhang, Pengfei Shao, Honghong Liu, Liang Xu, Chi Zhang, Peng Liu, Peng Yao, Ronald X Xu
Objective and Impact Statement. There is a need to develop high-performance and low-cost data augmentation strategies for intelligent skin cancer screening devices that can be deployed in rural or underdeveloped communities. The proposed strategy can not only improve the classification performance of skin lesions but also highlight the potential regions of interest for clinicians' attention. This strategy can also be implemented in a broad range of clinical disciplines for early screening and automatic diagnosis of many other diseases in low-resource settings. Methods. We propose a high-performance data augmentation strategy with a search space of 10^1, which can be combined with any model in a plug-and-play mode and searches for the best augmentation method for a medical database at low resource cost. Results. With EfficientNets as a baseline, the best BACC on HAM10000 is 0.853, outperforming the other published "single-model, no-external-database" models in the ISIC 2018 Lesion Diagnosis Challenge (Task 3). The best average AUC on ISIC 2017 reaches 0.909 (±0.015), exceeding most of the ensemble models and those using external datasets. Performance on Derm7pt achieves the best BACC of 0.735 (±0.018), ahead of all other related studies. Moreover, the model-based heatmaps generated by Grad-CAM++ verify the accurate selection of lesion features in model judgment, further supporting the rationality of model-based diagnosis. Conclusion. The proposed data augmentation strategy greatly reduces the computational cost of clinically intelligent diagnosis of skin lesions. It may also facilitate further research in low-cost, portable, AI-based mobile devices for skin cancer screening and therapeutic guidance.
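A sketch of the plug-and-play augmentation search described above. The abstract's "search space of 10^1" suggests roughly ten candidate policies, each scored once on validation data; the operation names and the toy scores below are illustrative stand-ins, not the authors' actual search space.

```python
# Hypothetical ~10-candidate augmentation search; each candidate is evaluated
# once and the best-scoring one is kept. Not the authors' implementation.

CANDIDATE_OPS = [
    "identity", "flip_horizontal", "flip_vertical", "rotate_90",
    "brightness_jitter", "contrast_jitter", "random_crop",
    "gaussian_noise", "cutout", "color_shift",
]

def search_best_augmentation(evaluate, ops=CANDIDATE_OPS):
    """Score every candidate with a user-supplied `evaluate(op) -> metric`
    callback (e.g. balanced accuracy on a validation split) and return the
    winner. With ~10 candidates this costs only ~10 short training runs,
    which is what makes the strategy cheap compared to large policy searches."""
    scores = {op: evaluate(op) for op in ops}
    best = max(scores, key=scores.get)
    return best, scores

# Toy evaluation: pretend horizontal flips help most on this dataset.
toy_bacc = {op: 0.70 for op in CANDIDATE_OPS}
toy_bacc["flip_horizontal"] = 0.85
best, scores = search_best_augmentation(lambda op: toy_bacc[op])
print(best)  # flip_horizontal
```

In practice the `evaluate` callback would wrap a short training run of the base model (e.g. an EfficientNet) with the candidate augmentation enabled, which keeps the search independent of the model architecture.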
{"title":"A Low-Cost High-Performance Data Augmentation for Deep Learning-Based Skin Lesion Classification.","authors":"Shuwei Shen, Mengjuan Xu, Fan Zhang, Pengfei Shao, Honghong Liu, Liang Xu, Chi Zhang, Peng Liu, Peng Yao, Ronald X Xu","doi":"10.34133/2022/9765307","DOIUrl":"10.34133/2022/9765307","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. There is a need to develop high-performance and low-cost data augmentation strategies for intelligent skin cancer screening devices that can be deployed in rural or underdeveloped communities. The proposed strategy can not only improve the classification performance of skin lesions but also highlight the potential regions of interest for clinicians' attention. This strategy can also be implemented in a broad range of clinical disciplines for early screening and automatic diagnosis of many other diseases in low resource settings. <i>Methods</i>. We propose a high-performance data augmentation strategy of search space 10<sup>1</sup>, which can be combined with any model through a plug-and-play mode and search for the best argumentation method for a medical database with low resource cost. <i>Results</i>. With EfficientNets as a baseline, the best BACC of HAM10000 is 0.853, outperforming the other published models of \"single-model and no-external-database\" for ISIC 2018 Lesion Diagnosis Challenge (Task 3). The best average AUC performance on ISIC 2017 achieves 0.909 (±0.015), exceeding most of the ensembling models and those using external datasets. Performance on Derm7pt archives the best BACC of 0.735 (±0.018) ahead of all other related studies. Moreover, the model-based heatmaps generated by Grad-CAM++ verify the accurate selection of lesion features in model judgment, further proving the scientific rationality of model-based diagnosis. <i>Conclusion</i>. The proposed data augmentation strategy greatly reduces the computational cost for clinically intelligent diagnosis of skin lesions. 
It may also facilitate further research in low-cost, portable, and AI-based mobile devices for skin cancer screening and therapeutic guidance.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2022-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521644/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-12. eCollection Date: 2022-01-01. DOI: 10.34133/2022/9813062
Alina Dubatovka, Joachim M Buhmann
Objective and Impact Statement. Atrial fibrillation (AF) is a serious medical condition that requires effective and timely treatment to prevent stroke. We explore deep neural networks (DNNs) for learning cardiac cycles and reliably detecting AF from single-lead electrocardiogram (ECG) signals. Introduction. Electrocardiograms are widely used for diagnosis of various cardiac dysfunctions, including AF. The huge volume of collected ECGs and recent algorithmic advances in processing time-series data with DNNs substantially improve the accuracy of AF diagnosis. DNNs, however, are often designed as general-purpose black-box models and lack interpretability of their decisions. Methods. We design a three-step pipeline for AF detection from ECGs. First, a recording is split into a sequence of individual heartbeats based on R-peak detection. Individual heartbeats are then encoded using a DNN that extracts interpretable features of a heartbeat by disentangling the duration of a heartbeat from its shape. Second, the sequence of heartbeat codes is passed to a DNN to form a signal-level representation capturing heart rhythm. Third, the signal representations are passed to a DNN for detecting AF. Results. Our approach demonstrates superior performance to existing ECG analysis methods on AF detection. Additionally, the method provides interpretations of the features extracted from heartbeats by DNNs and enables cardiologists to study ECGs in terms of the shapes of individual heartbeats and the rhythm of the whole signal. Conclusion. By considering ECGs on two levels and employing DNNs for modelling of cardiac cycles, this work presents a method for reliable detection of AF from single-lead ECGs.
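The three-step pipeline above can be sketched with simple stand-ins: a threshold-based R-peak detector in place of the learned detector, beat duration as the interpretable per-beat feature, and RR-interval irregularity in place of the rhythm DNN. All thresholds and the synthetic signals are illustrative assumptions, not the paper's method.

```python
# Toy version of the split -> encode -> classify pipeline for AF detection.
from statistics import mean, pstdev

def detect_r_peaks(ecg, threshold=0.8):
    """Step 1 stand-in: indices of local maxima above `threshold`."""
    return [i for i in range(1, len(ecg) - 1)
            if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]]

def encode_beats(peaks, fs=250):
    """Step 2 stand-in: represent each cardiac cycle by its duration in
    seconds (the 'duration' half of the duration/shape disentanglement)."""
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]

def af_score(rr_intervals):
    """Step 3 stand-in: coefficient of variation of RR intervals. AF presents
    as an irregularly irregular rhythm, i.e. a high score."""
    return pstdev(rr_intervals) / mean(rr_intervals)

# Synthetic signals at fs = 250 Hz: a regular rhythm vs. an irregular one.
regular = [0.0] * 2000
for i in range(0, 2000, 200):          # one beat every 0.8 s
    regular[i] = 1.0
irregular = [0.0] * 2000
for i in [150, 420, 560, 900, 1000, 1330, 1500, 1880]:
    irregular[i] = 1.0

reg_rr = encode_beats(detect_r_peaks(regular))
irr_rr = encode_beats(detect_r_peaks(irregular))
print(af_score(reg_rr) < af_score(irr_rr))  # True
```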
{"title":"Automatic Detection of Atrial Fibrillation from Single-Lead ECG Using Deep Learning of the Cardiac Cycle.","authors":"Alina Dubatovka, Joachim M Buhmann","doi":"10.34133/2022/9813062","DOIUrl":"10.34133/2022/9813062","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. Atrial fibrillation (AF) is a serious medical condition that requires effective and timely treatment to prevent stroke. We explore deep neural networks (DNNs) for learning cardiac cycles and reliably detecting AF from single-lead electrocardiogram (ECG) signals. <i>Introduction</i>. Electrocardiograms are widely used for diagnosis of various cardiac dysfunctions including AF. The huge amount of collected ECGs and recent algorithmic advances to process time-series data with DNNs substantially improve the accuracy of the AF diagnosis. DNNs, however, are often designed as general purpose black-box models and lack interpretability of their decisions. <i>Methods</i>. We design a three-step pipeline for AF detection from ECGs. First, a recording is split into a sequence of individual heartbeats based on R-peak detection. Individual heartbeats are then encoded using a DNN that extracts interpretable features of a heartbeat by disentangling the duration of a heartbeat from its shape. Second, the sequence of heartbeat codes is passed to a DNN to combine a signal-level representation capturing heart rhythm. Third, the signal representations are passed to a DNN for detecting AF. <i>Results</i>. Our approach demonstrates a superior performance to existing ECG analysis methods on AF detection. Additionally, the method provides interpretations of the features extracted from heartbeats by DNNs and enables cardiologists to study ECGs in terms of the shapes of individual heartbeats and rhythm of the whole signals. <i>Conclusion</i>. 
By considering ECGs on two levels and employing DNNs for modelling of cardiac cycles, this work presents a method for reliable detection of AF from single-lead ECGs.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2022-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521743/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-07. eCollection Date: 2022-01-01. DOI: 10.34133/2022/9872028
Zheng Cao, Xiang Pan, Hongyun Yu, Shiyuan Hua, Da Wang, Danny Z Chen, Min Zhou, Jian Wu
Objective and Impact Statement. Distinguishing tumors from normal tissues is vital in intraoperative diagnosis and pathological examination. In this work, we propose to utilize Raman spectroscopy as a novel modality in surgery to detect colorectal cancer tissues. Introduction. Raman spectra can reflect the substance components of the target tissues. However, the feature peaks are weak and hard to detect due to environmental noise. Collecting a high-quality Raman spectroscopy dataset and developing effective deep learning detection methods are possibly viable approaches. Methods. First, we collect a large Raman spectroscopy dataset from 26 colorectal cancer patients, with the Raman shift ranging from 385 to 1545 cm⁻¹. Second, a one-dimensional residual convolutional neural network (1D-ResNet) architecture is designed to classify the tumor tissues of colorectal cancer. Third, we visualize and interpret the fingerprint peaks found by our deep learning model. Results. Experimental results show that our deep learning method achieves 98.5% accuracy in the detection of colorectal cancer and outperforms traditional methods. Conclusion. Overall, Raman spectra are a novel modality for clinical detection of colorectal cancer. Our proposed ensemble 1D-ResNet could effectively classify the Raman spectra obtained from colorectal tumor tissues or normal tissues.
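The building unit of a 1D-ResNet like the one described above is a residual block: two 1-D convolutions with a ReLU in between, plus an identity skip connection. The plain-Python sketch below uses made-up averaging kernels for illustration; a real model learns the kernel weights from the spectra.

```python
# Minimal 1-D residual block: conv -> ReLU -> conv, plus identity skip.
def conv1d(signal, kernel):
    """'Same'-padded 1-D convolution so the output keeps the input length."""
    k, pad = len(kernel), len(kernel) // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(signal))]

def relu(xs):
    return [max(0.0, x) for x in xs]

def residual_block(signal, k1, k2):
    out = conv1d(relu(conv1d(signal, k1)), k2)
    return [x + s for x, s in zip(out, signal)]  # identity skip connection

spectrum = [0.1, 0.2, 0.9, 0.2, 0.1, 0.05]  # toy Raman intensities
smooth = [1 / 3, 1 / 3, 1 / 3]              # illustrative averaging kernel
out = residual_block(spectrum, smooth, smooth)
print(len(out) == len(spectrum))  # True: 'same' padding preserves the length
```

The skip connection is what lets such blocks be stacked deep without vanishing gradients, which matters when the informative fingerprint peaks are faint relative to the noise floor.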
{"title":"A Deep Learning Approach for Detecting Colorectal Cancer via Raman Spectra.","authors":"Zheng Cao, Xiang Pan, Hongyun Yu, Shiyuan Hua, Da Wang, Danny Z Chen, Min Zhou, Jian Wu","doi":"10.34133/2022/9872028","DOIUrl":"https://doi.org/10.34133/2022/9872028","url":null,"abstract":"<p><p><i>Objective and Impact Statement.</i> Distinguishing tumors from normal tissues is vital in the intraoperative diagnosis and pathological examination. In this work, we propose to utilize Raman spectroscopy as a novel modality in surgery to detect colorectal cancer tissues. <i>Introduction.</i> Raman spectra can reflect the substance components of the target tissues. However, the feature peak is slight and hard to detect due to environmental noise. Collecting a high-quality Raman spectroscopy dataset and developing effective deep learning detection methods are possibly viable approaches. <i>Methods.</i> First, we collect a large Raman spectroscopy dataset from 26 colorectal cancer patients with the Raman shift ranging from 385 to 1545 cm<math><msup><mrow><mtext> </mtext></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup></math>. Second, a one-dimensional residual convolutional neural network (1D-ResNet) architecture is designed to classify the tumor tissues of colorectal cancer. Third, we visualize and interpret the fingerprint peaks found by our deep learning model. <i>Results.</i> Experimental results show that our deep learning method achieves 98.5% accuracy in the detection of colorectal cancer and outperforms traditional methods. <i>Conclusion.</i> Overall, Raman spectra are a novel modality for clinical detection of colorectal cancer. 
Our proposed ensemble 1D-ResNet could effectively classify the Raman spectra obtained from colorectal tumor tissues or normal tissues.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521640/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-05. eCollection Date: 2022-01-01. DOI: 10.34133/2022/9867230
Chih-Yen Chien, Yaoheng Yang, Yan Gong, Yimei Yue, Hong Chen
Objective and Impact Statement. To develop an approach for individualized closed-loop feedback control of microbubble cavitation to achieve safe and effective focused ultrasound in combination with microbubble-induced blood-brain barrier opening (FUS-BBBO). Introduction. FUS-BBBO is a promising strategy for noninvasive and localized brain drug delivery, with a growing number of clinical studies currently ongoing. Real-time cavitation monitoring and feedback control are critical to achieving safe and effective FUS-BBBO. However, feedback control algorithms used in the past were either open-loop or did not consider baseline cavitation level differences among subjects. Methods. This study performed feedback-controlled FUS-BBBO by defining the target cavitation level based on the baseline stable cavitation level of an individual subject, measured with a "dummy" FUS sonication. The dummy FUS sonication applied FUS at a low acoustic pressure for a short duration in the presence of microbubbles to define the baseline stable cavitation level, taking into consideration individual differences in the detected cavitation emissions. FUS-BBBO was then achieved through two sonication phases: a ramping-up phase to reach the target cavitation level and a maintaining phase to hold the stable cavitation level at the target. Results. Evaluations performed in wild-type mice demonstrated that this approach achieved effective and safe trans-BBB delivery of a model drug. The drug delivery efficiency increased as the target cavitation level increased from 0.5 dB to 2 dB without causing vascular damage. Increasing the target cavitation level to 3 dB and 4 dB increased the probability of tissue damage. Conclusions. Safe and effective brain drug delivery was achieved using the individualized closed-loop feedback-controlled FUS-BBBO.
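The control scheme above can be sketched as a simple proportional loop: a low-pressure dummy sonication measures the per-subject baseline, the target is set at baseline plus a chosen offset in dB, and pressure is ramped until the measured level reaches and holds the target. The plant model and gain below are illustrative assumptions, not the authors' hardware values.

```python
# Toy closed-loop controller for the dummy-calibrate / ramp-up / maintain scheme.
def cavitation_db(pressure, baseline_db):
    """Toy plant model: stable cavitation level rises with acoustic pressure."""
    return baseline_db + 4.0 * pressure

def run_fus_bbbo(baseline_db, target_db=2.0, steps=50, gain=0.05):
    pressure = 0.05                                       # dummy sonication: low pressure
    dummy_level = cavitation_db(pressure, baseline_db)    # per-subject baseline estimate
    target = dummy_level + target_db                      # individualized target level
    history = []
    for _ in range(steps):
        level = cavitation_db(pressure, baseline_db)                 # monitor
        pressure = max(0.0, pressure + gain * (target - level))      # ramp / maintain
        history.append(level)
    return history, target

history, target = run_fus_bbbo(baseline_db=7.3, target_db=2.0)
print(abs(history[-1] - target) < 0.1)  # True: the loop settles at the target
```

Because the target is defined relative to each subject's own baseline, the same `target_db` offset (e.g. 2 dB, the highest level the study found safe) transfers across subjects with different baseline cavitation emissions.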
{"title":"Blood-Brain Barrier Opening by Individualized Closed-Loop Feedback Control of Focused Ultrasound.","authors":"Chih-Yen Chien, Yaoheng Yang, Yan Gong, Yimei Yue, Hong Chen","doi":"10.34133/2022/9867230","DOIUrl":"10.34133/2022/9867230","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. To develop an approach for individualized closed-loop feedback control of microbubble cavitation to achieve safe and effective focused ultrasound in combination with microbubble-induced blood-brain barrier opening (FUS-BBBO). <i>Introduction</i>. FUS-BBBO is a promising strategy for noninvasive and localized brain drug delivery with a growing number of clinical studies currently ongoing. Real-time cavitation monitoring and feedback control are critical to achieving safe and effective FUS-BBBO. However, feedback control algorithms used in the past were either open-loop or without consideration of baseline cavitation level difference among subjects. <i>Methods</i>. This study performed feedback-controlled FUS-BBBO by defining the target cavitation level based on the baseline stable cavitation level of an individual subject with \"dummy\" FUS sonication. The dummy FUS sonication applied FUS with a low acoustic pressure for a short duration in the presence of microbubbles to define the baseline stable cavitation level that took into consideration of individual differences in the detected cavitation emissions. FUS-BBBO was then achieved through two sonication phases: ramping-up phase to reach the target cavitation level and maintaining phase to control the stable cavitation level at the target cavitation level. <i>Results</i>. Evaluations performed in wild-type mice demonstrated that this approach achieved effective and safe trans-BBB delivery of a model drug. The drug delivery efficiency increased as the target cavitation level increased from 0.5 dB to 2 dB without causing vascular damage. 
Increasing the target cavitation level to 3 dB and 4 dB increased the probability of tissue damage. <i>Conclusions</i>. Safe and effective brain drug delivery was achieved using the individualized closed-loop feedback-controlled FUS-BBBO.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521637/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-04. eCollection Date: 2022-01-01. DOI: 10.34133/2022/9793716
Jifei Wang, Dasheng Wu, Meili Sun, Zhenpeng Peng, Yingyu Lin, Hongxin Lin, Jiazhao Chen, Tingyu Long, Zi-Ping Li, Chuanmiao Xie, Bingsheng Huang, Shi-Ting Feng
Objective and Impact Statement. This study developed and validated a deep semantic segmentation feature-based radiomics (DSFR) model based on preoperative contrast-enhanced computed tomography (CECT) combined with clinical information to predict early recurrence (ER) of single hepatocellular carcinoma (HCC) after curative resection. ER prediction is of great significance to the therapeutic decision-making and surveillance strategy of HCC. Introduction. ER prediction is important for HCC. However, it cannot currently be adequately determined. Methods. In total, 208 patients with single HCC after curative resection were retrospectively recruited into a model-development cohort (n = 180) and an independent validation cohort (n = 28). DSFR models based on different CT phases were developed. The optimal DSFR model was incorporated with clinical information to establish a DSFR-C model. An integrated nomogram based on Cox regression was established. The DSFR signature was used to stratify high- and low-risk ER groups. Results. A portal phase-based DSFR model was selected as the optimal model (area under the receiver operating characteristic curve (AUC): development cohort, 0.740; validation cohort, 0.717). The DSFR-C model achieved AUCs of 0.782 and 0.744 in the development and validation cohorts, respectively. In the development and validation cohorts, the integrated nomogram achieved C-indices of 0.748 and 0.741 and time-dependent AUCs of 0.823 and 0.822, respectively, for recurrence-free survival (RFS) prediction. The RFS difference between the risk groups was statistically significant (P < 0.0001 and P = 0.045 in the development and validation cohorts, respectively). Conclusion. CECT-based DSFR can predict ER in single HCC after curative resection, and its combination with clinical information further improved the performance for ER prediction.
{"title":"Deep Segmentation Feature-Based Radiomics Improves Recurrence Prediction of Hepatocellular Carcinoma.","authors":"Jifei Wang, Dasheng Wu, Meili Sun, Zhenpeng Peng, Yingyu Lin, Hongxin Lin, Jiazhao Chen, Tingyu Long, Zi-Ping Li, Chuanmiao Xie, Bingsheng Huang, Shi-Ting Feng","doi":"10.34133/2022/9793716","DOIUrl":"https://doi.org/10.34133/2022/9793716","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. This study developed and validated a deep semantic segmentation feature-based radiomics (DSFR) model based on preoperative contrast-enhanced computed tomography (CECT) combined with clinical information to predict early recurrence (ER) of single hepatocellular carcinoma (HCC) after curative resection. ER prediction is of great significance to the therapeutic decision-making and surveillance strategy of HCC. <i>Introduction</i>. ER prediction is important for HCC. However, it cannot currently be adequately determined. <i>Methods</i>. Totally, 208 patients with single HCC after curative resection were retrospectively recruited into a model-development cohort (<math><mi>n</mi><mo>=</mo><mn>180</mn></math>) and an independent validation cohort (<math><mi>n</mi><mo>=</mo><mn>28</mn></math>). DSFR models based on different CT phases were developed. The optimal DSFR model was incorporated with clinical information to establish a DSFR-C model. An integrated nomogram based on the Cox regression was established. The DSFR signature was used to stratify high- and low-risk ER groups. <i>Results</i>. A portal phase-based DSFR model was selected as the optimal model (area under receiver operating characteristic curve (AUC): development cohort, 0.740; validation cohort, 0.717). The DSFR-C model achieved AUCs of 0.782 and 0.744 in the development and validation cohorts, respectively. 
In the development and validation cohorts, the integrated nomogram achieved C-index of 0.748 and 0.741 and time-dependent AUCs of 0.823 and 0.822, respectively, for recurrence-free survival (RFS) prediction. The RFS difference between the risk groups was statistically significant (<math><mi>P</mi><mo><</mo><mn>0.0001</mn></math> and <math><mi>P</mi><mo>=</mo><mn>0.045</mn></math> in the development and validation cohorts, respectively). <i>Conclusion</i>. CECT-based DSFR can predict ER in single HCC after curative resection, and its combination with clinical information further improved the performance for ER prediction.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521680/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-02. eCollection Date: 2022-01-01. DOI: 10.34133/2022/9763284
Wei Xiong, Neil Yeung, Shubo Wang, Haofu Liao, Liyun Wang, Jiebo Luo
Objective and Impact Statement. We adopt a deep learning model for bone osteolysis prediction on computed tomography (CT) images of murine breast cancer bone metastases. Given the bone CT scans at previous time steps, the model incorporates the bone-cancer interactions learned from the sequential images and generates future CT images. Its ability to predict the development of bone lesions in cancer-invaded bones can assist in assessing the risk of impending fractures and choosing proper treatments in breast cancer bone metastasis. Introduction. Breast cancer often metastasizes to bone, causes osteolytic lesions, and results in skeletal-related events (SREs), including severe pain and even fatal fractures. Although current imaging techniques can detect macroscopic bone lesions, predicting the occurrence and progression of bone lesions remains a challenge. Methods. We adopt a temporal variational autoencoder (T-VAE) model that utilizes a combination of variational autoencoders and long short-term memory networks to predict bone lesion emergence on our micro-CT dataset containing sequential images of murine tibiae. Given the CT scans of murine tibiae at early weeks, our model can learn the distribution of their future states from data. Results. We test our model against other deep learning-based prediction models on the bone lesion progression prediction task. Our model produces much more accurate predictions than existing models under various evaluation metrics. Conclusion. We develop a deep learning framework that can accurately predict and visualize the progression of osteolytic bone lesions. It will assist in planning and evaluating treatment strategies to prevent SREs in breast cancer patients.
{"title":"Breast Cancer Induced Bone Osteolysis Prediction Using Temporal Variational Autoencoders.","authors":"Wei Xiong, Neil Yeung, Shubo Wang, Haofu Liao, Liyun Wang, Jiebo Luo","doi":"10.34133/2022/9763284","DOIUrl":"10.34133/2022/9763284","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. We adopt a deep learning model for bone osteolysis prediction on computed tomography (CT) images of murine breast cancer bone metastases. Given the bone CT scans at previous time steps, the model incorporates the bone-cancer interactions learned from the sequential images and generates future CT images. Its ability of predicting the development of bone lesions in cancer-invading bones can assist in assessing the risk of impending fractures and choosing proper treatments in breast cancer bone metastasis. <i>Introduction</i>. Breast cancer often metastasizes to bone, causes osteolytic lesions, and results in skeletal-related events (SREs) including severe pain and even fatal fractures. Although current imaging techniques can detect macroscopic bone lesions, predicting the occurrence and progression of bone lesions remains a challenge. <i>Methods</i>. We adopt a temporal variational autoencoder (T-VAE) model that utilizes a combination of variational autoencoders and long short-term memory networks to predict bone lesion emergence on our micro-CT dataset containing sequential images of murine tibiae. Given the CT scans of murine tibiae at early weeks, our model can learn the distribution of their future states from data. <i>Results</i>. We test our model against other deep learning-based prediction models on the bone lesion progression prediction task. Our model produces much more accurate predictions than existing models under various evaluation metrics. <i>Conclusion</i>. We develop a deep learning framework that can accurately predict and visualize the progression of osteolytic bone lesions. 
It will assist in planning and evaluating treatment strategies to prevent SREs in breast cancer patients.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521666/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-03-16. eCollection Date: 2022-01-01. DOI: 10.34133/2022/9860179
Hailing Liu, Yu Zhao, Fan Yang, Xiaoying Lou, Feng Wu, Hang Li, Xiaohan Xing, Tingying Peng, Bjoern Menze, Junzhou Huang, Shujun Zhang, Anjia Han, Jianhua Yao, Xinjuan Fan
Objective. To develop an artificial intelligence method predicting lymph node metastasis (LNM) for patients with colorectal cancer (CRC). Impact Statement. A novel interpretable multimodal AI-based method to predict LNM for CRC patients by integrating information from pathological images and serum tumor-specific biomarkers. Introduction. Preoperative diagnosis of LNM is essential in treatment planning for CRC patients. Existing radiological imaging and genomic testing approaches are either unreliable or too costly. Methods. A total of 1338 patients were recruited: 1128 patients from one centre were included as the discovery cohort, and 210 patients from two other centres were involved as the external validation cohort. We developed a Multimodal Multiple Instance Learning (MMIL) model to learn latent features from pathological images and then jointly integrated the clinical biomarker features for predicting LNM status. Heatmaps of the obtained MMIL model were generated for model interpretation. Results. The MMIL model outperformed preoperative radiological imaging diagnosis and yielded high areas under the curve (AUCs) of 0.926, 0.878, 0.809, and 0.857 for patients with stage T1, T2, T3, and T4 CRC, respectively, on the discovery cohort. On the external cohort, it obtained AUCs of 0.855, 0.832, 0.691, and 0.792 (T1-T4), which indicates its prediction accuracy and potential adaptability across multiple centres. Conclusion. The MMIL model showed potential for the early diagnosis of LNM by referring to pathological images and tumor-specific biomarkers, which are easily accessed in different institutes. We revealed the histomorphologic features determining the LNM prediction, indicating the model's ability to learn informative latent features.
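A common way to realize multimodal multiple instance learning of the kind described above is attention-based pooling over patch-level (instance) features from a slide image, with the pooled feature concatenated to the biomarker vector before a final score. The sketch below illustrates that general pattern; the feature sizes, weights, and biomarker names are made-up examples, not the authors' MMIL implementation.

```python
# Illustrative attention-MIL pooling plus biomarker fusion (toy numbers).
import math

def attention_pool(instances, w):
    """Weighted mean of instance features; weights = softmax(w . feature).
    The attention weights double as a patch-level interpretability map."""
    logits = [sum(wi * xi for wi, xi in zip(w, inst)) for inst in instances]
    m = max(logits)
    exp = [math.exp(l - m) for l in logits]
    z = sum(exp)
    attn = [e / z for e in exp]
    dim = len(instances[0])
    pooled = [sum(a * inst[d] for a, inst in zip(attn, instances))
              for d in range(dim)]
    return pooled, attn

def mmil_score(instances, biomarkers, w_attn, w_out, bias=0.0):
    pooled, attn = attention_pool(instances, w_attn)
    fused = pooled + biomarkers                       # concatenate modalities
    score = bias + sum(w * f for w, f in zip(w_out, fused))
    return 1.0 / (1.0 + math.exp(-score)), attn       # sigmoid probability

patches = [[0.2, 0.1], [0.9, 0.8], [0.1, 0.3]]  # 3 image patches, 2-d features
serum = [0.6, 0.4]                               # toy serum biomarker values
prob, attn = mmil_score(patches, serum,
                        w_attn=[1.0, 1.0], w_out=[1.0, 1.0, 0.5, 0.5])
print(0.0 < prob < 1.0)  # True
```

The attention vector `attn` is what a heatmap visualization would be built from: patches with high weight are the ones the model credits for its LNM prediction.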
{"title":"Preoperative Prediction of Lymph Node Metastasis in Colorectal Cancer with Deep Learning.","authors":"Hailing Liu, Yu Zhao, Fan Yang, Xiaoying Lou, Feng Wu, Hang Li, Xiaohan Xing, Tingying Peng, Bjoern Menze, Junzhou Huang, Shujun Zhang, Anjia Han, Jianhua Yao, Xinjuan Fan","doi":"10.34133/2022/9860179","DOIUrl":"https://doi.org/10.34133/2022/9860179","url":null,"abstract":"<p><p><i>Objective</i>. To develop an artificial intelligence method predicting lymph node metastasis (LNM) for patients with colorectal cancer (CRC). <i>Impact Statement</i>. A novel interpretable multimodal AI-based method to predict LNM for CRC patients by integrating information of pathological images and serum tumor-specific biomarkers. <i>Introduction</i>. Preoperative diagnosis of LNM is essential in treatment planning for CRC patients. Existing radiology imaging and genomic tests approaches are either unreliable or too costly. <i>Methods</i>. A total of 1338 patients were recruited, where 1128 patients from one centre were included as the discovery cohort and 210 patients from other two centres were involved as the external validation cohort. We developed a Multimodal Multiple Instance Learning (MMIL) model to learn latent features from pathological images and then jointly integrated the clinical biomarker features for predicting LNM status. The heatmaps of the obtained MMIL model were generated for model interpretation. <i>Results</i>. The MMIL model outperformed preoperative radiology-imaging diagnosis and yielded high area under the curve (AUCs) of 0.926, 0.878, 0.809, and 0.857 for patients with stage T1, T2, T3, and T4 CRC, on the discovery cohort. On the external cohort, it obtained AUCs of 0.855, 0.832, 0.691, and 0.792, respectively (T1-T4), which indicates its prediction accuracy and potential adaptability among multiple centres. <i>Conclusion</i>. 
The MMIL model showed the potential in the early diagnosis of LNM by referring to pathological images and tumor-specific biomarkers, which is easily accessed in different institutes. We revealed the histomorphologic features determining the LNM prediction indicating the model ability to learn informative latent features.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521754/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-03-08; eCollection Date: 2022-01-01; DOI: 10.34133/2022/9814824
Peiting You, Xiang Li, Fan Zhang, Quanzheng Li
Objective. The objective of this work is the development and evaluation of a cortical parcellation framework based on tractography-derived brain structural connectivity. Impact Statement. The proposed framework utilizes novel spatial-graph representation learning methods to solve the task of cortical parcellation, an important medical image analysis and neuroscientific problem. Introduction. The concept of a "connectional fingerprint" has motivated many investigations of connectivity-based cortical parcellation, especially with the technical advancement of diffusion imaging. Previous studies on multiple brain regions have been conducted with promising results. However, the performance and applicability of these models are limited by their relatively simple computational schemes and the lack of an effective representation of brain imaging data. Methods. We propose the Spatial-graph Convolution Parcellation (SGCP) framework, a two-stage deep learning-based model for graph representations of brain imaging data. In the first stage, SGCP learns an effective embedding of the input data through a self-supervised contrastive learning scheme with a spatial-graph convolution network as the backbone encoder. In the second stage, SGCP learns a supervised classifier that performs voxel-wise classification to parcellate the desired brain region. Results. SGCP is evaluated on the parcellation task for 5 brain regions in a 15-subject DWI dataset. Performance comparisons between SGCP, traditional parcellation methods, and other deep learning-based methods show that SGCP achieves superior performance in all cases. Conclusion. The consistently good performance of the proposed SGCP framework indicates its potential as a general solution for investigating the regional/subregional composition of the human brain based on one or more connectivity measurements.
{"title":"Connectivity-based Cortical Parcellation via Contrastive Learning on Spatial-Graph Convolution.","authors":"Peiting You, Xiang Li, Fan Zhang, Quanzheng Li","doi":"10.34133/2022/9814824","DOIUrl":"10.34133/2022/9814824","url":null,"abstract":"<p><p><i>Objective</i>. The objective of this work is the development and evaluation of a cortical parcellation framework based on tractography-derived brain structural connectivity. <i>Impact Statement</i>. The proposed framework utilizes novel spatial-graph representation learning methods to solve the task of cortical parcellation, an important medical image analysis and neuroscientific problem. <i>Introduction</i>. The concept of a \"connectional fingerprint\" has motivated many investigations of connectivity-based cortical parcellation, especially with the technical advancement of diffusion imaging. Previous studies on multiple brain regions have been conducted with promising results. However, the performance and applicability of these models are limited by their relatively simple computational schemes and the lack of an effective representation of brain imaging data. <i>Methods</i>. We propose the Spatial-graph Convolution Parcellation (SGCP) framework, a two-stage deep learning-based model for graph representations of brain imaging data. In the first stage, SGCP learns an effective embedding of the input data through a self-supervised contrastive learning scheme with a spatial-graph convolution network as the backbone encoder. In the second stage, SGCP learns a supervised classifier that performs voxel-wise classification to parcellate the desired brain region. <i>Results</i>. SGCP is evaluated on the parcellation task for 5 brain regions in a 15-subject DWI dataset. Performance comparisons between SGCP, traditional parcellation methods, and other deep learning-based methods show that SGCP achieves superior performance in all cases. <i>Conclusion</i>. 
The consistently good performance of the proposed SGCP framework indicates its potential as a general solution for investigating the regional/subregional composition of the human brain based on one or more connectivity measurements.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2022-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521716/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
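The first-stage self-supervised contrastive scheme can be illustrated with a generic NT-Xent-style loss over paired embeddings, where two views of the same voxel neighborhood form a positive pair. This is a common contrastive formulation used for illustration, not necessarily SGCP's exact objective:

```python
import numpy as np

def ntxent_loss(z1, z2, tau=0.5):
    """NT-Xent contrastive loss for a batch of paired embeddings:
    the two views of each sample are pulled together, all other
    pairs in the batch are pushed apart."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # cosine similarity
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    # the positive for row i is its other view, at index (i + n) mod 2n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(1)
anchor = rng.normal(size=(16, 32))                      # 16 samples, dim-32 embeddings
loss_random = ntxent_loss(anchor, rng.normal(size=(16, 32)))           # unrelated views
loss_aligned = ntxent_loss(anchor, anchor + 0.01 * rng.normal(size=(16, 32)))
print(loss_aligned < loss_random)   # near-identical views yield a lower loss
```

Minimizing this loss drives the backbone encoder toward embeddings where connectionally similar voxels cluster, which is what makes the second-stage supervised classifier effective.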
Objective and Impact Statement. There is a need to develop rodent coils capable of targeted brain stimulation for treating neuropsychiatric disorders and understanding brain mechanisms. We describe a novel rodent coil design that improves focality for targeted stimulation in small rodent brains. Introduction. Transcranial magnetic stimulation (TMS) is becoming increasingly important for treating neuropsychiatric disorders and understanding brain mechanisms. Preclinical studies permit invasive manipulations and are essential for the mechanistic understanding of TMS effects and the exploration of therapeutic outcomes in disease models. However, existing TMS tools lack the focality needed for targeted stimulation. Notably, there has been limited fundamental research on developing coils capable of focal stimulation of deep brain regions in small animals such as rodents. Methods. In this study, ferromagnetic cores are added to a novel angle-tuned coil design to enhance coil performance in terms of penetration depth and focality. Numerical simulations and experimental electric field measurements were conducted to optimize the coil design. Results. The proposed coil system demonstrated a significantly smaller stimulation spot size and an enhanced electric field decay rate compared with existing coils. Adding the ferromagnetic core reduces the energy requirements by up to 60% for rodent brain stimulation. The simulation results are validated by experimental measurements and by a demonstration of suprathreshold rodent limb excitation through targeted motor cortex activation. Conclusion. The newly developed coils are suitable tools for focal stimulation of the rodent brain due to their smaller stimulation spot size and improved electric field decay rate.
{"title":"High-Performance Magnetic-core Coils for Targeted Rodent Brain Stimulations.","authors":"Hedyeh Bagherzadeh, Qinglei Meng, Hanbing Lu, Elliott Hong, Yihong Yang, Fow-Sen Choa","doi":"10.34133/2022/9854846","DOIUrl":"https://doi.org/10.34133/2022/9854846","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. There is a need to develop rodent coils capable of targeted brain stimulation for treating neuropsychiatric disorders and understanding brain mechanisms. We describe a novel rodent coil design that improves focality for targeted stimulation in small rodent brains. <i>Introduction</i>. Transcranial magnetic stimulation (TMS) is becoming increasingly important for treating neuropsychiatric disorders and understanding brain mechanisms. Preclinical studies permit invasive manipulations and are essential for the mechanistic understanding of TMS effects and the exploration of therapeutic outcomes in disease models. However, existing TMS tools lack the focality needed for targeted stimulation. Notably, there has been limited fundamental research on developing coils capable of focal stimulation of deep brain regions in small animals such as rodents. <i>Methods</i>. In this study, ferromagnetic cores are added to a novel angle-tuned coil design to enhance coil performance in terms of penetration depth and focality. Numerical simulations and experimental electric field measurements were conducted to optimize the coil design. <i>Results</i>. The proposed coil system demonstrated a significantly smaller stimulation spot size and an enhanced electric field decay rate compared with existing coils. Adding the ferromagnetic core reduces the energy requirements by up to 60% for rodent brain stimulation. The simulation results are validated by experimental measurements and by a demonstration of suprathreshold rodent limb excitation through targeted motor cortex activation. <i>Conclusion</i>. 
The newly developed coils are suitable tools for focal stimulation of the rodent brain due to their smaller stimulation spot size and improved electric field decay rate.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521704/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
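One common way to quantify the "stimulation spot size" comparison reported above is a full-width-at-half-maximum (FWHM) measure on a lateral electric-field profile. The Gaussian profiles below are purely hypothetical stand-ins for simulated field data, not measurements from the paper:

```python
import numpy as np

def spot_size_fwhm(x, e_field):
    """Full width at half maximum of a 1-D field profile,
    a simple proxy for the stimulation spot size."""
    half = e_field.max() / 2.0
    above = x[e_field >= half]        # positions where the field exceeds half-max
    return above.max() - above.min()

# Hypothetical lateral field profiles (arbitrary units) on a 0.1 mm grid:
x = np.linspace(-20, 20, 401)                  # lateral position, mm
conventional = np.exp(-(x / 8.0) ** 2)         # broader focus
magnetic_core = np.exp(-(x / 4.0) ** 2)        # sharper focus with the core
print(spot_size_fwhm(x, magnetic_core), spot_size_fwhm(x, conventional))
```

A sharper profile yields a smaller FWHM; an analogous measure along the depth axis would quantify the field decay rate.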
Pub Date: 2022-02-25; eCollection Date: 2022-01-01; DOI: 10.34133/2022/9837076
Alex Ling Yu Hung, Edward Chen, John Galeotti
Objective and Impact Statement. We propose a weakly- and semisupervised, probabilistic needle-and-reverberation-artifact segmentation algorithm to separate the desired tissue-based pixel values from the superimposed artifacts. Our method models the intensity decay of reverberation artifacts and is designed to minimize human labeling error. Introduction. Ultrasound image quality has been continually improving. However, when needles or other metallic objects are present inside the tissue, the resulting reverberation artifacts can severely corrupt the surrounding image quality. Such effects are challenging for existing computer vision algorithms for medical image analysis. Needle reverberation artifacts can be hard to identify at times and affect various pixel values to different degrees. The boundaries of such artifacts are ambiguous, leading to disagreement among human experts labeling the artifacts. Methods. Our learning-based framework consists of three parts. The first part is a probabilistic segmentation network that generates soft labels based on the human labels. These soft labels are input into the second part, a transform function, which generates the training labels for the third part. The third part outputs the final masks, which quantify the reverberation artifacts. Results. We demonstrate the applicability of the approach and compare it against other segmentation algorithms. Our method is capable of both differentiating reverberation artifacts from artifact-free patches and modeling the intensity fall-off within the artifacts. Conclusion. Our method matches state-of-the-art artifact segmentation performance and sets a new standard in estimating the per-pixel contributions of artifact vs. underlying anatomy, especially in the immediately adjacent regions between reverberation lines. Our algorithm is also able to improve the performance of downstream image analysis algorithms.
{"title":"Weakly- and Semisupervised Probabilistic Segmentation and Quantification of Reverberation Artifacts.","authors":"Alex Ling Yu Hung, Edward Chen, John Galeotti","doi":"10.34133/2022/9837076","DOIUrl":"10.34133/2022/9837076","url":null,"abstract":"Objective and Impact Statement. We propose a weakly- and semisupervised, probabilistic needle-and-reverberation-artifact segmentation algorithm to separate the desired tissue-based pixel values from the superimposed artifacts. Our method models the intensity decay of reverberation artifacts and is designed to minimize human labeling error. Introduction. Ultrasound image quality has been continually improving. However, when needles or other metallic objects are present inside the tissue, the resulting reverberation artifacts can severely corrupt the surrounding image quality. Such effects are challenging for existing computer vision algorithms for medical image analysis. Needle reverberation artifacts can be hard to identify at times and affect various pixel values to different degrees. The boundaries of such artifacts are ambiguous, leading to disagreement among human experts labeling the artifacts. Methods. Our learning-based framework consists of three parts. The first part is a probabilistic segmentation network that generates soft labels based on the human labels. These soft labels are input into the second part, a transform function, which generates the training labels for the third part. The third part outputs the final masks, which quantify the reverberation artifacts. Results. We demonstrate the applicability of the approach and compare it against other segmentation algorithms. Our method is capable of both differentiating reverberation artifacts from artifact-free patches and modeling the intensity fall-off within the artifacts. Conclusion. 
Our method matches state-of-the-art artifact segmentation performance and sets a new standard in estimating the per-pixel contributions of artifact vs. underlying anatomy, especially in the immediately adjacent regions between reverberation lines. Our algorithm is also able to improve the performance of downstream image analysis algorithms.","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521739/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
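The intensity fall-off that the abstract says the method models can be illustrated by fitting an exponential decay across successive reverberation lines. The fitting approach and synthetic values here are an assumption for illustration, not the paper's actual transform function:

```python
import numpy as np

def fit_decay_rate(depths, intensities):
    """Least-squares fit of an exponential fall-off I(d) = I0 * exp(-k * d)
    by linear regression on log-intensities; returns (I0, k)."""
    slope, log_i0 = np.polyfit(depths, np.log(intensities), 1)
    return np.exp(log_i0), -slope

# Synthetic artifact intensities sampled at successive reverberation lines:
depths = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # line index (proportional to depth)
intensities = 200.0 * np.exp(-0.6 * depths)      # ground truth: I0 = 200, k = 0.6
i0, k = fit_decay_rate(depths, intensities)
print(i0, k)                                      # recovers the ground-truth parameters
```

Per-pixel artifact contribution estimates could then weight each reverberation line by the fitted decay, which matches the abstract's goal of separating artifact intensity from underlying anatomy.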