Latest publications from Medical Image Computing and Computer-Assisted Intervention: MICCAI (International Conference on Medical Image Computing and Computer-Assisted Intervention)
Pub Date: 2024-10-01 | Epub Date: 2024-10-03 | DOI: 10.1007/978-3-031-72089-5_45
Reuben Dorent, Erickson Torio, Nazim Haouchine, Colin Galvin, Sarah Frisken, Alexandra Golby, Tina Kapur, William Wells
Intraoperative ultrasound (iUS) imaging has the potential to improve surgical outcomes in brain surgery. However, its interpretation is challenging, even for expert neurosurgeons. In this work, we designed the first patient-specific framework that performs brain tumor segmentation in trackerless iUS. To disambiguate ultrasound imaging and adapt to the neurosurgeon's surgical objective, a patient-specific real-time network is trained using synthetic ultrasound data generated by simulating virtual iUS sweep acquisitions in pre-operative MR data. Extensive experiments on real ultrasound data demonstrate the effectiveness of the proposed approach, which adapts to the surgeon's definition of surgical targets and outperforms non-patient-specific models, expert neurosurgeons, and high-end tracking systems. Our code is available at: https://github.com/ReubenDo/MHVAE-Seg.
Patient-Specific Real-Time Segmentation in Trackerless Brain Ultrasound. Vol. 15006, pp. 477-487. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12714359/pdf/
Pub Date: 2024-10-01 | Epub Date: 2024-10-04 | DOI: 10.1007/978-3-031-72069-7_9
Dan Hu, Kangfu Han, Jiale Cheng, Gang Li
Individualized brain parcellations derived from functional MRI (fMRI) are essential for discerning unique functional patterns of individuals, facilitating personalized diagnoses and treatments. Unfortunately, as fMRI signals are inherently noisy, establishing reliable individualized parcellations typically necessitates long-duration fMRI scans (>25 min), posing a major challenge and resulting in the exclusion of numerous short-duration fMRI scans from individualized studies. To address this issue, we develop a novel Consecutive-Contrastive Spherical U-net (CC-SUnet) to enable the prediction of reliable individualized brain parcellation using short-duration fMRI data, greatly expanding its practical applicability. Specifically, 1) the widely used functional diffusion map (DM), obtained from functional connectivity, is carefully selected as the predictive feature, for its advantage in tracing the transitions between regions while reducing noise. To ensure a robust depiction of the brain network, we propose a dual-task model to predict DM and cortical parcellation simultaneously, fully utilizing their reciprocal relationship. 2) By constructing a stepwise dataset that captures the gradual changes of DM over increasing scan durations, a consecutive prediction framework is designed to progressively extend predictions from short to long durations. 3) A stepwise-denoising-prediction module is further proposed. The noise representations are separated and replaced by the latent representations of a group-level diffusion map, realizing informative guidance and denoising concurrently. 4) Additionally, an N-pair contrastive loss is introduced to strengthen the discriminability of the individualized parcellations. Extensive experimental results demonstrate the superiority of our proposed CC-SUnet in enhancing the reliability of individualized parcellations from short-duration fMRI data, thereby significantly boosting their utility in individualized studies.
Consecutive-Contrastive Spherical U-Net: Enhancing Reliability of Individualized Functional Brain Parcellation for Short-Duration fMRI Scans. Vol. 15002, pp. 88-98. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12716869/pdf/
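The functional diffusion map used above as the predictive feature is, in general, a spectral embedding of a connectivity matrix. The sketch below shows that generic construction (via the symmetric normalization of the random-walk operator), not the authors' exact preprocessing; the function name and parameters are illustrative.

```python
import numpy as np

def diffusion_map(fc, n_components=3, t=1):
    """Diffusion-map embedding of a functional-connectivity matrix.
    fc: symmetric (n, n) matrix; absolute values are used as affinities."""
    w = np.abs(fc)                         # non-negative affinities
    d = w.sum(axis=1)                      # node degrees
    # symmetric normalization M = D^{-1/2} W D^{-1/2} shares eigenvalues
    # with the random-walk operator P = D^{-1} W but is safe for eigh
    d_inv_sqrt = 1.0 / np.sqrt(d)
    m = d_inv_sqrt[:, None] * w * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(m)
    order = np.argsort(vals)[::-1]         # descending eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    # recover right eigenvectors of P and drop the trivial first component
    psi = d_inv_sqrt[:, None] * vecs
    return psi[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)

rng = np.random.default_rng(0)
ts = rng.standard_normal((50, 200))        # 50 regions, 200 time points
fc = np.corrcoef(ts)
emb = diffusion_map(fc)
print(emb.shape)  # (50, 3)
```

Each row of `emb` places one region in a low-dimensional space where diffusion distance reflects how easily a random walk moves between regions, which is why such maps trace transitions between regions while suppressing noise.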
Graph neural networks (GNNs) are machine learning models proficient at handling irregularly structured data. Nevertheless, their generic formulation falls short when applied to the analysis of brain connectomes in Alzheimer's Disease (AD), necessitating the incorporation of domain-specific knowledge to achieve optimal model performance. The integration of AD-related expertise into GNNs presents a significant challenge. Current methodologies reliant on manual design often demand substantial expertise from external domain specialists to guide the development of novel models, thereby consuming considerable time and resources. To mitigate the need for manual curation, this paper introduces a novel self-guided knowledge-infused multimodal GNN that autonomously integrates domain knowledge into the model development process. We propose to conceptualize existing domain knowledge as natural language, and devise a specialized multimodal GNN framework tailored to leverage this uncurated knowledge to direct the learning of the GNN submodule, thereby enhancing its efficacy and improving prediction interpretability. To assess the effectiveness of our framework, we compile a comprehensive literature dataset comprising recent peer-reviewed publications on AD. By integrating this literature dataset with several real-world AD datasets, our experimental results illustrate the effectiveness of the proposed method in extracting curated knowledge and offering explanations on graphs for domain-specific applications. Furthermore, our approach successfully utilizes the extracted information to enhance the performance of the GNN.
Self-guided Knowledge-Injected Graph Neural Network for Alzheimer's Diseases. Zhepeng Wang, Runxue Bao, Yawen Wu, Guodong Liu, Lei Yang, Liang Zhan, Feng Zheng, Weiwen Jiang, Yanfu Zhang. DOI: 10.1007/978-3-031-72069-7_36. Vol. 15002, pp. 378-388. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11488260/pdf/
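One way to picture "natural-language knowledge directing a GNN submodule" is a single graph-convolution layer in which every node's features are concatenated with a fixed knowledge-embedding vector. This is a toy sketch under that assumption; the layer name, shapes, and plain concatenation are illustrative and are not the paper's architecture.

```python
import numpy as np

def knowledge_gcn_layer(adj, x, k, w):
    """One GCN-style propagation step where each node's features are
    concatenated with a shared knowledge embedding k (e.g. an encoded
    literature sentence) before the linear transform and ReLU."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    h = np.concatenate([x, np.tile(k, (n, 1))], axis=1)
    return np.maximum(a_norm @ h @ w, 0.0)         # ReLU activation

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) > 0.5).astype(float)
adj = np.maximum(adj, adj.T)                       # symmetrize toy connectome
x = rng.standard_normal((6, 4))                    # node (region) features
k = rng.standard_normal(8)                         # knowledge embedding
w = rng.standard_normal((12, 5))                   # (4 + 8) -> 5 output dims
out = knowledge_gcn_layer(adj, x, k, w)
print(out.shape)  # (6, 5)
```

In a real system the vector `k` would come from a language model over the literature corpus, and the fusion would likely be learned rather than a fixed concatenation.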
Low-dose computed tomography (LDCT) reduces the risks of radiation exposure but introduces noise and artifacts into CT images. The Feature Pyramid Network (FPN) is a conventional method for extracting multi-scale feature maps from input images. While upper layers in an FPN carry richer semantic value, details become generalized as spatial resolution decreases at each layer. In this work, we propose a Gradient Guided Co-Retention Feature Pyramid Network (G2CR-FPN) to address the connection between spatial resolution and semantic value in feature maps extracted from LDCT images. The network is structured with three essential paths: the bottom-up path utilizes the FPN structure to generate the hierarchical feature maps, representing multi-scale spatial resolutions and semantic values. Meanwhile, the lateral path serves as a skip connection between feature maps with the same spatial resolution, while also interpreting feature maps as directional gradients. This path incorporates a gradient approximation, deriving edge-like enhanced feature maps in the horizontal and vertical directions. The top-down path incorporates a proposed co-retention block that learns the high-level semantic value embedded in the preceding map of the path. This learning process is guided by the directional gradient approximation of the high-resolution feature map from the bottom-up path. Experimental results on clinical CT images demonstrate the promising performance of the model. Our code is available at: https://github.com/liz109/G2CR-FPN.
Gradient Guided Co-Retention Feature Pyramid Network for LDCT Image Denoising. Li Zhou, Dayang Wang, Yongshun Xu, Shuo Han, Bahareh Morovati, Shuyi Fan, Hengyong Yu. DOI: 10.1007/978-3-031-72390-2_15. Vol. 15012, pp. 153-163. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12443485/pdf/
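The "gradient approximation deriving edge-like feature maps in horizontal and vertical directions" can be pictured as a Sobel-style filter pair. The sketch below assumes Sobel kernels and edge padding, which the paper may implement differently; it is a didactic reference, not the G2CR-FPN lateral path.

```python
import numpy as np

def directional_gradients(fmap):
    """Sobel-style horizontal (gx) and vertical (gy) gradient approximation
    of a 2D feature map, yielding edge-like guidance maps."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    pad = np.pad(fmap, 1, mode="edge")     # replicate borders
    h, w = fmap.shape
    gx = np.zeros((h, w), dtype=np.float32)
    gy = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()    # horizontal intensity change
            gy[i, j] = (win * ky).sum()    # vertical intensity change
    return gx, gy

img = np.zeros((8, 8), dtype=np.float32)
img[:, 4:] = 1.0                           # vertical step edge at column 4
gx, gy = directional_gradients(img)
print(gx[4, 4], gy[4, 4])  # strong horizontal response, zero vertical
```

In a network, such fixed filters would typically be applied channel-wise as a non-learned convolution so the top-down path can be modulated by where edges sit at full resolution.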
Pub Date: 2024-10-01 | Epub Date: 2024-10-03 | DOI: 10.1007/978-3-031-72384-1_13
Guoshi Li, Kim-Han Thung, Hoyt Taylor, Zhengwang Wu, Gang Li, Li Wang, Weili Lin, Sahar Ahmad, Pew-Thian Yap
Delineating the normative developmental profile of the functional connectome is important for both standardized assessment of individual growth and early detection of diseases. However, the functional connectome has mostly been studied using functional connectivity (FC), where undirected connectivity strengths are estimated from statistical correlation of resting-state functional MRI (rs-fMRI) signals. To address this limitation, we applied regression dynamic causal modeling (rDCM) to delineate the developmental trajectories of effective connectivity (EC), the directed causal influence among neuronal populations, in whole-brain networks from infancy to adolescence (0-22 years old) based on high-quality rs-fMRI data from the Baby Connectome Project (BCP) and Human Connectome Project Development (HCP-D). Analysis with a linear mixed model demonstrates a significant age effect on mean nodal EC, which is best fit by a U-shaped quadratic curve with minimal EC at around 2 years of age. Further analysis indicates that five brain regions including the left and right cuneus, left precuneus, left supramarginal gyrus and right inferior temporal gyrus have the most significant age effect on nodal EC (p < 0.05, FDR corrected). Moreover, the frontoparietal control (FPC) network shows the fastest increase from early childhood to adolescence, followed by the visual and salience networks. Our findings suggest a complex nonlinear developmental profile of EC from infancy to adolescence, which may reflect dynamic structural and functional maturation during this critical growth period.
Development of Effective Connectome from Infancy to Adolescence. Vol. 15003, pp. 131-140. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758277/pdf/
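The U-shaped quadratic age effect can be illustrated on synthetic data: fit a second-order polynomial to EC-vs-age points and read off the vertex. The data below are fabricated purely for illustration (not BCP/HCP-D values), and a plain least-squares polyfit stands in for the linear mixed model.

```python
import numpy as np

# Synthetic "mean nodal EC" with a true minimum at age 2, plus noise.
rng = np.random.default_rng(0)
age = rng.uniform(0, 22, 300)                       # ages 0-22 years
ec = 0.02 * (age - 2.0) ** 2 + 0.5 + rng.normal(0, 0.05, age.size)

# Quadratic fit ec ~ b2*age^2 + b1*age + b0; polyfit returns highest
# degree first.
b2, b1, b0 = np.polyfit(age, ec, 2)
age_min = -b1 / (2 * b2)                            # vertex of the parabola
print(round(age_min, 1))
```

With a positive leading coefficient the fitted parabola is U-shaped, and its vertex recovers the age of minimal EC, here close to the true value of 2 years.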
Pub Date: 2024-10-01 | Epub Date: 2024-10-23 | DOI: 10.1007/978-3-031-72390-2_9
McKell Woodland, Austin Castelo, Mais Al Taie, Jessica Albuquerque Marques Silva, Mohamed Eltaher, Frank Mohn, Alexander Shieh, Suprateek Kundu, Joshua P Yung, Ankit B Patel, Kristy K Brock
Fréchet Inception Distance (FID) is a widely used metric for assessing synthetic image quality. It relies on an ImageNet-based feature extractor, making its applicability to medical imaging unclear. A recent trend is to adapt FID to medical imaging through feature extractors trained on medical images. Our study challenges this practice by demonstrating that ImageNet-based extractors are more consistent and aligned with human judgment than their RadImageNet counterparts. We evaluated sixteen StyleGAN2 networks across four medical imaging modalities and four data augmentation techniques with Fréchet distances (FDs) computed using eleven ImageNet or RadImageNet-trained feature extractors. Comparison with human judgment via visual Turing tests revealed that ImageNet-based extractors produced rankings consistent with human judgment, with the FD derived from the ImageNet-trained SwAV extractor significantly correlating with expert evaluations. In contrast, RadImageNet-based rankings were volatile and inconsistent with human judgment. Our findings challenge prevailing assumptions, providing novel evidence that medical image-trained feature extractors do not inherently improve FDs and can even compromise their reliability. Our code is available at https://github.com/mckellwoodland/fid-med-eval.
Feature Extraction for Generative Medical Imaging Evaluation: New Evidence Against an Evolving Trend. Vol. 15012, pp. 87-97. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12117514/pdf/
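The Fréchet distance underlying FID has a closed form for Gaussian fits of extracted features: ||mu_a - mu_b||^2 + Tr(Sa + Sb - 2(Sa Sb)^{1/2}). The sketch below computes it on random stand-in feature matrices, not StyleGAN2 activations or any of the paper's eleven extractors.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussian fits of two (n_samples, dim)
    feature matrices, as used by FID and its RadImageNet variants."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):           # discard numerical imaginary residue
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2 * covmean))

rng = np.random.default_rng(0)
a = rng.standard_normal((500, 8))
b = rng.standard_normal((500, 8)) + 1.0    # mean-shifted distribution
print(frechet_distance(a, a))              # ~0 for identical feature sets
print(frechet_distance(a, b))              # clearly larger for shifted ones
```

The paper's point is that this distance is only as meaningful as the feature extractor producing `feats_a`/`feats_b`; swapping the extractor changes the ranking of generative models.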
Pub Date: 2024-10-01 | Epub Date: 2024-10-14 | DOI: 10.1007/978-3-031-72083-3_40
Talat Zehra, Joseph Marino, Wendy Wang, Grigoriy Frantsuzov, Saad Nadeem
Histology slide digitization is becoming essential for telepathology (remote consultation), knowledge sharing (education), and using state-of-the-art artificial intelligence algorithms (augmented/automated end-to-end clinical workflows). However, the cumulative costs of digital multi-slide high-speed brightfield scanners, cloud/on-premises storage, and personnel (IT and technicians) put current slide digitization workflows out of reach for limited-resource settings, further widening the health equity gap; even single-slide manual scanning commercial solutions are costly due to hardware requirements (high-resolution cameras, high-spec PC/workstation, and support for only high-end microscopes). In this work, we present a new cloud slide digitization workflow for creating scanner-quality whole-slide images (WSIs) from uploaded low-quality videos acquired from inexpensive microscopes with built-in cameras. Specifically, we present a pipeline to create stitched WSIs while automatically deblurring out-of-focus regions, upsampling input 10X images to 40X resolution, and reducing brightness/contrast and light-source illumination variations. We demonstrate the WSI creation efficacy of our workflow on the World Health Organization-declared neglected tropical disease Cutaneous Leishmaniasis (prevalent only in the poorest regions of the world and diagnosed only by sub-specialist dermatopathologists, who are rare in poor countries), as well as other common pathologies on core biopsies of breast, liver, duodenum, stomach and lymph node.
The code and pretrained models will be accessible via our GitHub (https://github.com/nadeemlab/DeepLIIF), and the cloud platform will be available at https://deepliif.org for uploading microscope videos and downloading/viewing WSIs with shareable links (no sign-in required) for telepathology and knowledge sharing.
Rethinking Histology Slide Digitization Workflows for Low-Resource Settings. Vol. 15004, pp. 427-436. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11786607/pdf/
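Stitching WSIs from a microscope video requires estimating inter-frame motion; phase correlation is one classical way to do that. The sketch below is a generic illustration under a pure-translation assumption, not the DeepLIIF pipeline (which must also handle blur, scale, and illumination changes).

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking frame a to frame b via
    phase correlation (a minimal stand-in for video-frame registration)."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fb * np.conj(fa)
    cross /= np.abs(cross) + 1e-12         # normalize to pure phase
    corr = np.fft.ifft2(cross).real        # impulse at the true shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                        # unwrap to signed offsets
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (5, -3), axis=(0, 1))   # simulated stage motion
print(phase_correlation_shift(frame, shifted))   # (5, -3)
```

Once pairwise offsets are known, frames can be pasted onto a shared canvas at their accumulated positions, which is the core of any stitching workflow.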
Pub Date: 2024-10-01 | Epub Date: 2024-10-04 | DOI: 10.1007/978-3-031-72069-7_40
Yuan Li, Xinyu Nie, Jianwei Zhang, Yonggang Shi
Superficial white matter (SWM) U-fibers carry considerable structural connectivity in the human brain; however, related studies are not well-developed compared to the well-studied deep white matter (DWM). Conventionally, SWM U-fibers are obtained through DWM tractography, which is inaccurate near the cortical surface. The significant variability in the cortical folding patterns of the human brain renders a conventional template-based atlas unsuitable for accurately mapping U-fibers within the thin layer of SWM beneath the cortical surface. Recently, new surface-based tracking methods have been developed to reconstruct more complete and reliable U-fibers. To leverage surface-based U-fiber tracking methods, we propose to create a surface-based U-fiber dictionary using high-resolution diffusion MRI (dMRI) data from the Human Connectome Project (HCP). We first identify the major U-fiber bundles and then build a dictionary containing subjects with high groupwise consistency of major U-fiber bundles. Finally, we propose a shape-informed U-fiber atlasing method for robust SWM connectivity analysis. Through experiments, we demonstrate that our shape-informed atlasing method obtains anatomically more accurate U-fiber representations than state-of-the-art atlases. Additionally, our method is capable of restoring incomplete U-fibers in low-resolution dMRI, thus helping better characterize SWM connectivity in clinical studies such as the Alzheimer's Disease Neuroimaging Initiative (ADNI).
Title: "Surface-based and Shape-informed U-fiber Atlasing for Robust Superficial White Matter Connectivity Analysis." Volume: 15002, pages 422-432. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12448713/pdf/
Generative models, such as GANs and diffusion models, have been used to augment training sets and boost performance on different tasks. We focus instead on generative models for cell detection, i.e., locating and classifying cells in given pathology images. One important source of information that has been largely overlooked is the spatial pattern of the cells. In this paper, we propose a spatial-pattern-guided generative model for cell layout generation. Specifically, we propose a novel diffusion model that is guided by spatial features and generates realistic cell layouts. We explore different density models as spatial features for the diffusion model. In downstream tasks, we show that the generated cell layouts can be used to guide the generation of high-quality pathology images. Augmenting with these images can significantly boost the performance of SOTA cell detection methods. The code is available at https://github.com/superlc1995/Diffusion-cell.
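The "density models as spatial features" idea above can be illustrated with a minimal sketch: rasterize cell centroids into a grid and Gaussian-smooth them into a density map that could condition a generator. The grid size, `sigma`, and the function name are assumptions for illustration, not the paper's code:

```python
import numpy as np

def cell_density_map(points, size=64, sigma=2.0):
    """Rasterize cell centroids (coordinates normalized to [0, 1)) into a
    grid and blur with a separable Gaussian, giving a smooth spatial-density
    feature -- a toy stand-in for the density models explored in the paper."""
    grid = np.zeros((size, size))
    for x, y in points:
        ix, iy = int(x * size), int(y * size)
        if 0 <= ix < size and 0 <= iy < size:
            grid[iy, ix] += 1.0
    # separable Gaussian blur: 1-D convolution along each axis
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    grid = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, grid)
    grid = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, grid)
    return grid
```

A clustered set of centroids yields a peaked density map near the cluster and near-zero density far away, exactly the kind of spatial signal a layout-generating diffusion model could be conditioned on.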
Title: "Spatial Diffusion for Cell Layout Generation." Authors: Chen Li, Xiaoling Hu, Shahira Abousamra, Meilong Xu, Chao Chen. DOI: 10.1007/978-3-031-72083-3_45. Volume: 15004, pages 481-491. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12206494/pdf/
Pub Date: 2024-10-01 | Epub Date: 2024-10-04 | DOI: 10.1007/978-3-031-72069-7_22
Haoteng Tang, Guodong Liu, Siyuan Dai, Kai Ye, Kun Zhao, Wenlu Wang, Carl Yang, Lifang He, Alex Leow, Paul Thompson, Heng Huang, Liang Zhan
The MRI-derived brain network serves as a pivotal instrument for elucidating both the structural and functional aspects of the brain, encompassing the ramifications of diseases and developmental processes. However, prevailing methodologies, often focused on synchronous BOLD signals from functional MRI (fMRI), may not capture directional influences among brain regions and rarely tackle temporal functional dynamics. In this study, we first construct the brain effective network via the dynamic causal model. Subsequently, we introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE). This framework incorporates specifically designed directed node embedding layers, aimed at capturing the dynamic interplay between structural and effective networks via an ordinary differential equation (ODE) model that characterizes spatio-temporal brain dynamics. Our framework is validated on several clinical phenotype prediction tasks using two independent, publicly available datasets (HCP and OASIS). The experimental results clearly demonstrate the advantages of our model compared to several state-of-the-art methods.
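The ODE-driven evolution of node embeddings over coupled structural and effective (directed) graphs can be sketched with a toy Euler integrator. The update rule, the shared weight matrix `W`, and the step sizes below are illustrative assumptions rather than the paper's STE-ODE layers:

```python
import numpy as np

def evolve_embeddings(h0, A_struct, A_eff, W, steps=10, dt=0.1):
    """Euler-integrate dh/dt = tanh(A_struct @ h @ W + A_eff @ h @ W),
    a toy ODE coupling an undirected structural adjacency (A_struct)
    with a directed effective-connectivity adjacency (A_eff).
    Illustrative only -- not the paper's directed embedding layers."""
    h = h0.copy()
    for _ in range(steps):
        dh = np.tanh(A_struct @ h @ W + A_eff @ h @ W)  # message passing on both graphs
        h = h + dt * dh                                  # explicit Euler step
    return h
```

Even this simplified dynamic shows the key property: a nonzero directed `A_eff` pushes the embedding trajectory away from what the symmetric structural graph alone would produce.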
Title: "Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation." Volume: 15002, pages 227-237. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11513182/pdf/