Petar N Petrov, Jessie T Zhang, Jeremy J Axelrod, Holger Müller
For decades since the development of phase-contrast optical microscopy, an analogous approach has been sought for maximizing the image contrast of weakly-scattering objects in transmission electron microscopy (TEM). The recent development of the laser phase plate (LPP) has demonstrated that an amplified, focused laser standing wave provides a stable, tunable phase shift to the high-energy electron beam, achieving phase-contrast TEM. Building on proof-of-concept experimental demonstrations, this paper explores design improvements tailored to biological imaging. In particular, we introduce the approach of crossed laser phase plates (XLPP): two laser standing waves intersecting in the diffraction plane of the TEM, rather than a single beam as in the current LPP. We provide a theoretical model for the XLPP inside the microscope and use simulations to quantify its effect on image formation. We find that the XLPP increases information transfer at low spatial frequencies while also suppressing the ghost images formed by Kapitza-Dirac diffraction of the electron beam by the laser beam. We also demonstrate a simple acquisition scheme, enabled by the XLPP, which dramatically suppresses unwanted diffraction effects. The results of this study chart the course for future developments of LPP hardware.
"Crossed laser phase plates for transmission electron microscopy." ArXiv, 2024-10-29. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527100/pdf/
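In the weak-phase approximation, the benefit of any phase plate can be seen in the contrast transfer function CTF(k) = sin(chi(k) + phi_pp): shifting the unscattered beam by 90 degrees turns sine-like transfer into cosine-like transfer, restoring contrast at low spatial frequencies. Below is a minimal numerical sketch with illustrative 300 kV parameters; it shows the generic phase-plate effect, not the paper's XLPP model.

```python
import numpy as np

# Weak-phase-object contrast transfer: CTF(k) = sin(chi(k) + phi_pp), where
# chi(k) is the standard aberration phase and phi_pp is the phase-plate shift.
def ctf(k, wavelength=1.97e-12, defocus=0.0, cs=2.7e-3, phase_plate=0.0):
    chi = (np.pi * wavelength * defocus * k**2
           - 0.5 * np.pi * cs * wavelength**3 * k**4)
    return np.sin(chi + phase_plate)

k = np.linspace(0.0, 1e9, 500)           # spatial frequency in 1/m
no_pp = ctf(k)                           # in-focus TEM: vanishing low-k contrast
with_pp = ctf(k, phase_plate=np.pi / 2)  # 90-degree phase plate: cosine-like

low_k = k < 2e8
gain = np.abs(with_pp[low_k]).mean() - np.abs(no_pp[low_k]).mean()
```

At zero defocus the conventional CTF is nearly zero at low k, while the phase-plated CTF is close to one, which is the low-frequency information transfer the abstract refers to.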
Cancer is a complex disease driven by genomic alterations, and tumor sequencing is becoming a mainstay of clinical care for cancer patients. The emergence of multi-institution sequencing data presents a powerful resource for learning real-world evidence to enhance precision oncology. GENIE BPC, led by the American Association for Cancer Research, establishes a unique database linking genomic data with clinical information for patients treated at multiple cancer centers. However, leveraging sequencing data from multiple institutions presents significant challenges. Variability in gene panels can lead to loss of information when analyses focus on genes common across panels. Additionally, differences in sequencing techniques and patient heterogeneity across institutions add complexity. High data dimensionality, sparse gene mutation patterns, and weak signals at the individual gene level further complicate matters. Motivated by these real-world challenges, we introduce the Bridge model. It uses a quantile-matched latent variable approach to derive integrated features that preserve information beyond common genes and maximize the utilization of all available data, while leveraging information sharing to enhance both learning efficiency and the model's capacity to generalize. By extracting harmonized and noise-reduced lower-dimensional latent variables, the true mutation pattern unique to each individual is captured. We assess the model's performance and parameter estimation through extensive simulation studies. The extracted latent features from the Bridge model consistently excel in predicting patient survival across six cancer types in GENIE BPC data.
"Unlocking the Power of Multi-institutional Data: Integrating and Harmonizing Genomic Data Across Institutions." Yuan Chen, Ronglai Shen, Xiwen Feng, Katherine Panageas. ArXiv, 2024-10-29. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11581117/pdf/
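The abstract names a quantile-matched latent variable approach. As background, generic quantile matching maps one cohort's feature values onto the empirical distribution of a reference cohort, which removes institution-specific scale and shape differences. A minimal sketch of that generic step (not the Bridge model itself):

```python
import numpy as np

def quantile_match(x, reference):
    """Map values in x onto the empirical quantiles of `reference`.
    Generic rank-based quantile normalization, not the Bridge model."""
    ranks = np.argsort(np.argsort(x))      # 0..n-1 ranks of each value in x
    q = (ranks + 0.5) / len(x)             # mid-rank quantiles in (0, 1)
    return np.quantile(reference, q)       # reference values at those quantiles

rng = np.random.default_rng(0)
site_a = rng.normal(5.0, 2.0, size=1000)   # one institution's feature scale
site_b = rng.normal(0.0, 1.0, size=1000)   # reference institution
matched = quantile_match(site_a, site_b)   # site_a re-expressed on site_b's scale
```

After matching, the transformed values share the reference cohort's location and spread while preserving the within-site ordering of patients.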
We develop a version of stochastic control that accounts for computational costs of inference. Past studies identified efficient coding without control, or efficient control that neglects the cost of synthesizing information. Here we combine these concepts into a framework where agents rationally approximate inference for efficient control. Specifically, we study Linear Quadratic Gaussian (LQG) control with an added internal cost on the relative precision of the posterior probability over the world state. This creates a trade-off: an agent can obtain more utility overall by sacrificing some task performance, if doing so saves enough bits during inference. We discover that the rational strategy that solves the joint inference and control problem goes through phase transitions depending on the task demands, switching from a costly but optimal inference to a family of suboptimal inferences related by rotation transformations, each of which misestimates the stability of the world. In all cases, the agent moves more to think less. This work provides a foundation for a new type of rational computation that could be used by both brains and machines for efficient but computationally constrained control.
"Control when confidence is costly." Itzel Olivos Castillo, Paul Schrater, Xaq Pitkow. ArXiv, 2024-10-29. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11581108/pdf/
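For readers unfamiliar with the LQG backbone this work builds on: the standard solution (with cost-free inference) combines a Kalman filter with an LQR feedback gain obtained from the discrete Riccati equation, applied to the state estimate by certainty equivalence. A scalar sketch of the Riccati iteration follows; the paper's added internal cost on posterior precision is not modeled here.

```python
import numpy as np

# Scalar discrete-time LQR gain via value iteration on the Riccati equation.
# This is the standard cost-free-inference baseline, not the paper's model.
a, b = 1.1, 1.0    # unstable dynamics: x' = a*x + b*u + noise
q, r = 1.0, 1.0    # quadratic state and control costs

p = q
for _ in range(500):
    # Discrete algebraic Riccati recursion (scalar form).
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)

gain = a * b * p / (r + b * b * p)   # certainty-equivalent control: u = -gain * x_hat
closed_loop = a - b * gain           # stable if |closed_loop| < 1
```

The fixed point p is the optimal cost-to-go coefficient, and the resulting gain stabilizes the otherwise unstable dynamics; the paper's contribution is what changes when maintaining an accurate posterior itself carries a cost.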
The ability of a brain or a neural network to efficiently learn depends crucially on both the task structure and the learning rule. Previous works have analyzed the dynamical equations describing learning in the relatively simplified context of the perceptron under assumptions of a student-teacher framework or a linearized output. While these assumptions have facilitated theoretical understanding, they have precluded a detailed understanding of the roles of the nonlinearity and input-data distribution in determining the learning dynamics, limiting the applicability of the theories to real biological or artificial neural networks. Here, we use a stochastic-process approach to derive flow equations describing learning, applying this framework to the case of a nonlinear perceptron performing binary classification. We characterize the effects of the learning rule (supervised or reinforcement learning, SL/RL) and input-data distribution on the perceptron's learning curve and the forgetting curve as subsequent tasks are learned. In particular, we find that the input-data noise differently affects the learning speed under SL vs. RL, as well as determines how quickly learning of a task is overwritten by subsequent learning. Additionally, we verify our approach with real data using the MNIST dataset. This approach points a way toward analyzing learning dynamics for more-complex circuit architectures.
"Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron." Christian Schmid, James M Murray. ArXiv, 2024-10-28. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11398553/pdf/
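The SL/RL comparison can be made concrete with a sigmoid perceptron trained on the same binary-classification data by a supervised log-likelihood gradient versus a REINFORCE-style reward-modulated update. A minimal sketch on synthetic teacher-generated data (illustrative of the setting, not the paper's flow-equation analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 10
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)        # binary labels from a teacher direction

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(rule, lr=0.1, epochs=20):
    w = np.zeros(d)
    for _ in range(epochs):
        for x, t in zip(X, y):
            p = sigmoid(w @ x)            # probability of outputting class 1
            if rule == "SL":
                w += lr * (t - p) * x     # supervised: log-likelihood gradient
            else:
                a = float(rng.random() < p)          # sample a binary action
                reward = 1.0 if a == t else -1.0     # +/-1 reward signal
                w += lr * reward * (a - p) * x       # REINFORCE score-function update
    return w

def accuracy(w):
    return float(((sigmoid(X @ w) > 0.5) == y).mean())

acc_sl = accuracy(train("SL"))
acc_rl = accuracy(train("RL"))
```

Both rules learn the task, but the RL updates are noisier because the gradient is estimated from a sampled action and scalar reward, which is one source of the SL/RL learning-speed differences the abstract analyzes.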
Fakrul Islam Tushar, Liesbeth Vancoillie, Cindy McCabe, Amareswararao Kavuri, Lavsen Dahal, Brian Harrawood, Milo Fryling, Mojtaba Zarei, Saman Sotoudeh-Paima, Fong Chi Ho, Dhrubajyoti Ghosh, Michael R Harowicz, Tina D Tailor, Sheng Luo, W Paul Segars, Ehsan Abadi, Kyle J Lafata, Joseph Y Lo, Ehsan Samei
Importance: Clinical imaging trials are crucial for the evaluation of medical innovations, but the process is inefficient, expensive, and ethically constrained. The virtual imaging trial (VIT) approach addresses these limitations by emulating the components of a clinical trial. An in silico rendition of the National Lung Screening Trial (NLST) via the Virtual Lung Screening Trial (VLST) demonstrates the promise of VITs to expedite clinical trials, reduce risks to subjects, and facilitate the optimal use of imaging technologies in clinical settings.
Objectives: To demonstrate that a virtual imaging trial platform can accurately emulate a major clinical trial, specifically the National Lung Screening Trial (NLST) that compared computed tomography (CT) and chest radiography (CXR) imaging for lung cancer screening.
Design, setting, and participants: A virtual patient population of 294 subjects was created from human models (XCAT) emulating the NLST, with two types of simulated cancerous lung nodules. Each virtual patient in the cohort was assessed using simulated CT and CXR systems to generate images reflecting the NLST imaging technologies. Deep learning models trained for lesion detection, the AI CT-Reader and the AI CXR-Reader, served as virtual readers.
Main outcomes and measures: The primary outcome was the difference in the Receiver Operating Characteristic Area Under the Curve (AUC) for CT and CXR modalities.
Results: The study analyzed paired CT and CXR simulated images from 294 virtual patients. The AI CT-Reader outperformed the AI CXR-Reader across all levels of analysis. At the patient level, CT demonstrated superior diagnostic performance with an AUC of 0.92 (95% CI: 0.90-0.95), compared to CXR's AUC of 0.72 (0.67-0.77). Subgroup analyses of lesion types revealed CT had significantly better detection of homogeneous lesions (AUC 0.97, 95% CI: 0.95-0.98) compared to heterogeneous lesions (0.89; 0.86-0.93). Furthermore, when the specificity of the AI CT-Reader was adjusted to match the NLST sensitivity of 94% for CT, the VLST results closely mirrored the NLST findings, further highlighting the alignment between the two studies.
Conclusion and relevance: The VIT results closely replicated those of the earlier NLST, underscoring its potential to replicate real clinical imaging trials. Integration of virtual trials may aid in the evaluation and improvement of imaging-based diagnosis.
"Virtual Lung Screening Trial (VLST): An In Silico Replica of the National Lung Screening Trial for Lung Cancer Detection." ArXiv, 2024-10-28. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11065052/pdf/
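The primary outcome, ROC AUC, equals the Mann-Whitney probability that a randomly chosen positive case scores above a randomly chosen negative one, and a patient-level bootstrap yields confidence intervals of the kind reported above. A sketch with synthetic reader scores at the same cohort size of 294 (illustrative only, not the trial's readers or data):

```python
import numpy as np

def auc(scores, labels):
    """ROC AUC via the Mann-Whitney statistic: probability that a random
    positive case scores above a random negative case (ties count half)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

def bootstrap_ci(scores, labels, n_boot=2000, seed=1):
    """Percentile bootstrap CI for the AUC, resampling patients with replacement."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(labels), size=(n_boot, len(labels)))
    stats = np.array([auc(scores[i], labels[i]) for i in idx])
    return np.percentile(stats, [2.5, 97.5])

rng = np.random.default_rng(0)
labels = (rng.random(294) < 0.5).astype(int)   # cohort size matching the VLST
ct_scores = rng.normal(loc=2.0 * labels)       # stronger modality (synthetic)
cxr_scores = rng.normal(loc=0.8 * labels)      # weaker modality (synthetic)

auc_ct, auc_cxr = auc(ct_scores, labels), auc(cxr_scores, labels)
ci_ct = bootstrap_ci(ct_scores, labels)
```

With these synthetic effect sizes the two modalities land near the 0.92 versus 0.72 regime reported in the results, showing how the AUC gap and its interval arise from per-patient scores.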
Heng Sun, Sai Manoj Jalam, Havish Kodali, Subhash Nerella, Ruben D Zapata, Nicole Gravina, Jessica Ray, Erik C Schmidt, Todd Matthew Manini, Rashidi Parisa
Mobile health (mHealth) apps have gained popularity over the past decade for patient health monitoring, yet their potential for timely intervention is underutilized due to limited integration with electronic health records (EHR) systems. Current EHR systems lack real-time monitoring capabilities for symptoms, medication adherence, physical and social functions, and community integration. Existing systems typically rely on static, in-clinic measures rather than dynamic, real-time patient data. This highlights the need for automated, scalable, and human-centered platforms to integrate patient-generated health data (PGHD) within EHR. Incorporating PGHD in a user-friendly format can enhance patient symptom surveillance, ultimately improving care management and post-surgical outcomes. To address this barrier, we have developed an mHealth platform, ROAMM-EHR, to capture real-time sensor data and Patient Reported Outcomes (PROs) using a smartwatch. The ROAMM-EHR platform can capture data from a consumer smartwatch, send captured data to a secure server, and display information within the Epic EHR system using a user-friendly interface, thus enabling healthcare providers to monitor post-surgical symptoms effectively.
"Enhancing EHR Systems with data from wearables: An end-to-end Solution for monitoring post-Surgical Symptoms in older adults." ArXiv, 2024-10-28. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11581106/pdf/
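The capture, send, and display pipeline described above implies a serializable patient-generated health data record. A sketch of one possible payload shape; every field name here is hypothetical and not taken from the ROAMM-EHR implementation:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical PGHD payload for a smartwatch-to-server upload. Field names
# are illustrative only; ROAMM-EHR's actual schema is not public in this text.
@dataclass
class PGHDRecord:
    patient_id: str
    captured_at: str            # ISO-8601 timestamp of the measurement
    heart_rate_bpm: int         # sensor-derived vital sign
    step_count: int             # sensor-derived activity measure
    reported_pain_score: int    # patient-reported outcome, 0-10 scale

    def to_json(self) -> str:
        return json.dumps(asdict(self))

record = PGHDRecord(
    patient_id="example-001",
    captured_at=datetime(2024, 10, 28, 12, 0, tzinfo=timezone.utc).isoformat(),
    heart_rate_bpm=72,
    step_count=3400,
    reported_pain_score=2,
)
payload = record.to_json()      # body of a hypothetical POST to the secure server
```

Combining sensor fields and patient-reported outcomes in one timestamped record is what lets the EHR side render a symptom timeline rather than isolated in-clinic measures.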
Resting-state functional magnetic resonance imaging (rsfMRI) is a powerful tool for investigating the relationship between brain function and cognitive processes as it allows for the functional organization of the brain to be captured without relying on a specific task or stimuli. In this paper, we present a novel modeling architecture called BrainRGIN for predicting intelligence (fluid, crystallized and total intelligence) using graph neural networks on rsfMRI derived static functional network connectivity matrices. Extending from the existing graph convolution networks, our approach incorporates a clustering-based embedding and graph isomorphism network in the graph convolutional layer to reflect the nature of the brain sub-network organization and efficient network expression, in combination with TopK pooling and attention-based readout functions. We evaluated our proposed architecture on a large dataset, specifically the Adolescent Brain Cognitive Development Dataset, and demonstrated its effectiveness in predicting individual differences in intelligence. Our model achieved lower mean squared errors, and higher correlation scores than existing relevant graph architectures and other traditional machine learning models for all of the intelligence prediction tasks. The middle frontal gyrus exhibited a significant contribution to both fluid and crystallized intelligence, suggesting its pivotal role in these cognitive processes. Total composite scores identified a diverse set of brain regions to be relevant, which underscores the complex nature of total intelligence.
"Brain Networks and Intelligence: A Graph Neural Network Based Approach to Resting State fMRI Data." Bishal Thapaliya, Esra Akbas, Jiayu Chen, Raam Sapkota, Bhaskar Ray, Pranav Suresh, Vince Calhoun, Jingyu Liu. ArXiv, 2024-10-27. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10659448/pdf/
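The TopK pooling step mentioned above keeps the highest-scoring graph nodes and gates their features with a squashed score so the selection remains differentiable. A numpy sketch of that selection step (generic TopK graph pooling, not the BrainRGIN code):

```python
import numpy as np

def topk_pool(node_features, scores, ratio=0.5):
    """Keep the top ceil(ratio*N) nodes by score and gate their features with
    sigmoid(score), the standard trick that keeps TopK pooling differentiable."""
    k = int(np.ceil(ratio * len(scores)))
    idx = np.argsort(scores)[::-1][:k]          # indices of the k highest scores
    gate = 1.0 / (1.0 + np.exp(-scores[idx]))   # sigmoid gate on retained nodes
    return node_features[idx] * gate[:, None], idx

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))    # e.g. 8 brain-region nodes, 4 features each
scores = rng.normal(size=8)        # learned per-node importance scores
pooled, kept = topk_pool(feats, scores, ratio=0.5)
```

In a connectivity-matrix setting, the retained node indices indicate which regions the network considers most informative, which is how region-level contributions like the middle frontal gyrus finding can be read out.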
Caroline C McGrouther, Aaditya V Rangan, Arianna Di Florio, Jeremy A Elman, Nicholas J Schork, John Kelsoe
Background: Bipolar Disorder (BD) is a complex disease. It is heterogeneous, both at the phenotypic and genetic level, although the extent and impact of this heterogeneity is not fully understood. One way to assess this heterogeneity is to look for patterns in the subphenotype data. Because of the variability in how phenotypic data was collected by the various BD studies over the years, homogenizing this subphenotypic data is a challenging task, and so is replication. An alternative methodology, taken here, is to set aside the intricacies of subphenotype and allow the genetic data itself to determine which subjects define a homogeneous genetic subgroup (termed 'bicluster' below).
Results: In this paper, we leverage recent advances in heterogeneity analysis to look for genetically-driven subgroups (i.e., biclusters) within the broad phenotype of Bipolar Disorder. We first apply this covariate-corrected biclustering algorithm to a cohort of 2524 BD cases and 4106 controls from the Bipolar Disease Research Network (BDRN) within the Psychiatric Genomics Consortium (PGC). We find evidence of genetic heterogeneity delineating a statistically significant bicluster comprising a subset of BD cases which exhibits a disease-specific pattern of differential-expression across a subset of SNPs. This disease-specific genetic pattern (i.e., 'genetic subgroup') replicates across the remaining data-sets collected by the PGC containing 5781/8289, 3581/7591, and 6825/9752 cases/controls, respectively. This genetic subgroup (discovered without using any BD subtype information) was more prevalent in Bipolar type-I than in Bipolar type-II.
Conclusions: Our methodology has successfully identified a replicable homogeneous genetic subgroup of bipolar disorder. This subgroup may represent a collection of correlated genetic risk-factors for BDI. By investigating the subgroup's bicluster-informed polygenic-risk-scoring (PRS), we find that the disease-specific pattern highlighted by the bicluster can be leveraged to eliminate noise from our GWAS analyses and improve risk prediction. This improvement is particularly notable when using only a relatively small subset of the available SNPs, implying improved SNP replication. Though our primary focus is only the analysis of disease-related signal, we also identify replicable control-related heterogeneity.
{"title":"Heterogeneity analysis provides evidence for a genetically homogeneous subtype of bipolar-disorder.","authors":"Caroline C McGrouther, Aaditya V Rangan, Arianna Di Florio, Jeremy A Elman, Nicholas J Schork, John Kelsoe","doi":"","DOIUrl":"","url":null,"abstract":"<p><strong>Background: </strong>Bipolar Disorder (BD) is a complex disease. It is heterogeneous, both at the phenotypic and genetic level, although the extent and impact of this heterogeneity is not fully understood. One way to assess this heterogeneity is to look for patterns in the subphenotype data. Because of the variability in how phenotypic data was collected by the various BD studies over the years, homogenizing this subphenotypic data is a challenging task, and so is replication. An alternative methodology, taken here, is to set aside the intricacies of subphenotype and allow the genetic data itself to determine which subjects define a homogeneous genetic subgroup (termed 'bicluster' below).</p><p><strong>Results: </strong>In this paper, we leverage recent advances in heterogeneity analysis to look for genetically-driven subgroups (i.e., biclusters) within the broad phenotype of Bipolar Disorder. We first apply this covariate-corrected biclustering algorithm to a cohort of 2524 BD cases and 4106 controls from the Bipolar Disease Research Network (BDRN) within the Psychiatric Genomics Consortium (PGC). We find evidence of genetic heterogeneity delineating a statistically significant bicluster comprising a subset of BD cases which exhibits a disease-specific pattern of differential-expression across a subset of SNPs. This disease-specific genetic pattern (i.e., 'genetic subgroup') replicates across the remaining data-sets collected by the PGC containing 5781/8289, 3581/7591, and 6825/9752 cases/controls, respectively. 
This genetic subgroup (discovered without using any BD subtype information) was more prevalent in Bipolar type-I than in Bipolar type-II.</p><p><strong>Conclusions: </strong>Our methodology has successfully identified a replicable homogeneous genetic subgroup of bipolar disorder. This subgroup may represent a collection of correlated genetic risk-factors for BDI. By investigating the subgroup's bicluster-informed polygenic-risk-scoring (PRS), we find that the disease-specific pattern highlighted by the bicluster can be leveraged to eliminate noise from our GWAS analyses and improve risk prediction. This improvement is particularly notable when using only a relatively small subset of the available SNPs, implying improved SNP replication. Though our primary focus is only the analysis of disease-related signal, we also identify replicable control-related heterogeneity.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11092873/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140920912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
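The abstract above does not spell out the scoring formula; as a hedged illustration, the standard additive PRS is assumed here, with the bicluster-informed variant restricting the score to a SNP subset. All function and variable names are illustrative, not from the paper:

```python
import numpy as np

def polygenic_risk_score(genotypes, effect_sizes):
    """Additive PRS: weighted sum of risk-allele dosages (0, 1, or 2).

    genotypes    : (n_subjects, n_snps) array of allele dosages
    effect_sizes : (n_snps,) array of per-SNP GWAS effect sizes
                   (e.g. log odds ratios)
    """
    g = np.asarray(genotypes, dtype=float)
    w = np.asarray(effect_sizes, dtype=float)
    return g @ w

def subset_prs(genotypes, effect_sizes, snp_indices):
    """Score using only a chosen SNP subset, e.g. the SNPs highlighted
    by a bicluster, by slicing the same arrays before scoring."""
    g = np.asarray(genotypes, dtype=float)[:, snp_indices]
    w = np.asarray(effect_sizes, dtype=float)[snp_indices]
    return g @ w
```

Restricting to a small, pattern-informed subset is what the abstract refers to as using "only a relatively small subset of the available SNPs".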
Patrick C Kinnunen, Siddhartha Srivastava, Zhenlin Wang, Kenneth K Y Ho, Brock A Humphries, Siyi Chen, Jennifer J Linderman, Gary D Luker, Kathryn E Luker, Krishna Garikipati
Targeting signaling pathways that drive cancer cell migration or proliferation is a common therapeutic approach. A popular experimental technique, the scratch assay, measures the migration- and proliferation-driven closure of a defect in a confluent cell monolayer. These assays do not, however, resolve dynamic effects. To improve analysis of scratch assays, we combine high-throughput scratch assays, video microscopy, and system identification to infer partial differential equation (PDE) models of cell migration and proliferation. We capture the evolution of cell density fields over time using live cell microscopy and automated image processing. We employ weak-form-based system identification techniques for cell density dynamics modeled with first-order kinetics of advection-diffusion-reaction systems. We present a comparison of our methods to results obtained using traditional inference approaches on previously analyzed 1-dimensional scratch assay data. We demonstrate the application of this pipeline on high-throughput 2-dimensional scratch assays and find that low levels of trametinib inhibit wound closure primarily by decreasing random cell migration by approximately 20%. Our integrated experimental and computational pipeline can be adapted for quantitatively inferring the effect of biological perturbations on cell migration and proliferation in various cell lines.
{"title":"Inference of weak-form partial differential equations describing migration and proliferation mechanisms in wound healing experiments on cancer cells.","authors":"Patrick C Kinnunen, Siddhartha Srivastava, Zhenlin Wang, Kenneth K Y Ho, Brock A Humphries, Siyi Chen, Jennifer J Linderman, Gary D Luker, Kathryn E Luker, Krishna Garikipati","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Targeting signaling pathways that drive cancer cell migration or proliferation is a common therapeutic approach. A popular experimental technique, the scratch assay, measures the migration and proliferation-driven cell closure of a defect in a confluent cell monolayer. These assays do not measure dynamic effects. To improve analysis of scratch assays, we combine high-throughput scratch assays, video microscopy, and system identification to infer partial differential equation (PDE) models of cell migration and proliferation. We capture the evolution of cell density fields over time using live cell microscopy and automated image processing. We employ weak form-based system identification techniques for cell density dynamics modeled with first-order kinetics of advection-diffusion-reaction systems. We present a comparison of our methods to results obtained using traditional inference approaches on previously analyzed 1-dimensional scratch assay data. We demonstrate the application of this pipeline on high throughput 2-dimensional scratch assays and find that low levels of trametinib inhibit wound closure primarily by decreasing random cell migration by approximately 20%. 
Our integrated experimental and computational pipeline can be adapted for quantitatively inferring the effect of biological perturbations on cell migration and proliferation in various cell lines.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11537331/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142580815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D Korošak, S Postić, A Stožer, B Podobnik, M Slak Rupnik
Calcium signals in pancreatic β-cell collectives show a sharp transition from an uncorrelated to a correlated state, resembling a phase transition, as the slowly increasing glucose concentration crosses the tipping point. However, the exact nature and order of this phase transition are not well understood. Using confocal microscopy to record the collective calcium activation of β cells in an intact islet under glucose concentrations that were first increased and then decreased, we first show that in addition to the sharp transition, the coordinated calcium response exhibits hysteresis, indicating a critical, first-order transition. A network model of β cells combining link-selection and coordination mechanisms captures the observed hysteresis loop and the critical nature of the transition. Our results point towards an understanding of the role of islets as tipping elements in the pancreas which, interconnected by perfusion, diffusion, and innervation, produce tipping dynamics and abrupt insulin release.
{"title":"Critical transitions in pancreatic islets.","authors":"D Korošak, S Postić, A Stožer, B Podobnik, M Slak Rupnik","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Calcium signals in pancreatic <math><mrow><mi>β</mi></mrow> </math> cells collectives show a sharp transition from uncorrelated to correlated state resembling a phase transition as the slowly increasing glucose concentration crosses the tipping point. However, the exact nature or the order of this phase transition is not well understood. Using confocal microscopy to record the collective calcium activation of <math><mrow><mi>β</mi></mrow> </math> cells in an intact islet under changing glucose concentration in increasing and then decreasing way, we first show that in addition to the sharp transition, the coordinated calcium response exhibits a hysteresis indicating a critical, first order transition. A network model of <math><mrow><mi>β</mi></mrow> </math> cells combining link selection and coordination mechanisms capture the observed hysteresis loop and the critical nature of the transition. Our results point towards the understanding the role of islets as tipping elements in the pancreas that interconnected by perfusion, diffusion and innervation cause the tipping dynamics and abrupt insulin release.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11537337/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142585103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}