
Journal of Neuroscience Methods: Latest Articles

Validating a novel paradigm for simultaneously assessing mismatch response and frequency-following response to speech sounds
IF 2.7 | CAS Tier 4 (Medicine) | Q2 BIOCHEMICAL RESEARCH METHODS | Pub Date: 2024-09-06 | DOI: 10.1016/j.jneumeth.2024.110277
Tzu-Han Zoe Cheng , Tian Christina Zhao

Background

Speech sounds are processed in the human brain through intricate and interconnected cortical and subcortical structures. Two neural signatures, one largely from cortical sources (the mismatch response, MMR) and one largely from subcortical sources (the frequency-following response, FFR), are critical for assessing speech processing, as both show sensitivity to high-level linguistic information. However, the recording prerequisites for the MMR and the FFR differ, making them difficult to acquire simultaneously.

New method

Using a new paradigm, our study aims to concurrently capture both signals and test them against the following criteria: (1) replicating the effect that the MMR to a native speech contrast significantly differs from the MMR to a nonnative speech contrast, and (2) demonstrating that FFRs to three speech sounds can be reliably differentiated.

Results

Using EEG from 18 adults, we observed a decoding accuracy of 72.2 % between the MMR to native vs. nonnative speech contrasts. A significantly larger native MMR was shown in the expected time window. Similarly, a significant decoding accuracy of 79.6 % was found for FFR. A high stimulus-to-response cross-correlation with a 9 ms lag suggested that FFR closely tracks speech sounds.
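The lag estimate described above comes from a stimulus-to-response cross-correlation. A minimal numpy sketch of that computation, using synthetic signals and a hypothetical 16 kHz sampling rate (not the authors' pipeline):

```python
import numpy as np

fs = 16000                              # Hz; illustrative sampling rate
rng = np.random.default_rng(0)
stimulus = rng.standard_normal(1600)    # white-noise stand-in for a speech token
true_lag = int(0.009 * fs)              # 9 ms neural delay (144 samples)

response = np.zeros_like(stimulus)
response[true_lag:] = 0.5 * stimulus[:-true_lag]   # delayed, attenuated copy

# Cross-correlate and keep the non-negative lag with maximal correlation
# (the neural response cannot precede the stimulus).
xcorr = np.correlate(response, stimulus, mode="full")
lags = np.arange(-len(stimulus) + 1, len(stimulus))
pos = lags >= 0
best_lag = int(lags[pos][np.argmax(xcorr[pos])])

print(f"estimated lag: {best_lag / fs * 1000:.1f} ms")  # 9.0 ms
```

The same recipe applied to a real FFR would use the stimulus waveform and the averaged EEG response in place of these toy arrays.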

Comparison with existing method(s)

These findings demonstrate that our paradigm reliably captures both MMR and FFR concurrently, replicating and extending past research with far fewer trials (MMR: 50 trials; FFR: 200 trials) and a shorter experiment time (12 minutes).

Conclusions

This study paves the way to understanding cortical-subcortical interactions for speech and language processing, with the ultimate goal of developing an assessment tool specific to early development.

Journal of Neuroscience Methods, Volume 412, Article 110277.
Citations: 0
A novel method for sparse dynamic functional connectivity analysis from resting-state fMRI
IF 2.7 | CAS Tier 4 (Medicine) | Q2 BIOCHEMICAL RESEARCH METHODS | Pub Date: 2024-09-04 | DOI: 10.1016/j.jneumeth.2024.110275
Houxiang Wang , Jiaqing Chen , Zihao Yuan , Yangxin Huang , Fuchun Lin

Background:

There is growing interest in understanding the dynamic functional connectivity (DFC) between distributed brain regions. However, it remains challenging to reliably estimate the temporal dynamics from resting-state functional magnetic resonance imaging (rs-fMRI) due to the limitations of current methods.

New methods:

We propose a new model, HDP-HSMM-BPCA, for sparse DFC analysis of high-dimensional rs-fMRI data; it is a temporal extension of probabilistic principal component analysis built on a Bayesian nonparametric hidden semi-Markov model (HSMM). Specifically, we place a hierarchical Dirichlet process (HDP) prior on the model to remove the parametric assumptions of the standard HMM framework, overcoming its limitations. A notable advantage is the model's ability to automatically infer the state-specific latent space dimensionality within the Bayesian formulation.
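The key property an HSMM adds over a standard HMM is explicit state-duration distributions, rather than the geometric dwell times an HMM implies. A toy generative sketch of that idea, with Poisson durations and an invented transition matrix (not the authors' HDP-HSMM-BPCA inference code):

```python
import numpy as np

rng = np.random.default_rng(42)

n_states = 3
# Transition matrix with zero self-transitions: dwell times are modeled
# explicitly by a duration distribution, not by repeated self-loops.
trans = np.array([[0.0, 0.6, 0.4],
                  [0.5, 0.0, 0.5],
                  [0.7, 0.3, 0.0]])
dur_mean = np.array([10, 20, 5])   # per-state mean dwell time (illustrative)

def sample_hsmm_states(T):
    """Sample a length-T state sequence with Poisson-distributed dwell times,
    unlike the geometric dwell times implied by a standard HMM."""
    states, s = [], int(rng.integers(n_states))
    while len(states) < T:
        d = 1 + rng.poisson(dur_mean[s] - 1)      # explicit duration >= 1
        states.extend([s] * d)
        s = int(rng.choice(n_states, p=trans[s]))  # jump to a *different* state
    return np.array(states[:T])

z = sample_hsmm_states(300)
# Segment the sequence at state changes and report the mean dwell time.
segments = np.split(z, np.flatnonzero(np.diff(z)) + 1)
print(len(segments), np.mean([len(seg) for seg in segments]))
```

Inference for the actual model (HDP prior, per-state Bayesian PCA observation model) is far more involved; this only illustrates the dwell-time structure being assumed.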

Results:

Experimental results on synthetic data show that our model outperforms competing models, with relatively higher estimation accuracy. In addition, the proposed framework is applied to real rs-fMRI data to explore sparse DFC patterns. The findings indicate that high-dimensional rs-fMRI data contain a time-varying underlying structure and sparse DFC patterns.

Comparison with existing methods:

Compared with the existing DFC approaches based on HMM, our method overcomes the limitations of standard HMM. The observation model of HDP-HSMM-BPCA can discover the underlying temporal structure of rs-fMRI data. Furthermore, the relevant sparse DFC construction algorithm provides a scheme for estimating sparse DFC.
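For reference, a common baseline for "sparse DFC" is a thresholded sliding-window correlation; the sketch below shows that baseline, not the HDP-HSMM-BPCA construction itself, and the window length, step, and threshold are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T, R = 200, 6                      # time points, regions (toy rs-fMRI)
X = rng.standard_normal((T, R))
X[:, 1] += 0.8 * X[:, 0]           # inject one true coupling between regions 0 and 1

def sparse_dfc(X, win=50, step=25, thr=0.3):
    """Sliding-window correlation with hard thresholding: a simple
    baseline for estimating sparse dynamic functional connectivity."""
    mats = []
    for start in range(0, X.shape[0] - win + 1, step):
        C = np.corrcoef(X[start:start + win].T)
        np.fill_diagonal(C, 0.0)
        C[np.abs(C) < thr] = 0.0   # keep only strong edges -> sparse pattern
        mats.append(C)
    return np.stack(mats)

dfc = sparse_dfc(X)
print(dfc.shape)                   # one sparse matrix per window
```

The injected coupling survives thresholding in the windowed matrices, while spurious noise correlations are mostly zeroed out.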

Conclusion:

We describe a new computational framework for sparse DFC analysis to discover the underlying temporal structure of rs-fMRI data, which will facilitate the study of brain functional connectivity.

Journal of Neuroscience Methods, Volume 411, Article 110275.
Citations: 0
Cross-subject emotion recognition in brain-computer interface based on frequency band attention graph convolutional adversarial neural networks
IF 2.7 | CAS Tier 4 (Medicine) | Q2 BIOCHEMICAL RESEARCH METHODS | Pub Date: 2024-09-03 | DOI: 10.1016/j.jneumeth.2024.110276
Shinan Chen , Yuchen Wang , Xuefen Lin , Xiaoyong Sun , Weihua Li , Weifeng Ma

Background:

Emotion is an important area in neuroscience. Cross-subject emotion recognition based on electroencephalogram (EEG) data is challenging due to physiological differences between subjects. The domain gap, i.e., the differing distributions of EEG data across subjects, has attracted great attention in cross-subject emotion recognition.
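The "domain gap" can be made concrete with a distribution-distance measure. A small sketch using maximum mean discrepancy (MMD) on synthetic feature vectors, purely to illustrate the concept (MMD is not claimed to be the paper's method):

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased MMD^2 estimate with an RBF kernel: a simple way to quantify
    the distributional gap between two subjects' EEG feature sets."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
subj_a = rng.standard_normal((100, 4))          # subject A features
subj_b = rng.standard_normal((100, 4)) + 1.5    # subject B, shifted distribution
same = rng.standard_normal((100, 4))            # same distribution as subject A

gap_ab = mmd_rbf(subj_a, subj_b)
gap_aa = mmd_rbf(subj_a, same)
print(gap_ab > gap_aa)   # larger MMD means a larger domain gap
```

Adversarial domain-adaptation methods such as the one described here aim to learn features for which this kind of gap shrinks across subjects.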

Comparison with existing methods:

This study focuses on narrowing the domain gap between subjects through the emotional frequency bands and the relationship information between EEG channels. Emotional frequency band features represent the energy distribution of EEG data in different frequency ranges, while relationship information between EEG channels provides spatial distribution information about EEG data.

New method:

To achieve this, this paper proposes the Frequency Band Attention Graph convolutional Adversarial neural Network (FBAGAN). The model includes three components: a feature extractor, a classifier, and a discriminator. The feature extractor consists of a layer with a frequency band attention mechanism and a graph convolutional neural network: the attention mechanism extracts frequency band information by assigning weights, while the graph convolutional network extracts relationship information between EEG channels by modeling their graph structure. The discriminator then helps minimize the gap in frequency and relationship information between the source and target domains, improving the model's ability to generalize.
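A minimal sketch of the band-weighting idea: compute canonical band powers for one EEG channel and form softmax weights over bands. Here the weights come from log-power as a stand-in for the learned attention scores, and the sampling rate and signal are invented:

```python
import numpy as np

fs = 200  # Hz; illustrative EEG sampling rate
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

def band_powers(x, fs):
    """Energy of a single EEG channel in the canonical frequency bands."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands.values()])

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)  # alpha-heavy

p = band_powers(eeg, fs)
attn = softmax(np.log1p(p))     # stand-in for *learned* attention scores
print(list(bands)[int(np.argmax(attn))])
```

In FBAGAN the weights are learned end-to-end rather than derived from power, but the role they play (emphasizing emotion-relevant bands) is the same.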

Results:

The FBAGAN model is extensively tested on the SEED, SEED-IV, and DEAP datasets. The accuracy and standard deviation scores are 88.17% and 4.88, respectively, on the SEED dataset, and 77.35% and 3.72 on the SEED-IV dataset. On the DEAP dataset, the model achieves 69.64% for Arousal and 65.18% for Valence. These results outperform most existing models.

Conclusions:

The experiments indicate that FBAGAN effectively addresses the challenges of transferring EEG channel domain and frequency band domain, leading to improved performance.

Journal of Neuroscience Methods, Volume 411, Article 110276.
Citations: 0
Reconstruction of natural images from human fMRI using a three-stage multi-level deep fusion model
IF 2.7 | CAS Tier 4 (Medicine) | Q2 BIOCHEMICAL RESEARCH METHODS | Pub Date: 2024-08-31 | DOI: 10.1016/j.jneumeth.2024.110269
Lu Meng , Zhenxuan Tang , Yangqian Liu

Background

Image reconstruction is a critical task in brain decoding research, primarily utilizing functional magnetic resonance imaging (fMRI) data. However, due to challenges such as limited samples in fMRI data, the quality of reconstruction results often remains poor.

New method

We proposed a three-stage multi-level deep fusion model (TS-ML-DFM). The model employed a three-stage training process, encompassing components such as image encoders, generators, discriminators, and fMRI encoders. In this method, we incorporated distinct supplementary features derived separately from depth images and original images. Additionally, the method integrated several components, including a random shift module, dual attention module, and multi-level feature fusion module.
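Of the modules named above, the random shift module is the most self-contained to illustrate. One plausible reading is a small random translation augmentation; the shift range and the wrap-around behavior here are assumptions for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_shift(img, max_shift=4):
    """Randomly translate an image by up to max_shift pixels on each axis
    (wrap-around), in the spirit of a 'random shift' augmentation."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

img = np.arange(64, dtype=float).reshape(8, 8)
aug = random_shift(img)
# Shifting permutes pixels, so the augmented image has the same value multiset.
print(aug.shape, np.array_equal(np.sort(aug, axis=None), np.sort(img, axis=None)))
```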

Results

In both qualitative and quantitative comparisons on the Horikawa17 and VanGerven10 datasets, our method exhibited excellent performance.

Comparison with existing method(s)

For example, on the primary Horikawa17 dataset, our method was compared with other leading methods on the following metrics: average hash value, histogram similarity, mutual information, structural similarity accuracy, AlexNet(2), AlexNet(5), and pairwise human perceptual similarity accuracy. Compared to the second-ranked result on each metric, the proposed method achieved improvements of 0.99 %, 3.62 %, 3.73 %, 2.45 %, 3.51 %, 0.62 %, and 1.03 %, respectively. On the SwAV top-level semantic metric, a substantial improvement of 10.53 % was achieved over the second-ranked pixel-level reconstruction method.
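Two of the metrics above are easy to make concrete. A numpy sketch of histogram similarity (as histogram intersection) and mutual information between grayscale images; these are generic implementations, not the paper's exact evaluation code:

```python
import numpy as np

def hist_similarity(a, b, bins=32):
    """Histogram intersection between two grayscale images in [0, 1]."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 1))
    hb, _ = np.histogram(b, bins=bins, range=(0, 1))
    ha, hb = ha / ha.sum(), hb / hb.sum()
    return np.minimum(ha, hb).sum()

def mutual_information(a, b, bins=32):
    """Mutual information between pixel intensities, from the joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[(0, 1), (0, 1)])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0, 1)   # good "reconstruction"
unrelated = rng.random((64, 64))                                      # bad "reconstruction"

print(hist_similarity(img, noisy), mutual_information(img, noisy))
```

A faithful reconstruction scores higher than an unrelated image on both measures, which is what makes them usable as reconstruction-quality metrics.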

Conclusions

The TS-ML-DFM method proposed in this study, when applied to decoding brain visual patterns using fMRI data, has outperformed previous algorithms, thereby facilitating further advancements in research within this field.

Journal of Neuroscience Methods, Volume 411, Article 110269 (open access).
Citations: 0
Novel comprehensive analysis of skilled reaching and grasping behavior in adult rats
IF 2.7 | CAS Tier 4 (Medicine) | Q2 BIOCHEMICAL RESEARCH METHODS | Pub Date: 2024-08-31 | DOI: 10.1016/j.jneumeth.2024.110271
Pawan Sharma , Yixuan Du , Kripi Singapuri , Debbi Moalemi Delafraz , Prithvi K. Shah

Background

Reaching and grasping (R&G) in rats is commonly used as an outcome measure to investigate the effectiveness of rehabilitation or treatment strategies for recovering forelimb function after spinal cord injury. Kinematic analysis has been limited to wrist and digit movements; kinematic profiles of the more proximal body segments, which play an equally crucial role in successfully executing the task, remain unexplored. Additionally, little is known about the activity of individual forelimb muscles, their interactions, and their correlation with the kinematics of R&G movement.

New method

In this work, novel methodologies to comprehensively assess and quantify the 3D kinematics of the proximal and distal forelimb joints along with associated muscle activity during R&G movements in adult rats are developed and discussed.

Results

Our data show that the different phases of R&G identified using the novel kinematic- and EMG-based approach correlate with the well-established descriptors of R&G stages derived from the Whishaw scoring system. Additionally, the developed methodology allows the temporal activity of individual muscles, and their associated mechanical and physiological properties, to be described during different phases of the motor task.
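Phase boundaries of the kind described are commonly located by thresholding a rectified, smoothed EMG envelope against baseline statistics. A generic onset-detection sketch on synthetic data; the window length and threshold factor are arbitrary choices, not the authors' algorithm:

```python
import numpy as np

def emg_envelope(emg, fs, win_ms=50):
    """Rectify a raw EMG trace and smooth it with a moving-average window."""
    rect = np.abs(emg - emg.mean())
    w = max(1, int(fs * win_ms / 1000))
    return np.convolve(rect, np.ones(w) / w, mode="same")

def detect_onset(env, baseline_end, k=4.0):
    """Index of the first sample where the envelope exceeds
    baseline mean + k * baseline SD, or -1 if never exceeded."""
    base = env[:baseline_end]
    thr = base.mean() + k * base.std()
    above = np.flatnonzero(env > thr)
    return int(above[0]) if above.size else -1

fs = 1000  # Hz; illustrative EMG sampling rate
rng = np.random.default_rng(0)
sig = 0.05 * rng.standard_normal(2000)
sig[800:1200] += rng.standard_normal(400)   # burst = muscle active during "grasp"

env = emg_envelope(sig, fs)
onset = detect_onset(env, baseline_end=500)
print(onset)   # near sample 800, within smoothing-window tolerance
```

Offsets are found symmetrically, and pairing onsets/offsets across muscles with kinematic events is what yields phase boundaries.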

Comparison with existing method(s)

R&G phases and their sub-components are identified and quantified using the developed kinematic and EMG-based approach. Importantly, the identified R&G phases closely match the well-established qualitative descriptors of the R&G task proposed by Whishaw and colleagues.

Conclusions

The present work provides an in-depth objective analysis of kinematics and EMG activity of R&G behavior, paving the way to a standardized approach to assessing this critical rodent motor function in future studies.

Journal of Neuroscience Methods, Volume 411, Article 110271.
Citations: 0
High quality, high throughput, and low-cost simultaneous video recording of 60 animals in operant chambers using PiRATeMC
IF 2.7 | CAS Tier 4 (Medicine) | Q2 BIOCHEMICAL RESEARCH METHODS | Pub Date: 2024-08-31 | DOI: 10.1016/j.jneumeth.2024.110270
Jarryd Ramborger , Sumay Kalra , Joseph Mosquera , Alexander C.W. Smith , Olivier George

Background

The development of Raspberry Pi-based recording devices for video analysis of drug self-administration studies has shown promise in terms of affordability, customizability, and capacity to extract in-depth behavioral patterns. Yet most video recording systems are limited to a few cameras, making them incompatible with large-scale studies.

New method

We expanded the PiRATeMC (Pi-based Remote Acquisition Technology for Motion Capture) recording system by increasing its scale, modifying its code, and adding equipment to accommodate large-scale video acquisition, accompanied by data on throughput capabilities, video fidelity, synchronicity of devices, and comparisons between Raspberry Pi 3B+ and 4B models.

Results

Using PiRATeMC default recording parameters resulted in minimal storage (∼350 MB/h), high throughput (< ∼120 seconds per Pi), high video fidelity, and synchronicity within ∼0.02 seconds, affording the ability to simultaneously record 60 animals in individual self-administration chambers for various session lengths at a fraction of the cost of commercial systems. No consequential differences were found between Raspberry Pi models.
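The reported figures imply modest storage even at full scale. A quick back-of-envelope check; the 12-hour session length is an invented example, not a figure from the paper:

```python
# Back-of-envelope check of the recording-capacity figures reported above
# (~350 MB/h per camera, 60 simultaneous chambers).
mb_per_hour = 350
n_cameras = 60
session_hours = 12               # hypothetical long self-administration session

total_gb = mb_per_hour * n_cameras * session_hours / 1024
per_cam_gb = mb_per_hour * session_hours / 1024
print(f"{total_gb:.1f} GB total, {per_cam_gb:.1f} GB per camera")
```

Roughly 246 GB for a full 60-chamber, 12-hour run, i.e., about 4 GB per camera, which fits comfortably on commodity storage.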

Comparison with existing method(s)

This system can simultaneously acquire an order of magnitude more video data than other video recording systems, with lower storage needs and lower costs. Additionally, we report in-depth quantitative assessments of throughput, fidelity, and synchronicity, displaying real-time system capabilities.

Conclusions

The system presented can be fully installed in a month's time by a single technician and provides a scalable, low-cost, quality-assured procedure with a high degree of customization and synchronicity between recording devices. It is capable of recording a large number of subjects and timeframes with high turnover, in a variety of species and settings.

Journal of Neuroscience Methods, Volume 411, Article 110270 (open access).
Citations: 0
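As a back-of-envelope check on the storage figures reported above, a minimal sketch (the per-hour rate and chamber count are the paper's reported values; the 12-hour session length is an assumed example):

```python
# Rough storage estimate for a large-scale PiRATeMC-style deployment.
# Reported figures: ~350 MB/h per Pi, 60 chambers recorded simultaneously.
# The 12-hour session length is an assumed example, not from the paper.

MB_PER_HOUR_PER_PI = 350
N_CHAMBERS = 60
SESSION_HOURS = 12

per_pi_gb = MB_PER_HOUR_PER_PI * SESSION_HOURS / 1000   # GB per camera
total_gb = per_pi_gb * N_CHAMBERS                       # GB for the whole rig

print(f"{per_pi_gb:.1f} GB per Pi, {total_gb:.0f} GB total for 60 chambers")
```

Even a full day-length session across all 60 chambers stays in the low hundreds of gigabytes, which is consistent with the paper's claim that storage needs remain modest at this scale.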
Pushing the boundaries of brain-computer interfacing (BCI) and neuron-electronics 推动脑机接口(BCI)和神经元电子学的发展。
IF 2.7 4区 医学 Q2 BIOCHEMICAL RESEARCH METHODS Pub Date : 2024-08-30 DOI: 10.1016/j.jneumeth.2024.110274
Mohammed Seghir Guellil, Fatima Kies, Emad Kamil Hussein, Mohammad Shabaz, Robert E. Hampson
Citations: 0
Small animal brain surgery with neither a brain atlas nor a stereotaxic frame 既没有脑图谱也没有立体定向框架的小动物脑部手术
IF 2.7 4区 医学 Q2 BIOCHEMICAL RESEARCH METHODS Pub Date : 2024-08-28 DOI: 10.1016/j.jneumeth.2024.110272
Shaked Ron, Hadar Beeri, Ori Shinover , Noam M. Tur , Jonathan Brokman , Ben Engelhard, Yoram Gutfreund

Background

Stereotaxic surgery is a cornerstone in brain research for the precise positioning of electrodes and probes, but its application is limited to species with available brain atlases and tailored stereotaxic frames. Addressing this limitation, we introduce an alternative technique for small animal brain surgery that requires neither an aligned brain atlas nor a stereotaxic frame.

New method

The new method requires an ex-vivo high-contrast MRI brain scan of one specimen and access to a micro-CT scanner. The process involves attaching miniature markers to the skull, followed by CT scanning of the head. Subsequently, MRI and CT images are co-registered using standard image processing software and the targets for brain recordings are marked in the MRI image. During surgery, the animal's head is stabilized in any convenient orientation, and the probe’s 3D position and angle are tracked using a multi-camera system. We have developed a software that utilizes the on-skull markers as fiducial points to align the CT/MRI 3D model with the surgical positioning system, and in turn instructs the surgeon how to move the probe to reach the targets within the brain.

Results

Our technique enables insertion tracks that connect two points in the brain. We successfully applied this method to Neuropixels probe positioning in owls, quails, and mice, demonstrating its versatility.

Comparison with existing methods

We present an alternative to traditional stereotaxic brain surgeries that does not require established stereotaxic tools. This method is therefore especially advantageous for research in non-standard and novel animal models.
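The fiducial-based alignment described above — registering the CT/MRI model to the surgical positioning system via on-skull markers — is, at its core, a rigid point-set registration. A minimal sketch of the standard Kabsch least-squares solution (NumPy; the marker coordinates are invented examples, and this is not the authors' software):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that R @ p + t maps
    src points onto dst points.

    src, dst: (N, 3) arrays of matched fiducial coordinates in the two
    spaces (e.g. CT/MRI model space and surgical-camera space), N >= 3.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Invented example: four non-coplanar skull markers under a known pose.
src = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
t_true = np.array([5.0, -2.0, 1.5])
dst = src @ R_true.T + t_true

R, t = rigid_transform(src, dst)
# A target marked in MRI space then maps into surgical space as R @ p + t.
```

With at least three non-collinear markers the transform is fully determined; extra markers make the least-squares fit robust to small localization errors.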

Citations: 0
NeuroQuantify – An image analysis software for detection and quantification of neuron cells and neurite lengths using deep learning NeuroQuantify - 利用深度学习检测和量化神经元细胞和神经元长度的图像分析软件。
IF 2.7 4区 医学 Q2 BIOCHEMICAL RESEARCH METHODS Pub Date : 2024-08-27 DOI: 10.1016/j.jneumeth.2024.110273
Ka My Dang , Yi Jia Zhang , Tianchen Zhang , Chao Wang , Anton Sinner , Piero Coronica , Joyce K.S. Poon

Background

The segmentation of cells and neurites in microscopy images of neuronal networks provides valuable quantitative information about neuron growth and neuronal differentiation, including the number of cells, neurites, neurite length and neurite orientation. This information is essential for assessing the development of neuronal networks in response to extracellular stimuli, which is useful for studying neuronal structures, for example in research on neurodegenerative diseases and pharmaceuticals.

New method

We have developed NeuroQuantify, an open-source software that uses deep learning to efficiently and quickly segment cells and neurites in phase contrast microscopy images.

Results

NeuroQuantify offers several key features: (i) automatic detection of cells and neurites; (ii) post-processing of the images for the quantitative neurite length measurement based on segmentation of phase contrast microscopy images, and (iii) identification of neurite orientations.

Comparison with existing methods

NeuroQuantify overcomes some of the limitations of existing methods in the automatic and accurate analysis of neuronal structures. It has been developed for phase contrast images rather than fluorescence images. In addition to typical functionality of cell counting, NeuroQuantify also detects and counts neurites, measures the neurite lengths, and produces the neurite orientation distribution.

Conclusions

We offer a valuable tool to assess network development rapidly and effectively. The user-friendly NeuroQuantify software can be installed and freely downloaded from GitHub at https://github.com/StanleyZ0528/neural-image-segmentation.
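To illustrate the kind of quantities reported above, a minimal sketch of length and orientation measurement for a single traced neurite (plain NumPy on an invented polyline; NeuroQuantify itself segments full phase-contrast images with deep learning, which this sketch does not attempt):

```python
import numpy as np

def neurite_metrics(points, um_per_px=1.0):
    """Length (in µm) and length-weighted orientation histogram of one neurite.

    points: (N, 2) array of (x, y) pixel coordinates tracing the neurite.
    Returns total length and a 4-bin histogram over 0-180 degrees.
    """
    segs = np.diff(np.asarray(points, float), axis=0)
    seg_len = np.hypot(segs[:, 0], segs[:, 1])          # per-segment pixel length
    length_um = seg_len.sum() * um_per_px
    # Orientation is direction-agnostic, so fold angles into [0, 180).
    angles = np.degrees(np.arctan2(segs[:, 1], segs[:, 0])) % 180.0
    hist, _ = np.histogram(angles, bins=4, range=(0, 180), weights=seg_len)
    return length_um, hist

# Invented example: an L-shaped neurite, 30 px horizontal then 40 px vertical,
# at an assumed scale of 0.5 µm per pixel.
trace = np.array([[0, 0], [30, 0], [30, 40]])
length, hist = neurite_metrics(trace, um_per_px=0.5)
```

Weighting the orientation histogram by segment length means long straight stretches dominate the distribution, which matches the intuitive notion of a neurite's prevailing direction.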

Citations: 0
Direct dorsal root ganglia (DRG) injection in mice for analysis of adeno-associated viral (AAV) gene transfer to peripheral somatosensory neurons 直接向小鼠背根神经节 (DRG) 注射,分析腺相关病毒 (AAV) 基因向外周躯体感觉神经元的转移。
IF 2.7 4区 医学 Q2 BIOCHEMICAL RESEARCH METHODS Pub Date : 2024-08-25 DOI: 10.1016/j.jneumeth.2024.110268
Michael O’Donnell , Arjun Fontaine , John Caldwell , Richard Weir

Background

Delivering optogenetic genes to the peripheral sensory nervous system provides an efficient approach to study and treat neurological disorders and offers the potential to reintroduce sensory feedback to prosthesis users and those who have incurred other neuropathies. Adeno-associated viral (AAV) vectors are a common method of gene delivery due to their efficient gene transfer and minimal toxicity. AAVs can be designed to target specific tissues, with transduction efficacy determined through the combination of serotype and genetic promoter selection, as well as location of vector administration. The dorsal root ganglia (DRGs) are collections of cell bodies of sensory neurons which project from the periphery to the central nervous system (CNS). The anatomical make-up of DRGs makes them an ideal injection location to target the somatosensory neurons in the peripheral nervous system (PNS).

Comparison to existing methods

Previous studies have detailed methods of direct DRG injection in rats and dorsal horn injection in mice; however, due to the size and anatomical differences between rats and strains of mice, only one other method for AAV injection into murine DRGs for transduction of peripheral sensory neurons has been published, and it uses a different methodology.

New Method/Results

Here, we detail the necessary materials and methods required to inject AAVs into the L3 and L4 DRGs of mice, as well as how to harvest the sciatic nerve and L3/L4 DRGs for analysis. This methodology results in optogenetic expression in both the L3/L4 DRGs and sciatic nerve and can be adapted to inject any DRG.

Citations: 0