Background
To assess post-stroke functional recovery and develop new treatments, numerous preclinical models have been developed, including the photothrombotic stroke model. This simple and reproducible method induces a targeted ischemic lesion in a chosen cortical area following intravenous rose bengal injection and controlled illumination with a 532 nm laser. However, identifying the infarct’s location and extent in vivo requires sophisticated, time-consuming, and/or expensive tools such as MRI or advanced optical imaging techniques. Here, we therefore introduce a simple, low-tech method.
New method
Our direct method takes advantage of the long-lasting fluorescence of rose bengal remaining in the damaged cortex and detectable through the intact skull using the same 532 nm laser.
Results
At the lesion site, we observed an emission spot glowing through the skull for several weeks after stroke induction. Ex vivo immunohistochemical analysis showed that rose bengal fluorescence remains confined to the lesion, precisely delineating the infarct's boundaries.
Comparison with existing methods
This technique simplifies lesion localization and guides subsequent in vivo investigations, such as probe implantation, optogenetic fiber placement, or targeted tissue sampling in the perilesional cortex, where neuroplasticity and repair processes occur.
{"title":"Long term direct visualization of the photothrombotic cortical infarction through the intact skull of anesthetized mice","authors":"Juliette Leclerc, Théotime Briar, Caroline Derouck, Cheima Mortier, Célia Duclos, Karelle Bénardais, Eric Verin, Jean-Paul Marie, Julien Chuquet","doi":"10.1016/j.jneumeth.2026.110683","DOIUrl":"10.1016/j.jneumeth.2026.110683","url":null,"abstract":"<div><h3>Background</h3><div>To assess post-stroke functional recovery and develop new treatments, numerous preclinical models have been developed, including the photothrombotic stroke model. This reproducible and simple method induces a targeted ischemic lesion in a chosen cortical area following intravenous rose bengal injection and controlled illumination with a 532 nm laser. However, identifying the infarct’s location and extent <em>in vivo</em> requires sophisticated, time-consuming, and/or expensive tools such as MRI or advanced optical imaging techniques. Thus, we introduce here a simple and low-tech method.</div></div><div><h3>New method</h3><div>Our direct method takes advantage of the long-lasting fluorescence of rose bengal remaining in the damaged cortex and detectable through the intact skull using the same 532 nm laser.</div></div><div><h3>Result</h3><div>At the lesion site, we observed an emission spot glowing through the skull for several weeks after stroke induction. <em>Ex vivo</em> immunohistochemical analysis showed that rose bengal fluorescence remains confined to the lesion, precisely delineating the infarct's boundaries.</div></div><div><h3>Comparison with existing methods</h3><div>This technique simplifies lesion localization and guides subsequent <em>in vivo</em> investigations, such as probe implantation, optogenetic fiber placement, or targeted tissue sampling in the perilesional cortex, where neuroplasticity and repair processes occur.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"428 ","pages":"Article 110683"},"PeriodicalIF":2.3,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145928155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-07 | DOI: 10.1016/j.jneumeth.2026.110679
Wei Wang , Tehan Zhang , Shaolong Li , Wenzhao Wang , Quanhe Jin , Chi Zhang , Jie Liu , Haijian Sun , Shiqing Feng
Background
The corticospinal tract (CST) is a major descending motor pathway essential for voluntary motor control. While adeno-associated virus-mediated anterograde tracing is widely used to label CST projections in mice, the optimal stereotaxic injection coordinates and post-injection intervals remain unclear.
New method
Here, we systematically evaluated eight cortical injection strategies differing in anterior-posterior (AP) and medial-lateral (ML) coordinates, the number of injection sites, and post-injection intervals. CST labeling was quantitatively assessed at the cervical (C2, C5), thoracic (T2, T6), and lumbar (L2) spinal levels using transduced axon count (TAC), mean fluorescence intensity (MFI), and transduced area within the dorsal columns, each normalized to the C2 segment.
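To make the normalization explicit, here is a minimal sketch (with hypothetical counts, not the authors' analysis code): each metric measured at a given spinal level is expressed relative to its value at C2.

```python
# Hypothetical transduced-axon counts (TAC) per spinal level for one animal
tac = {"C2": 412, "C5": 388, "T2": 305, "T6": 247, "L2": 118}

# Normalize every level to the C2 segment: C2 becomes 1.0 and caudal levels
# express the fraction of labeled axons still present at that level
tac_norm = {level: count / tac["C2"] for level, count in tac.items()}
# e.g. tac_norm["L2"] is roughly 0.29
```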
Results
Across the tested AP/ML coordinates and single- versus multi-site injections, TAC and MFI were broadly comparable across cervical and thoracic levels, with the exception of reduced L2 labeling in Group IV (AP +0.70 mm). Furthermore, reducing the post-injection interval from four weeks to two weeks did not compromise labeling efficiency.
Comparison with existing methods
Conventional CST tracing typically requires multiple injections and ≥ 4-week intervals, increasing complexity and duration. Our optimized single-injection, 2-week protocol achieves comparable labeling fidelity while reducing procedural burden and improving reproducibility.
Conclusion
We suggest that effective CST labeling of L2 and more rostral segments can be achieved with a single-point injection at AP coordinates between 0.0 and +0.7 mm (ML fixed at 1.2 mm) or ML coordinates from +0.7 to +1.5 mm (AP fixed at 0.0 mm). These results establish a simplified, reproducible strategy for CST tracing.
{"title":"Optimizing stereotaxic injection strategy for AAV-mediated corticospinal tract tracing in mice","authors":"Wei Wang , Tehan Zhang , Shaolong Li , Wenzhao Wang , Quanhe Jin , Chi Zhang , Jie Liu , Haijian Sun , Shiqing Feng","doi":"10.1016/j.jneumeth.2026.110679","DOIUrl":"10.1016/j.jneumeth.2026.110679","url":null,"abstract":"<div><h3>Background</h3><div>The corticospinal tract (CST) is a major descending motor pathway essential for voluntary motor control. While adeno-associated virus-mediated anterograde tracing is widely used to label CST projections in mice, what the best stereotaxic injection coordinates and post-injection intervals remain unclear.</div></div><div><h3>New method</h3><div>Here, we systematically evaluated eight cortical injection strategies, differing in anterior-posterior (AP) and medial-lateral (ML) coordinates, the number of injection sites, and post-injection intervals. CST labeling was quantitatively assessed at cervical 2, cervical 5, thoracic 2, thoracic 6, and lumbar 2 spinal levels using transduced axon count (TAC), mean fluorescence intensity (MFI) and transduced area within the dorsal columns, normalized to the C2 segment.</div></div><div><h3>Results</h3><div>Across the tested AP/ML coordinates and single- versus multi-site injections, TAC and MFI were broadly comparable across cervical and thoracic levels, with the exception of reduced L2 labeling in Group IV (AP +0.70 mm). Furthermore, reducing the post-injection interval from four weeks to two weeks did not compromise labeling efficiency.</div></div><div><h3>Comparison with existing methods</h3><div>Conventional CST tracing typically requires multiple injections and ≥ 4-week intervals, increasing complexity and duration. Our optimized single-injection, 2-week protocol achieves comparable labeling fidelity while reducing procedural burden and improving reproducibility.</div></div><div><h3>Conclusion</h3><div>We suggest that effective CST labeling from L2 and rostral segments can be achieved with a single-point injection at AP coordinates between 0.0 and + 0.7 mm (ML fixed at 1.2 mm) or ML coordinates from + 0.7 to + 1.5 mm (AP fixed at 0.0 mm). These results establish a simplified, reproducible strategy for CST tracing.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"428 ","pages":"Article 110679"},"PeriodicalIF":2.3,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145944572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-07 | DOI: 10.1016/j.jneumeth.2025.110665
Yimin Qu , Songhui Rao , Ting Li , Ying Li , Yan Niu , Ruiyun Chang , Xianchuan Chen , Bin Wang
Background:
Epilepsy poses ongoing physical and mental threats and causes substantial economic burdens. Better seizure forecasting enables faster medical responses, improving patients’ quality of life and lowering healthcare costs. Research mainly focuses on early forecasting within a short preictal window, often too brief for effective drug administration. A major challenge is that a longer preictal phase may resemble the interictal state, making differentiation difficult.
New methods:
We propose a causal attention network (CANet) that takes longer interictal and preictal periods, of 1 h and 2 h respectively, as its research object. For feature extraction, a dilated causal convolutional network extracts local features, while causal attention, newly incorporated into epilepsy prediction, captures global correlation features. The complementary integration of these two components enhances feature extraction and enables a more precise distinction between interictal and preictal periods. A double-layer dynamic window algorithm is developed for seizure prediction.
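For illustration only, here is a minimal numpy sketch of a dilated causal 1-D convolution, the building block such a feature extractor relies on; the kernel, dilation, and toy signal are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

def dilated_causal_conv1d(x, kernel, dilation=1):
    """Causal 1-D convolution: y[t] depends only on x[t], x[t-d], x[t-2d], ..."""
    k = len(kernel)
    pad = (k - 1) * dilation                      # left-pad so no future samples leak in
    xp = np.concatenate([np.zeros(pad), x])
    y = np.zeros(len(x))
    for t in range(len(x)):
        taps = xp[t + pad - np.arange(k) * dilation]   # current sample and its dilated past
        y[t] = np.dot(kernel, taps)
    return y

# Toy EEG-like signal; stacking layers with dilations 1, 2, 4, ... grows the
# receptive field exponentially while keeping the convolution strictly causal
x = np.sin(np.linspace(0, 8 * np.pi, 256))
h = dilated_causal_conv1d(x, kernel=np.array([0.5, 0.3, 0.2]), dilation=2)
```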
Results:
We evaluated performance on the Freiburg and CHB-MIT datasets. On the Freiburg dataset, sensitivity (Sen) for the 1-h/2-h preictal intervals was 100.00%/96.67%, with a false alarm rate per hour (FAR) of 0.0077/h and 0.0472/h and an average prediction time (APT) of 97.59 min. On the CHB-MIT dataset, under the same conditions, we achieved a Sen of 97.06%/92.31%, a FAR of 0.0251/h and 0.0666/h, and an APT of 94.85 min.
Comparison with existing methods and conclusion:
Our approach outperforms most previous methods, and intracranial EEG (Freiburg) distinguishes interictal from preictal periods more effectively than scalp EEG (CHB-MIT).
{"title":"A causal attention network with time frequency channel feature fusion for epileptic seizure prediction","authors":"Yimin Qu , Songhui Rao , Ting Li , Ying Li , Yan Niu , Ruiyun Chang , Xianchuan Chen , Bin Wang","doi":"10.1016/j.jneumeth.2025.110665","DOIUrl":"10.1016/j.jneumeth.2025.110665","url":null,"abstract":"<div><h3>Background:</h3><div>Epilepsy poses ongoing physical and mental threats and causes substantial economic burdens. Better seizure forecasting enables faster medical responses, improving patients’ quality of life and lowering healthcare costs. Research mainly focuses on early forecasting within a short preictal window, often too brief for effective drug administration. A major challenge is that a longer preictal phase may resemble the interictal state, making differentiation difficult.</div></div><div><h3>New methods:</h3><div>We propose a causal attention network (CANet) with a longer interictal and preictal of 1 h and 2 h respectively as the research object. In the feature extraction, a dilated causal convolution network is employed to extract local features. Causal attention is innovatively incorporated into epilepsy prediction to capture global correlation features. The complementary integration of these two methods enhances feature extraction and enables a more precise distinction between interictal and preictal periods. A double-layer dynamic window algorithm is developed for seizure prediction.</div></div><div><h3>Results:</h3><div>We evaluate the performance on Freiburg and CHB-MIT datasets. On the Freiburg dataset, the sensitivity(Sen) of the 1/2-hour preictal intervals was 100.00%/96.67%, with a false alarm rate per hour (FAR) of 0.0077/h/0.0472/h, and the average prediction time (APT) was 97.59 min. On the CHB-MIT dataset, we achieved Sen of 97.06%/92.31%, FAR of 0.0251/h/0.0666/h, and APT of 94.85 min, under the same conditions.</div></div><div><h3>Comparison with existing methods and conclusion:</h3><div>Our approach outperforms most of the previous methods, and the intracranial EEG (Freiburg) can more effectively distinguish interictal and preictal periods than scalp EEG (CHB-MIT).</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"428 ","pages":"Article 110665"},"PeriodicalIF":2.3,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145928232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-05 | DOI: 10.1016/j.jneumeth.2025.110670
Yanhong Yan , Yang Zhao , Yong Peng , Lingjun Han , Shuhao Sun , Yudong Wen , Xueying Dong
Background
A brain atlas is an important tool that provides information on the location and function of brain motor nuclei, and the segmentation and registration of brain tissue slice images are the necessary basis for building brain atlases.
New method
To construct a carp brain atlas, we studied the segmentation and registration of carp brain tissue images. Brain tissue sections were prepared by pre-fixing the brain specimens and cutting paraffin sections stained with hematoxylin-eosin (HE). Based on the image characteristics of these sections, a multi-threshold image segmentation algorithm operating in HSI color space was selected for segmentation; a sketch of this idea is given below. For image registration, feature points were selected according to the morphological and structural characteristics of the different brain regions of the carp.
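For illustration, a minimal sketch of thresholding in HSI color space: the RGB-to-HSI conversion is the standard one, but the threshold values and function names are hypothetical and do not reproduce the algorithm used in the study.

```python
import numpy as np

def rgb_to_hsi(img):
    """Convert an RGB image (floats in [0, 1], shape H x W x 3) to H, S, I channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(img, axis=-1) / np.maximum(i, 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta)
    return h, s, i

def tissue_mask(img, s_min=0.15, i_min=0.10, i_max=0.95):
    """Multi-threshold rule: keep pixels saturated enough to be HE-stained tissue
    and neither background-white nor artifact-black (thresholds illustrative only)."""
    _, s, i = rgb_to_hsi(img)
    return (s > s_min) & (i > i_min) & (i < i_max)
```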
Results
The results showed that the image segmentation and registration algorithms used in this study meet the experimental requirements and are consistent with the spatial structural characteristics of the carp brain.
Comparison with existing methods
Traditional brain atlases are drawn manually, which is time-consuming and labor-intensive and limits them to two dimensions. Image segmentation technology can automatically identify the contours of brain regions, saving manpower and enabling three-dimensional reconstruction and digital research.
Conclusions
This study lays a foundation for the construction of a stereotaxic map of the carp brain.
{"title":"Image segmentation and registration of carp brain tissue slices oriented to brain atlas construction","authors":"Yanhong Yan , Yang Zhao , Yong Peng , Lingjun Han , Shuhao Sun , Yudong Wen , Xueying Dong","doi":"10.1016/j.jneumeth.2025.110670","DOIUrl":"10.1016/j.jneumeth.2025.110670","url":null,"abstract":"<div><h3>Background</h3><div>Brain atlas is an important tool to provide information on the location and function of brain motor nuclei, and the segmentation and registration of brain tissue slice images is the necessary basis for building brain atlases.</div></div><div><h3>New method</h3><div>In order to construct the carp brain atlas, the segmentation and the registration of carp brain tissue images were studied in this study. In this study, carp brain tissue sections were prepared by the method of pre-fixation of brain tissue specimens and paraffin tissue sections with HE staining techniques, and a multi-threshold image segmentation algorithm based on HSI color space was selected for image segmentation according to the image characteristics of carp brain tissue sections. In the aspect of image registration, a method of selecting feature points for registration based on the morphological and structural characteristics of different brain regions of carp was used for image registration of brain tissue slices.</div></div><div><h3>Results</h3><div>The results showed that the image segmentation and registration algorithm used in this study can meet the experimental requirements and conform to the structural characteristics of carp brain in spatial position.</div></div><div><h3>Comparison with existing methods</h3><div>The traditional brain mapping is drawn manually by humans, which is time-consuming and labor-intensive, and is limited to two dimensions. Image segmentation technology can automatically identify the contours of brain regions, saving manpower and promoting three-dimensional reconstruction and digital research.</div></div><div><h3>Conclusions</h3><div>This study laid a foundation for the construction of stereotaxic map of carp brain.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"428 ","pages":"Article 110670"},"PeriodicalIF":2.3,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145917662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-03 | DOI: 10.1016/j.jneumeth.2025.110671
Pedro Andrade, Asla Pitkänen
Background
Our aims were to generate a behavioral feature library for a more granular description of seizure semiology in rats with traumatic brain injury (TBI) and to compare the new approach with the Racine score.
New method
A library of 59 seizure-related behavioral features was generated by annotating 329 seizures in 31 rats with TBI monitored by high-resolution video-electroencephalography. Of the 329 seizures, 149 were early seizures, 85 were post-electrode-implantation seizures (6th post-injury month), and 95 were late seizures (7th post-injury month). Of the 59 behavioral features, 3 were pre-ictal, 43 ictal, and 13 post-ictal. Of the 43 ictal features, 7 related to consciousness, 5 to the mouth and whiskers, 2 to the eyes, 7 to the head, 2 to the ears, 6 to the paws, 12 to the body and tail, 2 to autonomic function, and 1 to wet-dog shakes.
Results
Early, post-implantation, and late seizures showed different behavioral phenotypes (p < 0.001). The number of behavioral features in post-electrode implantation and late seizures was greater than that in early seizures (p < 0.05). Behavioral features did not reliably differentiate transitions from pre-ictal to ictal or from ictal to post-ictal phases.
Comparison with existing methods
Of seizures with a Racine score of 0, 91 % of early, 45 % of post-electrode-implantation, and 18 % of late seizures showed up to 6–7 ictal-related behaviors.
Conclusions
The proposed feature library can be applied to harmonize data analysis and reporting and to train video-based seizure-detection algorithms, speeding up non-invasive, affordable epilepsy diagnosis and the assessment of treatment effects in TBI models.
{"title":"Feature library for behavioural characterization of early and late seizures in an experimental model of post-traumatic epilepsy","authors":"Pedro Andrade, Asla Pitkänen","doi":"10.1016/j.jneumeth.2025.110671","DOIUrl":"10.1016/j.jneumeth.2025.110671","url":null,"abstract":"<div><h3>Background</h3><div>To generate a behavioral feature library for a more granular description of seizure semiology in rats with traumatic brain injury (TBI). To compare the new approach to the Racine score.</div></div><div><h3>New method</h3><div>A library of 59 seizure-related behavioral features was generated by annotating 329 seizures in 31 rats with TBI, which were monitored using high-resolution video-electroencephalogram. Of the 329 seizures, 149 were early, 85 post-electrode implantation (6th post-injury month), and 95 late seizures (7th post-injury month). Of the 59 behavioral features, 3 were pre-ictal, 43 ictal, and 13 post-ictal. Of the 43 ictal features, 7 related to consciousness, 5 to mouth and whiskers, 2 to eyes, 7 to head, 2 to ears, 6 to paws, 12 to body and tail, 2 to autonomic function, and 1 to wet-dog shakes.</div></div><div><h3>Results</h3><div>Early, post-implantation, and late seizures showed different behavioral phenotypes (p < 0.001). The number of behavioral features in post-electrode implantation and late seizures was greater than that in early seizures (p < 0.05). Behavioral features did not reliably differentiate transitions from pre-ictal to ictal or from ictal to post-ictal phases.</div></div><div><h3>Comparison with existing methods</h3><div>Ninety-one percent of early, 45 % of post-electrode implantation, and 18 % of late seizures with a Racine score of 0 showed up to 6–7 ictal-related behaviors.</div></div><div><h3>Conclusions</h3><div>The Proposed feature list can be applied for the harmonization of data analysis and reporting, and training of video-based seizure detection algorithms to speed up non-invasive, affordable epilepsy diagnosis and assessment of treatment effects in TBI models.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"428 ","pages":"Article 110671"},"PeriodicalIF":2.3,"publicationDate":"2026-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145906204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-02 | DOI: 10.1016/j.jneumeth.2025.110667
Muzammil Kabier , Shamili Mariya Varghese , K.V. Athira , Mohamed A. Abdelgawad , Mohammed M. Ghoneim , Hailah M. Almohaimeed , Sunil Kumar , K.P. Sreekumar , Ashok R. Unni , Bijo Mathew
The open field test (OFT) is one of the most widely used preclinical models for assessing exploratory, locomotor, and anxiety-like behavior in rodents. OFT parameters are analyzed either manually or with automated systems. Although many software packages exist for OFT analysis, the steep learning curve and the cost of commercial software lead researchers to fall back on traditional approaches, and manual analysis is prone to observer bias, which can make behavior classification ambiguous. Automated systems are therefore preferable in a scientific context. Here we present OpenFieldAI, an open-source, Python-based software with an intuitive graphical user interface (GUI) that makes it beginner-friendly for researchers. The software uses the YOLO algorithm to detect and track rodents in the OFT apparatus, and when pre-trained models do not infer well enough, users can train models according to their own criteria. Single or multiple pre-recorded videos (mp4), or live video from an external webcam, can be given as input to calculate parameters such as speed, distance, time spent inside/outside a region of interest (ROI), and entries/exits, together with a box-centroid graph, heat map, and line path that give scientific insight into rodent neurobehavior. Automatic detection of the central/peripheral regions by min/max calculation in the 2D plane and manual drawing of ROIs make the tool easy to use and allow detailed information to be collected (a minimal sketch of this kind of metric computation follows below). To validate the software, we compared five parameters (total distance, speed, entries into the central region, and time spent in the central and peripheral regions) against readings from ANY-maze (commercial software) using the Pearson correlation coefficient. Correlations in all three groups were above 0.9, indicating the reliability of the new software. A user's guide is provided for proper use of the tool. OpenFieldAI can be downloaded and installed on Windows OS for free at: https://sourceforge.net/projects/openfieldai/
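The sketch below illustrates the kind of metric computation described above once per-frame centroids are available from a detector. It is not OpenFieldAI's actual code; the function names, the square-arena assumption, and the 50 % central region are hypothetical choices made for the example.

```python
import numpy as np

def oft_metrics(centroids, fps, arena_px, arena_cm, center_frac=0.5):
    """Basic open-field metrics from per-frame (x, y) centroids in pixels."""
    px_to_cm = arena_cm / arena_px
    steps = np.diff(centroids, axis=0) * px_to_cm
    total_distance = np.hypot(steps[:, 0], steps[:, 1]).sum()      # cm
    mean_speed = total_distance / (len(centroids) / fps)           # cm/s

    # Central square covering `center_frac` of the arena side, centered in the arena
    lo, hi = arena_px * (1 - center_frac) / 2, arena_px * (1 + center_frac) / 2
    in_center = ((centroids[:, 0] > lo) & (centroids[:, 0] < hi) &
                 (centroids[:, 1] > lo) & (centroids[:, 1] < hi))
    entries = int(np.sum(~in_center[:-1] & in_center[1:]))         # outside -> inside transitions
    return {"distance_cm": total_distance, "speed_cm_s": mean_speed,
            "time_center_s": in_center.sum() / fps,
            "time_periphery_s": (~in_center).sum() / fps,
            "center_entries": entries}

# Agreement with a reference system (e.g. per-animal total distances) can then be
# checked with np.corrcoef(ours, reference)[0, 1], i.e. the Pearson correlation.
```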
{"title":"OpenFieldAI – new open-source AI based software for tracking rodents and training open field test models","authors":"Muzammil Kabier , Shamili Mariya Varghese , K.V. Athira , Mohamed A. Abdelgawad , Mohammed M. Ghoneim , Hailah M. Almohaimeed , Sunil Kumar , K.P. Sreekumar , Ashok R. Unni , Bijo Mathew","doi":"10.1016/j.jneumeth.2025.110667","DOIUrl":"10.1016/j.jneumeth.2025.110667","url":null,"abstract":"<div><div>Open field test (OFT) is one of the widely used pre-clinical models for assessing the exploratory, locomotion and anxiety behavior of rodents. OFT parameters are often analyzed manually or by using automated systems. Although many softwares exist for OFT analysis, the steep learning curve and the cost of commercial software leads the researcher to opt for traditional approaches. Manual analysis is riddled with observer bias which can lead to ambiguity in behavior classification. This leads to the preference of automated system over manual observation in a scientific context. Herein we present <strong>OpenFieldAI</strong>, which is an open-source python-based software which has an intuitive graphical user interface (GUI) interface which makes it beginner friendly for researcher. The software utilizes YOLO algorithm for the detection and tracking of rodents in OFT apparatus. When pre-trained models are unable to infer sufficiently, the ability to train models according to user criteria can be helpful. Single/multiple pre-recorded videos (mp4) or live video with external web cam can be given as input to calculate the parameters like speed, distance, time spent in/out of region of interest (ROI) and entries/exits with the generation of box centroid graph, heat map and line path which are crucial information that give scientific insights into the rodents neurobehavior. The automatic detection of central/peripheral region by min/max calculation in 2D plane and manual drawing of ROI contribute to the ease in use and collection of intricate information. To validate the software, we compared the observation of 5 parameters – total distance, speed, entries into the central region and time spend in central and peripheral regions from the readings of ANY-maze (commercial software) using Pearson correlation coefficient. Correlation within all the three groups was found to be above 0.9 which indicates the reliability of the new software. Moreover, User’s guide has been provided for proper utilization of the tool. OpenFieldAI which can be downloaded and installed in Windows OS for free at: <span><span>https://sourceforge.net/projects/openfieldai/</span><svg><path></path></svg></span></div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"428 ","pages":"Article 110667"},"PeriodicalIF":2.3,"publicationDate":"2026-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145895820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-02 | DOI: 10.1016/j.jneumeth.2025.110664
Shuang Yu , Jing Zhao , Jing Ouyang , Xiaming Wang , Peng Kou , Keying Zhu , Ping Liu
Background
Mild cognitive impairment (MCI), a precursor to Alzheimer’s disease (AD), requires precise early diagnosis. Single-omics approaches often miss disease complexity, motivating integrative and interpretable solutions.
New method
We present the Attention-based Multimodal Graph Fusion Network (A-MGFN), which integrates clinical, genomic, epigenomic, and transcriptomic data via biologically curated features – Clinico-Genetic Risk Score (CGRS), Curated Epigenomic Signature (CES), and Differential Expression Signature (DES). Each modality is encoded by a modality-specific graph convolutional network to capture higher-order intra-modal interactions, and a downstream attention module adaptively weights modalities for fusion.
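As a rough illustration of the two ingredients named above, the numpy sketch below shows one graph-convolution step and a softmax-attention fusion of per-modality embeddings. It is not the A-MGFN architecture itself; the graph, feature dimensions, and scoring vector are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: ReLU(D^-1/2 (A + I) D^-1/2 · X · W)."""
    a_hat = adj + np.eye(adj.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

def attention_fuse(embeddings, score_w):
    """Score each modality embedding, softmax the scores, return the weighted sum."""
    scores = np.array([e @ score_w for e in embeddings])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    fused = sum(w * e for w, e in zip(weights, embeddings))
    return fused, weights

# Toy setup: 3 modalities, each a graph over the same 5 samples with 8-dim features
adj = np.triu((rng.random((5, 5)) > 0.6).astype(float), 1)
adj = adj + adj.T
per_modality = [gcn_layer(adj, rng.standard_normal((5, 8)),
                          rng.standard_normal((8, 4))) for _ in range(3)]
# Fuse sample 0's three modality embeddings into one representation
fused, attn = attention_fuse([h[0] for h in per_modality], score_w=rng.standard_normal(4))
```

The attention weights here play the same interpretive role as the modality weights discussed in the abstract: a larger weight means that modality contributes more to the fused representation.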
Results
On the ADNI cohort, A-MGFN achieved an AUC of 0.86 ± 0.03 and an F1-score of 0.88 ± 0.03. Ablation and attention-weight analyses confirmed multi-omics synergy, with CES providing the largest marginal performance gains.
Comparison with existing methods
A-MGFN outperformed traditional machine-learning baselines and Graph Convolutional Network (GCN) frameworks (MO-GCAN, AD-GCN), with 5–7 percentage-point gains in F1-score, attributable to attention-guided fusion rather than fixed or unified-graph schemes.
Conclusions
A-MGFN offers a robust and interpretable multi-omics framework for early MCI detection and provides insights into modality contributions that may inform clinical translation. Its design is extensible to other neurodegenerative disorders (e.g., Parkinson’s disease).
{"title":"Synergistic integration of clinical and multi-omics data for early MCI diagnosis using an attention-based graph fusion network","authors":"Shuang Yu , Jing Zhao , Jing Ouyang , Xiaming Wang , Peng Kou , Keying Zhu , Ping Liu","doi":"10.1016/j.jneumeth.2025.110664","DOIUrl":"10.1016/j.jneumeth.2025.110664","url":null,"abstract":"<div><h3>Background</h3><div>Mild cognitive impairment (MCI), a precursor to Alzheimer’s disease (AD), requires precise early diagnosis. Single-omics approaches often miss disease complexity, motivating integrative and interpretable solutions.</div></div><div><h3>New method</h3><div>We present the Attention-based Multimodal Graph Fusion Network (A-MGFN), which integrates clinical, genomic, epigenomic, and transcriptomic data via biologically curated features – Clinico-Genetic Risk Score (CGRS), Curated Epigenomic Signature (CES), and Differential Expression Signature (DES). Each modality is encoded by a modality-specific graph convolutional network to capture higher-order intra-modal interactions, and a downstream attention module adaptively weights modalities for fusion.</div></div><div><h3>Results</h3><div>On the ADNI cohort, A-MGFN achieved an AUC of 0.86 ± 0.03 and an F1-score of 0.88 ± 0.03. Ablation and attention-weight analyses confirmed multi-omics synergy, with CES providing the largest marginal performance gains.</div></div><div><h3>Comparison with existing methods</h3><div>A-MGFN outperformed traditional machine-learning baselines and Graph Convolutional Network (GCN) frameworks (MO-GCAN, AD-GCN), with 5–7 percentage-point gains in F1-score, attributable to attention-guided fusion rather than fixed or unified-graph schemes.</div></div><div><h3>Conclusions</h3><div>A-MGFN offers a robust and interpretable multi-omics framework for early MCI detection and provides insights into modality contributions that may inform clinical translation. Its design is extensible to other neurodegenerative disorders (e.g., Parkinson’s disease).</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"428 ","pages":"Article 110664"},"PeriodicalIF":2.3,"publicationDate":"2026-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145900618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01 | DOI: 10.1016/j.jneumeth.2025.110669
Pasquale Salerno , Mirko Job , Matteo Iurato , Marco Testa , Marco Bove , Ambra Bisio
Background
One of the most common ways to assess the sense of position is the Joint Position Reproduction (JPR) task, where a person reproduces a memorized joint position. While useful, this method is limited because it focuses on static positions and does not fully reflect the dynamic nature of real movements.
New methods
This study investigated the test-retest reliability of the Dynamic JPR (D-JPR) task during concentric and eccentric muscle contractions. Twenty-eight participants were recruited; each received a tactile stimulus indicating the position cue at the Initial (INI), Intermediate (INT), or Final (FIN) phase of the movement, delivered during either the concentric or the eccentric contraction. After the movement, participants reproduced the position at which they had received the stimulus. Angular error (AE) was analysed. The Intraclass Correlation Coefficient (ICC) was used to assess relative reliability; the Standard Error of Measurement (SEM) and Bias were used to assess absolute reliability.
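For illustration, a minimal sketch of how these indices can be computed from an n-participants-by-2-sessions matrix using the usual two-way ANOVA decomposition. The data, the choice of ICC(2,1), and the SEM definition here are assumptions made for the example, not the authors' exact analysis.

```python
import numpy as np

def test_retest_reliability(x):
    """Reliability indices for an (n participants x k sessions) matrix of angular errors."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()      # between participants
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()      # between sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows, ms_cols = ss_rows / (n - 1), ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    # ICC(2,1): two-way random effects, absolute agreement, single measurement
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    sem = x.std(ddof=1) * np.sqrt(1.0 - icc)                 # one common SEM definition
    bias = (x[:, 0] - x[:, 1]).mean() if k == 2 else np.nan  # Bland-Altman bias (test - retest)
    return icc, sem, bias

# Hypothetical angular errors (degrees) for 6 participants, test vs. retest
ae = np.array([[2.1, 2.4], [3.0, 2.7], [1.8, 2.0], [4.2, 3.9], [2.9, 3.1], [3.5, 3.2]])
icc, sem, bias = test_retest_reliability(ae)
```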
Results
The relative reliability was good in most conditions (ICC > 0.75), with moderate values only for some phases. Absolute reliability showed a variable SEM between conditions, with higher values in the initial eccentric contraction phase. The Bland-Altman plots showed low bias between test and retest. The best reliability was obtained by averaging movement phases and muscle contractions (ICC = 0.89, SEM = 1.35°, Bias = 0.91°).
Comparison with existing methods
The D-JPR provides a more suitable way to assess joint position sense during movement compared to existing methods.
Conclusion
The D-JPR task is a reliable method for assessing joint position sense in dynamic conditions.
{"title":"Assessing position sense in motion: Reliability of a dynamic joint position reproduction test","authors":"Pasquale Salerno , Mirko Job , Matteo Iurato , Marco Testa , Marco Bove , Ambra Bisio","doi":"10.1016/j.jneumeth.2025.110669","DOIUrl":"10.1016/j.jneumeth.2025.110669","url":null,"abstract":"<div><h3>Background</h3><div>One of the most common ways to assess the sense of position is the Joint Position Reproduction (JPR) task, where a person reproduces a memorized joint position. While useful, this method is limited because it focuses on static positions and does not fully reflect the dynamic nature of real movements.</div></div><div><h3>New methods</h3><div>This study investigated the test-retest reliability of the Dynamic JPR (<span>D</span>-JPR) task, during Concentric and Eccentric muscle contractions. Twenty-eight participants were recruited and received a tactile stimulus indicating the position cue at Initial (INI), Intermediate (INT), and Final (FIN) phases of movements, during either the concentric or eccentric phases. After the movement, they replicated the position where they received the stimulus. Angular error (AE) was analysed. Intraclass Correlation Coefficient (ICC) was used to assess relative reliability; Standard Error of Measurement (SEM) and Bias were used to assess absolute reliability.</div></div><div><h3>Results</h3><div>The relative reliability was good in most conditions (ICC > 0.75), with moderate values only for some phases. Absolute reliability showed a variable SEM between conditions, with higher values in the initial eccentric contraction phase. The Bland-Altman plots showed low bias between test and retest. The best reliability was obtained by averaging movement phases and muscle contractions (ICC = 0.89, SEM = 1.35°, Bias = 0.91°).</div></div><div><h3>Comparison with existing methods</h3><div>The <span>D</span>-JPR provides a more suitable way to assess joint position sense during movement compared to existing methods.</div></div><div><h3>Conclusion</h3><div>The <span>D</span>-JPR task is a reliable method for assessing joint position sense in dynamic conditions.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"428 ","pages":"Article 110669"},"PeriodicalIF":2.3,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145896631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-18 | DOI: 10.1016/j.jneumeth.2025.110656
Konrad Kohnen , Peter Eipert , Laura Budde , Oliver Schmitt
Background:
Digital brain atlases are indispensable for primate connectomics, providing precise stereotactic references that enable reproducible mapping of structural and functional data.
New method:
We provide a fully digitized, bilaterally complete 3D reconstruction of the Paxinos et al. rhesus macaque atlas, implemented within the existing neuroVIISAS platform. The contribution of this work is the creation of a reusable, stereotactically embedded resource, rather than the introduction of new computational methods. Using polygon-based segmentation, we systematically digitized 1722 anatomical contours from the Paxinos et al. (2009) stereotactic atlas of the rhesus monkey, including cortical, subcortical, and non-neuronal regions, and embedded them into a stereotactic coordinate system. Mirroring procedures ensured full bilateral representation, while volumetric and surface calculations yielded quantitative benchmarks spanning nuclei of less than 0.1 mm3 to cortical regions exceeding 2000 mm3.
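As a rough illustration of the geometry behind the contour mirroring and the volumetric benchmarks, here is a minimal sketch with hypothetical contours and section spacing; it is not the neuroVIISAS implementation.

```python
import numpy as np

def mirror_contour(contour, midline_x=0.0):
    """Reflect a polygon contour across the sagittal midline x = midline_x."""
    mirrored = contour.copy()
    mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]
    return mirrored[::-1]                      # reverse to keep a consistent winding order

def polygon_area(contour):
    """Shoelace formula for the area enclosed by a closed 2-D contour (mm^2)."""
    x, y = contour[:, 0], contour[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def region_volume(contours_per_section, spacing_mm):
    """Cavalieri-style estimate: summed section areas times section spacing (mm^3)."""
    return sum(polygon_area(c) for c in contours_per_section) * spacing_mm

# Hypothetical nucleus outlined on two adjacent sections spaced 0.5 mm apart
c1 = np.array([[1.0, 0.0], [2.0, 0.5], [1.8, 1.5], [0.8, 1.2]])
c2 = c1 + np.array([0.05, 0.0])                # slightly shifted outline on the next section
left_volume = region_volume([c1, c2], 0.5)
right_contours = [mirror_contour(c) for c in (c1, c2)]   # bilateral counterpart
```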
Results:
The atlas supports advanced visualization in 2D and 3D, including interactive rotation, transparency, and connectivity overlays, facilitating structural exploration and connectome simulations. Integration with neuroVIISAS enables hierarchical ontologies, quantitative analyses, and direct interfacing with simulation environments.
Comparison with existing methods:
Validation against stereological data and comparison with independent resources (SARM, ONPRC18) confirmed the reliability of delineations while highlighting methodological differences across atlases. Beyond structural applications, functional connectivity studies, such as gradient analyses in macaques (Xu et al., 2020), demonstrate how atlas-based frameworks bridge species by systematically linking macaque organization to human cortical architecture.
Conclusion:
Together, these methodological advances establish a reproducible, bilaterally complete, and volumetrically validated stereotactic reference for the rhesus monkey brain, enhancing both experimental design and translational connectomics.
{"title":"neuroVIISAS-based construction of a stereotactic rhesus monkey brain atlas for connectome research","authors":"Konrad Kohnen , Peter Eipert , Laura Budde , Oliver Schmitt","doi":"10.1016/j.jneumeth.2025.110656","DOIUrl":"10.1016/j.jneumeth.2025.110656","url":null,"abstract":"<div><h3>Background:</h3><div>Digital brain atlases are indispensable for primate connectomics, providing precise stereotactic references that enable reproducible mapping of structural and functional data.</div></div><div><h3>New method:</h3><div>We provide a fully digitized, bilaterally complete 3D reconstruction of the Paxinos et al. rhesus macaque atlas, implemented within the existing <em>neuroVIISAS</em> platform. The contribution of this work is the creation of a reusable, stereotactically embedded resource, rather than the introduction of new computational methods. Using polygon-based segmentation, we systematically digitized 1722 anatomical contours from the Paxinos et al. (2009) stereotactic atlas of the rhesus monkey, including cortical, subcortical, and non-neuronal regions, and embedded them into a stereotactic coordinate system. Mirroring procedures ensured full bilateral representation, while volumetric and surface calculations yielded quantitative benchmarks spanning nuclei of less than 0.1 mm<sup>3</sup> to cortical regions exceeding 2000 mm<sup>3</sup>.</div></div><div><h3>Results:</h3><div>The atlas supports advanced visualization in 2D and 3D, including interactive rotation, transparency, and connectivity overlays, facilitating structural exploration and connectome simulations. Integration with <em>neuroVIISAS</em> enables hierarchical ontologies, quantitative analyses, and direct interfacing with simulation environments.</div></div><div><h3>Comparison with existing methods:</h3><div>Validation against stereological data and comparison with independent resources (SARM, ONPRC18) confirmed the reliability of delineations while highlighting methodological differences across atlases. Beyond structural applications, functional connectivity studies, such as gradient analyses in macaques (Xu et al., 2020), demonstrate how atlas-based frameworks bridge species by systematically linking macaque organization to human cortical architecture.</div></div><div><h3>Conclusion:</h3><div>Together, these methodological advances establish a reproducible, bilaterally complete, and volumetrically validated stereotactic reference for the rhesus monkey brain, enhancing both experimental design and translational connectomics.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"427 ","pages":"Article 110656"},"PeriodicalIF":2.3,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145797729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-18 | DOI: 10.1016/j.jneumeth.2025.110663
Samuel Ehrlich , Alexandra D. VandeLoo , Mohamed Badawy , Mercedes M. Gonzalez , Max Stockslager , Aimei Yang , Sapna Sinha , Shahar Bracha , Demian Park , Benjamin Magondu , Bo Yang , Edward S. Boyden , Craig R. Forest
Background:
Our ability to engineer opsins is limited by an incomplete understanding of how sequence variations influence function. The vastness of opsin sequence space makes systematic exploration difficult.
New method:
In recognition of the need for datasets linking opsin genetic sequence to function, we pursued a novel method for screening channelrhodopsins to obtain these datasets. In this method, we integrate advances in robotic intracellular electrophysiology (Patch) to measure optogenetic properties (Excite), harvest individual cells of interest (Pick) and subsequently sequence them (Sequence), thus tying sequence to function.
Results:
We used this method to sequence more than 50 cells with associated functional characterization. We further demonstrate the utility of this method with experiments on heterogeneous populations of known opsins and single point mutations of a known opsin. Of these point mutations, we found C160W ablates ChrimsonR’s response to light.
Conclusion and comparison to existing methods:
Compared to traditional manual patch clamp screening, which is labor-intensive and low-throughput, this approach enables more efficient, standardized, and scalable characterization of large opsin libraries. This method can enable opsin engineering with large datasets to increase our understanding of opsin sequence–function relationships.
{"title":"Screening channelrhodopsins using robotic intracellular electrophysiology and single cell sequencing","authors":"Samuel Ehrlich , Alexandra D. VandeLoo , Mohamed Badawy , Mercedes M. Gonzalez , Max Stockslager , Aimei Yang , Sapna Sinha , Shahar Bracha , Demian Park , Benjamin Magondu , Bo Yang , Edward S. Boyden , Craig R. Forest","doi":"10.1016/j.jneumeth.2025.110663","DOIUrl":"10.1016/j.jneumeth.2025.110663","url":null,"abstract":"<div><h3>Background:</h3><div>Our ability to engineer opsins is limited by an incomplete understanding of how sequence variations influence function. The vastness of opsin sequence space makes systematic exploration difficult.</div></div><div><h3>New method:</h3><div>In recognition of the need for datasets linking opsin genetic sequence to function, we pursued a novel method for screening channelrhodopsins to obtain these datasets. In this method, we integrate advances in robotic intracellular electrophysiology (<u>P</u>atch) to measure optogenetic properties (<u>E</u>xcite), harvest individual cells of interest (<u>P</u>ick) and subsequently sequence them (<u>S</u>equence), thus tying sequence to function.</div></div><div><h3>Results:</h3><div>We used this method to sequence more than 50 cells with associated functional characterization. We further demonstrate the utility of this method with experiments on heterogeneous populations of known opsins and single point mutations of a known opsin. Of these point mutations, we found C160W ablates ChrimsonR’s response to light.</div></div><div><h3>Conclusion and comparison to existing methods:</h3><div>Compared to traditional manual patch clamp screening, which is labor-intensive and low-throughput, this approach enables more efficient, standardized, and scalable characterization of large opsin libraries. This method can enable opsin engineering with large datasets to increase our understanding of opsin sequence–function relationships.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"428 ","pages":"Article 110663"},"PeriodicalIF":2.3,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145800610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}