Pre-AttentiveGaze: gaze-based authentication dataset with momentary visual interactions
Junryeol Jeon, Yeo-Gyeong Noh, JooYeong Kim, Jin-Hyuk Hong
Pub Date: 2025-02-13 | DOI: 10.1038/s41597-025-04538-3 | Scientific Data 12(1): 263
This manuscript presents the Pre-AttentiveGaze dataset. A defining characteristic of gaze-based authentication is the need for a rapid response. In this study, we constructed a dataset for identifying individuals through eye movements by inducing "pre-attentive processing" in response to a given gaze stimulus within a very short time. A total of 76,840 eye movement samples were collected from 34 participants across five sessions. From the dataset, we extracted the gaze features proposed in previous studies, pre-processed them, and validated the dataset by applying machine learning models. This study demonstrates the efficacy of the dataset and illustrates its potential for gaze-based authentication using visual stimuli that elicit pre-attentive processing.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11825865/pdf/
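The abstract does not enumerate the gaze features drawn from prior work. As an illustrative sketch of one common family, velocity-based fixation/saccade statistics (the function name, threshold, and toy trace below are assumptions, not the authors' pipeline):

```python
import numpy as np

def ivt_features(t, x, y, vel_threshold=50.0):
    """Velocity-threshold (I-VT) style summary of a gaze trace.

    t    : timestamps in seconds
    x, y : gaze coordinates in degrees of visual angle
    vel_threshold : deg/s; samples faster than this count as saccadic
    """
    dt = np.diff(t)
    vel = np.hypot(np.diff(x), np.diff(y)) / dt  # point-to-point speed
    is_saccade = vel > vel_threshold
    return {
        "mean_velocity": float(vel.mean()),
        "peak_velocity": float(vel.max()),
        "saccade_ratio": float(is_saccade.mean()),
    }

# Toy trace: a steady fixation followed by a fast 5-degree shift
t = np.linspace(0.0, 0.1, 11)
x = np.array([0.0] * 5 + [1.0, 2.5, 4.0, 5.0, 5.0, 5.0])
features = ivt_features(t, x, np.zeros_like(x))
print(features)
```

Per-sample features like these are typically aggregated per stimulus presentation before being fed to the classification models the abstract mentions.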
A paired dataset of multi-modal MRI at 3 Tesla and 7 Tesla with manual hippocampal subfield segmentations
Lei Chu, Baoqiang Ma, Xiaoxi Dong, Yirong He, Tongtong Che, Debin Zeng, Zihao Zhang, Shuyu Li
Pub Date: 2025-02-13 | DOI: 10.1038/s41597-025-04586-9 | Scientific Data 12(1): 260
The hippocampus plays a critical role in memory and is prone to neurodegenerative diseases. Its complex structure and distinct subfields pose challenges for automatic segmentation in 3T MRI because of its limited resolution and contrast. While 7T MRI offers superior anatomical detail and better gray-white matter contrast, aiding clearer differentiation of hippocampal structures, its use is restricted by high costs. To bridge this gap, algorithms synthesizing 7T-like images from 3T scans are being developed, requiring paired datasets for training. However, the scarcity of such high-quality paired datasets, particularly those with manual hippocampal subfield segmentations as ground truth, hinders progress. Herein, we introduce a dataset comprising paired 3T and 7T MRI scans from 20 healthy volunteers, with manual hippocampal subfield annotations on 7T T2-weighted images. This dataset is designed to support the development and evaluation of both 3T-to-7T MR image synthesis models and automated hippocampal segmentation algorithms on 3T images. We assessed image quality using MRIQC. The dataset is freely accessible on Figshare+.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11825668/pdf/
Author Correction: Curating global datasets of structural linguistic features for independence
Anna Graff, Natalia Chousou-Polydouri, David Inman, Hedvig Skirgård, Marc Lischka, Taras Zakharko, Chiara Barbieri, Balthasar Bickel
Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04592-x | Scientific Data 12(1): 259
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822050/pdf/
A high resolution, gridded product for vapor pressure deficit using Daymet
Nicholas K Corak, Peter E Thornton, Lauren E L Lowman
Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04544-5 | Scientific Data 12(1): 256
Vapor pressure deficit (VPD) is a critical variable in assessing drought conditions and evaluating plant water stress. Gridded products of global and regional VPD are not freely available from satellite remote sensing, model reanalysis, or ground observation datasets. We present two versions of the first gridded VPD product for the Continental US and parts of Northern Mexico and Southern Canada (CONUS+) at a 1 km spatial resolution and daily time step. We derived VPD from Daymet maximum daily temperature and average daily vapor pressure and scaled the estimates based on (1) climate, determined by the Köppen-Geiger classifications, and (2) land cover, determined by the International Geosphere-Biosphere Programme. Ground-based VPD data from 253 AmeriFlux sites representing different climate and land cover classifications were used to improve the Daymet-derived VPD estimates for every pixel in the CONUS+ grid to produce the final datasets. We evaluated the Daymet-derived VPD against independent observations and reanalysis data. The CONUS+ VPD datasets will aid in investigating disturbances, including drought and wildfire, and in informing land management strategies.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822033/pdf/
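The core derivation of VPD from temperature and vapor pressure follows the standard definition VPD = e_s(T) - e_a. A minimal sketch, assuming the common Tetens approximation for saturation vapor pressure and Daymet's units (tmax in degrees C, vapor pressure in Pa); the paper's AmeriFlux-based scaling step is not reproduced:

```python
import numpy as np

def saturation_vapor_pressure_pa(t_celsius):
    """Tetens approximation of saturation vapor pressure (Pa) over water."""
    return 610.8 * np.exp(17.27 * t_celsius / (t_celsius + 237.3))

def vpd_pa(tmax_celsius, vp_pa):
    """VPD = saturation vapor pressure at temperature minus actual vapor
    pressure, clipped at zero (actual vapor pressure cannot exceed
    saturation in a physically consistent estimate)."""
    return np.maximum(saturation_vapor_pressure_pa(tmax_celsius) - vp_pa, 0.0)

# A warm day (tmax = 30 C) with actual vapor pressure of 1500 Pa
print(vpd_pa(30.0, 1500.0))  # on the order of 2.7 kPa
```

Because both inputs are gridded Daymet fields, the same NumPy expression applies elementwise to whole daily rasters.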
OHID-1: A New Large Hyperspectral Image Dataset for Multi-Classification
Ashish Mani, Sergey Gorbachev, Jun Yan, Abhishek Dixit, Xi Shi, Long Li, Yuanyuan Sun, Xin Chen, Jiaqi Wu, Jianwen Deng, Xiaohua Jiang, Dong Yue, Chunxia Dou, Xiangsen Wei, Jiawei Huang
Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04542-7 | Scientific Data 12(1): 251
In the context of the increasing popularity of Big Data paradigms and deep learning techniques, we introduce a novel large-scale hyperspectral imagery dataset, termed Orbita Hyperspectral Images Dataset-1 (OHID-1). It comprises 10 hyperspectral images sourced from diverse regions of Zhuhai City, China, each with 32 spectral bands at a spatial resolution of 10 meters, spanning a spectral range of 400-1000 nanometers. The core objective of this dataset is to improve the performance of hyperspectral image classification and to pose substantial challenges to existing hyperspectral image processing algorithms. Compared to traditional open-source hyperspectral datasets and recently released large-scale hyperspectral datasets, OHID-1 presents more intricate features and a higher degree of classification complexity by providing seven class labels over a wider area. Furthermore, this study demonstrates the utility of OHID-1 by testing it with selected hyperspectral classification algorithms. This dataset will be useful for advancing cutting-edge research in urban sustainable development science and land use analysis. We invite the scientific community to devise novel methodologies for an in-depth analysis of these data.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822079/pdf/
E-POSE: A Large Scale Event Camera Dataset for Object Pose Estimation
Oussama Abdul Hay, Xiaoqian Huang, Abdulla Ayyad, Eslam Sherif, Randa Almadhoun, Yusra Abdulrahman, Lakmal Seneviratne, Abdulqader Abusafieh, Yahya Zweiri
Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04536-5 | Scientific Data 12(1): 245
Robotic automation requires precise object pose estimation for effective grasping and manipulation. With their high dynamic range and temporal resolution, event-based cameras offer a promising alternative to conventional cameras. Despite their success in tracking, segmentation, classification, obstacle avoidance, and navigation, their use for 6D object pose estimation is relatively unexplored due to the lack of datasets. This paper introduces an extensive dataset based on Yale-CMU-Berkeley (YCB) objects, including event packets with associated poses, spike images, masks, 3D bounding box coordinates, segmented events, and a 3-channel event image for validation. Featuring 13 YCB objects, the dataset covers both cluttered and uncluttered scenes across 18 scenarios with varying speeds and illumination. It contains 306 sequences, totaling over an hour and around 1.5 billion events, making it the largest and most diverse event-based dataset for object pose estimation. This resource aims to support researchers in developing and testing object pose estimation algorithms and solutions.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822054/pdf/
Cliopatria - A geospatial database of world-wide political entities from 3400BCE to 2024CE
James S Bennett, Erin Mutch, Andrew Tollefson, Ed Chalstrey, Majid Benam, Enrico Cioni, Jenny Reddish, Jakob Zsambok, Jill Levine, C Justin Cook, Pieter Francois, Daniel Hoyer, Peter Turchin
Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04516-9 | Scientific Data 12(1): 247
The scientific understanding of the complex dynamics of global history - from the rise and spread of states to their declines and falls, from their peaceful interactions with economic or diplomatic exchanges to violent confrontations - requires, at its core, a consistent and explicit encoding of historical political entities, their locations, extents and durations. Numerous attempts have been made to produce digital geographical compendia of polities with different time depths and resolutions. Most have been limited in scope, and many of the more comprehensive geospatial datasets must either be licensed or are stored in proprietary formats, making access for scholarly analysis difficult. To address these issues we have developed Cliopatria, a comprehensive open-source geospatial dataset of worldwide states from 3400 BCE to 2024 CE. Presently it comprises over 1600 political entities sampled at varying timesteps and spatial scales. Here, we discuss its construction, its scope, and its current limitations.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822181/pdf/
Multimodal dataset for indoor 3D drone tracking
Jakub Rosner, Tomasz Krzeszowski, Adam Świtoński, Henryk Josiński, Wojciech Lindenheim-Locher, Michał Zieliński, Grzegorz Paleta, Marcin Paszkuta, Konrad Wojciechowski
Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04521-y | Scientific Data 12(1): 257
The subject of the paper is a multimodal dataset (DPJAIT) containing drone flights prepared in two variants: simulation-based, and with real measurements captured by the gold-standard Vicon system. It contains video sequences registered by a synchronized and calibrated multicamera set, as well as reference 3D drone positions at successive time instants obtained from the simulation procedure or via motion capture. Moreover, there are scenarios with ArUco markers in the scene at known 3D positions, and RGB cameras mounted on drones for which internal parameters are given. Three applications of 3D tracking are demonstrated, based on: the overdetermined set of linear equations describing camera projection; particle swarm optimization; and determination of the extrinsic matrix of the drone-mounted camera from recognized ArUco markers.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11821834/pdf/
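The first demonstrated tracking approach rests on an overdetermined linear system derived from camera projection. A generic DLT-style triangulation sketch, illustrative only and not the authors' implementation (the camera matrices below are synthetic):

```python
import numpy as np

def triangulate(projections, pixels):
    """Triangulate one 3D point from two or more calibrated views.

    projections : list of 3x4 camera projection matrices P = K [R | t]
    pixels      : list of (u, v) image observations of the same point

    Each view contributes two linear equations in the homogeneous point;
    with more than two views the system is overdetermined and solved in
    the least-squares sense via SVD (null-space of the stacked matrix).
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

def project(P, X):
    """Pinhole projection of a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: two cameras observing the point (1, 2, 10)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted along x
X_true = np.array([1.0, 2.0, 10.0])
print(triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)]))
```

With the multicamera set synchronized and calibrated as described, each additional view simply appends two more rows to the stacked system, which generally improves robustness to pixel noise.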
A comprehensive cell atlas of fall armyworm (Spodoptera frugiperda) larval gut and fat body via snRNA-Seq
Chao Sun, Yongqi Shao, Junaid Iqbal
Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04520-z | Scientific Data 12(1): 250
The midgut and fat body of insects control key physiological processes, including growth, digestion, metabolism, and stress response. Single-nucleus RNA sequencing (snRNA-seq) is a promising way to reveal organ complexity at the cellular level, yet data for lepidopteran insects are lacking. We utilized snRNA-seq to assess cellular diversity in the midgut and fat body of Spodoptera frugiperda. Our study identified 20 distinct clusters in the midgut, including enterocytes, enteroendocrine cells, stem-like cells, and muscle cells, and 27 clusters in the fat body, including adipocytes, hemocytes, and epithelial cells. This dataset, containing all identified cell types in the midgut and fat body, is valuable for characterizing the cellular composition of these organs and uncovering new cell-specific biomarkers. This cellular atlas enhances our understanding of the cellular heterogeneity of the fat body and midgut, serving as a basis for future functional and comparative analyses. As the first snRNA-seq study on the midgut and fat body of S. frugiperda, it will also support future research, contribute to lepidopteran studies, and aid in developing targeted pest control strategies.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822134/pdf/
Affording reusable data: recommendations for researchers from a data-intensive project
Gorka Fraga-González, Hester van de Wiel, Francesco Garassino, Willy Kuo, Diane de Zélicourt, Vartan Kurtcuoglu, Leonhard Held, Eva Furrer
Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04565-0 | Scientific Data 12(1): 258
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11821812/pdf/