
Latest publications in Scientific Data

Pre-AttentiveGaze: gaze-based authentication dataset with momentary visual interactions.
IF 5.8 | CAS Zone 2, Multidisciplinary Journal | Q1 MULTIDISCIPLINARY SCIENCES | Pub Date: 2025-02-13 | DOI: 10.1038/s41597-025-04538-3
Junryeol Jeon, Yeo-Gyeong Noh, JooYeong Kim, Jin-Hyuk Hong

This manuscript presents the Pre-AttentiveGaze dataset. One of the defining characteristics of gaze-based authentication is the necessity for a rapid response. In this study, we constructed a dataset for identifying individuals through eye movements by inducing "pre-attentive processing" in response to a given gaze stimulus within a very short time. A total of 76,840 eye movement samples were collected from 34 participants across five sessions. From the dataset, we extracted the gaze features proposed in previous studies, pre-processed them, and validated the dataset by applying machine learning models. This study demonstrates the efficacy of the dataset and illustrates its potential for gaze-based authentication using visual stimuli that elicit pre-attentive processing.
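As a hedged illustration of the validation step described above (feature extraction followed by machine-learning models), the sketch below trains an off-the-shelf classifier to identify participants from pre-extracted gaze features. The file name, column names, and classifier choice are assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: scoring person identification from a table of gaze features.
# Assumes a CSV where each row is one eye-movement sample, a "participant" column
# holds the identity label, and the remaining columns are pre-extracted gaze features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("preattentive_gaze_features.csv")   # hypothetical file name
y = df["participant"]                                 # identity labels (34 participants)
X = df.drop(columns=["participant"])                  # gaze feature vectors

# Standardize features, then estimate identification accuracy with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean identification accuracy: {scores.mean():.3f}")
```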

{"title":"Pre-AttentiveGaze: gaze-based authentication dataset with momentary visual interactions.","authors":"Junryeol Jeon, Yeo-Gyeong Noh, JooYeong Kim, Jin-Hyuk Hong","doi":"10.1038/s41597-025-04538-3","DOIUrl":"10.1038/s41597-025-04538-3","url":null,"abstract":"<p><p>This manuscript presents a Pre-AttentiveGaze dataset. One of the defining characteristics of gaze-based authentication is the necessity for a rapid response. In this study, we constructed a dataset for identifying individuals through eye movements by inducing \"pre-attentive processing\" in response to a given gaze stimulus in a very short time. A total of 76,840 eye movement samples were collected from 34 participants across five sessions. From the dataset, we extracted the gaze features proposed in previous studies, pre-processed them, and validated the dataset by applying machine learning models. This study demonstrates the efficacy of the dataset and illustrates its potential for use in gaze-based authentication of visual stimuli that elicit pre-attentive processing.</p>","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"263"},"PeriodicalIF":5.8,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11825865/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143415101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A paired dataset of multi-modal MRI at 3 Tesla and 7 Tesla with manual hippocampal subfield segmentations.
IF 5.8 | CAS Zone 2, Multidisciplinary Journal | Q1 MULTIDISCIPLINARY SCIENCES | Pub Date: 2025-02-13 | DOI: 10.1038/s41597-025-04586-9
Lei Chu, Baoqiang Ma, Xiaoxi Dong, Yirong He, Tongtong Che, Debin Zeng, Zihao Zhang, Shuyu Li

The hippocampus plays a critical role in memory and is prone to neurodegenerative diseases. Its complex structure and distinct subfields pose challenges for automatic segmentation in 3T MRI because of its limited resolution and contrast. While 7T MRI offers superior anatomical detail and better gray-white matter contrast, aiding in clearer differentiation of hippocampal structures, its use is restricted by high costs. To bridge this gap, algorithms synthesizing 7T-like images from 3T scans are being developed, requiring paired datasets for training. However, the scarcity of such high-quality paired datasets, particularly those with manual hippocampal subfield segmentations as ground truth, hinders progress. Herein, we introduce a dataset comprising paired 3T and 7T MRI scans from 20 healthy volunteers, with manual hippocampal subfield annotations on 7T T2-weighted images. This dataset is designed to support the development and evaluation of both 3T-to-7T MR image synthesis models and automated hippocampal segmentation algorithms on 3T images. We assessed image quality using MRIQC. The dataset is freely accessible on Figshare+.
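As a minimal sketch of how such a paired dataset might be consumed by a 3T-to-7T synthesis workflow, the snippet below loads one assumed 3T/7T pair with nibabel and checks that the volumes share a grid. The file names and BIDS-like naming are assumptions; the released data on Figshare+ define their own layout.

```python
# Hypothetical sketch: loading one paired 3T/7T volume for a 3T-to-7T synthesis pipeline.
import nibabel as nib
import numpy as np

img_3t = nib.load("sub-01_acq-3T_T2w.nii.gz")   # assumed BIDS-like file name
img_7t = nib.load("sub-01_acq-7T_T2w.nii.gz")

vol_3t = img_3t.get_fdata(dtype=np.float32)
vol_7t = img_7t.get_fdata(dtype=np.float32)

# A synthesis model typically expects the pair to be co-registered and resampled to a
# common grid, so shapes and affines should match before training.
print(vol_3t.shape, vol_7t.shape)
print(np.allclose(img_3t.affine, img_7t.affine, atol=1e-3))
```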

{"title":"A paired dataset of multi-modal MRI at 3 Tesla and 7 Tesla with manual hippocampal subfield segmentations.","authors":"Lei Chu, Baoqiang Ma, Xiaoxi Dong, Yirong He, Tongtong Che, Debin Zeng, Zihao Zhang, Shuyu Li","doi":"10.1038/s41597-025-04586-9","DOIUrl":"10.1038/s41597-025-04586-9","url":null,"abstract":"<p><p>The hippocampus plays a critical role in memory and is prone to neural degenerative diseases. Its complex structure and distinct subfields pose challenges for automatic segmentation in 3 T MRI because of its limited resolution and contrast. While 7 T MRI offers superior anatomical details and better gray-white matter contrast, aiding in clearer differentiation of hippocampal structures, its use is restricted by high costs. To bridge this gap, algorithms synthesizing 7T-like images from 3 T scans are being developed, requiring paired datasets for training. However, the scarcity of such high-quality paired datasets, particularly those with manual hippocampal subfield segmentations as ground truth, hinders progress. Herein, we introduce a dataset comprising paired 3 T and 7 T MRI scans from 20 healthy volunteers, with manual hippocampal subfield annotations on 7 T T2-weighted images. This dataset is designed to support the development and evaluation of both 3T-to-7T MR image synthesis models and automated hippocampal segmentation algorithms on 3 T images. We assessed the image quality using MRIQC. The dataset is freely accessible on the Figshare+.</p>","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"260"},"PeriodicalIF":5.8,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11825668/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143415098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Author Correction: Curating global datasets of structural linguistic features for independence.
IF 5.8 | CAS Zone 2, Multidisciplinary Journal | Q1 MULTIDISCIPLINARY SCIENCES | Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04592-x
Anna Graff, Natalia Chousou-Polydouri, David Inman, Hedvig Skirgård, Marc Lischka, Taras Zakharko, Chiara Barbieri, Balthasar Bickel
{"title":"Author Correction: Curating global datasets of structural linguistic features for independence.","authors":"Anna Graff, Natalia Chousou-Polydouri, David Inman, Hedvig Skirgård, Marc Lischka, Taras Zakharko, Chiara Barbieri, Balthasar Bickel","doi":"10.1038/s41597-025-04592-x","DOIUrl":"10.1038/s41597-025-04592-x","url":null,"abstract":"","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"259"},"PeriodicalIF":5.8,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822050/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143410260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A high resolution, gridded product for vapor pressure deficit using Daymet.
IF 5.8 | CAS Zone 2, Multidisciplinary Journal | Q1 MULTIDISCIPLINARY SCIENCES | Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04544-5
Nicholas K Corak, Peter E Thornton, Lauren E L Lowman

Vapor pressure deficit (VPD) is a critical variable in assessing drought conditions and evaluating plant water stress. Gridded products of global and regional VPD are not freely available from satellite remote sensing, model reanalysis, or ground observation datasets. We present two versions of the first gridded VPD product for the Continental US and parts of Northern Mexico and Southern Canada (CONUS+) at a 1 km spatial resolution and daily time step. We derived VPD from Daymet maximum daily temperature and average daily vapor pressure and scaled the estimates based on (1) climate, determined by the Köppen-Geiger classifications, and (2) land cover, determined by the International Geosphere-Biosphere Programme. Ground-based VPD data from 253 AmeriFlux sites representing different climate and land cover classifications were used to improve the Daymet-derived VPD estimates for every pixel in the CONUS+ grid to produce the final datasets. We evaluated the Daymet-derived VPD against independent observations and reanalysis data. The CONUS+ VPD datasets will aid in investigating disturbances, including drought and wildfire, and in informing land management strategies.
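For readers unfamiliar with the quantity itself, a basic VPD calculation from Daymet-style inputs looks like the sketch below: saturation vapor pressure at the daily maximum temperature (here via the Tetens approximation) minus the average daily vapor pressure. This is an illustrative formula, not the authors' exact derivation or bias-correction procedure.

```python
# Illustrative sketch, assuming daily maximum temperature in °C and average daily
# water vapor pressure in Pa (Daymet-style inputs).
import numpy as np

def saturation_vapor_pressure_pa(temp_c):
    """Tetens approximation: saturation vapor pressure (Pa) at temperature temp_c (°C)."""
    return 610.78 * np.exp(17.27 * temp_c / (temp_c + 237.3))

def vpd_pa(tmax_c, vp_pa):
    """VPD (Pa) = saturation vapor pressure at Tmax minus actual vapor pressure, floored at 0."""
    return np.maximum(saturation_vapor_pressure_pa(tmax_c) - vp_pa, 0.0)

# Example: Tmax = 30 °C, average vapor pressure = 1500 Pa
print(vpd_pa(30.0, 1500.0))   # ≈ 2743 Pa, i.e. about 2.7 kPa
```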

{"title":"A high resolution, gridded product for vapor pressure deficit using Daymet.","authors":"Nicholas K Corak, Peter E Thornton, Lauren E L Lowman","doi":"10.1038/s41597-025-04544-5","DOIUrl":"10.1038/s41597-025-04544-5","url":null,"abstract":"<p><p>Vapor pressure deficit (VPD) is a critical variable in assessing drought conditions and evaluating plant water stress. Gridded products of global and regional VPD are not freely available from satellite remote sensing, model reanalysis, or ground observation datasets. We present two versions of the first gridded VPD product for the Continental US and parts of Northern Mexico and Southern Canada (CONUS+) at a 1 km spatial resolution and daily time step. We derived VPD from Daymet maximum daily temperature and average daily vapor pressure and scale the estimates based on (1) climate determined by the Köppen-Geiger classifications and (2) land cover determined by the International Geosphere-Biosphere Programme. Ground-based VPD data from 253 AmeriFlux sites representing different climate and land cover classifications were used to improve the Daymet-derived VPD estimates for every pixel in the CONUS+ grid to produce the final datasets. We evaluated the Daymet-derived VPD against independent observations and reanalysis data. The CONUS+ VPD datasets will aid in investigating disturbances including drought and wildfire, and informing land management strategies.</p>","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"256"},"PeriodicalIF":5.8,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822033/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143410239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
OHID-1: A New Large Hyperspectral Image Dataset for Multi-Classification.
IF 5.8 | CAS Zone 2, Multidisciplinary Journal | Q1 MULTIDISCIPLINARY SCIENCES | Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04542-7
Ashish Mani, Sergey Gorbachev, Jun Yan, Abhishek Dixit, Xi Shi, Long Li, Yuanyuan Sun, Xin Chen, Jiaqi Wu, Jianwen Deng, Xiaohua Jiang, Dong Yue, Chunxia Dou, Xiangsen Wei, Jiawei Huang

In the context of the increasing popularity of Big Data paradigms and deep learning techniques, we introduce a novel large-scale hyperspectral imagery dataset, termed Orbita Hyperspectral Images Dataset-1 (OHID-1). It comprises 10 hyperspectral images sourced from diverse regions of Zhuhai City, China, each with 32 spectral bands at a spatial resolution of 10 meters and spanning a spectral range of 400-1000 nanometers. The core objective of this dataset is to elevate the performance of hyperspectral image classification and to pose substantial challenges to existing hyperspectral image processing algorithms. Compared to traditional open-source hyperspectral datasets and recently released large-scale hyperspectral datasets, OHID-1 presents more intricate features and a higher degree of classification complexity by providing labels for 7 classes over a wider area. Furthermore, this study demonstrates the utility of OHID-1 by testing it with selected hyperspectral classification algorithms. This dataset will be useful for advancing cutting-edge research in urban sustainable development science and land use analysis. We invite the scientific community to devise novel methodologies for an in-depth analysis of these data.
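A hedged sketch of the kind of baseline classification experiment mentioned above: per-pixel classification of one scene with an SVM over the 32 spectral bands. The file names, array layout, and the convention that label 0 marks unlabeled pixels are assumptions, not part of the released dataset's documented format.

```python
# Hypothetical sketch: per-pixel classification of one OHID-1-style scene.
# Assumes the image is a (height, width, 32) array and the label map is (height, width),
# with 0 = unlabeled and 1-7 the seven classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

cube = np.load("ohid1_scene01_bands.npy")     # hypothetical file: (H, W, 32) reflectance
labels = np.load("ohid1_scene01_labels.npy")  # hypothetical file: (H, W) class map

mask = labels > 0                             # keep only labeled pixels
X = cube[mask].astype(np.float32)             # (N, 32) spectra
y = labels[mask]

# Train on a small labeled fraction (a common hyperspectral convention), test on the rest.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.8, stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)
print("overall accuracy:", clf.score(X_test, y_test))
```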

{"title":"OHID-1: A New Large Hyperspectral Image Dataset for Multi-Classification.","authors":"Ashish Mani, Sergey Gorbachev, Jun Yan, Abhishek Dixit, Xi Shi, Long Li, Yuanyuan Sun, Xin Chen, Jiaqi Wu, Jianwen Deng, Xiaohua Jiang, Dong Yue, Chunxia Dou, Xiangsen Wei, Jiawei Huang","doi":"10.1038/s41597-025-04542-7","DOIUrl":"10.1038/s41597-025-04542-7","url":null,"abstract":"<p><p>In the context of the increasing popularity of Big Data paradigms and deep learning techniques, we introduce a novel large-scale hyperspectral imagery dataset, termed Orbita Hyperspectral Images Dataset-1 (OHID-1). It comprises 10 hyperspectral images sourced from diverse regions of Zhuhai City, China, each boasting 32 spectral bands with a spatial resolution of 10 meters and spanning a spectral range of 400-1000 nanometers. The core objective of this dataset is to elevate the performance of hyperspectral image classification and pose substantial challenges to existing hyperspectral image processing algorithms. When compared to traditional open-source hyperspectral datasets and recently released large-scale hyperspectral datasets, OHID-1 presents more intricate features and a higher degree of classification complexity by providing 7 classes labels in wider area. Furthermore, this study demonstrates the utility of OHID-1 by testing it with selected hyperspectral classification algorithms. This dataset will be useful to advance cutting-edge research in urban sustainable development science, land use analysis. We invite the scientific community to devise novel methodologies for an in-depth analysis of these data.</p>","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"251"},"PeriodicalIF":5.8,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822079/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143410288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
E-POSE: A Large Scale Event Camera Dataset for Object Pose Estimation.
IF 5.8 | CAS Zone 2, Multidisciplinary Journal | Q1 MULTIDISCIPLINARY SCIENCES | Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04536-5
Oussama Abdul Hay, Xiaoqian Huang, Abdulla Ayyad, Eslam Sherif, Randa Almadhoun, Yusra Abdulrahman, Lakmal Seneviratne, Abdulqader Abusafieh, Yahya Zweiri

Robotic automation requires precise object pose estimation for effective grasping and manipulation. With their high dynamic range and temporal resolution, event-based cameras offer a promising alternative to conventional cameras. Despite their success in tracking, segmentation, classification, obstacle avoidance, and navigation, their use for 6D object pose estimation is relatively unexplored due to the lack of datasets. This paper introduces an extensive dataset based on Yale-CMU-Berkeley (YCB) objects, including event packets with associated poses, spike images, masks, 3D bounding box coordinates, segmented events, and a 3-channel event image for validation. Featuring 13 YCB objects, the dataset covers both cluttered and uncluttered scenes across 18 scenarios with varying speeds and illumination. It contains 306 sequences, totaling over an hour and around 1.5 billion events, making it the largest and most diverse event-based dataset for object pose estimation. This resource aims to support researchers in developing and testing object pose estimation algorithms and solutions.
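As a hedged illustration of how raw event packets are commonly rendered into an image representation (the dataset's own 3-channel event images may be defined differently), the sketch below accumulates positive and negative events into a 3-channel frame. The packet fields and channel semantics are assumptions.

```python
# Hypothetical sketch: accumulating a packet of events into a simple 3-channel event image.
# Assumes events arrive as arrays of x, y, polarity (0/1) and the sensor resolution is known.
import numpy as np

def events_to_image(x, y, polarity, height, width):
    """Return an image with channels: positive-event count, negative-event count, total count."""
    img = np.zeros((height, width, 3), dtype=np.float32)
    pos = polarity.astype(bool)
    np.add.at(img[:, :, 0], (y[pos], x[pos]), 1.0)     # positive-polarity events
    np.add.at(img[:, :, 1], (y[~pos], x[~pos]), 1.0)   # negative-polarity events
    img[:, :, 2] = img[:, :, 0] + img[:, :, 1]         # total event count
    return img

# Example with a few synthetic events on a 480x640 sensor.
x = np.array([10, 10, 20])
y = np.array([5, 5, 7])
p = np.array([1, 0, 1])
frame = events_to_image(x, y, p, height=480, width=640)
print(frame[5, 10], frame[7, 20])   # [1. 1. 2.] and [1. 0. 1.]
```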

{"title":"E-POSE: A Large Scale Event Camera Dataset for Object Pose Estimation.","authors":"Oussama Abdul Hay, Xiaoqian Huang, Abdulla Ayyad, Eslam Sherif, Randa Almadhoun, Yusra Abdulrahman, Lakmal Seneviratne, Abdulqader Abusafieh, Yahya Zweiri","doi":"10.1038/s41597-025-04536-5","DOIUrl":"10.1038/s41597-025-04536-5","url":null,"abstract":"<p><p>Robotic automation requires precise object pose estimation for effective grasping and manipulation. With their high dynamic range and temporal resolution, event-based cameras offer a promising alternative to conventional cameras. Despite their success in tracking, segmentation, classification, obstacle avoidance, and navigation, their use for 6D object pose estimation is relatively unexplored due to the lack of datasets. This paper introduces an extensive dataset based on Yale-CMU-Berkeley (YCB) objects, including event packets with associated poses, spike images, masks, 3D bounding box coordinates, segmented events, and a 3-channel event image for validation. Featuring 13 YCB objects, the dataset covers both cluttered and uncluttered scenes across 18 scenarios with varying speeds and illumination. It contains 306 sequences, totaling over an hour and around 1.5 billion events, making it the largest and most diverse event-based dataset for object pose estimation. This resource aims to support researchers in developing and testing object pose estimation algorithms and solutions.</p>","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"245"},"PeriodicalIF":5.8,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822054/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143410280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cliopatria - A geospatial database of world-wide political entities from 3400BCE to 2024CE.
IF 5.8 | CAS Zone 2, Multidisciplinary Journal | Q1 MULTIDISCIPLINARY SCIENCES | Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04516-9
James S Bennett, Erin Mutch, Andrew Tollefson, Ed Chalstrey, Majid Benam, Enrico Cioni, Jenny Reddish, Jakob Zsambok, Jill Levine, C Justin Cook, Pieter Francois, Daniel Hoyer, Peter Turchin

The scientific understanding of the complex dynamics of global history - from the rise and spread of states to their declines and falls, from their peaceful interactions with economic or diplomatic exchanges to violent confrontations - requires, at its core, a consistent and explicit encoding of historical political entities, their locations, extents and durations. Numerous attempts have been made to produce digital geographical compendia of polities with different time depths and resolutions. Most have been limited in scope and many of the more comprehensive geospatial datasets must either be licensed or are stored in proprietary formats, making access for scholarly analysis difficult. To address these issues we have developed Cliopatria, a comprehensive open-source geospatial dataset of worldwide states from 3400BCE to 2024CE. Presently it comprises over 1600 political entities sampled at varying timesteps and spatial scales. Here, we discuss its construction, its scope, and its current limitations.
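A minimal sketch of querying such a dataset with geopandas follows, assuming the polygons carry start and end years (with BCE encoded as negative numbers) in columns named FromYear and ToYear; the actual Cliopatria schema and file name may differ.

```python
# Hypothetical sketch: selecting the polities extant in a given year from a GeoJSON export.
import geopandas as gpd

world = gpd.read_file("cliopatria.geojson")   # hypothetical file name
year = -500                                   # 500 BCE, assuming BCE years are negative

# Assumed column names; the released dataset defines its own attribute schema.
extant = world[(world["FromYear"] <= year) & (world["ToYear"] >= year)]
print(len(extant), "polities extant in year", year)
extant.plot()                                 # quick map of their territorial extents
```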

{"title":"Cliopatria - A geospatial database of world-wide political entities from 3400BCE to 2024CE.","authors":"James S Bennett, Erin Mutch, Andrew Tollefson, Ed Chalstrey, Majid Benam, Enrico Cioni, Jenny Reddish, Jakob Zsambok, Jill Levine, C Justin Cook, Pieter Francois, Daniel Hoyer, Peter Turchin","doi":"10.1038/s41597-025-04516-9","DOIUrl":"10.1038/s41597-025-04516-9","url":null,"abstract":"<p><p>The scientific understanding of the complex dynamics of global history - from the rise and spread of states to their declines and falls, from their peaceful interactions with economic or diplomatic exchanges to violent confrontations - requires, at its core, a consistent and explicit encoding of historical political entities, their locations, extents and durations. Numerous attempts have been made to produce digital geographical compendia of polities with different time depths and resolutions. Most have been limited in scope and many of the more comprehensive geospatial datasets must either be licensed or are stored in proprietary formats, making access for scholarly analysis difficult. To address these issues we have developed Cliopatria, a comprehensive open-source geospatial dataset of worldwide states from 3400BCE to 2024CE. Presently it comprises over 1600 political entities sampled at varying timesteps and spatial scales. Here, we discuss its construction, its scope, and its current limitations.</p>","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"247"},"PeriodicalIF":5.8,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822181/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143410277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multimodal dataset for indoor 3D drone tracking.
IF 5.8 | CAS Zone 2, Multidisciplinary Journal | Q1 MULTIDISCIPLINARY SCIENCES | Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04521-y
Jakub Rosner, Tomasz Krzeszowski, Adam Świtoński, Henryk Josiński, Wojciech Lindenheim-Locher, Michał Zieliński, Grzegorz Paleta, Marcin Paszkuta, Konrad Wojciechowski

The subject of the paper is a multimodal dataset (DPJAIT) containing drone flights prepared in two variants: simulation-based and with real measurements captured by the gold-standard Vicon system. It contains video sequences registered by a synchronized and calibrated multi-camera setup, as well as reference 3D drone positions at successive time instants obtained from the simulation procedure or via motion capture. Moreover, some scenarios include ArUco markers with known 3D positions in the scene and drone-mounted RGB cameras for which intrinsic parameters are given. Three applications of 3D tracking are demonstrated. They are based on the overdetermined set of linear equations describing camera projection, particle swarm optimization, and the determination of the extrinsic matrix of the drone-mounted camera using recognized ArUco markers.
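The first of those three approaches rests on linear triangulation: each calibrated camera's projection matrix and the drone's 2D image position contribute two linear equations, and the overdetermined system is solved in a least-squares sense. The sketch below illustrates that idea with synthetic cameras; it is not the authors' implementation.

```python
# Illustrative sketch of DLT-style triangulation from multiple calibrated views.
import numpy as np

def triangulate(projections, points_2d):
    """Least-squares 3D point from >=2 views. projections: list of 3x4 arrays; points_2d: (u, v) pairs."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])   # two linear equations per camera
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)        # null-space vector minimizes ||A X||
    X = vt[-1]
    return X[:3] / X[3]                # dehomogenize

# Example: two synthetic cameras observing the point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # camera shifted along x
X_true = np.array([1.0, 2.0, 10.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate([P1, P2], [uv1, uv2]))   # ≈ [1, 2, 10]
```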

{"title":"Multimodal dataset for indoor 3D drone tracking.","authors":"Jakub Rosner, Tomasz Krzeszowski, Adam Świtoński, Henryk Josiński, Wojciech Lindenheim-Locher, Michał Zieliński, Grzegorz Paleta, Marcin Paszkuta, Konrad Wojciechowski","doi":"10.1038/s41597-025-04521-y","DOIUrl":"10.1038/s41597-025-04521-y","url":null,"abstract":"<p><p>The subject of the paper is a multimodal dataset (DPJAIT) containing drone flights prepared in two variants - simulation-based and with real measurements captured by the gold standard Vicon system. It contains video sequences registered by the synchronized and calibrated multicamera set as well as reference 3D drone positions in successive time instants obtained from simulation procedure or using the motion capture technique. Moreover, there are scenarios with ArUco markers in the scene with known 3D positions and RGB cameras mounted on drones for which internal parameters are given. Three applications of 3D tracking are demonstrated. They are based on the overdetermined set of linear equations describing camera projection, particle swarm optimization, and the determination of the extrinsic matrix of the camera attached to the drone utilizing recognized ArUco markers.</p>","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"257"},"PeriodicalIF":5.8,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11821834/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143410286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A comprehensive cell atlas of fall armyworm (Spodoptera frugiperda) larval gut and fat body via snRNA-Seq.
IF 5.8 | CAS Zone 2, Multidisciplinary Journal | Q1 MULTIDISCIPLINARY SCIENCES | Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04520-z
Chao Sun, Yongqi Shao, Junaid Iqbal

The midgut and fat body of insects control key physiological processes, including growth, digestion, metabolism, and stress response. Single-nucleus RNA sequencing (snRNA-seq) is a promising way to reveal organ complexity at the cellular level, yet data for lepidopteran insects are lacking. We utilized snRNA-seq to assess cellular diversity in the midgut and fat body of Spodoptera frugiperda. Our study identified 20 distinct clusters in the midgut, including enterocytes, enteroendocrine cells, stem-like cells, and muscle cells, and 27 clusters in the fat body, including adipocytes, hemocytes, and epithelial cells. This dataset, containing all identified cell types in the midgut and fat body, is valuable for characterizing the cellular composition of these organs and uncovering new cell-specific biomarkers. This cellular atlas enhances our understanding of the cellular heterogeneity of the fat body and midgut, serving as a basis for future functional and comparative analyses. As the first snRNA-seq study on the midgut and fat body of S. frugiperda, it will also support future research, contribute to lepidopteran studies, and aid in developing targeted pest control strategies.
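As a hedged sketch of the conventional route to such cluster-level structure, the snippet below runs a standard Scanpy normalize/PCA/Leiden pass over an assumed count matrix. The input file name and parameter values are assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: standard clustering of one snRNA-seq count matrix with Scanpy.
import scanpy as sc

adata = sc.read_h5ad("sfru_midgut_counts.h5ad")   # hypothetical file of nuclei x genes counts

sc.pp.filter_cells(adata, min_genes=200)          # drop low-quality nuclei
sc.pp.normalize_total(adata, target_sum=1e4)      # depth normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)

sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0)               # unsupervised clusters (cf. 20 midgut / 27 fat-body clusters)
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")
```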

{"title":"A comprehensive cell atlas of fall armyworm (Spodoptera frugiperda) larval gut and fat body via snRNA-Seq.","authors":"Chao Sun, Yongqi Shao, Junaid Iqbal","doi":"10.1038/s41597-025-04520-z","DOIUrl":"10.1038/s41597-025-04520-z","url":null,"abstract":"<p><p>The midgut and fat body of insects control key physiological processes, including growth, digestion, metabolism, and stress response. Single-nucleus RNA sequencing (snRNA-seq) is a promising way to reveal organ complexity at the cellular level, yet data for lepidopteran insects are lacking. We utilized snRNA-seq to assess cellular diversity in the midgut and fat body of Spodoptera frugiperda. Our study identified 20 distinct clusters in the midgut, including enterocytes, enteroendocrine, stem-like cells, and muscle cells, and 27 clusters in the fat body, including adipocytes, hemocytes, and epithelial cells. This dataset, containing all identified cell types in midgut and fat body, is valuable for characterizing the cellular composition of these organs and uncovering new cell-specific biomarkers. This cellular atlas enhances our understanding of cellular heterogeneity of fat and midgut, serving as a basis for future functional and comparative analyses. As the first snRNA-seq study on the midgut and fat body of S. frugiperda, it will also support future research, contribute to lepidopteran studies, and aid in developing targeted pest control strategies.</p>","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"250"},"PeriodicalIF":5.8,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11822134/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143410221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Affording reusable data: recommendations for researchers from a data-intensive project.
IF 5.8 | CAS Zone 2, Multidisciplinary Journal | Q1 MULTIDISCIPLINARY SCIENCES | Pub Date: 2025-02-12 | DOI: 10.1038/s41597-025-04565-0
Gorka Fraga-González, Hester van de Wiel, Francesco Garassino, Willy Kuo, Diane de Zélicourt, Vartan Kurtcuoglu, Leonhard Held, Eva Furrer
{"title":"Affording reusable data: recommendations for researchers from a data-intensive project.","authors":"Gorka Fraga-González, Hester van de Wiel, Francesco Garassino, Willy Kuo, Diane de Zélicourt, Vartan Kurtcuoglu, Leonhard Held, Eva Furrer","doi":"10.1038/s41597-025-04565-0","DOIUrl":"10.1038/s41597-025-04565-0","url":null,"abstract":"","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"258"},"PeriodicalIF":5.8,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11821812/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143410247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0