Enhancing robotic skill acquisition with multimodal sensory data: A novel dataset for kitchen tasks.

Scientific Data · IF 6.9 · JCR Q1 (Multidisciplinary Sciences) · CAS Tier 2 (Multidisciplinary) · Pub Date: 2025-03-21 · DOI: 10.1038/s41597-025-04798-z
Ruochen Ren, Zhipeng Wang, Chaoyun Yang, Jiahang Liu, Rong Jiang, Yanmin Zhou, Shuo Jiang, Bin He
{"title":"Enhancing robotic skill acquisition with multimodal sensory data: A novel dataset for kitchen tasks.","authors":"Ruochen Ren, Zhipeng Wang, Chaoyun Yang, Jiahang Liu, Rong Jiang, Yanmin Zhou, Shuo Jiang, Bin He","doi":"10.1038/s41597-025-04798-z","DOIUrl":null,"url":null,"abstract":"<p><p>The advent of large language models has transformed human-robot interaction by enabling robots to execute tasks via natural language commands. However, these models primarily depend on unimodal data, which limits their ability to integrate diverse and essential environmental, physiological, and physical data. To address the limitations of current unimodal dataset problems, this paper investigates the novel and comprehensive multimodal data collection methodologies which can fully capture the complexity of human interaction in the complex real-world kitchen environments. Data related to the use of 17 different kitchen tools by 20 adults in dynamic scenarios were collected, including human tactile information, EMG signals, audio data, whole-body movement, and eye-tracking data. The dataset is comprised of 680 segments (~11 hours) with data across seven modalities and includes 56,000 detailed annotations. This paper bridges the gap between real-world multimodal data and embodied AI, paving the way for a new benchmark in utility and repeatability for skill learning in robotics areas.</p>","PeriodicalId":21597,"journal":{"name":"Scientific Data","volume":"12 1","pages":"476"},"PeriodicalIF":6.9000,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11928623/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Scientific Data","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1038/s41597-025-04798-z","RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

The advent of large language models has transformed human-robot interaction by enabling robots to execute tasks via natural language commands. However, these models primarily depend on unimodal data, which limits their ability to integrate diverse environmental, physiological, and physical signals. To address the limitations of current unimodal datasets, this paper presents a comprehensive multimodal data collection methodology that captures the complexity of human activity in real-world kitchen environments. Data were collected from 20 adults using 17 different kitchen tools in dynamic scenarios, covering human tactile information, EMG signals, audio, whole-body movement, and eye-tracking data. The dataset comprises 680 segments (~11 hours) spanning seven modalities, with 56,000 detailed annotations. This work bridges the gap between real-world multimodal data and embodied AI, paving the way toward a new benchmark for utility and repeatability in robotic skill learning.
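To make the dataset's structure concrete, the sketch below shows one plausible way a per-segment record and cross-modality time alignment could look. This is a minimal illustration, not the authors' published schema: the `Segment` field names, channel counts, and sampling rates (e.g., 1000 Hz EMG, 60 Hz gaze) are all assumptions, and the paper lists seven modalities in total while only the five named in the abstract are sketched here.

```python
"""Hypothetical sketch of a multimodal kitchen-task segment and its
time alignment. Field names, rates, and shapes are assumptions."""

from dataclasses import dataclass
import numpy as np


@dataclass
class Segment:
    """One annotated recording of a participant using a kitchen tool."""
    participant_id: int          # one of the 20 adults
    tool: str                    # one of the 17 kitchen tools
    tactile: np.ndarray          # (T_tac, n_taxels) tactile frames
    emg: np.ndarray              # (T_emg, n_channels) surface EMG samples
    audio: np.ndarray            # (T_aud,) mono audio waveform
    body_pose: np.ndarray        # (T_pose, n_joints, 3) whole-body joints
    gaze: np.ndarray             # (T_gaze, 2) normalized gaze coordinates
    annotations: list            # [(start_s, end_s, label), ...]


def resample_to_rate(signal: np.ndarray, src_hz: float, dst_hz: float) -> np.ndarray:
    """Linearly interpolate a (T, ...) signal onto a common clock so
    modalities recorded at different rates can be compared per frame."""
    t_src = np.arange(signal.shape[0]) / src_hz
    t_dst = np.arange(0.0, t_src[-1], 1.0 / dst_hz)
    flat = signal.reshape(signal.shape[0], -1)
    out = np.stack(
        [np.interp(t_dst, t_src, flat[:, c]) for c in range(flat.shape[1])],
        axis=1,
    )
    return out.reshape((len(t_dst),) + signal.shape[1:])


# Align 5 s of assumed 1000 Hz EMG and 60 Hz gaze onto a shared 100 Hz clock.
emg_100 = resample_to_rate(np.random.randn(5000, 8), src_hz=1000.0, dst_hz=100.0)
gaze_100 = resample_to_rate(np.random.rand(300, 2), src_hz=60.0, dst_hz=100.0)
assert abs(emg_100.shape[0] - gaze_100.shape[0]) <= 1  # same clock, ±1 frame
```

Whatever the actual on-disk layout is, some such resampling step is typically needed before the seven streams can be consumed jointly by a skill-learning model, since tactile, EMG, audio, motion-capture, and eye-tracking hardware rarely share a native sampling rate.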

Source Journal

Scientific Data (Multidisciplinary Sciences; Social Sciences, Education)
CiteScore: 11.20
Self-citation rate: 4.10%
Articles per year: 689
Review time: 16 weeks
About the Journal

Scientific Data is an open-access journal focused on data, publishing descriptions of research datasets and articles on data sharing across the natural sciences, medicine, engineering, and the social sciences. Its goal is to enhance the sharing and reuse of scientific data, encourage broader data sharing, and acknowledge those who share their data. The journal primarily publishes Data Descriptors, which offer detailed descriptions of research datasets, including data collection methods and the technical analyses validating data quality. These descriptors aim to facilitate data reuse rather than to test hypotheses or present new interpretations, methods, or in-depth analyses.
Latest Articles in This Journal

Microstructure & physicochemical properties dataset of NaCl-based salt mixtures for concentrating solar power.
High-quality haplotype-resolved genome assembly and annotation of Malus baccata 'Jackii'.
A Full-Body Motion Capture Gait Dataset of Healthy Young Adults Walking Ramps Up and Down.
A dataset on occupant satisfaction with the indoor environmental quality in Belgian classrooms.
A Multimodal Dataset for Assessing Emotion, Stress, and Emotional Workload in Interpersonal Work Scenario.