{"title":"面向开放领域动作识别的活动数据集","authors":"Alexander Gabriel, N. Bellotto, Paul E. Baxter","doi":"10.31256/UKRAS19.17","DOIUrl":null,"url":null,"abstract":"In an agricultural context, having autonomous robots that can work side-by-side with human workers provide a range of productivity benefits. In order for this to be achieved safely and effectively, these autonomous robots require the ability to understand a range of human behaviors in order to facilitate task communication and coordination. The recognition of human actions is a key part of this, and is the focus of this paper. Available datasets for Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions but fail to model the data a robot will have to work with in the field, such as varying distance and lighting conditions. In this work, we propose a set of recording conditions, gestures and behaviors that better reflect the environment an agricultural \nrobot might find itself in and record a dataset with a range of sensors that demonstrate these conditions.","PeriodicalId":424229,"journal":{"name":"UK-RAS19 Conference: \"Embedded Intelligence: Enabling and Supporting RAS Technologies\" Proceedings","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Towards a Dataset of Activities for Action Recognition in Open Fields\",\"authors\":\"Alexander Gabriel, N. Bellotto, Paul E. Baxter\",\"doi\":\"10.31256/UKRAS19.17\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In an agricultural context, having autonomous robots that can work side-by-side with human workers provide a range of productivity benefits. In order for this to be achieved safely and effectively, these autonomous robots require the ability to understand a range of human behaviors in order to facilitate task communication and coordination. The recognition of human actions is a key part of this, and is the focus of this paper. Available datasets for Action Recognition generally feature controlled lighting and framing while recording subjects from the front. They mostly reflect good recording conditions but fail to model the data a robot will have to work with in the field, such as varying distance and lighting conditions. 
In this work, we propose a set of recording conditions, gestures and behaviors that better reflect the environment an agricultural \\nrobot might find itself in and record a dataset with a range of sensors that demonstrate these conditions.\",\"PeriodicalId\":424229,\"journal\":{\"name\":\"UK-RAS19 Conference: \\\"Embedded Intelligence: Enabling and Supporting RAS Technologies\\\" Proceedings\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-01-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"UK-RAS19 Conference: \\\"Embedded Intelligence: Enabling and Supporting RAS Technologies\\\" Proceedings\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.31256/UKRAS19.17\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"UK-RAS19 Conference: \"Embedded Intelligence: Enabling and Supporting RAS Technologies\" Proceedings","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.31256/UKRAS19.17","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Towards a Dataset of Activities for Action Recognition in Open Fields
In an agricultural context, having autonomous robots that can work side-by-side with human workers provides a range of productivity benefits. For this to be achieved safely and effectively, these autonomous robots require the ability to understand a range of human behaviors in order to facilitate task communication and coordination. The recognition of human actions is a key part of this, and is the focus of this paper. Available datasets for action recognition generally feature controlled lighting and framing, with subjects recorded from the front. They mostly reflect good recording conditions but fail to model the data a robot will have to work with in the field, such as varying distance and lighting conditions. In this work, we propose a set of recording conditions, gestures, and behaviors that better reflect the environment an agricultural robot might find itself in, and we record a dataset with a range of sensors that demonstrates these conditions.
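To make the intended use of such recordings concrete, the following is a minimal, hypothetical sketch of how trials from a multi-sensor dataset of this kind might be indexed by subject, gesture, and recording condition. The directory layout, field names, and sensor file names here are assumptions for illustration only; the actual dataset structure is defined in the paper, not by this snippet.

```python
# Hypothetical sketch: index multi-sensor action-recognition trials recorded
# under varying lighting and distance conditions. All paths and label names
# are assumed for illustration; they do not reflect the published dataset.
from dataclasses import dataclass
from pathlib import Path
from typing import Iterator


@dataclass
class Recording:
    subject: str        # anonymised participant ID
    gesture: str        # e.g. "stop" or "come_here" (label set assumed)
    lighting: str       # e.g. "sunny", "overcast", "dusk"
    distance_m: float   # camera-to-subject distance in metres
    rgb_path: Path      # per-sensor files (RGB video, depth video, ...)
    depth_path: Path


def iter_recordings(root: Path) -> Iterator[Recording]:
    """Walk a tree laid out as root/<subject>/<gesture>/<lighting>_<distance>/
    and yield one Recording per trial. This layout is a hypothetical convention."""
    for rgb_file in sorted(root.glob("*/*/*/rgb.mp4")):
        subject, gesture, condition = rgb_file.parts[-4:-1]
        lighting, _, distance = condition.partition("_")
        yield Recording(
            subject=subject,
            gesture=gesture,
            lighting=lighting,
            distance_m=float(distance or "0"),
            rgb_path=rgb_file,
            depth_path=rgb_file.with_name("depth.mp4"),
        )


if __name__ == "__main__":
    # Example usage: list every trial with its condition metadata.
    for rec in iter_recordings(Path("dataset")):
        print(rec.subject, rec.gesture, rec.lighting, rec.distance_m)
```

A layout of this kind keeps the recording conditions (lighting, distance) explicit in the metadata, so a recognition model can be evaluated separately on the easy, frontal, well-lit cases and on the harder field-like conditions the abstract highlights.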