{"title":"视频中基于运动的动态对象检索","authors":"Che-Bin Liu, N. Ahuja","doi":"10.1145/1027527.1027593","DOIUrl":null,"url":null,"abstract":"Most existing video retrieval systems use low-level visual features such as color histogram, shape, texture, or motion. In this paper, we explore the use of higher-level motion representation for video retrieval of dynamic objects. We use three motion representations, which together can retrieve a large variety of motion patterns. Our approach works on top of a tracking unit and assumes that each dynamic object has been tracked and circumscribed in a minimal bounding box in each video frame. We represent the motion attributes of each object in terms of changes in the image context of its circumscribing box. The changes are described via motion templates [4], self-similarity plots [3], and image dynamics [9]. Initially, defined criteria of the retrieval process are interactively refined using relevance feedback from the user. Experimental results demonstrate the use of the proposed motion models in retrieving objects undergoing complex motion.","PeriodicalId":292207,"journal":{"name":"MULTIMEDIA '04","volume":"104 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Motion based retrieval of dynamic objects in videos\",\"authors\":\"Che-Bin Liu, N. Ahuja\",\"doi\":\"10.1145/1027527.1027593\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most existing video retrieval systems use low-level visual features such as color histogram, shape, texture, or motion. In this paper, we explore the use of higher-level motion representation for video retrieval of dynamic objects. We use three motion representations, which together can retrieve a large variety of motion patterns. Our approach works on top of a tracking unit and assumes that each dynamic object has been tracked and circumscribed in a minimal bounding box in each video frame. We represent the motion attributes of each object in terms of changes in the image context of its circumscribing box. The changes are described via motion templates [4], self-similarity plots [3], and image dynamics [9]. Initially, defined criteria of the retrieval process are interactively refined using relevance feedback from the user. Experimental results demonstrate the use of the proposed motion models in retrieving objects undergoing complex motion.\",\"PeriodicalId\":292207,\"journal\":{\"name\":\"MULTIMEDIA '04\",\"volume\":\"104 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2004-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"MULTIMEDIA '04\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/1027527.1027593\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"MULTIMEDIA '04","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1027527.1027593","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Motion based retrieval of dynamic objects in videos
Most existing video retrieval systems use low-level visual features such as color histogram, shape, texture, or motion. In this paper, we explore the use of higher-level motion representation for video retrieval of dynamic objects. We use three motion representations, which together can retrieve a large variety of motion patterns. Our approach works on top of a tracking unit and assumes that each dynamic object has been tracked and circumscribed in a minimal bounding box in each video frame. We represent the motion attributes of each object in terms of changes in the image context of its circumscribing box. The changes are described via motion templates [4], self-similarity plots [3], and image dynamics [9]. Initially defined criteria of the retrieval process are interactively refined using relevance feedback from the user. Experimental results demonstrate the use of the proposed motion models in retrieving objects undergoing complex motion.
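For illustration, the sketch below shows how two of the motion representations mentioned in the abstract, motion templates [4] and self-similarity plots [3], can be computed from the bounding-box crops that a tracking unit produces for one dynamic object. This is a minimal sketch under stated assumptions, not the authors' implementation: the crop list, the difference threshold, and the history duration are hypothetical parameters chosen only to make the example run.

```python
# Illustrative sketch only -- not the paper's implementation.
# Assumes the tracker has already produced, for one dynamic object, a list of
# grayscale crops taken from its bounding box in consecutive frames, all
# resized to a common shape.
import numpy as np

def motion_template(crops, diff_thresh=15, duration=10):
    """Motion history image (motion template, cf. [4]).

    Pixels that changed within the last `duration` frames are kept, with newer
    motion brighter, so the result encodes where and how recently the object
    moved inside its bounding box.
    """
    h, w = crops[0].shape
    mhi = np.zeros((h, w), dtype=np.float32)
    for t in range(1, len(crops)):
        diff = np.abs(crops[t].astype(np.int16) - crops[t - 1].astype(np.int16))
        moving = diff > diff_thresh
        mhi[moving] = t                            # stamp freshly moving pixels with the frame index
        mhi[~moving & (mhi < t - duration)] = 0    # forget motion older than `duration` frames
    return mhi / max(len(crops) - 1, 1)            # normalize timestamps to [0, 1]

def self_similarity_plot(crops):
    """Frame-to-frame similarity matrix (cf. [3]).

    Periodic motion such as walking or running shows up as regularly spaced
    diagonal stripes in this matrix.
    """
    feats = np.stack([c.astype(np.float32).ravel() for c in crops])
    # Pairwise Euclidean distances between all frame crops.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    return d / (d.max() + 1e-8)

# Hypothetical usage with random data standing in for tracked bounding-box crops.
crops = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(30)]
template = motion_template(crops)
ssp = self_similarity_plot(crops)
```

In a retrieval setting, descriptors of this kind (plus the image-dynamics model [9]) would be compared between a query object and database objects, with the matching criteria then refined interactively through the user's relevance feedback; the comparison and feedback machinery is not shown here.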