Inside-out Vision for Procedure Recognition in Dental Environment

Shaheena Noor, Humera Noor Minhas, Muhammad Imran Saleem, Vali Uddin, Najma Ismat

2020 Global Conference on Wireless and Optical Technologies (GCWOT), published 2020-10-06. DOI: 10.1109/GCWOT49901.2020.9391594
Smart homes and offices are becoming increasingly common with advances in computer vision research and technology. Identifying human activities and scenarios is a basic component of such systems. This matters not only for the ecosystem to operate independently, but also for robots to be able to assist humans. It is especially true in more complicated medical setups, e.g. dentistry, where subtle cues such as eye motion are needed to identify scenarios. In this paper we present a hierarchical model for robustly recognizing scenarios and procedures in a dental setup, using the objects seen along eye-gaze trajectories, such as the material and equipment used by the dentist and the symptoms of the patient. We exploit the fact that scenario recognition can be solved hierarchically: first identify the objects viewed during an activity, then link them over time to build up more complicated scenarios. In experiments on a dental dataset, combining multiple parameters yields better precision and accuracy than any of them individually: accuracy increased from 45.18% to 94.42% when a combination of parameters was used rather than a single one.
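The hierarchical idea in the abstract — identify the objects along the gaze trajectory, then combine evidence from several cue streams (equipment, material, symptoms) to score candidate scenarios — can be illustrated with a minimal sketch. All scenario names, object labels, and the scoring rule below are hypothetical stand-ins for illustration; the paper's actual model and dataset are not reproduced here.

```python
# Hypothetical sketch: score dental scenarios from objects observed in a
# gaze trajectory, using either a single cue type or a combination of cues.
from collections import Counter

# Illustrative knowledge base (invented for this sketch): which observed
# objects each scenario tends to involve, grouped by cue type ("parameter").
SCENARIO_CUES = {
    "filling": {
        "equipment": {"drill", "curing_light"},
        "material": {"composite", "etchant"},
        "symptom": {"cavity"},
    },
    "extraction": {
        "equipment": {"forceps", "elevator"},
        "material": {"gauze"},
        "symptom": {"impacted_tooth"},
    },
}

def score_scenarios(gaze_objects, cue_types):
    """Count how many observed objects match each scenario's cues,
    restricted to the given cue types (single cue vs. combination)."""
    seen = Counter(gaze_objects)
    scores = {}
    for scenario, cues in SCENARIO_CUES.items():
        scores[scenario] = sum(
            seen[obj] for cue in cue_types for obj in cues.get(cue, ())
        )
    return scores

def recognize(gaze_objects, cue_types):
    """Return the highest-scoring scenario for the observed objects."""
    scores = score_scenarios(gaze_objects, cue_types)
    return max(scores, key=scores.get)

# Objects seen over time along one (invented) gaze trajectory.
trajectory = ["drill", "cavity", "composite", "drill", "curing_light"]
print(recognize(trajectory, ["equipment"]))                         # single cue
print(recognize(trajectory, ["equipment", "material", "symptom"]))  # combined cues
```

In this toy setup, combining cue types accumulates more corroborating evidence per scenario than any single cue stream, mirroring the paper's finding that fusing parameters improves recognition over using one parameter alone.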