Gender classification has multiple applications including, but not limited to, face perception, age, ethnicity and identity analysis, video surveillance and smart human-computer interaction. The majority of computer-based gender classification algorithms analyse the appearance of facial features, predominantly based on the texture of a static image of the face. In this paper, we propose a novel algorithm for gender classification using smile dynamics, without resorting to any facial texture information. Our experiments suggest that this method has great potential for finding indicators of gender dimorphism. Our approach was tested on two databases, namely CK+ and MUG, consisting of a total of 80 subjects. Using the KNN algorithm along with 10-fold cross-validation, we achieve a classification accuracy of 80% for gender based purely on the dynamics of a person's smile.
{"title":"On Gender Identification Using the Smile Dynamics","authors":"Ahmad Al-dahoud, H. Ugail","doi":"10.1109/CW.2017.26","DOIUrl":"https://doi.org/10.1109/CW.2017.26","url":null,"abstract":"Gender classification has multiple applications including, but not limited to, face perception, age, ethnicity and identity analysis, video surveillance and smart human computer interaction. The majority of computer based gender classification algorithms analyse the appearance of facial features predominantly based on the texture of the static image of the face. In this paper, we propose a novel algorithm for gender classification using the smile dynamics without resorting to the use of any facial texture information. Our experiments suggest that this method has great potential for finding indicators of gender dimorphism. Our approach was tested on two databases, namely the CK+ and the MUG, consisting of a total of 80 subjects. As a result, using the KNN algorithm along with 10-fold cross validation, we achieve an accurate classification rate of 80% for gender simply based on the dynamics of a person's smile.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124166494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In accordance with the recent advancement of the Internet of Things (IoT), the need for IoT experiment platforms has been ever increasing. An IoT system consists of various technologies such as networking, sensor controllers, edge-side computing, and server-side big data collection, analysis and visualization. An experimental environment that can handle the development of, and experiments with, such an IoT system therefore becomes important. In an IoT system, a highly flexible system structure for applications using Field Programmable Gate Arrays (FPGAs) is required. The authors propose a Remote Laboratory System for handling IoT experiments in the Cyber Laboratory, an educational FPGA-based remote laboratory for undergraduate university students. It enables the use not only of available board-level small computers but also of FPGA boards for prototyping IoT edges, and it can organize the IoT cloud-server-side programs in a hybrid cloud. The FPGA-based edge-side computing approach offers much more freedom and flexibility to implement various sensor controls that can be customized for specific IoT applications. Using a free microprocessor IP core and reorganizing the available FPGA CAD design platform allows us to reduce the design and implementation effort needed to construct a new Cyber Laboratory that accommodates IoT designs and experiments. It also reduces the effort students must spend to conduct their own IoT designs and experiments, for which they need a range of Information Technology (IT) skills: hardware design, edge-side computing, server-side computing, networking and infrastructure construction. The use of Docker containers, Docker Swarm and Dockerfiles makes it possible to construct an IoT experiment platform for every student automatically, in the form of "Infrastructure as Code". Furthermore, these separately designed IoT experiment platforms can be combined to conduct group experiments simultaneously. The paper demonstrates the Cyber Laboratory's usefulness and applicability for IoT-style remote experiments.
{"title":"IoT Remote Group Experiments in the Cyber Laboratory: A FPGA-based Remote Laboratory in the Hybrid Cloud","authors":"N. Fujii, N. Koike","doi":"10.1109/CW.2017.29","DOIUrl":"https://doi.org/10.1109/CW.2017.29","url":null,"abstract":"In accordance with the resent advancement in Internet of Things (IoT), the needs for IoT experiment platform have been ever increasing. IoT system consists of various technologies such as networking, sensor controller, edge-side computing, server-side big data collections, analysis and their visualizations. An experimental environment that can handle the development and experiments of such an IoT system become important. In the IoT system, a highly flexible system structure for applications using Field Programmable Gate Array (FPGA) is required. The authors propose the Remote Laboratory System for handling IoT experiments in the Cyber Laboratory, which is an educational FPGA-based remote laboratory for under-graduate university students. It enables not only to use available board-level small computers but also to use FPGA boards for prototyping IoT edges. It can also organize the IoT cloud-server side programs in the hybrid cloud. The FPGA based edge-side computing approach can have much more freedom and flexibility to implement various sensor controls those can be customized for specific IoT applications. The use of free micro-processor IP-core and re-organizing the available FPGA CAD design platform allow us to reduce the burden of design and implementation efforts for the construction of new Cyber Laboratory to accommodate IoT designs and experiments. It also contributed to reduce the students' amount of efforts to conduct their own IoT design and experiments, where students are required to have various skills in Information Technologies (IT): hardware design, edge-side computing, server-side computing, networking and infrastructure construction. The use of the Docker container/Swarm and the Docker File contributed to construct their own IoT experiment platforms for every student automatically, in the form of \"Infrastructure as Code\". Furthermore, these separately designed IoT experiment platforms can be combined to conduct a group of group experiment simultaneously. The paper showed the Cyber Laboratory's usefulness and applicability for IoT kinds of Remote experiments.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134490549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We extend our system for radio astronomical monitoring with a module for the numerical modeling of real antennas, based on the laws of wave optics. We use the far-field approximation, suitable for phased array radio telescopes searching for deep-space signals. We have implemented the main computation on the GPU, achieving an acceleration factor of 52 and providing real-time performance of the module. Visualization of the resulting wave patterns helps to separate true deep-space signals from near-Earth satellite radio interference. In particular, we model in detail a scenario of a satellite signal coming through a side lobe of the SETI ATA-42 telescope. Such signals can be responsible for false alarms in the monitoring procedures used in the search for extraterrestrial intelligence. We also analyze a scenario of searching for pulsar signals with the BSA/LPI radio telescope and their separation from background radio interference.
{"title":"StarWatch 3.0: Visualizing Wave Patterns of Phased Array Radio Telescopes","authors":"S. Klimenko, Kira Konich, I. Nikitin, L. Nikitina","doi":"10.1109/CW.2017.18","DOIUrl":"https://doi.org/10.1109/CW.2017.18","url":null,"abstract":"We extend our system for radio astronomical monitoring by a module for numerical modeling of real antennas, based on the laws of wave optics. We use far field approximation, suitable for phased array radio telescopes, searching deep space signals. We have implemented the main computation on GPU, achieving acceleration factor 52 and providing real time performance of the module. Visualization of the resulting wave patterns helps to separate true deep space signals from the near Earth satellite radio interference. In particular, we model in detail a scenario of a satellite signal coming through the side lobe of SETI ATA-42 telescope. Such signals can be responsible for the false alarms in monitoring procedures, used in searching for extraterrestrial intelligence. We also analyze a scenario of searching pulsar signals with BSA/LPI radio telescope and their separation from background radio interference.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133548451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented and Virtual Reality applications provide environments in which users can immerse themselves in a fully or partially virtual world and interact with virtual objects or user interfaces. User-based, formal evaluation is needed to objectively compare interaction techniques and to find their value in different use cases, and user performance metrics are the key to comparing those techniques in a fair and effective manner. In this paper we explore evaluation principles used for, or developed explicitly for, virtual environments, and survey quality metrics based on 15 current, important publications on interaction techniques for virtual environments. We examine, categorize and analyze the formal user studies, and establish and present baseline performance metrics used for the evaluation of interaction techniques in VR and AR.
{"title":"Popular Performance Metrics for Evaluation of Interaction in Virtual and Augmented Reality","authors":"Ali Samini, K. L. Palmerius","doi":"10.1109/CW.2017.25","DOIUrl":"https://doi.org/10.1109/CW.2017.25","url":null,"abstract":"Augmented and Virtual Reality applications provide environments in which users can immerse themselves in a fully or partially virtual world and interact with virtual objects or user interfaces. User-based, formal evaluation is needed to objectively compare interaction techniques, and find their value in different use cases, and user performance metrics are the key to being able to compare those techniques in a fair and effective manner. In this paper we explore evaluation principles used for or developed explicitly for virtual environments, and survey quality metrics, based on 15 current, important publications on interaction techniques for virtual environments. We check, categorize and analyze the formal user studies, and establish and present baseline performance metrics used for evaluation on interaction techniques in VR and AR.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132948803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Markerless Augmented Reality (AR) registration using the standard homography matrix is unstable, and for image-based registration it has very low accuracy. In this paper, we present a new method to improve the stability and accuracy of markerless registration in AR. Based on the Visual Simultaneous Localization and Mapping (V-SLAM) framework, our method adds a three-dimensional dense point cloud processing step to the state-of-the-art ORB-SLAM in order to handle point cloud fusion and object recognition. Our object recognition algorithm acts as a stabilizer that improves the registration accuracy during the model-to-scene transformation. This is achieved by integrating the Hough voting algorithm with the Iterative Closest Point (ICP) method. Our proposed AR framework further increases the registration accuracy by using the integrated camera poses for the registration of virtual objects. Our experiments show that the proposed method not only accelerates camera tracking in a standard SLAM system, but also effectively identifies objects and improves the stability of markerless augmented reality applications.
{"title":"A Stable and Accurate Marker-Less Augmented Reality Registration Method","authors":"Q. Gao, T. Wan, Wen Tang, Long Chen","doi":"10.1109/CW.2017.44","DOIUrl":"https://doi.org/10.1109/CW.2017.44","url":null,"abstract":"Markerless Augmented Reality (AR) registration using the standard Homography matrix is unstable, and for image-based registration it has very low accuracy. In this paper, we present a new method to improve the stability and the accuracy of marker-less registration in AR. Based on the Visual Simultaneous Localization and Mapping (V-SLAM) framework, our method adds a three-dimensional dense cloud processing step to the state-of-the-art ORB-SLAM in order to deal with mainly the point cloud fusion and the object recognition. Our algorithm for the object recognition process acts as a stabilizer to improve the registration accuracy during the model to the scene transformation process. This has been achieved by integrating the Hough voting algorithm with the Iterative Closest Points(ICP) method. Our proposed AR framework also further increases the registration accuracy with the use of integrated camera poses on the registration of virtual objects. Our experiments show that the proposed method not only accelerates the speed of camera tracking with a standard SLAM system, but also effectively identifies objects and improves the stability of markerless augmented reality applications.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121186998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Today, the cyber-world is growing fast to support almost all human activities. Moreover, the cyber-world is beginning to be self-contained, so that we can perform every activity within it. Although the term "cyber-physical world" is still in use, we see that the "cyber" part is growing quickly. In the cyber-world, devices scattered in the physical world play a critical role. Furthermore, the legal relationships among humans will be introduced into the cyber-world, and some legal relationships between humans and devices must also be considered. In this paper, we extend the conventional trust relationship among human identities to one that includes devices. Devices are classified as communicators and sensors/actuators; together they form a platform to collect and analyze data in the cyber-world. Trust in sensors/actuators is hard to establish because of their resource constraints. We therefore propose a "data platform" to accommodate these devices and analyze the trust relationships within it. We discuss high trust in device registration and authentication, together with communications. Based on this high trust for a limited area, we can establish trust in an economical way.
{"title":"Trust for Data and Data Platforms in the Cyber-World","authors":"S. Hiroyuki, Ogata Takanori","doi":"10.1109/CW.2017.31","DOIUrl":"https://doi.org/10.1109/CW.2017.31","url":null,"abstract":"Today, the cyber-world is fast growing to support almost all activities of humans. Moreover, the cyber-world begins to be self-contained so that we can perform every activity in the cyber-world. Although the term \"cyber-physical world\" is still in use, we see that the \"cyber\" part is fast growing.In the cyber-world, devices scattered in the physical world play a critical role. Furthermore, the legal relationship among humans will be introduced into the cyber-world. Moreover, some legal relationship between humans and devices must be considered. In this paper, we extend the conventional trust relationship among human identities to the one including devices. Devices are classified as communicators and sensors/ actuators. They form a platform to collect and analyze data in the cyber-world. The trust of sensors/actuators is hard to establish because of the resource constraint. We propose \"data platform\" to accommodate these devices. Trust relationship in the data platform is analyzed. We discuss high trust in device registration and authentication together with communications. Based on this high trust for the limited area, we can establish trust in an economical way.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122401099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate motion capture and flexible retargeting of underwater creatures such as fish remain difficult due to the long-standing challenges of marker attachment and feature description for soft bodies in the underwater environment. Despite some new research progress in recent years, real-time fish motion retargeting with a desirable motion pattern remains elusive. Strongly motivated by our goal of achieving high-quality data-driven fish animation with a lightweight, mobile device, this paper develops a novel framework for motion capture and retargeting of fish. We capture the motion of real fish with a monocular camera, without using any markers. Elliptical Fourier coefficients are then integrated into the contour-based feature extraction process to analyze the fish swimming patterns. This approach obtains the motion information in a robust way, using a smooth medial axis as the descriptor for the soft fish body. For motion retargeting, we propose a two-level scheme to transfer the captured motion onto new models, such as 2D meshes (with texture) generated from pictures or 3D models designed by artists, regardless of differences in body geometry and fin proportions among species. Both the motion capture and the retargeting processes run in real time, so the system can create varied fish animations while simultaneously obtaining video sequences of real fish from a monocular camera.
{"title":"Motion Capture and Retargeting of Fish by Monocular Camera","authors":"Xiangfei Meng, Junjun Pan, Hong Qin","doi":"10.1109/CW.2017.16","DOIUrl":"https://doi.org/10.1109/CW.2017.16","url":null,"abstract":"Accurate motion capture and flexible retargeting of underwater creatures such as fish remain to be difficult due to the long-lasting challenges of marker attachment and feature description for soft bodies in the underwater environment. Despite limited new research progresses appeared in recent years, the fish motion retargeting with a desirable motion pattern in real-time remains elusive. Strongly motivated by our ambitious goal of achieving high-quality data-driven fish animation with a light-weight, mobile device, this paper develops a novel framework of motion capturing and retargeting for a fish. We capture the motion of actual fish by a monocular camera without the utility of any marker. The elliptical Fourier coefficients are then integrated into the contour-based feature extraction process to analyze the fish swimming patterns. This novel approach can obtain the motion information in a robust way, with smooth medial axis as the descriptor for a soft fish body. For motion retargeting, we propose a two-level scheme to properly transfer the captured motion into new models, such as 2D meshes (with texture) generated from pictures or 3D models designed by artists, regardless of different body geometry and fin proportions among various species. Both motion capture and retargeting processes are functioning in real time. Hence, the system can simultaneously create fish animation with variation, while obtaining video sequences of real fish by a monocular camera.","PeriodicalId":309728,"journal":{"name":"2017 International Conference on Cyberworlds (CW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131759933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}