Y. Tokuyama, R. J. Rajapakse, Sachi Yamabe, K. Konno, Y. Hung
Augmented reality (AR) integrates 3D virtual objects into a 3D real environment in real time. AR applications have been proposed for medical visualization, maintenance and repair, robot path planning, entertainment, and military aircraft navigation and targeting. This paper introduces the development of an augmented reality game that allows the user to carry out lower limb exercise using a natural user interface based on Microsoft Kinect. The system is designed as an augmented game in which users see themselves in a world augmented with computer-generated virtual objects. The player, seated in a chair, simply has to step on a mole that pops up and retreats at random. The game engages a large number of lower limb muscles, which helps prevent falls, and it is also suitable for rehabilitation.
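The abstract does not detail the hit-detection logic, but the core of such a Kinect-based game is a simple proximity test between the tracked foot joint and the active mole. The following Python sketch illustrates one plausible form of that test; the coordinates, the 0.15 m hit radius, and the omission of the actual Kinect SDK joint acquisition are assumptions for illustration, not the authors' implementation.

# A minimal, hypothetical sketch of the hit test such a game needs: compare the
# tracked foot-joint position (Kinect camera space, metres) with the position of a
# raised mole. Joint acquisition via the Kinect SDK is omitted.
import math

def step_hit(foot_xyz, mole_xyz, mole_up, radius=0.15):
    """True if the foot joint is within `radius` metres of a raised mole."""
    if not mole_up:
        return False
    return math.dist(foot_xyz, mole_xyz) < radius

# toy frames: (foot position, mole position, mole raised?)
frames = [((-0.31, 0.02, 1.48), (-0.30, 0.00, 1.50), True),
          (( 0.02, 0.00, 1.52), (-0.30, 0.00, 1.50), True),
          (( 0.29, 0.01, 1.49), ( 0.30, 0.00, 1.50), False)]
score = sum(step_hit(f, m, up) for f, m, up in frames)
print("score:", score)  # -> 1 (only the first frame lands on a raised mole)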
{"title":"A Kinect-Based Augmented Reality Game for Lower Limb Exercise","authors":"Y. Tokuyama, R. J. Rajapakse, Sachi Yamabe, K. Konno, Y. Hung","doi":"10.1109/CW.2019.00077","DOIUrl":"https://doi.org/10.1109/CW.2019.00077","url":null,"abstract":"Augmented reality (AR) is where 3D virtual objects are integrated into a 3D real environment in real time. The augmented reality applications such as medical visualization, maintenance and repair, robot path planning, entertainment, military aircraft navigation, and targeting applications have been proposed. This paper introduces the development of an augmented reality game which allows the user to carry out lower limb exercise using a natural user interface based on Microsoft Kinect. The system has been designed as an augmented game where users can see themselves in a world augmented with virtual objects generated by computer graphics. The player sitting in a chair just has to step on a mole that appears and disappears by moving upward and downward randomly. It encourages the activities of a large number of lower limb muscles which will help prevent falls. It is also suitable for rehabilitation.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"6 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125020448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yisi Liu, Zirui Lan, F. Trapsilawati, O. Sourina, Chun-Hsien Chen, W. Müller-Wittig
To deal with the increasing demands in Air Traffic Control (ATC), new workplace designs are being proposed and developed that require novel human factors evaluation tools. In this paper, we propose a novel application of electroencephalogram (EEG)-based emotion, workload, and stress recognition algorithms to investigate the optimal length of training for Air Traffic Control Officers (ATCOs) learning to work with a three-dimensional (3D) display as a supplement to the existing 2D display. We tested and applied state-of-the-art EEG-based subject-dependent algorithms. The following experiment was carried out: twelve ATCOs were recruited, put in charge of the Terminal Control Area, and asked to provide navigation assistance to aircraft departing from and approaching the airport using 2D and 3D displays. EEG data were recorded, and traditional human factors questionnaires were given to the participants after 15, 60, and 120 minutes of training. Unlike the questionnaires, the EEG-based evaluation tools allow emotions, workload, and stress to be recognized at different temporal resolutions while subjects perform the task. The results showed that 50 minutes of training could be enough for the ATCOs to learn the new display setting, as they had relatively low stress and workload. The study demonstrates the potential of applying EEG-based human factors evaluation tools, in addition to traditional questionnaires and feedback, to assess novel system designs, which can benefit future improvements and developments of the systems and interfaces.
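The abstract does not specify the recognition algorithms, so the sketch below shows a generic subject-dependent EEG workload/stress classifier of the kind commonly used in this literature: band-power features from short epochs fed to an SVM. The sampling rate, channel count, frequency bands, and synthetic data are assumptions for illustration only, not the authors' pipeline.

# Illustrative (not the authors') subject-dependent pipeline: band-power features
# from EEG epochs followed by an SVM, as is common for workload/stress recognition.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256                      # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """epoch: (n_channels, n_samples) -> mean power per band per channel."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    feats = []
    for lo, hi in BANDS.values():
        idx = (freqs >= lo) & (freqs < hi)
        feats.extend(psd[:, idx].mean(axis=1))
    return np.array(feats)

# synthetic data standing in for labelled epochs (high vs. low workload)
rng = np.random.default_rng(0)
epochs = rng.standard_normal((60, 8, FS * 2))      # 60 epochs, 8 channels, 2 s
labels = rng.integers(0, 2, 60)
X = np.vstack([band_powers(e) for e in epochs])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())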
{"title":"EEG-Based Human Factors Evaluation of Air Traffic Control Operators (ATCOs) for Optimal Training","authors":"Yisi Liu, Zirui Lan, F. Trapsilawati, O. Sourina, Chun-Hsien Chen, W. Müller-Wittig","doi":"10.1109/CW.2019.00049","DOIUrl":"https://doi.org/10.1109/CW.2019.00049","url":null,"abstract":"To deal with the increasing demands in Air Traffic Control (ATC), new working place designs are proposed and developed that need novel human factors evaluation tools. In this paper, we propose a novel application of Electroencephalogram (EEG)-based emotion, workload, and stress recognition algorithms to investigate the optimal length of training for Air Traffic Control Officers (ATCOs) to learn working with three-dimensional (3D) display as a supplementary to the existing 2D display. We tested and applied the state-of-the-art EEG-based subject-dependent algorithms. The following experiment was carried out. Twelve ATCOs were recruited to take part in the experiment. The participants were in charge of the Terminal Control Area, providing navigation assistance to aircraft departing and approaching the airport using 2D and 3D displays. EEG data were recorded, and traditional human factors questionnaires were given to the participants after 15-minute, 60-minute, and 120-minute training. Different from the questionnaires, the EEG-based evaluation tools allow the recognition of emotions, workload, and stress with different temporal resolutions during the task performance by subjects. The results showed that 50-minute training could be enough for the ATCOs to learn the new display setting as they had relatively low stress and workload. The study demonstrated that there is a potential of applying the EEG-based human factors evaluation tools to assess novel system designs in addition to traditional questionnaire and feedback, which can be beneficial for future improvements and developments of the systems and interfaces.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114026108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Composite sketch recognition belongs to heterogeneous face recognition research, which is of great importance in the field of criminal investigation. Because composite face sketches and photos belong to different modalities, a robust representation of facial features across modalities is the key to recognition. Considering that composite sketches lack texture detail in some areas, so that using texture features alone may result in low recognition accuracy, this paper proposes a composite sketch recognition algorithm based on multi-scale HOG features and semantic attributes. First, the global HOG features of the face and the local HOG features of each facial component are extracted to represent contour and detail features. Then the global and detail features are fused at score level according to their importance. Finally, semantic attributes are employed to re-rank the matching results. The proposed algorithm is validated on the PRIP-VSGC and UoM-SGFS databases, achieving rank-10 identification accuracies of 88.6% and 96.7%, respectively, which demonstrates that it outperforms other state-of-the-art methods.
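As a rough illustration of the described pipeline (multi-scale HOG extraction plus score-level fusion of global and component similarities), the Python sketch below uses scikit-image's HOG implementation. The cell sizes, fusion weights, cosine similarity, and eye-region coordinates are assumptions rather than the paper's settings, and the semantic-attribute re-ranking step is omitted.

# Illustrative sketch: multi-scale HOG descriptors for a face image and a weighted
# score-level fusion of a global and a local (eye-region) similarity.
import numpy as np
from skimage.feature import hog

def multiscale_hog(gray, cell_sizes=(8, 16)):
    """Concatenate HOG descriptors computed at several cell sizes."""
    return np.concatenate([hog(gray, orientations=9, pixels_per_cell=(c, c),
                               cells_per_block=(2, 2)) for c in cell_sizes])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def fused_score(sketch, photo, eye_box, weights=(0.6, 0.4)):
    """Fuse a global face similarity with a local (eye-region) similarity at score level."""
    g = cosine(multiscale_hog(sketch), multiscale_hog(photo))
    y0, y1, x0, x1 = eye_box
    loc = cosine(multiscale_hog(sketch[y0:y1, x0:x1]),
                 multiscale_hog(photo[y0:y1, x0:x1]))
    return weights[0] * g + weights[1] * loc

# toy example with random grayscale "images"; real use would pass aligned face crops
rng = np.random.default_rng(1)
sketch, photo = rng.random((128, 128)), rng.random((128, 128))
print(fused_score(sketch, photo, eye_box=(32, 96, 16, 112)))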
{"title":"Composite Sketch Recognition Using Multi-scale Hog Features and Semantic Attributes","authors":"Xinying Xue, Jiayi Xu, Xiaoyang Mao","doi":"10.1109/CW.2019.00028","DOIUrl":"https://doi.org/10.1109/CW.2019.00028","url":null,"abstract":"Composite sketch recognition belongs to heterogeneous face recognition research, which is of great important in the field of criminal investigation. Because composite face sketch and photo belong to different modalities, robust representation of face feature cross different modalities is the key to recognition. Considering that composite sketch lacks texture details in some area, using texture features only may result in low recognition accuracy, this paper proposes a composite sketch recognition algorithm based on multi-scale Hog features and semantic attributes. Firstly, the global Hog features of the face and the local Hog features of each face component are extracted to represent the contour and detail features. Then the global and detail features are fused according to their importance at score level. Finally, semantic attributes are employed to reorder the matching results. The proposed algorithm is validated on PRIP-VSGC database and UoM-SGFS database, and achieves rank 10 identification accuracy of 88.6% and 96.7% respectively, which demonstrates that the proposed method outperforms other state-of-the-art methods.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115564316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, deep convolutional neural networks (CNNs) have become a new standard in many machine learning applications, not only in image but also in audio processing. However, most studies explore only a single type of training data. In this paper, we present a study on classifying bird species by combining deep neural features of both visual and audio data using a kernel-based fusion method. Specifically, we extract deep neural features based on the activation values of an inner layer of a CNN. We combine these features by multiple kernel learning (MKL) to perform the final classification. In the experiment, we train and evaluate our method on the standard CUB-200-2011 data set combined with our originally collected audio data set covering the 200 bird species (classes). The experimental results indicate that our CNN+MKL method, which utilizes the combination of both categories of data, outperforms single-modality methods, some simple kernel combination methods, and the conventional early fusion method.
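The sketch below illustrates the general idea under stated assumptions: inner-layer activations of a pre-trained VGG16 serve as image features, pooled spectrogram statistics stand in for audio features, and two RBF kernels are combined with fixed weights before training an SVM on the precomputed kernel. Proper MKL would learn the kernel weights, and the CNN architecture, feature layer, and kernel parameters here are illustrative choices, not the authors'.

# Illustrative sketch: deep features from an inner CNN layer for images, stand-in
# audio features, and a weighted sum of RBF kernels fed to an SVM with a
# precomputed kernel. (Proper MKL would learn the kernel weights; fixed here.)
import numpy as np
import torch
import torchvision.models as models
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def image_features(batch):                      # batch: (N, 3, 224, 224) tensor
    with torch.no_grad():
        act = vgg(batch)                        # inner-layer activations
    return act.flatten(1).numpy()

# stand-in data: random images and random audio feature vectors for 3 classes
rng = np.random.default_rng(0)
imgs = torch.rand(12, 3, 224, 224)
Xi = image_features(imgs)
Xa = rng.standard_normal((12, 64))              # e.g. pooled spectrogram features
y = np.repeat([0, 1, 2], 4)

K = 0.5 * rbf_kernel(Xi, gamma=1.0 / Xi.shape[1]) + \
    0.5 * rbf_kernel(Xa, gamma=1.0 / Xa.shape[1])
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                          # training accuracy on toy data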
{"title":"Bird Species Classification with Audio-Visual Data using CNN and Multiple Kernel Learning","authors":"B. Naranchimeg, Chao Zhang, T. Akashi","doi":"10.1109/CW.2019.00022","DOIUrl":"https://doi.org/10.1109/CW.2019.00022","url":null,"abstract":"Recently, deep convolutional neural networks (CNN) have become a new standard in many machine learning applications not only in image but also in audio processing. However, most of the studies only explore a single type of training data. In this paper, we present a study on classifying bird species by combining deep neural features of both visual and audio data using kernel-based fusion method. Specifically, we extract deep neural features based on the activation values of an inner layer of CNN. We combine these features by multiple kernel learning (MKL) to perform the final classification. In the experiment, we train and evaluate our method on a CUB-200-2011 standard data set combined with our originally collected audio data set with respect to 200 bird species (classes). The experimental results indicate that our CNN+MKL method which utilizes the combination of both categories of data outperforms single-modality methods, some simple kernel combination methods, and the conventional early fusion method.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122665536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hiromichi Ichige, M. Toyoura, K. Go, K. Kashiwagi, I. Fujishiro, Xiaoyang Mao
The number of individuals with Age-related Macular Degeneration (AMD) is rapidly increasing. One of the main symptoms of AMD is "metamorphopsia," or distorted vision, which not only makes it difficult for individuals with AMD to do detail-oriented tasks but also makes sufferers more vulnerable to certain risks in day-to-day life. Traditional clinical approaches to assessing metamorphopsia have lacked mechanisms for quantifying the degree of distortion in space, making it impossible to know exactly how individuals with the condition see things. This paper proposes a new method for quantifying distortion in space and visualizing AMD patients' distorted views via line manipulation. By visualizing the distorted views stemming from metamorphopsia, the method gives doctors and others an intuitive picture of how patients see the world and thereby enables a broad range of options for treatment and support.
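One plausible way to realize the visualization step is sketched below under stated assumptions: the displacements a patient assigns to points on manipulated lines are interpolated into a dense displacement field, which is then used to warp an image and approximate the distorted view. The control points, displacement values, and interpolation method are illustrative and do not reflect the paper's actual implementation.

# Illustrative sketch (not the authors' implementation): interpolate sparse
# line-point displacements into a dense field and warp an image with it,
# to approximate how a distorted (metamorphopsic) view might look.
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import map_coordinates

H, W = 128, 128
yy, xx = np.mgrid[0:H, 0:W]

# sparse control points on grid lines and the displacement assigned to each (dy, dx)
pts = np.array([[32, 32], [32, 96], [64, 64], [96, 32], [96, 96]], float)
disp = np.array([[0, 0], [3, -2], [6, 5], [-2, 0], [0, 4]], float)

# interpolate to a dense displacement field (zero displacement outside the hull)
dy = griddata(pts, disp[:, 0], (yy, xx), method="cubic", fill_value=0.0)
dx = griddata(pts, disp[:, 1], (yy, xx), method="cubic", fill_value=0.0)

image = ((xx // 16 + yy // 16) % 2).astype(float)        # checkerboard test pattern
warped = map_coordinates(image, [yy + dy, xx + dx], order=1, mode="nearest")
print(warped.shape, float(np.abs(warped - image).mean()))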
{"title":"Visual Assessment of Distorted View for Metamorphopsia Patient by Interactive Line Manipulation","authors":"Hiromichi Ichige, M. Toyoura, K. Go, K. Kashiwagi, I. Fujishiro, Xiaoyang Mao","doi":"10.1109/CW.2019.00038","DOIUrl":"https://doi.org/10.1109/CW.2019.00038","url":null,"abstract":"The number of individuals with Age-related Macular Degeneration (AMD) is rapidly increasing. One of the main symptoms of AMD is \"metamorphopsia,\" or distorted vision, which not only makes it difficult for individuals with AMD to do detailed-oriented tasks but also makes sufferers more vulnerable to certain risks in day-to-day life. Traditional clinical approaches to assess metamorphopsia have lacked mechanisms for quantifying the degree of distortion in space, making it impossible to know exactly how individuals with the condition see things. This paper proposes a new method for quantifying distortion in space and visualizing AMD patients' distorted views via line manipulation. By visualizing the distorted views stemming from metamorphopsia, the method gives doctors and others an intuitive picture of how patients see the world and thereby enables a broad range of options for treatment and support.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129818670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research project developed a Virtual Reality (VR) training simulator for the CPR procedure. It is designed for training schoolchildren and can also form part of a larger system for training paramedics with VR. The simulator incorporates a number of advanced VR technologies, including the Oculus Rift and Leap Motion. We have gained input from NHS paramedics and several related organisations to design the system and to provide feedback and evaluation of the preliminary working prototype.
{"title":"CPR Virtual Reality Training Simulator for Schools","authors":"N. Vaughan, N. John, N. Rees","doi":"10.1109/CW.2019.00013","DOIUrl":"https://doi.org/10.1109/CW.2019.00013","url":null,"abstract":"This research project developed a Virtual Reality (VR) training simulator for the CPR procedure. This is designed for use training school children. It can also form part of a larger system for training paramedics with VR. The simulator incorporates a number of advanced VR technologies including Oculus Rift and Leap motion. We have gained input from NHS paramedics and several related organisation to design the system and provide feedback and evaluation of the preliminary working prototype.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129848545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the increasing availability of consumer brain-computer interfaces, new methods of authentication can be considered. In this paper, we present a shoulder-surfing-resistant means of entering a graphical password by measuring brain activity. The password is a subset of images displayed repeatedly by rapid serial visual presentation. The occurrence of a password image elicits an event-related potential in the electroencephalogram, the P300 response, which is used to classify whether an image belongs to the password subset or not. We compare individual classifiers, trained with samples of a specific user, to general P300 classifiers, trained over all subjects. We evaluate the permanence of the classification results in three subsequent experiment sessions. The classification score significantly increases from the first to the third session. Comparing the use of natural photos or simple objects as stimuli shows no significant difference. In total, our authentication scheme achieves an equal error rate of about 10%. In the future, with increasing accuracy and proliferation, brain-computer interfaces could find practical application in alternative authentication methods.
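The abstract does not give the classifier details, so the sketch below shows a common P300 target/non-target pipeline under stated assumptions: downsampled post-stimulus epochs classified with linear discriminant analysis, and the equal error rate estimated from the decision scores. The channel count, epoch length, synthetic P300 bump, and choice of LDA are illustrative only.

# Illustrative sketch: LDA on downsampled post-stimulus EEG epochs to separate
# "password image" (target, P300 present) from non-target trials, plus an
# equal-error-rate estimate from the decision scores. Parameters are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n, ch, t = 400, 8, 64                      # trials, channels, samples (~0-500 ms)
X = rng.standard_normal((n, ch, t))
y = rng.integers(0, 2, n)                  # 1 = target (password image shown)
X[y == 1, :, 24:40] += 0.8                 # crude P300-like positivity around 300 ms

feats = X[:, :, ::4].reshape(n, -1)        # downsample in time and flatten
Xtr, Xte, ytr, yte = train_test_split(feats, y, test_size=0.3, random_state=0)
scores = LinearDiscriminantAnalysis().fit(Xtr, ytr).decision_function(Xte)

fpr, tpr, _ = roc_curve(yte, scores)
eer = fpr[np.nanargmin(np.abs(fpr - (1 - tpr)))]   # point where FPR ~= FNR
print(f"EER ~ {eer:.2%}")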
{"title":"A Shoulder-Surfing Resistant Image-Based Authentication Scheme with a Brain-Computer Interface","authors":"Florian Gondesen, Matthias Marx, Ann-Christine Kycler","doi":"10.1109/CW.2019.00061","DOIUrl":"https://doi.org/10.1109/CW.2019.00061","url":null,"abstract":"With the increasing availability of consumer brain-computer interfaces, new methods of authentication can be considered. In this paper, we present a shoulder surfing resistant means of entering a graphical password by measuring brain activity. The password is a subset of images displayed repeatedly by rapid serial visual presentation. The occurrence of a password image entails an event-related potential in the electroencephalogram, the P300 response. The P300 response is used to classify whether an image belongs to the password subset or not. We compare individual classifiers, trained with samples of a specific user, to general P300 classifiers, trained over all subjects. We evaluate the permanence of the classification results in three subsequent experiment sessions. The classification score significantly increases from the first to the third session. Comparing the use of natural photos or simple objects as stimuli shows no significant difference. In total, our authentication scheme achieves an equal error rate of about 10%. In the future, with increasing accuracy and proliferation, brain-computer interfaces could find practical application in alternative authentication methods.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129456260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I. Volkau, A. Mujeeb, Wenting Dai, Marius Erdt, A. Sourin
Automatic optical inspection in manufacturing has traditionally been based on computer vision. However, there are emerging attempts to perform it using deep learning. Deep convolutional neural networks can learn semantic image features that can be used for defect detection in products. In contrast to existing approaches, where supervised or semi-supervised training is done on thousands of images of defects, we investigate whether an unsupervised deep learning model for defect detection can be trained with an orders-of-magnitude smaller number of representative defect-free samples (tens rather than thousands). This research is motivated by the fact that collecting large amounts of defective samples is difficult and expensive. Our model undergoes only one-class training and aims to extract distinctive semantic features from the normal samples in an unsupervised manner. We propose a variant of transfer learning that combines unsupervised learning with a VGG16 network whose weight coefficients are pre-trained on ImageNet. To demonstrate defect detection, we used a set of printed circuit boards (PCBs) with different types of defects: scratches, missing washers/extra holes, abrasion, and broken PCB edges. The trained model allows us to form clusters of normal internal feature representations of the PCB in a high-dimensional feature space and to localize defective patches in a PCB image based on their distance from the normal clusters. Initial results show that more than 90% of defects were detected.
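A minimal sketch of the described idea, with assumed parameters: a pre-trained VGG16 acts as a fixed feature extractor for defect-free patches, k-means builds clusters of normal feature representations, and a test patch is flagged as defective when its distance to the nearest normal cluster exceeds a threshold. The number of clusters, the pooling, and the percentile threshold are assumptions, not the paper's settings.

# Illustrative sketch of distance-to-normal-clusters anomaly detection: VGG16
# (ImageNet weights) as a fixed feature extractor, k-means over defect-free patch
# features, and a distance threshold for flagging defective patches.
import numpy as np
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def patch_features(patches):                      # (N, 3, 224, 224) tensor
    with torch.no_grad():
        f = backbone(patches)                     # (N, 512, 7, 7)
    return f.mean(dim=(2, 3)).numpy()             # global average pooling -> (N, 512)

# stand-ins for defect-free training patches and mixed test patches
normal = torch.rand(40, 3, 224, 224)
test = torch.rand(10, 3, 224, 224)

Fn = patch_features(normal)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(Fn)
thr = np.percentile(km.transform(Fn).min(axis=1), 99)   # 99th percentile of normal distances

d = km.transform(patch_features(test)).min(axis=1)      # distance to nearest normal cluster
print("defective patch flags:", d > thr)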
{"title":"Detection Defect in Printed Circuit Boards using Unsupervised Feature Extraction Upon Transfer Learning","authors":"I. Volkau, A. Mujeeb, Wenting Dai, Marius Erdt, A. Sourin","doi":"10.1109/CW.2019.00025","DOIUrl":"https://doi.org/10.1109/CW.2019.00025","url":null,"abstract":"Automatic optical inspection for manufacturing traditionally was based on computer vision. However, there are emerging attempts to do it using deep learning approach. Deep convolutional neural network allows to learn semantic image features which could be used for defect detection in products. In contrast to the existing approaches where supervised or semi-supervised training is done on thousands of images of defects, we investigate whether unsupervised deep learning model for defect detection could be trained with orders of magnitude smaller amount of representative defect-free samples (tenths rather than thousands). This research is motivated by the fact that collection of large amounts of defective samples is difficult and expensive. Our model undergoes only one-class training and aims to extract distinctive semantic features from the normal samples in an unsupervised manner. We propose a variant of transfer learning, that consists of combination of unsupervised learning used upon VGG16 with pre-trained on ImageNet weight coefficients. To demonstrate a defect detection, we used a set of Printed Circuit Boards (PCBs) with different types of defects - scratch, missing washer/extra hole, abrasion, broken PCB edge. The trained model allows us to make clusters of normal internal representations of features of PCB in high-dimensional feature space, and to localize defective patches in PCB image based on distance from normal clusters. Initial results show that more than 90% of defects were detected.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130068150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research project developed a Virtual Reality (VR) training simulator for paramedic procedures. Currently, needle cricothyroidotomy and chest drain insertion are modelled; these could form part of a larger system for training paramedics in various other procedures with VR. The simulator incorporates a number of advanced VR technologies, including the Oculus Rift and haptic feedback. We have gained input from NHS paramedics and several related organisations to design the system, and they have provided feedback and evaluation of the preliminary working prototype.
{"title":"ParaVR: Paramedic Virtual Reality Training Simulator","authors":"N. Vaughan, N. John, N. Rees","doi":"10.1109/CW.2019.00012","DOIUrl":"https://doi.org/10.1109/CW.2019.00012","url":null,"abstract":"This research project developed a Virtual Reality (VR) training simulator for paramedic procedures. Currently needle cricothyroidotomy and chest drain are modelled, which could form part of a larger system for training paramedics with VR in various other procedures. The simulator incorporates a number of advanced VR technologies including Oculus Rift and haptic feedback. We have gained input and feedback from NHS paramedics and several related organisation to design the system and provide feedback and evaluation of the preliminary working prototype.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127666245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a method for semi-automatically creating an anime-like 3D face model from a single illustration. In the proposed method, principal component analysis (PCA) is applied to existing anime-like 3D models to obtain base models for generating natural 3D models. To align the dimensions of the data and establish geometric correspondence, a template model is deformed using a modified Non-rigid Iterative Closest Point (NICP) method. Then, the coefficients of the linear combination of the base models are estimated by minimizing the difference between the rendered image of the 3D model with those coefficients and the input illustration using edge-based matching. We confirmed that our method was able to generate natural anime-like 3D face models whose eye and face shapes are similar to those of the input illustration.
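To make the linear-combination step concrete, the sketch below builds PCA base models from registered example meshes and fits the combination coefficients to a target shape by least squares. In the paper, the coefficients are estimated against a 2D illustration via edge-based matching after NICP registration; here a 3D target and synthetic meshes stand in, so every quantity is illustrative.

# Illustrative sketch of the morphable-model part only: PCA over vertex coordinates
# of registered example meshes, then least-squares estimation of the
# linear-combination coefficients for a target shape.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_verts = 20, 500
meshes = rng.standard_normal((n_models, n_verts * 3))   # flattened (x, y, z) per vertex

mean = meshes.mean(axis=0)
U, S, Vt = np.linalg.svd(meshes - mean, full_matrices=False)
k = 5
basis = Vt[:k]                                          # top-k PCA base models

def reconstruct(coeffs):
    """Linear combination of base models added to the mean shape."""
    return mean + coeffs @ basis

# fit coefficients to a (noisy) target shape by ordinary least squares
target = reconstruct(np.array([1.5, -0.5, 0.2, 0.0, 0.8])) \
         + 0.01 * rng.standard_normal(n_verts * 3)
coeffs, *_ = np.linalg.lstsq(basis.T, target - mean, rcond=None)
print(np.round(coeffs, 2))                              # recovers roughly the set coefficients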
{"title":"Semi-Automatic Creation of an Anime-Like 3D Face Model from a Single Illustration","authors":"T. Niki, T. Komuro","doi":"10.1109/CW.2019.00017","DOIUrl":"https://doi.org/10.1109/CW.2019.00017","url":null,"abstract":"In this paper, we propose a method for semi-automatically creating an anime-like 3D face model from a single illustration. In the proposed method, principal component analysis (PCA) is applied to existing anime-like 3D models to obtain base models for generating natural 3D models. To align the dimensions of the data and make geometric correspondence, a template model is deformed using a modified Nonrigid Iterative Closest Point (NICP) method. Then, the coefficients of the linear combination of the base models are estimated by minimizing the difference between the rendered image of the 3D model with the coefficients and the input illustration using edge-based matching. We confirmed that our method was able to generate a natural anime-like 3D face models which has similar eye and face shapes to those of the input illustration.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122618249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}