Pub Date: 2019-08-01 | DOI: 10.1109/Ubi-Media.2019.00063
Anas Ali Alkasasbeh, G. Ghinea, Wu-Yuin Hwang
In currently used verification systems, different senses, such as touch and sight, are used for inputs or outputs. In a similar way, olfactory media (the sense of smell) could take part in the verification method. In this study, an empirical investigation was conducted to study the impact of olfactory data as a data channel on user performance and Quality of Experience (QoE). The olfactory data were used with words in our verification model (PassSmell). To this end, we developed two versions of the application, olfactory-enhanced and non-olfactory, backed by a database of words with and without scents. Twenty-eight participants took part in our experiment, evenly split into a control and an experimental group. Time and the number of failed/successful attempts were recorded. A significant difference in time taken was found between the experimental and control groups, as determined by an independent-samples t-test. Similar results were found for average scores and the number of successful attempts. Regarding QoE, having olfactory data with words instead of passwords influenced users positively, making them inclined to use this kind of application in the future.
Title: PassSmell: Using Olfactory Media for Authentication
Published in: 2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)
Pub Date: 2019-08-01 | DOI: 10.1109/Ubi-Media.2019.00041
Supanut Konngern, Naphssorn Kaibutr, Nawaporn Konru, T. Tantidham, Chih-Lin Hu, Tipajin Thaipisutikul, T. Shih, P. Mongkolwat
In the present day, the world is moving quickly, making modern life more complex. Some people may have less time to manage their schedules, which could make them forget to do something important. In particular, a family may not be able to manage its time well because its members are caught up in busy work. We have developed an application to help family members manage their schedules. It is a scheduling application with reminders: a user can add tasks for other family members and specify an activity, a place at which to deliver each reminder, and the person who is to be reminded or to do the activity. Multiple tasks can be combined into a plan, which specifies the date and time at which the defined tasks must be done. For the reminder part, we use Zenbo, a personal robot assistant from ASUS that is open to developers. Zenbo can speak, move, detect people, and much more. It works with the application to remind family members to perform an activity at a preset time. Zenbo retrieves the task information stored in the application database and can go to the specified place to notify a person to do the task. Users can also view the history of previous tasks. Because people are too busy and may not manage their time well, we developed this schedule and reminder application, working with Zenbo, to help families manage their activities and time effectively.
Title: Assistive Robot with Action Planner and Schedule for Family
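The task/plan data model described in the abstract can be sketched as below. The class and field names (`Task`, `Plan`, `due_tasks`, the five-minute reminder window) are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Task:
    activity: str          # what to do
    place: str             # where the reminder should be delivered
    assignee: str          # which family member is reminded

@dataclass
class Plan:
    when: datetime         # date and time the tasks must be done
    tasks: list            # the Task objects combined into this plan

def due_tasks(plans, now, window_minutes=5):
    """Return every task whose plan time falls inside the reminder window."""
    due = []
    for plan in plans:
        minutes_ahead = (plan.when - now).total_seconds() / 60.0
        if 0 <= minutes_ahead <= window_minutes:
            due.extend(plan.tasks)
    return due
```

A robot-side loop would poll `due_tasks` against the application database and drive Zenbo to each task's place to deliver the reminder.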
Pub Date: 2019-08-01 | DOI: 10.1109/Ubi-Media.2019.00039
Prasitthichai Naronglerdrit
This paper compares the performance of bottleneck feature extraction based on two different architectures: Convolutional Neural Network (CNN) based and Deep Neural Network (DNN) based bottleneck feature extraction. Both bottleneck networks were trained for 200 epochs to perform the feature extraction task. The input of the bottleneck network is the same as its output, namely the preprocessed images. After training, the layers after the bottleneck layer were cut off, making the bottleneck layer the output layer. Bottleneck feature extraction thus reduces the dimension of the images from 4096 to 128, yielding feature vectors for the classification process. Classification was performed by an Artificial Neural Network (ANN) with three fully connected layers, trained for 500 epochs. To evaluate performance, 10-fold cross-validation was applied to the networks. The result is that CNN-based bottleneck feature extraction outperforms the DNN-based one, with accuracies of 99.54% and 98.91%, respectively.
Title: Facial Expression Recognition: A Comparison of Bottleneck Feature Extraction
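The cut-off-at-the-bottleneck idea can be illustrated with a toy fully connected autoencoder. Dimensions are shrunk here (64 → 8 instead of 4096 → 128) and the weights are untrained random values; this is a sketch of the concept, not the paper's CNN/DNN networks.

```python
import math
import random

def dense(x, weights, biases, activation=math.tanh):
    """One fully connected layer: activation(W.x + b)."""
    return [activation(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def make_layer(n_in, n_out, rng):
    weights = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    return weights, [0.0] * n_out

rng = random.Random(0)
enc_hidden = make_layer(64, 16, rng)   # encoder hidden layer
bottleneck = make_layer(16, 8, rng)    # bottleneck layer -> becomes the new output layer
decoder = make_layer(8, 64, rng)       # trained to reconstruct the input, cut off afterwards

def extract_features(image_vec):
    """Forward pass up to the bottleneck only; the decoder layers are discarded."""
    hidden = dense(image_vec, *enc_hidden)
    return dense(hidden, *bottleneck)

features = extract_features([0.5] * 64)   # 64-dim input -> 8-dim feature vector
```

The extracted feature vectors would then feed the downstream ANN classifier.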
Pub Date: 2019-08-01 | DOI: 10.1109/Ubi-Media.2019.00019
Qiuyu Lei, Yun Liu
Domain-specific knowledge graphs can represent complex domain knowledge in a structured format and have achieved great success in practical applications. Recently, knowledge graphs have been widely used in recommender systems because of their ability to integrate various recommendation models and deal with data sparseness and cold-start problems. In this paper, we propose an approach to extract movie-related information from Linked Open Data (LOD) and construct a knowledge graph of the movie domain. The Neo4j graph database, characterized by a friendly user interface and fast queries, is used to visualize the knowledge graph.
Title: Constructing Movie Domain Knowledge Graph Based on LOD
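A knowledge graph extracted from LOD is, at its core, a set of subject-predicate-object triples. The sketch below shows that representation with a tiny pattern-matching query; the example movies and predicate names are illustrative, not from the paper's dataset.

```python
# Movie-domain triples as they might be extracted from Linked Open Data.
triples = [
    ("Inception",    "directedBy", "Christopher Nolan"),
    ("Inception",    "hasGenre",   "Science Fiction"),
    ("Interstellar", "directedBy", "Christopher Nolan"),
    ("Interstellar", "hasGenre",   "Science Fiction"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Match triples against an optional pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# In Neo4j the equivalent pattern would be a Cypher query, e.g.:
#   MATCH (m)-[:directedBy]->(d {name: "Christopher Nolan"}) RETURN m
nolan_films = {s for s, p, o in query(triples, predicate="directedBy",
                                      obj="Christopher Nolan")}
```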
Pub Date: 2019-08-01 | DOI: 10.1109/Ubi-Media.2019.00053
Pichayakul Jenpoomjai, Potsawat Wosri, S. Ruengittinun, Chih-Lin Hu, Chalothon Chootong
This paper aims to reduce losses in emergency cases of elderly falls in residential living environments. We design a fall detection system that estimates human pose using the TensorFlow APIs to identify falls of seniors. With the proposed VA algorithm, which considers the time, velocity, and acceleration of human movement, the fall detection system can better analyze falls and obtain more accurate pose estimation. To examine the proposed system, experiments were conducted to verify basic falling scenarios on real traces of human motion records. Results show that the acceleration of human movement markedly affects the classification of actions. The proposed approach achieves an accuracy of 88% on the fall detection test data.
Title: VA Algorithm for Elderly's Falling Detection with 2D-Pose-Estimation
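The velocity-and-acceleration idea behind the VA algorithm can be sketched with finite differences over a tracked pose keypoint. The threshold values and the single-keypoint simplification are assumptions for illustration, not the paper's tuned algorithm.

```python
def detect_fall(y_positions, fps, v_thresh=1.0, a_thresh=5.0):
    """Flag a fall when downward velocity and acceleration of a pose keypoint
    (e.g. the hip centre; y measured downwards, in metres) both exceed thresholds.
    v_thresh and a_thresh are illustrative values, not the paper's."""
    dt = 1.0 / fps
    velocities = [(y1 - y0) / dt for y0, y1 in zip(y_positions, y_positions[1:])]
    accelerations = [(v1 - v0) / dt for v0, v1 in zip(velocities, velocities[1:])]
    return any(v > v_thresh and a > a_thresh
               for v, a in zip(velocities[1:], accelerations))
```

Requiring both a high velocity and a high acceleration is what separates a fall from merely fast but smooth movement, matching the abstract's observation that acceleration drives the classification.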
Pub Date: 2019-08-01 | DOI: 10.1109/Ubi-Media.2019.00067
Jun-Ming Su, Ming-Hua Cheng, Xin-Jie Wang, S. Tseng
Network security practice can be regarded as hands-on learning, so how to efficiently evaluate students' learning outcomes is an important issue. Accordingly, this study proposes a scheme, called the SimTI-WS scheme, to create Simulated Test Items for assessing students' learning outcomes online in the Web Security subject. A Simulated Test Item created with the SimTI-WS scheme allows students to virtually operate and interact with a simulated Web Security scenario, e.g., WebGoat. The SimTI-WS scheme is therefore workable, and the evaluation of the Web Security subject can thus be expected to improve.
Title: A Scheme to Create Simulated Test Items for Facilitating the Assessment in Web Security Subject
Pub Date: 2019-08-01 | DOI: 10.1109/Ubi-Media.2019.00031
Yi-Hui Wu, Wen-Jiin Tsai, Hua-Tsung Chen
This paper addresses the problem of temporal action detection in untrimmed videos. Considering that actions can be recognized from the objects that occur in the video and their corresponding motion, a hierarchical model is proposed which consists of two object detection networks performing temporal action detection. The first network detects objects in each frame, and the second performs the temporal action detection. We also propose a method that converts the object detection results of the first network into a new type of frame so that they can be fed to the second network. The generated frame has six channels carrying spatiotemporal information beneficial to action detection. Quantitative results on the THUMOS14 dataset demonstrate the superiority of the proposed model, with satisfactory performance gains over state-of-the-art action detection methods.
Title: Temporal Action Detection Based on Hierarchical Object Detection Networks
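One way to picture "detections converted into a six-channel frame" is to rasterize each box into a small grid. The channel layout below is purely an assumption for illustration; the abstract does not spell out the paper's actual six channels.

```python
def boxes_to_frame(boxes, prev_boxes, h=8, w=8):
    """Rasterize per-frame detections into an h x w x 6 pseudo-frame.
    Boxes are (x0, y0, x1, y1, cls, conf) in grid cells. Assumed channels:
    0 presence, 1 confidence, 2 class id, 3-4 box-centre motion since the
    previous frame, 5 previous-frame presence."""
    frame = [[[0.0] * 6 for _ in range(w)] for _ in range(h)]
    prev_centres = {cls: ((x0 + x1) / 2, (y0 + y1) / 2)
                    for x0, y0, x1, y1, cls, conf in prev_boxes}
    for x0, y0, x1, y1, cls, conf in boxes:
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        px, py = prev_centres.get(cls, (cx, cy))   # no motion if class is new
        for y in range(y0, y1):
            for x in range(x0, x1):
                frame[y][x][0] = 1.0
                frame[y][x][1] = conf
                frame[y][x][2] = float(cls)
                frame[y][x][3] = cx - px
                frame[y][x][4] = cy - py
    for x0, y0, x1, y1, cls, conf in prev_boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                frame[y][x][5] = 1.0
    return frame
```

Such a frame packages both object identity and motion, which is the kind of spatiotemporal input the second detection network consumes.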
Pub Date: 2019-08-01 | DOI: 10.1109/Ubi-Media.2019.00055
Jia-Ching Ying, Yu-Bing Wang, Chih-Kai Chang, Ching Chang, Yu-Han Chen, Yow-Shin Liou
United States Treasury Bonds are government bonds issued by the United States Treasury through the Bureau of the Public Debt. Trades of U.S. Treasury Bonds have a huge influence on the global economy. To analyze the trend of the global economy, many economists believe the U.S. Treasury Yield can predict the fluctuations of other financial markets such as the stock, futures, and options markets. However, most financial prediction models focus only on predicting stock prices, a form of multidimensional time-series prediction. Although the U.S. Treasury Yield can also be viewed as a multidimensional time series, models built for stock prices do not fully satisfy the requirements of predicting it. Besides, most traditional machine learning methods focus only on estimating short-term cash flow; as a result, their loss increases significantly when the target period fluctuates. In this paper, we propose a deep learning framework, DeepBonds, to predict U.S. Treasury Yields with different issue periods. A Recurrent Neural Network with Long Short-Term Memory (LSTM) architecture is utilized to effectively summarize U.S. Treasury Yields as characteristic vectors, from which future yields with different issue periods can be precisely predicted. We conduct a comprehensive experimental study on a real dataset collected from the Resource Center website of the U.S. Department of the Treasury. The results demonstrate significantly improved accuracy of our deep learning approach compared with existing works.
Title: DeepBonds: A Deep Learning Approach to Predicting United States Treasury Yield
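The LSTM summarization step can be sketched with a single, from-scratch LSTM cell stepped over a short yield series; the hidden state plays the role of the "characteristic vector". The tiny sizes, random weights, and example yield numbers are illustrative, not DeepBonds' trained model.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, weights):
    """One LSTM step over the concatenated vector [h_prev; x].
    weights maps each gate name ('i', 'f', 'o', 'g') to (matrix, bias)."""
    z = h_prev + x                               # list concatenation
    def gate(name, act):
        rows, bias = weights[name]
        return [act(sum(w * v for w, v in zip(row, z)) + b)
                for row, b in zip(rows, bias)]
    i = gate("i", sigmoid)                       # input gate
    f = gate("f", sigmoid)                       # forget gate
    o = gate("o", sigmoid)                       # output gate
    g = gate("g", math.tanh)                     # candidate cell state
    c = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c_prev, i, g)]
    h = [oj * math.tanh(cj) for oj, cj in zip(o, c)]
    return h, c

rng = random.Random(1)
hidden, n_in = 4, 1
def init(n_out, n_cols):
    return ([[rng.uniform(-0.5, 0.5) for _ in range(n_cols)] for _ in range(n_out)],
            [0.0] * n_out)
weights = {name: init(hidden, hidden + n_in) for name in "ifog"}

h, c = [0.0] * hidden, [0.0] * hidden
for y in [2.40, 2.45, 2.43, 2.50]:               # toy daily yields, in percent
    h, c = lstm_step([y], h, c, weights)         # h summarizes the series so far
```

In the full framework, a trained output layer would map the final hidden state to predicted yields for each issue period.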
Pub Date: 2019-08-01 | DOI: 10.1109/Ubi-Media.2019.00058
I. Chang, Tzu-Chiang Wang
Cosegmentation aims to segment out similar objects from a set of images with minimal additional information. Most existing cosegmentation algorithms assume that the foreground objects appear in all images of the set; when the foreground objects appear in only a few images, the segmentation results may be wrong. This paper proposes a new cosegmentation algorithm that can segment and classify the foregrounds of different objects even if they do not appear in all images. In our work, an image is considered to contain several kinds of objects. Each object is composed of several object elements; therefore, each image can be expressed as a combination of object elements. Object elements with similar features are grouped into one object-element cluster by a density-clustering algorithm, which also excludes object elements that do not have a sufficient number of similar neighbours. During the segmentation process, we de-project the sub-object classes back to the images. Observing the distribution of each sub-object class, we select the appropriate classes as the segmented results through the selection criteria. In this work, an unsupervised multiple-object-class framework is proposed, the segmentation rate is enhanced by introducing the concept of independent object elements, and a selection criterion is presented to relax the similar-object constraint.
Title: Unsupervised Multi-class Cosegmentation
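The density-clustering step that groups similar object elements and excludes sparse ones can be sketched with a minimal DBSCAN-style routine over low-dimensional feature vectors. The abstract does not name the exact algorithm or parameters, so treat both as assumptions.

```python
import math

def density_cluster(points, eps=1.0, min_pts=2):
    """Minimal DBSCAN-style clustering: groups dense points into clusters and
    labels points without enough similar neighbours as noise (-1)."""
    labels = [None] * len(points)

    def neighbours(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:       # too few similar elements: excluded as noise
            labels[i] = -1
            continue
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == -1:        # noise point reachable from a core point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            grown = neighbours(j)
            if len(grown) >= min_pts:  # only core points expand the cluster
                queue.extend(grown)
        cluster += 1
    return labels
```

The noise label corresponds to the excluded object elements; each remaining cluster corresponds to an object-element cluster from which sub-object classes are formed.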
Pub Date: 2019-08-01 | DOI: 10.1109/Ubi-Media.2019.00027
Chuan-Feng Chiu, T. Shih, Chi-Yen Lin, Lin Hui, Fitri Utaminingrum
With the advent of new technologies, the interaction between people and computers has become increasingly inseparable. Human-computer interaction (HCI) technology improves operations in people's lives, and gesture recognition, one of the basic such operations, is a hot research topic. In this paper, we developed a virtual theater system that uses gestures captured by the Myo armband to control changes between scenes. The proposed system uses a deep learning method to classify dynamic gestures and then sends instructions to the virtual theater. With this system, users can easily control the scenes and objects in the theater, developed in Unity, and enrich the virtual stage during a performance.
Title: Application of Hand Recognition System Based on Electromyography and Gyroscope Using Deep Learning
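Before a network can classify a dynamic gesture, the raw multi-channel EMG stream from the armband is typically windowed into feature frames. The mean-absolute-value sketch below is a common EMG baseline feature, offered as an assumption about preprocessing rather than the paper's actual pipeline.

```python
def mav_features(emg, window=4, step=2):
    """Mean absolute value of each EMG channel over sliding windows, a classic
    feature for gesture classification. emg is indexed [channel][sample]."""
    n_samples = len(emg[0])
    feats = []
    for start in range(0, n_samples - window + 1, step):
        feats.append([sum(abs(s) for s in channel[start:start + window]) / window
                      for channel in emg])
    return feats
```

Each feature frame (one list per window, one value per channel) would then be fed, possibly alongside gyroscope readings, to the gesture classifier.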