Many technologies have surfaced to help people understand and learn a language. Major languages such as English and Spanish are easier to learn because many people speak them and know the structure of their grammar, but what about minor ones? How can you easily learn a language whose grammatical structure you do not know, especially when the language itself is not widely known? The researchers present a grammar tool that uses deep parsing for Cebuano, a language spoken mainly in Central Visayas in the Philippines and considered one of the main languages of the whole Visayan region. The tool is intended mainly for people who want to learn the language: students, teachers, people from other regions of the country, and even foreigners. In this study, the researchers convey the grammar through deep parsing, a method that produces a complete syntactic structure for a group of words. The evaluation also shows that the tool, with reliability scores higher than the expected 55%, can genuinely help those who need it most and encourages them to keep using it.
{"title":"Development of a Cebuano Parse Tree for a Grammar Correction Tool Using Deep Parsing","authors":"Jan Mikhail Gaid, Robert Michael Lim, C. Maderazo","doi":"10.1109/TAAI.2018.00042","DOIUrl":"https://doi.org/10.1109/TAAI.2018.00042","url":null,"abstract":"Many technologies had surfaced to help people understand and learn a language. Learning major languages like English, Spanish, etc., was easier because a lot of people were speaking it and actually knew the structural integrity of its grammar, but what about the minor ones? How would you learn a language easily if you did not know its grammatical structure, especially if the language was not that known? The researchers would present a grammatical tool using deep parsing for Cebuano, a language that was mainly spoken in Central Visayas in the Philippines and was considered as one of the main languages in the whole Visayan region. This grammar tool would be useful mainly to people who wanted to learn the language; students, teachers, people from other regions of the country, and even foreigners. In this study, the researchers would relay the grammar through deep parsing, a method used to give a complete syntactic structure for a group of words. It also showed that the tool, with reliability marks higher than the expected 55%, would actually help the ones who needed it the most, and look forward for them into using it more.","PeriodicalId":211734,"journal":{"name":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116689204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unlike traditional logistics, green logistics unifies sustainable economic and social development. In addition, the cross-functional nature of logistics activities makes green development in the logistics market more difficult. In this paper, 52 award-winning green logistics initiatives/practices implemented in Japan during 2006-2017 are studied. To do so, a text mining technique is used to explore the co-occurrence links between words and to present the collaborative green initiatives. In contrast to most research on sustainable logistics, which is directed towards manufacturing companies in product-focused supply chain management, this study takes the dual perspective of logistics service providers and shippers to demonstrate their combined effort in achieving sustainability goals. The results of the analysis show that the text mining technique is useful for summarizing and presenting the data in this research area; however, handling the data in a more means-end logical way, so as to reveal more indirect associations between words, remains future work.
{"title":"Study on Green Logistics Initiatives through Text Mining","authors":"Fuyume Sai","doi":"10.1109/TAAI.2018.00033","DOIUrl":"https://doi.org/10.1109/TAAI.2018.00033","url":null,"abstract":"Different with the traditional, green logistics unifies sustainable economic and social development. In addition, due to the nature of cross-functional logistics activities, it brings more difficulty to green development in the logistics market. In this paper, 52 award-gained green logistics initiatives/practices implemented during the period of 2006-2017 in Japan are studied. For doing so, text mining technique is used to explore the co-occurring links within words and to present the collaborative green initiatives. In contrast to most research on sustainable logistics directed towards manufacturing companies in the product-focused supply chain management, this study is positioned in the dual perspective of logistics service providers and shippers to demonstrate their combined effort of achieving sustainability goals. The results of the analysis showed that text mining technique is useful for summarizing and presenting the data in this research area, however, how to deal with the data more means-end logically remains in the future for revealing more indirect associations between the words.","PeriodicalId":211734,"journal":{"name":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126960781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TAAI 2018 Reviewers","authors":"","doi":"10.1109/taai.2018.00009","DOIUrl":"https://doi.org/10.1109/taai.2018.00009","url":null,"abstract":"","PeriodicalId":211734,"journal":{"name":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"10 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122317669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-label classification (MLC) is an important learning problem where each instance is annotated with multiple labels. Label embedding (LE) is an important family of methods for MLC that extracts and utilizes the latent structure of labels towards better performance. Within the family, feature-aware LE methods, which jointly consider the feature and label information during extraction, have been shown to reach better performance than feature-unaware ones. Nevertheless, current feature-aware LE methods are not designed to flexibly adapt to different evaluation criteria. In this work, we propose a novel feature-aware LE method that takes the desired evaluation criterion (cost) into account during training. The method, named Feature-aware Cost-sensitive Label Embedding (FaCLE), encodes the criterion into the distance between embedded vectors with a deep Siamese network. The feature-aware characteristic of FaCLE is achieved with a loss function that jointly considers the embedding error and the feature-to-embedding error. Moreover, FaCLE is coupled with an additional-bit trick to deal with the possibly asymmetric criteria. Experimental results across different data sets and evaluation criteria demonstrate that FaCLE is superior to other state-of-the-art feature-aware LE methods and competitive to cost-sensitive LE methods.
{"title":"Multi-Label Classification with Feature-Aware Cost-Sensitive Label Embedding","authors":"Hsien-Chun Chiu, Hsuan-Tien Lin","doi":"10.1109/TAAI.2018.00018","DOIUrl":"https://doi.org/10.1109/TAAI.2018.00018","url":null,"abstract":"Multi-label classification (MLC) is an important learning problem where each instance is annotated with multiple labels. Label embedding (LE) is an important family of methods for MLC that extracts and utilizes the latent structure of labels towards better performance. Within the family, feature-aware LE methods, which jointly consider the feature and label information during extraction, have been shown to reach better performance than feature-unaware ones. Nevertheless, current feature-aware LE methods are not designed to flexibly adapt to different evaluation criteria. In this work, we propose a novel feature-aware LE method that takes the desired evaluation criterion (cost) into account during training. The method, named Feature-aware Cost-sensitive Label Embedding (FaCLE), encodes the criterion into the distance between embedded vectors with a deep Siamese network. The feature-aware characteristic of FaCLE is achieved with a loss function that jointly considers the embedding error and the feature-to-embedding error. Moreover, FaCLE is coupled with an additional-bit trick to deal with the possibly asymmetric criteria. Experiment results across different data sets and evaluation criteria demonstrate that FaCLE is superior to other state-of-the-art feature-aware LE methods and competitive to cost-sensitive LE methods.","PeriodicalId":211734,"journal":{"name":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128667060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, music has become an important medium because it helps us relax in daily life. Most people therefore listen to music frequently, and current music websites offer online listening services. However, because of the semantic gap, it is not easy to effectively retrieve the music a user prefers, especially from a huge amount of music data. To address this issue, this paper presents a personalized content-based music retrieval system that integrates user-filtering and query-refinement techniques to achieve high-quality music retrieval. In terms of user-filtering, a new user's interest can be inferred from user similarities. In terms of query-refinement, the user's interest can be guided toward the potential search space through iterative feedback. The experimental results show that the proposed method improves retrieval quality significantly.
{"title":"Personalized Content-Based Music Retrieval by User-Filtering and Query-Refinement","authors":"Ja-Hwung Su, T. Hong, Jyun-Yu Li, Jung-Jui Su","doi":"10.1109/TAAI.2018.00047","DOIUrl":"https://doi.org/10.1109/TAAI.2018.00047","url":null,"abstract":"In recent years, music is an important media because it can relax us in our daily life. Therefore, most people listen to music frequently and current music websites offer online listening services. However, because the semantic gap, it is not easy to effectively retrieve the user preferred music especially from a huge amount of music data. For this issue, this paper presents a personalized content-based music retrieval system that integrates techniques of user-filtering and query-refinement to achieve high quality of music retrieval. In terms of user-filtering, the new user interest can be inferred by the user similarities. In terms of query-refinement, the user interest can be guided to the potential search space by iterative feedbacks. The experimental results show the proposed method does improve the retrieval quality significantly.","PeriodicalId":211734,"journal":{"name":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"351 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126681884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a successive method for human tracking and posture estimation using multiple omnidirectional cameras that is suitable for machine learning methods. Stable estimation of the foot and head positions is achieved through combined analysis with particle filter processing. Moreover, classification is accomplished using the constraint of the line connecting the head and foot positions. Combining this constraint with the relative height from head to foot makes it possible to distinguish four typical postures of human activity in an indoor scene. We believe that the continuity of these data helps the time-sequential learning converge smoothly for discriminating between normal and abnormal behavior.
{"title":"Successive Human Tracking and Posture Estimation with Multiple Omnidirectional Cameras","authors":"Shunsuke Akama, Akihiro Matsufuji, E. Sato-Shimokawara, Shoji Yamamoto, Toru Yamaguchi","doi":"10.1109/TAAI.2018.00019","DOIUrl":"https://doi.org/10.1109/TAAI.2018.00019","url":null,"abstract":"We propose a successive method for human tracking and posture estimation by using multiple omnidirectional cameras appropriate for Machine Learning method. A stable estimation for foot and head position is executed by the combination analysis with particle filter processing. Moreover, a classification method is accomplished by using the constraint of the connected line between head and foot position. The combination both this constraint and relative height from head to foot is possible to distinguish typical four postures for human activities in an indoor scene. We believe that this continuity of each data helps smooth convergence to the time-sequential learning for the discrimination between normal and abnormal behavior.","PeriodicalId":211734,"journal":{"name":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131859011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study examines a method for effectively utilizing, as knowledge, the information an organization stores in the course of its business activities. More specifically, the purpose of this study is to develop a system that supports efficiently finding appropriate knowledge in the stored information according to a question sentence input by a user. As an approach, we use automatic document summarization technology to obtain valuable information from the stored data, and we evaluate its effectiveness on real bulletin board system data from a company.
{"title":"A Research on Document Summarization and Presentation System Based on Feature Word Extraction from Stored Informations","authors":"Kei Matsubayashi, Akihiro Yamashita, H. Nonaka, Yohko Konno","doi":"10.1109/TAAI.2018.00022","DOIUrl":"https://doi.org/10.1109/TAAI.2018.00022","url":null,"abstract":"This study examines a method of effective utilization as knowledge in organizational stored information of business activities. More specifically, the purpose of this study is developing a system that supports efficient finding of appropriate knowledge from the stored information according to a question sentence input from a user. As an approach, we used automatic documents summarization technology to obtain valuable information from the stored information and we evaluated the effectiveness based on the real bulletin board system data from a certain company.","PeriodicalId":211734,"journal":{"name":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116535761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a system that integrates neural networks' inference with context and relations for complicated action recognition. In recent years, analysis of first-person, or ego-centric, video has drawn considerable attention as a way to better understand human activity, with uses in law enforcement, life logging, and home automation. However, action recognition in ego-centric video is a fundamental problem that rests on inferring complicated features. To overcome these problems, we propose context-based inference for complicated action recognition. In realistic scenes, people manipulate objects as a natural part of performing an activity, and these object manipulations are an important part of the visual evidence that should be considered as context. Thus, we take such context into account for action recognition. Our system consists of a rule-based architecture of bidirectional associative memory that uses the context of object-hand relationships for inference. We evaluate our method on a benchmark first-person video dataset, and empirical results illustrate the efficiency of our model.
{"title":"A Method of Action Recognition in Ego-Centric Videos by Using Object-Hand Relations","authors":"Akihiro Matsufuji, Wei-Fen Hsieh, Hao-Ming Hung, Eri Shimokawara, Toru Yamaguchi, Lieu-Hen Chen","doi":"10.1109/TAAI.2018.00021","DOIUrl":"https://doi.org/10.1109/TAAI.2018.00021","url":null,"abstract":"We present a system for integrating the neural networks' inference by using context and relation for complicated action recognition. In recent years, first person point of view which called as ego-centric video analysis draw a high attention to better understanding human activity and for being used to law enforcement, life logging and home automation. However, action recognition of ego-centric video is fundamental problem, and it is based on some complicating feature inference. In order to overcome these problems, we propose the context based inference for complicated action recognition. In realistic scene, people manipulate objects as a natural part of performing an activity, and these object manipulations are important part of the visual evidence that should be considered as context. Thus, we take account of such context for action recognition. Our system is consist of rule base architecture of bi-directional associative memory to use context of object-hand relationship for inference. We evaluate our method on benchmark first person video dataset, and empirical results illustrate the efficiency of our model.","PeriodicalId":211734,"journal":{"name":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125781988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hearing dog is a type of assistance dog for hearing-impaired individuals. The dog's physical touch can alert them to important sounds such as doorbells, alarm clocks, and fire alarms. Although hearing dogs can assist people, there are not enough of them around the world today. As an alternative, a hearing-dog robot has been developed. This robot can move around autonomously to search for a user and notify him or her of important sounds. In this work, we propose an exploration algorithm for the robot that considers past information about the user's location. Specifically, the algorithm utilizes the user's life rhythm in order to explore efficiently. In our experiments, the proposed algorithm found the user in less time than the algorithm that does not use the user's life rhythm.
{"title":"Consideration of Life Rhythm for Hearing-Dog Robots Searching for User","authors":"S. Furuta, Tsuyoshi Nakamura, Y. Iwahori, S. Fukui, M. Kanoh, Koji Yamada","doi":"10.1109/TAAI.2018.00031","DOIUrl":"https://doi.org/10.1109/TAAI.2018.00031","url":null,"abstract":"A hearing dog is a sort of assistance dog for hearing-impaired individuals. The physical touch of the dog can alert the individuals to important sounds such as doorbells, alarm clocks, and fire alarms. Although hearing dogs can assist people, there is an insufficient number of them around the world today. As an alternative, a hearing-dog robot has been developed. This robot can move around autonomously to search for a user and notify him or her of important sounds. In this work, we propose an exploring algorithm for the robot that considers past information about the location of the user. Specifically, this algorithm utilizes the user's life rhythm in order to achieve efficient exploring. In our experiments, proposed algorithm showed a shorter time as compared with the algorithm without the user's life rhythm.","PeriodicalId":211734,"journal":{"name":"2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130194520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}