Convolutional Neural Network-based image retrieval with degraded sample
Thanh-Vu Dang, Gwanghyun Yu, H. Nguyen, Vo Hoang Trong, Ju-Hwan Lee, Jinyoung Kim. The 9th International Conference on Smart Media and Applications, 2020. DOI: 10.1145/3426020.3426041
For over a decade, convolutional neural networks (CNNs) have been applied extensively to various image-related tasks. Given an input image, a CNN investigates its content and derives a representation of the image through its structure of hidden neurons. This representation captures the data semantically, which helps solve semantic problems such as image retrieval. To verify this viewpoint, this study addresses the problem of using features learned by a CNN model to perform image retrieval. To further emphasize the effectiveness of the learned features, we use degraded images and their enhanced versions as queries and search for similar images in the gallery set. Data augmentation is also applied to increase the number of images in the gallery. Experiments are conducted on a multi-view dataset, smallNORB, and results are reported both quantitatively and qualitatively.
Comparison of Deep Learning based Fish Detection Performance for Real-Time Smart Fish Farming
Younghak Shin, Jeonghyeon Choi, H. Choi. The 9th International Conference on Smart Media and Applications, 2020. DOI: 10.1145/3426020.3426033
World aquaculture production continues to grow. However, the aquaculture process still depends on human experience. With the recent development of artificial intelligence technology, automation has been achieved in various industrial fields. In this study, a real-time, deep learning-based fish detection method is investigated as a basic research step toward smart farming. Performance is compared and evaluated on real fish data using several deep learning-based object detection models.
Development of electronic library chatbot system using SNS-based mobile chatbot service
Hyunho Park, Hyoungjun Kim, Pan-Koo Kim. The 9th International Conference on Smart Media and Applications, 2020. DOI: 10.1145/3426020.3426134
As non-face-to-face services have become more important since the COVID-19 outbreak, introducing chatbots has become essential for facilities, such as libraries, that traditionally serve users in person. This study aims to develop a chatbot that supports interaction among librarians, managers, and users as a way to strengthen electronic library services. To this end, we examined specific procedures and methods for chatbot development, analyzed user needs and questions in the case of the National Library of Korea, and designed a logical structure for the chatbot. Based on this design, the system provides easily accessible services by implementing intents and entities in danbee.ai, grasping the intention of a user's query, guiding responses with the conversation-flow function, supporting both button-based and free-chat query methods, and linking with SNS (Telegram). After building the chatbot, the interaction between the chatbot and users was confirmed experimentally. Based on this development experience, the study discusses implications for determining the service level when introducing a chatbot, analyzing user demand, selecting tools for chatbot construction, and designing conversational interaction.
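A minimal sketch of the kind of keyword-based intent matching that underlies a chatbot's query understanding; the intent names and keywords below are hypothetical, and a platform such as danbee.ai performs far more sophisticated matching:

```python
def match_intent(query, intents):
    # Score each intent by how many of its keywords appear in the query;
    # return the best match, or a fallback intent when nothing matches.
    words = set(query.lower().split())
    best, best_score = "fallback", 0
    for intent, keywords in intents.items():
        score = len(words & set(keywords))
        if score > best_score:
            best, best_score = intent, score
    return best
```

The matched intent would then select a branch in the conversation flow (e.g. answering an opening-hours question or starting a book search).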
Study on Location Selection of 5G Base Station based on Voronoi Diagram
Tao Yan, In-ho Ra, Yan Che. The 9th International Conference on Smart Media and Applications, 2020. DOI: 10.1145/3426020.3426163
This paper proposes a 5G base-station location algorithm based on the Voronoi diagram. To convert 5G base-station deployment into an area-division problem, each base station is regarded as a generator point of a Voronoi diagram, and the Voronoi principle is used to partition the area. A new variant, the point-set Voronoi diagram, is proposed; its generation algorithm is given, and it is applied to pattern classification. The point-set Voronoi diagram is formed by extending the generator of a Voronoi diagram from a single point to a point set. The area-division algorithm designed on this basis is a nonlinear area classifier over a two-dimensional feature space that can be applied directly to area division.
Method of estimation of missing data in AMI system
Hyuk-Rok Kwon, Taekeun Hong, Pankoo Kim. The 9th International Conference on Smart Media and Applications, 2020. DOI: 10.1145/3426020.3426028
As advanced metering infrastructure (AMI) deployment expands, various additional services using AMI data are emerging. However, data can be lost during the communication process that collects it, so estimating the missing data is necessary. To estimate missing values in the time-series data measured by smart meters, four methods were tested and their performance compared, ranging from traditional techniques to an LSTM-based estimator that performs well on time series. Moreover, because the task fills gaps in the middle of a power-usage series rather than forecasting its future, a naive forecast can produce errors that contradict the data appearing after the gap. For this reason, linear interpolation proved more stable and performed better than conventional time-series forecasting approaches.
Caching Cost Model for In-memory Data Analytics Framework
Mi-Young Jeong, Seongsoo Park, Hwansoo Han. The 9th International Conference on Smart Media and Applications, 2020. DOI: 10.1145/3426020.3426070
In the era of data-parallel analytics, caching intermediate results is a key method for speeding up frameworks. Existing frameworks apply various caching policies depending on the run-time context or the programmer's decision. Since such caching still leaves room for optimization, a sophisticated policy that considers the benefit of caching is required. However, existing frameworks cannot accurately measure this benefit because they only measure computing time at the distributed-task level. In this paper, we propose an operator-level computing-time metric and a cost model that predicts the performance benefit of caching for in-memory data analytics frameworks. We implemented our scheme in Apache Spark and evaluated its prediction accuracy with Spark benchmark programs. The average error of the cost model measured on 10x input data was 7.3%, and the predicted and actual performance benefits differed by less than 24%. The proposed cost model and prediction method can be used to decide and optimize caching in data analytics engines to maximize the performance benefit.
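One simplified reading of such a cost model: the benefit of caching an operator's output is the recomputation time saved across reuses, minus the overhead of writing and reading the cache. The formula below is an illustrative assumption, not the paper's actual model:

```python
def caching_benefit(compute_time, reuse_count, cache_write_cost, cache_read_cost):
    # Time saved by caching an operator's output instead of recomputing it:
    # every reuse replaces a recomputation with a (cheaper) cache read,
    # minus the one-time cost of writing the cached result.
    saved = reuse_count * (compute_time - cache_read_cost)
    return saved - cache_write_cost

def should_cache(*args):
    # Cache only when the predicted benefit is positive.
    return caching_benefit(*args) > 0
```

An operator-level computing-time metric, as proposed in the paper, is what would supply `compute_time` here; task-level timings would conflate many operators.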
Summarizing social media content via bio-inspired influence maximization algorithms
C. Esposito, V. Moscato, Giancarlo Sperlí, Chang-Hyun Choi. The 9th International Conference on Smart Media and Applications, 2020. DOI: 10.1145/3426020.3426179
In this paper, we describe a multimedia summarization technique for Online Social Networks (OSNs) using a bio-inspired influence maximization algorithm. As a first step, we model each OSN using a hypergraph-based approach that the authors presented in previous work. Then, we leverage an influence-analysis methodology based on the behavior of bees within a hive to determine the most important multimedia objects with respect to one or more topics of interest. Finally, a summarization technique selects from the candidate list a multimedia summary according to a model that favors priority (w.r.t. user keywords), continuity, and variety while penalizing repetitiveness. Several preliminary experiments on a Flickr dataset show the effectiveness of the proposed summarization approach and encourage future work.
Demystifying ARM TrustZone TEE Client API using OP-TEE
HeeDong Yang, Manhee Lee. The 9th International Conference on Smart Media and Applications, 2020. DOI: 10.1145/3426020.3426113
Recently, sensitive information such as financial data and electronic-payment credentials has come to be stored on mobile devices. To protect such data, Trusted Execution Environment (TEE) technology has emerged to provide a trusted and safe execution environment. In particular, ARM TrustZone technology, widely used on mobile devices, divides one physical processor into a Normal World and a Secure World to provide a safer execution environment. Many manufacturers have adopted TrustZone, but existing commercial TEEs limit the security research that can be conducted with it. Therefore, this paper shows how to use OP-TEE, an open-source project implementing ARM TrustZone technology, together with the TEE Client API that communicates with Trusted Applications in the TrustZone Secure World. To demystify the TEE Client API, we implemented a simple trusted application for communication between the Normal World and the Secure World on OP-TEE OS using the QEMU emulator.
Combining Reinforcement Learning with Supervised Learning for Sepsis Treatment
Thanh-Cong Do, Hyung-Jeong Yang, S. Yoo, I. Oh. The 9th International Conference on Smart Media and Applications, 2020. DOI: 10.1145/3426020.3426077
Sepsis is one of the leading causes of mortality worldwide, costing billions of dollars annually. The general method of treating sepsis remains uncertain, which makes treating septic patients highly challenging. Recent research has successfully applied reinforcement learning to generate optimal treatment policies for septic patients; the learned policies outperform those of physicians but sometimes suggest actions that clinicians almost never use. In this paper, we propose a method that combines supervised learning and reinforcement learning using a Mixture-of-Experts technique. The policy derived from our model outperforms the physicians' policies while limiting the number of dangerous actions. It can be used as a dynamic decision-support tool for clinicians to reduce patient mortality.
Cascade AOA Estimation Technique Based on Uniform Circular Array
Tae-yun Kim, Dongbin Lee, Suk-seung Hwang. The 9th International Conference on Smart Media and Applications, 2020. DOI: 10.1145/3426020.3426102
The angle-of-arrival (AOA) information of various signals can be collected by employing planar array antennas, such as a uniform rectangular array (URA) or a uniform circular array (UCA), installed on a satellite. In this paper, we introduce a cascade AOA estimation algorithm, consisting of CAPON and Beamspace MUSIC and based on the UCA, which performs well when estimating the AOAs of adjacent signals. In addition, we provide computer simulation results illustrating the performance of the proposed technique.