Energy-Efficient Data Gathering Schemes in UAV-Based Wireless Sensor Networks
Rezoan Ahmed Nazib, Sang-Moon Moh
DOI: https://doi.org/10.1145/3426020.3426025

Wireless sensor networks (WSNs) comprise small sensing and computing units with limited power that often run on non-replaceable energy sources. A large body of research has addressed energy-efficient data gathering in unmanned aerial vehicle (UAV)-aided WSNs (UWSNs) to prolong network lifetime, since UAVs are equipped with rechargeable batteries and can cover greater distances in a shorter time. However, data gathering in UWSNs is still under-investigated, and more structured research is required. To measure the effectiveness of state-of-the-art models, performance analysis and comparison must be carried out by varying key parameters, yet no proper performance analysis guideline has been established in this emerging field. This study investigates the major research in the field and discusses in detail the performance analysis techniques and tools used in the investigated works. The qualitative comparison of these techniques is intended to provide a guideline for future researchers.
Q-Learning-based Resource Allocation with Priority-based Clustering for Heterogeneous NOMA Systems
Sifat Rezwan, Wooyeol Choi
DOI: https://doi.org/10.1145/3426020.3426085

The fifth-generation (5G) network is meant to support enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and massive machine-type communication (mMTC) services. With the development of 5G, the non-orthogonal multiple access (NOMA) technique has gained popularity for its spectral efficiency, high reliability, and support for massive connectivity. To make NOMA more efficient, we propose Q-learning-based resource allocation with a priority-based device clustering scheme. We prioritize the URLLC, eMBB, and mMTC devices within a cluster to meet their quality-of-service (QoS) requirements. We then formulate the NOMA constraints and incorporate them into the Q-learning algorithm. To evaluate the proposed scheme, we conduct extensive simulations under various scenarios and confirm that the proposed Q-learning algorithm with priority-based device clustering achieves the maximum sum-rate among all scenarios.
Prototype of Strawberry Maturity-level Classification to Determine Harvesting Time of Strawberry
Taehong Kim, Y. Cha, Soo-Kyo Oh, Byung-Rae Cha, Sun Park, JaeHyun Seo
DOI: https://doi.org/10.1145/3426020.3426050

The smart farm has recently attracted great attention as a solution to rural problems stemming from the sustainability crisis, such as the aging farming and livestock workforce, the shortage of young labor in production areas, and stagnating income and exports. A smart farm combines information and communication technology (ICT), the Internet of Things (IoT), and agricultural technology so that a farm can operate with minimal labor and automatically control the greenhouse environment. Data-driven machine learning, together with big data technologies and high-performance computing, has created opportunities to quantify data-intensive processes in agricultural operating environments. This paper presents research on applying machine learning to diagnose crop growth status and predict the strawberry harvest time using image processing techniques [1]. We designed and implemented a prototype system that detects and classifies strawberry images using the YOLO v2 algorithm and Darknet in order to decide the harvesting time of strawberries.
Multiple Models Using Temporal Feature Learning for Emotion Recognition
Hoang Manh Hung, Soohyung Kim, Hyung-Jeong Yang, Gueesang Lee
DOI: https://doi.org/10.1145/3426020.3426122

Emotion recognition has a broad variety of applications in affective computing, such as education, robotics, and human-computer interaction. It has therefore been a significant concern in computer vision in recent years, and researchers have devoted a great deal of effort to the complexities of this task. Many techniques, from traditional machine learning to deep learning, have been studied for the various problems in this area. The purpose of this paper is to combine models so as to benefit from different approaches to emotion recognition based on facial expressions in images and videos. In the first stage, we use MTCNN to detect the faces of the subjects in a video, and feature representations are extracted from the faces with ResNet50. In the next stage, the features are learned by multiple models, namely an LSTM, WaveNet, and an SVM, and late fusion produces the final decision. Our method is evaluated on the MuSe-CaR dataset, and the experimental results are competitive with the baseline.
Building a Data Model for Portable Atmospheric Environment Measurement System
Soohyeon Chae, Jangwon Gim, Sukhoon Lee
DOI: https://doi.org/10.1145/3426020.3426055

In the Internet of Things environment, most observational data from sensors are stored and managed in a relational database as simple values, which makes it difficult to capture the relationships between systems, sensors, and data. This paper builds a data model based on the SOSA and SSN ontologies for the portable atmospheric environment measurement system developed in our previous research. The model makes it possible to explicitly express the data structure and the relationships among systems, sensors, properties, and observation regions, and thereby to integrate the observed data of various sensors and systems.
Dataset Distillation for Core Training Set Construction
Yuna Jeong, Myunggwon Hwang, Won-Kyoung Sung
DOI: https://doi.org/10.1145/3426020.3426051

Machine learning is a widely adopted solution to complex and non-linear problems, but developing an optimal, highly reliable model takes considerable labor and time, and the cost grows as the model deepens and the training data increase. This paper presents a method in which dataset distillation is applied to data selection to reduce training time. We first train the model with distilled images and then predict on the original training data to measure each example's training contribution, which serves as its sampling weight for selection. Our method enables fast and easy calculation of the weights even when a network is redesigned.
Analysis of Bio-signal based Biometrics Application Technique Trends for Smart Connected Car
Igor Lyebyedyev, Gyu-Ho Choi, Ki-Taek Lim, S. Pan
DOI: https://doi.org/10.1145/3426020.3426045

Current research in automobile security conducts driver authentication inside and outside the vehicle. Two general methods are being investigated: authentication through direct contact with a sensor, and non-contact authentication. Because non-contact methods recognize drivers less accurately than contact methods, drivers may be misidentified. Contact-based authentication works by acquiring biometric signals from drivers. Although bio-signals have limitations in ease of acquisition and application, they have been studied in various fields for their many advantages, such as being difficult to forge or alter and having a lower rejection rate than existing biometric information in smart connected car environments. In this paper, we analyze recent studies on bio-signals that use the electrocardiogram (ECG) and electromyography (EMG) and confirm the applicability of this technology, as biometric system technologies suitable for real-time environments are expected to be researched with bio-signals acquired in the driver's complex states.
Joint Image Denoising and Colorization Using Deep Network
Tran Van Khoa, Q. Dinh, Phuc Hong Nguyen, N. Debnath, T. Nguyen, Chang Wook Ahn
DOI: https://doi.org/10.1145/3426020.3426056

This paper significantly extends the work in [1] and proposes a deep neural network that solves the denoising and colorization problems simultaneously. The joint problem is solved by two separate sub-networks trained in an end-to-end manner. Specifically, attention modules are used to refine the feature maps, while a few convolutional layers that extract features at the beginning of the network boost performance significantly. We use the KITTI dataset to prepare the training and testing sets and compare the proposed method with the baseline using the PSNR and SSIM metrics. For a fair comparison, we train the proposed and baseline methods with the same dataset, loss function, and training configuration. The experimental results show that the proposed method performs significantly better than the baseline on the KITTI dataset.
Facial Expression Emotion through BCI-based Personal Traits and Emotion Classification
Tae-Yeun Kim, Sanghyun Bae, Sung-Hwan Kim
DOI: https://doi.org/10.1145/3426020.3426118

In this paper, we propose a system that classifies personal propensity and recognizes emotional information using the user's EEG biometric data. We also propose a facial expression generation module that maps the recognized emotional information to facial expressions according to individual disposition. Using the differences in facial expressions across the classified propensities, the El Fuzzy model is mapped to the size of the facial expression for each trait. For emotion recognition, the absolute value of the differential coefficient of the EEG data is used as the feature value and classified with a support vector machine (SVM). After each disposition and emotion is classified, facial emotion information is generated from the classified results. By intelligently matching facial expressions to user emotions, the proposed EEG-based emotional information classification system is expected to benefit research on human-computer interaction (HCI) in the era of the Fourth Industrial Revolution.
Survey and Performance Test of Python-based Libraries for Parallel Processing
Taehong Kim, Y. Cha, ByeongChun Shin, Byung-Rae Cha
DOI: https://doi.org/10.1145/3426020.3426057

With the Fourth Industrial Revolution and the Gartner Group's ten strategic technologies, artificial intelligence (AI) has become important and has affected many areas. One way to accelerate AI services is through Python-based parallel processing libraries. High-level programming languages such as Python are increasingly used to provide intuitive interfaces to libraries written in lower-level languages and to assemble applications from various components. This migration toward orchestration rather than implementation, coupled with the growing need for parallel computing (e.g., due to big data and the end of Moore's law), necessitates rethinking how parallelism is expressed in programs [1]. In this paper, we survey Python-based distributed parallel processing libraries, one way of accelerating AI services, and use them to compare serial and parallel processing times.