A Smart Sensor Node with Smartphone based IoMT
Pub Date: 2019-06-01 | DOI: 10.1109/icce-asia46551.2019.8942205
Gazi MA Ehsan ur Rahman, R. Chowdhury, A. Dinh, K. Wahid
Long-term, continuous monitoring of physiological parameters such as Blood Pressure (BP), Heart-Beat Rate (HBR), and Electrocardiogram (ECG) is important for ensuring proper medical care. Presently, these measurements involve separate pieces of medical equipment. Most portable or wearable solutions consume considerable power and are often too heavy to wear. In this paper, the design of a wearable wireless sensor node is proposed that collects all the required data and sends it to a smartphone. The smartphone performs the desired processing tasks, such as digital filtering, normalization, and feature extraction. In the proposed design, the smartphone further acts as an IoMT (Internet of Medical Things) gateway, sending all processed data to the IoT cloud for medical use. This arrangement reduces power consumption at the sensor node.
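To make the smartphone-side processing concrete, the following is a minimal Python sketch of a generic pipeline of that kind (band-pass filtering, normalization, and a simple HBR feature from R-peak spacing). The filter cut-offs, peak threshold, and synthetic signal are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a smartphone-side processing chain: digital filtering,
# normalization, and a simple heart-beat-rate feature. Values are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def bandpass(signal, fs, lo=0.5, hi=40.0, order=4):
    """Band-pass filter a raw ECG-like signal (cut-offs are assumed, not from the paper)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def normalize(signal):
    """Min-max normalize to [0, 1]."""
    return (signal - signal.min()) / (signal.max() - signal.min() + 1e-12)

def heart_rate_bpm(ecg, fs):
    """Estimate heart-beat rate from the spacing of R-like peaks."""
    peaks, _ = find_peaks(ecg, height=0.6, distance=int(0.4 * fs))
    if len(peaks) < 2:
        return 0.0
    rr = np.diff(peaks) / fs              # peak-to-peak intervals in seconds
    return 60.0 / rr.mean()

# Example: 30 s of synthetic, crudely ECG-like data sampled at 250 Hz.
fs = 250
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
raw = 0.1 * rng.standard_normal(t.size) + np.sin(2 * np.pi * 1.2 * t) ** 21
ecg = normalize(bandpass(raw, fs))
print(f"Estimated HBR: {heart_rate_bpm(ecg, fs):.1f} bpm")
```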
{"title":"A Smart Sensor Node with Smartphone based IoMT","authors":"Gazi MA Ehsan ur Rahman, R. Chowdhury, A. Dinh, K. Wahid","doi":"10.1109/icce-asia46551.2019.8942205","DOIUrl":"https://doi.org/10.1109/icce-asia46551.2019.8942205","url":null,"abstract":"Long-term and continuous monitoring of physiological parameters like Blood Pressure (BP), Heart-Beat Rate (HBR), and Electro-cardiogram (ECG) are of very important in ensuring proper medical care. Presently, all these measurements involve separate medical equipment. Most of the portable or wearable solutions require high-power, and are often too heavy to wear. In this paper, the design of a wearable wireless sensor node has been proposed that can collect all the required data and send them to the smartphone. The smartphone is used to perform the desired processing tasks like digital filtering, normalization, feature extraction, etc. In the proposed design, the smartphone further acts as an IoMT (Internet of Medical Things) gateway to send all processed data to the IoT cloud for medical use. This will eventually reduce the amount of power consumption at the sensor-node.","PeriodicalId":117814,"journal":{"name":"2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115568598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Depth-Assisted Deblurring Flow Using Dual Cameras with Different Exposure Times
Pub Date: 2019-06-01 | DOI: 10.1109/icce-asia46551.2019.8941598
Yang-Yao Lin, Yi-Hsien Lin, Mao-Jan Lin, Yang-Ming Yeh, Yi-Chang Lu
Motion blur is often observed when taking pictures of moving objects under long exposure settings. To reduce motion blur, a well-known temporal-domain approach is to use dual cameras with different exposure times; the deblurred image can then be obtained by subtracting one view from another after image warping. In this paper, we propose to modify this flow by including depth information, which greatly improves edge quality in the deblurred images. The proposed method is very effective, especially for scenes with multiple objects moving in different directions at different depths.
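The subtraction-after-warping idea can be sketched as follows; this is a simplified illustration under an assumed per-pixel disparity model and a naive linear blur-mixing model, not the authors' actual pipeline.

```python
# Minimal sketch of a dual-camera subtraction flow with a per-pixel shift derived
# from depth. The warping and blur models are simplifying assumptions.
import numpy as np

def warp_by_depth(short_exp, depth, baseline=0.1, focal=500.0):
    """Shift each pixel horizontally by its stereo disparity = focal * baseline / depth."""
    h, w = short_exp.shape
    disparity = focal * baseline / np.clip(depth, 1e-3, None)
    xs = np.arange(w)
    warped = np.empty_like(short_exp)
    for y in range(h):
        src_x = np.clip((xs - disparity[y]).astype(int), 0, w - 1)
        warped[y] = short_exp[y, src_x]
    return warped

def deblur_by_subtraction(long_exp, short_exp_warped, alpha=0.5):
    """Assumed linear mixing: remove the warped short-exposure contribution."""
    return np.clip((long_exp - alpha * short_exp_warped) / (1.0 - alpha), 0.0, 1.0)

# Toy example with a flat depth map; real scenes would use per-pixel depth.
rng = np.random.default_rng(0)
long_exp = rng.random((64, 64))
short_exp = rng.random((64, 64))
depth = np.full((64, 64), 2.0)
sharp = deblur_by_subtraction(long_exp, warp_by_depth(short_exp, depth))
print(sharp.shape, float(sharp.min()), float(sharp.max()))
```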
{"title":"A Depth-Assisted Deblurring Flow Using Dual Cameras with Different Exposure Times","authors":"Yang-Yao Lin, Yi-Hsien Lin, Mao-Jan Lin, Yang-Ming Yeh, Yi-Chang Lu","doi":"10.1109/icce-asia46551.2019.8941598","DOIUrl":"https://doi.org/10.1109/icce-asia46551.2019.8941598","url":null,"abstract":"Motion blur is often observed when taking pictures of moving objects under long exposure settings. To reduce motion blur, a well-known temporal-domain approach is to use dual cameras with different exposure times, and the deblurred image can be obtained by subtracting one view from another after image warping. In this paper, we propose to modify the abovementioned flow by including depth information, which can greatly improve the qualities of edges in the deblurred images. The proposed method is very effective, especially for scenes with multiple objects moving in different directions at different depths.","PeriodicalId":117814,"journal":{"name":"2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121430041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature-based and Deep Learning-based Classification of Environmental Sound
Pub Date: 2019-06-01 | DOI: 10.1109/icce-asia46551.2019.8942209
Chinnavat Jatturas, Sornsawan Chokkoedsakul, Pisitpong Devahasting Na Avudhva, Sukit Pankaew, Cherdkul Sopavanit, W. Asdornwised
In this paper, we compare techniques for environmental sound classification: feature-based classification with a multilayer perceptron (MLP) and a support vector machine (SVM), and deep learning, implemented with the scikit-learn and TensorFlow machine learning platforms, respectively. For feature-based classification, principal component analysis of the short-time Fourier transform is used as the feature front end to the MLP and SVM. For deep learning-based classification, convolution and pooling layers act as the feature extractor from the input image, while a fully connected layer acts as the classifier. Our experimental results show that our proposed deep neural network (DNN) models outperform the feature-based sound classification algorithms and the original deep learning work [1].
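A rough sketch of the two pipelines being compared is given below, using scikit-learn for the PCA-of-STFT front end with MLP and SVM classifiers and TensorFlow/Keras for a small convolution + pooling + fully connected network. Layer sizes, the PCA dimensionality, and the synthetic data are assumptions, not the paper's settings.

```python
# Illustrative comparison of (1) PCA-of-STFT features + MLP/SVM (scikit-learn)
# and (2) a small conv + pooling + dense network (TensorFlow/Keras).
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
import tensorflow as tf

def stft_features(waveforms, fs=16000, nperseg=256):
    """Flatten log-magnitude STFTs into one feature vector per clip."""
    feats = []
    for w in waveforms:
        _, _, Z = stft(w, fs=fs, nperseg=nperseg)
        feats.append(np.log1p(np.abs(Z)).ravel())
    return np.array(feats)

# Synthetic stand-in data: 40 clips, 2 classes, 1 s each at 16 kHz.
rng = np.random.default_rng(0)
X_wave = rng.standard_normal((40, 16000))
y = rng.integers(0, 2, size=40)

# Pipeline 1: PCA front end, then MLP and SVM classifiers.
X = PCA(n_components=20).fit_transform(stft_features(X_wave))
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)
svm = SVC(kernel="rbf").fit(X, y)
print("MLP train acc:", mlp.score(X, y), "SVM train acc:", svm.score(X, y))

# Pipeline 2: conv + pooling layers as feature extractor, dense layer as classifier.
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),   # assumed spectrogram patch size
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.summary()
```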
{"title":"Feature-based and Deep Learning-based Classification of Environmental Sound","authors":"Chinnavat Jatturas, Sornsawan Chokkoedsakul, Pisitpong Devahasting Na Avudhva, Sukit Pankaew, Cherdkul Sopavanit, W. Asdornwised","doi":"10.1109/icce-asia46551.2019.8942209","DOIUrl":"https://doi.org/10.1109/icce-asia46551.2019.8942209","url":null,"abstract":"In this paper, we perform comparison techniques for environmental sound classification with multilayer perceptron (MLP) and support vector machine (SVM), and deep learning using new machine learning platforms, i.e., Scikit-Iearn and Tensorflow, respectively. For feature-based classification, principal component analysis of short-time Fourier transform is used as our feature as the front end to MLP and SVM. For deep learning-based classification, convolution+pooling layers is acting as feature extractor from the input image, while fully connected layer will act as a classifier. Our experimental results show that our proposed deep neural network (DNN) models outperform the feature-based sound classification algorithms and the original deep learning work [1].","PeriodicalId":117814,"journal":{"name":"2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133664117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Index
Pub Date: 2019-06-01 | DOI: 10.1109/icce-asia46551.2019.8941597
{"title":"Index","authors":"","doi":"10.1109/icce-asia46551.2019.8941597","DOIUrl":"https://doi.org/10.1109/icce-asia46551.2019.8941597","url":null,"abstract":"","PeriodicalId":117814,"journal":{"name":"2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129995652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relations between Femininity and the Movements in Japanese Traditional Dance
Pub Date: 2019-06-01 | DOI: 10.1109/icce-asia46551.2019.8942189
Nao Shikanai
This study focused on Onna Odori, a Japanese traditional dance form performed in a female role, and captured the dance of a Japanese traditional dance master. Stick figures were created from the motion data, and 24 observers evaluated the movements of these figures. The results showed that the dance master's movements were evaluated as feminine. The movements were further analyzed and compared with motion data of beginners' movements. The results showed significant differences in shoulder inclinations, shoulder positions, waist positions, and the standard deviations of hip movements. Finally, this study clarified the relevance of impressions and movement characteristics to femininity by using covariance structure analysis. The results showed that the femininity of Japanese traditional dance was characterized by “softness,” “stable,” “inclinations of both shoulders,” and “slow.”
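As a rough illustration of the kind of movement measures mentioned (shoulder-line inclination and the standard deviation of hip movement), the sketch below computes them from synthetic 3-D marker trajectories; it is not the study's analysis code, and the marker layout and values are assumptions.

```python
# Illustrative computation of shoulder inclination and hip-movement SD from
# synthetic (frames, 3) marker trajectories for a "master" and a "beginner".
import numpy as np

def shoulder_inclination_deg(l_shoulder, r_shoulder):
    """Per-frame inclination of the shoulder line from horizontal, in degrees (z is vertical)."""
    d = r_shoulder - l_shoulder
    return np.degrees(np.arctan2(d[:, 2], np.linalg.norm(d[:, :2], axis=1)))

def hip_movement_sd(hip):
    """Overall standard deviation of hip displacement about its mean position."""
    return float(np.linalg.norm(np.std(hip - hip.mean(axis=0), axis=0)))

rng = np.random.default_rng(1)
frames = 600

def fake_dancer(hip_spread, shoulder_tilt):
    hip = rng.normal(0.0, hip_spread, (frames, 3))
    l_sh = np.tile([-0.2, 0.0, 1.4], (frames, 1)) + rng.normal(0, 0.01, (frames, 3))
    r_sh = np.tile([0.2, 0.0, 1.4 + shoulder_tilt], (frames, 1)) + rng.normal(0, 0.01, (frames, 3))
    return hip, l_sh, r_sh

for name, (hip, l_sh, r_sh) in {"master": fake_dancer(0.02, 0.05),
                                "beginner": fake_dancer(0.05, 0.00)}.items():
    print(f"{name}: hip SD = {hip_movement_sd(hip):.3f} m, "
          f"mean shoulder inclination = {shoulder_inclination_deg(l_sh, r_sh).mean():.1f} deg")
```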
{"title":"Relations between Femininity and the Movements in Japanese Traditional Dance","authors":"Nao Shikanai","doi":"10.1109/icce-asia46551.2019.8942189","DOIUrl":"https://doi.org/10.1109/icce-asia46551.2019.8942189","url":null,"abstract":"This study focused on Onna Odori, a Japanese traditional dance form performed by a female role, and captured the dance of a Japanese traditional dance master. Stick figures were created from the motion data, and 24 observers evaluated the movements of these figures. The results showed that the dance master's movements were evaluated as being feminine. The movements were further analyzed and compared with data from the motion data of beginners' movements. The results showed significant differences in shoulder inclinations, shoulder positions, waist positions, and the standard deviations of hip movements. Finally, this study clarified the relevance of impressions and movement characteristics to femininity by using covariance structure analysis. The results showed that the femininity of Japanese traditional dance was characterized by “softness,” “stable,” “inclinations of both shoulders,” and “slow.”","PeriodicalId":117814,"journal":{"name":"2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133913226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hand Gesture Recognition for Deaf-Mute using Fuzzy-Neural Network
Pub Date: 2019-06-01 | DOI: 10.1109/icce-asia46551.2019.8942220
Emilio Brando Villagomez, Roxanne Addiezza King, Mark Joshua Ordinario, Jose Lazaro, J. Villaverde
Communication is important for every individual to convey information to other people and vice versa. Hand gestures are one of the important methods of nonverbal communication for human beings. There are many methods used to recognize hand gestures, with different accuracies and precision, each with its own advantages and disadvantages. The general objective of this paper is to develop hand-gesture translator gloves using a fuzzy-neural network to eliminate the communication barrier between deaf-mute and non-deaf persons. This paper studied the effectiveness of combining fuzzy logic and a neural network for hand gesture recognition. The study succeeded in its objective of combining the fuzzy logic algorithm with the neural network algorithm to improve the hand gesture recognition rate compared with either algorithm alone. Combining the learning capability of the neural network with the simple interpretation and implementation of fuzzy logic unites their advantages and excludes their disadvantages, such as fuzzy logic's ability to interpret inputs to outputs, which a neural network alone cannot do. An average recognition rate of 92.58% was achieved.
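One plausible way to realize such a fuzzy-neural combination is sketched below: each flex-sensor reading is fuzzified into low/medium/high membership degrees, and the resulting membership vector is classified by a small MLP. The sensor count, membership functions, and gesture set are assumptions for illustration, not details from the paper.

```python
# Hedged sketch of a fuzzy front end (triangular memberships) feeding a small
# neural-network classifier for glove-based gesture recognition.
import numpy as np
from sklearn.neural_network import MLPClassifier

def triangular(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def fuzzify(readings):
    """Map 5 flex-sensor readings (0..1023 ADC counts, assumed) to low/medium/high degrees."""
    x = np.asarray(readings, dtype=float)
    low = triangular(x, -1, 0, 512)
    med = triangular(x, 256, 512, 768)
    high = triangular(x, 512, 1023, 1024)
    return np.concatenate([low, med, high])   # 15 features per sample

# Synthetic training data: 200 samples of 5 sensors, 4 gesture classes.
rng = np.random.default_rng(42)
raw = rng.integers(0, 1024, size=(200, 5))
labels = rng.integers(0, 4, size=200)
X = np.array([fuzzify(r) for r in raw])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```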
{"title":"Hand Gesture Recognition for Deaf-Mute using Fuzzy-Neural Network","authors":"Emilio Brando Villagomez, Roxanne Addiezza King, Mark Joshua Ordinario, Jose Lazaro, J. Villaverde","doi":"10.1109/icce-asia46551.2019.8942220","DOIUrl":"https://doi.org/10.1109/icce-asia46551.2019.8942220","url":null,"abstract":"Communication is important for every individual to convey whatever information they want to people and viceversa. Hand gesture is one of the important methods of nonverbal communication for human beings. There are plenty of methods that are used to recognize hand gestures with different accuracies and precision, some has advantages and disadvantages. The general objective of this paper is to develop a hand gesture translator gloves with the use of fuzzy-neural network to eliminate the barrier of communication for deaf-mute and non-deaf person. This paper studied the effectiveness of combining fuzzy logic and neural network for hand gesture recognition. The study is successful with the objective of combining Fuzzy Logic algorithm with Neural Networks algorithm to improve the hand gesture recognition rate compared to as an individual. With the earning capability of the Neural Network combined with the simple interpretation and implementation by means of Fuzzy Logic, it unite their advantages and exclude disadvantages like the ability of Fuzzy Logic to interpret input to output that Neural Network is unable to do. The total percent of recognition rate was met with an average of 92.58%.","PeriodicalId":117814,"journal":{"name":"2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia)","volume":"319 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116231716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formalin Adulteration Detection in Food Using E-nose based on Nanocomposite Gas Sensors
Pub Date: 2019-06-01 | DOI: 10.1109/icce-asia46551.2019.8941601
Yu Thazin, Tanthip Eamsa-ard, T. Pobkrut, T. Kerdcharoen
The aim of this work is to develop an electronic nose (e-nose) system for the detection of toxic formaldehyde, in response to the illegal addition of formalin to foods in markets and food processing industries. In this work, nanocomposites of o-phenylenediamine (OPD) with different functionalized single-walled carbon nanotubes (f-SWCNTs) were used as gas sensing materials. The gas sensors were tested to determine their individual responses to different volatile organic compounds (VOCs). Since they showed the highest response to formaldehyde, the main component of formalin solution, the gas sensor array is appropriate for detecting food contamination due to formalin. By setting up two conditions, namely non-treatment and formalin treatment of raw chicken, shrimp, and tofu, as well as shrimp treated with different concentrations of formalin, the odor associated with those conditions was investigated with the e-nose. Discrimination between non-treated (natural) and formalin-contaminated samples was analyzed by principal component analysis (PCA). Our findings support the integration of nanocomposite gas sensors into an e-nose as an advantageous tool for food safety applications.
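The PCA-based discrimination step can be illustrated with the short sketch below, which projects synthetic sensor-array responses for natural and formalin-treated samples onto the first two principal components; the sensor count and response values are placeholders, not measured data.

```python
# Illustrative PCA separation of natural vs. formalin-treated samples from
# synthetic gas-sensor-array responses.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_sensors = 8
natural = rng.normal(loc=0.2, scale=0.05, size=(20, n_sensors))   # weaker baseline responses
treated = rng.normal(loc=0.6, scale=0.05, size=(20, n_sensors))   # stronger responses to formaldehyde
X = np.vstack([natural, treated])
labels = np.array(["natural"] * 20 + ["formalin"] * 20)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)
for name in ("natural", "formalin"):
    c = scores[labels == name]
    print(f"{name}: PC1 mean={c[:, 0].mean():.2f}, PC2 mean={c[:, 1].mean():.2f}")
```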
{"title":"Formalin Adulteration Detection in Food Using E-nose based on Nanocomposite Gas Sensors","authors":"Yu Thazin, Tanthip Eamsa-ard, T. Pobkrut, T. Kerdcharoen","doi":"10.1109/icce-asia46551.2019.8941601","DOIUrl":"https://doi.org/10.1109/icce-asia46551.2019.8941601","url":null,"abstract":"The aim of this work is to develop an electronic nose (e-nose) system for the detection of toxic formaldehyde as a response to illegal addition of formalin into foods in the markets and food processing industries. In this work, nanocomposites of 0-phenylenediamine (OPD) with different functionalized single-walled carbon nanotubes (f-SWCNTs) were used as gas sensing materials. The gas sensors have been tested to perceive the individual response to different volatile organic compounds (VOCs). Since they showed the highest response to formaldehyde which is the main component in formalin solution, the gas sensor array is appropriate for the detection of food contamination due to formalin. By setting up two conditions, namely non-treatment and formalin treatment to raw chicken, shrimp and tofu, as well as shrimps with different concentrations of formalin treatment, the odor associated with those conditions was investigated by e-nose. Discrimination and analysis of non-treatment (natural) and formalin contaminated samples were analyzed by the principal component analysis (PCA). Our findings support the integration of nanocomposite gas sensors into e-nose as an advantageous tool for food safety applications.","PeriodicalId":117814,"journal":{"name":"2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123997378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Adaptive Parts Counting Method based on Orthogonal Intensity Distribution Analysis for Industrial Vision Systems
Pub Date: 2019-06-01 | DOI: 10.1109/icce-asia46551.2019.8942225
Qiaochu Zhao, Ittetsu Taniguchi, Makoto Nakamura, T. Onoye
In this paper, we propose a refined counting method for vibratory parts feeders in industrial vision systems. Frames of the fed parts are continuously monitored by an industrial camera at high speed, so an efficient and accurate counting method is critical. Previous research converted the two-dimensional spatial signal of the parts into a one-dimensional time series and thus achieved efficient counting. In this paper, we further suppress the counting error of that approach by also taking the variance in the feeding speed of each part into consideration, while conserving its efficiency.
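The core projection-and-count idea can be sketched as follows: each frame is collapsed into a one-dimensional intensity profile orthogonal to the feeding direction and parts are counted as peaks. The handling of feeding-speed variance here (a widened minimum peak distance) is only a stand-in for the paper's adaptive scheme.

```python
# Sketch: project a grayscale frame to a 1-D column-intensity profile and count
# bright parts as peaks in that profile.
import numpy as np
from scipy.signal import find_peaks

def count_parts(frame, expected_part_width=12, speed_variance=0.2):
    """Count bright parts in a grayscale frame via a 1-D column projection."""
    profile = frame.mean(axis=0)                          # collapse rows -> 1-D signal
    profile = (profile - profile.min()) / (np.ptp(profile) + 1e-9)
    # Allow peaks to drift closer together when the feeding speed varies (assumed heuristic).
    min_dist = max(1, int(expected_part_width * (1.0 - speed_variance)))
    peaks, _ = find_peaks(profile, height=0.5, distance=min_dist)
    return len(peaks)

# Toy frame: three bright "parts" on a dark background.
frame = np.zeros((64, 128))
for x0 in (20, 60, 100):
    frame[:, x0:x0 + 10] = 1.0
print("parts counted:", count_parts(frame))
```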
{"title":"An Adaptive Parts Counting Method based on Orthogonal Intensity Distribution Analysis for Industrial Vision Systems","authors":"Qiaochu Zhao, Ittetsu Taniguchi, Makoto Nakamura, T. Onoye","doi":"10.1109/icce-asia46551.2019.8942225","DOIUrl":"https://doi.org/10.1109/icce-asia46551.2019.8942225","url":null,"abstract":"In this paper, we proposed a refined counting method of vibratory parts feeders for industrial vision system. Frames of feeding parts are continuously monitored by industrial camera at high speed, therefore efficient and accurate counting method is critical to adopt. Previous research converted two dimensional spatial signal of parts into a time series of one dimensional signal thus achieved efficient counting. In this paper, we further suppressed the counting error in previous research by also taking the variance of feeding speed of each parts into consideration meanwhile conserve its efficiency.","PeriodicalId":117814,"journal":{"name":"2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122737133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Beam Steering Calibration by Machine Learning Method for Phased Array Radar System
Pub Date: 2019-06-01 | DOI: 10.1109/icce-asia46551.2019.8941592
J. Rajruangrabin, T. Pongthavornkamol, Suradesh Chotchuang, P. Pongpaibool
This paper presents a new method for automatic beam-steering calibration of phased array radar using reinforcement learning. Q-learning is used to find a calibration standard for the digital phase shifters, which are the main components of beam control. The conventional calibration process is complicated, requiring measurements by an expert user and considerable time. The proposed method can reduce this resource usage.
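A toy version of the learning loop is sketched below as single-state (bandit-style) Q-learning over discrete phase-shifter codes, with a synthetic gain measurement standing in for the real radar feedback; the state/action encoding and reward are assumptions for illustration, not the paper's formulation.

```python
# Toy single-state Q-learning over discrete phase-shifter codes, with a synthetic
# "measured gain" as the reward.
import numpy as np

rng = np.random.default_rng(3)
n_settings = 16                         # discrete phase-shifter codes (actions)
best_setting = 11                       # unknown optimum the agent must find

def measured_gain(setting):
    """Synthetic measurement: gain peaks at the (unknown) best setting, plus noise."""
    return -abs(setting - best_setting) + rng.normal(0, 0.1)

q = np.zeros(n_settings)
alpha, epsilon = 0.2, 0.2               # learning rate and exploration rate
for episode in range(500):
    a = int(rng.integers(n_settings)) if rng.random() < epsilon else int(np.argmax(q))
    r = measured_gain(a)
    q[a] += alpha * (r - q[a])          # single-state update: no next-state term

print("learned best setting:", int(np.argmax(q)), "(true:", best_setting, ")")
```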
{"title":"Automatic Beam Steering Calibration by Machine Learning Method for Phased Array Radar System","authors":"J. Rajruangrabin, T. Pongthavornkamol, Suradesh Chotchuang, P. Pongpaibool","doi":"10.1109/icce-asia46551.2019.8941592","DOIUrl":"https://doi.org/10.1109/icce-asia46551.2019.8941592","url":null,"abstract":"This paper presents new method of automatic beam steering calibration of phased array radar by using Reinforcement Learning method. The method of Q-Learning is used to find a standard of Digital Phase Shifters which is the main component of beam control. In complicated calibration process, the system measurement by expert user and time consumption are needed. The proposed method can reduce the resource usage.","PeriodicalId":117814,"journal":{"name":"2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123629326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Perception-based Quality Metric for Omnidirectional Images
Pub Date: 2019-06-01 | DOI: 10.1109/icce-asia46551.2019.8942224
Huyen T. T. Tran, Trang H. Hoang, Phu N. Minh, N. P. Ngoc, T. Thang
A key component of VR systems is omnidirectional content, which provides 360-degree views of scenes. At a given time, only a portion of the content, called the viewport, is displayed on a head-mounted display. In this work, we develop an objective quality metric, called Weighted Viewport PSNR (W-VPSNR), for omnidirectional content of spatially-varying quality. Experimental results show that the W-VPSNR metric correlates well with subjective quality scores.
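A hedged sketch of a weighted viewport PSNR computation is given below; the centre-falloff weight map and the toy viewport are illustrative assumptions, since the paper defines its own weighting.

```python
# Weighted PSNR over a viewport: per-pixel squared errors are scaled by a weight
# map before averaging. The weight model here is an assumption for illustration.
import numpy as np

def weighted_psnr(ref, dist, weights, max_val=255.0):
    """PSNR where each pixel's squared error is scaled by a weight map."""
    err = (ref.astype(float) - dist.astype(float)) ** 2
    wmse = np.sum(weights * err) / np.sum(weights)
    return 10.0 * np.log10(max_val ** 2 / (wmse + 1e-12))

def centre_falloff_weights(h, w):
    """Weights that decay with distance from the viewport centre (assumed model)."""
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.hypot(yy - h / 2, xx - w / 2)
    return 1.0 - d / d.max()

# Toy viewport: reference vs. a noisier version.
rng = np.random.default_rng(5)
ref = rng.integers(0, 256, (120, 160)).astype(np.uint8)
dist = np.clip(ref + rng.normal(0, 8, ref.shape), 0, 255).astype(np.uint8)
w = centre_falloff_weights(*ref.shape)
print(f"W-VPSNR (toy): {weighted_psnr(ref, dist, w):.2f} dB")
```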
{"title":"A Perception-based Quality Metric for Omnidirectional Images","authors":"Huyen T. T. Tran, Trang H. Hoang, Phu N. Minh, N. P. Ngoc, T. Thang","doi":"10.1109/icce-asia46551.2019.8942224","DOIUrl":"https://doi.org/10.1109/icce-asia46551.2019.8942224","url":null,"abstract":"A key component in VR systems is omnidirectional content, which provides 360-degree views of scenes. At a given time, only a portion of a content, called viewport, is displayed using head-mounted displays. In this work, we develop an objective quality metric, called Weighted Viewport PSNR (W-VPSNR), for omnidirectional content of spatially-varying quality. Through experiment results, it is found that the W-VPSNR metric well correlates with the subjective quality scores.","PeriodicalId":117814,"journal":{"name":"2019 IEEE International Conference on Consumer Electronics - Asia (ICCE-Asia)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115176738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}