A IoT Device for Monitoring Particulate Matter and Gaseous Pollutants in Indoor Industrial Workstations
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869034
Jeferson B. da Costa, E. Souto
This paper presents an Internet of Things (IoT) based, low-power device for monitoring CO2, PM2.5 and PM10 particle levels, temperature, and humidity, intended to help the responsible departments maintain air quality and occupational health. Real-world experiments revealed opportunities for improvement, allowed fine tuning, and ultimately demonstrated the device's effectiveness; adding more devices to the network further improves the accuracy of the collected data.
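As a rough illustration only (the abstract does not describe the firmware or transport protocol), such a node is typically structured as a periodic read-and-publish loop; the sensor readings and the publish step below are stubbed placeholders, not the authors' implementation.

# Minimal sketch of a low-power sensing node. Assumptions: sensor drivers and
# the network transport are stubbed; values are random stand-ins.
import json, time, random

def read_sensors():
    # Stand-ins for real CO2 / PM2.5 / PM10 / temperature / humidity driver calls.
    return {"co2_ppm": random.uniform(400, 1200),
            "pm2_5_ugm3": random.uniform(5, 80),
            "pm10_ugm3": random.uniform(10, 120),
            "temp_c": random.uniform(18, 32),
            "rh_pct": random.uniform(30, 70)}

def publish(sample):
    # A real node would push this over Wi-Fi/LoRa/MQTT; here we just print it.
    print(json.dumps(sample))

while True:
    s = read_sensors()
    s["ts"] = time.time()
    publish(s)
    time.sleep(60)   # long idle periods keep average power consumption low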
{"title":"A IoT Device for Monitoring Particulate Matter and Gaseous Pollutants in Indoor Industrial Workstations","authors":"Jeferson B. da Costa, E. Souto","doi":"10.1109/ICCE-Taiwan55306.2022.9869034","DOIUrl":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869034","url":null,"abstract":"This papper presents an Internet of Things (IoT) based low power consumption device for monitoring levels of CO2, PM2.5, PM10 particles, temperature and humidity, in order to help responsible areas to maintain air quality and occupational health. Real experiments revealed enhancement opportunities, allowed fine tuning, and finally showed its effectiveness, when more devices are added to the network, allowing greater accuracy of the data collected.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115881119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Augmentation Method for Improving Vehicle Detection and Recognition Performance
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869222
Xiu-Zhi Chen, Chen-Pu Cheng, Yen-Lin Chen
Vehicle detection and recognition are now implemented with powerful machine learning methods; these methods rely not only on clever learning strategies but also on high-quality datasets, whose preparation is time-consuming and laborious. In this research, we propose a data augmentation concept that prepares high-quality datasets for vehicle detection and recognition training more efficiently. Its effectiveness was demonstrated by applying it to the training-data preparation process of a YOLOv4 model: the mean average precision (mAP) increased by 1.93% compared with a YOLOv4 model trained without data augmentation.
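The abstract does not detail the specific transforms used; the sketch below shows a generic bounding-box-aware augmentation (horizontal flip plus brightness jitter) of the kind typically applied when preparing YOLO-style training data. Function names and label layout are illustrative, not the paper's method.

# Illustrative bounding-box-aware augmentation for YOLO-style labels
# (x_center, y_center, w, h), normalized to [0, 1].
import numpy as np

def hflip(image, boxes):
    flipped = image[:, ::-1, :].copy()
    boxes = boxes.copy()
    boxes[:, 0] = 1.0 - boxes[:, 0]      # mirror x_center; width/height unchanged
    return flipped, boxes

def jitter_brightness(image, max_delta=30):
    delta = np.random.randint(-max_delta, max_delta + 1)
    return np.clip(image.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def augment(image, boxes):
    if np.random.rand() < 0.5:
        image, boxes = hflip(image, boxes)
    return jitter_brightness(image), boxes

# Example: one 416x416 RGB frame with a single labeled vehicle.
img = np.zeros((416, 416, 3), dtype=np.uint8)
labels = np.array([[0.5, 0.6, 0.2, 0.3]])   # x_center, y_center, w, h
aug_img, aug_labels = augment(img, labels)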
{"title":"Data Augmentation Method for Improving Vehicle Detection and Recognition Performance","authors":"Xiu-Zhi Chen, Chen-Pu Cheng, Yen-Lin Chen","doi":"10.1109/ICCE-Taiwan55306.2022.9869222","DOIUrl":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869222","url":null,"abstract":"Vehicle detection and recognition are now implemented through powerful machine learning methods, those methods are not only relying on clever learning strategies, but also require high quality datasets. To obtain high quality datasets, time-consuming and grueling processing is needed. In this research, we proposed a data augmentation concept that is able to prepare high quality datasets for vehicle detection and recognition training in a more efficient approach. The effectiveness of our proposed data augmentation concept has been proved by applying it on the training data preparing process of YOLOv4 model. The result shows that the mean average precision (mAP) had increased 1.93% comparing to the YOLOv4 model which was trained without data augmentation.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114842640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metadata-Based Sensor-Displacement Compensation for Networked Water-Temperature Control Systems
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869161
Yutaka Onozuka, Keisuke Tajima, S. Tamaki, Ryota Shiina, T. Taniguchi, R. Kubo
Networked water-temperature feedback control systems require remote temperature measurement at the target position to be controlled. However, a temperature sensor can be displaced during installation or operation, which in turn can affect system stability and performance. This study proposes a sensor-displacement compensation method for a networked water-temperature control system based on an adaptive disturbance observer and an adaptive spatiotemporal Smith predictor that use sensor metadata. Simulations confirm that the proposed method outperforms the baseline method, based on a disturbance observer and Smith predictor, in terms of tracking performance.
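For context, the classical Smith predictor that the proposed adaptive spatiotemporal variant builds on feeds the controller a delay-free model prediction corrected by the model-plant mismatch; in generic notation (not the paper's),

$$\hat{y}(t) = y_m(t) + \big(y(t) - y_m(t - L)\big),$$

where $y_m$ is the output of the internal model without dead time, $y$ is the measured water temperature, and $L$ is the transport delay (which presumably changes when the sensor is displaced). When the model is exact, the dead time disappears from the feedback loop, and the disturbance observer handles the remaining mismatch as an estimated disturbance.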
{"title":"Metadata-Based Sensor-Displacement Compensation for Networked Water-Temperature Control Systems","authors":"Yutaka Onozuka, Keisuke Tajima, S. Tamaki, Ryota Shiina, T. Taniguchi, R. Kubo","doi":"10.1109/ICCE-Taiwan55306.2022.9869161","DOIUrl":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869161","url":null,"abstract":"Networked water-temperature feedback control sys-tems require the remote temperature measurement at the target position to be controlled. However, a temperature sensor could be displaced during its installation or under operation, which could, in turn, affect the system stability and performance. This study proposes a sensor-displacement compensation method based on an adaptive disturbance observer and adaptive spatiotempo-ral Smith predictor using sensor metadata for a networked water-temperature control system. Simulations confirm that the proposed method outperforms the baseline method, based on the disturbance observer and Smith predictor, in terms of the tracking performance.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"76 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114942001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VM Migration Considering Downtime for Accuracy Improvement in Multi-stage Information Processing System
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869040
Kazutoshi Nakane, Takumi Anjiki, Jiquan Xie, Y. Fukushima, T. Murase
In this paper, we propose a VM migration method that improves accuracy at the edge nodes of a multi-stage information processing system even when the VM migration downtime is large. The previous method performs well only when VM migration times are short. In contrast to previous studies, we place the VMs that contribute most to accuracy by considering the system load and the characteristics of the tasks generated by each VM. The evaluation results show that the proposed method improves accuracy by about 68% compared with the conventional method when the number of VMs is 25.
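The abstract describes the placement policy only at a high level (weigh each VM's expected accuracy contribution against node load); the greedy scoring sketch below is a hypothetical rendering of that idea, not the authors' algorithm, and all field names are made up.

# Hypothetical greedy placement: assign each VM to the edge node where its
# estimated accuracy gain, discounted by the node's current load, is largest.
def place_vms(vms, nodes):
    # vms:   list of dicts with "id", "acc_gain" (expected accuracy contribution)
    # nodes: list of dicts with "id", "load" (0.0 = idle, 1.0 = saturated)
    placement = {}
    for vm in sorted(vms, key=lambda v: v["acc_gain"], reverse=True):
        best = max(nodes, key=lambda n: vm["acc_gain"] * (1.0 - n["load"]))
        placement[vm["id"]] = best["id"]
        best["load"] = min(1.0, best["load"] + 0.1)   # crude load update
    return placement

vms = [{"id": f"vm{i}", "acc_gain": g} for i, g in enumerate([0.9, 0.5, 0.7])]
nodes = [{"id": "edge0", "load": 0.2}, {"id": "edge1", "load": 0.1}]
print(place_vms(vms, nodes))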
{"title":"VM Migration Considering Downtime for Accuracy Improvement in Multi-stage Information Processing System","authors":"Kazutoshi Nakane, Takumi Anjiki, Jiquan Xie, Y. Fukushima, T. Murase","doi":"10.1109/ICCE-Taiwan55306.2022.9869040","DOIUrl":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869040","url":null,"abstract":"In this paper, we propose a VM migration method that improves the accuracy in edge nodes of multi-stage information processing system even when the VM migration downtime is large. The previous method only performs well with short VM migration times. In contrast to previous studies, we place VMs to make a large contribution to improving accuracy by considering the system load degree and the characteristics of the tasks generated by each VM. The evaluation results show that the proposed method improves the accuracy by about 68% compared to the conventional method when the number of VMs is 25.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116411424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Energy-efficient and Accurate Object Detection Design for Mobile Applications
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869164
Kuan-Hung Chen, Jen-He Wang, Chun-Wei Su
Deep Convolutional Neural Networks (DCNNs) are central to state-of-the-art computer vision algorithms. Despite their attractive qualities, DCNNs have been too expensive to apply to large-scale, high-resolution images and video sequences. To implement DCNN models on edge platforms, we optimize the DCNN model for energy efficiency and detection accuracy simultaneously. In this paper, we analyze the energy consumption, detection accuracy, and execution speed of our model and of state-of-the-art models on a mobile platform, the Jetson Nano. We adopt the performance index of the Low Power Computer Vision (LPCV) challenge, which considers power, mAP, and FPS together, to evaluate these models from an overall point of view. On the Jetson Nano, the presented system, boosted with the GoP-mode technique, achieves an execution speed of nearly 20 frames per second and a high mean average precision of 59.9% on the MS COCO test set. Compared with state-of-the-art models such as YOLOv5, the LPCV score improves by as much as 76.33%. When GoP-mode acceleration is included, the LPCV score of Agilev4 reaches 90.6 times that of YOLOv5.
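The abstract uses the LPCV index without restating its formula; as a hedged illustration only, an energy-aware figure of merit of the accuracy-per-energy kind can be computed as below. The exact LPCV definition may weight the terms differently, and the 10 W power figure in the example is an assumption, not a reported value.

# Illustrative energy-aware figure of merit (not necessarily the official LPCV
# formula): detection quality delivered per joule consumed over a test run.
def efficiency_score(map_score, fps, power_w, num_frames):
    runtime_s = num_frames / fps               # time to process the test set
    energy_j = power_w * runtime_s             # energy spent on it
    return map_score * num_frames / energy_j   # "useful work" per joule

# Example using the reported operating point (mAP 59.9%, ~20 FPS); power assumed.
print(efficiency_score(map_score=0.599, fps=20.0, power_w=10.0, num_frames=5000))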
{"title":"An Energy-efficient and Accurate Object Detection Design for Mobile Applications","authors":"Kuan-Hung Chen, Jen-He Wang, Chun-Wei Su","doi":"10.1109/ICCE-Taiwan55306.2022.9869164","DOIUrl":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869164","url":null,"abstract":"Deep Convolutional Neural Networks (DCNNs) are imperative to state-of-the-art computer vision algorithms. In spite of the attractive qualities of DCNN s, they have been excessively expensive to be applied on large scale high-resolution images and video sequences. In order to implement DCNN models on edge platforms, we tend to optimize the DCNN model by considering energy efficiency and detection accuracy simultaneously. In this paper, we analyze the energy consumption, detection accuracy, and execution speed of our model and those of the state-of-the-art models based on a mobile platform called Jetson Nano. We adopt the performance index from Low Power Computer Vision (LPCV) challenge which considers power, mAP and FPS at the same time to evaluate these models in an overall point of view. On Jetson Nano, the presented system boosted with the GoP-mode technique can achieve an execution speed of near 20 frames per second, and high mean average precision of 59.9% under MS COCO test sets. Compared with the state-of-the-art models, e.g., YOLOv5, the LPCV score improves as high as 76.33%. If the GoP-mode acceleration is included, the LPCV score of Agilev4 reaches even 90.6 times of that ofYOLOv5.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123548305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Bus Rapid Transit System Development with 3D Simulations
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869140
Shao-Hua Wang, Chun-Ting Wu, Chia-Heng Tu, J. Juang, Tsung-Ming Hsu
Automated Bus Rapid Transit (ABRT) is an emerging solution for smart transportation: it offers the convenient transportation and congestion relief of Mass Rapid Transit (MRT) while avoiding huge software/hardware construction costs. However, the use of dedicated lanes and frequent bus schedules poses challenges for developing self-driving technologies for ABRT. For example, driving from a dedicated lane into a non-dedicated lane involves merging traffic flows, which makes it hard to drive safely and efficiently. In this work, we aim to establish a 3D simulation environment for ABRT development. The virtual environment is constructed by mimicking a real-world traffic scene, e.g., merging traffic lanes. With such a 3D environment, the ABRT control system can be developed and validated in the laboratory during the early stages of ABRT development. Furthermore, the 3D environment can replicate real traffic mixed with vehicles and cyclists, which facilitates testing of effectiveness, real-time behavior, and safety. In addition, traffic signal information is crucial to ABRT, enabling the development of comfortable and energy-saving systems; we also discuss the sharing of traffic signal information during 3D simulation.
{"title":"Automated Bus Rapid Transit System Development with 3D Simulations","authors":"Shao-Hua Wang, Chun-Ting Wu, Chia-Heng Tu, J. Juang, Tsung-Ming Hsu","doi":"10.1109/ICCE-Taiwan55306.2022.9869140","DOIUrl":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869140","url":null,"abstract":"Automated Bus Rapid Transit (ABRT) is an emerging solution of smart transportation since it has the advantages of convenient transportation and alleviating traffic congestion brought by Mass Rapid Transit (MRT) while avoiding huge soft-ware/hardware construction costs. However, the characteristics of using dedicated lanes and frequent bus schedules pose the challenges to the development of self-driving technologies for ABRT. For example, driving from a dedicated lane to a non-dedicated lane encounters the merge of traffic flows, which makes it hard to drive safely and efficiently. In this work, we aim to establish a 3D simulation environment for ABRT development. The virtual environment is constructed by mimicking a real-world traffic scene, e.g., merged traffic lanes. With such a 3D environment, the control system of ABRT can be developed and validated in laboratories during the early stage of ABRT development. Furthermore, the 3D environment is able to replicate the actual traffic mixed with vehicles and bikers, which facilitates the testing of the effectiveness, realtimeness, and safeness. Besides, traffic signal information is crucial to ABRT, enabling the development of comfortable and energy-saving systems. We will also discuss the traffic signal information sharing during the 3D simulation.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"112 18","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120851891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Application of Cyclone Structure with UVC Equipment on the Analysis of Indoor Air Quality
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869001
Lin-Hang Hsu, Chen-Kang Huang
In this study, airborne bacteria are the chosen target. A SolidWorks simulation was used to examine the flow field. A UVC air-cleaning device, consisting of a cyclone and a UV tube, was designed, a prototype was built, and the device was tested experimentally. A single-stage Anderson impactor was used to measure the concentration of airborne bacteria. Based on the bacteria colony counts after 48 hours of incubation, the device was shown to be effective.
{"title":"The Application of Cyclone Structure with UVC Equipment on the Analysis of Indoor Air Quality","authors":"Lin-Hang Hsu, Chen-Kang Huang","doi":"10.1109/ICCE-Taiwan55306.2022.9869001","DOIUrl":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869001","url":null,"abstract":"In this study, the bacteria is chosen and discussed. The experiment uses SolidWorks simulation to see the flow field, In this work, a UVC air cleaning device, consisting of a cyclone and a UV tube, was designed and the prototype was established. The device was tested experimentally. A single stage Anderson impactor was used to detect the concentration of airborne bacteria. According to the results from bacteria colony, after the incubation of 48 hours, the device was shown effective.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125801754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of Smart Unmanned Vending Machines Using Networked PLC and Chatbots
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869033
Yuhan Huang, Zhen Chen, Jin-Shyan Lee
Due to rising labor costs, many companies are developing unmanned technologies. This paper creates a smart unmanned vending machine using networked programmable logic controllers (PLCs) and chatbots. The developed unmanned vending machine consists of three components: a self-checkout system, a smart warehousing system, and a monitoring system. The self-checkout system is built from proximity switches on the shelves and calculates the total cost of the items being purchased. The warehousing system is regulated mainly by a networked PLC acting as an internet-connected controller. The monitoring system is presented mainly through chatbots on smartphones. This paper combines RFID, PLC, and chatbots to create a smart unmanned vending machine.
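As a minimal sketch of the self-checkout idea only: item detection via shelf sensors and RFID is reduced here to a list of detected tag IDs, and the tag IDs and price table are made up for illustration.

# Minimal self-checkout sketch: map detected RFID tag IDs to prices and total them.
PRICE_TABLE = {"E2000001": 25, "E2000002": 40, "E2000003": 15}   # price per item

def checkout(detected_tags):
    missing = [t for t in detected_tags if t not in PRICE_TABLE]
    if missing:
        raise ValueError(f"unknown items: {missing}")
    return sum(PRICE_TABLE[t] for t in detected_tags)

print(checkout(["E2000001", "E2000003", "E2000003"]))   # -> 55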
{"title":"Development of Smart Unmanned Vending Machines Using Networked PLC and Chatbots","authors":"Yuhan Huang, Zhen Chen, Jin-Shyan Lee","doi":"10.1109/ICCE-Taiwan55306.2022.9869033","DOIUrl":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869033","url":null,"abstract":"Due to the increasing labor cost, many companies tend to develop unmanned technologies. This paper creates a smart unmanned vending machine using networked programmable logic controllers (PLC) and chatbots. The developed unmanned vending machine consists of three components, including a self-checkout system, a smart warehousing system, and a monitoring system. The self-checkout system is built by the proximity switches on the shelves with calculating the total cost of the buying items. The warehousing system is mainly regulated by a networked PLC as a controller with a connection to the internet. The monitoring system is mainly presented by chatbots used on smartphones. This paper combines RFID, PLC, and chatbots to create a smart unmanned vending machine.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124747991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Larynx Object Segmentation and Indicators Generation Based on 3D VOSNet and Laryngeal Endoscopy Successive Images
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869059
I-Miao Chen, Pin-Yu Yeh, Ya-Chu Hsieh, Ting-Chih Chang, Wen-Fang Shen, Chiun-Li Chin
Clinically, laryngoscopy videos are often used to observe vocal fold movement and to analyze larynx-related lesions preliminarily. However, medicine currently lacks objective larynx indicators. Thus, the 3D VOSNet architecture is used to extract spatial features and classify larynx objects in sequences of laryngoscopy images. The results show that 3D VOSNet can accurately segment the left vocal fold, right vocal fold, and glottis, with accuracies of 93.48%, 94.63%, and 89.91%, respectively. Finally, a self-built algorithm is used to calculate six indicators: the length, area, and curvature of the vocal folds, the deviation of vocal fold length and area, the glottal area, and the symmetry of the vocal folds. These indicators improve the effectiveness and quality of vocal fold examination by providing immediate and objective information to otolaryngologists.
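The abstract lists the indicators but not how they are computed; the sketch below shows how area and left/right symmetry might be derived from binary segmentation masks, purely as an illustration and not the authors' self-built algorithm.

# Illustrative indicator computation from binary segmentation masks
# (1 = pixel belongs to the structure).
import numpy as np

def area(mask):
    return int(mask.sum())               # area in pixels

def symmetry(left_mask, right_mask):
    a_l, a_r = area(left_mask), area(right_mask)
    return min(a_l, a_r) / max(a_l, a_r) if max(a_l, a_r) else 1.0

left = np.zeros((256, 256), dtype=np.uint8);  left[60:200, 80:120] = 1
right = np.zeros((256, 256), dtype=np.uint8); right[60:195, 136:176] = 1
print(area(left), area(right), round(symmetry(left, right), 3))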
{"title":"Larynx Object Segmentation and Indicators Generation Based on 3D VOSNet and Laryngeal Endoscopy Successive Images","authors":"I-Miao Chen, Pin-Yu Yeh, Ya-Chu Hsieh, Ting-Chih Chang, Wen-Fang Shen, Chiun-Li Chin","doi":"10.1109/ICCE-Taiwan55306.2022.9869059","DOIUrl":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869059","url":null,"abstract":"Clinically, the laryngoscopy videos are often used to observe vocal folds movement and analysis larynx-related lesions preliminarily. However, there is a lack of objective larynx indicators in medicine currently. Thus, the 3D VOSNet architecture is used to extract spatial features and classify the larynx object in the sequence images of laryngoscopy. The results represent that the 3D VOSNet can accurately segment the left vocal fold, right vocal fold, and glottal, and the accuracy is 93.48%, 94.63%, and 89.91%, respectively. Finally, the self-built algorithm is utilized to calculate six measured indicators including the length, area, curvature, deviation of length and area of vocal folds, area of glottal, and symmetry of the vocal folds. Improve the effectiveness and quality of vocal fold examination by providing immediate and objective information to otolaryngologists.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124760058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of Wheelchair Basketball Player Tracker Using LED and Omni-camera
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869198
K. Kojima
This paper describes the development of a tracker for wheelchair basketball players that uses special flashing LEDs and an omnidirectional camera. Our previous trackers required several image-processing steps to find the multiple LEDs in the video captured by the omnidirectional camera, and these steps were time-consuming. This study uses convolutional neural networks to reduce the time needed to find the multiple LEDs in that video.
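The abstract does not give the network architecture; as a hedged sketch, a small patch classifier of the kind that could replace hand-tuned LED search might look like this (PyTorch, with hypothetical layer sizes and patch dimensions).

# Hypothetical small CNN that classifies whether a cropped 32x32 patch from an
# omnidirectional frame contains a flashing LED. Architecture sizes assumed.
import torch
import torch.nn as nn

class LEDPatchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)   # LED / background

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

patches = torch.rand(4, 3, 32, 32)        # four candidate RGB patches
logits = LEDPatchNet()(patches)
print(logits.shape)                       # torch.Size([4, 2])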
{"title":"Development of Wheelchair Basketball Player Tracker Using LED and Omni-camera","authors":"K. Kojima","doi":"10.1109/ICCE-Taiwan55306.2022.9869198","DOIUrl":"https://doi.org/10.1109/ICCE-Taiwan55306.2022.9869198","url":null,"abstract":"This paper describes the development of a tracker for basketball wheelchair players using special flashing LEDs and an omnidirectional camera. Our previous trackers required some image processing to find multiple LEDs in the video captured by the omnidirectional camera. The problem was that these image processes were time consuming. This study uses convolutional neural networks to reduce the time it takes to find multiple LEDs in that video.","PeriodicalId":164671,"journal":{"name":"2022 IEEE International Conference on Consumer Electronics - Taiwan","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129402987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}