Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869116
Yao–Ming Zhang, S. Lin, Tzu-Hsiang Chou, Sin-Ye Jhong, Yung-Yao Chen
Lane detection is an important topic in self-driving systems. A stable lane detection system helps a self-driving car make decisions, providing a more comfortable and safer driving environment for the driver. In this paper, we use a network architecture composed of an encoder and a decoder with a Feature Shift Aggregator between them to make the prediction more comprehensive. Through our dataset, we found that problems such as glitches occur when changing lanes; we address them with data augmentation and a filter, respectively. Finally, the network achieves state-of-the-art accuracy on the TuSimple dataset.
Title: Robust Lane Detection via Filter Estimator and Data Augmentation (2022 IEEE International Conference on Consumer Electronics - Taiwan)
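The abstract does not specify which filter is used to suppress lane-change glitches. As one plausible sketch, an exponential moving average over per-frame lane coordinates damps single-frame outliers; the function name, frame format, and `alpha` value are illustrative assumptions, not the paper's method:

```python
import numpy as np

def smooth_lanes(frames, alpha=0.5):
    """Exponentially smooth per-frame lane x-coordinates to suppress
    single-frame glitches (e.g., during lane changes).

    frames: list of 1-D arrays, each holding the predicted x-coordinate
            of one lane at fixed row positions for one video frame.
    alpha:  weight of the current frame; smaller values smooth more.
    """
    smoothed, state = [], None
    for xs in frames:
        xs = np.asarray(xs, dtype=float)
        state = xs if state is None else alpha * xs + (1 - alpha) * state
        smoothed.append(state.copy())
    return smoothed

# A glitchy middle frame is pulled back toward its temporal neighbors.
result = smooth_lanes([[100, 110], [160, 170], [100, 110]], alpha=0.5)
```

A Kalman filter per lane point would be a heavier alternative with the same intent: trusting temporal continuity over any single noisy prediction.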
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869054
Wen-Hui Chen, H. Hsu, Yu‐Chen Lin
Potholes and uneven road surfaces can lead to flat tires, suspension damage, or even accidents. A real-time uneven-pavement detection system can warn drivers beforehand to reduce vehicle damage and safety risks, and can also inform road repair and maintenance departments, saving manual inspection effort. The proposed real-time detection system employs the YOLO-v4 algorithm as the detection model, followed by quantization with the Vitis-AI framework for model compression, so that the developed system can run on a Xilinx FPGA platform without compromising accuracy or speed. Experimental results show that the proposed system achieves 28 FPS with four threads running at 300 MHz.
Title: Implementation of a Real-time Uneven Pavement Detection System on FPGA Platforms
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869177
Aris S. Canto, Pedro V.S. Matias, Rafael O. Moreira, Thyago A. Sampaio, Adriano E. Santos, D. F. Luiz, Celso B. Carvalho, W. S. S. Júnior
Due to the increasing number of drivers on the highways, we are exposed to an increasing probability of accidents. Collisions between vehicles can cause serious health and economic problems. In this work, we propose a system that uses a set of sensors, a microcontroller, and an Android mobile application to notify emergency contacts and report the location of an accident. We define range metrics that indicate safety zones (safe, alert, and danger) to help prevent collisions. Additionally, we estimate vehicle collisions and rollovers. Finally, the system sends reports with data from events generated during vehicle traffic. In terms of results, the performance was found to be adequate for the proposed application.
Title: A mobile IoT system for the detection and prevention of vehicular collisions
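The three-zone classification described above amounts to thresholding a measured range. A minimal sketch, with threshold values that are illustrative assumptions rather than the paper's calibrated metrics:

```python
def safety_zone(distance_m, alert_at=10.0, danger_at=4.0):
    """Map a measured range (in meters) to one of the three zones
    described in the paper: safe, alert, or danger.

    alert_at / danger_at: hypothetical thresholds; a real deployment
    would tune them to sensor range, vehicle speed, and braking distance.
    """
    if distance_m <= danger_at:
        return "danger"
    if distance_m <= alert_at:
        return "alert"
    return "safe"
```

In practice the thresholds would likely be functions of relative speed rather than constants, so that the danger zone grows as closing velocity increases.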
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869288
Tzu-Hsuan Yeh, Saiau-Yue Tsau
Facial expressions are an essential part of the human communication channel, so people's eyes focus on facial expressions when talking [1]. This research provides a cat-ear animated accessory to help Virtual YouTubers address the "stone face" problem and insufficient emotional communication.
Title: Taking Cat Ears to Improve the Facial Emotions of Virtual YouTuber to Enhance the Immersion of Readers
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869123
Y. Ishibashi, Jianlin Ma, K. Psannis
In this paper, we examine the effects of the haptic and visual senses on human angle perception in networked virtual environments through QoE (Quality of Experience) assessment. Each subject can touch angles in a 3D virtual space through a haptic interface device as well as watch them. By comparing three cases, we clarify the effect of each sense: one case uses only the haptic sense, another only the visual sense, and the third uses both. As a result, we quantitatively confirm that the visual sense can differentiate angles more easily than the haptic sense.
Title: Effects of Haptic and Visual Senses on Angle Perception for Networked Virtual Environments
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869028
M. O. Silva, Gustavo M. Torres, Myke D. M. Valadão, E. V. C. U. Mattos, Antônio M. C. Pereira, Matheus S. Uchôa, Lucas M. Torres, N. ValneyM., Victor L. G. Cavalcante, José E. B. S. Linhares, Adriel V. Dos Santos, Agemilson P. Silva, Caio F. S. Cruz, Rômulo Fabrício, Ruan J. S. Belém, Lucas Fujita, Felipe A.A. Araújo, Carlos A. Monteiro, Thiago B. Bezerra, W. S. S. Júnior, Celso B. Carvalho
In this work, conducted by three partners (UFAM/CETELI, Envision (TPV Group), and ICTS), we present an embedded system capable of recognizing actions and measuring the assembly time of human workers on an industrial production line. The system is composed of machine learning algorithms and the NVIDIA Jetson Nano embedded platform. In terms of performance, the system achieved a best-case recognition rate of 91%.
Title: Action and Assembly Time Measurement System of Industry Workers using Jetson Nano
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9868990
Jirayu Petchhan, S. Su
Machine learning technology is growing rapidly, and our research integrates AI with digital twins and digital transformation across a wide range of industries. A key issue in these fields is using knowledge from both the virtual and the physical world throughout the process. Part of the problem resembles deep transfer learning, where a domain shift occurs between the two domains. Hence, our proposed framework learns domain-invariant representations through Kernel Higher-order Tensor Matching (KHoM), reinforced by cross-domain similarity learning via SoftTriple. Results, evaluated on a public dataset and on new fatal-incident data, show that our framework can reduce the discrepancy between domains with only a few adaptation examples, helped substantially by learning the similarity of homogeneous objects.
Title: Image Blending-assisted Few-Shot Cross-Domain Similarity Learning and Adaptation Tasks for Ambiguous Hazardous Incidents
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869038
C. Hsieh, Quanbin Zhang, Zece Chen
Currently, most indoor surveillance systems employ multiple conventional cameras in a monitored place to achieve full coverage. However, areas can still go uncovered because of limitations of conventional cameras, for example, shooting-angle restrictions. To alleviate this problem, this paper presents a surveillance system based on a fisheye (360-degree panoramic) camera and YOLOv4 object detection. The proposed approach consists of four stages: (i) capture panoramic images, (ii) convert the captured image into a conventional image, (iii) detect objects, specifically human objects, and (iv) record them if necessary. The proposed approach reduces costs, saves installation time, and eliminates uncovered areas in a surveillance system.
Title: An Indoor Surveillances System Using Fisheye Camera and YOLOV4 Object Detection
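Stage (ii), converting a fisheye view into a conventional perspective image, is typically done by computing, for each output pixel, the fisheye pixel it samples from. A minimal sketch under an ideal equidistant lens model (r = f * theta); real lenses need calibrated distortion coefficients, and the focal lengths and image center below are illustrative:

```python
import math

def perspective_to_fisheye(u, v, fp, f_fish, cx, cy):
    """Map a pixel (u, v), given relative to the center of a virtual
    perspective view with focal length fp (pixels), to the
    corresponding pixel in an equidistant fisheye image whose optical
    center is (cx, cy) and focal length is f_fish (pixels)."""
    r_p = math.hypot(u, v)
    theta = math.atan2(r_p, fp)   # ray angle from the optical axis
    phi = math.atan2(v, u)        # azimuth of the ray around the axis
    r_f = f_fish * theta          # equidistant model: radius grows linearly
    return cx + r_f * math.cos(phi), cy + r_f * math.sin(phi)
```

Iterating this over every (u, v) of the target view and bilinearly sampling the fisheye image yields the "conventional" frame that is then fed to YOLOv4.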
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869009
Nan-Ching Tai, Jia-Ling Wu, Chih-Yi Yeh
Augmented reality (AR), which can overlay interactive digital content on real scenes, has advanced the breadth and depth of guided tours of architectural heritage sites. However, it increases the demand for precision in identifying real scenes without installing physical AR targets, so that exploration remains flexible and natural. In this study, a customized convertible physical stand for a tablet computer, which captures an extendable physical target object, was developed to calibrate captured scenes for on-site viewing. The developed stand ensures an improved AR touring experience when conveying complex knowledge about the visited cultural heritage sites.
Title: Supplementary Physical Device for In-Depth Augmented Reality Touring of Architectural Heritage Sites
Pub Date: 2022-07-06 | DOI: 10.1109/ICCE-Taiwan55306.2022.9869268
Matus Pleva, Š. Korečko, D. Hládek, Patrick A. H. Bours, Markus Hoff Skudal, Y. Liao
Recent experience with virtual reality (VR) technology has shown that users prefer electromyography (EMG) sensor-based controllers over hand controllers. The results presented in this paper show the potential of EMG-based controllers, in particular the Myo armband, to identify a computer system user. In the first scenario, we train various classifiers on 25 keyboard-typing movements and test on 75. The results with a 1-dimensional convolutional neural network indicate that we can identify the user with 93% accuracy by analyzing only the EMG data from the Myo armband. When 75 movements are used for training, accuracy increases to 96.45% after cross-validation.
Title: Biometric User Identification by Forearm EMG Analysis
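The core operation of the 1-dimensional CNN mentioned above is sliding a learned kernel along each EMG channel. A toy NumPy sketch of that feature-extraction step (convolution, ReLU, global max-pooling); the kernel values and window length are illustrative assumptions, not the paper's trained network:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D sliding dot product (cross-correlation, as used
    in CNN layers): one output per full overlap of kernel and signal."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel)
                     for i in range(n)])

def features(emg_window, kernels):
    """Toy feature extractor for one EMG channel: convolve the window
    with each kernel, apply ReLU, then global-max-pool each feature
    map down to a single activation."""
    return np.array([np.maximum(conv1d(emg_window, k), 0.0).max()
                     for k in kernels])

emg = np.array([0.1, 0.4, -0.2, 0.9, 0.3])       # hypothetical EMG samples
kernels = [np.array([1.0, -1.0]),                 # difference detector
           np.array([0.5, 0.5])]                  # local-average detector
feats = features(emg, kernels)
```

A full classifier would stack several such layers per armband channel and feed the pooled activations into a softmax over enrolled users.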