Near-Field Wireless Power Transfer System with Efficiency Tracking for Dual-band Applications
Pub Date : 2019-10-01, DOI: 10.1109/GCCE46687.2019.9015611
Ming-Lung Kung, Feng-Yu Chen, Ken-Huang Lin
Near-field dual-band wireless power transfer (WPT) systems can be compatible with distinct operating bands. Several systems with different architectures are available, such as multi-coil modules or repeater-based coil modules. However, their efficiencies decrease when the coupling among the coils changes. In this work, a dual-band WPT system with efficiency-tracking circuits (boost converters) is proposed to obtain higher efficiency in dynamic-coupling applications. The simulation results show that the efficiency can be increased by 5-25% at 6.78 MHz and 13.56 MHz when the receiver coils deviate from their optimal positions.
{"title":"Near-Field Wireless Power Transfer System with Efficiency Tracking for Dual-band Applications","authors":"Ming-Lung Kung, Feng-Yu Chen, Ken-Huang Lin","doi":"10.1109/GCCE46687.2019.9015611","DOIUrl":"https://doi.org/10.1109/GCCE46687.2019.9015611","url":null,"abstract":"Near-field dual-band wireless power transfer (WPT) systems can be compatible with distinct operating bands. Several systems are available with different architectures, such as multi-coil module or repeater-based coil module. However, their efficiencies become smaller when the coupling among coils change. In this work a dual-band WPT system with efficiency-tracking circuits-boost converters is proposed to obtain higher efficiency under dynamic-coupling applications. The simulation results show that the efficiency can be increased by 5-25% at 6.78 MHz and 13.56 MHz when the receiver coils are deviated from the optimal positions.","PeriodicalId":303502,"journal":{"name":"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117255882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effectiveness Evaluation of Deep Features for Image Reconstruction from fMRI Signals
Pub Date : 2019-10-01, DOI: 10.1109/GCCE46687.2019.9015470
Saya Takada, Ren Togo, Takahiro Ogawa, M. Haseyama
Reconstruction of human cognitive contents based on the analysis of functional magnetic resonance imaging (fMRI) signals has been actively researched. Cognitive contents such as seen images can be reconstructed by estimating the relation between fMRI signals and deep neural network (DNN) features extracted from the seen images. In order to reconstruct seen images with high accuracy, translating fMRI signals into meaningful features is an important task. In this paper, we evaluate the reconstruction accuracy of seen images obtained with visual features from several DNN feature extraction models. Recent work on image reconstruction used VGG19 to extract visual features. However, newer models such as Inception-v3 and ResNet50 have since been proposed, and these models perform general object recognition with higher accuracy. Thus, the accuracy of image reconstruction is expected to improve when features extracted by these newer models are used. Experimental results on images of five categories show the effectiveness of using visual features from newer DNN models.
{"title":"Effectiveness Evaluation of Deep Features for Image Reconstruction from fMRI Signals","authors":"Saya Takada, Ren Togo, Takahiro Ogawa, M. Haseyama","doi":"10.1109/GCCE46687.2019.9015470","DOIUrl":"https://doi.org/10.1109/GCCE46687.2019.9015470","url":null,"abstract":"Reconstruction of human cognitive contents based on analyzing of functional Magnetic Resonance Imaging (fMRI) signals has been actively researched. Cognitive contents such as seen images can be reconstructed by estimating the relation between fMRI signals and deep neural network (DNN) features extracted from seen images. In order to reconstruct seen images with high accuracy, translation fMRI signals into meaningful features is an important task. In this paper, we validate the reconstruction accuracy of seen images by using visual features with some DNN feature extraction models. Recent works for image reconstruction used VGG19 to extract visual features. However, newer models such as Inception-v3 and ResNet50 have been proposed and these models perform general object recognition with higher accuracy. Thus it is expected the accuracy of image reconstruction is improved when using features extracted by these newer models. Experimental results for images of five categories show the effectiveness of the use of visual features from newer DNN models.","PeriodicalId":303502,"journal":{"name":"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)","volume":"97 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117330679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image classification on projection based multilayer sparse representation
Pub Date : 2019-10-01, DOI: 10.1109/GCCE46687.2019.9014638
Tomoya Hirakawa, Y. Kuroki
This paper describes a novel multilayer-sparse-representation-based image classification method. The method designs a dictionary for the sparse coefficients of each layer with ADMM (Alternating Direction Method of Multipliers) by referring to training images. In the classification stage, sparse coefficients would also have to be calculated with ADMM for the test images, which incurs a computational burden. To reduce this burden, this work proposes projecting inputs onto the dictionary atoms of each layer instead of solving for the sparse coefficients. This alternative is inspired by CNNs (Convolutional Neural Networks) and is faster than solving for the sparse coefficients. Experimental results show that our method can predict coefficient vectors faster than the conventional methods with almost equivalent classification accuracy.
{"title":"Image classification on projection based multilayer sparse representation","authors":"Tomoya Hirakawa, Y. Kuroki","doi":"10.1109/GCCE46687.2019.9014638","DOIUrl":"https://doi.org/10.1109/GCCE46687.2019.9014638","url":null,"abstract":"This paper describes a novel multilayer-sparse-representation based image classification. This method designs a dictionary for sparse coefficients of each layer with ADMM (Alternating Direction Method of Multipliers) referring training images. For the classification stage, sparse coefficients should also be calculated with ADMM for test images, which needs computational burden. To reduce the burden, this work proposes to project inputs onto dictionary atoms of each layer instead of solving sparse coefficients. This alternative method is inspired by CNNs (Convolutional Neural Networks), and is also faster than solving sparse coefficients. Experimental results show that our method can predict coefficient vectors faster than the conventional methods with almost equivalent classification accuracy.","PeriodicalId":303502,"journal":{"name":"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116164085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unequal Error Protection for Compressed Sensing with Polar Codes
Pub Date : 2019-10-01, DOI: 10.1109/GCCE46687.2019.9015394
Yueh-Hong Chen, Feng-Cheng Chang, Hsiang-Cheh Huang, Teng-Kuan Huang
Compressed sensing is known for its compression performance under error-free transmission. It would therefore be helpful to apply polar codes to compressed sensing data to cope with transmission over error-prone channels. Since some parts of the data are more important than others, we apply polar codes with unequal protection capabilities to the compressed sensing coefficients for error-resilient transmission over binary symmetric channels. Simulations demonstrate the resulting improvements and the potential for practical applications.
{"title":"Unequal Error Protection for Compressed Sensing with Polar Codes","authors":"Yueh-Hong Chen, Feng-Cheng Chang, Hsiang-Cheh Huang, Teng-Kuan Huang","doi":"10.1109/GCCE46687.2019.9015394","DOIUrl":"https://doi.org/10.1109/GCCE46687.2019.9015394","url":null,"abstract":"Compressed sensing is famous for its compression performances under error-free transmission. It would be helpful to apply polar codes for compressed sensing data to cope with the transmission over error-prone channels. Parts of data are more important than others, hence, we apply polar codes with unequal protection capabilities to compressed sensing coefficients, for the error resilient transmission over the binary symmetric channels. Simulations have presented the enhancements and the potential use for practical applications.","PeriodicalId":303502,"journal":{"name":"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123460857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microwave Power Transfer Based on Moving Target Tracking in High Frame-Rate Camera Images
Pub Date : 2019-10-01, DOI: 10.1109/GCCE46687.2019.9015625
M. Fujii, Naoki Tsuji, Yukio Imai, Shigemi Masuda
We implemented a vision-based microwave power transfer experimental system that aims to feed a moving target. Slot cars moving along a set-up course were used as the moving objects. The target slot car was successfully detected, tracked, and irradiated by a power-transferring antenna array combined with a high frame-rate image-signal-processing directivity controller. Experimental results show that beam tracking provided high received signal power at the moving target slot car for a longer time than a fixed beam did.
{"title":"Microwave Power Transfer Based on Moving Target Tracking in High Frame-Rate Camera Images","authors":"M. Fujii, Naoki Tsuji, Yukio Imai, Shigemi Masuda","doi":"10.1109/GCCE46687.2019.9015625","DOIUrl":"https://doi.org/10.1109/GCCE46687.2019.9015625","url":null,"abstract":"We implemented a vision-based microwave power transfer experimental system aiming to feed a moving target. Slot cars moving along a set-up course were used as moving objects. The target slot car was successfully detected, tracked, and irradiated by a power-transferring antenna array embedded with a high frame-rate image-signal-processing directivity controller. Experimental results show that the beam tracking provided high received signal power for a longer time at the moving target slot car than a fixed beam.","PeriodicalId":303502,"journal":{"name":"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123689451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image Based Deep Learning Model for Movie Trailer Genre Classification
Pub Date : 2019-10-01, DOI: 10.1109/GCCE46687.2019.9015293
Chih-Hsun Chou, P. Jen
A deep-learning-based movie genre classification model, integrating a convolutional neural network (CNN) with a long short-term memory network (LSTM), is presented in this study. In the image processing stage, a series of key-frame features is obtained with the CNN so that the LSTM can be applied to learn the dynamic features of the key-frames for classification. In the experiments, well-known traditional movie features as well as deep learning models used in other studies were used for comparison to verify the performance of the proposed model.
{"title":"Image Based Deep Learning Model for Movie Trailer Genre Classification","authors":"Chih-Hsun Chou, P. Jen","doi":"10.1109/GCCE46687.2019.9015293","DOIUrl":"https://doi.org/10.1109/GCCE46687.2019.9015293","url":null,"abstract":"A deep learning based movie genre classification model, integrating the convolutional neural network (CNN) with the long-short term memory network (LSTM), was presented in this study. In the image process, a series of key-frame features was obtained by using the CNN so that the LSTM can be applied to learn the dynamic features of the key-frames for classification. In the experiment, well-known traditional movie features as well as the deep learning models used in other researches were used for comparisons to verify the performance of the proposed deep learning based movie genre classification model.","PeriodicalId":303502,"journal":{"name":"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)","volume":"214 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122149029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New Automatic Navigation Time Recording System for Small Aircraft
Pub Date : 2019-10-01, DOI: 10.1109/GCCE46687.2019.9015614
Susumu Kawai, T. Wada, H. Ebara
In this paper, we propose an automatic recording system for the navigation time of a small aircraft. The acceleration of the aircraft along the X axis is detected by using the 3-axis acceleration sensor of a smartphone, and the navigation time is determined from this acceleration. We actually flew a small aircraft and collected the necessary data, and the analysis gave valid results.
{"title":"New Automatic Navigation Time Recording System for Small Aircraft","authors":"Susumu Kawai, T. Wada, H. Ebara","doi":"10.1109/GCCE46687.2019.9015614","DOIUrl":"https://doi.org/10.1109/GCCE46687.2019.9015614","url":null,"abstract":"In this paper, we propose an automatic recording system of the navigation time of a small aircraft. The acceleration of a small aircraft is detected from the X axis by using 3-axis acceleration sensor of the smartphone. The navigation time is determined from the acceleration. We actually flew a small aircraft and collected necessary data. The analysis gave valid results.","PeriodicalId":303502,"journal":{"name":"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123951629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Design of Wireless Power Transformer for Electronic Flower Pot
Pub Date : 2019-10-01, DOI: 10.1109/GCCE46687.2019.9015476
Chia-Yang Liu, C. Hsiao
A wireless power transformer for an electronic flower pot is presented in this paper. Traditionally, the electronic flower is mass-produced: the flower and the flower pot are combined in one product, so the florist cannot change the design any more. In this paper, we separate the electronic flower from the flower pot. A wireless power transmitter is designed into the flower pot and a wireless power receiver is designed into the electronic flower. The power transmitter is designed as a building block, so the florist can use the transmitter in flower pots of different sizes and can combine it with any electronic flower to design the product.
{"title":"The Design of Wireless Power Transformer for Electronic Flower Pot","authors":"Chia-Yang Liu, C. Hsiao","doi":"10.1109/GCCE46687.2019.9015476","DOIUrl":"https://doi.org/10.1109/GCCE46687.2019.9015476","url":null,"abstract":"A wireless power transformer for electronic flower pot is presented in this paper. In traditional the electronic flower is usually mass production. The flower and flower pot are combined in one product. The florist can't change the design any more. In this paper we separate the electronic flower and flower pot. A wireless power transmitter is designed in the flower pot and a wireless power reciver is designed in the electrnoic flower. The power transmitter is designed as building blocks. The floeirt can used the transmitter in different size of flower pot. They also can choose any different ecectronic flower to design the production.","PeriodicalId":303502,"journal":{"name":"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124493273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating Viewed Image Categories from fMRI Activity via Multi-view Bayesian Generative Model
Pub Date : 2019-10-01, DOI: 10.1109/GCCE46687.2019.9015360
Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, M. Haseyama
Research on estimating what people view from their brain activity has attracted wide attention. Many existing methods focus only on the relationship between brain activity and visual features extracted from the viewed images. In this paper, we propose a multi-view Bayesian generative model (MVBGM), which adopts an additional view, i.e., category features obtained from the viewed images. MVBGM, which performs automatic feature selection under the Bayesian approach, can also avoid the overfitting caused by high-dimensional features. Experimental results show that MVBGM can estimate viewed image categories from brain activity more accurately than existing methods.
{"title":"Estimating Viewed Image Categories from fMRI Activity via Multi-view Bayesian Generative Model","authors":"Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, M. Haseyama","doi":"10.1109/GCCE46687.2019.9015360","DOIUrl":"https://doi.org/10.1109/GCCE46687.2019.9015360","url":null,"abstract":"Researches for estimating what people view from their brain activity have attracted wide attention. Many existing methods focus on only relationship between brain activity and visual features extracted from viewed images. In this paper, we propose a multi-view Bayesian generative model (MVBGM), which adopts a new view, i.e., category features obtained from viewed images. MVBGM based on automatic feature selection under the Bayesian approach can also avoid overfitting caused by high dimensional features. Experimental results show that MVBGM can estimate viewed image categories from brain activity more accurately than existing methods.","PeriodicalId":303502,"journal":{"name":"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129797167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Auto-Scheduling Framework for the Internet of Things based on Process and Optimizer Modules
Pub Date : 2019-10-01, DOI: 10.1109/GCCE46687.2019.9015539
Mohd Hafizuddin Bin Kamilin, Mohd Anuaruddin Bin Ahmadon, S. Yamaguchi
In order to design an automated Internet of Things (IoT) system, the system developer not only needs to create a schedule that manages the execution of the IoT devices; the schedule must also achieve the desired outcome at the end of the scheduling period. However, it is difficult to create a generalized schedule because IoT is an application-centric system. Moreover, a smart service must be able to react not only to the current situation but also to what will happen in the future. In this paper, we propose an auto-scheduling framework that manages device running times and uses prediction data for fine-tuning. To generalize our method to any type of application, we propose process and optimizer modules that generate a schedule based on given user and prediction parameters. Finally, we show the effectiveness of the framework by evaluating its running time.
{"title":"An Auto-Scheduling Framework for the Internet of Things based on Process and Optimizer Modules","authors":"Mohd Hafizuddin Bin Kamilin, Mohd Anuaruddin Bin Ahmadon, S. Yamaguchi","doi":"10.1109/GCCE46687.2019.9015539","DOIUrl":"https://doi.org/10.1109/GCCE46687.2019.9015539","url":null,"abstract":"In order to design an automated Internet of Things, the system developer not only needs to create a schedule to manage the execution of IoT devices by themselves, the schedule itself must also be able to achieve the desired outcome at the end of the scheduling period. However, it is difficult to create a generalized schedule because IoT is an application-centric system. Moreover, smart service must not only be able to react to the current situation but also to what will happen in the future. In this paper, we proposed an auto-scheduling framework to manage devices running time and use prediction data for fine-tuning. To generalize our method for any type of applications, we propose a process and optimizer modules to generate a schedule based on given user and prediction parameters. Finally, we showed the effectiveness by evaluating the running time.","PeriodicalId":303502,"journal":{"name":"2019 IEEE 8th Global Conference on Consumer Electronics (GCCE)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128312566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}