
2019 IEEE 8th Global Conference on Consumer Electronics (GCCE): Latest Publications

Near-Field Wireless Power Transfer System with Efficiency Tracking for Dual-band Applications
Pub Date : 2019-10-01 DOI: 10.1109/GCCE46687.2019.9015611
Ming-Lung Kung, Feng-Yu Chen, Ken-Huang Lin
Near-field dual-band wireless power transfer (WPT) systems can be compatible with distinct operating bands. Several systems with different architectures are available, such as multi-coil modules or repeater-based coil modules. However, their efficiency decreases when the coupling among the coils changes. In this work, a dual-band WPT system with efficiency-tracking circuits (boost converters) is proposed to obtain higher efficiency in dynamic-coupling applications. Simulation results show that the efficiency can be increased by 5-25% at 6.78 MHz and 13.56 MHz when the receiver coils deviate from their optimal positions.
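The abstract does not detail the efficiency-tracking circuit, but the idea of retuning a boost converter as the coil coupling drifts can be pictured with a perturb-and-observe loop. The sketch below is an illustration only: the `efficiency` model, duty-cycle limits, and coupling values are hypothetical stand-ins for measured converter input and output power, not values from the paper.

```python
# Hypothetical sketch of an efficiency-tracking loop for a boost converter in a
# dual-band WPT receiver. The plant model below is an assumption that stands in
# for measured input/output power.

def efficiency(duty, coupling):
    """Toy efficiency curve: peaks at a duty cycle that shifts with coil coupling."""
    optimal = 0.35 + 0.3 * coupling          # assumed dependence on coupling
    return max(0.0, 0.9 - 2.5 * (duty - optimal) ** 2)

def track_efficiency(coupling, duty=0.5, step=0.01, iterations=60):
    """Perturb-and-observe: nudge the duty cycle and keep changes that help."""
    best = efficiency(duty, coupling)
    direction = 1.0
    for _ in range(iterations):
        candidate = min(0.9, max(0.1, duty + direction * step))
        eta = efficiency(candidate, coupling)
        if eta >= best:                       # improvement: keep moving this way
            duty, best = candidate, eta
        else:                                 # got worse: reverse direction
            direction = -direction
    return duty, best

for k in (0.2, 0.5, 0.8):                     # receiver moved, so coupling changes
    duty, eta = track_efficiency(coupling=k)
    print(f"coupling={k:.1f}: duty={duty:.2f}, efficiency={eta:.2f}")
```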
Citations: 1
Effectiveness Evaluation of Deep Features for Image Reconstruction from fMRI Signals
Pub Date : 2019-10-01 DOI: 10.1109/GCCE46687.2019.9015470
Saya Takada, Ren Togo, Takahiro Ogawa, M. Haseyama
Reconstruction of human cognitive contents based on the analysis of functional Magnetic Resonance Imaging (fMRI) signals has been actively researched. Cognitive contents such as seen images can be reconstructed by estimating the relation between fMRI signals and deep neural network (DNN) features extracted from the seen images. In order to reconstruct seen images with high accuracy, translating fMRI signals into meaningful features is an important task. In this paper, we validate the reconstruction accuracy of seen images using visual features from several DNN feature extraction models. Recent work on image reconstruction used VGG19 to extract visual features. However, newer models such as Inception-v3 and ResNet50 have been proposed, and these models perform general object recognition with higher accuracy. Thus, it is expected that the accuracy of image reconstruction improves when using features extracted by these newer models. Experimental results on images of five categories show the effectiveness of using visual features from newer DNN models.
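For readers unfamiliar with this pipeline, the decoding step such work builds on, learning a mapping from fMRI voxel patterns to DNN image features, can be sketched as a ridge regression. Everything below (the data shapes, synthetic signals, and regularization value) is an illustrative assumption rather than the authors' code; real use would substitute measured fMRI data and features from a pretrained ResNet50 or Inception-v3.

```python
# Minimal sketch (not the authors' code) of the common decoding step:
# learn a linear map from fMRI voxel patterns to DNN image features, so that
# features predicted for new scans can drive image reconstruction.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_voxels, n_features = 200, 1000, 512

X = rng.standard_normal((n_train, n_voxels))         # fMRI voxel patterns (stand-in)
W_true = 0.05 * rng.standard_normal((n_voxels, n_features))
Y = X @ W_true + 0.1 * rng.standard_normal((n_train, n_features))  # DNN features

# Ridge regression in closed form: W = (X^T X + alpha I)^-1 X^T Y
alpha = 10.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Y)

X_test = rng.standard_normal((10, n_voxels))
predicted_features = X_test @ W                       # would feed a reconstructor
print(predicted_features.shape)                       # (10, 512)
```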
Citations: 0
Image classification on projection based multilayer sparse representation
Pub Date : 2019-10-01 DOI: 10.1109/GCCE46687.2019.9014638
Tomoya Hirakawa, Y. Kuroki
This paper describes a novel multilayer-sparse-representation-based image classification method. The method designs a dictionary for the sparse coefficients of each layer with ADMM (Alternating Direction Method of Multipliers), referring to training images. In the classification stage, sparse coefficients would also have to be calculated with ADMM for test images, which incurs a computational burden. To reduce this burden, this work proposes projecting inputs onto the dictionary atoms of each layer instead of solving for sparse coefficients. This alternative is inspired by CNNs (Convolutional Neural Networks) and is also faster than solving for sparse coefficients. Experimental results show that our method can predict coefficient vectors faster than conventional methods with almost equivalent classification accuracy.
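The trade-off described here, ADMM sparse coding during dictionary design versus a cheap projection onto dictionary atoms at test time, can be made concrete with a small sketch. The dictionary, signal, and parameters below are random stand-ins, and the code is an assumed illustration of the general technique rather than the paper's implementation.

```python
# Illustrative contrast (assumed, not the paper's implementation) between
# solving sparse coefficients with ADMM and the cheaper projection D^T x.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(D, x, lam=0.1, rho=1.0, iters=100):
    """Solve min_a 0.5||x - D a||^2 + lam ||a||_1 with ADMM."""
    n_atoms = D.shape[1]
    a = np.zeros(n_atoms)
    z = np.zeros(n_atoms)
    u = np.zeros(n_atoms)
    Dtx = D.T @ x
    L = np.linalg.cholesky(D.T @ D + rho * np.eye(n_atoms))  # factor once
    for _ in range(iters):
        rhs = Dtx + rho * (z - u)
        a = np.linalg.solve(L.T, np.linalg.solve(L, rhs))     # a-update
        z = soft_threshold(a + u, lam / rho)                   # z-update (sparsity)
        u = u + a - z                                          # dual update
    return z

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
x = rng.standard_normal(64)

coeff_admm = admm_lasso(D, x)             # iterative, sparse
coeff_proj = D.T @ x                      # single projection used at test time
print(coeff_admm.shape, coeff_proj.shape)
```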
Citations: 0
Unequal Error Protection for Compressed Sensing with Polar Codes
Pub Date : 2019-10-01 DOI: 10.1109/GCCE46687.2019.9015394
Yueh-Hong Chen, Feng-Cheng Chang, Hsiang-Cheh Huang, Teng-Kuan Huang
Compressed sensing is known for its compression performance under error-free transmission. Applying polar codes to compressed sensing data helps cope with transmission over error-prone channels. Some parts of the data are more important than others; hence, we apply polar codes with unequal protection capabilities to the compressed sensing coefficients for error-resilient transmission over binary symmetric channels. Simulations demonstrate the resulting enhancements and the potential for practical applications.
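As a rough illustration of unequal error protection, one can encode the bits describing the most significant compressed-sensing coefficients with a low-rate polar code and the remaining bits with a higher-rate one. The block lengths, rates, and the BEC-based Bhattacharyya construction below are assumptions made for the sketch, not the scheme evaluated in the paper.

```python
# Hedged sketch (assumed construction, not the paper's scheme): protect the
# significant compressed-sensing coefficients with a low-rate polar code and
# the remaining coefficients with a higher-rate one.
import numpy as np

def polar_transform(u):
    """Encode over GF(2): x = u F^{(x)n} via the butterfly (natural order)."""
    x = u.copy()
    step = 1
    while step < len(x):
        for i in range(0, len(x), 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]
        step *= 2
    return x

def reliable_positions(n, k, design_erasure=0.5):
    """Pick the k most reliable bit positions via BEC Bhattacharyya evolution."""
    z = [design_erasure]
    while len(z) < n:
        z = [w for v in z for w in (2 * v - v * v, v * v)]
    return sorted(sorted(range(n), key=lambda i: z[i])[:k])

def encode_block(info_bits, n, k):
    u = np.zeros(n, dtype=np.uint8)               # frozen bits stay 0
    u[reliable_positions(n, k)] = info_bits
    return polar_transform(u)

rng = np.random.default_rng(2)
important_bits = rng.integers(0, 2, 16, dtype=np.uint8)   # e.g. sign/MSB bits
ordinary_bits = rng.integers(0, 2, 32, dtype=np.uint8)

cw_strong = encode_block(important_bits, n=64, k=16)  # rate 1/4: more protection
cw_weak = encode_block(ordinary_bits, n=64, k=32)     # rate 1/2: less protection
print(cw_strong.shape, cw_weak.shape)
```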
Citations: 1
Microwave Power Transfer Based on Moving Target Tracking in High Frame-Rate Camera Images
Pub Date : 2019-10-01 DOI: 10.1109/GCCE46687.2019.9015625
M. Fujii, Naoki Tsuji, Yukio Imai, Shigemi Masuda
We implemented a vision-based microwave power transfer experimental system aimed at feeding a moving target. Slot cars moving along a set-up course were used as moving objects. The target slot car was successfully detected, tracked, and irradiated by a power-transferring antenna array embedded with a high frame-rate image-signal-processing directivity controller. Experimental results show that beam tracking provided high received signal power at the moving target slot car for a longer time than a fixed beam.
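The directivity control itself is not specified in the abstract; a minimal sketch of how tracker coordinates could be turned into per-element phases for a uniform linear array is given below. The frequency, element count, spacing, and target positions are assumed values, not those of the experimental system.

```python
# Rough sketch (assumptions, not the experimental system): given a target
# position from the high frame-rate tracker, compute per-element phase shifts
# that steer a uniform linear array toward it.
import numpy as np

C = 3e8                     # speed of light [m/s]
FREQ = 5.8e9                # assumed microwave frequency [Hz]
WAVELENGTH = C / FREQ
N_ELEMENTS = 8
SPACING = WAVELENGTH / 2    # element spacing

def steering_phases(target_x, target_y):
    """Phase (radians) per element so the main lobe points at the target."""
    angle = np.arctan2(target_x, target_y)            # angle from broadside
    k = 2 * np.pi / WAVELENGTH
    elements = np.arange(N_ELEMENTS) * SPACING
    return -k * elements * np.sin(angle)

# tracker output: slot-car position in metres relative to the array centre
for pos in [(-0.3, 1.0), (0.0, 1.0), (0.4, 1.0)]:
    phases = steering_phases(*pos)
    print(pos, np.degrees(phases).round(1))
```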
Citations: 4
Image Based Deep Learning Model for Movie Trailer Genre Classification
Pub Date : 2019-10-01 DOI: 10.1109/GCCE46687.2019.9015293
Chih-Hsun Chou, P. Jen
A deep learning based movie genre classification model, integrating a convolutional neural network (CNN) with a long short-term memory network (LSTM), is presented in this study. In the image processing stage, a series of key-frame features is obtained using the CNN so that the LSTM can learn the dynamic features of the key-frames for classification. In the experiments, well-known traditional movie features as well as deep learning models used in other research were used for comparison to verify the performance of the proposed deep learning based movie genre classification model.
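A minimal sketch of the CNN-plus-LSTM arrangement described above is shown below in PyTorch: a small CNN encodes each key-frame and an LSTM reads the resulting feature sequence to predict the genre. The toy CNN, layer sizes, and frame counts are assumptions; the paper's actual backbone and hyperparameters are not given in the abstract.

```python
# Minimal sketch (an assumed architecture, not the authors' exact model) of a
# CNN + LSTM genre classifier for trailer key-frames.
import torch
import torch.nn as nn

class CnnLstmGenreClassifier(nn.Module):
    def __init__(self, n_genres=4, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(               # small stand-in for a real CNN
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_genres)

    def forward(self, frames):                  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1))  # encode every key-frame
        feats = feats.view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)          # last hidden state summarises the clip
        return self.head(h_n[-1])

model = CnnLstmGenreClassifier()
dummy = torch.randn(2, 16, 3, 64, 64)           # 2 trailers, 16 key-frames each
print(model(dummy).shape)                       # torch.Size([2, 4])
```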
Citations: 2
New Automatic Navigation Time Recording System for Small Aircraft
Pub Date : 2019-10-01 DOI: 10.1109/GCCE46687.2019.9015614
Susumu Kawai, T. Wada, H. Ebara
In this paper, we propose an automatic recording system for the navigation time of a small aircraft. The acceleration of the aircraft along the X axis is detected using the 3-axis acceleration sensor of a smartphone, and the navigation time is determined from the acceleration. We actually flew a small aircraft and collected the necessary data. The analysis gave valid results.
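One plausible way to derive a navigation time from the X-axis acceleration is to time the interval during which the acceleration magnitude stays above a threshold. The sketch below, including the sampling rate, threshold, and synthetic signal, is an assumption for illustration and not the detection rule described in the paper.

```python
# Hedged sketch of one possible threshold-based navigation-time estimate from
# X-axis acceleration; all constants and the synthetic signal are assumptions.
import numpy as np

SAMPLE_RATE = 50.0                     # Hz, assumed smartphone sampling rate
THRESHOLD = 2.0                        # m/s^2, assumed activity threshold

def navigation_time(accel_x):
    """Time between first and last samples whose |acceleration| exceeds the threshold."""
    active = np.flatnonzero(np.abs(accel_x) > THRESHOLD)
    if active.size == 0:
        return 0.0
    return (active[-1] - active[0]) / SAMPLE_RATE

# synthetic example: quiet, a 30 s "flight" burst, quiet again
t = np.arange(0, 120, 1 / SAMPLE_RATE)
accel = 0.2 * np.random.default_rng(3).standard_normal(t.size)
accel[(t > 30) & (t < 60)] += 3.0
print(f"estimated navigation time: {navigation_time(accel):.1f} s")
```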
Citations: 0
The Design of Wireless Power Transformer for Electronic Flower Pot
Pub Date : 2019-10-01 DOI: 10.1109/GCCE46687.2019.9015476
Chia-Yang Liu, C. Hsiao
A wireless power transformer for an electronic flower pot is presented in this paper. Traditionally, electronic flowers are mass-produced, with the flower and the flower pot combined in one product, so the florist can no longer change the design. In this paper, we separate the electronic flower from the flower pot: a wireless power transmitter is designed into the flower pot and a wireless power receiver into the electronic flower. The power transmitter is designed as building blocks, so florists can use the transmitter in flower pots of different sizes and combine it with any electronic flower to design their products.
Citations: 1
Estimating Viewed Image Categories from fMRI Activity via Multi-view Bayesian Generative Model
Pub Date : 2019-10-01 DOI: 10.1109/GCCE46687.2019.9015360
Yusuke Akamatsu, Ryosuke Harakawa, Takahiro Ogawa, M. Haseyama
Research on estimating what people view from their brain activity has attracted wide attention. Many existing methods focus only on the relationship between brain activity and visual features extracted from viewed images. In this paper, we propose a multi-view Bayesian generative model (MVBGM), which adopts an additional view, i.e., category features obtained from the viewed images. MVBGM, based on automatic feature selection under the Bayesian approach, can also avoid the overfitting caused by high-dimensional features. Experimental results show that MVBGM can estimate viewed image categories from brain activity more accurately than existing methods.
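A drastically simplified sketch of the multi-view idea of combining per-category evidence from a visual-feature view and a category-feature view is given below. It is only a naive Gaussian stand-in under a conditional-independence assumption, not the Bayesian generative model with automatic feature selection proposed in the paper; all dimensions and parameters are invented for illustration.

```python
# Toy two-view classifier (assumed illustration, not MVBGM): score each
# category by the product of likelihoods from two feature views.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
n_cat, d_vis, d_cat = 3, 8, 4

# per-category means for each view (stand-ins for fitted model parameters)
mu_vis = rng.standard_normal((n_cat, d_vis))
mu_cat = rng.standard_normal((n_cat, d_cat))

def classify(vis_feat, cat_feat):
    scores = []
    for c in range(n_cat):
        log_p = (multivariate_normal.logpdf(vis_feat, mu_vis[c], np.eye(d_vis))
                 + multivariate_normal.logpdf(cat_feat, mu_cat[c], np.eye(d_cat)))
        scores.append(log_p)                     # combine the two views
    return int(np.argmax(scores))

# a test sample drawn near category 1 in both views
vis = mu_vis[1] + 0.1 * rng.standard_normal(d_vis)
cat = mu_cat[1] + 0.1 * rng.standard_normal(d_cat)
print(classify(vis, cat))                        # expected: 1
```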
Citations: 4
An Auto-Scheduling Framework for the Internet of Things based on Process and Optimizer Modules
Pub Date : 2019-10-01 DOI: 10.1109/GCCE46687.2019.9015539
Mohd Hafizuddin Bin Kamilin, Mohd Anuaruddin Bin Ahmadon, S. Yamaguchi
In order to design an automated Internet of Things, the system developer not only needs to create a schedule to manage the execution of IoT devices; the schedule itself must also be able to achieve the desired outcome at the end of the scheduling period. However, it is difficult to create a generalized schedule because IoT is an application-centric system. Moreover, a smart service must be able to react not only to the current situation but also to what will happen in the future. In this paper, we propose an auto-scheduling framework that manages device running times and uses prediction data for fine-tuning. To generalize our method for any type of application, we propose process and optimizer modules that generate a schedule based on given user and prediction parameters. Finally, we show the effectiveness by evaluating the running time.
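The division into process and optimizer modules can be pictured with a toy example: process descriptions state how long each device must run, and the optimizer turns a prediction (here, an assumed electricity-price forecast) into concrete run hours. All interfaces, names, and data below are hypothetical illustrations, not the framework's actual API.

```python
# Toy sketch (assumed interfaces, not the paper's framework) of process
# descriptions plus an optimizer module that schedules against predictions.
from dataclasses import dataclass

@dataclass
class Process:
    device: str
    hours_needed: int          # how long the device must run in the period

def optimizer(processes, predicted_price):
    """Greedy optimizer: run each device during its cheapest predicted hours."""
    schedule = {}
    ranked_hours = sorted(range(len(predicted_price)),
                          key=predicted_price.__getitem__)
    for p in processes:
        schedule[p.device] = sorted(ranked_hours[:p.hours_needed])
    return schedule

# prediction-module output: e.g. a forecast price for each of the next 8 hours
forecast = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.35, 0.40]
plan = optimizer([Process("water_heater", 2), Process("ev_charger", 3)], forecast)
print(plan)   # devices scheduled into the cheapest predicted hours
```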
Citations: 1