
Latest Publications: 2019 IEEE International Workshop on Signal Processing Systems (SiPS)

Pipelined Implementations for Belief Propagation Polar Decoder: From Formula to Hardware
Pub Date : 2019-10-01 DOI: 10.1109/SiPS47522.2019.9020515
Chao Ji, Zaichen Zhang, X. You, Chuan Zhang
A general design method for pipelined belief propagation (BP) polar decoders is proposed in this paper. By associating the data flow graph (DFG) of the polar encoder with the factor graph (FG) of the BP polar decoder, the regular structure of the FG helps determine a generation formula that represents the pipelined BP polar decoder. Using Python as a compiler, the generation formula is translated into a series of synthesizable Verilog HDL files for various code lengths and parallelism degrees. Considering the balance between performance and cost, this formula-to-hardware design can be extended to explore the design space, where tradeoffs can be made according to specific application requirements. Implementation results from the auto-generation system show that our design is reliable and practicable.
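As a small illustration of such a formula-to-hardware flow, the following Python sketch emits Verilog text for the processing-element stages of a pipelined BP decoder. The module and port names, the `pe` submodule, and the node-pairing rule are hypothetical and only indicate how a generator of this kind can be scripted; they are not the paper's generation formula.

```python
# Hypothetical generator: emits one pipeline stage of an N = 2**n BP polar
# decoder as Verilog text.  Module names, the "pe" submodule, and the
# pairing rule (node i paired with node i XOR half) are illustrative only.

def generate_bp_stage(n: int, stage: int) -> str:
    N = 2 ** n
    half = 2 ** (n - 1 - stage)             # pairing distance for this stage
    lines = [f"module bp_stage_{stage} #(parameter W = 8) ("]
    lines.append(f"  input  wire [{N}*W-1:0] llr_in,")
    lines.append(f"  output wire [{N}*W-1:0] llr_out);")
    for i in range(N):
        j = i ^ half                        # partner node in the factor graph
        lines.append(f"  pe #(.W(W)) pe_{i} (.a(llr_in[{i}*W +: W]),"
                     f" .b(llr_in[{j}*W +: W]), .y(llr_out[{i}*W +: W]));")
    lines.append("endmodule")
    return "\n".join(lines)

if __name__ == "__main__":
    n = 3                                   # N = 8 code, log2(N) = 3 stages
    for s in range(n):
        with open(f"bp_stage_{s}.v", "w") as fp:
            fp.write(generate_bp_stage(n, s) + "\n")
```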
Citations: 2
Generation of Efficient Self-adaptive Hardware Polar Decoders Using High-Level Synthesis
Pub Date : 2019-10-01 DOI: 10.1109/SiPS47522.2019.9020441
Yann Delomier, B. Gal, J. Crenne, C. Jégo
Recent advances in 5G digital communication standard implementations advocate the use of polar codes for the Enhanced Mobile Broadband (eMBB) control channels. However, in many cases, implementing an efficient hardware decoder within a short development time is very challenging: specialized knowledge is required to facilitate testing, rapid design iterations, and fast prototyping. In this paper, we present a model-based design methodology to generate efficient hardware SC polar decoders from high-level synthesis tools. The flexibility of the abstraction level is evaluated, and the generated decoder architectures are compared to competing approaches. It is shown that fine-tuning of the computation parallelism, bit width, pruning level, and working frequency enables high-throughput decoder designs with moderate hardware complexity. Decoding throughputs between 10 and 310 Mbit/s and hardware complexities between 1,000 and 21,000 LUTs are reported for the generated architectures.
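For context on the decoders being generated, the sketch below is a generic, executable reference of successive-cancellation (SC) decoding with the min-sum approximation in Python, the kind of algorithmic model one might refine into an HLS description. It is a textbook sketch, not the generated architecture from the paper.

```python
import numpy as np

def f_minsum(a, b):
    """Check-node (upper-branch) LLR update, min-sum approximation."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_node(a, b, x):
    """Variable-node (lower-branch) LLR update, given left partial sums x."""
    return b + (1.0 - 2.0 * x) * a

def sc_decode(llr, frozen):
    """Recursive SC decoding; returns (bit estimates u, partial sums x)."""
    N = len(llr)
    if N == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)   # frozen bits forced to 0
        return np.array([u]), np.array([u], dtype=float)
    a, b = llr[:N // 2], llr[N // 2:]
    u_l, x_l = sc_decode(f_minsum(a, b), frozen[:N // 2])
    u_r, x_r = sc_decode(g_node(a, b, x_l), frozen[N // 2:])
    return np.concatenate([u_l, u_r]), np.concatenate([(x_l + x_r) % 2, x_r])

# toy usage: N = 4, two frozen bits, noiseless all-zero codeword
llr = np.array([2.0, 1.5, 3.0, 2.5])              # positive LLR favours bit 0
frozen = np.array([True, True, False, False])
u_hat, _ = sc_decode(llr, frozen)
print(u_hat)                                      # -> [0 0 0 0]
```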
Citations: 1
RNN Models for Rain Detection
Pub Date : 2019-10-01 DOI: 10.1109/SiPS47522.2019.9020603
H. Habi, H. Messer
The task of rain detection, also known as wet-dry classification, using recurrent neural networks (RNNs) on data from commercial microwave links (CMLs) has recently gained attention. Whereas previous studies used long short-term memory (LSTM) units, here we use gated recurrent units (GRUs). We compare the wet-dry classification performance of LSTM- and GRU-based network architectures using data from operational cellular backhaul networks and meteorological measurements in Israel and Sweden, and draw conclusions based on datasets consisting of actual measurements taken over two years in two different geological and climatic regions.
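A minimal GRU-based wet-dry classifier can be sketched as follows in PyTorch; the number of input features, window length, and hidden size are assumptions for illustration, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class WetDryGRU(nn.Module):
    """GRU over a window of CML features, ending in a wet/dry probability."""
    def __init__(self, n_features: int = 2, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, time, n_features)
        out, _ = self.gru(x)                  # out: (batch, time, hidden)
        return torch.sigmoid(self.head(out[:, -1]))   # P(wet) at last step

# toy usage: 8 links, 24 time steps, 2 features (e.g. min/max attenuation)
model = WetDryGRU()
p_wet = model(torch.randn(8, 24, 2))          # shape (8, 1)
print(p_wet.shape)
```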
Citations: 4
Co-Design of Sparse Coding and Dictionary Learning for Real-Time Physiological Signals Monitoring
Pub Date : 2019-10-01 DOI: 10.1109/SiPS47522.2019.9020428
Kuan-Chun Chen, Ching-Yao Chou, A. Wu
Compressive sensing (CS) is a novel technique to reduce overall transmission power in wireless sensors. For physiological-signal telemonitoring with wearable devices, chip area and power efficiency need to be considered simultaneously. Many prior studies aim to develop algorithms for CS reconstruction chips with reconfigurable architectures. However, representative dictionaries are also important when these CS reconstruction chips are verified in real-time physiological-signal monitoring tasks: a more representative dictionary can not only enhance the reconstruction performance of these chips but also alleviate memory overhead. In this paper, we apply the concept of co-design between sparse coding algorithms and learned dictionaries, and we explore the representativeness and compatibility of each learned dictionary. In addition, the computational complexity of each reconstruction algorithm is evaluated through simulations. Our results show that the dictionaries trained with the fast iterative shrinkage-thresholding algorithm (FISTA) are the most representative in terms of reconstruction quality for physiological-signal monitoring. Moreover, FISTA reduces the computational time by more than 90% compared with other hardware-friendly reconstruction algorithms.
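FISTA itself is compact enough to sketch. The following is a standard FISTA solver for the sparse-coding problem with a fixed dictionary; the dictionary size, regularisation weight, and toy signal are assumptions, not the paper's training setup.

```python
import numpy as np

def fista(D, y, lam, n_iter=100):
    """FISTA for  min_x 0.5*||D x - y||^2 + lam*||x||_1  (sparse coding)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)
        w = z - grad / L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
        x, t = x_new, t_new
    return x

# toy usage: recover a sparse code for one signal frame
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))                  # dictionary (assumed given)
x_true = np.zeros(128); x_true[[3, 40, 90]] = [1.0, -0.5, 2.0]
y = D @ x_true + 0.01 * rng.standard_normal(64)
x_hat = fista(D, y, lam=0.1)
```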
Citations: 1
Hybrid Preconditioned CG Detection with Sequential Update for Massive MIMO Systems
Pub Date : 2019-10-01 DOI: 10.1109/SiPS47522.2019.9020319
Jing Zeng, Jun Lin, Zhongfeng Wang, Yun Chen
Massive multi-input multi-output (MIMO) is one of the key technologies for fifth-generation communication systems. The conjugate gradient (CG) algorithm approximates the minimum mean-square error (MMSE) solution in an iterative manner, which avoids full matrix inversion. Preconditioned CG (PCG) was presented to improve the robustness of the CG method. However, PCG still requires a sparse matrix inversion in preprocessing, and its performance is only comparable to MMSE. In this paper, a hybrid PCG algorithm (HPCG) with sequential update is proposed, offering superior performance and low complexity. By exploiting its characteristics, the preconditioning matrix is replaced by a diagonal matrix, which avoids matrix inversion and incomplete Cholesky factorization. Besides, to improve the bit-error performance, a sequential update strategy is employed for the estimated signals after PCG detection. For a MIMO system with 128 receive antennas, simulation results show that the proposed HPCG algorithm outperforms MMSE by 0.25 dB to 1.5 dB under different numbers of users. Based on channel-hardening theory, several signal vectors can be transmitted under the same channel condition. When 10 signal vectors are considered, the overall complexity of HPCG can be reduced by 3.9% to 56% compared to other CG-based algorithms.
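To show where a diagonal preconditioner enters, the sketch below runs a Jacobi (diagonal) preconditioned CG solve of the MMSE detection system for a 128-antenna toy setup. It is a generic preconditioned-CG detector under assumed parameters, not the HPCG algorithm with sequential update proposed in the paper.

```python
import numpy as np

def jacobi_pcg_detect(H, y, sigma2, n_iter=5):
    """Diagonally preconditioned CG solve of  (H^H H + sigma2 I) s = H^H y."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y
    M_inv = 1.0 / np.real(np.diag(A))        # Jacobi (diagonal) preconditioner
    s = np.zeros(H.shape[1], dtype=complex)
    r = b - A @ s
    z = M_inv * r
    p = z.copy()
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r.conj() @ z) / (p.conj() @ Ap)
        s = s + alpha * p
        r_new = r - alpha * Ap
        z_new = M_inv * r_new
        beta = (r_new.conj() @ z_new) / (r.conj() @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return s

# toy usage: 128 receive antennas, 8 single-antenna users, QPSK symbols
rng = np.random.default_rng(1)
H = (rng.standard_normal((128, 8)) + 1j * rng.standard_normal((128, 8))) / np.sqrt(2)
s_true = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), 8)
y = H @ s_true + 0.05 * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
s_hat = jacobi_pcg_detect(H, y, sigma2=0.005)
```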
Citations: 1
Joint Image Deblur and Poisson Denoising based on Adaptive Dictionary Learning
Pub Date : 2019-10-01 DOI: 10.1109/SiPS47522.2019.9020314
Xiangyang Zhang, Hongqing Liu, Zhen Luo, Yi Zhou
This paper describes a blind image reconstruction algorithm for blurred images under Poisson noise. To that aim, the group-sparse domain is explored to sparsely represent the image and the blur kernel, and the $\ell_1$-norm is used to enforce sparse solutions. A joint optimization framework is then developed to estimate the blur kernel matrix while removing Poisson noise. To solve the resulting optimization effectively, a two-step iteration scheme involving two sub-problems is proposed. For each sub-problem, an alternating direction method of multipliers (ADMM) algorithm is devised to estimate the blur kernel or to denoise. Experimental simulations demonstrate that the proposed algorithm is superior to other approaches in terms of restoration quality and performance metrics.
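As a pointer to the kind of sub-problem ADMM handles in such alternating schemes, the sketch below applies ADMM to an $\ell_1$-regularized least-squares deconvolution step with a known blur matrix and a Gaussian data term. It is a generic illustration under those simplifying assumptions, not the paper's joint blind-deblur/Poisson formulation.

```python
import numpy as np

def admm_l1(A, y, lam=0.1, rho=1.0, n_iter=50):
    """ADMM for  min_x 0.5*||A x - y||^2 + lam*||x||_1  (one sparse
    sub-problem of an alternating deblur scheme; blur matrix A is known)."""
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Q = np.linalg.inv(AtA + rho * np.eye(n))   # cached for repeated x-updates
    for _ in range(n_iter):
        x = Q @ (Aty + rho * (z - u))          # quadratic (data-fit) update
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                          # scaled dual-variable update
    return z

# toy usage: a small random "blur" operator and a sparse signal
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 100))
x_true = np.zeros(100); x_true[[10, 55]] = [2.0, -1.5]
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = admm_l1(A, y)
```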
Citations: 0
Memory Reduction through Experience Classification for Deep Reinforcement Learning with Prioritized Experience Replay
Pub Date : 2019-10-01 DOI: 10.1109/SiPS47522.2019.9020610
Kai-Huan Shen, P. Tsai
Prioritized experience replay has been widely used in many online reinforcement learning algorithms, providing high efficiency in exploiting past experiences. However, a large replay buffer consumes significant system storage. Thus, in this paper, a segmentation and classification scheme is proposed. The distribution of temporal-difference errors (TD errors) is first segmented, and each experience used for network training is classified according to its updated TD error. A swap mechanism for similar experiences is then implemented to change the lifetimes of experiences in the replay buffer. The proposed scheme is incorporated into the Deep Deterministic Policy Gradient (DDPG) algorithm, and the Inverted Pendulum and Inverted Double Pendulum tasks are used for verification. In our experiments, the proposed mechanism effectively removes buffer redundancy and further reduces the correlation of experiences in the replay buffer. Thus, better learning performance with a reduced memory size is achieved at the cost of additional computation of updated TD errors.
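To make the segment-and-swap idea concrete, here is a minimal, hypothetical replay buffer that bins experiences by the magnitude of their TD error and, once full, swaps a new experience with a randomly chosen one from the same bin. The bin edges and swap rule are illustrative assumptions, not the paper's exact classification scheme.

```python
import random

class ClassifiedReplayBuffer:
    """Bins experiences by |TD error|; when full, swaps within the same bin."""
    def __init__(self, capacity=1000, edges=(0.1, 1.0)):
        self.capacity = capacity
        self.edges = edges                  # thresholds separating TD-error classes
        self.bins = [[] for _ in range(len(edges) + 1)]

    def _class(self, td_error):
        e = abs(td_error)
        for i, t in enumerate(self.edges):
            if e < t:
                return i
        return len(self.edges)

    def __len__(self):
        return sum(len(b) for b in self.bins)

    def add(self, experience, td_error):
        b = self.bins[self._class(td_error)]
        if len(self) < self.capacity or not b:
            b.append(experience)            # buffer not yet full: keep everything
        else:
            b[random.randrange(len(b))] = experience   # swap a similar experience

    def sample(self, batch_size):
        pool = [e for b in self.bins for e in b]
        return random.sample(pool, min(batch_size, len(pool)))

# toy usage: (state, action, reward, next_state) tuples with fake TD errors
buf = ClassifiedReplayBuffer(capacity=4)
for k in range(10):
    buf.add(("s%d" % k, "a", 0.0, "s%d" % (k + 1)), td_error=0.05 * k)
batch = buf.sample(2)
```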
Citations: 1
SiPS 2019 Author Index
Pub Date : 2019-10-01 DOI: 10.1109/sips47522.2019.9020593
{"title":"SiPS 2019 Author Index","authors":"","doi":"10.1109/sips47522.2019.9020593","DOIUrl":"https://doi.org/10.1109/sips47522.2019.9020593","url":null,"abstract":"","PeriodicalId":256971,"journal":{"name":"2019 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127554218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ensemble Neural Network Method for Wind Speed Forecasting
Pub Date : 2019-10-01 DOI: 10.1109/SiPS47522.2019.9020410
Binbin Yong, Fei Qiao, Chen Wang, Jun Shen, Yongqiang Wei, Qingguo Zhou
Wind power generation has gradually developed into an important means of energy supply. Meanwhile, because electricity is difficult to store, wind power is strongly affected by the real-time wind speed in wind fields. In general, wind speed is nonlinear, irregular, and non-stationary, which makes accurate wind speed forecasting a difficult problem. Recent studies have shown that ensemble forecasting approaches combining different sub-models are an efficient way to address this problem. Therefore, in this article, two single models are ensembled for wind speed forecasting, and four data pre-processing hybrid models are combined using reliability weights. The proposed ensemble approaches are evaluated on real wind speed data from the Longdong area of the Loess Plateau in China from 2007 to 2015; the experimental results indicate that the ensemble approaches outperform the individual models and other hybrid models with different pre-processing methods.
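A reliability-weighted combination of sub-model forecasts can be written in a few lines. The inverse-validation-MSE weighting below is one common choice and an assumption here, not necessarily the reliability weights used in the paper.

```python
import numpy as np

def reliability_weights(val_mse):
    """Weight each sub-model by the inverse of its validation MSE, normalised."""
    inv = 1.0 / (np.asarray(val_mse) + 1e-12)
    return inv / inv.sum()

def ensemble_forecast(predictions, weights):
    """Combine sub-model forecasts (rows = models, columns = horizon steps)."""
    return weights @ np.asarray(predictions)

# toy usage with two hypothetical sub-models forecasting wind speed (m/s)
pred = [[5.1, 5.4, 6.0],
        [4.7, 5.0, 5.6]]
w = reliability_weights([0.8, 1.3])        # assumed validation MSEs
print(ensemble_forecast(pred, w))
```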
Citations: 2
A Data-Driven Approach to Vibrotactile Data Compression
Pub Date : 2019-10-01 DOI: 10.1109/SiPS47522.2019.9020534
Xun Liu, M. Dohler
The emerging Internet of Skills, which exchanges tactile and other sensorial data, significantly augments traditional multimedia. The increase in data scale and modalities calls for codecs dedicated to these sensorial data. In this paper, we propose a codec for compressing vibrotactile data in the spirit of Weber's law. Specifically, a companding function is applied to the vibrotactile data, so that the quantisation step at high amplitudes is larger than at low amplitudes. The curve of the companding function is optimised through a data-driven approach. To evaluate the performance of the vibrotactile codec in terms of human-perceived quality, rigorous subjective tests are conducted. The results demonstrate that 75% compression of vibrotactile data is achieved without perceivable degradation. More importantly, the computational complexity is much lower and the latency performance is superior compared with other vibrotactile codecs: the computational complexity of the proposed codec is about 1/20, and the time delay approximately 1/30, of that of previous codecs.
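To illustrate amplitude-dependent quantisation in the spirit of Weber's law, the sketch below companders a normalised vibrotactile trace with a fixed mu-law curve followed by a uniform quantiser. The paper learns its companding curve from data, so the curve, bit depth, and mu value here are illustrative assumptions.

```python
import numpy as np

def compand(x, mu=255.0):
    """Mu-law style companding: fine steps near zero, coarse at high amplitude."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def expand(y, mu=255.0):
    """Inverse of the companding curve."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def codec(x, bits=4, mu=255.0):
    """Compand, uniformly quantise to 2**bits levels, then expand back."""
    levels = 2 ** bits
    y = compand(x, mu)
    q = np.round((y + 1.0) / 2.0 * (levels - 1)) / (levels - 1) * 2.0 - 1.0
    return expand(q, mu)

# toy usage: a decaying 150 Hz vibration burst normalised to [-1, 1]
t = np.linspace(0.0, 1.0, 500)
x = 0.6 * np.sin(2 * np.pi * 150 * t) * np.exp(-3 * t)
x_hat = codec(x, bits=4)
print(np.max(np.abs(x - x_hat)))            # worst-case reconstruction error
```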
Citations: 5