2017 IEEE International Workshop on Signal Processing Systems (SiPS): latest publications

Dry fingerprint detection for multiple image resolutions using ridge features
Pub Date : 2017-10-01 DOI: 10.1109/SiPS.2017.8109985
Cheng-Jung Wu, C. Chiu
Dry and wet fingers lead to poor fingerprint image quality, which degrades fingerprint recognition and matching. Recognition methods based on ridge, valley, minutia, or pore features are all affected by skin condition. In this paper, we propose a novel dry fingerprint detection method that uses ridge features and works for images of different resolutions. Dry fingerprints have vague pores and discontinuous, fragmented ridges. Therefore, the features we adopt for detection are ridge continuity, ridge fragmentation, and the ridge/valley ratio. These features can be observed clearly at different image resolutions, so the proposed method works from 500 to 1200 dpi. We extract several ridge features and use a support vector machine to classify fingerprints into two groups, dry and normal. The NASIC database (1200 dpi) and FVC2002 DB1 (500 dpi) are used in our experiments; the SVM classification accuracies are 99.00% and 99.09%, respectively.
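A minimal illustrative sketch of the feature-plus-SVM pipeline the abstract describes, under stated assumptions: the three ridge features below are simplified stand-ins (the authors' exact feature definitions, preprocessing, and the NASIC/FVC2002 data are not reproduced), and the scikit-learn classifier and placeholder data are illustration only.

```python
import numpy as np
from sklearn.svm import SVC

def ridge_features(binary_img):
    """Toy stand-ins for ridge continuity, ridge fragmentation and ridge/valley ratio.

    binary_img: 2-D array, 1 = ridge pixel, 0 = valley pixel.
    """
    img = binary_img.astype(int)
    ridge_ratio = img.mean()                                 # share of ridge pixels
    transitions = np.abs(np.diff(img, axis=1)).sum(axis=1)   # ridge/valley breaks per row
    fragmentation = transitions.mean()
    continuity = 1.0 / (1.0 + fragmentation)                 # more breaks -> lower continuity
    return np.array([continuity, fragmentation, ridge_ratio])

# Hypothetical labelled patches: label 1 = dry, 0 = normal (placeholder data only).
rng = np.random.default_rng(0)
thresholds = np.linspace(0.3, 0.7, 40)
X = np.vstack([ridge_features(rng.random((64, 64)) > t) for t in thresholds])
y = (thresholds > 0.5).astype(int)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```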
Citations: 4
Reliable compressive sensing (CS)-based multi-user detection with power-based Zadoff-Chu sequence design
Pub Date : 2017-10-01 DOI: 10.1109/SiPS.2017.8110015
Chieh-Fang Teng, Ching-Chun Liao, Hung-Yi Cheng, A. Wu
Internet-of-Things (IoT) applications have grown rapidly in recent years. Deploying massive machine-type communication (mMTC) with scalability and reliability has become a challenging issue. The activity sparsity of mMTC devices makes it possible to handle detection efficiently with compressive sensing (CS)-based multi-user detection (CS-MUD). However, limited resources, such as the set of spreading sequences available for massive numbers of devices, make CS-MUD in mMTC prone to congestion and collisions. Non-orthogonal multiple access (NOMA) has therefore attracted attention for mMTC. Although NOMA can accommodate more devices, the non-orthogonality degrades performance. In this paper, we take advantage of NOMA to generate a larger set of sequences from Zadoff-Chu (ZC) sequences, reducing the bit-error rate (BER) by two orders of magnitude. Meanwhile, by considering the channel characteristics and received power of the devices, the devices are grouped to build a more reliable mMTC system that eliminates the effect of non-orthogonality and improves the BER by a factor of 1.7 with 128 devices. The proposed ZC sequence design with a power-based grouping scheme can thus greatly improve device scalability and the reliability of the detection process for uplink multi-user problems, providing a potential solution for mMTC scenarios.
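For context, a small sketch of the base Zadoff-Chu sequences the proposed design builds on; the root index and length below are arbitrary choices, and the paper's power-based grouping and CS-MUD stages are not modelled.

```python
import numpy as np

def zadoff_chu(root, length):
    """ZC sequence of odd length `length` with root index `root` coprime to `length`."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

zc = zadoff_chu(root=5, length=63)
print(np.allclose(np.abs(zc), 1.0))            # constant amplitude
# cyclic autocorrelation vanishes at every non-zero shift (computed via FFT)
corr = np.fft.ifft(np.fft.fft(zc) * np.conj(np.fft.fft(zc)))
print(np.max(np.abs(corr[1:])) < 1e-9)
```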
Citations: 3
Error analysis methods for the fixed-point implementation of linear systems
Pub Date : 2017-10-01 DOI: 10.1109/SiPS.2017.8109991
Thibault Hilaire, Anastasia Volkova
In this paper we propose a complete error analysis of a fixed-point implementation of any linear system described by a data-flow graph. The system is translated into a matrix-based internal representation that is used to determine the analytical errors-to-output relationship. The error induced by the finite-precision arithmetic (for each sum-of-products) of the implementation propagates through the system and perturbs the output. The output error is then analysed from three different points of view: a classical statistical approach (errors modeled as noise), a worst-case approach (errors modeled as intervals), and the probability density function. These three approaches allow the output error due to finite precision to be determined together with its probability of occurrence, giving the designer a complete output error analysis. Finally, our methodology is illustrated with numerical examples.
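A worked toy example, under assumed word-length parameters, of the two error views mentioned above for a single sum-of-products: a worst-case interval bound versus a statistical noise model; this is not the paper's matrix-based framework.

```python
import numpy as np

f = 12                      # fractional bits of the accumulator format (assumed)
n_products = 8              # number of products accumulated (assumed)
q = 2.0 ** (-f)             # quantization step

# Worst-case view: each rounding error lies in [-q/2, q/2], and the errors add up.
worst_case_bound = n_products * q / 2

# Statistical view: errors modelled as i.i.d. uniform noise of variance q^2/12.
noise_std = np.sqrt(n_products * q ** 2 / 12)

# Monte-Carlo check of the statistical model (errors drawn uniformly).
rng = np.random.default_rng(1)
err = rng.uniform(-q / 2, q / 2, size=(100_000, n_products)).sum(axis=1)
print(worst_case_bound, noise_std, np.abs(err).max(), err.std())
```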
Citations: 2
A reduction of circuit size of digital direct-driven speaker architecture using segmented pulse shaping technique
Pub Date : 2017-10-01 DOI: 10.1109/SiPS.2017.8109971
Shingo Noami, Satoshi Saikatsu, A. Yasuda
We propose a novel digital direct-driven speaker architecture that uses a segmented pulse shaping technique to reduce circuit size. The digital direct-driven speaker consists of a multi-bit delta-sigma modulator and a noise-shaping dynamic element matching circuit. Because the multi-bit outputs drive each of the driver circuits directly, the system can be driven at low voltage with high efficiency. However, the number of loudspeakers or voice coils and driver circuits must equal the number of quantization levels, so increasing the number of quantization levels is not straightforward. The segmented pulse shaping technique has been proposed to increase the number of quantization levels without increasing the number of output units, thereby improving the signal-to-noise ratio and reducing out-of-band noise. However, it increases the size of the internal signal processing circuit. Our proposed method reduces the size of the system by using a noise-shaping dynamic element matching circuit. Compared with the conventional method for a 33-level modulator and eight speakers, while maintaining the signal-to-noise ratio of the conventional method, the number of look-up tables is reduced by 27.2% and the circuit size by 22.7%.
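A minimal sketch of a first-order multi-bit delta-sigma modulator, the kind of front end such an architecture is built around; the 33-level quantizer matches the paper's example, but the segmented pulse shaping and dynamic element matching stages are not modelled here.

```python
import numpy as np

def delta_sigma_first_order(x, levels=33):
    """First-order multi-bit delta-sigma modulator; input x assumed in [-1, 1]."""
    step = 2.0 / (levels - 1)                     # quantizer step for `levels` output levels
    integ, y_prev = 0.0, 0.0
    y = np.empty_like(x)
    for n, sample in enumerate(x):
        integ += sample - y_prev                  # integrate the quantization-error feedback
        y_prev = np.clip(step * np.round(integ / step), -1.0, 1.0)  # multi-bit quantizer
        y[n] = y_prev
    return y

t = np.arange(4096) / 4096
out = delta_sigma_first_order(0.8 * np.sin(2 * np.pi * 13 * t))
print(out[:8])
```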
Citations: 0
On modifying the temporal modeling of HSMMs for pediatric heart sound segmentation
Pub Date : 2017-10-01 DOI: 10.1109/SiPS.2017.8110004
J. Oliveira, Theofrastos Mantadelis, F. Renna, P. Gomes, M. Coimbra
Heart sounds are difficult to interpret because a) they are composed of several different sounds, all contained in very tight time windows; b) they vary across physiognomies even when they show similar characteristics; and c) human ears are not naturally trained to recognize heart sounds. Computer-assisted decision systems may help, but they require robust signal processing algorithms. In this paper, we use a real-life dataset to compare the performance of a hidden Markov model and several hidden semi-Markov models that use the Poisson, Gaussian, and Gamma distributions, as well as a non-parametric probability mass function, to model the sojourn time. Using a subject-dependent approach, a model that uses the Poisson distribution as an approximation for the sojourn time is shown to outperform all the other models. This model was able to recreate the “true” state sequence with a positive predictability per state of 96%. Finally, we used a conditional distribution to compute the confidence of our classifications. By using the proposed confidence metric, we were able to identify wrong classifications and boost our system (on average) from ≈83% up to ≈90% positive predictability per sample.
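A small sketch of the Poisson sojourn-time modelling that the best-performing variant relies on; the state names and mean durations below are assumed for illustration only and are not the paper's fitted parameters.

```python
import numpy as np
from scipy.stats import poisson

# Assumed mean durations (in samples) for the four heart-sound states.
mean_duration = {"S1": 12, "systole": 30, "S2": 10, "diastole": 55}

durations = np.arange(1, 121)
# In an HSMM, the probability of staying d samples in state s before switching is
# sojourn_pmf[s][d-1], replacing the geometric duration law implied by a plain HMM.
sojourn_pmf = {s: poisson.pmf(durations, mu) for s, mu in mean_duration.items()}

print({s: p[:5].round(4).tolist() for s, p in sojourn_pmf.items()})
```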
Citations: 5
Advanced wireless digital baseband signal processing beyond 100 Gbit/s
Pub Date : 2017-10-01 DOI: 10.1109/SiPS.2017.8109974
Stefan Weithoffer, M. Herrmann, Claus Kestel, N. Wehn
The continuing trend towards higher data rates in wireless communication systems will, in addition to higher spectral efficiency and the lowest signal processing latencies, lead to throughput requirements for digital baseband signal processing beyond 100 Gbit/s, at least one order of magnitude higher than the tens of Gbit/s targeted in 5G standardization. At the same time, advances in silicon technology from shrinking feature sizes and improved performance parameters alone will not provide the necessary gains, especially in energy efficiency for wireless transceivers, which have tightly constrained power and energy budgets. In this paper, we highlight the challenges for wireless digital baseband signal processing beyond 100 Gbit/s and the limitations of today's architectures. Our focus lies on channel decoding and MIMO detection, which are major sources of complexity in digital baseband signal processing. We discuss techniques at the algorithmic and architectural level that aim to close this gap. For the first time, we show Turbo-code decoding techniques towards 100 Gbit/s and a complete MIMO receiver beyond 100 Gbit/s in 28 nm technology.
Citations: 9
Reduced complexity ADMM-based schedules for LP decoding of LDPC convolutional codes
Pub Date : 2017-10-01 DOI: 10.1109/SiPS.2017.8110012
Hayfa Ben Thameur, B. Gal, N. Khouja, F. Tlili, C. Jégo
The ADMM-based linear programming (LP) technique shows interesting error correction performance when decoding binary LDPC block codes. Nonetheless, its applicability to decoding LDPC convolutional codes (LDPC-CCs) has not yet been investigated. In this paper, a first flooding-based formulation of ADMM-LP decoding for LDPC-CCs is described. In addition, reduced-complexity decoding schedules that lessen the storage requirements and improve the convergence speed of an ADMM-LP-based LDPC-CC decoder, without significant loss in error correction performance, are proposed and assessed from algorithmic and computational/memory complexity perspectives.
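For reference, a sketch of the standard ADMM-LP decoding formulation from the block-code literature that such work adapts to LDPC convolutional codes; the LDPC-CC-specific flooding schedule and the reduced-complexity variants of the paper are not captured here, and the notation is the usual scaled-dual form.

\[
\min_{x\in[0,1]^n} \; \gamma^{\top}x \quad \text{s.t.} \quad P_j x \in \mathbb{PP}_{d_j}\ \ \text{for every check } j,
\]
\[
x \leftarrow \Pi_{[0,1]^n}\!\left(\tfrac{1}{d}\odot\Big(\textstyle\sum_j P_j^{\top}(z_j-u_j)-\gamma/\rho\Big)\right),\qquad
z_j \leftarrow \Pi_{\mathbb{PP}_{d_j}}(P_j x + u_j),\qquad
u_j \leftarrow u_j + P_j x - z_j,
\]
where \(\gamma\) holds the channel log-likelihood ratios, \(P_j\) selects the variables of check \(j\), \(d\) the variable-node degrees, \(\rho\) the penalty parameter, \(u_j\) the scaled dual variables, and \(\Pi_{\mathbb{PP}_{d_j}}\) the projection onto the parity polytope.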
Citations: 1
Model-based dynamic scheduling for multicore implementation of image processing systems
Pub Date : 2017-10-01 DOI: 10.1109/SiPS.2017.8110003
Jiahao Wu, Timothy Blattner, Walid Keyrouz, S. Bhattacharyya
In this paper, we present a new software tool, called the HTGS Model-based Engine (HMBE), for the design and implementation of multicore signal processing applications. HMBE provides capabilities complementary to HTGS (Hybrid Task Graph Scheduler), a recently introduced software tool for implementing scalable workflows for high-performance computing applications. HMBE integrates the advanced design optimization techniques provided in HTGS with model-based approaches founded on dataflow principles. This integration contributes to (a) making the application of HTGS more systematic and less time-consuming, (b) incorporating additional dataflow-based optimization capabilities alongside HTGS optimizations, and (c) automating significant parts of the HTGS-based design process. In this paper, we present HMBE with an emphasis on the novel dynamic scheduling techniques developed as part of the tool. We demonstrate the utility of HMBE through a case study involving an image stitching application for large-scale microscopy images.
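To make the task-graph idea concrete, a hand-rolled sketch of a two-stage pipeline over image tiles using only threads and queues; this is not the HTGS/HMBE API, and the stage bodies are placeholders for real tile processing and stitching.

```python
import queue
import threading

tiles_in, tiles_out = queue.Queue(), queue.Queue()
STOP = object()

def stage_process():
    # Stage 1: "process" each image tile (placeholder: sum its pixels).
    while True:
        tile = tiles_in.get()
        if tile is STOP:
            tiles_out.put(STOP)
            break
        tiles_out.put(sum(sum(row) for row in tile))

def stage_collect(results):
    # Stage 2: gather processed tiles (stand-in for stitching / writing out).
    while True:
        item = tiles_out.get()
        if item is STOP:
            break
        results.append(item)

results = []
workers = [threading.Thread(target=stage_process),
           threading.Thread(target=stage_collect, args=(results,))]
for w in workers:
    w.start()
for tile_id in range(4):                         # feed four hypothetical tiles
    tiles_in.put([[tile_id] * 8 for _ in range(8)])
tiles_in.put(STOP)
for w in workers:
    w.join()
print(results)
```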
Citations: 7
Extended-forward architecture for simplified check node processing in NB-LDPC decoders
Pub Date : 2017-10-01 DOI: 10.1109/SiPS.2017.8109992
Cédric Marchand, E. Boutillon, Hassan Harb, L. Conde-Canencia, A. Ghouwayel
This paper focuses on low-complexity architectures for check node processing in non-binary LDPC decoders. Specifically, we focus on Extended Min-Sum decoders and consider the state-of-the-art Forward-Backward and Syndrome-Based approaches. We recall the presorting technique that allows a significant complexity reduction at the Elementary Check Node level. The Extended-Forward architecture is then presented as an original new architecture for efficient syndrome calculation. These advances lead to a new check node processing architecture with reduced area. As an example, we provide implementation results over GF(64) at code rate 5/6, showing a complexity reduction by a factor of up to 2.6.
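A minimal sketch of the elementary check node (ECN) operation that such architectures decompose the check node into, written here in its full, non-truncated min-sum form; an EMS decoder would additionally sort the messages and keep only the best few values, which is not shown.

```python
import numpy as np

def elementary_check_node(m1, m2):
    """m1, m2: length-q arrays of LLR costs indexed by GF(2^m) symbols (0 = most likely)."""
    q = len(m1)
    out = np.full(q, np.inf)
    for a in range(q):
        for b in range(q):
            c = a ^ b                    # GF(2^m) addition of the two symbols is XOR
            out[c] = min(out[c], m1[a] + m2[b])
    return out

rng = np.random.default_rng(2)
q = 64                                    # GF(64), as in the paper's example
msg1, msg2 = rng.random(q), rng.random(q)
print(elementary_check_node(msg1, msg2)[:8])
```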
Citations: 2
Low-latency software LDPC decoders for x86 multi-core devices
Pub Date : 2017-10-01 DOI: 10.1109/SiPS.2017.8110001
B. Gal, C. Jégo
LDPC codes are a family of error-correcting codes used in most modern digital communication standards, including the future 3GPP 5G standard. Thanks to their high processing power and parallelization capabilities, prevailing multi-core and many-core devices enable real-time implementations of digital communication systems that were previously implemented on dedicated hardware targets. Through massive inter-frame decoding parallelization, current software LDPC decoder throughputs range from hundreds of Mbit/s up to several Gbit/s. However, inter-frame parallelization incurs latency penalties, while in future 5G wireless communication systems latency should be reduced as far as possible. To this end, a novel parallelization approach for LDPC decoding on multi-core processor devices is proposed in this article. It reduces the processing latency down to a few microseconds, as highlighted by x86 multi-core experiments.
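For context, a scalar sketch of the flooding min-sum check-node kernel that such software decoders parallelize across cores and SIMD lanes; the normalization/offset corrections and the actual intra-frame work partitioning of the paper are not reproduced here.

```python
import numpy as np

def check_node_update(v2c):
    """v2c: variable-to-check LLRs of one check; returns extrinsic check-to-variable LLRs."""
    signs = np.sign(v2c)
    signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mags = np.abs(v2c)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]        # two smallest magnitudes
    # Each output excludes its own input: the minimum edge gets the second minimum.
    out_mag = np.where(np.arange(len(v2c)) == order[0], min2, min1)
    return total_sign * signs * out_mag                # sign product excluding own sign

print(check_node_update(np.array([-1.2, 0.4, 2.5, -0.7])))
```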
Citations: 9