Dry fingerprint detection for multiple image resolutions using ridge features
Pub Date: 2017-10-01, DOI: 10.1109/SiPS.2017.8109985
Cheng-Jung Wu, C. Chiu
Dry and wet fingers lead to poor fingerprint image quality, which in turn degrades fingerprint recognition and matching. Recognition methods based on ridge, valley, minutiae, or pore features are all affected by skin condition. In this paper, we propose a novel dry fingerprint detection method that uses ridge features and works across different image resolutions. Dry fingerprints exhibit faint pores and discontinuous, fragmented ridges, so the features we adopt for detection are ridge continuity, ridge fragmentation, and the ridge/valley ratio. These features remain clearly observable at different image resolutions, so the proposed method works from 500 to 1200 dpi. We compute several ridge features and use a support vector machine (SVM) to classify fingerprints into two groups, dry and normal. The NASIC database (1200 dpi) and FVC2002 DB1 (500 dpi) are used in our experiments; the SVM classification accuracies are 99.00% and 99.09%, respectively.
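A minimal sketch of how such a pipeline could look, assuming a binarized ridge map as input. The three features mirror the ones named in the abstract, but their exact definitions here are illustrative stand-ins (the abstract does not give the formulas), and the training data is synthetic:

```python
import numpy as np
from sklearn.svm import SVC

def ridge_features(binary_img):
    """binary_img: 2-D 0/1 array, 1 = ridge pixel."""
    img = binary_img.astype(np.int8)
    ridge_valley_ratio = img.mean()
    # continuity proxy: fraction of ridge pixels that continue to the right
    continuity = (img[:, :-1] * img[:, 1:]).sum() / max(img.sum(), 1)
    # fragmentation proxy: ridge/valley transitions per pixel along rows
    fragmentation = np.abs(np.diff(img, axis=1)).mean()
    return [ridge_valley_ratio, continuity, fragmentation]

rng = np.random.default_rng(0)                  # synthetic stand-in images
dry = [ridge_features(rng.random((64, 64)) < 0.3) for _ in range(50)]
normal = [ridge_features(rng.random((64, 64)) < 0.5) for _ in range(50)]
clf = SVC(kernel="rbf").fit(dry + normal, [1] * 50 + [0] * 50)
```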
{"title":"Dry fingerprint detection for multiple image resolutions using ridge features","authors":"Cheng-Jung Wu, C. Chiu","doi":"10.1109/SiPS.2017.8109985","DOIUrl":"https://doi.org/10.1109/SiPS.2017.8109985","url":null,"abstract":"Dry and wet fingers lead to poor fingerprint quality, which means that it has impact for fingerprint recognition and matching. Recognition methods that are based on the feature of ridge, valley, minutiae or pore are affected by skin conditions. In this paper, we propose a novel dry fingerprint detection method for images with different resolutions using ridge features. The dry fingerprints have vague pores and discontinuous and fragmented ridges. Therefore, the features that we adopt for detection are ridge continuity, ridge fragmentation and ridge/valley ratio. These features can be observed clearly under different image resolutions, so our proposed method can work on 500∼1200 dpi. We propose several ridge features and use the support vector machine to classify into two groups, dry and normal. The NASIC database (1200dpi) and FVC2002 DB1 (500dpi) are used in our experiments, the SVM classification accuracy are 99.00%, and 99.09% relatively.","PeriodicalId":251688,"journal":{"name":"2017 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"190 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114211995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliable compressive sensing (CS)-based multi-user detection with power-based Zadoff-Chu sequence design
Pub Date: 2017-10-01, DOI: 10.1109/SiPS.2017.8110015
Chieh-Fang Teng, Ching-Chun Liao, Hung-Yi Cheng, A. Wu
Internet-of-Things (IoT) applications have grown rapidly in recent years, and deploying massive machine-type communication (mMTC) with both scalability and reliability has become a challenging issue. The activity sparsity of mMTC devices leaves room to handle detection efficiently with compressive sensing (CS)-based multi-user detection (CS-MUD). However, limited resources, such as the number of spreading sequences available for massive numbers of devices, leave CS-MUD in mMTC prone to congestion and collisions. Non-orthogonal multiple access (NOMA) has therefore attracted attention for mMTC. Although NOMA can accommodate more devices, its non-orthogonality degrades performance. In this paper, we take advantage of NOMA to generate a larger set of spreading sequences from Zadoff-Chu (ZC) sequences, reducing the bit-error rate (BER) by two orders of magnitude. In addition, by considering the channel characteristics and received power of the devices, the devices are grouped to build a more reliable mMTC system, which suppresses the effect of non-orthogonality and improves the BER by a factor of 1.7 with 128 devices. The proposed ZC sequence design with a power-based grouping scheme can thus greatly improve device scalability and the reliability of the detection process for uplink multi-user problems, providing a potential solution for mMTC scenarios.
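The ZC construction itself is standard; a small sketch with an example length and root pair (the paper's actual sequence parameters and grouping rules are not specified here) shows why distinct roots enlarge the sequence pool while keeping cross-correlation low:

```python
import numpy as np

def zadoff_chu(u, N):
    """Root-u ZC sequence of odd length N, with u coprime to N."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N = 63                                    # example length, not the paper's
s1, s2 = zadoff_chu(1, N), zadoff_chu(2, N)
# distinct coprime roots: normalized cross-correlation magnitude = 1/sqrt(N)
print(np.abs(np.vdot(s1, s2)) / N)        # ~0.126 = 1/sqrt(63)
```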
{"title":"Reliable compressive sensing (CS)-based multi-user detection with power-based Zadoff-Chu sequence design","authors":"Chieh-Fang Teng, Ching-Chun Liao, Hung-Yi Cheng, A. Wu","doi":"10.1109/SiPS.2017.8110015","DOIUrl":"https://doi.org/10.1109/SiPS.2017.8110015","url":null,"abstract":"Internet-of-Things (IoT) applications grew rapidly in recent years. Deployment of massive machine-type communication (mMTC) with scalability and reliability becomes a challenging issue. The feature of activity sparsity in mMTC devices makes room for us to efficiently handle detection by compressive sensing (CS)-based multi-user detection (CS-MUD). However, the limited resource, such as the design of spreading sequences for massive devices, makes CS-MUD in mMTC congested and collided. Thus, non-orthogonal multiple access (NOMA) attracts attention in mMTC. Despite NOMA can accommodate more devices, the non-orthogonality makes the performance degradation. In this paper, we take advantage of NOMA to generate more set of sequences by Zadoff-Chu (ZC) sequence which the bit-error-rate (BER) is reduced by two orders. Meanwhile, by considering the channel characteristic and received power of devices, these devices are grouped to make a more reliable mMTC system which eliminates the effect of non-orthogonality and improves the BER by 1.7 times with 128 devices. Thus, the proposed ZC sequence design with power-based grouping scheme can greatly improve the device scalability and reliability of the detection process for uplink multi-user problems that provides a potential solution in mMTC scenarios.","PeriodicalId":251688,"journal":{"name":"2017 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116262966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Error analysis methods for the fixed-point implementation of linear systems
Pub Date: 2017-10-01, DOI: 10.1109/SiPS.2017.8109991
Thibault Hilaire, Anastasia Volkova
In this paper we propose a complete error analysis of fixed-point implementations of any linear system described by a data-flow graph. The system is translated to a matrix-based internal representation that is used to determine the analytical errors-to-output relationship. The error induced by the finite-precision arithmetic of the implementation (one error source per sum-of-products) propagates through the system and perturbs the output. The output error is then analysed from three different points of view: the classical statistical approach (errors modeled as noise), the worst-case approach (errors modeled as intervals), and the probability density function. Together, these three approaches determine the output error due to finite precision along with its probability of occurrence, and give the designer a complete output error analysis. Finally, our methodology is illustrated with numerical examples.
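As a toy illustration of the first two views, assume a single rounding-error source e[n] (uniform over one LSB) injected into a first-order IIR accumulator; the statistical bound scales with the L2 norm of the error-to-output impulse response, the worst-case bound with its L1 norm. The filter and word length below are arbitrary choices, not the paper's benchmark:

```python
import numpy as np
from scipy.signal import dlti, dimpulse

sys = dlti([1.0, 0.0], [1.0, -0.9], dt=1)   # h[k] = 0.9**k, error to output
_, y = dimpulse(sys, n=500)
h = np.squeeze(y)

lsb = 2.0 ** -15                            # assumed 16-bit fractional format
var_e = lsb ** 2 / 12                       # statistical view: error as noise
print("output noise std :", np.sqrt(var_e * np.sum(h ** 2)))
print("worst-case bound :", (lsb / 2) * np.sum(np.abs(h)))  # interval view
# The PDF view would convolve the per-source error densities through h.
```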
{"title":"Error analysis methods for the fixed-point implementation of linear systems","authors":"Thibault Hilaire, Anastasia Volkova","doi":"10.1109/SiPS.2017.8109991","DOIUrl":"https://doi.org/10.1109/SiPS.2017.8109991","url":null,"abstract":"In this paper we propose to perform a complete error analysis of a fixed-point implementation of any linear system described by data-flow graph. The system is translated to a matrix-based internal representation that is used to determine the analytical errors-to-output relationship. The error induced by the finite precision arithmetic (for each sum-of-product) of the implementation propagates through the system and perturbs the output. The output error is then analysed with three different point of view: classical statistical approach (errors modeled as noises), worst-case approach (errors modeled as intervals) and probability density function. These three approaches allow determining the output error due to the finite precision with respect to its probability to occur and give the designer a complete output error analysis. Finally, our methodology is illustrated with numerical examples.","PeriodicalId":251688,"journal":{"name":"2017 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"307 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133115975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A reduction of circuit size of digital direct-driven speaker architecture using segmented pulse shaping technique
Pub Date: 2017-10-01, DOI: 10.1109/SiPS.2017.8109971
Shingo Noami, Satoshi Saikatsu, A. Yasuda
We propose a novel digital direct-driven speaker architecture that uses a segmented pulse shaping technique to reduce circuit size. The digital direct-driven speaker consists of a multi-bit delta-sigma modulator and a noise-shaping dynamic element matching circuit. Because the multi-bit outputs drive each of the driver circuits directly, the system can be driven at low voltage with high efficiency. However, the number of loudspeakers (or voice coils) and driver circuits must equal the number of quantization levels, so increasing the number of quantization levels is not straightforward. The segmented pulse shaping technique has been proposed to increase the number of quantization levels without increasing the number of output units, thereby improving the signal-to-noise ratio and reducing out-of-band noise; however, it increases the size of the internal signal processing circuitry. Our proposed method reduces the size of the system using a noise-shaping dynamic element matching circuit. Comparing the proposed method with the conventional method for a 33-level modulator and eight speakers, while maintaining the signal-to-noise ratio of the conventional method, the number of look-up tables is reduced by 27.2% and the circuit size by 22.7%.
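For orientation, a bare-bones model of the multi-bit delta-sigma front end such a system builds on (first-order loop for brevity; the paper's modulator order, the DEM stage, and the segmented pulse shaping itself are not reproduced here):

```python
import numpy as np

def first_order_dsm(x, levels=33):
    """x: samples in [-1, 1]; returns the multi-level quantized output."""
    step = 2.0 / (levels - 1)
    integ, prev, out = 0.0, 0.0, []
    for s in x:
        integ += s - prev                 # integrate input minus feedback
        prev = np.clip(np.round(integ / step) * step, -1.0, 1.0)
        out.append(prev)                  # each level maps to one driver unit
    return np.array(out)

y = first_order_dsm(0.5 * np.sin(2 * np.pi * np.arange(4096) / 256))
```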
{"title":"A reduction of circuit size of digital direct-driven speaker architecture using segmented pulse shaping technique","authors":"Shingo Noami, Satoshi Saikatsu, A. Yasuda","doi":"10.1109/SiPS.2017.8109971","DOIUrl":"https://doi.org/10.1109/SiPS.2017.8109971","url":null,"abstract":"We propose a novel digital direct-driven speaker architecture using a segmented pulse shaping technique to reduce circuit size. The digital direct-driven speaker consists of a multi-bit delta-sigma modulator and a noise-shaping dynamic element matching circuit. Because the multi-bit outputs drive each of the driver circuits directly, the system can be driven at low voltage with high efficiency. However, it is necessary for the number of loudspeakers or voice coils and driver circuits to be same as the number of quantization levels. Therefore, increasing the number of quantization levels is not a straightforward process. The segmented pulse shaping technique has been proposed to increase the number of quantization levels without increasing the number of output units, thereby improving the signal-to-noise ratio and reducing out-of-band noise. However, the internal signal processing circuit size is increased. Our proposed method reduces the size of the system using a noise-shaping dynamic element matching circuit. Comparing the proposed method with the conventional method for a 33-level modulator and eight speakers, and maintaining the signal-to-noise ratio of the conventional method, the number of look-up table is reduced by 27.2% and the circuit size is reduced by 22.7%.","PeriodicalId":251688,"journal":{"name":"2017 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129322858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On modifying the temporal modeling of HSMMs for pediatric heart sound segmentation
Pub Date: 2017-10-01, DOI: 10.1109/SiPS.2017.8110004
J. Oliveira, Theofrastos Mantadelis, F. Renna, P. Gomes, M. Coimbra
Heart sounds are difficult to interpret because (a) they are composed of several different sounds, all contained in very tight time windows; (b) they vary with physiognomy even when they show similar characteristics; and (c) human ears are not naturally trained to recognize heart sounds. Computer-assisted decision systems may help, but they require robust signal processing algorithms. In this paper, we use a real-life dataset to compare the performance of a hidden Markov model and several hidden semi-Markov models that use the Poisson, Gaussian, and Gamma distributions, as well as a non-parametric probability mass function, to model the sojourn time. Using a subject-dependent approach, a model that uses the Poisson distribution as an approximation for the sojourn time is shown to outperform all other models. This model was able to recreate the "true" state sequence with a positive predictability per state of 96%. Finally, we used a conditional distribution to compute the confidence of our classifications. Using the proposed confidence metric, we were able to identify wrong classifications and boost our system (on average) from ≈83% up to ≈90% positive predictability per sample.
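As a small sketch of the duration modeling being compared, this is how a truncated Poisson sojourn distribution for one HSMM state could be tabulated; the mean length and truncation horizon are hypothetical, and in practice the rate would be fit per state from annotated segment lengths:

```python
import numpy as np
from scipy.stats import poisson

def sojourn_pmf(lam, max_d=100):
    """P(a state lasts d frames), d = 1..max_d, renormalized after truncation."""
    d = np.arange(1, max_d + 1)
    p = poisson.pmf(d, mu=lam)
    return p / p.sum()

print(sojourn_pmf(12.0)[:5])   # e.g. a state averaging ~12 frames (made up)
```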
{"title":"On modifying the temporal modeling of HSMMs for pediatric heart sound segmentation","authors":"J. Oliveira, Theofrastos Mantadelis, F. Renna, P. Gomes, M. Coimbra","doi":"10.1109/SiPS.2017.8110004","DOIUrl":"https://doi.org/10.1109/SiPS.2017.8110004","url":null,"abstract":"Heart sounds are difficult to interpret because a) they are composed by several different sounds, all contained in very tight time windows; b) they vary from physiognomy even if the show similar characteristics; c) human ears are not naturally trained to recognize heart sounds. Computer assisted decision systems may help but they require robust signal processing algorithms. In this paper, we use a real life dataset in order to compare the performance of a hidden Markov model and several hidden semi Markov models that used the Poisson, Gaussian, Gamma distributions, as well as a non-parametric probability mass function to model the sojourn time. Using a subject dependent approach, a model that uses the Poisson distribution as an approximation for the sojourn time is shown to outperform all other models. This model was able to recreate the “true” state sequence with a positive predictability per state of 96%. Finally, we used a conditional distribution in order to compute the confidence of our classifications. By using the proposed confidence metric, we were able to identify wrong classifications and boost our system (in average) from an ≈ 83% up to ≈90% of positive predictability per sample.","PeriodicalId":251688,"journal":{"name":"2017 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127905094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advanced wireless digital baseband signal processing beyond 100 Gbit/s
Pub Date: 2017-10-01, DOI: 10.1109/SiPS.2017.8109974
Stefan Weithoffer, M. Herrmann, Claus Kestel, N. Wehn
The continuing trend towards higher data rates in wireless communication systems will, in addition to higher spectral efficiency and the lowest signal processing latencies, lead to throughput requirements for digital baseband signal processing beyond 100 Gbit/s, at least one order of magnitude higher than the tens of Gbit/s targeted in 5G standardization. At the same time, advances in silicon technology from shrinking feature sizes and improved performance parameters alone will not provide the necessary gains, especially in energy efficiency for wireless transceivers, which have tightly constrained power and energy budgets. In this paper, we highlight the challenges for wireless digital baseband signal processing beyond 100 Gbit/s and the limitations of today's architectures. Our focus lies on channel decoding and MIMO detection, which are major sources of complexity in digital baseband signal processing. We discuss techniques at the algorithmic and architectural levels that aim to close this gap. For the first time, we show Turbo-code decoding techniques approaching 100 Gbit/s and a complete MIMO receiver beyond 100 Gbit/s in 28 nm technology.
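A back-of-the-envelope calculation makes the gap concrete; the clock rate and iteration count below are assumptions for illustration, not figures from the paper:

```python
# Illustrative numbers only: why 100 Gbit/s forces architectural parallelism.
target_bps = 100e9                     # 100 Gbit/s baseband target
clock_hz = 800e6                       # assumed achievable 28 nm clock
bits_per_cycle = target_bps / clock_hz
print(f"{bits_per_cycle:.0f} bits must leave the decoder every clock cycle")
# -> 125 bits/cycle; an iterative decoder touching each bit ~10 times needs
#    on the order of 1000 bit-operations per cycle, hence massive unrolling.
```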
{"title":"Advanced wireless digital baseband signal processing beyond 100 Gbit/s","authors":"Stefan Weithoffer, M. Herrmann, Claus Kestel, N. Wehn","doi":"10.1109/SiPS.2017.8109974","DOIUrl":"https://doi.org/10.1109/SiPS.2017.8109974","url":null,"abstract":"The continuing trend towards higher data rates in wireless communication systems will, in addition to a higher spectral efficiency and lowest signal processing latencies, lead to throughput requirements for the digital baseband signal processing beyond 100 Gbit/s, which is at least one order of magnitude higher than the tens of Gbit/s targeted in the 5G standardization. At the same time, advances in silicon technology due to shrinking feature sizes and increased performance parameters alone won't provide the necessary gain, especially in energy efficiency for wireless transceivers, which have tightly constrained power and energy budgets. In this paper, we highlight the challenges for wireless digital baseband signal processing beyond 100 Gbit/s and the limitations of today's architectures. Our focus lies on the channel decoding and MIMO detection, which are major sources of complexity in digital baseband signal processing. We discuss techniques on algorithmic and architectural level, which aim to close this gap. For the first time we show Turbo-Code decoding techniques towards 100 Gbit/s and a complete MIMO receiver beyond 100 Gbit/s in 28 nm technology.","PeriodicalId":251688,"journal":{"name":"2017 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121098007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reduced complexity ADMM-based schedules for LP decoding of LDPC convolutional codes
Pub Date: 2017-10-01, DOI: 10.1109/SiPS.2017.8110012
Hayfa Ben Thameur, B. Gal, N. Khouja, F. Tlili, C. Jégo
The ADMM-based linear programming (LP) technique shows interesting error-correction performance when decoding binary LDPC block codes. Nonetheless, its applicability to decoding LDPC convolutional codes (LDPC-CCs) has not yet been investigated. In this paper, a first flooding-based formulation of ADMM-LP decoding for LDPC-CCs is described. In addition, reduced-complexity decoding schedules that lessen the storage requirements and improve the convergence speed of an ADMM-LP based LDPC-CC decoder, without significant loss in error-correction performance, are proposed and assessed from algorithmic and computational/memory-complexity perspectives.
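The abstract does not detail the update equations, but the computational core of any ADMM-LP decoder is the per-check Euclidean projection onto the parity polytope. A sketch of one known approach (a cut-search projection in the style of Zhang and Siegel), shown here for reference rather than as the authors' implementation:

```python
import numpy as np

def project_parity_polytope(v):
    """Euclidean projection of v (length d) onto the even-parity polytope."""
    z = np.clip(v, 0.0, 1.0)
    theta = (v > 0.5).astype(int)
    if theta.sum() % 2 == 0:                 # violated facets have odd |S|
        i = np.argmin(np.abs(v - 0.5))       # flip the least certain entry
        theta[i] ^= 1
    a = np.where(theta == 1, 1.0, -1.0)      # facet: a @ t <= |S| - 1
    rhs = theta.sum() - 1.0
    if a @ z <= rhs:                         # box projection already feasible
        return z
    # Project onto {t in [0,1]^d : a @ t = rhs} by bisecting the multiplier
    # beta: a @ clip(v - beta*a, 0, 1) is continuous, nonincreasing in beta.
    lo, hi = 0.0, np.abs(v).max() + 1.0
    for _ in range(50):
        beta = 0.5 * (lo + hi)
        if a @ np.clip(v - beta * a, 0.0, 1.0) > rhs:
            lo = beta
        else:
            hi = beta
    return np.clip(v - hi * a, 0.0, 1.0)
```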
{"title":"Reduced complexity ADMM-based schedules for LP decoding of LDPC convolutional codes","authors":"Hayfa Ben Thameur, B. Gal, N. Khouja, F. Tlili, C. Jégo","doi":"10.1109/SiPS.2017.8110012","DOIUrl":"https://doi.org/10.1109/SiPS.2017.8110012","url":null,"abstract":"The ADMM based linear programming (LP) technique shows interesting error correction performance when decoding binary LDPC block codes. Nonetheless, it's applicability to decode LDPC convolutional codes (LDPC-CC) has not been yet investigated. In this paper, a first flooding based formulation of the ADMM-LP for decoding LDPC-CCs is described. In addition, reduced complexity decoding schedules to lessen the storage requirements and improve the convergence speed of an ADMM-LP based LDPC-CC decoder without significant loss in error correction performances are proposed and assessed from an algorithmic and computational/memory complexity perspectives.","PeriodicalId":251688,"journal":{"name":"2017 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"72 2-3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121110748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model-based dynamic scheduling for multicore implementation of image processing systems
Pub Date: 2017-10-01, DOI: 10.1109/SiPS.2017.8110003
Jiahao Wu, Timothy Blattner, Walid Keyrouz, S. Bhattacharyya
In this paper, we present a new software tool, called the HTGS Model-based Engine (HMBE), for the design and implementation of multicore signal processing applications. HMBE provides capabilities complementary to HTGS (the Hybrid Task Graph Scheduler), a recently introduced software tool for implementing scalable workflows for high-performance computing applications. HMBE integrates the advanced design optimization techniques provided by HTGS with model-based approaches founded on dataflow principles. This integration contributes to (a) making the application of HTGS more systematic and less time-consuming, (b) combining additional dataflow-based optimization capabilities with HTGS optimizations, and (c) automating significant parts of the HTGS-based design process. We present HMBE with an emphasis on the novel dynamic scheduling techniques developed as part of the tool, and demonstrate its utility through a case study involving an image stitching application for large-scale microscopy images.
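HMBE's actual API is not shown in the abstract; purely as a hypothetical miniature of the dataflow-firing idea it builds on (a task runs once its predecessors finish), with made-up task names standing in for one stage of a stitching pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

def run_dataflow(tasks, deps, order, workers=4):
    """tasks: name -> callable; deps: name -> prerequisite names;
    order: any topological order of the task names."""
    futures = {}
    def run(name):
        for d in deps.get(name, []):
            futures[d].result()          # block until this input is ready
        return tasks[name]()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # submitting in topological order keeps the FIFO pool deadlock-free
        for name in order:
            futures[name] = pool.submit(run, name)
        return {n: f.result() for n, f in futures.items()}

results = run_dataflow(
    tasks={"read": lambda: "tile", "fft": lambda: "spectrum",
           "stats": lambda: "histogram", "write": lambda: "done"},
    deps={"fft": ["read"], "stats": ["read"], "write": ["fft", "stats"]},
    order=["read", "fft", "stats", "write"])
```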
{"title":"Model-based dynamic scheduling for multicore implementation of image processing systems","authors":"Jiahao Wu, Timothy Blattner, Walid Keyrouz, S. Bhattacharyya","doi":"10.1109/SiPS.2017.8110003","DOIUrl":"https://doi.org/10.1109/SiPS.2017.8110003","url":null,"abstract":"In this paper, we present a new software tool, called HTGS Model-based Engine (HMBE), for the design and implementation of multicore signal processing applications. HMBE provides complementary capabilities to HTGS (Hybrid Task Graph Scheduler), which is a recently-introduced software tool for implementing scalable workflows for high performance computing applications. HMBE integrates advanced design optimization techniques provided in HTGS with model-based approaches that are founded on dataflow principles. Such integration contributes to (a) making the application of HTGS more systematic and less time consuming, (b) incorporating additional dataflow-based optimization capabilities with HTGS optimizations, and (c) automating significant parts of the HTGS-based design process. In this paper, we present HMBE with an emphasis on novel dynamic scheduling techniques that are developed as part of the tool. We demonstrate the utility of HMBE through a case study involving an image stitching application for large scale microscopy images.","PeriodicalId":251688,"journal":{"name":"2017 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124567015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extended-forward architecture for simplified check node processing in NB-LDPC decoders
Pub Date: 2017-10-01, DOI: 10.1109/SiPS.2017.8109992
Cédric Marchand, E. Boutillon, Hassan Harb, L. Conde-Canencia, A. Ghouwayel
This paper focuses on low-complexity architectures for check node processing in non-binary LDPC decoders. Specifically, we focus on Extended Min-Sum decoders and consider the state-of-the-art Forward-Backward and Syndrome-Based approaches. We recall the presorting technique, which allows a significant complexity reduction at the Elementary Check Node level. The Extended-Forward architecture is then presented as an original new architecture for efficient syndrome calculation. These advances lead to a new check node processing architecture with reduced area. As an example, we provide implementation results over GF(64) at code rate 5/6 showing a complexity reduction by a factor of up to 2.6.
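For intuition, the symbol arithmetic behind a syndrome-based check node over GF(2^p), where field addition is bitwise XOR: the check's aggregate syndrome is formed once, and each output excludes its own input by XOR-ing it back out. Real EMS check nodes also manage sorted candidate/LLR lists and the paper's Extended-Forward datapath, which this sketch omits:

```python
from functools import reduce

def syndrome_outputs(hard_symbols):
    """hard_symbols: best-guess GF(2^p) symbols as ints; XOR is GF addition."""
    syndrome = reduce(lambda a, b: a ^ b, hard_symbols, 0)
    return [syndrome ^ s for s in hard_symbols]   # each excludes its own input

print(syndrome_outputs([3, 17, 42, 60]))          # GF(64) symbols as 0..63
```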
{"title":"Extended-forward architecture for simplified check node processing in NB-LDPC decoders","authors":"Cédric Marchand, E. Boutillon, Hassan Harb, L. Conde-Canencia, A. Ghouwayel","doi":"10.1109/SiPS.2017.8109992","DOIUrl":"https://doi.org/10.1109/SiPS.2017.8109992","url":null,"abstract":"This paper focuses on low complexity architectures for check node processing in Non-Binary LDPC decoders. To be specific, we focus on Extended Min-Sum decoders and consider the state-of-the-art Forward-Backward and Syndrome-Based approaches. We recall the presorting technique that allows for significant complexity reduction at the Elementary Check Node level. The Extended-Forward architecture is then presented as an original new architecture for efficient syndrome calculation. These advances lead to a new architecture for check node processing with reduced area. As an example, we provide implementation results over GF(64) and code rate 5/6 showing complexity reduction by a factor of up to 2.6.","PeriodicalId":251688,"journal":{"name":"2017 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"212 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122656987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-latency software LDPC decoders for x86 multi-core devices
Pub Date: 2017-10-01, DOI: 10.1109/SiPS.2017.8110001
B. Gal, C. Jégo
LDPC codes are a family of error-correcting codes used in most modern digital communication standards, including the upcoming 3GPP 5G standard. Thanks to their high processing power and parallelization capabilities, prevailing multi-core and many-core devices enable real-time implementations of digital communication systems that were previously implemented on dedicated hardware targets. Through massive inter-frame decoding parallelization, current software LDPC decoder throughputs range from hundreds of Mbit/s up to several Gbit/s. However, inter-frame parallelization incurs latency penalties, while in future 5G wireless communication systems latency should be reduced as far as possible. To this end, this article proposes a novel parallelization approach for LDPC decoding on multi-core processor devices. It reduces the processing latency to a few microseconds, as demonstrated by x86 multi-core experiments.
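The article's decoder code is not reproduced in the abstract; as a hedged illustration of the intra-frame, SIMD-friendly style low-latency software decoders rely on (one frame, all check nodes of one layer updated at once, rather than batching many frames), here is a vectorized min-sum check node update:

```python
import numpy as np

def min_sum_check_update(msgs):
    """msgs: (checks, dc) variable-to-check LLRs; returns check-to-variable."""
    sign = np.where(msgs >= 0, 1.0, -1.0)
    ext_sign = sign.prod(axis=1, keepdims=True) * sign   # sign of the others
    mags = np.abs(msgs)
    rows = np.arange(mags.shape[0])
    imin = mags.argmin(axis=1)
    m1 = mags[rows, imin][:, None]                       # smallest magnitude
    masked = mags.copy()
    masked[rows, imin] = np.inf
    m2 = masked.min(axis=1, keepdims=True)               # second smallest
    is_min = np.arange(mags.shape[1])[None, :] == imin[:, None]
    return ext_sign * np.where(is_min, m2, m1)           # exclude own input
```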
{"title":"Low-latency software LDPC decoders for x86 multi-core devices","authors":"B. Gal, C. Jégo","doi":"10.1109/SiPS.2017.8110001","DOIUrl":"https://doi.org/10.1109/SiPS.2017.8110001","url":null,"abstract":"LDPC codes are a family of error correcting codes used in most modern digital communication standards even in future 3GPP 5G standard. Thanks to their high processing power and their parallelization capabilities, prevailing multi-core and many-core devices facilitate real-time implementations of digital communication systems, which were previously implemented on dedicated hardware targets. Through massive frame decoding parallelization, current LDPC decoders throughputs range from hundreds of Mbps up to Gbps. However, inter-frame parallelization involves latency penalties, while in future 5G wireless communication systems, the latency should be reduced as far as possible. To this end, a novel LDPC parallelization approach for LDPC decoding on a multi-core processor device is proposed in this article. It reduces the processing latency down to some microseconds as highlighted by x86 multi-core experimentations.","PeriodicalId":251688,"journal":{"name":"2017 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"os-57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127720580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}