{"title":"基于可变阈值和噪声扰动的低复杂度比特翻转性能研究","authors":"Julian Webber, T. Nishimura, T. Ohgane, Y. Ogawa","doi":"10.1109/ICACT.2014.6779175","DOIUrl":null,"url":null,"abstract":"The near Shannon capacity approaching low-density parity-check (LDPC) linear block codes are now in widespread use in modern systems including the long term evolution advanced (LTE-A) cellular, 802.11η Wi-Fi and DVB-S2 satellite communications standards. The decoders based on the iterative belief propagation algorithm provide near optimum performance but also have very high computational complexity. Therefore significant research has recently focused on reduced complexity architectures based on the group of so-called bit-flipping algorithms. In the basic bit-flipping algorithm the number of failed parity checks for each bit is computed and the bit with the maximum failed parity checks is inverted. Inverting bits above a certain threshold removes the complexity involved with a maximum-search and, adaptive thresholds on each bit can further reduce the computation overhead. The criterion for the threshold update affects the error and convergence performances. Here, we describe a low-complexity architecture that has two (or more) decoder branches each with a different threshold scaling factor and select the threshold and bits at each iteration from the branch with the lowest syndrome sum. We then investigate the effect of adding a random Uniform or Gaussian noise perturbation to the threshold in order to reduce the average iteration count further in order to provide the opportunity to escape from stuck decoding states.","PeriodicalId":6380,"journal":{"name":"16th International Conference on Advanced Communication Technology","volume":"227 1","pages":"206-213"},"PeriodicalIF":0.0000,"publicationDate":"2014-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Performance investigation of reduced complexity bit-flipping using variable thresholds and noise perturbation\",\"authors\":\"Julian Webber, T. Nishimura, T. Ohgane, Y. Ogawa\",\"doi\":\"10.1109/ICACT.2014.6779175\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The near Shannon capacity approaching low-density parity-check (LDPC) linear block codes are now in widespread use in modern systems including the long term evolution advanced (LTE-A) cellular, 802.11η Wi-Fi and DVB-S2 satellite communications standards. The decoders based on the iterative belief propagation algorithm provide near optimum performance but also have very high computational complexity. Therefore significant research has recently focused on reduced complexity architectures based on the group of so-called bit-flipping algorithms. In the basic bit-flipping algorithm the number of failed parity checks for each bit is computed and the bit with the maximum failed parity checks is inverted. Inverting bits above a certain threshold removes the complexity involved with a maximum-search and, adaptive thresholds on each bit can further reduce the computation overhead. The criterion for the threshold update affects the error and convergence performances. Here, we describe a low-complexity architecture that has two (or more) decoder branches each with a different threshold scaling factor and select the threshold and bits at each iteration from the branch with the lowest syndrome sum. 
We then investigate the effect of adding a random Uniform or Gaussian noise perturbation to the threshold in order to reduce the average iteration count further in order to provide the opportunity to escape from stuck decoding states.\",\"PeriodicalId\":6380,\"journal\":{\"name\":\"16th International Conference on Advanced Communication Technology\",\"volume\":\"227 1\",\"pages\":\"206-213\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"16th International Conference on Advanced Communication Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICACT.2014.6779175\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"16th International Conference on Advanced Communication Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICACT.2014.6779175","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Performance investigation of reduced complexity bit-flipping using variable thresholds and noise perturbation
Julian Webber, T. Nishimura, T. Ohgane and Y. Ogawa
16th International Conference on Advanced Communication Technology (ICACT), 2014, pp. 206-213. DOI: 10.1109/ICACT.2014.6779175
Low-density parity-check (LDPC) linear block codes, which approach the Shannon capacity, are now in widespread use in modern systems, including the Long Term Evolution-Advanced (LTE-A) cellular, 802.11n Wi-Fi and DVB-S2 satellite communication standards. Decoders based on the iterative belief propagation algorithm provide near-optimum performance but have very high computational complexity. Significant research has therefore recently focused on reduced-complexity architectures based on the family of so-called bit-flipping algorithms. In the basic bit-flipping algorithm, the number of failed parity checks is computed for each bit and the bit with the largest number of failed checks is inverted. Inverting all bits whose failed-check count exceeds a threshold removes the complexity of the maximum search, and adaptive per-bit thresholds can further reduce the computational overhead. The criterion used to update the threshold affects both the error and convergence performance. Here, we describe a low-complexity architecture that has two (or more) decoder branches, each with a different threshold scaling factor, and at each iteration selects the threshold and flipped bits from the branch with the lowest syndrome sum. We then investigate the effect of adding a random uniform or Gaussian noise perturbation to the threshold, both to further reduce the average iteration count and to provide an opportunity to escape from stuck decoding states.
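The following is a minimal sketch, not the authors' implementation, of the ideas summarised above: threshold-based bit flipping with a scaling factor, an optional Gaussian perturbation on the threshold, and per-iteration selection between branches by syndrome weight. The parity-check matrix, scaling factors (0.7 and 0.9) and noise standard deviation are illustrative toy values only.

```python
# Illustrative threshold-based bit-flipping sketch (hard-decision input).
# Assumptions: toy (7,4) Hamming-style H, example scaling factors and noise level.
import numpy as np

def syndrome(H, c):
    """Parity checks mod 2; an all-zero syndrome means a valid codeword."""
    return H.dot(c) % 2

def flip_candidate(H, c, scale, rng=None, noise_std=0.0):
    """Flip every bit whose failed-check count exceeds scale * (max count),
    with an optional Gaussian perturbation added to the threshold."""
    s = syndrome(H, c)
    fails = H.T.dot(s)                      # failed checks touching each bit
    thr = scale * fails.max()
    if rng is not None and noise_std > 0.0:
        thr += rng.normal(0.0, noise_std)
    thr = max(thr, 0.0)                     # never flip bits with zero failed checks
    cand = c.copy()
    cand[fails > thr] ^= 1
    return cand

def two_branch_bf(H, y, scales=(0.7, 0.9), noise_std=0.0, max_iter=50, seed=0):
    """Each iteration tries one flip step per scaling factor and keeps the
    branch whose candidate has the lowest syndrome weight."""
    rng = np.random.default_rng(seed)
    c = y.copy()
    for it in range(max_iter):
        if not syndrome(H, c).any():
            return c, it                    # converged to a valid codeword
        cands = [flip_candidate(H, c, s, rng, noise_std) for s in scales]
        c = min(cands, key=lambda x: int(syndrome(H, x).sum()))
    return c, max_iter

if __name__ == "__main__":
    # Toy parity-check matrix, purely for illustration.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
    tx = np.zeros(7, dtype=np.uint8)        # all-zero codeword
    rx = tx.copy(); rx[2] ^= 1              # single hard bit error
    decoded, iters = two_branch_bf(H, rx, noise_std=0.1)
    print("decoded:", decoded, "iterations:", iters)
```

In this sketch the threshold perturbation simply jitters the flip decision boundary from one iteration to the next, which is the mechanism the abstract suggests for escaping stuck decoding states; the paper itself evaluates both uniform and Gaussian perturbations over a range of parameters.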