Pub Date: 2018-07-01 | DOI: 10.2174/1876825301906010001
A. Boccuto, I. Gerace, V. Giorgetti, M. Rinaldi
In this paper, we deal with the demosaicing problem when the Bayer pattern is used. We propose a fast heuristic algorithm consisting of three parts. In the first, we initialize the green channel by means of an edge-directed and weighted-average technique. In the second, the red and blue channels are updated thanks to an equality constraint on the second derivatives. The third part consists of a constant-hue-based interpolation. We show experimentally that the proposed algorithm gives, on average, better reconstructions than more computationally expensive algorithms.
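The edge-directed green initialization described above can be sketched as follows. This is an illustrative guess at the first stage only: the function name and the gradient test are ours, and the paper's actual weighting scheme is not given in the abstract.

```python
import numpy as np

def interpolate_green(raw, i, j):
    """Edge-directed estimate of G at a non-green Bayer site (i, j).

    Compares horizontal and vertical gradients of the green neighbours
    and averages along the direction of least variation, so edges are
    not blurred across.
    """
    gh = abs(raw[i, j - 1] - raw[i, j + 1])  # horizontal variation
    gv = abs(raw[i - 1, j] - raw[i + 1, j])  # vertical variation
    if gh < gv:   # less variation horizontally: average left/right
        return (raw[i, j - 1] + raw[i, j + 1]) / 2.0
    if gv < gh:   # less variation vertically: average up/down
        return (raw[i - 1, j] + raw[i + 1, j]) / 2.0
    # no preferred direction: plain 4-neighbour average
    return (raw[i, j - 1] + raw[i, j + 1] + raw[i - 1, j] + raw[i + 1, j]) / 4.0
```

Across a strong vertical edge the horizontal neighbours disagree wildly, so the vertical pair is used instead.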
"A Fast Algorithm for the Demosaicing Problem Concerning the Bayer Pattern," The Open Signal Processing Journal.
Pub Date: 2013-12-27 | DOI: 10.2174/1876825301305010001
Osama Hosam
Digital communication and media sharing have increased extensively in the last couple of years. Research is focused on protecting digital media through copyright protection, and the digital media is secured by watermarking. We have developed an image watermarking technique in the frequency domain to hide secure information in the Discrete Cosine Transform (DCT) coefficients of the carrier image. The DCT coefficients are modulated by Dither Modulation (DM). We have increased the modulation step to be able to entirely recover the embedded data (the watermark) and to increase the robustness of our proposed algorithm to affine transforms and geometrical attacks. Our algorithm showed lower complexity and robustness against different attacks.
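The dither-modulation step the abstract refers to can be illustrated on a single coefficient. The function names and the step value are ours, not the paper's; the key point is that enlarging `step` trades embedding distortion for robustness, which is the trade-off the abstract describes.

```python
def dm_embed(coeff, bit, step, dither=0.0):
    """Quantize a coefficient to the lattice that encodes `bit`
    (dither modulation / quantization index modulation).  A larger
    `step` survives stronger attacks at the cost of more distortion."""
    offset = dither + (step / 2.0) * bit   # the two lattices are step/2 apart
    return step * round((coeff - offset) / step) + offset

def dm_extract(coeff, step, dither=0.0):
    """Recover the bit by finding which of the two lattices is nearer."""
    d0 = abs(coeff - dm_embed(coeff, 0, step, dither))
    d1 = abs(coeff - dm_embed(coeff, 1, step, dither))
    return 0 if d0 <= d1 else 1
```

Any perturbation smaller than a quarter of the step leaves the extracted bit unchanged, which is why increasing the modulation step improves robustness to attacks.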
"Side-Informed Image Watermarking Scheme Based on Dither Modulation in the Frequency Domain," The Open Signal Processing Journal.
Pub Date: 2011-07-29 | DOI: 10.2174/1876825301104010019
K. ElMahgoub, M. Nafie
In this paper, the low-density parity-check (LDPC) codes used in the IEEE 802.16 standard physical layer are studied, and two novel techniques to enhance the performance of such codes are introduced. In the first technique, a novel parity-check matrix for LDPC codes over GF(4), with the non-zero entries chosen to maximize the entropy, is proposed; the parity-check matrix is based on the binary parity-check matrix used in the IEEE 802.16 standard. The proposed code is shown to outperform the binary code used in the IEEE 802.16 standard over both additive white Gaussian noise (AWGN) and Stanford University Interim (SUI-3) channel models. In the second technique, a high-rate LDPC code is used in a concatenated coding structure as an outer code with a convolutional code as an inner code. The convolutional codes are decoded using two techniques: a bit-based maximum a posteriori probability (Log-MAP) decoder with its soft outputs fed into a binary LDPC decoder, and a symbol-based Log-MAP decoder with its soft outputs fed into a non-binary Galois-field LDPC decoder. The performance of such LDPC-CC concatenated codes is compared with the commonly used concatenated convolutional Reed-Solomon codes over the standard SUI-3 channel model, and the LDPC-CC codes showed better performance.
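For background, the parity-check relation that both the binary and the GF(4) decoders enforce can be shown on a toy binary matrix. The IEEE 802.16 matrices are far larger and structured; this H is illustrative only.

```python
import numpy as np

# Toy (6, 3) binary parity-check matrix: a word c is a codeword iff H c = 0 (mod 2).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(c):
    """Syndrome of a received word; all-zero means no detected error."""
    return H.dot(c) % 2
```

An LDPC decoder iteratively adjusts the received word until the syndrome vanishes; a non-binary code over GF(4) replaces the mod-2 arithmetic with GF(4) arithmetic but keeps the same structure.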
"On the Enhancement of LDPC Codes Used in WiMAX," The Open Signal Processing Journal.
Pub Date: 2011-04-19 | DOI: 10.2174/1876825301104010001
M. Stecker
In this paper, a general theory of signals characterized by probabilistic constraints is developed. As in previous work (10), the theoretical development employs Lagrange multipliers to implement the constraints and the maximum entropy principle to generate the most likely probability distribution function consistent with the constraints. The method of computing the probability distribution functions is similar to that used in computing partition functions in statistical mechanics. Simple cases in which exact analytic solutions for the maximum entropy distribution functions and entropy exist are studied and their implications discussed. The application of this technique to the problem of signal detection is explored both theoretically and with simulations. It is demonstrated that the method can readily classify signals governed by different constraint distributions as long as the mean values of the constraints for the two distributions differ. Classifying signals governed by constraint distributions that differ in shape but not in mean value is much more difficult. Some solutions to this problem and extensions of the method are discussed.
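The Lagrange-multiplier construction can be made concrete for a single mean constraint on a discrete alphabet: the maximum-entropy solution is the exponential family p_i proportional to exp(-lam * x_i), with lam chosen so the constraint holds, exactly as in a partition-function calculation. The bisection solver below is a minimal sketch under that setup, not the paper's method.

```python
import math

def maxent_distribution(values, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy distribution over `values` with a fixed mean.

    p_i = exp(-lam * x_i) / Z, where Z is the partition function and the
    Lagrange multiplier lam is found by bisection (the mean decreases
    monotonically as lam grows, since d mean / d lam = -variance).
    """
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)                                   # partition function
        return sum(x * wi for x, wi in zip(values, w)) / z

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid          # mean still too high: increase lam
        else:
            hi = mid
        if hi - lo < tol:
            break
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]
```

When the target mean equals the unconstrained average, lam goes to zero and the uniform (maximum-entropy, no-information) distribution is recovered.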
"Constrained Signals: A General Theory of Information Content and Detection," The Open Signal Processing Journal.
Pub Date: 2010-07-14 | DOI: 10.2174/1876825301003010020
H. Albrecht
Cosine-sum windows with minimum sidelobes (minimum sidelobe windows) have good properties in terms of peak sidelobe level (PSL) and equivalent noise bandwidth (ENBW), but neighboring windows (whose numbers of coefficients differ by one) have quite large PSL differences. If, for a particular data analysis, the PSL of the window must not exceed a given value, then windows with a much lower PSL than specified often have to be used. Because ENBW increases as PSL decreases, this leads, among other things, to more uncertainty in the determination of signal amplitudes. This article describes how to design modified minimum sidelobe windows that have properties similar to minimum sidelobe windows for a given PSL; their ENBW is, however, traded off against PSL. Using such a design, windows can be created exactly for a given value of PSL at small ENBW. The adjustment of the asymptotic decay of the sidelobes and the determination of the window coefficients are done without solving linear systems of equations, to avoid known numerical problems. Using the proposed algorithm, more than 6000 windows with PSL values greater than -350 dB were created. The parameters and coefficients of selected windows are given in the article.
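The ENBW figure of merit is straightforward to compute for any cosine-sum window. As a sanity check, the two-term window with coefficients (0.5, 0.5) is the periodic Hann window, whose ENBW is exactly 1.5 bins; the coefficient convention below is a common one and may differ in sign convention from the article's.

```python
import numpy as np

def enbw(w):
    """Equivalent noise bandwidth of a window, in FFT bins:
    N * sum(w^2) / (sum w)^2."""
    return len(w) * np.sum(w ** 2) / np.sum(w) ** 2

def cosine_sum_window(coeffs, n):
    """Cosine-sum window w[k] = sum_j (-1)^j a_j cos(2*pi*j*k/n)
    (periodic form, as used for spectral analysis)."""
    k = np.arange(n)
    return sum(((-1) ** j) * a * np.cos(2 * np.pi * j * k / n)
               for j, a in enumerate(coeffs))
```

Adding more cosine terms lowers the PSL but raises the ENBW, which is precisely the trade-off the article tailors.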
"Tailoring of Minimum Sidelobe Cosine-Sum Windows for High-Resolution Measurements," The Open Signal Processing Journal.
Pub Date: 2010-06-26 | DOI: 10.2174/1876825301003010013
F. Shih, Yuan Yuan
High Dynamic Range (HDR) images use a wider range of intensity values than common Limited Dynamic Range (LDR) images. Because of this, handling HDR images requires a great deal of information to be stored and transferred. To represent and display HDR images efficiently, a trade-off must be struck between size and accuracy. We present a new wavelet-based algorithm for encoding HDR images so that their data remain practical for internet-based image communication. Experiments are conducted using the library provided by the Munsell Color Science Laboratory of the Rochester Institute of Technology and HDR images provided by Industrial Light and Magic (ILM, a motion-picture visual-effects company). Experimental results show that the encoded format achieves a good balance between visual quality and image size for internet users.
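A one-level Haar transform illustrates the wavelet machinery such an encoder builds on: the signal splits into averages and details, and quantizing or discarding small details is what buys the size/accuracy trade-off. The paper's actual filter bank and quantizer are not specified in the abstract, so this is generic background, not the proposed algorithm.

```python
import numpy as np

def haar_1d(x):
    """One level of the Haar wavelet transform: pairwise averages
    (coarse approximation) and pairwise half-differences (details)."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / 2.0
    det = (x[0::2] - x[1::2]) / 2.0
    return avg, det

def haar_inverse(avg, det):
    """Exact inverse of haar_1d."""
    out = np.empty(2 * len(avg))
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out
```

In smooth image regions the details are near zero, so they compress well; HDR-specific encoders typically apply such transforms to a log-luminance representation.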
"A Wavelet-Based Encoding Algorithm for High Dynamic Range Images," The Open Signal Processing Journal.
Pub Date: 2010-01-29 | DOI: 10.2174/1876825301003010006
R. Ranta, V. Louis-Dorr
This communication aims to combine several previously proposed wavelet denoising algorithms into a novel heuristic block method. The proposed "hysteresis" thresholding uses two thresholds simultaneously in order to combine detection and minimal alteration of informative features of the processed signal. This approach exploits the graph structure of the wavelet decomposition to detect clusters of significant wavelet coefficients. The new algorithm is compared with classical denoising methods on simulated benchmark signals.
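The two-threshold rule can be sketched in one dimension, with simple adjacency standing in for the graph structure of the wavelet decomposition. The thresholds and neighbourhood definition here are illustrative, not the paper's.

```python
def hysteresis_keep(coeffs, t_low, t_high):
    """Two-threshold ('hysteresis') selection on a 1-D coefficient list:
    a coefficient survives if it exceeds t_high, or if it exceeds t_low
    and is adjacent to a surviving coefficient (grown iteratively, so
    whole clusters anchored by a strong coefficient are kept)."""
    n = len(coeffs)
    keep = [abs(c) >= t_high for c in coeffs]
    changed = True
    while changed:
        changed = False
        for i, c in enumerate(coeffs):
            if not keep[i] and abs(c) >= t_low:
                if (i > 0 and keep[i - 1]) or (i + 1 < n and keep[i + 1]):
                    keep[i] = True
                    changed = True
    return [c if k else 0.0 for c, k in zip(coeffs, keep)]
```

A moderate coefficient next to a strong one is kept (minimal alteration of a detected feature), while an isolated moderate coefficient is zeroed as noise.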
"Hysteresis Thresholding: A Graph-Based Wavelet Block Denoising Algorithm," The Open Signal Processing Journal.
Pub Date: 2010-01-19 | DOI: 10.2174/1876825301003010001
T. Moir
The method of steepest descent is revisited in continuous time. It is shown that the continuous-time version is a vector differential equation whose solution is found by integration. Since numerical integration has many forms, we show an alternative to the conventional solution that uses trapezoidal integration. This in turn gives a slightly modified least-mean-squares (LMS) algorithm. Keywords: steepest descent, least-mean squares (LMS), adaptive filters.
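One plausible reading of the resulting modification is that the gradient step is integrated with the trapezoidal rule, i.e. each weight update averages the current and previous instantaneous gradients. The sketch below encodes that guess and is not the paper's exact derivation.

```python
import numpy as np

def lms_trapezoidal(x, d, order, mu):
    """LMS-style adaptive FIR filter in which the steepest-descent step is
    discretized by the trapezoidal rule: the update uses the average of
    the current and previous instantaneous gradient estimates."""
    w = np.zeros(order)
    prev_grad = np.zeros(order)
    y = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]       # regressor, most recent first
        y[n] = w @ u
        e = d[n] - y[n]                        # a-priori error
        grad = e * u                           # instantaneous gradient estimate
        w = w + mu * 0.5 * (grad + prev_grad)  # trapezoidal average of gradients
        prev_grad = grad
    return w, y
```

On a noiseless system-identification task the weights still converge to the true filter; the averaging only changes the transient behaviour relative to plain LMS.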
"The trapezoidal method of steepest-descent and its application to adaptive filtering," The Open Signal Processing Journal.
Pub Date: 2009-12-16 | DOI: 10.2174/1876825300902010040
A. Majumdar, R. Ward
This paper proposes a solution to the following non-convex optimization problem: min ||x||_p subject to a constraint on ||y - Ax||_q. Such optimization problems arise in a rapidly advancing branch of signal processing called Compressed Sensing (CS). The problem of CS is to reconstruct a k-sparse vector x of size n x 1 from noisy measurements y = Ax + n, where A (of size m x n, with m < n) is the measurement matrix and the m x 1 vector n is additive noise. In general, the optimization methods developed for CS minimize a sparsity-promoting l1-norm (p = 1) for Gaussian noise (q = 2). This is restrictive for two reasons: i) theoretically it has been shown that, with positive fractional norms (0 < p < 1), the sparse vector x can be reconstructed from fewer measurements than required by the l1-norm; and ii) noises other than Gaussian require the norm of the misfit (q) to be something other than 2. To address these two issues, an Iterative Reweighted Least Squares based algorithm is proposed here to solve the aforesaid optimization problem.
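A standard IRLS iteration for the noiseless limit of this problem (min ||x||_p subject to Ax = y) can be sketched as follows; each step solves a weighted least-squares system in closed form. The epsilon-annealing schedule and starting point are our choices, not necessarily the paper's.

```python
import numpy as np

def irls_lp(A, y, p=0.5, iters=50, eps=1.0):
    """Iterative Reweighted Least Squares sketch for min ||x||_p s.t. Ax = y.

    Each step solves x = Q A^T (A Q A^T)^{-1} y with Q = diag(w),
    w_i = (x_i^2 + eps)^(1 - p/2); eps is annealed so the weights stay
    bounded as entries of x shrink toward zero."""
    m, n = A.shape
    # start from the minimum-l2-norm solution (p = 2 case)
    x = A.T @ np.linalg.lstsq(A @ A.T, y, rcond=None)[0]
    for _ in range(iters):
        w = (x ** 2 + eps) ** (1 - p / 2)       # smoothed |x_i|^(2-p)
        Aw = A * w                              # A @ diag(w), via broadcasting
        x = w * (A.T @ np.linalg.solve(Aw @ A.T, y))
        eps = max(eps / 10.0, 1e-12)
    return x
```

By construction every iterate satisfies Ax = y (up to numerical precision), while the reweighting progressively concentrates the energy of x on few entries; smaller p promotes sparsity more aggressively.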
"Non-Convex Compressed Sensing from Noisy Measurements," The Open Signal Processing Journal.
Pub Date: 2009-10-16 | DOI: 10.2174/1876825300902010029
B. Lamia, Ouni Kais, E. Noureddine
A new cochlear implant speech coding strategy for representing acoustic information with a reduced number of channels is developed. After a brief description of current cochlear implant stimulation methods, the authors present a processing algorithm describing the new adaptive spectral analysis strategy (ASAS-GC). This technique is based on a Gammachirp perception model and an optimal stimulation-rate selection. The excited-electrode signals and recognition performances are presented and compared with those of other strategies.
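Reduced-channel strategies typically stimulate only the highest-energy frequency bands in each analysis frame (so-called n-of-m selection). The sketch below shows that generic selection step as background; it is not the ASAS-GC algorithm itself, whose Gammachirp analysis and rate selection are not detailed in the abstract.

```python
def select_channels(energies, m):
    """n-of-m channel selection: return the indices of the m bands with
    the highest energy, in electrode order, so only those electrodes
    are stimulated in this frame."""
    order = sorted(range(len(energies)), key=lambda i: energies[i], reverse=True)
    return sorted(order[:m])
```

Reducing m lowers channel interaction and power consumption, which is why such strategies aim to keep recognition performance with fewer active channels.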
"Performances Study of a New Speech Coding Strategy with Reduced Channels for Cochlear Implants," The Open Signal Processing Journal.