Pattern recognition is the task of searching for particular patterns or features in a given input. Data mining, computer networks, genetic engineering, chemical structure analysis, and web services are a few of the rapidly growing applications in which pattern recognition is used. Graphs are a very powerful model applied in various areas of computer science and engineering. This paper proposes a graph-based algorithm for graphical symbol recognition. In the proposed approach, graph-based filtering is performed prior to matching, which significantly reduces the computational complexity. The proposed algorithm is evaluated on a large number of input drawings, and the simulation results show that it outperforms existing algorithms.
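A minimal sketch of such a pre-matching filter, assuming graphs stored as adjacency dictionaries (the paper's actual filtering criteria may differ): candidate model graphs that cannot possibly match the input are discarded by cheap necessary conditions, so the expensive matching step runs on far fewer candidates.

```python
# Hypothetical graph filter: cheap necessary (not sufficient) conditions
# for two graphs to match, checked before any exact matching is attempted.

def degree_sequence(graph):
    """graph: dict mapping node -> set of neighbour nodes."""
    return sorted(len(nbrs) for nbrs in graph.values())

def passes_filter(g1, g2):
    """True only if g1 and g2 could still be isomorphic."""
    return (len(g1) == len(g2)
            and sum(len(n) for n in g1.values()) == sum(len(n) for n in g2.values())
            and degree_sequence(g1) == degree_sequence(g2))

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path     = {0: {1}, 1: {0, 2}, 2: {1}}

print(passes_filter(triangle, dict(triangle)))  # True  -> proceed to matching
print(passes_filter(triangle, path))            # False -> pruned, no matching run
```

Because the degree-sequence test is only a necessary condition, surviving candidates still go through full matching; the filter only prunes guaranteed non-matches.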
Vaishali S. Pawar, M. Zaveri. "Graph Based Filtering and Matching for Symbol Recognition." Journal of Information Hiding and Multimedia Signal Processing, vol. 6(1), pp. 167-191, published 2018-08-06. DOI: 10.4236/jsip.2018.93010.
V. Christofilakis, Giorgos Tatsis, Constantinos T. Votis, Spyridon K Chronopoulos, P. Kostarakis, C. Lolis, A. Bartzokas
In this paper we present an experimentally validated system for measuring rainfall from radio frequency (RF) signal attenuation at 2 GHz. Measurements took place in Ioannina, NW Greece, starting in April 2015 and lasting twelve months. The extensive results acquired so far have shown reliable and accurate measurement of rainfall amounts smaller than 1 mm over 5-min periods. A key innovation is that this paper presents significant earth-to-earth measurements of rainfall attenuation (at 2 GHz), serving as a map for future investigation and as prior knowledge of the behavior of other systems operating at frequencies around S-band.
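The usual link between specific attenuation and rain rate is the ITU-R power law A = k·R^α (A in dB/km, R in mm/h), which can be inverted to estimate rainfall from a measured link attenuation. The coefficients below are placeholders for illustration only, not the values the authors obtained at 2 GHz; real k and α come from ITU-R P.838 tables or from calibration against a rain gauge.

```python
# Illustrative inversion of the rain-attenuation power law A = k * R**alpha.
# k and alpha here are made-up placeholders, NOT the paper's fitted values.

def rain_rate_from_attenuation(A_dB, path_km, k=1e-4, alpha=0.9):
    """Estimate rain rate R (mm/h) from total link attenuation A_dB
    over a path of length path_km."""
    specific = A_dB / path_km            # specific attenuation, dB/km
    return (specific / k) ** (1.0 / alpha)

# More attenuation on the same path implies a higher estimated rain rate.
light = rain_rate_from_attenuation(0.01, path_km=1.0)
heavy = rain_rate_from_attenuation(0.02, path_km=1.0)
print(light < heavy)  # True
```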
V. Christofilakis, Giorgos Tatsis, Constantinos T. Votis, Spyridon K Chronopoulos, P. Kostarakis, C. Lolis, A. Bartzokas. "Rainfall Measurements Due to Radio Frequency Signal Attenuation at 2 GHz." Journal of Information Hiding and Multimedia Signal Processing, vol. 53(1), pp. 192-201, published 2018-08-06. DOI: 10.4236/jsip.2018.93011.
Texture analysis is important in several image segmentation and classification problems. Different image textures manifest themselves through dissimilarity in both the property values and the spatial interrelationships of their component texture primitives. We use this fact in a texture discrimination system. This paper focuses on how to apply texture operators based on the co-occurrence matrix, texture filters, and fractal dimension to the problems of object recognition and image segmentation.
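The co-occurrence matrix counts how often pairs of gray levels occur at a fixed spatial offset; texture descriptors such as contrast are then computed from it. A minimal sketch for small integer-valued images (offset and descriptor choice are illustrative, not the paper's exact configuration):

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence counts for offset (dx, dy).
    img: list of rows of integer gray levels in [0, levels)."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y][x]][img[y + dy][x + dx]] += 1
    return m

def contrast(m):
    """Contrast descriptor: weights co-occurrences by squared level difference."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

uniform = [[1, 1], [1, 1]]
checker = [[0, 3], [3, 0]]
print(contrast(glcm(uniform)))  # 0.0 -> flat texture
print(contrast(glcm(checker)))  # 9.0 -> strong horizontal gray-level jumps
```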
Beatriz Marrón. "Texture Filters and Fractal Dimension on Image Segmentation." Journal of Information Hiding and Multimedia Signal Processing, vol. 1(1), pp. 229-238, published 2018-08-06. DOI: 10.4236/JSIP.2018.93014.
Deep learning is a powerful technique that is widely applied to image recognition and natural language processing tasks, among many others. In this work, we propose an efficient technique that utilizes pre-trained Convolutional Neural Network (CNN) architectures to extract powerful image features for object recognition. We build on the existing concept of extending the learning from pre-trained CNNs to new databases through activations by considering multiple deep layers. We exploit the progressive learning that happens at the various intermediate layers of a CNN to construct Deep Multi-Layer (DM-L) feature-extraction vectors that achieve excellent object recognition performance. Two popular pre-trained CNN architectures, VGG_16 and VGG_19, are used in this work to extract feature sets from three deep fully connected layers, namely “fc6”, “fc7”, and “fc8”. Using Principal Component Analysis (PCA), the dimensionality of the DM-L feature vectors is reduced to form powerful feature vectors that are fed to an external classifier ensemble instead of the softmax classification layers of the two original pre-trained CNN models. The proposed DM-L technique is applied to the benchmark Caltech-101 object recognition database. Conventional wisdom may suggest that features extracted from the deepest layer, “fc8”, rather than “fc6”, would yield the best recognition performance, but our results prove otherwise for the two models considered: the “fc6” feature vectors achieve the best recognition performance. State-of-the-art recognition rates of 91.17% and 91.35% are achieved using the “fc6” feature vectors for VGG_16 and VGG_19, respectively. These results were obtained with 30 sample images per class; the proposed system can achieve improved performance by considering all sample images per class. Our research shows that, for CNN-based feature extraction, multiple layers should be considered, and the layer that maximizes recognition performance should then be selected.
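The final stage of the pipeline, fusing per-layer classifiers instead of the CNN softmax, can be sketched with a simple majority vote. The toy below mocks the per-layer predictions as fixed label lists; in the actual system, each classifier would be trained on PCA-reduced fc6/fc7/fc8 activations, and the ensemble rule may differ from plain voting.

```python
# Hypothetical ensemble step: one classifier per CNN layer, fused by majority
# vote. The prediction lists stand in for classifiers trained on PCA-reduced
# fc6/fc7/fc8 activations.
from collections import Counter

def majority_vote(predictions_per_classifier):
    """predictions_per_classifier: list of equal-length label lists,
    one per base classifier. Returns the fused label list."""
    fused = []
    for labels in zip(*predictions_per_classifier):
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused

fc6_preds = ["cat", "dog", "cat"]   # mock outputs of the fc6-based classifier
fc7_preds = ["cat", "cat", "bird"]
fc8_preds = ["dog", "cat", "cat"]
print(majority_vote([fc6_preds, fc7_preds, fc8_preds]))  # ['cat', 'cat', 'cat']
```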
H. A. Khan. "DM-L Based Feature Extraction and Classifier Ensemble for Object Recognition." Journal of Information Hiding and Multimedia Signal Processing, vol. 197(1), pp. 92-110, published 2018-05-31. DOI: 10.4236/JSIP.2018.92006.
M. Abo-Zahhad, Sabah M. Ahmed, M. Farrag, K. BaAli
Spectrum sensing is a core function of cognitive radio systems, providing spectrum awareness. It is achieved by collecting samples from the frequency band under observation and deciding whether the band is occupied or is a spectrum hole. Sensing becomes more challenging in the wideband scenario: conventional sampling theory makes it infeasible to sample such a very wide range of frequencies, and the technical requirements are very costly. Recently, compressive sensing has emerged as a pioneering solution that relaxes the wideband sampling-rate requirements, allowing a signal to be sampled below the Nyquist rate and reconstructed from very few measurements. In this paper, we discuss the approaches used for solving the compressed spectrum sensing problem in wideband cognitive radio networks and how the problem is formulated and solved to improve detection performance.
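Whatever reconstruction method is used, the final occupied-vs-hole decision per sub-band is often a simple energy test against a noise-calibrated threshold. The sketch below shows that generic decision rule only, not any specific compressive algorithm from the survey; the margin value is an arbitrary illustration.

```python
# Generic per-sub-band energy detector (illustrative; the surveyed CS
# methods differ in how the sub-band signal is obtained, not in this test).

def sense(band_samples, noise_power, margin_db=3.0):
    """Return 'occupied' or 'hole' for one sub-band's time samples."""
    energy = sum(s * s for s in band_samples) / len(band_samples)
    threshold = noise_power * 10 ** (margin_db / 10.0)  # noise floor + margin
    return "occupied" if energy > threshold else "hole"

print(sense([0.05, -0.04, 0.06, -0.05], noise_power=0.01))  # 'hole'
print(sense([1.0, -0.9, 1.1, -1.0], noise_power=0.01))      # 'occupied'
```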
M. Abo-Zahhad, Sabah M. Ahmed, M. Farrag, K. BaAli. "Wideband Cognitive Radio Networks Based Compressed Spectrum Sensing: A Survey." Journal of Information Hiding and Multimedia Signal Processing, vol. 528(1), pp. 122-151, published 2018-05-31. DOI: 10.4236/JSIP.2018.92008.
In recent years, interest in damage identification of structural components through innovative techniques has grown significantly. Damage identification has always been a crucial concern in quality assessment and load-capacity rating of infrastructure. In this regard, researchers focus on proposing efficient tools to identify damage at an early stage, preventing the sudden failure of structural components, ensuring public safety, and reducing asset-management costs. Sensing technologies, together with data analysis through various techniques and machine learning approaches, have been the focus of these innovative methods. The purpose of this research is to develop a robust method for the automatic condition assessment of real-life concrete structures that detects relatively small cracks at early stages. A damage identification algorithm is proposed that uses hybrid approaches to analyze the sensor data, which were obtained from transducers mounted on concrete beams under static loading in the laboratory and serve as the input parameters. The method relies only on the measured time responses. After filtering and normalization of the data, damage-sensitive statistical features are extracted from the signals and used as inputs to a Self-Advising Support Vector Machine (SA-SVM) for classification in the civil engineering domain. Finally, the results are compared with traditional methods to investigate the feasibility of the proposed hybrid algorithm. It is demonstrated that the presented method can reliably detect cracks in the structure and thereby enable real-time infrastructure health monitoring.
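The feature-extraction step can be sketched as computing a few damage-sensitive statistics from each filtered, normalized time response; the resulting vector is what a classifier such as the SA-SVM would consume. The particular statistics below are common illustrative choices, not necessarily the paper's exact feature set.

```python
import math

def features(signal):
    """Damage-sensitive statistics of one (already filtered) time response.
    Feature choice is illustrative; the paper's exact set may differ."""
    n = len(signal)
    mean = sum(signal) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in signal) / n)
    rms = math.sqrt(sum(s * s for s in signal) / n)
    peak = max(abs(s) for s in signal)
    crest = peak / rms if rms else 0.0   # peakiness, often sensitive to cracks
    return {"mean": mean, "std": std, "rms": rms, "crest": crest}

print(features([1.0, -1.0, 1.0, -1.0]))
# {'mean': 0.0, 'std': 1.0, 'rms': 1.0, 'crest': 1.0}
```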
A. N. Hoshyar, S. Kharkovsky, B. Samali. "Statistical Features and Traditional SA-SVM Classification Algorithm for Crack Detection." Journal of Information Hiding and Multimedia Signal Processing, vol. 38(1), pp. 111-121, published 2018-05-30. DOI: 10.4236/JSIP.2018.92007.
In this paper, we present machine learning algorithms and systems for similar video retrieval, where the query is itself a video. For the similarity measurement, exemplars, or representative frames of each video, are extracted by unsupervised learning; for this step, we chose order-aware competitive learning. After obtaining a set of exemplars for each video, the similarity is computed. Because the number and positions of the exemplars differ between videos, we use a similarity computing method called M-distance, which generalizes existing global and local alignment methods using followers to the exemplars. To represent each frame of the video, this paper emphasizes the Frame Signature of the ISO/IEC standard, so that the total system, along with its graphical user interface, becomes practical. Experiments on the detection of inserted plagiaristic scenes showed excellent precision-recall curves, with precision values very close to 1. Thus, the proposed system can work as a plagiarism detector for videos. In addition, this method can be regarded as the structuring of unstructured data via numerical labeling by exemplars. Finally, further sophistication of this labeling is discussed.
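Exemplar extraction by competitive learning can be sketched as an online procedure: each frame vector either moves its nearest exemplar toward itself or, if no exemplar is close enough, becomes a new exemplar. This is a generic sketch with scalar "frames" and an arbitrary radius, not the authors' order-aware variant, which additionally preserves temporal order.

```python
# Hypothetical online exemplar extraction (simplified competitive learning,
# 1-D frame descriptors for illustration).

def extract_exemplars(frames, radius, rate=0.1):
    """Return exemplars such that every frame was within `radius`
    of its winning exemplar when processed."""
    exemplars = []
    for f in frames:
        if exemplars:
            d, i = min((abs(f - e), i) for i, e in enumerate(exemplars))
            if d <= radius:
                # Move the winning exemplar slightly toward this frame.
                exemplars[i] += rate * (f - exemplars[i])
                continue
        exemplars.append(f)  # frame too far from all exemplars: new exemplar
    return exemplars

print(len(extract_exemplars([0.0, 0.1, 5.0, 5.1], radius=1.0)))  # 2
```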
T. Horie, M. Uchida, Y. Matsuyama. "Similar Video Retrieval via Order-Aware Exemplars and Alignment." Journal of Information Hiding and Multimedia Signal Processing, vol. 144(1), pp. 73-91, published 2018-05-30. DOI: 10.4236/jsip.2018.92005.
This paper proposes an object-tracking algorithm with multiple randomly generated features. We mainly address the inconsistent tracking performance of compressive tracking, which is sometimes good and sometimes bad. In compressive tracking, the image features are generated by random projection, so the resulting features depend on the random numbers and the results of each execution differ. If the salient features of the target are not captured, the tracker is likely to fail, making the tracking results inconsistent across executions. The proposed algorithm tracks with a number of different image features and chooses the best tracking result by measuring the similarity with the target model, which reduces the chance that the target location is determined by poor image features. In this paper, we use the Bhattacharyya coefficient to choose the best tracking result. The experimental results show that the proposed tracking algorithm greatly reduces tracking errors. The best performance improvements in terms of center location error, bounding-box overlap ratio, and success rate are from 63.62 pixels to 15.45 pixels, from 31.75% to 64.48%, and from 38.51% to 82.58%, respectively.
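The selection step is the Bhattacharyya coefficient between the target model's histogram and each candidate result's histogram: a value of 1 means identical distributions, so the candidate with the largest coefficient wins. A minimal sketch with made-up 4-bin histograms:

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two normalised histograms (1 = identical)."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

model   = [0.25, 0.25, 0.25, 0.25]   # target model histogram (illustrative)
track_a = [0.25, 0.25, 0.25, 0.25]   # candidate from one random feature set
track_b = [0.70, 0.10, 0.10, 0.10]   # candidate from another feature set

best = max([track_a, track_b], key=lambda h: bhattacharyya(model, h))
print(bhattacharyya(model, track_a))  # 1.0 -> track_a is selected
```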
Lan-Rong Dung, Shih-Chi Wang, Yin-Yi Wu. "A Multiple Random Feature Extraction Algorithm for Image Object Tracking." Journal of Information Hiding and Multimedia Signal Processing, vol. 36(1), pp. 63-71, published 2018-02-28. DOI: 10.4236/JSIP.2018.91004.
Pub Date: 2018-02-22. DOI: 10.20944/PREPRINTS201802.0143.V1
V. Korzhik, Cuong Nguyen, I. Fedyanin, G. Morales-Luna
We introduce two new steganalytic methods that do not depend on the statistics of the cover objects, namely side attacks on stegosystems. The first assumes that the plaintext, encrypted before embedding, is partly known to the attacker. In this case, stegosystem detection is based on calculating the mutual information between the message and the extracted encrypted data; for this calculation, the notion of the k-nearest-neighbor distance is applied. The second method is applied to HUGO, one of the most efficient steganographic algorithms. In this case, stegosystem detection is based on applying the NIST tests to the extracted encrypted messages. Moreover, we show that finding the submatrix of the embedding matrix that determines the trellis-code structure in the HUGO algorithm enables a search for the stegokey by the proposed method.
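The detection idea of the first method is that if the extracted data really carry the (partly known) message, their mutual information with it is well above zero; for independent data it is near zero. The paper uses a k-nearest-neighbor estimator; the sketch below uses the simpler plug-in histogram estimator just to show the decision quantity.

```python
# Plug-in (histogram) mutual-information estimate in bits. Illustrative
# stand-in for the k-NN estimator used in the paper.
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI of two equal-length discrete sequences, in bits."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

known_bits     = [0, 1, 0, 1]
extracted_same = [0, 1, 0, 1]   # extraction matches the known plaintext part
extracted_rand = [0, 0, 1, 1]   # looks unrelated to the known bits

print(mutual_information(known_bits, extracted_same))  # 1.0 -> stego detected
print(mutual_information(known_bits, extracted_rand))  # 0.0 -> no evidence
```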
V. Korzhik, Cuong Nguyen, I. Fedyanin, G. Morales-Luna. "Side Attacks on Stegosystems Executing Message Encryption Previous to Embedding." Journal of Information Hiding and Multimedia Signal Processing, vol. 1(1), pp. 44-57, published 2018-02-22. DOI: 10.20944/PREPRINTS201802.0143.V1.
This paper derives a mathematical description of the complex stretch processor’s response to bandlimited Gaussian noise having arbitrary center frequency and bandwidth. The description of the complex stretch processor’s random output comprises highly accurate closed-form approximations for the probability density function and the autocorrelation function. The solution supports the complex stretch processor’s usage of any conventional range-sidelobe-reduction window. The paper then identifies two practical applications of the derived description. Digital-simulation results for the two identified applications, assuming the complex stretch processor uses the rectangular, Hamming, Blackman, or Kaiser window, verify the derivation’s correctness through favorable comparison to the theoretically predicted behavior.
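Of the windows considered, the Hamming window is the simplest to write down: w[n] = 0.54 − 0.46·cos(2πn/(N−1)), tapering the ends to about 0.08 of the peak to trade mainlobe width for lower range sidelobes. A minimal generator (standard textbook form, independent of the paper's processor model):

```python
import math

def hamming(N):
    """Hamming window of length N >= 2, a conventional
    range-sidelobe-reduction taper."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

w = hamming(5)
print(w[0], w[2], w[4])  # ends ≈ 0.08, centre = 1.0
```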
John N. Spitzmiller. "The Response to Arbitrarily Bandlimited Gaussian Noise of the Complex Stretch Processor Using a Conventional Range-Sidelobe-Reduction Window." Journal of Information Hiding and Multimedia Signal Processing, vol. 54(1), pp. 36-62, published 2018-02-13. DOI: 10.4236/jsip.2018.91003.