Super resolution using block-matching motion estimation with rotation
Pub Date: 2012-04-18 | DOI: 10.1109/SIU.2012.6204526
Yildiray Anagün, E. Seke
Objects in real-life recorded image sequences exhibit rotation along with linear motion, and this should be taken into account when performing super-resolution on video sequences. The work presented here determines a motion-vector field that includes block rotation in the registration step, the most important step of super-resolution restoration. Results of the proposed approach are compared against those of an exhaustive block search that does not consider rotation. The compared approaches are applied to low-resolution sequences generated from their high-resolution counterparts by downsampling, and the comparison is made over the peak signal-to-noise ratios (PSNR) of the resolution-enhanced output images. A visual quality assessment is also performed.
{"title":"Super resolution using block-matching motion estimation with rotation","authors":"Yildiray Anagün, E. Seke","doi":"10.1109/SIU.2012.6204526","DOIUrl":"https://doi.org/10.1109/SIU.2012.6204526","url":null,"abstract":"Objects in real-life recorded image sequences exhibit rotation along with linear motion. This should be taken into consideration when doing super-resolution work on video sequences. The work presented here involves determination of motion-vector field that includes rotation of blocks in the registration step which is the most important step of the super-resolution restoration work. Results of proposed approach are compared against the results that uses exhaustive block search that does not consider rotation. Compared approaches are applied upon low resolution sequences generated from their high resolution counterparts via downsampling. Comparison is done over the peak-signal-to-noise-ratios (psnr) of the output images with improved resolution. Visual quality assessment is also performed.","PeriodicalId":256154,"journal":{"name":"2012 20th Signal Processing and Communications Applications Conference (SIU)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129903649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time image registration
Pub Date: 2012-04-18 | DOI: 10.1109/SIU.2012.6204585
L. M. Gevrekci, Mehmet Umut Demircin, Erdem Akagündüz
This paper summarizes a real-time algorithm developed for registering consecutive video frames and the experiments performed on an embedded processor. Features extracted from consecutive frames are matched using a RANSAC technique, and the frames are registered with an affine transformation model. Techniques from the literature are adapted to run in real time, yielding a system that registers 320×240 images at approximately 40 frames per second. Experiments are performed on a BeagleBoard-xM single-board computer containing ARM and DSP processor cores.
{"title":"Real-time image registration","authors":"L. M. Gevrekci, Mehmet Umut Demircin, Erdem Akagündüz","doi":"10.1109/SIU.2012.6204585","DOIUrl":"https://doi.org/10.1109/SIU.2012.6204585","url":null,"abstract":"This paper summarizes the developed real-time algorithm for registering subsequent video frames and experiments performed on an embedded processor. The features extracted from subsequent frames are matched using a RANSAC technique and frames are registered with an affine transformation model. The techniques used in the literature are improved to work in real-time and a system that can register 320×240 resolution images at approximately 40 frames per second is developed. Experiments are performed on a BeagleBoard-XM single board computer that contains ARM and DSP processor cores.","PeriodicalId":256154,"journal":{"name":"2012 20th Signal Processing and Communications Applications Conference (SIU)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129823521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of the design of the thinned antenna array pattern using the desired function
Pub Date: 2012-04-18 | DOI: 10.1109/SIU.2012.6204521
F. Yaman, A. Yılmaz, S. G. Tanyer
The problem of designing a thinned array with conflicting optimization constraints is examined. A genetic algorithm is used as the search mechanism. The array is assumed to be linear and uniform, with elements of identical excitation phase and amplitude; each element is either on (active) or off (passive). A desirability function is used to combine the two requirements of narrow beamwidth and low side-lobe levels. The effect of thinning the array is examined, and it is observed that thinning improves the side-lobe levels without harming the half-power beamwidth optimization. The thinning ratio and convergence values are examined for different array lengths, and optimized antenna patterns for different array lengths are illustrated.
{"title":"Analysis of the design of the thinned antenna array pattern using the desired function","authors":"F. Yaman, A. Yılmaz, S. G. Tanyer","doi":"10.1109/SIU.2012.6204521","DOIUrl":"https://doi.org/10.1109/SIU.2012.6204521","url":null,"abstract":"The problem of designing a thinning array with contradicting optimization constraints is examined. The genetic algorithm is illustrated for the search mechanism. The array is assumed to be linear with uniform elements of identical excitation; phase and amplitude. The elements in the array are assumed to be either on (active) or off (passive). The desirability function is utilized to combine the two requirements for narrow beam width and low side lobe levels. The effect of thinning of the array is examined. It is observed that thinning improves the side lobe levels without harming the half power beam width optimization. The thinning ratio and the convergence values are examined for different array lengths. Optimized antenna patterns for different array lengths are illustrated.","PeriodicalId":256154,"journal":{"name":"2012 20th Signal Processing and Communications Applications Conference (SIU)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127064334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mismatched filter design in MIMO radar
Pub Date: 2012-04-18 | DOI: 10.1109/SIU.2012.6204508
Safak Bilgi Akdemir, Ç. Candan
It has been shown in the literature that MIMO radar can resolve many more targets in angle than phased-array radars. To resolve targets that are close to each other in range, waveforms with low autocorrelation side-lobes are designed using pulse compression techniques, or mismatched filters are used at the receiver. In this paper, an integrated side-lobe level filter originally developed for conventional radars is adapted to MIMO radar. In addition, a new mismatched filter design procedure is developed to reduce the peak side-lobe level.
{"title":"Mismatched filter design in MIMO radar","authors":"Safak Bilgi Akdemir, Ç. Candan","doi":"10.1109/SIU.2012.6204508","DOIUrl":"https://doi.org/10.1109/SIU.2012.6204508","url":null,"abstract":"It was shown in the literature that MIMO radar can resolve much more targets in the angle than phased array radars. In order to resolve targets close to each other in range, waveforms, whose side-lobes of the autocorrelation function are low, are designed by using pulse compression techniques or mismatched filters are used at the receiver. In this paper, an integrated side lobe level filter which was originally developed for conventional radars is adapted to MIMO radar. In addition, a new mismatched filter design procedure is developed for the reduction of peak side lobe level.","PeriodicalId":256154,"journal":{"name":"2012 20th Signal Processing and Communications Applications Conference (SIU)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127064712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new OMP technique for sparse recovery
Pub Date: 2012-04-18 | DOI: 10.1109/SIU.2012.6204606
Oguzhan Teke, A. Gürbüz, O. Arikan
Compressive Sensing (CS) theory describes how a signal that is sparse in a known basis can be reconstructed from a reduced number of measurements. In practice, however, there is a mismatch between the assumed and actual bases for several reasons, such as discretization of the parameter space or model errors. Because of this mismatch, a signal that is sparse in the actual basis is generally not sparse in the assumed basis, and current sparse reconstruction algorithms suffer performance degradation. This paper presents a novel orthogonal matching pursuit algorithm with a controlled perturbation mechanism on the basis vectors that decreases the residual norm at each iteration. Detailed simulations show the superior performance of the proposed technique.
{"title":"A new OMP technique for sparse recovery","authors":"Oguzhan Teke, A. Gürbüz, O. Arikan","doi":"10.1109/SIU.2012.6204606","DOIUrl":"https://doi.org/10.1109/SIU.2012.6204606","url":null,"abstract":"Compressive Sensing (CS) theory details how a sparsely represented signal in a known basis can be reconstructed using less number of measurements. However in reality there is a mismatch between the assumed and the actual bases due to several reasons like discritization of the parameter space or model errors. Due to this mismatch, a sparse signal in the actual basis is definitely not sparse in the assumed basis and current sparse reconstruction algorithms suffer performance degradation. This paper presents a novel orthogonal matching pursuit algorithm that has a controlled perturbation mechanism on the basis vectors, decreasing the residual norm at each iteration. Superior performance of the proposed technique is shown in detailed simulations.","PeriodicalId":256154,"journal":{"name":"2012 20th Signal Processing and Communications Applications Conference (SIU)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127404414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of the Discriminative Common Vector Approach to one sample problem
Pub Date: 2012-04-18 | DOI: 10.1109/SIU.2012.6204536
Mehmet Koç, A. Barkana
Matrix-based (2D) methods have advantages over vector-based (1D) methods: they generally have lower computational cost and higher recognition performance than their vector-based variants. In this work, a two-dimensional variation of the Discriminative Common Vector Approach (2D-DCVA) is implemented. Its performance on the single-sample (one image per person) problem is compared with the one-dimensional Discriminative Common Vector Approach (1D-DCVA) and two-dimensional Fisher Linear Discriminant Analysis (2D-FLDA) on the ORL, FERET, and YALE face databases. The proposed method achieves the best recognition performance on all databases.
{"title":"Application of the Discriminative Common Vector Approach to one sample problem","authors":"Mehmet Koç, A. Barkana","doi":"10.1109/SIU.2012.6204536","DOIUrl":"https://doi.org/10.1109/SIU.2012.6204536","url":null,"abstract":"Matrix-based (2D) methods have advantages over vector-based (1D) methods. Matrix-based methods generally have less computational costs and higher recognition performances with respect to vector-based variants. In this work a two dimensional variation of Discriminative Common Vector Approach (2D-DCVA) is implemented. The performance of the method in single image problem is compared with the one dimensional Discriminative Common Vector Approach (1D-DCVA) and the two dimensional Fisher Linear Discriminant Analysis (2D-FLDA) on ORL, FERET, and YALE face databases. The best recognition performances are achieved in all databases with the proposed method.","PeriodicalId":256154,"journal":{"name":"2012 20th Signal Processing and Communications Applications Conference (SIU)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127040269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling of snow attenuation at mobile frequencies
Pub Date: 2012-04-18 | DOI: 10.1109/SIU.2012.6204445
S. Seker, F. Kunter
Snow attenuation depends on many factors that are hard to observe, identify, or classify, and modeling it is relatively complex. There are two main classes of methods for snow attenuation prediction: empirical methods and physical methods. The physical method used in this work focuses on reproducing the physical behavior of the factors involved in the process. The attenuation caused by snow at mobile communication frequencies is simulated using a discrete propagation model. For this modeling, certain ice-crystal categories are selected for investigation: needles, plates, and branches form the three main groups, and 13 different snow-particle models in total are chosen to represent snow, with the members of each group selected according to similar physical characteristics of the ice crystals. It is found that snow attenuation is higher than rain attenuation, mainly because of differences in particle size. In the simulations, the GSM communication frequencies of 900 MHz, 1800 MHz, and 2270 MHz are used to calculate the attenuation.
{"title":"Modeling of snow attenuation at mobile frequencies","authors":"S. Seker, F. Kunter","doi":"10.1109/SIU.2012.6204445","DOIUrl":"https://doi.org/10.1109/SIU.2012.6204445","url":null,"abstract":"Snow attenuation depends on many factors which are hard to observe and identify or classify. Modeling of snow attenuation is relatively complex. There are two main classes of methods used in snow attenuation prediction: the empirical method and the physical method. Physical method which we used in this work focuses on reproducing the physical behavior of factors involved in the process. The attenuation in the frequencies of mobile communication due to snow is simulated using Discrete Propagation Model. For this modeling, certain ice-crystal categories are chosen to be investigated. Needles, plates and branches are the main 3 groups which are focused, and 13 different models of snow particles in total are chosen to represent snow. The element in each group is chosen according to similar physical characteristics of ice crystal. It was found that attenuation due to snow is higher than rain attenuation specifically due to differences in particle size. In our simulations, frequencies of GSM communication, 900MHz, 1800MHz and 2270MHz, are used for calculation of attenuation.","PeriodicalId":256154,"journal":{"name":"2012 20th Signal Processing and Communications Applications Conference (SIU)","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127335935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel generalized tensor multiplication
Pub Date: 2012-04-18 | DOI: 10.1109/SIU.2012.6204612
Can Kavaklioglu, A. Cemgil
Tensor factorization is a frequently used modelling tool for problems involving large amounts of n-way data. The Probabilistic Latent Tensor Factorization (PLTF) framework provides a probabilistic approach to the tensor factorization problem. Its iterative algorithms rely on generalized tensor multiplication operations, which consist of large numbers of arithmetic operations with similar structure. This work shows the performance improvements obtained by carrying out these independent operations on a graphics processing unit (GPU).
{"title":"Parallel generalized tensor multiplication","authors":"Can Kavaklioglu, A. Cemgil","doi":"10.1109/SIU.2012.6204612","DOIUrl":"https://doi.org/10.1109/SIU.2012.6204612","url":null,"abstract":"Tensor factorization is a frequently used modelling tool in problems involving large amounts of n-way data. Probabilistic Latent Tensor Factorization framework provides a probabilistic approach to solve the tensor factorization problem. The iterative algorithms use generalized tensor multiplication operations involving large amounts of arithmetic operations with similar structures. This work shows the performance improvements achieved by performing the independent operations on a graphical processing unit (GPU).","PeriodicalId":256154,"journal":{"name":"2012 20th Signal Processing and Communications Applications Conference (SIU)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127515954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An automated approach for registration of multisensor/multi-resolution imagery
Pub Date: 2012-04-18 | DOI: 10.1109/SIU.2012.6204776
Deniz Gerçek, D. Çesmeci, M. Güllü, A. Ertürk, S. Ertürk
In this study, an automated intensity-based method for geometric registration of multi-sensor/multi-resolution imagery acquired from the EO-1 Hyperion and IKONOS satellite platforms is proposed. The method performs an area-based transformation to register images of different spectral and spatial resolutions with high geometric accuracy. It essentially compares the intensity similarity of the two images; the position with the highest similarity score among the translated blocks gives the best match. The Chrominance Transform Operation (CTO) tested in this study as a similarity measure showed higher accuracy and performance than measures commonly used in the field, i.e. Normalized Cross-Correlation (NCC) and Mutual Information (MI).
{"title":"An automated approach for registration of multisensor/multi-resolution imagery","authors":"Deniz Gerçek, D. Çesmeci, M. Güllü, A. Ertürk, S. Ertürk","doi":"10.1109/SIU.2012.6204776","DOIUrl":"https://doi.org/10.1109/SIU.2012.6204776","url":null,"abstract":"In this study, an automated method that is based on image intensities for geometric registration of multi-sensor/multi-resolution imagery acquired from EO-1 Hyperion and IKONOS satellite platforms is proposed. Method performs an area-based transformation to register images of different spectral and spatial resolution with high geometric accuracy. Method basically compares similarity of intensities of two images. Position where there is highest similarity score in translated blocks gives the best match. Chrominance Transform Operation (CTO) that we tested in this study as a similarity measure depicted higher accuracy and performance compared to methods commonly used in the field i.e. Normalized Cross-Correlation (NCC) and Mutual Information (MI).","PeriodicalId":256154,"journal":{"name":"2012 20th Signal Processing and Communications Applications Conference (SIU)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129063250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finite-horizon online throughput maximization for an energy harvesting transmitter
Pub Date: 2012-04-18 | DOI: 10.1109/SIU.2012.6204806
Baran Tan Bacinoglu, F. M. Ozcelik, E. Uysal-Biyikoglu
An online finite-horizon throughput maximization problem is considered. Specifically, optimal power allocation for an energy-harvesting rechargeable node with two available output power levels is studied. Communication takes place over a static channel, and the rate is assumed to be a concave function of power, which implies a delay-energy efficiency tradeoff. Taking the battery state, the number of remaining slots, and the achievable rate levels into account, the optimal policy is obtained through dynamic programming. In addition, several suboptimal yet well-performing policies are proposed. Under an empirically motivated harvesting model, a policy that we call the Expected Threshold Policy is shown to achieve near-optimal performance.
{"title":"Finite-horizon online throughput maximization for an energy harvesting transmitter","authors":"Baran Tan Bacinoglu, F. M. Ozcelik, E. Uysal-Biyikoglu","doi":"10.1109/SIU.2012.6204806","DOIUrl":"https://doi.org/10.1109/SIU.2012.6204806","url":null,"abstract":"An online finite horizon throughput maximization problem is considered. Specifically, optimal power allocation of an energy harvesting rechargeable node with two different accessible output power levels is studied. Communication takes place under a static channel and rate levels are assumed to be a concave function of power, implying the delay-energy efficiency tradeoff. Taking battery state, remaining number of slots and achievable rate levels into account optimal policy is obtained through dynamic programming. In addition, several policies that are suboptimal yet good are proposed. Based on an empirically motivated harvesting model, a policy that we call Expected Threshold Policy is shown to achieve near-optimal performance.","PeriodicalId":256154,"journal":{"name":"2012 20th Signal Processing and Communications Applications Conference (SIU)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122363757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}