Interscale statistical models for wavelet-based image retrieval
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775695
S. Sarra-Nsibi, A. Benazza-Benyahia
In this paper, we are interested in image indexing in the wavelet transform domain. More precisely, the salient features of the image content correspond to the parameters of the statistical distribution model of the wavelet coefficients. The contribution of our work is twofold. First, a very versatile multivariate interscale distribution based on copula theory is chosen to model the joint distribution of homologous wavelet coefficients considered at different scales. Second, the search procedure associated with a query is accelerated through a tree-structured search in the feature space. Experimental results show that taking interscale information into account drastically improves search performance.
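A minimal sketch of this kind of pipeline, assuming a Gaussian copula linking parent and child coefficients across two adjacent scales and a KD-tree standing in for the paper's (unspecified) tree index; the library choices (PyWavelets, SciPy) and all function names below are ours, not the authors'.

```python
import numpy as np
import pywt
from scipy.spatial import cKDTree
from scipy.stats import norm

def interscale_signature(image, wavelet="db2", levels=3):
    """Crude marginal statistics per subband plus a Gaussian-copula
    correlation between homologous parent/child coefficients."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    details = coeffs[1:]                  # (cH, cV, cD) tuples, coarse to fine
    feats = []
    for lvl in range(len(details) - 1):
        for parent, child in zip(details[lvl], details[lvl + 1]):
            feats += [np.std(parent), np.std(child)]   # marginal scale proxies
            # upsample the parent band to align "homologous" coefficients
            h, w = child.shape
            p = np.kron(parent, np.ones((2, 2)))[:h, :w]
            # Gaussian copula parameter: correlation of the normal scores
            u = norm.ppf((np.argsort(np.argsort(p.ravel())) + 0.5) / p.size)
            v = norm.ppf((np.argsort(np.argsort(child.ravel())) + 0.5) / child.size)
            feats.append(np.corrcoef(u, v)[0, 1])
    return np.array(feats)

# tree-structured search over the feature space
# `database_images` and `query_image` are placeholders for your own arrays
db = np.stack([interscale_signature(img) for img in database_images])
tree = cKDTree(db)
dist, idx = tree.query(interscale_signature(query_image), k=10)
```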
{"title":"Interscale statistical models for wavelet-based image retrieval","authors":"S. Sarra-Nsibi, A. Benazza-Benyahia","doi":"10.1109/ISSPIT.2008.4775695","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775695","url":null,"abstract":"In this paper, we are interested in image indexing in the wavelet transform domain. More precisely, the salient features of the image content correspond to the parameters of the statistical distribution model of the wavelet coefficients. The contribution of our work is twofold. Firstly, a very versatile multivariate interscale distribution driven by the copula theory is chosen to model the joint distribution of the homologous wavelet coefficients considered at different scales. Secondly, the search procedure associated with any request is accelerated through a tree structured search in the features space. Experimental results show that considering interscale information drastically improves the search performances.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132211321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent Decoding for Mean Quantization Based Audio Watermarking in the Wavelet Transform Domain
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775730
N. Kalantari, S. Ahadi
In this paper, a robust audio watermarking system using mean quantization in the wavelet transform domain is proposed. Since the data is embedded in both the low- and high-frequency bands, selecting the correct result from these two bands is very important. An intelligent decoder based on a two-stage multilayer perceptron (MLP) neural network is proposed. Using this scheme, the attack is detected during the decoding process and the decoder is adapted to that attack so that the watermark data is extracted correctly. Simulation results show that, compared with the previous scheme, the intelligent decoder improves detection performance after common attacks such as lowpass and highpass filtering, MP3 compression, echo, resampling, and amplification.
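A minimal sketch of mean-quantization (QIM-on-the-mean) embedding in the wavelet domain, covering only the basic embed/extract step, not the paper's MLP attack-detection stage; the quantization step `delta`, block size, and band choice are illustrative assumptions, and `audio` and `bits` are placeholders.

```python
import numpy as np
import pywt

def embed_bit(block, bit, delta=0.5):
    """Shift a coefficient block so its mean lands on the lattice for `bit`:
    even multiples of delta encode 0, odd multiples encode 1."""
    m = np.mean(block)
    q = delta * (2 * np.round((m - bit * delta) / (2 * delta)) + bit)
    return block + (q - m)

def extract_bit(block, delta=0.5):
    # parity of the nearest lattice index recovers the bit (absent attacks)
    return int(np.round(np.mean(block) / delta)) % 2

# embed a bit stream into the level-2 approximation band of an audio signal
coeffs = pywt.wavedec(audio, "db4", level=2)
band = coeffs[0].copy()
for i, bit in enumerate(bits):
    band[8 * i: 8 * (i + 1)] = embed_bit(band[8 * i: 8 * (i + 1)], bit)
coeffs[0] = band
watermarked = pywt.waverec(coeffs, "db4")
```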
{"title":"Intelligent Decoding for Mean Quantization Based Audio Watermarking in the Wavelet Transform Domain","authors":"N. Kalantari, S. Ahadi","doi":"10.1109/ISSPIT.2008.4775730","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775730","url":null,"abstract":"In this paper, a robust audio watermarking system, using mean quantization in the wavelet transform domain, has been proposed. Since the data is embedded in both the low and high frequency bands, selection of the correct result from these two bands is very important. In this paper, an intelligent decoder using two stage multi layer perceptron (MLP) neural network is proposed. Using this scheme, the attack is detected during the decoding process and the decoder is adapted to the same attack in order to extract the watermark data correctly. The simulation results show that using the intelligent decoder, in comparison to the previous scheme, the performance of detection after common attacks, such as lowpass, MP3 compression, highpass, echo, resampling, amplifying etc, is increased.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133557831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Notice of Violation of IEEE Publication Principles: Integration of Spatial Data with Business Intelligence Systems
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775644
M. Dzenana, F. Orucevic
This paper proposes and discusses a GIS that integrates geographic resources with business intelligence data from SAP, TIS, and other systems. Location is a unifying theme in business: a location can be an address, a service boundary, a sales territory, or a delivery route, and all of these can be visualized, interactively managed, and analyzed in a GIS. Spatial relationships, patterns, and trends reveal invaluable business intelligence and bring easy-to-understand visualization to business applications. A GIS can answer basic or sophisticated questions about how to improve business workflow, management, and ROI (return on investment). To integrate a GIS with other business systems and processes, spatial and non-spatial data must be treated as "no different". We present a GIS application that demonstrates this concept by integrating data from different systems: traditional GIS data, TIS, and SAP. This data integration enables easier and more effective control of telecommunication networks, better supervision and resolution of failures in telecommunication system elements, access networks, and network features, and better control of the systems offered to end users; the goal is faster, higher-quality handling of user requests. The solutions presented in this paper are based on OpenGIS standards and on interfaces to geospatial information providers, Oracle Database, and Oracle Application Server.
{"title":"Notice of Violation of IEEE Publication PrinciplesIntegration of Spatial Data with Business Intelligence Systems","authors":"M. Dzenana, F. Orucevic","doi":"10.1109/ISSPIT.2008.4775644","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775644","url":null,"abstract":"This paper proposes and discusses a GIS that integrates geographic resources with business intelligence data from SAP, TIS and other systems. Location is a unifying theme in business. Location can be an address, a service boundary, a sales territory, or a delivery route. All these things can be visualized and interactively managed and analyzed in a GIS. Spatial relationships, patterns and trends reveal invaluable business intelligence and bring easy-to-understand visualization to business applications. Use GIS to answer basic or sophisticated question on how to improve businesses workflow, management, and ROI (Return on Invest). To be able to integrate GIS system with other business systems and processes, it is necessary that spatial and non-spatial data are \"no different\". In presentation I will present GIS application which demonstrates concept of integrating data and systems with spatial and non spatial data. The application integrates data from different systems: traditional GIS data, TIS and SAP. Data integration enabling easier and more effective control of telecommunication networks, supervising and effect solutions of failures on element telecommunications systems, access networks and networks features, better controlling systems, witch we afford to end users. Goal is faster and more quality solutions of user's request. Present solutions in this paper based on OpenGIS standards and interfaces to geospatial information providers, Oracle Database and Application Server.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133202032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bspline based Wavelets with Lifting Implementation
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775689
Gamal Fahmy
B-spline mathematical functions have long been utilized for signal representation, but only recently have they been used for signal interpolation and zooming. B-splines can represent the next generation of wavelets for signal/image compression and multi-resolution analysis, because they are flexible and provide the best cost/quality trade-off. By changing the B-spline order, we move from a linear representation to a high-order band-limited representation. B-splines are also linked to differentials, as they are the exact mathematical translators between the discrete and continuous versions of the coefficients. In this paper we propose a novel technique for signal/image decomposition, analysis, synthesis, and reconstruction based on B-spline functions. A mathematical explanation and derivation of the proposed B-spline prediction is given. We also present a lifting-based implementation of the proposed B-spline image coder and measure its effect on compression quality. Extensive simulation results, carried out with the proposed approach on different classes of images and different B-spline orders, are presented.
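A minimal sketch of one lifting stage using the linear-B-spline (LeGall 5/3) predict/update pair, with periodic boundary handling; the paper's higher-order B-spline predictors would replace the simple two-neighbour average below.

```python
import numpy as np

def lifting_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # predict: a linear B-spline estimates each odd sample from its even
    # neighbours; the prediction residual is the detail signal
    d = odd - 0.5 * (even + np.roll(even, -1))
    # update: correct the even samples so the approximation keeps the mean
    a = even + 0.25 * (d + np.roll(d, 1))
    return a, d

def lifting_inverse(a, d):
    # run the same two steps backwards for perfect reconstruction
    even = a - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.randn(64)                      # even-length test signal
a, d = lifting_forward(x)
assert np.allclose(lifting_inverse(a, d), x)  # perfect reconstruction
```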
{"title":"Bspline based Wavelets with Lifting Implementation","authors":"Gamal Fahmy","doi":"10.1109/ISSPIT.2008.4775689","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775689","url":null,"abstract":"The Bspline mathematical functions have long been utilized for signal representation. However they have been just recently been used for signal interpolation and zooming. Bsplines can represent the next generation of wavelets for signal/image compression and multi-resolution analysis. This is due to the fact that they are flexible and provide the best cost/quality trade off relationship. By changing the Bspline function order we move from a linear representation to a high order band limited representation. Bsplines are also linked to differentials, as they are the exact mathematical translators between the discrete and continuous versions of the coefficients. In this paper we propose a novel technique for signal/image decomposition, analysis, synthesis and reconstruction based on the Bspline mathematical functions. Mathematical explanation and derivation for the proposed Bspline prediction is analyzed. We also present a lifting based implementation for the proposed Bspline image coder and measure its effect on the compression quality. Extensive simulation results, which have been carried out with the proposed approach on different classes of images with different B-spline orders, are illustrated.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115436556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting Anomalies by Benford's Law
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775660
Z. Jasak, L. Banjanović-Mehmedović
Benford's law gives the expected patterns of the digits in numerical data. It can be used as a tool to detect outliers, for example as a test for the authenticity and reliability of transaction-level accounting data. Based on Benford's law, tests for the first two digits, first three digits, last digit, and last two digits have been derived as additional analytical tools. Benford's law is also known as the 'first digit law', 'digit analysis', or the 'Benford-Newcomb phenomenon'. The second-order test is an analysis of the digit frequencies of the differences between the ordered (ranked) values in a data set; these digit frequencies approximate those of Benford's law for most distributions of the original data. From an auditor's point of view, it is very important whether Benford's law can be used to trace significant changes in data over a period of time or to detect trends within them.
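A minimal sketch of a first-digit Benford test plus the second-order test described above (digit frequencies of the differences of the ranked data); the chi-square goodness-of-fit screening is a standard choice of ours, not necessarily the authors'. `transactions` is a placeholder for your data.

```python
import numpy as np
from scipy.stats import chisquare

# Benford probabilities P(first digit = d) = log10(1 + 1/d), d = 1..9
BENFORD = np.log10(1 + 1 / np.arange(1, 10))

def first_digits(x):
    x = np.abs(np.asarray(x, dtype=float))
    x = x[x > 0]
    # scale every value into [1, 10) and truncate to its leading digit
    return (x / 10 ** np.floor(np.log10(x))).astype(int)

def benford_test(x):
    obs = np.bincount(first_digits(x), minlength=10)[1:]
    return chisquare(obs, f_exp=BENFORD * obs.sum())

def second_order_test(x):
    # differences between consecutive ranked values, then Benford-test them
    diffs = np.diff(np.sort(np.asarray(x, dtype=float)))
    return benford_test(diffs[diffs > 0])

stat, p = second_order_test(transactions)
print(f"second-order chi-square = {stat:.1f}, p = {p:.3f}")
```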
{"title":"Detecting Anomalies by Benford's Law","authors":"Z. Jasak, L. Banjanović-Mehmedović","doi":"10.1109/ISSPIT.2008.4775660","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775660","url":null,"abstract":"Benford's law gives expected patterns of the digits in numerical data. It can be used as a tool to detect outliers for example in as a test for the authenticity and reliability of transaction level accounting data. Based on Benford's law tests for first two digits, first three digits, last digits, last digit, last two digits have been derived as an additional analytical tool. Benford's law is known as a 'first digit law', 'digit analysis' or 'Benford-Newcomb phenomenon'. The second order test is an analysis of the digit frequencies of the differences between the ordered (ranked) values in a data set. The digit frequencies of these differences approximates the frequencies of Benford's law for most distributions of the original data. From some auditor's point of view it is very important if it is possible to use Benford's law to trace siginificant changes in data in some period of time or detect some trends within them.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132453237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Localization of Sound Source in 3D Space
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775714
Petar Hinich, Sasa Hinich, Svetko Milutinovich
In the work presented here, we use level-difference features to study sound source localization in three dimensions. The result is regarded as a regression of the mean sensor signal S̄_i on the distance R_i. The mean-signal measurements from four non-coplanar sound detectors define a unique solution for the 3D location of the sound source. We use a geometric method and an algebraic proof to fully present the result.
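A minimal sketch of the estimation step, assuming an inverse-distance level model S̄_i = K / R_i for the regression of mean signal on distance; the model form, the microphone coordinates, and the measured levels below are illustrative assumptions, with four non-coplanar sensors pinning down the source.

```python
import numpy as np
from scipy.optimize import least_squares

# four non-coplanar microphone positions (illustrative)
mics = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
S = np.array([1.9, 1.1, 1.1, 1.1])  # measured mean levels (illustrative)

def residuals(params):
    pos, K = params[:3], params[3]
    R = np.linalg.norm(mics - pos, axis=1)   # distances to each sensor
    return S - K / R                         # inverse-distance level model

# solve jointly for the 3D source position and the source strength K
sol = least_squares(residuals, x0=[0.2, 0.2, 0.2, 1.0])
print("estimated source position:", sol.x[:3])
```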
{"title":"Localization of Sound Source in 3D Space","authors":"Petar Hinich, Sasa Hinich, Svetko Milutinovich","doi":"10.1109/ISSPIT.2008.4775714","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775714","url":null,"abstract":"We use level differences features to study sound source localization in three dimensions in work presented here. It regarded this result as a regression of sensors mean signal Scappedi on the distance Ri. All the sensors mean signal measures from four non-coplanar sound detectors define a unique solution for the 3D location of the sound source.. We have used a geometric method and algebraic proof to fully present result\".","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"219 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131817150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion Blur Identification Using Image derivative
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775692
N. Norouzi, M. Moghaddam
Motion blur is one of the most common blurs that degrade images. To restore such images, a good estimate of the motion blur parameters is necessary. In this paper, we propose a precise algorithm to estimate the blur extent of linear and nonlinear motion blur in grayscale images. To estimate the blur extent, the images are first rotated so that the motion blur lies in the horizontal direction. The proposed algorithm is based on analyzing the image derivative along the motion direction and processing the areas of the blurred image around an edge. The experimental results were very good for both noiseless and noisy images.
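A minimal sketch of a derivative-based extent estimate in this spirit: the image is rotated so the blur is horizontal (as in the paper), and the lag of the strongest negative lobe of the derivative autocorrelation is read off as the blur length. Using the autocorrelation minimum is our assumption about the estimator, not the authors' exact procedure; `blurred` is a placeholder for your degraded image.

```python
import numpy as np
from scipy.ndimage import rotate

def blur_extent(image, angle_deg):
    # rotate so the motion direction becomes horizontal
    img = rotate(image.astype(float), -angle_deg, reshape=False)
    d = np.diff(img, axis=1)              # derivative along motion direction
    d -= d.mean()
    # average the row-wise autocorrelation of the derivative
    ac = np.mean([np.correlate(row, row, mode="full") for row in d], axis=0)
    ac = ac[ac.size // 2:]                # keep non-negative lags only
    # for uniform motion blur the autocorrelation dips at the blur length
    return int(np.argmin(ac))

print("estimated blur extent (pixels):", blur_extent(blurred, angle_deg=30))
```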
{"title":"Motion Blur Identification Using Image derivative","authors":"N. Norouzi, M. Moghaddam","doi":"10.1109/ISSPIT.2008.4775692","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775692","url":null,"abstract":"Motion blur is one of the most common blurs that degrades images. To restore such images, a good estimation of motion blur parameters is necessary. In this paper, we have proposed a precise algorithm to estimate blur extent in linear and non linear motion blur for gray scale images. To estimate blur extent, the images were rotated such that motion blur was in horizontal direction. The proposed algorithm is based on analysis of image derivative along motion direction and processing the areas of the blurred image around an edge. The experimental results were great in the case of noiseless and noisy images.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128738440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic On-Line Allocation of Independent Task onto Heterogeneous Computing Systems to Maximize Load Balancing
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775659
A. Khalifa, T. Fergany, R. Ammar, M. Tolba
Heterogeneous computing (HC) systems use different types of machines, networks, and interfaces to coordinate the execution of task components with different computational requirements. This variation in task requirements and machine capabilities creates a strong need for mapping techniques that decide which task should be moved where, and when, to optimize some system performance criterion. Existing dynamic heuristics for mapping tasks in HC systems work either on-line (immediate) or in batch mode. In batch mode, tasks are collected into a set that is examined for mapping at prescheduled times called mapping events; in contrast, on-line algorithms map a task onto a machine as soon as it arrives at the mapper. In this paper, we propose an on-line mapping algorithm called maximum load balance (MLB). It tries to minimize the makespan by maximizing the load balance of the target system: at each task arrival, the MLB algorithm examines all the machines in the HC suite one by one, looking for the one that gives the maximum system balance among all possible mappings. In contrast with the opportunistic load balancing (OLB) heuristic, which assigns a task to the machine that becomes ready next, MLB takes into consideration both the availability of the machine and the execution time of the task on that machine.
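A minimal sketch of the MLB rule as described above: on each arrival, try every machine and keep the assignment whose projected finish times are most balanced. Using the variance of projected finish times as the balance metric is our choice; the paper's exact metric may differ.

```python
import numpy as np

def mlb_assign(avail, etc_row, now=0.0):
    """avail: projected ready time per machine (mutated in place);
    etc_row: expected time to compute the arriving task on each machine."""
    best_m, best_var = None, np.inf
    for m in range(len(avail)):
        trial = avail.copy()
        # completion time = machine availability + execution time (per MLB)
        trial[m] = max(trial[m], now) + etc_row[m]
        var = np.var(trial)                # balance metric (our assumption)
        if var < best_var:
            best_m, best_var = m, var
    avail[best_m] = max(avail[best_m], now) + etc_row[best_m]
    return best_m

avail = np.zeros(4)                                     # four idle machines
for etc_row in np.random.uniform(1, 10, size=(20, 4)):  # 20 arriving tasks
    mlb_assign(avail, etc_row)
print("makespan:", avail.max())
```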
{"title":"Dynamic On-Line Allocation of Independent Task onto Heterogeneous Computing Systems to Maximize Load Balancing","authors":"A. Khalifa, T. Fergany, R. Ammar, M. Tolba","doi":"10.1109/ISSPIT.2008.4775659","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775659","url":null,"abstract":"Heterogeneous computing (HC) systems use different types of machines, networks, and interfaces to coordinate the execution of various task components which have different computational requirements. This variation in tasks requirements as well as machine capabilities has created a very strong need for developing mapping techniques to decide on which task should be moved to where and when, to optimize some system performance criteria. The existing dynamic heuristics for mapping tasks in HC systems works either on-line (immediate) or in batch mode. In batch mode, tasks are collected into a set that is examined for mapping at prescheduled times called mapping events. On contrast, on-line mode algorithms map a task onto a machine as soon as it arrives at the mapper. In this paper, we propose an on-line mapping algorithm which is called the maximum load balance, or for short the MLB. It tries to minimize the makespan by maximizing the load balancing of the target system. At each task arrival, the MLB algorithm examines all the machines in the HC suite one by one looking for the one that gives the maximum system balance among all possible mappings. In contrast with the opportunistic load balancing (OLB) heuristic; which assigns a task to the machine that becomes ready next, the MLB takes into consideration both the availability of the machine as well as the execution time of the task on that machine.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133556304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face Recognition using Dual-Tree Wavelet Transform
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775675
Alaa Eleyan, H. Demirel, H. Ozkaramanli
This paper introduces a face recognition method based on the dual-tree complex wavelet transform (DT-CWT), which is used to extract features from face images. The DT-CWT uses kernels similar to those of Gabor wavelets and is a computationally cheaper way of extracting Gabor-like features. Principal component analysis (PCA), a linear dimensionality-reduction technique that represents the data in a lower-dimensional space, is then used to perform the recognition. The results demonstrate that using the DT-CWT in the preprocessing phase and applying PCA to the DT-CWT features, instead of to raw face images, improves recognition performance.
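A minimal sketch of such a pipeline: DT-CWT subband magnitudes as features, then PCA plus nearest-neighbour matching. We assume the third-party `dtcwt` package and scikit-learn, neither of which is prescribed by the paper; `train_faces` and `test_face` are placeholders for equally sized, aligned face crops.

```python
import numpy as np
import dtcwt
from sklearn.decomposition import PCA

def dtcwt_features(image, nlevels=3):
    pyramid = dtcwt.Transform2d().forward(image.astype(float), nlevels=nlevels)
    # magnitudes of the six oriented complex subbands at each level
    return np.concatenate([np.abs(h).ravel() for h in pyramid.highpasses])

# build the gallery in PCA space
X = np.stack([dtcwt_features(f) for f in train_faces])
pca = PCA(n_components=50).fit(X)
gallery = pca.transform(X)

# classify a probe face by its nearest gallery neighbour
probe = pca.transform(dtcwt_features(test_face)[None, :])
match = np.argmin(np.linalg.norm(gallery - probe, axis=1))
print("best match: gallery image", match)
```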
{"title":"Face Recognition using Dual-Tree Wavelet Transform","authors":"Alaa Eleyan, H. Demirel, H. Ozkaramanli","doi":"10.1109/ISSPIT.2008.4775675","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775675","url":null,"abstract":"This paper introduces a face recognition method based on the Dual-Tree Complex Wavelet Transform (DT-CWT), which is used to extract features from face images. DT-CWT uses similar kernels with Gabor wavelets and is a computationally cheaper way of extracting Gabor-like features. Principal Component Analysis (PCA) which is a linear dimensionality reduction technique, that attempts to represent data in lower dimensions, is used to perform the face recognition. The results demonstrate that using DT-CWT in the preprocessing phase and then applying PCA on the features extracted from the DT-CWT instead of raw face images, improves the recognition performance.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131246880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-scale Edge Detection Using Undecimated Wavelet Transform
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775721
V. Kitanovski, D. Taskovski, L. Panovski
This paper presents a multi-scale edge detection method using an undecimated Haar wavelet transform. The undecimated transform improves the localization of detected edges compared to the classical, decimated Haar wavelet transform. The presented method tracks edges that exist at several dyadic scales, favoring edges at larger scales. Edge points are obtained by non-maximum suppression in four possible directions, combined with hysteresis thresholding. Experimental results show that the method is competitive with classical edge detection methods: the multi-scale approach brings robustness to noise, while the redundancy of the undecimated transform ensures good edge localization.
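A minimal sketch of the multi-scale step only: undecimated (à trous) Haar details at dyadic scales, combined by scale multiplication so that edges present at several scales survive. The non-maximum suppression and hysteresis stages of the paper are omitted, and the crude global threshold is our placeholder.

```python
import numpy as np
from scipy.ndimage import convolve1d

def atrous_haar_details(image, nscales=3):
    """Gradient-magnitude maps from Haar difference kernels dilated by 2**j
    (à trous scheme: no decimation, so every scale stays full-resolution)."""
    img = image.astype(float)
    details = []
    for j in range(nscales):
        k = np.zeros(2 ** j + 1)
        k[0], k[-1] = 1.0, -1.0            # Haar [1, -1] dilated by 2**j
        gx = convolve1d(img, k, axis=1, mode="nearest")
        gy = convolve1d(img, k, axis=0, mode="nearest")
        details.append(np.hypot(gx, gy))
    return details

def multiscale_edges(image, nscales=3):
    # scale multiplication: only responses present at all scales remain large
    prod = np.prod(atrous_haar_details(image, nscales), axis=0)
    return prod > prod.mean() + 2 * prod.std()
```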
{"title":"Multi-scale Edge Detection Using Undecimated Wavelet Transform","authors":"V. Kitanovski, D. Taskovski, L. Panovski","doi":"10.1109/ISSPIT.2008.4775721","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775721","url":null,"abstract":"This paper presents multi-scale edge detection method using an undecimated Haar wavelet transform. The use of undecimated transform improves the localization of detected edges when compared to the classical, decimated Haar wavelet transform. The presented method tracks for edges that exist at several dyadic scales, favoring edges at larger scales. Edge points are obtained by non-maximum suppression in four possible directions, combined with hysteresis thresholding. The experimental results show that this method is competitive to classical edge detection methods. This multi-scale approach brings robustness to noise, while the redundancy from the undecimated transform ensures good edge localization.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134013861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}