Pub Date: 2013-12-01 | DOI: 10.1109/RAICS.2013.6745495
V. Subashini, S. Poornachandra, M. Ramakrishnan
This paper presents a fragile watermarking algorithm for biometric fingerprint images based on the run-length pattern of pixels. The algorithm works on the binary form of fingerprint images. The carrier image is converted into a one-dimensional vector and the run lengths of the pixels are computed. The run-length vector is then split into overlapping vector patterns of length three, where the middle element of each pattern is considered for watermark embedding. The watermark may be text data or a binary image. The new scheme has a good data-embedding capacity. The paper also discusses the means to extract the embedded watermark.
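The abstract does not give the exact embedding rule, but the preprocessing it describes — flattening a binary image to a 1-D vector, computing pixel run lengths, and forming overlapping length-3 windows — can be sketched as follows (a minimal illustration, not the paper's implementation):

```python
import numpy as np

def run_lengths(binary_image):
    """Flatten a binary image to 1-D and return its (value, length) runs."""
    v = np.asarray(binary_image).ravel()
    # Indices where the pixel value changes mark run boundaries.
    boundaries = np.flatnonzero(np.diff(v)) + 1
    starts = np.concatenate(([0], boundaries))
    ends = np.concatenate((boundaries, [v.size]))
    return [(int(v[s]), int(e - s)) for s, e in zip(starts, ends)]

def overlapping_triplets(runs):
    """Split the run-length vector into overlapping windows of length 3;
    the middle entry of each window would be the embedding site."""
    lengths = [length for _, length in runs]
    return [lengths[i:i + 3] for i in range(len(lengths) - 2)]

img = np.array([[1, 1, 0, 0], [0, 1, 1, 1]])
runs = run_lengths(img)
print(runs)                        # → [(1, 2), (0, 3), (1, 3)]
print(overlapping_triplets(runs))  # → [[2, 3, 3]]
```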
Title: A fragile watermarking technique for fingerprint protection
Published in: 2013 IEEE Recent Advances in Intelligent Computational Systems (RAICS)
Pub Date: 2013-12-01 | DOI: 10.1109/RAICS.2013.6745450
R. Menon, Shruthi S. Nair, K. Srindhya, M. D. Kaimal
Over the past few decades, machine learning algorithms have evolved continuously. This is an era of big data, generated by applications in fields such as medicine, the World Wide Web, e-learning and networking. We therefore still need more efficient algorithms that are computationally cost-effective and thereby produce results faster. Sparse representation of data is one giant leap toward a solution for big data analysis. The focus of our paper is on algorithms for sparsity-based representation of categorical data. For this, we adopt a concept from the image and signal processing domain called dictionary learning. We implement its sparse coding stage, which gives the sparse representation of the data, using Orthogonal Matching Pursuit (OMP) algorithms (both Batch and Cholesky based), and its dictionary update stage using the Singular Value Decomposition (SVD). We also use a preprocessing stage in which the categorical dataset is represented by a vector space model based on the TF-IDF weighting scheme. Our paper demonstrates how input data can be decomposed and approximated as a linear combination of a minimal number of elementary columns of a dictionary, which then forms a compact representation of the data. Classification or clustering can then be performed on the resulting sparse coefficient matrix or on the dictionary itself. We also compare the dictionary learning algorithm under the different OMP variants. The algorithms are analysed and results are demonstrated on synthetic tests and real data.
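The Batch/Cholesky OMP variants and the SVD dictionary update are not reproduced here, but the core sparse coding step the abstract relies on — greedily selecting dictionary atoms and re-fitting by least squares — is textbook OMP, sketched below (the dictionary and signal are synthetic examples, not the paper's data):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: approximate y with at most k columns of D.

    D : (m, n) dictionary with unit-norm columns
    y : (m,) signal
    Returns a sparse coefficient vector x with at most k nonzeros.
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Greedy step: pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Projection step: least-squares fit on the chosen atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.normal(size=(20, 20)))  # orthonormal toy dictionary
y = 2.0 * D[:, 3] - 1.5 * D[:, 7]               # 2-sparse ground truth
x = omp(D, y, k=2)
print(np.flatnonzero(x))                        # → [3 7]
```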
Title: Sparsity-based representation for categorical data
Pub Date: 2013-12-01 | DOI: 10.1109/RAICS.2013.6745454
Santanu Halder, A. Hasnat, D. Bhattacharjee, M. Nasipuri
This paper proposes a novel approach to colorizing a gray scale facial image from a selected reference face using a patch matching technique. The colorization methodology is applied to facial images because of their extensive use in important fields such as archaeology, entertainment and law enforcement. The experiment was conducted on 150 male and female facial images collected from different face databases, and the results were found satisfactory. The proposed methodology was implemented in Matlab 7.
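The abstract does not state the matching criterion; the usual patch-matching baseline is an exhaustive sum-of-squared-differences search over the reference image, sketched here as a minimal illustration (not the paper's method, and on intensities rather than a full color transfer):

```python
import numpy as np

def best_patch(target_patch, reference, size):
    """Return the top-left position of the reference patch whose
    intensities best match target_patch under the SSD criterion."""
    h, w = reference.shape
    best, best_pos = np.inf, (0, 0)
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            cand = reference[i:i + size, j:j + size]
            ssd = np.sum((cand - target_patch) ** 2)
            if ssd < best:
                best, best_pos = ssd, (i, j)
    return best_pos

ref = np.arange(36, dtype=float).reshape(6, 6)
patch = ref[2:4, 3:5].copy()      # take a known 2x2 patch as the query
print(best_patch(patch, ref, 2))  # → (2, 3)
```

In a colorization pipeline, the chrominance of the best-matching reference patch would then be transferred to the gray scale target patch.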
Title: A proposed system for colorization of a gray scale facial image using patch matching technique
Pub Date: 2013-12-01 | DOI: 10.1109/RAICS.2013.6745477
H. Ramachandran, G. R. Bindu
Wireless power transfer systems are in vogue today and are widely used in diverse fields such as mobile charging systems, medical implants and powering utility devices in smart homes. Magnetic resonance coupling between source and load resonators has been demonstrated as a potential means of non-radiative power transfer. In this paper, the basic circuit for a single source and receiver geometry is discussed and extended to describe a system with a source resonator pair and a receiver resonator pair. Wireless power transfer to a light load is experimentally demonstrated using a source coil and a receiving coil wound from 21 SWG copper wire. The system is extended to a source coil powering a source resonator and a receiver resonator powering a load coil. The near electromagnetic field of the wireless power transfer system is used to ionize mercury vapour, lighting a fluorescent tube without the aid of wires.
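Such systems hinge on tuning the source and receiver coils to a common resonant frequency. For reference, the standard circuit relations (not formulas stated by the paper) are:

```latex
% Resonant frequency of each LC resonator
% (coil inductance L, tuning capacitance C):
f_0 = \frac{1}{2\pi\sqrt{LC}}

% Coupling coefficient between two coils with mutual inductance M:
k = \frac{M}{\sqrt{L_1 L_2}}
```

Efficient mid-range transfer occurs when both resonators share the same $f_0$ and the coupling $k$ is sufficient for the operating distance.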
Title: Wireless powering of utility equipments in a smart home using magnetic resonance
Pub Date: 2013-12-01 | DOI: 10.1109/RAICS.2013.6745466
P. Muneer, C. P. Najlah, S. Sameer
In this paper, we propose a novel joint estimation technique for the carrier frequency offsets (CFOs) and time-varying frequency selective (doubly selective) channels of highly mobile users in an orthogonal frequency division multiple access (OFDMA) uplink system. To avoid the identifiability problem in tracking doubly selective channels (DSCs), we incorporate the idea of the basis expansion model (BEM) with complex exponential (CE) basis functions. With the aid of the CE-BEM, we need to estimate only the basis expansion coefficients instead of the actual impulse responses of the channels. Our proposed scheme makes use of a line search method based on the minimum mean square error (MMSE) criterion. The complete set of parameters, which includes the CFO and basis coefficients for all users, is updated in each iteration by minimizing the error between successive iterations. Simulation studies demonstrate that the proposed method converges faster and achieves better estimation performance even at high mobile speeds.
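The joint CFO/channel line search itself is not reproduced here, but the CE-BEM idea the scheme rests on — representing a time-varying channel tap by a handful of complex-exponential basis coefficients rather than by all N time samples — can be sketched as follows (the dimensions and the least-squares fit are illustrative assumptions):

```python
import numpy as np

def cbem_basis(N, Q):
    """Complex-exponential BEM basis: N time samples, Q+1 basis functions
    at the discrete frequencies -Q/2, ..., Q/2."""
    n = np.arange(N)[:, None]
    q = np.arange(-(Q // 2), Q // 2 + 1)[None, :]
    return np.exp(2j * np.pi * q * n / N)        # shape (N, Q+1)

# Fit BEM coefficients to one synthetic time-varying channel tap.
N, Q = 64, 4
B = cbem_basis(N, Q)
rng = np.random.default_rng(1)
c_true = rng.normal(size=Q + 1) + 1j * rng.normal(size=Q + 1)
h = B @ c_true                                   # doubly selective tap over time
c_hat, *_ = np.linalg.lstsq(B, h, rcond=None)    # only Q+1 unknowns, not N
print(np.allclose(c_hat, c_true))                # → True
```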
Title: Joint estimation of carrier frequency offsets and doubly selective channel for OFDMA uplink using line search
Pub Date: 2013-12-01 | DOI: 10.1109/RAICS.2013.6745455
Waquar Ahmad, S. Satyavolu, R. Hegde, H. Karnick
In this paper, a novel approach to online cohort selection is proposed that combines the cohort sets obtained using the acoustic universal structure (AUS) and speaker-specific cohort selection (SSCS). To obtain the cohort set using AUS, a confusion matrix is first generated from the distance between the structure of the test utterance and the AUS of each speaker. The confusion matrix is normalized using the iterative proportional fitting (IPF) method. The normalized confusion matrix, together with a simple distance metric, is used to select a similarity-based cohort set for each client speaker. A similar procedure yields the cohort set under the SSCS method. The two cohort sets are combined into a single cohort set, from which normalization statistics are computed and used in the final scoring to authenticate the claimed speaker identity. Speaker verification experiments on the NIST 2002 SRE, NIST 2004 SRE and the YORO database show reasonable improvement over conventional speaker verification techniques in terms of equal error rate and decision cost function values.
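The IPF normalization step mentioned above has a standard generic form: alternately rescale the rows and columns of a positive matrix until its marginals match target sums. A minimal sketch (the uniform targets and iteration count are assumptions, not values from the paper):

```python
import numpy as np

def ipf(matrix, row_targets, col_targets, iters=100):
    """Iterative proportional fitting: rescale rows, then columns,
    until the matrix marginals match the given target sums."""
    M = matrix.astype(float).copy()
    for _ in range(iters):
        M *= (row_targets / M.sum(axis=1))[:, None]   # match row sums
        M *= (col_targets / M.sum(axis=0))[None, :]   # match column sums
    return M

C = np.array([[5.0, 1.0], [2.0, 4.0]])                # toy confusion matrix
M = ipf(C, row_targets=np.array([1.0, 1.0]),
        col_targets=np.array([1.0, 1.0]))
print(M.sum(axis=0), M.sum(axis=1))                   # both ≈ [1. 1.]
```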
Title: A combined approach to speaker authentication using claimant-specific acoustic universal structures
Pub Date: 2013-12-01 | DOI: 10.1109/RAICS.2013.6745480
P. S. Saikrishna, R. Pasumarthy
Cloud computing is an emerging technology. From a distributed computing perspective, the cloud is similar to client-server services such as web-based services, and it uses virtualized resources for execution. The widespread use of internet technology has focused attention on quality of service, especially the response time experienced by the end user. We demonstrate how traditional web hosting degrades under time-varying user requests, directly affecting response time. We then show how this issue can be addressed for a web server hosted on a cloud, using control algorithms for load balancing and elasticity control developed to keep the desired response time within acceptable limits. Our experimental setup hosts a web server on an open-source Eucalyptus cloud platform. To evaluate the control system's performance, we use the web server benchmarking tool httperf, with autobench automating the benchmarking process.
Title: Automated control of webserver performance in a cloud environment
Pub Date: 2013-12-01 | DOI: 10.1109/RAICS.2013.6745445
Omkar Abhishek, S. N. George, P. Deepthi
In this paper, compressive sensing is combined with chaotic-key-based generation of the measurement matrix to provide an effective encryption algorithm for multimedia security. Block-based compressive sensing improves image and video transmission by reducing memory requirements and complexity, whereas multiple hypothesis prediction offers a competent way to improve PSNR when reconstructing block-based compressively sensed images and videos. The measurement matrix Φ plays a crucial role both in compressive sensing and in the reconstruction process. Generating a secure measurement matrix using a piecewise linear chaotic map (PWLCM), and hiding the initial condition, system parameter and number of iterations of the PWLCM as the key, enables the sender to incorporate encryption along with compression in a single step. The scheme provides a high level of data security, reduced complexity and compression with good reconstruction quality; in addition, it removes the burden of sending the measurement matrix along with the data, which further reduces the complexity of the overall compressive sensing framework.
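The PWLCM itself has a standard definition, and the key idea of filling a measurement matrix from its orbit can be sketched as below. The seed values, burn-in length and zero-mean rescaling here are illustrative assumptions; the paper's exact matrix construction is not given in the abstract:

```python
import numpy as np

def pwlcm(x, p):
    """One iteration of the piecewise linear chaotic map, parameter p in (0, 0.5)."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)       # the map is symmetric about 0.5

def measurement_matrix(m, n, x0=0.37, p=0.29, burn_in=100):
    """Fill an m x n measurement matrix from a PWLCM orbit keyed by (x0, p)."""
    x = x0
    for _ in range(burn_in):       # discard the transient so the orbit is mixed
        x = pwlcm(x, p)
    vals = np.empty(m * n)
    for i in range(m * n):
        x = pwlcm(x, p)
        vals[i] = x
    # Shift and scale to zero-mean entries, as is common for sensing matrices.
    return (vals.reshape(m, n) - 0.5) * 2.0

Phi = measurement_matrix(4, 16)
print(Phi.shape)                   # → (4, 16)
```

Because the same (x0, p, burn_in) key regenerates Φ exactly, the receiver never needs the matrix transmitted alongside the data.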
Title: PWLCM based image encryption through compressive sensing
Pub Date: 2013-12-01 | DOI: 10.1109/RAICS.2013.6745491
S. Varma, M. Sreeraj
Object detection and tracking in a surveillance system is inevitable in the present scenario, as it is not possible for a person to continuously monitor video streams in real time. We propose an efficient and novel system for detecting moving objects in a surveillance video and predicting whether each is a human. For faster object detection, we use an established background subtraction algorithm, the Mixture of Gaussians. A set of simple and efficient features is extracted and provided to a Support Vector Machine. The performance of the system is evaluated with different SVM kernels, and also with a K-Nearest-Neighbour classifier under various distance metrics. The system is evaluated using statistical measurements; the experiments yield an average F-measure of 86.925%, demonstrating its efficiency.
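The paper uses the full Mixture-of-Gaussians model; as a simplified stand-in that shows the same per-pixel idea, the sketch below keeps a single running Gaussian per pixel and flags pixels far from the model as foreground (an illustration, not the paper's implementation):

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """One step of a per-pixel Gaussian background model: pixels more than
    k standard deviations from the running mean are flagged foreground."""
    diff = frame - mean
    foreground = np.abs(diff) > k * np.sqrt(var)
    # Update the model only where the pixel still looks like background.
    mean = np.where(foreground, mean, mean + alpha * diff)
    var = np.where(foreground, var, (1 - alpha) * var + alpha * diff ** 2)
    return mean, var, foreground

mean = np.full((4, 4), 100.0)          # learned background intensity
var = np.full((4, 4), 25.0)
frame = mean.copy()
frame[1, 2] = 200.0                    # a moving object appears at one pixel
mean, var, fg = update_background(mean, var, frame)
print(np.argwhere(fg))                 # → [[1 2]]
```

The resulting foreground mask is what feature extraction and the SVM/KNN classification stages would operate on.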
Title: Object detection and classification in surveillance system
Pub Date: 2013-12-01 | DOI: 10.1109/RAICS.2013.6745451
Sandhya Harikumar, A. Vinay
Query processing of high dimensional data with a huge volume of records, especially in the non-spatial domain, requires an efficient multidimensional index. Present DBMSs follow single-dimension indexing at multiple levels, or indexing based on compound keys formed by concatenating the key values of the required attributes. The underlying structures, data models and query languages are not sufficient for retrieving information from data that is more complex in dimensions and size. This paper aims at designing an efficient indexing structure for multidimensional data access in the non-spatial domain. The new indexing structure evolves from the R-tree, with certain preprocessing steps applied to the non-spatial data. The proposed indexing model, the NSB-tree (Non-Spatial Block tree), is balanced, performs better than traditional B-trees, and has less complicated algorithms than the UB-tree. It has linear space complexity and logarithmic time complexity. The main motivation of the NSB-tree is multidimensional indexing that eliminates the need for multiple secondary indexes and for concatenating multiple keys; R-trees cannot index non-spatial data in the available DBMSs. Our index structure replaces an arbitrary number of secondary indexes with one multicolumn index structure. It is implemented, and a feasibility check is done, using the PostgreSQL database.
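The abstract does not detail the NSB-tree's preprocessing, but a related, well-known way to map several non-spatial key columns onto a single one-dimensional index key (the approach the UB-tree it is compared against uses) is bit interleaving. The sketch below is that illustration only, not the paper's construction:

```python
def z_order_key(values, bits=8):
    """Interleave the bits of several integer attributes into one composite
    key, so a single one-dimensional index (e.g. a B-tree) can serve
    multidimensional lookups while keeping nearby tuples clustered."""
    key = 0
    for bit in range(bits - 1, -1, -1):          # most significant bit first
        for v in values:
            key = (key << 1) | ((v >> bit) & 1)
    return key

print(z_order_key([0b11, 0b00], bits=2))  # → 10 (binary 1010)
```

Unlike plain key concatenation, the interleaved key gives no attribute lexicographic priority, so range queries on any subset of the attributes can still prune the index.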
Title: NSB-TREE for an efficient multidimensional indexing in non-spatial databases