Pub Date: 2020-09-09 | DOI: 10.1504/ijiids.2020.10031590
R. Revathy, A. Fathima, S. Balamurali, G. Murugaboopathi
Dengue fever is the most common viral disease caused by mosquitoes. Owing to the lack of curative drugs, there is an urgent need to develop anti-viral drugs against dengue. Several innovative computational approaches have been incorporated for the discovery of a new lead molecule that acts on the dengue virus target, which can be a viral or a host protein. Predicting the type of interaction between the virus and a human protein gives better knowledge for developing therapeutics against dengue. The main objective of this study is to propose a hybrid model that combines a feed-forward back-propagation neural network (FFBPNN) with the firefly algorithm to predict dengue-human protein interactions. The novelty of this study lies in optimising the weights and biases of the artificial neural network to improve the efficiency of the algorithm. Compared with the existing C4.5 and FFBPNN classification algorithms, the results show that the proposed hybrid method fits the interaction data efficiently and predicts the interaction type, which supports the development of anti-viral drugs. The classification accuracy gained by C4.5 is 88%, by FFBPNN 97% and by the hybrid FFBPNN 99%.
"Development of hybrid model for improving the prediction of dengue-human protein interaction for anti-viral drug discovery", International Journal of Intelligent Information and Database Systems, pp. 479-490.
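The hybrid above uses the firefly algorithm to search the network's weight space rather than relying on backpropagation alone. A minimal sketch of that coupling on toy data (the 4-3-1 network shape, the fitness function and the firefly parameters are illustrative assumptions, not the authors' settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data standing in for protein-interaction features.
X = rng.normal(size=(40, 4))
y = (X.sum(axis=1) > 0).astype(float)

def forward(w, X):
    """Tiny 4-3-1 feed-forward network; w is a flat vector of 19 parameters."""
    W1, b1 = w[:12].reshape(4, 3), w[12:15]
    W2, b2 = w[15:18].reshape(3, 1), w[18]
    h = np.tanh(X @ W1 + b1)
    z = (h @ W2).ravel() + b2
    return 1.0 / (1.0 + np.exp(-z))

def fitness(w):
    """Mean squared error on the training set (lower is better)."""
    return np.mean((forward(w, X) - y) ** 2)

# Firefly algorithm: dimmer fireflies move towards brighter (fitter) ones.
n, dim, beta0, gamma, alpha = 15, 19, 1.0, 1.0, 0.1
swarm = rng.normal(size=(n, dim))
for _ in range(50):
    light = np.array([fitness(w) for w in swarm])
    for i in range(n):
        for j in range(n):
            if light[j] < light[i]:          # j is brighter (lower error)
                r2 = np.sum((swarm[i] - swarm[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                swarm[i] += beta * (swarm[j] - swarm[i]) + alpha * rng.normal(size=dim)
    alpha *= 0.97                             # shrink the random step over time

best = swarm[np.argmin([fitness(w) for w in swarm])]
acc = np.mean((forward(best, X) > 0.5) == y)
```

The metaheuristic treats the flattened weight-and-bias vector as a firefly position, which is what lets it escape the local minima plain gradient descent can get stuck in.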
Pub Date: 2020-08-28 | DOI: 10.1504/ijiids.2020.10031678
V. Rachapudi, G. L. Devi
Histopathological image classification is a prominent part of medical image classification. Classifying such images is challenging due to the presence of several morphological structures in tissue images. Recently, the bag-of-features method has been used for image classification tasks. However, bag-of-features uses the k-means algorithm to cluster features, which is sensitive to the initial cluster centres and often becomes trapped in local optima. Therefore, this work presents an efficient bag-of-features histopathological image classification method using a novel variant of the salp swarm algorithm, termed the random salp swarm algorithm. The efficiency of the proposed variant has been validated against 20 benchmark functions. Further, the performance of the proposed method has been studied on the Blue Histology image dataset, and the results are compared with five other state-of-the-art metaheuristic-based bag-of-features methods. The experimental results demonstrate that the proposed method surpasses the other considered methods.
"Optimal bag-of-features using random salp swarm algorithm for histopathological image analysis", International Journal of Intelligent Information and Database Systems, pp. 339-355.
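In the standard salp swarm algorithm that the proposed variant builds on, a leader salp moves around the best-known food source while followers chain behind it; validation on benchmark functions, as done in the paper, can be sketched as follows (this is the plain SSA on the sphere function, not the authors' random variant, and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Classic benchmark function: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

dim, n, iters = 5, 20, 200
lb, ub = -10.0, 10.0
salps = rng.uniform(lb, ub, size=(n, dim))
food = salps[np.argmin([sphere(s) for s in salps])].copy()

for t in range(1, iters + 1):
    # c1 decays over time, shifting from exploration to exploitation.
    c1 = 2 * np.exp(-((4 * t / iters) ** 2))
    for i in range(n):
        if i == 0:  # leader moves around the food source
            c2, c3 = rng.uniform(size=dim), rng.uniform(size=dim)
            step = c1 * ((ub - lb) * c2 + lb)
            salps[i] = np.where(c3 < 0.5, food + step, food - step)
        else:       # followers average with the salp ahead of them
            salps[i] = (salps[i] + salps[i - 1]) / 2
        salps[i] = np.clip(salps[i], lb, ub)
        if sphere(salps[i]) < sphere(food):
            food = salps[i].copy()
```

In the paper's setting the objective would be a clustering-quality measure over codebook centres rather than a benchmark function, but the update scheme is the same.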
Pub Date: 2020-08-26 | DOI: 10.1504/ijiids.2020.10031608
Pooja Dehraj, Arun Sharma
The continuous growth in software management cost requires the development of self-managed software systems. With the self-management property, a system takes intelligent decisions to keep itself working properly. Autonomic computing is the technique used to develop such systems, and autonomic computing systems are highly reliable. To enhance the quality of software systems, implementing an autonomic computing-based software development life cycle process may be a novel idea. It involves autonomous decision making by the autonomic component during software development. This approach reduces the complexity of the software development process. In addition, it serves the purpose of autonomic computing: reducing software complexity and handling exceptions in real time. In this paper, the implementation of an autonomic advisor-based software development process is proposed using cloud computing. Cloud computing helps developers build software and applications using deliverable services such as platform, infrastructure and software. During the implementation and use of the autonomic advisor, the database grows heavier; cloud computing is therefore a beneficial step in resolving such issues. Other benefits of such an autonomous software development life cycle process are discussed further in this paper.
"A new software development paradigm for intelligent information systems", International Journal of Intelligent Information and Database Systems, pp. 356-375.
Pub Date: 2020-08-26 | DOI: 10.1504/ijiids.2020.10031611
B. Srinivas, G. Rao
Medical images must be presented to specialists with high accuracy for the diagnosis of critical diseases such as brain tumours. In this paper, a novel DeepCNN model is proposed for the MRI brain tumour image denoising task, and the results are compared with a pre-trained DnCNN and with Gaussian, adaptive, bilateral and guided filters. DeepCNN is found to perform better than the other filtering methods used. Noisy images are formed at noise levels ranging from 5 to 50 with salt-and-pepper, Poisson, Gaussian and speckle noise. Performance metrics, namely peak signal-to-noise ratio and the structural similarity index, are calculated and compared across all filters and noise types. The proposed DeepCNN model denoises well at both known and unknown noise levels. It speeds up the training process and also improves denoising performance owing to its 17 convolutional layers and batch normalisation.
"A novel DeepCNN model for denoising analysis of MRI brain tumour images", International Journal of Intelligent Information and Database Systems, pp. 393-410.
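Peak signal-to-noise ratio, one of the two metrics reported above, measures how closely a denoised image matches its clean reference. A minimal implementation for 8-bit images (the image size and the noise level used here are illustrative):

```python
import numpy as np

def psnr(clean, denoised, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")      # identical images
    return 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
# Simulate additive Gaussian noise at sigma = 10 (within the 5-50 range tested).
noisy = np.clip(clean + rng.normal(0, 10, size=clean.shape), 0, 255).astype(np.uint8)
print(round(psnr(clean, noisy), 2))   # roughly 28 dB for sigma = 10
```

A denoiser is then judged by how far it raises this value above the noisy baseline, alongside the structural similarity index.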
Pub Date: 2020-08-26 | DOI: 10.1504/ijiids.2020.10031612
G. Rekha, V. Reddy, A. Tyagi
Imbalanced datasets typically make accurate prediction difficult, and most real-world data are imbalanced in nature. Traditional classifiers assume a well-balanced class distribution in the training data, but practical datasets exhibit an imbalance that confounds a classifier and degrades its ability to learn. Data pre-processing approaches address this concern using either random undersampling or oversampling techniques. In this paper, we introduce the Earth mover's distance (EMD) as a similarity measure to find samples that are similar in nature and eliminate them from the dataset as redundant. The Earth mover's distance has received a lot of attention in areas such as computer vision, image retrieval and machine learning. The EMD-based undersampling approach provides a data-level solution that eliminates redundant instances from the majority class without any loss of valuable information. The method is implemented with five conventional classifiers, the C4.5 decision tree (DT), k-nearest neighbour (k-NN), multilayer perceptron (MLP), support vector machine (SVM) and naive Bayes (NB), and one ensemble technique, AdaBoost.
The proposed method yields superior performance on 21 datasets from the KEEL repository.
"An Earth mover's distance-based undersampling approach for handling class-imbalanced data", International Journal of Intelligent Information and Database Systems, pp. 376-392.
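The core idea, measuring similarity between majority-class samples with the Earth mover's distance and discarding the most redundant ones until the classes balance, can be sketched with SciPy's 1-D EMD (the synthetic data and the nearest-neighbour redundancy criterion below are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np
from scipy.stats import wasserstein_distance  # 1-D Earth mover's distance

rng = np.random.default_rng(0)
majority = rng.normal(0, 1, size=(30, 8))     # 30 majority-class samples
minority = rng.normal(2, 1, size=(10, 8))     # 10 minority-class samples

# Treat each sample's feature vector as a 1-D distribution and measure
# pairwise EMD; the smaller the distance, the more redundant the pair.
n_remove = len(majority) - len(minority)       # undersample until balanced
redundancy = []
for i, row in enumerate(majority):
    dists = [wasserstein_distance(row, other)
             for j, other in enumerate(majority) if j != i]
    redundancy.append(min(dists))              # distance to nearest neighbour

# Drop the samples whose nearest-neighbour EMD is smallest (most redundant).
keep = np.argsort(redundancy)[n_remove:]
balanced = majority[keep]
```

Unlike random undersampling, this removes the majority samples that carry the least new information, which is the loss-avoidance property the abstract claims.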
Pub Date: 2020-08-25 | DOI: 10.1504/ijiids.2020.10031604
Khalid Anwar, Jamshed Siddiqui, S. S. Sohail
The exponential growth of recommender systems research has recently drawn the attention of the scientific community. These systems are very useful in reducing information overload and providing users with the items they need. The major areas where recommender systems have contributed significantly include e-commerce, online auctions, and book and conference recommendation for academia and industry. Book recommender systems suggest books of interest to users according to their preferences and requirements. In this article, we survey the machine learning techniques that have been used in book recommender systems. The evaluation metrics applied to assess recommendation techniques are also studied. Six categories of book recommendation techniques have been identified and discussed, which should enable the scientific community to lay a foundation for research in this field. We also propose future perspectives to improve recommender systems.
We hope that researchers exploring recommendation technology in general, and book recommendation in particular, will find this work highly beneficial.
"Machine learning-based book recommender system: a survey and new perspectives", International Journal of Intelligent Information and Database Systems, pp. 231-248.
Pub Date: 2020-08-25 | DOI: 10.1504/ijiids.2020.10031594
Raju Pal, M. Saraswat
Automated histopathological image analysis is a challenging problem due to the complex morphological structure of histopathology images. Bag-of-features is one of the prominent image representation methods and has been successfully applied to histopathological image analysis. The bag-of-features method has four phases, namely feature extraction, codebook construction, feature encoding and classification, of which feature encoding is one of the prime phases. In the feature encoding phase, images are represented in terms of visual words before being fed into a support vector machine classifier. However, the feature encoding phase of the bag-of-features framework considers only one feature to encode each image in terms of visual words, so the system cannot exploit the merits of other features. Therefore, to improve the efficacy of the bag-of-features framework, a new weighted two-dimensional vector quantisation encoding method is proposed in this work. The proposed method is tested on two histopathological image datasets for classification.
The experimental results show that combining SIFT and ORB features with the two-dimensional vector quantisation encoding method returns 80.13% and 77.13% accuracy on the ADL and Blue Histology datasets respectively, which is better than the other considered encoding methods.
"A new weighted two-dimensional vector quantisation encoding method in bag-of-features for histopathological image classification", International Journal of Intelligent Information and Database Systems, pp. 150-171.
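Vector quantisation encoding assigns each local descriptor to its nearest visual word and summarises an image as a word histogram; combining two feature types with a weight gives a simple two-feature encoding in the spirit of the proposed method (the descriptor sizes, codebooks and weight alpha below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def vq_histogram(descriptors, codebook):
    """Hard vector quantisation: assign each descriptor to its nearest
    visual word and return the normalised word histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Stand-ins for two descriptor sets of one image (e.g. SIFT-like and
# ORB-like features) and their learned codebooks; sizes are illustrative.
feat_a, feat_b = rng.normal(size=(120, 16)), rng.normal(size=(90, 16))
book_a, book_b = rng.normal(size=(32, 16)), rng.normal(size=(32, 16))

# Weighted two-feature encoding: concatenate the two word histograms,
# with a weight alpha (an assumption here) favouring one feature type.
alpha = 0.6
encoding = np.concatenate([alpha * vq_histogram(feat_a, book_a),
                           (1 - alpha) * vq_histogram(feat_b, book_b)])
```

The resulting fixed-length vector is what would be passed to the support vector machine classifier in the final phase of the pipeline.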
Pub Date: 2020-08-25 | DOI: 10.1504/ijiids.2020.10031591
S. K. Jain, N. Kesswani, Basant Agarwal
Internet of things (IoT) has emerged as one of the dominant technologies. IoT systems provide a significant number of opportunities for solving many real-time problems, for example in healthcare, transport and smart cities. However, ensuring privacy protection is challenging, as sensitive and personal information is communicated through IoT devices. In this paper, we propose a privacy preserving model called security, privacy and trust (SPT) that ensures data privacy in IoT devices through lightweight data collection and data access protocols in a resource-constrained IoT ecosystem. We conducted experiments on a small-scale dataset (1,000 data points) and a large-scale dataset (10,000 data points). The experimental results show that the proposed SPT model improves average effective time by 3.63% on the small-scale dataset and by 12.87% on the large-scale dataset. We also provide a case study of the proposed approach on a healthcare-based IoT system.
"Security, privacy and trust: privacy preserving model for internet of things", International Journal of Intelligent Information and Database Systems, pp. 249-277.
This work proposes a new clustering algorithm named 'fuzzy interval number hierarchical clustering' (FINHC). It first converts the original data into fuzzy interval numbers (FINs), then proves that F, the collection of FINs, is a lattice, and introduces a novel metric distance based on results from lattice theory, combining it with hierarchical clustering. The relevant mathematical background on lattice theory and the specific algorithm used to construct a FIN are presented in this paper. Three evaluation indexes, compactness, recall and F1-measure, are applied to evaluate the performance of FINHC, hierarchical clustering (HC), k-means, k-medoids and density-based spatial clustering of applications with noise (DBSCAN) in six experiments on UCI public datasets and one experiment on a KEEL public dataset. FINHC shows better clustering performance than the other traditional clustering algorithms, and the results are discussed in detail.
"Hierarchical clustering on metric lattice", Xiangyan Meng, Muyan Liu, Jingyi Wu, Huiqiu Zhou, F. Xu, Qiufeng Wu, International Journal of Intelligent Information and Database Systems, pp. 1-16. Pub Date: 2020-06-26 | DOI: 10.1504/ijiids.2020.10030210
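Hierarchical clustering under a custom metric, the general mechanism FINHC builds on, can be sketched with SciPy by precomputing pairwise distances and feeding them to the linkage routine (the interval data and the endpoint-based distance below are illustrative stand-ins, not the paper's lattice-theoretic metric on FINs):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Toy intervals [lo, hi]: two groups, low-valued and high-valued.
intervals = np.array([[0.0, 1.0], [0.2, 1.1], [0.1, 0.9],
                      [5.0, 6.0], [5.2, 6.3], [4.9, 5.8]])

def interval_dist(a, b):
    """Simple stand-in metric on intervals (NOT the paper's lattice metric):
    L1 distance between the endpoint pairs."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

n = len(intervals)
dmat = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dmat[i, j] = dmat[j, i] = interval_dist(intervals[i], intervals[j])

# Agglomerative clustering on the precomputed distance matrix.
labels = fcluster(linkage(squareform(dmat), method="average"),
                  t=2, criterion="maxclust")
```

Swapping `interval_dist` for a metric derived from the FIN lattice is exactly where the paper's contribution plugs into this otherwise standard pipeline.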
Pub Date: 2020-06-26 | DOI: 10.1504/ijiids.2020.10030204
B. Arun
The availability of huge volumes of digital data and powerful computers has facilitated the extraction of information, knowledge and wisdom for decision support systems. The value of this information depends solely on data quality. A data warehouse provides quality data and is required to respond to queries within seconds, but as a data warehouse grows steadily, query response times generally stretch to hours or even weeks. Materialised views are an efficient approach to facilitate the timely extraction of information and knowledge for strategic business decision making. Selecting an optimal set of views for materialisation, referred to as view selection, is an NP-complete problem. In this paper, a quantum-inspired artificial bee colony algorithm is proposed to address the view selection problem. Experimental results show that the proposed algorithm significantly outperforms HRUA, the fundamental view selection algorithm, as well as other view selection algorithms such as ABC, MBO, HBMO, BCOc, BCOi and BBMO.
"Quality materialised view selection using quantum inspired artificial bee colony optimisation", International Journal of Intelligent Information and Database Systems, pp. 33-60.
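HRUA, the fundamental greedy algorithm the paper compares against, repeatedly materialises the view with the greatest total benefit, where a view's benefit is the query cost it saves relative to the views already chosen. A toy sketch over a three-attribute lattice (the view sizes and the budget k are illustrative):

```python
# Greedy benefit-per-view selection in the spirit of HRUA. Each candidate
# view has a row count, and materialising a view lets every query answerable
# from it scan that view instead of the top-level view. Sizes are toy values.
views = {          # view -> (row count, queries it can answer, incl. itself)
    "abc": (100, {"abc", "ab", "bc", "a", "b", "c"}),
    "ab":  (50,  {"ab", "a", "b"}),
    "bc":  (60,  {"bc", "b", "c"}),
    "a":   (20,  {"a"}),
    "b":   (30,  {"b"}),
    "c":   (25,  {"c"}),
}
cost = {q: views["abc"][0] for q in views}   # initially everything scans "abc"

def benefit(v):
    """Total query cost saved if view v were materialised now."""
    size, answers = views[v]
    return sum(max(cost[q] - size, 0) for q in answers)

chosen = []
for _ in range(2):                            # materialise k = 2 views
    best = max((v for v in views if v not in chosen), key=benefit)
    size, answers = views[best]
    for q in answers:                         # queries now use the cheaper view
        cost[q] = min(cost[q], size)
    chosen.append(best)
```

Here the greedy picks "ab" first (saving 50 rows on three queries) and "bc" second; metaheuristics such as the proposed quantum-inspired ABC instead search over whole candidate sets of views, which is how they can beat this one-view-at-a-time strategy.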