Probabilistic rough-set-based band selection method for hyperspectral data classification
Deng Shaobo, Wang Lei, Li Min
Pub Date: 2019-01-01. DOI: 10.1504/ijcse.2019.10019529
International Journal of Computational Science and Engineering.
A novel clustering algorithm based on the deviation factor model
Chen Jungan, Chen Jinyin, Yang Dongyong
Pub Date: 2019-01-01. DOI: 10.1504/IJCSE.2019.10022775
International Journal of Computational Science and Engineering.
Abstract: Classical clustering algorithms struggle to find clusters with non-spherical shapes or varying size and density. Many methods have been proposed in recent years to overcome this problem, such as using more representative points per cluster, considering both interconnectivity and closeness, and adopting density-based approaches. However, the density defined in DBSCAN is determined by minPts and Eps, and it is not well suited to describing the data distribution within a cluster. This paper proposes a deviation factor model to describe the data distribution and presents a novel clustering algorithm based on an artificial immune system. Experimental results show that the proposed algorithm is more effective than DBSCAN, k-means, and related methods.
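The abstract contrasts the proposed deviation factor model with DBSCAN's density notion, under which a point is "dense" when at least minPts points fall within distance Eps of it. A minimal sketch of that baseline definition (function and variable names are hypothetical; this is the DBSCAN criterion the paper argues against, not the paper's own method):

```python
import numpy as np

def dbscan_core_mask(points, eps, min_pts):
    """Mark core points under DBSCAN's density definition: a point is
    'core' if at least min_pts points (including itself) lie within
    distance eps of it."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    neighbour_counts = (dists <= eps).sum(axis=1)
    return neighbour_counts >= min_pts

# Three nearby points form a dense group; a distant point does not.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
print(dbscan_core_mask(pts, eps=0.2, min_pts=3))  # [ True  True  True False]
```

Because the same eps and min_pts apply everywhere, clusters of varying density are described poorly by this single global criterion, which is the limitation the deviation factor model targets.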
Out-of-core streamline visualisation based on adaptive partitioning and data prefetching
Li Sikun, W. Wenke, Guo Yumeng
Pub Date: 2019-01-01. DOI: 10.1504/ijcse.2019.10021550
International Journal of Computational Science and Engineering.
The intensional semantic conceptual graph matching algorithm based on conceptual sub-graph weight self-adjustment
Xiong Li-yan, Zeng Hui, C. Jianjun
Pub Date: 2018-02-02. DOI: 10.1504/IJCSE.2018.10010356
International Journal of Computational Science and Engineering, pp. 53-62.
Abstract: Semantic computing is an important task in natural language processing research. To address the problem of inaccurate conceptual graph matching, this paper proposes an algorithm for computing the similarity of conceptual graphs based on conceptual sub-graph weight self-adjustment. The algorithm builds on the intensional logic model of Chinese concept connotation, uses the intensional semantic conceptual graph as its knowledge representation, and combines it with the computation method for E-A-V structures. When computing the similarity of conceptual graphs, the algorithm assigns each sub-graph a weight proportional to the share of the whole conceptual graph's information that the sub-graph contains. This yields better similarity results, as the paper's experiments confirm.
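The key idea above is that each sub-graph's similarity score is weighted by its share of the whole graph's information. A minimal sketch of that weighting scheme, using sub-graph size as a stand-in for information content (names and the size proxy are assumptions for illustration, not the paper's exact measure):

```python
def weighted_similarity(subgraph_sims, subgraph_sizes):
    """Combine per-sub-graph similarity scores, weighting each sub-graph
    by its share of the whole graph's information content (proxied here
    by sub-graph size)."""
    total = sum(subgraph_sizes)
    weights = [size / total for size in subgraph_sizes]
    return sum(w * sim for w, sim in zip(weights, subgraph_sims))

# Two sub-graphs: a large one matching well, a small one matching poorly.
print(weighted_similarity([0.9, 0.3], [8, 2]))  # 0.8*0.9 + 0.2*0.3 = 0.78
```

Weighting this way keeps a poor match on a minor sub-graph from dragging down the overall score, which is the intuition behind the self-adjusting weights.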
Collating multisource geospatial data for vegetation detection using Bayesian network: a case study of Yellow River Delta
Dingyuan Mo, Liangju Yu, Meng Gao
Pub Date: 2017-10-16. DOI: 10.1504/IJCSE.2017.087407
International Journal of Computational Science and Engineering, pp. 277-284.
Abstract: Multisource geospatial data contains a wealth of information that can be used for environmental assessment and management. In this paper, four environmental indicators representing typical human activities in the Yellow River Delta, China, are extracted from multisource geospatial data. By analysing the causal relationships between these human-related indicators and NDVI, a Bayesian network (BN) model is developed. Part of the raster data, pre-processed using GIS, is used for training the BN model; the rest is used for model testing. Sensitivity analysis and performance assessment showed that the BN model adequately reveals the impacts of human activities on land vegetation. With the trained BN model, vegetation change under three different scenarios was also predicted. The results showed that multisource geospatial data can be successfully collated using the GIS-BN framework for vegetation detection.
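A discrete Bayesian network like the one described answers queries by summing conditional probabilities over its parent variables. A minimal sketch of that inference step, with a hypothetical two-node network (the variables, their states, and all probability values are invented for illustration; the paper's network has four indicator nodes and NDVI):

```python
# Hypothetical network: a binary human-activity indicator A influencing
# a binary vegetation state V.
p_a = {True: 0.3, False: 0.7}            # P(A = intense activity)
p_v_given_a = {True: 0.2, False: 0.6}    # P(V = healthy | A)

def p_v():
    """Marginal P(V = healthy) = sum over a of P(A=a) * P(V=healthy | A=a)."""
    return sum(p_a[a] * p_v_given_a[a] for a in (True, False))

print(round(p_v(), 2))  # 0.3*0.2 + 0.7*0.6 = 0.48
```

Training the BN on part of the raster data amounts to estimating tables like `p_v_given_a` from co-occurrence counts; scenario prediction then re-runs this marginalisation with modified indicator distributions.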
Vector Extrapolation Methods with Applications
A. Sidi
Pub Date: 2017-09-30. DOI: 10.1137/1.9781611974966
International Journal of Computational Science and Engineering, pp. 1-430.
Allocation of energy-efficient tasks in cloud using dynamic voltage frequency scaling
S. K. Jena, B. Sahoo, S. Mishra, Sampa Sahoo, Akram Khan
Pub Date: 2017-01-01. DOI: 10.1504/ijcse.2017.10017137
International Journal of Computational Science and Engineering.
Topic-specific image indexing and presentation for MEDLINE abstract
Ye Wang, L. Gong, Tian Bai, Lan Huang
Pub Date: 2017-01-01. DOI: 10.1504/IJCSE.2017.10016221
International Journal of Computational Science and Engineering.
An algorithm for mining frequent closed itemsets with density from data streams
Dai Caiyan, Chen Ling
Pub Date: 2016-05-05. DOI: 10.1504/IJCSE.2016.076217
International Journal of Computational Science and Engineering, pp. 146-154.
Abstract: Mining frequent closed itemsets from data streams is an important topic. In this paper, we propose an algorithm for mining frequent closed itemsets from data streams based on a time fading model. By dynamically constructing a pattern tree, the algorithm calculates the densities of the itemsets in the pattern tree using a fading factor. The algorithm deletes genuinely infrequent itemsets from the pattern tree to reduce memory cost. A density threshold function is designed to identify which infrequent itemsets should be deleted; under this threshold function, deleting the infrequent itemsets does not affect the result of frequent itemset detection. The algorithm updates the pattern tree and detects the frequent closed itemsets at fixed time intervals to reduce computation time. We also analyse the error caused by deleting the infrequent itemsets. Experimental results indicate that our algorithm achieves more accurate results while needing less memory and computation time than comparable algorithms.
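The fading-factor density mentioned above is the standard time-fading device in stream mining: each itemset's density decays geometrically between observations and is incremented when the itemset reappears. A minimal sketch of that update rule (parameter names and values are assumptions for illustration, not the paper's exact formulas):

```python
LAM = 0.9  # fading factor in (0, 1): older occurrences weigh less

def fade(density, last_seen, now, lam=LAM):
    """Decay a stored density from its last update time to 'now'."""
    return density * lam ** (now - last_seen)

def observe(density, last_seen, now, lam=LAM):
    """Fade the old density, then add 1 for the new occurrence."""
    return fade(density, last_seen, now, lam) + 1.0

# An itemset with density 1.0 at t=0, observed again at t=2:
print(observe(1.0, last_seen=0, now=2))  # 1.0 * 0.9**2 + 1 = 1.81
```

Storing only (density, last_seen) per tree node lets the algorithm fade lazily at lookup time instead of touching every node at every tick, which is what makes the fixed-interval pruning of low-density itemsets cheap.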
Pseudo Zernike moments based approach for text detection and localisation from lecture videos
Belkacem Soundes, Guezouli Larbi, Zidat Samir
Pub Date: 2016-01-01. DOI: 10.1504/IJCSE.2016.10011674
International Journal of Computational Science and Engineering, pp. 274-283.
Abstract: Scene text presents challenging characteristics, mainly related to acquisition circumstances and environmental changes that result in low-quality videos. In this paper, we present a scene text detection algorithm for low-resolution lecture videos based on pseudo Zernike moments (PZMs) and stroke features. The algorithm consists of three steps: slide detection, text detection and segmentation, and non-text filtering. In lecture videos the slide region is the key object, carrying almost all important information, so it must be extracted and segmented from the other scene objects, which are treated as background, before later processing. Slide region detection and segmentation is done by applying pseudo Zernike moments to RGB frames. Text detection and extraction is performed using PZM segmentation over the V channel of the HSV colour space; a stroke feature is then used to filter out non-text regions and remove false positives. The algorithm is robust to illumination, low resolution, and uneven luminance in compressed videos. The effectiveness of the PZM description leads to very few false positives compared with other approaches. Moreover, the resulting images can be used directly by OCR engines without further processing.
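The text-segmentation step above operates on the V channel of HSV rather than on raw RGB. By the standard RGB-to-HSV definition, that channel is simply the per-pixel maximum of the three colour components, which makes extracting it trivial (a sketch of this preprocessing step only, not of the PZM computation itself; the array shapes are assumed):

```python
import numpy as np

def hsv_value_channel(rgb):
    """The V channel of HSV is the per-pixel maximum of R, G and B
    (for component values scaled to [0, 1])."""
    return rgb.max(axis=-1)

# A 1x2 'image': one reddish pixel and one dark pixel.
img = np.array([[[0.8, 0.2, 0.1], [0.1, 0.1, 0.05]]])
print(hsv_value_channel(img))  # [[0.8 0.1]]
```

Working on V discards hue while keeping brightness contrast, which is what distinguishes dark strokes from a bright slide background even under uneven luminance.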