Title: Detection of macula and fovea for disease analysis in color fundus images
Authors: Dharitri Deka, J. Medhi, S. Nirmala
Published in: 2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)
Pub Date: 2015-07-09 | DOI: 10.1109/ReTIS.2015.7232883
Abstract: Detection of the macula and fovea is considered an important prerequisite for the diagnosis of several retinal diseases, such as Diabetic Retinopathy (DR), Diabetic Macular Edema (DME), and Age-related Macular Degeneration (AMD). If abnormalities such as haemorrhages or exudates fall over the macula, vision is severely affected, and at an advanced stage the patient may become blind. In this paper a new approach for the detection of the macula and fovea is presented. The macula is localized by investigating the structure of the blood vessels (BV) in the macular region. The proposed method is tested on both normal and diseased images from the DRIVE, MESSIDOR, DIARETDB1, HRF, and STARE databases.
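The vessel-based localization step can be sketched in code. The version below is a minimal, hypothetical illustration (not the paper's algorithm): it exploits the fact that the macular region is largely avascular and scans a binary vessel map for the window with the lowest vessel density; the function name and window size are assumptions.

```python
import numpy as np

def localize_macula(vessel_map, win=15):
    # The macular region is largely avascular, so this toy localizer scans
    # a binary vessel map (1 = vessel pixel) for the win x win window with
    # the lowest vessel density and returns that window's centre (row, col).
    h, w = vessel_map.shape
    best, best_rc = np.inf, (0, 0)
    for r in range(h - win):
        for c in range(w - win):
            density = vessel_map[r:r + win, c:c + win].mean()
            if density < best:
                best, best_rc = density, (r + win // 2, c + win // 2)
    return best_rc
```

On a real fundus image the scan would be restricted to a region of interest relative to the optic disc; here it is exhaustive for clarity.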
Title: A novel clustering strategy for fingerprinting-based localization system to reduce the searching time
Authors: Arka Saha, P. Sadhukhan
Pub Date: 2015-07-09 | DOI: 10.1109/ReTIS.2015.7232937
Abstract: Location estimation is essential to the success of location-based services. Since GPS does not work well indoors and in urban areas, several indoor localization systems have been proposed in the literature. Among these, fingerprinting-based localization systems, which involve a training phase and a positioning phase, are the most widely used. In the training phase, a radio map is constructed by collecting received signal strength (RSS) measurements at a set of known training locations. In the positioning phase, the training location whose RSS pattern best matches the currently observed RSS pattern is selected as the estimated location of the object. The positioning accuracy of such systems depends on the granularity of the training locations: better localization accuracy can be achieved by increasing the number of training locations, which in turn increases the comparison cost and the searching time in the positioning phase. Several clustering strategies have been proposed in the literature to reduce the searching time by grouping training locations into clusters, selecting the appropriate cluster in the positioning phase, and then searching within the selected cluster to localize the object. However, selecting a wrong cluster degrades the positioning accuracy of the localization system. This paper therefore aims at devising a novel clustering strategy that reduces the searching time without compromising positioning accuracy.
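The two-phase pipeline described above can be sketched as follows, with plain k-means standing in for whatever clustering strategy the paper devises; the function names and parameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

def build_clusters(fingerprints, k=2, iters=10):
    # Training phase: group training RSS vectors (rows) into k clusters
    # with a plain k-means pass; returns (centroids, labels).
    rng = np.random.default_rng(0)
    pick = rng.choice(len(fingerprints), k, replace=False)
    centroids = fingerprints[pick].astype(float)
    labels = np.zeros(len(fingerprints), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(fingerprints[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = fingerprints[labels == j].mean(axis=0)
    return centroids, labels

def locate(rss, fingerprints, locations, centroids, labels):
    # Positioning phase: pick the nearest cluster, then the best-matching
    # training location inside it (nearest neighbour in RSS space).
    c = int(np.linalg.norm(centroids - rss, axis=1).argmin())
    idx = np.where(labels == c)[0]
    if len(idx) == 0:               # empty cluster: fall back to full search
        idx = np.arange(len(fingerprints))
    best = idx[np.linalg.norm(fingerprints[idx] - rss, axis=1).argmin()]
    return locations[best]
```

Searching only inside the selected cluster is what saves time; picking the wrong cluster is exactly the accuracy risk the paper addresses.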
Title: Spine medical image fusion using wiener filter in shearlet domain
Authors: Biswajit Biswas, A. Chakrabarti, K. Dey
Pub Date: 2015-07-09 | DOI: 10.1109/ReTIS.2015.7232910
Abstract: Medical image fusion combines the functional and anatomical structures captured by different imaging modalities, such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). In spine imaging, CT and MR provide complementary information that assists diagnostic and therapeutic decisions. Spine medical image fusion is therefore an essential technique that integrates the anatomical detail of the CT image and the functional information of the MR image into a single fused image rich in both. This paper proposes a spine medical image fusion method using a Wiener filter (WF) in the shearlet domain. The shearlet transform (ST) decomposes the CT and MR source images into shearlet subbands. A dedicated fusion strategy is devised for the lowpass ST subbands, and the processing of the highpass ST subbands is treated in detail. Finally, the fused image is obtained by the inverse shearlet transform (IST). Simulation and experimental results on spine images, evaluated against several well-known techniques on standard quality-assessment indexes, demonstrate the merit of the proposed technique.
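A generic transform-domain fusion rule of the kind used here can be sketched with stand-in subband arrays (the shearlet transform itself, and the paper's Wiener-filter refinement, are not reproduced): average the lowpass subbands and, per highpass subband, keep the coefficient of larger magnitude.

```python
import numpy as np

def fuse_subbands(low_a, low_b, highs_a, highs_b):
    # Lowpass: average the two coarse approximations.
    low_f = 0.5 * (low_a + low_b)
    # Highpass: per coefficient, keep whichever source has the larger
    # magnitude (the stronger edge/detail response).
    highs_f = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
               for ha, hb in zip(highs_a, highs_b)]
    return low_f, highs_f
```

The fused subbands would then go through the inverse transform to produce the fused image.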
Title: A new adaptive Cuckoo search algorithm
Authors: N. M. Kumar, Maheshwari Rashmita Nath, Aneesh Wunnava, Siddharth Sahany, Rutuparna Panda
Pub Date: 2015-07-09 | DOI: 10.1109/ReTIS.2015.7232842
Abstract: This paper presents a new adaptive Cuckoo search (ACS) algorithm for optimization, based on Cuckoo search (CS). The main idea is to decide the step size adaptively from the fitness value, without using the Lévy distribution. A further goal is to improve performance in terms of both convergence time and attainment of the global minimum. The performance of ACS on standard benchmark functions shows that the proposed algorithm converges to the best solution in less time than Cuckoo search.
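The fitness-driven step idea can be sketched as below; this is an illustrative variant under assumed update rules (normalised-fitness step plus greedy replacement), not the paper's exact formula.

```python
import numpy as np

def adaptive_cuckoo(f, dim=2, n=15, iters=200, lo=-5.0, hi=5.0, seed=1):
    # Minimise f with a Cuckoo-search-style loop whose step size is taken
    # adaptively from each nest's fitness (no Levy flights): nests with poor
    # fitness take large exploratory steps, near-best nests take small ones.
    rng = np.random.default_rng(seed)
    nests = rng.uniform(lo, hi, (n, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()]
        spread = fit.max() - fit.min() + 1e-12
        steps = (fit - fit.min()) / spread        # adaptive step in [0, 1]
        cand = nests + steps[:, None] * rng.normal(size=(n, dim)) \
                     + 0.01 * (best - nests)      # mild drift toward the best nest
        cand = np.clip(cand, lo, hi)
        cfit = np.apply_along_axis(f, 1, cand)
        better = cfit < fit                       # greedy replacement
        nests[better], fit[better] = cand[better], cfit[better]
    return nests[fit.argmin()], float(fit.min())
```

On the 2-D sphere function this converges quickly because the step size shrinks automatically as nests approach the incumbent best.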
Title: An efficient periodic web content recommendation based on web usage mining
Authors: Ravi Khatri, D. Gupta
Pub Date: 2015-07-09 | DOI: 10.1109/ReTIS.2015.7232866
Abstract: Internet use has grown tremendously, so providing information relevant to a user at a particular time is an important task. Periodic web personalization is the process of recommending the most relevant information to users at the right time. In this paper we propose an improved personalized web recommender model that considers not only user-specific activities but also several website-level factors, such as the total number of visitors, the number of unique visitors, the number of users downloading data, the amount of data downloaded, the amount of data uploaded, and the number of advertisements for a particular URL. The model extracts usage behaviour from the user's web-access activities to build a knowledge base, and the knowledge base together with these factors is then used to predict user-specific content. This advance computation of resources helps users access the information they need more efficiently and effectively.
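Combining the website-level factors into a ranking can be sketched as a weighted score; the weights and field names below are illustrative assumptions (the paper does not publish its weighting in the abstract).

```python
def score_url(stats, weights=None):
    # Combine normalised site-level factors into one relevance score.
    # The weights here are illustrative, not taken from the paper.
    weights = weights or {
        "visitors": 0.25, "unique_visitors": 0.25, "downloads": 0.2,
        "bytes_down": 0.1, "bytes_up": 0.1, "ads": 0.1,
    }
    return sum(weights[k] * stats.get(k, 0.0) for k in weights)

def recommend(candidates, top=3):
    # Rank candidate URLs (dict: url -> stats dict) by descending score.
    return sorted(candidates, key=lambda u: score_url(candidates[u]),
                  reverse=True)[:top]
```

In the full model these scores would be blended with the per-user knowledge base mined from access logs.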
Title: Hand gesture recognition of English alphabets using artificial neural network
Authors: Sourav Bhowmick, Sushant Kumar, Anurag Kumar
Pub Date: 2015-07-09 | DOI: 10.1109/ReTIS.2015.7232913
Abstract: Human-computer interaction (HCI) and sign language recognition (SLR), aimed at creating virtual reality and 3D gaming environments, helping deaf and mute people, and so on, make extensive use of hand gestures. Segmenting the hand from the other body parts and the background is the primary requirement of any hand-gesture-based application; however, gesture recognition systems are often plagued by segmentation problems, as well as by issues such as co-articulation, movement epenthesis, and the recognition of similar gestures. The principal objective of this paper is to address a few of these problems. We propose a method for recognizing isolated as well as continuous English alphabet gestures, a step towards helping and educating hearing- and speech-impaired people. Classification of the gestures is performed with an artificial neural network. The recognition rate (RR) for isolated gestures is 92.50%, while for continuous gestures it is 89.05% with a multilayer perceptron and 87.14% with a focused time-delay neural network. Compared with similar systems in the literature, these results demonstrate the effectiveness of the system.
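The classification stage can be sketched as the forward pass of a one-hidden-layer multilayer perceptron over gesture feature vectors; layer sizes, activations, and training are omitted and assumed, so this only shows the shape of the classifier, not the paper's trained network.

```python
import numpy as np

def mlp_predict(x, w1, b1, w2, b2):
    # One-hidden-layer MLP forward pass: tanh hidden layer, softmax output.
    # Returns a probability over gesture classes for feature vector x.
    h = np.tanh(x @ w1 + b1)
    z = h @ w2 + b2
    e = np.exp(z - z.max())      # numerically stable softmax
    return e / e.sum()
```

In practice the weights come from supervised training on labelled gesture features; `argmax` over the returned probabilities gives the predicted alphabet class.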
Title: An efficient DCT based image watermarking using RGB color space
Authors: Priyanka, Sushila Maheshkar
Pub Date: 2015-07-09 | DOI: 10.1109/ReTIS.2015.7232881
Abstract: With the advent of various image processing tools, images can easily be counterfeited or corrupted. Digital image watermarking has emerged as an important tool for the protection and authentication of digital multimedia content. This paper presents a robust DCT-based blind digital watermarking scheme for still color images. The RGB color space is used to decompose the color cover image into three channels, each of which can be treated as a grayscale image. Experimental results show that the watermarked image retains good visual quality even after attacks, and the proposed technique outperforms available counterparts in terms of payload and imperceptibility.
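A standard blind DCT embedding step of the kind such schemes build on can be sketched per 8x8 channel block: force an order between two mid-band coefficients to encode one bit. The coefficient positions and strength are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows = frequencies).
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= np.sqrt(1 / n)
    m[1:] *= np.sqrt(2 / n)
    return m

def embed_bit(block, bit, strength=10.0):
    # Blind embedding in one 8x8 block of a single channel: enforce
    # c[3,1] > c[1,3] for bit 1 and c[3,1] < c[1,3] for bit 0.
    d = dct_matrix()
    c = d @ block @ d.T
    a, b = c[3, 1], c[1, 3]
    if bit and a <= b:
        c[3, 1], c[1, 3] = b + strength, a
    if not bit and a >= b:
        c[3, 1], c[1, 3] = b, a + strength
    return d.T @ c @ d          # inverse DCT back to pixels

def extract_bit(block):
    # Blind extraction: no original image needed, only the coefficient order.
    d = dct_matrix()
    c = d @ block @ d.T
    return int(c[3, 1] > c[1, 3])
```

Because the comparison survives without the original image, the scheme is blind; robustness to attacks depends on the embedding strength.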
Title: Fast and efficient compressive sensing for wideband Cognitive Radio systems
Authors: Naveen Kumar, Neetu Sood
Pub Date: 2015-07-09 | DOI: 10.1109/ReTIS.2015.7232858
Abstract: This paper presents a Compressive Spectrum Sensing (CSS) technique for wideband Cognitive Radio (CR) systems that shortens the spectrum sensing interval. Fast and efficient CSS is used to detect the wideband spectrum: samples are taken at a sub-Nyquist rate, and signal acquisition terminates automatically once the samples suffice for the best spectral recovery. To improve sensing performance, we propose a new choice of sparsifying basis for CSS based on the Empirical Wavelet Transform (EWT), which adapts to the spectrum of the processed signal. Simulation results show that the proposed fast and efficient EWT CSS scheme outperforms conventional Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) based schemes in terms of sensing time, detection probability, system throughput, and robustness to noise.
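The reconstruction step a CSS pipeline relies on can be sketched with Orthogonal Matching Pursuit, which recovers a sparse coefficient vector from sub-Nyquist measurements; OMP is a standard recovery algorithm used here for illustration, and the abstract does not state which solver the paper uses.

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily recover a k-sparse x from
    # measurements y = A @ x (A is the m x n sensing matrix, m << n).
    r, support = y.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))   # best-matching atom
        cols = A[:, support]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)   # refit on support
        r = y - cols @ coef                               # update residual
        if np.linalg.norm(r) < 1e-10:                     # early exit: exact fit
            break
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

The early exit mirrors the paper's idea of terminating acquisition or recovery as soon as the samples suffice.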
Title: Discovering association rules partially devoid of dissociation by weighted confidence
Authors: Subrata Datta, Subrata Bose
Pub Date: 2015-07-09 | DOI: 10.1109/ReTIS.2015.7232867
Abstract: Forming useful rules from the frequent itemsets of a large database is a crucial task in association rule mining. Traditionally, association rules express associations between two frequent sets and are measured by their confidence. This approach concentrates on positive associations and therefore ignores the effect of dissociation and of null transactions. Although the effect of dissociation on association has been studied, the impact of null transactions has generally been ignored. Some researchers have identified both positive and negative rules and thus studied the impact of null transactions, but there is no uniform treatment of null transactions across the positive and negative categories. We attempt to bridge these gaps. We establish a uniform approach to mining association rules that combines the effect of all kinds of transactions without categorizing rules as positive or negative. We propose identifying frequent sets by weighted support instead of support, and measuring rules by weighted confidence instead of confidence, for useful positive rule generation that accounts for negativity through dissociation and a Null Transaction Impact Factor. We show that the weighted support-weighted confidence approach increases the chance of discovering rules that are less dissociated than those found under the traditional support-confidence framework, provided the same levels of minsupp and minconf are maintained in both cases.
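The standard ingredients behind these weighted measures can be computed directly from a transaction database; the weighting formulas themselves are the paper's, so the sketch below only counts support, confidence, one common notion of dissociation (antecedent without consequent), and the share of null transactions.

```python
def rule_measures(transactions, x, y):
    # Counts behind a rule X -> Y over a list of transaction sets:
    #   support       = fraction containing X and Y together
    #   confidence    = P(Y | X)
    #   dissociation  = fraction containing X but not Y (one common notion)
    #   null_share    = fraction touching neither X nor Y
    n = len(transactions)
    both = sum(1 for t in transactions if x <= t and y <= t)
    only_x = sum(1 for t in transactions if x <= t and not y <= t)
    null = sum(1 for t in transactions if not (x & t) and not (y & t))
    return {
        "support": both / n,
        "confidence": both / (both + only_x) if both + only_x else 0.0,
        "dissociation": only_x / n,
        "null_share": null / n,
    }
```

A weighted-confidence scheme would then discount or reweight the rule using the dissociation and null-transaction terms rather than confidence alone.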
Title: Workload characteristics and resource aware Hadoop scheduler
Authors: M. Divya, B. Annappa
Pub Date: 2015-07-09 | DOI: 10.1109/ReTIS.2015.7232871
Abstract: Hadoop MapReduce is one of the most widely used platforms for large-scale data processing. A Hadoop cluster contains machines with different resources, including memory size, CPU capability, and disk space, which raises the challenging research issue of improving Hadoop's performance through proper resource provisioning. The work presented in this paper focuses on optimizing job scheduling in Hadoop. A Workload Characteristic and Resource Aware (WCRA) Hadoop scheduler is proposed that classifies jobs as CPU-bound or disk-I/O-bound. Based on their performance, nodes in the cluster are classified as CPU-busy or disk-I/O-busy. Before a job is scheduled on a node, the node is required to have more than 25% of its primary memory available. Performance parameters of Map tasks, such as the time required to parse the data, map, sort, and merge the result, and of Reduce tasks, such as the time to merge, parse, and reduce, are used to categorize a job as CPU-bound or disk-I/O-bound. Tasks are assigned priority based on their minimum Estimated Completion Time, and jobs are scheduled on a compute node in such a way that jobs already running on it are not affected. Experimental results show a 30% performance improvement over Hadoop's FIFO, Fair, and Capacity schedulers.
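The placement logic described above can be sketched as two small decisions: tag the job from its measured phase times, then pick a node of the opposite busy type with enough free memory and the lowest estimated completion time. The classification rule and field names are simplified assumptions, not the paper's exact profiling model.

```python
def classify_job(parse_t, map_t, sort_merge_t):
    # Crude stand-in for WCRA profiling: if compute phases dominate the
    # parse (I/O-heavy) phase, call the job CPU-bound, else I/O-bound.
    return "cpu" if map_t + sort_merge_t > parse_t else "io"

def pick_node(job_kind, nodes):
    # Place a CPU-bound job on a disk-busy node and vice versa, skip nodes
    # with <= 25% free memory, and prefer the lowest estimated completion
    # time, so jobs already running on the node are not disturbed.
    eligible = [n for n in nodes
                if n["free_mem"] > 0.25 and n["busy"] != job_kind]
    if not eligible:
        return None
    return min(eligible, key=lambda n: n["est_completion"])["name"]
```

Pairing a job with a node busy on the opposite resource is what lets CPU-bound and I/O-bound work overlap instead of contend.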