Predicting execution time of machine learning tasks using metalearning
R. Priya, Bruno Feres de Souza, A. L. Rossi, A. D. de Carvalho
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141418

Lately, many academic and industrial fields have shifted their research focus from data acquisition to data analysis. This transition has been facilitated by the use of Machine Learning (ML) techniques to automatically identify patterns and extract non-trivial knowledge from data. The experimental procedures involved are usually complex and computationally demanding. Scheduling is the typical method for deciding how to allocate tasks to available resources, and an important step in scheduling is estimating how long an application will take to execute. In this paper, we introduce an approach for predicting the processing time of ML tasks specifically. It employs a metalearning framework to relate characteristics of datasets and the current machine state to actual execution time. An empirical study was conducted using 78 publicly available datasets, 6 ML algorithms and 4 meta-regressors. Experimental results show that our approach outperforms a commonly used baseline method, and statistical tests advise using SVMr as the meta-regressor. These achievements indicate the potential of metalearning to tackle the problem and encourage further developments.
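The pipeline the abstract describes — map a dataset's characteristics (meta-features) to a predicted runtime — can be sketched with a toy meta-base and a 1-NN meta-regressor. The paper recommends SVMr and also uses machine-state features; the meta-features, datasets and runtimes below are hypothetical illustrations only.

```python
import math

def meta_features(X, y):
    """Extract simple dataset meta-features: log of size, dimensionality,
    and class entropy (a rough proxy for task difficulty)."""
    n, d = len(X), len(X[0])
    counts = {}
    for label in y:
        counts[label] = counts.get(label, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return [math.log(n), d, entropy]

def knn_predict_runtime(meta_base, query, k=1):
    """Predict runtime of a new task from the k nearest previously
    measured tasks in meta-feature space (1-NN here; the paper's
    preferred meta-regressor is SVMr)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(meta_base, key=lambda rec: dist(rec[0], query))
    return sum(rt for _, rt in ranked[:k]) / k

# Hypothetical meta-base: (meta-features, measured runtime in seconds).
meta_base = [
    ([math.log(100), 4, 1.0], 0.5),
    ([math.log(10000), 20, 3.0], 42.0),
]
X = [[0.0] * 4 for _ in range(100)]
y = [0] * 50 + [1] * 50
pred = knn_predict_runtime(meta_base, meta_features(X, y))
```

The query dataset's meta-features match the first stored task exactly, so the 1-NN prediction is that task's measured runtime.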
Dimensionality reduction using genetic algorithm and fuzzy-rough concepts
M. Saha, J. Sil
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141276

Real-world datasets are often vague and redundant, making accurate decision-making difficult. Recently, rough-set theory has been used successfully for dimensionality reduction, but it is applicable only to discrete datasets, and discretisation of data leads to information loss and may introduce inconsistency. This paper develops an algorithm using fuzzy-rough concepts to overcome this limitation. The dimensionality of the dataset is reduced and, using a genetic algorithm, an optimal subset of attributes sufficient to classify the objects is obtained. The proposed algorithm reduces dimensionality to a great extent without degrading classification accuracy and avoids being trapped in local minima. Comparisons with existing algorithms demonstrate comparable outcomes.
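A minimal sketch of the genetic attribute search, assuming a crisp discernibility measure in place of the paper's fuzzy-rough dependency degree; the toy dataset, penalty weight and GA parameters are illustrative, not taken from the paper.

```python
import random

random.seed(0)

def discernibility(data, subset):
    """Fraction of object pairs with different class labels that are
    distinguished by at least one selected attribute (a crisp stand-in
    for the fuzzy-rough dependency degree)."""
    pairs = disc = 0
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            (xi, ci), (xj, cj) = data[i], data[j]
            if ci != cj:
                pairs += 1
                if any(xi[a] != xj[a] for a in subset):
                    disc += 1
    return disc / pairs if pairs else 1.0

def ga_select(data, n_attrs, pop=20, gens=30, penalty=0.05):
    """Genetic search for a small attribute subset with full discernibility:
    fitness rewards discernibility and penalises subset size."""
    def fitness(mask):
        subset = [a for a in range(n_attrs) if mask[a]]
        return discernibility(data, subset) - penalty * len(subset)
    population = [[random.randint(0, 1) for _ in range(n_attrs)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n_attrs)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:            # bit-flip mutation
                child[random.randrange(n_attrs)] ^= 1
            children.append(child)
        population = survivors + children
    best = max(population, key=fitness)
    return [a for a in range(n_attrs) if best[a]]

# Toy dataset: attribute 0 alone determines the class; others are noise.
data = [((0, 1, 0), 'a'), ((1, 1, 0), 'b'), ((0, 0, 1), 'a'), ((1, 0, 1), 'b')]
best_subset = ga_select(data, 3)
```

On this toy data, attribute 0 alone already discerns every cross-class pair, so the size penalty steers the search to the singleton subset.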
Edge detection using adaptive thresholding and Ant Colony Optimization
O. Verma, Prerna Singhal, Sakshi Garg, D. S. Chauhan
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141264

In this paper, we present an approach for edge detection using adaptive thresholding and the Ant Colony Optimization (ACO) algorithm to obtain a well-connected image edge map. Initially, the edge map of the image is obtained using adaptive thresholding. The end points of this edge map are then calculated and ants are placed at these points. The movement of the ants is guided by the local variation in pixel intensity values; only undetected neighbouring pixels are considered in the probability factor when moving an ant to the next probable edge pixel. Two stopping rules prevent the ants from moving through pixels already detected by the adaptive thresholding. The results are qualitatively analyzed using Shannon's entropy function.
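The qualitative evaluation step can be illustrated by computing the Shannon entropy of an edge map's intensity histogram; the paper's exact formulation may differ.

```python
import math

def shannon_entropy(image):
    """Shannon entropy of an image's intensity histogram, in bits.
    For an edge map, higher entropy indicates a less trivial
    edge/background split."""
    flat = [p for row in image for p in row]
    n = len(flat)
    counts = {}
    for p in flat:
        counts[p] = counts.get(p, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A binary edge map with a 50/50 edge/background split has entropy 1 bit.
edge_map = [[0, 1], [1, 0]]
h = shannon_entropy(edge_map)
```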
Optimal placement of phasor measurement units against PMU outage and line outage using advanced particle swarm optimization techniques
V. B. Raju, T. TejaSwaroop, R. K. Rao
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141296

In this paper, a model for the optimal placement of contingency-constrained phasor measurement units (PMUs) in electric power networks is presented. PMU placement is studied under different contingency conditions, including measurement losses and line outages. Communication constraints that limit the maximum number of measurements associated with each installed PMU are treated as measurement limitations. IEEE standard test systems are examined to demonstrate the applicability of the proposed model. Comparison with other methods justifies the effectiveness of the proposed model in minimizing both the total number of PMUs and the execution time. The proposed method uses an advanced Particle Swarm Optimization technique for optimal PMU placement. Simulation results are reported for the IEEE 14-bus and IEEE 39-bus systems.
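The placement problem can be sketched via the standard topological observability rule: a brute-force search over a toy 5-bus network finds the smallest PMU set, which is the same 0/1 search space a PSO would explore on systems too large to enumerate. The network and rule below are illustrative; the paper's model additionally handles contingencies and channel limits.

```python
from itertools import combinations

def observable(adj, pmus):
    """A bus is observable if it hosts a PMU or is adjacent to one
    (the standard topological observability rule)."""
    covered = set()
    for b in pmus:
        covered.add(b)
        covered.update(adj[b])
    return covered == set(adj)

def min_pmu_placement(adj):
    """Brute-force the smallest PMU set achieving full observability."""
    buses = list(adj)
    for k in range(1, len(buses) + 1):
        for cand in combinations(buses, k):
            if observable(adj, cand):
                return set(cand)

# Toy 5-bus network (hypothetical topology, not an IEEE test system).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
placement = min_pmu_placement(adj)
```

No single bus covers the whole toy network, so the minimum is two PMUs.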
An interference graph based MAC protocol for multi-rate ad hoc networks
S. Varma, V. Tokekar
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141310

Prior work on the hidden terminal problem in wireless networks often assumes that the SNR requirement and the transmission range in a network are fixed. In fact, they are rate dependent, and because of this assumption many prior conclusions about hidden terminals are not accurate. A new analysis of the hidden terminal problem is presented in this paper. The insights provided by the analysis lead to an IG-MAC scheme for tackling the hidden terminal problem in multi-rate ad hoc networks. The key idea is to model interference information by means of an interference graph and to send a busy tone with encoded communication information, preventing potentially interfering nodes from initiating new transmissions. Simulations show that our protocol solves the hidden terminal problem that arises with 802.11 and improves network performance substantially.
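The hidden-terminal situation an interference graph is built to expose can be sketched geometrically, assuming a common fixed sensing range (the positions and range are illustrative; the paper's point is precisely that range varies with rate).

```python
import math

def in_range(p, q, r):
    """True if two positions are within sensing range r of each other."""
    return math.dist(p, q) <= r

def hidden_terminal_pairs(pos, rng):
    """Pairs (i, j) that can both interfere at some receiver but cannot
    sense each other: the classic hidden-terminal situation."""
    nodes = list(pos)
    pairs = set()
    for r in nodes:
        hearers = [n for n in nodes if n != r and in_range(pos[n], pos[r], rng)]
        for i in hearers:
            for j in hearers:
                if i < j and not in_range(pos[i], pos[j], rng):
                    pairs.add((i, j))
    return pairs

# A hears B, B hears C, but A and C cannot sense each other: hidden pair.
pos = {'A': (0.0, 0.0), 'B': (1.0, 0.0), 'C': (2.0, 0.0)}
hidden = hidden_terminal_pairs(pos, rng=1.0)
```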
Energy and power issues in Network-on-Chip
M. Sharma, M. Ayoub Khan
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141441

The Network-on-Chip (NoC) has emerged as an essential infrastructure for the design of any complex System-on-Chip. The NoC provides an efficient technique for exchanging data between different domains of Intellectual Property (IP) cores and a paradigm for integrating a large number of IPs. With the ever-increasing integration of IPs, efficient on-chip communication is essential. The power associated with the NoC must be addressed because the majority of the power is dissipated in the interconnect. In this paper we investigate the various levels at which power can be reduced, and present a mathematical model that can be applied to reduce the power.
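The abstract does not reproduce the paper's mathematical model. As an assumption about the kind of formulation such analyses build on, here is the standard CMOS dynamic-power term and a per-hop bit-energy model; all parameter values are illustrative.

```python
def link_dynamic_power(alpha, c_load, v_dd, freq):
    """Standard CMOS dynamic-power term P = alpha * C * Vdd^2 * f,
    the dominant component of interconnect dissipation."""
    return alpha * c_load * v_dd ** 2 * freq

def packet_energy(hops, e_router_bit, e_link_bit, bits):
    """Bit-energy model: each hop costs one router traversal plus one
    link traversal, so E = hops * (Er + El) * bits."""
    return hops * (e_router_bit + e_link_bit) * bits

# Illustrative numbers: 1 pF link at 1 V and 1 GHz, half activity factor.
p = link_dynamic_power(alpha=0.5, c_load=1e-12, v_dd=1.0, freq=1e9)
# A 64-bit flit crossing 3 hops at 1 pJ/bit (router) + 2 pJ/bit (link).
e = packet_energy(hops=3, e_router_bit=1e-12, e_link_bit=2e-12, bits=64)
```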
Magnetic Resonance Image segmentation using trained k-means clustering
A. Kumbhar, A. Kulkarni
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141301

Magnetic Resonance Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis. In this paper, we describe a method for segmenting white matter and grey matter from real MR images using an LM-k-means technique. After preprocessing, a simple unsupervised clustering method, k-means, is turned into a supervised system using the Levenberg-Marquardt (LM) optimization technique. A k-means system does not, on its own, arrive at the means that give a good segmentation; the LM algorithm trains it for that purpose. The results are compared with those of a plain k-means system and show considerable improvement, with much higher precision.
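The unsupervised half of the method is ordinary k-means on intensity values; a minimal 1-D sketch follows (the Levenberg-Marquardt training of the means — the paper's contribution — is not shown, and the intensities are made up).

```python
def kmeans_1d(values, means, iters=10):
    """Plain unsupervised k-means on scalar intensities: alternately
    assign each value to its nearest mean, then recompute each mean
    as its cluster average."""
    means = list(means)
    for _ in range(iters):
        clusters = [[] for _ in means]
        for v in values:
            idx = min(range(len(means)), key=lambda k: abs(v - means[k]))
            clusters[idx].append(v)
        means = [sum(c) / len(c) if c else means[k]
                 for k, c in enumerate(clusters)]
    return means

# Intensities drawn around two tissue peaks (e.g. grey vs white matter).
intensities = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]
centres = kmeans_1d(intensities, means=[0.0, 1.0])
```

The two means converge to the averages of the two intensity groups.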
Real-time object detection and tracking in an unknown environment
S. Prasad, S. Sinha
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141394

In many applications of computer vision and image processing, processing begins with object detection, followed by tracking if the need arises. In recent years there has been extensive research in this field, and many remarkable algorithms have been developed, including colour segmentation and edge tracking. However, these algorithms have achieved only limited success in real-world deployments and are often bound by constraints such as a white or plain background. This paper presents an object detection and tracking system that operates against an unknown background, using real-time video processing and a single camera. The proposed system has been extensively tested in complex, real-world conditions with non-plain, light-variant, changing backgrounds.
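One simple way to detect and track motion without assuming a plain background is thresholded frame differencing followed by centroid tracking; this is an illustrative baseline, not necessarily the authors' method.

```python
def frame_diff_mask(prev, curr, thresh=0.1):
    """Detect moving pixels by thresholding the absolute difference
    between consecutive frames."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def centroid(mask):
    """Track the detected object as the centroid of changed pixels;
    returns None when nothing moved."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# A bright object appears in the middle column between two frames.
prev = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
curr = [[0.0, 0.9, 0.0], [0.0, 0.9, 0.0]]
mask = frame_diff_mask(prev, curr)
```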
Mitigating the authentication vulnerabilities in Web applications through security requirements
R. Kumar
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141435

To design and implement secure web applications, analysis must start with an understanding of the risks to which the application will be exposed. Business-centric web applications need complex authentication policies to securely implement business processes. Threats against the confidentiality, availability and integrity of the data stored, processed and transmitted by the application must be matched against the policies, technologies and human factors that protect them. The goal of this paper is to provide insight into the secure development of web applications by exposing pitfalls often encountered in the authentication process and by presenting security requirements that make the application resilient to these attacks.
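One concrete security requirement for the authentication process is storing salted, iterated password hashes and comparing them in constant time; a sketch using Python's standard library (an example, not taken from the paper).

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Salted, iterated PBKDF2 hash: store (salt, digest), never the
    plaintext password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    """Recompute the digest and compare in constant time, which defeats
    timing-based credential probing."""
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password('s3cret')
ok = verify_password('s3cret', salt, digest)
bad = verify_password('guess', salt, digest)
```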
Double hash function scheme in wireless sensor networks
T. Subash, C. Divya
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141223

Wireless sensor networks are a new class of distributed systems that are an integral part of the physical space they inhabit. Key distribution plays an important role in these networks, and node capture is the most serious attack against them. The aim of this paper is to improve resistance against node capture attacks. A double hash function key pre-distribution scheme is used to prevent an adversary from deriving information about non-compromised sensor nodes from compromised ones, and the deployment model is based on a hexagonal grid to improve local connectivity. The proposed scheme provides strong resilience against node capture: after pairwise key establishment, the probability that a link between non-compromised sensor nodes is compromised is zero.
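The abstract does not give the scheme's construction; as a loudly labelled assumption, here is one plausible sketch of double-hashed key derivation using SHA-256 and a shared master seed, showing how two nodes arrive at the same pairwise key.

```python
import hashlib

def double_hash_key(master_seed, node_id):
    """Derive a node key by hashing twice. One-way hashing means that
    material captured from one node does not reveal the pre-images other
    nodes' keys are derived from. Illustrative only: the paper's exact
    construction may differ."""
    inner = hashlib.sha256(master_seed + node_id.encode()).digest()
    return hashlib.sha256(inner).digest()

def pairwise_key(key_a, key_b):
    """Combine two node keys into a deterministic, order-independent
    pairwise key."""
    lo, hi = sorted([key_a, key_b])
    return hashlib.sha256(lo + hi).digest()

seed = b'network-master-seed'          # hypothetical shared seed
k_ab = pairwise_key(double_hash_key(seed, 'node-A'),
                    double_hash_key(seed, 'node-B'))
k_ba = pairwise_key(double_hash_key(seed, 'node-B'),
                    double_hash_key(seed, 'node-A'))
```

Both nodes compute the same 32-byte key regardless of argument order.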