Pub Date: 2013-07-04 | DOI: 10.1109/ICCCNT.2013.6726773
S. Sasikala, S. Appavu alias Balamurugan, S. Geetha
The precise classification of patient profiles into categories, such as the presence or absence of a particular disease along with its level of severity, remains a crucial challenge in the biomedical field. This task is carried out by a classifier trained on a supervised training set with labeled samples; based on what it has learned, the classifier then predicts the labels of new samples. The presence of irrelevant features makes it difficult for standard classifiers to achieve good detection rates, so it is important to select the most relevant features, from which accurate and efficient classifiers can be constructed. This study aims to classify medical profiles using feature extraction (FE), feature ranking (FR), and dimension reduction (Shapley value analysis) combined into a hybrid procedure that improves classification efficiency and accuracy. To appraise the proposed method, experiments were conducted on 6 different medical data sets using the J48 decision tree classifier. The experimental results show that the PCA-CFS-Shapley value analysis procedure improves classification efficiency and accuracy compared with using each technique individually.
{"title":"An efficient feature selection paradigm using PCA-CFS-Shapley values ensemble applied to small medical data sets","authors":"S. Sasikala, S. Appavu alias Balamurugan, S. Geetha","doi":"10.1109/ICCCNT.2013.6726773","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6726773","url":null,"abstract":"The precise diagnosis of patient profiles into categories, such as presence or absence of a particular disease along with its level of severity, remains to be a crucial challenge in biomedical field. This process is realized by the performance of the classifier by using a supervised training set with labeled samples. Then based on the result obtained, the classifier is allowed to predict the labels of new samples. Due to presence of irrelevant features it is difficult for standard classifiers from obtaining good detection rates. Hence it is important to select the features which are more relevant and by with good classifiers could be constructed to obtain a good accuracy and efficiency. This study is aimed to classify the medical profiles, and is realized by feature extraction (FE), feature ranking (FR) and dimension reduction methods (Shapley Values Analysis) as a hybrid procedure to improve the classification efficiency and accuracy. To appraise the success of the proposed method, experiments were conducted across 6 different medical data sets using J48 decision tree classifier. The experimental results showed that using the PCA-CFS-Shapley Values analysis procedure improves the classification efficiency and accuracy compared with individual usage.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"276 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85211778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-07-04 | DOI: 10.1109/ICCCNT.2013.6850237
Archit Gupta, Sanjiban Sekhar Roy, Sanchit Sabharwal, Rajat Gupta
The last decade has witnessed rapid progress in the field of rough set theory. It has been fruitfully applied, with little or no modification, to numerous diverse fields such as data mining and network intrusion detection. The rapid growth of interest in rough set theory and its applications can be seen in the number of international workshops, conferences, and seminars that are either directly devoted to rough sets or include the subject in their programs. This paper introduces the rudimentary notions of rough set theory and then applies them to a hepatitis disease data set. The major factors responsible for the disease are studied, and the surplus data is eliminated from the information table. Decision algorithms then define the actions to be taken under each set of conditions.
{"title":"Investigating the factors responsible for hepatitis disease using rough set theory","authors":"Archit Gupta, Sanjiban Sekhar Roy, Sanchit Sabharwal, Rajat Gupta","doi":"10.1109/ICCCNT.2013.6850237","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6850237","url":null,"abstract":"The last decade has witnessed a prompt progression in the field of rough set notion. It has been fruitfully applied to numerous diverse fields such as data mining and network intrusion discovery with little or no alterations. A rapid advance of interest in rough set theory and its applications can be recently seen in the number of international workshops, conferences and seminars that are either directly devoted to rough sets or contain the subject in their programs. This paper familiarizes rudimentary notions of rough set theory and then applies them on a data set of hepatitis disease. Major factors responsible for the disease are studied and then we eliminated the surplus data from the information table. Based on the conditions the actions to be taken are defined in the decision algorithms.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"63 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81618303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-07-04 | DOI: 10.1109/ICCCNT.2013.6726687
S. Yadav, S. Das, D. Rudrapal
In the present-day scenario, online social networks (OSNs) are very popular and are among the most interactive media for sharing, communicating, and exchanging many types of information, such as text, images, audio, and video. All of this publicly shared information is viewed by connected people in the blog or network and has an enormous social impact. Posts or comments on particular public/private areas, called walls, may include superfluous messages or sensitive data. Information filtering can therefore have a strong influence in online social networks: it can give users the ability to organize the messages written on public areas by filtering out unwanted words. In this paper, we propose a system that gives OSN users direct control over posts and comments on their walls with the help of information filtering. This is achieved through a text pattern matching system that allows users to filter their open space and grants them the privilege to add new words to be treated as unwanted. For the experimental analysis, a test social learning website was designed and some unwanted words/texts were kept as a blacklisted vocabulary. To give control to the user, texts are pattern-matched against the blacklisted vocabulary; only if a text passes can it be posted on someone's wall, otherwise it is blurred or encoded with special symbols. Analysis of the experimental results shows the high accuracy of the proposed system.
{"title":"A system to filter unsolicited texts from social learning networks","authors":"S. Yadav, S. Das, D. Rudrapal","doi":"10.1109/ICCCNT.2013.6726687","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6726687","url":null,"abstract":"In the present day scenario online social networks (OSN) are very trendy and one of the most interactive medium to share, communicate and exchange numerous types of information like text, image, audio, video etc. All these publicly shared information are explicitly viewed by connected people in the blog or networks and having an enormous social impact in human mind. Posting or commenting on particular public/private areas called wall, may include superfluous messages or sensitive data. Information filtering can therefore have a solid influence in online social networks and it can be used to give users the facility to organize the messages written on public areas by filtering out unwanted wordings. In this paper, we have proposed a system which may allow OSN users to have a direct control on posting or commenting on their walls with the help of information filtering. This is achieved through text pattern matching system, that allows users to filter their open space and a privilege to add new words treated as unwanted. For experimental analysis a test social learning website is designed and some unwanted words/texts are kept as blacklisted vocabulary. To provide control to the user, pattern matching of texts are done with the blacklisted vocabulary. If it passes then only text can be posted on someone's wall, otherwise text will be blurred or encoded with special symbols. Analysis of experimental results shows high accuracy of the proposed system.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"1 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81678462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-07-04 | DOI: 10.1109/ICCCNT.2013.6726733
P. Rajkumar, A. Nair
In pervasive environments, security and privacy have become critical concerns, since personal information can become available to malicious users. In this context, user authentication and service access control are among the major drawbacks of the UPnP architecture, which was not designed for pervasive environments. Moreover, the inherent heterogeneity of pervasive environments brings different security and privacy requirements depending on the environment and the services provided. This paper introduces a UPnP extension that not only allows multilevel user authentication for pervasive UPnP services, but also provides a flexible security approach that adapts to the network. What is more, it offers a seamless security-level negotiation protocol.
{"title":"A UPnP extension for multilevel security in pervasive systems","authors":"P. Rajkumar, A. Nair","doi":"10.1109/ICCCNT.2013.6726733","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6726733","url":null,"abstract":"In pervasive environments security and privacy has become a critical concern, since personal information can be available to malicious users. In this context, user authentication and service access control are some of major drawbacks in UPnP architecture, which are not suitable for pervasive environments. Moreover, the inherited heterogeneity of pervasive environments brings different security and privacy requirement concerns depending on the environment and the services provided. In this paper introduces a UPnP extension that not only allows multilevel user authentication for pervasive UPnP services, but also provides a flexible security approach that adapts to the network. What is more, it offers a seamless security level negotiation protocol1.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"9 1","pages":"1-9"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81925879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-07-04 | DOI: 10.1109/ICCCNT.2013.6726691
Mayank Kalbhor, S. Shrivastava, Babita Ujjainiya
The Local Concentration (LC) based feature extraction approach is considered able to effectively extract position-related information from messages by transforming each area of a message into a corresponding LC feature. To incorporate the LC approach into the overall process of spam filtering, an LC model is designed in which two kinds of detector sets are first generated using term selection strategies and a well-defined tendency threshold; a sliding window is then applied to divide the message into local areas. After segmentation of the message, the concentrations of the detectors are calculated and taken as the feature of each local area. Finally, a feature vector is created by combining all the local area features, and an appropriate immune-system-inspired classification method is applied to the resulting feature vector. To evaluate the performance of the model, several experiments are conducted on four benchmark corpora using cross-validation. It is shown that our model performs well with Information Gain as the term selection method, and that the LC-based feature extraction method has flexible applicability in the real world. Compared with global-concentration based feature extraction techniques such as bag-of-words, the LC approach performs better in terms of both accuracy and F-measure. It is also demonstrated that the LC approach with an artificial-immune-system-inspired classifier gives better results on all parameters.
{"title":"An artificial immune system with local feature selection classifier for spam filtering","authors":"Mayank Kalbhor, S. Shrivastava, Babita Ujjainiya","doi":"10.1109/ICCCNT.2013.6726691","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6726691","url":null,"abstract":"The Local Concentration based feature extraction approach is take into consideration to be able to very effectively extract position related information from messages by transforming every area of a message to a corresponding LC feature. To include the LC approach into the entire process of spam filtering, a LC model is designed, where two kinds of detector sets are initially generated by using term selection strategies and a well-defined tendency threshold, then a window is applied to divide the message into local areas. After segmentation of the particular message, concentration of the detectors are calculated and brought as the feature for every local area. Finally, feature vector is created by combining all the local feature area. Then appropriate classification method inspired from immune system is applied on available feature vector. To check the performance of model, several experiments are conducted on four benchmark corpora using the cross-validation methodology. It is shown that our model performs well with the Information Gain as term selection methods, LC based feature extraction method with flexible applicability in the real world. In comparison of other global-concentration based feature extraction techniques like bag-of-word the LC approach has better performance in terms of both accuracy and measure. It is also demonstrated that the LC approach with artificial immune system inspired classifier gives better results against all parameters.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"382 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83765133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-07-04 | DOI: 10.1109/ICCCNT.2013.6726845
K. Soundararajan, Dr. S. Sureshkumar, P. Selvamani
To rise above the limitations of the traditional load forecasting method in data warehousing systems, a new load forecasting system based on a Radial Basis Function (RBF) neural network with a Gaussian kernel is proposed in this project. A genetic algorithm using real-valued coding with crossover and mutation probabilities is applied to optimize the parameters of the neural network, achieving a faster convergence rate. Theoretical analysis and models show that this model is more accurate than the traditional one. Several methods are available for integrating information sources, but only a few focus on evaluating the reliability of a source and its information. The emergence of the web and dedicated data warehouses offers different ways to collect additional data for better decision-making. The reliability and trustworthiness of these data depend on many different aspects and meta-information, such as the data source and the experimental protocol. Developing generic tools to evaluate this reliability represents a true challenge for the proper use of distributed data. In this project, an RBF neural network based approach to evaluate data reliability from a set of criteria is proposed. Customized criteria and intuitive decisions are provided; information reliability and assurance are among the most important components of a data warehousing system, given their role in retrieval and analysis.
{"title":"An efficient query processing with approval of data reliability using RBF neural networks with web enabled data warehouse","authors":"K. Soundararajan, Dr. S. Sureshkumar, P. Selvamani","doi":"10.1109/ICCCNT.2013.6726845","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6726845","url":null,"abstract":"To rise above the limitation of the Traditional load forecasting method using data warehousing system, a new load forecasting system basing on Radial Basis Gaussian kernel Function (RBF) neural network is proposed in this project. Genetic algorithm adopting the actual coding, crossover and mutation probability was applied to optimize the parameters of the neural network, and a faster growing rate was reached. Theoretical analysis and models prove that this model has more accuracy than the traditional one. There are several methods available to integrate information source, but only few methods focus on evaluating the reliability of the source and its information. The emergence of the web and dedicated data warehouses offer different kinds of ways to collect additional data to make better decisions. The reliable and trust of these data depends on many different aspects and metainformation: data source, experimental protocol. Developing generic tools to evaluate this reliability represents a true challenge for the proper use of distributed data. In this project, RBF neural network based approach to evaluate data reliability from a set of criteria has been proposed. Customized criteria and intuitive decisions are provided, information reliability and reassurance are most important components of a data warehousing system, as their power in a while retrieval and examination.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"81 3 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83154664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-07-04 | DOI: 10.1109/ICCCNT.2013.6726631
Mr. S. Ramamoorthy, Dr. S. Rajalakshmi
Because of the huge reduction in overall investment and the great flexibility provided by the cloud, companies are nowadays migrating their applications to cloud environments. The cloud provides a large volume of storage space and different sets of services for all kinds of applications to cloud users, without delay and without requiring major changes at the client level. When large amounts of user data and application results are stored in the cloud, data analysis and prediction become very difficult across the different clusters of the cloud. Requests in which a user needs to analyze both the stored data and the services frequently used by other cloud customers for the same query are hard to process. Existing data mining techniques are insufficient to analyze such huge data volumes and to identify the services frequently accessed by cloud users. The proposed scheme aims to provide optimized data and service analysis based on the MapReduce algorithm along with Big Data analytics techniques. The cloud service provider maintains a log of frequent services, drawn from past-history analysis across multiple clusters, in order to predict the frequent services. Through this analysis, the cloud service provider can recommend the services frequently used by other cloud customers for the same query. The scheme automatically increases the number of customers in the cloud environment and effectively analyzes the data stored in cloud storage.
{"title":"Optimized data analysis in cloud using BigData analytics techniques","authors":"Mr. S. Ramamoorthy, Dr. S. Rajalakshmi","doi":"10.1109/ICCCNT.2013.6726631","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6726631","url":null,"abstract":"Because of the huge reduce in the overall investment and greatest flexibility provided by the cloud, all the companies are nowadays migrating their applications towards cloud environment. Cloud provides the larger volume of space for the storage and different set of services for all kind of applications to the cloud users without any delay and not required any major changes at the client level. When the large amount of user data and application results stored on the cloud environment, will automatically make the data analysis and prediction process became very difficult on the different clusters of cloud. Whenever the used required to analysis the stored data as well as frequently used services by other cloud customers for the same set of query on the cloud environment hard to process. The existing data mining techniques are insufficient to analyse those huge data volumes and identify the frequent services accessed by the cloud users. In this proposed scheme trying to provide an optimized data and service analysis based on Map-Reduce algorithm along with BigData analytics techniques. Cloud services provider can Maintain the log for the frequent services from the past history analysis on multiple clusters to predict the frequent service. Through this analysis cloud service provider can able to recommend the frequent services used by the other cloud customers for the same query. This scheme automatically increase the number of customers on the cloud environment and effectively analyse the data which is stored on the cloud storage.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"98 6 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83234329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-07-04 | DOI: 10.1109/ICCCNT.2013.6726673
L. Sumalatha, B. Sujatha, P. Sreekanth
Shape is an important visual feature and one of the basic features used to describe image content. However, shape representation and classification is a difficult task. This paper presents a new boundary-based shape representation and classification algorithm based on mathematical morphology. It consists of two steps. First, an input shape is represented, using the Hit-or-Miss Transform (HMT), in terms of a set of structuring elements. Second, the extracted shape of the image is classified based on its shape features. Experimental results show that the integration of these strategies significantly improves performance on the shape database.
{"title":"A novel boundary approach for shape representation and classification","authors":"L. Sumalatha, B. Sujatha, P. Sreekanth","doi":"10.1109/ICCCNT.2013.6726673","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6726673","url":null,"abstract":"Shape is an important visual feature and it is one of the basic features used to describe image content. However, shape representation and classification is a difficult task. This paper presents a new boundary based shape representation and classification algorithm based on mathematical morphology. It consists of two steps. Firstly, an input shape is represented by using Hit Miss Transform (HMT) into a set of structuring elements. Secondly, the extracted shape of the image is classified based on shape features. Experimental results show that the integration of these strategies significantly improves shape database.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"158 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77816980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-07-04 | DOI: 10.1109/ICCCNT.2013.6726574
P. Ashok, G. M. Kadhar, E. Elayaraja, V. Vadivel
Clustering is a process for grouping objects or patterns in such a way that samples in the same group are more similar to one another than to samples belonging to different groups. In this paper, we introduce soft clustering and one of its methods, Fuzzy C-Means. The clustering algorithms are improved by implementing two different membership functions. The Fuzzy C-Means algorithm is studied with fuzzification parameter values from 1.25 to 2.0 and compared across different datasets using the Davies-Bouldin index; a fuzzification parameter of 2.0 is the most suitable for the Fuzzy C-Means clustering algorithm. The Fuzzy C-Means and K-Means clustering algorithms are implemented and executed in Matlab and compared on execution speed and iteration count. The Fuzzy C-Means clustering method achieves better results, obtaining the minimum Davies-Bouldin index for all cluster counts across the different datasets. The experimental results show that Fuzzy C-Means performs well compared with K-Means clustering.
{"title":"Fuzzy based clustering method on yeast dataset with different fuzzification methods","authors":"P. Ashok, G. M. Kadhar, E. Elayaraja, V. Vadivel","doi":"10.1109/ICCCNT.2013.6726574","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6726574","url":null,"abstract":"Clustering is a process for classifying objects or patterns in such a way that samples of the same group are more similar to one another than samples belonging to different groups. In this paper, we introduce the clustering method called soft clustering and its type Fuzzy C-Means. The clustering algorithms are improved by implementing the two different membership functions. The Fuzzy C-Means algorithm can be improved by implementing the Fuzzification parameter values from 1.25 to 2.0 and compared with different datasets using Davis Bouldin Index. The Fuzzification parameter 2.0 is most suitable for Fuzzy C-Means clustering algorithm than other Fuzzification parameter. The Fuzzy C-Means and K-Means clustering algorithms are implemented and executed in Matlab and compared with Execution speed and Iteration Count Methods. The Fuzzy C-Means clustering method achieve better results and obtain minimum DB index for all the different cluster values from different datasets. The experimental results shows that the Fuzzy C-Means method performs well when compare with the K-Means clustering.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"117 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83440296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-07-04 | DOI: 10.1109/ICCCNT.2013.6726588
Priyank Panchal, Urmi D. Agravat
Web mining consists of three different categories, namely web content mining, web structure mining, and web usage mining (the process of discovering knowledge from user interactions recorded as access logs, browser logs, proxy-server logs, user session data, and cookies). This paper presents a process for mining web server log files in order to extract usage patterns for web link prediction with the help of a proposed Markov model. The approach predicts popular web pages and user navigation behavior. The proposed technique clusters user navigation sessions based on a pairwise similarity measure, combining a Markov model with the Apriori algorithm. Web link prediction is the process of predicting the web pages a user will visit based on the web pages previously visited by other users. Web pre-fetching techniques thereby reduce web latency: they predict the web objects to be pre-fetched with high accuracy and good scalability, and help achieve better predictive accuracy across different log files. The evolutionary approach helps train the model to make predictions commensurate with current web browsing patterns.
{"title":"Hybrid technique for user's web page access prediction based on Markov model","authors":"Priyank Panchal, Urmi D. Agravat","doi":"10.1109/ICCCNT.2013.6726588","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6726588","url":null,"abstract":"Web Mining consists of three different categories, namely Web Content Mining, Web Structure Mining, and Web Usage Mining (is the process of discovering knowledge from the interaction generated by the users in the form of access logs, browser logs, proxy-server logs, user session data, cookies). This paper present mining process of web server log files in order to extract usage patterns to web link prediction with the help of proposed Markov Model. The approaches result in prediction of popular web page or stage and user navigation behavior. Proposed technique cluster user navigation based on their pair-wise similarity measure combined with markov model with the concept of apriori algorithm which is used for Web link prediction is the process to predict the Web pages to be visited by a user based on the Web pages previously visited by other user. So that Web pre-fetching techniques reduces the web latency & they predict the web object to be pre-fetched with high accuracy and good scalability also help to achieve better predictive accuracy among different log file The evolutionary approach helps to train the model to make predictions commensurate to current web browsing patterns.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"75 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78873978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}