
Latest publications: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)

An efficient feature selection paradigm using PCA-CFS-Shapley values ensemble applied to small medical data sets
S. Sasikala, S. Appavu alias Balamurugan, S. Geetha
The precise diagnosis of patient profiles into categories, such as the presence or absence of a particular disease along with its level of severity, remains a crucial challenge in the biomedical field. This process relies on the performance of a classifier trained on a supervised training set with labeled samples; based on the result obtained, the classifier then predicts the labels of new samples. Owing to the presence of irrelevant features, it is difficult for standard classifiers to obtain good detection rates. It is therefore important to select the most relevant features, from which good classifiers can be constructed to achieve high accuracy and efficiency. This study aims to classify medical profiles, and does so through feature extraction (FE), feature ranking (FR), and dimension reduction (Shapley value analysis) combined into a hybrid procedure that improves classification efficiency and accuracy. To appraise the success of the proposed method, experiments were conducted across 6 different medical data sets using the J48 decision tree classifier. The experimental results showed that the PCA-CFS-Shapley value analysis procedure improves classification efficiency and accuracy compared with using each technique individually.
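As a rough illustration of the feature-ranking stage, the sketch below ranks features by the absolute value of their Pearson correlation with the class label and keeps the top k. This is a simplified stand-in for the paper's PCA-CFS-Shapley ensemble (the PCA and Shapley stages are omitted), and the `rank_features` helper and the toy data are invented for illustration:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_features(samples, labels, k):
    """Rank feature indices by |correlation with the class| and keep the top k."""
    scores = []
    for j in range(len(samples[0])):
        col = [row[j] for row in samples]
        scores.append((abs(pearson(col, labels)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]

# Toy data: feature 1 tracks the label exactly, feature 2 is mostly noise.
samples = [[1, 0, 5], [2, 0, 3], [3, 1, 2], [4, 1, 9]]
labels = [0, 0, 1, 1]
print(rank_features(samples, labels, 2))
```

A CFS-style filter would additionally penalize redundancy among the selected features; this sketch keeps only the relevance half of that criterion.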
DOI: 10.1109/ICCCNT.2013.6726773 | Pages: 1-5 | Published: 2013-07-04
Citations: 4
Investigating the factors responsible for hepatitis disease using rough set theory
Archit Gupta, Sanjiban Sekhar Roy, Sanchit Sabharwal, Rajat Gupta
The last decade has witnessed rapid progress in the field of rough set theory. It has been fruitfully applied, with little or no alteration, to numerous diverse fields such as data mining and network intrusion detection. The rapid growth of interest in rough set theory and its applications can be seen in the number of recent international workshops, conferences, and seminars that are either directly devoted to rough sets or include the subject in their programs. This paper introduces the rudimentary notions of rough set theory and then applies them to a hepatitis disease data set. The major factors responsible for the disease are studied, and the surplus data are eliminated from the information table. Based on the resulting conditions, the actions to be taken are defined in decision algorithms.
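The core rough-set operations the paper relies on, the indiscernibility relation and the elimination of surplus (dispensable) attributes, can be sketched as follows; the function names and the toy information table are hypothetical, not taken from the paper:

```python
def partition(rows, attrs):
    """Group object indices by their values on the given attribute indices
    (the indiscernibility relation of rough set theory)."""
    blocks = {}
    for i, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)
        blocks.setdefault(key, set()).add(i)
    return sorted(sorted(b) for b in blocks.values())

def is_dispensable(rows, attrs, a):
    """An attribute is dispensable (surplus) if dropping it leaves the
    partition of objects unchanged."""
    rest = [x for x in attrs if x != a]
    return partition(rows, rest) == partition(rows, attrs)

# Toy information table: the third attribute is constant, hence surplus.
rows = [("y", "high", "x"), ("y", "low", "x"), ("n", "low", "x")]
print(is_dispensable(rows, [0, 1, 2], 2))  # the constant column
print(is_dispensable(rows, [0, 1, 2], 0))
```

Removing every dispensable attribute in turn yields a reduct, the minimal attribute set that preserves the classification.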
DOI: 10.1109/ICCCNT.2013.6850237 | Pages: 1-6 | Published: 2013-07-04
Citations: 0
A system to filter unsolicited texts from social learning networks
S. Yadav, S. Das, D. Rudrapal
In the present-day scenario, online social networks (OSNs) are highly popular and among the most interactive media for sharing, communicating, and exchanging many types of information such as text, images, audio, and video. All of this publicly shared information is viewed by connected people in the blog or network and has an enormous social impact on people's minds. Posts or comments in particular public/private areas, called walls, may include superfluous messages or sensitive data. Information filtering can therefore have a solid influence in online social networks: it can give users the facility to organize the messages written on public areas by filtering out unwanted wording. In this paper, we propose a system that allows OSN users direct control over posts and comments on their walls with the help of information filtering. This is achieved through a text pattern matching system that lets users filter their open space, with the privilege of adding new words to be treated as unwanted. For experimental analysis, a test social learning website was designed and some unwanted words/texts were kept as a blacklisted vocabulary. To give control to the user, pattern matching of texts is done against the blacklisted vocabulary. Only if a text passes can it be posted on someone's wall; otherwise the text is blurred or encoded with special symbols. Analysis of the experimental results shows the high accuracy of the proposed system.
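A minimal sketch of the described pattern-matching filter, assuming a hypothetical `BLACKLIST` vocabulary: a post passes only when none of its words match, and matched words are otherwise blurred with asterisks (the paper's special-symbol encoding is simplified here):

```python
import re

BLACKLIST = {"spam", "scam"}  # hypothetical blacklisted vocabulary

def filter_post(text):
    """Return (allowed, rendered). The post is allowed only if no
    blacklisted word occurs; otherwise matches are blurred with '*'."""
    words = re.findall(r"\w+", text.lower())
    hits = BLACKLIST.intersection(words)
    if not hits:
        return True, text
    pattern = re.compile("|".join(re.escape(w) for w in hits), re.IGNORECASE)
    return False, pattern.sub(lambda m: "*" * len(m.group()), text)

print(filter_post("Buy now, total scam!"))
print(filter_post("See you at the meeting"))
```

In the paper's setting, users can extend the blacklist themselves; here that would simply be `BLACKLIST.add(word)`.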
DOI: 10.1109/ICCCNT.2013.6726687 | Pages: 1-5 | Published: 2013-07-04
Citations: 4
A UPnP extension for multilevel security in pervasive systems
P. Rajkumar, A. Nair
In pervasive environments, security and privacy have become critical concerns, since personal information can become available to malicious users. In this context, user authentication and service access control are among the major drawbacks of the UPnP architecture, which make it unsuitable for pervasive environments. Moreover, the inherent heterogeneity of pervasive environments raises different security and privacy requirements depending on the environment and the services provided. This paper introduces a UPnP extension that not only allows multilevel user authentication for pervasive UPnP services, but also provides a flexible security approach that adapts to the network. What is more, it offers a seamless security level negotiation protocol.
DOI: 10.1109/ICCCNT.2013.6726733 | Pages: 1-9 | Published: 2013-07-04
Citations: 1
An artificial immune system with local feature selection classifier for spam filtering
Mayank Kalbhor, S. Shrivastava, Babita Ujjainiya
The local concentration (LC) based feature extraction approach is considered because it can very effectively extract position-related information from messages by transforming every area of a message into a corresponding LC feature. To incorporate the LC approach into the overall process of spam filtering, an LC model is designed: two kinds of detector sets are first generated using term selection strategies and a well-defined tendency threshold, and a window is then applied to divide the message into local areas. After segmentation of a particular message, the concentration of the detectors is calculated and taken as the feature for each local area. Finally, the feature vector is created by combining all the local area features. An appropriate classification method inspired by the immune system is then applied to the resulting feature vector. To check the performance of the model, several experiments are conducted on four benchmark corpora using cross-validation methodology. It is shown that our model performs well with Information Gain as the term selection method, and that the LC based feature extraction method has flexible applicability in the real world. In comparison with other global-concentration based feature extraction techniques such as bag-of-words, the LC approach performs better in terms of both accuracy and the evaluation measure. It is also demonstrated that the LC approach with an artificial immune system inspired classifier gives better results across all parameters.
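The window-based local concentration step might look like the following sketch, where `self_set` and `nonself_set` play the role of the two detector sets; the names and data are illustrative, not the authors' implementation:

```python
def lc_features(tokens, self_set, nonself_set, window):
    """Slide a fixed window over the message; each local area yields a
    2-D feature: the concentration of 'self' (legitimate) and 'non-self'
    (spam) detector terms inside the area."""
    feats = []
    for i in range(0, len(tokens), window):
        area = tokens[i:i + window]
        feats.append((
            sum(t in self_set for t in area) / len(area),
            sum(t in nonself_set for t in area) / len(area),
        ))
    return feats

# Toy message and hand-picked detector sets.
tokens = ["free", "cash", "meeting", "now"]
print(lc_features(tokens, {"meeting"}, {"free", "cash"}, window=2))
```

The concatenation of these per-area pairs is the feature vector that the immune-system-inspired classifier would consume.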
DOI: 10.1109/ICCCNT.2013.6726691 | Pages: 1-7 | Published: 2013-07-04
Citations: 3
An efficient query processing with approval of data reliability using RBF neural networks with web enabled data warehouse
K. Soundararajan, Dr. S. Sureshkumar, P. Selvamani
To overcome the limitations of traditional load forecasting methods in a data warehousing system, a new load forecasting system based on a Radial Basis Function (RBF) neural network with Gaussian kernels is proposed in this project. A genetic algorithm adopting real coding with crossover and mutation probabilities was applied to optimize the parameters of the neural network, and a faster convergence rate was reached. Theoretical analysis and models show that this model is more accurate than the traditional one. Several methods are available to integrate information sources, but only a few focus on evaluating the reliability of a source and its information. The emergence of the web and dedicated data warehouses offers different ways to collect additional data in order to make better decisions. The reliability and trustworthiness of these data depend on many different aspects and on meta-information such as the data source and the experimental protocol. Developing generic tools to evaluate this reliability represents a true challenge for the proper use of distributed data. In this project, an RBF neural network based approach to evaluating data reliability from a set of criteria is proposed. Customized criteria and intuitive decisions are provided; information reliability and assurance are among the most important components of a data warehousing system, given their role in retrieval and examination.
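For reference, the forward pass of an RBF network with Gaussian kernels, as named in the abstract, fits in a few lines; the genetic-algorithm training of centres, widths, and weights is omitted, and all parameter values below are made up:

```python
import math

def rbf_predict(x, centers, widths, weights, bias=0.0):
    """Forward pass of a Gaussian-kernel RBF network:
    output = bias + sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2))."""
    out = bias
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))
        out += w * math.exp(-d2 / (2 * s * s))
    return out

# One hidden unit centred at the input: the kernel evaluates to 1.
print(rbf_predict([0.0, 0.0], centers=[[0.0, 0.0]], widths=[1.0], weights=[2.0]))
```

In the paper's setting, a genetic algorithm would search over `centers`, `widths`, and `weights`; only the fixed-parameter evaluation is shown here.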
DOI: 10.1109/ICCCNT.2013.6726845 | Pages: 1-6 | Published: 2013-07-04
Citations: 0
Optimized data analysis in cloud using BigData analytics techniques
Mr. S. Ramamoorthy, Dr. S. Rajalakshmi
Because of the large reduction in overall investment and the great flexibility provided by the cloud, companies are nowadays migrating their applications to cloud environments. The cloud provides a large volume of storage space and different sets of services for all kinds of applications to cloud users without delay and without requiring major changes at the client level. When large amounts of user data and application results are stored in the cloud environment, data analysis and prediction across the different cloud clusters automatically become very difficult. Whenever users need to analyze the stored data, together with the services frequently used by other cloud customers for the same set of queries, the cloud environment is hard-pressed to process the request. Existing data mining techniques are insufficient to analyze such huge data volumes and to identify the services frequently accessed by cloud users. The proposed scheme tries to provide optimized data and service analysis based on the Map-Reduce algorithm together with BigData analytics techniques. The cloud service provider can maintain a log of frequent services, based on past-history analysis across multiple clusters, to predict frequent services. Through this analysis, the cloud service provider is able to recommend to other cloud customers the services frequently used for the same query. This scheme automatically increases the number of customers in the cloud environment and effectively analyzes the data stored on cloud storage.
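A toy Map-Reduce pass for counting frequently accessed services, in the spirit of the scheme described (the record layout and function names are assumptions, not the paper's code):

```python
from collections import Counter
from itertools import chain

def map_phase(log_chunk):
    """Map: emit a (service, 1) pair for every access record in one log chunk."""
    return [(record["service"], 1) for record in log_chunk]

def reduce_phase(mapped):
    """Reduce: sum the emitted counts per service across all chunks."""
    totals = Counter()
    for service, n in chain.from_iterable(mapped):
        totals[service] += n
    return totals

def frequent_services(chunks, top=1):
    """Run map over each chunk (each cluster's log), then reduce."""
    return [s for s, _ in reduce_phase(map(map_phase, chunks)).most_common(top)]

chunks = [
    [{"service": "storage"}, {"service": "vm"}],  # cluster 1 log
    [{"service": "storage"}],                     # cluster 2 log
]
print(frequent_services(chunks))
```

In a real deployment the map and reduce phases would run on separate workers (e.g. under Hadoop); here they are sequential for clarity.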
DOI: 10.1109/ICCCNT.2013.6726631 | Pages: 1-5 | Published: 2013-07-04
Citations: 18
A novel boundary approach for shape representation and classification
L. Sumalatha, B. Sujatha, P. Sreekanth
Shape is an important visual feature and one of the basic features used to describe image content. However, shape representation and classification is a difficult task. This paper presents a new boundary-based shape representation and classification algorithm built on mathematical morphology. It consists of two steps. First, an input shape is represented using the Hit-Miss Transform (HMT) with a set of structuring elements. Second, the extracted shape of the image is classified based on shape features. Experimental results show that the integration of these strategies significantly improves performance on the shape database.
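A minimal hit-or-miss transform on a binary image, the morphological operation the HMT step relies on. The structuring element is given as explicit "hit" and "miss" offsets, and the example detects isolated foreground pixels; that is an illustrative choice, not necessarily the paper's structuring elements:

```python
def hit_miss(image, hit, miss):
    """Binary hit-or-miss transform: a pixel fires when every 'hit'
    offset lands on foreground and every 'miss' offset on background."""
    rows, cols = len(image), len(image[0])

    def fg(r, c):
        return 0 <= r < rows and 0 <= c < cols and image[r][c] == 1

    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if all(fg(r + dr, c + dc) for dr, dc in hit) and \
               not any(fg(r + dr, c + dc) for dr, dc in miss):
                out[r][c] = 1
    return out

# Detect isolated pixels: the pixel itself is set, its 4-neighbours are not.
image = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(hit_miss(image, hit=[(0, 0)], miss=[(-1, 0), (1, 0), (0, -1), (0, 1)]))
```

Different hit/miss offset pairs pick out different boundary configurations (corners, line ends), which is how a set of structuring elements encodes a shape.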
DOI: 10.1109/ICCCNT.2013.6726673 | Pages: 1-4 | Published: 2013-07-04
Citations: 0
Fuzzy based clustering method on yeast dataset with different fuzzification methods
P. Ashok, G. M. Kadhar, E. Elayaraja, V. Vadivel
Clustering is a process for classifying objects or patterns such that samples of the same group are more similar to one another than samples belonging to different groups. In this paper, we introduce the soft clustering method and its variant Fuzzy C-Means. The clustering algorithms are improved by implementing two different membership functions. The Fuzzy C-Means algorithm is improved by varying the fuzzification parameter from 1.25 to 2.0 and compared across different datasets using the Davies-Bouldin index. A fuzzification parameter of 2.0 is more suitable for the Fuzzy C-Means clustering algorithm than the other values. The Fuzzy C-Means and K-Means clustering algorithms are implemented and executed in Matlab and compared in terms of execution speed and iteration count. The Fuzzy C-Means clustering method achieves better results, obtaining the minimum Davies-Bouldin index for all the different cluster values across the datasets. The experimental results show that the Fuzzy C-Means method performs well compared with K-Means clustering.
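The standard Fuzzy C-Means membership update shows where the fuzzification parameter m enters; this is the textbook formula for 1-D points, not the authors' Matlab code, and the points below are invented:

```python
def memberships(x, centers, m=2.0):
    """Fuzzy C-Means membership of point x in each cluster:
    u_i = 1 / sum_k (d_i / d_k)^(2/(m-1)), with fuzzifier m > 1."""
    dists = [abs(x - c) for c in centers]
    if 0.0 in dists:  # point coincides with a centre: crisp membership
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((di / dk) ** p for dk in dists) for di in dists]

# Equidistant point: memberships split evenly; m controls how soft the
# split is for points nearer one centre.
print(memberships(1.0, [0.0, 2.0]))        # midway between the centres
print(memberships(0.5, [0.0, 2.0]))        # closer to the first centre
print(memberships(0.5, [0.0, 2.0], m=1.25))
```

As m approaches 1 the memberships approach hard (K-Means-like) assignment, which is why the paper's sweep from 1.25 to 2.0 changes the clustering behaviour.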
DOI: 10.1109/ICCCNT.2013.6726574 | Pages: 1-6 | Published: 2013-07-04
Citations: 5
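The Fuzzy C-Means iteration the abstract builds on — alternate between membership-weighted centers and a membership update governed by the fuzzification parameter m — can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the authors' Matlab implementation; m defaults to 2.0, the value the paper finds most suitable.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Fuzzy C-Means: every sample gets a degree of membership in each
    cluster; m > 1 is the fuzzification parameter (2.0 is the common choice)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    # Random initial membership matrix U, each row summing to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Cluster centers are membership-weighted means of the samples.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every sample to every center, floored to avoid div-by-zero.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)
        # Membership update: u[i,k] = 1 / sum_j (d[i,k] / d[i,j])^(2/(m-1)).
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

Hard labels for a Davies-Bouldin comparison against K-Means, as in the paper's evaluation, are then simply `U.argmax(axis=1)`.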
Hybrid technique for user's web page access prediction based on Markov model
Priyank Panchal, Urmi D. Agravat
Web Mining consists of three categories, namely Web Content Mining, Web Structure Mining, and Web Usage Mining (the process of discovering knowledge from user interactions recorded as access logs, browser logs, proxy-server logs, user session data, and cookies). This paper presents a mining process for web-server log files that extracts usage patterns for web-link prediction with the help of the proposed Markov model. The approach predicts popular web pages and user navigation behavior. The proposed technique clusters user navigation sessions by a pairwise similarity measure and combines the Markov model with the Apriori algorithm; web-link prediction is the process of predicting the pages a user will visit based on the pages previously visited by other users. Web pre-fetching techniques thus reduce web latency; they predict the web objects to be pre-fetched with high accuracy and good scalability, and they help achieve better predictive accuracy across different log files. The evolutionary approach helps train the model to make predictions commensurate with current web browsing patterns.
{"title":"Hybrid technique for user's web page access prediction based on Markov model","authors":"Priyank Panchal, Urmi D. Agravat","doi":"10.1109/ICCCNT.2013.6726588","DOIUrl":"https://doi.org/10.1109/ICCCNT.2013.6726588","url":null,"abstract":"Web Mining consists of three categories, namely Web Content Mining, Web Structure Mining, and Web Usage Mining (the process of discovering knowledge from user interactions recorded as access logs, browser logs, proxy-server logs, user session data, and cookies). This paper presents a mining process for web-server log files that extracts usage patterns for web-link prediction with the help of the proposed Markov model. The approach predicts popular web pages and user navigation behavior. The proposed technique clusters user navigation sessions by a pairwise similarity measure and combines the Markov model with the Apriori algorithm; web-link prediction is the process of predicting the pages a user will visit based on the pages previously visited by other users. Web pre-fetching techniques thus reduce web latency; they predict the web objects to be pre-fetched with high accuracy and good scalability, and they help achieve better predictive accuracy across different log files. The evolutionary approach helps train the model to make predictions commensurate with current web browsing patterns.","PeriodicalId":6330,"journal":{"name":"2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"75 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2013-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78873978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
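The first-order Markov step at the core of this abstract — estimate P(next page | current page) from transition counts in logged sessions, then recommend the most probable successor — can be sketched as follows. The session data and page names are illustrative assumptions, and the clustering and Apriori stages of the paper's hybrid technique are omitted.

```python
from collections import defaultdict, Counter

class MarkovPagePredictor:
    """First-order Markov model over page-visit sessions:
    P(next | current) estimated from transition counts in server-log sessions."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def fit(self, sessions):
        # Count every consecutive (current, next) page pair in each session.
        for session in sessions:
            for cur, nxt in zip(session, session[1:]):
                self.transitions[cur][nxt] += 1

    def predict(self, page):
        """Most likely next page, or None if the page was never a predecessor."""
        counts = self.transitions.get(page)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Hypothetical sessions reconstructed from a web-server access log.
sessions = [
    ["home", "products", "cart"],
    ["home", "products", "reviews"],
    ["home", "products", "cart", "checkout"],
]
model = MarkovPagePredictor()
model.fit(sessions)
print(model.predict("products"))  # "cart" (2 transitions) beats "reviews" (1)
```

A pre-fetching layer in the spirit of the paper would request `model.predict(current_page)` ahead of the user's click; higher-order or clustered variants refine the same counting scheme.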
Journal
2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT)