
2014 World Congress on Computing and Communication Technologies: Latest Publications

Automated Secured Disaster Recovery with Hyper-V Replica and PowerShell
Pub Date : 2014-04-03 DOI: 10.1109/WCCCT.2014.60
G. Jayaseelan, P. Charles
Until now, implementing High Availability and Disaster Recovery for business-critical applications required investing in geo-cluster technologies, which is simply not affordable for small and medium business customers. The arrival of Windows Server 2012 brings with it a perfectly acceptable disaster recovery solution for business applications running on Hyper-V [1] virtual machines. Hyper-V Replica enables Hyper-V hosts or clusters to replicate running VMs to remote Hyper-V hosts over a standard IP WAN connection. It provides a very cost-effective disaster recovery solution in the event of a primary data center outage. This paper proposes a new idea of combining Hyper-V Replica and PowerShell 3.0 to automate the disaster recovery process in a cost-effective and secure manner.
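The automation the abstract describes chains Hyper-V's replication cmdlets together. As a minimal sketch, the helper below composes (but does not execute) such a PowerShell pipeline for one VM; the server names, thumbprint, and the helper itself are illustrative assumptions, not the authors' script.

```python
def build_replica_commands(vm_name, replica_server, port=443, cert_thumbprint=None):
    """Compose the Hyper-V PowerShell cmdlet calls that enable replication
    for one VM and start the initial copy. Certificate-based (HTTPS)
    authentication is chosen when a thumbprint is supplied, matching the
    'secured' replication the paper targets."""
    auth = "Certificate" if cert_thumbprint else "Kerberos"
    enable = (f"Enable-VMReplication -VMName '{vm_name}' "
              f"-ReplicaServerName '{replica_server}' "
              f"-ReplicaServerPort {port} -AuthenticationType {auth}")
    if cert_thumbprint:
        enable += f" -CertificateThumbprint '{cert_thumbprint}'"
    start = f"Start-VMInitialReplication -VMName '{vm_name}'"
    return [enable, start]

# Example: replicate one VM over HTTPS to a (hypothetical) DR host.
cmds = build_replica_commands("SQL-VM01", "dr-host.example.com",
                              cert_thumbprint="AB12CD")
```

In a real deployment these strings would be run by a scheduled PowerShell job; composing them separately keeps the policy (which VMs, which DR host) testable without touching a hypervisor.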
Cited by: 1
A New Clustering and Preprocessing for Web Log Mining
Pub Date : 2014-04-03 DOI: 10.1109/WCCCT.2014.67
B. Maheswari, Dr. P. Sumathi
The World Wide Web is a massive repository of web pages and links, providing information on a vast range of topics to Internet users, and it continues to grow tremendously. Users' accesses are documented in web logs, and web usage mining is the application of mining techniques to these logs. Owing to this tremendous usage, log files are growing at a fast rate and becoming huge. Preprocessing plays a vital role in an efficient mining process, as log data is normally noisy and indistinct. Sessions and paths are reconstructed by appending missing pages during preprocessing. Additionally, the transactions that illustrate user behavior are constructed precisely in preprocessing by calculating the reference lengths of user accesses from the byte rate. Using web clustering, several types of objects can be clustered into different groups for various purposes. By drawing on the theory of distribution in Dempster-Shafer theory, the belief-function similarity measure in this algorithm adds to the clustering task the ability to capture the uncertainty in web users' navigation behavior. This paper reports experiments on the preprocessing and clustering of web logs. The experimental results show the considerable performance of the proposed algorithm.
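The session reconstruction step mentioned above can be sketched with the standard timeout heuristic; the 30-minute threshold and the (user, timestamp, url) tuple layout are assumptions for illustration, and the paper's byte-rate reference-length computation is not reproduced here.

```python
def sessionize(entries, timeout=1800):
    """Group (user, timestamp, url) log entries into sessions: a new
    session starts whenever the same user's gap between consecutive
    requests exceeds `timeout` seconds."""
    sessions = {}   # user -> list of sessions (each a list of urls)
    last_seen = {}  # user -> timestamp of that user's previous request
    for user, ts, url in sorted(entries, key=lambda e: (e[0], e[1])):
        if user not in sessions or ts - last_seen[user] > timeout:
            sessions.setdefault(user, []).append([])  # open a new session
        sessions[user][-1].append(url)
        last_seen[user] = ts
    return sessions

log = [("u1", 0, "/a"), ("u1", 60, "/b"), ("u1", 4000, "/c"), ("u2", 10, "/a")]
s = sessionize(log)
# u1 splits into two sessions (gap of 3940 s > 1800 s); u2 has one.
```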
Cited by: 20
An Approach to Improve Precision and Recall for Ad-hoc Information Retrieval Using SBIR Algorithm
Pub Date : 2014-04-03 DOI: 10.1109/WCCCT.2014.68
R. T. Selvi, E. Raj
Information retrieval is the process of finding documents in a collection relevant to a specific topic. The information need is expressed by the user as a query. Documents that satisfy the given query in the judgment of the user are said to be relevant; documents not on the given topic are non-relevant. An IR engine may use the query to classify the documents in a collection, returning to the user the subset of documents that satisfies some classification criterion. Several search engines can find information in repositories containing large amounts of unstructured text data. However, the ad-hoc retrieval task of finding documents within a corpus such as the Bible that are relevant to the user remains a hard challenge. Sometimes relevant documents do not contain the specified keyword: the absence of a given term from a document does not necessarily mean the document is not relevant, because two terms can be semantically similar although lexicographically different. In this paper a new algorithm called "Semantic based Boolean Information Retrieval" (SBIR) is proposed to retrieve documents containing semantically similar terms, enhancing the performance of the Boolean retrieval model by improving recall and precision.
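The core idea, matching on semantically similar terms rather than exact keywords, can be sketched by expanding each query term through a synonym map before the Boolean AND test. The toy synonym table below is an assumption for illustration, not the paper's SBIR semantic resource.

```python
def sbir_style_match(query_terms, document, synonyms):
    """Boolean retrieval with AND semantics, where each query term also
    matches any of its listed synonyms (a crude stand-in for semantic
    similarity)."""
    doc_words = set(document.lower().split())
    for term in query_terms:
        variants = {term} | set(synonyms.get(term, []))
        if not variants & doc_words:  # no variant present -> reject document
            return False
    return True

syn = {"lord": ["god", "yahweh"], "ship": ["boat", "vessel"]}
doc = "and god saw the boat upon the waters"
hit = sbir_style_match(["lord", "ship"], doc, syn)        # matches via synonyms
miss = sbir_style_match(["lord", "ship"], "in the beginning", syn)
```

A strict Boolean model would reject `doc` outright, since neither literal query term appears; the expansion is what recovers the relevant document and thus improves recall.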
Cited by: 10
Time Account Based Path Stabilization in MANET
Pub Date : 2014-04-03 DOI: 10.1109/WCCCT.2014.46
T. Manimegalai, C. Jayakumar
A MANET is a collection of wireless nodes with a high mobility ratio. The source node must construct a path to its destination in order to communicate. Because the nodes move very fast, a constructed path cannot persist; a path may cease to exist even immediately after its construction. Although many mobility models and routing protocols exist, finding such a path is still a challenge for mobile nodes in a MANET environment. Frequent path failures are unacceptable for certain applications in which communication is important and urgent. In this research work an algorithm named "Time Account Based Path Stabilizer (TABPS)" is used to improve the stability of the constructed path between a source and destination pair.
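The idea of preferring a stable path can be sketched as a max-min rule: among candidate routes, choose the one whose weakest link is predicted to survive longest. The per-link lifetime estimates below are assumed inputs for illustration; the abstract does not specify TABPS's actual time-accounting mechanism.

```python
def most_stable_path(candidate_paths):
    """Each candidate is (path, [predicted lifetime of each link, in s]).
    A path fails as soon as its first link fails, so rank paths by the
    minimum link lifetime and keep the best one."""
    return max(candidate_paths, key=lambda pc: min(pc[1]))[0]

paths = [
    (["S", "A", "D"],      [12.0, 3.5]),        # weakest link lives 3.5 s
    (["S", "B", "C", "D"], [9.0, 8.0, 7.5]),    # weakest link lives 7.5 s
]
best = most_stable_path(paths)
# The longer route wins: every one of its links outlives the short
# route's weakest link, so it should need fewer repairs.
```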
Cited by: 0
Using K-Means Clustering Technique to Study of Breast Cancer
Pub Date : 2014-04-03 DOI: 10.1109/WCCCT.2014.64
R. Radha, P. Rajendiran
Breast cancer is one of the most common cancers worldwide; in developed countries, about one in eight women develop breast cancer at some stage of their life. Early diagnosis of breast cancer plays a very important role in treatment of the disease. With the goal of identifying genes that are more strongly correlated with breast cancer prognosis, we use data mining techniques to study the gene expression values of breast cancer patients with known clinical outcomes. K-means clustering is used to compare the results on test data. As a result, a set of genes is identified as potential biomarkers for breast cancer prognosis that can categorize patients based on certain attributes. The discovered gene expression levels are compared with gene subsets identified in similar studies using clustering techniques.
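The gene-selection goal stated above, ranking genes by how strongly their expression tracks clinical outcome, can be sketched with a simple Pearson-correlation filter; the toy expression matrix is an assumption, and this filter is a generic illustration rather than the paper's pipeline.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_genes(expression, outcome):
    """expression: {gene: [value per patient]}; outcome: 0/1 per patient.
    Rank genes by |correlation| with the outcome, strongest first."""
    return sorted(expression,
                  key=lambda g: abs(pearson(expression[g], outcome)),
                  reverse=True)

expr = {
    "geneA": [1.0, 1.1, 5.0, 5.2],   # expression tracks the outcome closely
    "geneB": [2.0, 4.0, 2.1, 4.2],   # expression is unrelated to the outcome
}
ranked = rank_genes(expr, outcome=[0, 0, 1, 1])
```

Genes surviving such a filter would then be the attributes fed to the k-means step described in the abstract.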
Cited by: 19
Comparative Study of Proactive and Reactive AdHoc Routing Protocols Using Ns2
Pub Date : 2014-04-03 DOI: 10.1109/WCCCT.2014.40
S. Vanthana, V. Prakash
A Mobile Ad-Hoc Network (MANET) is a collection of wireless mobile nodes forming a temporary network without any centralized access point or administration. MANET protocols face high challenges due to dynamically changing topologies, low transmission power, and asymmetric network links. This paper compares the performance of two on-demand reactive routing protocols, AODV and DSR, which rely on gateway discovery algorithms, with a proactive routing protocol, DSDV, which constantly updates the network topology information available to all nodes, across different MANET scenarios. The comparison is made on the basis of performance metrics such as throughput, packet loss, and end-to-end delay, using the NS-2 simulator on the Ubuntu operating system (Linux). The simulations vary the packet size, the number of simultaneously connected nodes, and the pause time, and the results are analyzed.
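Two of the metrics compared in the paper, packet loss and end-to-end delay, can be computed from per-packet send/receive records. The record layout below is a simplified stand-in assumed for illustration, not the actual NS-2 trace format.

```python
def summarize(sent, received):
    """sent: {pkt_id: send_time}; received: {pkt_id: recv_time}.
    Returns (packet loss ratio, mean end-to-end delay in seconds over
    delivered packets, delivered-packet count)."""
    delivered = [p for p in sent if p in received]
    loss = 1.0 - len(delivered) / len(sent)
    delay = sum(received[p] - sent[p] for p in delivered) / len(delivered)
    return loss, delay, len(delivered)

sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}
recv = {1: 0.05, 2: 0.17, 4: 0.42}   # packet 3 was dropped en route
loss, delay, n = summarize(sent, recv)
```

An actual NS-2 study would fill the two dictionaries by parsing the simulator's trace file; the metric arithmetic stays the same.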
Cited by: 23
Retinal Image Analysis Using Contourlet Transform and Multistructure Elements Morphology by Reconstruction
Pub Date : 2014-04-03 DOI: 10.1109/WCCCT.2014.15
D. Karthika, A. Marimuthu
Retinal images play a vital role in applications such as ocular fundus operations and human recognition; they are also used to detect diabetes at an early stage by evaluating all the retinal blood vessels together. Detecting blood vessels in retinal images is generally a slow process. In this paper, a novel algorithm based on the contourlet transform is proposed to detect blood vessels efficiently. The contourlet transform, an extension of the wavelet transform, is used to enhance the retinal image, which is then used for segmentation. The existing curvelet transform has the disadvantage that its directional specificity is limited, so its effectiveness is poor. The directionality of the multistructure-elements technique makes it an effective tool for edge detection. Therefore, morphology operators with multistructure elements are applied to the enhanced image in order to locate the ridges of the retinal image. Morphological operators by reconstruction then eradicate the ridges not related to the vessel tree while protecting the thin vessels, which remain unaffected. This approach uses multistructure elements in order to improve the performance of morphological operators by reconstruction. An improved Otsu thresholding method is combined with Strongly Connected Component Analysis (SCCA), which identifies the remaining ridges belonging to vessels. The experimental results show that the proposed method obtains 96% accuracy in detecting blood vessels, and it is compared with other existing approaches.
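The improved thresholding step builds on the classical Otsu method, which can be sketched in a few lines: pick the gray level that maximizes the between-class variance of the histogram. This is the plain 8-bit textbook version; the paper's vessel-specific refinements and the SCCA step are not reproduced.

```python
def otsu_threshold(pixels, levels=256):
    """Classical Otsu thresholding: test every candidate threshold t and
    keep the one maximizing the between-class variance
    w0 * w1 * (mu0 - mu1)^2 of the gray-level histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])          # pixel count below the threshold
        w1 = total - w0             # pixel count at or above it
        if w0 == 0 or w1 == 0:
            continue                # degenerate split, skip it
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# A bimodal image: dark background (level 10) and bright vessels (level 200);
# the chosen threshold should fall between the two modes.
t = otsu_threshold([10] * 50 + [200] * 50)
```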
Cited by: 14
Discovering Students' Academic Performance Based on GPA Using K-Means Clustering Algorithm
Pub Date : 2014-02-01 DOI: 10.1109/WCCCT.2014.75
J. Jamesmanoharan, S. Ganesh, M. L. P. Felciah, A. K. Shafreenbanu
Nowadays, in higher education, the academic community faces issues in monitoring and analyzing the progress of students' academic performance, and in the real world, predicting student performance is a challenging task. Currently, cluster analysis is used to analyze students' results, with statistical algorithms segregating their marks based on performance, but this is not very effective; we therefore add a k-means clustering algorithm combined with a deterministic model to analyze and monitor students' results and performance. With this k-means clustering we can monitor the progress of students' academic performance in higher institutions more efficiently and provide accurate results in a short period of time. In this paper, we apply the methodology to student test scores to discover various interesting patterns.
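The clustering step can be sketched with one-dimensional k-means over GPA values (Lloyd's algorithm: assign, re-center, repeat). The initial centers and the GPA list are illustrative assumptions, not the paper's dataset.

```python
def kmeans_1d(values, centers, iters=20):
    """Lloyd's algorithm on scalars: assign each value to its nearest
    center, then move each center to the mean of its members."""
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(c - v))
            groups[nearest].append(v)
        # Empty clusters keep their old center instead of dividing by zero.
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)

gpas = [2.1, 2.3, 2.4, 3.0, 3.1, 3.9, 4.0]
centers = kmeans_1d(gpas, centers=[2.0, 3.0, 4.0])
# Converges to roughly one center per performance band (low / mid / high).
```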
Cited by: 26
LASA-HEU: Heuristic Approach for Service Selection in Composite Web Services
Pub Date : 2014-02-01 DOI: 10.1109/WCCCT.2014.73
N. Sasikaladevi, L. Arockiam
Numerous functionally similar services evolve day by day, and selecting the service that exactly matches a consumer's requirements is a tedious task. The QoS-based Service Selection Problem (SSP) is the process of allocating a QoS-based external web service component to each task of the workflow that describes a composite web service, so that the aggregate QoS of the composite web service is the best; by its nature it is a planning problem. This paper gives a brief overview of a heuristic-based service selection algorithm (LASA-HEU) for the MMKP form of the reliability-enforced SSP. It also compares the proposed LASA-HEU with an existing heuristic-based SSA and shows that, in terms of reliability, LASA-HEU performs better.
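Picking one candidate service per workflow task under a shared constraint is what makes this an MMKP. As a sketch only (the abstract does not detail LASA-HEU itself), a generic greedy heuristic chooses, for each task, the affordable candidate with the best reliability-per-cost ratio; the task list, services, and budget below are assumptions.

```python
def greedy_select(tasks, budget):
    """tasks: {task: [(service, reliability, cost), ...]}.
    Greedily choose one service per task, favouring high reliability per
    unit cost while staying within the shared cost budget."""
    plan, remaining = {}, budget
    for task, candidates in tasks.items():
        feasible = [c for c in candidates if c[2] <= remaining]
        if not feasible:
            return None                          # no affordable candidate
        svc = max(feasible, key=lambda c: c[1] / c[2])
        plan[task] = svc[0]
        remaining -= svc[2]
    return plan

tasks = {
    "pay":  [("svcA", 0.99, 5.0), ("svcB", 0.95, 2.0)],
    "ship": [("svcC", 0.90, 4.0), ("svcD", 0.97, 6.0)],
}
plan = greedy_select(tasks, budget=8.0)
```

Greedy choices like this are fast but can miss the optimum, which is why MMKP service selection attracts purpose-built heuristics such as the one the paper proposes.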
Cited by: 3
Computer Vision Image Enhancement for Plant Leaves Disease Detection
Pub Date : 2014-02-01 DOI: 10.1109/WCCCT.2014.39
K. Thangadurai, K. Padmavathi
Enhanced images have higher quality and clarity than the original captured images. Computer vision image enhancement (color conversion and histogram equalization) is used in real-time applications such as remote sensing, medical image analysis, and plant leaf disease detection. The original captured images are RGB images, combinations of the primary colors red, green, and blue. Applications are difficult to implement on them because each color channel ranges from 0 to 255, whereas grayscale images range only between 0 and 1, which makes many applications easy to implement. Histogram equalization is used to increase image clarity. Grayscale conversion and histogram equalization are used for plant leaf disease detection.
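The two steps named above, RGB-to-grayscale conversion and histogram equalization, can be sketched in plain Python for an 8-bit image. The luminance weights follow the common ITU-R BT.601 convention; the three-pixel "image" is an illustrative assumption.

```python
def to_gray(rgb_pixels):
    """Convert (r, g, b) triples in 0..255 to 8-bit luminance values."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b)
            for r, g, b in rgb_pixels]

def equalize(gray, levels=256):
    """Histogram equalization: remap each level through the normalized
    cumulative histogram so intensities spread over the full range."""
    hist = [0] * levels
    for g in gray:
        hist[g] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)   # first non-empty bin's CDF
    n = len(gray)
    return [round((cdf[g] - cdf_min) / (n - cdf_min) * (levels - 1))
            for g in gray]

gray = to_gray([(52, 52, 52), (60, 60, 60), (180, 180, 180)])
eq = equalize(gray)   # the low-contrast values get stretched apart
```

After equalization the three nearby dark/bright levels are pushed toward the extremes of the 0..255 range, which is exactly the clarity gain the abstract relies on for leaf-lesion visibility.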
Cited by: 62