Cardiovascular diseases -- coronary heart disease, angina pectoris, congestive heart failure, cardiomyopathy, and congenital heart disease -- are the leading cause of death in Asia. The health care industry collects a huge amount of data that is not properly mined or put to optimum use, so hidden patterns and relationships often go unexploited. Advanced data mining and modeling techniques can help overcome this. Health care knowledge management, especially for heart disease, can be improved by integrating data mining with decision support systems. Almost 60% of the world population falls victim to heart disease. Heart disease management is a complex task requiring much experience and knowledge. Traditionally, heart disease is predicted through a physician's examination or medical tests such as an ECG stress test, cardiac MRI, or CT. Computer-based information combined with advanced data mining techniques can yield more appropriate results. The main aim of this study is to detect the various causes of cardiovascular disease by means of machine-learning techniques with the help of clinical diagnosis; image analysis data is used for this detection. The aim of this research work is to develop a framework for detecting these causes by means of data mining and machine-learning techniques.
{"title":"Prediction and Diagnosis of Cardio Vascular Disease -- A Critical Survey","authors":"K. Mohan, Ilango Paramasivam, Subhashini Narayan","doi":"10.1109/WCCCT.2014.74","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.74","url":null,"abstract":"Cardiovascular diseases related · Coronary heart disease, Angina pectoris, congestive heart failure, Cardiomyopathy, congenital heart disease are the first cause of death in the Asian world. The health care industry collects a huge amount of data which is not properly mined and put into optimum use resulting in these hidden patterns and relationships often going unexploited. Advanced data mining modeling techniques can help overcome these conditions. The health care knowledge management, especially in heart disease, can be improved through the integration of data mining with decision support system. Almost 60% of the world population fall victim to the heart disease. Heart disease management is a complex task requiring much experience and knowledge. Traditional way of predicting heart disease is through physician's examination or a number of medical tests such as ECG Stress test, Heart MRI, CT etc., Computer based information along with advanced data mining techniques are used for appropriate results. The main aim of this study is to detect the various causes of cardiovascular diseases by means of machine-learning techniques with the help of clinical diagnosis. For detecting these image analysis data is used. 
The aim of this research work is to develop a framework for detecting causes by means of data mining and machine-learning techniques.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127336434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing is used broadly for services that include storage platforms and easy access to vast computation. Due to its low cost, robustness, elasticity, and ubiquitous nature, cloud computing changes the way entities manage their data. Secure outsourcing is an active research area in the cloud environment, since the cloud cannot be trusted. The situation becomes complex when the outsourced data sources in a cloud environment are shared by multiple outsourcers with different access rights. Key management is one of the important aspects of securing outsourced data in the cloud. In this paper, a new encryption algorithm called Key Insertion and Splay Tree encryption (KIST) is proposed. The algorithm makes use of an asynchronous key series and a splay tree for encryption, and provides a better key management approach for validating users in the cloud. Experimental results show that it is very efficient in providing authentication and security for cloud data.
{"title":"Key Insertion and Splay Tree Encryption Algorithm for Secure Data Outsourcing in Cloud","authors":"A. Mercy, G. Rani, A Marimuthu","doi":"10.1109/WCCCT.2014.14","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.14","url":null,"abstract":"Cloud computing is used broadly in various services which contain storage platform and very easy to get to vast calculation. Due to low cost, strength, elasticity and ubiquitous nature, cloud computing changes the path entities to manage their data. In cloud environment, secure outsourcing is one of the active research areas, as the cloud environment can't be trusted. The situation becomes complex when the outsourced data sources in a cloud environment are covenant with multiple outsourcers who are available with different access rights. Key management is one of the important aspects for securing outsourced data in cloud environment. In this paper, a new encryption algorithm called Key Insertion and Splay Tree encryption (KIST) proposed. This algorithm makes use of an asynchronous key series and splay tree for encryption. This encryption approach provides better key management approach for validating the users in the cloud. The experimental result shows that it is very efficient in providing authentication and security to the cloud data.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126716114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multifactorial, chronic, severe diseases such as diabetes and cancer have a complex relationship. When the body's glucose level becomes abnormal, it can lead to blindness, heart disease, kidney failure, and even cancer. Epidemiological studies have shown that several cancer types are more likely in patients with diabetes. Many researchers have proposed methods to diagnose diabetes and cancer. To improve classification accuracy and achieve better efficiency, a new approach based on the Adaptive Neuro Fuzzy Inference System (ANFIS) is proposed.
{"title":"A New Approach for Diagnosis of Diabetes and Prediction of Cancer Using ANFIS","authors":"C. Kalaiselvi, G. M. Nasira","doi":"10.1109/WCCCT.2014.66","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.66","url":null,"abstract":"The multi factorial, chronic, severe diseases like diabetes and cancer have complex relationship. When the glucose level of the body goes to abnormal level, it will lead to Blindness, Heart disease, Kidney failure and also Cancer. Epidemiological studies have proved that several cancer types are possible in patients having diabetes. Many researchers proposed methods to diagnose diabetes and cancer. To improve the classification accuracy and to achieve better efficiency a new approach like Adaptive Neuro Fuzzy Inference System (ANFIS) is proposed.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116588346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The nutrient status of a plant can be diagnosed by detecting the edges and veins of leaf images, since deficiency symptoms are generally found in interveinal areas and along the edges. This paper proposes an effective algorithm for detecting edges and veins in leaf images, developed using the Canny edge detection method. The algorithm provides accurate results and may prove to be an effective tool for detecting nutrient deficiency in leaves.
{"title":"An Effective Algorithm for Edges and Veins Detection in Leaf Images","authors":"R. Radha, S. Jeyalakshmi","doi":"10.1109/WCCCT.2014.1","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.1","url":null,"abstract":"Nutrient status of a plant can be diagnosed by detecting the edges and veins of the leaf images since the deficiency symptoms are generally found in interveinal areas and along the edges. This paper proposes an effective algorithm for detecting the edges and veins in leaf images. The proposed algorithm was developed using Canny edge detection method. The algorithm provides accurate and positive results and may prove to be an effective tool in nutrient deficiency detection in leaves.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125388855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
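The paper above builds on Canny edge detection but does not reproduce its implementation. As a minimal sketch of the idea, the snippet below marks edge pixels by thresholding the gradient magnitude of an image array -- only the first stage of the full Canny pipeline (which adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding); the toy "leaf" array and threshold are illustrative assumptions, not data from the paper.

```python
import numpy as np

def gradient_edges(img, threshold=0.5):
    """Mark pixels whose gradient magnitude exceeds a threshold.

    A simplified stand-in for the gradient-magnitude stage of the
    Canny detector.
    """
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)        # finite-difference gradients per axis
    magnitude = np.hypot(gx, gy)     # per-pixel edge strength
    return magnitude > threshold     # boolean edge map

# A toy "leaf" image: dark background with one bright vertical vein.
leaf = np.zeros((5, 5))
leaf[:, 2] = 1.0
edges = gradient_edges(leaf, threshold=0.4)
```

On this toy input the columns adjacent to the bright vein are flagged as edges, while the uniform vein interior and background are not -- the same property that makes gradient methods suited to tracing leaf veins.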
This paper presents a new approach to off-line handwritten digit recognition based on structural features that requires neither a thinning operation nor a size normalization technique. Four types of structural features are used: the number of holes, water reservoirs in four directions, maximum profile distances in four directions, and fill-hole density. A digit recognition system depends mainly on which features are used, and the main objective of this paper is to provide efficient and reliable techniques for recognizing handwritten digits. A Euclidean minimum distance criterion is used to find the minimum distances, and a k-nearest neighbour classifier is used to classify the digits. The MNIST database is used for both training and testing; of the 5000 numeral images tested, the proposed method achieved a 96.94% recognition rate.
{"title":"Handwritten Digit Recognition Using K-Nearest Neighbour Classifier","authors":"Ravi Babu, Venkateswarlu Professor, Head, Aneel Kumar Chintha","doi":"10.1109/WCCCT.2014.7","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.7","url":null,"abstract":"This paper presents a new approach to off-line handwritten digit recognition based on structural features which is not required thinning operation and size normalization technique. In this paper uses four different types of structural features namely, number of holes, water reservoirs in four directions, maximum profile distances in four directions, and fill-hole density for the recognition of digits. The digit recognition system mainly depends on which kinds of features are used. The main objective of this paper is to provide efficient and reliable techniques for recognition of handwritten digits. A Euclidean minimum distance criterion is used to find minimum distances and k-nearest neighbor classifier is used to classify the digits. A MNIST database is used for both training and testing the system. 5000 images are used to test the proposed method a total 5000 numeral images are tested and got 96.94% recognition rate.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125524216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
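The classifier described above -- Euclidean distances plus a k-nearest-neighbour majority vote -- can be sketched in a few lines. The two-dimensional feature vectors below are invented stand-ins for the paper's structural features (hole count, profile distance, etc.), not its actual data.

```python
import numpy as np

def knn_classify(train_X, train_y, sample, k=3):
    """Classify one feature vector by majority vote among the k
    training vectors with the smallest Euclidean distance."""
    dists = np.linalg.norm(train_X - sample, axis=1)    # Euclidean distances
    nearest = np.argsort(dists)[:k]                     # indices of k closest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                    # majority label

# Toy feature vectors for two digit classes.
X = np.array([[1.0, 0.2], [0.9, 0.1], [0.0, 0.9], [0.1, 1.0]])
y = np.array([0, 0, 1, 1])
pred = knn_classify(X, y, np.array([0.95, 0.15]), k=3)
```

With k=3 the query's two closest neighbours belong to class 0, so the vote returns 0; in the paper the same scheme operates on the structural-feature vectors extracted from MNIST images.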
Data marts are designed to fulfil the role of tactical decision support for managers accountable for a specific business area. Data from data marts is typically gathered and made available for business analysis. A planned ETL process populates data marts with the precise data warehouse information of interest. Extract-Transform-Load (ETL) is a process used to receive information from one or more sources, normalize it into a convenient schema, and append it into another repository. This paper analyses three decision-making methods and identifies the best methodology for improving sales promotion in a sales data mart, using arithmetic mean and rank matrix methods.
{"title":"A Comparative Analysis of Decision Making Methodologies in Sales Data Mart","authors":"A. Prema, V. Clara, A. Pethalakshmi","doi":"10.1109/WCCCT.2014.28","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.28","url":null,"abstract":"Data marts are nominated to accomplish the role of tactical decision support for managers accountable for a specific business area. The data from data marts is typically gathered and made offered for business analysis. A planned ETL process populates data marts within the focus on precise data warehouse information. Extract-Transform-Load (ETL) functions as a process which is used to receive information from one or more sources. The next step is to normalize information into a convenient schema and information is appended into some other repository. This paper analyses the three decision making methods and find out the best decision making methodology to improve sales promotion in sales data mart by using arithmetic mean and rank matrix methods.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"27 14","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132709267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
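The arithmetic-mean method named in the abstract can be illustrated with a tiny example: score each sales-promotion alternative by the mean of its criterion ratings and rank the means. The alternative names, criteria, and ratings below are hypothetical, since the paper's actual data mart contents are not given.

```python
# Criterion ratings (e.g. reach, cost, uplift) per promotion alternative;
# all names and numbers are illustrative.
scores = {
    "discount":     [8, 7, 9],
    "bundle_offer": [6, 9, 7],
    "loyalty_card": [7, 6, 6],
}

# Arithmetic mean per alternative, then rank best-first.
means = {alt: sum(vals) / len(vals) for alt, vals in scores.items()}
ranking = sorted(means, key=means.get, reverse=True)
```

A rank matrix method would instead rank the alternatives per criterion and aggregate the ranks; the paper compares such methods to pick the best one for the sales data mart.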
Wireless Sensor Networks (WSNs) are vulnerable to node capture attacks, in which an attacker captures one or more sensor nodes and reveals all stored security information, enabling him to compromise part of the WSN's communications. Due to the large number of sensor nodes and the lack of information about the deployment and hardware capabilities of each node, key management in wireless sensor networks has become a complex task. Limited memory resources and energy constraints are further key management issues in WSNs. Hence an efficient key management scheme is needed that reduces the impact of node capture attacks and consumes less energy. Simulation results show that the proposed technique efficiently increases the packet delivery ratio with reduced energy consumption.
{"title":"Security in Wireless Sensor Networks: Key Management Module in EECBKM","authors":"Dr. T. Lalitha, A. J. Devi","doi":"10.1109/WCCCT.2014.12","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.12","url":null,"abstract":"Wireless Sensor Networks (WSN) is vulnerable to node capture attacks in which an attacker can capture one or more sensor nodes and reveal all stored security information which enables him to compromise a part of the WSN communications. Due to large number of sensor nodes and lack of information about deployment and hardware capabilities of sensor node, key management in wireless sensor networks has become a complex task. Limited memory resources and energy constraints are the other issues of key management in WSN. Hence an efficient key management scheme is necessary which reduces the impact of node capture attacks and consume less energy. By simulation results, we show that our proposed technique efficiently increases packet delivery ratio with reduced energy consumption.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"27 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114143882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since the invention of microprocessors around 1970, CPU performance improvement through Instruction Level Parallelism (ILP) had been the main focus of the computer industry. Recently, ILP seems to have reached its limit, and together with the problems of power consumption and heat dissipation, the multi-core era emerged. The focus has shifted from ILP to Thread Level Parallelism (TLP) and the efficient use of multi-core processors. However, RAW hazard detection relies on complex hardware in current computers, which can make the CPU consume a lot of energy and the design more complex. Using the dataflow paradigm naturally eliminates RAW hazards. The new architecture presented here closely links ILP and TLP by combining the sequential and dataflow approaches. It is designed in VHDL and tested on an Altera DE2 board. With just two register sets, a tremendous performance improvement can be gained. This architecture not only reduces the latency of memory accesses but is also suitable for multithreaded multi-core platforms.
{"title":"VHDL Implementation of Scheduled Dataflow Architecture and the Impact of Efficient Way of Passing of Data","authors":"J. Arul, Han-Yao Ko, Hwa-Yuan Chung","doi":"10.1109/WCCCT.2014.62","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.62","url":null,"abstract":"Since the invention of microprocessors around 1970, CPU performance improvement together with the Instruction Level Parallelism (ILP) had been the main focus of the computer industry. Recently, ILP seemed to have reached its limit and together with the problem of power consumption and heat dissipation, emerged the multi-core era. The focus had shifted from ILP to Thread Level Parallelism (TLP) and efficient use of multi-core processors. However, the detection of RAW hazard technique relies on complex hardware in the current computers, which may cause the designers to make the CPU consume lot of energy and the design to be more complex. By using dataflow paradigm, this can naturally eliminate the RAW hazards. This new architecture uses a paradigm, to closely link the ILP and TLP by combining the sequential and dataflow approach. It is designed using VHDL language and tested on Alter a DE2 board. With just two register sets, tremendous amount of performance improvement can be gained. 
This architecture not only reduces the latency of memory accesses, but also can be suitable for multithreaded multi-core platforms.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124919295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recommender systems are among the most powerful tools in the digital world, usually providing explanations for their recommendations to help web users find products, people, and even missing friends in social communities. Various methods and approaches have been implemented in recommender systems; the most widely used are content-based and collaborative approaches, which may be studied to build the most personalized approaches and the best recommendations for end users. In this paper, the filtering technique is considered the background of every recommender system approach. Social Filtering and Meta Filtering are presented, which are likely to give better recommendations on social networking websites such as facebook.com and googleplus.com, as well as on blogging websites such as blogger.com and wordpress.com.
{"title":"Impact on Social Filtering and Meta Filtering in Recommender Systems","authors":"K. Thangadurai, M. Venkatesan","doi":"10.1109/WCCCT.2014.11","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.11","url":null,"abstract":"Recommender systems are one among the most powerful tools in this digital world, usually providing explanations to their recommendations to help the web users to find their products, peoples, even their missing friends in these social communities. Until now there are various methods and approaches implemented in this recommender system. The most widely used approaches are content-based and collaborative approaches. These approaches may be studied to have the most personalized approaches and to have the best recommendations for the end users. In this paper, the filtering technique is considered as the background of every recommender system approaches. Here, the Social Filtering and Meta Filtering are presented which is mostly like to be the better recommendation in the social networking websites like facebook.com, googleplus.com, etc., as well as in the blogging websites such as blogger.com, wordpress.com, etc.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129235113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
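The collaborative approach mentioned above rests on a simple primitive: measure how similar two users' rating vectors are, then weight similar users' opinions more heavily. A common choice is cosine similarity, sketched below with invented ratings (the paper does not specify its similarity measure, so this is an assumption for illustration).

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two user rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical ratings by three users over four items (0 = unrated).
alice = [5, 3, 0, 1]
bob   = [4, 3, 0, 1]
carol = [1, 0, 5, 4]

# Alice's tastes resemble Bob's far more than Carol's, so a social
# filter would weight Bob's ratings more heavily when recommending.
sim_ab = cosine_similarity(alice, bob)
sim_ac = cosine_similarity(alice, carol)
```

Social filtering extends this by drawing the neighbour set from a user's declared social connections rather than from the whole rating matrix.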
High-utility mining in a trajectory database refers to the discovery of itemsets with high utility, such as profit. A large number of candidate itemsets degrades mining performance in terms of execution time and space requirements, and the situation may become worse when the database contains many long transactions or long high-utility itemsets. In this paper, the UP-Growth+ algorithm is considered for mining high-utility itemsets, together with a set of strategies for pruning candidate itemsets. Earlier algorithms do not provide any compaction or compression mechanism for the density of bit-vector regions. To raise the density of the bit vector, the Bit Mask Search (BM Search) starts with an array list; from the root node, a BM Search representation is built for each frequent pattern, which gives better compression and compaction in the bit-search measure than UP-Growth+. A comparative analysis of UP-Growth+ and BM Search is described, and experimental results show that BM Search produces better results than UP-Growth+.
{"title":"A Frequent Trajectory Path Mining Using Bit Mask Search and UP Growth+ Algorithm","authors":"P. Geetha, E. Raj","doi":"10.1109/WCCCT.2014.36","DOIUrl":"https://doi.org/10.1109/WCCCT.2014.36","url":null,"abstract":"Mining is a great service entities in trajectory database that indicates to the exposure of entities with huge service like acquisition. The extensive number of contender entities degrades the mining achievement in terms of execution time and space stipulation. The position may become worse when the database consists of endless lengthy transactions or lengthy huge utility entity sets. In this paper, UP -Growth+ algorithm is consider, for mining huge utility entities with a set of adequate approaches for pruning contender entities. The previous algorithms do not contribute any compaction or compression mechanism with respect to density in bit vector regions. To raise the density in bit-vector the Bit Mask Search (BM Search) starts with an array list. From root node, a BM Search representation for each frequent pattern is designed which gives an acceptable compression and compaction in bit search measure than UP Growth+ algorithms. The comparative analysis of UP Growth+ and BM Search are described in this paper. An experimental result shows that BM search produces better result than UP Growth + algorithm.","PeriodicalId":421793,"journal":{"name":"2014 World Congress on Computing and Communication Technologies","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131032882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
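The bit-vector idea underlying BM Search can be illustrated independently of the paper's algorithm: give each item one integer bitmap over the transactions, then the support of an itemset is the population count of the AND of its members' bitmaps. This is a generic sketch of bitmap-based support counting, not the authors' BM Search procedure; the transactions are made up.

```python
def item_bitmaps(transactions, items):
    """One integer bitmap per item; bit i is set when transaction i
    contains the item."""
    bitmaps = {item: 0 for item in items}
    for i, t in enumerate(transactions):
        for item in t:
            if item in bitmaps:
                bitmaps[item] |= 1 << i
    return bitmaps

def support(itemset, bitmaps):
    """Support = population count of the AND of the member bitmaps."""
    acc = None
    for item in itemset:
        acc = bitmaps[item] if acc is None else acc & bitmaps[item]
    return 0 if acc is None else bin(acc).count("1")

# Four toy transactions over items a, b, c.
transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]
bitmaps = item_bitmaps(transactions, ["a", "b", "c"])
```

Because the AND and popcount operate on whole machine words, dense bitmaps make candidate counting fast -- which is why raising bit-vector density, as BM Search aims to do, pays off.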