Pub Date: 2013-07-04. DOI: 10.1109/ICCCNT.2013.6726764
V. Janani, R. Chandrasekar
The OSPF convergence period takes several milliseconds to update the current topology information. During this transient period, packets may be dropped or loops may occur, which can result in network instability. Link-failure recovery is important because it finds an alternative path along which to divert packets, thereby reducing the packet loss rate. FEPS, an IPFRR approach, can successfully handle a single link or single node failure during the convergence period: it recomputes an alternate shortest path before the failure occurs. The scheme works during the OSPF convergence period and provides an immediate backup path. The proposed idea extends this existing protection method into the "Enhanced Fast Emergency Path Schema (EFEP-S)", an IPFRR approach that overcomes multiple link failures occurring within the local routing area of OSPF.
Title: "Enhanced fast emergency path schema (EFEP-S) to reduce packet loss during multiple independent link failure in OSPF routing". Published in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1-6.
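The IPFRR idea described above, precomputing an alternate path before any failure occurs, can be illustrated with a minimal stdlib-only sketch. The topology and node names below are invented for illustration and are not taken from the paper:

```python
import heapq

def shortest_path(graph, src, dst, banned=frozenset()):
    """Dijkstra that can ignore a failed link, given as a (u, v) pair."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if (node, nbr) in banned or (nbr, node) in banned:
                continue
            heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return None

# Hypothetical topology, not from the paper.
graph = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "D": 1},
         "C": {"A": 4, "D": 1}, "D": {"B": 1, "C": 1}}

primary = shortest_path(graph, "A", "D")        # A-B-D, cost 2
# Precompute a backup for every link on the primary path, before any failure:
links = list(zip(primary[1], primary[1][1:]))
backups = {link: shortest_path(graph, "A", "D", banned={link}) for link in links}
```

When a link on the primary path fails, the router switches to the corresponding precomputed backup immediately, instead of waiting for OSPF to reconverge.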
Pub Date: 2013-07-04. DOI: 10.1109/ICCCNT.2013.6726664
Jayant Adhikari, S. Patil
Nowadays the implementation of local clouds is popular, and organizations are becoming aware of the power consumed by underutilized resources. Reducing power consumption has become an essential requirement for cloud environments, not only to decrease operating cost but also to improve system reliability. Energy-aware computing aims not just to make algorithms run as fast as possible, but also to minimize the energy required for computation. Our DT-PALB (Double Threshold Energy Aware Load Balancing) algorithm maintains the state of all compute nodes and, based on utilization percentages, decides how many compute nodes should be operating. We show that our solution provides adequate availability of compute node resources while decreasing the overall power consumed by the local cloud, compared to load balancing techniques that are not power aware.
Title: "Double threshold energy aware load balancing in cloud computing". Published in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1-6.
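A double-threshold policy of the kind described can be sketched as follows. The threshold values and function shape are hypothetical, not taken from the paper:

```python
UPPER, LOWER = 0.75, 0.25      # hypothetical utilization thresholds

def decide(active_nodes, utilizations):
    """Return the number of compute nodes that should be powered on."""
    avg = sum(utilizations) / len(utilizations)
    if avg > UPPER:                          # overloaded: boot one more node
        return active_nodes + 1
    if avg < LOWER and active_nodes > 1:     # underloaded: power one down
        return active_nodes - 1
    return active_nodes                      # within the band: keep the pool

print(decide(3, [0.9, 0.8, 0.85]))   # high load -> scale up to 4
```

The band between the two thresholds prevents oscillation: a node is only added or removed when utilization clearly leaves the acceptable range.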
Pub Date: 2013-07-04. DOI: 10.1109/ICCCNT.2013.6726754
S. Prabhavathi, A. Rao, A. Subramanyam
Ensuring an efficient and fast data aggregation technique is a challenging task in large-scale wireless sensor networks (WSNs). The presence of a single sink, the distance between sink and cluster, and the dynamic tasks to be performed within different clusters make reliable data aggregation in WSNs extremely difficult. The proposed system therefore introduces a novel globular topology for WSNs that enables an efficient task allocation strategy in large-scale WSN architectures. The performance of the data aggregation process is further improved by using multiple mobile sinks, which adds a substantial benefit to the proposed task allocation policy. Simulation results in Matlab show satisfactory performance in terms of packet delivery ratio, delay minimization, and completion time of the data aggregation process.
Title: "Globular topology of large scale WSN for efficient load balancing using multiple sink node". Published in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1-6.
Pub Date: 2013-07-04. DOI: 10.1109/ICCCNT.2013.6726489
Jayanta Acharya, S. Gadhiya, Kapil S. Raviya
The quality of underwater images is directly affected by the water medium, the atmosphere, pressure, and temperature. This motivates image segmentation, which divides an image into parts that correlate strongly with objects so as to reflect the actual information collected from the real world. Image segmentation is a practical first step in virtually all automated image recognition systems, and feature extraction and recognition have numerous applications in telecommunication, weather forecasting, environment exploration, and medical diagnosis. Different segmentation techniques are available in the literature for segmenting or simplifying underwater images, and the performance of a segmentation algorithm depends on how well it simplifies the image. In this paper, several segmentation algorithms, namely edge-based segmentation, adaptive thresholding, K-means, Fuzzy C-Means (FCM), and Fuzzy C-Means with thresholding (FCMT), are applied to underwater images and compared using objective assessment parameters such as energy, discrete entropy, relative entropy, mutual information, and redundancy. The experimental results show that the FCMT algorithm outperforms the other methods on underwater images.
Title: "Objective assessment of different segmentation algorithm for underwater images". Published in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1-7.
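The core FCM iteration the paper builds on alternates between updating fuzzy memberships and cluster centers. A minimal pure-Python sketch on toy 1-D intensity values (the data and initialization are invented for illustration; FCMT would additionally threshold the memberships to harden the partition):

```python
def fcm(points, m=2.0, iters=50):
    """Fuzzy C-Means with 2 clusters on 1-D data."""
    centers = [min(points), max(points)]     # deterministic toy initialization
    c = len(centers)
    for _ in range(iters):
        # membership of point x in cluster i (standard FCM formula)
        u = []
        for x in points:
            d = [abs(x - v) + 1e-9 for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
                      for i in range(c)])
        # new centers: membership-weighted means
        centers = [sum((u[k][i] ** m) * points[k] for k in range(len(points))) /
                   sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return centers, u

pixels = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]    # toy intensity values
centers, memberships = fcm(pixels)
# hardening step: assign each pixel to its highest-membership cluster
labels = [max(range(len(centers)), key=lambda i: uk[i]) for uk in memberships]
```

On this well-separated data the centers converge near 0.15 and 0.85, and the hardened labels split the pixels into the two intensity groups.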
Pub Date: 2013-07-04. DOI: 10.1109/ICCCNT.2013.6726615
M. Pandey, Shwetank Shekhar, Joginder Singh, G. Agarwal, Nitin Saxena
In peripheral-to-peripheral communication, USB 2.0 continues to occupy a prominent position. With the proliferation of USB 2.0 peripherals, finding a standard, reliable, and robust approach to validating USB 2.0 on a System on Chip (SoC) is the need of the hour. The performance of USB depends fundamentally on its electrical characteristics. Using this innovative approach (validation using the U-Boot framework), we have root-caused several notorious issues that were hard to narrow down with the legacy approach. The methodology combines the legacy capability of low-level programming (JTAG) with application-level (high-level) programming (Linux). The paper is presented through case studies of issues that surfaced in the system only with this methodology.
Title: "A novel approach for USB2.0 validation on System on Chip". Published in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1-4.
Pub Date: 2013-07-04. DOI: 10.1109/ICCCNT.2013.6726698
Naveen Kumar, Suresh Kumar
The Semantic Web extends the WWW from a "web of documents" to a "web of data", making it easier to find, share, reuse, and combine information. The Semantic Web is best known as the web of linked data, which enables people to create data stores on the web, build vocabularies, and write rules for handling data. It is based on machine-readable information and builds on XML's capability to define customized tagging schemes and RDF's (Resource Description Framework) flexible approach to representing data. A key challenge for many Semantic Web applications is accessing RDF and OWL data sources. As a solution to this challenge, SPARQL, the W3C Recommendation for an RDF query language, supports querying multiple RDF graphs and OWL data. In this paper we propose a framework for querying RDF and OWL data using SPARQL. We query the data using the TWINKLE and PROTEGE tools, and we also present experimental results on improving query performance through query optimization.
Title: "Querying RDF and OWL data source using SPARQL". Published in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1-6.
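At its core, a SPARQL query is a graph-pattern match over RDF triples. A stdlib-only sketch of that idea, evaluating one triple pattern over an in-memory store (the triples and predicate names are invented for illustration, and a real engine such as the ones behind TWINKLE or PROTEGE does much more: joins, OWL inference, optimization):

```python
# Tiny in-memory "RDF" store of (subject, predicate, object) triples.
# All names here are hypothetical examples, not from the paper.
triples = {
    ("ex:Alice", "ex:knows",   "ex:Bob"),
    ("ex:Bob",   "ex:knows",   "ex:Carol"),
    ("ex:Alice", "ex:worksAt", "ex:Acme"),
}

def match(pattern, store):
    """Evaluate one SPARQL-style triple pattern; '?x' terms are variables."""
    results = []
    for triple in store:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value        # variable: bind it
            elif term != value:
                break                        # constant mismatch: skip triple
        else:
            results.append(binding)
    return results

# SELECT ?who WHERE { ex:Alice ex:knows ?who }
print(match(("ex:Alice", "ex:knows", "?who"), triples))
```

Query optimization of the kind the paper evaluates typically reorders such patterns so that the most selective one (fewest matching triples) is evaluated first.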
Pub Date: 2013-07-04. DOI: 10.1109/ICCCNT.2013.6726517
Rohit Minni, Kaushal Sultania, Saurabh Mishra, D. Vincent
In symmetric key cryptography the sender and the receiver share a common key. Asymmetric key cryptography involves generating two distinct keys, used for encryption and decryption respectively: the sender converts the original message to ciphertext using the public key, while the receiver deciphers it using his private key. This is also called public key cryptography. For every public key there exists only one private key that can decipher the encrypted text. The security of the RSA algorithm can be compromised by a mathematical attack, that is, by factoring the large modulus. It may also be compromised if one can guess the private key. Against the mathematical attack, we propose a secure algorithm in this paper. In this algorithm, we try to eliminate the distribution of n, the large number whose factorization, if found, compromises the RSA algorithm. We also present a comparative analysis of the proposed algorithm with the RSA algorithm.
Title: "An algorithm to enhance security in RSA". Published in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1-4.
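The mathematical attack the paper defends against can be demonstrated with textbook RSA on toy primes: anyone who factors the public modulus n recovers the private exponent. (Real keys use moduli of 2048 bits or more, where trial-division factoring is infeasible.)

```python
# Toy RSA with tiny primes (illustration only).
p, q = 61, 53
n, e = p * q, 17                       # public key (n, e)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                    # private exponent (modular inverse)

msg = 42
cipher = pow(msg, e, n)                # encryption
assert pow(cipher, d, n) == msg       # legitimate decryption

# The "mathematical attack": factor n, then rebuild the private key.
for f in range(2, n):
    if n % f == 0:
        p2, q2 = f, n // f
        break
d_attacker = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(cipher, d_attacker, n) == msg   # attacker decrypts too
```

This is why the paper's idea of not distributing n directly targets the attack surface: without the modulus, there is nothing to factor.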
Pub Date: 2013-07-04. DOI: 10.1109/ICCCNT.2013.6726512
S. Suchitra, S. Chitrakala
Content-based image retrieval (CBIR) has been one of the most active research areas in computer vision over the last ten years, and it is receiving more and more attention. Much research over the past decade has aimed at designing efficient image retrieval techniques for image and multimedia databases. Although a large number of indexing and retrieval techniques have been developed, there is still no universally accepted feature extraction, indexing, or retrieval technique. The amount of image data that has to be stored, managed, searched, and retrieved grows continuously in many fields of industry and research. One key challenge in CBIR is to develop a fast solution for indexing high-dimensional image content, which is crucial to building large-scale CBIR systems. This survey highlights the role of image indexing and points out the scope and challenges in designing image retrieval systems.
Title: "A survey on scalable image indexing and searching". Published in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1-5.
Pub Date: 2013-07-04. DOI: 10.1109/ICCCNT.2013.6726540
S. Swaminathan, S. Suganya, K. Ambika
In a wireless network, an adversary who is part of the network knows the protocol in use and other network secrets; a transceiver is enough to obtain the key and decrypt messages. Hence, a simple cryptographic mechanism is not enough to protect the message. Moreover, jamming can easily be performed by modifying the packet header. We therefore need a more advanced protection mechanism. At the message level, permutation and padding are used to protect the message; at the communication level, a puzzle is used to hide the key. A puzzle-solver module in the client system can solve the puzzle and recover the key.
Title: "Packet hiding using cryptography with advanced key management against counter jamming attacks in wireless sensor networks". Published in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1-4.
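The "hide the key behind a puzzle" step can be sketched with a generic hash puzzle. This is an illustration of the general client-puzzle technique, not the paper's exact scheme: the sender withholds the low bits of the key and publishes a hash commitment, and the receiver's solver module brute-forces the missing bits.

```python
import hashlib
import os

HIDDEN = 16                                  # puzzle difficulty, in bits

key = int.from_bytes(os.urandom(4), "big")   # the session key to hide
hint = key >> HIDDEN << HIDDEN               # key with its low bits zeroed
commitment = hashlib.sha256(str(key).encode()).hexdigest()

def solve(hint, commitment):
    """Receiver-side puzzle solver: try every missing-bit pattern."""
    for low in range(1 << HIDDEN):
        candidate = hint | low
        if hashlib.sha256(str(candidate).encode()).hexdigest() == commitment:
            return candidate

assert solve(hint, commitment) == key
```

The difficulty knob matters: the work is tuned so a legitimate receiver solves it quickly, while a jammer cannot recover the key before the transmission it wants to target has completed.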
Pub Date: 2013-07-04. DOI: 10.1109/ICCCNT.2013.6726522
C. S. Sasireka, P. Raviraj
In the data mining context, recent researchers have utilized branch-and-bound methods for efficient data analysis in tasks such as clustering, seriation, and feature selection. Traditional cluster search was performed with different partitioning schemes to optimize cluster formation. For image data, partitioning approaches are computationally complex due to the large data size and the uncertainty in the number of clusters. Recent work presented a new version of the branch-and-bound model, called the model selection problem, that handles these clustering issues more efficiently. Model-based clustering faces a chicken-and-egg problem: assigning a data point to the appropriate cluster requires the cluster parameters to be known, yet the parameters can be computed only once the cluster assignments are known. A data point is assigned to the cluster with the best-matching model, such as the Navigation and Cost model, the Segment Representation in SwiftRule, or the Analytic model. If the problem-specific bounds and/or added heuristics for the data points of the domain are exceeded, memory overheads, specific model selection, and uncertain data points cause various clustering abnormalities. In addition, cluster validity and purity need to be tested to establish the efficiency of the problem-specific bound on certain domains of image data clustering. Experimental evaluation of the model selection approach shows improvements in accuracy, computational complexity, and execution time compared to the Navigation and Cost model, the Segment Representation in SwiftRule, and the Analytic model.
Title: "Performance analysis of branch-and-bound approach with various model-selection clustering techniques for image data point". Published in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1-9.