Image texture classification and retrieval using self-organizing map
Pub Date: 2014-03-01 | DOI: 10.1109/ICISCON.2014.6965212
Vishal S. Thakare, N. Patil
In recent years there has been great interest in the field of image texture classification and retrieval. The increasing use of digital images has enlarged image databases, creating the need for systems that classify and retrieve images of interest efficiently and accurately. This paper presents an effective and accurate method to classify and retrieve images using self-organizing maps (SOM). The proposed method operates in two phases. In the first phase, a color histogram is used to extract color features, which are given to a self-organizing map for initial classification. In the second phase, the gray-level co-occurrence matrix (GLCM) is used to extract texture information from the images in each initial class, and these features are again given to a self-organizing map for final classification. The experimental results show the efficiency of the proposed method.
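A rough sketch of the two-phase pipeline this abstract describes: color-histogram features feed a small SOM for coarse grouping, then GLCM texture features refine each group with a second SOM. This is an editorial illustration, not the authors' implementation; the feature parameters, SOM size, and the numpy/scikit-image helpers (color_histogram, glcm_features, SOM) are assumptions.

# Illustrative sketch of the two-phase SOM classification pipeline (not the authors' code).
# Assumes images are small RGB uint8 numpy arrays; all parameter values are hypothetical.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops

def color_histogram(img, bins=8):
    """Concatenated per-channel histogram, normalized to sum to 1 (phase 1 features)."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(feats).astype(float)
    return h / h.sum()

def glcm_features(img):
    """A few Haralick-style statistics from the gray-level co-occurrence matrix (phase 2 features)."""
    gray = (rgb2gray(img) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2], levels=256, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

class SOM:
    """Tiny self-organizing map: a grid of weight vectors trained by neighborhood updates."""
    def __init__(self, rows, cols, dim, lr=0.5, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((rows, cols, dim))
        self.lr, self.sigma = lr, sigma
        self.grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    def winner(self, x):
        d = np.linalg.norm(self.w - x, axis=-1)
        return np.unravel_index(d.argmin(), d.shape)

    def train(self, data, epochs=50):
        for _ in range(epochs):
            for x in data:
                r, c = self.winner(x)
                dist2 = ((self.grid - np.array([r, c])) ** 2).sum(-1)
                h = np.exp(-dist2 / (2 * self.sigma ** 2))[..., None]
                self.w += self.lr * h * (x - self.w)

# Phase 1: color_histogram features -> SOM gives an initial class (winning node) per image.
# Phase 2: within each initial class, glcm_features -> a second SOM produces the final classes.

In a retrieval setting, the query image's winning node identifies its class, and the images mapped to the same node are returned.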
{"title":"Image texture classification and retrieval using self-organizing map","authors":"Vishal S. Thakare, N. Patil","doi":"10.1109/ICISCON.2014.6965212","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965212","url":null,"abstract":"Nowadays there has been great interest in field of image texture classification and retrieval. The increasing use of digital images has increased the size of image database which resulted in the need to develop a system that will classify and retrieve the required image of interest efficiently and accurately. This paper presents an effective and accurate method to classify and retrieve image using Self-organizing maps (SOM). The proposed method employs two phases, in the first phase color histogram is used to extract the color features and then the extracted features are given to Self-organizing map for initial classification. In the second phase Gray level co-occurrence matrix (GLCM) is used to extract the texture information from all images in each class from initial classification and then again given to Self-organizing map for final classification. The experimental results show the efficiency of the proposed method.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128221271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimum deployment of sensors in WSNs
Pub Date: 2014-03-01 | DOI: 10.1109/ICISCON.2014.6965229
Samayveer Singh, S. Chand, B. Kumar
Ant Colony Optimization (ACO) is one of the important techniques for solving optimization problems. It has been used to find locations for deploying sensors in a grid environment [12], in which the targets, called points of interest (PoI), lie on the grid points of a square grid. The sensor locations, which are also grid points, are determined by taking the sink location as the starting point for deployment. Although that work yields the optimum number of sensors needed to cover all targets for a given sink location, it does not indicate which sink location requires the minimum number of sensors. In this paper, we use the ACO technique to find the sink location for which the number of sensors is minimum among all available locations in the grid. In our algorithm, for each sensor we compute the sum of its distances to the targets within its range, and we then add these sums over all sensors in the grid; this total distance corresponds to the given sink location. We repeat the same computation for every candidate sink location and choose the one with the minimum total distance; this sink location requires the minimum number of sensors to cover all targets. We carry out simulations to demonstrate the effectiveness of the proposed work.
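The core of the sink-selection criterion (summed within-range distances, evaluated for every candidate sink) can be sketched as follows. A simple greedy cover that prefers points near the sink stands in for the ACO-based placement, and the grid size, sensing range and target positions are hypothetical.

# Illustrative sketch of sink selection by minimum total within-range distance (not the authors' code).
import itertools
import math

GRID = 10                                   # GRID x GRID candidate points
RANGE = 2.0                                 # sensing range (hypothetical)
TARGETS = [(1, 1), (3, 7), (8, 2), (6, 6), (9, 9)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_placement(sink, targets):
    """Place sensors on grid points until every target is covered.
    A greedy cover preferring points near the sink stands in for the ACO search."""
    uncovered, sensors = set(targets), []
    points = list(itertools.product(range(GRID), repeat=2))
    while uncovered:
        best = max(points, key=lambda p: (sum(dist(p, t) <= RANGE for t in uncovered),
                                          -dist(p, sink)))
        sensors.append(best)
        uncovered = {t for t in uncovered if dist(best, t) > RANGE}
    return sensors

def total_in_range_distance(sensors, targets):
    """Sum, over all sensors, the distances to the targets inside their range."""
    return sum(dist(s, t) for s in sensors for t in targets if dist(s, t) <= RANGE)

# Evaluate every candidate sink location and keep the one with the minimum total distance.
best_sink, best_score = None, float("inf")
for sink in itertools.product(range(GRID), repeat=2):
    sensors = greedy_placement(sink, TARGETS)
    score = total_in_range_distance(sensors, TARGETS)
    if score < best_score:
        best_sink, best_score = sink, score
print(best_sink, best_score)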
{"title":"Optimum deployment of sensors in WSNs","authors":"Samayveer Singh, S. Chand, B. Kumar","doi":"10.1109/ICISCON.2014.6965229","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965229","url":null,"abstract":"Ant Colony Optimization (ACO) is one of the important techniques for solving optimization problems. It has been used to find locations to deploy sensors in a grid environment [12], in which the targets, called point of interest (PoI), are located on grid points in a square grid. The locations of sensors, which are grid points, are determined by considering the sink location as the starting point for deploying sensors. Though that work provides optimum number of sensors to cover all targets with respect to the given sink location, yet it does not provide which sink location provides minimum number of sensors to cover the targets. In this paper, we use ACO technique and find the sink location for which the number of sensors is minimum among all available locations in the grid. In our algorithm, we compute sum of distances of the targets from that sensor, which are in its range. Then we add these sums for all sensors in the grid. This distance corresponds to the given sink location. We repeat same process for computing the distance by changing the sink location in the grid. We choose that sink location for which the distance is minimum and this sink location requires minimum number of sensors to cover all targets. We carry out simulations to demonstrate the effectiveness of our proposed work.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122059760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of software defects using Twin Support Vector Machine
Pub Date: 2014-03-01 | DOI: 10.1109/ICISCON.2014.6965232
Sonali Agarwal, Divya Tomar, Siddhant
In the current scenario, the crucial need for software developers is a substantial improvement in the quality of the software delivered to the end user. Lifecycle models, development methodologies and tools have been used extensively for this purpose, but the prime concern remains software defects, which hinder the goal of good-quality software. A great deal of research has been done on defect reduction, defect identification and defect prediction to address this problem. This work focuses on defect prediction, a fairly new field. Artificial intelligence and data mining are the most popular methods researchers have been using recently. This research applies the Twin Support Vector Machine (TSVM) to predict the number of defects in a new version of a software product. The model gives nearly perfect accuracy, far better than the models it is compared with. The TSVM-based defect prediction model with a Gaussian kernel function performs better than earlier approaches to software defect prediction. By predicting the defects in the new version, we take a step toward maintaining high software quality. The proposed model directly affects the testing phase of the software product by reducing the overall cost and effort involved.
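For reference, the linear Twin SVM of Jayadeva et al. solves two smaller quadratic programs, one per class; the sketch below gives that standard formulation and should be read as background rather than the exact model used in this paper. The Gaussian-kernel variant replaces the linear terms with kernel evaluations.

% Standard linear TWSVM: A holds the class-(+1) samples, B the class-(-1) samples,
% e_1, e_2 are all-ones vectors, c_1, c_2 > 0 are trade-off parameters.
\begin{align*}
\text{(TWSVM1)}\quad
&\min_{w_1,\, b_1,\, q}\ \tfrac{1}{2}\lVert A w_1 + e_1 b_1 \rVert^2 + c_1\, e_2^{\top} q
&&\text{s.t.}\ -(B w_1 + e_2 b_1) + q \ge e_2,\ q \ge 0, \\
\text{(TWSVM2)}\quad
&\min_{w_2,\, b_2,\, q}\ \tfrac{1}{2}\lVert B w_2 + e_2 b_2 \rVert^2 + c_2\, e_1^{\top} q
&&\text{s.t.}\ (A w_2 + e_1 b_2) + q \ge e_1,\ q \ge 0.
\end{align*}
\[
\text{class}(x) = \arg\min_{i \in \{1,2\}} \frac{\lvert x^{\top} w_i + b_i \rvert}{\lVert w_i \rVert},
\qquad
K(x, z) = \exp\!\left(-\frac{\lVert x - z \rVert^2}{2\sigma^2}\right)\ \text{(Gaussian kernel case)}.
\]

Each of the two hyperplanes passes close to one class while staying at distance at least one from the other, and a test point is labeled by the nearer hyperplane.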
{"title":"Prediction of software defects using Twin Support Vector Machine","authors":"Sonali Agarwal, Divya Tomar, Siddhant","doi":"10.1109/ICISCON.2014.6965232","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965232","url":null,"abstract":"Considering the current scenario, the crucial need for software developer is the generous enhancement in the quality of the software product we deliver to the end user. Lifecycle models, development methodologies and tools have been extensively used for the same but the prime concern remains is the software defects that hinders our desire for good quality software. A lot of research work has been done on defect reduction, defect identification and defect prediction to solve this problem. This research work focus on defect prediction, a fairly new filed to work on. Artificial intelligence and data mining are the most popular methods researchers have been using recently. This research aims to use the Twin Support Vector Machine (TSVM) for predicting the number of defects in a new version of software product. This model gives a nearly perfect efficiency which compared to other models is far better. Twin Support Vector Machine based software defects prediction model using Gaussian kernel function obtains better performance as compare to earlier proposed approaches of software defect prediction. By predicting the defects in the new version, we thereby attempt to take a step to solve the problem of maintaining the high software quality. This proposed model directly shows its impact on the testing phase of the software product by simply plummeting the overall cost and efforts put in.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"360 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122784366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DBIQS — An intelligent system for querying and mining databases using NLP
Pub Date: 2014-03-01 | DOI: 10.1109/ICISCON.2014.6965215
Rohit Agrawal, Amogh Chakkarwar, Prateek Choudhary, U. A. Jogalekar, D. Kulkarni
Facts play a critical role in our lives, and we now have powerful ways to accumulate and store them as meaningful information. A database is an organized collection of facts and information stored in the form of data, and a large organization has to maintain vast amounts of it. Capable tools exist to collect and store huge volumes of data in a proper format in a central repository, yet users still do not benefit from it to its full potential. Retrieving meaningful information from such repositories requires extensive knowledge of database languages such as SQL, and not everybody is able to write the complex SQL queries needed to extract the required information. This problem can be solved to a great extent by providing a natural language interface through which users can query a database in their own language. In this paper, we propose a Database Intelligent Querying System (DBIQS) that offers a fully automatic, simple, fast and reliable way to query a database.
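A toy pattern-based mapping from natural language to SQL, included only to make the idea concrete; the table name, column names and the two patterns are hypothetical and bear no relation to the actual DBIQS grammar.

# Toy illustration of a natural-language-to-SQL mapping (not the DBIQS design).
import re

PATTERNS = [
    # "show salary of employees in sales" -> SELECT salary FROM employees WHERE department = 'sales'
    (re.compile(r"show (\w+) of (\w+) in (\w+)", re.I),
     "SELECT {0} FROM {1} WHERE department = '{2}'"),
    # "how many employees in sales" -> SELECT COUNT(*) FROM employees WHERE department = 'sales'
    (re.compile(r"how many (\w+) in (\w+)", re.I),
     "SELECT COUNT(*) FROM {0} WHERE department = '{1}'"),
]

def nl_to_sql(question: str) -> str:
    """Return the SQL produced by the first matching template, or raise if none matches."""
    for pattern, template in PATTERNS:
        m = pattern.search(question)
        if m:
            return template.format(*m.groups())
    raise ValueError("no pattern matched: " + question)

print(nl_to_sql("Show salary of employees in sales"))

A full system would add tokenization, schema matching and query validation on top of such templates.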
{"title":"DBIQS — An intelligent system for querying and mining databases using NLP","authors":"Rohit Agrawal, Amogh Chakkarwar, Prateek Choudhary, U. A. Jogalekar, D. Kulkarni","doi":"10.1109/ICISCON.2014.6965215","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965215","url":null,"abstract":"Facts play a critical role in our lives. We have got radical ways to accumulate and store these facts as meaningful information. A database is an organized collection of facts and information stored in the form of data. A good organization has to maintain lots and lots of data. There are ultimate tools which can collect and store the huge data in its proper format in the main repository but still the users doesn't get benefited from it up to its full potential. Retrieving meaningful information from such repositories requires an extensive knowledge of database languages like SQL. However, not everybody is able to write the complex SQL queries to extract the needful information. This intricate problem can be solved to a great extent by providing a Natural Language Interface to the users using which they can query a database in their own language. In this paper, we propose a Database Intelligent Querying System (DBIQS) which portrays completely automatic, simple, fast and reliable way to query a database.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"358 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115874890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A real time scheduling algorithm for tolerating single transient fault
Pub Date: 2014-03-01 | DOI: 10.1109/ICISCON.2014.6965209
Bashir Alam, Arvind Kumar
A fault in a real-time system can cause a system failure if it is not detected and tolerated in time. Fault tolerance is an approach that allows a system to continue working properly in the presence of faults, and time is a critical aspect of every operation in a real-time system. Fault tolerance against transient faults can be achieved through checkpointing, but the issue is how many checkpoints should be applied to enhance the schedulability of the tasks. A real-time system must be fault tolerant to work in a fault-prone environment. In this paper we propose a new scheduling algorithm that finds the maximum number of checkpoints that can tolerate a single transient fault. The proposed approach tolerates a single transient fault while enhancing schedulability, and the tasks are scheduled in such a way that the system consumes less time to tolerate the fault.
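Under the common equidistant-checkpointing model (execution time E split into n+1 segments by n checkpoints of overhead C, with at most one segment re-executed after a single transient fault), the largest feasible checkpoint count for a deadline D can be found by direct search. This is a generic illustration of the trade-off, not necessarily the paper's exact analysis; the numbers are hypothetical.

# Worst-case completion time with n equidistant checkpoints under one transient fault:
# R(n) = E + n*C + E/(n+1), i.e. execution plus checkpoint overhead plus one re-executed segment.

def response_time(E: float, C: float, n: int) -> float:
    return E + n * C + E / (n + 1)

def max_checkpoints(E: float, C: float, D: float) -> int:
    """Largest n for which the task still meets its deadline D with one fault (-1 if none)."""
    best = -1
    n = 0
    while E + n * C <= D:               # beyond this even the fault-free case misses D
        if response_time(E, C, n) <= D:
            best = n
        n += 1
    return best

print(max_checkpoints(E=10.0, C=0.5, D=14.0))   # hypothetical task parameters

Note that R(n) is not monotone in n: more checkpoints shrink the re-executed segment but add overhead, which is exactly the schedulability trade-off the paper addresses.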
{"title":"A real time scheduling algorithm for tolerating single transient fault","authors":"Bashir Alam, Arvind Kumar","doi":"10.1109/ICISCON.2014.6965209","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965209","url":null,"abstract":"A fault in a real time system can cause a system failure if it is not detected and tolerated in time. Fault tolerance is an approach used to allow a system to continue the work properly in the presence of fault. Time is an important aspect in real time system to perform an operation. Fault tolerance can be achieved by checkpointing approach to tolerate transient fault. But the issue is how many checkpoints should be applied to enhance the schedulability of tasks. A real time system must be fault tolerated to work in fault prone environment. In this paper we propose a new scheduling algorithm for finding maximum number of checkpoints to tolerate single transient fault. The proposed approach is able to tolerate single transient fault and enhancing schedulability. The tasks are scheduled in such a way that the system consumes less time to tolerate fault.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"58 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116518913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Architectural design of a multi agent enterprise knowledge management system (MAEKMS) for e-health
Pub Date: 2014-03-01 | DOI: 10.1109/ICISCON.2014.6965225
P. Jain
A knowledge management system (KMS) manages, organizes, filters, analyses, and disseminates knowledge in all of its forms within an organization. The functions of a KMS are best performed by a network of agents that coordinate, collaborate and cooperate with each other to manage knowledge so that the right information is available to the right user at the right time and in the right format. It is necessary to use intelligent software agents in knowledge-based systems so that difficult tasks are decomposed into smaller sub-tasks and each sub-task is solved with the most appropriate reasoning technique, without increasing execution time. This has been achieved by combining the concepts of multi-agent systems and knowledge management systems into a Multi Agent Enterprise Knowledge Management System (MAEKMS), applied here to the field of e-health. This paper uses extended Gaia to design the system and its functional components. Part of the system is implemented using Java and MySQL, and the agents are developed using JADE (Java Agent Development Framework).
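A toy Python stand-in for two cooperating agents (a query agent forwarding a request to a knowledge agent), meant only to illustrate the message-passing style of coordination; the actual system is designed with extended Gaia and implemented with JADE in Java, and the topic and reply below are hypothetical.

# Toy illustration of agent cooperation via message queues (not the MAEKMS/JADE implementation).
import queue
import threading

KB = {"diabetes": "Guideline D-17: monitor HbA1c every 3 months."}   # hypothetical knowledge base

def knowledge_agent(inbox: queue.Queue, outbox: queue.Queue):
    """Looks up requested topics in the knowledge base and replies; stops on None."""
    while True:
        topic = inbox.get()
        if topic is None:
            break
        outbox.put(KB.get(topic, "no knowledge found for " + topic))

def query_agent(topic: str) -> str:
    """Forwards a user query to the knowledge agent and waits for the answer."""
    to_kb, from_kb = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=knowledge_agent, args=(to_kb, from_kb))
    worker.start()
    to_kb.put(topic)
    answer = from_kb.get()
    to_kb.put(None)                 # shut the knowledge agent down
    worker.join()
    return answer

print(query_agent("diabetes"))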
{"title":"Architectural design of a multi agent enterprise knowledge management system (MAEKMS) for e-health","authors":"P. Jain","doi":"10.1109/ICISCON.2014.6965225","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965225","url":null,"abstract":"A knowledge management system manages, organizes, filters, analyses, and disseminates knowledge in all of its forms within an organization. The functions of a KMS can be best performed by a network of agents, coordinating, collaborating and cooperating with each other so as to manage the knowledge in such a way that right information is available to the right user at the right time and in the right format. It's necessary to use intelligent software agents in knowledge based systems so that the difficult tasks are decomposed into smaller sub-tasks and that each sub-task is solved with the most appropriate reasoning technique without increasing the execution time. The same has been achieved by combining the concepts of Multi Agent Systems and Knowledge Management Systems and formulating a Multi Agent Enterprise Knowledge Management System (MAEKMS). Such a MAEMS is used in the field of e-health. This paper uses extended Gaia to design the system and its functional components. A part of the system is implemented using Java and MySql. The multiple agents are developed using JADE (Java Agent Development Environment).","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129393048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud and traditional ERP systems in small and medium enterprises
Pub Date: 2014-03-01 | DOI: 10.1109/ICISCON.2014.6965234
I. Saini, Ashu Khanna, S. K. Peddoju
Cloud computing is today's buzzword, attracting high interest across domains such as business enterprises, particularly Small and Medium Enterprises (SMEs). As it is a pay-per-use model, SMEs have high expectations that adopting it will make them not only flexible and hassle-free but also economical. In view of such expectations, this paper analyses the possibility of adopting cloud computing technologies in SMEs in light of economic concerns. Two hypotheses are developed to compare the average annual per-user costs of using Enterprise Resource Planning (ERP) systems in two ways: the traditional approach and the cloud approach. A web-based survey, complemented by interviews with peers, is conducted to collect data across the selected SMEs, and a t-test is performed to compare the two technologies against the proposed hypotheses. The results achieved are presented and discussed.
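The statistical comparison described here amounts to a two-sample t-test on per-user cost figures; the sketch below uses SciPy with made-up cost samples, not the paper's survey data.

# Two-sample t-test on average annual per-user ERP cost (traditional vs. cloud).
# The cost figures are hypothetical placeholders.
from scipy import stats

traditional_cost = [520, 610, 580, 495, 640, 570, 605]   # per-user annual cost, e.g. in USD
cloud_cost       = [410, 455, 430, 390, 470, 445, 420]

t_stat, p_value = stats.ttest_ind(traditional_cost, cloud_cost, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the mean per-user costs differ significantly.")
else:
    print("Fail to reject H0: no significant difference in mean per-user cost.")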
{"title":"Cloud and traditional ERP systems in small and medium enterprises","authors":"I. Saini, Ashu Khanna, S. K. Peddoju","doi":"10.1109/ICISCON.2014.6965234","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965234","url":null,"abstract":"Cloud computing is the new buzz word today attracting high interest among various domains like business enterprises, particularly in Small and Medium Enterprises. As it is a pay-per-use model, SMEs have high expectations that adapting this model will not only make them flexible, hassle free but also economic. In view of such expectations, this paper analyses the possibility of adapting Cloud computing technologies in SMEs in light of economic concerns. In this paper, two hypotheses are developed to compare the average annual peruser costs of using Enterprise Resource Planning systems in two ways, the traditional approach and the Cloud approach. A web based survey is conducted apart from the Interviews with the peers to collect the data across the selected SMEs and t-test is performed to compare both the technologies on the pro proposed hypothesis. Results achieved are produced and discussed.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132355574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pros and cons of load balancing algorithms for cloud computing
Pub Date: 2014-03-01 | DOI: 10.1109/ICISCON.2014.6965231
Bhushan Ghutke, U. Shrawankar
Cloud computing is growing rapidly, and clients are demanding more services and better flexibility. To meet these demands, cloud computing requires effective load balancing techniques. Load balancing is essential for efficient operation in distributed environments and has become a very interesting and important research area. Many algorithms have been proposed for assigning client requests to the available cloud nodes; they aim to enhance the overall performance of the cloud environment and provide users with more efficient services. In this paper, the different algorithms used to address load balancing and task scheduling in cloud computing are studied, and their pros and cons are discussed to provide an overview of the latest approaches in the field.
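Two of the simplest policies such surveys typically contrast, round robin and least-loaded assignment, can be expressed in a few lines; the node count and request sizes below are hypothetical.

# Minimal comparison of two request-assignment policies over a list of node loads.
from itertools import cycle

def round_robin(requests, n_nodes):
    """Assign requests to nodes in fixed rotation, ignoring current load."""
    loads = [0.0] * n_nodes
    nodes = cycle(range(n_nodes))
    for size in requests:
        loads[next(nodes)] += size
    return loads

def least_loaded(requests, n_nodes):
    """Assign each request to the node with the smallest current load."""
    loads = [0.0] * n_nodes
    for size in requests:
        loads[loads.index(min(loads))] += size
    return loads

requests = [5, 1, 8, 2, 9, 1, 7, 3]     # hypothetical request sizes
print("round robin :", round_robin(requests, 3))
print("least loaded:", least_loaded(requests, 3))

Round robin is cheap and stateless but ignores load imbalance; least-loaded balances better at the cost of tracking node state, which is the kind of trade-off the survey discusses.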
{"title":"Pros and cons of load balancing algorithms for cloud computing","authors":"Bhushan Ghutke, U. Shrawankar","doi":"10.1109/ICISCON.2014.6965231","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965231","url":null,"abstract":"Cloud Computing is growing rapidly and clients are demanding more services and better flexibility. For providing user demands, cloud computing require effective load balancing techniques in computing environment. Load Balancing is essential for efficient operations in distributed environments and it has become a very interesting and important research area. Many algorithms are used to provide various approaches and algorithms for assigning the client's requests to available cloud nodes. These algorithms are used to enhance the overall performance of the cloud environment and provide users the more efficient services. In this paper, the different algorithms are studied which are used for resolving the issue of load balancing and task scheduling in Cloud Computing and also discussed pros and cons of the algorithms to provide an overview of the latest approaches in the field.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116081579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-response metric analysis of Adaptive Fault Tolerant Replication (AFTR) routing protocol for MANET
Pub Date: 2014-03-01 | DOI: 10.1109/ICISCON.2014.6965208
Swati Saxena, M. Sinha
In general, the performance of routing protocols is analyzed through simulation. Tools such as statistical design of experiments also provide a variety of significant outcomes. To improve the functioning of a routing protocol, it is necessary to evaluate the factors that affect its performance, and such evaluation is carried out with the help of response metrics. Two kinds of evaluation are possible: single-response and multi-response. When a single factor is varied at a time and its effect on protocol performance is measured through each response metric individually, the evaluation is referred to as single-response metric analysis. In contrast, when the effects of such variation are quantified through more than one response metric simultaneously, it is referred to as multi-response analysis. This paper presents a multi-response analysis of the Adaptive Fault Tolerant Replication (AFTR) routing protocol using five response metrics (packet delivery ratio, delay, throughput, routing overhead and packet drop) under the variation of five factors (network size, transmission rate, mobility speed, optimal number of copies and pause time). The findings show that network size is the leading factor in optimizing all metrics of the AFTR protocol simultaneously; the second most important factor is the optimal number of copies, evaluated using the adaptive fault tolerant replication strategy.
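A main-effect calculation of the kind used in design-of-experiments factor analysis, shown for two factors at two levels on one response; the runs below are hypothetical and are not the paper's simulation results.

# Main effect of a factor = mean response at its high level minus mean response at its low level.
runs = [
    # (network_size level, num_copies level, packet_delivery_ratio) -- hypothetical runs
    (-1, -1, 0.78),
    (+1, -1, 0.91),
    (-1, +1, 0.82),
    (+1, +1, 0.95),
]

def main_effect(runs, factor_index):
    high = [r[-1] for r in runs if r[factor_index] == +1]
    low  = [r[-1] for r in runs if r[factor_index] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

print("network size effect:", main_effect(runs, 0))   # ~0.13 for these made-up runs
print("num copies effect  :", main_effect(runs, 1))   # ~0.04 for these made-up runs

Repeating the calculation for every factor and every response metric, and ranking the effects, is the multi-response analysis the abstract refers to.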
{"title":"Multi-response metric analysis of Adaptive Fault Tolerant Replication (AFTR) routing protocol for MANET","authors":"Swati Saxena, M. Sinha","doi":"10.1109/ICISCON.2014.6965208","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965208","url":null,"abstract":"In general, performances of routing protocols are analyzed through simulation. Tools like statistical design of experiments also provide a variety of significant outcomes. To improve the functioning of routing protocols it is required to evaluate the factors that affect its performance. Such evaluation is carried out with the help of response metrics. Two kinds of evaluation are possible: single-response and multi-response. By varying single factor at a time, the effect on the protocol performance is measured through response metrics individually. It is referred as single-response metric analysis. On contrary, the effects of such variation are quantified through more than one response metrics simultaneously and it is referred as multi-response analysis. The paper presents the multi-response analysis of Adaptive Fault Tolerant Replication (AFTR) routing protocol using five response metrics like packet delivery ratio, delay, throughput, routing overhead and packet drop under the variation of five factors named network size, transmission rates, mobility speed, optimal number of copies and pause time. The findings show network size is the leading factor that optimizes all the metrics of the AFTR protocol simultaneously. The second most important factor is the optimal number of copies which have been evaluated using adaptive fault tolerant replication strategy.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122368721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhance matching in multi-dimensional image reconstruction using stereo image sequences
Pub Date: 2014-03-01 | DOI: 10.1109/ICISCON.2014.6965214
S. Tazi, V. Jain
Over the last thirty years, much research has been done on reconstructing multi-dimensional objects from two-dimensional camera images. Capturing feature correspondences is one of the common tasks involved, e.g., relating the projections of the same multi-dimensional geometrical or textural feature across multiple images. Traditionally, structure estimation and reconstruction are based on either stereo image pairs or monocular image sequences; the limitations of both approaches motivate interest in measuring structure from stereo image sequences. In this paper we discuss an Enhance Matching in Multi-dimensional image Reconstruction (EMMR) algorithm using stereo image sequences. The proposed EMMR algorithm builds an incrementally dense representation from extracted image features. We are particularly concerned with this approach in the context of pineapple feature reconstruction and enhancement, and dock and fault analysis. Conclusions are drawn and possible future work is discussed.
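For context, the disparity search that stereo reconstruction builds on can be run with OpenCV's block matcher as below; this is generic stereo matching, not the EMMR algorithm, and the image file names are hypothetical.

# Generic stereo block-matching disparity computation (OpenCV), shown only to illustrate
# the correspondence search underlying stereo reconstruction.
import cv2

left  = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: numDisparities must be a multiple of 16, blockSize an odd window size.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)                  # fixed-point disparities (scaled by 16)

# With the calibration Q matrix from cv2.stereoRectify, the disparity map can be
# reprojected to 3-D points via cv2.reprojectImageTo3D(disparity, Q).
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)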
{"title":"Enhance matching in multi-dimensional image reconstruction using stereo image sequences","authors":"S. Tazi, V. Jain","doi":"10.1109/ICISCON.2014.6965214","DOIUrl":"https://doi.org/10.1109/ICISCON.2014.6965214","url":null,"abstract":"Since last thirty years, more research has been done in the area of reconstructing multi-dimensional objects from two-dimensional camera images. To capture the feature correspondences are one of the ordinary tasks among them. e.g., a same projection is addressed for multi-dimensional geometrical or textural feature of multiple images. Traditionally structure estimation reconstruction is based on either stereo image pairs or monocular image sequences. Boundaries in both of these approaches are generating to interest for measuring structure in stereo image sequences. In this paper we discuss, an Enhance matching in multi-dimension image reconstruction (EMMR) algorithm using stereo image sequences. The proposed EMMR algorithm has incrementally dense representation of feature extraction of images. We are expressly concerned this approach in the context of recognizing pineapple feature reconstruction and enhancing, dock and fault analysis. Conclusion is drained and possible future work is discussed.","PeriodicalId":193007,"journal":{"name":"2014 International Conference on Information Systems and Computer Networks (ISCON)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114682365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}