We present a tool that automatically generates CCS for distributed multithreaded systems coded in Java. Distributed systems receive increasing emphasis today, and verifying them is hard. CCS (Calculus of Communicating Systems) is one way to write formal specifications for concurrent systems, but writing CCS by hand for a given distributed system is difficult, and in a dynamic environment, where threads and processes are created at runtime, it is nearly impossible to analyze, audit, and codify them simultaneously. To meet this verification need, we developed a tool named Refjav that takes as input the Java source of the system to be verified and produces as output its equivalent CCS, under a set of reasonable restrictions. At present we handle synchronization, dynamic thread creation, and communication via RMI in distributed, multithreaded Java programs. The paper covers the various constructs of the Refjav tool and its methodology. Our work yields the CCS of a given distributed multithreaded Java program, which can then be checked easily by model checkers against properties expressed in the modal μ-calculus. As a result, we can comment on system properties such as deadlock and fairness without working through the actual details and constructs of CCS.
Title: Refjav: tool for automated verification by generating CCS of multithreaded Java system in distributed environment
Authors: Arpit, Ashwini Kumar
DOI: https://doi.org/10.1145/2007052.2007082
Published: 2011-07-21, International Conference on Advances in Computing and Artificial Intelligence
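The kind of check a model checker would run on the CCS that such a tool emits can be illustrated on a toy scale. The sketch below is our own illustration, not Refjav's implementation: two processes acquire two shared locks in opposite order, their interleavings form a labeled transition system, and an exhaustive search finds a deadlocked state, i.e. a state where neither process has finished yet no step is enabled.

```python
# Two processes, each a list of (op, lock) steps; the classic
# opposite-lock-order scenario that produces a deadlock.
P = [("acq", "a"), ("acq", "b"), ("rel", "b"), ("rel", "a")]
Q = [("acq", "b"), ("acq", "a"), ("rel", "a"), ("rel", "b")]

def successors(state):
    """Yield states reachable by letting either process take its next step.
    A state is ((pc_P, pc_Q), frozenset of (lock, holder) pairs)."""
    pcs, held = state
    for i, prog in enumerate((P, Q)):
        pc = pcs[i]
        if pc >= len(prog):
            continue
        op, lock = prog[pc]
        holders = dict(held)
        if op == "acq":
            if lock in holders:      # lock busy: this step blocks
                continue
            holders[lock] = i
        else:
            del holders[lock]
        new_pcs = tuple(pc + 1 if j == i else pcs[j] for j in range(2))
        yield (new_pcs, frozenset(holders.items()))

def find_deadlocks():
    """Exhaustively explore the product state space; a deadlock is a
    state where some process is unfinished and no step is enabled."""
    start = ((0, 0), frozenset())
    seen, frontier, deadlocks = {start}, [start], []
    while frontier:
        state = frontier.pop()
        succs = list(successors(state))
        pcs, _ = state
        if not succs and any(pc < len(prog) for pc, prog in zip(pcs, (P, Q))):
            deadlocks.append(state)
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return deadlocks

deadlocks = find_deadlocks()
```

On this example the search finds exactly the state where each process holds one lock and waits for the other; a μ-calculus deadlock property over the generated CCS expresses the same reachability question.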
The project manager is responsible for ensuring that the company does not suffer time and cost overruns; unfortunately, accurate effort forecasts come only from mature organizations, since others lack historical databases. When estimating the total functionality of a project, a complexity matrix must be formed for the data and transactional function points, but at a certain point adding one more RET or DET does not change the resulting measurement, so the final count does not correspond to a sufficiently accurate function point value. To address this, we adopt a soft computing technique. This work proposes using concepts and properties from fuzzy set theory to extend FPA to FFPA (Fuzzy Function Point Analysis). Fuzzy theory seeks to build a formal quantitative structure capable of emulating the imprecision of human knowledge. With the function points generated by FFPA, we observe error correction in function point estimation, so derived values such as development cost and schedule can be determined more precisely.
Title: Error correction in function point estimation using soft computing technique
Authors: K. K. Rao, G. Raju
DOI: https://doi.org/10.1145/2007052.2007092
Published: 2011-07-21, International Conference on Advances in Computing and Artificial Intelligence
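The core idea of fuzzifying function points can be sketched briefly. In crisp FPA, a file's weight jumps in steps (e.g. 7, 10, or 15 points for an internal logical file) as DET counts cross table boundaries; with fuzzy membership functions the weight varies smoothly, so one extra DET no longer flips the estimate. The breakpoints and weights below are illustrative, not the IFPUG tables or the paper's exact membership functions.

```python
# Crisp FPA assigns a low/average/high weight from DET range tables;
# crossing a boundary by a single DET jumps the weight. Fuzzifying the
# DET axis interpolates between the weights instead.
WEIGHTS = {"low": 7.0, "avg": 10.0, "high": 15.0}

def trap(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def fuzzy_weight(det):
    """Defuzzified file weight: membership-weighted average of the
    low/average/high weights (centre of gravity over singletons)."""
    mu = {
        "low": trap(det, -1, 0, 15, 25),
        "avg": trap(det, 15, 25, 45, 55),
        "high": trap(det, 45, 55, 10**9, 10**9 + 1),
    }
    total = sum(mu.values())
    return sum(mu[k] * WEIGHTS[k] for k in mu) / total
```

A file deep inside the "low" band still scores 7.0, but one near a boundary (say 20 DETs) gets a blended weight around 8.5 instead of snapping between 7 and 10.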
Hierarchy generation is a familiar and evolving concept in relational database management systems. Many methods are currently available to generate and manage hierarchies. The carry-along sort algorithm is a newer method that generates hierarchies dynamically; it is a variant of the materialized-path approach. In this paper, we study the complexity of the "Carry-along Sort" algorithm in terms of time and space, so that its efficiency can be compared with other existing methods. Complexity is about performance measures, not about the intricacy of programs. Which operations are time-critical, and what constitutes a space cell, varies by application, as we shall see. Complexity analysis plays a vital role when choosing an algorithm to solve a given problem.
Title: Complexity study on "Carry-along Sort" algorithm
Authors: P. Sethuraman, L. Rajamani
DOI: https://doi.org/10.1145/2007052.2007078
Published: 2011-07-21, International Conference on Advances in Computing and Artificial Intelligence
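The materialized-path idea underlying carry-along sorting can be shown in a few lines. This is a minimal sketch in the spirit of the approach, not the paper's exact algorithm: each row carries its full ancestor path as a sort key, so a single ordinary sort emits the rows in depth-first hierarchy order, and the complexity terms the paper analyzes (key length proportional to depth d, comparison cost in the sort) are visible directly.

```python
# Toy adjacency-list table: (id, parent, name) rows of an org chart.
rows = [
    {"id": 1, "parent": None, "name": "CEO"},
    {"id": 2, "parent": 1, "name": "CTO"},
    {"id": 3, "parent": 1, "name": "CFO"},
    {"id": 4, "parent": 2, "name": "Dev Lead"},
    {"id": 5, "parent": 4, "name": "Developer"},
]

def hierarchy_sort(rows):
    """Attach a carry-along path key to each row, then sort once.
    Building keys costs O(d * n); the sort costs O(n log n) comparisons,
    each up to O(d); space is O(d * n) for the keys."""
    by_id = {r["id"]: r for r in rows}
    def path(r):
        key = []
        while r is not None:
            key.append(r["id"])
            r = by_id.get(r["parent"])
        return tuple(reversed(key))   # root-to-node path, e.g. (1, 2, 4)
    return sorted(rows, key=path)

ordered = [r["name"] for r in hierarchy_sort(rows)]
```

Sorting by the tuple keys places every subtree contiguously under its parent, which is exactly the dynamically generated hierarchy.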
Text-based mining is the process of analyzing a document or set of documents to understand the content and meaning of the information they contain. Text mining enhances the human ability to process massive quantities of information, and it has high commercial value. Text mining, sometimes called text data mining, is roughly the process of deriving high-quality information from text. High-quality information is typically derived by devising patterns and trends through means such as statistical pattern learning. It usually involves structuring the input text, deriving patterns within the structured data, and finally evaluating and interpreting the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interestingness. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, and entity relation modeling (i.e., learning relations between named entities).
Title: Study of text based mining
Authors: Ranjna Garg, Heena
DOI: https://doi.org/10.1145/2007052.2007054
Published: 2011-07-21, International Conference on Advances in Computing and Artificial Intelligence
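Of the tasks listed, text categorization is the easiest to make concrete. The sketch below is a minimal illustration of the structure-then-match pipeline the abstract describes: structure each document as a term-frequency vector, then assign a label by cosine similarity to labeled example documents. The example texts and labels are invented for the demonstration.

```python
import math
from collections import Counter

def tf_vector(text):
    """Structure a document as a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def categorize(doc, labeled):
    """Nearest-prototype categorization: pick the label whose example
    document is most similar to the input."""
    v = tf_vector(doc)
    return max(labeled, key=lambda label: cosine(v, tf_vector(labeled[label])))

examples = {
    "sports": "the team won the match with a late goal",
    "finance": "the bank raised interest rates on loans",
}
label = categorize("rates on the loan from the bank went up", examples)
```

Real systems add stemming, stop-word removal, and tf-idf weighting on top of this skeleton, but the categorize-by-similarity structure is the same.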
The last decade has witnessed a tremendous increase in the popularity of mobile ad hoc networks (MANETs). The major computing challenges of distributed environments become more insidious when addressed in a mobile ad hoc setting. Distributed mutual exclusion (DMX) is one of the most heavily researched topics in distributed computing. Although a flurry of DMX protocols exists for static as well as cellular mobile distributed systems, DMX in MANETs is a comparatively less explored area of research. In 2004, Benchaiba et al. [7] presented a survey of the topic, and a considerable number of protocols have been published since. This article presents an overview of the major contributions on distributed mutual exclusion in MANETs published after that 2004 survey [7].
Title: DMX in MANETs: major research trends since 2004
Authors: Bharti Sharma, R. S. Bhatia, Awadhesh Kumar Singh
DOI: https://doi.org/10.1145/2007052.2007063
Published: 2011-07-21, International Conference on Advances in Computing and Artificial Intelligence
Many practical computer programs have been developed to exhibit useful types of learning; in this paper we propose a method that differs from them. For problems such as speech recognition, algorithms based on machine learning outperform all other approaches attempted to date. In the field known as data mining, machine learning algorithms are commonly used to discover valuable knowledge from large commercial databases containing equipment maintenance records, loan applications, financial transactions, medical records, and the like. It therefore seems inevitable that machine learning will play an integral role in computer science and computer technology. In this paper, we propose the modeling and design of a general learning system and present new machine learning procedures used to arrive at "knowledgeable" static evaluators for checkerboard positions. The static evaluators are compared with each other, and with a linear polynomial, using two numerical indices reflecting the extent to which they agree with the choices of checker experts over the course of tabulated book games. The new static evaluators are found to perform about equally well, despite the relative simplicity of the second, and they perform noticeably better than the linear polynomial.
Title: Modeling and designing of machine learning procedures as applied to game playing using artificial intelligence
Authors: A. Dhawan, Jaswinder Singh
DOI: https://doi.org/10.1145/2007052.2007080
Published: 2011-07-21, International Conference on Advances in Computing and Artificial Intelligence
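The evaluation methodology, a static evaluator scored by how often it agrees with expert play, can be sketched in miniature. Everything below is a hypothetical illustration: the features (men, kings, mobility), the weights, and the two positions are invented; the paper's actual evaluators and agreement indices are more elaborate.

```python
# A linear static evaluator over simple board features, and an agreement
# index: the fraction of test positions on which the evaluator scores the
# expert's chosen move at least as high as every alternative move.

def evaluate(board, weights):
    """board: dict of feature counts; returns a linear score."""
    return sum(weights[f] * board.get(f, 0) for f in weights)

def agreement_index(evaluator, positions):
    """positions: list of (board_after_expert_move, boards_after_other_moves)."""
    agree = 0
    for expert, others in positions:
        if all(evaluator(expert) >= evaluator(o) for o in others):
            agree += 1
    return agree / len(positions)

weights = {"men": 1.0, "kings": 3.0, "mobility": 0.2}
linear_evaluator = lambda b: evaluate(b, weights)

positions = [
    # Expert keeps a king; the alternative trades it for an extra man.
    ({"men": 5, "kings": 2, "mobility": 8},
     [{"men": 6, "kings": 1, "mobility": 8}]),
    # Expert preserves mobility at equal material.
    ({"men": 7, "kings": 0, "mobility": 10},
     [{"men": 7, "kings": 0, "mobility": 6}]),
]
score = agreement_index(linear_evaluator, positions)
```

Tuning the weights (or replacing the linear form entirely) to maximize this index against tabulated book games is the learning problem the paper addresses.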
In this paper, we address the problem of encrypting and decrypting regions of interest in a video sequence to secure video data. We propose an efficient solution based on 3D encryption/decryption. More specifically, a video file is broken into frames treated as 2D digital images, and each 2D image is then encrypted or decrypted by generalized 2D algorithms. Simulation results show that the mechanism can successfully obscure information in the regions of interest in the scene, providing different levels of security. Further, the key values are flexible, allowing different key types to be chosen for security purposes.
Title: Security in real time multimedia data based on generalized keys
Authors: Raj Kumar, Divya Gupta
DOI: https://doi.org/10.1145/2007052.2007071
Published: 2011-07-21, International Conference on Advances in Computing and Artificial Intelligence
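The frame-level idea, obscuring only a rectangular region of interest with a keyed transform that the same key reverses, can be demonstrated with a stand-in cipher. The XOR keystream below (SHA-256 in counter mode) is our own simplification, not the paper's generalized 2D algorithm; it only illustrates the ROI-scoped, key-reversible structure.

```python
import hashlib

def keystream(key, n):
    """Deterministic byte stream derived from the key (SHA-256 counter mode)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:n])

def crypt_roi(frame, roi, key):
    """XOR the ROI pixels with the keystream; a second identical call decrypts.
    frame: list of bytearray rows (greyscale); roi: (top, left, height, width)."""
    top, left, h, w = roi
    ks = keystream(key, h * w)
    for i in range(h):
        for j in range(w):
            frame[top + i][left + j] ^= ks[i * w + j]
    return frame

frame = [bytearray(range(16)) for _ in range(8)]     # toy 8x16 frame
original = [row[:] for row in frame]
crypt_roi(frame, (2, 4, 3, 5), b"secret-key")        # obscure the ROI
encrypted = [row[:] for row in frame]
crypt_roi(frame, (2, 4, 3, 5), b"secret-key")        # same key restores it
```

Pixels outside the ROI are untouched, which is what keeps the per-frame cost low enough for real-time video.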
Electrocardiogram (ECG) signals are affected by various kinds of noise and artifacts that may hide important information of interest. Independent component analysis (ICA) is a technique suited to separating independent components from ECG complexes. This paper compares various ICA algorithms with respect to their ability to remove noise from ECG. Databases of ECG samples covering different beat types were drawn from the MIT-BIH arrhythmia database for the experiments. We compare the signal-to-noise ratio (SNR) improvement on real ECG data across ICA algorithms, and we also compare the SNR for simulated ECG signals in Matlab, giving guidance on the choice of ICA algorithm for different databases.
Title: A comparative study of ICA algorithms for ECG signal processing
Authors: M. Sarfraz, Francis F. Li, Mohammad Javed
DOI: https://doi.org/10.1145/2007052.2007079
Published: 2011-07-21, International Conference on Advances in Computing and Artificial Intelligence
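The figure of merit used to rank the algorithms, SNR improvement, is worth making explicit: SNR = 10 log10(signal power / noise power) in dB, where the noise is the residual between an estimate and the clean reference, and the improvement is the dB gap between the denoised and the noisy estimate. The toy signal below is a sketch with deterministic pseudo-noise, not real ECG data.

```python
import math

def snr_db(reference, estimate):
    """SNR of `estimate` against the clean `reference`, in decibels."""
    p_signal = sum(x * x for x in reference)
    p_noise = sum((x - y) ** 2 for x, y in zip(reference, estimate))
    return 10 * math.log10(p_signal / p_noise)

# Toy "ECG" plus deterministic pseudo-noise at two amplitudes, standing in
# for the raw recording and an ICA-cleaned version of it.
clean = [math.sin(0.1 * t) for t in range(500)]
noise = [((t * 2654435761) % 1000) / 1000 - 0.5 for t in range(500)]
noisy = [x + 0.20 * n for x, n in zip(clean, noise)]
denoised = [x + 0.05 * n for x, n in zip(clean, noise)]

improvement = snr_db(clean, denoised) - snr_db(clean, noisy)
```

Cutting the residual amplitude by a factor of 4 cuts its power by 16, so the improvement here is 10 log10(16) ≈ 12 dB; the paper reports exactly this kind of delta for each ICA algorithm.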
Research on software quality is as old as software project management. As in other engineering and science disciplines, one approach to understanding and controlling quality is the use of models, and quality models have become a well-accepted means of describing and managing software quality. Statistical techniques such as Bayesian networks are used to assess and predict software quality through these quality models, but they are not very accurate, and the models lack clarity in operation. In this paper, we propose a software model that uses a fuzzy inference approach to assess and predict software quality. The model indicates the impact of implementation, quality assurance, and analysis on maintenance (an important factor in measuring quality), and the result is studied through its effect on an indicator such as the average effort required to maintain a project.
Title: A model for estimating efforts required to make changes in a software development project
Authors: K. Jeet, R. Dhir
DOI: https://doi.org/10.1145/2007052.2007088
Published: 2011-07-21, International Conference on Advances in Computing and Artificial Intelligence
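A fuzzy inference step of the kind the paper proposes can be sketched with two inputs and three rules. The membership shapes, the rule base, and the effort values (in person-days) below are all invented for illustration; the paper's actual rule base relating implementation, quality assurance, and analysis to maintenance effort is richer.

```python
# Membership functions on a [0, 1] quality scale.
def low(x):  return max(0.0, 1.0 - 2.0 * x)             # full at 0, gone by 0.5
def high(x): return max(0.0, 2.0 * x - 1.0)             # starts at 0.5, full at 1
def med(x):  return max(0.0, 1.0 - abs(2.0 * x - 1.0))  # peak at 0.5

EFFORT = {"low": 10.0, "medium": 40.0, "high": 90.0}    # person-days, illustrative

def maintenance_effort(impl_quality, qa_quality):
    """Mamdani-style inference with min for AND, max for OR, and
    singleton (weighted-average) defuzzification:
       R1: impl HIGH and qa HIGH -> effort LOW
       R2: impl MED  or  qa MED  -> effort MEDIUM
       R3: impl LOW  and qa LOW  -> effort HIGH"""
    fire = {
        "low": min(high(impl_quality), high(qa_quality)),
        "medium": max(med(impl_quality), med(qa_quality)),
        "high": min(low(impl_quality), low(qa_quality)),
    }
    total = sum(fire.values())
    return sum(fire[k] * EFFORT[k] for k in fire) / total

good = maintenance_effort(0.9, 0.9)   # well-built, well-tested project
bad = maintenance_effort(0.1, 0.1)    # poorly built, poorly tested project
```

Unlike a crisp rule table, the output varies smoothly as the input quality assessments change, which is the property that makes the fuzzy model easier to interpret than a black-box statistical predictor.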
Research in robot motion control offers opportunities to emulate human decision-making capabilities ever more closely in the years to come. The roles of autonomous robots are growing in many areas of engineering and everyday life. This paper describes an autonomous robot motion control system based on a fuzzy-logic Proportional-Integral-Derivative (PID) controller. Fuzzy rules are embedded in the controller to tune the PID gain parameters, making the controller useful in real-time applications. The paper discusses the design of a fuzzy PID controller for a mobile robot that decreases rise time, removes steady-state error quickly, and avoids overshoot. The performance of the robot design has been verified with rule-based evaluation in Matlab, and the results obtained are robust. Overall, the performance criteria of rise time, steady-state error, and overshoot are good.
Title: Adapting intelligence in robot using fuzzy logic
Authors: Vaishali Sood
DOI: https://doi.org/10.1145/2007052.2007084
Published: 2011-07-21, International Conference on Advances in Computing and Artificial Intelligence
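The fuzzy-tuned PID structure can be sketched with a single rule. Everything below is illustrative: a first-order plant stands in for the robot, the gains are arbitrary, and the one fuzzy rule ("the larger the error, the higher the proportional gain") is a minimal stand-in for the paper's rule base.

```python
def fuzzy_kp(base_kp, error):
    """One fuzzy rule: membership of |error| in 'large' (ramp from 0.2
    to 1.0) scales the proportional gain up by as much as 50%."""
    mu_large = min(1.0, max(0.0, (abs(error) - 0.2) / 0.8))
    return base_kp * (1.0 + 0.5 * mu_large)

def simulate(steps=200, dt=0.05, kp=2.0, ki=1.0, kd=0.05):
    """Discrete PID tracking a unit step on a first-order plant
    y' = (u - y) / tau; Kp is re-tuned by the fuzzy rule every step."""
    y, integ, tau, setpoint = 0.0, 0.0, 0.5, 1.0
    prev_err = setpoint - y            # avoid a derivative kick on step 1
    history = []
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = fuzzy_kp(kp, err) * err + ki * integ + kd * deriv
        y += dt * (u - y) / tau
        prev_err = err
        history.append(y)
    return history

out = simulate()
```

The fuzzy gain boost acts only while the error is large, which shortens rise time without enlarging the gain near the setpoint, the mechanism the paper relies on to avoid overshoot while keeping steady-state error small.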