A comparison of network level fault injection with code insertion
N. Looker, M. Munro, Jie Xu
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.16
This paper describes our research into the application of fault injection to Simple Object Access Protocol (SOAP) based service-oriented architectures (SOA). We show that our previously devised WS-FIT method, when combined with parameter perturbation, gives performance comparable to code insertion techniques, with the benefit that it is less invasive. Finally, we demonstrate that this technique can complement certification testing of a production system through strategic instrumentation of selected servers in a system.
Novelty detection for a neural network-based online adaptive system
Yan Liu, B. Cukic, Edgar Fuller, S. Gururajan, S. Yerramalla
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.113
The appeal of including biologically inspired soft computing systems such as neural networks in complex computational systems lies in their ability to cope with a changing environment. Unfortunately, continual changes induce uncertainty that limits the applicability of conventional verification and validation (V&V) techniques for assuring the reliable performance of such systems. At the system input layer, novel data may cause unstable learning behavior, which may contribute to system failures. Thus, changes at the input layer must be observed, diagnosed, accommodated and well understood prior to system deployment. Moreover, at the system output layer, the uncertainties and novelties in the neural network's predictions also need to be analyzed and detected during system operation. Our research tackles the novelty detection problem at both layers using two different methods. We use a statistical learning tool, support vector data description (SVDD), as a one-class classifier to examine the data entering the adaptive component and detect unforeseen patterns that may cause abrupt changes in system functionality. At the output layer, we define a reliability-like measure, the validity index, which reflects the degree of novelty associated with each output and thus can be used to perform system validity checks. Simulations demonstrate that both techniques effectively detect unusual events and provide validation inferences in near real time.
Exception handling in coordination-based mobile environments
A. Iliasov, A. Romanovsky
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.73
Mobile agent systems have many attractive features, including asynchrony, openness, dynamicity and anonymity, which make them indispensable in designing complex modern applications that involve moving devices, human participants and software. To be comprehensive, this list should include fault tolerance, yet as our analysis shows, this property is unfortunately often overlooked by middleware designers. The few existing solutions for fault-tolerant mobile agents are developed mainly for tolerating hardware faults, without providing any general support for application-specific recovery. In this paper we describe a novel exception handling model that allows application-specific recovery in coordination-based systems consisting of mobile agents. The proposed mechanism is general enough to be used in both loosely- and tightly-coupled communication models. The general ideas behind the mechanism are applied in the context of the Lime middleware.
Melody classification using EM algorithm
Yukiteru Yoshihara, T. Miura
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.98
In this investigation we propose a novel technique to classify melodies using the EM algorithm. We generate classifiers based on a naive Bayesian technique during the EM steps. We discuss experimental results over several melody features and show that our approach yields effective classifiers.
A novel method for protecting sensitive knowledge in association rules mining
En Tzu Wang, Guanling Lee, Yuh-Tzu Lin
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.27
Discovering frequent patterns from huge amounts of data is one of the most studied problems in data mining. However, sensitive patterns governed by security policies may pose a threat to privacy. We investigate how to strike an appropriate balance between the need for privacy and information discovery on frequent patterns. By multiplying the original database and a sanitization matrix together, a sanitized database that addresses privacy concerns is obtained. Additionally, a probability policy is proposed to guard against the recovery of sensitive patterns and to reduce the modifications made to the sanitized database. A set of experiments is also performed to show the benefit of our work.
Extreme programming for distributed legacy system reengineering
Bin Xu
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.77
Reverse engineering is an imperfect process when comprehending a legacy system with a large volume of source code and complicated business rules. It is important for the adopted software process to shorten the time to market and minimize the risks, especially in a distributed environment. In this paper, extreme programming (XP) was evaluated in a distributed legacy system reengineering project to handle imperfect system requirements and respond to rapidly changing business requests while the customer was offshore. Some important adjustments were made to the XP process according to the project environment. The large-scale reengineering tasks were divided into several subtasks through evolving reengineering. XP made these tasks comparatively independent, reduced the workload of analysis in reverse engineering, and improved the performance of analysis. Localized analysis made testing and tracing easier, so the complexity of the reengineering project was reduced. Evolving reengineering helped us to carry out reverse engineering and forward engineering in parallel and to shorten the project lifecycle. XP enabled us to deliver better-quality code in a shorter period of time at low cost.
On the potential of process simulation in software project schedule optimization
F. Padberg
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.115
In this paper, we highlight the application potential of process simulation techniques for software cybernetics research. Software engineering has seen many fruitful applications of simulation when modeling, understanding, and improving the software development process. In particular, process simulation has proven to be a valuable and efficient tool in our own software cybernetics research, having helped us to understand how scheduling policies actually behave in our discrete-time Markov decision process model for software projects. We outline how to advance the use of process simulation in our model to a much higher level: when computing optimal scheduling policies, simulation can be applied in the optimization step of the dynamic programming algorithms in order to save computation time. This approach resembles optimization techniques from the field of reinforcement learning, providing further evidence of the potential of simulation in software cybernetics.
A performance-based grid intrusion detection system
Fang-Yie Leu, Jia-Chun Lin, Ming-Chang Li, Chao-Tung Yang
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.28
Distributed denial-of-service (DDoS) and denial-of-service (DoS) attacks have been among the most serious network threats in recent years. In this paper, we propose a grid-based IDS, called the performance-based grid intrusion detection system (PGIDS), which exploits a grid's abundant computing resources to detect enormous volumes of intrusion packets and to overcome the drawbacks of traditional IDSs, which lose detection effectiveness and capability when processing massive network traffic. To balance the detection load and speed up the allocation of detection nodes (DNs), we use an exponential average to predict network traffic and then assign the collected actual traffic to the most suitable DN. In addition, a score subtraction algorithm (SSA) and a score addition algorithm (SAA) are deployed to update and reflect the current performance of a DN. PGIDS detects not only DoS/DDoS attacks but also logical attacks. Experimental results show that PGIDS is highly effective in detecting attacks.
Adaptive testing, oracle generation, and test case ranking for Web services
W. Tsai, Yinong Chen, R. Paul, H. Huang, Xinyu Zhou, Xiao Wei
Pub Date: 2005-07-26 | DOI: 10.1109/COMPSAC.2005.40
Web services and service-oriented architecture are emerging technologies that are changing the way we develop and use computer software. Owing to the standardization of Web services description languages and protocols, as well as open platforms, many different implementations of the same Web service specification can be offered by different service providers. This paper presents an adaptive group testing technique that can test a large number of Web services simultaneously and effectively. Based on a case study, experiments are performed to validate the correctness and effectiveness of the technique.
A family of extended fuzzy description logics
Jianjiang Lu, Baowen Xu, Yanhui Li, Dazhou Kang
Pub Date: 2005-07-26 | DOI: 10.1504/IJBIDM.2006.010781
Typical description logics are limited to dealing with crisp concepts and crisp roles. However, Web applications based on description logics should allow the treatment of inherent imprecision, so it is necessary to add fuzzy features to description logics. A family of extended fuzzy description logics is proposed to enable representation of and reasoning about complex fuzzy information. The extended fuzzy description logics introduce the cut sets of fuzzy concepts and fuzzy roles as atomic concepts and atomic roles, and inherit the concept and role constructors from description logics. Definitions of syntax, semantics, reasoning tasks, and reasoning properties are given for the extended fuzzy description logics, which adopt a special fuzzification method with more expressive power than previous fuzzy description logics.