An Automatic Performance Modeling Approach to Capacity Planning for Multi-service Web Applications
Xiang Huang, Wei Wang, Wen-bo Zhang, Jun Wei, Tao Huang
The capacity of an online service is mainly determined by the interaction between the workload and the services of the application. As the complexity of IT infrastructure increases, it is difficult to match the capacities of the various services without knowledge of their behavior. The challenge for existing work is keeping the performance model consistent with the services under live workload, because workload and application behavior vary greatly. New methods and modeling techniques that explain large-system behavior and help analyze future performance are therefore needed to handle emerging performance issues effectively. In this paper, we propose an automatic approach that builds and rebuilds the performance model from the services' historical status data. Based on these data, both user behavior and the corresponding internal service relations are modeled, and the CPU time consumed by each service is estimated with a Kalman filter. The results of our model explain the behavior of both the whole system and the individual services, and provide valuable information for capacity planning. Finally, our work is evaluated with the TPC-W benchmark, and the results demonstrate the effectiveness of our approach.
DOI: 10.1109/QSIC.2011.13
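The abstract does not give the filter's formulation, but a common way to estimate per-service CPU demand from aggregate measurements is to treat the demands as a hidden state observed through total CPU utilization. The following is a minimal sketch of that idea, assuming a random-walk state model and a scalar utilization observation; the class name and noise parameters are illustrative, not from the paper.

```java
import java.util.Arrays;

// Sketch: per-service CPU demands d (hidden state, random walk) observed
// through total utilization U = sum_i lambda_i * d_i + noise. Names and
// noise magnitudes are illustrative assumptions.
public class DemandEstimator {
    private final int n;
    private final double[] d;      // estimated CPU demand per request (s)
    private final double[][] P;    // state covariance
    private final double q = 1e-6; // process noise: how fast demands may drift
    private final double r = 1e-4; // measurement noise on utilization

    public DemandEstimator(int services, double initialDemand) {
        n = services;
        d = new double[n];
        Arrays.fill(d, initialDemand);
        P = new double[n][n];
        for (int i = 0; i < n; i++) P[i][i] = 1e-2;
    }

    /** One filter step: lambda = per-service arrival rates, u = measured utilization. */
    public void update(double[] lambda, double u) {
        for (int i = 0; i < n; i++) P[i][i] += q;   // predict: random walk inflates P

        double predicted = 0;                       // innovation y = u - lambda . d
        for (int i = 0; i < n; i++) predicted += lambda[i] * d[i];
        double y = u - predicted;

        double[] pl = new double[n];                // pl = P*lambda, s = lambda'*P*lambda + r
        double s = r;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) pl[i] += P[i][j] * lambda[j];
            s += lambda[i] * pl[i];
        }

        for (int i = 0; i < n; i++) {               // gain K = P*lambda / s; correct d and P
            double k = pl[i] / s;
            d[i] += k * y;
            for (int j = 0; j < n; j++) P[i][j] -= k * pl[j];
        }
    }

    public double[] demands() { return d.clone(); }
}
```

Feeding the filter successive (arrival rates, utilization) samples from monitoring lets the estimates drift with the live workload, which is what allows a performance model to be rebuilt rather than fixed at calibration time.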
Program Debugging Using Constraints -- Is it Feasible?
F. Wotawa, M. Nica
Automated debugging, i.e., automated fault localization in programs, is an important and challenging problem. The literature reports the use of AI techniques such as model-based diagnosis to solve the debugging problem at least partially. Most recently, casting the debugging problem as a constraint satisfaction problem has been suggested, including the integration of pre- and post-conditions. In this paper we follow this approach and report the most recent results obtained with a current constraint solver. Moreover, we show that there is a very good correspondence between the running time required for finding bugs and the structure of the program's constraint representation; we establish this relationship with a linear correlation coefficient of 0.9. The empirical results indicate that the constraint satisfaction approach is very promising for debugging methods and functions of up to 1,000 lines of code, with an expected debugging time of less than one and a half minutes.
DOI: 10.1109/QSIC.2011.39
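To make the constraint view of debugging concrete, the toy sketch below encodes a two-statement program, its failing test case, and a retract-one-statement diagnosis loop. The encoding and the brute-force search standing in for a constraint solver are our illustration of the general idea, not the authors' actual program representation.

```java
import java.util.List;
import java.util.function.Predicate;

// Toy constraint-based debugging: each statement becomes a constraint over
// the variable vector v = {w, h, perim, area}; the failing test's inputs and
// expected outputs are hard constraints. A statement is a bug candidate if
// retracting its constraint makes the system satisfiable.
public class ConstraintDebugSketch {
    public static void main(String[] args) {
        // Program under test:  perim = 2 * (w + h);  area = w + h;  // should be w * h
        Predicate<int[]> inputs = v -> v[0] == 3 && v[1] == 4;    // test inputs (not retractable)
        Predicate<int[]> oracle = v -> v[2] == 14 && v[3] == 12;  // expected outputs
        List<String> label = List.of("perim = 2 * (w + h)", "area = w + h");
        List<Predicate<int[]>> stmts = List.of(
            v -> v[2] == 2 * (v[0] + v[1]),
            v -> v[3] == v[0] + v[1]          // the actual bug
        );

        for (int skip = 0; skip < stmts.size(); skip++)
            if (satisfiable(stmts, skip, inputs, oracle))
                System.out.println("bug candidate: " + label.get(skip));
        // Prints only "bug candidate: area = w + h".
    }

    /** Brute-force satisfiability over a small domain, skipping one statement. */
    static boolean satisfiable(List<Predicate<int[]>> stmts, int skip,
                               Predicate<int[]> inputs, Predicate<int[]> oracle) {
        int[] v = new int[4];
        for (v[0] = 0; v[0] <= 20; v[0]++)
            for (v[1] = 0; v[1] <= 20; v[1]++)
                for (v[2] = 0; v[2] <= 20; v[2]++)
                    for (v[3] = 0; v[3] <= 20; v[3]++) {
                        if (!inputs.test(v) || !oracle.test(v)) continue;
                        boolean ok = true;
                        for (int i = 0; ok && i < stmts.size(); i++)
                            if (i != skip) ok = stmts.get(i).test(v);
                        if (ok) return true;
                    }
        return false;
    }
}
```

A real encoding works on an SSA-like form and hands the satisfiability question to an off-the-shelf solver, which is where the paper's running-time observations come from.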
Static Detection of Bugs Caused by Incorrect Exception Handling in Java Programs
Xiaoquan Wu, Zhongxing Xu, Jun Wei
Exception handling is a vital but often poorly tested part of a program. Static analysis can spot bugs on exceptional paths without actually making the exceptions happen. However, traditional methods focus only on null dereferences on exceptional paths and do not check the states of variables, which may be corrupted by exceptions. In this paper we propose a static analysis method that combines forward flow-sensitive analysis with backward path feasibility analysis to detect bugs caused by incorrect exception handling in Java programs. We found 8 bugs in three open-source server applications, 6 of which cannot be found by FindBugs. The experiments show that our method is effective at finding bugs related to poorly handled exceptions.
DOI: 10.1109/QSIC.2011.25
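An illustrative instance of the bug class this analysis targets (our example, not one of the eight reported bugs): state is mutated before an operation that can throw, and the catch block handles the exception without restoring that state.

```java
import java.io.IOException;
import java.io.OutputStream;

public class MessageBatch {
    private final byte[][] pending = new byte[64][];
    private int count = 0;

    public void add(byte[] msg) { pending[count++] = msg; }  // capacity check omitted

    /** BUGGY: 'count' is reset before the writes are known to succeed. */
    public void flush(OutputStream out) {
        int toSend = count;
        count = 0;                       // state changed on the normal path...
        try {
            for (int i = 0; i < toSend; i++)
                out.write(pending[i]);   // ...but this may throw IOException
        } catch (IOException e) {
            // Exception "handled", yet the batch is silently lost: the unsent
            // messages are unreachable because count is already 0.
            System.err.println("flush failed: " + e.getMessage());
        }
    }

    /** FIXED: mutate state only after the exception-prone work succeeds. */
    public void flushSafely(OutputStream out) {
        try {
            for (int i = 0; i < count; i++)
                out.write(pending[i]);
            count = 0;                   // commit only on full success
        } catch (IOException e) {
            System.err.println("flush failed, batch kept for retry: " + e.getMessage());
        }
    }
}
```

No null dereference occurs on the exceptional path here, which is why a checker limited to null dereferences would miss this kind of state corruption.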
Model-Driven Design of Performance Requirements
A. García-Domínguez, I. Medina-Bulo, M. Marcos-Bárcena
Obtaining the expected performance of a workflow is much simpler if the requirements for each of its tasks are well defined. However, most of the time not all tasks have well-defined requirements, and these must be derived by hand, which can be an error-prone and time-consuming process for complex workflows. In this work, we present an algorithm that derives a time limit for each task in a workflow from the available task and workflow expectations. The algorithm assigns the minimum time required by each task and distributes the slack according to weights set by the user, while checking that the task and workflow expectations are consistent with each other. The algorithm avoids having to evaluate every path in the workflow by building its results incrementally over each edge. We have implemented the algorithm in a model handling language and tested it against a naive exhaustive algorithm that evaluates all paths. Our incremental algorithm reports equivalent results in much less time than the exhaustive algorithm.
DOI: 10.1109/QSIC.2011.16
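For the special case of a purely sequential workflow, the core of such a derivation reduces to a minimum-time assignment, a weighted slack split, and a consistency check, as the sketch below shows; the names and the sequential restriction are ours, and the paper's incremental edge-by-edge propagation is what generalizes this to branching workflows.

```java
// Sequential-workflow sketch: each task i has a minimum required time min[i]
// and a user weight w[i]; the workflow has one global time limit.
public class SlackDistribution {
    /** Returns a per-task time limit, or throws if expectations conflict. */
    public static double[] deriveLimits(double[] min, double[] weight, double globalLimit) {
        double required = 0, totalWeight = 0;
        for (int i = 0; i < min.length; i++) {
            required += min[i];
            totalWeight += weight[i];
        }
        // Consistency check: the tasks' minimum times must fit the workflow limit.
        if (required > globalLimit)
            throw new IllegalArgumentException("inconsistent expectations: minimum total "
                + required + " exceeds workflow limit " + globalLimit);

        // Every task gets its minimum; the remaining slack is split by weight.
        double slack = globalLimit - required;
        double[] limit = new double[min.length];
        for (int i = 0; i < min.length; i++)
            limit[i] = min[i] + (totalWeight == 0 ? slack / min.length
                                                  : slack * weight[i] / totalWeight);
        return limit;
    }

    public static void main(String[] args) {
        // Three tasks under a 10s workflow limit: minimums sum to 6s, so 4s of
        // slack is split 1:2:1, giving limits 3.0s, 4.0s, 3.0s.
        double[] limits = deriveLimits(new double[]{2, 2, 2}, new double[]{1, 2, 1}, 10);
        for (double l : limits) System.out.printf("%.1f%n", l);
    }
}
```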
Context-Sensitive Interprocedural Defect Detection Based on a Unified Symbolic Procedure Summary Model
Yunshan Zhao, Yunzhan Gong, Li Liu, Qing Xiao, Zhaohong Yang
Precise interprocedural analysis is crucial for defect detection in the presence of procedure calls. The procedure summary is an effective and classical technique for handling this problem. However, there is no general recipe for constructing and instantiating procedure summaries with context-sensitivity. This paper addresses this challenge by introducing a unified symbolic procedure summary model (PSM), which consists of three aspects: (1) the post-condition briefly records the invocation's side effects on the calling context, (2) the feature captures inner attributes that may cause both dataflow and control-flow transformations, and (3) the pre-condition states potential dataflow safety properties that must not be violated at the call site; otherwise, a defect exists. We represent each aspect of the PSM in a three-valued logic. Moreover, by comparing the concrete call-site context (CSC) with the conditional constraints (CC), we achieve context-sensitivity while instantiating the summary. Furthermore, we propose a summary transfer function for capturing the nested call effects of a procedure, which transfers the procedure summary in a bottom-up manner. Algorithms are proposed to construct the summary model and instantiate it at concrete call sites with context-sensitivity. Experimental results on 10 open-source GCC benchmarks attest to the effectiveness of our technique in detecting null pointer dereference and out-of-bounds defects.
DOI: 10.1109/QSIC.2011.15
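As a rough illustration of how one kind of summary fact might be represented and instantiated, consider a callee that dereferences a parameter: its pre-condition must be checked against the three-valued nullness facts of the concrete call-site context. The types below are our simplification and omit the paper's features and summary transfer function.

```java
// Simplified summary instantiation with three-valued facts (our types, not
// the paper's model). Example summary fact for a callee: "parameter #0 is
// dereferenced", i.e. the pre-condition 'arg0 != null' must hold at call sites.
public class SummarySketch {
    enum Tri { YES, NO, MAYBE }

    /** Pre-condition: parameter 'param' must be non-null whenever 'guard' holds. */
    record NonNullPre(int param, Tri guard) {}

    /** Checks one call site's nullness facts against the callee's pre-conditions. */
    static void checkCallSite(String callee, NonNullPre[] pres, Tri[] argNonNull) {
        for (NonNullPre pre : pres) {
            if (pre.guard() == Tri.NO) continue;   // constraint not active in this context
            Tri fact = argNonNull[pre.param()];
            if (fact == Tri.NO)
                System.out.println(callee + ": null pointer dereference (arg " + pre.param() + ")");
            else if (fact == Tri.MAYBE || pre.guard() == Tri.MAYBE)
                System.out.println(callee + ": possible null dereference (arg " + pre.param() + ")");
        }
    }

    public static void main(String[] args) {
        // Callee summary: unconditionally dereferences its first parameter.
        NonNullPre[] summary = { new NonNullPre(0, Tri.YES) };
        // Call-site context: the argument may be null on some path.
        checkCallSite("strlenLike()", summary, new Tri[]{ Tri.MAYBE });
    }
}
```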
The Application of a New Process Quality Measurement Model for Software Process Improvement Initiatives
A. S. Güceglioglu, Onur Demirörs
Conventionally, process analysis is performed by following the traces of processes on the attributes of cost, time, and product quality. Although process quality is a significant aspect, it is frequently ignored in process analysis. In this paper, we present the results of applications of the Process Quality Measurement Model (PQMM) and demonstrate the added value that software organizations can acquire by analyzing process quality. The PQMM is a comprehensive and proactive quality measurement model based on the ISO/IEC 9126 Software Product Quality Standard [1]. The basis of the PQMM is the measurement of process definitions. As process definitions can be established before their execution, the PQMM can measure processes before they are put into practice. The model was applied to three different software organizations as part of a software process improvement initiative. The applications of the PQMM provided guidance for analyzing and explaining the deficiencies and problems identified in the processes. The PQMM was also applied to the improved versions of the processes and facilitated a quantitative evaluation of the improvements accomplished in them.
DOI: 10.1109/QSIC.2011.29
BAM: A Requirements Validation and Verification Framework for Business Process Models
Sven Feja, Sören Witt, A. Speck
Requirements engineering is an important part of software development processes. Business process models are widely used for the specification of software, so the quality of the software depends on the quality of the process models. Validation of these semi-formal models against informal requirements has to be done manually. In contrast, formal requirements can be used for automatic validation and verification of process models. However, there is a gap between textual formal specification languages and graphical process models. In this contribution we present the Business Application Modeler (BAM), a modeling and validation and verification (V&V) tool that reduces this gap by integrating formal, graphical, and reusable requirement specifications into the modeling workflow. Furthermore, BAM provides the definition of customizable views on the models (MultiView), which reduce modeling complexity and allow the assignment of responsibilities. We also show how BAM integrates into a common requirements engineering process.
DOI: 10.1109/QSIC.2011.33
A Comparative Evaluation of Cache Strategies for Elastic Caching Platforms
Xiulei Qin, Wen-bo Zhang, Wei Wang, Jun Wei, Hua Zhong, Tao Huang
With the rapid development of cloud computing, traditional transaction processing applications are evolving into Extreme Transaction Processing (XTP) applications, which are characterized by exceptionally demanding performance, scalability, availability, security, manageability, and dependability requirements. Elastic caching platforms (ECPs) have been introduced to help meet these requirements. Three popular cache strategies have been proposed for ECPs: the replicated strategy, the partitioned strategy, and the near strategy. According to our investigations, many ECPs support multiple cache strategies. In this paper, we evaluate the impact of the three cache strategies using the TPC-W benchmark. To the best of our knowledge, this is the first evaluation of distributed cache strategies for ECPs. The main contribution of this work is a set of guidelines that can help system administrators decide which cache strategy performs better under given conditions. Our work shows that the selection of the best cache strategy depends on the workload pattern, the cluster size, and the number of concurrent users. We also find that four metrics (number of "get" operations, message throughput, get/put ratio, and cache hit rate) help characterize the current conditions.
DOI: 10.1109/QSIC.2011.14
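The message-cost differences behind these guidelines are easy to see in code. The sketch below shows partitioned routing with a near cache layered on top; the classes and hashing are illustrative, not any specific ECP's API. The replicated strategy (not shown) reads locally but must push every put to all nodes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CacheStrategySketch {
    static class Node { final Map<String, String> store = new HashMap<>(); }

    private final List<Node> cluster;
    private final Map<String, String> near = new HashMap<>(); // local L1 cache

    CacheStrategySketch(List<Node> cluster) { this.cluster = cluster; }

    /** Partitioned routing: the key's hash picks exactly one owner node. */
    private Node ownerOf(String key) {
        return cluster.get(Math.floorMod(key.hashCode(), cluster.size()));
    }

    String get(String key) {
        String v = near.get(key);                // near hit: no remote message
        if (v == null) {
            v = ownerOf(key).store.get(key);     // one remote hop on a miss
            if (v != null) near.put(key, v);
        }
        return v;
    }

    void put(String key, String value) {
        ownerOf(key).store.put(key, value);      // single owner updated
        near.remove(key);                        // keep the local copy from going stale
    }
}
```

This also makes the get/put ratio metric tangible: every near-cache hit avoids a remote message, while every put costs a remote hop and invalidates local copies, so read-heavy workloads favor the near strategy and write-heavy ones do not.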
A Comparison of Test Generation Algorithms for Testing Application Interactions
Antti Nieminen, A. Jääskeläinen, H. Virtanen, Mika Katara
Testing the interactions of different applications running in the same operating system or platform poses challenges for manual testing and conventional script-based automation. Towards this end, we have developed an online model-based testing solution that allows efficient testing of such interactions. This paper presents the results of a comparison of algorithms used for generating tests for interaction testing. The comparison is based on our experiments with a number of different algorithms as well as on results from earlier studies by others. Given its simplicity of implementation, random walk seems a very useful and practical solution for online test generation. However, each of the compared algorithms has its strong points, making the selection dependent on the metric one wants to emphasize and on the available a priori information. Especially when the execution of test events is slow, smarter algorithms have an advantage over a simple random walk.
DOI: 10.1109/QSIC.2011.12
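As a reference point for the comparison, a minimal online random walk over a behavioral model looks like the following; the model shape and event names are a toy example of ours, not the paper's test models.

```java
import java.util.List;
import java.util.Map;
import java.util.Random;

// Online random-walk test generation: at each step one enabled transition is
// chosen uniformly at random, its event is executed against the system under
// test, and the model state advances. This is the baseline the smarter
// algorithms are compared against.
public class RandomWalkTester {
    record Transition(String event, String target) {}

    public static void main(String[] args) {
        // Toy model: two applications sharing the screen of one device.
        Map<String, List<Transition>> model = Map.of(
            "home",      List.of(new Transition("openPlayer", "player"),
                                 new Transition("openMessaging", "messaging")),
            "player",    List.of(new Transition("pressPlay", "player"),
                                 new Transition("goHome", "home")),
            "messaging", List.of(new Transition("typeText", "messaging"),
                                 new Transition("goHome", "home")));

        Random rng = new Random(42);
        String state = "home";
        for (int step = 0; step < 10; step++) {
            List<Transition> enabled = model.get(state);
            Transition t = enabled.get(rng.nextInt(enabled.size()));
            execute(t.event());            // drive the SUT and verify its response
            state = t.target();
        }
    }

    static void execute(String event) {
        System.out.println("executing test event: " + event);
        // A real harness would send the event to the device and check the outcome.
    }
}
```

When executing a test event is slow, each step is expensive, which is why algorithms that spend computation choosing the next event can beat this uniform choice.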
Towards Balancing Determinism, Memory Consumption and Throughput for RTSJ-Based Real-Time Applications
Xiaowei Zhang, Donggang Cao, Hong Mei, Fuqing Yang
Determinism, memory consumption, and throughput are three important performance indicators for RTSJ-based real-time applications, but they often interact and conflict with each other. Manually balancing these performance indicators is often time-consuming, so there is a need to clarify the relationships among the performance metrics and make the trade-offs automatically. In this paper, we abstract the real-time thread properties that relate to these performance indicators and analyze their relationships. Based on this analysis, we propose an automatic configuration framework that balances the performance metrics. In this framework, a stochastic process is developed to represent the determinism of real-time threads, and the application's throughput and memory consumption are quantified with these thread properties as parameters. Experimental results based on the Sweet Factory application show that our approach can optimize memory consumption and throughput while effectively guaranteeing determinism.
DOI: 10.1109/QSIC.2011.28
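The shape of the trade-off can be illustrated with a classic utilization-based schedulability test standing in for the determinism constraint; note this substitutes the rate-monotonic bound for the paper's stochastic model, and all numbers are made up.

```java
// Back-of-the-envelope sketch of the determinism/throughput/memory trade-off.
// Shortening a thread's period raises its throughput (releases/s) and its
// memory demand, but only while the task set remains schedulable.
public class RtsjTradeoffSketch {
    /** Liu-Layland bound: n tasks are RM-schedulable if U <= n(2^(1/n) - 1). */
    static boolean schedulable(double[] costMs, double[] periodMs) {
        double u = 0;
        for (int i = 0; i < costMs.length; i++) u += costMs[i] / periodMs[i];
        int n = costMs.length;
        return u <= n * (Math.pow(2, 1.0 / n) - 1);
    }

    public static void main(String[] args) {
        double[] cost = { 3, 5 };           // per-release CPU time (ms), illustrative
        double bytesPerRelease = 4096;      // memory used per release, illustrative

        // Sweep the first thread's period and report the resulting trade-off.
        for (double p = 4; p <= 12; p += 2) {
            double[] period = { p, 40 };
            double throughput = 1000.0 / p;                   // releases per second
            double memRate = throughput * bytesPerRelease;    // bytes per second
            System.out.printf("period=%4.1fms  throughput=%6.1f/s  mem=%8.0f B/s  %s%n",
                p, throughput, memRate,
                schedulable(cost, period) ? "deterministic (schedulable)"
                                          : "NOT schedulable");
        }
    }
}
```

With these numbers, the 4 ms period maximizes throughput but violates the schedulability bound, which is exactly the kind of configuration an automatic framework must rule out before optimizing the remaining metrics.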