An evaluation of connectivity in mobile wireless ad hoc networks
Pub Date: 2002-06-23 | DOI: 10.1109/DSN.2002.1028890 | Proceedings. International Conference on Dependable Systems and Networks, pp. 89-98
P. Santi, D. Blough
We consider the following problem for wireless ad hoc networks: assume n nodes, each capable of communicating with nodes within a radius of r, are distributed in a d-dimensional region of side l; how large must the transmitting range r be to ensure that the resulting network is connected? We also consider the mobile version of the problem, in which nodes are allowed to move during a time interval and the value of r ensuring connectedness for a given fraction of the interval must be determined. For the stationary case, we give tight bounds on the relative magnitude of r, n, and l yielding a connected graph with high probability in 1-dimensional networks, thus solving an open problem. The mobile version of the problem when d=2 is investigated through extensive simulations, which give insight into how mobility affects connectivity and reveal a useful trade-off between communication capability and energy consumption.
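
As a concrete illustration of the stationary setting, the sketch below estimates the probability of connectivity by Monte Carlo simulation under the same model (n nodes placed uniformly at random in [0, l]^d, an edge whenever two nodes are within distance r). It is not the authors' analytical bound, just a way to observe the high-probability behaviour empirically; the function names, radii, and trial count are illustrative.

```python
import math
import random

def is_connected(points, r):
    """Check connectivity of the graph that links any two points within distance r."""
    n = len(points)
    visited = [False] * n
    stack = [0]
    visited[0] = True
    seen = 1
    while stack:
        i = stack.pop()
        for j in range(n):
            if not visited[j] and math.dist(points[i], points[j]) <= r:
                visited[j] = True
                seen += 1
                stack.append(j)
    return seen == n

def connectivity_probability(n, r, l, d=2, trials=500):
    """Estimate P[network connected] for n nodes placed uniformly in [0, l]^d."""
    connected = 0
    for _ in range(trials):
        points = [tuple(random.uniform(0, l) for _ in range(d)) for _ in range(n)]
        if is_connected(points, r):
            connected += 1
    return connected / trials

if __name__ == "__main__":
    # Sweep the transmitting range for n = 50 nodes in the unit square (l = 1, d = 2).
    for r in (0.15, 0.20, 0.25, 0.30):
        print(f"r = {r:.2f}  P[connected] = {connectivity_probability(50, r, 1.0):.2f}")
```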

An adaptive framework for tunable consistency and timeliness using replication
Pub Date: 2002-06-23 | DOI: 10.1109/DSN.2002.1028882 | Proceedings. International Conference on Dependable Systems and Networks, pp. 17-26
S. Krishnamurthy, W. Sanders, M. Cukier
One well-known challenge in using replication to service multiple clients concurrently is that of delivering a timely and consistent response to the clients. In this paper, we address this problem in the context of client applications that have specific temporal and consistency requirements. These applications can tolerate a certain degree of relaxed consistency in exchange for better response time. We propose a flexible QoS model that allows these clients to specify their temporal and consistency constraints. In order to select replicas to serve these clients, we need to control the inconsistency of the replicas so that we have a large enough pool of replicas with the appropriate state to meet a client's timeliness, consistency, and dependability requirements. We describe an adaptive framework that uses lazy update propagation to control the replica inconsistency and employs a probabilistic approach to select replicas dynamically to service a client, based on its QoS specification. The probabilistic approach predicts the ability of a replica to meet a client's QoS specification by using the performance history collected by monitoring the replicas at runtime. We conclude with experimental results based on our implementation.
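
The abstract does not give the selection algorithm itself, but the flavour of a history-based probabilistic selection can be sketched as follows. This is a hypothetical, simplified illustration (class and parameter names are invented, not the paper's API): a replica is eligible if its observed response-time history predicts meeting the client's deadline with at least the requested probability and its staleness under lazy update propagation is within the client's consistency bound.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Replica:
    name: str
    response_history: List[float] = field(default_factory=list)  # observed response times (s)

    def prob_meets_deadline(self, deadline: float) -> float:
        """Empirical probability that this replica responds within the deadline."""
        if not self.response_history:
            return 0.0
        return sum(t <= deadline for t in self.response_history) / len(self.response_history)

def select_replicas(replicas: List[Replica], deadline: float, min_probability: float,
                    staleness: Dict[str, int], max_staleness: int) -> List[Replica]:
    """Keep replicas whose history predicts the timeliness target and whose state,
    under lazy update propagation, is fresh enough; best candidates first."""
    eligible = [r for r in replicas
                if r.prob_meets_deadline(deadline) >= min_probability
                and staleness[r.name] <= max_staleness]
    return sorted(eligible, key=lambda r: r.prob_meets_deadline(deadline), reverse=True)

# Client QoS: reply within 0.2 s with probability >= 0.9, at most 3 pending updates.
r1 = Replica("r1", [0.10, 0.12, 0.18, 0.15])
r2 = Replica("r2", [0.25, 0.30, 0.19, 0.40])
print([r.name for r in select_replicas([r1, r2], 0.2, 0.9, {"r1": 1, "r2": 5}, 3)])
```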

Pinpoint: problem determination in large, dynamic Internet services
Pub Date: 2002-06-23 | DOI: 10.1109/DSN.2002.1029005 | Proceedings. International Conference on Dependable Systems and Networks, pp. 595-604
Mike Y. Chen, Emre Kıcıman, Eugene Fratkin, A. Fox, E. Brewer
Traditional problem determination techniques rely on static dependency models that are difficult to generate accurately in today's large, distributed, and dynamic application environments such as e-commerce systems. We present a dynamic analysis methodology that automates problem determination in these environments by 1) coarse-grained tagging of numerous real client requests as they travel through the system and 2) using data mining techniques to correlate the believed failures and successes of these requests to determine which components are most likely to be at fault. To validate our methodology, we have implemented Pinpoint, a framework for root-cause analysis on the J2EE platform that requires no knowledge of the application components. Pinpoint consists of three parts: a communications layer that traces client requests, a failure detector that uses traffic-sniffing and middleware instrumentation, and a data analysis engine. We evaluate Pinpoint by injecting faults into various application components and show that Pinpoint identifies the faulty components with high accuracy and produces few false positives.
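
A toy version of the correlation step can be sketched as follows: given, for each traced request, the set of components it touched and whether it was believed to have failed, rank components by how strongly their presence co-occurs with failures. The Jaccard-style score used here stands in for the paper's data-mining analysis, and the data layout is hypothetical.

```python
from collections import defaultdict

def rank_suspects(traces):
    """traces: list of (components_touched, failed) pairs, one per client request.
    Returns components ranked by a simple failure-correlation score."""
    touched = defaultdict(int)         # requests that touched the component
    touched_failed = defaultdict(int)  # ... and were believed to have failed
    total_failed = sum(1 for _, failed in traces if failed)
    for components, failed in traces:
        for c in set(components):
            touched[c] += 1
            if failed:
                touched_failed[c] += 1
    scores = {}
    for c in touched:
        # Similarity between "requests touching c" and "failed requests" (Jaccard-style).
        union = touched[c] + total_failed - touched_failed[c]
        scores[c] = touched_failed[c] / union if union else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Component B is on the path of every failed request and tops the ranking.
traces = [({"A", "B"}, True), ({"B", "C"}, True), ({"A", "C"}, False), ({"A"}, False)]
print(rank_suspects(traces))
```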

Experimental analysis of the errors induced into Linux by three fault injection techniques
Pub Date: 2002-06-23 | DOI: 10.1109/DSN.2002.1028917 | Proceedings. International Conference on Dependable Systems and Networks, pp. 331-336
T. Jarboui, J. Arlat, Y. Crouzet, K. Kanoun
The main goal of the experimental study reported in this paper is to investigate to what extent distinct fault injection techniques lead to similar consequences (errors and failures). The target system we are using to carry out our investigation is the Linux kernel, as it provides a representative operating system and features full controllability and observability thanks to its open-source status. Three types of software-implemented fault injection techniques are considered, namely: i) provision of invalid values to the parameters of the kernel calls, ii) corruption of the parameters of the kernel calls, and iii) corruption of the input parameters of the internal functions of the kernel. The workload used for the experiments is tailored to activate each functional component selectively. The observations encompass typical kernel failure modes (e.g., exceptions and kernel hangs) as well as a detailed analysis of the reported error codes.
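
The paper's injector works at the kernel-call boundary; the fragment below only illustrates the flavour of technique ii), corrupting a call parameter with a single bit flip, from user space via the C library. It is a hypothetical illustration, not the authors' tool, and the target path and bit width are arbitrary.

```python
import ctypes
import ctypes.util
import os
import random

# Load the C library (Linux); errno is tracked so failed calls can be inspected.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def flip_random_bit(value: int, width: int = 32) -> int:
    """Single-bit corruption of an integer parameter."""
    return value ^ (1 << random.randrange(width))

def corrupted_open(path: str, flags: int) -> None:
    """Call open(2) with a bit-flipped flags argument and report the outcome."""
    bad_flags = flip_random_bit(flags)
    fd = libc.open(path.encode(), bad_flags, 0o644)
    if fd < 0:
        print("open failed, errno =", ctypes.get_errno())
    else:
        print("open succeeded despite the corrupted flags")
        libc.close(fd)

if __name__ == "__main__":
    corrupted_open("/tmp/fi_target", os.O_CREAT | os.O_WRONLY)
```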

Recovery and performance balance of a COTS DBMS in the presence of operator faults
Pub Date: 2002-06-23 | DOI: 10.1109/DSN.2002.1029007 | Proceedings. International Conference on Dependable Systems and Networks, pp. 615-624
M. Vieira, H. Madeira
A major cause of failures in large database management systems (DBMS) is operator faults. Although most complex DBMS have comprehensive recovery mechanisms, the effectiveness of these mechanisms is difficult to characterize. On the other hand, the tuning of a large database is very complex, and database administrators tend to concentrate on performance tuning and disregard the recovery mechanisms. Above all, database administrators seldom have feedback on how good a given configuration is with respect to recovery. This paper proposes an experimental approach to characterize both the performance and the recoverability of a DBMS. Our approach is presented through a concrete example of benchmarking the performance and recovery of an Oracle DBMS running the standard TPC-C benchmark, extended to include two new elements: a fault load based on operator faults and measures related to recoverability. A classification of operator faults in DBMS is proposed. The paper ends with a discussion of the results and a proposal of guidelines to help database administrators find the balance between performance and recovery tuning.
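
In the spirit of the two new elements (an operator fault load and recoverability measures), a minimal measurement harness might be structured as in the sketch below. Everything here is an assumption for illustration: the transaction, fault-injection, and probe callables are placeholders that, in a real setup, would wrap TPC-C transactions and concrete operator actions (e.g., accidentally dropping an index) against the DBMS.

```python
import threading
import time

def run_workload(duration_s, txn):
    """Drive transactions for duration_s seconds and return throughput (txn/s)."""
    done = 0
    deadline = time.time() + duration_s
    while time.time() < deadline:
        try:
            txn()
            done += 1
        except Exception:
            pass  # failed transactions do not count towards throughput
    return done / duration_s

def measure_recovery(inject_fault, probe, poll_s=0.5):
    """Inject one operator fault, then measure the time until the probe succeeds again."""
    inject_fault()
    start = time.time()
    while True:
        try:
            probe()
            return time.time() - start
        except Exception:
            time.sleep(poll_s)

if __name__ == "__main__":
    # Simulated DBMS: a flag stands in for real TPC-C transactions and operator actions.
    state = {"healthy": True}
    def txn():
        time.sleep(0.001)
        if not state["healthy"]:
            raise RuntimeError("database unavailable")
    def inject_fault():
        state["healthy"] = False
        threading.Timer(2.0, lambda: state.update(healthy=True)).start()  # simulated repair
    print("baseline throughput (txn/s):", round(run_workload(1.0, txn)))
    print("recovery time (s):", round(measure_recovery(inject_fault, txn), 1))
```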

Dependability and the Grid: issues and challenges
Pub Date: 2002-06-23 | DOI: 10.1109/DSN.2002.1028907 | Proceedings. International Conference on Dependable Systems and Networks, p. 263
R. Schlichting, A. Chien, C. Kesselman, K. Marzullo, J. Plank, S. Shrivastava
For over a decade, researchers involved with scientific computing have been investigating technologies that allow advanced scientific applications to exploit resources associated with machines connected by wide-area networks across large geographical distances. Originally referred to as metacomputing or heterogeneous computing, Grid computing is currently the most common term used to describe this type of distributed computing model. Generally speaking, Grid computing emphasizes large-scale sharing of resources, not only computational cycles but also software and data, across administrative domains in a flexible, secure, and coordinated fashion. A number of software platforms have been developed that address all or subsets of the challenges associated with Grid computing, including Condor, the Entropia platform, the Globus toolkit, Legion, LSF, Ninf, and Sun's Grid Engine. While the Grid was originally designed to support scientific applications, there has been significant interest recently in extending the model to support the needs of enterprise computing, including those based on Web services. For example, both IBM and Sun have made the Grid part of their enterprise computing strategies, while the recent Global Grid Forum GGF-4 (http://www.gridforum.org/) included a number of topics related to generalizing the Grid in this way. Part of this effort includes defining an Open Grid Services Architecture (OGSA) that can be used to integrate services within and across enterprises. As might be expected given the differences between scientific and enterprise applications, there are any number of technical issues that must be addressed to accomplish this goal. This panel will focus on one particular challenge associated with Grid computing: ensuring dependable operation of Grid computations. Dependability in this context encompasses a broad collection of possible attributes, including availability, reliability, security, and timely execution. Among the possible topics for discussion are the different dependability requirements of current versus envisioned application scenarios, technical barriers to achieving dependability in both contexts, and architectural issues related to providing appropriate support in software platforms such as OGSA. The overall goal is to bring together the perspectives of individuals working in different communities to identify issues and challenges that remain to be solved to make dependable Grid computing a reality.

Reliability and availability analysis for the JPL Remote Exploration and Experimentation System
Pub Date: 2002-06-23 | DOI: 10.1109/DSN.2002.1028918 | Proceedings. International Conference on Dependable Systems and Networks, pp. 337-342
Dongyan Chen, S. Dharmaraja, Dongyan Chen, Lei Li, Kishor S. Trivedi, R. Some, A. Nikora
The NASA Remote Exploration and Experimentation (REE) Project, managed by the Jet Propulsion Laboratory, has the vision of bringing commercial supercomputing technology into space, in a form which meets the demanding environmental requirements, to enable a new class of science investigation and discovery. Dependability goals of the REE system are 99% reliability over 5 years and 99% availability. In this paper we focus on the reliability/availability modeling and analysis of the REE system. We carry out this task using fault trees, reliability block diagrams, stochastic reward nets and hierarchical models. Our analysis helps to determine the ranges of parameters for which the REE dependability goal will be met. The analysis also allows us to assess different hardware and software fault-tolerance techniques.
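
The abstract's models (fault trees, reliability block diagrams, stochastic reward nets) are not reproduced here, but a minimal reliability-block-diagram calculation of the kind such hierarchical analyses compose can be sketched as follows. The component failure rates and the structure (two redundant nodes in series with a shared bus) are hypothetical, and repair/availability modelling is deliberately left out.

```python
import math

def rel_exponential(failure_rate_per_hour, hours):
    """Reliability of one component with a constant failure rate over a mission."""
    return math.exp(-failure_rate_per_hour * hours)

def rel_series(*blocks):
    """All blocks must survive (series structure)."""
    out = 1.0
    for r in blocks:
        out *= r
    return out

def rel_parallel(*blocks):
    """At least one block must survive (redundant/parallel structure)."""
    out = 1.0
    for r in blocks:
        out *= 1.0 - r
    return 1.0 - out

if __name__ == "__main__":
    mission_hours = 5 * 365 * 24                              # 5-year mission
    node = rel_exponential(1e-6, mission_hours)               # hypothetical processing node
    bus = rel_exponential(1e-7, mission_hours)                # hypothetical shared bus
    system = rel_series(rel_parallel(node, node), bus)        # two redundant nodes + bus
    print(f"node={node:.4f}  system={system:.4f}  meets 0.99 goal: {system >= 0.99}")
```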

An adaptive decomposition approach for the analysis of stochastic Petri nets
Pub Date: 2002-06-23 | DOI: 10.1109/DSN.2002.1029010 | Proceedings. International Conference on Dependable Systems and Networks, pp. 647-656
P. Buchholz
We present a new approximate solution technique for the numerical analysis of superposed generalized stochastic Petri nets (SGSPNs) and related models. The approach combines numerical iterative solution techniques and fixed-point computations using the complete knowledge of the state space and generator matrix. In contrast to other approximation methods, the proposed method is adaptive: it considers states with a high probability in detail and aggregates states with small probabilities. Probabilities are approximated by the results derived during the iterative solution. Thus, a maximum number of states can be predefined, and the presented method automatically aggregates states such that the solution is computed using a vector whose size is smaller than or equal to the maximum. By means of a non-trivial example, it is shown that the approach computes good approximations with low effort for many models.
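
The sketch below is not the paper's algorithm (which exploits the structure of SGSPNs and fixed-point computations); it only illustrates the aggregation idea in miniature: run an iterative solver, lump states whose current probability estimate falls below a threshold into a single macro state weighted by the current conditional probabilities, and continue iterating on the smaller chain. A discrete-time chain and the threshold value are assumptions made for the example.

```python
import numpy as np

def aggregate(P, pi, threshold):
    """Lump states whose current probability estimate is below `threshold` into one
    macro state, using the current conditional probabilities as weights."""
    keep = np.where(pi >= threshold)[0]
    agg = np.where(pi < threshold)[0]
    if agg.size == 0:
        return P, pi, keep
    w = pi[agg] / pi[agg].sum()                          # weights inside the macro state
    m = keep.size
    P2 = np.zeros((m + 1, m + 1))
    P2[:m, :m] = P[np.ix_(keep, keep)]
    P2[:m, m] = P[np.ix_(keep, agg)].sum(axis=1)         # kept state -> macro state
    P2[m, :m] = w @ P[np.ix_(agg, keep)]                 # macro state -> kept states
    P2[m, m] = w @ P[np.ix_(agg, agg)].sum(axis=1)       # stay inside the macro state
    return P2, np.append(pi[keep], pi[agg].sum()), keep

def approx_stationary(P, threshold=1e-3, warmup=50, iters=500):
    """Power iteration with one aggregation pass after a warm-up phase.
    Returns the kept state indices and the approximate probabilities
    (the last entry is the aggregate mass whenever states were lumped)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(warmup):
        pi = pi @ P
    P, pi, keep = aggregate(P, pi, threshold)
    for _ in range(iters):
        pi = pi @ P
    return keep, pi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.random((8, 8))
    P /= P.sum(axis=1, keepdims=True)                    # random ergodic DTMC
    keep, pi = approx_stationary(P, threshold=0.12)
    print("kept states:", keep, " approx. probabilities:", np.round(pi, 3))
```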

Modeling and quantification of security attributes of software systems
Pub Date: 2002-06-23 | DOI: 10.1109/DSN.2002.1028941 | Proceedings. International Conference on Dependable Systems and Networks, pp. 505-514
B. Madan, K. Goseva-Popstojanova, K. Vaidyanathan, K. Trivedi
Quite often, failures in network-based services and server systems may not be accidental, but rather caused by deliberate security intrusions. We would like such systems either to completely preclude the possibility of a security intrusion or to be robust enough to continue functioning despite security attacks. Not only is it important to prevent or tolerate security intrusions, it is equally important to treat security as a QoS attribute on par with, if not more important than, other QoS attributes such as availability and performability. This paper deals with various issues related to quantifying the security attribute of an intrusion-tolerant system, such as the SITAR system. A security intrusion and the response of an intrusion-tolerant system to the attack are modeled as a random process. This facilitates the use of stochastic modeling techniques to capture the attacker behavior as well as the system's response to a security intrusion. This model is used to analyze and quantify the security attributes of the system. The security quantification analysis is first carried out for steady-state behavior, leading to measures like steady-state availability. By transforming this model to a model with absorbing states, we compute a security measure called the "mean time (or effort) to security failure" and also compute probabilities of security failure due to violations of different security attributes.
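
The last step described above has a standard numerical form: for a continuous-time Markov chain with an absorbing security-failure state, the mean time to security failure (MTTSF) from each transient state solves a linear system in the transient part of the generator, Q_TT m = -1. The sketch below uses a hypothetical four-state intrusion model (good, vulnerable, attacked, failed) with made-up rates; it is not the SITAR model.

```python
import numpy as np

# Hypothetical 4-state intrusion model: G (good), V (vulnerable), A (under active
# attack), F (security failure, absorbing). Rates are per hour and purely illustrative.
states = ["G", "V", "A", "F"]
Q = np.array([
    [-0.10,  0.10,  0.00, 0.00],   # G: a vulnerability is discovered
    [ 0.05, -0.25,  0.20, 0.00],   # V: patched back to G, or exploited (to A)
    [ 0.00,  0.10, -0.40, 0.30],   # A: contained back to V, or security failure
    [ 0.00,  0.00,  0.00, 0.00],   # F: absorbing
])

transient = [0, 1, 2]
Q_TT = Q[np.ix_(transient, transient)]

# Mean time to absorption m solves Q_TT m = -1 (a vector of ones on the right).
m = np.linalg.solve(Q_TT, -np.ones(len(transient)))
for s, t in zip(transient, m):
    print(f"MTTSF starting from {states[s]}: {t:.1f} hours")
```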

Evaluating the impact of different document types on the performance of web cache replacement schemes
Pub Date: 2002-06-23 | DOI: 10.1109/DSN.2002.1029017 | Proceedings. International Conference on Dependable Systems and Networks, pp. 717-726
C. Lindemann, O. P. Waldhorst
In this paper, we present a comprehensive performance study of the traditional replacement schemes least recently used (LRU) and least frequently used with dynamic aging (LFU-DA), as well as of the newly proposed schemes Greedy Dual Size and Greedy Dual. The goal of our study is to understand how these replacement schemes deal with different web document types. Using trace-driven simulation, we present curves plotting the hit rate and byte hit rate broken down for image, HTML, multimedia, and application documents. For the first workload, the presented results show that under the packet cost model Greedy Dual outperforms the other schemes in terms of both hit rate and byte hit rate for image, HTML, and multimedia documents. However, the advantages of Greedy Dual diminish when the workload contains more distinct multimedia documents and a larger number of requests to multimedia documents.
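
For reference, one of the studied schemes, Greedy Dual Size, can be sketched in a few lines together with the hit-rate and byte-hit-rate bookkeeping used in such trace-driven studies. The trace and cache size are synthetic, and the default cost of 1 corresponds to a uniform-cost variant; a packet-cost variant would substitute a cost roughly proportional to the number of packets (the form shown in the comment is an assumption).

```python
def simulate_gds(trace, capacity, cost=lambda size: 1.0):
    """Greedy Dual Size cache simulation over a trace of (url, size) requests.
    Returns (hit_rate, byte_hit_rate)."""
    cache = {}                      # url -> (priority H, size)
    used = 0
    L = 0.0                         # inflation value
    hits = byte_hits = reqs = bytes_req = 0
    for url, size in trace:
        reqs += 1
        bytes_req += size
        if url in cache:
            hits += 1
            byte_hits += size
            cache[url] = (L + cost(size) / size, size)   # refresh priority on a hit
            continue
        if size > capacity:
            continue                                     # object can never fit
        while used + size > capacity:                    # evict lowest-priority objects
            victim = min(cache, key=lambda u: cache[u][0])
            L = cache[victim][0]
            used -= cache[victim][1]
            del cache[victim]
        cache[url] = (L + cost(size) / size, size)
        used += size
    return hits / reqs, byte_hits / bytes_req

# Synthetic trace; a packet-cost variant (assumed form) would pass
# cost=lambda size: 2 + size / 536 instead of the uniform default.
trace = [("a", 400), ("b", 800), ("a", 400), ("c", 900), ("a", 400), ("b", 800)]
print(simulate_gds(trace, capacity=1500))
```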