Deriving Fault Locating Test Cases from Constrained Covering Arrays
Hao Jin, Tatsuhiro Tsuchiya. PRDC 2018, DOI: 10.1109/PRDC.2018.00044

Combinatorial Interaction Testing (CIT) is a widely practiced strategy for testing software systems. Ordinary CIT detects faults caused by interactions of parameters but cannot locate the faulty interactions. This paper addresses the problem of adding fault localization capability to CIT by means of fault-locating test suites, named constrained locating arrays. An algorithm that derives a constrained locating array from a test suite for ordinary CIT is proposed. Experimental results show that the new algorithm can construct constrained locating arrays for fairly large problem instances in reasonable time.
SSCMSD - Single-Symbol Correction Multi-symbol Detection for DRAM Subsystem
Ravikiran Yeleswarapu, Arun Kumar Somani. PRDC 2018, DOI: 10.1109/PRDC.2018.00012

As DRAM technology continues to evolve towards smaller feature sizes and increased densities, faults in the DRAM subsystem are becoming more severe. Current servers mostly use CHIPKILL-based schemes to tolerate up to one or two symbol errors per DRAM beat; multi-symbol errors arising from faults in multiple data buses and chips may go undetected by these schemes. In this paper, we introduce Single Symbol Correction Multiple Symbol Detection (SSCMSD), a novel error-handling scheme that corrects single-symbol errors and detects multi-symbol ones. The scheme combines a hash with ECC to avoid silent data corruptions (SDCs). We employ a 32-bit SpookyHash along with a Reed-Solomon code to implement SSCMSD for a x4-based DDRx system. Our simulations show that the proposed scheme effectively prevents SDCs in the presence of multiple symbol errors. The design requires 19 chips per rank (a storage overhead of 18.75 percent), 76 data bus lines, and additional hash logic at the memory controller.
Cyber-Physical Security of an Electric Microgrid
Prashanth Palaniswamy, B. McMillin. PRDC 2018, DOI: 10.1109/PRDC.2018.00018

Cyber-Physical Systems (CPS) are physical systems that are controlled or monitored by computer-based systems, combining computation, networking, and physical processes. Because CPS are composed of diverse components, they are vulnerable to several security threats. Moreover, they span many different security domains, which are not just high/low and not necessarily hierarchical. This paper uses the previously developed Multiple Security Domain Nondeducibility (MSDND) model to uncover potential integrity vulnerabilities in an electric microgrid. These are then mitigated, to the extent possible, by adding executable invariants on system operation. Implementation on the Electric Power and Intelligent Control (EPIC) testbed at the Singapore University of Technology and Design is reported, along with limitations of the design and the successes and shortcomings of attack mitigation.
DynPolAC: Dynamic Policy-Based Access Control for IoT Systems
Mehdi Karimibiuki, Ekta Aggarwal, K. Pattabiraman, A. Ivanov. PRDC 2018, DOI: 10.1109/PRDC.2018.00027

In the near future, Internet-of-Things (IoT) systems will be composed of autonomous, highly interactive, moving objects that require frequent handshakes to exchange information at intervals of seconds; drones and self-driving cars are examples of such systems. In these scenarios, data integrity, confidentiality, and privacy protection are of critical importance. Further, updates need to be processed quickly and with low overhead because of the systems' resource-constrained nature. This paper proposes Dynamic Policy-based Access Control (DynPolAC) as a model for protecting information in such systems. We construct a new access control policy language that satisfies the properties of highly dynamic IoT environments. Our access control engine comprises a rule parser and a checker that process policies and update them at run time with minimal service disruption. Compared to previously proposed methods for authorization on resource-constrained IoT platforms, DynPolAC achieves more than 7x performance improvement and more than 3x faster response times overall.
Software Test-Run Reliability Modeling with Non-homogeneous Binomial Processes
Yunlu Zhao, T. Dohi, H. Okamura. PRDC 2018, DOI: 10.1109/PRDC.2018.00025

While the number of test runs (test cases) is often used to define the time scale on which quantitative software reliability is measured, the common calendar-time modeling with non-homogeneous Poisson processes (NHPPs) is applied, as an approximation, to describe this time scale and the software fault-count phenomena as well. In this paper we conjecture that such an approximate treatment is not theoretically justified, and we propose a simple test-run reliability modeling framework based on non-homogeneous binomial processes (NHBPs). We show that the Poisson-binomial distribution plays a central role in software test-run reliability modeling and apply it to the software release decision. In numerical experiments with seven software fault count data sets, we compare the NHBP-based software reliability models (SRMs) with their corresponding NHPP-based SRMs and discuss the applicability of NHBP-based software test-run reliability modeling.
Economic Analysis of Blockchain Technology on Digital Platform Market
Hyojung Lee, Kiwoon Sung, Kyusang Lee, Jaeseok Lee, Seungjai Min. PRDC 2018, DOI: 10.1109/PRDC.2018.00020

Blockchain technology in the platform business is becoming a new paradigm that brings security, irreversibility, and trust closer to both clients and service providers (SPs), enabling a better quality of service. To provide an economic analysis of such a blockchain-based platform business, a game-theoretic approach is used to model a competitive market against an incumbent platform operated by a centralizer acting as a trusted third party. In this market, the platforms act as mediators that deliver the services provided by SPs to clients. The crucial factors for the success of a blockchain-based platform business are (i) how SPs' participation is reflected in the platform's quality of service (QoS) and (ii) how to incentivize SPs to contribute resources such as computing and storage infrastructure. We formulate a non-cooperative two-stage dynamic game, where the first stage models how to incentivize SPs in a blockchain-based platform and the second stage models the competition between platforms to attract clients. We provide an equilibrium analysis, which gives useful insight into how much the service quality of a blockchain-based platform affects the competition between platforms, and we derive the equilibrium incentive strategy for SPs. Moreover, our numerical analysis shows that the equilibrium incentive increases in proportion to the QoS of the blockchain-based platform, whereas the incentive becomes negative if the platform's QoS is non-increasing in the number of participating SPs.
Do Nothing, But Carefully: Fault Tolerance with Timing Guarantees for Multiprocessor Systems Devoid of Online Adaptation
G. V. D. Brüggen, Lea Schönberger, Jian-Jia Chen. PRDC 2018, DOI: 10.1109/PRDC.2018.00010

Many practical real-time systems must sustain reliability threats induced by their physical environments, such as transient faults, that cause short-term abnormal system behavior. To cope with such changes in system behavior, online adaptations, which may introduce high computational overhead, are often performed to ensure the timeliness of the more important tasks, while no guarantees are provided for the less important tasks. In this work, we propose a system model that requires no online adaptation but, following the concept of dynamic real-time guarantees, provides either full or limited timing guarantees depending on the system behavior: under normal system behavior, timeliness is guaranteed for all tasks; otherwise, timeliness is guaranteed only for the more important tasks, while bounded tardiness is ensured for the less important ones. Aiming to provide such dynamic timing guarantees, we propose a suitable system model and discuss how it can be established by means of partitioned as well as semi-partitioned strategies. Moreover, we propose an approach for handling abnormal behavior of longer duration, such as intermittent faults or processor overheating, by performing task migration to compensate for the affected system component and increase the system's reliability. Comprehensive experiments show that good acceptance ratios can be achieved under partitioned scheduling and further improved under semi-partitioned strategies. In addition, the proposed migration techniques lead to a reasonable trade-off between the decrease in schedulability and the gain in robustness. The presented approaches also apply to mixed-criticality systems with two criticality levels.
Degradable Restructuring of Mesh-Connected Processor Arrays with Spares on Orthogonal Sides
I. Takanami, Masaru Fukushi. PRDC 2018, DOI: 10.1109/PRDC.2018.00011

We present a restructuring method that applies a degradation approach to mesh-connected processor arrays with spare processing elements on two orthogonal sides of the array. An array with faulty processing elements is restructured by shifting healthy processing elements toward the faulty ones using single-track switches. First, we briefly explain an algorithm that decides the necessary and sufficient condition (called the restructurable condition) under which an array can be restructured while keeping its logical size. Next, we present a method whereby, if the array does not satisfy the restructurable condition, rows and/or columns are functionally deleted so that the subarray consisting of the remaining rows and columns satisfies the condition. Finally, simulation results are shown.
An Approach for Formal Analysis of the Security of a Water Treatment Testbed
Sai Sidharth Patlolla, B. McMillin, Sridhar Adepu, A. Mathur. PRDC 2018, DOI: 10.1109/PRDC.2018.00022

An increase in the number of attacks on cyber-physical systems (CPS) has raised concerns over the vulnerability of critical infrastructure, such as water treatment plants and oil and gas facilities, to cyber attacks. Such systems are controlled by an Industrial Control System (ICS) comprising controllers that communicate with each other, and with physical sensors and actuators, over a communications network. This paper focuses on the Multiple Security Domain Nondeducibility (MSDND) model to identify points of attack on the system that hide critical information rather than steal it, as in the STUXNET virus. It is shown how MSDND analysis, conducted on a realistic multi-stage water treatment testbed, is useful in enhancing the security of a water treatment plant. Based on the MSDND analysis, this work offers thorough documentation of the vulnerable points of attack, the invariants used for removing the vulnerabilities, and suggested design decisions that help in developing invariants to mitigate attacks.
Attempt to Apply Machine Learning to a Failure Database - A Case Study on Communications Networks
Koichi Bando, Kenji Tanaka. PRDC 2018, DOI: 10.1109/PRDC.2018.00040

Progress in IT has brought great improvements in convenience; however, IT can also cause failures with significant negative impacts, such as system outages. To improve this situation, it is important to accumulate and analyze numerous past failure cases. To this end, the authors have applied machine learning to a previously accumulated failure database. We constructed a mechanism to calculate the degree of similarity between documents using two methods: one uses the appearance frequency of words, and the other uses the appearance probability of each topic extracted from the whole document collection. In the present paper, focusing on communications network failures, we realize a function that extracts past failure cases similar to inquiry inputs describing new failures. A detailed analysis and comparison of the results extracted by the two methods are presented.