As software systems grow more complex, the cost of software maintenance keeps rising and software reliability becomes difficult to guarantee. Machine-learning-based software defect prediction has therefore attracted considerable attention from researchers. Because association rules are highly interpretable, association rule algorithms are often used for classification tasks. However, the class imbalance problem severely degrades the performance of traditional software defect classifiers based on association rule mining, so an association rule algorithm that can handle class-imbalanced data is needed. This paper proposes a software defect prediction classifier based on association rule mining with three minimum support thresholds, which aims to improve the quality of three kinds of frequent itemsets by setting separate supports for itemsets that contain the defect label, itemsets that contain the non-defect label, and itemsets that contain only software metrics. The algorithm is compared with four other machine learning algorithms, and the results show that it is effective.
{"title":"A Software Defect Prediction Classifier based on Three Minimum Support Threshold Association Rule Mining","authors":"Wentao Wu, Shihai Wang, Yuanxun Shao, Mingxing Zhang, Wandong Xie","doi":"10.1109/QRS-C57518.2022.00048","DOIUrl":"https://doi.org/10.1109/QRS-C57518.2022.00048","url":null,"abstract":"With the increasing complexity of software system, the cost of software maintenance is increasing. In this case, software reliability is difficult to guarantee. To address this problem, software defect prediction technology based on machine learning has been attached great importance by a large number of scholars. Because of the strong interpretability of association rules, association rule algorithms are often used in classification tasks. However, the class imbalance problem seriously impacts the performance of traditional software defect classifiers based on association rule mining, therefore, it is necessary to use association rule algorithm that can be used to handle class imbalance data to deal with this problem. In this paper, a software defect prediction classifier based on three minimum support threshold association rule mining is proposed, which aims to improve the quality of these three frequent item-sets by considering the support of frequent item-sets containing defect labels, including non-defect labels and only including software metrics. The algorithm is compared with other four machine learning algorithms, and the results show that the algorithm is effective.","PeriodicalId":183728,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125922462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1109/QRS-C57518.2022.00080
Jiahao Li, Xinhao Cui, Yichen Wang, Feng Xie
Software testing is an indispensable part of the software life cycle, and the quality of testing largely determines the quality of the software delivered to users. However, current research on software testing quality still examines influencing factors mainly through theoretical analysis and model application; empirical studies based on real data and samples are lacking. To address this gap, this paper is the first to propose using natural experiments for empirical research on software testing quality, and it analyzes the feasibility of this proposal through a literature review. The empirical study is conducted on real data provided by enterprises. By examining partner enterprises' project data and event records, we select the "CNAS expansion of the on-site review and audit training" as the exogenous event and team capacity as the explanatory variable to build a natural experimental model of test quality. Analysis of the fitted model shows that the exogenous event has a significant treatment effect on test quality, and the results pass four commonly used robustness tests, indicating that they hold at the 95% confidence level. We also analyze the control variables and find that test team size can affect test quality by influencing test diversity, which suggests directions for follow-up research. Finally, based on the experimental design and empirical results, we summarize a methodological paradigm for applying the natural experiment method to the study of factors affecting test quality. This provides a reference example for future empirical studies that introduce natural experiments into research on software quality and test quality, and broadens the research horizon of related fields.
{"title":"An Empirical Study of Software Testing Quality based on Natural Experiments","authors":"Jiahao Li, Xinhao Cui, Yichen Wang, Feng Xie","doi":"10.1109/QRS-C57518.2022.00080","DOIUrl":"https://doi.org/10.1109/QRS-C57518.2022.00080","url":null,"abstract":"Software testing is an indispensable part of the software life cycle, and the quality of software testing largely affects the quality of software delivered to users. However, in the current stage of software testing quality research, the focus on testing quality influencing factors is still limited to theoretical analysis and model application, and there is a lack of empirical research based on real data and samples. To address this problem, this paper proposes the idea of using natural experiments to conduct empirical research in the field of software testing quality for the first time, and analyzes the feasibility of the research proposal through literature research. In this paper, the empirical study was conducted using real data provided by enterprises, through the research of cooperative enterprise project data and event records, selected the “CNAS expansion of the on-site review and audit training” as an exogenous event, team capacity as the explanatory variable to build a natural experimental model of test quality. After completing the empirical model, we analyze the results, which show that the exogenous events have a significant disposition effect on test quality, and the empirical results pass the four commonly used robustness tests, indicating that the experimental results have a high 95% confidence level. In addition, this paper also analyzes the control variables in the empirical model and finds that test team size can have an impact on test quality by affecting test diversity, which provides ideas for subsequent research. Finally, based on the experimental ideas and empirical results of this study, this paper summarizes the methodological paradigm of applying the natural experiment method to empirically study and analyze the factors affecting test quality, which provides an important reference example for future empirical studies by introducing the natural experiment method in the study of software quality and test quality, and greatly expands the research horizon of related fields.","PeriodicalId":183728,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124744564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1109/QRS-C57518.2022.00089
Wanqing Cheng, Xiujie Zhao
Modeling complex multi-component systems and optimizing their maintenance have been widely investigated in recent years. In this paper, we propose a maintenance optimization policy for dependent two-unit systems whose degradation is modeled by a bivariate Wiener process. We consider three maintenance actions (doing nothing, imperfect repair, and replacement) and model imperfect repair by introducing a degradation ratio that follows a beta distribution. We then formulate the infinite-horizon maintenance problem as a Markov decision process (MDP), solve it with the standard sequential decision-making machinery, and present its structural properties and a value iteration method. A numerical example illustrates the viability of the algorithm and the properties of the model.
{"title":"Maintenance Optimization for Dependent Two-Unit Systems Considering Stochastic Degradation and Imperfect Maintenance","authors":"Wanqing Cheng, Xiujie Zhao","doi":"10.1109/QRS-C57518.2022.00089","DOIUrl":"https://doi.org/10.1109/QRS-C57518.2022.00089","url":null,"abstract":"Modeling on complex multi-component systems and seek the maintenance optimization have been widely investigated in recent years. In this paper, we propose a maintenance optimization policy that is applicable to dependent two-unit systems. The degradation model is based on a bivariate Wiener process. We simultaneously consider the three maintenance actions including doing nothing, imperfect repair and replacement, and implement the modeling of imperfect repair by introducing the degradation ratio that follows the beta distribution. Moreover, we solve the optimal maintenance problem of infinite horizons using the typical MDP method in sequential decision making, and give its structural properties and value iteration method. A numerical example is then demonstrated to illustrate the algorithm's viability and to investigate the model's properties.","PeriodicalId":183728,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117009907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1109/QRS-C57518.2022.00076
Yujuan Cheng
In this era of information explosion, students face a huge number of online courses and need help selecting suitable resources. This paper therefore proposes a knowledge graph-based learning path recommendation method that delivers personalized course recommendations to students. The knowledge graph of professional courses is built by constructing an ontology library of online courses, and the graph database Neo4j is used to store it. Spring Boot is used to build the backend, which implements a course recommendation algorithm that filters learning resources by analyzing the courses a student has taken and how well they were learned, and then generates a course recommendation list for each student. A system developed on this basis can effectively recommend course learning paths to learners and better meet students' learning needs.
{"title":"A Learning Path Recommendation Method for Knowledge Graph of Professional Courses","authors":"Yujuan Cheng","doi":"10.1109/QRS-C57518.2022.00076","DOIUrl":"https://doi.org/10.1109/QRS-C57518.2022.00076","url":null,"abstract":"In this era of information explosion, in order to help students select suitable resources when facing a large number of online courses, this paper proposes a knowledge graph-based learning path recommendation method to bring personalized course recommendations to students. The knowledge graph of professional courses is realized by completing the construction of the ontology library of online courses, and the graph database Neo4j is used to store the knowledge graph. SpringBoot is used to build the backend system and implement a set of course recommendation algorithm to filter the learning resources after analyzing the courses students have taken and the quality of course learning, and generate a list of course recommendations for each student. After developing the system based on this method, it can effectively help learners recommend course learning paths and greatly meet students' learning needs.","PeriodicalId":183728,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132560260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1109/QRS-C57518.2022.00022
Wenxuan Wang, Yongqiang Chen, Jiangchen Zhou, Huan Jin
The growth of the Internet has accelerated the distribution of digital content such as graphics and audio, and the associated copyright issues have gained attention. The apparel design industry is particularly notable: it lacks explicit legal constraints, which makes piracy even easier. In this paper, we combine the decentralized and tamper-evident features of blockchain to design and implement a consortium-chain-based copyright management system for apparel design drawings on the Hyperledger Fabric platform. The copyright checking model uses a perceptual hash algorithm and a difference hash algorithm to compute the image similarity of the garment effect drawing and the garment plan drawing, respectively, and takes the mean of the two similarities to decide whether a design is plagiarized. The design drawings are stored on IPFS, which compensates for blockchain's poor scalability and expensive storage. Simulation experiments show that the blockchain system maintains high throughput and that the proposed originality checking model meets practical requirements.
{"title":"Hyperledger Fabric-Based Copyright Management System for Clothing design drawings","authors":"Wenxuan Wang, Yongqiang Chen, Jiangchen Zhou, Huan Jin","doi":"10.1109/QRS-C57518.2022.00022","DOIUrl":"https://doi.org/10.1109/QRS-C57518.2022.00022","url":null,"abstract":"The growth of the Internet has accelerated the distribution of digital content such as graphics and audio, and its copyright issues have gained attention. One particular industry worth noting, apparel design, has no explicit legal constraints, making piracy even easier. In this paper, we combine the decentralized and tamper-evident features of blockchain to design and implement a federated chain-based copyright management system for apparel design diagrams using the Hyperledger Fabric platform. In this paper, the copyright checking model uses perceptual hash algorithm and difference hash algorithm to calculate the graph similarity of garment effect and garment plan respectively, and calculate the mean value of similarity between them to determine whether they are plagiarized. The design diagrams are stored on IPFS, which makes up for the drawbacks of blockchain's difficulty in scaling and expensive storage space. Simulation experiments show that the blockchain system can maintain a high throughput and the originality checking model proposed in this paper can meet the practical requirements.","PeriodicalId":183728,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131261921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1109/QRS-C57518.2022.00014
Xiang Wang, Xing Zhang, Changda Wang
Distributed Denial-of-Service (DDoS) attacks are serious network threats that are hard to eliminate. Current network entropy-based DDoS detection methods struggle to distinguish DDoS attack traffic from normal traffic with a fixed, empirically chosen detection threshold, since such thresholds are usually specific to individual cases. Using the Rényi entropy of a network, this paper devises a Generalized Network Temperature (GNT) based approach to DDoS attack detection, where GNT is a novel, fine-grained statistical indicator that describes network entropy changes in light of both network traffic and network topology changes. Within a series of predefined time windows, the approach first collects selected network traffic features and computes the GNT for each window. It then acknowledges or rejects a DDoS attack by comparing each GNT with a dynamically adjustable threshold generated by an Exponentially Weighted Moving Average (EWMA) model. The publicly available CIC DoS 2017 dataset is used to evaluate the approach. The experimental results show that it outperforms known Shannon entropy-based DDoS detection methods in both efficacy and efficiency.
{"title":"Generalized Network Temperature for DDoS Detection through Rényi Entropy","authors":"Xiang Wang, Xing Zhang, Changda Wang","doi":"10.1109/QRS-C57518.2022.00014","DOIUrl":"https://doi.org/10.1109/QRS-C57518.2022.00014","url":null,"abstract":"Distributed Denial-of-Services (DDoS) are serious network threats hardly eliminated. Current network entropy-based DDoS detection methods suffer from distinguishing DDoS attack traffic among normal traffic through a fixed empirical detection threshold, i.e., most of such thresholds are case-sensitive ones. With the Rényi entropy of a network, the paper devised a Generalized Network Temperature (GNT) based approach for DDoS attack detection, where GNT is a novel and fine-granular-scale statistical indicator that describes the network entropy changes in the light of both network traffic and network topology changes. Within a series of predefined time windows, our proposed approach first collects the selected network traffic features and then calculates the GNT for each time window. Second, the DDoS attacks are then acknowledged or denied by comparing each GNT to a dynamically adjustable thresh-old generated by the Exponentially Weighted Moving Average (EWMA) model. Furthermore, the publicly available CIC DoS 2017 dataset is utilized to test the proposed approach in the paper. The experimental results show that our proposed approach outperforms the known Shannon entropy-based DDoS attack detection methods with respect to both efficacy and efficiency.","PeriodicalId":183728,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131549363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1109/QRS-C57518.2022.00097
Xinrong Hu, Jingjing Huang, Junping Liu, Qiang Zhu, J. Yang
Traditional Knowledge Graph Question Answering (KGQA) usually focuses on entity recognition and relation detection. Common relation detection methods cannot detect new relations that have no corresponding word entries in the system, and error propagation loses some semantic similarity information. In this paper, we propose an end-to-end knowledge graph question-answering framework, TransCL. Latent knowledge is first mined from the knowledge base, and augmented information is generated in the form of question-answer pairs. Positive features are then transformed into hard positive features using a feature transformation method based on positive extrapolation. We use contrastive learning to aggregate vectors while retaining the original information, capturing deep matching features between data samples through contrast. TransCL is better at fuzzy matching and at handling unknown inputs. Experiments show that our method achieves an F1 score of 85.50% on the NLPCC-ICCPOL-2016 open-domain QA dataset.
{"title":"Knowledge Graph Question Answering based on Contrastive Learning and Feature Transformation","authors":"Xinrong Hu, Jingjing Huang, Junping Liu, Qiang Zhu, J. Yang","doi":"10.1109/QRS-C57518.2022.00097","DOIUrl":"https://doi.org/10.1109/QRS-C57518.2022.00097","url":null,"abstract":"Traditional Knowledge Graph Question Answering(KGQA) usually focuses on entity recognition and relation detection. Common relation detection methods cannot detect new relations without corresponding word entries in the system, and the propagation of errors leads to the loss of some semantic similarity information. In this paper, we propose an end-to-end knowledge graph question-answering framework (TransCL). Latent knowledge is first mined from the knowledge base and augmented information is generated in the form of question-answer pairs. Positive features are then transformed into difficult positive features using a feature transformation method based on positive extrapolation. We use contrastive learning methods to aggregate vectors and retain the original information, capturing deep matching features between data samples by contrast. TransCL is more capable of fuzzy matching and dealing with unknown inputs. Experiments show that our method achieves an F1 score of 85.50% on the NLPCC-ICCPOL-2016 open domain QA dataset.","PeriodicalId":183728,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C)","volume":"22 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123651239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1109/QRS-C57518.2022.00039
Jinfu Chen, Yemin Yin, Saihua Cai, Ye Geng, Longxia Huang
With the rapid development of computer science and technology, the scale of software keeps growing, and the advantages of components have made them widely used in component-based software engineering. However, once security risks appear in components, the whole software system may suffer irreversible problems or even fatal collapse. In recent years, various testing methods have been proposed to ensure component security, but these traditional component testing methods do not consider whether the testing process satisfies the testing requirements, so the generated test cases cannot effectively detect potential faults in components. To solve this problem, this paper proposes an improved test case generation method based on testing requirements, named TGTR, to generate effective test cases for better detection of software faults. We first construct a test requirement meta-model by selecting features of the system under test; we then convert the state diagram model of the system under test into a labeled transition system using the designed transition path generation algorithm; finally, we generate the improved test cases from reachable paths. Experimental results on three components show that, compared with traditional test methods, the method optimizes the transition paths, producing fewer test cases while achieving similar detection effectiveness in less time.
{"title":"An Improved Test Case Generation Method based on Test Requirements for Testing Software Component","authors":"Jinfu Chen, Yemin Yin, Saihua Cai, Ye Geng, Longxia Huang","doi":"10.1109/QRS-C57518.2022.00039","DOIUrl":"https://doi.org/10.1109/QRS-C57518.2022.00039","url":null,"abstract":"With the rapid development of computer science and technology, the scale of software is becoming much bigger, and the advantages of components also makes it deeply used in component-based software engineering. However, once some security risks appear in the components, then, the whole software system would suffer irreversible problems or even fatal collapse. In recent years, various testing methods have been proposed to ensure the component security, but these traditional component testing methods did not consider whether the process of software testing satisfies the testing requirements, which causes the generated test cases cannot effectively detect potential faults in components. To solve this problem, this paper proposes an improved test case generation method based on testing requirements, namely TGTR, to generate effectively test cases for better detecting the software faults. We first construct test requirement meta model through selecting some features of the system under test; And then, we convert the state diagram model of the system under test to marker migration system based on the designed migration path generation algorithm; Finally, we generate the improved test cases through reachable path. Experimental results on three components show that the method optimizes the migration path compared to traditional test methods, resulting in fewer test cases while achieving similar detection efficiencies and taking less time.","PeriodicalId":183728,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116437485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1109/QRS-C57518.2022.00063
Shoma Hamada, Haibo Yu, Vo Dai Trinh, Yuri Nishimura, Jianjun Zhao
Probabilistic programming systems allow developers to model random phenomena and reason about the models efficiently. As the number of probabilistic programming systems grows significantly and they are used more and more widely, the reliability of such systems is becoming very important. Analyzing real bugs in existing systems is crucial for developing efficient bug detection tools for probabilistic programming systems. This paper conducts an empirical study of bugs and their features in PyMC3, a real probabilistic programming system. Among 271 closed bugs, we identified 20 bugs that are unique to probabilistic programming languages and extracted eight bug patterns from them. The results show that many of the bugs were caused by type-related issues. We also propose possible methods for automatically detecting these bug patterns. We expect this to contribute to the development of bug detection tools that capture the characteristics of bugs in real probabilistic programs.
{"title":"Bug Patterns in Probabilistic Programming Systems","authors":"Shoma Hamada, Haibo Yu, Vo Dai Trinh, Yuri Nishimura, Jianjun Zhao","doi":"10.1109/QRS-C57518.2022.00063","DOIUrl":"https://doi.org/10.1109/QRS-C57518.2022.00063","url":null,"abstract":"Probabilistic programming systems allow developers to model random phenomena and perform reasoning about the model efficiently. As the number of probabilistic programming systems is growing significantly and are used more and more widely, the reliability of such systems is becoming very important. It is crucial to analyze real bugs of existing similar systems in order to develop efficient bug detection tools for probabilistic programming systems. This paper conducts an empirical study investigating bugs and their features on PyMC3, a real probabilistic programming system. Among 271 closed bugs, we identified 20 bugs that are unique to probabilistic programming languages and extracted eight bug patterns from these bugs. The result showed that many of the bugs were caused by types. We also propose some possible methods for automatically detecting these bug patterns. It is expected that this will contribute to the development of bug detection tools by capturing the characteristics of bugs in actual probabilistic programs in the future.","PeriodicalId":183728,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124630069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-01 | DOI: 10.1109/QRS-C57518.2022.00026
R. Sasaki
In recent years, the metaverse has increasingly been used for activities such as gaming, shopping, and sightseeing. While the metaverse has various advantages, there are also concerns about risks such as security problems. Unless appropriate risk assessments are carried out in advance and countermeasures are taken, serious damage may occur during metaverse use. However, no risk assessment method for the metaverse has been proposed or applied so far. We therefore adapted the method the authors previously developed for IoT risk assessment to the metaverse gaming environment and developed the Multiple Risk Communicator for Metaverse (MRC-MV) as a risk assessment method. In this research, a trial application of MRC-MV to metaverse shopping showed that the method is effective for this activity as well. The method not only clarifies high-risk threats but also makes it possible to identify high-priority groups of countermeasures by semi-quantitatively considering countermeasure effectiveness, cost, and usability.
{"title":"Trial Application of Risk Assessment Method for Metaverse","authors":"R. Sasaki","doi":"10.1109/QRS-C57518.2022.00026","DOIUrl":"https://doi.org/10.1109/QRS-C57518.2022.00026","url":null,"abstract":"In recent years, the use of the metaverse has been increasing for activities such as gaming, shopping, and sightseeing. While the metaverse has various advantages, there are also concerns about risks such as security problems. Unless appropriate risk assessments are carried out in advance and countermeasures are taken, serious damage may occur during metaverse use. However, there has been no proposal or application of a risk assessment method for the metaverse. Therefore, we considered an improved method by applying the method previously developed by the authors for IoT risk assessment to the metaverse gaming environment and developed the Multiple Risk Communicator for Metaverse (MRC-MV) as a risk assessment method. Furthermore, in this research, by trial application of MRC-MV to metaverse shopping, we obtained results that show that this method is effective for this activity. This method not only clarifies threats with high risks, but also makes it possible to clarify high-priority countermeasure groups by semi-quantitatively considering countermeasure effectiveness, cost, and usability.","PeriodicalId":183728,"journal":{"name":"2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124755226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}