{"title":"通过代码表示学习提升变形关系预测:实证研究","authors":"Xuedan Zheng, Mingyue Jiang, Zhi Quan Zhou","doi":"10.1002/stvr.1889","DOIUrl":null,"url":null,"abstract":"Metamorphic testing (MT) is an effective testing technique having a broad range of applications. One key task for MT is the identification of metamorphic relations (MRs), which is a fundamental mechanism in MT and is critical to the automation of MT. Prior studies have proposed approaches for predicting MRs (PMR). One major idea behind these PMR approaches is to represent program source code information via manually designed code features and then to apply machine‐learning–based classifiers to automatically predict whether a specific MR can be applied on the target program. Nevertheless, the human‐involved procedure of selecting and extracting code features is costly, and it may not be easy to obtain sufficiently comprehensive features for representing source code. To overcome this limitation, in this study, we explore and evaluate the effectiveness of code representation learning techniques for PMR. By applying neural code representation models for automatically mapping program source code to code vectors, the PMR procedure can be boosted with learned code representations. We develop 32 PMR instances by, respectively, combining 8 code representation models with 4 typical classification models and conduct an extensive empirical study to investigate the effectiveness of code representation learning techniques in the context of MR prediction. Our findings reveal that code representation learning can positively contribute to the prediction of MRs and provide insights into the practical usage of code representation models in the context of MR prediction. Our findings could help researchers and practitioners to gain a deeper understanding of the strength of code representation learning for PMR and, hence, pave the way for future research in deriving or extracting MRs from program source code.","PeriodicalId":501413,"journal":{"name":"Software Testing, Verification and Reliability","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Boosting Metamorphic Relation Prediction via Code Representation Learning: An Empirical Study\",\"authors\":\"Xuedan Zheng, Mingyue Jiang, Zhi Quan Zhou\",\"doi\":\"10.1002/stvr.1889\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Metamorphic testing (MT) is an effective testing technique having a broad range of applications. One key task for MT is the identification of metamorphic relations (MRs), which is a fundamental mechanism in MT and is critical to the automation of MT. Prior studies have proposed approaches for predicting MRs (PMR). One major idea behind these PMR approaches is to represent program source code information via manually designed code features and then to apply machine‐learning–based classifiers to automatically predict whether a specific MR can be applied on the target program. Nevertheless, the human‐involved procedure of selecting and extracting code features is costly, and it may not be easy to obtain sufficiently comprehensive features for representing source code. To overcome this limitation, in this study, we explore and evaluate the effectiveness of code representation learning techniques for PMR. 
By applying neural code representation models for automatically mapping program source code to code vectors, the PMR procedure can be boosted with learned code representations. We develop 32 PMR instances by, respectively, combining 8 code representation models with 4 typical classification models and conduct an extensive empirical study to investigate the effectiveness of code representation learning techniques in the context of MR prediction. Our findings reveal that code representation learning can positively contribute to the prediction of MRs and provide insights into the practical usage of code representation models in the context of MR prediction. Our findings could help researchers and practitioners to gain a deeper understanding of the strength of code representation learning for PMR and, hence, pave the way for future research in deriving or extracting MRs from program source code.\",\"PeriodicalId\":501413,\"journal\":{\"name\":\"Software Testing, Verification and Reliability\",\"volume\":\"25 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Software Testing, Verification and Reliability\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/stvr.1889\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Testing, Verification and Reliability","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/stvr.1889","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Boosting Metamorphic Relation Prediction via Code Representation Learning: An Empirical Study
Metamorphic testing (MT) is an effective testing technique with a broad range of applications. One key task in MT is the identification of metamorphic relations (MRs), which are the fundamental mechanism of MT and are critical to its automation. Prior studies have proposed approaches for predicting MRs (PMR). A central idea behind these PMR approaches is to represent program source code via manually designed code features and then apply machine-learning-based classifiers to automatically predict whether a specific MR can be applied to the target program. However, the human-involved procedure of selecting and extracting code features is costly, and it may be difficult to obtain features that represent source code comprehensively. To overcome this limitation, this study explores and evaluates the effectiveness of code representation learning techniques for PMR. By applying neural code representation models that automatically map program source code to code vectors, the PMR procedure can be boosted with learned code representations. We develop 32 PMR instances by pairing each of 8 code representation models with each of 4 typical classification models, and we conduct an extensive empirical study of the effectiveness of code representation learning in the context of MR prediction. Our findings reveal that code representation learning can positively contribute to the prediction of MRs and provide insights into the practical use of code representation models for MR prediction. These findings can help researchers and practitioners gain a deeper understanding of the strengths of code representation learning for PMR and, hence, pave the way for future research on deriving or extracting MRs from program source code.
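To make the PMR pipeline described in the abstract concrete, the following Python sketch pairs one pretrained code encoder with one off-the-shelf classifier: a method's source code is mapped to a fixed-length vector, and the vector is used to predict whether a candidate MR applies. This is an illustrative assumption-laden sketch, not the paper's setup: the checkpoint (microsoft/codebert-base), the logistic-regression classifier, and the toy labelled programs are all placeholders chosen for brevity.

```python
# Illustrative sketch: embed source code with a pretrained code representation
# model, then train a conventional classifier to predict MR applicability.
# The checkpoint, classifier and labels below are assumptions, not the paper's.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

checkpoint = "microsoft/codebert-base"  # any pretrained code encoder could be substituted
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
encoder = AutoModel.from_pretrained(checkpoint)
encoder.eval()

def embed(source_code: str) -> list:
    """Map source code to a fixed-length vector via mean-pooled token embeddings."""
    inputs = tokenizer(source_code, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # shape: (1, seq_len, hidden_size)
    return hidden.mean(dim=1).squeeze(0).tolist()

# Hypothetical training data: label 1 if a given MR (here, a permutation relation
# over the two inputs) applies to the method, 0 otherwise.
programs = [
    "int add(int a, int b) { return a + b; }",  # input order does not matter -> MR applies
    "int sub(int a, int b) { return a - b; }",  # input order matters -> MR does not apply
]
labels = [1, 0]

X = [embed(p) for p in programs]
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Predict MR applicability for an unseen method.
candidate = "int mul(int x, int y) { return x * y; }"
print(clf.predict([embed(candidate)]))
```

The study itself pairs 8 code representation models with 4 classification models, yielding 32 PMR instances; the sketch shows only a single such pairing to illustrate the overall prediction procedure.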