Empirical Auto-Evaluation of Python Code for Performance Analysis of Transformer Network Using T5 Architecture

Isha Ganguli, Rajat Subhra Bhowmick, Shivam Biswas, J. Sil

2021 8th International Conference on Smart Computing and Communications (ICSCC), July 2021
DOI: 10.1109/ICSCC51209.2021.9528123
The immense real-time applicability of Python code makes the task of evaluating it highly intriguing in the Natural Language Processing (NLP) domain. Evaluating computer programs poses a challenge of logical and arithmetic understanding, so it is highly relevant to analyze the empirical ability of current state-of-the-art sequence-based neural architectures to evaluate small computer programs. One possible application of such analysis is the auto-evaluation of erroneous Python code. In this context, we focus our work on evaluating small Python code blocks, with and without errors, and examine the efficiency of the T5 Transformer network model on this task. Performance is measured in terms of accuracy, several ROUGE scores, and BLEU scores. Observations reveal that the T5 Transformer is able to compute the output of both correct and erroneous Python code blocks with more than 65% accuracy.
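The abstract reports accuracy, ROUGE, and BLEU as evaluation metrics. As a minimal illustration of the kind of n-gram overlap scoring that BLEU performs when comparing a model-predicted program output against the ground-truth output, here is a from-scratch sketch (a simplified BLEU with clipped n-gram precision and a brevity penalty, not the authors' actual evaluation pipeline):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions up to max_n, scaled by a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        cand_counts = Counter(ngrams(cand, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(1, sum(cand_counts.values()))
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(1, len(cand)))
    return bp * geo_mean

# Compare a hypothetical predicted output of a code block with the true output.
reference_output = "x = 5"
predicted_output = "x = 5"
print(bleu(reference_output, predicted_output))  # identical strings -> 1.0
```

In practice a library implementation (e.g. `nltk.translate.bleu_score`) with higher-order n-grams and smoothing would be used; this sketch only shows the core idea behind the metric.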