Empirical Auto-Evaluation of Python Code for Performance Analysis of Transformer Network Using T5 Architecture

Isha Ganguli, Rajat Subhra Bhowmick, Shivam Biswas, J. Sil
DOI: 10.1109/ICSCC51209.2021.9528123 (https://doi.org/10.1109/ICSCC51209.2021.9528123)
Journal: 2021 8th International Conference on Smart Computing and Communications (ICSCC)
Published: 2021-07-01
Citations: 4

Abstract

The immense real-time applicability of Python coding makes the task of evaluating the code highly intriguing in the Natural Language Processing (NLP) domain. Evaluation of computer programs poses a challenge of logical and arithmetic understanding. Therefore, it is highly relevant to analyze the empirical ability of current state-of-the-art sequence-based neural architectures in evaluating small computer programs. One possible application of such analysis is the auto-evaluation of erroneous Python code. In this context, we focused our work on evaluating small Python code blocks, with or without errors, and examined the efficiency of the latest T5 Transformer network model on this task. Performance has been measured in terms of accuracy, various ROUGE scores, and BLEU scores. Observations reveal that the T5 Transformer is able to compute the output of both correct and erroneous Python code blocks with more than 65% accuracy.
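The abstract scores predicted program outputs against ground truth using accuracy, ROUGE, and BLEU. As a minimal sketch of what such scoring can look like (the paper does not publish its evaluation code, so the function names, the toy data, and the simplified unigram ROUGE-1 below are all illustrative assumptions, not the authors' implementation):

```python
# Hedged sketch: scoring model-predicted outputs of small Python code
# blocks against reference outputs, with exact-match accuracy and a
# simplified unigram-overlap F1 in the spirit of ROUGE-1.
from collections import Counter

def exact_match_accuracy(preds, refs):
    """Fraction of predictions that exactly match the reference output."""
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

def rouge1_f1(pred, ref):
    """Unigram-overlap F1 between two whitespace-tokenized strings."""
    p_tokens, r_tokens = pred.split(), ref.split()
    overlap = sum((Counter(p_tokens) & Counter(r_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_tokens)
    recall = overlap / len(r_tokens)
    return 2 * precision * recall / (precision + recall)

# Toy data: reference outputs of three tiny code blocks (one erroneous,
# so its "output" is the raised error name) versus model predictions.
refs  = ["7", "SyntaxError", "hello world"]
preds = ["7", "SyntaxError", "hello there"]

print(exact_match_accuracy(preds, refs))       # 2 of 3 exact matches
print(rouge1_f1(preds[2], refs[2]))            # partial unigram overlap
```

Full metric suites such as BLEU with smoothing or ROUGE-L would normally come from established packages rather than be hand-rolled; the point here is only the shape of the comparison: one predicted output string per code block, scored against one reference string.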
Latest articles in this journal:
- FYEO: A Character Level Model For Lip Reading
- Parameter Dependencies and Optimization of True Random Number Generator (TRNG) using Genetic Algorithm (GA)
- Chaotic Time Series Prediction Model for Fractional-Order Duffing's Oscillator
- Segmentation of Brain Tumour in MR Images Using Modified Deep Learning Network
- Classification of Power Quality Disturbances in Emerging Power System with Distributed Generation Using Space Phasor Model and Normalized Cross Correlation