Towards Assessing the Readability of Programming Error Messages
Brett A. Becker, Paul Denny, J. Prather, Raymond Pettit, Robert Nix, Catherine Mooney
Proceedings of the 23rd Australasian Computing Education Conference, February 2021. DOI: 10.1145/3441636.3442320
{"title":"编程错误信息的可读性评估","authors":"Brett A. Becker, Paul Denny, J. Prather, Raymond Pettit, Robert Nix, Catherine Mooney","doi":"10.1145/3441636.3442320","DOIUrl":null,"url":null,"abstract":"Programming error messages have proven to be notoriously problematic for novices who are learning to program. Although recent efforts have focused on improving message wording, these have been criticized for attempting to improve usability without first understanding and addressing readability. To date, there has been no research dedicated to the readability of programming error messages and how this could be assessed. In this paper we examine human-based assessments of programming error message readability and make two important contributions. First, we conduct an experiment using the top twenty most-frequent error messages in three popular programming languages (Python, Java, and C), revealing that human notions of readability are highly subjective and dependent on both programming experience and language familiarity. Both novices and experts agreed more about which messages are more readable, but disagreed more about which messages are not readable. Second, we leverage the data from this experiment to uncover several key factors that seem to affect message readability: message length, message tone, and use of jargon. We discuss how these factors can help guide future efforts to design a readability metric for programming error messages.","PeriodicalId":334899,"journal":{"name":"Proceedings of the 23rd Australasian Computing Education Conference","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Towards Assessing the Readability of Programming Error Messages\",\"authors\":\"Brett A. Becker, Paul Denny, J. Prather, Raymond Pettit, Robert Nix, Catherine Mooney\",\"doi\":\"10.1145/3441636.3442320\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Programming error messages have proven to be notoriously problematic for novices who are learning to program. Although recent efforts have focused on improving message wording, these have been criticized for attempting to improve usability without first understanding and addressing readability. To date, there has been no research dedicated to the readability of programming error messages and how this could be assessed. In this paper we examine human-based assessments of programming error message readability and make two important contributions. First, we conduct an experiment using the top twenty most-frequent error messages in three popular programming languages (Python, Java, and C), revealing that human notions of readability are highly subjective and dependent on both programming experience and language familiarity. Both novices and experts agreed more about which messages are more readable, but disagreed more about which messages are not readable. Second, we leverage the data from this experiment to uncover several key factors that seem to affect message readability: message length, message tone, and use of jargon. 
We discuss how these factors can help guide future efforts to design a readability metric for programming error messages.\",\"PeriodicalId\":334899,\"journal\":{\"name\":\"Proceedings of the 23rd Australasian Computing Education Conference\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-02-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 23rd Australasian Computing Education Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3441636.3442320\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 23rd Australasian Computing Education Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3441636.3442320","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Programming error messages have proven to be notoriously problematic for novices who are learning to program. Although recent efforts have focused on improving message wording, these have been criticized for attempting to improve usability without first understanding and addressing readability. To date, there has been no research dedicated to the readability of programming error messages and how it could be assessed. In this paper we examine human-based assessments of programming error message readability and make two important contributions. First, we conduct an experiment using the twenty most frequent error messages in three popular programming languages (Python, Java, and C), revealing that human notions of readability are highly subjective and dependent on both programming experience and language familiarity. Novices and experts alike agreed more about which messages are readable, but disagreed more about which messages are not. Second, we leverage the data from this experiment to uncover several key factors that seem to affect message readability: message length, message tone, and use of jargon. We discuss how these factors can help guide future efforts to design a readability metric for programming error messages.
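The paper stops short of defining a metric, but the three factors it identifies (length, tone, jargon) suggest what one might weigh. Purely as an illustration, and not the authors' metric, here is a minimal Python sketch of a toy heuristic; the jargon list, tone-cue list, weights, and the function name readability_score are all invented for this sketch:

```python
# Illustrative toy heuristic only -- NOT the metric proposed in the paper.
# The jargon set, tone cues, and weights below are assumptions for this sketch.

JARGON = {"segfault", "lvalue", "rvalue", "token", "operand", "identifier",
          "dereference", "stack", "traceback", "eof"}
HARSH_TONE = {"fatal", "illegal", "invalid", "abort"}

def readability_score(message: str) -> float:
    """Return a rough 0..1 score; higher means (hypothetically) more readable."""
    words = message.lower().split()
    if not words:
        return 0.0
    # Factor 1: length -- longer messages are assumed harder to read.
    length_penalty = min(len(words) / 40.0, 1.0)
    # Factor 2: jargon -- fraction of words drawn from a technical vocabulary.
    jargon_penalty = sum(w.strip(".,:;") in JARGON for w in words) / len(words)
    # Factor 3: tone -- a fixed cost if any harsh-sounding cue word appears.
    tone_penalty = any(w.strip(".,:;") in HARSH_TONE for w in words)
    # Invented weights: length and jargon dominate, harsh tone adds a flat cost.
    score = 1.0 - 0.4 * length_penalty - 0.4 * jargon_penalty - 0.2 * tone_penalty
    return max(score, 0.0)

if __name__ == "__main__":
    print(readability_score("SyntaxError: invalid syntax"))
    print(readability_score("fatal error: expected ';' before '}' token"))
```

Any real metric of this kind would need its weights and word lists calibrated against human judgments like those collected in the paper's experiment, and validated across languages and experience levels.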