Impacting the Submission Timing of Student Work Using Gamification
Theresia Devi Indriasari, Paul Denny, Andrew Luxton-Reilly, Danielle Lottridge
DOI: 10.1145/3627217.3627218
Peer code review is not a standard activity within university programming courses, yet educators are interested in implementing it because it benefits students by developing their programming skills. One important challenge is how to motivate students to engage with the activity. In this study, we explore gamification as an approach for motivating students to manage their review submission time through the use of game elements and mechanics. We conducted a randomised controlled study and examined review submission times using log data and survey data. We found that the combination of game elements (battery, points, and leaderboard) influenced students in the gamification group to manage their review submission time better by spreading their submissions over the review period. These findings can help academics and educators understand how selected game mechanics can motivate students to distribute their review work more evenly over the duration of a course.
Leveraging Large Language Models for Analysis of Student Course Feedback
Zixuan Wang, Paul Denny, Juho Leinonen, Andrew Luxton-Reilly
DOI: 10.1145/3627217.3627221
This study investigates the use of large language models, specifically ChatGPT, to analyse feedback from a Summative Evaluation Tool (SET) used to collect student feedback on the quality of teaching. We find that these models enhance comprehension of SET scores and of the impact of context on student evaluations. This work aims to reveal hidden patterns in student evaluation data, demonstrating a positive first step towards automated, detailed analysis of student feedback.
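The abstract does not describe the analysis pipeline itself. As a rough illustration of the general approach, here is a minimal sketch using the OpenAI Python client; the model name, prompt, and summarise_feedback helper are our assumptions, not details from the paper:

```python
# Minimal sketch: surfacing themes in free-text SET comments with an LLM.
# The prompt, model choice, and helper name are illustrative assumptions;
# the paper does not specify its exact pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarise_feedback(comments: list[str]) -> str:
    """Ask the model to identify recurring themes in student comments."""
    joined = "\n".join(f"- {c}" for c in comments)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You analyse student course feedback and report recurring themes."},
            {"role": "user",
             "content": f"Identify the main themes in these comments:\n{joined}"},
        ],
    )
    return response.choices[0].message.content


print(summarise_feedback([
    "Lectures were well paced but tutorials felt rushed.",
    "More worked examples in tutorials would help.",
]))
```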
Leveling Up Education: Harnessing Generative AI for Game-Based Learning
Ashish Amresh
DOI: 10.1145/3627217.3631585
Generative AI has exploded in popularity over the past few years and shows no signs of slowing down. There is skepticism among educators and institutions about the best ways to harness its power without ignoring the ethical and equity challenges that arise with its use. One area where consensus is emerging is in building personalized learning solutions that can provide equitable access to a wide range of learners without compromising on ethical challenges. At the same time, game-based learning has proven to be a viable paradigm for engaging learners, and the ability of games to adapt to the player/learner provides significant opportunities to build equitable and accessible personalized learning solutions. In this talk, we will discuss ways in which game-based learning and generative AI can be combined synergistically to take advantage of each other's capabilities and create educational interventions that can be offered at scale. By combining the interactive and motivational aspects of games with the adaptability and intelligence of generative AI, educators can unlock new opportunities to cater to individual learning needs and cultivate a more effective and enjoyable learning process. In this keynote, we will look at experimental software frameworks that can drive and level up education in multiple contexts and showcase exemplars that demonstrate the promise of this integration.
Evaluating Copilot on CS1 Code Writing Problems with Suppressed Specifications
Varshini Venkatesh, Vaishnavi Venkatesh, Viraj Kumar
DOI: 10.1145/3627217.3627235
Code writing problems in introductory programming (CS1) courses typically ask students to write simple functions or programs based on detailed natural-language specifications. These details can be leveraged by large language models (LLMs), accessible to students via tools such as GitHub Copilot, to generate solutions that are often correct. CS1 instructors who are unwilling or unable to prohibit such usage must consider variants of traditional code writing problems that align with their learning objectives but are more difficult for LLMs to solve. Since LLMs are sensitive to the level of detail in their prompts, it is natural to consider variants where details are progressively trimmed from the specifications of traditional code writing problems, and the resulting ambiguities are clarified via examples. We consider an extreme variant, where all natural language is suppressed except for meaningful names of functions and their arguments. We evaluate the performance of Copilot on suppressed-specification versions of 153 such problems drawn from the CodeCheck repository. If Copilot initially fails to generate a correct solution, we augment each suppressed specification with as few clarifying examples as possible to obtain a correct solution. Copilot solves 134 problems (87%) with just 0.7 examples on average, requiring no examples in 78 instances. Thus, modifying traditional code-writing problems by merely trimming specification details is unlikely to thwart sophisticated LLMs such as GitHub Copilot.
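To make the idea concrete, here is a hypothetical example of the kind of suppressed specification the paper describes; this particular problem and its clarifying example are illustrative, not drawn from the CodeCheck dataset:

```python
# A "suppressed" specification: all natural language removed, leaving only
# meaningful names for the function and its argument. (Hypothetical example,
# not taken from the paper's CodeCheck problems.)
#
#     def count_vowels(sentence):
#         ...
#
# If the model's first attempt is wrong, the specification is augmented with
# as few clarifying examples as possible, e.g.:
#
#     assert count_vowels("Hello World") == 3
#
# A correct solution the model might then produce:
def count_vowels(sentence):
    return sum(1 for ch in sentence.lower() if ch in "aeiou")


assert count_vowels("Hello World") == 3
```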
Exploring How Novice Programming Students Have Experienced Digital Technology
Stefan Dyer, Paul Denny, Andrew Luxton-Reilly
DOI: 10.1145/3627217.3627219
A recent overhaul of the New Zealand digital technologies curriculum has changed the way students are taught to program before university. The connection between students' experiences with the updated curriculum and their perspectives on programming at university is pedagogically significant to educators. Semi-structured interviews were conducted with eight students enrolled in introductory programming courses at the University of Auckland, and a thematic analysis of the responses revealed a surprisingly diverse range of experiences and perspectives. The insights gained into the connection between learning to program at secondary and tertiary levels, and into the impact of the curriculum changes across schools, are informative to educators in both sectors.
Evaluating the difficulty for novice engineers in learning and using Transition Systems for modeling software systems
Mrityunjay Kumar, Venkatesh Choppella
DOI: 10.1145/3627217.3627223
Modern software products are complex systems and are better comprehended when engineers can think of the software as a system. Systems Science suggests that learning about a complex system is aided by modeling. It stands to reason that if we can help novice engineers model software products as systems, their comprehension should improve. One way of learning through modeling is to use Transition Systems to build models, an approach we proposed in a previous paper. This requires engineers to learn the vocabulary of Transition Systems and a way to use it to model software systems. The question arises: is it difficult to learn and use the Transition Systems vocabulary? We hypothesize that it is not, because the vocabulary is small and builds on concepts learned in other courses such as Theory of Computation and Discrete Mathematics (finite-state machines and set theory). To test this hypothesis, we designed a short intervention (one lecture and one project) in a software engineering course for two cohorts of students from two different environments. We taught them the basic concepts of Transition Systems and how systems can be modelled using this vocabulary, and we evaluated their performance on a modeling project. We also administered a survey to evaluate their perception of the topic. Both cohorts scored well on the project and, when surveyed, agreed that Transition Systems were easy to learn and use. Based on the knowledge demonstrated and the survey feedback, we conclude that it is not difficult for novice engineers to learn the vocabulary of Transition Systems and its use. This result gives us confidence to start designing longer interventions that promote the use of systems modeling and to study their effectiveness with large software systems.
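The paper builds on the standard vocabulary of transition systems: a set of states, an initial state, and a transition relation. As a minimal illustrative sketch of that vocabulary (the turnstile example and class names are ours, not from the paper):

```python
# Minimal sketch of a transition system: states, an initial state, and a
# transition relation over (state, action) pairs. The turnstile example is
# illustrative and not taken from the paper.
from dataclasses import dataclass


@dataclass
class TransitionSystem:
    states: set[str]
    initial: str
    transitions: dict[tuple[str, str], str]  # (state, action) -> next state

    def run(self, actions: list[str]) -> str:
        """Follow the transition relation from the initial state."""
        state = self.initial
        for action in actions:
            state = self.transitions[(state, action)]
        return state


# A coin-operated turnstile modelled as a transition system.
turnstile = TransitionSystem(
    states={"locked", "unlocked"},
    initial="locked",
    transitions={
        ("locked", "coin"): "unlocked",
        ("locked", "push"): "locked",
        ("unlocked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
    },
)
assert turnstile.run(["coin", "push"]) == "locked"
```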
From Learning Outcomes to Competencies based Computing Curricula for India
A. Vichare
DOI: 10.1145/3627217.3627228
Competency-based (COM) approaches to curriculum design are a promising and recommended direction to evolve towards, and it is therefore necessary to contrast current practices with the proposed direction. Some questions have been, and are being, addressed; for example, work has been done to illustrate defining competencies from given learning outcomes (LOs). Globally, however, the transition from LO-based to competency-based approaches to computing education is challenging given the diversity of LO practices. In this work we contrast typical LO practice in India with the proposed competencies approach, using degree program samples for computer science (CS) and computer engineering (CE). The LO approaches are drawn from the AICTE model curriculum for computer engineering and the UGC model curriculum for computer science; the competencies approaches are from the literature around the respective ACM-IEEE curriculum recommendations. With respect to the Indian context, we use these to (a) develop a clear contrast between LO-based and competency-based approaches, (b) critically examine the competency-based approach, and (c) identify a path of incremental changes towards competency-based approaches. We develop a set of considerations and recommendations for an institution that seeks to move incrementally towards a competency-based approach while maintaining compatibility with the Indian educational ecosystem. Finally, we hope that this work clarifies a number of potential misinterpretations of the competency-based approach and guides attempts to develop curricula based on it.
Creating Thorough Tests for AI-Generated Code is Hard
Shreya Singhal, Viraj Kumar
DOI: 10.1145/3627217.3627238
Before implementing a function, programmers are encouraged to write a suite of test cases that specify its intended behaviour on several inputs. A suite of tests is thorough if any buggy implementation fails at least one of these tests. We posit that as the proportion of code generated by Large Language Models (LLMs) grows, so must the ability of students to create test suites that are thorough enough to detect subtle bugs in such code. Our paper makes two contributions. First, we demonstrate how difficult it can be to create thorough tests for LLM-generated code by evaluating 27 test suites from a public dataset (EvalPlus). Second, by identifying deficiencies in these test suites, we propose strategies for improving the ability of students to develop thorough test suites for LLM-generated code.
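As an illustration of what "thorough" means here (this example is ours, not from the EvalPlus dataset): a subtly buggy median function survives a test suite that only exercises odd-length inputs.

```python
# Illustrative example (not from EvalPlus): a subtly buggy implementation
# that a non-thorough test suite fails to catch.
def median(xs):
    """Return the median of a non-empty list of numbers."""
    s = sorted(xs)
    return s[len(s) // 2]  # BUG: wrong for even-length lists


# Non-thorough suite: every test uses an odd-length list, so the bug survives.
assert median([3, 1, 2]) == 2
assert median([5]) == 5

# A thorough suite also probes even-length inputs, exposing the bug:
# median([1, 2, 3, 4]) should be 2.5, but this implementation returns 3.
```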
Learning to Rank for Search Results Re-ranking in Learning Experience Platforms
Ayush Kataria, H. M. Venkateshprasanna, Ashok Kumar, Reddy Kummetha
DOI: 10.1145/3627217.3627224
The ability to search and retrieve the right resources in a Learning Experience Platform (LXP) is critical in helping the workforce of an enterprise upskill and deepen their expertise effectively. To ensure the best resources appear as high in the result set as possible and catch learners' attention, we propose a supervised learning approach that trains and deploys a Learning to Rank (LTR) model for re-ranking. This work focuses on judgement-list preparation that takes advantage of the learning progress data available in LXPs, as well as on defining and measuring model performance through metrics in both test and production setups. In particular, it highlights the positive impact of the deployed LTR model in production using metrics such as average search result click position and percentage of top-N clicks.
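The abstract names two production metrics but does not define them formally. Here is a minimal sketch of how such metrics might be computed from a click log; the log format and function names are our assumptions, not the paper's definitions:

```python
# Minimal sketch: computing the two production metrics named in the abstract
# from a click log. The log format and helper names are assumptions; the
# paper does not specify its exact definitions.

def average_click_position(clicks: list[int]) -> float:
    """Mean 1-based rank of the results users clicked (lower is better)."""
    return sum(clicks) / len(clicks)


def percent_top_n_clicks(clicks: list[int], n: int = 3) -> float:
    """Share of clicks landing in the top-n results (higher is better)."""
    return 100.0 * sum(1 for pos in clicks if pos <= n) / len(clicks)


# Each entry is the 1-based rank of the result a learner clicked.
click_log = [1, 2, 7, 1, 3, 1, 5, 2]
print(average_click_position(click_log))   # 2.75
print(percent_top_n_clicks(click_log, 3))  # 75.0
```

A re-ranking model that improves search quality should push the first metric down and the second up between the pre- and post-deployment logs.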
Empowering Novice Programmers with Visual Problem Solving tools
Ritwik Murali, Rajkumar Sukumar, Mary Sanjana Gali, Veeramanohar Avudaiappan
DOI: 10.1145/3627217.3627232
Learning one's first programming language involves challenges of syntax, surplus code, and semantics. Depending on the programming language, learning can be easy or quite hard for a novice programmer; even the small "Hello World" program contains semantic and syntactic complexity. This paper discusses the pros and cons of multiple tools that may be used for syntax-independent implementation of solutions. Based on the shortcomings of existing tools, the paper also proposes Flowgramming, a platform-independent flowcharting software for novice programmers / problem solvers and their instructors. Flowcharts developed using Flowgramming can be executed by the built-in interpreter, which helps the novice programmer focus on understanding the problem-solving strategy in a visually appealing manner and allows for language-independent learning of solution strategies.
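Flowgramming's internals are not described in the abstract. As a rough illustration of the general idea of an executable flowchart, here is a minimal interpreter over a flowchart represented as named nodes and edges; this representation is our assumption, not Flowgramming's actual design:

```python
# Minimal sketch of executing a flowchart directly, in the spirit of
# Flowgramming's built-in interpreter. The node representation is our
# assumption and not the tool's actual design.
# Each node: (kind, payload, next); 'decision' nodes carry two branches.

flowchart = {
    "start": ("assign",   ("n", lambda env: 5),            "test"),
    "test":  ("decision", lambda env: env["n"] > 0,        ("body", "end")),
    "body":  ("output",   lambda env: env["n"],            "dec"),
    "dec":   ("assign",   ("n", lambda env: env["n"] - 1), "test"),
    "end":   ("stop",     None,                            None),
}


def run(chart, entry="start"):
    """Walk the flowchart from the entry node until a 'stop' node."""
    env, node = {}, entry
    while True:
        kind, payload, nxt = chart[node]
        if kind == "assign":
            name, expr = payload
            env[name] = expr(env)
        elif kind == "output":
            print(payload(env))
        elif kind == "decision":
            nxt = nxt[0] if payload(env) else nxt[1]
        elif kind == "stop":
            return env
        node = nxt


run(flowchart)  # prints 5 4 3 2 1: a countdown loop drawn as a flowchart
```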