Code Reviewer Recommendation in Tencent: Practice, Challenge, and Direction
Georges Aaron Randrianaina, Xhevahire Tërnava, D. Khelladi, Mathieu Acher
2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), May 2022
DOI: 10.1145/3510457.3513035
Citations: 3
Abstract
Code review is essential for assuring system quality in software engineering. Over decades of practice, code review has evolved into a lightweight, tool-based process focused on the code change, the smallest unit of the development cycle; we refer to this process as Modern Code Review (MCR). In MCR, code contributors commit code changes and code reviewers review the changes assigned to them. Assigning reviewers is challenging because appropriate reviewers must be found efficiently. Recent studies propose automated code reviewer recommendation (CRR) approaches to address this challenge. These approaches are usually evaluated on open-source projects, where they achieve promising performance. However, code reviewer recommendation systems are not widely used on proprietary projects, and most current reviewer-selection practice is still manual or, at best, semi-manual. No previous work has systematically evaluated the effectiveness of these approaches and compared them against each other on proprietary projects in practice. In this paper, we perform a quantitative analysis of typical recommendation approaches on proprietary projects at Tencent. The results show imperfect performance of these approaches on proprietary projects and reveal practical challenges such as the “cold start problem”. To better understand these challenges, we interviewed practitioners about their expectations for applying reviewer recommendation in a production environment. The interviews cover the limitations of current systems, expected application scenarios, and information requirements. Finally, we discuss the implications and directions for practical code reviewer recommendation tools.
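
For readers unfamiliar with CRR, such approaches typically score candidate reviewers against an incoming code change using signals from past review history, such as the similarity of file paths they have reviewed before. The sketch below is a minimal, hypothetical Python illustration of this idea in the spirit of path-similarity recommenders; the function names, similarity metric, and review history are assumptions for illustration, not the specific approaches or results evaluated in the paper.

    from collections import defaultdict

    def path_similarity(path_a: str, path_b: str) -> float:
        """Shared leading path components, normalized by the longer path length."""
        a, b = path_a.split("/"), path_b.split("/")
        common = 0
        for x, y in zip(a, b):
            if x != y:
                break
            common += 1
        return common / max(len(a), len(b))

    def recommend_reviewers(changed_files, past_reviews, top_k=3):
        """Score each reviewer by how similar their previously reviewed file
        paths are to the files in the new change, and return the top-k names."""
        scores = defaultdict(float)
        for reviewer, reviewed_files in past_reviews.items():
            for new_file in changed_files:
                for old_file in reviewed_files:
                    scores[reviewer] += path_similarity(new_file, old_file)
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    # Example usage with a made-up review history.
    history = {
        "alice": ["src/payments/api.py", "src/payments/models.py"],
        "bob":   ["docs/readme.md", "src/ui/button.ts"],
    }
    print(recommend_reviewers(["src/payments/refund.py"], history))  # ['alice', 'bob']

A sketch like this also makes the "cold start problem" concrete: a newly onboarded reviewer has no entries in the review history, so a history-based recommender never ranks them highly.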