Rank Learning-Based Code Readability Assessment with Siamese Neural Networks (Virtual)
Automatically assessing code readability is a relatively new challenge that has attracted growing attention from the software engineering community. In this paper, we outline the idea of treating code readability assessment as a learning-to-rank task. Specifically, we design a pairwise ranking model with Siamese neural networks, which takes a pair of code snippets as input and outputs their relative readability order. We have evaluated our approach on three publicly available datasets. The results are promising, with an accuracy of 83.5%, a precision of 86.1%, a recall of 81.6%, an F-measure of 83.6% and an AUC of 83.4%.
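The authors' implementation is not shown on this page. As a rough illustration of the pairwise setup described in the abstract, the sketch below builds a Siamese ranker in PyTorch: both code snippets in a pair pass through the same shared encoder, and a margin ranking loss enforces the readability ordering. The GRU encoder, vocabulary size, dimensions, and margin value are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of a Siamese pairwise ranker (illustrative, not the authors' model).
import torch
import torch.nn as nn

class ReadabilityEncoder(nn.Module):
    """Shared ("Siamese") encoder: maps a token sequence to a scalar readability score."""
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))          # (batch, seq_len, hidden_dim)
        return self.score(h[:, -1, :]).squeeze(-1)   # one score per snippet

encoder = ReadabilityEncoder()
loss_fn = nn.MarginRankingLoss(margin=0.5)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Toy batch: token-id tensors for two code snippets; target = +1 means
# "snippet A should be ranked as more readable than snippet B".
snippet_a = torch.randint(1, 5000, (8, 50))
snippet_b = torch.randint(1, 5000, (8, 50))
target = torch.ones(8)

score_a = encoder(snippet_a)   # both snippets go through the *same* weights
score_b = encoder(snippet_b)
loss = loss_fn(score_a, score_b, target)
loss.backward()
optimizer.step()
```

Because the two branches share weights, the encoder learns a single readability scale, so at inference time individual snippets can also be scored and sorted without forming explicit pairs.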
Mon 10 Oct (displayed time zone: Eastern Time, US & Canada)
| Time | Type | Event |
|------|------|-------|
| 11:10 - 11:45 | Session | Paper Presentation Session 2: Readability Assessment, [Workshop] AeSIR '22 at Online Workshop 3. Chair(s): Fernanda Madeiral (Vrije Universiteit Amsterdam) |
| 11:10 (10m) | Paper | How Readable is Model Generated Code? Examining Readability and Visual Inspection of GitHub Copilot (Virtual), [Workshop] AeSIR '22. Naser Al Madi (Colby College) |
| 11:20 (10m) | Paper | Rank Learning-Based Code Readability Assessment with Siamese Neural Networks (Virtual), [Workshop] AeSIR '22 |
| 11:30 (15m) | Live Q&A | Q&A and Open Discussion on Readability Assessment (Virtual), [Workshop] AeSIR '22 |
