
Registered user since Mon 20 Jan 2020
Contributions
[Workshop] AeSIR '22
Mon 10 Oct 2022 10:45 - 11:00 at Online Workshop 3 - Paper Presentation Session 1: Identifier Names. Chair(s): Felipe Ebert
No description available.
[Workshop] AeSIR '22
Mon 10 Oct 2022 10:25 - 10:35 at Online Workshop 3 - Paper Presentation Session 1: Identifier Names. Chair(s): Felipe Ebert
Identifier naming is one of the main sources of information in program comprehension, where a significant portion of software development time is spent. Previous research shows that similarity in identifier names can hinder code comprehension, and subsequently code maintenance and evolution. In this paper, we present an open-source tool for detecting confusing naming combinations in Python programs. The tool, which we call Namesake, flags identifier naming combinations that are confusingly similar in orthography (word form), phonology (pronunciation), or semantics (meaning). Our tool extracts identifier names from the abstract syntax tree of a program, splits compound names, and evaluates the similarity of each pair in orthography, phonology, and semantics. Problematic identifier combinations are flagged to programmers along with their line numbers. In combination with existing coding style checkers, Namesake can give programmers an additional resource for improving identifier naming quality. The tool can be easily integrated into DevOps pipelines for automated checking and identifier naming appraisal.
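To make the pipeline described in the abstract concrete, here is a minimal, hypothetical Python sketch of the same idea: collect identifiers from a module's abstract syntax tree, split compound names, and flag pairs whose word forms look alike. The function names, the use of difflib for the orthographic check, and the 0.8 threshold are illustrative assumptions; they are not taken from the Namesake implementation, which additionally covers phonological and semantic similarity.

# Hypothetical sketch, not the Namesake implementation: AST-based identifier
# extraction, compound-name splitting, and an orthographic similarity check.
import ast
import difflib
import itertools
import re

def extract_identifiers(source):
    """Collect (name, line) pairs for assigned names and function definitions."""
    names = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            names.append((node.id, node.lineno))
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            names.append((node.name, node.lineno))
    return names

def split_compound(name):
    """Split snake_case and camelCase identifiers into lowercase words."""
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])", name)
    return [p.lower() for p in parts if p]

def orthographic_similarity(a, b):
    """Word-form similarity between two identifiers, from 0.0 to 1.0."""
    return difflib.SequenceMatcher(
        None, " ".join(split_compound(a)), " ".join(split_compound(b))
    ).ratio()

def flag_confusing_pairs(source, threshold=0.8):  # threshold is an assumption
    """Report distinct identifier pairs whose word forms are very similar."""
    for (a, la), (b, lb) in itertools.combinations(extract_identifiers(source), 2):
        if a != b and orthographic_similarity(a, b) >= threshold:
            print(f"Possible confusion: {a} (line {la}) and {b} (line {lb})")

flag_confusing_pairs("user_count = 1\nusers_count = 2\ntotal = 3\n")
# Expected to flag user_count vs. users_count as confusingly similar.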
[Workshop] AeSIR '22
Mon 10 Oct 2022 11:30 - 11:45 at Online Workshop 3 - Paper Presentation Session 2: Readability Assessment. Chair(s): Fernanda Madeiral
[Workshop] AeSIR '22
Mon 10 Oct 2022 11:10 - 11:20 at Online Workshop 3 - Paper Presentation Session 2: Readability Assessment. Chair(s): Fernanda Madeiral
Background: Recent advancements in large language models have motivated the practical use of such models in code generation and program synthesis. However, little is known about the effects of such tools on code readability and visual attention in practice. Objective: In this paper, we focus on GitHub Copilot to address the issues of readability and visual inspection of model-generated code. Readability and low complexity are vital aspects of good source code, and visual inspection of generated code is important in light of automation bias. Method: Through a human experiment (n=21), we compare model-generated code to code written completely by human programmers. We use a combination of static code analysis and human evaluators to assess code readability, and we use eye tracking to assess the visual inspection of code. Results: Our results suggest that model-generated code is comparable in complexity and readability to code written entirely by human programmers. At the same time, eye-tracking data suggests, to a statistically significant degree, that programmers direct less visual attention to model-generated code. Conclusion: Our findings highlight that reading code is more important than ever, and programmers should beware of complacency and automation bias with model-generated code.
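As a small, hypothetical illustration of what "static code analysis" can mean in this setting, the Python sketch below computes a rough cyclomatic-complexity proxy from a snippet's abstract syntax tree by counting branching constructs. The study's actual metrics, tools, and thresholds are not specified here, so this is only an assumed stand-in for that kind of measurement.

# Hypothetical complexity proxy, not the metric used in the study: count
# branching nodes in the AST and add one for the default execution path.
import ast

def complexity_proxy(source):
    """Rough cyclomatic-complexity estimate for a piece of Python source."""
    branching = (ast.If, ast.For, ast.While, ast.Try,
                 ast.With, ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(node, branching) for node in ast.walk(ast.parse(source)))

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        pass
    return "non-negative"
"""
print(complexity_proxy(snippet))  # 3: default path, the if, and the for loop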