[Workshop] AeSIR '22
Mon 10 Oct 2022 09:00 - 09:10 at Online Workshop 3 - Welcome & Keynote
Chair(s): Simone Scalabrino (University of Molise)
no description available
[Workshop] AeSIR '22
Mon 10 Oct 2022 09:10 - 10:10 at Online Workshop 3 - Welcome & Keynote
Chair(s): Simone Scalabrino (University of Molise)
Tracking developer eye movements unobtrusively while they work on software tasks can give us insights into the thought processes that go into solving them. Eye movements are essential to cognitive processes because they focus a developer's gaze on the parts of source code that are processed by the brain. In this talk, I will highlight how eye tracking has been used to gauge visual effort in readability studies and present my vision of how it can be used in future software engineering studies to learn about readable code.
Bonita Sharif, Ph.D. is an Associate Professor in the Department of Computer Science and Engineering at the University of Nebraska - Lincoln, Lincoln, Nebraska, USA. She received her Ph.D. in 2010 and M.S. in 2003 in Computer Science from Kent State University, USA, and her B.S. in Computer Science from Cyprus College, Nicosia, Cyprus. Her research interests are in eye tracking related to software engineering, program comprehension, empirical software engineering, emotional awareness, software traceability, and software visualization to support maintenance of large systems. She has authored over 40 refereed publications. She serves on numerous program committees, including ICSME, VISSOFT, SANER, ICSE NIER, and ICPC. Sharif is a recipient of the NSF CAREER award and the NSF CRI award related to empowering software engineering with eye tracking. She directs the Software Engineering Research and Empirical Studies Lab in the Computer Science and Engineering department at UNL. Check out the iTrace infrastructure that supports eye tracking within developer work environments at http://www.i-trace.org
[Workshop] AeSIR '22
Mon 10 Oct 2022 10:25 - 10:35 at Online Workshop 3 - Paper Presentation Session 1: Identifier Names
Chair(s): Felipe Ebert (Fontys University of Applied Sciences)
Identifier naming is one of the main sources of information in program comprehension, where a significant portion of software development time is spent. Previous research shows that similarity in identifier names could potentially hinder code comprehension, and subsequently code maintenance and evolution. In this paper, we present an open-source tool for assessing confusing naming combinations in Python programs. The tool, which we call Namesake, flags confusing identifier naming combinations that are similar in orthography (word form), phonology (pronunciation), or semantics (meaning). Our tool extracts identifier names from the abstract syntax tree of a program, splits compound names, and evaluates the similarity of each pair in orthography, phonology, and semantics. Problematic identifier combinations are flagged to programmers along with their line numbers. In combination with existing coding style checkers, Namesake can provide programmers with an additional resource to enhance identifier naming quality. The tool can be integrated easily into DevOps pipelines for automated checking and identifier naming appraisal.
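To illustrate the general approach, the following is a minimal sketch (not the Namesake implementation): it walks a Python module's AST, collects identifiers, splits compound names, and flags pairs whose spellings are confusingly close. The similarity threshold and the restriction to orthographic similarity are illustrative assumptions; a fuller tool would also compare the split words phonologically and semantically.

# Hypothetical sketch, not the Namesake tool itself.
import ast
import itertools
import re
from difflib import SequenceMatcher


def identifiers(source):
    """Yield (name, line) pairs for variables, functions, and classes."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            yield node.id, node.lineno
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            yield node.name, node.lineno


def split_compound(name):
    """Split snake_case and camelCase names into lowercase words."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ")
    return [word.lower() for word in spaced.split() if word]


def orthographic_similarity(a, b):
    """String similarity in [0, 1]; 1.0 means identical spelling."""
    return SequenceMatcher(None, a, b).ratio()


def flag_confusing(source, threshold=0.8):
    """Print identifier pairs whose spellings are confusingly close.

    A fuller version would also compare the words from split_compound()
    phonologically (e.g., via a phonetic encoder) and semantically
    (e.g., via word embeddings), as the paper describes.
    """
    first_seen = {}
    for name, line in identifiers(source):
        first_seen.setdefault(name, line)
    for (a, line_a), (b, line_b) in itertools.combinations(first_seen.items(), 2):
        if orthographic_similarity(a, b) >= threshold:
            print(f"{a} (line {line_a}) vs {b} (line {line_b}): possibly confusing")


# Example: flags delta_x vs delta_y as a confusing pair.
flag_confusing("delta_x = 1\ndelta_y = 2\ndeltas = [delta_x, delta_y]\n")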
[Workshop] AeSIR '22
Mon 10 Oct 2022 10:35 - 10:45 at Online Workshop 3 - Paper Presentation Session 1: Identifier Names
Chair(s): Felipe Ebert (Fontys University of Applied Sciences)
Names of classes, methods, and variables play an important role in code readability. To investigate how developers choose names, Feitelson et al. conducted an empirical survey and suggested a method to improve naming quality. We replicated their study, but limited the survey subjects to university students. Specifically, we conducted two experiments involving 341 students from freshman to senior. The aim of the first experiment was to investigate the characteristics of the names given by students. The results showed that name length, as well as the number of words contained in names, increased with grade level, and that students show ambiguity in understanding names. The second experiment verified whether Feitelson et al.'s naming method can help improve the quality of names given by students. The experimental data showed an improvement in the quality of names in 70% of cases, which confirms the validity of the method for university students.
[Workshop] AeSIR '22
Mon 10 Oct 2022 10:45 - 11:00 at Online Workshop 3 - Paper Presentation Session 1: Identifier Names
Chair(s): Felipe Ebert (Fontys University of Applied Sciences)
no description available
[Workshop] AeSIR '22
Mon 10 Oct 2022 11:10 - 11:20 at Online Workshop 3 - Paper Presentation Session 2: Readability Assessment
Chair(s): Fernanda Madeiral (Vrije Universiteit Amsterdam)
Background: Recent advancements in large language models have motivated the practical use of such models in code generation and program synthesis. However, little is known about the effects of such tools on code readability and visual attention in practice. Objective: In this paper, we focus on GitHub Copilot to address the issues of readability and visual inspection of model-generated code. Readability and low complexity are vital aspects of good source code, and visual inspection of generated code is important in light of automation bias. Method: Through a human experiment (n=21), we compare model-generated code to code written completely by human programmers. We use a combination of static code analysis and human evaluators to assess code readability, and we use eye tracking to assess the visual inspection of code. Results: Our results suggest that model-generated code is comparable in complexity and readability to code written entirely by human programmers. At the same time, eye-tracking data suggest, to a statistically significant level, that programmers direct less visual attention to model-generated code. Conclusion: Our findings highlight that reading code is more important than ever, and programmers should beware of complacency and automation bias with model-generated code.
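As a rough illustration of comparing two code variants with static analysis (a minimal sketch under stated assumptions, not the study's actual analysis pipeline), one could count branching constructs in each snippet's AST as a proxy for cyclomatic complexity; the decision-node list and the example snippets below are illustrative assumptions.

# Hypothetical sketch: a simple static complexity proxy for comparing variants.
import ast

# Constructs that add an independent path through the code.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)


def cyclomatic_proxy(source):
    """Rough proxy for cyclomatic complexity: 1 + number of decision points."""
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(ast.parse(source)))


human_code = "def keep_positive(xs):\n    return [x for x in xs if x > 0]\n"
generated_code = (
    "def keep_positive(xs):\n"
    "    out = []\n"
    "    for x in xs:\n"
    "        if x > 0:\n"
    "            out.append(x)\n"
    "    return out\n"
)

print(cyclomatic_proxy(human_code))      # 1: the comprehension's filter is not counted here
print(cyclomatic_proxy(generated_code))  # 3: one for-loop plus one if-statement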
[Workshop] AeSIR '22
Mon 10 Oct 2022 11:20 - 11:30 at Online Workshop 3 - Paper Presentation Session 2: Readability Assessment
Chair(s): Fernanda Madeiral (Vrije Universiteit Amsterdam)
Automatically assessing code readability is a relatively new challenge that has attracted growing attention from the software engineering community. In this paper, we outline the idea of regarding code readability assessment as a learning-to-rank task. Specifically, we design a pairwise ranking model with siamese neural networks, which takes a code pair as input and outputs their readability ranking order. We have evaluated our approach on three publicly available datasets. The result is promising, with an accuracy of 83.5%, a precision of 86.1%, a recall of 81.6%, an F-measure of 83.6%, and an AUC of 83.4%.
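A minimal sketch of the general idea, assuming PyTorch and pre-computed feature vectors for each snippet (the architecture, feature dimension, and loss below are illustrative assumptions, not the authors' model): a shared siamese encoder scores each snippet in a pair, and a pairwise logistic loss trains the model to rank the more readable one higher.

# Hypothetical sketch of a siamese pairwise readability ranker.
import torch
import torch.nn as nn


class ReadabilityRanker(nn.Module):
    def __init__(self, feature_dim, hidden_dim=64):
        super().__init__()
        # Shared ("siamese") encoder: the same weights score both snippets.
        self.encoder = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # scalar readability score
        )

    def forward(self, snippet_a, snippet_b):
        # Positive logit means snippet A is predicted to be more readable than B.
        return self.encoder(snippet_a) - self.encoder(snippet_b)


model = ReadabilityRanker(feature_dim=32)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Dummy batch: 8 code pairs, each snippet represented by a 32-dim feature vector,
# with label 1.0 meaning "snippet A is more readable than snippet B".
features_a, features_b = torch.randn(8, 32), torch.randn(8, 32)
labels = torch.ones(8, 1)

loss = loss_fn(model(features_a, features_b), labels)
loss.backward()
optimizer.step()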
[Workshop] AeSIR '22
Mon 10 Oct 2022 11:30 - 11:45 at Online Workshop 3 - Paper Presentation Session 2: Readability Assessment
Chair(s): Fernanda Madeiral (Vrije Universiteit Amsterdam)
no description available
[Workshop] AeSIR '22
Mon 10 Oct 2022 12:00 - 13:00 at Online Workshop 3 - Discussion Panel & Closing
Chair(s): Marcelo De Almeida Maia (Federal University of Uberlandia)
no description available
[Workshop] AeSIR '22
Mon 10 Oct 2022 13:00 - 13:10 at Online Workshop 3 - Discussion Panel & Closing
Chair(s): Marcelo De Almeida Maia (Federal University of Uberlandia)
no description available