ASE 2019
Sun 10 - Fri 15 November 2019 San Diego, California, United States
Tue 12 Nov 2019 17:00 - 17:20 at Cortez 1 - Testing and Visualization Chair(s): Amin Alipour

Test cases are crucial to help developers prevent the introduction of software faults. Unfortunately, not all tests are properly designed or can effectively capture faults in production code. Several measures have been defined to assess test-case effectiveness; the most relevant one is the mutation score, which quantifies the quality of a test by generating so-called mutants, i.e., variations of the production code that make it faulty and that the test is supposed to identify. However, previous studies revealed that mutation analysis is extremely costly and hard to use in practice, and the approaches proposed by researchers so far have not provided practical gains in mutation testing efficiency. Indeed, to apply mutation testing in practice a developer needs to (i) generate mutants, (ii) compile the source code, and (iii) run the test cases. For systems whose test suites count hundreds of test cases, this is infeasible. As a consequence, the problem of automatically assessing test-case effectiveness in a timely and efficient manner is still far from solved. In this paper, we present a novel methodology orthogonal to existing approaches. In particular, we investigate the feasibility of estimating test-case effectiveness, as indicated by mutation score, by exploiting production- and test-code-quality indicators. We first select a set of 67 factors, including test and code metrics as well as test and code smells, and study their relation with test-case effectiveness. We discover that 41 of the 67 investigated factors differ statistically between effective and non-effective tests. Then, we devise a mutation-score estimation model exploiting such factors and investigate its performance as well as its most relevant features. The key result of the study is that our estimation model, based only on statically computable features, achieves 86% in both F-Measure and AUC-ROC. This means that we can estimate test-case effectiveness, using source-code-quality indicators, with high accuracy and without executing the tests. In fact, adding line coverage as an additional feature only increases the performance of the model by about 9%. As a consequence, we provide a practical approach that overcomes the typical limitations of current mutation testing techniques.
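To make the idea concrete, below is a minimal sketch in Python with scikit-learn of the kind of estimation pipeline the abstract describes: a classifier trained on statically computable factors to predict whether a test is effective according to its mutation score. This is not the authors' implementation; the dataset file, its column layout, and the choice of random forest are assumptions for illustration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score, roc_auc_score

# Hypothetical dataset: one row per test class, with statically
# computable production/test metrics (e.g., LOC, cyclomatic
# complexity, assertion count) and binary smell indicators, plus an
# 'effective' label derived from the mutation score.
data = pd.read_csv("test_factors.csv")

X = data.drop(columns=["effective"])  # the static factors
y = data["effective"]                 # 1 = effective, 0 = non-effective

# A classifier over static features only: no mutant generation,
# compilation, or test execution is needed at prediction time.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
labels = cross_val_predict(clf, X, y, cv=10)
scores = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]

print("F-Measure:", f1_score(y, labels))
print("AUC-ROC:  ", roc_auc_score(y, scores))
```

The point of the design is that every feature can be computed without running the test suite, which is what makes the approach lightweight compared to full mutation analysis.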

Tue 12 Nov

16:00 - 17:40: Papers - Testing and Visualization at Cortez 1
Chair(s): Amin Alipour (University of Houston)
16:00 - 16:20
Talk
History-Guided Configuration Diversification for Compiler Test-Program Generation (ACM SIGSOFT Distinguished Paper Award)
Junjie Chen (Tianjin University), Guancheng Wang (Peking University), Dan Hao (Peking University), Yingfei Xiong (Peking University), Hongyu Zhang (The University of Newcastle), Lu Zhang (Peking University)
16:20 - 16:40
Talk
Data-Driven Compiler Testing and Debugging
Junjie Chen (Tianjin University)
16:40 - 17:00
Talk
Targeted Example Generation for Compilation Errors
Umair Z. Ahmed (National University of Singapore), Renuka Sindhgatta (Queensland University of Technology, Australia), Nisheeth Srivastava (Indian Institute of Technology, Kanpur), Amey Karkare (IIT Kanpur)
17:00 - 17:20
Journal-First Talk
Lightweight Assessment of Test-Case Effectiveness using Source-Code-Quality Indicators
Giovanni Grano (University of Zurich), Fabio Palomba (Department of Informatics, University of Zurich), Harald Gall (University of Zurich)
17:20 - 17:30
Demonstration
Visual Analytics for Concurrent Java Executions
Cyrille Artho (KTH Royal Institute of Technology, Sweden), Monali Pande (KTH Royal Institute of Technology), Qiyi Tang (University of Oxford)
17:30 - 17:40
Demonstration
NeuralVis: Visualizing and Interpreting Deep Learning Models
Xufan Zhang (State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China), Ziyue Yin (State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China), Yang Feng (University of California, Irvine), Qingkai Shi (Hong Kong University of Science and Technology), Jia Liu (State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China), Zhenyu Chen (Nanjing University)