ASE 2020
Mon 21 - Fri 25 September 2020 Melbourne, Australia
Wed 23 Sep 2020 00:00 - 00:20 at Wombat - Testing (1) Chair(s): Lingming Zhang

In unit testing, mocking is widely used to ease testing effort, reduce test flakiness, and increase test coverage by replacing actual dependencies with simple implementations. However, there are no clear criteria for determining which dependencies in a unit test should be mocked. Inappropriate mocking can have undesirable consequences: under-mocking can make it impossible to isolate the class under test (CUT) from its dependencies, while over-mocking increases the developers' burden of maintaining the mocked objects and may lead to spurious test failures. According to existing work, many factors can determine whether a dependency should be mocked, so mocking decisions are often difficult to make in practice. Studies on the evolution of mocked objects also showed that developers tend to change their mocking decisions: 17% of the studied mocked objects were introduced some time after the test scripts were created, and another 13% of the originally mocked objects eventually became unmocked. In this work, we are motivated to develop an automated technique that makes mocking recommendations to facilitate unit testing. We studied 10,846 test scripts in four actively maintained open-source projects that use mocked objects, aiming to characterize the dependencies that are mocked in unit testing. Based on our observations of mocking practices, we designed and implemented a tool, MockSniffer, to identify and recommend mocks for unit tests. The tool is fully automated and requires only the CUT and its dependencies as input. It leverages machine learning techniques to make mocking recommendations by holistically considering multiple factors that can affect developers' mocking decisions. Our evaluation of MockSniffer on ten open-source projects showed that it outperformed three baseline approaches and achieved good performance in two potential application scenarios.
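For readers unfamiliar with the practice the abstract discusses, here is a minimal Python sketch of mocking a dependency with `unittest.mock`. The class and method names are hypothetical illustrations, not taken from MockSniffer or the studied projects:

```python
from unittest import mock

# Hypothetical class under test (CUT); names are illustrative only.
class PaymentService:
    def __init__(self, gateway):
        self.gateway = gateway  # external dependency (e.g., a network client)

    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        # Delegates to the gateway; a real gateway would hit the network.
        return self.gateway.submit(amount)

# In a unit test, the gateway is mocked so the CUT is isolated
# from slow or flaky external behavior.
gateway = mock.Mock()
gateway.submit.return_value = "ok"

service = PaymentService(gateway)
result = service.charge(10)

# The mock also records interactions, so the test can verify them.
gateway.submit.assert_called_once_with(10)
```

Deciding whether a dependency like `gateway` should be mocked at all is exactly the judgment call the paper aims to automate: mocking it isolates the CUT, but the mock must then be kept consistent with the real gateway's behavior.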

Wed 23 Sep
Times are displayed in time zone: (UTC) Coordinated Universal Time

00:00 - 01:00: Testing (1) (Research Papers / Tool Demonstrations) at Wombat
Chair(s): Lingming Zhang (University of Illinois at Urbana-Champaign, USA)
00:00 - 00:20
MockSniffer: Characterizing and Recommending Mocking Decisions for Unit Tests
Research Papers
Hengcheng Zhu (Southern University of Science and Technology), Lili Wei (The Hong Kong University of Science and Technology), Ming Wen (Huazhong University of Science and Technology, China), Yepang Liu (Southern University of Science and Technology), Shing-Chi Cheung (Hong Kong University of Science and Technology, China), Qin Sheng (WeBank Co Ltd), Cui Zhou (WeBank Co Ltd)
DOI Pre-print
00:20 - 00:40
Defect Prediction Guided Search-Based Software Testing
Research Papers
Anjana Perera (Monash University), Aldeida Aleti (Monash University), Marcel Böhme (Monash University, Australia), Burak Turhan (Monash University)
DOI Pre-print
00:40 - 00:50
STIFA: Crowdsourced Mobile Testing Report Selection Based on Text and Image Fusion Analysis
Tool Demonstrations
Zhenfei Cao (Nanjing University), Xu Wang (Nanjing University), Shengcheng Yu (Nanjing University, China), Yexiao Yun (Nanjing University), Chunrong Fang (Nanjing University, China)