
Registered user since Fri 13 Sep 2019
Contributions
Research Papers
Wed 13 Sep 2023 14:06 - 14:18 at Room C - Program Verification 1. Chair(s): Nicolás Rosner

Detecting non-termination is crucial for ensuring program correctness and security, for example by preventing denial-of-service attacks. Although termination analysis has been studied for many years, existing methods scale poorly and are only effective on small programs. To address this issue, we propose a practical termination checking technique, called EndWatch, that detects non-termination through testing. Specifically, we introduce two methods to generate non-termination oracles based on checking state revisits: if the program returns to a previously visited state at the same program location, it does not terminate. These oracles can be incorporated into testing tools (e.g., AFL, used in this paper) to detect non-termination in large programs. For linear loops, we perform symbolic execution on individual loops to infer State Revisit Conditions (SRCs) and instrument the SRCs into the target loops. For non-linear loops, we instrument the target loops to check for concrete state revisits during execution. We evaluated EndWatch on standard benchmarks with small programs and on real-world projects with large programs. The results show that EndWatch is more effective than state-of-the-art tools on the standard benchmarks (detecting 87% of non-terminating programs, while the best baseline detects only 67%) and useful for detecting non-termination in real-world projects (detecting 90% of known non-termination CVEs and 4 previously unknown bugs).
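As a rough illustration of the concrete state-revisit oracle used for non-linear loops, the Python sketch below records a hashable snapshot of the loop's variables at each loop head; a repeated snapshot at the same location certifies non-termination. The class name `StateRevisitOracle`, the location label, and the snapshot format are illustrative assumptions, not EndWatch's actual instrumentation (which operates on compiled programs under a fuzzer such as AFL). The SRC instrumentation for linear loops rests on the same idea, except that the revisit condition is derived symbolically rather than checked concretely.

```python
# Minimal sketch of a concrete state-revisit oracle: if the same program
# state recurs at the same loop head, the loop cannot terminate.
# (Illustrative only; not EndWatch's actual instrumentation.)

class StateRevisitOracle:
    def __init__(self):
        # Maps a program location to the set of state snapshots seen there.
        self.seen = {}

    def check(self, location, state):
        """Record `state` at `location`; return True on a revisit.

        `state` must capture every variable the loop depends on,
        converted to an immutable (hashable) snapshot.
        """
        snapshot = tuple(sorted(state.items()))
        visited = self.seen.setdefault(location, set())
        if snapshot in visited:
            return True  # same state at the same location: non-termination
        visited.add(snapshot)
        return False


# Example: x alternates between 3 and -3, so the loop never exits.
oracle = StateRevisitOracle()
x = 3
while x != 0:
    if oracle.check("loop@main", {"x": x}):
        print("non-termination detected")
        break
    x = -x
```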
Research Papers
Tue 12 Sep 2023 14:06 - 14:18 at Room C - Testing AI Systems 2. Chair(s): Lwin Khin Shar

The reliability of decision-making policies is urgently important today, as they underpin many critical applications such as autonomous driving and robotics. To ensure reliability, there have been a number of research efforts on testing decision-making policies that solve Markov decision processes (MDPs). However, due to the policies' deep neural network (DNN)-based internals and the infinite state space, developing scalable and effective testing frameworks for decision-making policies remains open and challenging.
In this paper, we present an effective testing framework for decision-making policies. The framework adopts a generative diffusion-model-based test case generator that can easily adapt to different search spaces, ensuring the practicality and validity of test cases. We then propose a termination-state-novelty-based guidance to diversify agent behaviors and improve test effectiveness. Finally, we evaluate the framework on five widely used benchmarks, including autonomous driving, aircraft collision avoidance, and gaming scenarios. The results demonstrate that our approach identifies more diverse and influential failure-triggering test cases than current state-of-the-art techniques. Moreover, we employ the detected failure cases to repair the evaluated models, achieving better robustness enhancement than the baseline method.
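One plausible reading of the termination-state novelty guidance, offered here as an assumption since the abstract does not give the formula, is a k-nearest-neighbor novelty score over an archive of terminal states, as in classic novelty search: test cases whose episodes end in rarely reached regions of the state space are prioritized. The sketch below uses that interpretation; `novelty_score`, `archive`, and `k` are illustrative names, not the paper's API.

```python
import numpy as np

def novelty_score(terminal_state, archive, k=5):
    """k-nearest-neighbor novelty of a terminal state w.r.t. an archive.

    Higher scores mean the episode ended in a region of the state space
    the tester has rarely reached, so the test case is prioritized.
    (Illustrative sketch; the paper's exact guidance may differ.)
    """
    if not archive:
        return float("inf")  # the first terminal state is maximally novel
    dists = sorted(np.linalg.norm(terminal_state - s) for s in archive)
    return float(np.mean(dists[:k]))


# Toy usage: generate candidate test cases, archive their terminal states,
# and favor candidates whose endings differ most from those seen so far.
archive = []
rng = np.random.default_rng(0)
for _ in range(10):
    terminal = rng.normal(size=4)  # stand-in for an episode's end state
    score = novelty_score(terminal, archive)
    archive.append(terminal)
```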