Early Detection of Defects in Machine Learning Programs by Semi-Static Analysis
Machine learning (ML) has transformed various fields, highlighting the importance of detecting defects in ML programs early, without executing the code. While static analysis offers such opportunities, existing studies have notable limitations. Meanwhile, notebooks have become a popular platform for developing ML prototypes, and they offer valuable run-time information that can enhance static analysis. In this project, we propose a semi-static analysis approach that leverages the run-time information available in notebooks. Our techniques combine abstract interpretation with ML-based methods and support both notebooks and scripts. Our goal is to deliver efficient and effective semi-static analysis methodologies and open-source tools for detecting defects early during coding, thereby improving the productivity of ML development and the quality of ML programs.
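To make the idea concrete, here is a minimal Python sketch (our own illustration, not the authors' tool) of how run-time shape facts recorded from a notebook session could strengthen a static check of a not-yet-executed cell. The variable names, the fact format, and the checker itself are all assumptions:

import ast

# Shape facts a notebook kernel could record after earlier cells ran
# (hypothetical format; the abstract does not prescribe one).
runtime_facts = {"X": (100, 32), "W": (16, 10)}

# Source of a not-yet-executed cell, analyzed statically.
cell_src = "Y = X @ W"

def shape_of(node):
    # Look up a recorded shape for a plain variable reference.
    return runtime_facts.get(node.id) if isinstance(node, ast.Name) else None

for node in ast.walk(ast.parse(cell_src)):
    # Statically locate matrix multiplications (the `@` operator)...
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.MatMult):
        left, right = shape_of(node.left), shape_of(node.right)
        # ...and validate them against the recorded run-time shapes.
        if left and right and left[-1] != right[0]:
            print(f"likely defect: inner dimensions differ: {left} @ {right}")

A purely static analyzer would have to approximate the shapes of X and W; the facts recorded at run time resolve that ambiguity, which is the leverage the semi-static approach aims for.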
Mutation Testing for Supporting Smart Contract Code Inspection
Smart contracts hold the potential to revolutionize various industries, but their implementation requires thorough testing due to the associated financial risks. Mutation testing is a powerful technique that can not only boost the fault-detection capabilities of a test suite but also foster a deeper understanding of smart contract behavior. This work proposes using mutation testing throughout the smart contract auditing process to support code inspection activities.
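As an illustration of how a single mutant can direct an auditor's attention (a minimal sketch in Python, not the authors' tooling; the contract fragment and the regex-based mutation operator are hypothetical):

import re

# Fragment of a hypothetical Solidity contract under audit.
contract_src = """
function withdraw(uint256 amount) external {
    require(balances[msg.sender] >= amount, "insufficient funds");
    balances[msg.sender] -= amount;
    payable(msg.sender).transfer(amount);
}
"""

# A classic relational-operator mutation: weaken `>=` to `>`.
mutant_src = re.sub(r">=", ">", contract_src, count=1)
print(mutant_src)

If the test suite still passes on this mutant (the mutant "survives"), the boundary case where the balance exactly equals the withdrawal amount is exercised by no test, giving the auditor a concrete spot to inspect more closely.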
Failure-based Testing: A Testing Paradigm in the Era of Large Language Models
Automatic test generation is an important software engineering task. Various test generation paradigms (e.g., code-coverage-based, model-based) have been proposed. One major objective of these paradigms is to generate test cases that achieve high coverage of a program's components and functionalities. Despite many advances, two outstanding challenges remain for these paradigms: 1) high coverage is not necessarily correlated with bug-revealing capability, and 2) constructing a test oracle is often an undecidable problem. To address these challenges, we plan to study the paradigm of failure-based testing, which focuses on constructing failure-inducing test cases. We observe that LLMs have several desirable characteristics that can address these challenges in finding failure-inducing test cases.
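A minimal Python sketch of what such a loop could look like, using differential testing against a trusted reference implementation as the oracle. Here llm_propose_inputs is a hypothetical stand-in for a real LLM query, and both clamp functions are invented for illustration:

import inspect

def buggy_clamp(x, lo, hi):
    # Function under test: intended to clamp x into [lo, hi].
    return min(lo, max(x, hi))  # seeded bug: min/max roles are swapped

def reference_clamp(x, lo, hi):
    # Trusted reference implementation, used as the oracle.
    return max(lo, min(x, hi))

def llm_propose_inputs(source_code, n=3):
    # Hypothetical stand-in: a real system would prompt an LLM with
    # `source_code` and parse candidate failure-inducing inputs from
    # its reply; here we hard-code plausible suggestions.
    return [(5, 0, 10), (-3, 0, 10), (42, 0, 10)][:n]

for args in llm_propose_inputs(inspect.getsource(buggy_clamp)):
    # Differential oracle: any disagreement with the reference is a failure.
    if buggy_clamp(*args) != reference_clamp(*args):
        print(f"failure-inducing test found: clamp{args}")

Note how this sidesteps both challenges above: the loop targets failures directly rather than coverage, and the reference implementation supplies the oracle that is otherwise hard to construct.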