First-order theorem proving is one of the earliest research areas within artificial intelligence and formal methods. It is undergoing rapid development thanks to its successful use in program analysis and verification, the Semantic Web, database systems, symbolic computation, theorem proving in mathematics, and other related areas. Breakthrough results have been obtained in all areas of theorem proving, including improvements in theory, implementation, and the development of powerful theorem proving tools. However, recent developments are not always easily accessible to the software engineering community. This tutorial aims to address this by introducing the audience to first-order theorem proving with a focus on applications in rigorous systems engineering. The workhorse used to demonstrate the concepts discussed in the tutorial will be the award-winning theorem prover Vampire.
The tutorial will be split into three parts. The first part immediately helps the audience place the work in context by introducing them to an application of theorem proving in Vampire for program analysis. The second part presents the rudimentary underlying theory with a focus on recent developments of particular relevance to the rest of the tutorial. The third part presents a 'Toolkit' of techniques that can be used for rigorous systems engineering. All three parts are accompanied by live demonstrations and hands-on exercises with Vampire (http://www.vprover.org). All participants will be given access to the latest public release of Vampire and the (soon to be released) Vampire manual.
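The saturation-based proving at the heart of provers like Vampire can be glimpsed in miniature. Below is a minimal Python sketch of ground (propositional) resolution, the ancestor of the superposition calculus Vampire implements; it is purely illustrative and omits the ordering restrictions, term indexing, and AVATAR-style splitting that make a real prover fast:

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two ground clauses (sets of literals).
    A literal is a string; a leading '~' marks negation."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

def refute(clauses):
    """Saturate the clause set under resolution. Returns True iff the
    empty clause (a contradiction) is derived, i.e. the set is
    unsatisfiable."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True       # derived the empty clause
                new.add(frozenset(r))
        if new <= clauses:
            return False              # saturated without a contradiction
        clauses |= new

# p, p -> q, ~q (clausified) is unsatisfiable; p, q alone is not.
print(refute([{'p'}, {'~p', 'q'}, {'~q'}]))  # True
print(refute([{'p'}, {'q'}]))                # False
```

A refutation-based prover shows a conjecture valid by negating it, adding it to the axioms, and deriving the empty clause, exactly as in the toy loop above.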
Laura Kovács is a full professor in computer science at the Vienna University of Technology, Austria. She also holds an adjunct professor position at the Chalmers University of Technology, Sweden. Her research aims at the design of new methods, tools, and algorithms supporting program analysis, combining techniques from automated reasoning and symbolic computation. She has been awarded an ERC Starting Grant 2014 and a Swedish Wallenberg Academy Fellowship 2014.
Giles Reger has been a lecturer in computer science at the University of Manchester, UK since 2016. His research interests are in automated reasoning and runtime verification. He joined the Vampire team in 2014 and has contributed towards improved methods for theory reasoning, heuristics for proof search, finite model finding, and cooperative proof search. He is co-organiser of the ARCADE workshop colocated with CADE-26 and PC co-chair of the 17th International Conference on Runtime Verification.
Martin Suda is a postdoc in the group of Laura Kovács at the Vienna University of Technology, Austria. Before coming to Vienna, he was a postdoc in Andrei Voronkov's group at the University of Manchester. As one of the main developers of Vampire, he recently contributed to extensions of the AVATAR architecture, new clausification algorithms and preprocessing techniques, and support for interpolation. Besides automated theorem proving, his research interests include temporal reasoning, automated planning, and QBF proof theory.
Andrei Voronkov is a professor of computer science at the University of Manchester, UK and a visiting professor at the Vienna University of Technology, Austria. He is the founder and CEO of EasyChair, the main designer of the theorem prover Vampire, and the founder of the LPAR conference series. His research focuses on automated theorem proving and EasyChair-inspired areas, such as automatic code analysis and (secure) code generation, scientific document analysis, keyphrase extraction from scientific documents and constraint satisfaction problems related to conference management. He received a Herbrand Award in 2015 for his numerous theoretical and practical contributions to automated deduction.
Uncertainty is becoming a first-class concern in modern software. Whether it is inherent in application domains such as multimedia processing, finance, and the social sciences, induced by interaction with the physical world via noisy and inaccurate sensors, or introduced by approximate execution platforms, software engineers have to reckon with uncertainty and how it affects the behavior of their software. Many uncertain phenomena can be modeled and analyzed using probabilistic reasoning, but software developers often have to spend significant effort to implement their applications precisely and obtain maximum precision and efficiency. To improve the reliability and efficiency of their software, developers can benefit from learning about recent techniques for automatic code-level modeling and analysis throughout the entire development cycle.
In this full-day tutorial, we will introduce the basic concepts of probabilistic programming and probabilistic program analysis, with particular emphasis on the recent advancements in theoretical foundations and tool support. The tutorial will incorporate hands-on sessions and case studies to get a practical taste of probabilistic programming and program analysis. It is tailored to a wide range of academic and professional software engineers. We assume only basic undergraduate knowledge of probability and provide a review of the main background topics relevant to the tutorial.
The tutorial will consist of two modules: (i) probabilistic programming and (ii) probabilistic program analysis. The probabilistic programming module will present the core principles and techniques for representing and analyzing probabilistic models as programs, including hands-on exercises that relate to basic concepts from probability theory, applying appropriate automated inference techniques, and showing how program analysis can improve efficiency and/or precision of probabilistic inference. The probabilistic analysis module will present the core challenges and principles of probabilistic symbolic execution, including techniques to overcome scalability issues of symbolic execution by trading off accuracy for scalability, efficient model counting and solution space quantification, and reliability estimation in regular programming languages.
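As a flavor of the first module, consider a classic toy example: flip two fair coins, observe that at least one came up heads, and ask how likely it is that the first is heads (the answer is 2/3). Below is a minimal rejection-sampling sketch in Python; all names are illustrative, and real probabilistic programming systems provide far more efficient inference than brute-force rejection:

```python
import random

random.seed(0)  # make the demo deterministic

def run_model():
    """One run of a tiny probabilistic program: flip two fair coins
    and observe that at least one came up heads."""
    a = random.random() < 0.5
    b = random.random() < 0.5
    if not (a or b):
        return None          # observation violated: reject this run
    return a                 # query: was the first coin heads?

def infer(n=200_000):
    """Rejection sampling: keep only runs consistent with the
    observation and average the query over them."""
    kept = [r for r in (run_model() for _ in range(n)) if r is not None]
    return sum(kept) / len(kept)

print(f"{infer():.2f}")  # close to 2/3
```

Program analysis enters the picture exactly here: a static analysis of `run_model` could recognize that the rejected branch has probability 1/4 and compute the posterior exactly, avoiding the sampling loop altogether.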
Anyone interested is more than welcome to engage with the organizers to request additional information or to propose specific topics or examples to be discussed during the tutorial. We look forward to meeting you in Urbana-Champaign!
Sasa Misailovic is an Assistant Professor at the University of Illinois at Urbana-Champaign. His main research interests include programming languages, program analysis, and software engineering, with an emphasis on improving performance, energy efficiency, and resilience in the face of software errors and approximation opportunities. The main topics of his recent publications include probabilistic inference via static program analysis, probabilistic analysis of reliability in computer systems, approximate computing, program repair, and compiler optimizations.
Antonio Filieri is a Lecturer (Assistant Professor) at Imperial College London. His main research interests are in the application of mathematical methods in software engineering, in particular probability, statistics, logic, and control theory. The main topics of his recent publications include exact and approximate analysis methods for probabilistic software analysis, incremental verification, quantitative software modeling and verification at runtime, and control-theoretic software adaptation.
Open source code repositories, such as GitHub, contain a vast wealth of software artifacts for researchers to mine and analyze. Analysis of such ultra-large-scale datasets is challenging at best, and often researchers simply choose a small subset of the data for their tasks. Performing program analysis on such a large-scale dataset is even more daunting. Many software engineering problems, such as specification inference, code search, and programming pattern discovery, can benefit from program analysis of ultra-large-scale datasets. The Boa language and infrastructure was designed to let users easily query such ultra-large-scale datasets by providing them with a simple domain-specific language to query the source code and a robust infrastructure that automatically translates those queries into distributed Hadoop programs. This infrastructure was recently extended to provide support for program analysis via new domain-specific language features and automatic optimizations of the analyses.
In this tutorial, we present an overview of the Boa language and infrastructure and show how it can be used to easily perform program analysis on thousands of projects. We describe the language, give participants the chance to use the infrastructure, and show examples of program analyses written in Boa's domain-specific language for program analysis.
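Boa queries are written in Boa's own DSL and compiled to distributed Hadoop jobs. To convey the shape of such a query without assuming Boa syntax, here is the classic "count projects per programming language" task expressed as explicit map and reduce steps in Python over a toy in-memory dataset; the dataset and field names are invented for illustration, and Boa generates the distributed equivalent of these two phases automatically:

```python
from collections import Counter

# Toy stand-in for repository metadata; in Boa this would be a
# dataset of millions of projects processed in parallel on Hadoop.
projects = [
    {"name": "alpha", "languages": ["Java", "Python"]},
    {"name": "beta",  "languages": ["Java"]},
    {"name": "gamma", "languages": ["C"]},
]

def map_project(p):
    """Map phase: emit (language, 1) for every language of a project."""
    return [(lang, 1) for lang in p["languages"]]

def reduce_counts(pairs):
    """Reduce phase: sum the emitted counts per language."""
    counts = Counter()
    for lang, n in pairs:
        counts[lang] += n
    return dict(counts)

pairs = [kv for p in projects for kv in map_project(p)]
print(reduce_counts(pairs))  # {'Java': 2, 'Python': 1, 'C': 1}
```

The point of the DSL is that users write only the query logic; partitioning the dataset, scheduling the map and reduce tasks, and aggregating the outputs are handled by the infrastructure.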
- Background on the Boa language and infrastructure
- A gentle introduction to the Boa language
- Hands-on experience using the Boa website
- Introduction to program analysis language features
- Examples of program analysis in Boa
- Hands-on experience with Boa
The target audience for the tutorial is anyone with an interest in program analysis, at any level from novice to expert. We do not assume the audience has any experience with the Boa framework. All users will be given login information to the Boa website and a brief introduction on how to use the website and the basics of the language. We do assume audience members have at least a basic understanding of program analysis, such as what a control-flow graph is, although we will briefly cover some of the basics of program analysis and terminology.
Ganesha Upadhyaya is a doctoral candidate at Iowa State University. His research interests include program analysis, mining software repositories, and concurrent programming. He has contributed to adding program analysis support to Boa. He has published and presented several works at OOPSLA, ICSE, MSR, Modularity, and AGERE.
Hridesh Rajan is the Kingland Professor in the Computer Science Department at Iowa State University (ISU) where he has been since 2005. His research interests include programming languages, software engineering, and data science. He founded the Midwest Big Data Summer School to deliver broadly accessible data science curricula and serves as a Steering Committee member of the Midwest Big Data Hub (MBDH). He has successfully given tutorials/demos at ASE, ICSE, SPLASH, and ECOOP.
Robert Dyer is an assistant professor at Bowling Green State University. His research interests include software engineering, big data applications, and empirically validating programming language features. His doctoral thesis was on the design, implementation, and evaluation of Boa. He has successfully given tutorials/demos at ASE, ICSE, SPLASH, and ECOOP.
Tien N. Nguyen is an associate professor at The University of Texas at Dallas. His research interests include program analysis, large-scale mining of software repositories, and statistical approaches in software engineering. He has been involved in organizing workshops and tutorials at ICSE 2014, SPLASH 2014, and SPLASH 2015.
Enterprise networks have become complex software artifacts weaving together interdependent devices, protocols, virtualization layers, optimizations, software bugs, and security controls. Yet communication is now critical infrastructure. How can we be sure networks are doing the right thing? The fact is, too often, they aren't. Despite enormous amounts of human time - a network or security change might spend a week in a review board in a large enterprise - outages and security vulnerabilities are common.
Can we prove that a network is correct? This is the goal of the emerging area of network verification. By applying ideas from formal methods to network infrastructure, algorithms and systems have recently been developed which can provide strong assurance that network behavior meets the service availability, segmentation, and resilience intent of the network architect.
In this tutorial, we'll begin with a brief introduction to networking and its recent transformation from highly manual management of siloed devices towards software-driven automation independent of hardware. Given that context, we will introduce the theoretical and algorithmic foundations of network verification, discuss approaches that provide verification of the data plane and control plane, and discuss how research is pushing the boundaries into verification of more complex network software. We will also discuss future research directions.
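To give a concrete, if drastically simplified, sense of data plane verification: given the forwarding tables actually installed in devices, a verifier can check properties such as reachability and loop freedom. The Python sketch below traces a single destination through a hypothetical forwarding table; real verifiers reason symbolically over entire header spaces rather than one concrete packet, and the device names and rules here are invented for illustration:

```python
# Hypothetical per-device forwarding tables: destination prefix -> next hop.
# A next hop of None means the device delivers the packet locally.
fib = {
    "r1": {"10.0.0.0/24": "r2"},
    "r2": {"10.0.0.0/24": "r3"},
    "r3": {"10.0.0.0/24": None},
}

def trace(src, prefix):
    """Follow next hops for one destination and classify the outcome
    as 'delivered', 'dropped' (no matching rule), or 'loop'."""
    seen, node = set(), src
    while True:
        if node in seen:
            return "loop"          # revisited a device: forwarding loop
        seen.add(node)
        table = fib.get(node, {})
        if prefix not in table:
            return "dropped"       # no rule for this destination
        hop = table[prefix]
        if hop is None:
            return "delivered"
        node = hop

print(trace("r1", "10.0.0.0/24"))  # delivered
print(trace("r9", "10.0.0.0/24"))  # dropped (no such device/rule)
```

Changing `r3`'s rule to point back at `r1` would make the same trace report a loop; scaling this style of check to all packets and all device pairs is where the algorithmic foundations covered in the tutorial come in.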
Network verification has blossomed into a hot area of research and quickly had an impact on industry, with multiple startups, major networking companies, and hyperscale cloud providers deploying verification technology today. We'll discuss key practical challenges and lessons learned, and how enterprises use these solutions in practice. Finally, we will close with an interactive discussion.
The objective of this tutorial is to introduce the ASE community to the field of network verification, key challenges for the future, and more broadly the deepening connections between software engineering and networking, in order to spur new research interactions.
P. Brighten Godfrey is Co-Founder and Chief Technology Officer of Veriflow, and Associate Professor in the Department of Computer Science at UIUC. Dr. Godfrey has conducted research in networked systems and algorithms for more than a decade, and is a co-inventor of key technology in use at Veriflow. His work has developed novel architectures and systems for Internet routing, data center networking, high performance data transport, and network data plane verification, as well as advancing theoretical analysis of network algorithms.
Santhosh Prabhu is a software engineer at Veriflow, specializing in core verification algorithms. He is also a Ph.D. candidate at the University of Illinois at Urbana-Champaign, advised by Matthew Caesar, focusing on network verification and SDN. He previously completed his M.Tech at the Indian Institute of Technology Kharagpur, where he was awarded the Institute Medal for First Rank in Computer Science.