ASE 2011 26th IEEE/ACM International Conference on Automated Software Engineering

Sunday–Saturday • November 6–12, 2011
Oread, Lawrence, Kansas

Monday, November 7

Tutorial 1

Full-Day Tutorial—Java Pathfinder Tutorial

Presenters—Peter Mehlitz, Neha Rungta, and Willem Visser

Java Pathfinder (JPF) is an open-source analysis system that automatically verifies Java programs. At its core, JPF is an explicit-state model checker for Java bytecode: a custom virtual machine that supports state storage, state matching, and configurable execution semantics for bytecode instructions, controls scheduling choices in concurrent programs, and supports monitoring of program executions through observer design patterns. It checks properties such as the absence of unhandled exceptions, deadlocks, and race conditions. One of JPF's defining qualities is its extensibility: it has been extended to support symbolic execution, directed automated random testing, alternative choice generation, configurable state abstractions, various bug-detection heuristics, configurable search strategies, temporal-property checking, and more. JPF supports these extensions at the design level through a set of stable, well-defined interfaces, designed so that extensions can be developed without changes to the core. In this tutorial we give attendees hands-on experience implementing these interfaces to extend JPF. The system also provides a rich build, test, and configuration infrastructure.
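The explicit-state search at the heart of JPF can be illustrated with a minimal, self-contained sketch (plain Python, not JPF's actual API): two threads acquire two locks in opposite order, and a breadth-first search over scheduling choices, with a visited set playing the role of state matching, finds the classic deadlock.

```python
from collections import deque

# Two threads acquire two locks in opposite order -- the classic deadlock.
PROGRAMS = [
    [("acq", "A"), ("acq", "B"), ("rel", "B"), ("rel", "A")],  # thread 0
    [("acq", "B"), ("acq", "A"), ("rel", "A"), ("rel", "B")],  # thread 1
]

def successors(state):
    """Enumerate all enabled scheduling choices from a state."""
    pcs, held = state                       # held: frozenset of (lock, owner)
    owners = dict(held)
    for tid, prog in enumerate(PROGRAMS):
        pc = pcs[tid]
        if pc == len(prog):
            continue                        # thread has terminated
        op, lock = prog[pc]
        if op == "acq":
            if lock in owners:
                continue                    # lock taken: this move is disabled
            new_held = held | {(lock, tid)}
        else:
            new_held = held - {(lock, tid)}
        yield (pcs[:tid] + (pc + 1,) + pcs[tid + 1:], new_held)

def find_deadlock():
    """Breadth-first explicit-state search; 'seen' performs state matching."""
    init = ((0, 0), frozenset())
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        pcs, _ = state
        succs = list(successors(state))
        if not succs and any(pc < len(p) for pc, p in zip(pcs, PROGRAMS)):
            return state                    # a live thread has no enabled move
        for s in succs:
            if s not in seen:               # state matching prunes revisits
                seen.add(s)
                frontier.append(s)
    return None

deadlock = find_deadlock()
```

The search returns the state where each thread holds one lock and waits for the other; JPF performs the same kind of exploration over real bytecode states.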

The objective of the tutorial is to give software engineering researchers and practitioners an opportunity to learn about JPF: to install and run it, and to understand the concepts required to extend it. This hands-on tutorial will expose attendees to JPF's basic architecture, demonstrate how to use it to analyze their own artifacts, and illustrate how to extend it with their own analyses. The in-depth discussion of the architecture will include having attendees write their own listeners, choice generators, attributes, instruction factories, and scheduler factories, the key extension points of JPF.

Tutorial 2

Half-Day Morning Tutorial—Schema-based Program Synthesis and the AutoBayes System

Presenter—Johann Schumann

In recent years, automated program synthesis has gained renewed interest in research and industry. Most realistic program synthesis tools are highly complicated software systems, yet few of them are openly available or open source for detailed analysis.

This tutorial will provide an introduction to program synthesis and its approaches and will give the participant a detailed overview of the architecture and inner workings of the AutoBayes program synthesis system. AutoBayes is a schema-based program synthesis tool for the automatic generation of customized data analysis algorithms from compact and declarative statistical specifications. Developed at NASA Ames, this mature tool has recently been made open source.

This tutorial will discuss AutoBayes's schema-based synthesis process and the interaction between schema instantiation, search, symbolic calculation, and rewriting during synthesis. AutoBayes is a large system implemented in SWI-Prolog, and the tutorial will also discuss the software engineering issues that must be addressed to make a synthesis system work reliably.
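The schema-based approach can be sketched in a few lines (illustrative Python, not AutoBayes's Prolog notation; the spec fields and schemas below are invented for the example): a schema pairs an applicability guard over the specification with a code template, and synthesis instantiates the first schema whose guard accepts the spec.

```python
# A schema pairs an applicability guard with a code template; synthesis
# instantiates the first schema whose guard accepts the specification.
# (Spec fields and schemas here are illustrative, not AutoBayes notation.)

SCHEMAS = [
    # Closed form: the maximum-likelihood mean of a Gaussian is the average.
    (lambda spec: spec["distribution"] == "gaussian"
                  and spec["unknown"] == "mean",
     "def estimate(data):\n    return sum(data) / len(data)\n"),
    # Fallback schema: no closed-form solution known.
    (lambda spec: True,
     "def estimate(data):\n"
     "    raise NotImplementedError('no solution schema matched')\n"),
]

def synthesize(spec):
    """Return generated source code from the first applicable schema."""
    for guard, template in SCHEMAS:
        if guard(spec):
            return template
    raise ValueError("no applicable schema")

code = synthesize({"distribution": "gaussian", "unknown": "mean"})
namespace = {}
exec(code, namespace)   # materialize the generated estimator
```

In AutoBayes itself, schemas additionally decompose a specification recursively and interleave with search, symbolic calculation, and rewriting; this sketch shows only the guard-and-instantiate core.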

Tuesday, November 8

Tutorial 4

Half-Day Morning Tutorial—Incremental Evaluation of Model Queries over EMF Models: A Tutorial on EMF-IncQuery

Presenters—Gábor Bergmann, Ábel Hegedüs, Ákos Horváth, István Ráth, Zoltán Ujhelyi, and Dániel Varró

Model Driven Development platforms, such as the industry-leading Eclipse Modeling Framework (EMF), greatly benefit from pattern matching, which supports goals including model validation, model transformation, code generation, and domain-specific behaviour simulation. Pattern matching is a search for model elements conforming to a given pattern that describes their arrangement and properties, e.g., finding a violation of a complex well-formedness constraint of a domain-specific modeling language.

Two major issues arise in pattern matching: (i) it can have a significant impact on runtime performance and scalability; and (ii) it is often tedious and time-consuming to implement (efficiently) by hand on a case-by-case basis. The latter is typically addressed by a declarative query language (e.g., EMF Query, OCL) processed by a general-purpose pattern matching engine.

This tutorial introduces EMF-IncQuery [3, 4], a declarative model query framework over EMF that uses the graph pattern formalism (from the theory of graph transformations) as its query language and relies on incremental pattern matching for improved performance. With incremental pattern matching, the matches of a pattern are explicitly stored and incrementally maintained as the model is manipulated. In many scenarios this technique provides significant speed-ups at the cost of increased memory consumption.
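The trade-off can be seen in a small sketch (plain Python; EMF-IncQuery's actual API and its Rete-style matcher are far richer): a query's match set is filled once and then maintained on every model change notification, so reading it back is a cheap lookup rather than a rescan.

```python
class IncrementalQuery:
    """Caches the matches of a single-element predicate and maintains the
    cache on model change notifications instead of rescanning the model."""
    def __init__(self, predicate):
        self.predicate = predicate
        self.matches = set()

    def on_add(self, elem):
        if self.predicate(elem):
            self.matches.add(elem)

    def on_remove(self, elem):
        self.matches.discard(elem)

class Model:
    def __init__(self):
        self.elements = set()
        self.queries = []

    def register(self, query):
        self.queries.append(query)
        for e in self.elements:          # one initial scan fills the cache
            query.on_add(e)

    def add(self, elem):
        self.elements.add(elem)
        for q in self.queries:           # incremental maintenance
            q.on_add(elem)

    def remove(self, elem):
        self.elements.discard(elem)
        for q in self.queries:
            q.on_remove(elem)

# Well-formedness query: classes with an empty name are violations.
model = Model()
unnamed = IncrementalQuery(lambda e: e[0] == "Class" and e[1] == "")
model.register(unnamed)
model.add(("Class", "Order"))
model.add(("Class", ""))
# unnamed.matches now holds the violation without any rescan
```

The explicit match set is exactly the extra memory paid for the speed-up: each registered query stores its matches for the lifetime of the model.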

In this tutorial, we give an overview of the EMF-IncQuery system, demonstrate how the technology can be applied, and discuss its gains and trade-offs. We will show how making pattern matching cheap yields significant performance advantages in a number of scenarios, such as model validation (model editors can continuously evaluate complex well-formedness constraints and give efficient, immediate feedback), model transformation (determining the applicability of declarative transformation rules), and simulation of dynamic domain-specific models (identifying possible model evolutions). These will be illustrated with a case study of on-the-fly well-formedness constraint evaluation in UML models.

Tutorial 5

Half-Day Morning Tutorial—Modularizing Crosscutting Concerns with Ptolemy

Presenters—Hridesh Rajan, Gary T. Leavens, and Robert Dyer

This tutorial will provide an introduction to Ptolemy, a programming language whose goals are to improve a software engineer's ability to separate conceptual concerns while preserving the encapsulation of object-oriented code, and to improve a programmer's ability to reason modularly about that code. In particular, Ptolemy's features help modularize crosscutting concerns. A crosscutting concern is a requirement whose implementation is spread across and mixed with the code of other requirements. There have been attempts to improve the separation of crosscutting concerns, e.g., by aspect-oriented and implicit-invocation languages, but none gives software developers textual separation of concerns and modular reasoning at the same time. Ptolemy has both of these properties, which are important for scalable software engineering. Ptolemy's event types provide a well-defined interface between object-oriented code and crosscutting code, which in turn enables separate type-checking and compilation. Ptolemy also provides a novel and practical specification mechanism that we call a translucid contract; translucid contracts allow developers to reason modularly about the control effects of object-oriented and crosscutting code.

The tutorial will proceed by discussing the goals of the Ptolemy programming language. We will then discuss Ptolemy's programming and specification features through several hands-on exercises, and conclude with pointers to ongoing work on the design, implementation, and verification of Ptolemy programs.
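The event-type idea can be approximated outside Ptolemy (a loose Python analogue; Ptolemy's own syntax, type system, and translucid contracts go well beyond this sketch): base code announces a named event, and crosscutting handlers register against the declared event type rather than against the base code's internals.

```python
# Handlers register against a declared event type, not against the
# announcing code, so base and crosscutting code stay textually separate.
HANDLERS = {}

def declare_event(name):
    """Declare an event type; the name acts as the interface between the
    object-oriented code and the crosscutting code."""
    HANDLERS[name] = []

def register(name, handler):
    HANDLERS[name].append(handler)

def announce(name, **payload):
    for handler in HANDLERS[name]:
        handler(**payload)

declare_event("Changed")

audit_log = []                           # the crosscutting concern
register("Changed", lambda value: audit_log.append(value))

def set_value(value):                    # object-oriented base code
    announce("Changed", value=value)     # implicit invocation of handlers
```

Note that `set_value` names only the event type, never the handlers; in Ptolemy this interface is additionally typed and governed by a translucid contract, which this dynamic sketch cannot express.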

Tutorial 6

Half-Day Afternoon Tutorial—The Use of Text Retrieval Techniques in Software Engineering

Presenters—Andrian Marcus and Giuliano Antoniol

During software evolution, many related artifacts are created or modified. Some of these consist of structured data (e.g., analysis data), some contain semi-structured information (e.g., source code), and many include unstructured information (e.g., natural language text). Software artifacts written in natural language (requirements, design documents, user manuals, scenarios, bug reports, developers' messages, etc.), together with the comments and identifiers in the source code, encode to a large degree the domain of the software and the developers' knowledge about the system, as well as design decisions, developer information, and more. In many software projects the amount of unstructured information exceeds the size of the source code by an order of magnitude. Retrieving and analyzing the textual information in software is therefore extremely important in supporting program comprehension and a variety of software evolution tasks.

Text retrieval (TR) is a branch of information retrieval (IR) in which the information is stored primarily as text. TR methods are therefore suitable candidates for retrieving and analyzing the unstructured data embedded in software, yet most software engineering professionals and researchers are not exposed to TR in their training.
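One common TR baseline, TF-IDF weighting with cosine similarity, fits in a few lines (a sketch of the general technique, not necessarily the methods the tutorial covers): software "documents" such as per-method identifiers and comments are tokenized, weighted, and ranked against a query, which is the core operation behind concept location and traceability link recovery.

```python
import math
from collections import Counter

def build_index(docs):
    """Compute inverse document frequencies and TF-IDF vectors for a
    corpus of token lists (e.g., identifiers and comments per method)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return idf, [vectorize(doc, idf) for doc in docs]

def vectorize(tokens, idf):
    """TF-IDF vector as a sparse dict; terms unseen in the corpus drop out."""
    return {t: c * idf[t] for t, c in Counter(tokens).items() if t in idf}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "open file read buffer".split(),     # doc 0
    "draw window paint widget".split(),  # doc 1
    "close file flush buffer".split(),   # doc 2
]
idf, vectors = build_index(corpus)
query = vectorize("file buffer".split(), idf)
ranking = sorted(range(len(corpus)),
                 key=lambda i: cosine(query, vectors[i]), reverse=True)
```

For concept location the ranked documents are code fragments; for traceability recovery the query is itself another artifact (e.g., a requirement) and high-similarity pairs become candidate links.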

This tutorial presents some of the most popular TR methods and their applications in software engineering. Special attention is given to two specific tasks for which the use of TR methods proved to be very successful: concept location in software and traceability link recovery between software artifacts.

The tutorial aims to provide sufficient knowledge for any software engineering researcher or practitioner to start using TR methods in their work.

Tutorial 7

Half-Day Afternoon Tutorial—xSA: eXtreme Software Analytics – Marriage of eXtreme Computing and Software Analytics

Presenters—Dongmei Zhang and Tao Xie

A wealth of data exists in the software development process, and further rich data are produced by modern software and services in operation, many of which are data-driven and/or data-producing in nature. Hidden in these data is information about the quality of software and services and the dynamics of software development. Software analytics develops and applies data exploration and analysis technologies, such as pattern recognition, machine learning, and information visualization, to software data in order to obtain insightful and actionable information about modern software and services. In real-world settings, software data are often of large scale, which requires infrastructure support for highly scalable and highly efficient data storage and computing. To address these scalability issues, eXtreme Software Analytics (xSA) has emerged as the marriage of eXtreme Computing and Software Analytics.


Tutorial Registration Fees

                       By September 19    After September 19
Full-Day Tutorial
  Member                    $250                $275
  Non-member                $315                $345
  Student                   $100                $100
Half-Day Tutorial
  Member                    $180                $200
  Non-member                $225                $250
  Student                   $100                $100

