Panels

Panel 1: Can Model Checking Scale to Software?

Chair: John Penix, NASA Ames

Model checking is a formal verification technique for finite-state concurrent systems, initially developed and applied to hardware and protocol verification in the late 1980s and early 1990s. It can verify temporal logic formulas, as well as detect deadlocks and assertion violations, by exhaustively exploring all possible interleavings of the system's concurrent processes. The major benefit of model checking is that the verification algorithms are completely automated, reducing the amount of knowledge of the underlying formal system required to use the tools.
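
To make the mechanics concrete, the following is a minimal sketch of explicit-state model checking in Python; it is an illustrative example, not drawn from any panelist's tool. Two hypothetical processes increment a shared counter without synchronization, and the checker exhaustively explores every interleaving, flagging terminal states that violate the assertion that the final counter equals 2.

    # Minimal explicit-state model checker sketch (hypothetical example,
    # not any particular tool). Two processes each run the non-atomic
    # sequence: tmp = counter; counter = tmp + 1.

    def steps(pid):
        # Each step is an atomic transition on (registers, counter).
        def read(state):
            regs, counter = state
            return (regs[:pid] + (counter,) + regs[pid + 1:], counter)
        def write(state):
            regs, counter = state
            return (regs, regs[pid] + 1)
        return [read, write]

    def explore():
        procs = [steps(0), steps(1)]
        init = ((0, 0), ((0, 0), 0))   # (program counters, (registers, counter))
        frontier, seen, violations = [init], set(), []
        while frontier:
            pcs, state = frontier.pop()
            if (pcs, state) in seen:
                continue
            seen.add((pcs, state))
            if all(pc == 2 for pc in pcs) and state[1] != 2:
                violations.append(state)   # assertion: final counter == 2
            for i, pc in enumerate(pcs):
                if pc < 2:                 # process i has a step left to take
                    nxt = pcs[:i] + (pc + 1,) + pcs[i + 1:]
                    frontier.append((nxt, procs[i][pc](state)))
        return violations

    print(explore())   # non-empty: the classic lost-update interleaving

The exhaustive search finds the lost-update interleaving that testing can easily miss, which is precisely the kind of concurrency error referred to above.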

Recent experiments applying model checking to various software artifacts have shown positive results, discovering errors that were not detected by testing. However, model checking has a well-known theoretical limitation: the size of the state space that must be searched grows exponentially with the number of processes in the system. This panel will address whether and when this limitation can be overcome in the context of real software systems, and when other verification techniques should be used alongside, or instead of, model checking.
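
As a back-of-the-envelope illustration of that blow-up (the numbers are illustrative, not taken from the panel): n processes with k local states each can generate up to k**n global states, before any shared data is even counted.

    # Illustrative state-explosion arithmetic: k local states per process,
    # n interleaved processes, up to k**n reachable global states.
    k = 10
    for n in (2, 5, 10, 20):
        print(f"{n} processes x {k} local states -> up to {k**n:,} global states")

Even a modest system quickly exceeds what exhaustive search can enumerate directly, which is the practical force behind the panel's question.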

Panelists:

Panel 2: Evaluation of Automated Software Engineering Systems

Chair: David Redmiles

Evaluation: research reviewers ask for it, industrial and government sponsors ask for it, and managers and clients want proofs of concept. But how does one do it? The purpose of this panel is to explore how one can evaluate automated software engineering environments and tools, including knowledge-based systems. Specific approaches to evaluation vary with the nature of the system under evaluation. For example, theoretical concepts require proofs; advances in interfaces require empirical testing; systems affecting groups of end users require field studies; and management concepts and industrial systems can be validated with case studies. Thus, there is wide variety in the nature of the evidence used to substantiate our claims and hypotheses about automated software engineering. The panelists were chosen to provide the widest possible range of experiences.