FroCoS 2005 - 5th International Workshop on Frontiers of Combining Systems, Vienna, Austria, September 19-21, 2005
Abstract:
We define a general notion of a fragment within higher order type
theory; our fragments can be combined into larger ones and a
combined procedure for constraint satisfiability with respect to the
intended semantics of such fragments is outlined, following the
Nelson-Oppen schema. The procedure is in general only sound, but it
becomes terminating and complete when the shared fragment enjoys
suitable noetherianity conditions and allows an abstract version of
a `Keisler-Shelah like' isomorphism theorem. We show that this
general decidability transfer result covers as special cases,
besides applications that appear to be new, the recent extension of
the Nelson-Oppen procedure to non-disjoint signatures and the fusion
transfer of decidability of consistency of A-Boxes with respect to
T-Box axioms in local abstract description systems; in addition,
it reduces decidability of modal and temporal monodic fragments to
their extensional and one-variable components.
Abstract:
In a previous paper, we have introduced a general approach for
connecting two many-sorted theories through connection functions
that behave like homomorphisms on the shared signature, and have shown
that, under appropriate algebraic conditions, decidability of the
validity of universal formulae in the component theories transfers to
their connection. This work generalizes decidability transfer results
for so-called $\mathcal{E}$-connections of modal logics. However, in
this general algebraic setting, only the most basic type of
$\mathcal{E}$-connections could be handled. In the present paper, we
overcome this restriction by looking at pairs of connection functions
that are adjoint pairs for partial orders defined in the component
theories.
Abstract:
Most computer programs store elements of a given nature into
container-based data structures such as lists, arrays, sets, and
multisets. To verify the correctness of these programs, one needs to
combine a theory $S$ modeling the data structure with a theory $T$
modeling the elements. This combination can be achieved using the
classic Nelson-Oppen method only if both $S$ and $T$ are stably
infinite.
The goal of this paper is to relax the stable infiniteness
requirement. To achieve this goal, we introduce the notion of
\emph{polite} theories, and we show that natural examples of polite
theories include those modeling data structures such as lists, arrays,
sets, and multisets. Furthermore, we provide a method that is able to
combine a polite theory $S$ with any theory $T$ of the elements,
regardless of whether $T$ is stably infinite or not.
The results of this paper generalize to many-sorted logic those
recently obtained by Tinelli and Zarba concerning the combination of
\emph{shiny} theories with nonstably infinite theories in one-sorted
logic.
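The style of cooperation underlying such combination methods can be sketched with a toy example (entirely ours, not the paper's procedure): two solver stubs exchange equalities between shared variables until saturation or conflict. The solver functions and their fact formats are invented for the illustration, and the equality propagation is deliberately simplistic.

```python
# Toy sketch of Nelson-Oppen-style cooperation (invented solver stubs,
# not the paper's combination method).  Each solver reports equalities
# between shared variables entailed by its own constraints; reported
# equalities are forwarded to the other solver until either a solver
# detects a conflict or no new equalities appear.

def ineq_solver(ineqs, eqs):
    """ineqs: set of pairs (x, y) meaning x <= y.  Derives x = y from
    x <= y and y <= x.  Pure <= constraints alone are always satisfiable."""
    facts = set(ineqs)
    for e in eqs:                      # an equality implies both inequalities
        a, b = tuple(e)
        facts |= {(a, b), (b, a)}
    derived = {frozenset((a, b)) for (a, b) in facts
               if (b, a) in facts and a != b}
    return True, derived

def euf_solver(diseqs, eqs):
    """diseqs: set of pairs (x, y) meaning f(x) != f(y).  By congruence,
    an equality x = y contradicts f(x) != f(y)."""
    for (a, b) in diseqs:
        if frozenset((a, b)) in eqs:
            return False, set()
    return True, set()

def nelson_oppen(ineqs, diseqs):
    eqs = set()
    while True:
        sat1, new1 = ineq_solver(ineqs, eqs)
        sat2, new2 = euf_solver(diseqs, eqs)
        if not (sat1 and sat2):
            return False               # one component reported unsat
        if (new1 | new2) <= eqs:
            return True                # saturated with no conflict
        eqs |= new1 | new2
```

For instance, x <= y, y <= x together with f(x) != f(y) is rejected: the inequality solver derives x = y, which the equality solver refutes.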
Abstract:
The rewriting approach to ${\cal T}$-satisfiability
is based on establishing termination of a rewrite-based inference system
for first-order logic on the ${\cal T}$-satisfiability problem.
Extending previous such results,
including the {\em quantifier-free theory of equality}
and the {\em theory of arrays with or without extensionality},
we prove termination for the theories of
{\em records with or without extensionality},
{\em integer offsets} and {\em integer offsets modulo}.
A general theorem for termination on {\em combinations of theories},
that covers any combination of the theories above, is given next.
For empirical evaluation,
the rewrite-based theorem prover \eprover{} is compared
with the validity checkers CVC and CVC~Lite,
on both synthetic and real-world benchmarks,
including both valid and invalid instances.
Parametric synthetic benchmarks test {\em scalability},
while real-world benchmarks test the ability to handle huge sets of literals.
Contrary to the folklore that a general-purpose prover
cannot compete with specialized reasoners,
the experiments are overall favorable to the theorem prover,
showing that the rewriting approach is both elegant and practical.
Abstract:
Interface formalisms are able to model both the input requirements and
the output behavior of system components; they support both bottom-up
component-based design, and top-down design refinement.
In this paper, we propose ``sociable'' interface formalisms, endowed
with a rich compositional semantics that facilitates their use in
design and modeling.
Specifically, we introduce interface models that can
communicate via both actions and shared variables, and where
communication and synchronization covers the full spectrum, from
one-to-one, to one-to-many, many-to-one, and many-to-many.
Thanks to the expressive power of interface formalisms, this rich
compositional semantics can be realized in an economical way, on the
basis of a few basic principles.
We show how the algorithms for composing, checking the compatibility,
and refining the resulting sociable interfaces can be implemented
symbolically, leading to efficient implementations.
Abstract:
Two infinite structures (sets together
with operations and relations) hold our attention here: the trees
together with operations of construction and the rational numbers
together with the operations of addition and subtraction and a
linear dense order relation without endpoints. The object of this
paper is the study of the \emph{evaluated trees}, a structure
mixing the two preceding ones.
First of all, we establish a general theorem which gives a
sufficient condition for the completeness of a first-order theory.
This theorem uses a special quantifier, essentially asserting the
existence of infinitely many individuals having a given first-order
property. The proof of the theorem amounts to the broad outline of a
general algorithm that decides whether a proposition or its negation
is true in certain theories.
We then introduce the theory $T_E$ of evaluated trees and show
its completeness using our theorem. From our proof it is possible
to extract a general algorithm for solving quantified constraints
in $T_E$.
Abstract:
In this paper, we introduce a spatial and temporal logic for reasoning
about distributed computation. The logic is a combination of an
extension of hybrid logic, that allows us to reason about the
spatial structure of a computation, and linear temporal logic, which
accounts for the temporal aspects. On the pragmatic side, we show
the wide applicability of this logic by means of many examples. Our
main technical contribution is completeness of the logic both with
respect to spatial/temporal structures and a class of spatial
transition systems.
Abstract:
In this paper, we are concerned with the design of a hybrid resolution
framework. We develop a theoretical model based on chaotic iterations
in which hybrid resolution can be achieved as the computation of a
fixpoint of elementary functions. These functions correspond to basic
resolution techniques and their applications can easily be
parameterized by different search strategies. This framework is used
for the hybridization of local search and constraint propagation, and
for the integration of genetic algorithms and constraint propagation.
Experimental results obtained with our prototype implementation show
the value of the model for designing such hybridizations.
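A minimal sketch of the chaotic-iteration idea (with invented names; not the authors' framework): elementary reduction functions are applied repeatedly, in a fair order, until a common fixpoint is reached. Here the elementary functions are two interval-narrowing propagators for the constraint x = y + 1.

```python
# Minimal sketch of chaotic iteration (invented names, not the authors'
# framework): elementary reduction functions are applied round-robin
# until a common fixpoint is reached.

def fixpoint(functions, state):
    """Repeatedly apply each function until one full round changes nothing."""
    changed = True
    while changed:
        changed = False
        for f in functions:
            new = f(state)
            if new != state:
                state, changed = new, True
    return state

# Two elementary functions: interval-narrowing propagators for x = y + 1.
def prop_x_from_y(s):
    lo, hi = s['y']
    return {**s, 'x': (max(s['x'][0], lo + 1), min(s['x'][1], hi + 1))}

def prop_y_from_x(s):
    lo, hi = s['x']
    return {**s, 'y': (max(s['y'][0], lo - 1), min(s['y'][1], hi - 1))}
```

Starting from x in [0, 10] and y in [0, 4], the fixpoint narrows x to [1, 5]; different scheduling strategies for the functions correspond to different search strategies.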
Abstract:
A unit two variable per inequality (UTVPI) constraint is of the form
$a.x+b.y \leq d$ where $x$ and $y$ are integer variables, the
coefficients $a,b \in \{-1,0,1\}$ and the bound $d$ is an integer
constant. This paper presents an efficient decision procedure for
UTVPI constraints. Given $m$ such constraints over $n$ variables, the
procedure checks the satisfiability of the constraints in $O(n.m)$
time and $O(n+m)$ space. This improves upon the previously known
$O(n^2.m)$ time and $O(n^2)$ space algorithm based on transitive
closure. Our decision procedure is also equality generating, proof
generating, and model generating.
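As a rough illustration only (ours; not the paper's $O(n.m)$ procedure), UTVPI satisfiability over the rationals can be decided by the folklore reduction to difference logic: split each variable $x$ into nodes for $+x$ and $-x$, turn each constraint into one or two graph edges, and look for a negative cycle with Bellman-Ford.

```python
# Rough illustration (ours, not the paper's O(n.m) procedure): UTVPI
# satisfiability over the rationals via the folklore reduction to
# difference logic.  Each variable x yields nodes (x, 1) for x and
# (x, -1) for -x; a constraint a*x + b*y <= d becomes edges, and the
# system is satisfiable over Q iff the graph has no negative cycle.

def utvpi_sat(constraints, variables):
    """constraints: list of (a, x, b, y, d) encoding a*x + b*y <= d,
    with a, b in {-1, 0, 1}; use b = 0 for single-variable constraints."""
    nodes = [(v, s) for v in variables for s in (1, -1)]
    edges = []                       # (u, v, w) encodes value(v) - value(u) <= w
    for (a, x, b, y, d) in constraints:
        if a != 0 and b != 0:
            edges.append(((y, -b), (x, a), d))
            edges.append(((x, -a), (y, b), d))
        elif a != 0:                 # a*x <= d, i.e. 2*a*x <= 2*d
            edges.append(((x, -a), (x, a), 2 * d))
    # Bellman-Ford with an implicit all-zero source: a negative cycle
    # exists iff distances still improve after |V| full passes.
    dist = {n: 0 for n in nodes}
    for _ in range(len(nodes)):
        changed = False
        for (u, v, w) in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return True              # stable: satisfiable over Q
    return False                     # negative cycle: unsatisfiable
```

Integer satisfiability additionally requires a tightening step along certain cycles, which is not shown here; the paper's procedure achieves this within the stated time and space bounds.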
Abstract:
The new generic scheme $\cflp{\cdom}$ has recently been proposed
in \cite{LMR04a} as a logical and semantic framework for lazy
Constraint Functional Logic Programming over a parametrically
given constraint domain $\cdom$. Further, \cite{LMR04b} presented a
Constrained Lazy Narrowing Calculus $\clnc{\cdom}$ as a
convenient computation mechanism for solving goals for
$\cflp{\cdom}$-programs, which was proved sound and strongly
complete with respect to $\cflp{\cdom}$'s semantics. Now, in order
to provide a formal foundation for an efficient implementation of
goal solving methods in
existing systems such as $Curry$ \cite{curryreport} and $\toy$
\cite{toyreport,toyfdreport}, this paper enriches the $\cflp{\cdom}$
framework by presenting an optimization of the $\clnc{\cdom}$
calculus by means of definitional trees to efficiently control the
computation. We prove that this new Constrained Demanded Narrowing
Calculus $\cdnc{\cdom}$ preserves the soundness and completeness
properties of $\clnc{\cdom}$ and maintains the good properties shown
for needed and demand-driven narrowing strategies
\cite{AEH00,LLR93,Vad03}.
Abstract:
In this paper we report on a novel approach for uniform encoding
of hash functions (but also other cryptographic functions) into
propositional logic formulae, and reducing cryptanalysis problems
to the satisfiability problem. The approach is general, elegant,
and does not require any human expertise on the construction of a
specific cryptographic function. By using this approach, we
developed a technique for generating {\em hard and satisfiable}
propositional formulae and {\em hard and unsatisfiable}
propositional formulae. In addition, one can finely tune the
hardness of generated formulae. This can be very important for
different applications, including testing (complete or incomplete)
\sat solvers. The uniform logical analysis of cryptographic
functions can be used for comparison between different functions
and can expose weaknesses of some of them (as shown for
\mdfour in comparison with \mdfive).
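The building block of such encodings is the standard Tseitin-style translation of circuit gates into clauses; the sketch below (our illustration, not the paper's encoder) shows it for AND and XOR gates, the two gate types dominating bit-level descriptions of hash rounds, using DIMACS-style signed-integer literals.

```python
# Illustration (ours, not the paper's encoder): Tseitin-style clauses
# for circuit gates, the standard building block when translating
# bit-level functions such as hash rounds into CNF.  Literals are
# DIMACS-style signed integers: k means variable k is true, -k false.

def encode_and(out, a, b):
    """Clauses asserting out <-> (a AND b)."""
    return [[-a, -b, out], [a, -out], [b, -out]]

def encode_xor(out, a, b):
    """Clauses asserting out <-> (a XOR b)."""
    return [[-a, -b, -out], [a, b, -out], [a, -b, out], [-a, b, out]]
```

Composing such gate clauses over the whole circuit, then asserting unit clauses for known output bits, yields the satisfiability instances whose hardness can be tuned as described above.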
Abstract:
The dependency pair technique is a powerful modular method for
automated
termination proofs of term rewrite systems (TRSs). We present two important extensions of
this technique: First, we show how to prove termination of \emph{higher-order}
functions using dependency pairs. To this end, the dependency pair technique
is extended to handle (untyped) applicative TRSs. Second, we introduce a method to prove
\emph{non-termination} with dependency pairs, while up to now dependency pairs
were only used to verify termination. Our results lead to a framework for
combining termination and non-termination techniques
for first- and higher-order functions in a very flexible way.
We implemented and evaluated our results in the
automated termination prover {\sf AProVE}.
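For first-order TRSs, the dependency pairs themselves are easy to compute; the toy sketch below (our own term representation, unrelated to AProVE's internals) extracts them by pairing each marked left-hand side with the marked defined-symbol subterms of the corresponding right-hand side.

```python
# Toy sketch (our own term representation, unrelated to AProVE's
# internals) of computing the dependency pairs of a first-order TRS.
# A term is a variable (a string) or a pair (symbol, list_of_subterms);
# defined symbols are the root symbols of left-hand sides.

def subterms(t):
    yield t
    if isinstance(t, tuple):
        for s in t[1]:
            yield from subterms(s)

def dependency_pairs(rules):
    """rules: list of (lhs, rhs).  For each rule l -> r and each subterm
    of r whose root symbol is defined, emit the pair (l#, subterm#)."""
    defined = {l[0] for (l, _r) in rules}
    pairs = []
    for (l, r) in rules:
        for t in subterms(r):
            if isinstance(t, tuple) and t[0] in defined:
                pairs.append(((l[0] + '#', l[1]), (t[0] + '#', t[1])))
    return pairs
```

For the single rule plus(s(x), y) -> s(plus(x, y)) this yields the one pair plus#(s(x), y) -> plus#(x, y).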
Abstract:
In this paper we combine rewriting techniques with verification issues.
More precisely, we show how techniques for proving relative termination
of term rewrite systems (TRSs) can be applied to prove liveness properties
in fair computations. We do this using a new transformation which is stronger
than the sound transformation from~\cite{GieslZan03} but is still suitable
for automation. On the one hand we show completeness of this approach under
some mild conditions. On the other hand we show how this approach applies to
some examples completely automatically, using the TPA tool designed for proving
relative termination of TRSs. In particular we succeed in proving liveness in
the classical readers-writers synchronization problem.
Abstract:
We introduce a new concurrent lambda calculus with futures,
\LambdaFut, to model the operational semantics of Alice, a
concurrent extension of ML. \LambdaFut is a minimalist
extension of the call-by-value $\lambda$-calculus that is
expressive enough to define, combine, and implement
a variety of standard concurrency constructs such as channels,
semaphores, and ports. We present a linear type system for
\LambdaFut by which the safety of such definitions
and their combinations can be proved:
Well-typed implementations cannot be corrupted in any well-typed context.
Abstract:
We introduce and survey the ASM method for high-level system
design and analysis. We explain the three notions---\emph{Abstract
State Machine}~\cite{Gurevich94b}, \emph{ASM ground model} (system
blueprint)~\cite{Boerger03a} and \emph{ASM
refinement}~\cite{Boerger02b}---that characterize the method, which
also integrates current validation and verification techniques. We
illustrate how the method allows the system engineer to rigorously
capture requirements by ASM ground models and to stepwise refine these
to code in a validatable and verifiable way.
Abstract:
Classifications, often mistakenly called directories, are pervasive: we use
them to classify our messages, our favourite Web pages, our files, and so
on. Many more can be found on the Web; think for instance of the Google and
Yahoo directories. The problem is that all these classifications are very
different or, more precisely, semantically heterogeneous. The most striking
consequence is that they classify documents very differently, therefore
making it very hard and sometimes impossible to find them.
Matching classifications is the process which allows us to map those
nodes of two classifications which, intuitively, correspond semantically to
each other. In the first part of the talk I will show how it is possible to
encode this problem into a propositional validity problem, thus allowing
for the use of SAT reasoners. This is done mainly using linguistic
resources (e.g., WordNet) and some amount of Natural Language Processing.
However, as shown in the second part of the talk, this turns out to be
almost useless. In most cases, in fact, linguistic resources do not contain
enough of the axioms needed to prove unsatisfiability. The solution to
this problem turns out to be to use SAT as a way to generate the missing
axioms.
We have started using linguistic resources to provide SAT with the
axioms needed to match classifications, and we have ended up using SAT to
generate missing axioms in the linguistic resources. We will argue that
this is an example of a more general phenomenon which arises when using
commonsense knowledge. This in turn becomes an opportunity for the use of
decision procedures for a focused automated generation of the missing
knowledge.
Abstract:
We present one way of combining a logical framework and
first-order logic. The logical framework is used as an
interface to a first-order theorem prover. Its main purpose is to keep
track of the structure of the proof and to deal with the high level
steps, for instance, induction. The steps that involve purely
propositional or simple first-order reasoning are left to a
first-order resolution prover (the system Gandalf in our prototype).
The correctness of this interaction is based on a general
meta-theoretic result. One feature is the simplicity of our
translation between the logical framework and first-order logic,
which uses implicit typing.
Implementation and case studies are described.
Abstract:
We describe the results and status of a subproject of the Verisoft
\cite{VeriSoft} project. While the Verisoft project aims at
verification of a complete computer system, starting with hardware and
going up to user applications, the goal of our subproject is efficient
{\it hardware} verification.
We use the Isabelle theorem prover \cite{ISA} as the major tool for
hardware design and verification. Since many hardware verification
problems can be efficiently solved by automatic tools, we combine
Isabelle with model checkers and SAT solvers.
This combination of tools speeds up verification of hardware and
simplifies sharing of the results with verification of the whole
computer system. To increase the range of problems which can be solved
by external tools we implemented in Isabelle several algorithms for
handling uninterpreted functions and data abstraction.
The resulting combination was applied to verify many different
hardware circuits, automata, and processors.
In our project we use open-source tools that are free for academic
and commercial purposes.
Abstract:
ATS is a language with a highly expressive type system
that supports a restricted form of dependent types in which programs are
not allowed to appear in type expressions. The language is separated into
two components: a proof language in which (inductive) proofs can be encoded
as (total recursive) functions that are erased before execution, and a
programming language for constructing programs to be evaluated. This
separation enables a paradigm that combines programming with theorem
proving. In this paper, we illustrate by example how this programming
paradigm is supported in ATS.