Termination and Circular Proofs
19th of July 2017, Chambéry
Termination is a very active research topic, which is essential for the construction of proof assistants. Indeed, a circular proof (i.e., a proof that is allowed to use its own goal as a premise) is only valid if it is well-founded, and thus terminating in some sense.
This one day workshop aims at bringing together researchers from both the termination and the proof assistant communities around a few talks, in the hope of stimulating interactions and fruitful discussions.
If you plan to stay in a hotel, we advise you to book a room somewhere in the center of Chambéry. There are several hotels next to the train station that might be convenient (see here or there).
| First name | Last name | Institution |
|---|---|---|
| Sorin | Stratulat | Université de Lorraine |
| Laurent | Regnier | I2M - Aix-Marseille Université |
| Lionel | Vaux | I2M - Aix-Marseille Université |
| Christophe | Raffalli | Université Savoie Mont Blanc |
| Guillaume | Genestier | Mines ParisTech - ENS Cachan - Paris-Diderot |
| Ben | Price | University of Strathclyde |
| Guilhem | Jaber | ENS de Lyon |
| Karim | Nour | Université Savoie Mont Blanc |
| Luigi | Santocanale | LIF - Aix-Marseille Université |
| Pierre | Hyvernat | Université Savoie Mont Blanc |
| Rodolphe | Lepigre | Université Savoie Mont Blanc |
| David | Baelde | LSV, ENS Paris-Saclay & Inria Paris |
| Time | Event |
|---|---|
| 09:30 - 10:00 | Breakfast |
| 10:00 - 11:00 | Invited talk (Andreas Abel) |
| 11:00 - 11:30 | Coffee break |
| 11:30 - 12:00 | Contributed talk (Pierre Hyvernat) |
| 12:00 - 12:30 | Contributed talk (Luigi Santocanale) |
| 12:30 - 14:00 | Lunch |
| 14:00 - 14:30 | Contributed talk (David Baelde) |
| 14:30 - 15:00 | Contributed talk (Christophe Raffalli) |
| 15:00 - 15:30 | Coffee break |
| 15:30 - 16:00 | Contributed talk (Frédéric Blanqui) |
| 16:00 - 16:30 | Contributed talk (Sorin Stratulat) |
The Size-Change Principle (SCP) is a simple algorithm giving a partial
termination test for recursive definitions. It is particularly well suited for
functional languages with inductive types where recursive functions are given
by pattern matching (Caml / Coq) or equations (Haskell / Agda).
If term constructors are lazy, the SCP also gives (by duality) a partial productivity test for recursive functions involving coinductive types. This is what is used in Agda.
Unfortunately, when inductive and coinductive types are nested, this test is unsound: there are "well typed" and terminating recursive definitions producing terms in empty types. Such definitions make Agda inconsistent.
Using ideas from L. Santocanale about circular proofs and parity games, I'll show how the SCP can be used to check the "totality" of recursive definitions, making sure that definitions denote actual objects of their type.
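The core of the SCP itself can be illustrated with a small Python sketch (my own illustration, not the speaker's implementation). Each recursive call is summarized by a size-change graph: a set of edges `(i, j, r)` stating that argument `i` of the caller is strictly greater (`'>'`) or at least as large (`'>='`) as argument `j` of the callee. The test computes the composition closure of these graphs and checks that every idempotent self-loop has a strictly decreasing argument.

```python
STRICT, NONSTRICT = '>', '>='

def compose(g1, g2):
    """Compose two size-change graphs (sets of (src, dst, rel) edges),
    keeping the strongest relation for each resulting (src, dst) pair."""
    out = {}
    for (i, j, r1) in g1:
        for (j2, k, r2) in g2:
            if j == j2:
                r = STRICT if STRICT in (r1, r2) else NONSTRICT
                if (i, k) not in out or out[(i, k)] == NONSTRICT:
                    out[(i, k)] = r
    return frozenset((i, k, r) for (i, k), r in out.items())

def sct(call_graphs):
    """call_graphs: dict mapping (caller, callee) -> list of size-change
    graphs.  Returns True iff the size-change termination criterion holds."""
    closure = {(f, g, frozenset(G))
               for (f, g), graphs in call_graphs.items() for G in graphs}
    changed = True
    while changed:  # saturate the set of graphs under composition
        changed = False
        new = set()
        for (f, g, G1) in closure:
            for (g2, h, G2) in closure:
                if g == g2:
                    item = (f, h, compose(G1, G2))
                    if item not in closure:
                        new.add(item)
        if new:
            closure |= new
            changed = True
    # every idempotent self-loop must strictly decrease some argument
    for (f, g, G) in closure:
        if f == g and compose(G, G) == G:
            if not any(i == j and r == STRICT for (i, j, r) in G):
                return False
    return True
```

For instance, `f(n) = f(n-1)` yields the graph `{(0, 0, '>')}` and passes the test, while a call passing its argument unchanged, `{(0, 0, '>=')}`, fails.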
μ-bicomplete categories have products, coproducts, initial
algebras and terminal coalgebras of definable functors.
In this talk we shall focus on the way the category Set is μ-bicomplete. That is, we shall give a characterization of the sets and functions that are in the image of the functor from the initial μ-bicomplete category.
Objects are sets of deterministic winning strategies in parity games. We denote arrow-terms (in the theory of μ-bicomplete categories) by circular proofs. The cut-elimination procedure for circular proofs provides a set-theoretic operational semantics for the calculus. The operational semantics yields the denotational one: that is, the transformations of winning strategies that are definable by μ-bicomplete arrow-terms are exactly those computed by the cut-elimination procedure.
Much recent work has shown the interest of infinitary proof theory for various μ-calculi, where proof validity is based on threads that should satisfy a parity condition. These threads are "straight", i.e. they always go from conclusion to premise / occurrence to sub-occurrence. This is natural in cut-free systems (as found in tableaux, and more generally in systems whose goal is to obtain proof-search procedures or completeness results) but restrictive in the presence of cut (something we want to consider, from the Curry-Howard perspective). Indeed, in that setting, several computations that should be allowed are invalid with respect to straight threads. In other words, there is a huge gap between productive and valid computations. We narrow this gap by considering bouncing threads, which follow the flow of cut elimination. In this talk I'll introduce these objects formally, state a few results and many open questions. This is joint work with Doumane, Jaber and Saurin.
We present a rich type system with subtyping for an extension of System F. Our type constructors include sum and product types, universal and existential quantifiers, inductive and coinductive types. The latter two may carry annotations allowing the encoding of size invariants. For example, the termination of quicksort can be derived by showing that partitioning a list does not increase its size. The system deals with complex programs involving mixed induction and coinduction, or even mixed polymorphism and (co-)induction (as for Scott-encoded data types). One of the key ideas is to completely separate the notion of size from recursion. We do not check the termination of programs directly, but rather show that their (circular) typing proofs are well-founded. We then obtain termination using a standard semantic proof of strong normalisation.
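The quicksort example mentioned above can be made concrete with a small Python sketch (an illustration of the size invariant only; the abstract's system works on typing proofs, not Python). The point is that `partition` returns two lists, each no longer than its input, so both recursive calls are on lists strictly shorter than the original.

```python
def partition(pivot, xs):
    """Split xs around pivot.  Size invariant: each returned list has
    length at most len(xs) -- the fact a sized type system would record."""
    smaller, larger = [], []
    for x in xs:
        (smaller if x <= pivot else larger).append(x)
    return smaller, larger

def quicksort(xs):
    """Terminates because each recursive call is on a strictly shorter
    list: partition is applied to the tail of xs, and its outputs are no
    longer than that tail."""
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    smaller, larger = partition(pivot, rest)
    return quicksort(smaller) + [pivot] + quicksort(larger)
```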
CLKIDω is a sequent-based cyclic inference system adapted for reasoning about first-order logic with inductive definitions. The current approach for verifying the soundness of CLKIDω proofs is based on expensive model-checking techniques, leading to an explosion in the number of states.
We propose proof strategies that guarantee the soundness of a class of CLKIDω proofs provided that certain ordering constraints are satisfied. They are inspired by previous work on cyclic well-founded induction reasoning, known to provide effective sets of ordering constraints. In our case, an ordering constraint compares two sequents and can be decided in time polynomial in the size of the sequents. Under certain conditions, these strategies can help to automatically build proofs that implicitly satisfy the ordering constraints.
I will discuss why previous methods do not seem to be able to handle this system and present Gilles Dowek's proof.