ccc09

Abstracts

Invited Talks:


Andrej Bauer: Canonical Effective Subalgebras of Classical Algebra (Slides)

Given a classical algebra (a set with operations), is there a canonical effective structure on it, or at least a largest effective subalgebra? In general the answer is negative without further assumptions. We give conditions under which a complete metric algebra has a canonical effectively complete effective subalgebra for which the distance function is effectively upper-semicontinuous. Our results are valid in any realizability model. They generalize known results about the effectivity of finitely generated algebras by Mal’cev and about the real numbers by Moschovakis and Hertling.

This is joint work with Jens Blanck.


Willem Fouché: Countably Dense Random Sets of Reals, Kolmogorov Complexity and Brownian Motion (Slides)

Countably dense random sets of reals arise naturally in the theory of Brownian motion and in the understanding of percolation phenomena in statistical physics. The mathematical representation of countably dense random sets poses foundational challenges which are of a descriptive set theoretical nature. However, in [7], Tsirelson made remarkable progress with this problem. He proved, and provided a rich context for, the following

Theorem 1 (Tsirelson) Consider the Borel space C[0,1] × (0,1)ω. There exists a probability measure P on the product space such that the first marginal of P is the Wiener measure W and the second is the Lebesgue measure Λ on the infinite dimensional cube (0,1)ω. Moreover, for P-almost all pairs (x,u) ∈ C[0,1]×(0,1)ω the sequence u is an enumeration of the set of local minima of the Brownian sample path x.

We discuss Tsirelson’s theory from the viewpoint of algorithmically random Brownian motion and provide evidence for finding adequate definitions, within the arithmetical hierarchy, of a “generic” countably dense subset of the unit interval. We also show how one can use these techniques together with ideas from Fourier analysis to find effective definitions of perfect subsets of some Hamel bases of the reals viewed as a vector space over the rationals. The arguments are based on the ideas of [1], [2] and [3]. We shall also discuss open problems of a descriptive set theoretic nature that are naturally suggested by these results. For related results, see [4], [5] and [6].

References:

[1] Asarin, E.A. and Prokovskii, A.V.: Use of the Kolmogorov complexity in analyzing control system dynamics, Automation and Remote Control 47 (1986), 21–28.

[2] Fouché, W.L.: Arithmetical representations of Brownian motion I, J. Symb. Logic 65 (2000), 421–442.

[3] Fouché, W.L.: The descriptive complexity of Brownian motion, Advances in Mathematics 155 (2000), 317–343.

[4] Fouché, W.L.: Dynamics of a generic Brownian motion: Recursive aspects, Theoretical Computer Science 394 (2008), 175–186.

[5] Fouché, W.L.: Fractals generated by algorithmically random Brownian motion, Proceedings on Mathematical Theory and Computational Practice, CiE 2009, Editors: K Ambos-Spies et al, Lecture Notes in Computer Science (to appear).

[6] Kjos-Hanssen, B. and Nerode, A.: Effective dimension of points visited by Brownian motion, Theoretical Computer Science 410 (2009), 347–354.

[7] Tsirelson, S.: Brownian local minima, random dense countable sets and random equivalence classes, Electronic Journal of Probability 11 (2006), 162–198.


Jaime Gaspar: A Logical View at Tao’s Finitisation of Principles in Analysis (Slides)

In 2007 and 2008 Terence Tao wrote essays on his blog about the finitisation of principles in analysis. His goal is to find, for infinitary qualitative “soft analysis” statements, equivalent finitary quantitative “hard analysis” statements. These equivalences are usually proved by contradiction, using a sequential compactness argument. Tao’s two prime examples are:

  • A finitisation of the infinite convergence principle (every bounded monotone sequence of real numbers converges; a finitary form is recalled just after this list);
  • An almost finitisation of the infinite pigeonhole principle (every colouring of the natural numbers with finitely many colours has a colour that occurs infinitely often).
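
As a reminder (paraphrased from [1]; see Tao's post for the exact formulation and discussion), the finite convergence principle reads roughly as follows: for every ε > 0 and every function F : ℕ → ℕ there exists M such that every monotone sequence 0 ≤ x_1 ≤ … ≤ x_M ≤ 1 admits an index N with N + F(N) ≤ M and |x_n - x_m| ≤ ε for all N ≤ n, m ≤ N + F(N). The quantitative statement mentions only finitely many terms of the sequence, at the price of the additional functional parameter F.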

We take a logical look at Tao’s essays and make mainly two points:

  • The finitisations can be done in a systematic way using proof theoretical tools, namely Gödel’s functional (Dialectica) interpretation;
  • Uniform boundedness (Heine-Borel compactness) arguments are preferable to sequential compactness arguments, for reasons of reverse mathematics.

These points are then illustrated in a case study: the almost finitisation of the infinite pigeonhole principle.

This is joint work with Ulrich Kohlenbach.

References:

[1] Tao, T.: Soft analysis, hard analysis, and the finite convergence principle. http://terrytao.wordpress.com, 2007.

[2] Tao, T.: The correspondence principle and finitary ergodic theory. http://terrytao.wordpress.com, 2008.

[3] Gaspar, J. and Kohlenbach, U.: On Tao’s “finitary” infinite pigeonhole principle. To appear in The Journal of Symbolic Logic.


Hajime Ishihara: A Boundedness Principle in Constructive Reverse Mathematics (Slides)

A boundedness principle, BD-N, was found in 1992 and has proved to be equivalent to a certain continuity principle, to Banach’s inverse mapping theorem, and to the completeness of the space of test functions. Recently, relations among Brouwer’s continuity principle, Baire’s category theorem, and BD-N have been discussed. We will give a survey of BD-N, its equivalents, and the recent results in constructive reverse mathematics.
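
For orientation, the principle in question can be stated as follows (a standard formulation, not quoted from the talk): a subset A of ℕ is pseudobounded if a_n/n → 0 for every sequence (a_n) of elements of A (equivalently, a_n < n for all sufficiently large n), and

(BD-N)    Every inhabited, countable, pseudobounded subset of ℕ is bounded.

BD-N is trivially true classically, but it is not provable in Bishop-style constructive mathematics.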


Oleg Kudinov: Semantics and Computability on Structures (Slides)

This talk concerns a semantic approach to computability on structures with a computably enumerable existential theory. The notion of an effectively enumerable topological space has been considered by several authors; it is shown that the corresponding class of spaces possesses nice categorical and computational properties and contains the class of structures mentioned above, considered as spaces with the induced Σ-topology. This semantic approach turns out to be rather powerful and suitable for the investigation of functionals of finite types over the reals, after appropriate inductive topological considerations related to the construction C(X). The central role in the proposed approach is played by suitable modifications of the Gandy Theorem and its corollaries. A couple of applications to computability and definability theory are considered.


Dirk Pattinson: Continuous Functions on Infinite Structures (Slides)

The well-known representation of real numbers as infinite streams of digits is a well-studied instance of the use of coinductive types to represent continuous data types. The fact that also (total) continuous functions may be represented as a coinductive type suggests that this analogy can be carried over to a larger variety of structures and spaces usually found in analysis, with the main benefit of being able to use coinduction both as a definition and a proof principle on representatives. In this talk, we argue that indeed many types of spaces found in analysis can be represented coinductively by generalising known representation theorems for function spaces from infinite streams to infinite trees, and subsequently to spaces of higher-order functions. The main challenge is to adequately capture the topology on the represented spaces, which leads to an information-theoretic interpretation.

This is joint work with Neil Ghani and Peter Hancock.


Robert Rettinger: The “Computability by Compactness” Principle (Slides)

We discuss a general principle to prove computability of sets and functions using essentially three properties: (computable) compactness, uniqueness and recursive enumerability. By applying this simple principle we give new, simplified proofs of computability for conformal mappings (in the simply and multiply connected cases), uniformization and several differential equations.


Matthias Schröder: The Coincidence Problem of the Extensional and the Intensional Hierarchy over the Reals (Slides)

In functional programming, there are essentially two approaches to computability on the real numbers. The extensional approach assumes an idealistic functional language that contains the real numbers as a datatype. The intensional approach uses the data structures of ordinary functional languages and encodes real numbers, e.g., as streams using the signed-digit representation.
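
To make the intensional encoding concrete, the following small sketch (ours, not from the talk) unfolds a real x ∈ [-1,1] into signed digits d_i ∈ {-1,0,1} with x = Σ_i d_i·2^(-i); it uses floating point purely for illustration, whereas a genuine exact-real implementation would manipulate the digit stream symbolically.

    # Sketch of the signed-digit representation of x in [-1, 1]:
    # x = sum_{i>=1} d_i * 2**(-i) with digits d_i in {-1, 0, 1}.
    # Floats are used only for illustration.

    def signed_digits(x, n):
        """Return the first n signed digits of x in [-1, 1]."""
        digits = []
        for _ in range(n):
            y = 2 * x
            if y > 0.5:
                d = 1
            elif y < -0.5:
                d = -1
            else:
                d = 0
            digits.append(d)
            x = y - d          # invariant: x stays in [-1, 1]
        return digits

    def decode(digits):
        """Partial sum sum_i d_i * 2**(-i) of a finite digit list."""
        return sum(d * 2.0 ** -(i + 1) for i, d in enumerate(digits))

    print(signed_digits(0.3, 20))          # a prefix of the {-1,0,1}-stream
    print(decode(signed_digits(0.3, 20)))  # approximately 0.3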

Both approaches yield the same classes of Type-1 and Type-2 functionals over the reals. Whether this is also the case for higher-type functionals remains an open problem. Bauer, Escardó and Simpson in [1] were the first to discover that there is a relationship to topological properties of the Kleene-Kreisel continuous functionals over the natural numbers. Normann formulated in [2] a purely topological problem that is equivalent to the coincidence problem.

In this talk, I will report on recent progress in solving the equivalent topological question.

References:

[1] Bauer, A., Escardó, M. and Simpson, A.: Comparing Functional Paradigms for Exact Real-number Computation. Proc. ICALP ’02, Lecture Notes in Computer Science 2380 (2002), 488–500.

[2] Normann, D.: Comparing Hierarchies of Total Functionals. Logical Methods in Computer Science 1 (2:4) (2005), 1–28.


Alex Simpson: The Locale of Random Sequences (Slides)

I shall outline an approach to random sequences based on point-free topology.


Alexey Stukachev: Effective Model Theory via the Sigma-Definability Approach (Slides)

We look at computability over abstract structures via the formalism based on the notion of Sigma-definability in admissible sets and, in particular, in HF-superstructures. Providing a natural generalization of classical computable model theory, this approach can be used for measuring the complexity of structures, whether countable or not. We briefly overview the connections with some closely related approaches, such as BSS-computability and search computability, and consider in more detail the following groups of questions, which are in some sense global and local, respectively:

  1. Semilattices of Sigma-degrees of structures: embeddings of the semilattices of Turing and enumeration degrees, jump operation and jump inversion, connections with the semilattices of degrees of presentability;
  2. Sigma-degrees of uncountable models of computably simple theories, Sigma-degrees of the fields of real and complex numbers, refinements of the notion of Sigma-definability.

Trifon Trifonov: Computation in Non-constructive Proofs (Slides)

In recent years Kreisel’s idea from the 1950s of “unwinding” classical proofs to obtain their computational content has received a lot of renewed attention. More than half a century after the first theoretical developments on this topic, the extent of its applications has not yet been thoroughly explored. The ability to use machine help to handle, analyse and transform proofs makes the idea of discovering a new algorithm hidden in a seemingly non-constructive argument even more attractive. Several authors have developed fully and semi-automatic extraction techniques and have demonstrated their usefulness by means of case studies in various fields of mathematics. Nevertheless, general results relating the properties of the obtained program to the specifics of the used method and the considered proof have been somewhat elusive.

Here we will discuss and attempt to compare the features of two methods for extraction from non-constructive proofs: a refined variant of Friedman’s A-translation and Gödel’s “Dialectica” interpretation. We will describe the methods in a common setting suitable for automatic extraction and examine their behaviour on a case study from combinatorics, the Infinite Pigeonhole Principle. We will explain the computational processes behind the two obtained algorithms and will compare their time complexity. Finally, we will suggest how the implementation of the methods can be optimised to improve the automatically produced programs.


Hideki Tsuiki: {0,1,⊥}ω-Codings Derived from Dynamical Systems (Slides1, Slides2)

The itinerary of the tent function yields a non-redundant {0,1,⊥}ω-coding of the unit interval I = [0,1]. It is a topological embedding of I in {0,1,⊥}ω and is called the Gray-code embedding. Such a {0,1,⊥}ω-coding of a space induces a computational notion via multi-head indeterministic machines, and we can construct an admissible representation from such a coding; in the case of the Gray-code embedding, we obtain the signed-digit representation.

Among embeddings of I in {0,1,⊥}ω, the Gray-code embedding has the property that the two operations ‘inversion of the first digit’ and ‘one-digit shift’ define single-valued functions on I, namely the inversion f(x) = 1-x and the tent function. We generalize these ideas to {0,1,⊥}ω-codings of spaces and define the notions of invertible coding and dynamical coding. That is, a {0,1,⊥}ω-coding of a space X is invertible if inversion of the (n+1)-th digit induces a homeomorphism on every region determined by a {0,1}-sequence w of length n, and it is dynamical if the one-digit shift defines a 2-1 function on X from which the code is obtained as the itinerary. We study some properties of these two kinds of codings. After that, we characterize the dynamical codings of the spaces I and I².
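
To illustrate the correspondence just described, here is a small sketch (ours, not from the talk) that reads Gray-code digits off the tent-map itinerary and checks them against the digits g_i = b_i XOR b_(i+1) of the binary expansion. Floating point is used only for illustration, and the digit ⊥ (which occurs exactly when an iterate hits 1/2) is not handled.

    def tent(x):
        """The tent function t(x) = 1 - |2x - 1| on [0, 1]."""
        return 2 * x if x <= 0.5 else 2 * (1 - x)

    def gray_via_itinerary(x, n):
        """First n Gray-code digits of x, read off the tent-map itinerary."""
        digits = []
        for _ in range(n):
            x = tent(x)
            digits.append(1 if x > 0.5 else 0)   # the digit 'bottom' would arise at x == 0.5
        return digits

    def gray_via_binary(x, n):
        """Reference computation: g_i = b_i XOR b_(i+1) from the binary expansion of x."""
        bits, y = [], x
        for _ in range(n + 1):
            y *= 2
            b = int(y >= 1)
            bits.append(b)
            y -= b
        return [bits[i] ^ bits[i + 1] for i in range(n)]

    print(gray_via_itinerary(0.3, 12))
    print(gray_via_binary(0.3, 12))   # agrees with the itinerary for generic x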

This is joint work with Shuji Yamada.


Contributed Talks:


Josef Berger: The Disjunctive Version of Markov’s Principle and Related Axioms

We show how a generalisation of the disjunctive version of Markov’s principle is related to other axioms.


Ulrich Berger, Jens Blanck and Petter Køber: Domain Representations for Spaces of Compact Sets (Slides)

We present a method for constructing a domain representation over the Plotkin powerdomain of D from a given domain representation over D of a space X. We show that this operation is functorial over a category of domain representations with a natural choice of morphisms. The represented space has a natural characterisation as a space of certain compact subsets of X. We study the elements of the space and discuss some topological properties of the space. The case of admissible representations is discussed, and a more detailed account of domain representations of metric spaces is given.


Vasco Brattka: A New Computational Approach to Metamathematics (Slides)

We report on some recent progress on classifying the Weihrauch degree of theorems. This constitutes a new approach to metamathematics, where the question is which theorems can be continuously transferred into each other.


Douglas Bridges: Product A-Frames and a Related (Almost) Pre-apartness (Slides)

This paper continues the program of research in the constructive (Bishop-style) theory of pre-apartness on lattices.


Luca Chiarabini: Automatic Synthesis of an Efficient Algorithm for the Alignment Problem (Slides)

A widely studied problem in bioinformatics is to find the distance between two given sequences of symbols (over an alphabet Σ). The two main techniques developed in this area to solve this problem are the edit distance and the string alignment method.

The edit distance focuses on the transformation of the first list into the second one using a restricted set of operations (insertion, deletion, and replacement). Given two lists, we define the edit distance problem to be the task of finding the minimum number of insertion, deletion and substitution operations needed to transform the first list into the second one.

The other way to measure the distance of lists is the so-called similarity method. The idea is based on the concept of string alignment. Given two strings l1 and l2, an alignment of l1 and l2 is obtained by inserting a new symbol “_” (called a space, not belonging to Σ) into the strings l1 and l2 and then placing the two strings one above the other so that every character or space in either list is opposite a unique character or space in the other list, and no space is opposite another space. Afterwards the similarity between l1 and l2 is defined as the greatest α(A), where α is a real-valued scoring function defined on the alignments A between l1 and l2.

In computational biology the similarity of l1 and l2 is efficiently computed using dynamic programming; in fact the problem can be solved by storing in a matrix M, of dimension |l1| × |l2|, the values of the similarities between all the prefixes of lengths i ≤ |l1| and j ≤ |l2| of l1 and l2. This can be seen as a sort of generalization of the Fibonacci problem to two dimensions.
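
As a rough illustration of this dynamic-programming formulation (with a simple ad-hoc scoring function chosen by us; the talk is about extracting such a program from a constructive proof, not about writing it by hand), the matrix of prefix similarities can be filled in as follows.

    def similarity(l1, l2, match=1, mismatch=-1, space=-2):
        """Best score over all alignments of l1 and l2 (Needleman-Wunsch style)."""
        m, n = len(l1), len(l2)
        # M[i][j] = similarity of the prefixes l1[:i] and l2[:j]
        M = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            M[i][0] = i * space              # l1[:i] aligned entirely against spaces
        for j in range(1, n + 1):
            M[0][j] = j * space
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                diag = M[i - 1][j - 1] + (match if l1[i - 1] == l2[j - 1] else mismatch)
                M[i][j] = max(diag,
                              M[i - 1][j] + space,    # align l1[i-1] with a space
                              M[i][j - 1] + space)    # align l2[j-1] with a space
        return M[m][n]

    print(similarity("ACGT", "AGT"))   # prints 1, the score of aligning ACGT with A_GT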

In this work we try to answer the following question: how should a constructive proof be modified in order to synthesize a dynamic program for solving the similarity problem? We propose a method that we call list as memory. The idea consists in evaluating a sufficient amount of data in advance so that the extracted algorithm gets to reuse it instead of recomputing it each time it is needed. This is done by introducing into the proof a list of ad-hoc axioms. The method we propose cannot be applied automatically to an arbitrary proof; it should rather be seen as a general schema (to be instantiated case by case) for extracting dynamic programs from proofs. Future work in this direction could address the automation of this process.


Giovanni Curi: Imaginary Locales (Slides)

The concept of locale is known to provide a satisfactory substitute for the notion of topological space in intuitionistic impredicative settings, such as the internal logic of toposes (higher-order Heyting arithmetic HHA) or intuitionistic set theory IZF. In intuitionistic generalized predicative settings such as constructive set theory CZF and type theory CTT, two categories have been considered as counterparts of the category of locales: the category FSp of formal spaces, and its full subcategory IFSp of inductively generated formal spaces. Both are equivalent, in a fully impredicative setting, to the category of locales. In contrast with what happens with locales in a topos, in CZF and CTT no uniform method is available for the construction of limits and colimits (not even of binary products) in the category of formal spaces; the category of inductively generated formal spaces is, on the contrary, complete and cocomplete, but does not contain important classes of formal spaces (most notably the Boolean ones). Moreover, the existence of (co)limits in this subcategory is only guaranteed under the assumption of strong principles for the existence of inductively defined sets, such as the axiom REA in constructive set theory.

In this talk, working in CZF, the category FrmPr of frame presentations, and its dual, the category ImLoc of imaginary locales will be defined. ImLoc is complete (even without REA), contains FSp and IFSp as full subcategories, and is still impredicatively equivalent to the category of locales.

Using imaginary locales we shall then prove that the full subcategory of the category of formal spaces representing compact Hausdorff spaces is locally small in a certain fragment of CZF. This shows how imaginary locales can be used to prove results concerning the ordinary category of formal spaces, and how in some cases they make it possible to work in a weaker setting.

Imaginary locales appear, in particular, to be useful in the development of constructive (point-free) topology in settings that lack principles for very general inductive definitions of sets.


Hannes Diener: The Fan Theorem(s) in Constructive Reverse Mathematics (Slides)

As a way to recapture the unit interval’s compactness, which was somewhat lost when rejecting the law of excluded middle, L.E.J. Brouwer made generous use of the fan theorem. Since he also accepted the continuous choice principle, the complexity of the underlying sets did not matter to him. When working in a more general framework, though, complexity is of interest. Untangling the web of semi-constructive principles, we will present a hierarchy of versions of the fan theorem together with many analytical equivalences.

There are currently four versions of Brouwer’s fan theorem that have been investigated in the scope of constructive reverse mathematics. All of them enable one to conclude that a given bar is uniform. The difference between them lies in the required complexity of the bar. This ranges from the strongest requirement (decidable bars) to no restriction on the bar at all. Between these two extremes lie fan theorems for bars that are c-sets and Π⁰₁-sets, respectively. These two fan theorems (c-Fan and Π⁰₁-Fan) are of particular interest, since one can show that the (general) uniform continuity theorem, as stated below, lies between them.

(UCT)    If X is a compact metric space and Y an arbitrary metric space, then every pointwise continuous function f : X → Y is uniformly continuous.

Whether UCT is actually equivalent to either c-Fan or Π⁰₁-Fan is an open question. However, these two fan theorems seem to be “very close”, which makes the fact that UCT resists classification very intriguing. Moreover, related versions of it have been shown to be equivalent to one or the other of them. For example one can show:

  • c-Fan ⟺ every continuous function f on {0,1}ℕ is uniformly continuous;
  • c-Fan ⟺ every sequentially continuous function f : [0,1] → ℝ is sequentially uniformly continuous;
  • Π⁰₁-Fan ⟺ every equicontinuous sequence of functions is uniformly equicontinuous.
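
For reference, the underlying notions can be phrased as follows (standard definitions, not quoted from the talk). A set B of finite binary sequences is a bar if every infinite binary sequence α has an initial segment ᾱn in B, and B is a uniform bar if a single bound N works for all α:

    Bar(B)        ⟺   ∀α ∈ {0,1}ℕ ∃n (ᾱn ∈ B),
    Uniform(B)    ⟺   ∃N ∀α ∈ {0,1}ℕ ∃n ≤ N (ᾱn ∈ B).

A fan theorem for a class Γ of bars asserts that every bar in Γ is uniform; the versions above differ only in the choice of Γ (decidable bars, c-bars, Π⁰₁-bars, arbitrary bars).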

We will also discuss the implications of the pseudoboundedness principle BD-N on this hierarchy of fan theorems, highlighting once again how subtle the distinction between c-Fan and Π⁰₁-Fan is.


Ernst-Erich Doberkat: Lattice Properties of Congruences for Stochastic Relations (Slides)

We have a look at the set of congruences for a stochastic relation; conditions under which the infimum or the supremum of two congruences is a congruence again are investigated. Congruences are based on smooth (or countably generated) equivalence relations, so that we discuss these relations first.

Specifically, we characterize the supremum of smooth equivalence relations on an analytic space through an algebraic refinement operation and relate the supremum to the pushout of the factor spaces involved. The refinement operation helps in showing that the supremum of continuously presented relations is a smooth equivalence relation again, provided the base space is compact metric. For the general Polish case a counterexample demonstrates that the supremum even of two equivalence relations with a finite number of classes need no longer be countably generated. Turning to congruences for stochastic relations, we show that the supremum of two congruences is always a congruence again, provided the equivalences on which the congruences are based have a smooth supremum. The infimum of congruences is more difficult to characterize. It is shown that independent congruences have an infimum, and that congruences which have an infimum lead to bisimilar factor spaces. An application to Kripke models for Hennessy-Milner logic (the pet example for these relations) is discussed.

Consequences of the observation that the supremum of two smooth relations may fail to be smooth are noted: analytic spaces are not closed under pushouts, and the intersection of two countably generated σ-algebras need not be countably generated.


Gilda Ferreira and Paulo Oliva: Functional Interpretations of Intuitionistic Linear Logic (Slides)

We present a basic functional interpretation of pure (i.e. without the exponential !) intuitionistic linear logic and prove its soundness. Contrary to the interpretation of classical linear logic [5,6], the functional interpretation of intuitionistic linear logic is no longer symmetric. For instance, the interpretation of A ⊸ B, A ⊗ B and the quantifiers is as follows:

      |A ⊸ B|^{f,g}_{x,w}   :≡   |A|^x_{fxw} ⊸ |B|^{gx}_w
      |A ⊗ B|^{x,v}_{y,w}   :≡   |A|^x_y ⊗ |B|^v_w
      |∃z A(z)|^{x,z}_y     :≡   |A(z)|^x_y
      |∀z A(z)|^f_{y,z}     :≡   |A(z)|^{fz}_y .

The interpretation is such that from a proof of A one can extract terms t such that ∀y |A|^t_y holds, as is normally the case in ILω. In the asymmetric setting of ILLω, branching quantifiers are no longer needed for the characterization of the interpretation.

Afterwards, we show how to extend this interpretation to full intuitionistic linear logic. More precisely, we present the following interpretations of !A:

      |!A|^x     :≡   !∀y |A|^x_y
      |!A|^x_a   :≡   !∀y∈a |A|^x_y
      |!A|^x_y   :≡   !|A|^x_y ,

which correspond, via the embedding of ILω into ILLω, to three well-known functional interpretations of intuitionistic logic: Modified Realizability [7], Diller-Nahm [2] and Dialectica Interpretation [1].

We also investigate how the notion of “bounded interpretations” [3,4] can be captured in the intuitionistic linear logic context.

References:

[1] Avigad, J. and Feferman, S.: Gödel’s functional (“Dialectica”) interpretation. In S. R. Buss, editor, Handbook of Proof Theory, Studies in Logic and the Foundations of Mathematics, 137:337–405. North Holland, Amsterdam, 1998.

[2] Diller, J. and Nahm, W.: Eine Variante zur Dialectica-Interpretation der Heyting-Arithmetik endlicher Typen. Arch. Math. Logik Grundlagenforsch., 16:49–66, 1974.

[3] Ferreira, F. and Oliva, P.: Bounded functional interpretation. Annals of Pure and Applied Logic, 135:73–112, 2005.

[4] Ferreira, F. and Nunes, A.: Bounded modified realizability. The Journal of Symbolic Logic, 71:329–346, 2006.

[5] Oliva, P.: Modified realizability interpretation of classical linear logic. In Proc. of the 22nd Annual IEEE Symposium on Logic in Computer Sciences (LICS 2007). IEEE Press 2007.

[6] Oliva, P.: Computational interpretations of classical linear logic. In Proceedings of WoLLIC’07, LNCS 4576, pp. 285–296. Springer, 2007.

[7] Troelstra, A.S.: Realizability. In S. R. Buss, editor, Handbook of Proof Theory, 137:408–473. North Holland, Amsterdam, 1998.


Christine Gaßner: Relativizations of the P =?DNP Question over the Complex Numbers (Slides)

We consider the uniform BSS model of computation over the complex numbers where the machines can perform additions, multiplications, and tests of the form x = 0. If the guesses of the non-deterministic oracle machines are restricted to belong to {0,1}, we get the class DNP relative to an oracle.

We construct oracles relative to which P and DNP coincide, as well as oracles relative to which they differ.


Davorin Lešnik: Closed Sets in Synthetic Topology (Slides)

In the developing subject of synthetic topology, the closed sets are still not as well understood as the open ones. We take Martín Escardó’s idea of defining them via implication, and redefine basic notions of synthetic topology so as to make open and closed sets dual in an appropriate sense. Specifically, we take the notion of a test truth value as primitive and derive open and closed truth values as those for which the conjunction, respectively the implication, with test values (these two operations being adjoint) is again a test truth value.

We prove several propositions to motivate the new definitions. All of them have dual versions for open and closed truth values and subsets. As a practical example, we calculate the closed sets in a gros topos. This can be done quickly after observing that in it the open truth values are stable and form a lattice, and after proving a general theorem about such models of synthetic topology.


David Lester and Norbert Müller: Towards Machine Verification of a Practical Exact Real Arithmetic

There has been significant progress in the last decade on libraries for numeric computation with exact real (or complex) numbers. Here the term ‘real number’ is meant literally, as described in computable analysis and the Type-2 Theory of Effectivity. Corresponding implementations include iRRAM and RealLib (both written in C++) and the programs cited in [LG02,Le08] (written in Haskell and PVS, both of a functional nature).

Although all of these packages have been programmed very carefully, numerous errors made during their development have led to numerically wrong results. To circumvent this, a functional language implementation has actually already been verified using PVS. Unfortunately, such implementations are orders of magnitude slower than iRRAM or RealLib, which both use many optimizations only available in imperative languages.

We are therefore proposing to verify the iRRAM, as this now appears to be on the cusp of achievability. Because of the complexity of this software, the verification has to be done in several steps; the major components here are (1) the parts of the implementation that have direct access to the underlying (data-)structures representing real numbers, i.e. (1a) the basic arithmetic, (1b) the algorithms for conversions between discrete data types and the real numbers, (1c) the operators for the computation of limits, and (2) high-level algorithms that do not have access to the internals, but use real numbers as basic, atomic objects together with the functions mentioned in (1).

Such a multi-level verification closely matches the current state of the art in mechanical theorem proving for imperative languages, which uses a so-called “separation logic” to structure the problem. At the high level, it is shown that the algorithm correctly computes the values required of the data structures. A further task is then to show that the imperative implementation language correctly computes these values as well. At the moment, we only attempt the first of these tasks, which is usually where the main interest lies.


Norbert Müller, Margarita Korovina and David Lester: Making Big Steps in Trajectories (Slides)

The core of any hybrid system consists of a system of differential equations (initial value problems, IVP) enhanced with the ability to do discontinuous jumps. Unfortunately, from the viewpoint of TTE this implies noncomputability. This has been investigated in detail e.g. by Collins (2008). Nevertheless, the importance of these systems forces us to deal with them and provide solutions as good as possible.

One basic aspect of the hybrid systems is their evolution in time, i.e. the computation (if possible) of trajectories. In this paper we deal with corresponding solutions of the underlying IVPs up to the point where discontinuous jumps may appear.

In contrast to Edalat-Pattinson (2007), where the solution of the IVPs was attempted by an explicit enclosure of the solving functions, we try to avoid the evaluation of the solutions as much as possible. Our approach rather goes back to Müller (1993) and Müller/Moiske (1993), where polynomial time complexity could be proven under certain strong conditions: Essentially, the IVP has to be given in a way that allows the application of arbitrary high order methods for solving the IVP. This is the case e.g. for linear differential equations, which can be found in many hybrid systems.

In the solution of IVPs, classically Runge-Kutta methods are used, resulting in fifth order algorithms (i.e. the error can be described by the fifth power of the step width and a bound on higher derivatives of the solution). In consequence, the step width has to be chosen quite small, leading to an exponential number of steps (measured in the number of correct bits of the result). In our approach, the order can be chosen dynamically and arbitrarily high, leading to a much bigger step width. Asymptotically, the number of steps necessary to follow a trajectory to a switching point of the hybrid system can be polynomial under sufficiently strong conditions.
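
To make the trade-off explicit (a back-of-the-envelope computation, not taken from the paper): if one step of width h of a method of order p contributes an error of roughly C·h^p, then a final accuracy of 2^(-n) over a time interval of length T requires

    h ≈ (2^(-n)/C)^(1/p),   and hence about   T/h ≈ T·C^(1/p)·2^(n/p)   steps.

For fixed p this is exponential in the number n of correct bits, whereas letting the order p grow with n (which is possible whenever arbitrarily high-order methods apply, e.g. for linear equations) keeps 2^(n/p) bounded and the number of steps polynomial in n.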

In addition to a theoretical discussion of the approach, we present a prototypical implementation in the iRRAM software package.


Monika Seisenberger and Ulrich Berger: Realisability for Coinductive Definitions with Applications to Exact Real Arithmetic (Slides)

We show how to extract certified programs for exact real number algorithms from constructive proofs based on an inductive/coinductive characterisation of uniform continuity. We discuss the theoretical background of the method and present some examples.


Victor Selivanov: Fine Hierarchies via Priestley duality (Slides)

In most applications of fine hierarchies, their characterizations in terms of so-called alternating trees are of principal importance. Also, in many cases a suitable version of many-one reducibility exists that fits a given fine hierarchy. Here we show the somewhat surprising result that suitable versions of alternating trees and of m-reducibilities may be found for any given fine hierarchy, i.e. the methods of alternating trees and m-reducibilities are quite general, which is of some methodological interest.


Thomas Streicher and Bernhard Reus: Towards a Logic of Sequential Domains (Slides)

In the early 1990s Cartwright, Curien and Felleisen identified a category OSA of concrete data structures with errors and observably sequential algorithms, which is fully abstract for SPCF, i.e. PCF with a catch-construct allowing one to observe sequentiality. In particular, the category OSA is well-pointed. Later (around 2005) Jim Laird exhibited an equivalent category LBD of countably based locally Boolean domains and bistable maps between them. The category LBD has a universal object U consisting of all bistable endomorphisms of the bilifted natural numbers. In this respect LBD is analogous to the Scott model. Since LBD has a universal object, this gives rise to a total combinatory algebra, also denoted by U. One may thus consider the realizability topos RT(U), which contains LBD as a full subcategory, and RT(U) may then serve as a logical ambience for reasoning about partial sequential functionals of higher and recursive types.

Whereas “traditional” Synthetic Domain Theory (as pioneered by M. Hyland and P. Taylor) is based on a type S (Sierpinski) of observations which is a “dominance”, our account is based on a type O arising from the concrete data structure with one cell and no proper value to fill it. Notice that O is not a dominance itself, but [O → O] does contain a dominance S. By postulating an infinitary version of sketch and requiring that all non-constant functionals from U to have sequential indices we finally axiomatize in a synthetic way the liberal notion of sequentiality as introduced by Berry and Curien.


Takahashi Toda: Domain-theoretical Aspects of Convex Sets in Real Projective Space (Slides)

We introduce the notion of convexity in d-dimensional real projective space X as follows: a subset of X is simple convex iff for every pair of points in the set, there exists a unique segment connecting the two points and contained in the set.

A subset of X is convex iff the set is represented as the intersection of a nonempty family of simple convex sets.

Note that, since the real projective line spanned by two distinct points is homeomorphic to a circle, there exist two distinct segments connecting the two points.

This talk focuses on “well-formed” convex sets, known as linear convex sets, which form a proper subclass of the convex sets in X.

A linear convexity is characterized as follows: a subset of X is linear convex iff the set is represented as the intersection of a family of open simple convex sets.

Let L(X) denote the poset of all linear convex sets in X ordered by inclusion and O(X) denote the poset of all open simple convex sets in X ordered by inclusion. The most notable property of L(X) is that each linear convex set in X has its counterpart (i.e. a linear convex set) in the dual projective space X*. This correspondence is an order-reversing isomorphism.

The main results of this talk are to present two embeddings of L(X) in P(O(X)) and to show that, from a domain-theoretical viewpoint, the two images L(O(X)) and L*(O(X)) stand in a relation similar to that between L(X) and L(X*).


Klaus Weihrauch: Computable Separation in Topology (Slides)

We study computable separation axioms for computable topological spaces. We introduce 13 computable versions of the topological separation axioms T₀ to T₃ and clarify the logical relations between these axioms completely.


Martin Ziegler: Real Computation with Least Discrete Advice: A Complexity Theory of Nonuniform Computability (Slides)

It is folklore, particularly in the numerical and computer sciences, that instead of solving some general problem f : A → B, additional structural information about the input x in A (that is, any kind of promise that x belongs to a certain subset A′ of A) should be taken advantage of. Some examples from real number computation show that such discrete advice can even make the difference between computability and uncomputability. We turn this into a both topological and combinatorial complexity theory of information, investigating for several practical problems how much advice is necessary and sufficient to render them computable.

Specifically, finding a nontrivial solution to a homogeneous linear equation A·x = 0 for a given singular real N × N matrix A is possible when knowing rank(A), an integer between 0 and N − 1; and we show this to be best possible. Similarly, diagonalizing (i.e. finding a basis of eigenvectors of) a given real symmetric N × N matrix A becomes effective when knowing the number of its distinct eigenvalues: an integer between 1 and N (the latter corresponding to the nondegenerate case). And again we show that, in addition to A itself, N-fold (i.e. roughly log N bits of) information is indeed necessary in order to render this problem continuous. Finding some single eigenvector of A, on the other hand, requires and suffices with Θ(log N)-fold advice, namely the rounded logarithm of the dimension of a least-dimensional eigenspace of A.
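
One convenient way to make “k-fold advice” precise (a schematic rendering for the reader; the precise definitions of the paper may differ in detail) is the following: a function f defined on A is computable with k-fold advice iff there are sets A_1, …, A_k covering the domain of f such that each restriction of f to A_i is computable, the advice for an input x being (an index of) some piece A_i containing x. In the examples above the pieces are the level sets of rank(A) ∈ {0, …, N−1}, respectively of the number of distinct eigenvalues in {1, …, N}, which amounts to roughly log₂ N bits of discrete side information.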