Invited Speakers

Samson Abramsky (University of Oxford, UK)   » slides of the talk
The logic and topology of non-locality and contextuality
Bell's theorem famously shows that no local theory can account for the predictions of quantum mechanics, while the Kochen-Specker theorem shows the same for non-contextual theories. Non-locality, and increasingly also contextuality, play an important role as computational resources in current work on quantum information. Much has been written on these matters, but there is surprisingly little unanimity even on basic definitions or the inter-relationships among the various concepts and results. We use the mathematical language of sheaves and monads to give a very general and mathematically robust description of the behaviour of systems in which one or more measurements can be selected, and one or more outcomes observed. In particular, we give a unified account of contextuality and non-locality in this setting.
- A central result is that an empirical model can be extended to all sets of measurements if and only if it can be realized by a factorizable hidden-variable model, where factorizability subsumes both non-contextuality and Bell locality. Thus the existence of incompatible measurements is the essential ingredient in non-local and contextual behaviour in quantum mechanics.
- We give hidden-variable-free proofs of Bell-style theorems.
- We identify a notion of strong contextuality (illustrated concretely in the sketch after this list), with surprising separations between non-local models: Hardy is not strongly contextual, GHZ is.
- We interpret Kochen-Specker as a generic (model-independent) strong contextuality result.
- We give general combinatorial and graph-theoretic conditions, independent of Hilbert space, for such results.
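To make the notion of strong contextuality concrete, here is a minimal sketch of the brute-force test, not taken from the talk: an empirical model is strongly contextual exactly when the support of its distributions admits no global assignment of outcomes that restricts into every measurement context. The encoding and the choice of a Popescu-Rohrlich box as the example (a standard strongly contextual model, simpler to write down than GHZ) are my own.

```python
from itertools import product

# Measurement contexts and the *support* of the empirical model in each;
# here a Popescu-Rohrlich box over measurements a, a' (written a2) for
# one party and b, b' (written b2) for the other.
measurements = ["a", "a2", "b", "b2"]
support = {
    ("a", "b"):   {(0, 0), (1, 1)},
    ("a", "b2"):  {(0, 0), (1, 1)},
    ("a2", "b"):  {(0, 0), (1, 1)},
    ("a2", "b2"): {(0, 1), (1, 0)},   # the one anti-correlated context
}

def strongly_contextual(measurements, support):
    """True iff no global outcome assignment restricts into every
    context's support, i.e. the support has no global section."""
    for values in product([0, 1], repeat=len(measurements)):
        g = dict(zip(measurements, values))
        if all(tuple(g[m] for m in ctx) in outcomes
               for ctx, outcomes in support.items()):
            return False          # found a global section
    return True

print(strongly_contextual(measurements, support))   # True for the PR box
```

The parity argument behind the output: the first three contexts force equal outcomes, the fourth forces unequal ones, and no single assignment can satisfy both demands.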
Bastien Chopard (University of Geneva, Switzerland)   » slides of the talk
A framework for multiscale and multiscience modeling based on the cellular automata and lattice Boltzmann approaches
Multiscale and multiscience problems are a challenge for computational sciences. Real systems are composed of several processes that interact across different scales. Although there is a large body of work dealing with multiscale applications, very few papers consider the methodological aspects. We propose a theoretical framework to design, run and analyze multiscale applications built on cellular automata, lattice Boltzmann and/or multiagent models. We illustrate the approach in the case of a biomedical application, in particular the process of restenosis in a stented coronary artery.
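As a rough, hypothetical illustration of the kind of coupling such a framework formalizes, the sketch below drives a fine-scale submodel several steps per coarse step and passes its coarsened state upward through a conduit; the toy diffusive dynamics, the block-averaging scale-bridging operator, and all names are my own assumptions, not the framework's actual notation.

```python
def step_fine(u):
    """One update of the fine-scale submodel (toy diffusive smoothing
    of a 1-D field with fixed boundary cells)."""
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2
                     for i in range(1, len(u) - 1)] + [u[-1]]

def coarsen(u, factor):
    """Scale-bridging operator: average blocks of fine cells."""
    return [sum(u[i:i + factor]) / factor for i in range(0, len(u), factor)]

def run_coupled(u_fine, n_coarse_steps, substeps):
    """Each coarse step drives `substeps` fine steps, then observes the
    coarsened state (the conduit from the fine to the coarse submodel)."""
    history = []
    for _ in range(n_coarse_steps):
        for _ in range(substeps):
            u_fine = step_fine(u_fine)
        history.append(coarsen(u_fine, factor=4))
    return history

u0 = [0.0] * 16
u0[8] = 1.0                                   # initial pulse
for coarse_state in run_coupled(u0, n_coarse_steps=3, substeps=4):
    print(coarse_state)
```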
David Corne (Heriot-Watt University, UK)   » slides of the talk
Unconventional Optimizer Development
The fruits of bio-inspired approaches to optimisation include several techniques that are now commonly used in practice to address real-world problems. A common situation is as follows: an organisation has a regularly occurring problem to solve (typically a logistics problem), and they engage a research group or a consultancy to deliver an optimizer that can then be used as they regularly solve instances of that problem. The research group will then spend perhaps several months developing the optimizer, and this will almost always involve: (i) deciding to use a specific algorithm framework (e.g. tabu search or evolutionary search); (ii) tuning the algorithm over many problem instances in the space of interest, towards getting the best results achievable in a given time (perhaps minutes). I argue that this typical approach should, in many, arguably most, cases be changed completely. First, the client does not need a slow algorithm that delivers great solutions; they need a very fast algorithm that delivers acceptable solutions. Second, there are many drawbacks and uncertainties in the enterprise of algorithm tuning; it would be good to mitigate these uncertainties via a different approach. Third, spending several months designing and tuning an algorithm that solves instances seems like a great waste of time when, in several cases, it may be possible simply to use this time to solve all of the instances the company is likely to face! In this talk I therefore discuss the ingredients of this unconventional approach.
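A toy sketch of the third point, with a hypothetical stand-in problem, solver, and instance space: spend the offline compute budget pre-solving the whole space exactly, then answer each incoming instance by lookup.

```python
from itertools import permutations

def exact_tsp(dist):
    """Deliberately slow exact solver: brute-force shortest round trip
    over cities 0..n-1 given a symmetric distance matrix."""
    n = len(dist)
    best = min(permutations(range(1, n)),
               key=lambda p: sum(dist[a][b]
                                 for a, b in zip((0,) + p, p + (0,))))
    return (0,) + best

# Offline: months of compute spent once, over every instance the client
# is likely to face (here just two tiny hypothetical instances).
instance_space = {
    "monday_run":  [[0, 2, 9], [2, 0, 6], [9, 6, 0]],
    "tuesday_run": [[0, 1, 4], [1, 0, 7], [4, 7, 0]],
}
solutions = {name: exact_tsp(d) for name, d in instance_space.items()}

# Online: the client gets an answer instantly, with no tuning involved.
print(solutions["monday_run"])
```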
Juhani Karhumäki (University of Turku, Finland)   » slides of the talk
Weighted Finite Automata: Computing with Different Topologies
We use a very conventional model of computation to define unconventional computational processes. This leads to an easily computable class of real functions; however, this class is very different from the classes of nicely behaved real functions in the classical sense. All of this rests on the fact that the topology of the unit interval is very different from the topology of the infinite words that represent numbers in that interval. In addition, the inherent recursive structure of finite automata is central here.
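As a concrete illustration, here is my encoding of a textbook example in the Culik-Karhumäki style (not necessarily the one used in the talk): a two-state weighted finite automaton reads the binary expansion 0.a1a2a3... of x and computes f(x) = x as a limit of matrix products; state 0 accumulates the value and state 1 computes the constant 1.

```python
import numpy as np

I = np.array([1.0, 0.0])                 # initial weight vector
F = np.array([0.0, 1.0])                 # final weight vector
M = {                                    # one transition matrix per letter
    "0": np.array([[0.5, 0.0], [0.0, 1.0]]),
    "1": np.array([[0.5, 0.5], [0.0, 1.0]]),
}

def wfa_value(bits):
    """Value I . M_a1 ... M_an . F on a finite prefix of the expansion."""
    v = I
    for b in bits:
        v = v @ M[b]
    return float(v @ F)

print(wfa_value("1"))       # 0.5
print(wfa_value("11"))      # 0.75
print(wfa_value("0101"))    # 0.3125, i.e. binary 0.0101
```

On longer and longer prefixes the value converges to x; which real functions admit such automata, and how nicely they behave, is exactly where the mismatch between the word topology and the interval topology becomes visible.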
Gheorghe Păun (Institute of Mathematics of the Romanian Academy, Romania)   » slides of the talk
Membrane Computing at Twelve Years (Back to Turku)
The talk is a quick introduction to membrane computing, briefly presenting twelve basic ideas (in whose development the author was involved; several other ideas also deserve to be mentioned), with some emphasis on two recently investigated notions: spiking neural P systems (SN P systems, for short), inspired by neural biology, and dP systems, a distributed class of P systems (initially introduced for so-called symport-antiport P systems, but extended also to SN P systems, the case discussed here in some detail).
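For readers new to the area, here is a minimal sketch of the mechanism underlying all of these variants: multiset rewriting rules applied nondeterministically, in a maximally parallel way, inside a membrane. It covers a single membrane only, the rules are a toy example of my own, and the SN P and dP features discussed in the talk are beyond it.

```python
import random
from collections import Counter

rules = [
    (Counter("a"), Counter("bb")),   # a -> bb
    (Counter("b"), Counter("c")),    # b -> c
]

def max_parallel_step(contents, rules):
    """One derivation step: assign objects to rules until no rule is
    applicable to what remains (maximal parallelism), then release the
    products."""
    remaining, products = contents.copy(), Counter()
    applicable = lambda: [rule for rule in rules if not rule[0] - remaining]
    choices = applicable()
    while choices:
        lhs, rhs = random.choice(choices)   # nondeterministic choice
        remaining -= lhs                    # consume the left-hand side
        products += rhs                     # queue the right-hand side
        choices = applicable()
    return remaining + products

state = Counter("aab")                      # membrane contents: a a b
for _ in range(3):
    state = max_parallel_step(state, rules)
    print(dict(state))
```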
Grzegorz Rozenberg (Leiden University, The Netherlands)
A formal framework for bioprocesses in living cells

Natural Computing is an interdisciplinary field of research that investigates human-designed computing inspired by nature as well as computation taking place in nature. In other words, Natural Computing investigates models, computational techniques, and computational technologies inspired by nature, and it also investigates, in terms of information processing, phenomena and processes taking place in nature.

One of the fascinating research areas of Natural Computing is the computational nature of biochemical reactions taking place in living cells. It is hoped that this line of research may contribute to a computational understanding of the functioning of a living cell. An important step towards this goal is understanding interactions between biochemical reactions. These reactions and their interactions are regulated, the main regulation mechanisms being facilitation/acceleration and inhibition/retardation. The interactions between individual reactions take place through their influence on each other, and this influence happens through these two mechanisms.

In our lecture we present a formal framework for the investigation of processes carried out by biochemical reactions in living cells. We motivate this framework by explicitly stating a number of assumptions that hold for a great number of biochemical reactions, and we point out that these assumptions are very different from the ones underlying traditional models of computation. We discuss some basic properties of processes carried out by biochemical reactions, and demonstrate how to capture and analyze, in our formal framework, some biochemistry-related notions.
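One concrete formalism in this spirit is the reaction systems of Ehrenfeucht and Rozenberg, which the sketch below follows (the example reactions are my own). A reaction is a triple (reactants, inhibitors, products); it is enabled on a state if all of its reactants and none of its inhibitors are present, and the successor state is the union of the products of all enabled reactions. Two assumptions of the kind the lecture makes explicit are visible in the code: no counting of multiplicities (a threshold view of biochemistry) and no permanency (an entity vanishes unless something produces it).

```python
def enabled(rx, state):
    """A reaction fires iff all its reactants are present in the state
    and none of its inhibitors is."""
    reactants, inhibitors, _ = rx
    return reactants <= state and not (inhibitors & state)

def result(reactions, state):
    """Successor state: the union of the products of all enabled
    reactions; empty if nothing is enabled (no permanency)."""
    return set().union(*(rx[2] for rx in reactions if enabled(rx, state)))

reactions = [
    ({"x"}, {"inh"}, {"y"}),        # x produces y, unless inhibited
    ({"y"}, set(),   {"x", "z"}),   # y regenerates x and also makes z
]
state = {"x"}
for _ in range(4):
    state = result(reactions, state)
    print(sorted(state))
```

Running it shows the non-permanency assumption at work: z appears whenever y fires but vanishes one step later, because nothing sustains it.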

Besides providing a formal framework for reasoning about processes instigated by biochemical reactions, the models discussed in the lecture are novel and attractive from the computer science point of view.

The lecture is of a tutorial style and self-contained; in particular, no knowledge of biochemistry is required.