Searching with Dice
Prof Conrado Martínez, Universitat Politècnica de Catalunya, Barcelona, Spain
Thursday, 29 July 2010, 14h00, M 111 (Seminar Room)
This is a survey talk on randomized data structures. I discuss how randomization helps in the design of efficient data structures with guarantees on expected performance. I will describe skip lists, due to W. Pugh, and the randomized BSTs developed by S. Roura and myself, together with their algorithms and their analysis, which, indeed, must be probabilistic.
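As an illustration of the kind of structure the talk covers, here is a minimal Python sketch of a skip list in the style of Pugh's original design: each inserted key gets a random height drawn from a geometric distribution, so search and insertion run in expected O(log n) time. This is a simplified sketch for intuition, not the speaker's code; the class and method names are my own.

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)  # one successor pointer per level

class SkipList:
    MAX_LEVEL = 16

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)  # sentinel with maximal height
        self.level = 0

    def _random_level(self):
        # Flip a fair coin until tails: the height is geometrically distributed.
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        x = self.head
        # Descend level by level, moving right while the next key is smaller.
        for i in range(self.level, -1, -1):
            while x.forward[i] is not None and x.forward[i].key < key:
                x = x.forward[i]
        x = x.forward[0]
        return x is not None and x.key == key

    def insert(self, key):
        update = [None] * (self.MAX_LEVEL + 1)
        x = self.head
        for i in range(self.level, -1, -1):
            while x.forward[i] is not None and x.forward[i].key < key:
                x = x.forward[i]
            update[i] = x  # last node visited on level i
        lvl = self._random_level()
        if lvl > self.level:
            for i in range(self.level + 1, lvl + 1):
                update[i] = self.head
            self.level = lvl
        node = Node(key, lvl)
        for i in range(lvl + 1):
            node.forward[i] = update[i].forward[i]
            update[i].forward[i] = node
```

The key point for the analysis is that the shape of the structure depends only on the coin flips, not on the insertion order, which is what makes the probabilistic guarantees hold for any input.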
What are automatic structures?
Dr Sasha Rubin, Research Visitor, UCT
Thursday, 13 May 2010, 16h00, M 111 (Seminar Room)
Finite automata are abstract models of computation that have very limited expressive power. However (or perhaps consequently) they appear all over computer science (pattern recognition, compression, natural language processing, hardware verification) as well as mathematics (geometric group theory, semigroup theory, dynamical systems, mathematical logic).
It turns out that automata can be used to decide certain first-order theories. (For instance, one can decide whether a statement written in first-order logic about addition on the integers is true or not.) This is the main motivation behind the notion of an automatic structure. I will discuss some of the central research directions in this area.
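To make the parenthetical example concrete: the addition relation on natural numbers is recognised by a two-state finite automaton that reads the binary expansions of x, y and z as a single word of bit-triples (least-significant bit first) and tracks the carry. This is the standard textbook construction behind automatic structures, sketched here in Python with names of my own choosing.

```python
def addition_automaton_accepts(x, y, z):
    """Run the carry automaton recognising the relation x + y = z.

    The input word is the sequence of bit-triples of (x, y, z), least-
    significant bit first; the automaton's state is the current carry.
    Accepts iff the final carry is 0 and every output bit matched z.
    """
    width = max(x.bit_length(), y.bit_length(), z.bit_length()) + 1
    carry = 0
    for i in range(width):
        a, b, c = (x >> i) & 1, (y >> i) & 1, (z >> i) & 1
        total = a + b + carry
        if total & 1 != c:       # the produced sum bit must equal z's bit
            return False
        carry = total >> 1       # transition: the new state is the carry
    return carry == 0            # accepting state: no carry left over
```

Because the relation is automatic, quantifiers can then be eliminated by the standard projection and complementation constructions on automata, which is what yields decidability of the full first-order theory.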
Computer-Aided Proofs for Impossibility Theorems in Social Choice Theory
Christian Geist, University of Amsterdam, The Netherlands
Thursday, 16 February 2010, 16h00, M 207
About thirty years after computer-aided proving celebrated its first major triumph with the Four Color Theorem, Lin and Tang have now successfully brought it to the area of social choice theory and proved some major impossibility results (Arrow, Muller-Satterthwaite, Sen): they used inductive lemmas to reduce the theorems to relatively small base cases, which they then checked by computer. Whereas their proceedings paper "Computer-Aided Proofs of Arrow's and Other Impossibility Theorems" concentrated on the inductive steps and their proofs, we present this new proof technique by reporting their results for Arrow's Theorem and then exemplifying it for the Gibbard-Satterthwaite Theorem, with a strong focus on the computer-aided part, which Lin and Tang introduced only at a conceptual level. We present the axiomatization and describe the steps needed to obtain a representation accepted as input by a SAT solver, which then performs the actual verification.
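The general encoding pattern can be illustrated on a toy instance: introduce one propositional variable per (profile, outcome) pair meaning "the rule picks this alternative on this profile", add clauses forcing exactly one outcome per profile, and add clauses for each axiom; the base case is then proved impossible by showing the formula unsatisfiable. The sketch below is my own illustration of this pattern, not Lin and Tang's actual encoding; the "axiom" clause and the brute-force check standing in for a SAT solver are hypothetical.

```python
from itertools import combinations, product

def exactly_one(vars_):
    """CNF clauses forcing exactly one of the given variables to be true."""
    clauses = [list(vars_)]                                    # at least one
    clauses += [[-a, -b] for a, b in combinations(vars_, 2)]   # at most one
    return clauses

# Toy instance: 2 profiles, 2 alternatives.  Variable var[(p, a)] encodes
# "the voting rule selects alternative a on profile p" (DIMACS-style ints).
profiles, alternatives = [0, 1], [0, 1]
var = {(p, a): 1 + p * len(alternatives) + a
       for p in profiles for a in alternatives}

cnf = []
for p in profiles:
    cnf += exactly_one([var[(p, a)] for a in alternatives])
# A hypothetical axiom clause: the rule may not pick alternative 0 on both
# profiles (real encodings derive such clauses from Pareto, IIA, etc.).
cnf.append([-var[(0, 0)], -var[(1, 0)]])

def satisfiable(cnf, n_vars):
    """Brute-force satisfiability check, standing in for a real SAT solver."""
    for bits in product([False, True], repeat=n_vars):
        value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(value(l) for l in clause) for clause in cnf):
            return True
    return False
```

In the actual proofs the clause set is generated mechanically from the axioms over the reduced base case, and unsatisfiability of the resulting formula is exactly the impossibility statement.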
Sequential Game Theory: A Formal and Abstract Approach
Dr Stéphane Le Roux, Ecole Polytechnique, Palaiseau, France
Thursday, 4 February 2010, 16h00, M 111 (Seminar Room)
In a game from game theory, a Nash equilibrium (NE) is a combination of one strategy per agent such that no agent can increase his payoff by unilaterally changing his strategy. Kuhn proved that all (tree-like) sequential games have NE. Osborne and Rubinstein abstracted over these games and Kuhn's result: they proved a sufficient condition on agent preferences for all games to have NE. I will present a necessary and sufficient condition, thus accounting for the game-theoretic frameworks that were left aside. The proof is formalised using Coq, and contrary to usual game theory it adopts an inductive approach to trees for definitions and proofs. By rephrasing a few game-theoretic concepts, by ignoring useless ones, and by characterising the proof-theoretic strength of Kuhn's and Osborne and Rubinstein's developments, the talk also intends to clarify sequential game theory.
Then I will sketch an alternative proof. Both proofs so far pre-process the preferences by topological sorting, which is surprising since it decreases the number of NE. Finally I will sketch two other proofs that pre-process the preferences only by transitive closure. The four proofs are all constructive and proceed by induction, and I will compare them when viewed as algorithms. The four proofs also constitute diverse viewpoints on the same concepts, trials of diverse techniques against the same problem, and reusable techniques for similar structures.
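The constructive core of Kuhn's result is backward induction: an equilibrium of a finite game tree is computed by recursively solving the subtrees and letting the agent at each node pick a best child. The following Python sketch, with a tree representation and names of my own choosing, shows that construction; it is an illustration of the classical technique, not the Coq formalisation discussed in the talk.

```python
def backward_induction(tree):
    """Compute a Nash equilibrium of a finite sequential game by backward
    induction (the construction behind Kuhn's theorem).

    A leaf is a dict of payoffs {agent: payoff}; an internal node is a pair
    (agent, [children]).  Returns (choices, payoffs): the child index chosen
    at each node along the equilibrium path, and the resulting payoffs.
    """
    if isinstance(tree, dict):            # leaf: nothing left to choose
        return [], tree
    agent, children = tree
    best = None
    for i, child in enumerate(children):
        path, payoffs = backward_induction(child)
        # The deciding agent keeps the child maximising his own payoff.
        if best is None or payoffs[agent] > best[1][agent]:
            best = ([i] + path, payoffs)
    return best
```

The induction on trees mirrors the inductive definitions used in the Coq development: the equilibrium property of the whole game follows from the equilibrium property of the chosen subtree plus the optimality of the root agent's choice.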