We organize weekly or biweekly meetings in our group, featuring invited speakers as well as presentations by group members. Stay tuned!


Upcoming seminars

4 March 2025 at 11:00 in Room 319 (Battelle A)

Jean-Luc Falcone (University of Geneva)

Type algebras for algebraic data types

11 March 2025 at 11:00 in Room 319 (Battelle A)

Eric Coatanea (Tampere University, Finland)

Automatic generation and use of oriented coloured graphs in system design for modelling and design flaw detection

The presentation will discuss a research program aiming to automate model generation and the detection of design flaws in early development stages by automatically generating and exploiting directed coloured graphs that represent the architecture and behaviour of systems. The proposed framework should serve as a computational companion to developers, designers, and decision makers, providing a new form of computational support. Early development phases are characterized by ill-defined design problems described through a mixture of text, functional descriptions, and requirements combining numerical and textual data. In these phases, model generation, reasoning, and the detection of weaknesses or limitations still rely heavily on human expertise, despite an enormous increase in complexity that overwhelms human cognitive capabilities. In the 1990s, a branch of machine learning developed methods for qualitative reasoning. A mathematical machinery that exploits the decomposition of units into elementary dimensions, originally developed at the IBM Research Centre, combined power laws, contact variables, and partial derivatives to model the interactions of a system's variables using qualitative reasoning. The approach has since been extended to include problem formulation in the form of directed graphs, and the ability to detect and resolve inconsistencies using inventive principles from TRIZ. The long-term aim is an approach towards fully automatic generation of models that can be simulated, used to control complex systems, to evaluate and test design hypotheses, and to improve design solutions. The envisioned application domains are explainable and parsimonious machine learning model generation, and the design of cyber-physical systems.
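
As a toy illustration of the dimensional machinery mentioned above (this is our assumption about what the "decomposition of units into elementary dimensions" combined with power laws looks like in its simplest textbook form, not the speaker's actual framework), the sketch below derives a dimensionless power-law group for a pendulum from the null space of its dimension matrix.

```python
from sympy import Matrix

# Columns are the variables of a pendulum (period t, length l, gravity g, mass m);
# rows are the exponents of the base dimensions M, L, T for each variable.
dim = Matrix([
    [0, 0, 0, 1],    # mass exponent
    [0, 1, 1, 0],    # length exponent
    [1, 0, -2, 0],   # time exponent
])

# Each null-space vector gives the exponents of a dimensionless power-law group.
for v in dim.nullspace():
    print(v.T)   # [2, -1, 1, 0]  ->  t^2 * g / l is dimensionless
```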

Past seminars

20 February 2025 at 15:00 in Room 319 (Battelle A)

Roland Bouffanais (University of Geneva)

Collective Computing by Complex Intelligent Robotic Systems

We emphasize the necessity of decentralized collective computing for the efficient operation of complex intelligent robotic systems in unstructured environments. Specifically, we address the challenge of collectively exploring unknown and dynamic environments using a decentralized, heterogeneous multi-robot system with substantial computing capabilities. Our swarm of floor-mapping robots demonstrates scalability, robustness, and flexibility. These properties are systematically tested and quantitatively evaluated in unstructured and dynamic environments without any supporting infrastructure. Repeated experiments consistently confirm strong performance across all three features and highlight the system’s ability to integrate additional units while in operation. Although the generated occupancy-grid maps can be large, they remain fully distributed—no single robotic unit holds the complete map, which aligns with our cooperative path-planning strategy.

10 February 2025 at 12:30 in Room 301 (Battelle A)

Kavé Salamatian (University of Savoy Mont Blanc)

On geometrization of graphs: ironing the graph for correct geometric interpretations with applications to RNA-Seq data analysis

Graphs are a fundamental tool to capture complex interactions through a relatively simple logical framework of node-to-node relations, and they are used in a wide range of applications and scientific domains. Yet understanding the structure and global properties of a graph remains challenging. Geometric interpretations are widely used to represent complex problems and to help develop intuitions that lead to solutions. Such interpretations are at the core of classical machine learning techniques like k-means. Attempts to define geometric interpretations generally consider vertices as "points" sitting on a low-dimensional Riemannian manifold and link weights as geodesic "distances" between these points, leading to "graph embedding". This has a large set of applications, including graph clustering, link prediction, graph alignment, graph robustness and resilience, and classification tasks in machine learning. More recently, Graph Neural Networks (GNNs) embed nodes and links into a space defined by the structure of a neural network. However, the choice of the embedding manifold is critical and, if incorrectly performed, can lead to misleading interpretations due to incorrect geometric inference. In this talk, we argue that classical embedding techniques cannot lead to correct geometric interpretation, because the microscopic details of the manifold, e.g. the curvature at each point, that are needed to derive geometric properties with Riemannian methods are not available. We explain that, for a correct geometric interpretation, the embedding of a graph should be done over regular constant-curvature manifolds. To this end, we present an embedding approach, the discrete Ricci flow graph embedding (dRfge), based on the discrete Ricci flow, which adapts the distances between nodes in a graph so that the graph can be embedded onto a constant-curvature manifold that is homogeneous and isotropic. One of our major contributions is the proof of the convergence of the discrete Ricci flow to a constant curvature and a stable distance metric over the edges. A drawback of the discrete Ricci flow is its high computational complexity, which has prevented its use in large-scale graph analysis. We describe new algorithmic solutions that make it feasible to compute the Ricci flow for graphs of up to 50k nodes and beyond. The intuitions behind the discrete Ricci flow make it possible to obtain new insights into the structure of large-scale graphs, which we demonstrate through a case study on analysing single-cell RNA-sequencing time-series data.
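
For readers unfamiliar with the idea, here is a minimal sketch of a discrete Ricci flow on a graph. It uses the simplest combinatorial Forman curvature (cheap to compute and, unlike the curvature notions behind dRfge, independent of the evolving weights), so it only conveys the shape of the iteration, not the actual dRfge algorithm.

```python
import networkx as nx

def forman_curvature(G, u, v):
    # Simplest combinatorial Forman-Ricci curvature of the edge (u, v),
    # ignoring weights and triangle contributions.
    return 4 - G.degree(u) - G.degree(v)

def discrete_ricci_flow(G, steps=20, eps=0.05):
    # Shrink edges of positive curvature, stretch edges of negative curvature,
    # then renormalise so the total edge length is preserved. In this toy
    # version the curvature does not depend on the weights; dRfge recomputes a
    # weight-dependent curvature at every step.
    for _ in range(steps):
        curv = {(u, v): forman_curvature(G, u, v) for u, v in G.edges}
        for (u, v), k in curv.items():
            G[u][v]["weight"] *= (1 - eps * k)
        total = sum(d["weight"] for _, _, d in G.edges(data=True))
        scale = G.number_of_edges() / total
        for u, v in G.edges:
            G[u][v]["weight"] *= scale
    return G

G = nx.karate_club_graph()
nx.set_edge_attributes(G, 1.0, "weight")
discrete_ricci_flow(G)
print(sorted(nx.get_edge_attributes(G, "weight").values())[:5])
```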

3 February 2025 at 12:30 in Room 301 (Battelle A)

Bastien Chopard (University of Geneva)

Modeling the collective behavior of a fleet of autonomous vehicles

The impact on traffic flow of a fleet of autonomous vehicles is considered in the case where a two-lane road narrows to a single lane. We propose an algorithm that ensures the vehicles will never have to reduce their speed, and we analyze its effectiveness and the resulting constraints on large-scale traffic and on the layout of the area. (Joint work with Pierre Leone)

27 January 2025 at 12:30 in Room 301 (Battelle A)

Théo von Düring (University of Geneva)

A few problems to solve all together

In December 2024, a group of six UNIGE students took part in the SWERC programming contest (https://swerc.eu/2024/). The contest consists in solving algorithmic problems and coding the solutions efficiently. One of the students, Théo, will present some of the problems he found interesting, and we will try to solve them together. This somewhat special session promises to be very entertaining!

20 January 2025 at 12:30 in Room 301 (Battelle A)

Jean-Luc Falcone (University of Geneva)

Derivatives of regular expressions

Regular expressions are a powerful tool used by programmers for many kinds of string manipulation. They are available in many programming languages, with implementations usually based on finite-state automata. In this talk we will discover an alternative implementation based on Brzozowski derivatives. This approach offers an intuitive way to understand regular expressions while keeping strong theoretical foundations. Last but not least, it is trivial to write efficient implementations in functional programming languages.
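
To give a flavour of the approach, here is a minimal sketch of regular-expression matching via Brzozowski derivatives (written in Python for brevity, even though the talk emphasises functional languages; it omits the simplification rules needed for efficiency).

```python
from dataclasses import dataclass

class Re: pass                       # regular-expression AST
@dataclass
class Empty(Re): pass                # matches nothing
@dataclass
class Eps(Re): pass                  # matches the empty string
@dataclass
class Lit(Re): c: str                # matches a single character
@dataclass
class Cat(Re): a: Re; b: Re          # concatenation
@dataclass
class Alt(Re): a: Re; b: Re          # alternation
@dataclass
class Star(Re): a: Re                # Kleene star

def nullable(r: Re) -> bool:
    # Does r accept the empty string?
    match r:
        case Empty() | Lit(_): return False
        case Eps() | Star(_): return True
        case Cat(a, b): return nullable(a) and nullable(b)
        case Alt(a, b): return nullable(a) or nullable(b)

def deriv(r: Re, c: str) -> Re:
    # Brzozowski derivative: the words w such that c + w is accepted by r.
    match r:
        case Empty() | Eps(): return Empty()
        case Lit(d): return Eps() if c == d else Empty()
        case Alt(a, b): return Alt(deriv(a, c), deriv(b, c))
        case Star(a): return Cat(deriv(a, c), r)
        case Cat(a, b):
            left = Cat(deriv(a, c), b)
            return Alt(left, deriv(b, c)) if nullable(a) else left

def matches(r: Re, word: str) -> bool:
    # Differentiate by each character in turn, then test nullability.
    for c in word:
        r = deriv(r, c)
    return nullable(r)

r = Cat(Star(Cat(Lit("a"), Lit("b"))), Lit("a"))   # (ab)*a
print(matches(r, "aba"), matches(r, "ab"))          # True False
```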

13 January 2025 at 12:30 in Room 301 (Battelle A)

David Schindl (Haute Ecole de Gestion de Genève)

Upper bounds on the average distance in a graph

A graph invariant is a graph parameter that does not depend on how the vertices are labelled or how the graph is drawn. Well-known examples include the chromatic number, the independence number, the domination number, and the diameter; other examples are the average distance between vertices and the size of a maximum induced forest. Graph invariants are particularly studied in the context of computer-assisted conjecture generation, where conjectures are expressed as equalities or inequalities between [simple functions of] such invariants. In the late 1980s, an automated conjecturing program called Graffiti generated several meaningful conjectures of this kind, among them the fact that the average distance never exceeds the independence number. This surprising, non-trivial relation was proved a few years later in a five-page proof by F. Chung. In 1992, Graffiti conjectured a stronger relation, namely that the average distance can be bounded by half the maximum size of an induced bipartite subgraph. In this talk, after a brief overview of automated conjecturing in graph theory, we present the milestones of our proof that the average distance is actually bounded by half the size of a maximum induced forest, a result from which the 1992 conjecture directly follows. We finally propose the next step, i.e. we conjecture a further, stronger upper bound on the average distance. This conjecture is still open today.
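
To make the invariants in these conjectures concrete, here is a small brute-force sketch (only practical for tiny graphs; the helper names and the choice of the Petersen graph are ours) comparing the average distance with the independence number and with half the size of a maximum induced forest.

```python
from itertools import combinations
import networkx as nx

def independence_number(G):
    # Largest vertex subset with no edge inside (brute force, tiny graphs only).
    for k in range(G.number_of_nodes(), 0, -1):
        if any(G.subgraph(S).number_of_edges() == 0 for S in combinations(G, k)):
            return k
    return 0

def max_induced_forest(G):
    # Largest vertex subset whose induced subgraph is acyclic.
    for k in range(G.number_of_nodes(), 0, -1):
        if any(nx.is_forest(G.subgraph(S)) for S in combinations(G, k)):
            return k
    return 0

G = nx.petersen_graph()
mu = nx.average_shortest_path_length(G)    # average distance (5/3 here)
print(mu <= independence_number(G))        # Graffiti's proven conjecture
print(mu <= max_induced_forest(G) / 2)     # the stronger bound discussed in the talk
```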

16 December 2024 at 12:30 in Room 301 (Battelle A)

Dimi Racordon (EPFL)

Flow-sensitive reasoning with control flow graphs

One recent trend in programming languages is to equip type systems with flow-sensitive capabilities to reason about uniqueness and protect programs from unintended mutation through shared aliases. One representative of this trend is Hylo, an ongoing research project that proposes a safe yet performance-oriented programming language for systems programming. In this talk, I will present the implementation of Hylo's flow-sensitive features. This implementation involves old and new algorithms on control-flow graphs, a common intermediate representation. In particular, I will explain how Hylo's compiler computes liveness information to detect overlapping accesses to shared data ahead of time.
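
As a reminder of the classical ingredient involved (this is a generic textbook liveness analysis, not Hylo's actual code), the sketch below computes live-variable sets on a tiny control-flow graph by backward fixed-point iteration; a variable that is live across a region is exactly the kind of information needed to flag overlapping accesses.

```python
# Each basic block has `use` and `defs` sets plus its successors. We iterate
#   live_out[B] = union of live_in over successors of B
#   live_in[B]  = use[B] | (live_out[B] - defs[B])
# until nothing changes.
blocks = {
    "entry": {"use": set(),  "defs": {"x"}, "succ": ["loop"]},
    "loop":  {"use": {"x"},  "defs": {"y"}, "succ": ["loop", "exit"]},
    "exit":  {"use": {"y"},  "defs": set(), "succ": []},
}

def liveness(blocks):
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b, info in blocks.items():
            out = set().union(*(live_in[s] for s in info["succ"]))
            inn = info["use"] | (out - info["defs"])
            if out != live_out[b] or inn != live_in[b]:
                live_out[b], live_in[b] = out, inn
                changed = True
    return live_in, live_out

print(liveness(blocks))   # 'x' is live across the loop, 'y' until 'exit'
```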

9 December 2024 at 12:30 in Room 301 (Battelle A)

Arnaud Casteigts (University of Geneva)

The saga of the four-color theorem

In this talk, I will present the beautiful story of the four-color theorem. This theorem establishes that every planar graph (or equivalently, every geographical map) can be colored with four colors such that no two neighbors have the same color. The saga started in 1852, when a British botanist, Francis Guthrie, observed that the map of counties in England could be colored with four colors, and conjectured that this was the case for any map. Proving this theorem turned out to be more challenging than expected; it was finally proved by Appel and Haken in 1976, who reduced the problem to 1834 configurations checked by computer. This proof being quite unorthodox, it was not widely accepted until Werner and Gonthier formalized it using the Coq proof assistant in 2005. Throughout the talk, I will give some background on graph coloring and mention some current applications of this versatile problem. I will present the (reasonably simple) proof of the weaker five-color theorem, whose main ideas also appear in the proof of the four-color theorem. Finally, I will discuss the computational complexity of the graph coloring problem, showing that deciding whether a given graph is 3-colorable (or 4-colorable) is NP-hard for general graphs, by a reduction from 3-SAT.
Link to the presentation
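
To make the complexity statement tangible, here is a small brute-force sketch (exponential in the worst case, as one expects for an NP-hard problem) that tests k-colorability by backtracking; the Grötzsch graph is a classical example that needs four colors despite containing no triangle.

```python
import networkx as nx

def colorable(G, k, coloring=None, nodes=None):
    # Backtracking test: can G be properly colored with k colors?
    if coloring is None:
        coloring, nodes = {}, list(G)
    if not nodes:
        return True
    v, rest = nodes[0], nodes[1:]
    for c in range(k):
        if all(coloring.get(u) != c for u in G[v]):   # no conflict with neighbors
            coloring[v] = c
            if colorable(G, k, coloring, rest):
                return True
            del coloring[v]
    return False

G = nx.mycielski_graph(4)                 # the Grötzsch graph: triangle-free, chromatic number 4
print(colorable(G, 3), colorable(G, 4))   # False True
```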

2 December 2024 at 13:00 in Room 301 (Battelle A)

Jonas Lätt (University of Geneva)

Challenges in computational fluid dynamics: an introductory presentation with applications in biomedical and geophysical flow modeling

The presentation provides an introduction to computational fluid dynamics and presents the challenges faced in this field to overcome the complexity of the physical problems on the one hand and the complexity of high-performance computing hardware on the other. The talk focuses on a specific numerical approach, the lattice Boltzmann method, and guides the audience through multiple facets of numerical simulation, including the numerical method, algorithmics, programming models, and hardware. Examples of simulations in the biomedical and geophysical sciences are included.
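
As a taste of the method, here is a deliberately tiny lattice Boltzmann sketch (a D1Q3 scheme for pure diffusion with periodic boundaries, nowhere near the production solvers discussed in the talk) showing the characteristic collide-and-stream loop.

```python
import numpy as np

N, steps, tau = 200, 500, 0.8
w = np.array([2/3, 1/6, 1/6])        # lattice weights
c = np.array([0, 1, -1])             # lattice velocities (rest, right, left)

rho = np.exp(-((np.arange(N) - N / 2) ** 2) / 50.0)   # initial density bump
f = w[:, None] * rho[None, :]                          # populations at equilibrium

for _ in range(steps):
    rho = f.sum(axis=0)
    feq = w[:, None] * rho[None, :]
    f += (feq - f) / tau                               # BGK collision step
    for i in range(3):
        f[i] = np.roll(f[i], c[i])                     # streaming step (periodic)

rho = f.sum(axis=0)
print(rho.max(), rho.sum())    # the bump has spread while total mass is conserved
```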

18 November 2024 at 12:30 in Room 301 (Battelle A)

François Fleuret (University of Geneva)

Attention Models, World Models, and Going Beyond World Understanding

In this talk, I will first present the standard attention-based operations that have been so successful for large language models, and show how we have used these techniques to devise state-of-the-art world models for reinforcement learning. Then I will present preliminary results on the self-generation of non-NLP reasoning tasks that expand a basic set of simple hand-designed reasoning challenges. This approach provides a blueprint of a possible strategy for developing abstract reasoning beyond reinforcement learning, without using human-generated data.
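
For reference, the "standard attention-based operation" mentioned above is, at its core, the scaled dot-product attention sketched below (a single head in NumPy, with an optional causal mask as used in language models).

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, causal=True):
    # softmax(Q K^T / sqrt(d)) V; the causal mask lets position t attend
    # only to positions <= t, as in autoregressive language models.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    if causal:
        T = scores.shape[0]
        scores = np.where(np.tril(np.ones((T, T), dtype=bool)), scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

T, d = 5, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # (5, 8)
```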

11 November 2024 at 12:30 in Room 301 (Battelle A)

Carolina Lucía Gonzalez (University of Buenos Aires)

Locally checkable problems parameterized by mim-width

Locally checkable problems are vertex partitioning problems for which a solution has to satisfy some local property for each vertex (that is, a property that involves only the solution restricted to the vertex and its neighbors). This is the case of several variants of minimum dominating set, maximum independent set and k-coloring, among others. In this talk, we will delve into the question of what makes locally checkable problems easy or hard, when restricted to certain graph classes. Specifically, we will study the conditions that locally checkable problems must satisfy in order to be XP parameterized by mim-width.
Link to the paper
Link to the presentation

4 November 2024 at 12:30 in Room 301 (Battelle A)

Julia Buwaya (University of Geneva)

Mobile crowdsensing through the lens of algorithmic game theory

Mobile crowdsensing (MCS) is a recent paradigm that leverages sensing data from the mobile devices of people (the crowd) to serve various goals. These include business goals, such as designing and evaluating a health care product using the cumulative data gathered by a crowd through the powerful sensors integrated in smartphones. MCS is also used to improve public and individual services. In this talk, I present a selfish atomic routing game that serves as a model to optimize the allocation of tasks in an MCS system. For the Nash equilibria of our game, we can transfer efficiency guarantees, i.e., the worst-case ratio between the welfare of an equilibrium and the welfare of a social optimum is provably bounded by a small constant when cost functions are polynomials. An epsilon-approximation of a Nash equilibrium can be computed in polynomial time for affine cost functions. Based on the analysis of our model, we derive a mechanism for the automation of efficient task allocation in MCS, and we prove that the mechanism is truthful. At the end of the talk, I briefly present initial results from an ongoing project, in cooperation with the Swiss Tropical and Public Health Institute in Basel and the Citizen Science Center of ETH Zurich, to monitor electromagnetic radiation using MCS.
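
The following toy sketch (our own simplification with made-up affine costs, not the talk's actual routing model) illustrates the flavour of an atomic congestion game for task allocation: players repeatedly switch to a cheaper task, and best-response dynamics settles in a pure Nash equilibrium.

```python
# Each player chooses one of two sensing tasks; the cost of a task is an
# affine function of the number of players assigned to it. Best-response
# dynamics converges to a pure Nash equilibrium for such congestion games.
cost = {"task_a": lambda n: 2 * n + 1, "task_b": lambda n: 3 * n}
players = {f"p{i}": "task_a" for i in range(5)}   # everyone starts on task_a

def load(task):
    return sum(1 for t in players.values() if t == task)

changed = True
while changed:
    changed = False
    for p, current in players.items():
        for alt in cost:
            if alt != current and cost[alt](load(alt) + 1) < cost[current](load(current)):
                players[p] = alt          # unilateral improvement: switch task
                current = alt
                changed = True

print(players, {t: load(t) for t in cost})
```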

14 October 2024

Arnaud Casteigts (University of Geneva)

Temporal graph theory: paradigm and algorithmic challenges

A temporal graph is a graph whose edges are only present at certain points in time. These graphs have received growing attention over the past two decades, due to their ability to model various types of dynamic networks. On the theoretical side, temporal graphs are quite different from static graphs. For instance, reachability (defined in terms of paths that traverse edges in chronological order) is not transitive. This and other features have important consequences, both structural and algorithmic, with most questions becoming computationally hard. In this talk, I will review some basic results in this young field, with a focus on reachability questions.
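
Here is a minimal sketch of this reachability notion (using the convention that a temporal path must traverse edges at strictly increasing times) and of its non-transitivity.

```python
def reachable_from(source, temporal_edges):
    # Each edge is (u, v, t): an undirected edge present only at time t.
    # Scan edges in chronological order and record earliest arrival times;
    # an edge at time t can extend a journey that arrived strictly before t.
    arrival = {source: 0}
    for u, v, t in sorted(temporal_edges, key=lambda e: e[2]):
        for a, b in ((u, v), (v, u)):
            if a in arrival and arrival[a] < t and t < arrival.get(b, float("inf")):
                arrival[b] = t
    return set(arrival)

# a reaches b (edge at time 3) and b reaches c (edge at time 2), but a does
# not reach c: the a-b edge only appears after the b-c edge has vanished.
edges = [("a", "b", 3), ("b", "c", 2)]
print(reachable_from("a", edges))   # {'a', 'b'}
print(reachable_from("b", edges))   # {'a', 'b', 'c'}
```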

29 February 2024

Sandrine Blazy (IRISA, University of Rennes)

How to provide proof that software is bug-free? Verified compilation to the rescue

Deductive verification provides very strong guarantees that software is bug-free. Since the verification is usually done at the source level, the compiler becomes a weak link in the software-production chain. Verifying the compiler itself provides guarantees that no errors are introduced during compilation. This talk will illustrate this approach through CompCert, the first fully verified compiler for C that is actually usable on real source code and that produces decent target code on real-world architectures. More generally, this approach opens the way to the verification of software tools involved in the production and verification of software.

18 January 2024

Paul-André Melliès (IRIF, CNRS, Paris)

Game semantics: an interactive interpretation of linear logic and concurrent separation logic

In this introductory talk, I will start by describing the principles that govern the game semantics of linear logic. In this interactive interpretation of logic, every formula A defines a two-player dialogue game in which the Player tries to establish the validity of the formula A while the Opponent tries to refute it. A proof of the formula A is interpreted as a winning strategy for the associated dialogue game, which guarantees that the Player will convince the Opponent and win the play. I will explain how the notion of template game, which I introduced a few years ago, makes it possible to adapt these ideas to imperative and concurrent programming with shared memory, in order to establish the soundness of concurrent separation logic.

11 January 2024

Karine Altisen (Verimag, University of Grenoble)

Interactive Proof for Self-Stabilizing Algorithms using PADEC: Squeezing Streams

PADEC is a framework to build machine-checked proofs of self-stabilizing algorithms using the Coq proof assistant. The framework includes the definition of the computational model, tools for time complexity and for the composition of algorithms, lemmas for common proof patterns, and case studies. A constant goal has been to convince designers that what we formally prove using PADEC is what they expect, by using definitions that are (syntactically) as close as possible to their usual ones. In this talk, we focus on the composition of algorithms: the correctness proof involves a lazy-productive operator called 'squeeze'. This operator transforms a stream in order to remove repeats under some decidable conditions.
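
As a very loose illustration (in Python, whereas PADEC's 'squeeze' is defined co-inductively in Coq and removes repeats only under decidable conditions that guarantee productivity), collapsing consecutive repeats of a lazy stream can look like this.

```python
from itertools import groupby, islice, cycle

def squeeze(stream):
    # Lazily collapse consecutive repeats of a (possibly infinite) stream.
    for value, _group in groupby(stream):
        yield value

stuttering = cycle([0, 0, 0, 1, 1])          # an infinite stream with repeats
print(list(islice(squeeze(stuttering), 6)))  # [0, 1, 0, 1, 0, 1]
```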

19 December 2023

Petra Wolf (University of Bordeaux)

Kernelizing Temporal Exploration Problems

We study the kernelization of exploration problems on temporal graphs. A temporal graph consists of a finite sequence of snapshot graphs G = (G1, G2, …, GL) that share a common vertex set but may have different edge sets. The non-strict temporal exploration problem (NS-TEXP for short), introduced by Erlebach and Spooner, asks whether a single agent can visit all vertices of a given temporal graph, where the edges traversed by the agent are present in non-strictly increasing time steps, i.e., the agent can move along the edges of a snapshot graph with infinite speed. The exploration must be completed at the latest in the last snapshot graph. The optimization variant of this problem is the k-arb NS-TEXP problem, where the agent's task is to visit at least k vertices of the temporal graph. We show that under standard computational complexity assumptions, neither NS-TEXP nor k-arb NS-TEXP admits a polynomial kernel in the standard parameters: number of vertices n, lifetime L, number of vertices to visit k, and maximal number of connected components per time step γ; nor in the combined parameters L + k, L + γ, and k + γ. On the way to establishing these lower bounds, we answer a couple of questions left open by Erlebach and Spooner. We also initiate the study of structural kernelization by identifying a new parameter p(G) of a temporal graph; informally, this parameter measures how dynamic the temporal graph is. Our main algorithmic result is the construction of a polynomial (in p(G)) kernel for the more general Weighted k-arb NS-TEXP problem, where weights are assigned to the vertices and the task is to find a temporal walk of weight at least k.

23 November 2023

Xavier Urbain (LIRIS, Univ. Claude Bernard, Lyon)

Safety and Versatility: verifying swarms of autonomous robots

After a brief overview of my journey from automated deduction and its formal certification towards the verification of distributed protocols, I will present our work in the context of swarms of autonomous mobile robots. In this recent model, robots compute their actions based on their own perception of their surroundings. Characterized by extreme dynamicity and versatility, these swarms allow us to envision numerous applications: exploration of dangerous or devastated areas, search and rescue, setting up dynamic networks, etc. Their formal verification, compulsory in critical, life-and-death scenarios, raises formidable challenges. To this end, we develop Pactole, a formal framework mechanised in the Coq proof assistant. Pactole allows one to establish the correctness of distributed protocols for robotic swarms; it also proves useful to show formally that some tasks are infeasible under particular assumptions.