Advancing AI research

The Erlangen AI Hub is built on an active core network established through two previous influential EPSRC-funded projects. This solid foundation of collaborative research has already helped to establish the UK as a world leader in applied and computational topology. The hub’s leadership team has collectively supervised more than 200 PhDs, received £117m of external funding, been awarded 4 Whitehead, 2 Adams, and 3 Leverhulme prizes, and created 17 tech spinout companies.

Our ambitious research programme will apply geometry and topology to four key questions that underlie modern AI systems:   

  • How can hidden structures in data be discovered and expressed in the language of geometry and topology so that they can be exploited by machine learning models?  
  • Can we use geometric and topological tools to characterise machine learning models so that we can understand when and how they work and fail?  
  • How can we guarantee that learning benefits from these structures, and how can we use these insights to develop better, more efficient, and safer models?
  • How can we use such models in future AI systems that make decisions potentially affecting billions of people? 

Our research themes

Our four key research themes aim to provide rigorous solutions to these questions. Find out more about these below.

Theme A: Understanding Data

Lead: Omer Bobrowski (QMUL)
Deputy Lead: Ulrike Tillmann (Oxford)

In today’s landscape of machine learning and AI, vast amounts of data exist in every layer of every system. Unveiling the structures and patterns within these data is therefore pivotal for understanding how AI systems operate, for enhancing performance in terms of interpretability, reliability, robustness, and efficiency, and for improving fairness and reducing bias.

However, the sheer scale, dimensionality, multimodality, noise, dynamics, and incompleteness of data pose significant challenges. The research in this theme aims to leverage tools from geometry, topology, and probability to uncover structure in data, such as symmetries and hierarchies, enabling their exploitation for improved system functionality and performance.
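As a toy illustration of the kind of structure these tools can reveal (our own sketch, not hub code): the simplest 0-dimensional topological invariant of a point cloud is its number of connected components at a given distance scale, which can be computed with a union-find structure:

```python
from itertools import combinations

def connected_components(points, scale):
    """Count the clusters of a point cloud when any two points closer
    than `scale` are joined -- its 0-dimensional topology at that scale."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for (i, p), (j, q) in combinations(enumerate(points), 2):
        dist = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        if dist <= scale:
            parent[find(i)] = find(j)  # merge the two clusters

    return len({find(i) for i in range(len(points))})

# Two well-separated clusters in the plane:
cloud = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (5.0, 5.0), (5.1, 5.1)]
print(connected_components(cloud, scale=0.5))   # -> 2
print(connected_components(cloud, scale=10.0))  # -> 1
```

Tracking how such components appear and merge as the scale varies is the idea behind persistent homology, one of the central tools of applied and computational topology.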

Theme B: Understanding Machine Learning Models

Lead: Yue Ren (Durham)
Deputy Lead: Coralia Cartis (Oxford)

AI in popular media is usually portrayed as a single black box. However, just as you wouldn’t play a power washer in an orchestra or use a trombone to clean your porch, not all AI tools are the same. The AI assistant helping with editing children’s birthday videos differs significantly from the one proofreading obituaries.  

Understanding which machine learning model is best suited to which task, and how to quantify that suitability in the first place, is important not only for creating more effective and more efficient AI tools, but also for determining their limitations. Moreover, understanding which traits make AI excel at a particular task allows us to identify fundamental patterns in its functioning. This knowledge, in turn, helps us distinguish between works generated by AI and works created by humans.

Theme C: Understanding Learning

Lead: Jared Tanner (Oxford)
Deputy Lead: Marika Maxine Taylor (Southampton)

Machine learning and AI models have millions to trillions of parameters, all of which must be learned before the algorithm can perform the desired task. Success in learning these parameters relies on principled choices of model architecture and on how the optimisation algorithm is designed. Properly designed, the learning process can encourage the model to achieve greater robustness alongside high accuracy.

Reinforcement learning is a key technique through which algorithms learn from their own experience, by interacting with an environment and receiving feedback. This research theme focuses on scalable training algorithms for machine learning models, together with an understanding of how the choices of model and algorithm affect training and the resulting learned behaviour.
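For readers unfamiliar with the technique, a minimal tabular Q-learning sketch (our own illustration, not hub code) shows the feedback loop at the heart of reinforcement learning: an agent on a line of states updates its value estimates from reward signals until stepping towards the reward becomes its learned policy:

```python
import random

random.seed(0)

N = 5                 # states 0..4 on a line; reaching state 4 ends the episode
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]; action 0 = left, 1 = right

def greedy(s):
    best = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == best])

for _ in range(300):                             # training episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        a = random.randrange(2) if random.random() < EPS else greedy(s)
        s2 = min(max(s + (1 if a else -1), 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # temporal-difference update toward reward + discounted future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy(s) for s in range(N - 1)]       # learned action in each state
print(policy)  # -> [1, 1, 1, 1]: always step towards the reward
```

The update rule needs no model of the environment: value estimates are corrected purely from observed transitions and rewards, which is what "learning from experience" means here.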

Theme D: Understanding Decision-Making

Lead: Alessandro Abate (Oxford)
Deputy Lead: Tom Coates (Imperial)

Machine learning models used today are often parts of larger systems involving some human supervision. Future AI systems will need to act autonomously, exercising both reactive and deliberative intelligence, and to understand whether their autonomous capabilities can achieve their goals, e.g. complying with safety constraints or ethical principles.

In this theme we will employ formal methods over machine learning models to build self-adaptive AI systems that, while largely autonomous, understand their limitations and refrain from acting when those limitations emerge. This will ultimately increase their trustworthiness, while avoiding the need for constant human supervision.
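One simple pattern in this spirit is runtime "shielding", where a formally specified safety check can veto a model's proposed action. The sketch below (our own illustration with hypothetical names, not hub code) blocks any action that would take the system outside a declared safe set, falling back to a conservative default:

```python
def shielded_action(proposed, state, is_safe, default=0):
    """Execute the model's proposed action only if the formal safety
    check approves it; otherwise fall back to a conservative default."""
    return proposed if is_safe(state, proposed) else default

# Toy example: a controller on a line must stay within [-10, 10].
def is_safe(x, a):
    return -10 <= x + a <= 10

x = 9
x += shielded_action(proposed=3, state=x, is_safe=is_safe)   # vetoed: 12 is unsafe
x += shielded_action(proposed=-2, state=x, is_safe=is_safe)  # allowed
print(x)  # -> 7
```

The key point is that the safety check is separate from the learned model, so its guarantee holds regardless of how the model behaves, and the system simply does not act when its proposal falls outside the verified region.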


Find out more about the working parties within each theme and the people involved:

Theme A Working Parties

A1: Revealing Symmetry in Data

Lead: Jeroen Lamb (Imperial)

Tom Coates (Imperial)
Jacek Brodzki (Southampton)
Ruben Sanchez Garcia (Southampton)
Jeffrey Giansiracusa (Durham)
Michael Bronstein (Oxford)
Heather Harrington (Oxford)
Terry Lyons (Oxford)
Harald Oberhauser (Oxford)

A2: Global Structures for Generalisation

Lead: Primoz Skraba (QMUL)

Haim Dubossarsky (QMUL)
Heather Harrington (Oxford)
Peter Grindrod (Oxford)
Jacek Brodzki (Southampton)
Ruben Sanchez Garcia (Southampton)

A3: Unveiling Structure Through Probabilistic & Statistical Analysis

Lead: Omer Bobrowski (QMUL)

Haim Dubossarsky (QMUL)
Ulrike Tillmann (Oxford)
Gesine Reinert (Oxford)

A4: Learning with Structured & Geometric Models

Lead: Renaud Lambiotte (Oxford)

Coralia Cartis (Oxford)
Raphael Hauser (Oxford)
Michael Bronstein (Oxford)
Jeffrey Giansiracusa (Durham)

Theme A Postdoctoral Research Associates

Eng-Jon Ong (QMUL)
Oliver Clarke (Durham)

Theme B Working Parties

B1: Low Effective-dimensional Learning Models

Lead: Jared Tanner (Oxford)

Coralia Cartis (Oxford)
Raphael Hauser (Oxford)
Varun Kanade (Oxford)
Mahesan Niranjan (Southampton)
Marika Maxine Taylor (Southampton)

B2: Stability & Reliability of Deep Learning

Lead: Jacek Brodzki (Southampton)

Jeffrey Giansiracusa (Durham)
Michael Bronstein (Oxford)

B3: Tropical Geometry of Neural Networks

Lead: Yue Ren (Durham)

Jeffrey Giansiracusa (Durham)
Anthea Monod (Imperial)

B4: Inverse Problems in AI/ML

Lead: Anthea Monod (Imperial)

Marika Maxine Taylor (Southampton)

Theme B Postdoctoral Research Associate

Tristan Madeleine (Southampton)

Theme C Working Parties

C1: Implicit Regularisation

Lead: Patrick Rebeschini (Oxford)

Coralia Cartis (Oxford)
Jared Tanner (Oxford)
Varun Kanade (Oxford)
Raphael Hauser (Oxford)
Marika Maxine Taylor (Southampton)

C2: Topology & Dynamics of Learning

Lead: Ran Levi (Aberdeen)

Omer Bobrowski (QMUL)
Jeroen Lamb (Imperial)

C3: RL through Stochastic Control

Lead: Christoph Reisinger (Oxford)

Alessandro Abate (Oxford)
Justin Sirignano (Oxford)

C4: Multi-Agent Path Finding (MAPF)

Lead: Norbert Peyerimhoff (Durham)

Giuseppe De Giacomo (Oxford)
Ruben Sanchez Garcia (Southampton)

Theme D Working Parties

D1: Learning Models for Decision Making

Lead: Giuseppe De Giacomo (Oxford)

Alessandro Abate (Oxford)
David Parker (Oxford)
Marta Kwiatkowska (Oxford)
Samuel Cohen (Oxford)
Mahesan Niranjan (Southampton)

D2: Automated Analysis & Synthesis for Sequential Decision Making

Lead: Marta Kwiatkowska (Oxford)

Alessandro Abate (Oxford)
Giuseppe De Giacomo (Oxford)
David Parker (Oxford)
Mahesan Niranjan (Southampton)

D3: Adaptive Modelling

Lead: David Parker (Oxford)

Alessandro Abate (Oxford)
Giuseppe De Giacomo (Oxford)
Marta Kwiatkowska (Oxford)
Mahesan Niranjan (Southampton)

D4: AI/ML for Government

Lead: Tom Coates (Imperial)

Peter Grindrod (Oxford)

Theme D Postdoctoral Research Associates

Thom Badings (Oxford)
Francesco Fabiano (Oxford)
