Mathematics and the Future of AI

Intellectual leadership through deep foundations

In a new University of Oxford Expert Comment article, Erlangen AI Hub Co-Investigator Professor Peter Grindrod CBE argues that mathematics is not peripheral to artificial intelligence, but central to solving its core challenges.

As AI systems increase in scale and complexity, concerns around reliability, bias, interpretability, and formal guarantees cannot be addressed by engineering alone. Mathematics provides the structure to reason rigorously about uncertainty, optimisation, stability, and limits. Through probability, geometry, topology, dynamical systems, and information theory, maths enables AI systems that are interpretable by design and grounded in provable principles.

At the Erlangen AI Hub, Professor Grindrod and colleagues are working precisely in this space: bringing deep mathematical ideas into direct engagement with real-world AI challenges. Maths provides the foundation to build systems that are more robust, transparent, and intellectually grounded, helping position the UK as a leader through intellectual depth rather than scale. Read the full article below.

Expert Comment: How and why mathematics will both underpin and lead the next generation of AI | University of Oxford

Erlangen Hub Seminar: Riemannian Neural Optimal Transport

Alessandro Micheli
Theme: How can hidden structures in data be discovered and expressed in the language of geometry and topology so that they can be exploited by machine learning models?

Computational optimal transport (OT) provides a principled framework for generative modelling. Neural OT methods learn transport maps from data using neural networks and can be evaluated out of sample after training; however, existing approaches are largely restricted to Euclidean settings. Extending neural OT to high-dimensional Riemannian manifolds presents significant theoretical and computational challenges.

In this talk, Alessandro Micheli will show that discretisation-based OT methods on manifolds inherently face severe dimensionality scaling limitations. To address this, he introduces Riemannian Neural Optimal Transport (RNOT), a continuous neural parameterisation of OT maps that avoids discretisation and incorporates geometric structure directly. Under mild regularity assumptions, RNOT achieves sub-exponential complexity in the manifold dimension. Empirical results on synthetic and real datasets demonstrate improved scalability and competitive performance relative to existing approaches.
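As background only (this is not from the talk, and RNOT itself is a neural, manifold-aware method), the discretisation-based approach that the talk contrasts against can be illustrated in the simplest possible setting: in one Euclidean dimension, the optimal transport map for the squared-distance cost is monotone, so an empirical map is obtained by pairing sorted source samples with sorted target samples. This minimal sketch uses only NumPy and two Gaussian samples chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice: source N(0, 1), target N(3, 0.5^2).
# In 1-D, the squared-cost Monge map is monotone, so the empirical
# OT map pairs the i-th sorted source sample with the i-th sorted
# target sample (a quantile coupling).
n = 10_000
source = rng.normal(0.0, 1.0, n)
target = rng.normal(3.0, 0.5, n)

order = np.argsort(source)
transported = np.empty(n)
transported[order] = np.sort(target)  # T(source_i) for each sample i

# For these Gaussians the exact map is affine: T(x) = 3 + 0.5 * x,
# so the empirical map should lie close to it for large n.
exact = 3.0 + 0.5 * source
print(np.mean(np.abs(transported - exact)))  # small; shrinks as n grows
```

Such discretised maps are defined only on the training samples and scale poorly with dimension, which is exactly the limitation that motivates a continuous neural parameterisation like RNOT, evaluable out of sample.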

Register your interest now:

Erlangen AI Hub Seminar Alessandro Micheli – Riemannian Neural Optimal Transport – Fill in form

Over 20 Hub papers accepted at ICLR 2026

The Erlangen Hub has achieved a significant international research milestone, with over 20 papers accepted at ICLR 2026, one of the world’s leading conferences in artificial intelligence and machine learning.

The International Conference on Learning Representations, known as ICLR, is a premier global venue for research in areas such as deep learning, reinforcement learning, and the theoretical foundations of modern AI, and will be held in Rio de Janeiro, Brazil, from Thursday 23 April to Monday 27 April.

ICLR 2026 had over 19,000 paper submissions from researchers worldwide, with an acceptance rate of only around 30 percent. For Erlangen, securing over 20 papers in a single year is an excellent outcome, keeping the Hub a prominent contributor both to the conference and to the wider international AI research conversation.

The accepted papers span a wide range of topics at the forefront of AI research, reflecting both the breadth and depth of expertise within the Hub. They include work on reinforcement learning, causal inference, diffusion models, and the theoretical analysis of machine learning systems, alongside several high-profile collaborative projects.

Hub Director Michael Bronstein and colleagues contributed an exceptional 17 papers.

Other contributors include Ran Levi, whose collaborative project paper develops new topological neural network models for learning from complex, higher-order relational data. Alessandro Abate also co-authored an accepted paper with L. Carvalho Melo and Yarin Gal on challenges in reinforcement learning for large language model reasoning.

The Erlangen Hub is further represented in foundational work on causality and learning, with Marta Kwiatkowska co-authoring an accepted paper on causal imitation learning in the presence of hidden confounders, while Patrick Rebeschini co-authored a paper offering new theoretical insights into diffusion models, an increasingly important class of generative models in modern AI.

In other conference news, Hub PDRAs Francesco Fabiano and Thom Badings presented the paper “Best-Effort Policies for Robust Markov Decision Processes”, a collaboration with Co-Investigators Alessandro Abate and Giuseppe De Giacomo, at the AAAI 2026 conference in Singapore. Thom also received an honourable mention in the AAAI and ACM SIGAI Doctoral Dissertation Award and delivered his own talk at AAAI 2026. Hub Co-Investigator Gesine Reinert has had two papers accepted at the AISTATS conference, taking place later this year in Morocco.

Taken together, these achievements highlight the Erlangen Hub’s growing international profile and its impact across the most active and influential areas of artificial intelligence research. They reflect both individual research excellence and a strong culture of collaboration and high-quality scholarship within the Hub.