Erlangen AI Hub Industry Day: Connecting Academia and Industry

On Tuesday 21 April, the Erlangen AI Hub hosted an Industry Collaboration Day at the Department of Computer Science, University of Oxford. The event brought together academics from across the Hub network alongside leading industry partners, creating a valuable space for discussion, knowledge exchange and future collaboration.

The day opened with a welcome from Jeff Giansiracusa, Professor of Mathematics at Durham University. We were joined by an outstanding group of industry speakers, including Dr Carl Hunter (Durham Institute of Research, Development and Invention), Andreas Haggman (Ofcom), Danica Greetham (Capgemini Engineering), Francis Bursa (Oxford Nanopore Technologies) and Marco Albanese (Oxford Drug Design).

Academic perspectives were provided by Tom Coates, Professor of Mathematics at Imperial College London, and Pete Grindrod, Professor of Mathematics, University of Oxford.

The morning sessions highlighted the breadth of challenges and opportunities at the intersection of AI and industry. Topics ranged from online safety and the societal risks of AI, to the importance of sovereign AI capabilities in the UK and the role of collaboration between government, science and industry. Speakers also explored how AI is being applied to analyse complex data, identify patterns and accelerate progress in science and engineering.

Discussions continued over lunch, with participants exchanging ideas on future collaborations and exploring how partnerships between academia and industry can drive innovation and real-world impact.

In the afternoon, attendees worked in groups to develop ideas inspired by the morning sessions. These discussions generated a number of thought-provoking themes, including:

  • The evolving relationship between AI and philosophy, and whether the field is returning to its conceptual roots
  • Challenges in modelling complex, noisy and temporal data, and improving interpretability
  • The economics of benchmarking AI models and the risks of overfitting to established benchmarks
  • The role of domain expertise in mitigating risks for less-informed users of AI systems
  • Advances in machine learning for recognising complex patterns and improving sequencing accuracy

The day concluded with a panel discussion featuring Marika Taylor (University of Southampton), Ran Levi (University of Aberdeen) and Yue Ren (Durham University), who reflected on the future of AI and industry. The panel explored emerging challenges, key research questions, and how collaboration can deepen understanding of both mathematics and AI, while shaping their applications in industry.

The event received excellent feedback from participants. As Danica Greetham (Capgemini Engineering) noted:

“What stood out was the quality of the discussion afterwards—sharp questions and open exchanges with academics keen to dive quickly into the essence of problems. It was great to hear from other industrial partners about the challenges they’re tackling. A recurring theme: observational data doesn’t lie—but it rarely speaks plainly. Interpretation is where the real work happens.”

We look forward to our next major event where we can continue the conversations, Mathematical Foundations of AI: The Erlangen Hub Conference 2026, taking place from 1–3 September 2026 at the Mathematical Institute, University of Oxford. Early bird tickets are now available.

Meet the team Q&A: Marika Taylor

In this Erlangen Hub Q&A we spoke to Marika Taylor, Co-Investigator and Theme C Deputy Lead. Marika is a Professor of Mathematics, Physics and AI at the University of Southampton. She trained in theoretical physics under Stephen Hawking, and is currently interested in geometric ML for fundamental physics applications and physics-inspired methods for ML. Marika was a Turing Institute Fellow and recipient of the “Dutch ERC” Vidi. She has a long track record with start-ups, including in encryption and fintech.  

Can you share a bit about your background and your current research focus?

My main research background is in mathematical and theoretical physics – particularly string theory, quantum theory and gravity. In parallel with this work in fundamental science, I have always been involved in mathematical modelling for real-world problems, particularly in finance, and have used neural nets for many years in that context. In recent years I’ve seen a convergence between my fundamental science research and my applied work: concepts from my areas of physics are being used within AI, while AI is increasingly being applied within fundamental physics too.

A nice example would be graph neural networks in non-Euclidean geometry. Non-Euclidean geometry underpins our understanding of Einstein’s theory of general relativity (gravity), and many of the physical insights obtained from studying particular geometries for gravity lead to insights into GNNs embedded in such geometries. Another example would be symmetries. Physicists always build their understanding of a system’s underlying symmetries (exact or approximate) into their modelling of it. One can similarly build symmetry equivariance into neural networks – for example, if you are classifying images of 3D objects and are agnostic about their orientation, then a network with rotational equivariance built in will classify more efficiently. My group is currently exploring more general symmetry-equivariant networks, drawing on physics insights; this enables us to reduce substantially the number of parameters that need to be learned, and also to understand conceptually the patterns found by previous algorithms. We are also interested in using the physics of time-dependent systems to develop spiking neural networks; the latter are nature-inspired, in that neurons only fire when a threshold is met, making them much more energy efficient.
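The rotation-symmetry point can be made concrete with a toy sketch. The snippet below is an illustration of the general idea only – not code from Marika’s group – and uses the simpler trick of averaging a model’s output over the four 90° rotations of an image, rather than an equivariant architecture. The averaging makes an orientation-sensitive score exactly rotation-invariant by construction:

```python
import numpy as np

def plain_score(img):
    # A toy, orientation-sensitive "classifier": it weights the top half
    # of the image more heavily, so rotating the input changes the score.
    h = img.shape[0] // 2
    return 2.0 * img[:h].sum() + img[h:].sum()

def invariant_score(img):
    # Group averaging over the four rotations of the cyclic group C4:
    # rotating the input merely permutes the terms in the average, so
    # the averaged score is identical for img and any 90-degree rotation.
    return np.mean([plain_score(np.rot90(img, k)) for k in range(4)])

img = np.arange(16.0).reshape(4, 4)
rotated = np.rot90(img)
print(plain_score(img) == plain_score(rotated))                     # False
print(np.isclose(invariant_score(img), invariant_score(rotated)))   # True
```

Equivariant architectures go further: by building the symmetry into the layers themselves, weights are shared across orientations, which is where the reduction in learned parameters comes from.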

What inspired you to pursue this area?

Throughout my career I have always worked on the frontiers of fundamental science. String theory is a “theory of everything”. It uses concepts from right across mathematics and also leads to new insights and ideas in mathematics – topological quantum field theory, for which Ed Witten won the Fields Medal, is a notable example. In parallel I’ve always enjoyed using the breadth of my knowledge in the mathematical sciences for real-world applications, and I’ve often found that I get new insights into fundamental science from the applied work I’ve done. Over the last few years I’ve gradually moved more and more into AI, for both fundamental science and applied work, because there are so many exciting developments.

Which themes are you connected to within the Erlangen AI Hub and how does your work within the hub intersect with your research background?

The main theme that I am connected with is “Understanding Learning”, but I link with all the themes. Much of what I do relates to the conceptual understanding of hidden structures in data, and how geometry and topology can be used to characterize these. My physics insights into geometry also help relate geometric and topological findings to real-world phenomena.

What attracted you to the Erlangen AI Hub and what do you hope to see it achieve?

The hub is exploring the mathematical foundations of intelligence – this is essential to develop better, more efficient and safer models, which in turn will allow us to use AI in more contexts.

What’s been the most surprising or exciting finding in your work so far?

I don’t think that I would have predicted ten years ago that my two parallel streams of research would become so closely connected, with physics giving insights into developing AI and AI starting to be used more in physics. (For AI to really be adopted more widely in physics, we will need robustness and accuracy.)

What challenges have you faced in your research, and how did you overcome them?

I like to take on research problems that are quite open ended and conceptually challenging. Inevitably that means that at times I get stuck or can’t quite see where to go next! Then I take a break, think about something else, and also talk to others, to get new ideas on where to go next.

What advice would you give to someone just starting out in your field?

I would advise somebody to follow their interests, and see where these take them!

Meet the team Q&A: Yue Ren

In this Erlangen Hub Q&A we spoke to Yue Ren, Co-Investigator and Theme B lead at Durham University. Yue is a UKRI Future Leaders Fellow and leading expert in tropical geometry, mathematical software, and the application of both to neural networks and problems in industry and sciences. He is a core developer of the computer algebra systems Polymake, Singular, and OSCAR. 

Can you share a bit about your background and your current research focus?

My background is in algebraic and tropical geometry. I did my PhD in Germany, and spent some time in the US, South Africa, Israel, and Sweden before moving to the UK. My current focus is on applications of tropical geometry to polynomial system solving and machine learning.

What inspired you to pursue this area?

Mathematically, I’ve always been fascinated by the concrete interplay between algebra, geometry, and combinatorics in tropical geometry.  However, I was always prone to making mistakes in hand calculations, so I decided to specialize in teaching a computer to do them for me instead.  Professionally, I wanted a path that combined my mathematical interests with practical skills like software development.

Which themes are you connected to within the Erlangen AI Hub and how does your work within the hub intersect with your research background?

I am mainly connected to Theme B, though my research touches upon other themes as well.  My work within the hub revolves around taking theoretical techniques from pure mathematics and turning them into practical algorithms. It’s a way to expand the machine learning toolbox with some interesting new tools.

What attracted you to the Erlangen AI Hub and what do you hope to see it achieve?

The Erlangen AI Hub brings together researchers from a wide range of backgrounds who are all pursuing a common goal. I’m really looking forward to the mathematical theories and practical tools that will come out of this unique mix of expertise.

What’s been the most surprising or exciting finding in your work so far?

I’ve found it really surprising that p-adic numbers, an abstract number system developed by pure mathematicians for number theory, can be so useful for data analysis. Their distance prioritizes structural relationships over physical proximity, which is perfect for hierarchical data.
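The p-adic distance Yue mentions is easy to see in code. A minimal sketch (illustrative only, not taken from his software): for a prime p, two integers are p-adically close when their difference is divisible by a high power of p, so closeness reflects divisibility structure rather than position on the number line:

```python
def p_adic_valuation(n, p):
    # v_p(n): the largest exponent k such that p**k divides n.
    # By convention v_p(0) is infinite, which makes d_p(a, a) = 0.
    if n == 0:
        return float("inf")
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_distance(a, b, p):
    # d_p(a, b) = p**(-v_p(a - b)): the more divisible the difference
    # is by p, the closer the two points are.
    return p ** (-p_adic_valuation(a - b, p))

# 83 - 2 = 81 = 3**4, so 2 and 83 are very close 3-adically...
print(p_adic_distance(2, 83, 3))  # 1/81, about 0.0123
# ...while the adjacent integers 2 and 3 sit at the maximal distance 1.
print(p_adic_distance(2, 3, 3))   # 1
```

Hierarchical data benefits from exactly this behaviour: items that share a deep common structure can be encoded so that they are p-adically close, even when their ordinary numeric labels are far apart.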

What challenges have you faced in your research, and how did you overcome them?

Most researchers face the same problems: getting stuck on a proof, getting unstuck only to realize you’ve built a suspiciously complicated proof for a simple statement, and Reviewer 2.  Research is an endless chain of challenges, and the best strategy for overcoming them is to ask for advice, ask for feedback, and act on it.  Not only are advice and feedback valuable, but asking for and acting on them is also a valuable skill that needs to be trained.

What advice would you give to someone just starting out in your field?

Don’t just focus on doing things, reflect on how you do them. Finding better work habits will help you spend more time on the things you enjoy and less time on the things you don’t.

What’s something people might be surprised to learn about you outside of research?

I’ve participated in the Cape Argus Cycle Tour, a 110 km race around the Cape of Good Hope with 35k entrants. I finished in the middle of my age group while taking 300 pictures along the way. I was beaten by a group dressed up as the Power Rangers under the ruthless South African sun, but at least I managed to beat that one bloke who rode a unicycle.

Mathematics and the Future of AI

Intellectual leadership through deep foundations

In a new University of Oxford Expert Comment article, Erlangen AI Hub Co-Investigator Professor Peter Grindrod CBE argues that mathematics is not peripheral to artificial intelligence, but central to solving its core challenges.

As AI systems increase in scale and complexity, concerns around reliability, bias, interpretability, and formal guarantees cannot be addressed by engineering alone. Mathematics provides the structure to reason rigorously about uncertainty, optimisation, stability, and limits. Through probability, geometry, topology, dynamical systems, and information theory, maths enables AI systems that are interpretable by design and grounded in provable principles.

At the Erlangen AI Hub, Professor Grindrod and colleagues are working precisely in this space: bringing deep mathematical ideas into direct engagement with real-world AI challenges. Maths provides the foundation to build systems that are more robust, transparent, and intellectually grounded, helping position the UK as a leader through intellectual depth rather than scale. Read the full article below.

Expert Comment: How and why mathematics will both underpin and lead the next generation of AI | University of Oxford

Erlangen Hub Co-Investigators elected inaugural Fellows of the Academy for the Mathematical Sciences

Erlangen Hub Co-Investigators Rama Cont and Christoph Reisinger have been elected to the inaugural cohort of Fellows of the newly founded Academy for the Mathematical Sciences. Their election recognises their leadership and contributions to the mathematical sciences and marks a significant milestone for both individuals and the Hub.

The Academy for the Mathematical Sciences brings mathematics to the centre of UK research, policy, and public life, advancing the discipline across multiple domains, including policy, education, research, and innovation. Sitting alongside other national academies, including the British Academy and the Academy of Medical Sciences, it will use its convening power to bring together experts to collaborate on major global challenges. These include climate change, national security, financial systems, and artificial intelligence.

The Fellowship comprises leading mathematicians from across academia, education, industry, business, and government, and includes Fields Medallists, senior figures in national security, and pioneers in computing and AI. The election of Rama and Christoph recognises their achievements and expertise within this distinguished community.

Rama Cont is Professor of Mathematics at the University of Oxford’s Mathematical Institute, Head of the Mathematical and Computational Finance Group, and a Fellow of St Hugh’s College. He also directs the Centre for Doctoral Training in Mathematics of Random Systems. His research spans stochastic computational methods, the mathematical foundations of AI, generative models, and data-driven modelling in finance. Alongside his academic work, Rama advises several AI-focused start-ups, including InstaDeep, 73Strings, and Synthera.AI. Through the Erlangen Hub, he contributes to strengthening the mathematical foundations that underpin modern AI systems and their applications. On learning of his election, Rama said:

“The Academy for the Mathematical Sciences’ ambitions are to represent and promote the full spectrum of mathematical sciences and their applications. As a mathematician with research activities spanning theory and applications, I am delighted to join the Academy as a Fellow.”

Christoph Reisinger is Professor of Applied Mathematics at the University of Oxford and specialises in stochastic simulation and control, mean-field models, and the mathematical foundations of deep learning. He collaborates closely with industry and government partners on challenges in AI security, air traffic control, and financial market microstructure. Within the Erlangen Hub, Christoph advances fundamental research at the interface of control theory and reinforcement learning, supporting the development of robust and reliable AI decision-making.

Hub PDRA Thom Badings receives AAAI doctoral award honourable mention

Erlangen Hub researcher Thom Badings has received an honourable mention in the AAAI and ACM SIGAI Doctoral Dissertation Award, a prestigious international award recognising outstanding PhD research in artificial intelligence. As part of this recognition, Thom was invited to attend AAAI 2026 in Singapore, where he received the award and delivered an award talk on his doctoral research.

The AAAI and ACM SIGAI Doctoral Dissertation Award is jointly presented by the Association for the Advancement of Artificial Intelligence (AAAI) and the ACM Special Interest Group on Artificial Intelligence (SIGAI), and is regarded as one of the most significant distinctions for early-career researchers in the field. Honourable mentions are awarded to dissertations that demonstrate exceptional originality, technical depth, and potential impact.

In addition to the award presentation, Thom and Hub PDRA Francesco Fabiano also presented their joint research paper on robust decision-making, developed in collaboration with Co-investigators Alessandro Abate and Giuseppe De Giacomo.

Thom will be leaving the Erlangen AI Hub in March. His recognition at AAAI 2026 reflects both the strength of his individual research contributions and the broader impact of the Erlangen Hub’s work in artificial intelligence.

Over 20 Hub papers accepted at ICLR 2026

The Erlangen Hub has achieved a significant international research milestone, with over 20 papers accepted at ICLR 2026, one of the world’s leading conferences in artificial intelligence and machine learning.

The International Conference on Learning Representations, known as ICLR, is a premier global venue for research in areas such as deep learning, reinforcement learning, and the theoretical foundations of modern AI, and will be held in Rio de Janeiro, Brazil, from Thursday 23 April to Monday 27 April.

ICLR 2026 had over 19,000 paper submissions from researchers worldwide, with an acceptance rate of only around 30 percent. For Erlangen, securing over 20 papers in a single year is an excellent outcome. This success keeps the Hub a prominent contributor both to the conference and to the wider international AI research conversation.

The accepted papers span a wide range of topics at the forefront of AI research, reflecting both the breadth and depth of expertise within the Hub. They include work on reinforcement learning, causal inference, diffusion models, and the theoretical analysis of machine learning systems, alongside several high-profile collaborative projects.

Hub Director Michael Bronstein and colleagues contributed an exceptional 17 papers.

Other contributors include Ran Levi, whose collaborative project paper develops new topological neural network models for learning from complex, higher-order relational data. Alessandro Abate also co-authored an accepted paper with L. Carvalho Melo and Yarin Gal, on challenges in reinforcement learning for large language model reasoning.

The Erlangen Hub is further represented in foundational work on causality and learning, with Marta Kwiatkowska co-authoring an accepted paper on causal imitation learning in the presence of hidden confounders, while Patrick Rebeschini co-authored a paper offering new theoretical insights into diffusion models, an increasingly important class of generative models in modern AI.

In other conference news, Hub PDRAs Francesco Fabiano and Thom Badings presented the paper “Best-Effort Policies for Robust Markov Decision Processes”, a collaboration with Co-Investigators Alessandro Abate and Giuseppe De Giacomo, at the AAAI 2026 conference in Singapore. Thom also received an honourable mention in the AAAI and ACM SIGAI Doctoral Dissertation Award and delivered an award talk at AAAI 2026. Hub Co-I Gesine Reinert has contributed two papers to the AISTATS conference, taking place later this year in Morocco.

Taken together, these achievements highlight the Erlangen Hub’s growing international profile and its impact across the most active and influential areas of artificial intelligence research. They reflect both individual research excellence and a strong culture of collaboration and high-quality scholarship within the Hub.

Meet the Team Q&A: Tom Coates

I was blown away by how powerful AI tools can be for theorem discovery. It’s impossible to spend a lot of time using these tools, and their more powerful LLM cousins, without becoming intensely curious about how they work and how to reason about them.


In this edition of our Meet the Team Q&A we sat down with Tom Coates. Tom is a Professor of Pure Mathematics at Imperial College London. He is also an Erlangen Hub Co-Investigator and Theme D Deputy Lead focusing on the development of autonomous, self-adaptive AI that uses formal methods to understand its limits, and act safely and ethically.

Can you share a bit about your background and your current research focus?

For the past decade or more, I have been involved in a large-scale collaboration to find and classify algebraic varieties called Fano manifolds — one can think of this as building a “Periodic Table” for shapes. This collaboration involves a lot of large-scale computation, data mining, and machine learning, which is how I got interested in AI as a tool for scientific discovery. More recently I have also been on part-time secondment to the Office of the Chief Scientific Adviser, which has led to my current focus on the interface between AI and policy.

What inspired you to pursue this area?

Originally I was attracted by a purely theoretical, purely mathematical question: how to find the “basic pieces” from which more complex geometries are made. But as we began to deploy machine learning and AI-powered pipelines to tackle this question, I was blown away by how powerful AI tools can be for theorem discovery. And it is not possible to spend a lot of time using these tools, and their more powerful LLM cousins, without becoming intensely curious about how they work and how to reason about them.

Which themes are you connected to within the Erlangen AI Hub and how does your work within the hub intersect with your research background?

I lead on Government Engagement and Theme D.

What attracted you to the Erlangen AI Hub and what do you hope to see it achieve?

There is such a need for more mathematicians with the skills to work at the interface between AI and policy. The Erlangen Hub is a perfect opportunity to pilot interventions in this space, and to support and grow more early career researchers who are “bilingual” — communicating fluently with both policymakers and scientists in AI and adjacent fields.

What’s been the most surprising or exciting finding in your work so far?

I’ve been enormously proud of the work that Sara Veneziale and I have done developing and delivering training courses on the fundamentals of AI to civil servants. More than 250 civil servants in at least 15 departments to date! This is having a demonstrable impact on policy formulation and delivery, in an area that is critical to both security and economic growth.

What challenges have you faced in your research, and how did you overcome them?

I have been very fortunate to work with an incredible group of collaborators across my career. There have been challenges here and there, but also amazingly creative colleagues to help overcome them.

What advice would you give to someone just starting out in your field?

Get stuck into the practicalities of AI: training large models, fighting the data-cleaning pipeline, wrangling the GPUs. There is no substitute for practical experience — and it helps you to choose and formulate the correct research questions too.

Celebrating Three PhD Successes

We are delighted to celebrate the recent PhD successes of Dr Yueqi Cao, Dr Roan Talbut, and Dr Qiquan (Qi) Wang, three early-career researchers whose work spans tropical geometry, statistical topology, and the mathematics of complex data. Their achievements reflect not only their own creativity and depth of insight but also the vibrant research environment shaped by their supervisor, Hub Co-Director Professor Anthea Monod, whose work in algebraic topology and algebraic geometry contributes to the understanding of modern statistical and machine-learning problems.


Dr Yueqi Cao — Tropical Geometry and Metric Graphs

Dr Yueqi Cao successfully defended his PhD, From Graphs to Point Clouds: the Tropical Abel–Jacobi Transform and Persistent Homology for Metric Graphs. His thesis develops rigorous links between tropical geometry, persistent homology, and statistical approaches for metric graphs, offering new tools for understanding geometric structure and opening pathways for applications in cryptography, information geometry, and machine-learning tasks on graph-structured data.

Yueqi’s doctoral work has led to four published journal papers across computational mathematics, data science, and statistics, with several more under review. He now continues his research as a Digital Futures Fellow at KTH Stockholm.


Dr Roan Talbut — Tropical Geometry for Phylogenetic Statistics

Dr Roan Talbut, now a Postdoctoral Research Associate at the Erlangen Hub, defended their PhD titled Tropical Geometry for Phylogenetic Statistics. Their research provides deep new insights into the intersection of tropical geometry, probability, statistics, and optimisation, developing tools that bring greater interpretability and computational tractability to the analysis of evolutionary and biological data.

Roan’s PhD resulted in several peer-reviewed publications across data science, optimisation theory, and pure mathematics, with further work in progress. They continue their academic journey at Durham University.


Dr Qiquan (“Qi”) Wang — Statistical Topology Across Biology and AI

Dr Qiquan Wang successfully defended her PhD, The Shape of Data: Statistical Topology Across Biology and AI. Her thesis establishes new statistical frameworks for analysing data using topological invariants in both single- and multi-parameter settings, and has applications to biological systems and deep-learning architectures.

Qi’s research has led to five papers, several of them already published, with additional manuscripts under review. She now moves on to a postdoctoral fellowship at Queen Mary University of London.



Recognising the Mathematical Foundations of Their Research

These PhD successes highlight the vibrancy of mathematical foundations research and the impact of early-career researchers contributing new ideas at the interface of mathematics and AI. We are proud to celebrate their achievements and look forward to seeing the exciting directions their work will take in the years ahead.

Conference Round-Up: CDC 2025 and NeurIPS 2025

Researchers across the Erlangen AI Hub continue to showcase their work on the international stage. This season, Hub members presented at the IEEE Conference on Decision and Control (CDC 2025) and NeurIPS 2025, one of the world’s leading AI gatherings. Their contributions span advances in autonomous systems, the mathematical foundations of control, and the growing use of generative AI in finance. The highlights are captured below.

Thom Badings delivering his conference talk at CDC 2025

Advances in Abstraction-Based Control at CDC 2025

Designing safe, reliable controllers for autonomous systems, from drones to self-driving vehicles, remains a fundamental challenge in AI. At CDC 2025, Erlangen Hub PDRA Thom Badings and Co-Investigator Alessandro Abate presented new research advancing abstraction-based control, a principled approach for computing correct-by-construction control policies under uncertainty.

Their two papers deliver key contributions:

  • Strengthening the mathematical foundations
    A refined abstraction framework capable of computing provably safe control policies even when system dynamics are uncertain. This work enhances both precision and scalability for complex autonomous platforms.
  • Introducing data-driven abstraction methods
    New techniques for constructing abstractions directly from empirical data, reducing reliance on fully specified analytical models and enabling robust control in partially known environments.

These developments push forward the frontier of reliable autonomous decision-making and contribute to the Hub’s broader mission to develop rigorous foundations for trustworthy AI.

Further reading:

  • Probabilistic Alternating Simulations for Policy Synthesis in Uncertain Stochastic Dynamical Systems: https://arxiv.org/abs/2508.05062
  • Data-Driven Abstraction and Synthesis for Stochastic Systems with Unknown Dynamics: https://arxiv.org/abs/2508.15543

Generative AI for Finance: Rama Cont at NeurIPS 2025

At the NeurIPS 2025 Workshop on Generative AI in Finance, Erlangen Hub Co-Investigator Rama Cont delivered an invited talk on how generative models are transforming quantitative finance.

Financial markets are noisy, nonlinear, and highly interdependent, making simulation and risk assessment especially challenging. Cont presented recent work demonstrating how GAN-based models can emulate complex market behaviour, generate realistic scenarios, and support robust risk management.

His talk covered several key generative approaches developed by Cont and collaborators, including:

  • VolGAN for stochastic volatility surfaces
  • Tail-GAN for modelling rare but high-impact tail events
  • YieldGAN for yield curve dynamics
  • Data-driven hedging with generative models, a method using conditional generative models to compute hedge ratios across simulated market scenarios

The last of these was the focus of his presentation and recent paper, which proposes a non-parametric approach to hedging that outperforms classical delta and delta-vega strategies, even years after the training period.

The workshop itself featured leading voices from academia and industry, reflecting the rapid growth of AI-driven approaches in financial modelling.

Paper reference:
Cont, R., Vuletić, M. Data-driven hedging with generative models. Ann Oper Res (2025)

Hub Leadership at NeurIPS 2025

NeurIPS 2025 was among the most competitive editions of the conference to date, with just 24.5% of submissions accepted. Against this backdrop, Erlangen AI Hub Director Michael Bronstein appeared as a co-author on nine accepted papers, presented across poster and spotlight sessions.

These contributions span generative and diffusion models, flow-based methods, equivariant and graph neural architectures, optimisation, and inference. All are core areas in the mathematical foundations of modern AI, and together, they reflect sustained engagement with both the theory and practice of scalable learning systems.

In a conference landscape increasingly shaped by large North American corporations and Chinese research institutions, this level of representation places Bronstein among a small group of Europe-based researchers maintaining strong technical visibility at NeurIPS, and highlights the continued contribution of UK and European research to the foundational questions shaping the field.

Further reading

Arroyo, Álvaro; Gravina, Alessio; Gutteridge, Benjamin; Barbero, Federico; Gallicchio, Claudio; Dong, Xiaowen; Bronstein, Michael; Vandergheynst, Pierre. On Vanishing Gradients, Over-Smoothing, and Over-Squashing in GNNs: Bridging Recurrent and Graph Learning. NeurIPS 2025

Finkelshtein, Ben; Ceylan, İsmail İlkan; Bronstein, Michael; Levie, Ron. Equivariance Everywhere All At Once: A Recipe for Graph Foundation Models. NeurIPS 2025

Gelberg, Yoav; Eitan, Yam; Navon, Aviv; Shamsian, Aviv; Putterman, Theo (Moe); Bronstein, Michael; Maron, Haggai. GradMetaNet: An Equivariant Architecture for Learning on Gradients. NeurIPS, 2025

Marisca, Ivan; Bamberger, Jacob; Alippi, Cesare; Bronstein, Michael M. Over-squashing in Spatiotemporal Graph Neural Networks. NeurIPS 2025

Petrović, Katarina; Atanackovic, Lazar; Moro, Viggo; Kapuśniak, Kacper; Ceylan, İsmail İlkan; Bronstein, Michael; Bose, Avishek Joey; Tong, Alexander. Curly Flow Matching for Learning Non-gradient Field Dynamics. NeurIPS 2025

Reu, Teodora; Dromigny, Sixtine; Bronstein, Michael; Vargas, Francisco. Gradient Variance Reveals Failure Modes in Flow-Based Generative Models. NeurIPS 2025

Akhound-Sadegh, Tara; Lee, Jungyoon; Bose, Avishek Joey; De Bortoli, Valentin; Doucet, Arnaud; Bronstein, Michael M.; Beaini, Dominique; Ravanbakhsh, Siamak; Neklyudov, Kirill; Tong, Alexander. Progressive Inference-Time Annealing of Diffusion Models for Sampling from Boltzmann Densities. NeurIPS 2025

Tan, Charlie B.; Hassan, Majdi; Klein, Leon; Syed, Saifuddin; Beaini, Dominique; Bronstein, Michael M.; Tong, Alexander; Neklyudov, Kirill. Amortized Sampling with Transferable Normalizing Flows. NeurIPS 2025

Tang, Zhiyuan; Zhou, Yuhao; Zhao, Xuanlei; Shi, Mingjia; Wang, Wangbo; Huang, Kaixuan; Schürholt, Konstantin; Bronstein, Michael M.; You, Yang; Wang, Zhangyang; Wang, Kai. Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights. NeurIPS 2025