Schedule of the Workshop
Schedule
Monday
08:30 --- Registration
09:30 --- Coffee Break
10:30 --- Amaury Hayat: How can Machine Learning Help Mathematicians?
12:00 --- Lunch
14:00 --- Wenda Li: Will mathematical theorem proving be solved by scaling laws?
15:00 --- Coffee Break
16:00 --- François Charton: Transformers know more than they can tell - Investigating the Collatz sequence
17:00 --- Welcome Reception
Tuesday
09:30 --- Coffee Break
10:30 --- Ingo Blechschmidt: Beyond verified correctness and collaborative proof engineering: Proofs as programs?
12:00 --- Lunch
14:00 --- Michael Rawson: Automated Reasoning for the Working Mathematician
15:00 --- Coffee Break
16:00 --- Open Discussion
Wednesday
09:30 --- Coffee Break
10:30 --- David Alfaya Sánchez: Computer assisted exploration of the geometry of moduli spaces
12:00 --- Lunch
14:00 --- Yang-Hui He: The AI Mathematician: from Geometry, to Physics, to Number Theory. (ONLINE)
15:00 --- Coffee Break
17:00 --- Guided City Tour
20:00 --- Beergarden Visit
Thursday
09:30 --- Coffee Break
10:30 --- Maximilian Doré: Using generative AI in theoretical computer science
12:00 --- Lunch
14:00 --- Andras Juhasz: Reinforcement learning and knot theory
15:00 --- Coffee Break
16:00 --- Closing Discussion
Friday
Departure
Details and Abstracts
David Alfaya Sánchez: Computer assisted exploration of the geometry of moduli spaces
Abstract:
Some features of the geometry of certain moduli spaces (such as topological invariants) naturally suffer a combinatorial explosion as we increase the parameters used to build the moduli (such as the rank or the genus of the curve), complicating a by-hand analysis of these properties. In this talk we will explore some ways in which computers can help algebraic geometers tackle this situation. Concretely, we will focus on two applications to moduli spaces of parabolic vector bundles and twisted Higgs bundles on curves. On the one hand, we will showcase how a combination of algebro-geometric and computational tools can be used to count the number of non-isomorphic moduli spaces of parabolic vector bundles on a marked curve and to obtain their automorphism groups in low rank, and how this data can be used to explore and prove general results on the structure of the automorphisms of these varieties. On the other hand, we will present a new Python package called “motives”, which allows efficient computation and simplification of motivic expressions in the Grothendieck ring of varieties, and we will see how the package can be used to computationally explore and verify open conjectures such as Mozgovoy’s conjecture on the motive of the moduli space of twisted Higgs bundles. Joint work with Sergio Herreros, Javier Rodrigo, Daniel Sánchez, Jaime Pizarroso and José Portela.
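For readers unfamiliar with motivic expressions, two standard facts about the Grothendieck ring of varieties may help (general background, not specific to the talk): classes satisfy the scissor relation, which yields the cell decomposition of projective space in powers of the Lefschetz motive:

    [X] = [Z] + [X \setminus Z] \quad \text{for closed } Z \subseteq X,
    \qquad
    [\mathbb{P}^n] = 1 + \mathbb{L} + \mathbb{L}^2 + \cdots + \mathbb{L}^n,
    \quad \mathbb{L} := [\mathbb{A}^1]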
Ingo Blechschmidt: Beyond verified correctness and collaborative proof engineering: Proofs as programs?
Abstract:
A century ago, Hilbert called upon the mathematical community to explain how to extract concrete numerical content from abstract proofs involving transfinite methods. For instance, from a qualitative proof of the existence of a limit, we might hope to extract a quantitative bound on the rate of convergence. Proof assistants based on modern type theory offer the tantalizing prospect of facilitating such extraction by running formalized proofs as programs—adding value to mechanization beyond verified correctness and collaborative proof engineering. In the talk, we will explore a case study from commutative algebra on this kind of proof mining, obtained in joint work with Peter Schuster.
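As a minimal illustration of the kind of numerical content meant here (my gloss, not taken from the abstract): a qualitative convergence statement hides an existential quantifier, and extraction replaces it by an explicit, computable modulus of convergence read off from the proof:

    \forall \varepsilon > 0 \;\exists N \;\forall n \geq N : |a_n - a| < \varepsilon
    \quad\rightsquigarrow\quad
    \forall \varepsilon > 0 \;\forall n \geq N(\varepsilon) : |a_n - a| < \varepsilon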
François Charton: Transformers know more than they can tell - Investigating the Collatz sequence
Abstract:
It is generally understood that transformers struggle to learn arithmetic functions (even integer multiplication): models learn shortcuts, fail to generalize, and so on. I investigate a complex arithmetic function, predicting distant terms in the Collatz sequence, and show that transformers can learn it to very high accuracy, solving the problem incrementally for classes of inputs characterized by their binary representations. This learning pattern is independent of the base used for tokenization. An analysis of model errors unveils a hierarchy of error cases, suggesting that, in latent space, all models are very close to learning the Collatz sequence, no matter the base.
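For concreteness, the target function in this task can be phrased as iterating the Collatz map. A minimal sketch (my illustration, using the plain 3n+1 convention; the talk may use the accelerated (3n+1)/2 variant):

    def collatz_iterate(n: int, k: int) -> int:
        """Return the k-th term of the Collatz sequence starting at n."""
        for _ in range(k):
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        return n

    # Training pairs would map (n, k), written in some base, to
    # collatz_iterate(n, k); for example, collatz_iterate(27, 10) == 214.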
Maximilian Doré: Using generative AI in theoretical computer science
Abstract:
I’ll critically examine where I think generative AI could replace me, that is, which parts of my work might be carried out (better) by a neural network than by me. I’ll start by presenting a small project I’ve been working on with Michael Rawson to train a neural network to generate Agda proof terms, which sets the scene for what I think AI will be most helpful with. I’ll then sketch two research projects I’ve been working on, one on proof automation in cubical type theory and one at the intersection of linear logic and dependent type theory, and will try to gauge where AI could have helped my work, and which parts of the projects really required me to bang my head against the wall in a sustained fashion.
Amaury Hayat: How can Machine Learning Help Mathematicians?
Abstract:
The advent of artificial intelligence raises an important question: can AI assist mathematicians in solving open problems in mathematics? This talk explores this question from multiple perspectives, ranging from how different types of AI models can be trained to provide valuable insights into mathematical questions to recent progress in the field of automated theorem proving.
Yang-Hui He: The AI Mathematician: from Geometry, to Physics, to Number Theory
Abstract:
We argue that AI can assist mathematics in three ways: theorem-proving, conjecture formulation, and language processing. Inspired by initial experiments in geometry and string theory in 2017, we summarize how this emerging field has grown over the past years, and show how various machine-learning algorithms can help with pattern detection across disciplines ranging from algebraic geometry to representation theory, combinatorics, and number theory. At the heart of the programme is the question of how AI helps with theoretical discovery, and the implications for the future of mathematics.
Andras Juhasz: Reinforcement learning and knot theory
Abstract:
Several hard problems in knot theory can be formulated as single-player games, including computing the unknotting number and the 4-ball genus, making them amenable to reinforcement learning algorithms. I will discuss some of the latest results and remaining challenges.
Wenda Li: Will mathematical theorem proving be solved by scaling laws?
Abstract:
Recent advances in generative models and reinforcement learning have enabled automatic theorem proving beyond the capabilities of classical methods. Scaling up data, model size, and inference compute has undeniably allowed us to prove more theorems. But is scaling alone enough to solve mathematical theorem proving? If not, what essential components might still be missing?
Michael Rawson: Automated Reasoning for the Working Mathematician
Abstract:
Automated reasoning tools such as constraint solvers, finite model builders and theorem provers could be very useful for mathematicians. I will give a brief introduction to these tools and their possible applications to research mathematics. I will also try to understand why few mathematicians use them in their day-to-day work, and offer some suggestions for increasing uptake as a topic for discussion at the workshop.
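To give a concrete taste of such a tool (my example, not from the talk): the Z3 constraint solver, via its Python API, can check that x^2 + y^2 = 3z^2 has no nonzero integer solutions in a bounded range (indeed it has none at all, by a descent argument modulo 3):

    from z3 import Ints, Solver, Or, sat

    x, y, z = Ints("x y z")
    s = Solver()
    s.add(x * x + y * y == 3 * z * z)   # the Diophantine equation
    s.add(Or(x != 0, y != 0, z != 0))   # rule out the trivial solution
    for v in (x, y, z):                 # bound the search space
        s.add(v >= -100, v <= 100)

    res = s.check()
    # Expected output: "unsat" (no solution in range); for nonlinear
    # integer arithmetic the solver may in general also report "unknown".
    print(s.model() if res == sat else res)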