2025 KIAS Winter School on Mathematics and AI
Home
The 2025 KIAS Winter School on Mathematics and AI is a four‑day hackathon‑style program at the intersection of mathematics and AI. It brings together researchers exploring new connections between the two fields to experiment, collaborate, and pursue future research directions. Teams work in parallel on hands‑on projects in math‑for‑AI, AI‑for‑math, and formalization. The program also includes invited talks on related topics.
Topics & Keywords
- Formalization (Lean), Autoformalization, Informalization
- Machine Learning for Mathematics
- Large Language Models
- Reinforcement Learning
Organizers
- Joonhyun La (KIAS)
- Chul‑hee Lee (KIAS)
- Kyu‑Hwan Lee (University of Connecticut / KIAS)
Team Leads
- Ilkyoo Choi (HUFS / DIMAG, IBS)
- Byung-Hak Hwang (KIAS)
- Jihoon Hyun (KAIST)
- Chul‑hee Lee (KIAS)
- Seewoo Lee (UC Berkeley)
- Hyojae Lim (RICAM)
Speakers
- Dohyun Kwon (University of Seoul)
- Kyu‑Hwan Lee (University of Connecticut / KIAS)
- Hongseok Yang (KAIST)
Contact
- Email: chlee@kias.re.kr (Chul‑hee Lee)
Supported by
- HCMC, Korea Institute for Advanced Study (KIAS)
Program
Talks
- Speaker: Dohyun Kwon (University of Seoul)
- Title: Generative Modeling Through the Lens of Optimal Transport
- Abstract: Generative models have achieved remarkable success across a wide range of machine learning tasks. This talk presents a unified perspective on generative modeling through the lens of optimal transport, viewing learning as an optimization problem over the space of probability measures. We will explore how this framework reveals connections between diffusion models, energy-based models, and GANs. Along the way, we will highlight recent advances and discuss how these ideas open new directions for advancing generative models.
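For orientation, the standard Kantorovich formulation below makes "optimization over the space of probability measures" precise; it is a textbook statement added for reference, not a formula taken from the talk.

    % Kantorovich formulation of optimal transport: couple two probability
    % measures \mu and \nu as cheaply as possible with respect to a cost c.
    \[
      \mathrm{OT}_c(\mu,\nu) \;=\; \inf_{\pi \in \Pi(\mu,\nu)} \int c(x,y)\, d\pi(x,y),
    \]
    % where \Pi(\mu,\nu) is the set of couplings with marginals \mu and \nu.
    % With c(x,y) = |x - y|^2 this is the squared Wasserstein-2 distance, and
    % generative modeling can be phrased as an optimization over the model
    % distribution \mu_\theta relative to the data distribution \nu:
    \[
      \min_{\theta}\, W_2^2(\mu_\theta, \nu)
        \;=\; \min_{\theta} \inf_{\pi \in \Pi(\mu_\theta, \nu)} \int |x - y|^2 \, d\pi(x,y).
    \]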
⋯
- Speaker: Kyu‑Hwan Lee (University of Connecticut / KIAS)
- Title: Mathematical Data Science
- Abstract: Can machine learning help discover new mathematical structures? In this talk we discuss an approach to doing this which one can call “mathematical data science”. This paradigm involves studying mathematical objects collectively rather than individually by constructing datasets and conducting machine learning experiments and interpretations. After an overview, I will present two case studies: (1) murmurations in number theory and (2) loadings of partitions related to Kronecker coefficients in representation theory and combinatorics. This talk is based on my joint article (arXiv:2502.08620) with Michael Douglas.
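As an illustration of the dataset-plus-experiment workflow (a generic sketch on synthetic data, not code from the cited article), one might fit a simple model to feature vectors attached to mathematical objects and then inspect which features the model relies on:

    # Generic "mathematical data science" workflow on synthetic data:
    # represent each object by a feature vector, attach a label, fit a
    # simple model, and inspect which features carry the signal.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical dataset: 2000 objects, each described by 50 numerical
    # invariants; the label depends weakly on a few of the features.
    X = rng.normal(size=(2000, 50))
    y = (X[:, [3, 7, 11]].sum(axis=1) + 0.5 * rng.normal(size=2000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))

    # Interpretation step: large-magnitude coefficients point to the
    # invariants the model relies on.
    top = np.argsort(-np.abs(model.coef_[0]))[:5]
    print("most informative features:", top)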
⋯
- Speaker: Hongseok Yang (KAIST)
- Title: Tackling asymptotic extremal problems in graph theory using neural networks
- Abstract: Assisting mathematical discovery with machine learning techniques has been an active research topic in the machine-learning community in recent years, yielding impressive results such as the discovery of key insights into, or counterexamples for, open conjectures in knot theory, representation theory, arithmetic geometry and combinatorics. In this talk, I will describe our ongoing efforts to extend this line of research to extremal combinatorics. We consider asymptotic extremal problems on graphs, which are a particular type of optimisation problem on an infinite limiting version of graphs, called graphons. We aim to help prove or disprove open conjectures for such problems using tools from machine learning. Our idea is to represent these infinite limiting graphs using carefully designed neural networks, and to solve the resulting optimisation problems using gradient descent. I will describe what challenges we encountered, how we overcame them by designing a particular neural-network architecture inspired by the popular diffusion model, and what new insights into well-known asymptotic extremal problems we gained through our approach. This is joint work with Taeyoung Kim from KAIST, Jineon Baek from KIAS, and Joonkyung Lee from Yonsei University.
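As a rough, self-contained illustration of this approach (a minimal sketch under our own assumptions; the names GraphonNet, triangle_density and edge_density are ours, not the speakers'), one can parametrize a graphon W : [0,1]^2 -> [0,1] by a small symmetric neural network and run gradient descent on Monte Carlo estimates of homomorphism densities, here pushing the triangle density down while softly pinning the edge density:

    # Minimal sketch: parametrize a graphon W(x, y) by a small neural
    # network, symmetrized and squashed into [0, 1], and minimize a Monte
    # Carlo estimate of the triangle density by gradient descent.
    import torch
    import torch.nn as nn

    class GraphonNet(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x, y):
            # Symmetrize so that W(x, y) = W(y, x), then map into [0, 1].
            a = self.net(torch.stack([x, y], dim=-1))
            b = self.net(torch.stack([y, x], dim=-1))
            return torch.sigmoid(0.5 * (a + b)).squeeze(-1)

    def triangle_density(W, n_samples=4096):
        # t(K3, W) = E[ W(x,y) W(y,z) W(z,x) ] over independent uniform x, y, z.
        x, y, z = torch.rand(3, n_samples)
        return (W(x, y) * W(y, z) * W(z, x)).mean()

    def edge_density(W, n_samples=4096):
        x, y = torch.rand(2, n_samples)
        return W(x, y).mean()

    W = GraphonNet()
    opt = torch.optim.Adam(W.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        # Toy objective: minimize triangle density while keeping the edge
        # density near 1/2 (soft constraint via a quadratic penalty).
        loss = triangle_density(W) + 10.0 * (edge_density(W) - 0.5) ** 2
        loss.backward()
        opt.step()

For this toy objective the optimum is a bipartite-type graphon with edge density 1/2 and no triangles, which gives a quick sanity check of the setup before moving to harder extremal problems.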
Team Programs
Team DM
- Team Leads: Byung-Hak Hwang (KIAS), Ilkyoo Choi (HUFS / DIMAG, IBS)
- Title: ML and Discrete Mathematics
- Abstract: Our team will explore FunSearch! We aim to understand (1) how FunSearch works, (2) what kinds of mathematics it can handle, and (3) how to use it in practice, and then (4) apply it to some combinatorial problems. No specific background is required, but familiarity with combinatorics, machine learning, or programming in Python will be helpful.
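For orientation on point (1): at its core, FunSearch runs an evolutionary loop in which an LLM mutates promising programs, a sandboxed evaluator scores each candidate on the target objective, and high-scoring programs seed later prompts. The sketch below is schematic; llm_propose and evaluate are placeholders, not parts of the published system.

    # Schematic FunSearch-style loop (placeholders, not the published code):
    # an LLM mutates promising programs, an evaluator scores each candidate
    # on the combinatorial objective, and the best programs seed new prompts.
    import heapq
    import random

    def llm_propose(parent_source: str) -> str:
        """Placeholder: ask an LLM to rewrite/improve the parent program."""
        raise NotImplementedError

    def evaluate(source: str) -> float:
        """Placeholder: run the candidate in a sandbox and score its output,
        e.g. the size of the combinatorial object it constructs."""
        raise NotImplementedError

    def funsearch(seed_program: str, iterations: int = 1000, pool_size: int = 20):
        # Pool of (score, program) pairs; heapq is a min-heap, so the
        # lowest-scoring program sits at the front and is evicted first.
        pool = [(evaluate(seed_program), seed_program)]
        for _ in range(iterations):
            _, parent = random.choice(pool)     # sample a parent program
            child = llm_propose(parent)         # LLM-generated mutation
            try:
                score = evaluate(child)         # score on the target problem
            except Exception:
                continue                        # discard programs that crash
            heapq.heappush(pool, (score, child))
            if len(pool) > pool_size:
                heapq.heappop(pool)             # drop the lowest-scoring program
        return max(pool)                        # best (score, program) found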
Team AF (Autoformalization)
- Team Lead: Hyojae Lim (RICAM)
- Title: Autoformalization
- Abstract: This project explores autoformalization, the automatic translation of natural-language mathematical definitions into formal definitions in Lean. We focus on how large language models (LLMs) can be guided to produce Lean definitions based on the rules for defining data types, particularly inductive types. During the project, we will study how mathematical concepts and definitions can be represented in Lean, experiment with LLM-based translation pipelines, and analyze the interpretability and stability of the resulting formalizations. Participants will gain hands-on experience with Lean and LLM prompting; prior familiarity with Lean or Python programming will be helpful but not required.
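For participants who have not seen Lean before, the snippet below shows a standard Lean 4 inductive type and a definition by structural recursion on it; it is a textbook example included only to indicate the kind of output an autoformalization pipeline is expected to produce.

    -- A standard Lean 4 inductive type: binary trees with values at the nodes.
    inductive Tree (α : Type) where
      | leaf : Tree α
      | node : Tree α → α → Tree α → Tree α

    -- A definition by structural recursion on the inductive type:
    -- the number of nodes of a tree.
    def Tree.size {α : Type} : Tree α → Nat
      | .leaf => 0
      | .node l _ r => Tree.size l + 1 + Tree.size r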
Team NT
- Team Lead: Seewoo Lee (UC Berkeley)
- Title: ML and Number Theory
- Abstract: The main goal of our group is to use (classical/modern) ML algorithms to predict number-theoretic invariants. Before the winter school, we are going to learn basic ML algorithms and how they can be used to solve easy problems in number theory. Then we will consider more advanced topics, including Artin representations and abelian varieties over finite fields, aiming to discover new phenomena. We assume that members are familiar with graduate-level number theory, but no background in machine learning is assumed.
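As a small self-contained example of predicting an invariant from elementary data (our own toy sketch, not one of the group's planned experiments), one can ask a classifier to predict whether an integer is a sum of two squares from its residues modulo small primes:

    # Toy version of "predict a number-theoretic invariant with ML":
    # features are residues of n modulo small primes, the label records
    # whether n is a sum of two squares (computed exactly via factorization).
    import numpy as np
    from sympy import factorint, primerange
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def sum_of_two_squares(n: int) -> int:
        # n > 0 is a sum of two squares iff every prime p ≡ 3 (mod 4)
        # divides n to an even power.
        return int(all(e % 2 == 0 for p, e in factorint(n).items() if p % 4 == 3))

    primes = list(primerange(2, 50))                     # feature primes
    ns = np.arange(2, 20000)
    X = np.array([[n % p for p in primes] for n in ns])  # residue features
    y = np.array([sum_of_two_squares(int(n)) for n in ns])

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))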
Team RL
- Team Lead: Chul‑hee Lee (KIAS)
- Title: Reinforcement Learning and Canonical Forms in Mathematics
- Abstract: This project explores how reinforcement learning (RL) can be applied to the process of reducing mathematical objects to their canonical forms. RL is a framework in which an agent learns to take sequential actions through interaction with an environment, guided by rewards and feedback. From this viewpoint, it seems plausible to explore step-by-step procedures such as Gaussian elimination through the lens of RL. During the preparatory phase, participants will study the fundamentals of RL and gain hands-on experience in simple environments using tools such as Gymnasium. We then treat various standardization processes as agent-driven search problems, examining how RL can replicate or extend traditional algorithmic methods. Background in Python programming and undergraduate-level abstract algebra will be helpful.
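To make "canonical forms as an RL environment" concrete, here is a minimal sketch using the Gymnasium API; the state, action and reward encoding below is an illustrative choice for this page, not a prescribed design for the project.

    # Minimal sketch of "Gaussian elimination as an RL environment" with
    # the Gymnasium API; the encoding of states, actions and rewards is an
    # illustrative choice, not a prescribed design.
    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class RowReduceEnv(gym.Env):
        """Reduce a random n x n integer matrix to upper-triangular form.

        Actions are ordered pairs (i, j) with i != j:
          - if A[i, i] != 0: row_j <- row_j - (A[j, i] / A[i, i]) * row_i
          - otherwise:       swap rows i and j
        The reward is the change in the number of zero entries below the
        diagonal, and the episode terminates once A is upper triangular.
        """

        def __init__(self, n: int = 3, max_steps: int = 50):
            super().__init__()
            self.n, self.max_steps = n, max_steps
            self.pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
            self.action_space = spaces.Discrete(len(self.pairs))
            self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n, n), dtype=np.float64)

        def _zeros_below(self):
            return int(np.sum(np.isclose(np.tril(self.A, k=-1), 0.0)))

        def reset(self, *, seed=None, options=None):
            super().reset(seed=seed)
            self.A = self.np_random.integers(-5, 6, size=(self.n, self.n)).astype(np.float64)
            self.steps = 0
            return self.A.copy(), {}

        def step(self, action):
            i, j = self.pairs[action]
            before = self._zeros_below()
            if not np.isclose(self.A[i, i], 0.0):
                self.A[j] -= (self.A[j, i] / self.A[i, i]) * self.A[i]
            else:
                self.A[[i, j]] = self.A[[j, i]]
            self.steps += 1
            reward = float(self._zeros_below() - before)
            terminated = np.allclose(np.tril(self.A, k=-1), 0.0)
            truncated = self.steps >= self.max_steps
            return self.A.copy(), reward, terminated, truncated, {}

    # Sanity check with a random policy; an RL agent would replace this loop.
    env = RowReduceEnv()
    obs, _ = env.reset(seed=0)
    done = False
    while not done:
        obs, r, term, trunc, _ = env.step(env.action_space.sample())
        done = term or trunc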
Team FM (Formalization)
- Team Lead: Jihoon Hyun (KAIST)
- Title: Formalization
- Abstract: Formalizing mathematics is a topic that has attracted much interest recently. Our team will formalize several extremal problems in combinatorics in Lean4. In particular, we consider 0-1 pattern matrices of small size, bound the number of n-by-n matrices that avoid the pattern, and examine related theorems. Throughout the program, we aim to make extensive use of Lean4 with Mathlib4 by defining and formalizing the statements above.
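As a possible starting point for the definitions involved (an illustrative Lean4 sketch in the style of Mathlib4, not the team's actual development), pattern containment for 0-1 matrices can be phrased via strictly monotone row and column selections:

    import Mathlib

    -- 0-1 matrices encoded as Boolean-valued matrices over `Fin` index types.
    -- A k×l pattern P is contained in an m×n matrix M if there are strictly
    -- increasing choices of k rows and l columns of M such that every 1-entry
    -- of P lands on a 1-entry of M at the selected positions.
    def ContainsPattern {k l m n : ℕ} (P : Matrix (Fin k) (Fin l) Bool)
        (M : Matrix (Fin m) (Fin n) Bool) : Prop :=
      ∃ r : Fin k → Fin m, ∃ c : Fin l → Fin n,
        StrictMono r ∧ StrictMono c ∧
          ∀ i j, P i j = true → M (r i) (c j) = true

    -- A matrix avoids the pattern if it does not contain it.
    def AvoidsPattern {k l m n : ℕ} (P : Matrix (Fin k) (Fin l) Bool)
        (M : Matrix (Fin m) (Fin n) Bool) : Prop :=
      ¬ ContainsPattern P M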
Timetable
Day 1 (Tue, Dec 2)
09:20 - 12:40 Departure: KIAS → Park Roche
12:40 - 14:30 Lunch & check-in
14:30 - 14:50 Welcome remarks & brief introduction to the program
14:50 - 18:00 Team project session
18:15 - 20:00 Dinner
Day 2 (Wed, Dec 3)
09:20 - 10:20 Talk 1: Dohyun Kwon (University of Seoul)
10:20 - 10:30 Discussion
10:30 - 12:20 Team project session
12:30 - 14:00 Lunch
14:00 - 18:00 Team project session and brief progress reports
18:15 - 20:00 Dinner
Day 3 (Thu, Dec 4)
09:20 - 10:20 Talk 2: Kyu‑Hwan Lee (University of Connecticut / KIAS)
10:20 - 10:30 Discussion
10:30 - 12:20 Team project session
12:30 - 14:00 Lunch
14:00 - 18:00 Team project session and final team presentations
18:15 - 20:00 Dinner
Day 4 (Fri, Dec 5)
09:20 - 10:20 Talk 3: Hongseok Yang (KAIST)
10:20 - 12:20 Discussion
12:20 Lunch & return to KIAS
Note: Meals from lunch on Tue, Dec 2 to lunch on Fri, Dec 5 (10 meals in total) will be provided by the organizers.
Note: The seminar room will be open in the evenings for informal use (project work, discussions, self‑study).
Venue
- Park Roche, Jeongseon — https://park-roche.com/
Transportation
- Shuttle Bus
- A shuttle bus will be arranged between KIAS and Park Roche for participants.
- KIAS → Park Roche: Dec 2 (Tue) 9:20 AM (approx. 3 h 30 min)
- Park Roche → KIAS: Dec 5 (Fri) 1:30 PM (approx. 3 h 30 min)
- Times are subject to change.
Registration
Applications were submitted via Google Form: https://forms.gle/oaApy67SgbAsRrYMA (application deadline: Nov 13, 18:00 KST).
Registration is now closed.