SMS scnews item created by Hongwei Wen at Tue 14 Apr 2026 1540
Type: Seminar
Modified: Tue 14 Apr 2026 1548
Distribution: World
Expiry: 14 Apr 2027
Calendar1: 20 Apr 2026 1300-1400
Auth: hongweiw@hongweis-mbp.shared.sydney.edu.au (hwen0178) in SMS-SAML

Machine Learning Seminar: Gong -- A Thousand Faces of Continual Learning: Self-Improving AI with Modular Learning and Memory

The details of the machine learning seminar are as follows:

Time: Mon 20 April, 1:00 - 2:00pm

Location: SMRI Seminar Room (A12-03-301), Macleay Building (A12), Level 3, Room 301.

Speaker: Dong Gong (UNSW) 

Title: A Thousand Faces of Continual Learning: Self-Improving AI with Modular Learning
and Memory 

Abstract: No matter how capable, a static AI cannot meet the demands of real-world
deployment, where tasks shift, knowledge ages, users differ, and environments evolve.
Yet the vast majority of machine learning methods operate under an essentially static
paradigm: train once, freeze, deploy.  Enabling models, including today's foundation
models, to keep learning is the central promise of continual learning, and, more
ambitiously, of *self-improving AI*.  It is also where one of the field's most stubborn
challenges lives: catastrophic forgetting, the tendency of models to lose what they have
already learned as soon as they acquire something new.  Continual learning is therefore
not optional but necessary.  And yet, in the current landscape, it is often dismissed
from two opposite directions: either declared impossible, or quietly claimed to be
"already solved".  In this talk, I want to present a more realistic picture: *a thousand
faces of continual learning*.  It is not a single algorithm but a spectrum of
mechanisms, spanning training-time updates, parameter editing, modular expansion,
associative memory, and test-time adaptation, each embodying different expectations of
an AI system.  At its core, continual learning is a question of *learning* and *memory*:
what to change, what to preserve, and how knowledge lives.  I will share my perspective
on modular learning and memory mechanisms that localise change, mitigate catastrophic
forgetting, and scale naturally to foundation models.  I will then present
several of our recent works that instantiate this philosophy: dynamic Mixture-of-Experts
for model expansion, rank-1 fine-grained memory for precise knowledge injection,
on-demand expansion driven by task difficulty, and dynamic test-time self-adaptation.
Together, these span LLMs, multimodal LLMs, diffusion models, and agentic systems,
pointing toward AI that continues to evolve after pre-training ends.
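For readers unfamiliar with these mechanisms, the sketch below is a generic,
illustrative example of the "localise change" idea; it is not code from the talk or
from the speaker's papers, and the class name RankOneMemory and all parameter choices
are hypothetical.  A rank-1 memory module is trained to absorb new knowledge while the
backbone layer stays frozen, so the edit is confined to two small vectors.

    # Illustrative sketch only (hypothetical names, not the speaker's implementation):
    # inject new knowledge via a trainable rank-1 correction on a frozen layer.
    import torch
    import torch.nn as nn

    class RankOneMemory(nn.Module):
        """Adds a rank-1 update u v^T on top of a frozen linear layer."""

        def __init__(self, frozen_linear: nn.Linear):
            super().__init__()
            self.base = frozen_linear
            for p in self.base.parameters():
                p.requires_grad_(False)          # the pre-trained weights never change
            d_out, d_in = frozen_linear.weight.shape
            self.u = nn.Parameter(torch.zeros(d_out))     # zero init: starts as a no-op
            self.v = nn.Parameter(torch.randn(d_in) * 0.01)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Equivalent to using the weight W + u v^T, without materialising it:
            # (x @ v) gives one scalar per example, scaled into the u direction.
            return self.base(x) + (x @ self.v).unsqueeze(-1) * self.u

    # Only the two memory vectors are optimised; the frozen weights cannot drift
    # while the new fact is being absorbed, which limits forgetting by construction.
    layer = RankOneMemory(nn.Linear(16, 16))
    optimiser = torch.optim.Adam([layer.u, layer.v], lr=1e-2)
    x, target = torch.randn(4, 16), torch.randn(4, 16)    # toy "new knowledge" batch
    loss = nn.functional.mse_loss(layer(x), target)
    loss.backward()
    optimiser.step()

In the same spirit, one way to read "on-demand expansion driven by task difficulty" is
to keep a collection of such modules and append a new one only when the loss on
incoming data stays above a threshold, though the actual methods presented in the talk
may differ.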