Generalization in diffusion models from geometry-adaptive harmonic representation | Zahra Kadkhodaie

  1,124 views

Valence Labs

A day ago

Portal is the home of the AI for drug discovery community. Join for more details on this talk and to connect with the speakers: portal.valencelabs.com/logg
Abstract: High-quality samples generated with score-based reverse diffusion algorithms provide evidence that deep neural networks (DNNs) trained for denoising can learn high-dimensional densities, despite the curse of dimensionality. However, recent reports of memorization of the training set raise the question of whether these networks are learning the "true" continuous density of the data. Here, we show that two denoising DNNs trained on non-overlapping subsets of a dataset learn nearly the same score function, and thus the same density, with a surprisingly small number of training images. This strong generalization demonstrates an alignment of powerful inductive biases in the DNN architecture and/or training algorithm with properties of the data distribution. We analyze these, demonstrating that the denoiser performs a shrinkage operation in a basis adapted to the underlying image. Examination of these bases reveals oscillating harmonic structures along contours and in homogeneous image regions. We show that trained denoisers are inductively biased towards these geometry-adaptive harmonic representations by demonstrating that they arise even when the network is trained on image classes supported on low-dimensional manifolds, for which the harmonic basis is suboptimal. Additionally, we show that the denoising performance of the networks is near-optimal when trained on regular image classes for which the optimal basis is known to be geometry-adaptive and harmonic.
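The abstract's central observation, that denoising amounts to shrinking coefficients in a suitable basis, can be illustrated with a minimal NumPy sketch. Here a fixed orthonormal DCT basis and soft-thresholding stand in for the learned, geometry-adaptive basis and shrinkage function discussed in the talk; `dct_basis`, `shrinkage_denoise`, and the threshold `lam` are illustrative choices, not the authors' method.

```python
import numpy as np

def dct_basis(n):
    # Columns form an orthonormal DCT-II basis for R^n.
    m = np.arange(n)[:, None]
    k = np.arange(n)[None, :]
    B = np.cos(np.pi * (m + 0.5) * k / n)
    B[:, 0] /= np.sqrt(n)
    B[:, 1:] *= np.sqrt(2.0 / n)
    return B

def shrinkage_denoise(y, B, lam):
    # Analyze in the basis, soft-threshold the coefficients, synthesize.
    c = B.T @ y
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
    return B @ c

rng = np.random.default_rng(0)
n = 64
m = np.arange(n)
clean = 2.0 * np.cos(np.pi * (m + 0.5) * 3 / n)  # sparse in the DCT basis
noisy = clean + 0.3 * rng.standard_normal(n)
denoised = shrinkage_denoise(noisy, dct_basis(n), lam=0.5)
```

Because the noise spreads evenly across all basis coefficients while the signal concentrates in a few large ones, thresholding suppresses the noise at little cost to the signal. The paper's point is that trained denoisers implement an analogous shrinkage, but in a harmonic basis adapted to each image's geometry rather than a fixed one.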
Speaker: Zahra Kadkhodaie
Twitter Hannes: @hannesstaerk
Twitter Dominique: @dom_beaini
~
Chapters
00:00 - Intro + Background
06:40 - Diffusion Models + Denoising
23:42 - Transition from Memorization to Generalization
50:19 - Denoising as Shrinkage in a Basis
1:01:30 - Inductive Biases
1:11:50 - Q + A

Comments: 4
@johnparkhill2963 3 months ago
Beautiful work.
@ML_n00b 3 months ago
Great paper
@mehrdadmirpourian64 3 months ago
Wonderful paper!
@namjoonsuh8095 A month ago
Seminal work