A shallow grip on neural networks (What is the "universal approximation theorem"?)

4,879 views

Sheafification of G

1 day ago

The "universal approximation theorem" is a catch-all term for a bunch of theorems regarding the ability of the class of neural networks to approximate arbitrary continuous functions. How exactly (or approximately) can we go about doing so? Fortunately, the proof of one of the earliest versions of this theorem comes with an "algorithm" (more or less) for approximating a given continuous function to whatever precision you want.
(I have never formally studied neural networks.... is it obvious? 👉👈)
The original manga:
[LLPS93] M. Leshno, V.Y. Lin, A. Pinkus, S. Schocken. Multilayer feedforward networks with a non-polynomial activation function can approximate any function. Neural Networks, 6(6):861–867, 1993.
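For a concrete taste of that "algorithm" (this sketch is mine, not from the video): the engine behind Step 1 is the identity d^k/dw^k σ(wx + b) at w = 0 equals x^k σ^(k)(b), so finite differences of a smooth activation σ in its *weight* recover monomials. A minimal numerical sketch in Python, with the sigmoid activation, bias b, and step h all being my own choices:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Monomial step: d^2/dw^2 sigmoid(w*x + b) at w = 0 equals x^2 * sigmoid''(b),
# so a central second difference in the WEIGHT w recovers x^2,
# up to the known constant sigmoid''(b).
b = 1.0   # bias, chosen so that sigmoid''(b) != 0
h = 0.01  # finite-difference step in the weight
sig2 = sigmoid(b) * (1 - sigmoid(b)) * (1 - 2 * sigmoid(b))  # closed form of sigmoid''(b)

x = np.linspace(0.0, 1.0, 101)
# A weighted sum of three sigmoid neurons that approximates x^2 on [0, 1]:
approx = (sigmoid(h * x + b) - 2 * sigmoid(b) + sigmoid(-h * x + b)) / (h * h * sig2)
print(np.max(np.abs(approx - x**2)))  # sup-norm error on [0, 1]; about 1e-5 here

Steps 2-5 of the video then assemble these monomial approximations into polynomials, multivariable polynomials, and finally arbitrary (vector-valued) continuous functions.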
________________
Timestamps:
00:00 - Intro (ABCs)
01:08 - What is a neural network?
02:37 - Universal Approximation Theorem
03:37 - Polynomial approximations
04:26 - Why neural networks?
05:00 - How to approximate a continuous function
05:55 - Step 1 - Monomials
07:07 - Step 2 - Polynomials
07:33 - Step 3 - Multivariable polynomials (buckle your britches)
09:35 - Step 4 - Multivariable continuous functions
09:47 - Step 5 - Vector-valued continuous functions
10:20 - Thx 4 watching

Comments: 33
@connor9024 · 1 month ago
It’s t-22 hours until my econometrics final, I have been studying my ass off, I’m tired, I have no idea what this video is even talking about, I’m hungry and a little scared.
@SheafificationOfG · 1 month ago
Did you pass? (Did the universal approx thm help?)
@davidebic · 26 days ago
@SheafificationOfG Using this theorem he could create a neural network that approximates test answers to an arbitrarily good degree, thus getting an A-.
@henriquenoronha1392 · 27 days ago
Came for the universal approximation theorem, stayed for the humor (after the first pump-up I didn't understand a word). Great video!
@raneena5079 · 1 month ago
super underrated channel
@dinoscheidt · 25 days ago
5:07 phew, this channel is gold. Basic enough that I understand what's going on as an applied ML engineer, and smart enough that I feel like I would learn something. Subscribed.
@gbnam8 · 1 month ago
As someone who is really interested in pure maths, I think YouTube should really have more videos like these. Keep it up!
@SheafificationOfG · 1 month ago
Happy to do my part 🫡
@neelg7057 · 1 month ago
I have no idea what I just watched
@raspberryspicelatte65 · 26 days ago
Did not expect to see a Jim's Big Ego reference here
@antarctic214 · 24 days ago
Re: 6:20. The secant line approximation converges at least pointwise, but for the theorem we need uniform/sup-norm convergence, and I don't see why that holds for the secant approximation.
@SheafificationOfG · 24 days ago
Good catch! The secret sauce here is that we're using a smooth activation function, and we're only approximating the function over a closed interval. For a smooth function f(x), the absolute difference between df/dx at x and a secant line approximation (of width h) is bounded by M*h/2, where M is a bound on the absolute value of the second derivative of f(x) between x and x+h [this falls out of the Lagrange form of the error in Taylor's Theorem]. If x is restricted to a closed interval, we can choose the absolute bound M of the second derivative to be independent of x (and h, if h is bounded), and this gives us a uniform bound on convergence of the secant lines.
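In symbols (spelling out the reply above; notation mine): for f smooth on a closed interval [a, b] and bounded h > 0, Taylor's theorem with the Lagrange remainder gives, for each x and some ξ between x and x + h,

\[
  \frac{f(x+h) - f(x)}{h} - f'(x) \;=\; \frac{f''(\xi)}{2}\,h
  \qquad\Longrightarrow\qquad
  \sup_{x \in [a,\,b]} \left|\frac{f(x+h) - f(x)}{h} - f'(x)\right| \;\le\; \frac{M h}{2},
\]

where M bounds |f''| on [a, b + h] (finite because f'' is continuous on a compact interval). The right-hand side does not depend on x and tends to 0 with h, which is exactly the uniform (sup-norm) convergence asked about.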
@kennycommentsofficial · 26 days ago
@6:22 missed opportunity for the canonical "recall from grade school" joke
@SheafificationOfG · 25 days ago
😔
@Baer2 · 1 month ago
I don't think I'm part of the target group for this video (I have no idea what the fuck you are talking about), but it was still entertaining and allowed me to feel smart whenever I was able to make sense of anything (I know what f(x) means), so have a like and a comment, and good luck with your future math endeavors!!
@SheafificationOfG · 1 month ago
Haha thanks, I really appreciate the support! The fact that you watched it makes you part of the target group. Exposure therapy is a pretty powerful secret ingredient in math.
@akhiljalan11 · 25 days ago
Great content
@98danielray · 1 month ago
I suppose I can show the last equality at 9:04 using induction on monomial operators?
@SheafificationOfG · 1 month ago
Yep! Linearity of differentiation allows you to assume q is just a k-th order mixed partial derivative, and then you can proceed by induction on k.
@decare696 · 1 month ago
This is by far the math channel with the best jokes. Sadly, I don't know any Chinese, so I couldn't figure out who 丩的層化 is. The best any translator would give me was "Stratification of ???"...
@SheafificationOfG · 1 month ago
Haha, thanks! 層化 can mean "sheafification" and ㄐ is zhuyin for the sound "ji"
@smellslikeupdog80 · 26 days ago
your linguistic articulation is extremely specific and 🤌🤌🤌
@noahgeller392 · 28 days ago
banger vid
@SheafificationOfG · 28 days ago
thanks fam
@gabrielplzdks3891 · 1 month ago
Any videos coming about Kolmogorov–Arnold networks?
@SheafificationOfG · 1 month ago
I didn't know about those prior to reading your comment, but they look really interesting! Might be able to put something together in the future; stay tuned.
@kuzuma4523 · 1 month ago
Okay, but does the manga have good applications? Does it train faster or something? 😊 (Please help me, I like mathing but the world is corrupting me with its engineering.)
@SheafificationOfG · 1 month ago
The manga is an enjoyable read (though it's an old paper), but it doesn't say anything about how well neural networks train; it's only concerned with the capacity of shallow neural networks in approximating continuous functions (that we already "know" and aren't trying to "learn"). In particular, it says nothing about training a neural network with tune-able parameters (and fixed size). (I feel your pain, though; that's what brought me to make a channel!)
@kuzuma4523 · 1 month ago
@SheafificationOfG Fair enough. I'll still give it a read. Also, thanks for the content; it felt like fresh air watching high-level maths with comedy. I shall therefore use the successor function on your sub count.
@korigamik · 1 month ago
Can you share the pdf of the notes you show in the video?
@SheafificationOfG · 1 month ago
If you're talking about the source of the proof I presented, the paper is in the description: M. Leshno, V.Y. Lin, A. Pinkus, S. Schocken. Multilayer feedforward networks with a non-polynomial activation function can approximate any function. Neural Networks, 6(6):861–867, 1993. If you're talking about the rest, I actually just generated LaTeX images containing specifically what I presented; they didn't come from a complete document. I might *write* such documents down the road for my videos, but that depends heavily on the free time I have, and on general demand.
@korigamik · 1 month ago
@SheafificationOfG Then demand there is. I love well-written explanations to read, not just see.
@SheafificationOfG · 1 month ago
@korigamik I'll keep debating whether to write up supplementary material for my videos (though I don't want to make promises). In the meantime, though, I highly recommend reading the reference I cited: it's quite well-written (and, of course, the argument is more complete).