Sparsity and the L1 Norm

47,419 views

Steve Brunton

Here we explore why the L1 norm promotes sparsity in optimization problems. This is an incredibly important concept in machine learning, and data science more broadly, as sparsity helps us to improve robustness and prevent overfitting.
Book Website: databookuw.com
Book PDF: databookuw.com/databook.pdf
These lectures follow Chapter 3 from:
"Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by Brunton and Kutz
Amazon: www.amazon.com/Data-Driven-Sc...
Brunton Website: eigensteve.com
This video was produced at the University of Washington
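
A minimal sketch of the idea in the description, assuming a random underdetermined system Ax = b whose data come from a sparse vector (this example is illustrative and is not taken from the book or the video): the minimum-L2-norm solution from the pseudo-inverse is dense, while minimizing the L1 norm, posed as a linear program, typically recovers something close to the sparse one.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 40, 100                       # underdetermined: 40 equations, 100 unknowns
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true                       # data generated by a 5-sparse vector

# Minimum-L2-norm solution (pseudo-inverse): generically dense
x_l2 = np.linalg.pinv(A) @ b

# Minimum-L1-norm solution as a linear program:
# write x = u - v with u, v >= 0 and minimize sum(u) + sum(v) subject to A(u - v) = b
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
x_l1 = res.x[:n] - res.x[n:]

print("nonzero entries, L2 solution:", np.count_nonzero(np.abs(x_l2) > 1e-6))
print("nonzero entries, L1 solution:", np.count_nonzero(np.abs(x_l1) > 1e-6))
# The L1 solution is typically supported on just the few true nonzeros,
# while the L2 solution spreads energy across all 100 entries.
```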

Comments: 57
@Ninjashifter 3 years ago
Currently losing my mind over how effective that simple diamond shape is in finding the 1-norm, I suddenly want to research topology. Amazing stuff.
@Eigensteve 3 years ago
I know, it is kind of a mind twister. It gets even more interesting in higher dimensions...
@danbochman 3 years ago
It's amazing how you can work for years with a function and still gain new insights about it and improve your intuition. Thank you so much for these videos.
@alexhartung2369 3 years ago
Mind blown. This guy is a rare breed. Loving the content.
@nmerrillvt 2 years ago
Amazing videos. The quality and clarity are above and beyond. Thanks!
@pevogam 3 years ago
These are all very visual and well made lectures, thanks a lot for the high quality videos and openly available materials!
@SamAndHenryPlaz 3 years ago
Finally got a nibble of insight into the L1 and L2 norms they bring up in Ng's Coursera classes. Seems like it would be good to use when you want to prune the size of a network after training. I love the chunks of insight these videos give and the very understandable way they're explained.
@aantoine5819 3 years ago
This was a perfect supplement to a lecture I just watched for class
@sahilchawla2485 2 years ago
Best 11 minutes spent! Thanks a lot for distinguishing your solution from the rest of the internet!
@PunmasterSTP 1 year ago
Sparsity? More like “Super explanation that’s gotta be”…one of the best I’ve ever heard on the topic. Thank you so much for making and sharing all of these incredibly high-quality videos!
@jasleenkaur1208 2 years ago
Really the best videos... I have been reading up on dictionary creation for the last two weeks, and now I understand the concept.
@chaser27 3 years ago
You've convinced me to buy your book based on these videos.
@Eigensteve 3 years ago
Awesome, hope you like it!
@yuhuawei1763 2 years ago
Excellent explanation! Thank you!
@adityamrai8508 3 years ago
You are simply amazing. Keep inspiring.😎
@andreygrant8676 3 years ago
I don't know why this came up on my suggestions list, but it was a good explanation! Thank you Steve :)
@TheLuceArs 3 years ago
Same, I was literally browsing YT suggestions while waiting overnight at the airport and saw one of these videos. Who would've known it would become the most useful night of my university years.
@hoaxuan7074 3 years ago
It is quite simple to use smoothing to solve compressed sensing, though I found it quite slow. You repeatedly correct with the compressed-sensing data, smooth, correct, smooth... The binomial filter is quite good. These days I would prefer to solve it with a neural net in one step, which is likely to be much faster.
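
A rough sketch of the correct-then-smooth loop described above, assuming a random Gaussian measurement matrix, a smooth test signal, and an untuned step size and iteration count; it is only meant to make the iteration concrete, not to be an accurate reconstruction method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                                  # signal length, number of measurements
t = np.linspace(0, 1, n)
x_true = np.cos(2 * np.pi * 4 * t)              # a smooth test signal
A = rng.standard_normal((m, n))                 # compressed-sensing measurement matrix
y = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2          # small enough for the correction step
kernel = np.array([1.0, 2.0, 1.0]) / 4.0        # binomial smoothing filter

x = np.zeros(n)
for _ in range(300):
    x = x + step * A.T @ (y - A @ x)            # correct: move toward A x = y
    x = np.convolve(x, kernel, mode="same")     # smooth

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```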
@nguyenngocly1484 3 years ago
You can train neural networks with sparse mutations; Continuous Gray Code Optimization mutations work well. Each GPU gets the full neural model and part of the training set, so not much data needs to move around: just a sparse mutation list, the costs sent back, and an accept-or-reject decision. A fixed random pattern of sign flips is a simple random projection; if you follow it with a Walsh-Hadamard transform you get a much better random projection. The WHT acts as a system of fixed orthogonal dot products. The variance equation for linear combinations of random variables applies to dot products, and the CLT applies not just to sums but to any combination of adding and subtracting (cf. the WHT). You can also have inside-out neural networks with fixed dot products and parametric (adjustable) activation functions, i.e. fast-transform fixed-filter-bank neural nets: do a simple random projection before the net to stop the first layer from taking a spectrum, and a final fast transform as a pseudo-readout layer. f(x)=x.ai x=0, i=0 to n-1 is a suitable parametric activation function.
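
For the random-projection part of the comment above, here is a small illustrative sketch (my own, not from the comment) of a fixed random sign flip followed by a fast Walsh-Hadamard transform, which spreads a sparse input across all coordinates in O(n log n):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = x.astype(float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x / np.sqrt(len(x))                  # orthonormal scaling

rng = np.random.default_rng(0)
n = 1024
signs = rng.choice([-1.0, 1.0], size=n)         # fixed random pattern of sign flips

def random_projection(v):
    return fwht(signs * v)                      # sign flips + WHT, a cheap stand-in
                                                # for a dense random matrix

v = np.zeros(n)
v[3] = 1.0                                      # a very sparse input
print(random_projection(v)[:5])                 # its energy is now spread everywhere
```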
@hashirroshinvaliyaparambil70 3 years ago
Hey Prof. Steve, could you please make videos about Lyapunov stability, and especially nonlinear systems?
@proteuswave 2 months ago
Great video. Also... your setup is nuts! Writing backward? Mirror image? Does not compute!
@alegian7934 3 years ago
I love how the Y variable sounds like "why".
@yasir9909 2 years ago
Sir, do you also have a similarly detailed lecture on the sparsity introduced by the l-infinity norm?
@cDtag00 3 years ago
Visually, l_{2/3} looks like an attractive choice as well, as it approximates l_0 even better than l_1 does. What's the advantage of l_1 over l_{2/3}?
@NowanIlfideme 3 years ago
I think, like the 0-norm, it's not possible to use convex optimization to find solutions with a norm of p < 1.
@cDtag00 3 years ago
@@NowanIlfideme Thank you, makes sense... I was indeed thinking of neural networks and other forms of non-linear optimization
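
A quick numeric check of the convexity point in this thread (illustrative, not from the video): for p = 1 the midpoint of two unit-norm vectors stays on the unit ball, but for p = 2/3 it lands outside, so the l_{2/3} ball is non-convex and the minimization can no longer be posed as a convex program.

```python
import numpy as np

def lp_norm(v, p):
    # for p < 1 this is only a quasi-norm, but the formula is the same
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # both have unit norm for any p
mid = 0.5 * (x + y)

for p in (1.0, 2.0 / 3.0):
    print(f"p = {p:.3f}: ||x|| = {lp_norm(x, p):.3f}, "
          f"||midpoint|| = {lp_norm(mid, p):.3f}")
# p = 1:   midpoint norm is 1.000 -> the diamond is convex
# p = 2/3: midpoint norm is ~1.414 -> the ball bulges inward, i.e. non-convex
```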
@EduardoGarcia-tv2fc 4 years ago
Are we gonna get more videos on chapter 3?
@varunjohnson9570 4 years ago
Sir, when can we expect continuation videos for Chapter 3?
@NowanIlfideme 3 years ago
Hi, will you also be covering the mathematical background of the L1 sparsity property? I understand it intuitively, but I haven't found a good intuitive and rigorous mathematical proof/explanation of it.
@umangyadav4787 3 years ago
Hey Steve, I have a question: in lasso we have the regularisation term |w|. If this is changed to w^2/(1+w^2), what change would it make to the coefficients?
@meganauger354 4 years ago
Sort of a random question, but what would happen if, when using the L1 norm, the s vector was perfectly parallel to one of the sides of the L1 square? Would it return a dense solution, instead of sparse? Would that mean you are in an inappropriate basis for a sparse solution? Thanks!
@viniciusvilela5371 2 years ago
I guess that's why he said it often brings a sparse solution, not always...
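
To make the "parallel" case above concrete, here is a tiny illustrative check (not from the video): with the constraint x1 + x2 = 1, which runs parallel to a face of the L1 diamond, the sparse corners and the dense midpoint all have the same L1 norm, so the minimizer is an entire face rather than a single sparse point.

```python
import numpy as np

# Constraint x1 + x2 = 1 is parallel to one face of the L1 unit diamond.
candidates = {
    "sparse corner (1, 0)":      np.array([1.0, 0.0]),
    "sparse corner (0, 1)":      np.array([0.0, 1.0]),
    "dense midpoint (0.5, 0.5)": np.array([0.5, 0.5]),
}
for name, x in candidates.items():
    assert np.isclose(x.sum(), 1.0)             # all satisfy the constraint
    print(f"{name}: ||x||_1 = {np.abs(x).sum():.2f}")
# All three candidates have L1 norm 1, so every point on that face is a minimizer;
# a generic (non-parallel) line would touch the diamond only at a sparse corner.
```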
@meccamiles7816 3 years ago
Does someone have a simple way of describing why the L0 sparse solution is NP complete?
@praveencv431 2 years ago
Any formal proof that l1-norm minimisation gives the sparsest solution in higher dimensions? I was trying to find a reference for it but was unable to get one.
@TheMazyProduction 3 years ago
Is L_infinity norm the most uniform solution?
@deez_gainz 3 years ago
Does L1-norm regularization on a neural network layer also promote sparse weights in that layer?
@SamAndHenryPlaz 3 years ago
I'm wondering this as well. It does seem that way.
@zrmsraggot 2 years ago
What's the difference between a sparse solution coming up on S1 and one on S2?
@xinyichen4332 3 years ago
Hi, I still don’t understand why finding the L0 norm of S is NP hard. Could you please explain a bit? I’d appreciate your help. Thanks!
@Eigensteve 3 years ago
Good question. So computing the L0 norm of a vector is simple: just count all of the non-zero entries. But finding the solution "x" of a system of equations Ax=b where x has the minimum L0 norm is NP hard. You would have to search through all possible sparse vectors (you would try all vectors with only one entry, then all vectors with only 2 entries, then all vectors with only 3 entries), until you find the sparsest solution. This is a combinatorial search that scales like n! (where n is the dimension of x). n! grows way too fast for modern computers to handle.
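
A toy sketch of the combinatorial search described in the reply above (illustrative code, not from the book): it enumerates candidate supports of size 1, 2, 3, ... and solves a small least-squares problem on each, which is exactly the kind of search whose cost explodes with the dimension of x.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
m, n = 5, 10                                    # underdetermined system Ax = b
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[2, 7]] = [1.5, -2.0]                    # a 2-sparse ground truth
b = A @ x_true

def sparsest_solution(A, b, tol=1e-8):
    m, n = A.shape
    for k in range(1, n + 1):                   # supports of size 1, then 2, then 3, ...
        for support in combinations(range(n), k):   # C(n, k) candidates at each size
            cols = list(support)
            xs, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
            if np.linalg.norm(A[:, cols] @ xs - b) < tol:
                x = np.zeros(n)
                x[cols] = xs
                return x, k
    return None, None

x_hat, k = sparsest_solution(A, b)
print("sparsest support size found:", k)        # 2 here, but the number of supports
                                                # to check grows combinatorially with n
```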
@johnathancorgan3994 2 years ago
Steve, I love how your abbreviation of "solution" is "sol" to nth power 😏
@mikefischbein3230 2 years ago
I keep picturing scenes from Cheers where Norm walks into the bar and everyone shouts "Norm!" Then Sam asks "What'll it be today, Norm? L1 or L2?"
@adityagarg259 1 year ago
What if the line and the one-norm diamond become parallel?
@davidmadsen2761 3 years ago
Isn't the taxicab norm L1? As in the distance a car would have to drive in a grid system.
@zakaryaelkhiyati8563 3 years ago
neat
@alexeyl22 3 years ago
"there are infinitely many asses" 😂😂😂. You said it two times and I concur
@enigmabloom 3 years ago
Shoot, this guy's from UW? I'm doing the master's for EEs there right now lol
@MrAlextorex 3 years ago
He writes backwards. How impressive!
@davidmadsen2761 3 years ago
Probably mirrored in editing, so this is not actually how Steve looks in person.
@tcratius1748 3 years ago
How do you prove an L0 solution if you cannot compute it? Maths is weird, yet the L1 is handy for a project I'm doing, so I should really be asking: how did you read my mind... 👻 PS: is it useful for imbalanced classes?
@snippletrap 3 years ago
It's computable but not differentiable (since non-convex). Later in the video he conflates "computable" with "tractable", but that is a mistake and not true.
@tcratius1748 3 years ago
@@snippletrap Hmm, as a fairly poor mathematician studying data science at uni, all this computability stuff takes a whole new and incredibly sparse vocabulary that seems to differ depending on the author's preference of symbols. Oh well, just keep on keeping on. :)
@snippletrap 3 years ago
@@tcratius1748 Among computer scientists these terms are not ambiguous. "Computable" means it can be computed. Not everything can be computed -- see the Halting Problem for instance. "Tractable" generally means that computation time is not exponential in the size of the input. Hard problems like non-convex optimization are computable but not tractable. A computer will eventually solve those problems, but it may take centuries to do so.
@tcratius1748 3 years ago
@@snippletrap Yeah, I guess it didn't help that I did biology for my bachelor's. Oh well, enjoy; if you really wish to chat more I can give you my LinkedIn, otherwise cheers for the chat.
@goodgame7474 3 years ago
This comment is literally literal!
@DerekWoolverton 3 years ago
Now I have to go searching for the formula for the l_{2/3} norm. I can guess what it is, but it seems oddly out of place compared to all the integer norms, so I'm curious where it originally came up.
@MrAlextorex 3 years ago
He tells a nice story, but his explanation requires a lot of background knowledge, so in the end I do not feel very satisfied. It would also be useful to add more detailed tutorial links for further study.