2021's Biggest Breakthroughs in Math and Computer Science

1,105,283 views

Quanta Magazine

1 day ago

It was a big year. Researchers found a way to idealize deep neural networks using kernel machines, an important step toward opening these black boxes. There were major developments toward an answer about the nature of infinity. And a mathematician finally managed to model quantum gravity.
Read the articles in full at Quanta Magazine: www.quantamagazine.org/the-ye...
- VISIT our Website: www.quantamagazine.org
- LIKE us on Facebook: / quantanews
- FOLLOW us on Twitter: / quantamagazine
Quanta Magazine is an editorially independent publication supported by the Simons Foundation www.simonsfoundation.org/

Comments: 824
@QuantaScienceChannel
@QuantaScienceChannel 2 жыл бұрын
Read the articles in full at Quanta Magazine: www.quantamagazine.org/the-year-in-math-and-computer-science-20211223/
@naturemc2
@naturemc2 2 жыл бұрын
Your last few videos on this channel are killing it. Need it. Much needed ❤️
@zfyl
@zfyl 2 жыл бұрын
I think the opposite. All I see here is just mathematicians coming up with new approaches to existing problems (made by previous mathematicians) and publishing new approaches. These are not results, and I feel like these are practically useless. So sad to see that the education system embraces pointless research in such overly sophisticated, yet never applied, fields of science! What a shame, as it happens against the background of a world on fire, looking for help... and what is given?... some over-engineered half solution for made-up problems...
@antoniussugianto7973
@antoniussugianto7973 2 жыл бұрын
Please Riemann hypothesis progress updates...
@EmperorZelos
@EmperorZelos 2 жыл бұрын
Uh yeah no, I have to correct you. The continuum hypothesis is UNDECIDABLE in ZFC. Meaning there is no way to decide it. There is nothing to SOLVE there, there is nothing unanswered. It was resolved and understood many many decades ago. We KNOW it is independent and we cannot say c=Aleph_1. We can assume it axiomatically if we so want, or assume its negation, and both are EQUALLY valid. What you're talking about here is adding an axiom to create a NEW axiomatic system where we CAN say it, but that does not mean it was "resolved" or anything, because we already knew the answer.
@eeemotion
@eeemotion 2 жыл бұрын
Thanks for sparing me the trouble of watching. As anything significant could be buried in such an annal. The only real breakthrough in lamestream science is how to get them to shield for a plasma environment while still thinking almost exclusively in terms of 'heat'. The almost being the novelty. Electricity still being a dirty word in space. Hence its smell at first described from the suits after a spacewalk as that of electric soldering was then peppered with burnt chicken and BBQ insinuations to make for the usual clumsy narrative reminiscent of the sticky tape on the supposed lunar landing module. Ah, who knows what's in the peel of an onion? It's a slow boil to get to the truth and for the cluttered cosmogony of the believers it seems all too much useless toil...
@ruchirkadam8510
@ruchirkadam8510 2 жыл бұрын
Man, loving these 'breakthrough' videos! It feels fulfilling to see the progress being made! I mean, finally modelling quantum gravity? Jeez!
@Djfmdotcom
@Djfmdotcom 2 жыл бұрын
Same! I think in no small part it’s because we have all these KZfaq channels focusing on them! I’d much rather watch Videos about science, exploration and learning than MSM garbage that divides us. Science brings us together!
@v2ike6udik
@v2ike6udik 2 жыл бұрын
BS. Gravity (as a separate force) is a hoax. It has been done for a reason.
@irs4486
@irs4486 2 жыл бұрын
cringe bruh, stop commenting, ratio + yb better
@sublimejourney3384
@sublimejourney3384 2 жыл бұрын
I love these videos too !!
@The.Golden.Door.
@The.Golden.Door. 2 жыл бұрын
Quantum gravity is far simpler to calculate than what modern-day physicists have known to be true.
@MargaretSpintz
@MargaretSpintz 2 жыл бұрын
Slight correction. The infinite limit of shallow neural networks as kernel machines (specifically Gaussian processes) was established in 1994 (Radford Neal). This was updated for 'ReLU' non-linearities in 2009 (Cho & Saul). In 2017 Lee & Bahri showed this result could be extended to deep neural networks. Not sure this counts as "2021's biggest breakthrough", though it is a cool result, so happy to have it publicised. 👍
@PythonPlusPlus
@PythonPlusPlus 2 жыл бұрын
I was thinking the same thing
@lexusmaxus
@lexusmaxus 2 жыл бұрын
Since there are no physical infinite machines, there must be mathematical operators that eliminate these infinities?
@hayeder
@hayeder 2 жыл бұрын
Was about to post something similar. The recent famous paper in this area is Jacot et al. with the NTK in 2018. It's also not clear to what extent this explains practice. E.g. see the work of Chizat and Bach on lazy training.
@ramkitty
@ramkitty 2 жыл бұрын
@@lexusmaxus or is infinity an inversion in some way
@Ef554rgcc
@Ef554rgcc 2 жыл бұрын
Obviously
@OneDayIMay91Bil
@OneDayIMay91Bil 2 жыл бұрын
Glad to have been a contributing member of this field; I had my first peer-reviewed paper published in IEEE this year :)
@kf10147
@kf10147 2 жыл бұрын
Congratulations!
@thatkindcoder7510
@thatkindcoder7510 2 жыл бұрын
What's the paper?
@zfyl
@zfyl 2 жыл бұрын
Too bad IEEE is just an international conglomerate of science paper resellers. I, and everybody else on this planet, want to know why you are writing these papers, and what your contributed progress is. Sorry for the negative tone, and congrats on the publishing 😉
@sampadmohanty8573
@sampadmohanty8573 2 жыл бұрын
@@zfyl Exactly. Why is everyone writing these papers? And if it is for the advancement of science, why is it not accessible to the general public? Is science a business? It is, but many intellectuals do not want to see it as such, because they want to believe that they do it for "a bigger cause" while in reality they do it selfishly, which accidentally sometimes might actually do good, without the original intent being so. Please do not point to Arxiv.
@dougaltolan3017
@dougaltolan3017 2 жыл бұрын
@@sampadmohanty8573 don't you just have to pay for access?
@MarcelBornancin
@MarcelBornancin 2 жыл бұрын
I appreciate the efforts in trying to make these heavily technical subjects understandable to the general public. Thank you all : )
@primenumberbuster404
@primenumberbuster404 2 жыл бұрын
Mathematics is like that wind your sail boat needs to move way ahead on your journey. This was so heart warming to watch. There is really a thin line between maths and magic! Thanks a lot Quanta Magazine for this beautiful summary! loved it!
@jackgallahan9669
@jackgallahan9669 2 жыл бұрын
wtf
@criscrix3
@criscrix3 2 жыл бұрын
Some bot stole your comment and slightly reworded it lmao
@michaelblankenau6598
@michaelblankenau6598 6 ай бұрын
That's a funny looking cat .
@hansolo9892
@hansolo9892 2 жыл бұрын
I have been using these kernel vector spaces for QML recently and this is one of those mathemagics I honestly adore!
@WsciekleMleko
@WsciekleMleko 2 жыл бұрын
Hi I could take 2 fists of shrooms and it still would have same sense to me as it does right now. Im glad You are happy tho.
@joshlewis575
@joshlewis575 2 жыл бұрын
@@WsciekleMleko yeah but just a few years ago you could've ate 2 ounces in your example. That's some crazy advancement, only a matter of time
@RexGalilae
@RexGalilae 2 жыл бұрын
Yo I worked on QML too back in college! I used to devour papers by Anatole Lilienfeld and Matthias Rupp coz of how interesting they were. Gaussian and Laplacian kernels were the bread and butter of my kernel ridge regression models, and I was pleasantly surprised to see kernel vector spaces here lol. It's one of the dark horses of ML
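For readers curious what a Gaussian (RBF) kernel and kernel ridge regression look like in practice, here is a minimal Python/NumPy sketch; the toy data, kernel width and regularization strength are made-up illustration values, not anything from the papers mentioned above.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # RBF/Gaussian kernel: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_ridge_fit_predict(X_train, y_train, X_test, lam=1e-3, sigma=1.0):
    # Solve (K + lam * I) alpha = y, then predict with the cross-kernel.
    K = gaussian_kernel(X_train, X_train, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

# Toy 1-D regression: fit a noisy sine wave.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
X_grid = np.linspace(-3, 3, 200)[:, None]
print(kernel_ridge_fit_predict(X, y, X_grid)[:5])
```

Swapping gaussian_kernel for a Laplacian kernel (the exponential of the negative L1 distance) changes only the similarity measure; the rest of the machinery stays the same.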
@Levi_Ackerman_7
@Levi_Ackerman_7 2 жыл бұрын
We really love watching breakthroughs in science and technology.
@midas2092
@midas2092 2 жыл бұрын
These videos last year introduced me to this channel, and yet I still have the same excitement when I see the new ones
@williamzame3708
@williamzame3708 2 жыл бұрын
Also: Aleph 1 is *by definition* the smallest cardinal bigger than Aleph 0. The question is whether the size of the continuum is Aleph 1 or something bigger ...
@alexantone5532
@alexantone5532 2 жыл бұрын
The continuum of natural numbers?
@LeBartoshe
@LeBartoshe 2 жыл бұрын
@@alexantone5532 Continuum is just a nickname for cardinality of real numbers.
@whataboutthis10
@whataboutthis10 2 жыл бұрын
and the new result makes it seem it is less likely that continuum is aleph1, which was Cantor's guess that seemed the most plausible for many years
@EM-qr4kz
@EM-qr4kz 2 жыл бұрын
If you take an infinite number of line segments, one centimeter each, then you have an infinite line. This set of line segments is aleph 0 in size, and the line is a one-dimensional object. But! If you take a square, one square centimeter in size, the parallel straight sections that make this square up are infinite, but the set of them is aleph 1 in size, and the square is a two-dimensional object. Could that be the key to dimensions? Especially when we have fractal objects to describe?
@moerkx1304
@moerkx1304 2 жыл бұрын
@@EM-qr4kz I'm not sure if you have some typos or I'm not exactly understanding what you're trying to say. But your analogy of a straight line being the natural numbers and then extending it to a square seems to me like Cantor's proof that the rational numbers are countable and hence of the same cardinality as the natural numbers.
@Epoch11
@Epoch11 2 жыл бұрын
These are really great and I hope you do more of these. Hopefully we don't have to wait till the end of the year, to get more videos which talk about breakthroughs.
@whataboutthis10
@whataboutthis10 2 жыл бұрын
this lol, give us more breakthroughs!
@markusheimerl8735
@markusheimerl8735 2 жыл бұрын
Love these videos. Gotta say as much as I wow'ed at the bubbles around our supermassive black hole in the physics video, I just have a specially warm spot in my heart for mathematics :)
@zight123
@zight123 2 жыл бұрын
Same. I know jack about math, but it's so fascinating.
@szymonbaranowski8184
@szymonbaranowski8184 Жыл бұрын
You believe in black holes? Seriously?
@Geosquare8128
@Geosquare8128 2 жыл бұрын
hadnt realized that svms were being applied to dnns like that
@alany4004
@alany4004 2 жыл бұрын
Geosquare the GOAT
@marcelo55869
@marcelo55869 2 жыл бұрын
Support vector machines are somehow equivalent to neural networks?? Who knew!?! I would love to see the proof. I might lack the fundamentals to understand everything, but it might be interesting anyway...
@cyanimpostor6971
@cyanimpostor6971 2 жыл бұрын
This has actually been around for 3 decades now. Since the 1990s in fact
@nabeelhasan6593
@nabeelhasan6593 2 жыл бұрын
Thanks to RBF kernel
@varunnayyar3138
@varunnayyar3138 2 жыл бұрын
yeah me too
@bolducfrancis
@bolducfrancis 2 жыл бұрын
The animation at 5:12 is the last piece I needed to finally understand the diagonal proof. Thank you so much for this!
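For anyone who wants to poke at the diagonal argument after watching the animation: here is a tiny, purely illustrative Python sketch. Any claimed enumeration of binary sequences (standing in for the reals) misses the sequence obtained by flipping the diagonal, because that sequence differs from the i-th entry in position i; in the actual proof both the list and the sequences are infinite.

```python
def diagonal_flip(listed_sequences):
    # Entry i of the new sequence is the opposite of digit i of sequence i,
    # so the result differs from every listed sequence in at least one place.
    return [1 - listed_sequences[i][i] for i in range(len(listed_sequences))]

claimed_enumeration = [
    [0, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
]
missing = diagonal_flip(claimed_enumeration)
print(missing)                          # [1, 0, 0, 1]
print(missing in claimed_enumeration)   # False
```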
@gregparrott
@gregparrott 2 жыл бұрын
Just discovered 'Quanta Magazine'. Your articles on Physics, Math and Biology are all top notch! Subscribed
@AdlerMow
@AdlerMow 2 жыл бұрын
Quanta Magazine is incredible! Their style makes everything accessible to the interested layman, and it grips you; you can start with any video or article and see it for yourself! So thank you, all the Quanta team and writers!
@aayankhan6734
@aayankhan6734 2 жыл бұрын
One of the few joys of the end of the year is watching these types of videos... loved it!
@yakuzzi35
@yakuzzi35 2 жыл бұрын
That's what I love about maths: lots of times something that started out as a game or a fun curiosity turns out to be extremely applicable and equivalent to something unpredictable decades later
@quentingallea166
@quentingallea166 2 жыл бұрын
You know the channel is pretty good when you watch full length video while understanding about half of the content
@szymonbaranowski8184
@szymonbaranowski8184 Жыл бұрын
No. It means it still sucks half of the time. And in this case i bet it sucks much more than a half. And it means it's useless to watch it since you end up in the same spot you started but fooled & getting more arrogant having an opposite feeling
@quentingallea166
@quentingallea166 Жыл бұрын
@@szymonbaranowski8184 When I was a teenager, I was reading Hawking, Brian Greene etc. and understood maybe 10% the first time. I would read and read again the pages and chapters to understand more each time. The world is a complex place. As a scientific researcher, I face this complexity every day. Over-simplifying is possible and useful. Kurzgesagt is a pretty neat example. However, in some cases, in my opinion, if you still want to go far, you can't explain it in 10 min simply. But well, you are perfectly free to disagree.
@MrMann163
@MrMann163 2 жыл бұрын
It's crazy how much stuff from uni started flowing back watching this. The fact that I'm actually able to understand all this complicated maths is crazy but exciting
@matthewtang1489
@matthewtang1489 2 жыл бұрын
I was like. Damn... I know all of these ideas when I was watching it. I guess I can finally taste the fruits of my university education.
@MrMann163
@MrMann163 2 жыл бұрын
@@matthewtang1489 They told me the quadratic formula would be important, but no one said I'd ever need to know set theory. Oh such ripe fruits .-.
@saiparepally
@saiparepally 2 жыл бұрын
I really hope you guys continue to publish these every year
@kevinvanhorn2193
@kevinvanhorn2193 2 жыл бұрын
Radford Neal explored this same idea of expanding the width of a neural net to infinity over a quarter-century ago, in his 1995 dissertation, Bayesian Learning for Neural Networks. He found that what you get is a Gaussian Process.
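One way to get a feel for Neal's result is to sample random one-hidden-layer networks at a single input and watch the output distribution become Gaussian as the width grows (a consequence of the central limit theorem). The sketch below is only illustrative; the initialization scale and activation are assumptions, not the exact setup of the dissertation or of the 2021 work.

```python
import numpy as np

def random_net_output(width, x, rng):
    # One random hidden layer of tanh units; output weights scaled by 1/sqrt(width)
    # so the output variance stays bounded as the width grows.
    W1 = rng.standard_normal((width, x.shape[0]))
    b1 = rng.standard_normal(width)
    w2 = rng.standard_normal(width) / np.sqrt(width)
    return w2 @ np.tanh(W1 @ x + b1)

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])
for width in (1, 10, 1000):
    samples = np.array([random_net_output(width, x, rng) for _ in range(5000)])
    # As width grows, the histogram of these outputs approaches a Gaussian,
    # the single-input version of "the network converges to a Gaussian process".
    print(width, round(samples.mean(), 3), round(samples.std(), 3))
```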
@zfyl
@zfyl 2 жыл бұрын
Does this single-handedly make this whole breakthrough just a simple revisiting of an existing conclusion?
@Luizfernando-dm2rf
@Luizfernando-dm2rf 2 жыл бұрын
the real MVP
@daviddodelson8870
@daviddodelson8870 2 жыл бұрын
@Gergely Kovács: no. Neal's work dealt with neural networks with a single hidden layer, this breakthrough studies the limit of width for deep neural networks, i.e, many hidden layers.
@kevinvanhorn2193
@kevinvanhorn2193 2 жыл бұрын
@@daviddodelson8870 Thanks for the clarification. Strange, though, that it took 25 years to take that next step.
@AUniqueName
@AUniqueName Жыл бұрын
These videos are severely underrated. Thank you for the knowledge you share; hopefully millions of people will be watching these per week. It's so good for people to know about these things
@johnwick2018
@johnwick2018 2 жыл бұрын
I didn't understand a single thing but it is awesome.
@binman5753
@binman5753 2 жыл бұрын
Watching this and not understanding anything makes these videos all the more magical 💫
@primorock8141
@primorock8141 2 жыл бұрын
It's crazy that we've been able to do so much with deep neural networks and we are only now starting to figure out how they work
@ajaykumar-ve5oq
@ajaykumar-ve5oq 2 жыл бұрын
We made machines but we don't know how they perform tasks? Sounds counterintuitive
@jakomeister8159
@jakomeister8159 2 жыл бұрын
Ever done a task that just works, you don’t know how, it just works? Yeah this is it. It’s actually pretty cool
@balazsh2
@balazsh2 2 жыл бұрын
@@ajaykumar-ve5oq more like we can measure how well they perform tasks, so we don't care about the whys :) transparent statistical methods exist and are widely used; it's just that for AI, black-box methods perform better most of the time
@jirrking3461
@jirrking3461 2 жыл бұрын
this video is idiotic, since we do know how they work and we have been visualizing them for ages now
@Elrog3
@Elrog3 2 жыл бұрын
Saying we don't know how neural networks work is a stretch of the same caliber as saying we don't know how cars work.
@mathman274
@mathman274 2 жыл бұрын
Interesting. When I was in school, many decades ago, 'we' always had the idea that there's no reason something couldn't exist between aleph-0 (size of N) and aleph-1 (size of R); however, a "finger was never put on it". There were wild speculations about fractal dimensions, but that was just a fashionable thing to look at, at the time. Interesting where this is going.
@ferdinandkraft857
@ferdinandkraft857 2 жыл бұрын
This question was answered in 1964 by Paul Cohen and Kurt Gödel. The Continuum Hypothesis (CH) is _independent_ of the Zermelo-Fraenkel axioms (plus the axiom of choice). In other words, standard mathematics can prove neither it nor its negation. You can, however, extend standard mathematics to include the CH or some other axioms. David Asperó et al.'s "breakthrough" doesn't use only standard math. They only proved the equivalence of two axioms that are known to imply one particular hypothesis that is incompatible with CH... The video is unfortunately very superficial and gives the false idea of an "answer" to a problem that, in my opinion, is already answered.
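For reference, the statements being discussed here, in standard notation (nothing specific to the new Asperó-Schindler paper):

CH: 2^(aleph_0) = aleph_1 (the reals have the smallest possible uncountable cardinality)
Gödel (1940): Con(ZFC) implies Con(ZFC + CH)
Cohen (1963): Con(ZFC) implies Con(ZFC + not-CH)

Together the last two lines say ZFC can prove neither CH nor its negation, which is the independence result referred to above.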
@mathman274
@mathman274 2 жыл бұрын
well... the keyword 'H' being hypothesis of course there's also the "incompleteness theorem", and extending the "axioms" might lead to inconsistency. Indeed "standard math" can't touch it, however including CH might be a little too much. Maybe I was just too "classically" educated, but still... interesting, as was the video, i think.
@Noname-67
@Noname-67 2 жыл бұрын
@@ferdinandkraft857 Its being independent from ZFC doesn't mean that it's neither true nor false. The axiom of pairing, axiom of infinity, axiom of union, etc. are all independent from each other and we all know they are true. If anything non-standard were just a convention there wouldn't be ZFC as we know it, only ZF. Gödel himself believed that the continuum hypothesis was wrong; without proving or disproving it rigorously, we can still use logical deduction and reasoning to get an agreeable answer.
@viliml2763
@viliml2763 2 жыл бұрын
@@Noname-67 "Axiom of pairing, axiom of infinity, axiom of union, etc.. are all independent from each other and we all know they are true." define "true" none of them describe the physical universe, there's no reason someone can't say they're false and work with that
@Pramerios
@Pramerios 2 жыл бұрын
Bravo!! This was SUCH an awesome video! Definitely saving and coming back!
@warpdrive9229
@warpdrive9229 2 жыл бұрын
I wait for this video eagerly every year! Much love from India :)
@warpdrive9229
@warpdrive9229 2 жыл бұрын
This was just awesome! See you guys next year again. Much love from India :)
@jordanweir7187
@jordanweir7187 2 жыл бұрын
I love how you guys don't leave out the gory details, thats what we all wanna see hehe, also great to have an update each year
@NovaWarrior77
@NovaWarrior77 2 жыл бұрын
these are awesome! I'm glad we don't just have to look back to textbooks to see cutting edge advances!
@KeertiGautam
@KeertiGautam 2 жыл бұрын
I don't understand much but I feel happy that good science is happening. It means there's still some sense and logic in this world alive 😄
@KimTiger777
@KimTiger777 2 жыл бұрын
Math is art as one needs creativity to arrive to new solutions. Big WOW!
@zfyl
@zfyl 2 жыл бұрын
okay, this is actually a fair point totally agree
@Rotem_S
@Rotem_S 2 жыл бұрын
Also because it's (sometimes) beautiful and can engage deeply
@bobsanders2145
@bobsanders2145 2 жыл бұрын
That’s everything though not just math
@aniksamiurrahman6365
@aniksamiurrahman6365 2 жыл бұрын
What what what what what? Finally, such a result in continuum hypothesis! Unbelievable.
@jman997700
@jman997700 2 жыл бұрын
This is the best news I've heard all year. People want to know about the good news too.
@zfyl
@zfyl 2 жыл бұрын
What is good about these things? Whom will this benefit?
@nullbeyondo
@nullbeyondo 2 жыл бұрын
@@zfyl If you want a really accurate answer, then it is "what" will this benefit, which is mainly all of our technology. And only if these things are used right would they improve the quality of life overall; but there's no guarantee on human behavior.
@miguelriesco466
@miguelriesco466 2 жыл бұрын
Hey it was pretty nice! Just to clear things up, the continuum hypothesis is whether aleph 1 is the cardinality or size of the real numbers. By definition it is the smallest infinity greater than aleph 0.
@IvanGrozev
@IvanGrozev 2 жыл бұрын
We don't know the size of the set of real numbers; we just know it's bigger than aleph 0. It can be aleph 1, aleph 2, ... it can even be monstrously big, like aleph_(omega_1), etc. And in the current state of the most widely accepted axiomatization of mathematics, called ZFC, it is impossible to solve the continuum hypothesis. Someone watching this video gets the impression that the real numbers are aleph 1 in size, which is not true.
@sweetspiderling
@sweetspiderling 2 жыл бұрын
@@IvanGrozev yeah this video is all wrong.
@richardfredlund3802
@richardfredlund3802 Жыл бұрын
that equivalence between the infinite width NN's and Kernel machines is really a very surprising and interesting result.
@pvic6959
@pvic6959 2 жыл бұрын
I love how google showed up in both the physics and math/comp sci break through videos. it shows how much theyre doing and how much they're pushing humanity forward little by little. love them or hate them, its so cool to see science being done!
@martinschulze5399
@martinschulze5399 2 жыл бұрын
Google is not altruistic ;)
@LA-eq4mm
@LA-eq4mm 2 жыл бұрын
@@martinschulze5399 as long as someone is doing something
@willlowtree
@willlowtree 2 жыл бұрын
i have great respect for the scientists working at google, but as a company it is inevitable that their goals are not always allied with humanity's interests
@pvic6959
@pvic6959 2 жыл бұрын
@@willlowtree yeah my comment wasn't about goals or anything. just that they're doing so much science and sharing a lot of it with the world
@baronvonbeandip
@baronvonbeandip 2 жыл бұрын
@@martinschulze5399 Water is wet. Nothing is altruistic.
@frankferdi1927
@frankferdi1927 Жыл бұрын
What I dislike is that many videos, this one included at some points, reward before there is proof, stimulating excitement in the viewers. Generating publicity is important, I do know that.
@dylanparker130
@dylanparker130 2 жыл бұрын
I love these videos & QM's articles too!
@Irrazzo
@Irrazzo 2 жыл бұрын
1:01 "What happens inside their billions of hidden layers". I think you confused layers with parameters, or weights, here. The largest GPT-3 version for instance has 96 layers and 175 billion parameters.
@shambhav9534
@shambhav9534 2 жыл бұрын
Parameters are whatever the starting nodes pick up and layers are layers, right? Or are parameters the starting nodes themselves?
@Irrazzo
@Irrazzo 2 жыл бұрын
@@shambhav9534 In a simple feed-forward neural network like a multilayer perceptron, you can represent a neuron / node by the equation y=h(w*x + b). x is what goes into the layer that neuron belongs to (if it's the first hidden layer, x is just an unchanged input feature vector), y is what goes out. w are the weights (all the edges) connecting all the neurons in the previous layer with the one in the current layer we're currently looking at, b is a bias. '*' is a dot product. h is a nonlinear activation function. The union of all weights and biases of all neurons between all the layers are the parameters which are learned during training.
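To make the neuron equation above concrete, here is a minimal NumPy sketch of a forward pass through a small multilayer perceptron, plus a parameter count showing why "layers" and "parameters" are very different numbers; the layer sizes are made up for illustration.

```python
import numpy as np

def layer(x, W, b, h=np.tanh):
    # One layer of neurons: y = h(W x + b). W and b are the learned parameters.
    return h(W @ x + b)

rng = np.random.default_rng(0)
sizes = [784, 512, 512, 10]            # input -> two hidden layers -> output
params = [(0.01 * rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(sizes[0])      # one input feature vector
for W, b in params:
    x = layer(x, W, b)                 # the data flows through successive representations

n_layers = len(params)
n_params = sum(W.size + b.size for W, b in params)
print(n_layers, n_params)              # 3 layers, but roughly 670,000 parameters
```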
@shambhav9534
@shambhav9534 2 жыл бұрын
@@Irrazzo Okay I get it now.
@Irrazzo
@Irrazzo 2 жыл бұрын
Just one more thing about layers: instead of thinking of layers in terms of the nodes of which they consist, you can also think of them in terms of the data that flows through your network (the x's and y's). Then, layers are different, increasingly abstract representations of your data, connected via transformations, or functions. And the complexity, the 'billions', are due to the enormous size of the function space of the overall function (transformation) which the network approximates by a series (or rather, composition) of functions which only slightly differ from one to the next.
@shambhav9534
@shambhav9534 2 жыл бұрын
@@Irrazzo I understood nothing but I do think I understand layers. They're layers which modify the starting input and at the end that input becomes the output. I tried(just tried) to make a neural network back in the day, I think I know the basics.
@AnthonyBecker9
@AnthonyBecker9 2 жыл бұрын
Hmm, I'm not sure how the neural net to kernel machine model is a breakthrough. Maybe that was left out. But the idea that a neural net divides data points with hyperplanes in high-D space goes back decades.
@PedroContipelli2
@PedroContipelli2 2 жыл бұрын
Kernel machines are linear, whereas neural networks are, generally, non-linear. Showing that an infinite-width network can be reduced to linear essentially raises suspicion about whether finite neural networks can be simplified in some novel way as well. The consequences could be groundbreaking.
@satishkpradhan
@satishkpradhan 2 жыл бұрын
@@PedroContipelli2 Aren't all layers of a neural network just linear functions of the previous layer? So technically isn't it possible that under some conditions a multi-layer neural network can be a linear function?
@PedroContipelli2
@PedroContipelli2 2 жыл бұрын
@@satishkpradhan The activation function of each layer (sigmoid, tanh, relu, etc) is usually where the non-linearity is introduced.
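A small sketch of that point: without an activation function, stacking layers collapses to a single linear map, while inserting a ReLU between them breaks that; the matrices here are random made-up examples.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
x = rng.standard_normal(3)
relu = lambda z: np.maximum(z, 0.0)

# Two purely linear layers are the same as one layer with matrix W2 @ W1.
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))      # True

# With a ReLU in between, the composition is no longer that single linear map.
print(np.allclose(W2 @ relu(W1 @ x), (W2 @ W1) @ x))  # generally False
```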
@lolgamez9171
@lolgamez9171 2 жыл бұрын
@@PedroContipelli2 analog artificial intelligence
@joshuascholar3220
@joshuascholar3220 2 жыл бұрын
I stopped at the "nobody knows how neural networks work" and "billions of hidden layers" sentence. MY GOD, why did they have some moron who has no idea what he's talking about write this? And another one read it? MY GOD.
@nichtrichtigrum
@nichtrichtigrum 2 жыл бұрын
With only a high school maths background, I couldn't understand any of the concepts in the video. I'd be very happy if you could explain in more detail what a Liouville field actually is and what a Gaussian free field is and so on
@monad_tcp
@monad_tcp 2 жыл бұрын
So they proved the equivalence between convolution kernels and neural networks. As someone who does research in computer graphics, I always had this feeling that they were very close, as you could use them together and sometimes even replace one with the other.
@szymonbaranowski8184
@szymonbaranowski8184 Жыл бұрын
Doesn't seem like any great or surprising breakthrough then.
@robertschlesinger1342
@robertschlesinger1342 2 жыл бұрын
Very interesting, informative and worthwhile video. Be sure to read the linked articles.
@ChocolateMilkCultLeader
@ChocolateMilkCultLeader 2 жыл бұрын
Thanks for making these. Very important
@cobywhitw5748
@cobywhitw5748 Жыл бұрын
Does anyone know where I can read the paper about the Deep Neural Networks shown in the video??
@YouChube3
@YouChube3 2 жыл бұрын
Natural numbers, floating points and that third set I couldn't bear even to try to explain. Thank you, narrator?
@edgedg
@edgedg 2 жыл бұрын
My favourite videos of every year!
@J3Compton
@J3Compton Жыл бұрын
Love this! It would be nice to have the urls to the papers here if possible
@srivatsavakasibhatla823
@srivatsavakasibhatla823 2 жыл бұрын
The last one made me remember what David Hilbert implied. "Physics is too complicated to be left for Physicists alone".
@RegiKusumaatmadja
@RegiKusumaatmadja 2 жыл бұрын
Superb explanation! Thank you for the video
@goldensnitch1614
@goldensnitch1614 2 жыл бұрын
Great vid! BTW, at 11:08, is the Simons Foundation made by the guy who made Renaissance Technologies?
@badalism
@badalism 2 жыл бұрын
We have known for a while that an infinite-width neural network + SGD is equivalent to a Gaussian process.
@zfyl
@zfyl 2 жыл бұрын
thanks for single handedly eradicating the breakthrough level of that paper 😅
@Bruno-el1jl
@Bruno-el1jl 2 жыл бұрын
Not for dnns though
@piercevaughn7000
@piercevaughn7000 2 жыл бұрын
Excellent intro Edit: excellent everything I’m pretty clueless on all of this, but this was awesome
@nateb3277
@nateb3277 2 жыл бұрын
I discovered Quanta only a few months ago but already love coming back to them for this kind of quality content on new developments in science and tech :) Like it's well written, well animated, and easily understood *chef's kiss*
@viniciush.6540
@viniciush.6540 2 жыл бұрын
"This enables to compute things that physicists don't know how to compute" oh man how i love this phrase lol
@droro8197
@droro8197 2 жыл бұрын
Talking about the continuum hypothesis without mentioning the results of Cohen and Gödel is pretty much a crime. Basically the continuum hypothesis is independent from the rest of the set theory axioms and can be assumed to be true or false. I guess the real problem here is talking about a very heavy math problem in a 10-minute video…
@lifeisstr4nge
@lifeisstr4nge 2 жыл бұрын
I understand the outputs to be an answer of like a classification type. But why are there exactly the same number of inputs always shown? What is the input here?
@mobjwez
@mobjwez 2 жыл бұрын
would be nice to see how these theories and works can be applied to real-world situations, cheers
@akshaysingh11990
@akshaysingh11990 2 жыл бұрын
I wished I lived a million years and watched all the content created forever
@mdoerkse
@mdoerkse 2 жыл бұрын
Interesting that all three breakthroughs have to do with connections between different theories and 2 of them are mapping something useful to something easy to compute.
@zfyl
@zfyl 2 жыл бұрын
what useful?
@mdoerkse
@mdoerkse 2 жыл бұрын
@@zfyl Deep neural nets and quantum physics/gravity.
@seenaman96
@seenaman96 2 жыл бұрын
I learned about kernels back in 2017 when using SVM... How are kernels breakthroughs? If you have inputs that are not activated in 1 dimension, exploding to a higher dimension will not include them... So it's fine to skip the work, DUH
@mdoerkse
@mdoerkse 2 жыл бұрын
@@seenaman96 I'm not a mathematician and I don't know anything about kernels, but the video wasn't saying that kernels are the breakthrough. It's saying they are the old, easily computible thing that neural nets can be mapped to. The mapping is the breakthrough.
@Psychonaut165
@Psychonaut165 Жыл бұрын
Out of all the science channels I understand nothing about, this is one of my favorites
@tetomissio8716
@tetomissio8716 2 жыл бұрын
Fantastic set of videos
@quicksilver0311
@quicksilver0311 2 жыл бұрын
Am I the only one who was totally clueless for all 11 minutes? This video literally gives me "What am I doing with my life?" vibes and I love it. XD
@MadScientyst
@MadScientyst Жыл бұрын
I'd sum this up with a reference to the title of a book by author Eric Temple Bell: 'Mathematics: Queen and Servant of Science'... brilliant read & exposition, as per this Quanta snippet!!
@andraspongracz5996
@andraspongracz5996 2 жыл бұрын
Got halfway through the video, and stopped. I wonder if the creators ever asked the scientists in the video (or any expert, really) to check the final version of the narration. It is full of inconsistencies, and in the case of the second segment (continuum hypothesis) just completely off. We have known that the continuum hypothesis is independent from ZFC (the standard system of axioms of set theory) for nearly 60 years. It was famously Paul Cohen who proved this, and he was the one who developed the technique of forcing (in order to prove this result and others). He even got a Fields Medal for his work. I'm not sure about the relevance of the Asperó-Schindler theorem ("Martin's Maximum++ implies Woodin's axiom (∗)") as I'm not a set theorist, but it must be much more subtle than what the video suggests. It has been well understood for decades what the possible aleph indices of the continuum can be. In particular, it is not necessarily aleph_1, as suggested early on in this video, and contradicted later. The video has very nice graphics and catchy phrases, but the content is just wrong. It was quite cringey to listen to it, really.
@pingdingdongpong
@pingdingdongpong 2 жыл бұрын
Yea, I agree. I know enough set theory (and it ain't much) to know that this is a bunch of hogwash.
@Macieks300
@Macieks300 2 жыл бұрын
Yes. I agree. Set theory basics are easy enough for undergraduates to understand, so it's the most approachable subject among all in these videos, but hearing how wrong their explanation is, I now must wonder how wrong their explanations of the other discoveries are.
@user-ei8yd3tm9l
@user-ei8yd3tm9l 2 жыл бұрын
towards the end of the video, I was like: this is pretty much why my naive thought of majoring in pure math got crushed after first-year university... math before university is nowhere close to real hard-core math, which is a different beast altogether.
@charlesvanderhoog7056
@charlesvanderhoog7056 2 жыл бұрын
Kernel Machine new? We used variance analysis in multiple dimensions as far back as the 1970's and it was developed into what is called positioning in marketing. These techniques enable the researcher to extract immense amounts of data from small samples.
@elmaruchiha6641
@elmaruchiha6641 2 жыл бұрын
Great! I love the video, with the animations and the topic!
@Quwertyn007
@Quwertyn007 2 жыл бұрын
6:33 Saying an axiom is "likely true" makes no sense, unless it was to follow from other axioms and thus be unnecessary. Axioms are what you start with - you can start with whatever assumptions you want, the best they can do is not contradict each other and lead to interesting/useful mathematics. Math doesn't take into account the physical world - it is only based on axioms. Maybe you could make an argument about this axiom likely being related to the physical world in some way, which in some non mathematical sense would make it "true", but that seems rather difficult.
@Quwertyn007
@Quwertyn007 2 жыл бұрын
@FriedIcecreamIsAReality I think you make a good point, but I don't think many people would understand "likely true" as "intuitively making sense". That's just not what "true" means.
@Quwertyn007
@Quwertyn007 2 жыл бұрын
@FriedIcecreamIsAReality I'm still just a mathematics student, so I'm not in the best position to judge whether it really is used this way, but this video isn't aimed at professors, so I think the phrasing is at least misleading
@scifithoughts3611
@scifithoughts3611 2 жыл бұрын
Great video series!
@JustNow42
@JustNow42 Жыл бұрын
If you would like to crack anything, try group theory . Split observations into groups and then use groups of groups etc.
@NickMorozov
@NickMorozov 2 жыл бұрын
So, do I understand correctly that the neural networks are hyperdimensional? Or use extra dimensions for calculations? I'm sure I don't understand the ramifications but it sounds incredibly cool!
@sheriffoftiltover
@sheriffoftiltover 2 жыл бұрын
Dimension in this context just means additional parameters from my understanding. EG: For a light, one dimension might be wavelength, one might be frequency and another might be luminosity
@Rawi888
@Rawi888 2 жыл бұрын
Thanks for making me feel smart.
@chilling00000
@chilling00000 2 жыл бұрын
Isn’t the equivalence of wide NN and kernels known for a long time already…?
@satishkpradhan
@satishkpradhan 2 жыл бұрын
Even I thought so... but as I saw all the comments of people in amazement I was confused. Thank God someone else also thinks so... else I would have had to reread everything I had learned... or revisit my analytical thinking.
@StratosFair
@StratosFair 2 жыл бұрын
It is in fact (part of) what my Master's thesis was about and I am quite confused because indeed this has been known for some time already
@David-rb9lh
@David-rb9lh 2 жыл бұрын
It’s about dnn here
@StratosFair
@StratosFair 2 жыл бұрын
@@David-rb9lh I did a bit of digging and it turns out that the paper which introduces the result (wide deep neural networks are equivalent to kernel machines) has in fact been written in 2017. Now don't get me wrong, this is a very nice result, but by no means a 2021 breakthrough unfortunately.
@David-rb9lh
@David-rb9lh 2 жыл бұрын
@@StratosFair I agree with you. I've not dug too much into the details, to be honest.
@josueibarra4718
@josueibarra4718 Жыл бұрын
Gotta love how Gauss still somehow manages to butt in to present-day, groundbreaking discoveries
@EM-qr4kz
@EM-qr4kz 2 жыл бұрын
You have a square with vertices A, B, C, D. Get all parallel straight segments from AB to CD. This set of line segments is aleph 1... greater than the set of straight segments that make up an infinite line... This is my observation. I do not know if it is true, but it is interesting, as we can say when a body is one-dimensional or two, not in terms of geometry but through set theory.
@dEntz88
@dEntz88 2 жыл бұрын
With regard to the continuum hypothesis: Did I understand this correctly that they are no longer operating in ZFC, but added more and stricter axioms? Wouldn't this imply that the continuum hypothesis is still undecidable in ZFC?
@hunterdjohny4427
@hunterdjohny4427 2 жыл бұрын
Yes, the continuum hypothesis is known to be undecidable in ZFC since Gödel and Cohen. It has also been known for a while that if you were to add either of the axioms MM++ or Woodin's axiom (*) to ZFC, then the continuum hypothesis would be false. Now, the paper by David Asperó and Ralf Schindler proves that (*) is weaker than MM++. This of course has no bearing on the continuum hypothesis at all unless you consider either of them an axiom. How the video chooses to present this is quite odd. I guess the point they are trying to make is that since they were always considered rival axioms and we now know that one actually implies the other, we might just add MM++ as an axiom to ZFC. Woodin stated something along the lines that we shouldn't accept MM++ or (*) as an axiom because MM++ is incompatible with the natural strengthening of (*). Regardless of what that actually means, it at least should be clear that there are objections to simply accepting MM++ as an axiom.
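To spell out the logical picture sketched above (a summary in standard notation; this paraphrases how the result is usually reported, not the paper itself):

Asperó-Schindler (2021): ZFC + MM++ proves (*), so (*) is at most as strong as MM++.
Both MM++ and (*) imply 2^(aleph_0) = aleph_2, hence both refute CH.
ZFC alone proves neither CH nor not-CH (Gödel, Cohen), so CH stays undecidable in ZFC; the new theorem only unifies two of the competing extra axioms.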
@dEntz88
@dEntz88 2 жыл бұрын
@@hunterdjohny4427 Thank you. I also found it weird how they framed it in the video. At least to me it came across that they were implying that the results could also be used in ZFC alone. Hence my question.
@dEntz88
@dEntz88 2 жыл бұрын
@FriedIcecreamIsAReality But isn't that just creating new problems? If I remember Gödel correctly, every sufficiently powerful system of axioms will run into problems similar to the continuum hypothesis. My issue is that the video, at least as I perceived it, framed the issue in a way that implies that the result leads to something which is "more true". But the notion of truth solely depends on the axioms we choose and is subjective to a certain extent.
@hunterdjohny4427
@hunterdjohny4427 2 жыл бұрын
@@dEntz88 Adding an axiom to ZFC wouldn't create new problems. Every theorem that was previously provable (or refutable) is still provable (or refutable), and some that were previously undecidable may now be provable (or refutable). So by adding an axiom your theory gets more 'specific'. What Gödel showed is that this process of adding axioms can never lead to a system of mathematics in which every statement is provable (or refutable), unless you add many many axioms in such a way that your set of axioms loses its recursiveness. This is hardly desirable, since the set of axioms being non-recursive means that if I write down a statement you have no way of telling whether it is an axiom or not, neither will you be able to tell whether a given proof is valid or not. Our only option is to accept that any decent theory of mathematics (decent as in powerful enough to express basic arithmetic) can't be complete. Your issue with the video is correct of course; they pretend statements have an absolute truth value regardless of the system of axioms worked in. What is said at 6:33 is especially bizarre: [MM++ and (*) are both likely true] makes no sense whatsoever since both axioms are independent of ZFC.
@dEntz88
@dEntz88 2 жыл бұрын
@@hunterdjohny4427 Thank you for your explanation. I only have a somewhat superficial knowledge of that area of maths and was actually thinking about the issues you elaborated.
@kravandal
@kravandal 2 жыл бұрын
Omg. I can't wait for next year's video.
@caracasmihai01
@caracasmihai01 2 жыл бұрын
My brain had a meltdown when watching this video.
@deantoth
@deantoth 2 жыл бұрын
I've watched several of these breakthrough videos and although they are extremely interesting, you simplify a concept so much that rather than clarifying the topic, you make it more opaque. And just when I think you are about to provide some insight, you move on to the next segment. You could spend a few more minutes on each topic, or make a full video per topic please! Thank you for your hard work.
@gettingdatasciencedone
@gettingdatasciencedone Жыл бұрын
I love these intro videos that try and convey the complexity of recent advances. One small problem with this video is that the opening line is not strictly speaking true. The 1950s neural networks did not use the same learning rules as the human brain. They were very simplified models based on a bunch of assumptions.
@raajjann
@raajjann 2 жыл бұрын
Great exposition!
@animebingers8897
@animebingers8897 2 жыл бұрын
How do you get these interview videos? Anyone knows??
@lebiquo8501
@lebiquo8501 2 жыл бұрын
god i would love a "breakthroughs in chemistry" video
@UsamaThakurr
@UsamaThakurr 2 жыл бұрын
Thank you
@saugatbhattarai9826
@saugatbhattarai9826 2 жыл бұрын
I like your explanation... and thank you for the updates.
@SilBu3n0
@SilBu3n0 2 жыл бұрын
incredible video!
@a.movement
@a.movement 2 жыл бұрын
Appreciate this!
@nicholasb1471
@nicholasb1471 2 жыл бұрын
This video makes me want to do my calculus 3 homework. If only it wasn't winter break right now.
@SolaceEasy
@SolaceEasy 2 жыл бұрын
Man, math's mysterious.
@Ashallmusica
@Ashallmusica 2 жыл бұрын
I'm the least educated person watching this (I only completed junior school), now as a 21-year-old. I just get curious about different things, and clicking this video got me to learn a new word: Aleph. It's amazing for me; I still didn't understand much here, but I love this.
@deleted-something
@deleted-something Жыл бұрын
I knew the moment they started speaking about the continuum hypothesis that this was gonna be interesting
@Fan-fb4tz
@Fan-fb4tz 2 жыл бұрын
great videos always!
@2REACTION4U
@2REACTION4U Жыл бұрын
If you design it as/like an in-body/brain/chemical/physical abstract 3D game, kind of like how Daniel Tammet explains how he imagines seeing calculus, and visualize it like that with colors, forms, etc., and can connect it in those simple ways, it's perfectly adjustable, overviewable, etc. for the best results
@thanhtunghoang3448
@thanhtunghoang3448 2 жыл бұрын
The first breakthrough is called Neural Tangent Kernels, first introduced in 2018 by Arthur Jacot at EPFL. He was, at that time, not a Google employee. Attributing this breakthrough to Google is unfair and misleading.
@WilliamParkerer
@WilliamParkerer 2 жыл бұрын
No one's attributing it to this Google employee
@ridax4416
@ridax4416 2 жыл бұрын
Honestly I'm a bit lost but this was very cool to see. Makes me want to learn more :')
@glennmatsiwe8705
@glennmatsiwe8705 2 жыл бұрын
I have a question: at 1:00 in the video it was said, "we don't know how neural networks work." How come we don't know how they work? My argument is, where did they come from then? Like, was the tech given to us by aliens of some sort or something of that nature? Or is there no research paper which explains how neural networks work?