Spectral Theorem For Dummies - 3Blue1Brown Summer of Math Exposition

79,111 views

Jacqueline Doan

1 day ago

This is our first time making a math video, so please forgive our mistakes. I hope you had as much fun watching as we did making it!
UPDATE: We did make a mistake after all! The last matrix on the screen at 0:44 is not real! We meant to write -sqrt(2) instead of sqrt(-2).
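A quick numpy sketch of why the correction matters: with the intended entry -sqrt(2) the matrix is real and symmetric, so the spectral theorem applies. The entries below are made up for illustration; the actual matrix from 0:44 is not reproduced on this page.

```python
import numpy as np

# Hypothetical 2x2 symmetric matrix using the intended entry -sqrt(2);
# the actual entries of the matrix at 0:44 are not shown here.
A = np.array([[1.0, -np.sqrt(2.0)],
              [-np.sqrt(2.0), 3.0]])

# Real symmetric => self-adjoint: the eigenvalues are real and the
# eigenvectors can be chosen to form an orthonormal basis.
eigvals, eigvecs = np.linalg.eigh(A)
```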
Our websites:
Jackie: www.jacquelinedoan.com
Alex: akazachek.com
Resources:
1. Our math animations were made with Manim: www.manim.community
2. Our doodles were drawn with Procreate
3. Our video was edited using Final Cut Pro
3Blue1Brown Summer of Math Exposition

Comments: 110
@schrodingerbracat2927 2 years ago
Matrices usually don't commute, because they like to work from home.
@toniwalter2911 2 years ago
​@@aaaa8130 to commute means to travel a specific route on a regular basis, for example most commonly from home to work. so, if a matrix were to go back and forth between its workplace and home, it would commute. finally @SchrodingerBraCat claims that matrices like to work from home (for whatever reason) and that this would be the reason why they don't commute. now i'm sure you would have understood all of this just from the first line i wrote but since i was already taking the fun out of the joke i might as well have done it properly. anyway i'm done wasting your time now, hope this helps :)
@davidk7212 2 years ago
You should be ashamed of yourself 🤣
@iamlucky323 2 years ago
The last two questions will be left as an exercise for the reader... I was dead!!! Really enjoyed the video and the easy to follow explanation!
@jacquelinedoan2224 2 years ago
Haha I'm glad you enjoyed the joke!
@steviebudden3397 2 years ago
@@jacquelinedoan2224 Yeah, that tickled me as well. :D
@fsponj 3 days ago
​@@steviebudden3397☠️
@zornslemmon2463 2 years ago
I enjoyed this. I'm a few decades removed from a linear algebra class, so though I know (or knew) all of these concepts, I am a bit rusty and I found some of the content to move too quickly for me, but I don't think I am the intended audience (I do have the advantage of knowing where spectral theory is used and is useful). However, my expectations are not that I fully grasp it on a first run through, which I don't think is generally true for anyone who doesn't already know the content, and that the onus is upon me to rewatch it and, as Grant would say, "pause and ponder" the parts where I am rusty or deficient. For a first-ever math video, you have done an outstanding job and you should both be proud of what you put out. Congratulations.
@gmatree 2 years ago
I'm so glad 3b1b took this wonderful initiative. Your video, at least for me, is one of the best outcomes which came out of it.
@slog656 1 year ago
I have been struggling with spectral theory in functional analysis for weeks now and this has just cleared up so much ambiguity in my intuition. Thank you so much.
@pimcoenders-with-a-c1725 2 years ago
"But what are the applications" at the end... This is one of the most applicable parts of all of linear algebra haha, it's used sooo much in quantum mechanics! In QM, observables are given by hermitian operators (since these have real eigenvalues; it is for example not possible to have imaginary magnetization or energy). The spectral theorem asserts that such an observable will always have an orthonormal basis of eigenvectors, which is extremely useful! It means we can just solve the eigenval/eigenvec problem for our particular operator and we know pretty much everything. Also, it will later be proven that if two hermitian operators A and B commute (so AB=BA), then the ONB of eigenvectors they share is the exact same! This fact is also extremely important in quantum mechanics
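The two facts this comment cites can be checked numerically. Below is a minimal sketch with a made-up Hermitian "observable": its eigenvalues come out real, its eigenvectors orthonormal, and any operator that commutes with it (here a polynomial in it) is diagonalized by the same eigenbasis.

```python
import numpy as np

# Toy Hermitian matrix (equal to its conjugate transpose); the entries
# are invented purely for illustration.
H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

# 1) Hermitian => real eigenvalues and an orthonormal eigenbasis V.
eigvals, V = np.linalg.eigh(H)

# 2) H @ H commutes with H, so the same V diagonalizes it:
D = V.conj().T @ (H @ H) @ V
```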
@polyhistorphilomath 2 years ago
Maybe there is a meaningful interpretation of a complex value. Consider a complex dielectric constant or resistance/impedance. I am not saying that every component of a geometric algebra is meaningful in all physical contexts, but maybe there is a potential dialectical synthesis which broadens the understanding of the real-valued quantity.
@pimcoenders-with-a-c1725 2 years ago
@@polyhistorphilomath Possibly; this is actually already happening with the operators we use for quantum computing; these are non-hermitian, thus giving complex eigenvalues
@pimcoenders-with-a-c1725 2 years ago
@@polyhistorphilomath also, dielectric constant and impedance can indeed be complex, but this is because they are either not quantum mechanical observables (but results of many, many interactions) or they are dynamical variables instead of static ones
@jdp9994 2 years ago
@@polyhistorphilomath If you're interested in complex physics, have you considered the work on PT symmetric physics (uses non-Hermitian Hamiltonians with real eigenvalues, but complex coupling coefficients)? Carl Bender gives an easy introduction in kzfaq.info/get/bejne/bJ2fqc1lyMW6gmw.html
@kazachekalex 2 years ago
Thanks! That application in physics is actually what inspired me to pick this subject for the video, the applications section was more of a joke (:
@FareSkwareGamesFSG 2 years ago
The first ten seconds felt like what you feel when an advert on tv matches your situation so perfectly, you peek outside through the blinds and close them in fear.
@tbttfox 2 years ago
All righty, I'm going to tear into what you've done here, but that's because I think it's *worth* tearing into, and making better. This is a great start, but I don't think it's a great end-product (yet?)

You're definitely overusing the smear-y text transition. Go watch Mathologer's algebra auto-pilot stuff. Like, really study it. Watch how the symbols move. The work he puts into it has to take forever, but it makes it *so* much easier to follow what's going on.

For instance, the conjugate transposition at around 4:00. It doesn't show visually what's happening to the components of the matrix as you're explaining it. The way I'd probably do it is: the * should break into 4 bars and smear into the T as the 4 bars move over the components (nothing else should move). Then the T should fade away as you simply swap the bbar and cbar components, leaving the abar and dbar completely unmoved. Not smear them, but simply trade their places.

Also around 3:00, same deal when describing the inner product. The "linear in the first slot" addition animation should duplicate and move the variables and operators that are the "same", and the lambda should just "hop out" of the multiplication. Also, the left-hand side of the equations shouldn't smear at all since they're not changing (top is good, bottom isn't).

Then at around 6:00, your grid isn't square, so projecting v onto your eigenvectors doesn't *look* like projection, and that's bad. It took like 3 times watching through that part to realize why it looked so wrong. I mean, we were just talking about orthogonal eigenvectors, and those blue vectors sure don't look orthogonal.

One more thing I thought of: you also take time to re-introduce linear operators and the dot product as a projection/rejection, but you skip over the (arguably more complicated) eigenvector/eigenvalue. An animation following the pre-transformation eigenvectors through the transformation described by the matrix you put on screen would *definitely* help remind people what an eigenvector is, and you wouldn't even have to mention it in your script. Also, it could give them an early intuition of what self-adjoint matrices are, and what transforms they represent.

A general note: if you show an animation when you're describing something, then when re-using that concept later, you should most likely re-use that same (or a similar) animation to drive home your words. You're spending time coming up with a visual language for people to follow. So if you talk about eigenvectors, follow the transformation. If you talk about projection, drop the perpendicular. If you're doing a linear transformation, show where the basis vectors will end up first, then do the transformation.
@robertschlesinger1342 2 years ago
Excellent overview summary. Very interesting, informative and worthwhile video. I encourage you to make more videos.
@6884 2 years ago
At 3:50 I would have left-justified the equations, so that when you move the T the left term does not "change". It took me a while to check and double-check to make sure nothing actually changed.
@supergeniodelmale2756 1 year ago
This was incredible! Need more!
@Yaketycast 2 years ago
I loved this video. The explanation is clear and interesting. And the visuals are so cute! Keep it up y'all!
@streetos 2 years ago
Can we just appreciate the incredible graphics? This is eye candy 😍
@donovanholm 2 years ago
Just wrote a test on matrices today yet you still spark my interest!!!
@accountname1047 2 years ago
There's a nice proof of the spectral theorem, projecting onto ever more nested subspaces, that could have fit in here.
@leyawonder2306 2 years ago
You guys are awesome, this video clears things up well
@Reliquancy 2 years ago
I thought this was going to be about looking at the eigenvalues and eigenvectors you get from the adjacency matrix of a graph. They call that the spectrum of the graph too I think.
@modolief 2 years ago
Loved it! Great video, thanks!!
@Jules-Henri-Poincare 1 year ago
I love the style and every single word of this video!!
@void2509 2 years ago
This is a really amazing maths video! How can this only have 1k views!
@dannycrytser7268 1 year ago
Nice video. Minor nitpick: in your statement of the spectral theorem, you assert that "the eigenvectors {v_1,...,v_n} of T with eigenvalues {lambda_1,...,lambda_n} form an orthonormal basis for V", which is generally incorrect. The issue arises from saying "the eigenvectors" -- there are lots of them to choose from! If you write down a list of eigenvectors for all the eigenvalues, there is no guarantee that the eigenvectors are unit vectors, and (in the case of repeated eigenvalues) there is no guarantee that the vectors are orthogonal.

For example, if T is the identity map from R^2 to R^2 (a normal operator), then you could write {(1,1), (2,2)}, which would be a set of eigenvectors for the (repeated) eigenvalues {1,1}. However, this is not an orthonormal basis for R^2: the vectors aren't unit vectors and they aren't orthogonal. For normal transformations without repeated eigenvalues we never have trouble with orthogonality, but the unit vector issue can arise: for the diagonal matrix [[1,0],[0,0]] the basis B = {(1,0),(0,2)} certainly consists of eigenvectors for the complete set of eigenvalues {1,0}, but the second eigenvector is not a unit vector and hence B is not an orthonormal basis.

To create the orthonormal basis of eigenvectors: first find a (typically non-orthonormal) basis B_k for each eigenspace of T by solving Tv = (lambda_k)v, then apply Gram-Schmidt to convert B_k into an orthonormal basis B_k'. (If you have repeated eigenvalues, you only find a single orthonormal basis for each eigenvalue, with number of vectors equal to the number of times that eigenvalue appears.) Then form the union of all these orthonormal bases to get the orthonormal basis B = B_1' ∪ B_2' ∪ ... ∪ B_n' for V. (The nice thing about normal operators is that all the different eigenspaces are orthogonal, so B will automatically be orthonormal.)

A more precise statement: "If T:V->V is normal and {lambda_1,...,lambda_n} are the eigenvalues of T (possibly with repetition), then there exists an orthonormal basis {v_1,...,v_n} for V such that Tv_k = lambda_k v_k for k=1,...,n."
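The Gram-Schmidt repair described in this comment can be sketched in a few lines of numpy. For the identity map every nonzero vector is an eigenvector, so a raw choice of eigenvectors need not be orthonormal; QR factorization (which orthonormalizes the columns, exactly what Gram-Schmidt does) fixes that. The example vectors are chosen arbitrarily.

```python
import numpy as np

# For T = identity on R^2, {(1,1), (1,0)} is a basis of eigenvectors
# that is neither orthogonal nor made of unit vectors.
B = np.array([[1.0, 1.0],
              [1.0, 0.0]]).T   # columns are the chosen eigenvectors

# Gram-Schmidt (here via QR, which orthonormalizes the columns) turns
# any basis of an eigenspace into an orthonormal one.
Q, _ = np.linalg.qr(B)
```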
@ruferd 2 years ago
I have a solution to the two exercise problems, but this comment box is too small to contain them.
@redaabakhti768 2 years ago
Thank you for the review
@tariklahcen9928 1 year ago
Many thanks for this video.
@passeur5526 4 months ago
I’m not a math person and I never have been, I don’t understand any of the stuff in this video but I would love to. When I hear about quantum theory and all these important mathematical concepts I don’t understand them because of a lack of knowledge as well as a lack of comprehension in regards to the limited knowledge I already have. Where can I find out about the real world implications these things have? What do these things mean in regards to life?
@awsmith1007 8 hours ago
Wonderful video
@byronvega8298 2 years ago
Does anyone know why adjoint operators are defined that way? About the applications, this theory is important to explore the realm of partial differential equations. The book by Olver on the topic is pretty good.
@abhijeetsarker5285 2 years ago
Good job... very awesome video!
@Speed001 2 years ago
5:35 Exactly, lol. While I've probably learned this before, I think I learned different terms for everything, making understanding much harder.
@aflah7572 2 years ago
Great Stuff!
@user-pp6zj8ow8j 1 year ago
“Consider the following matrix with real entries” proceeds to put sqrt(-2)
@syllabusgames2681 2 years ago
A “Things To Know” page, that’s what every other video has been missing. I don’t know what a conjugate transposition is though, and I don’t think that was really on the list. You lost me. I have stumbled through a few of these videos, but this one I just didn’t get. If you had stretched this into twenty minutes, I might have been able to catch on, but thankfully you didn’t. The video was concise and well made, and aside from getting a little too far from the mic at one point, I don’t really know how you could improve it other than making it part of a series to fill out the prerequisites.
@jacquelinedoan2224 2 years ago
Thank you for your feedback! The sound quality is definitely something we have to work on, as this time we were still figuring out how to use the mic 😅 You brought up a good point about compromising the video's self-containedness for conciseness. Alex and I are still figuring this out, so we really appreciate your comment!
@carterwoodson8818 2 years ago
@@jacquelinedoan2224 so does the conjugation mean the complex conjugate then??
@steviebudden3397 2 years ago
@@carterwoodson8818 Yup. Reflect the matrix in the leading diagonal and then take complex conjugates of the entries.
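In numpy terms, that recipe (reflect across the leading diagonal, then conjugate the entries) reads as follows; the matrix entries are invented for illustration.

```python
import numpy as np

# Made-up complex matrix.
A = np.array([[1 + 2j, 3j],
              [4 + 0j, 5 - 1j]])

# Conjugate transpose (Hermitian adjoint): transpose, then conjugate
# every entry.
A_star = A.conj().T
```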
@aziz0x00 1 year ago
The intro and the outro are hilarious!
@saptarshisahoo5075 2 years ago
At 5:50, shouldn't the eigenvectors be orthogonal? The picture doesn't quite capture their orthogonality.
@quantum5867 2 years ago
Could you tell me the font you used to write "spectral theorem"?
@jojodi 2 years ago
Great videos! What music is this? :)
@miguelriesco466 1 year ago
I think that the video was really cool. However, it should be noted that many things in this video only hold when you’re working with finite dimensional vector spaces. But not all is lost: there is actually a generalization of this theorem in infinite dimensions. It is the spectral theorem for compact normal operators. Compact operators behave very similarly to operators in finite dimensional spaces. But a few more courses are needed to understand the topic in depth, notably courses in topology, complex analysis and functional analysis.
@Shaan_Suri 29 days ago
What do you mean by the adjoint being the "conjugate" transposition? I suppose conjugate doesn't mean the same as complex conjugate? Could someone please clarify?
@arisoda 5 months ago
5:32 But you should also say in that sentence which vectors the projections are FROM.
@Iceman3524 2 years ago
This is solid
@davidk7212 2 years ago
Nice, thank you
@hannahnelson4569 5 months ago
So the spectral theorem is that all eigenvectors of a normal matrix are orthogonal?
@usernameisamyth 2 years ago
Good stuff
@annaclarafenyo8185 2 years ago
The name 'spectral theorem' is reserved for the infinite dimensional case. Diagonalizing an nxn symmetric or Hermitian matrix is trivial.
@thatkindcoder7510 2 years ago
*Whips out dictionary* Time to understand
@annaclarafenyo8185 2 years ago
​@@thatkindcoder7510 It's not that hard to understand. Finite dimensional matrices which are symmetric can be diagonalized step by step, by first applying them to a vector again and again (this quickly produces the largest eigenvector), then considering the matrix restricted to the perpendicular space to the vector. This is an inductive procedure, and it diagonalizes any symmetric matrix step by step, by induction on the dimension. The complex analog of symmetric is Hermitian (symmetric after complex conjugation), and the same thing applies there.

But the 'spectral theorem' is a result about function spaces, about INFINITE dimensional 'matrices'. These are the ones that show up in quantum mechanics, or in applications to PDEs or whatever. An infinite dimensional matrix is a linear operation on a function space.

For example, consider the differentiable functions on the interval [0,1] with periodic boundary conditions. That just means periodic functions f(x+1)=f(x). This is a vector space, because you can add two such functions pointwise and they are in the same class. Any linear operation on these functions is a 'matrix' of sorts, but now with infinite dimensions. So consider the operator 'differentiation'. This has as eigenvectors those functions whose derivative is proportional to themselves, i.e. exp(kx) for some k. To be periodic, k has to be imaginary and a multiple of 2\pi. The spectral theorem then tells you that this is a basis for the space, i.e. that any differentiable function can be written as a combination of these basis functions.

Now consider the same operator acting on the vector space of functions on [0,1] with zero boundary conditions. The operator is still formally anti-symmetric (on those functions where it is defined and takes functions in the space to other functions in the space), so i times the operator is 'hermitian', but now it has no eigenvectors in the vector space, because the eigenvectors should still be exp(ikx), and these don't obey the boundary conditions.

So some infinite dimensional operators admit a basis of eigenvectors, some don't. The ones that do are a subset of the really self-adjoint ones, including the 'compact operators', which admit a limiting procedure to find the eigenvectors step by step, like in the finite dimensional case.

These infinite dimensional irritations, the distinction between 'symmetric' or 'hermitian' and 'self adjoint', the limit difficulties that sometimes the eigenvectors are distributional and don't even lie in the space you are analyzing, these are what make the spectral theorem interesting. The video doesn't discuss these issues, just diagonalizing finite dimensional matrices, which is easy. Physicists tend to diagonalize finite dimensional approximations, and then check the limit is sensible (which it sometimes is, and sometimes it isn't).
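The "physicists diagonalize a finite-dimensional approximation" point can be sketched numerically: a circulant central-difference matrix stands in for d/dx on a periodic grid, and, being real antisymmetric, has a purely imaginary spectrum, mirroring the exp(2*pi*i*k*x) eigenfunctions. The grid size here is an arbitrary choice for illustration.

```python
import numpy as np

# Circulant central-difference approximation of d/dx on n periodic grid
# points: the discrete analogue of the anti-self-adjoint operator above.
n, h = 8, 1.0 / 8
D = np.zeros((n, n))
for j in range(n):
    D[j, (j + 1) % n] = 1.0 / (2 * h)
    D[j, (j - 1) % n] = -1.0 / (2 * h)

# Real antisymmetric => purely imaginary eigenvalues.
eigvals = np.linalg.eigvals(D)
```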
@marcuslaurel5758 2 years ago
@@annaclarafenyo8185 “it’s not that hard to understand” *proceeds to write a short novel on the subject* All math is trivial if you study it for long enough. No one wants to see you reinforce your own comfort/expertise in a subject via an esoteric wall of text written under the false pretense that what you’re writing is easy to understand for someone who perhaps is hearing of this stuff for the first time. The plethora of mathematical machinery you’ve assumed as prior knowledge in your reply is covered in years' worth of college-level math courses. What shouldn’t be hard to understand is that new math can be difficult and shouldn’t automatically be treated as easy by someone for whom said math is not so new.
@annaclarafenyo8185 2 years ago
@@marcuslaurel5758 I haven't assumed anything, and, yes, what I wrote is not difficult, nor is it a 'short novel', it's the simplest example of infinite dimensional function spaces and the issues with diagonalizing infinite dimensional matrices.
@imsleepy620 2 years ago
@@marcuslaurel5758 true
@Nathouuuutheone 2 years ago
Things To Know: 5 things I do not know and have never heard about. Nice
@Nathouuuutheone 2 years ago
While I'm here... Why the hell do I know NOTHING about matrices when they get mentioned EVERYWHERE? Is it that abnormal to not know of them? Is it supposed to be a high school notion? Cause it certainly wasn't in my high school.
@johannbauer2863 10 months ago
Small mistake at 0:44: the last matrix isn't real. EDIT: ah, it's already in the description
@mimithehotdog7836 2 years ago
very cool
@nk2904 2 years ago
Subbed with that first bit… the last two questions will be left as an exercise… 🤣🤣
@jacquelinedoan2224 2 years ago
I take full credit for that joke 😎
@mastershooter64 2 years ago
0:42 ah yes sqrt(-2) definitely a real number
@davidepierrat9072 2 years ago
Lol I noticed too
@MichaelRothwell1 2 years ago
As I have just tutored a student through elementary linear algebra, this exposition was spot on for me - starting the journey towards C*-algebras. Nicely paced, nicely explained, well done!
@BenjaminLiraLuttges 11 months ago
What is the name of that font?
@ralph3295 19 days ago
Good vid! But I think the animations could have been played a lot slower to give time to think (or pause and ponder, as 3B1B says). Leave more silent gaps and speak slower. Thank you!
@DavidBrown-nd7lz 1 year ago
Do you guys watch Sisyphus 55? Your art style reminds me of his.
@astroceleste292 2 years ago
Can you add subtitles? The automatic captions are crap for mathematical terms.
@nanke1987 2 years ago
here for the comedy
@shubhamchoudhari1489 2 years ago
Make more such videos
@genericperson8238 2 years ago
Really good video, but please pause a bit between statements. Things move way too fast and it is a bit annoying to always scroll back and pause manually.
@W1ngSMC 2 years ago
4:01 Shouldn't that A matrix be A*T or A† (dagger) instead of just A*?
@kazachekalex 2 years ago
It's a difference of notation between mathematical physics (where * denotes conjugates and dagger is the Hermitian adjoint) and pure mathematics (where * denotes Hermitian adjoints and \overline is for the conjugate)
@AlainNaigeon 2 years ago
Why music in the background ???
@emmepombar3328 2 years ago
You lost me. Although I know all the things on the "Things To Know" page, you lost me even in the initial explanation of those points. Way too fast, and the images didn't help at all. It's definitely not "for dummies".
@polyhistorphilomath 2 years ago
Summary: [some] matrices (like derivatives) can be thought of as defining or being defined by the effect applying such objects will have on the vector or scalar operated on. So figure out the zeros of det(A-λI) for your matrix. These values are representative of the whole matrix A. Then you can conjugate and perform whatever operation you want. g h g^-1 or g^-1 h g lets you use the values on the diagonal (once you have diagonalized your matrix) as scalars. And you still have a valid result.

Also the inner product generalizes the dot product. That's with hardly any restrictions on what the operands are. If you have two abelian operations on a set of vectors you can show it (with some additional constraints, maybe?) is closed. Meaning you don't need to worry about performing only finitely many operations on the elements. Closure.

But even in this context operators like T have a 1-to-1 correspondence with matrices. So you can decompose T, an operator in a Hilbert space, the same way you can decompose a matrix.
@polyhistorphilomath 2 years ago
This means you can apply the matrix exponential to T and apply that functor to a vector in the space. This is maybe two steps away from using a Fourier or Laplace transform on T. Heck, you could apply a translation operator, assuming that your differential operator D is well-defined. The world is your oyster.
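The matrix-exponential idea in this comment can be sketched concretely: once a self-adjoint matrix is diagonalized, a function of the matrix acts on the eigenvalues alone, f(A) = V f(Λ) V^T. The 2x2 matrix below is a made-up example.

```python
import numpy as np

# Made-up real symmetric (self-adjoint) matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Spectral decomposition A = V diag(eigvals) V^T with orthonormal V,
# so exp(A) = V diag(exp(eigvals)) V^T.
eigvals, V = np.linalg.eigh(A)
expA = V @ np.diag(np.exp(eigvals)) @ V.T
```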
@agam5429 2 years ago
W O W
@philkaw 2 years ago
This is poggers maths content!
@nUrnxvmhTEuU 2 years ago
A 7min-long video that aims to explain the (finite-dimensional) spectral theorem, and yet it assumes the viewer knows how to diagonalize a matrix? That seems like a really odd choice, considering the spectral theorem literally says that "normal matrices are orthogonally diagonalizable". I would assume most people either struggle with diagonalization, or already know what the spectral theorem is. Who is the intended audience then?
@marcuslaurel5758 2 years ago
I’d imagine there are many introductory linear algebra courses which talk about how to diagonalize a matrix, but fail to go into further depth beyond that. While an introductory linear algebra course can be more theory based than any previous math course, at the end of the day it can still be mostly a course in computing things involving matrices, leaving out much of the deeper theory and the more mathematically technical theorems.
@kazachekalex 2 years ago
Our goal was to introduce a geometric interpretation of what is happening when you diagonalize a finite-dimensional operator. Lots of people learn theorems about diagonalization as a rote process of manipulating boxes of numbers, with no intuitive understanding of the process
@morristgh 2 years ago
@@kazachekalex I think you did a great job. I was taught all of these concepts in theoretical chemistry but the video really helped deepening my understanding of it!
@lexinwonderland5741 1 year ago
Well, it's a few years late, but I just stumbled on this video after watching the kzfaq.info/get/bejne/g95naK6a1t_FmZ8.html series, and this explains it SO incredibly well!!! Great job!! Shame this didn't make bigger waves in SoME1 or 2, y'all deserve more credit!!
@malikcollins9388 6 months ago
I'm just like Jackie in linear algebra; stiff hair and huge glasses.
@freddyfozzyfilms2688 1 year ago
Yoneda lemma?
@noamzilo6730 1 year ago
I wish all of this existed without the music. It really makes me want to not listen to the end, though it is a really good resource otherwise.
@basecode06791 1 year ago
Open-source the code from the video, bro
@polyhistorphilomath 2 years ago
I took exception to your statement that orange plus blue is me. It’s not. I am not the secondary or tertiary color you think I am. My life is a lie.
@paradoxicallyexcellent5138 2 years ago
The cuts were too jerky. Gotta leave equations and images on screen an extra half-second to second.
@DC430 2 years ago
The alternating voiceover was detrimental to the video. Just stick to a single narrator
@chandrashekarramachandran9769 2 years ago
wtf
@khudadatbaluch7884 2 years ago
It is good, but I could not understand. Get better, master it, and get the inner story. Point it out, make me understand; I am a layman.
@jacquelinedoan2224 2 years ago
Thank you for your comment. Our intended target audience is students with some background in linear algebra, so we understand that it might be difficult to follow as a layman. We will improve on self-containedness in later work!
@khudadatbaluch7884 2 years ago
@@jacquelinedoan2224 It is not nicely done. I teach math, but you have to be better than this.
@emmepombar3328 2 years ago
@@jacquelinedoan2224 I have a master's degree in computer science and had a lot of math, but I still couldn't follow the video, because it was too fast and the pictures were poor.
@LaureanoLuna 2 years ago
3blue1brown as sloppy as usual, always assuming a rushing graphic can make for a real explanation.