
Lagrangian Neural Network (LNN) [Physics Informed Machine Learning]

26,012 views

Steve Brunton

1 day ago

This video was produced at the University of Washington, and we acknowledge funding support from the Boeing Company.
%%% CHAPTERS %%%
00:00 Intro
02:14 Background: The Lagrangian Perspective
05:14 Background: Lagrangian Dynamics
06:46 Variational Integrators
10:40 The Parallel to Machine Learning / Why LNNs
13:22 LNNs: Underlying Concept
16:02 LNNs are ODEs / LNNs: Implementation
18:21 Outro

Comments: 29
@psychii678
@psychii678 1 month ago
This, in combination with some of the modern neural operator methods (Fourier, wavelet), is really going to become the norm for most computational physics in industry that uses continuum models, I think.
@maxbaugh9372
@maxbaugh9372 21 days ago
So we have Lagrangian & Hamiltonian Neural Networks, I think the question is obvious: do we have Hamilton-Jacobi Neural Networks?
@as-qh1qq
@as-qh1qq 1 month ago
Integrating chaotic systems: when Runge-Kutta can be called naive.
@hyperduality2838
@hyperduality2838 1 month ago
Problem (input), reaction (hidden), solution (output) -- the Hegelian dialectic! Your mind (concepts) is the reaction or anti-thesis to the outside world (perception). Input vectors are dual -- contravariant is dual to covariant -- dual bases, Riemann geometry. Concepts are dual to percepts -- the mind duality of Immanuel Kant. "Always two there are" -- Yoda. Neural networks are based upon the Hegelian dialectic! Lagrangians are dual to Hamiltonians.
@kannan.j7867
@kannan.j7867 1 month ago
Great content
@esti445
@esti445 10 days ago
I suggest a "transcendental neural network". Can I publish?
@ingolifs
@ingolifs 1 month ago
Can I clarify something? Does the NN just give the updated position and velocity at the next time step? And then you repeatedly use the NN to integrate the system to find its full time evolution? You can't use this sort of architecture (at least for simple problems) to find the state at an arbitrary point in time in a single NN calculation?
@tassiedevil2200
@tassiedevil2200 1 month ago
This is a good question. I interpreted it as: you get back the accelerations, i.e. enough to make a timestep from your (input) initial values. In either case (baseline or Lagrangian NN) the accelerations are the training data; it is just a matter of how they are used. For predictions, the difference is how the accelerations for the next step are generated, the autodifferentiated-L version being superior, presumably because it is more constrained by the Lagrangian rather than simply being some sort of ML interpolator of the training observations. While I can see this is potentially interesting for deducing underlying dynamics from observations, I am curious how useful it is for chaotic systems. Consider the training data as samples of positions and velocities in phase space (e.g. 4-dimensional for the double pendulum); the velocities (given) and accelerations then indicate the tangent to a trajectory at each of these phase-space points. However, given that trajectories of initially close points diverge in chaotic systems, how realistic is this for marching forwards?
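The rollout both commenters describe can be made concrete with a toy example. A minimal sketch in plain Python, where a hand-coded harmonic-oscillator acceleration (an assumed stand-in for a trained network's output) plays the role of the model, and the integrator marches the state forward one model evaluation per step:

```python
# Semi-implicit Euler rollout: the model returns accelerations,
# the integrator marches (q, v) forward one step at a time.
# `accel_model` is a stand-in for a trained (L)NN; for this demo
# it is the analytic harmonic oscillator qdd = -(k/m) q.

import math

def accel_model(q, v):
    k, m = 1.0, 1.0  # spring constant and mass (assumed demo values)
    return -(k / m) * q

def rollout(q0, v0, dt, steps):
    q, v = q0, v0
    traj = [(q, v)]
    for _ in range(steps):
        a = accel_model(q, v)   # one model evaluation per step
        v = v + dt * a          # semi-implicit (symplectic) Euler update
        q = q + dt * v
        traj.append((q, v))
    return traj

traj = rollout(q0=1.0, v0=0.0, dt=1e-3, steps=1000)
q_final = traj[-1][0]
# For this oscillator the exact solution is q(t) = cos(t),
# so after t = 1 the state should be close to cos(1.0).
```

The divergence worry applies exactly here: any per-step acceleration error compounds through the loop, which is why chaotic systems limit the usable prediction horizon regardless of how the accelerations are produced.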
@MDNQ-ud1ty
@MDNQ-ud1ty 1 month ago
In his older videos he used a PINN that was an integrator to show that they are much better at long-term predictability. I imagine it is exactly the same. These are local methods, not global. Since they are discrete methods they can't do any global derivations, which would be symbolic. It is likely an impossible problem to have a global solver: effectively you are asking to compute an exact timestep to get where you want with zero loss. This would, at the very least, require the symbolic description of the system rather than just discrete sample points.
@zanubiadepasquale
@zanubiadepasquale 1 month ago
So cool, thank you, professor! I have a possibly naive question: does this mean that MLPs are inherently unable to fully model such systems, no matter the complexity or depth of their architecture, because they will always lose a system's symmetry relations?
@substanceandevidence
@substanceandevidence 1 month ago
It's extremely unlikely that you happen upon an architecture that preserves symmetries by accident and then train that network so that it exactly fulfils symmetry relations. It's so improbable that you can be certain that the result will be wrong. By embedding the model in the lagrangian formalism the opposite becomes true: you're suddenly guaranteed that these symmetries are kept.
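For reference, the mechanism this reply describes is the one in the Greydanus et al. LNN paper: the network parameterizes a scalar L(q, qdot), and accelerations follow from the Euler-Lagrange equations, qdd = (d2L/dqdot2)^-1 * (dL/dq - d2L/(dq dqdot) * qdot). A minimal one-degree-of-freedom sketch in plain Python, with finite differences standing in for autodiff and an analytic harmonic-oscillator Lagrangian standing in for a trained network (all function names here are illustrative):

```python
# Euler-Lagrange acceleration from a scalar Lagrangian L(q, qdot):
#   qdd = (dL/dq - d2L/(dq dqdot) * qdot) / (d2L/dqdot2)
# A trained LNN would learn L and use autodiff; here we use an
# analytic harmonic-oscillator Lagrangian and finite differences.

def lagrangian(q, qdot, m=1.0, k=1.0):
    return 0.5 * m * qdot**2 - 0.5 * k * q**2  # T - V

def accel_from_lagrangian(L, q, qdot, h=1e-4):
    # first derivative in q (central difference)
    dL_dq = (L(q + h, qdot) - L(q - h, qdot)) / (2 * h)
    # second derivative in qdot (the 1-DOF "Hessian")
    d2L_dqdot2 = (L(q, qdot + h) - 2 * L(q, qdot) + L(q, qdot - h)) / h**2
    # mixed partial d2L/(dq dqdot) via central differences
    d2L_dq_dqdot = (L(q + h, qdot + h) - L(q + h, qdot - h)
                    - L(q - h, qdot + h) + L(q - h, qdot - h)) / (4 * h**2)
    return (dL_dq - d2L_dq_dqdot * qdot) / d2L_dqdot2

qdd = accel_from_lagrangian(lagrangian, q=0.3, qdot=1.2)
# For L = 1/2 m qdot^2 - 1/2 k q^2 this recovers qdd = -(k/m) q = -0.3.
```

Because every prediction is routed through this structure, conservation properties of the learned L are inherited by the dynamics, rather than hoped for by accident.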
@theekshanabandara9293
@theekshanabandara9293 1 month ago
Very interesting! ❤️
@mootytootyfrooty
@mootytootyfrooty 1 month ago
Least action seems like the only way you can actually ground a neural net if you want it to stay in reality, even abstracted from physics, since at some point it needs to come back to a reality where thermodynamics always governs. It seems like a necessary core for neural nets in general to adopt.
@dadsonworldwide3238
@dadsonworldwide3238 1 month ago
I do long for an agent that can surf all models, tuned and weighted, although I'm sure it will be a while before we the people really get tomorrow's access today like that.
@johnwaczak8028
@johnwaczak8028 1 month ago
Excellent video! About the intrinsic coordinates problem for HNNs, can't you use an auto-encoder to "discover" the correct (q,p) pairs from your input data like Greydanus et al do for the pendulum video example in the HNN paper? It seems like the added cost of computing the Hessian could be a significant bottleneck for more realistic, high-dimensional datasets.
@jaikumar848
@jaikumar848 1 month ago
Hello sir! Is it possible to make a mathematical model/transfer function of a diode/thyristor, so that we can predict the output of the diode just by convolving the diode's response with an input sine wave?
@FredericMbouleNgolle
@FredericMbouleNgolle 1 month ago
Good evening sir, and thank you for your videos. Can we use these methods (all that you have presented) in an epidemiological model (driven by an ODE or PDE system)?
@looper6394
@looper6394 1 month ago
Any proof that it will find the right Lagrangian? In my case the qpp values fit, however the Lagrangian is completely off. It seems like there is no 1:1 relationship.
@vinitsingh5546
@vinitsingh5546 1 month ago
Could you please do a video explanation for Implicit Neural Representations with Periodic Activation Functions? Thank you!
@BillTubbs
@BillTubbs 6 days ago
An easy introduction/refresher on Newtonian, Hamiltonian, and Lagrangian mechanics here: kzfaq.info/get/bejne/Zqp4gaql2NPReGw.html
@799usman
@799usman 1 month ago
To all those who read my comment: I want to apply a Lagrangian Neural Network (LNN) to approximate or model a temporal signal, such as temporal traffic flow. However, I don't know where to start. Could you guide me on whether it is possible, and where I can find related Python code? I would also be happy to learn if anyone has applied an LNN to the MNIST features as an embedded layer in a neural network.
@arafathasan-ec5cj
@arafathasan-ec5cj 1 month ago
Sir, can you make a video on tensors for a physics major? We would be grateful if you made one.
@Jaylooker
@Jaylooker 1 month ago
I think this Lagrangian neural network would be good at most classical physics simulations, with applications such as use in a physics engine. There is the Lagrangian of the Standard Model, so it should be possible to also replicate particle physics, excluding gravitational effects.
@edisonj5335
@edisonj5335 1 month ago
excellent
@rudypieplenbosch6752
@rudypieplenbosch6752 1 month ago
The videos went from being really instructive to just skimming the surface, unfortunately. The content has changed, and not for the better.
@AABB-px8lc
@AABB-px8lc 1 month ago
I know no one cares, but please tell me when that perceptron BS ends on this channel, so I can finally enjoy real math like half a year or so ago.
@SuperSuperGenius
@SuperSuperGenius 1 month ago
I so want to dub a version of this video to ZZ Top's 'La Grange.'