These videos were created to accompany a university course, Numerical Methods for Engineers, taught Spring 2013. The text used in the course was "Numerical Methods for Engineers, 6th ed." by Steven Chapra and Raymond Canale.
Comments: 16
@GPSOne 10 years ago
You could say in the comments of this video that it presents the finite element Galerkin method in a very understandable way. I've looked for a video like this for a long time and, to be honest, I like that I don't need subtitles to understand your English :)
@islamelbaz7232 8 years ago
You have used the weight functions as the basis functions, yet you said the weight functions equal the first derivative of the residual w.r.t. alpha. How come? You have used the technique of the least squares method.
@lamprosmuda 9 years ago
Please, I need to learn finite element methods for partial differential equations such as Helmholtz, Laplace, conduction, etc. I cannot lay my hands on materials with solved examples. Thanks.
@islamelbaz7232 8 years ago
+lamprosmuda I'm also ... if I achieve anything I'll tell you ...
@nguyenhung97 4 years ago
At 4:49, can you explain why the partial derivative with respect to alpha is set equal to zero? Thank you.
@MuhammedAshour 4 years ago
Because you need to minimize the sum of squares of the errors. Whenever you need to minimize, set the derivative to zero, and since we minimize over both parameters alpha_0 and alpha_1, we do it with partial derivatives.
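To make the reply above concrete, here is a minimal least-squares sketch (a generic illustration, not code from the video): fitting a line y ≈ alpha_0 + alpha_1·x, setting both partial derivatives of the sum of squared errors to zero gives the two normal equations, which can be solved in closed form.

```python
# Sketch: least-squares line fit by setting partial derivatives of the
# sum of squared errors S(alpha0, alpha1) to zero. dS/dalpha0 = 0 and
# dS/dalpha1 = 0 reduce to the two normal equations solved below.

def least_squares_line(xs, ys):
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Normal equations:
    #   n  * alpha0 + sx  * alpha1 = sy
    #   sx * alpha0 + sxx * alpha1 = sxy
    alpha1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    alpha0 = (sy - alpha1 * sx) / n
    return alpha0, alpha1

# Data lying exactly on y = 1 + 2x, so the fit recovers those coefficients.
a0, a1 = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Because both partial derivatives are linear in the alphas, the minimum is found by a direct solve; no iteration is needed.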
@dgholstein 9 years ago
Okay, let me ask the question another way: the "Let" appears to define the "weak" formulation, is that correct? If it is, I think you need to make that clearer.
@oscar-5326 6 years ago
I don't understand what Wi and T1 are at around 14:07.
@TheFireHacker 4 years ago
Wi is the partial derivative of the error with respect to both alpha_0 and alpha_1. alpha_0 and alpha_1 are parameters, so Wi can be thought of as a measure of how much error was observed due to our parameters. (Very hand-wavy explanation.)
@dgholstein 9 years ago
Okay, when you define the Galerkin method at around 12:00, you're letting W sub i, which would be the derivatives with respect to your linear regression terms, subscripts 0 and 1, be equated to two linear equations, each with an alpha 0 and an alpha 1. It's doubly confusing since you now refer to subscripts 1 and 2, when your regression uses subscripts 0 and 1. Maybe you could write out the basis functions a little more clearly, especially in terms of the alphas.
@dgholstein 9 years ago
Watching it again, it appears the equation for the error minimum is changed by the "Let": we are no longer getting exactly the minimum, but the Galerkin method gets us close enough, while also allowing us to manipulate the equations into a solvable form. Is this what is happening?
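For anyone puzzling over the same point, here is a minimal Galerkin sketch on a toy boundary-value problem (my own example, not the one from the video): the weight function is taken equal to the basis function, and the weighted-residual integral is forced to zero to determine the coefficient.

```python
# Sketch: Galerkin weighted residuals on the toy BVP u'' + 2 = 0,
# u(0) = u(1) = 0, whose exact solution is u = x(1 - x).
# Trial solution: u_h = alpha * phi(x) with phi(x) = x(1 - x).
# Galerkin choice: the weight function equals the basis function phi.

def trapz(f, a, b, n=1000):
    # Simple trapezoidal rule for the weighted-residual integrals.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

phi = lambda x: x * (1 - x)
phi_dd = -2.0  # second derivative of phi(x) = x - x^2

# Residual of the trial solution: R(x) = alpha * phi'' + 2.
# Galerkin condition: integral over [0, 1] of phi(x) * R(x) dx = 0,
# which is linear in alpha:  alpha * A + b = 0.
A = trapz(lambda x: phi(x) * phi_dd, 0, 1)
b = trapz(lambda x: phi(x) * 2.0, 0, 1)
alpha = -b / A
# Here alpha = 1, so u_h = x(1 - x), which happens to be exact because
# the trial function can represent the true solution.
```

In general the trial space cannot represent the exact solution, and the Galerkin condition then delivers the "close enough" approximation the comment describes: the residual is made orthogonal to every basis function rather than identically zero.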
@forTodaysAdventure 7 years ago
You forgot a 2.
@deshengzheng4704 7 years ago
I think the right side equals 0, so the 2 can be cancelled here. But keeping the 2 would have been much clearer.
@lamprosmuda 9 years ago
That equation isn't a partial differential equation.
@kvyi 9 years ago
No, it is an ordinary differential equation. The reason for the title is that the video is part of a unit on methods that *can* be used to solve PDEs, and the finite element method *can* be used to solve PDEs. I use an ODE because it makes the explanation simpler.
@marshalcraft 9 years ago
lamprosmuda, did you go and post this on all of his videos?