How do I rotate a 2D point? · 2:05 · 10 months ago
Limit · 9:29 · 1 year ago
Infinito · 0:38 · 1 year ago
Point in polygon (Python3) · 2:29 · 1 year ago
Calculus in Python · 5:18 · 1 year ago
Convolution · 6:19 · 1 year ago
The Recursive Square Function · 3:37
Merge Sort - Recursive Procedure · 1:42
Merge Sort example · 0:54 · 2 years ago
Angles from 3 points in computer · 2:59
Comments
@kifiansy · 13 days ago
What's the name of the application you use?
@jcaceres149 · 17 days ago
However, this algorithm is not optimal in the worst case, and it does not deal with unbounded Voronoi cells.
@potatomo9609 · 21 days ago
What's with all the jump scares? 😭
@uncleole503 · 1 month ago
This is very different from Fortune's algorithm.
@unveil7762 · 1 month ago
This is very cool, thank you!
@trumpgaming5998 · 2 months ago
Okay, but why don't you explain why this method sometimes fails for particular degrees, depending on the function?
@trumpgaming5998 · 2 months ago
For instance, if you wanted to minimize cos(x) = c1, where c1 is a constant, gradient descent one way or another yields c1 = 0, but the constant term in the Taylor expansion of cos(x) is 1, since cos(x) = 1 - x^2/2 + ... This means you have to include at least the second term for this to work, or an even higher degree for functions other than cos(x) in this example.
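A minimal sketch of the point being raised, assuming the goal is to find a root of cos(x) by descending on the squared residual cos(x)^2: the degree-0 Taylor truncation of cos (the constant 1) has no root at all, while keeping the x^2 term gives 1 - x^2/2, whose root sqrt(2) already roughly approximates pi/2. The starting point and learning rate below are illustrative assumptions, not the video's exact values.

```python
import math

# Gradient descent on the squared residual g(x) = cos(x)^2: its minimum value 0
# is reached exactly where cos(x) = 0.
def solve_cos_zero(x=1.0, lr=0.1, steps=2000):
    for _ in range(steps):
        grad = -2.0 * math.cos(x) * math.sin(x)   # d/dx cos(x)^2
        x -= lr * grad
    return x

print(solve_cos_zero(), math.pi / 2)   # both ~1.5708

# Truncated Taylor series of cos(x) around 0:
#   degree 0: 1            -> has no root at all
#   degree 2: 1 - x^2/2    -> root at sqrt(2) ~ 1.414, a rough estimate of pi/2
print(math.sqrt(2))
```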
@ritwickjha3954 · 2 months ago
When the ray cast from the point crosses a vertex, that single intersection gets counted twice (because two edges share that vertex), which will give wrong answers.
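A minimal point-in-polygon sketch that addresses this edge case using the common half-open rule (an edge is counted only when the ray's y lies in the half-open span between its endpoints), so a shared vertex is crossed exactly once. The helper name and test polygon are illustrative, not the video's code.

```python
def point_in_polygon(px, py, polygon):
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Half-open rule: the edge counts only if exactly one endpoint is at or below py,
        # so a horizontal ray through a shared vertex intersects only one of the two edges.
        if (y1 <= py) != (y2 <= py):
            # x-coordinate where the edge crosses the horizontal ray through (px, py)
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

print(point_in_polygon(1.0, 1.0, [(0, 0), (4, 0), (4, 4), (0, 4)]))  # True
```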
@stevencrews5796 · 2 months ago
Thanks so much for this! I needed to find centroids of irregular polygons for a Matter.js project, and your explanation and code examples got me up and running quickly.
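For reference, a minimal sketch of the standard shoelace-based centroid formula for a simple (non-self-intersecting) polygon; this is the usual textbook form, not necessarily the video's code, and the test rectangle is illustrative.

```python
def polygon_centroid(points):
    area2 = 0.0   # twice the signed area
    cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        cross = x1 * y2 - x2 * y1
        area2 += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    # Centroid = (1 / (6 * signed_area)) * sums, and signed_area = area2 / 2
    return cx / (3 * area2), cy / (3 * area2)

print(polygon_centroid([(0, 0), (4, 0), (4, 2), (0, 2)]))  # (2.0, 1.0)
```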
@shihyuehjan3835 · 2 months ago
Thank you so much for the video!
@NeoZondix · 2 months ago
You're Chopping it
@gutzimmumdo4910 · 2 months ago
What's the time complexity of this algo?
@tedlorance6968 · 2 months ago
Out of curiosity, is there a known or best-guess optimal (or near-optimal) value for the padding in the algorithm? Perhaps related to the mean distance between the sites?
@matinfazel8240 · 3 months ago
Very helpful :))
@aleksandrstukalov · 3 months ago
Is there any research paper that you took this algorithm from?
@EdgarProgrammator · 3 months ago
No, I couldn't find an easy, step-by-step algorithm for building Voronoi diagrams (unlike Delaunay triangulation algorithms, which are easy to find). That's why I created this video.
@Kewargs · 1 month ago
@EdgarProgrammator What about Fortune's sweep algorithm?
@zzz_oi · 3 months ago
This channel is art.
@EdgarProgrammator · 3 months ago
Thank you
@jamesgaither1899 · 3 months ago
Edgar, who is that guy? XD
@richardmarch3750 · 3 months ago
This is exactly how math should be, ngl.
@GustavoCesarMoura · 3 months ago
lmao the jumpscare
@ignSpoilz · 3 months ago
Omg the nun face, why 😭😭
@EdgarProgrammator · 3 months ago
idk 😐
@aleksandrstukalov · 3 months ago
This is an awesome explanation of the algorithm! Thank you for sharing such helpful content! ❤❤❤
@toddkfisher · 3 months ago
Would a sixth-degree polynomial in x be referred to as "x hexed"? Really like the video.
@LEGEND_SPRYZEN · 4 months ago
We are taught this in high school, in class 12.
@korigamik · 4 months ago
Bro, this is cool. Can you share the source code for the animations in this video?
@stuart_360 · 4 months ago
Oh, it's good, but I thought I'd be able to apply it in my exams lol
@rosettaroberts8053 · 4 months ago
The second example would have been solved better by linear regression.
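A minimal sketch of that alternative, assuming the example in question amounts to fitting a line to sample points: ordinary least squares has a closed-form solution, so no iterations or learning rate are needed. The data below is made up purely for illustration.

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

A = np.column_stack([xs, np.ones_like(xs)])          # design matrix [x, 1]
(slope, intercept), *_ = np.linalg.lstsq(A, ys, rcond=None)
print(slope, intercept)   # best-fit line in one solve, no gradient descent
```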
@beautyofmath6821 · 4 months ago
Beautiful and very well-made video. I personally loved the old TV vibe of this, not to disregard the instructive yet nicely explained method of gradient descent. Subscribed.
@bernardoolisan1010 · 4 months ago
Why square the function? Do we always need to square the function to solve it via gradient descent?
@nguyenthanhvinh5942 · 4 months ago
Gradient descent finds an optimal (minimum) point of a function f(x); it does not directly find a solution of f(x) = 0. However, a minimum of f(x) is exactly a solution of f'(x) = 0 (the derivative of f(x)). So if your function has only one variable, to find a solution of f(x) = 0 you can put f(x) itself in the place where the derivative would go and proceed the same way. If your function has more than one variable, you can't make that substitution, because only one function is given and you don't know which variable it would be the derivative with respect to (whereas in the one-variable case, f(x) plays the role of a derivative in x when you use gradient descent to find the solution). So the answer is the least-squares approach shown in the video: f^2 always has an optimal minimum point. If the minimum value is 0, that point is a solution; if not, gradient descent still finds a minimum, but it is not a solution.
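A minimal sketch of that least-squares idea: to solve f(x) = 0, descend on g(x) = f(x)^2, whose gradient 2*f(x)*f'(x) drives f toward zero. The specific f, starting point, and learning rate below are illustrative assumptions, not the video's exact example.

```python
def solve_root(f, df, x=0.0, lr=0.005, steps=2000):
    for _ in range(steps):
        x -= lr * 2.0 * f(x) * df(x)   # gradient step on f(x)^2
    return x

f  = lambda x: x**5 + x - 3
df = lambda x: 5 * x**4 + 1

root = solve_root(f, df)
print(root, f(root))   # f(root) should be close to 0
```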
@jadeblades · 4 months ago
Genuinely curious why you put that in the intro.
@jamesgaither1899 · 4 months ago
Where did you get the idea for the intro? It's kind of hilarious and terrifying, and I love it.
@devrus265 · 4 months ago
The video was helpful.
@roberthuff3122 · 4 months ago
Panache defined.
@MrBrassmonkey12345 · 4 months ago
Alan Watts?
@AhmAsaduzzaman · 4 months ago
Yes, solving the equation x^5 + x = y for x in terms of y is much more complex than solving quadratic equations because there is no general formula for polynomials of degree five or higher, due to the Abel-Ruffini theorem. This means that, in general, we can't express the solutions in terms of radicals as we can for quadratics, cubics, and quartics. However, we can still find solutions numerically or graphically. Numerical methods such as Newton's method can be used to approximate the roots of this equation for specific values of y. If we're interested in a symbolic approach, we would typically use a computer algebra system (CAS) to manipulate the equation and find solutions.
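A minimal Newton's-method sketch for the case described above, solving x^5 + x = y numerically for a specific y; the value y = 3, the starting guess, and the helper name are illustrative assumptions.

```python
def solve_quintic(y, x=1.0, iters=50, tol=1e-12):
    for _ in range(iters):
        f  = x**5 + x - y          # residual of x^5 + x = y
        df = 5 * x**4 + 1          # derivative, always >= 1, so no division by zero
        step = f / df
        x -= step
        if abs(step) < tol:
            break
    return x

x = solve_quintic(3.0)
print(x, x**5 + x)   # x**5 + x should be ~3.0
```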
@AhmAsaduzzaman · 4 months ago
AWESOME video! Thanks! Trying to put some basic understanding on this: "We seek a cubic polynomial approximation (ax^3 + bx^2 + cx + d) to cosine on the interval [0, π]."
Let's say you want to represent the cosine function, which is a bit wavy and complex, with a much simpler formula: a cubic polynomial. This polynomial is a smooth curve described by the equation above, where a, b, c, and d are specific numbers (coefficients) that determine the shape of the curve.
Now, why would we want to do this? Cosine is a trigonometric function that's fundamental in fields like physics and engineering, but it can be computationally intensive to calculate its values repeatedly. A cubic polynomial, on the other hand, is much simpler to work with and can be computed very quickly. So we're on a mission to find the best possible cubic polynomial that behaves as much like the cosine function as possible on the interval from 0 to π (from the peak of the cosine wave down to its trough).
To find the perfect a, b, c, and d that make our cubic polynomial a doppelgänger for cosine, we use a method that involves a bit of mathematical magic called "least squares approximation". This method finds the best fit by ensuring that, on average, the vertical distance between the cosine curve and our cubic polynomial is as small as possible. Imagine you could stretch a bunch of tiny springs from the polynomial to the cosine curve: least squares finds the polynomial that would stretch those springs the least.
Once we have our cleverly crafted polynomial, we can use it to estimate cosine values quickly and efficiently. The beauty of this approach is that our approximation will be incredibly close to the real deal, making it a nifty shortcut for complex calculations.
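A minimal sketch of that fit using a discrete least-squares polynomial fit (numpy.polyfit) over sample points on [0, π]; the video's own derivation may differ, and the sample count is an arbitrary choice.

```python
import numpy as np

xs = np.linspace(0.0, np.pi, 1000)
a, b, c, d = np.polyfit(xs, np.cos(xs), 3)   # coefficients, highest power first
print(a, b, c, d)

approx = lambda x: ((a * x + b) * x + c) * x + d   # Horner form of ax^3 + bx^2 + cx + d
print(np.max(np.abs(approx(xs) - np.cos(xs))))     # worst-case error on the interval
```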
@sang459 · 4 months ago
Elegant
@newmoodclown · 4 months ago
I thought my screen had dust on it, but it's a unique style. Nice!
@ananthakrishnank3208 · 4 months ago
Thank you for the video!! It took some time to grasp the second example. No surprise: this gradient descent optimization is at the heart of machine learning.
@mourensun7775 · 4 months ago
I'd like to know how you made the animation in this video.
@hallooww · 4 months ago
What text-to-speech do you use?
@markzuckerbread1865 · 4 months ago
Awesome vid, instant sub.
@darkseid856 · 4 months ago
What is that intro, bruh?
@zacvh · 4 months ago
Bro, this video is so fire. I get so annoyed by the voices in the videos my school actually makes you watch, and this is a huge step up from that. It makes this seem like top-secret information, like you're debriefing the first nuclear tests or something.
@KP-ty9yl · 4 months ago
Excellent explanation, immediately subscribed 😁
@Daniel_Larson_Records · 4 months ago
There's something about the way you talk and edit the video together that actually makes it interesting. I can't put my finger on it. Maybe it's how novel it is? I don't know, but PLEASE make more videos like this. It's amazing, and I actually understood it completely (rare for someone so bad at math lol).
@ErikNij · 4 months ago
But how do you choose this "learning rate"? Like in your x^5 example, if you had chosen 0.025, you would never get a solution, as your solver would spiral off to infinity. If you know your solution has a zero, could you use the residual (the value of the previous evaluation) to guess how far you need to step? Perhaps paired with a relaxation factor?
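A minimal sketch of that learning-rate sensitivity, using f(x) = x^5 + x - 3 as an assumed stand-in for the video's example and descending on f(x)^2; the two step sizes below are illustrative, not the video's values.

```python
def descend(lr, x=0.0, steps=200):
    f  = lambda x: x**5 + x - 3
    df = lambda x: 5 * x**4 + 1
    for _ in range(steps):
        x -= lr * 2.0 * f(x) * df(x)   # gradient step on f(x)^2
        if abs(x) > 1e6:               # iterate has blown up
            return float("inf")
    return x

print(descend(lr=0.005))   # small step: converges to the root near 1.13
print(descend(lr=0.05))    # too-large step: overshoots and diverges (inf)
```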
@nolanfaught6974 · 4 months ago
More advanced gradient descent algorithms use a decreasing sequence of numbers as the learning rate. This lets the algorithm converge quickly in the first few iterations and more slowly in later iterations, to avoid "overstepping" the solution. Another modification involves solving for the optimal learning rate at each step with another gradient descent method, called exact gradient descent. Conjugate gradient descent uses orthogonal step directions to guarantee convergence in exactly n iterations, but each iteration is more costly. It's important to recognize that the learning rate shouldn't matter too heavily unless your problem is ill-conditioned, in which case derivative-based methods don't provide much of an advantage over just guessing, and you would use simulated annealing or other stochastic (RNG-based) methods.
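A minimal sketch of the decreasing-learning-rate idea from the comment above; the 1/(1 + decay*k) schedule and the quadratic objective are illustrative choices, not the only options.

```python
def descend_with_decay(grad, x, lr0=0.8, decay=0.01, steps=1000):
    for k in range(steps):
        lr = lr0 / (1.0 + decay * k)   # decreasing sequence of learning rates
        x -= lr * grad(x)              # big early steps, gentle late steps
    return x

# Minimize (x - 3)^2, whose gradient is 2*(x - 3).
print(descend_with_decay(lambda x: 2 * (x - 3), x=0.0))   # ~3.0
```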
@MissPiggyM976 · 4 months ago
Well done, thanks!
@YannLeBihanFractals · 4 months ago
Use Newton's method, it's quicker!
@aymanelhasbi5030 · 4 months ago
Thanks, sir!
@Arycke · 4 months ago
I hear the Kingdom Hearts menu selection sound 😮