However, this algorithm is not optimal in the worst case, and it does not deal with unbounded Voronoi cells
@potatomo9609 · 21 days ago
whats with all the jump scares? 😭
@uncleole503 · a month ago
this is very different from Fortune's algorithm
@unveil7762 · a month ago
This is very cool thank you
@trumpgaming5998 · 2 months ago
Okay, but why don't you explain why this method sometimes doesn't work for particular degrees, depending on the function?
@trumpgaming5998 · 2 months ago
For instance, if you wanted to minimize cos(x) = c1, where c1 is a constant, gradient descent one way or another yields c1 = 0. But the constant term in the Taylor expansion of cos(x) is 1, since cos(x) = 1 - x^2/2 + ... This means you have to include at least the second term for this to work, or terms of even higher degree for functions other than cos(x) in this example.
@ritwickjha3954 · 2 months ago
When the ray cast from the point crosses a vertex, that one intersection is counted twice (because two edges are defined to share that point), which will give wrong answers.
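A minimal Python sketch (illustrative, not the video's code) of the standard fix for this: the half-open edge rule, where each edge includes one endpoint and excludes the other, so a ray passing through a shared vertex is counted exactly once. The function name `point_in_polygon` is my own.

```python
def point_in_polygon(px, py, poly):
    """Even-odd ray casting with the half-open edge rule.

    An edge is counted only when exactly one of its endpoints lies
    strictly above the ray's height py, so a vertex shared by two
    edges contributes a single crossing instead of two.
    """
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):  # half-open: vertex belongs to one edge only
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside
```

With this rule, a ray from (1, 2) through the triangle vertex at (4, 2) below still classifies the point correctly.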
@stevencrews5796 · 2 months ago
Thanks so much for this! I needed to find centroids of irregular polygons for a Matter.js project and your explanation and code examples got me up and running quickly.
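For reference, a minimal Python sketch of the shoelace-based centroid of a simple (non-self-intersecting) polygon — the same computation, though this commenter's project would use JavaScript with Matter.js:

```python
def polygon_centroid(poly):
    """Centroid of a simple polygon via the shoelace formula.

    Accumulates twice the signed area and the area-weighted vertex
    sums, then divides; works for any vertex winding order.
    """
    twice_area = 0.0
    cx = cy = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        cross = x1 * y2 - x2 * y1   # signed parallelogram area of this edge
        twice_area += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    area = twice_area / 2.0
    return cx / (6.0 * area), cy / (6.0 * area)
```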
@shihyuehjan3835 · 2 months ago
Thank you so much for the video!
@NeoZondix · 2 months ago
You're Chopping it
@gutzimmumdo4910 · 2 months ago
what's the time complexity of this algo?
@tedlorance6968 · 2 months ago
Out of curiosity, is there a known or best-guess optimal or near-optimal value for the padding in the algorithm? Perhaps related to the mean distance between the sites?
@matinfazel8240 · 3 months ago
very helpful :))
@aleksandrstukalov · 3 months ago
Is there any research paper that you took this algorithm from?
@EdgarProgrammator · 3 months ago
No, I couldn't find an easy, step-by-step algorithm for building Voronoi diagrams (unlike Delaunay triangulation algorithms, which are easy to find). That's why I created this video.
@Kewargs · a month ago
@EdgarProgrammator What about Fortune's sweep algorithm?
@zzz_oi · 3 months ago
this channel is art
@EdgarProgrammator · 3 months ago
Thank you
@jamesgaither1899 · 3 months ago
Edgar who is that guy? XD
@richardmarch3750 · 3 months ago
this is exactly how math should be ngl
@GustavoCesarMoura · 3 months ago
lmao the jumpscare
@ignSpoilz · 3 months ago
Omg the nun face why 😭😭
@EdgarProgrammator · 3 months ago
idk 😐
@aleksandrstukalov · 3 months ago
This is an awesome explanation of the algorithm! Thank you for sharing such helpful content! ❤❤❤
@toddkfisher · 3 months ago
Would a sixth degree polynomial in x be referred to as "x hexed"? Really like the video.
@LEGEND_SPRYZEN · 4 months ago
We are taught this in high school class 12.
@korigamik · 4 months ago
Bro this is cool. Can you share the source code for the animations in this video?
@stuart_360 · 4 months ago
oh its good , but i thought i will be able to apply it in my exams lol
@rosettaroberts8053 · 4 months ago
The second example would have been solved better by linear regression.
@beautyofmath6821 · 4 months ago
Beautiful and very well made video, I personally loved the old tv vibe to this, not to disregard the instructive yet nicely explained method of gradient descent. Subscribed
@bernardoolisan1010 · 4 months ago
Why squaring the function? do we always need to square the function to solve it via gradient descent?
@nguyenthanhvinh5942 · 4 months ago
Gradient descent finds an optimal minimum of a function f(x), not a solution of f(x) = 0. However, a minimum of f(x) is exactly a solution of f'(x) = 0, where f'(x) is the derivative of f(x). So if your function has only one variable and you want to solve f(x) = 0, you can treat f(x) itself as the derivative term in the update rule. If your function has more than one variable, that substitution doesn't work: you are given only one function, so you can't tell which variable the "derivative" would be taken with respect to.

That is why the video uses the least-squares approach instead. The function f^2 always has a minimum. If the minimum value is 0, the minimizer is a solution of f(x) = 0; if not, gradient descent still finds a minimum, but it is not a solution.
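A minimal Python sketch of this idea (my own illustration, not the video's code): descend on g(x) = f(x)², whose gradient by the chain rule is g'(x) = 2·f(x)·f'(x), to find a root of f. The function names and the example quintic f(x) = x⁵ + x − 3 (the video's equation with y = 3) are assumptions for illustration.

```python
def solve_by_squared_descent(f, df, x0, lr=0.005, steps=5000):
    """Find a root of f by gradient descent on g(x) = f(x)**2.

    g'(x) = 2 * f(x) * df(x), so each step moves x downhill on the
    squared residual; if g reaches 0, x is a root of f.
    """
    x = x0
    for _ in range(steps):
        x -= lr * 2 * f(x) * df(x)
    return x

# Example: root of f(x) = x**5 + x - 3, with derivative 5*x**4 + 1
root = solve_by_squared_descent(lambda x: x**5 + x - 3,
                                lambda x: 5 * x**4 + 1,
                                x0=1.0)
```

Note the caveat from the comment above still applies: if f has no real root, this converges to a minimum of f² that is not a solution.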
@jadeblades · 4 months ago
genuinely curious why you put that in the intro
@jamesgaither1899 · 4 months ago
Where did you get the idea for the intro? It's kind of hilarious and terrifying and I love it.
@devrus265 · 4 months ago
The video was helpful
@roberthuff3122 · 4 months ago
Panache defined.
@MrBrassmonkey12345 · 4 months ago
alan watts?
@AhmAsaduzzaman · 4 months ago
Yes, solving the equation x^5 + x = y for x in terms of y is much harder than solving a quadratic, because there is no general formula for polynomials of degree five or higher, due to the Abel-Ruffini theorem. This means that, in general, we can't express the solutions in radicals as we can for quadratics, cubics, and quartics.

However, we can still find solutions numerically or graphically. Numerical methods such as Newton's method can approximate the roots of this equation for specific values of y. For a symbolic approach, we would typically use a computer algebra system (CAS) to manipulate the equation and look for solutions.
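A short Python sketch of the Newton's-method route mentioned here (illustrative; the function name `invert_quintic` is my own). It exploits the fact that f(x) = x⁵ + x − y is strictly increasing, since f'(x) = 5x⁴ + 1 > 0, so there is exactly one real root for every y:

```python
def invert_quintic(y, x0=0.0, tol=1e-12, max_iter=100):
    """Solve x**5 + x = y for x with Newton's method.

    f(x) = x**5 + x - y is strictly increasing, so the real
    root is unique and the iteration converges for this f.
    """
    x = x0
    for _ in range(max_iter):
        fx = x**5 + x - y
        x_new = x - fx / (5 * x**4 + 1)  # Newton step: x - f(x)/f'(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For example, since 2⁵ + 2 = 34, `invert_quintic(34.0)` converges to x ≈ 2.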
@AhmAsaduzzaman · 4 months ago
AWESOME video! Thanks! Here's some basic intuition for "we seek a cubic polynomial approximation (ax^3 + bx^2 + cx + d) to cosine on the interval [0, π]."

Say you want to represent the cosine function, which is wavy and complex, with a much simpler formula: a cubic polynomial. This is a smooth curve whose shape is determined by the specific numbers (coefficients) a, b, c, and d.

Why would we want to do this? Cosine is a trigonometric function that's fundamental in fields like physics and engineering, but it can be computationally intensive to evaluate repeatedly. A cubic polynomial, on the other hand, is much simpler to work with and can be computed very quickly. So we're on a mission to find the best possible cubic polynomial, one that behaves as much like the cosine function as possible on the interval from 0 to π (from the peak of the cosine wave down to its trough).

To find the a, b, c, and d that make our cubic polynomial a doppelgänger for cosine, we use a bit of mathematical magic called "least squares approximation". This method finds the best fit by ensuring that, on average, the vertical distance between the cosine curve and our cubic polynomial is as small as possible. Imagine stretching a bunch of tiny springs from the polynomial to the cosine curve; least squares finds the polynomial that would stretch those springs the least.

Once we have our cleverly crafted polynomial, we can use it to estimate cosine values quickly and efficiently. The approximation will be incredibly close to the real thing, making it a nifty shortcut for complex calculations.
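A small numpy sketch of this fit (a discrete stand-in for the continuous least-squares problem, not the video's code): sample cosine densely on [0, π] and let `np.polyfit` minimize the summed squared vertical distances.

```python
import numpy as np

# Dense samples of cosine on [0, pi]
xs = np.linspace(0.0, np.pi, 1001)
ys = np.cos(xs)

# Least-squares cubic fit; coefficients come back highest degree
# first, i.e. [a, b, c, d] for a*x**3 + b*x**2 + c*x + d
coeffs = np.polyfit(xs, ys, 3)
cubic = np.poly1d(coeffs)

# Worst-case vertical gap between the cubic and cosine on the samples
max_err = float(np.max(np.abs(cubic(xs) - ys)))
```

The resulting cubic tracks cosine to within a few hundredths across the whole interval, which is the "springs stretched the least" picture made concrete.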
@sang459 · 4 months ago
Elegant
@newmoodclown · 4 months ago
i thought my screen got dust, but unique style. Nice!
@ananthakrishnank3208 · 4 months ago
Thank you for the video!! Took some time to grasp the second example. No surprise. This gradient descent optimization is at the heart of machine learning.
@mourensun7775 · 4 months ago
I'd like to know how you made this video's animations.
@hallooww · 4 months ago
what text to speech do you use
@markzuckerbread1865 · 4 months ago
Awesome vid, instant sub
@darkseid856 · 4 months ago
what is that intro bruh
@zacvh · 4 months ago
Bro this video is so fire. I get so annoyed by the voices in my actual school videos that they make you watch and this is a huge step up from that it actually makes this seem like it’s a top secret information like you’re debriefing the first nuclear tests or something
@KP-ty9yl · 4 months ago
Excellent explanation, immediately subscribed 😁
@Daniel_Larson_Records · 4 months ago
There's something about the way you talk and edit the video together that actually makes it interesting. I can't put my finger on it. Maybe it's how novel it is? I don't know, but PLEASE make more videos like this. It's amazing, and I actually understood it completely (rare for someone so bad at math lol)
@ErikNij · 4 months ago
But how do you choose this "learning rate"? Like in your x^5 example, if you had chosen 0.025, you would never get a solution, as the solver would spiral off to infinity. If you know your function has a zero, could you use the residual (the value of the previous evaluation) to guess how far you need to step? Perhaps paired with a relaxation factor?
@nolanfaught6974 · 4 months ago
More advanced gradient descent algorithms use a decreasing sequence of numbers as the learning rate. This lets the algorithm converge quickly in the first few iterations and more slowly in later iterations to avoid "overstepping" the solution. Another modification solves for the optimal learning rate at each step (an exact line search). Conjugate gradient methods use conjugate step directions to guarantee convergence in at most n iterations on an n-dimensional quadratic problem, but each iteration is more costly.

It's important to recognize that the learning rate shouldn't matter too heavily unless your problem is ill-conditioned, in which case derivative-based methods don't provide much of an advantage over just guessing, and you would use simulated annealing or other stochastic (RNG-based) methods.
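A minimal Python sketch of the first idea in this reply, a decreasing learning-rate schedule (my own illustration; the 1/(1 + decay·t) schedule and all names are assumptions, not from the video):

```python
def gradient_descent_decay(grad, x0, lr0=0.2, decay=0.01, steps=500):
    """Gradient descent with a shrinking step size lr_t = lr0 / (1 + decay*t).

    Early iterations take large steps for fast progress; later
    iterations take small steps to avoid overstepping the minimum.
    """
    x = x0
    for t in range(steps):
        lr = lr0 / (1.0 + decay * t)
        x -= lr * grad(x)
    return x

# Example: minimize f(x) = (x - 3)**2, whose gradient is 2*(x - 3)
x_min = gradient_descent_decay(lambda x: 2 * (x - 3), x0=10.0)
```

With a fixed rate the iterates can oscillate around the minimum; the decaying schedule damps those oscillations while still converging.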