Optical Flow - Computerphile

102,966 views

Computerphile

4 years ago

Pixel-level movement in images - Dr Andy French takes us through the idea of optic (or optical) flow.
Finding the Edges (Sobel): • Finding the Edges (Sob...
More on Optic Flow: Coming Soon
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments: 82
@gorkyrojas3446 4 years ago
He never explained how you detect motion between images, only that it is "hard" for various reasons. How do you generate those vectors?
@sodafries7344 4 years ago
Very, very simplified: select a pixel, find that pixel in the next image, and draw a vector between them. You probably didn't watch the video.
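In Python with OpenCV, that single-pixel version might look something like the sketch below; the frame file names and the chosen pixel location are made up for illustration.

```python
import cv2

# Hypothetical input: any two consecutive greyscale frames.
frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

y, x, half = 200, 300, 8                                 # pixel of interest and patch half-size
patch = frame1[y - half:y + half + 1, x - half:x + half + 1]

# Find where that pixel's neighbourhood best matches in the next frame.
result = cv2.matchTemplate(frame2, patch, cv2.TM_CCOEFF_NORMED)
_, _, _, (best_x, best_y) = cv2.minMaxLoc(result)

# matchTemplate reports the top-left corner of the best match,
# so shift by `half` to compare centre to centre.
motion_vector = (best_x + half - x, best_y + half - y)
print("pixel moved by", motion_vector)
```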
@misode 4 years ago
They talked about it a little bit at 7:15, but you're right. I wish they had gone into more detail about how the algorithm actually works. Maybe we'll get a follow-up video.
@themrnobody9819 4 years ago
Description: "More on Optic Flow: Coming Soon"
@Real_Tim_S 4 years ago
A brute-force way to do it: 1) Take a tile of pixels (2x2, 4x4, 8x8, or whatever you fancy). 2) In the next frame, try to find that block shifted up, down, left or right, rotated left, rotated right, grown (closer) or shrunk (farther). 3) Summing the scores of those searches gives you a 3D vector. Now, if you have 1920x1080 pixels in a frame, you need to do this for every tile of the previous image (for an 8x8 tile size, that's 32,400 tiles with 8 searches each), and you have to do it at the video stream's frame rate (a common one is 30 frames per second, but it can be much higher in industrial cameras). You can probably see why massively parallel GPUs are ideal for this type of image processing.
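A minimal sketch of that search in Python/NumPy, restricted to pure translation (the rotation and scale searches are left out for brevity); the function name, block size and search radius are just illustrative choices.

```python
import numpy as np

def block_match(prev, curr, block=8, search=7):
    """Brute-force translational block matching (rotation/scale searches omitted).

    Returns an array of (dy, dx) vectors, one per block of the previous frame.
    """
    h, w = prev.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tile = prev[by:by + block, bx:bx + block].astype(int)
            best, best_dy, best_dx = None, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = curr[y:y + block, x:x + block].astype(int)
                    sad = np.abs(tile - cand).sum()      # sum of absolute differences
                    if best is None or sad < best:
                        best, best_dy, best_dx = sad, dy, dx
            vectors[by // block, bx // block] = (best_dy, best_dx)
    return vectors
```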
@benpierre2435 4 years ago
Here is a simplified explanation: it works by downsampling the images to several resolutions (resize each image to various sizes, say 4x4, 16x16, 64x64, 512x512, 1024x1024, etc.), then comparing the corresponding images of frames 1 and 2, starting with the lowest resolution. Check which way each pixel went (left, right, up, down) and store the vectors in a buffer/image (u, v / red & green / direction & magnitude, or some other variant). Move to the next higher resolution of the same pair, refine the motion within each of the previously stored quads/pixels, store it, and iterate up to full resolution. Then move on to frames 2-3, compute and store vectors, then frames 3-4, 4-5, and so on.

Optical flow is used in MPEG compression: a sequence of vectors plus keyframes of colour images. You see it all the time in streaming video, news feeds, scratched DVD/Blu-ray discs, and poor connections over wifi, internet or satellite. The video may freeze, and blocks of pixels will follow along with the motion in the video, breaking up the picture, until the next colour keyframe arrives, the image pops back into a full picture and the video continues. The vectors are sent for every frame; the colour is updated on keyframes when needed.

Of course this is a very simple explanation; there are in fact many more adaptive optimisations in compression: full keyframes, sub-keyframes, adaptation in space and time depending on fast or slow action (camera or subjects) and on fast or slow colour and lighting changes. For example, a distant background doesn't move much unless the camera is panning, or the entire background may move left to right and can be represented by one vector covering huge blocks of the image, etc.
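OpenCV's Farnebäck implementation uses the same coarse-to-fine pyramid idea and exposes it through its parameters; a minimal sketch, with hypothetical frame file names:

```python
import cv2

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# pyr_scale/levels control the coarse-to-fine pyramid described above;
# winsize trades noise robustness against fine detail.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=5, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
dx, dy = flow[..., 0], flow[..., 1]                      # per-pixel horizontal/vertical motion
```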
@MrSigmaSharp 4 years ago
One of my instructors once said that if someone is talking about some idea and lists its flaws, it's a great idea and they understand it very well. I'm looking forward to seeing more of him.
@Willy_Tepes 11 months ago
Everything in life has limitations. Pointing these out is just as important as pointing out the capabilities, to get a full understanding.
@riyabanerjee2656 2 years ago
This guy is the reason why I subscribed to Computerphile :)
@ashwanishahrawat4607 4 years ago
Beautiful topic, thanks for making me more aware of it.
@procactus9109 4 years ago
Speaking of optics and computers, does anyone there at the uni know anything reasonable about optical CPUs?
@cgibbard 4 years ago
Doing essentially the same thing with lightness-independent colour channels as well (perhaps the a and b channels in the Lab colour space) seems like it could be very useful in many circumstances where the lighting varies. The amount of light reflected by a physical object might vary quite often, but the colour of most objects doesn't change as much. Still, you'd want to be able to detect a black ball moving against a white background, so *only* using colour information won't work because you'll miss some motion entirely. Given that you *do* detect motion in the colour channel though, I'd expect to have a higher confidence that something was actually moving as described, so it's kind of interesting to think about how you'd want to combine the results.
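One crude way to sketch that combination idea in OpenCV: compute flow on the lightness channel and on a chroma channel of the Lab image separately, then treat pixels where the two estimates agree as higher confidence. This is only an illustration of the heuristic described above, not an established method; the file names and the agreement threshold are arbitrary.

```python
import cv2
import numpy as np

img1 = cv2.imread("frame1.png")                          # hypothetical BGR frames
img2 = cv2.imread("frame2.png")

lab1 = cv2.cvtColor(img1, cv2.COLOR_BGR2LAB)
lab2 = cv2.cvtColor(img2, cv2.COLOR_BGR2LAB)

def flow(a, b):
    return cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 4, 15, 3, 5, 1.2, 0)

flow_light  = flow(lab1[..., 0], lab2[..., 0])           # lightness channel
flow_colour = flow(lab1[..., 1], lab2[..., 1])           # 'a' chroma channel

# Per-pixel disagreement between the two estimates; confidence is highest
# where the lighting-based and colour-based flows agree.
disagreement = np.linalg.norm(flow_light - flow_colour, axis=-1)
confidence = 1.0 / (1.0 + disagreement)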
@aDifferentJT 4 years ago
That's not true; brighter objects get less saturated.
@TheAudioCGMan 4 years ago
I like this approach of assigning higher confidence to the brightness-independent channels. I see one problem with compressed videos though, as they often compress the colour channels very aggressively. I also know of one attempt to be independent of lighting, if you're interested. YouTube blocks my comment when it contains a link, so search for "An Improved Algorithm for TV-L1 Optical Flow". They first denoise the input image with an elaborate technique and assume the resulting image contains just the big shapes and the lighting. They take the difference from the original to get just the "texture", then run the default TV-L1 optical flow on the texture images.
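A rough sketch of that structure/texture idea, assuming the opencv-contrib-python build (which is where the TV-L1 implementation lives, in cv2.optflow); a Gaussian blur stands in for the ROF denoising used in the paper, so this only approximates the method described.

```python
import cv2
import numpy as np

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)    # hypothetical frames
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

def texture(img, blur=15):
    # Crude structure/texture split: a heavy blur stands in for the paper's
    # denoising step; the residual keeps edges and texture but drops most of
    # the slowly varying lighting.
    img = img.astype(np.float32)
    structure = cv2.GaussianBlur(img, (blur, blur), 0)
    residual = img - structure
    return cv2.normalize(residual, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

tvl1 = cv2.optflow.createOptFlow_DualTVL1()               # needs opencv-contrib-python
flow = tvl1.calc(texture(prev), texture(curr), None)
```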
@benpierre2435 4 years ago
Jonathan Tanner: yes, they do.
@dreammfyre 4 years ago
Next level thumbnail.
@ArumesYT 4 years ago
Just wondering. Modern video compression formats use vectors a lot already. Does it really take that much EXTRA calculation to detect stuff like a shaky image, or can you just use the existing video compression vectors to stabilise the image? If you want to record an image with stabilisation, would it be possible to do a kind of two-pass recording? Push it through the video compression circuitry once to get stabilisation data, then correct the image, and feed the corrected image through again for actual compression and saving?
@benpierre2435 4 years ago
Yes, and they do do that; that is exactly how image stabilisation works. There are also motion sensors in some cameras that correct the shaking, first in hardware by moving the lens elements, and secondly by removing large motions in software. But it can only "remove" so much, as the linear blur of fast panning and tilting reduces the usable image resolution and the image gets soft.
@Originalimoc 4 years ago
Same thought 👻
@agsystems8220 4 years ago
This is one of the reasons the problem is so important. The best compression for an object moving in front of a background would be a moving still image over another still image. The better you solve this problem, the better you can compress video. It doesn't take extra calculation, because solving this is already an important part of how compression works.
@ArumesYT 4 years ago
@agsystems8220 I don't think it's all about video compression. Video compression is a strong economic force, therefore most consumer computers have dedicated video compression hardware, and my question was about using that video compression hardware to calculate vectors for (as yet) commercially less successful applications. There are a lot more fields where vectors are becoming more and more important. Vectors are an important part of general image data and can be used for scene recognition in all kinds of situations. Three well-known examples are augmented reality, improving stills from small/cheap (smartphone) cameras, and self-driving cars. But I think we're going to see a lot more applications in the near future, and it's nice if we can use hardware acceleration even though it was designed for a different application.
@absurdengineering 3 years ago
Modern video compression gives you motion stabilisation at minimal cost, and there's nothing special you need to do other than use a decoder that takes the flow information and uses it to stabilise the output. The encoder/compressor has already done all the hard work of flow estimation, so there's no need for "two-pass" recording or anything. The playback side has the choice of doing stabilisation; I guess it's not commonly done, but it definitely can be. For best results the decoder needs to buffer a few seconds of data so that it can fit a smooth path along the flow and make future-informed decisions about where to break the tracking.
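If you re-estimate the flow yourself rather than pulling the codec's motion vectors out of the bitstream, a translation-only stabiliser can be sketched as below; the file name, smoothing window and Farnebäck parameters are arbitrary choices, and the whole clip is kept in memory for simplicity.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("shaky.mp4")                       # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

frames, path = [prev], [np.zeros(2)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    shift = np.median(flow.reshape(-1, 2), axis=0)        # global (dx, dy) for this frame
    path.append(path[-1] + shift)                         # accumulated camera trajectory
    frames.append(frame)
    prev_gray = gray

path = np.array(path)
kernel = np.ones(15) / 15.0                               # simple moving-average smoothing
smooth = np.stack([np.convolve(path[:, i], kernel, mode="same") for i in (0, 1)], axis=1)

h, w = frames[0].shape[:2]
for frame, p, s in zip(frames, path, smooth):
    dx, dy = s - p                                        # correction towards the smooth path
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    stabilised = cv2.warpAffine(frame, M, (w, h))
    # write or display `stabilised` here
```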
@MePeterNicholls 4 years ago
Can you look at planar tracking next, please?
@subliminalvibes 4 years ago
I wonder if optic flow data could be "passed on" to playback hardware, perhaps to assist with real-time features such as frame interpolation or blur reduction... 🤔
@gtasa619 4 years ago
Yes
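Flow does make naive frame interpolation possible: a crude midpoint frame can be produced by sampling the second frame half a flow vector along the motion. Real interpolators handle occlusions and use flow in both directions; this sketch with hypothetical file names only shows the basic idea.

```python
import cv2
import numpy as np

frame1 = cv2.imread("frame1.png")                         # hypothetical consecutive frames
frame2 = cv2.imread("frame2.png")
g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

flow = cv2.calcOpticalFlowFarneback(g1, g2, None, 0.5, 3, 15, 3, 5, 1.2, 0)

h, w = g1.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
# Crude approximation: sample frame2 half a flow vector further along the motion,
# so moving content lands roughly halfway between its two positions.
map_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
mid_frame = cv2.remap(frame2, map_x, map_y, cv2.INTER_LINEAR)
```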
@kpunkt.klaviermusik 4 years ago
I would expect the real processing to be much simpler, because otherwise it would take weeks to analyse every pixel in every frame. So you just look at whether the whole image is rotated (and by how much), whether the whole image is shifted up/down or left/right, and whether the whole image is zoomed in or out. That's enough work for hi-res images.
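That whole-image model (rotation + shift + zoom) can indeed be fitted from a few hundred tracked corners rather than every pixel; a sketch with OpenCV, where the frame file names and feature counts are just placeholders:

```python
import cv2
import numpy as np

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)     # hypothetical frames
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Track a few hundred corners instead of every pixel.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01, minDistance=10)
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)

good_prev = pts_prev[status.flatten() == 1]
good_curr = pts_curr[status.flatten() == 1]

# Fit one similarity transform: rotation + x/y shift + uniform zoom (4 parameters).
M, inliers = cv2.estimateAffinePartial2D(good_prev, good_curr)
angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
scale = np.hypot(M[0, 0], M[1, 0])
shift = (M[0, 2], M[1, 2])
print(f"rotation {angle:.2f} deg, zoom x{scale:.3f}, shift {shift}")
```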
@Henrix1998 4 years ago
Optic or optical?
@russell2952 4 years ago
Could your cameraman perhaps drink decaf instead of twelve cups of strong coffee before filming?
@zachwolf5122 4 years ago
You did him dirty with the thumbnail lol
@retf054ewte3 5 months ago
What is optical flow good for, in simple language?
@DaniErik 4 years ago
Is this similar to digital image correlation used for strain field measurements in continuum mechanics?
@ativjoshi1049 4 years ago
I was unable to understand 8 out of 15 words that make up your comment, just saying.
@absalomdraconis 4 years ago
Not familiar with your use-case, but supposing that it's at all similar to stress on clear plastics causing differing polarizations, then the answer is basically no.
@ericxu7681 3 years ago
Your video really needs the video stabilisation algorithm that optical flow makes possible!
@Abrifq 4 years ago
Cool subject, even cooler video!
@wktodd 4 years ago
Big version of an optical mouse?
@AngDavies 4 years ago
A laser beaver?
@Ragnarok540 4 years ago
Optical elephant
@biggiebeans187 4 years ago
Yup
@Hodakovi a year ago
What model of watch is he wearing?
@Elesario 4 years ago
And here I thought Optic flow was the science of measuring shots of whisky in a bar.
@scowell 4 years ago
Same kind of thing can happen in multi-touch processing, if you want to get crazy with it.
@circuit10 4 years ago
It's similar because you don't know which finger/pixel came from where
@HomicidalPuppy 4 years ago
I am a simple man: I see a Computerphile video, I click.
@Abrifq 4 years ago
We are not unicorn, if(video.publisher === 'Computerphile'){video.click(video.watch); video.click(video.upvote);}
@Gooberpatrol66 4 years ago
What are some libraries that can do this?
@mylenet5190 4 years ago
OpenCV ;)
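For example, a dense Farnebäck flow plus the usual hue-wheel visualisation (hue = direction, brightness = speed) takes only a few lines in OpenCV; the frame file names here are hypothetical.

```python
import cv2
import numpy as np

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)     # hypothetical frames
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# Standard visualisation: hue encodes direction, brightness encodes speed.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv = np.zeros((*prev.shape, 3), dtype=np.uint8)
hsv[..., 0] = ang * 180 / np.pi / 2                        # OpenCV hue range is 0-179
hsv[..., 1] = 255
hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("flow.png", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
```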
@blasttrash 4 years ago
thumbnail looks like the guy is about to sell weed :P
@perschistence2651 4 years ago
HAHAHA
@spider853 4 years ago
WTF, I was googling optical flow implementations these last few days, and here is a video about it O_o
@Abrifq 4 years ago
Also, there is some audio delay after ~2:26
@ironside915 4 years ago
That thumbnail was really unnecessary.
@bra1nsen 2 years ago
Source code?
@picklerick814 4 years ago
H.265 is awesome. I encoded a movie at 1280x720 in 900 kbit/s. It sometimes looks a little compressed, but otherwise it's totally fine. That is so cool!
@Dazzer1234567 4 years ago
Eliza Doolittle at 6:40
@baji1443 a year ago
Good video, but just because it's about optical flow doesn't mean you have to use shaky cam.
@bhuvaneshs.k638 4 years ago
Hmmm... Interesting
@recklessroges 4 years ago
cliffhanger...
@Tibug 4 years ago
Why don't you use a tripod for filming or at least a deshake filter (like ffmpeg vidstabtransform) on post-processing? To me this immense shaking is distracting from the actual (cool) content.
@killedbyLife 4 years ago
This was an extremely unsatisfying clip, which I feel was cut way too early. Was the plan a two-clip theme, but in reality the material was way too thin for two and you went with two anyway? If so, that's quite unfair to the guy doing the explaining.
@robertszuba3382 8 months ago
Ad blockers violate KZfaq's terms of use → ads violate personal freedom → protest!!!
@RoGeorgeRoGeorge 2 years ago
He keeps saying "optic flow", isn't that called "optical flow"?
@MonkeyspankO 4 years ago
Thought this would be about optical computing (sad face)
@markkeilys 4 years ago
Recently went on a bit of a dive through the Wikipedia pages for optical transistors.. it was interesting, would recommend.
@MrIzzo006 4 years ago
I call 2nd since everyone came first 🤪
@StreuB1 4 years ago
WHERE MY CALC3 STUDENTS AT?!?!?
@pdrg 4 years ago
So "pixelation" anonymising/censorship can be undone as you know the boundaries between areas with respect to motion. Japanese teenagers are excited.
@studentcommenter5858 4 years ago
FiRSt
@antoniopafundi3455 3 years ago
Please stop using those markers on paper, it's so tough to keep watching. Those are made for a whiteboard; use a normal pen.
@o.429 4 years ago
Please try not to capture that marker's sound, or at least do something like filtering out the high frequencies. That sound makes me so sick that I couldn't watch the entire video.
@Nitrxgen 4 years ago
it's ok, you didn't miss much this time
@DamonWakefield 4 years ago
First comment!
@markoloponen2861 4 years ago
First!
@Bowhdiddley 4 years ago
first
Related videos:
Optic Flow Solutions - Computerphile (12:54)
Optical Flow Constraint Equation | Optical Flow - First Principles of Computer Vision (15:17)
Motion Field and Optical Flow | Optical Flow - First Principles of Computer Vision (9:39)