Deep Video Portraits - SIGGRAPH 2018

508,674 views

Christian Theobalt

6 years ago

H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Nießner, P. Pérez, C. Richardt, M. Zollhöfer, C. Theobalt, Deep Video Portraits, ACM Transactions on Graphics (SIGGRAPH 2018)
We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.
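The recombination the abstract describes (keep the target's identity and scene lighting, borrow the source's head pose, expression, and gaze) can be sketched as a simple swap over the face-model parameter vector. This is a minimal illustrative sketch; the field names are assumptions for illustration, not the paper's actual parameterization:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class FaceParams:
    """Illustrative per-frame parameters of a parametric face model."""
    identity: tuple      # target-specific shape/reflectance (kept)
    illumination: tuple  # target scene lighting (kept)
    rotation: tuple      # head rotation (transferred from source)
    translation: tuple   # 3D head position (transferred)
    expression: tuple    # expression coefficients (transferred)
    gaze: tuple          # eye gaze / blink state (transferred)

def reenact(source: FaceParams, target: FaceParams) -> FaceParams:
    """Build the synthetic conditioning input: the target's identity and
    lighting, driven by the source actor's motion parameters."""
    return replace(
        target,
        rotation=source.rotation,
        translation=source.translation,
        expression=source.expression,
        gaze=source.gaze,
    )

src = FaceParams((1,), (2,), (10,), (11,), (12,), (13,))
tgt = FaceParams((5,), (6,), (0,), (0,), (0,), (0,))
driven = reenact(src, tgt)
assert driven.identity == tgt.identity   # target stays the target
assert driven.expression == src.expression  # but moves like the source
```

The resulting parameter set would then be rendered synthetically and fed to the trained network, which translates it into a photo-realistic frame of the target.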

Comments: 235
@suyac1774 5 years ago
Who's here from muh boi Chills?
@buns19 5 years ago
YEAHHHHH 👐
@Yawopy 5 years ago
mee
@antwonanderson2562 5 years ago
Me too
@peachpeach821 5 years ago
Me.
@hothickoryhell09 4 years ago
Me
@Peacepov 6 years ago
Incredible! Now we need a system that can differentiate real and fake videos.
@aleksandersuur9475 6 years ago
If you have a system that can find something fake about an image/video, then you can highlight and modify it until it no longer triggers "fake", and you simply get a better-quality fake that becomes indistinguishable from the real thing.
@Peacepov 6 years ago
aleksander suur, unfortunately your idea makes a lot of sense, but I'm hoping there's got to be a way around it.
@MucciciBandz 5 years ago
If you read their research paper, they do adversarial training. Meaning their method contains the "system that can differentiate the real and fake videos". So as the network gets more photo realistic, the detector gets smarter as well. :)
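The adversarial setup this comment refers to can be illustrated with a toy example: a one-parameter "generator" learns to shift noise onto a "real" data distribution while a logistic-regression "discriminator" learns to tell the two apart. This is a minimal numpy sketch of the idea only, not the paper's space-time network; all names and constants here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

real_mean = 3.0       # "real" samples cluster here
g_bias = 0.0          # generator parameter: fake = noise + g_bias
d_w, d_b = 0.1, 0.0   # discriminator parameters (logistic regression)
lr = 0.05

for _ in range(2000):
    real = rng.normal(real_mean, 0.1, 64)
    fake = rng.normal(0.0, 0.1, 64) + g_bias

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label  # derivative of binary cross-entropy w.r.t. logit
        d_w -= lr * float(np.mean(grad * x))
        d_b -= lr * float(np.mean(grad))

    # Generator step: move g_bias so the discriminator scores fakes as real.
    p = sigmoid(d_w * fake + d_b)
    g_bias -= lr * float(np.mean((p - 1.0) * d_w))

# The generator's output distribution has moved toward the real one.
assert g_bias > 1.0
```

As the comment says, the two parts improve together: the better the discriminator gets at spotting fakes, the stronger the training signal pushing the generator toward realistic output.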
@snowflake6010 5 years ago
What do you do when you identify the fake? You remove it from social media, prevent its spread, and notify users they've been had. And that means... we need the ability to instantly block, across all social media, any particular video by order of... ? The government? Once we have the ability to 'Deep Remove' a video... any unflattering thing might suddenly become unavailable if it's politically inconvenient. The sh*t is going to get real when things get this fake.
@higy33 5 years ago
We are building it www.deeptracelabs.com/
@BlueyMcPhluey 6 years ago
Who's ready for our legal systems to become completely paralysed by this technology?
@1ucasvb 6 years ago
josh mcgee We're totally screwed.
@jonwise3419 6 years ago
If you mean in regards to evidence, then why would it? It wasn't paralyzed by photos. It won't be by videos or audio. Even if a reenactment leaves no fingerprint, it only means that we'll have to digitally sign information like videos. Your security cam will simply digitally sign every clip it produces, with nearly zero CPU or memory overhead. Same with phones and any other recording device. If the video is fake and you provide it as evidence from your phone claiming that it's real, then you will be responsible for the false testimony.

But in regards to synthesizing funny or pornographic videos of anybody without their consent: it will continue to cause a lot of drama, until eventually we give up and admit that anybody's face can be digitized as a digital copy and used in any way, and nobody can stop it, so the idea that people own rights to their looks is as idiotic as it is unenforceable.

Btw, I like your face, I think I'll take it and lick it in VR... no homo. (In 30-50 years that's going to be: "I'm going to put a couple of bots to crawl your social media and synthesize a predictive model of your responses, where AI will construct the best model of your personality and then bring it into consciousness so that I can do whatever I want to it in VR. Or I'll upload that model from my own mind. After all, my ability to detect a fake depends only on predictions inside my mind, so if an AI has access to my predictions about you, it doesn't need to fake the real you; it just needs to fake my predictions about you to make any deviation from the original undetectable to me.")
@amurgcodru 6 years ago
Ok, mister genius who knows cryptography in and out. How do you prove a video is fake/real based on a digital signature? This would mean that every person who has a mobile phone WOULD have to generate a public/private key pair AND would have to keep it secret AND that the state and/or another entity would have access to those keys to verify that it was you. Looking at a lot of overhead here, and if people are as bad with PKI as they are with passwords, you're looking at a very big problem, since this is the main issue: being able to prove video/audio fake or real. What happens with lost private keys, who revokes them? How do you prove your innocence when your PKI and/or identity was stolen? Your assumption of zero impact on CPU or memory is only based on data for text-based digital HASHING, not cryptographic signatures. A video should be signed on each frame and/or per total. If you then upload a video to, let's say, youtube, then a whole new process occurs where it's converted to another codec. This means that even a per-file hash/signature would be invalid. A per-frame one would be almost impossible if you convert/re-encode. There are other problems as well. Don't fool yourself, this is a very big issue, and no matter which new technical security measures are implemented in the future, it can and will cause MANY problems when it gets into the hands of the minority of very bad people.
@jonwise3419 6 years ago
Elixir Alchemist Blender Well, we can examine two different scenarios here: one where people present video evidence in court, and another where they present it somewhere on the Internet.

The court-evidence scenario just requires each device having private keys stored on it. A person testifying will simply claim that this is their phone and they filmed it. The signature on the video will be proof that it was indeed filmed on that hardware, and they claim that it was they who filmed it. It's also likely that there will be several sources of the same thing happening in reality. If a street camera signs a video message, and 10 people sign that they saw the same thing, then it likely actually happened. In a more distant future, it's not improbable that we'll have small cameras on us running constantly anyway (for example, for AI personal assistants to see the world and help us), so we might even be talking about a future where there are always many different recordings of the same event from different sources.

As for the Internet, your account already has an identity associated with it, so the site can sign material for you. In other words, if you post on a social media account and claim that it's you in a photo somewhere, the social media site signs the photo or video for you (signs that it's you who posted it). If several people who are in the photo post it, in the end there can be several signatures from different people who posted the same photo. Signatures don't have to be included in the original formats, of course. A photo or video can have a separate signature stored elsewhere (like a blockchain, a DB provided by that social media site, etc.).

> Your assumption of zero impact on CPU or memory is only based on data for text-based digital HASHING, not cryptographic signatures.

Because you would not *sign* every frame/segment. If a separate proof for each part of the video is not needed, then the whole video can just be signed. Otherwise, you can use Merkle proofs to allow cutting a part of the video out while still having a signature associated with that part. That can actually be accomplished with only one signature as well: you would *hash* every segment and sign only the Merkle root hash. Even then, you wouldn't hash every frame, because people don't usually stream or cut a single frame of a video (although you still could). The cost of storing a Merkle proof for a segment would be `proof size = log2(n) * hashSize`, where `n` is the number of segments. So, if you hash each 1 MB, then even for a 1 GB video file each proof would be `512 bits * log2(1000 segments) = 512 * 10 = 5120 bits (640 bytes) per 1 MB segment`, well under 0.1% size overhead. Checking a proof means running a hash function `log2(1000) = 10` times, which is ridiculously cheap. For a smaller number of segments, the overhead is even less. Also, you would not need to design a special format for this; you could store the Merkle proofs and signature somewhere else. So, if somebody streams the original video, they can open a second stream carrying a proof of originality for every segment: download the first 1 MB and the first Merkle proof, or start both streams somewhere in the middle.

Re-encoding is not a big problem. Firstly, for most video it won't matter, because nobody cares whether a video of a cat farting is real or not. If you're giving a video from your dashcam as evidence, you will most likely give it directly to the court rather than upload it to YouTube. But for videos where it matters and that have to circulate social media, a reputation can be attached. You can just download the re-encoded video and sign it again, associating your reputation with it again. Automatically checking whether the original matches the encoded version would be possible, so you don't need to watch it again.

Another scenario is services like YouTube signing the re-encoded video, claiming that nothing content-wise changed from the original and including the original signature of the video. If the video is fake, then either the original is fake or the service falsely claimed that it simply re-encoded the video. Both claims are provable. If you trust that the service simply re-encodes videos, then you can treat the copy as the same (the service generates new Merkle proofs for each fragment of the video, but you can still trust them; there is no incentive for the service to cheat, because cheating is easily provable by an author and one such case may destroy its reputation). Also, since cheating is provable, it becomes easy to design cryptoeconomic protocols where re-encoding happens and nothing else. So you could have a decentralized streaming service that re-encodes while the original reputation stays valid. But that's overkill.

For the more distant future, if you really want an overkill solution for making information easy to trust, you can actually make lying very hard. You simply use cryptoeconomics akin to prediction markets, but instead of predictions about future outcomes, you have people's hardware / personal-assistant AIs / street cameras / dashcams betting on what really happened in reality. The more important an event is, the more eyes are on it, and the higher the probability that people are telling the truth (they put up collateral, like promising $1000 that they are telling the truth; nobody is incentivized to lie, because they would simply lose money, and if they tell the truth they are rewarded a small amount for giving their opinion to the network). The participation cost can be reduced as well. For example, if a camera was turned on May 5 in some location where an event happened, and there is a prediction market for that event on May 5 in that location, that camera might automatically participate and give testimony to get a small fee, and it can put up a large collateral to show that it has no incentive to lie.
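The per-segment hashing scheme sketched in the comment above can be made concrete with a small Merkle-tree example (Python hashlib; the "segments" here are toy stand-ins for 1 MB video chunks). Only the root hash would be signed; each segment then carries a proof of log2(n) sibling hashes:

```python
import hashlib
from math import log2

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold leaf hashes pairwise up to a single root hash."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate an odd tail
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    proof, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = i + 1 if i % 2 == 0 else i - 1
        proof.append((i % 2 == 0, level[sib]))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf, proof, root):
    acc = leaf
    for leaf_is_left, sib in proof:
        acc = h(acc + sib) if leaf_is_left else h(sib + acc)
    return acc == root

# 1024 segments standing in for 1 MB chunks of a ~1 GB video.
segments = [f"segment-{i}".encode() for i in range(1024)]
leaves = [h(s) for s in segments]
root = merkle_root(leaves)        # this single root hash is what gets signed
proof = merkle_proof(leaves, 123)
assert verify(leaves[123], proof, root)
assert len(proof) == int(log2(1024))  # 10 sibling hashes per segment proof
```

With 32-byte SHA-256 hashes, one proof here is 10 × 32 = 320 bytes per segment, so anyone holding a single segment plus its proof can check it against the one signed root without downloading the rest of the file.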
@jonwise3419 6 years ago
Elixir Alchemist Blender Forgot to address your point about private key storage. People are and will be using private keys on their phones for things much more security-sensitive than simply claiming "hey, I filmed this video and I claim it's real". For example, running blockchain apps on their phones (the Status.im app for the Ethereum network). Also, some countries already give every citizen a private key inside their documents (e.g., Estonia, where people can do nearly everything, including voting via the Internet; they just plug in a card reader and have an ID card that signs things on its chip). If you lose your phone, simply sign that the signature on your phone is no longer valid, using whatever reputation-associated key you have (either your ID or a key associated with your social media).
@federrr7 6 years ago
I had a bad feeling about this.
@joshuasamuel2042 5 years ago
Just imagine if this technology fell into the wrong hands
@RealGubby 5 years ago
Joshua Samuel, it already has.
@11crysissnake19 5 years ago
It was made by the wrong hands
@davehug 5 years ago
In reality, any 3D modeler can do this.
@trnobles 6 years ago
I never thought about the possibility of changing the facial animation of dubbed movies; that's a great idea and would make watching dubbed movies a lot less irritating.
@ZoidbergForPresident 6 years ago
I disagree, it's even worse I'd say. But whatever, I always prefer original voicing anyway and watch it that way. :P
@TeisJM 6 years ago
There's so much body language that won't match the face.
@robindegen2931 6 years ago
Why watch dubbed movies at all, though? I never understood them. What's wrong with subtitles?
@shortbuspimp 6 years ago
Robin Degen I don't like subtitles. I'm watching a movie, arguably mostly a visual medium, that I constantly am looking away from and missing out on. I'm not opposed to reading, I've read hundreds of books over my life. I just don't like having to look away from the action that I'm supposed to be seeing. To each their own.
@backyardcook42 6 years ago
Or just stop dubbing movies. Watching English movies, or any movie, in their native tongue is a great way to learn a new language. The "I don't like subtitles" argument doesn't hold up; you get used to it really quickly, and after a while you won't need the subtitles. This tech shouldn't exist; it's way too easy to abuse.
@blueberry1c2 6 years ago
This technology is amazing and I seriously applaud your efforts in its creation. However, I am tinged with fear about how it will be abused by more extreme media sources and in the justice system.
@KatieGray1 3 years ago
It's already happening. Unfortunately, the people creating these things do not often think through the implications and who it will ultimately impact. So far, it's impacting a lot of women so I don't know if there were no women involved in developing this technology or they also did not think about how it could be used. I suspect there's just not enough women in the room when these things are being created to say hey, just because we can do this, we need to ask if we should. www.abc.net.au/news/2019-08-30/deepfake-revenge-porn-noelle-martin-story-of-image-based-abuse/11437774
@MattSayYay 2 years ago
Apparently Chills can't unsee this.
@FTLNewsFeed 5 years ago
I'm psyched to see this work make its way into dubbing. Sometimes you don't want to read subtitles and you'd rather have the dubbed voices coming out of the actors' faces.
@AlexanderSama 6 years ago
Just thinking about how destructive a few seconds of video of a president talking could be on a social network gives me goosebumps. Our current society *is not* prepared to accept that no digital media is 100% reliable.
@tubelitrax 6 years ago
Astonishing! I'm speechless...
@Unreissued 6 years ago
The "nearest neighbour retrieval" thing was especially clever. Fantastic stuff.
@dan_loeb 5 years ago
All of the generated content hits me hard in the uncanny valley, at least when they are in motion.
@grzesiekmazur7711 6 years ago
The future is now, old man.
@jamesbarnes1496 6 years ago
Just simply amazing
@piotrkakol1992 6 years ago
It's awesome that they made this technology public. It makes a huge difference if everyone can use it rather than it only being usable by governments. People who think they made a wrong decision by developing this technology don't understand that it would have been made sooner or later, and if the first people to develop it had malicious intents, it could have catastrophic results. By realizing this technology is here, we can improve our future decisions.
@TheTrumanZoo 6 years ago
You could use a second pass, or a third pass... the output fed in as a new input signal, creating a second, even harder-to-spot output. If the initial output is used for another pass with fewer differences, it could reduce a lot of artifacts.
@sychedelix 6 years ago
This is awesome!
@BakuTheLaw 6 years ago
It's time to get framed. Thank you!
@alexandrepv 6 years ago
Please tell me you guys have the source code on GitHub. And where can I get the PDF of your paper? Please! :D
@mattsponholz8350 6 years ago
Seriously impressive technology. You should all be very proud of yourselves! As with all things powerful, this has the potential for bad, but also the potential for good! A lot of good. Well done :)
@christangey 5 years ago
Name ONE good use for it.
@SRAKA-9000 6 years ago
We'll see a lot of porn made with this technology
@TheChristmasCreeper 6 years ago
Profile pic checks out. EleGiggle
@descai10 6 years ago
Already happening. "Deepfakes"
@immersiveparadox 5 years ago
Already happening, dude. And Pornhub banned those deepfakes.
@maximuswillpower 5 years ago
First we need full-body reconstruction.
@thingsprings5493 5 years ago
Can't wait
@RobinCawthorne 6 years ago
This is truly revolutionary. Does this conversion happen in real time?
@iLikeTheUDK 6 years ago
I was about to ask where I could get a download of the code or an executable, but then I realised what that could lead to...
@kevincozens6837 5 years ago
This is amazing. It did miss one quick look-to-the-right eye movement at 1:45.
@shango12b 6 years ago
Just because you can, doesn't mean you should.
@MattGDreal 5 years ago
This is awesome technology; it's amazing. I've never seen anything like it, except in one app before, "facerig", but this is truly magnificent.
@donnell760 6 years ago
Amazing!
@bakhtikian 6 years ago
How is the book cover in the background recovered at 2:03?
@nilspin 6 years ago
Niessner lab rocks! I hope to do a PhD there someday :)
@JorgeGamaliel 6 years ago
Awesome!! There are enormous implications for this super-technology, for example security, intelligence, fake news... Generative adversarial networks: a game of imitation and perfect indistinguishability.
@thor2070 6 years ago
Framing people!
@JohannSuarez 5 years ago
Interesting, but also incredibly terrifying.
@leetae-kyoung1084 6 years ago
Awesome!!!! So cool!!! Wow!!!
@theoriginalgoogle3615 4 years ago
If a person had a notable feature on their face [scar, mole, birthmark, etc.], could that possibly be impossible to mask? Can a result show features from both people in a video? Cheers
@micocoleman1619 5 years ago
Seems pretty cool.
@darkknight4353 5 years ago
Where can I get the software? Is it public yet?
@yuzzo92 6 years ago
This is an amazing technology, but I can't help feeling that it's going to be used maliciously more often than not.
@smoquart 6 years ago
Will there be any code published?
@ramanadk 4 years ago
Where do we get the code for the above video?
@deaultusername 6 years ago
Definitely getting there; more than good enough as it is to mess with YouTubers.
@SinuousGrace 5 years ago
If you can destroy somebody with, essentially, one word, how long before AI is used to make somebody say something that they never actually said in order to destroy that person?
@loudvoice5903 5 years ago
IT IS ALREADY IN PROGRESS, A LONG LONG TIME!
@167195807 5 years ago
What is the program?
@lucasvca 6 years ago
GOD IS GREATER
@babyjesuslovesme1219 5 years ago
Scary but genius
@ElmarVeerman 6 years ago
We need a new class of videos: certified unedited video. Can tech companies provide this feature?
@jeffhalmos7981 6 years ago
The nuclear bomb of software: Great technology; never want to see it used.
@iamnotanuggetblackhart5103 5 years ago
EXCEPT in movies and TV shows... I would love to see it used that way.
@ey5644 5 years ago
Where can I buy this?
@simoncarlile5190 5 years ago
Thinking about this makes me think about how time travel is described by the makers of the movie Primer: It's too important to use just to make money, but it's too dangerous to be used for anything else.
6 years ago
I think you should also work on a tool which can recognize whether a video was created artificially or not. Otherwise I am quite nervous about future (fake) news manipulation.
@doctorscarf8958 5 years ago
With this I can become Masahiro Sakurai!
@azra31 6 years ago
Why is this a good thing?
@fccheung1798 6 years ago
This can wage wars...
@SamJohnsonking 6 years ago
Where is the GitHub code?
@leahnicole4443 5 years ago
Scary AF.
@Yawopy 5 years ago
Who's here bc Chills sent ya? Me!
@sebhch244 5 years ago
The dark side of 3D, VFX, motion graphics.
@marcusk.6223 6 years ago
Please sell this technology to video game developers! This would make great games!
@chrisbraddock9167 3 years ago
What good could this bring to society that would outweigh the obvious evil?
@Madison__ 6 years ago
Imagine being an artist and using this tech to figure out head angles by using a real model
@renookami4651 6 years ago
You'll also replicate the errors of the computer-generated images, just like someone learning from a 3D model or anything else. But at least this base looks accurate enough; as long as you stay close to the original pose, you would probably get good results. I don't know about trying a full profile, making the model look at something behind their shoulder, and other big changes. I guess the further you modify, the more errors appear.
@longliverocknroll5 6 years ago
Ren Ookami Still, look at the difference in technology between the three different studies over that small time-frame. It conceivably won't be long before they work out the individual uncanny-valley-esque aesthetics.
@SuperGranock 4 years ago
My question is: who can't tell the difference? It's in the eyes and expressions.
@johnsherfey3675 6 years ago
Time to make the ultimate YTP?
@BorisMitrovicG 6 years ago
Can't wait for the paper to be out!
@pickcomb332 6 years ago
Looks like Cellulose is gonna make a comeback
@fleetwoodray 6 years ago
Kind of reminds me of the movie Simone. No one is safe from exploitation now.
@joe-rivera 6 years ago
Terrible things could come of this line of work. I hope that (unlike many in the technology field) you are considering consequences and building in fail-safes for detecting fakes. Advancement of technology can’t be the only goal.
@Yui714 6 years ago
It's all about advancing. Our species never plans for anything; we just adapt to the changes we make. Even something as huge as a nation isn't planned but reactionary, making up how it operates as we go along. We're not planners. Climate change is our weakness because it requires planning, so what we're going to do as a species is wait it out, hope to fix it through technological advancements, and if that doesn't happen come up with a plan B like sending people to another planet. Point being, we don't plan even if we know our species will die if we don't. We're not planners, and this tech is cool.
@STie95 6 years ago
The Dead Past by Asimov comes to mind.
@aleksandersuur9475 6 years ago
Doesn't work: if they can make a system with failsafes included, then someone else can make the same system minus the failsafes. In fact, people are doing it all over the place. Check out "deepfakes"; that's how this type of AI work really got started. Mostly it's used for patching celebrity faces into porn videos.
@jettthewolf887 5 years ago
Not going to lie, but this scares the shit out of me.
@simoncarlile5190 6 years ago
To quote Patton Oswalt: "Science, all about coulda, not shoulda."
@wowepic2256 4 years ago
GitHub?
@nightshade2541 6 years ago
Oh well, thanks for all the fish.
@StevenFox80 6 years ago
This is spooky.... o.O
@yumazster 5 years ago
This is impressive technology. The shitstorm it is going to cause will be equally impressive. 1984 full blast...
@MrCalhoun556 6 years ago
Welcome to the Death of Reality.
@lilyzwennis1195 6 years ago
This is revolutionary and will make for epic, realistic gaming. But no, someone should burn it. Burn it with fire!
@c1vlad 6 years ago
Freaking awesome technology... but I have a bad feeling about it.
@shawnwooster7190 6 years ago
More dangerous than nukes. Great. Welcome to the New Age of Hyper-Anxiety.
@mstyle2006 6 years ago
Imagine how our next generations will be mass-controlled by this technology.
@snowflake6010 5 years ago
lol. Yeah. Them. Coming soon to an election near you sir!
@UrzaMaior 6 years ago
Aaaaand we're doomed.
@TheUmbrella1976 6 years ago
And outside, people protest against genome manipulation in crops because it's 'dangerous'.
@goliathfox 5 years ago
Fortnite players will love this!
@MagicBoterham 6 years ago
Garrido et al. getting owned.
@Namelocms 6 years ago
What is this used for?
@jameslucas5590 6 years ago
You could use it for evil, or you could use it to dub foreign movies into a different language and make it seamless.
@murraymacdonald4959 6 years ago
James Dean, Frank Sinatra, Elvis, Marilyn, Princess Leia...
@longliverocknroll5 6 years ago
James Lucas, that's such a ridiculously small scope of what it could be used for in terms of "not evil" applications lol.
@Santins12 5 years ago
I can only imagine bad applications for this technology...
@snowflake6010 5 years ago
We've built an engine that can swing any election. Free speech with high viewer counts will have to be constrained. I imagine this is what is really behind the EU wanting to establish their copyright-check system [so any uploaded video can be instantly cancelled and deepfakes can be taken down quickly]. I'd imagine all U.S. social media companies are making sure the capability to take something down quickly exists. We're going to have to encode video in a way that leaves an edit trail. Blockchain will somehow help there. I think.
@DouglasDuhaime 6 years ago
Code or it didn't happen
@PrakharShukla 6 years ago
Reference paper if anyone needs it: arxiv.org/pdf/1805.11714.pdf
@winsomehax 6 years ago
I've seen some crude versions of this type of thing, and I suppose I knew better ones were coming... but Jesus... it suddenly hits you what's coming down the pipe at us. Prepare yourself.
@Zeeeeeek 6 years ago
The FBI wants to know your location.
@whilebeingjezebel 6 years ago
#lazyeye
@mari_hase 6 years ago
While impressive, you can definitely distinguish the real actor from the fake.
@kawabungadad8945 6 years ago
This is going to lead to the start of WWIII.
@Sychonut 6 years ago
4:23 Stroke Simulator
@808GT 6 years ago
We are fucked.
@hausmaus5698 6 years ago
And bye, voice actors.
@sn-zd8ct 6 years ago
Damn this is scary
@alejandrodaguy5732 5 years ago
Chris Hansen is somewhere plotting.
@TheAleStuffs 6 years ago
*How fucking scary...*
@deepfakescoverychannel6710 3 years ago
That is a fake paper without the code.
@dangraphic 6 years ago
I'm sorry, but this is fucking scary.
@jamespatches4553 6 years ago
I mean, that's cool, but how exactly does this technology help society?
@FrankAtanassow 6 years ago
For one thing, this sort of research will inevitably be done in the private and government sectors for nefarious purposes. By also doing it in the public sector, we can see what sort of image/video manipulation is possible or plausible and become more skeptical and critical of what others might present as evidence in bad faith. In short, we can see how others might try to trick us. If this sort of research isn't done in a transparent manner, then we become more gullible as covert technology subverts our so-called common sense.
@jonwise3419 6 years ago
Well, at the very least, like any other CG innovation, it advances entertainment. Imagine, just as anybody can write a book, anybody in the future being able to create a movie thanks to advancements in CG tools.
@leecaste 6 years ago
SIGGRAPH papers usually aim to improve VFX, not society.
@FrancoisZard 6 years ago
Soon enough to replace human actors, who are grossly overpaid, given god-like powers and spoiled like brats. Imagine never having to hear about the Kardashians or having to deal with Justin Bieber and his shit. That's a huge service to society IMHO.
@starrychloe 6 years ago
Cheaper movie tickets. You can fire all the overpaid actors and just reuse John Wayne and Marlon Brando and Marilyn Monroe.
@brookshunt928 6 years ago
You will no longer be able to know what is real and what is fake.