Face2Face is an approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. This video and the accompanying article present research aimed at animating the facial expressions of the target video with a source actor and re-rendering the manipulated output video in a photo-realistic fashion. The result is a convincing re-render of the synthesized target face on top of the corresponding video stream that seamlessly blends with the real-world illumination. In this video, Justus Thies discusses "Face2Face: Real-Time Face Capture and Reenactment of RGB Videos," a Research Highlights article in the January 2019 Communications of the ACM.
Read the full article here: cacm.acm.org/magazines/2019/1/233531-face2face/fulltext