Automated Facial Rig Registration for Motion Capture

Bachelor's Thesis

January 31, 2022


To create believable 3D animated faces, actors’ facial expressions are retargeted from monocular RGB videos onto 3D character rigs. However, this workflow requires large amounts of tedious and time-consuming manual labor, owing to viewers’ psychological sensitivity to faces and the complex structure of facial rigs. Consequently, the goal of this work was to automate the facial motion capture pipeline as much as possible. This thesis proposes a fully automated solution for registering virtually any rig, together with an unsupervised learning-based algorithm capable of posing the registered rig from a single frame of a video performance synthesized by a deepfake algorithm. This makes the pipeline independent of the actor’s external characteristics, lighting, and background. A user study in the form of an expert interview was conducted to evaluate the usability and quality of the prototype, and showed that the algorithm saves 56% of the expert’s time on average. The prototype’s output poses were also preferred in 60% of the 30 cases when compared to a traditional point-tracking solution. The study further suggests that the algorithm can supersede other monocular point-tracking methods when minimal manual labor is required, and that it is easier to use than current solutions. To the best of our knowledge, the algorithm proposed in this work is the first unsupervised image-based learning algorithm for retargeting facial animations to a generic 3D character from a single frame of a processed RGB video, without relying on any actor-specific initialization or training.
