Information Theory Seminar


TITLE: Analysis, Synthesis and Retargeting of Facial Expressions

SPEAKER: Erika Chuang [Stanford University]

DATE: 2:00 - 3:00 P.M., Thursday, September 4, 2003

LOCATION: Sigma, 1L (PA)

HOST: Vinay Deolalikar

ABSTRACT:

Computer animated characters have recently gained popularity in many applications, including web pages, computer games, movies, and human-computer interface designs. To be lively and convincing, these animated characters require sophisticated facial expressions and motions. Traditionally, these animations are produced entirely by skilled artists. Although the quality of animation produced this way remains the best, the process is slow and costly. Motion capture of actors' performances is one technique that attempts to speed up this process. One problem with this technique is that the captured motion data is not easily editable. In recent years, statistical techniques have been used to address this problem by learning the mapping between audio speech and facial motion. New facial motion can then be synthesized for novel audio by reusing the motion capture data. However, since facial expressions are not modeled in these approaches, the resulting facial animation is realistic yet expressionless.

This work takes an expressionless talking face and creates an expressive facial animation. The process consists of three parts: expression synthesis, blendshape retargeting, and head motion synthesis. Expression synthesis uses a factorization model to describe the interaction between the facial expression and the speech content underlying each facial appearance. A new facial expression can be applied to novel input video while retaining the same speech content. Blendshape retargeting maps facial expressions onto a 3D face model using the framework of blendshape interpolation. Three methods of sampling the key shapes, or prototype shapes, from data are evaluated, and the generality of blendshape retargeting is demonstrated in three different domains. Head motion synthesis uses audio pitch contours to derive new head motion. The global and local structure of the pitch statistics and the coherency of head motion are used to determine an optimal motion trajectory. Finally, the three components are combined into a prototype system and demonstrated through an example.
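
To make the middle step concrete, here is a minimal sketch of blendshape interpolation and retargeting in Python/NumPy, written under assumptions the abstract does not spell out: the function names, array shapes, least-squares weight fit, and the bilinear reading of the factorization model are all illustrative, not the speaker's actual implementation.

    import numpy as np

    # A face pose F is a weighted combination of k key (prototype) shapes B_i:
    #     F = sum_i w_i * B_i
    def blend(key_shapes, weights):
        # key_shapes: (k, n, 3) array of k key shapes over n 3D vertices
        # weights:    (k,) blending weights
        return np.tensordot(weights, key_shapes, axes=1)  # -> (n, 3)

    def retarget(source_keys, target_keys, source_pose):
        # Fit the weights that best reproduce the tracked source pose in the
        # source key-shape basis, then reuse them on the target character.
        k = source_keys.shape[0]
        A = source_keys.reshape(k, -1).T                  # columns = key shapes
        b = source_pose.reshape(-1)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return blend(target_keys, w)

    # One plausible reading of the "factorization model" above: a bilinear
    # (expression x content) model, y = sum_ij W[:, i, j] * a[i] * b[j],
    # where a encodes the expression and b encodes the speech content.
    def synthesize(W, a_expr, b_content):
        # W: (d, I, J) interaction tensor; a_expr: (I,); b_content: (J,)
        return np.einsum('dij,i,j->d', W, a_expr, b_content)

In this reading, editing the expression amounts to swapping a_expr while keeping b_content fixed, which mirrors how the abstract describes applying a new expression to input video without changing the speech content.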
