The lack of large-scale, demographically diverse face images with precise Action Unit (AU) occurrence and intensity annotations has long been recognized as a fundamental bottleneck in developing generalizable facial AU recognition systems. In this paper, we propose MAUGen, a diffusion-based multi-modal framework that jointly generates a large collection of photorealistic facial expressions and anatomically consistent AU labels, covering both occurrence and intensity, conditioned on a single descriptive text prompt. MAUGen comprises two key modules: (1) a Multi-modal Representation Learning (MRL) module that captures the relationships among the paired facial textual description, facial identity, facial expression image, and AU activations within a unified latent space; and (2) a Diffusion-based Image-label Generator (DIG) that decodes the resulting joint representation into aligned facial image-label pairs across diverse identities. Under this framework, we introduce Multi-Identity Facial Action (MIFA), a large-scale multi-modal synthetic dataset (text descriptions paired with labeled face images) that features comprehensive AU annotations and identity variations. Extensive experiments demonstrate that MAUGen outperforms existing methods in synthesizing photorealistic, demographically diverse facial images along with semantically aligned AU labels. Our code will be released upon acceptance.
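The abstract describes a two-stage flow: an MRL module maps the conditioning text into a joint latent, and a DIG module decodes that latent into an aligned image-label pair. The sketch below is a toy illustration of that interface only, not the authors' implementation: the module names are taken from the abstract, but all shapes, the number of AUs, and the "denoising" loop are placeholder assumptions standing in for learned diffusion components.

```python
import numpy as np

# Toy constants -- all assumed for illustration, not from the paper.
N_AUS = 12       # number of Action Units modeled
IMG_SIZE = 64    # toy image resolution
LATENT_DIM = 32  # toy joint-latent dimensionality


def mrl_encode(prompt: str) -> np.ndarray:
    """Stand-in for the Multi-modal Representation Learning (MRL) module:
    map a text prompt to a joint latent vector. The real module is a
    learned encoder over text, identity, image, and AU activations."""
    seed = abs(hash(prompt)) % (2**32)
    return np.random.default_rng(seed).standard_normal(LATENT_DIM)


def dig_decode(z: np.ndarray, steps: int = 10):
    """Stand-in for the Diffusion-based Image-label Generator (DIG):
    iteratively refine noise into an image while deriving AU occurrence
    and intensity labels from the same latent, so the pair stays aligned."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal((IMG_SIZE, IMG_SIZE, 3))  # start from pure noise
    for t in range(steps, 0, -1):
        # Fake denoising step conditioned on z (a real model would apply
        # a learned noise predictor here).
        x = x * (1 - 1 / t) + z[:3].mean() / t
    # AU label head: binary occurrence plus a 0-5 intensity per AU.
    logits = z[:N_AUS]
    occurrence = (logits > 0).astype(int)
    intensity = np.clip(np.abs(logits) * 2.0, 0.0, 5.0) * occurrence
    return x, occurrence, intensity


z = mrl_encode("a smiling young woman, AU6 + AU12 active")
img, occ, inten = dig_decode(z)
print(img.shape, occ.shape, inten.shape)  # (64, 64, 3) (12,) (12,)
```

The key design point mirrored here is that both the image and its AU labels are decoded from one shared latent, which is what keeps the generated annotations semantically aligned with the generated face.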
