Rapid Synthesis of Massive Face Sets for Improved Face Recognition

Published in IEEE International Conference on Automatic Face and Gesture Recognition (FG), Washington, DC, 2017

Recommended citation: Iacopo Masi, Tal Hassner, Anh Tuan Tran, and Gerard Medioni. Rapid Synthesis of Massive Face Sets for Improved Face Recognition. IEEE International Conference on Automatic Face and Gesture Recognition (FG), Washington, DC, 2017.


Figure: Rendered (synthesized) new views of a real face. The input face image (left) is rendered to frontal views by previous work (top) and to multiple views by our fast method (bottom). Results in the top row are taken from Effective Face Frontalization in Unconstrained Images.

Abstract

Recent work demonstrated that computer graphics techniques can be used to improve face recognition performance by synthesizing multiple new views of faces available in existing face collections. In so doing, more images and more appearance variations become available for training, thereby improving the deep models trained on these images. Similar rendering techniques were also applied at test time to align faces in 3D and reduce appearance variations when comparing faces. These previous results, however, did not consider the computational cost of rendering: at training, rendering millions of face images can be prohibitive; at test time, rendering can quickly become a bottleneck, particularly when multiple images represent a subject. This paper builds on a number of observations which, under certain circumstances, allow rendering new 3D views of faces at a computational cost equivalent to simple 2D image warping. We demonstrate this by showing that an optimized OpenGL rendering engine runs slower than the simple Python implementation we designed for the same purpose. The proposed rendering is used in a face recognition pipeline and tested on the challenging IJB-A and Janus CS2 benchmarks. Our results show that our rendering is not only fast, but also improves recognition accuracy.
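The central observation can be sketched concretely: when the same generic 3D face shape and the same fixed set of target poses are reused for every input, the pixel-level correspondence between an aligned input image and each rendered view can be computed once, offline. Rendering a new face then reduces to a per-pixel table lookup, i.e. an ordinary 2D warp. Below is a minimal Python/OpenCV sketch of this idea under those assumptions; it is not the authors' actual implementation, and `precompute_sampling_maps` is a hypothetical placeholder that fakes the offline projection with a simple shear, since the real maps would come from projecting a 3D face model.

```python
import numpy as np
import cv2


def precompute_sampling_maps(height, width, yaw_shift=20.0):
    """Hypothetical stand-in for the offline step: project a generic 3D face
    shape into the target pose once, recording for every output pixel the
    input-image coordinate it samples from. Here the mapping is faked with a
    horizontal shear so the example runs without a 3D model."""
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float32)
    map_x = xs + yaw_shift * (ys / height)  # placeholder for the true projection
    map_y = ys
    return map_x, map_y


def render_view(aligned_face, map_x, map_y):
    """Online step: 'rendering' a new view is a single 2D remap over the
    precomputed maps, i.e. the same cost as ordinary image warping."""
    return cv2.remap(aligned_face, map_x, map_y,
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)


if __name__ == "__main__":
    face = cv2.imread("aligned_face.jpg")  # any pre-aligned face crop
    if face is not None:
        map_x, map_y = precompute_sampling_maps(*face.shape[:2])
        cv2.imwrite("rendered_view.jpg", render_view(face, map_x, map_y))
```

Because the maps are computed once per target pose rather than once per image, the per-image cost is independent of the 3D geometry, which is what makes rendering millions of training faces, or many test-time views per subject, tractable.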

Project and Code

Download paper here