Driving Animatronic Robot Facial Expression From Speech

Beijing Institute for General Artificial Intelligence (BIGAI)
National Key Laboratory of General Artificial Intelligence

*Equal Contribution, †Corresponding Author

Abstract

Animatronic robots aim to enable natural human-robot interaction through lifelike facial expressions. However, generating realistic, speech-synchronized robot expressions is challenging due to the complexities of facial biomechanics and responsive motion synthesis. This paper presents a principled, skinning-centric approach to driving animatronic robot facial expressions from speech. The approach employs linear blend skinning (LBS) as the core representation to guide tightly integrated innovations in embodiment design and motion synthesis: LBS informs the actuation topology, enables human expression retargeting, and allows speech-driven facial motion generation. The approach produces highly realistic facial expressions from speech in real time on an animatronic face, significantly advancing robots' ability to replicate nuanced human expressions for natural interaction.
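As background for the LBS representation the abstract centers on, linear blend skinning deforms each rest-pose vertex as a weighted blend of rigid transforms: v'_i = sum_j w_ij * T_j * v_i, where the weights w_ij over the J control elements sum to 1 for each vertex. The sketch below is a generic, minimal NumPy illustration of this standard formula, not the paper's implementation; the function and variable names are hypothetical and the array shapes are assumptions.

    import numpy as np

    def linear_blend_skinning(vertices, weights, transforms):
        """Deform rest-pose vertices with linear blend skinning (LBS).

        vertices:   (V, 3) rest-pose vertex positions
        weights:    (V, J) per-vertex blend weights; each row sums to 1
        transforms: (J, 4, 4) homogeneous rigid transform per control element
        returns:    (V, 3) deformed vertex positions
        """
        V = vertices.shape[0]
        # Lift to homogeneous coordinates: (V, 4)
        vh = np.concatenate([vertices, np.ones((V, 1))], axis=1)
        # Apply every transform to every vertex: (J, V, 4)
        posed = np.einsum('jab,vb->jva', transforms, vh)
        # Blend the posed copies by per-vertex weights: (V, 4)
        blended = np.einsum('vj,jva->va', weights, posed)
        # Drop the homogeneous coordinate
        return blended[:, :3]

    # Sanity check: identity transforms with normalized weights leave
    # the mesh unchanged.
    V, J = 5, 3
    rng = np.random.default_rng(0)
    vertices = rng.standard_normal((V, 3))
    weights = rng.random((V, J))
    weights /= weights.sum(axis=1, keepdims=True)
    transforms = np.tile(np.eye(4), (J, 1, 1))
    assert np.allclose(linear_blend_skinning(vertices, weights, transforms), vertices)

In this formulation, the J transforms play the role of control elements (in the paper's setting, quantities ultimately tied to actuation), which is what makes LBS a natural shared representation for actuation design, expression retargeting, and speech-driven motion generation.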

BibTeX

@article{li2024driving,
  title={Driving Animatronic Robot Facial Expression From Speech},
  author={Li, Boren and Li, Hang and Liu, Hangxin},
  journal={arXiv preprint arXiv:2403.12670},
  year={2024},
  url={https://arxiv.org/abs/2403.12670}
}