Animatronic robots hold the promise of enabling natural human-robot interaction through lifelike facial expressions. However, generating realistic, speech-synchronized robot expressions is challenging due to the complexities of facial biomechanics and the need for responsive motion synthesis. This paper introduces a novel, skinning-centric approach to drive animatronic robot facial expressions from speech input. At its core, the proposed approach employs linear blend skinning (LBS) as a unifying representation, guiding innovations in both embodiment design and motion synthesis: LBS informs the actuation topology, facilitates human expression retargeting, and enables efficient speech-driven facial motion generation. The approach produces highly realistic facial expressions on an animatronic face in real time, running at over 4000 fps on a single NVIDIA RTX 4090, and significantly advances robots' ability to replicate nuanced human expressions for natural interaction. To foster further research and development in this field, the code is publicly available at: https://github.com/library87/OpenRoboExp.
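
For readers unfamiliar with the representation at the heart of the approach, the sketch below illustrates the standard linear blend skinning formulation: each vertex is deformed by a weighted blend of per-joint (or per-actuation-unit) transforms. This is a minimal, illustrative NumPy version of generic LBS, not the repository's implementation; all names (`skin_vertices`, `rest_verts`, `joint_transforms`, `weights`) are hypothetical.

```python
# Minimal linear blend skinning (LBS) sketch in NumPy.
# Illustrative only; not the OpenRoboExp API.
import numpy as np

def skin_vertices(rest_verts, joint_transforms, weights):
    """Deform rest-pose vertices with linear blend skinning.

    rest_verts:       (V, 3)    rest-pose vertex positions
    joint_transforms: (J, 4, 4) per-joint homogeneous transforms
    weights:          (V, J)    skinning weights, each row sums to 1
    returns:          (V, 3)    deformed vertex positions
    """
    V = rest_verts.shape[0]
    # Lift rest vertices to homogeneous coordinates: (V, 4)
    rest_h = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)
    # Blend the joint transforms per vertex: (V, 4, 4)
    blended = np.einsum('vj,jab->vab', weights, joint_transforms)
    # Apply each vertex's blended transform: (V, 4)
    deformed_h = np.einsum('vab,vb->va', blended, rest_h)
    return deformed_h[:, :3]

if __name__ == "__main__":
    # Toy usage: three vertices skinned to two joints.
    rest = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [2.0, 0.0, 0.0]])
    identity = np.eye(4)
    lifted = np.eye(4)
    lifted[2, 3] = 0.5                      # second joint translates +0.5 in z
    transforms = np.stack([identity, lifted])
    w = np.array([[1.0, 0.0],
                  [0.5, 0.5],
                  [0.0, 1.0]])
    print(skin_vertices(rest, transforms, w))
```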
@inproceedings{li2024driving,
  title     = {Driving Animatronic Robot Facial Expression From Speech},
  author    = {Li, Boren and Li, Hang and Liu, Hangxin},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2024},
  url       = {https://arxiv.org/abs/2403.12670}
}