Human–Computer Communication Using Recognition and Synthesis of Facial Expression

Authors
Yasunari Yoshitomi*
Graduate School of Life and Environmental Sciences, Kyoto Prefectural University, 1-5 Nakaragi-cho, Shimogamo, Sakyo-ku, Kyoto 606-8522, Japan
*Email: [email protected]
Corresponding Author
Yasunari Yoshitomi
Received 19 December 2020, Accepted 12 March 2021, Available Online 27 May 2021.
DOI
https://doi.org/10.2991/jrnal.k.210521.003
Keywords
Emotion; facial expression recognition; infrared-ray image; facial expression synthesis; personified agent
Abstract
To develop a complex computer system, such as a robot, that can communicate smoothly with humans, the system must be equipped with functions both for understanding human emotions and for expressing emotional signals. From both perspectives, facial expression is a promising research area. In our research, we have explored both aspects of facial expression using infrared-ray and visible-ray images, and we have developed a personified agent that expresses emotional signals to humans. This paper reports a human–computer–human communication system based on facial expression analysis and a music recommendation system that uses a personified agent with facial expression synthesis.
Copyright
© 2021 The Author. Published by ALife Robotics Corp. Ltd.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).