Authors
Yuu Nakanishi, Yasunari Yoshitomi, Taro Asada, Masayoshi Tabuse
Corresponding Author
Yuu Nakanishi
Available Online 1 September 2015.
DOI
https://doi.org/10.2991/jrnal.2015.2.2.3
Keywords
Efficient gathering of training data, Facial expression recognition, Thermal
image processing, Speech recognition, Vowel judgment
Abstract
Using our previously developed system, we investigated the influence of
the training data on facial expression recognition accuracy, using the
training data of “taro” for the intentional facial expressions of “angry,”
“sad,” and “surprised,” and the training data of the respective pronunciations
for the intentional facial expressions of “happy” and “neutral.” With the
proposed method, the facial expressions for “taro,” “koji,” and “tsubasa”
were discriminated with an average accuracy of 72.4% across the three
facial expression classes of “happy,” “neutral,” and “other.”
Copyright
© 2013, the Authors. Published by ALife Robotics Corp. Ltd.
Open Access
This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).