SATO Wataru Laboratory

An artificial intelligence model for sensing affective valence and arousal from facial images


(Nomiya, Shimokawa, Namba, Osumi, & Sato: Sensors)


Artificial intelligence (AI) models can sense subjective affective states from facial images.
Although recent psychological studies have indicated that dimensional affective states of valence and arousal are systematically associated with facial expressions, no AI models have been developed to estimate these affective states from facial images based on empirical data.

We developed a recurrent neural network-based AI model to estimate subjective valence and arousal states from facial images.
We trained our model using a database containing participant valence/arousal states and facial images.
Leave-one-out cross-validation supported the validity of the model for predicting subjective valence and arousal states.
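Leave-one-out cross-validation holds out each sample in turn, fits the model on the remaining samples, and evaluates the prediction for the held-out sample. The following minimal Python sketch illustrates the procedure only; the toy nearest-neighbor predictor, scalar features, and example ratings are hypothetical stand-ins, not the paper's recurrent neural network or data.

```python
# Illustrative leave-one-out cross-validation (LOOCV): each sample is held
# out once, the model is fit on the rest, and the held-out prediction can
# then be compared with the true rating.

def loocv_predictions(features, targets, fit, predict):
    """Return the held-out prediction for every sample."""
    preds = []
    for i in range(len(features)):
        train_x = features[:i] + features[i + 1:]
        train_y = targets[:i] + targets[i + 1:]
        model = fit(train_x, train_y)
        preds.append(predict(model, features[i]))
    return preds

# Toy stand-in model: predict the rating of the nearest training sample
# (hypothetical; the actual model is a recurrent neural network).
def fit_nn(xs, ys):
    return list(zip(xs, ys))

def predict_nn(model, x):
    return min(model, key=lambda pair: abs(pair[0] - x))[1]

if __name__ == "__main__":
    feats = [0.1, 0.4, 0.5, 0.9]   # e.g., a scalar facial feature (made up)
    vals = [-0.8, 0.2, 0.3, 0.9]   # e.g., valence ratings (made up)
    print(loocv_predictions(feats, vals, fit_nn, predict_nn))
```

Each prediction is made by a model that never saw the corresponding sample, which is what makes the procedure a fair estimate of out-of-sample performance.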



We further validated the effectiveness of the model by analyzing a dataset containing participant valence/arousal ratings and facial videos.
The model predicted second-by-second valence and arousal states, with prediction performance comparable to that of FaceReader, a commercial AI model that estimates dimensional affective states based on a different approach.
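One simple way to obtain second-by-second estimates from a model that operates on individual video frames is to average the frame-level outputs within each one-second window. This aggregation scheme is an assumption for illustration, not a description of the paper's actual pipeline:

```python
# Hypothetical frame-to-second aggregation: average frame-level valence (or
# arousal) predictions into one estimate per second of video.

def per_second_estimates(frame_preds, fps):
    """Collapse a list of per-frame predictions into per-second averages."""
    return [
        sum(frame_preds[i:i + fps]) / len(frame_preds[i:i + fps])
        for i in range(0, len(frame_preds), fps)
    ]

if __name__ == "__main__":
    # Two seconds of toy predictions at 2 fps (made-up values).
    print(per_second_estimates([0.1, 0.3, 0.6, 0.8], fps=2))
```

The resulting per-second series can then be correlated with participants' continuous ratings to quantify prediction performance.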



We constructed a graphical user interface that displays real-time estimates of affective valence and arousal from facial video data. We have distributed our model under the name KKR Facial Affect Reader.



Our model is the first distributable AI model for sensing affective valence and arousal from facial images/videos to be developed based on an empirical database.
We anticipate that it will have many practical uses, such as in mental health monitoring and marketing research.

