Using machine learning to recognize emotions and control facial expressions

Andrei Savchenko from the Nizhny Novgorod branch of the Higher School of Economics has published the results of his machine-learning research on recognizing emotions in people's faces in photographs and videos. The code is written in Python using PyTorch and is distributed under the Apache 2.0 license. Several pre-trained models are available, including ones suitable for use on mobile devices.

Based on this library, another developer created sevimon, a program that tracks changes in emotions through a video camera and helps control facial muscle tension, for example to relieve overstrain, indirectly influence mood and, with prolonged use, prevent the appearance of facial wrinkles. The centerface library is used to detect the position of the face in the video. The sevimon code is written in Python and distributed under the AGPLv3 license. On first launch the models are downloaded, after which the program no longer requires an Internet connection and works completely offline. Instructions are provided for running on Linux/UNIX and Windows, along with a Docker image for Linux.
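The overall pipeline described above (detect a face in a frame, score it against each emotion, append the scores to a text log) can be sketched roughly as follows. This is an illustrative outline, not sevimon's actual code: all function names, the frame representation, and the log format here are hypothetical placeholders.

```python
# Hypothetical sketch of the sevimon-style pipeline described in the
# article. Function names, data shapes and the log format are assumptions.

EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "joy", "neutral", "sadness", "surprise"]

def detect_face(frame):
    """Placeholder for face detection (sevimon uses centerface)."""
    # A real detector would return a face crop / bounding box, or None.
    return frame.get("face")

def score_emotions(face):
    """Placeholder for the emotion model: one similarity score per emotion."""
    return {e: face.get(e, 0.0) for e in EMOTIONS}

def process_frame(frame, log):
    """Detect a face, score emotions, append one line to the text log."""
    face = detect_face(frame)
    if face is None:
        return None
    scores = score_emotions(face)
    # sevimon stores results in a plain-text log for later analysis.
    log.append(" ".join(f"{e}={scores[e]:.2f}" for e in EMOTIONS))
    return scores

# Usage with a stubbed "frame":
log = []
frame = {"face": {"joy": 0.9, "anger": 0.1}}
process_frame(frame, log)
```

The point of the sketch is the separation of concerns: face detection, emotion scoring, and logging are independent stages, which is why the detector (centerface) and the recognition models can be swapped out separately.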

sevimon works as follows: first, a face is detected in the camera image, then the face is compared against each of eight emotions (anger, contempt, disgust, fear, joy, absence of emotion, sadness, surprise), and each emotion receives a similarity score. The resulting values are stored in a text log for subsequent analysis by the sevistat program. For each emotion, upper and lower bounds can be set in the settings file; as soon as a bound is crossed, a reminder is issued.
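The bounds check described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, assuming a simple dictionary-based settings shape; it is not sevimon's actual configuration schema or reminder mechanism.

```python
# Minimal sketch of a per-emotion bounds check, as described in the
# article. The bounds format {emotion: (lower, upper)} is an assumption.

def check_bounds(scores, bounds):
    """Return a reminder string for every emotion whose score crosses a bound."""
    reminders = []
    for emotion, score in scores.items():
        lo, hi = bounds.get(emotion, (None, None))
        if lo is not None and score < lo:
            reminders.append(f"{emotion} below {lo}")
        if hi is not None and score > hi:
            reminders.append(f"{emotion} above {hi}")
    return reminders

# Example: warn when anger gets too high or joy drops too low.
bounds = {"anger": (None, 0.6), "joy": (0.2, None)}
scores = {"anger": 0.8, "joy": 0.1, "sadness": 0.3}
print(check_bounds(scores, bounds))  # ['anger above 0.6', 'joy below 0.2']
```

Allowing either bound to be unset (None) matches the described behavior: a user might only care about an upper bound on anger but a lower bound on joy.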
