Computer analysis of live, dynamic movement attracts considerable research interest. An important part of this area is the motion capture process, which can be based on appearance and facial expression estimation. The aim of this study is to reproduce 3D facial movements by estimating facial expressions from video image sequences and applying them to a computer-generated 3D face. We propose an algorithm that classifies a given image sequence into one of a set of motion categories. The contributions of this work are twofold. First, an optical flow algorithm is used for feature extraction; instead of computing flow between two subsequent images (two consecutive frames of a video), the flow between each frame and the neutral state is used. Second, we train a multilayer perceptron network whose inputs are the matrices obtained from the optical flow algorithm, modeling a mapping between a person's movements and the movement categories in the database. A three-dimensional avatar, built from Kinect data, is used to render the facial movements in a graphical environment. To evaluate the proposed method, several videos were recorded and the detected expressions were compared against the actual ones. The results indicate that the proposed method is effective.
Fateme Zare Mehrjardi, Mehdi Rezaeian