Inferring an individual's emotional state from their facial expression appears to be common to all human cultures. Numerous studies have shown that changes in the facial muscles determine the resulting facial expression.
Analyzing images of faces expressing emotional states promises to contribute to quantifying these observations. Here, we use a suite of AI (artificial intelligence) algorithms, along with ML (maximum likelihood) estimated distributions, to quantify the shift in facial expression from "neutral" to "fear" and from "pain" to "pleasure".
The images are single frames of five emotional states (neutral, fear, pain, pleasure, laugh) expressed by actors and actresses in BDSM videos. We extract a feature vector for each image, reduce the dimension of these feature vectors by mapping them onto a two-dimensional manifold, and compute the norms of the normalized displacement vectors for each emotional pair (a sketch of this pipeline is given below).
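The following is a minimal sketch of one way such a pipeline could be implemented. It is not the paper's actual code: the choice of t-SNE as the manifold method, the 512-dimensional synthetic features standing in for real face embeddings, and the standardization used to "normalize" the displacements are all assumptions made for illustration.

```python
import numpy as np
from sklearn.manifold import TSNE

# Synthetic stand-ins for per-frame face features (e.g., from a CNN);
# the real features, extractor, and dimensions are not specified here.
rng = np.random.default_rng(0)
n_subjects, d = 50, 512
feats_neutral = rng.normal(size=(n_subjects, d))                          # "neutral" frames
feats_fear = feats_neutral + rng.normal(0.5, 0.2, size=(n_subjects, d))   # shifted "fear" frames

# Dimension-reduce all feature vectors jointly onto a two-dimensional manifold.
emb = TSNE(n_components=2, init="pca", perplexity=30, random_state=0).fit_transform(
    np.vstack([feats_neutral, feats_fear])
)

# Standardize the embedding so displacements are scale-free -- one plausible
# reading of "normalized displacement vectors".
emb = (emb - emb.mean(axis=0)) / emb.std(axis=0)

# Norm of each subject's displacement from "neutral" to "fear" on the manifold.
disp = emb[n_subjects:] - emb[:n_subjects]
norms = np.linalg.norm(disp, axis=1)
print(norms[:5])
```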
We then find that the ML-estimated distributions of the norms are Gamma distributions and that the modes differ between the emotional pairs for both males and females. We use Wilks' lambda to assess significance.
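A sketch of these two steps follows, under stated assumptions: the Gamma location parameter is fixed at zero (so the fit is a standard two-parameter Gamma), the data are synthetic, and Wilks' lambda is obtained via a one-way MANOVA on the two-dimensional displacement components rather than on the norms themselves; the paper's exact formulation may differ.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
norms = rng.gamma(shape=3.0, scale=0.2, size=200)  # stand-in for real displacement norms

# Maximum-likelihood Gamma fit; floc=0 fixes the location so the fit is a
# standard Gamma. The mode of a Gamma(a, scale) is (a - 1) * scale for a > 1.
a, loc, scale = stats.gamma.fit(norms, floc=0.0)
mode = (a - 1.0) * scale if a > 1.0 else 0.0
print(f"shape={a:.3f}, scale={scale:.3f}, mode={mode:.3f}")

# Wilks' lambda via one-way MANOVA; dx, dy are synthetic 2-D displacement
# components and `group` labels the two emotional pairs (hypothetical layout).
n = 100
df = pd.DataFrame({
    "dx": rng.normal(size=2 * n),
    "dy": rng.normal(size=2 * n),
    "group": ["neutral_fear"] * n + ["pain_pleasure"] * n,
})
print(MANOVA.from_formula("dx + dy ~ group", data=df).mv_test())
```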
We find that the distributions differ significantly for the females but not for the males. The methodology presented here has widespread applications in monitoring human emotional states in various settings, among them determining whether participants in BDSM and similar videos are genuine volunteers or victims of criminal activity.