Horses feel emotions when they watch positive and negative horse–human interactions in a video and transpose what they saw to real life

Animals can indirectly gather meaningful information about other individuals by eavesdropping on their third-party interactions. In particular, eavesdropping can be used to indirectly attribute a negative or positive valence to an individual and to adjust one’s future behavior towards that individual. Few studies have focused on this ability in nonhuman animals, especially in nonprimate species. Here, we investigated this ability for the first time in domestic horses (Equus caballus) by projecting videos of positive and negative interactions between an unknown human experimenter (a “positive” experimenter or a “negative” experimenter) and an actor horse. The horses reacted emotionally while watching the videos, expressing behavioral (facial expressions and contact-seeking behavior) and physiological (heart rate) cues of positive emotions while watching the positive video and of negative emotions while watching the negative video. This result shows that the horses perceived the content of the videos and suggests an emotional contagion between the actor horse and the subjects. After the videos were projected, the horses took a choice test, facing the positive and negative experimenters in real life. The horses successfully used the interactions seen in the videos to discriminate between the experimenters. They touched the negative experimenter significantly more, which seems counterintuitive but can be interpreted as an appeasement attempt, based on the existing literature. This result suggests that horses can indirectly attribute a valence to a human experimenter by eavesdropping on a previous third-party interaction with a conspecific.


Introduction
Social animals can indirectly gather meaningful information from other individuals by eavesdropping on their third-party interactions (i.e., "social eavesdropping"; Bonnie and Earley 2007). This social eavesdropping has been observed in various species and contexts. For instance, in the context of mate choice, birds and fishes can eavesdrop on interactions to assess the quality of a potential partner (Mennill et al. 2002; Ophir and Galef 2004; Valone 2007; Aquiloni and Gherardi 2010; Kavaliers et al. 2017). Several species of birds, fishes and rodents observe conflictual interactions to judge the relative quality and dominance rank of the opponents (Peake et al. 2001; Earley and Dugatkin 2002; Höjesjö et al. 2007; Valone 2007; Lai et al. 2014). Furthermore, social eavesdropping can also allow animals to attribute a negative or positive valence indirectly to unknown individuals through two different mechanisms. First, individuals can observe the emotional reaction of a third party during an interaction with a stranger and adjust their own behavior towards that stranger; this has, for instance, been studied in dogs and human infants (Feinman and Lewis 1983; Murray et al. 2008; Freidin et al. 2013; Duranton et al. 2016). Second, animals can indirectly attribute a valence to unknown individuals based on how those individuals behaved towards a third party and use this information to adjust their own behavior towards them accordingly. This latter ability has been investigated in a few nonhuman species, including great apes, capuchin and marmoset monkeys, domestic dogs and reef fish (see Abdai and Miklósi 2016 for a review). (Electronic supplementary material: the online version of this article, https://doi.org/10.1007/s10071-020-01369-0, contains supplementary material, which is available to authorized users.)
Chimpanzees, dogs and marmoset monkeys were, for instance, shown to prefer food offered by an experimenter who had previously given food to a human beggar over food offered by an experimenter who had refused to do so (Russell et al. 2008; Subiaul et al. 2008; Marshall-Pescini et al. 2011; Kawai et al. 2014). Similarly, capuchin monkeys and dogs showed a preference for an experimenter who had previously helped open a container to retrieve an object (of neutral valence to the subject) over an unhelpful experimenter (Anderson et al. 2013; Chijiiwa et al. 2015). A similar ability has also been studied in reef fish in the context of interspecific mutualism with the cleaner fish (Labroides dimidiatus): reef fish were shown to eavesdrop on the interactions between these cleaner fish and other third-party clients in order to select cooperative partners (those that remove ectoparasites from clients and do not cheat by eating the clients' mucus; Bshary and Grutter 2006). This underlines the importance of carrying out studies in taxa other than dogs and primates in order to know how widespread the ability to attribute a valence based on social eavesdropping is.
This valence attribution, often studied in the context of cooperation, has also been reported during interactions involving emotional content. Nitzschner et al. (2012), for instance, showed dogs an interaction between a conspecific and a "nice" experimenter, who paid attention to and petted another dog, and an "ignoring" experimenter, who ignored the other dog. As cooperation is a complex mechanism that implies high cognitive demands (Cheney 2011), this alternative paradigm can be particularly useful for studying a broader range of species, especially species that are not known to cooperate.
In this study, we investigated for the first time whether domestic horses (Equus caballus) could indirectly attribute a valence to a human by eavesdropping on an interspecific social interaction. We used a paradigm based on emotional interactions, as the expression and processing of emotions have been well described in this species. Horses can express emotions both vocally (Briefer et al. 2017) and visually (through body posture and facial features; Lansade et al. 2018). Typical facial expressions have been well described for both negative (Dalla Costa et al. 2014; Gleerup et al. 2015) and positive (Lansade et al. 2018) emotions. Horses can recognize facial expressions related to emotions in both humans (Smith et al. 2016; Nakamura et al. 2018; Trösch et al. 2019a) and conspecifics (Wathan et al. 2016) and adjust their behavioral response accordingly, even in a future encounter a few hours later. Moreover, horses are highly sociable (Ringhofer et al. 2017) and share a close relationship with humans in their daily lives. They are very attentive to humans and their behavior (e.g., Proops et al. 2009; McComb 2010, 2012; Lampe and Andre 2012; Proops et al. 2013; Malavasi and Huber 2016; Ringhofer and Yamamoto 2016; Trösch et al. 2019b) and can learn socially from a conspecific or a human demonstrator (Krueger et al. 2014; Rørvang et al. 2015; Schuetz et al. 2017). Hence, horses are good candidates for investigating whether they can also learn by eavesdropping on interactions between a human and a third-party conspecific to attribute a valence to the human.
To examine this, we showed the horses two kinds of interactions between an experimenter and an unfamiliar "actor" horse: a positive interaction (a grooming session) and a negative interaction (an unpleasant veterinary act). These scenes were presented through silent videos projected at real size in front of each horse, and we investigated whether the horses could perceive the emotional content of these scenes through behavioral and physiological measurements. Then, the horses met the two experimenters in real life in a choice test. Our hypotheses were the following: (1) the horses would express more positive emotions when watching the positive scene and more negative emotions when watching the negative scene (a control condition, in which the tested horse observed the same videos but with the human and horse actors replaced by moving ovals, allowed us to check that the horses were not simply reacting to a difference in movement between the two videos); and (2) the horses would discriminate between the positive and negative experimenters during the choice test by seeking more contact with one of them.

Materials and methods

Subjects and husbandry
Forty-seven adult female Welsh ponies (mean age ± SE = 8.47 ± 0.61 years) were used for this study. Twenty-four of them were included in the test condition, and 23 were included in the control condition. These horses were bred and had lived all their lives at the experimental unit (PAO, doi: 10.15454/1.5573896321728955E12) of the French National Institute for Agricultural Research (INRA, Nouzilly, France). They were familiar with humans and with being handled. During the experiment, the horses were housed indoors in groups on straw bedding. They were fed straw and hay, with access to water ad libitum. None of the horses had been involved in a similar experiment before.

Ethical note
Our protocol was submitted to the Val de Loire Ethical Committee (CEEA VdL, France) and received a positive recommendation. Horses did not undergo any invasive procedure during the experiment. Horses lived in groups, went to an outside paddock on a daily basis, and were not food-deprived.

Test condition
The experiment consisted of consecutively projecting two silent videos (one positive and one negative), each 30 s long, to a horse (Fig. S1). The positive video showed an experimenter grooming an "actor" horse. In this video, the actor horse expressed positive emotions (Table S1) through a typical facial expression that has been described before (Lansade et al. 2018): ears oriented backwards, low neck, and lips extended and moving in a grooming attempt. The negative video showed a second experimenter (different from the first one) performing a veterinary act on the horse: touching her ears to administer an ointment and spraying a product towards her head. We used a small air spray that was completely hidden in the experimenter's hand, so that it was invisible on the video. In this video, the actor horse expressed fearful reactions by jolting and trying to run away from the experimenter (four times in each video). She adopted a typical facial expression of negative emotions (Table S1): elevated neck and eyes wide open with the sclera visible (Lansade et al. 2018).
The two experimenters (both women) and the actor horse were unfamiliar with the horses prior to this experiment.
Each experimenter wore clothes of a specific color (light brown for one and dark blue for the other) and could be either positive or negative depending on the horse tested: the roles of the two experimenters and the order of the videos were counterbalanced across horses. The videos were recorded beforehand and in the same location. Importantly, the experimenters were asked to keep a neutral facial expression (not expressing any positive or negative emotion) in both videos; the videos were later checked independently by three people and edited to delete the scenes in which this was not the case. The videos consisted of several short scenes following one another. The number and duration of these scenes were identical in all videos: two scenes of 10 s, followed by three scenes of 2 s and then one last scene of 4 s. Moreover, all the videos were checked to ensure that the experimenter's face was entirely visible during at least 60% of the video.

Control condition
In the control condition, the videos used during the test were edited so that the experimenters and the horse were replaced by two ovals with the same surface area and colored with the main color of the experimenter and the actor horse as appropriate. The videos were edited semiautomatically using the image editing and analysis software ImageJ and Fiji (Schindelin et al. 2012;Rueden et al. 2017). These ovals followed the movement pattern of the horse and the experimenter and copied the variations in their surface area (which was recalculated every 25 ms).
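The per-frame substitution described above can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' actual ImageJ/Fiji pipeline: the function name `mask_to_oval`, the frame size and the fixed aspect ratio are all assumptions. The only properties taken from the text are that each oval tracks the position of the replaced silhouette and matches its surface area, recalculated every 25 ms.

```python
import math

def mask_to_oval(mask, aspect=2.0):
    """Reduce a binary silhouette mask (a list of rows of 0/1 pixels) to an
    equal-area oval: (centre_x, centre_y, half_width, half_height)."""
    area = 0
    sx = sy = 0.0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                area += 1
                sx += x
                sy += y
    if area == 0:
        return None  # nothing to replace in this frame
    cx, cy = sx / area, sy / area
    # Ellipse area = pi * a * b; keep a fixed (hypothetical) aspect ratio a = aspect * b,
    # so the oval's surface area always equals the silhouette's pixel count.
    b = math.sqrt(area / (math.pi * aspect))
    return (cx, cy, aspect * b, b)

# Example frame: a 4 x 4 block of "horse" pixels in a 10 x 10 image
frame = [[1 if 2 <= y <= 5 and 3 <= x <= 6 else 0 for x in range(10)]
         for y in range(10)]
oval = mask_to_oval(frame)
```

Running this per frame (one frame every 25 ms) yields an oval that moves and grows exactly as the silhouette does, which is the property the control condition relies on.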

Set up and procedure
The entire experiment took place in a large, familiar stall (3.5 × 4.5 m; Fig. 1). The videos were projected onto a large white screen (2 × 2 m) by an Acer P6200S DLP projector. During the video projection, the horse was attached in front of the screen by two loose leading reins (one on each side of the horse). Moreover, an assistant stood next to the horse during the projection for safety reasons, in case the horse became distressed or risked hurting itself during the course of the experiment (which never occurred during our study). The assistant had his or her back to the screen, looked at the ground, and did not know which video was being projected, in order to avoid unintentional social cueing.
The horses were tested individually in the following manner:

Test condition
The test condition was carried out with 24 horses.
1. Familiarization: The horses were familiarized with the experimental set up, the video projection and the device used to measure their heart rate (cf. "Coding and statistical analyses"). Familiarization consisted of at least three sessions of 10 min each, with one session per day, typically on three consecutive days. More sessions were added if necessary until the horse could be secured and stayed still in front of a video with a heart rate lower than 80 bpm for more than 30 s. The videos used were specific to this familiarization and showed an unknown horse walking and running in a sand paddock. The test took place in the afternoon of the same day as the last familiarization session.
2. Projection of the negative and positive videos in random order.
3. Choice test: At the end of the video projection, both experimenters, wearing the same clothes as in the videos, entered the stall at the same time and placed themselves on either side of the horse, facing her. The assistant gave each experimenter the two extremities of a string passing through the halter of the horse, so that the horse was held in the middle of the stall while the assistant left (Fig. S2). The experimenters held the strings 1 m from the horse and did not interact directly with her (they did not touch her and looked at the ground). When the assistant had left the stall and closed the door, he gave a signal to the experimenters. At this signal, both experimenters released one extremity of their string at the same time, let the other extremity slide through the halter and wrapped the string in their hand, so that the horse was set free. The choice test then began.
The test lasted 75 s, during which the experimenters first tried to attract the horse for 15 s (by looking at the horse, holding out a hand towards her and making a clicking sound with their mouths) and then stood still for 60 s with their hands behind their backs, looking at the ground. Importantly, the experimenters and the assistant did not know which videos the horse had seen before the choice test and were thus blind to which experimenter had played the role of the positive or the negative experimenter.

Control condition
The control condition was carried out with 23 horses that had not participated in the test condition. The familiarization and the projection of the negative and positive videos were the same as in the test condition, except that, in all videos, the horse and experimenters were replaced by moving ovals. No choice test was carried out after the video projection.

Coding and statistical analyses
The behavior of the horses was recorded with a camera during the video projection and the choice test and analyzed later using BORIS software (v. 6.0.6; Friard and Gamba 2016).

Video projection
Four types of measures were analyzed:
1. Facial expressions: the time spent with both ears oriented backward or forward; the time spent with the neck high, medium or low; and the time spent with the whites of the eyes visible (as we did not observe any half-closed eyes, we did not include the time spent with the eyes wide open in the analyses, since it would have been the exact opposite of the time with the whites of the eyes visible). We only included in the analysis the variables that were expressed by the animals and visible enough on the video recordings to be coded without ambiguity (see Table 1 for the complete list).
2. Number of grooming attempts: the number of times the horse extended her lips to nibble the leading reins or the assistant.
3. Variation in heart rate between the beginning and end of the video: the heart rate of the horses was measured during the video projection using a heart rate monitor (Polar Equine RS800CX Science, Polar Oy, Finland), and the difference between the mean heart rate during the first and last 5 s of each video was calculated.
4. Time spent facing the videos: we considered the horses to be attentive to the video when they directly faced it (the angle between their head and the center of the screen was < 15°) or when the angle was between 10° and 45° but at least one ear was oriented towards the video (Proops et al. 2009; Lampe and Andre 2012; Proops and McComb 2012; Smith et al. 2018).
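Measures 3 and 4 above are simple enough to be made explicit in code. The sketch below is illustrative only: it assumes the Polar trace has been resampled to one beats-per-minute value per second, and the function names and data are invented, not taken from the authors' analysis scripts.

```python
def heart_rate_variation(hr_bpm, window_s=5):
    """Measure 3: mean heart rate over the last `window_s` samples minus the
    mean over the first `window_s` samples (assuming one bpm value per second).
    Positive values mean the heart rate increased during the video."""
    first = hr_bpm[:window_s]
    last = hr_bpm[-window_s:]
    return sum(last) / len(last) - sum(first) / len(first)

def facing_video(head_angle_deg, ear_towards_screen):
    """Measure 4: the attention criterion as stated in the text (directly
    facing at < 15 degrees, or 10-45 degrees with an ear towards the screen)."""
    if head_angle_deg < 15:
        return True
    return 10 <= head_angle_deg <= 45 and ear_towards_screen

# Illustrative 30-s traces (one value per second); the values are invented.
rising = [60 + t for t in range(30)]   # heart rate climbing, as during the negative video
falling = [80 - t for t in range(30)]  # heart rate dropping, as during the positive video
```

With these traces, `heart_rate_variation(rising)` is positive and `heart_rate_variation(falling)` is negative, mirroring the sign convention used in the Results.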

Choice test
During the choice test, we recorded whether the horse touched each experimenter and how many times each experimenter was touched. Indeed, physical contact with the muzzle has been described as an attention-seeking behavior in horses (Malavasi and Huber 2016; Ringhofer and Yamamoto 2016; Trösch et al. 2019b). The touches were usually brief, but when a touch lasted longer than 3 s, a new occurrence was counted every 3 s (for examples of this procedure, see Lansade et al. 2018; Trösch et al. 2019b). Data were analyzed by a coder who was blind to whether the video watched by the horse was positive or negative. Moreover, facial expressions and time spent facing the videos were analyzed by two coders, since these components can be relatively ambiguous to record. The arithmetic means of the values coded by the two coders were used for the statistical analyses. Interobserver reliabilities were calculated: ears oriented forward (ICC = 0.96, lower bound = 0.94); ears oriented backward (ICC = 0.93, lower bound = 0.90); low neck (ICC = 0.93, lower bound = 0.90); medium neck (ICC = 0.98, lower bound = 0.98); high neck (ICC = 0.99, lower bound = 0.98); whites of the eyes visible (ICC = 1, lower bound = 1); time spent paying attention to the video (ICC = 0.95, lower bound = 0.92). All of these values are considered to represent excellent reliability (Koo and Li 2016).
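The text reports ICCs with lower bounds (Koo and Li 2016) without stating which ICC form was used. As one plausible reading, the sketch below implements the two-way consistency, single-rater form ICC(3,1) in pure Python on invented two-coder data; it is a hedged illustration of the computation, not the authors' analysis, and it omits the confidence interval that produces the reported "lower bound".

```python
def icc_3_1(ratings):
    """Two-way consistency ICC for a single rater, ICC(3,1):
    (MS_rows - MS_err) / (MS_rows + (k - 1) * MS_err),
    where `ratings` has one row per subject and one column per coder."""
    n = len(ratings)       # subjects
    k = len(ratings[0])    # coders
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Invented two-coder data: perfect agreement vs. one disagreement per coder pair
perfect = [[1, 1], [2, 2], [3, 3], [4, 4]]
noisy = [[1, 2], [2, 2], [3, 4], [4, 4]]
```

As expected, identical codings give an ICC of exactly 1 (as for "whites of the eyes visible" above), and disagreements pull the value below 1.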
Statistics were performed with Xlstat software (Addinsoft, Paris, France) for the analysis of facial expressions and with R 3.0.2 (R Core Team 2013) for all other analyses. The significance threshold was fixed at 0.05. To cope with our limited sample size, we used nonparametric statistical tests; all tests were two-tailed. To reduce the number of two-by-two comparisons and to consider facial expression as a whole (cf. Lansade et al. 2018), facial expressions were characterized using a principal component analysis (PCA) based on Spearman correlations. A Wilcoxon signed-rank test for paired samples was then performed to test whether each subject's scores on the first two factors of the PCA differed between the negative and the positive videos. We used Wilcoxon signed-rank tests for paired samples to test whether the horses expressed each of the variables described above differently when watching the two videos (in both the test and the control conditions) and whether they touched one experimenter more than the other in the choice test. We used a Wilcoxon signed-rank test to test whether the variation in heart rate differed significantly from zero during the negative and positive videos. As the data were binary, we used McNemar's test to investigate whether horses were more likely to touch one of the experimenters at least once over the total duration of the test (75 s). Supplementary statistical analyses comparing the results for each variable between the test and the control conditions are presented in Online Resource 1.
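To make the nonparametric logic concrete, here is a minimal pure-Python sketch of the two paired tests named above. It computes only the test statistics (the V, W and chi-squared values reported in the Results come from Xlstat and R, not from this code), and the data in the assertions below are invented for illustration.

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic for paired samples: zero differences
    are dropped, tied |d| receive average ranks, and W is the sum of the
    ranks of the positive differences (P values omitted for brevity)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied absolute differences
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of rank positions i+1 .. j+1
        for idx in order[i:j + 1]:
            ranks[idx] = avg_rank
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

def mcnemar_chi2(b, c):
    """McNemar's chi-squared (without continuity correction) from the two
    discordant cell counts b and c of a paired 2 x 2 table."""
    return (b - c) ** 2 / (b + c)
```

For the binary choice-test data, `b` and `c` would be the counts of horses that touched only the negative or only the positive experimenter; horses that touched both or neither do not enter the statistic.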

Results

Test condition
Facial expression was described by a PCA. The first factor of the PCA explained 57.30% of the total variability (Fig. 2; Table 1). It opposed two different facial expressions: "ears oriented forward", "high neck" and, to a lesser extent, "whites of the eyes" were negatively correlated with factor 1, while "ears oriented backwards", "low neck" and "medium neck" were positively correlated (Table 1). The scores of the horses on factor 1 differed significantly between the two videos: horses presented significantly lower values when they watched the negative video than when they watched the positive video (V = 33, P < 0.001; Fig. 2). Factor 2 represented 16.75% of the variability and was mainly correlated with the variable "whites of the eyes" (Table 1). The scores on factor 2 tended to differ between the two videos: the horses tended to score higher while watching the negative video than while watching the positive video (V = 215, P = 0.065). The number of grooming attempts was significantly higher while watching the positive video than while watching the negative video (W = 131, P < 0.001; Fig. 3).
The difference in mean heart rate between the first and last 5 s of the video (W = 408, P < 0.001; Fig. 3) was significantly higher for the negative video than for the positive video. This difference significantly differed from zero for both videos (negative: V = 227, P < 0.001; positive: V = 57, P = 0.023). It was positive for the negative video, meaning that the heart rate increased while the horse watched the negative video, and was negative for the positive video, meaning that the heart rate decreased while the horse watched the positive video. The time spent facing the videos did not significantly differ between the videos (W = 324, P = 0.282).

Control condition
The first two factors of the PCA characterizing the facial expressions explained 62.79% of the total variability and are described in Table S2. Neither factor significantly differentiated between the two videos (F1: V = 138, P = 0.988; F2: V = 162, P = 0.482; Fig. S3). None of the other variables differed significantly between the two videos (Table S3).

Choice test
The horses touched the negative experimenter significantly more frequently than they did the positive experimenter (W = 384, P = 0.042; Fig. 4). They were also more likely to touch the negative experimenter at least once when we considered the data as binary (McNemar's chi-squared = 7.11, df = 1, P = 0.008; Fig. 4).

Discussion
In accordance with our hypotheses, our results show that the horses reacted differently when watching a positive video (with an experimenter grooming a horse) and a negative video (with an experimenter performing an unpleasant veterinary act) and that the information observed in these videos later influenced their behavior towards the two experimenters in a choice test in real life.

Horses perceived and reacted to the emotional content of the videos
In our study, the horses reacted coherently with the type of information contained in the video they were watching: they showed clear positive facial expressions during the positive video and clear negative facial expressions during the negative video; they sought contact almost exclusively during the positive video; and their heart rate decreased during the positive video while it increased during the negative video. In a control condition, we verified that these differences were not merely due to there being more movement in the negative video (as the actor horse tried to get away from the unpleasant stimulus) than in the positive video. Indeed, in the control videos, the horse and the experimenter were replaced with moving ovals that precisely followed their displacement and variations in surface area. The horses did not react differently when watching the two videos in this control condition, which suggests that the horses actually perceived and reacted to the content of the videos in the test condition. Notably, the emotional response of the horses during the control condition was intermediate between their responses to the positive and the negative videos of the test condition: they did not attempt to groom the experimenter as they did during the positive video of the test, and their heart rate increased less than during the negative video of the test (Online Resource 1).
Their emotional reactions while watching the videos in the test might have been caused by two different (nonexclusive) types of cues. First, they might have used the emotions of the actor horse in the videos. Our results could thus be explained by an emotional contagion between the actor horse and the horse watching the video. Emotional contagion is the reflex-like triggering of an emotion caused by observing this emotion in another individual (Hatfield et al. 1993; Boissy et al. 1998; de Waal 2007, 2012). This effect serves important functions for group living, such as the synchronization of behaviors, the consolidation of social bonds through sharing positive emotions, and reacting appropriately to an unperceived or unknown stimulus (e.g., reacting quickly to the arrival of a predator that only one member of the group has seen; Špinka 2012). Emotional contagion has been shown in various species (for reviews: de Waal 2007; Špinka 2012; Briefer 2018), including farm animals: some evidence has been found in sheep (e.g., Yonezawa et al. 2017), pigs (e.g., Goumon and Špinka 2016; Reimert et al. 2014), chickens (Edgar et al. 2011) and cattle (Boissy et al. 1998). Previous studies on horses have shown a transmission of emotions in negative contexts (e.g., Christensen et al. 2008; Keeling et al. 2009; Rørvang et al. 2015; Rørvang and Christensen 2018) but never in positive contexts. Our study suggests emotional contagion of positive valence as well. Indeed, the horses not only reacted less negatively while watching the positive video, they also presented a facial expression associated with positive emotions (Lansade et al. 2018) and a behavioral response typical of this type of positive interaction, as they attempted to groom the assistant. Mutual grooming is an important affiliative behavior in horses (Arnold and Grassia 1982; Kimura 1998; Heitor et al. 2006), and when a human grooms a horse, the horse tends to reciprocate the grooming with the human (McGreevy et al. 2009). Second, the horses might have reacted to the valence of the experimenters' behavior during the interaction with the actor horse (i.e., grooming the actor horse or performing unpleasant veterinary acts). Importantly, the spray used in the negative video was completely hidden inside the experimenter's hand, so the horse watching the video did not simply react to the sight of a scary object. Nor did the horses react to the experimenters' facial expressions, as we controlled both experimenters' facial expressions in the videos. In any case, our results show that the horses perceived the valence of the interactions seen in the videos and that this perception induced a consistent emotional response.

Horses discriminated between the two experimenters in the choice test
During the choice test, the horses behaved differently with the experimenters based on the previously observed interaction between the experimenters and the actor horse. However, contrary to the more intuitive hypothesis, the horses touched the negative experimenter more than they did the positive experimenter. One possible explanation is that the horses interpreted the negative video as a conflict between the actor horse and the negative experimenter. Hence, the horses might have touched the negative experimenter as an appeasement behavior. Indeed, during post-conflict periods, friendly contacts between a third-party horse and group mates previously involved in a conflict have been described in horses (Cozzi et al. 2010). Similar results have also been found in bonobos in a cooperation paradigm (Krupenye and Hare 2018): the bonobos preferred the negative experimenter, who tried to steal a stuffed toy from a neutral experimenter, over the positive experimenter, who helped the neutral experimenter retrieve the toy. The authors proposed that the negative experimenter might have appeared more dominant than the positive one, which would explain the bonobos' preference, as they also showed an attraction to dominant individuals in the same study. It would thus be interesting in further studies to test whether a similar phenomenon could explain the behavior of the horses towards the negative experimenter.
To discriminate between the experimenters, the horses might have adjusted their behavior towards the experimenters based on the emotional state expressed by the actor horse during the interactions seen on video: the horses eavesdropped on the emotional state of a third-party horse involved in a social interaction with a human experimenter, thereby attributing a positive or negative valence to that specific experimenter; they remembered the information and used it to adjust their later behavior towards this experimenter. This type of eavesdropping has, for instance, been shown in human infants and in dogs: they adapt their behavior with a stranger depending on their mother's/owner's emotional reaction towards the stranger (Feinman and Lewis 1983;Murray et al. 2008;Freidin et al. 2013;Duranton et al. 2016).
Furthermore, the horses might have responded to the choice test based on the experimenters' behavior seen on video, either by using the behavior in itself (e.g., the experimenters' gestures), independently of the actor horse, or by interpreting this behavior as part of the interaction (e.g., an experimenter grooming a horse and an experimenter performing veterinary acts). In the latter case, they would have attributed a positive or negative valence to each experimenter based on the behavior the experimenter demonstrated during the interaction with the actor horse and then adjusted their own behavior in the choice test based on the experimenters' valence. This might be related to the ability of "social evaluation", described in the context of cooperation as the ability to attribute a value to a behavioral pattern demonstrated during a third-party social interaction, associate this behavior with a specific individual and use this information to adjust one's behavior towards this individual (Abdai and Miklósi 2016). Again, the horses could not have reacted to the human facial expressions in the videos (as shown in Proops et al. 2018), because we made sure that both experimenters kept a neutral facial expression.
In all cases, our results suggest that horses are capable of social learning by eavesdropping on the behavior of others (McGregor 1993). This is consistent with the results of previous studies showing that horses can attribute a valence to humans based on the previous interactions they have had together (Henry et al. 2005; Sankey et al. 2010). Our study goes one step further, as the horses attributed a valence based on indirect cues only, by observing a third-party interaction.

Video projection was validated as a tool to study horse behavior
The horses showed high interest in the videos, facing them for almost the entire duration of the video projection in both the test and the control conditions (Table S3). They faced the videos even more in the test condition (showing an interaction between an experimenter and a horse) than in the control condition (showing moving ovals that had no particular meaning for them; Online Resource 1). This confirms that they perceived the content of the videos. Furthermore, they reacted in accordance with the content of the test videos: negatively to the negative video and positively to the positive video. The horses also discriminated between the positive and negative experimenters seen in the videos when meeting them for the first time in real life. This implies that, during the choice test, the horses recognized in real life the experimenters they had seen in 2D. This is particularly impressive because there is a loss of information in 2D videos compared to reality (Fagot 2000): the horses did not have access to olfactory or auditory cues, nor to some visual aspects, such as depth and perspective. Previous studies have shown that horses can recognize people they have seen in pictures (e.g., Proops et al. 2018; Stone 2009); our results show that this ability extends to videos. To recognize the experimenters, our subjects might have used the experimenters' faces, body features or the color of their clothes. Hence, our findings are particularly interesting for further studies, as video projection can be a very useful tool for behavioral research, e.g., to study animals' understanding of emotions or of specific communication gestures.

Conclusion and implications for horse management and welfare
Our study shows that horses react appropriately to the interactions between a horse and a human seen on video: the horses expressed the same emotions as the actor horse seen in the videos through their facial expressions, behavior and heart rate. These results might be explained by a reaction to the valence of the experimenters' behavior during the interaction or by emotional contagion. Such emotional contagion would have occurred through visual means only and concerned emotions of both negative and positive valence. When meeting the real people afterwards in a choice test, the horses discriminated between them by touching the negative experimenter more, which may be interpreted as an appeasement behavior. Our results suggest that horses are capable of perceiving the valence of an interaction seen on video, of reacting emotionally in accordance with this valence, and of social eavesdropping.
Our findings have applied implications for horse welfare and management. Indeed, unpleasant veterinary or farriery procedures might best be conducted with no other horse watching; otherwise, the stress of the handled horse could spread to the other horses through emotional contagion, and the watching horses might attribute a negative valence to the vet or farrier, potentially complicating future encounters with them. Finally, if horses can attribute a valence to humans after seeing a positive or negative interaction with another horse, this type of set up could also be used to test the valence of a specific interaction for horses, e.g., handling or riding methods.