Grimace project documentation

March 17, 2009
Oliver Spindler 0100611 / 066 935 Thomas Fadrus 0075129 / 033 532

Vienna University of Technology Institute for Design and Assessment of Technology Supervisor: Ao.Univ.-Prof. Dr. Peter Purgathofer


Contents
1 Introduction
  1.1 Emotion model
  1.2 Goals
2 Related work
3 Design
  3.1 Graphical approach
  3.2 The Uncanny Valley
  3.3 Our approach
  3.4 Face model
4 Development
  4.1 Selected technology
  4.2 Iterative development
5 Technical details
  5.1 Muscles
  5.2 Features
  5.3 Emotions
  5.4 Splines
  5.5 Mappings
  5.6 Stroke styles
  5.7 Facedata file format
  5.8 Deployment and use
  5.9 Class diagram
6 Results
  6.1 Conclusion and future directions
References


1 Introduction

Emotions are important guides in our lives. Although they have been neglected since Plato as something to be overcome by rational thought, newer research shows that emotions play a key role in problem-solving and sensemaking (e.g. Ekman 2003, Norman 2004). Emotions are an aspect of affect, an umbrella term for cognitive phenomena apart from rational thought. Other kinds of affect include moods or personality traits.

Information entities, be it words, objects, ideas, music pieces or photographs, carry meaning. An entity's meaning can be seen as the sole reason for us to get interested in it; "meaningless" and "pointless" are synonyms. Meaning is frequently seen as comprising two complementary aspects, denotation and connotation (e.g. Ogden et al. 1969; Garza-Cuarón 1991; Barthes 1996). Denotation is the actual, or intended, meaning of something: the meaning of words you find in dictionaries, the plot of a film, the score of a music piece. Affective connotation is the emotion or feeling which we believe to be communicated by an entity, or which is aroused in us when we are exposed to the entity. With the rise of experimental psychology, this aspect has received more attention. An important step was the work of Osgood (1957), who quantitatively measured the affective connotation of words and inferred from the results a semantic space or affective space. Nowadays, connotation usually refers to this kind of meaning.

Seeing the importance emotional information plays in our understanding of the world, it should be natural to use this kind of information in the description of content on the web. However, content description is currently focused on denotation, while there are only few attempts to target affective connotation. A notable exception is the common usage of emoticons in textual communication. It is no coincidence that we use emoticons, which are symbolic abstractions of facial expressions, to express our emotional state and the affective connotation of our textual messages. Animals and humans across all cultures express emotions through facial expressions. This hardwired relation between emotions and facial expressions makes information about emotions an ideal candidate for describing affect on the web.

Emoticons show that detailed reproduction of facial expression is not necessary to convey emotions unambiguously. Reduced detail and complexity allow us to focus on those features which are necessary to effectively display a specific facial expression. Our hypothesis is that concentrating on relevant facial features shows emotions more clearly.

1.1 Emotion model

The nature of emotions is the subject of an ongoing scholarly debate. Over the years, many different explanations have been put forward, resulting in various emotion models. Most models are based on one of two major approaches. In the dimensional approach, emotions are described by a small number of independent dimensions – usually two or three. For instance, the circumplex model (Russell, 1980) describes emotion via the dimensions "valence" and "arousal". The categorical approach, on the other hand, assumes a finite number of basic emotions, which describe innate emotional reactions. These basic emotions are believed to have developed evolutionarily, guiding human behaviour in a world of unforeseeable events (Ekman, 1999; Sloboda and Juslin, 2001).

1.2 Goals

The goal of the project Grimace was to build a facial expression display using web technology which effectively conveys emotions as depicted by McCloud (2006, p. 83-85). This includes any primary (or basic) emotion and any secondary emotion (blendings of two basic emotions) in arbitrary intensity. The result should be a free component which can be easily integrated into other projects to enable the addition of emotional expressiveness to interactive systems.

Grimace was based on the book Making Comics by Scott McCloud (2006), a manual for artists on how to draw comics, which we used as the basis for our work. Chapter 2 of the book deals with how to convincingly draw facial expressions. Artists who want to draw convincing portraits of humans need to be expert observers of facial expression. A principal work for this research area is The Artist's Complete Guide to Facial Expression by Gary Faigin (1990), an excellent guide to drawing detailed facial expressions. McCloud (2006) takes visual cues for his model from Faigin's work and shows how to depict emotion through facial expressions in the world of comics.

In this chapter, McCloud develops an emotion model which follows the categorical approach, offering the ideal framework for our work. Accordingly, the emotion model of Grimace is based on a categorical model, postulating 6 basic emotions:

Anger, Joy, Surprise, Disgust, Sadness, Fear

Table 1: The basic emotions defined by McCloud (2006)

This list is based on the research of Paul Ekman. In a series of cross-cultural experiments, Ekman et al. (1972) showed photographs of facial expressions to members of different cultures. Since the posed expressions could be judged accurately, he inferred the universality of several specific emotions. While the existence and the number of basic emotions is still debated, his results show that these 6 emotions can be judged correctly.

In McCloud's model, these basic emotions can be blended to achieve complex emotions. He compares this process to the way arbitrary colours can be mixed from three primary colours. He calls the 6 basic emotions primaries, and blendings of two basic emotions secondaries. He asserts that mixtures can occur in arbitrary intensity and might even include three emotions. He gives example depictions and names for all primary and secondary emotions.
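To make this blending idea concrete, the following Actionscript 3 sketch represents such an emotional state in code. It is only an illustration of the primary/secondary idea – the class and its members are hypothetical and not part of Grimace's actual API.

    package {
        public class EmotionStateSketch {
            // Intensities in [0, 1] for the six primaries of Table 1.
            public var anger:Number = 0;
            public var joy:Number = 0;
            public var surprise:Number = 0;
            public var disgust:Number = 0;
            public var sadness:Number = 0;
            public var fear:Number = 0;

            // A secondary emotion is simply two primaries active at once, each at
            // an arbitrary intensity, e.g. blend("joy", 0.75, "surprise", 0.75).
            public static function blend(a:String, va:Number, b:String, vb:Number):EmotionStateSketch {
                var state:EmotionStateSketch = new EmotionStateSketch();
                state[a] = va;
                state[b] = vb;
                return state;
            }
        }
    }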

2 Related work

Realistic approaches

The development of dynamic face models is an important research area. Most of this work is being undertaken in the field of affective computing, which aims to enhance interactive systems with affective awareness and expressiveness. Accordingly, the goal was defined as "making machines less frustrating to interact with" (Picard, 1997, p. 214). Interactive systems should recognise the affective state of the user and adapt their output and behaviour accordingly. One commonly proposed solution is the use of embodied conversational agents (ECA). In such an environment, the system interacts with the user through an avatar with human-like appearance and expressiveness. ECA strive to be believable conversation partners. Therefore, they need to be able to express their emotions through facial expressions.

Wang (1993) undertook an early yet impressive attempt at building a three-dimensional face model, which predates affective computing. The face consists of a set of hierarchical b-spline patches, which can be manipulated by simulated muscles.

Usually, an ECA is modeled as a 3D head which achieves an impressive level of realism. Since an ECA also speaks, another requirement is appropriate facial animation which supports the impression that the ECA is actually speaking. There are publicly available ECA frameworks. The CSLU Toolkit¹ is one example; it is a comprehensive framework which facilitates application development with the included ECA. Another example is Xface², which makes use of the MPEG 4 Facial Animation standard. This standard is described by Pandzic and Forchheimer (2002) in detail.

Ochs et al. (2005) base their three-dimensional ECA on basic emotions, but allow them to be blended to achieve more subtle affective states. Albrecht et al. (2005) depart from the concept of basic emotions and base their agent on a dimensional approach. Their agent supports a text-to-speech system. The system analyses a text for certain terms whose coordinates for three affective dimensions are stored in a table. With these values, the system augments the spoken text with appropriate facial expressions. Zhang et al. (2007) follow a similar approach. In their system, high-level affective dimensions are translated into MPEG 4 Facial Animation instructions. Pan et al. (2007) focus on the notion that the basic emotions as postulated by Ekman (see section 1.1) are not the ones which are actually needed for conversations with believable agents. They developed a 3D character which expresses affective states like agreement, interest, thinking and uncertainty through facial expressions and animated gestures.

Comic-like approaches

The approaches described above usually employ characters which are designed for a high level of realism. Such a level of realism, however, is not necessary for this project. We aim to unambiguously express emotions through facial expressions, and McCloud (2006) shows that a certain level of abstraction is possible without any loss of expressiveness (see section 3). There are a few attempts to develop face models which aim to achieve such an abstracted comic-like appearance.

Bartneck (2001) conducted a study on the well-suitedness of different representations of facial expressions. Three representations at different abstraction levels were compared: photographs of a real face, an embodied conversational agent from the CSLU Toolkit, and highly abstracted emoticons (black & white, 10 × 9 pixels in size). Subjects rated facial expressions of these representations for convincingness and distinctness. Results showed that the very abstract emoticon was perceived to be as convincing as the photograph, while distinctness was rated as decreasing with increasing abstraction levels.

¹ http://cslu.cse.ogi.edu/toolkit/
² http://xface.itc.it/

Schubert (2004) describes a very simple comic-like face model, which only consists of eyes and mouth. His model is based on a dimensional approach: the shape of the mouth indicates valence, while the degree to which the eyes are opened represents arousal. These features are represented by parabolic functions. The model is used to visualise emotions expressed by music. Emotions are very short experiences which are not constant over the duration of a song; the model expresses these changes in affect.

Tanahashi and Kim (1999) developed a comic-like face model. Their model is designed to express four out of six basic emotions as defined by Ekman. They also experiment with exaggeration of the features and with the addition of symbols to achieve a higher level of expressiveness. Iwashita et al. (1999) also follow a comic approach in their system for caricature development. An interesting point is the use of a survey to improve the validity of the system, in which they asked subjects to pick the most expressive and convincing representation out of a few alternatives.

Discussion

This brief survey leads to the principles on which we built our system. Our goal is not to build a conversational agent; accordingly, we do not include animation or a speech component. The realistic three-dimensional approaches are designed for a purpose quite different from our goals, and it is not necessary to use an entirely realistic representation to achieve emotional readability. Although embodied conversational agents can express emotions through facial expressions, the ECA cited above do not include wrinkles in their design. However, wrinkles are an essential aspect of some facial expressions (disgust, for example). The comic-like models we have found, on the other hand, reduce the facial complexity to an extent which results in a loss of expressiveness. Tanahashi and Kim try to counter this with the addition of symbols. The symbols employed are highly culturally dependent, while cross-cultural research has shown that facial expressions on their own are universally understood. This is something we wish to avoid. In fact, we believe that expressiveness can be increased further.

3 Design

3.1 Graphical approach

Our main premise was to keep everything simple and minimalist, and we also applied this principle to the visual aspects of Grimace. Inspired by the approach applied in figure 1 by McCloud, we decided that we need to make our facial representation somewhat comic-like, but also natural enough to be able to stand on its own, because we did not want to use symbols to augment it. As seen in figure 1, the first three images all show the same emotion, which can be reliably identified as an anxious state of mind; it even gets easier to identify this state when the head is simplified. However, this cannot be done ad infinitum, because there is a point when too much of the facial features is omitted and the intended emotion may not be recognized any more.

Figure 1: McCloud (2006, p. 96)

In figure 1, this point is reached a little bit left of the dotted line. The face has been stripped of all the wrinkles, and even the primary features like the eyes, mouth and nose are mere dots and lines. It is difficult to read the emotion on the face. Symbolic augmentation, in this case drops of anxious sweat emanating from the face, is needed to make it work again. We did not want to use symbols in our implementation, because this would make blending different emotions together far more difficult. Furthermore, symbols aren't as easily recognised across different cultures (see section 1.1). Omitting all the unnecessary wrinkles and features makes the face easier to recognize, and it also eases the implementation and aids the performance of the system.

3.2 The Uncanny Valley

Another reason why we opted for a comic-like approach is the notion of The Uncanny Valley. The concept was introduced in a short essay by robot-researcher Masahiro Mori (1970). In this essay, he describes the phenomenon that when something (e.g. a robot) looks almost humanlike, it causes aversion rather than sympathy. Because we are so used to seeing human beings around us all day, slight deviations can cause a repulsive reaction: if the face is too humanlike, the viewer won't tolerate it if something isn't completely right. Small errors in the appearance or movement can have disastrous effects. Mori also states that movement amplifies the aversive effect, so that something that looks almost like a human and moves in an almost human fashion causes a repulsive reaction in an observer. However, this only happens at the very end of the human likeness graph. As can be seen in figure 2, a comical face would be located somewhere on the left slope of the ascent towards the humanoid robot, which puts it comfortably out of the uncanny area. A highly realistic 3D representation of a human, in contrast, would be very near to the right wall of the valley, but obviously deep in the eerie area of the uncanny. That's exactly why we chose to use a comic-like appearance for Grimace: it is far more difficult to get something to look right if it's supposed to be very humanlike, and the comic-like face allows us to work more freely.

Figure 2: Graphical representation of the Uncanny Valley, based on Mori (1970). Taken from Geller (2008).

Bartneck et al. (2007) conducted a study in which they tried to find out if Mori's predictions were accurate. They used pictures of robots at different levels of human-likeness, as well as pictures of real humans. Their findings did not confirm the predicted raise in likeability: more abstract depictions were perceived as more likeable than pictures of real humans. It has to be noted that Bartneck et al. measured the level of likeability in their study, not the level of familiarity which Mori described. They note that knowledge about whether the photo showed a real human or a robot did not have an influence on likeability. Furthermore, results were highly dependent on the perceived beauty of the stimulus. From their results, they infer that it might be more accurate to speak of an Uncanny Cliff – there might be an uncanny threshold. One important factor which influences familiarity is the depiction of the eyes.

Geller (2008) gives an up-to-date examination of the concept. He notes that there are a number of examples which contradict Mori's predictions. He closes with a recent quote of Mori, in which he, too, questions the predicted raise in familiarity. Instead, Mori now believes that the human ideal is best expressed by the calm facial expressions of Buddha statues.

3.3 Our approach

So the first step was to find a simplified version for each of the basic emotions laid out in table 1. To do this, we used the representations of McCloud (2006), omitted some wrinkles and removed the plasticity (see figure 3). As it was difficult to judge which ones could be left out, it would have been helpful to conduct an early experiment with these reduced representations, to verify that all the minimized faces still conveyed the intended emotion.

Figure 3: Simplified faces

Wrinkles that form when muscles in the face are contorted are as important for emotion recognition as the features themselves. As can be seen in figure 4, we first identified the wrinkles that occur in most of the basic emotions, like the wrinkles in purple around the mouth or the ones around the eyes in red, blue and green. Then we incrementally added the ones that help to recognise a certain emotion and added them to the set. It was also necessary to take some time and find out which wrinkles comprise the minimal set needed to accurately represent all the emotional states while still being manageable in terms of implementation complexity.

Figure 4: Essential wrinkles

Basically, wrinkles are crevices that form when skin is compressed through muscle tension. To simulate this behaviour of appearing and disappearing wrinkles, we used opacity: when a wrinkle would form on a human face, we would slowly raise the opacity of the corresponding spline.

3.4 Face model

We believe that closely following the biological process of how emotions result in facial expressions increases the credibility and clarity of the displayed emotion. In humans and animals alike, facial expressions result from the contraction of facial muscles. Therefore, the face model was implemented following a muscle-based approach. The muscles are basically a system of influences: when tension is applied to them, they deform the facial features they are attached to. These muscles are themselves influenced by emotions, which are defined as groups of muscles that are tensioned if an emotion is applied to the face. The tension is calculated with a mathematical function which was specified to match the non-linear behaviour of human muscles. This approach is explained in detail in section 5.

We realised that most facial expressions are fairly symmetrical and decided to model only half of the face and then mirror all of it over to the other side. This would ease development and also reduce the redundancy in the system.

Facial features had to be translated into graphical elements that could be transformed algorithmically. We found the necessary combination of accuracy and flexibility in Bézier splines.

All facial features and muscles are represented by one or more splines. The shape of Bézier splines is determined by a very small number of control points. The idea was to connect the virtual muscles to these control points in such a manner that contraction of the muscles results in natural-looking transformations of the splines. If a specific emotion is dialed in, the system checks which muscles are involved and calculates the corresponding tension through the function specified for this particular combination of emotion and muscle. After the calculation phase, the result is used to deform the splines which make up the features. If necessary, the system blends multiple muscle tensions together if they are attached to the same feature point. Furthermore, if more than one emotion is applied, Grimace also handles the interpolation based on a priority assigned to each muscle.

Figure 5 shows the first attempt to represent the facial features with a minimal number of Bézier splines. This early model proved to be not capable of expressing all necessary facial expressions and was augmented in later iterations.

Figure 5: First attempt to represent facial features via Bézier splines.

Figure 6: Face model

4 Development

4.1 Selected technology

Grimace has been developed in Actionscript 3 and is being deployed as a Flash / SWF file. Actionscript 3 is the scripting language used by Adobe Flash and Adobe Flex. Though not advertised, the language can be used on its own, without the need for an IDE like the Adobe Flash IDE or Adobe Flex Builder. The technology was selected for several reasons:

• Free: The Flex 3 SDK is available open source under the Mozilla Public License. It contains MXMLC, an Actionscript 3 compiler, which allows generation of SWF files entirely through Actionscript 3 code.

• Ubiquity and consistency: The Flash player is available for all major operating systems and has an extremely high install base. SWF files are displayed in exactly the same way across platforms and browsers.

• Optimised for dynamic vector graphics: Flash originated as a vector-based animation tool and offers comprehensive vector-drawing functions.

• Optimised for the web: Flash is a web-centric technology which delivers small file sizes and can be conveniently integrated in web projects.

4.2 Iterative development

After the visual style had been laid out, we started with the technical implementation. As already stated, we tried to keep the face model as simple as possible to keep it manageable and also performant. This approach was to be applied throughout the whole system.

First we modeled the eyes, as they are one of the simpler forms in the human face: they are fairly static and basically just need to open and close. This is an oversimplified assumption, but it sufficed to take the complexity of the modelling process away. In the course of developing the face model, we started with a minimum set of splines to define the features and iteratively added splines, trying to model every possible facial expression with them. If a simple spline turned out to be insufficient, it was extended and re-evaluated until it was sufficient to represent all of the basic emotions. These simple splines were then reworked and extended to cover the whole gamut of facial expressions. This gamut represents the entirety of facial expressions that can be covered with a certain set of emotions through blending. We had to model our system so that it would cover as much of this gamut as possible.

The next step was to define the muscles that would influence the features to form the respective facial expressions. Muscles were defined and linked to the spline points of their corresponding feature. Again, it was our premise to keep the muscular system as uncomplex as possible without sacrificing expressiveness. Then we had to try and match all the basic emotions with this setup; when it wasn't possible to mimic all the emotional states, the setup had to be adjusted and the trial phase repeated until every necessary facial expression could be formed with the defined muscular system.

When we had verified that the system was able to represent the basic emotions in this static setup, we had to put it all into motion. McCloud (2006) again proved to be a valuable resource at this stage, as he already defined very precisely how a face changes with different emotional intensities. We manually modelled four gradual steps for every emotion, visually aligning them with McCloud's illustrations of the according state. Then we printed out the corresponding muscle tensions and put them into a table. For every muscle, we dialed these values into Grapher.app and either manually matched a mathematical function to the point set or used so-called "curve fitting" to get an interpolated polynomial function. For example, figure 7 shows the relationship of two muscles with anger; the indicated forms are approximated by two mathematical functions. After this step we had a numerical representation of the motion flow of a certain emotion from neutral to a fully expressed state. These functions were then implemented via mappings (see section 5.5). Now we could define an emotion as a set of muscle influences with a mapping, which specifies how the tension changes with different intensities of this emotion.

Figure 7: Muscle tensions were plotted and interpolated for each emotion.

After that, a similar procedure had to be executed to model the wrinkles and tie them to the muscle system. Unfortunately it wasn't possible to simply connect the wrinkles to the same muscles that moved the feature splines, because the underlying physics are far more complex than our minimalist system could reproduce. So we had to define a separate muscle system for the wrinkles, because we needed very fine-grained control over how they moved. This also proved to be very helpful at a later stage when it was necessary to blend different emotions together, because we had separate control over the muscle systems.

After this step had been repeated for every muscle, every emotion and all the wrinkles, we had a pretty good system to dial in a basic emotion in a continuous way and get a meaningful representation of the defined state. The only thing left to do was to make the combination of emotions work as well. In parts this was automatically achieved by averaging the mathematical functions that formed the basic emotions. However, human emotions aren't just mathematical functions, so a mixture of different emotions is hardly ever just an average state between the corresponding basic emotions. To cope with this, a priority was assigned to every muscle; the ones with higher priority were favored in the blending process. A lot of manual work was required to determine the priorities so that every mixture resulted in a meaningful facial expression. This was quite tedious, but our system proved to be very manageable and our commitment to simplicity helped a lot in this phase.

When the face was capable of expressing any primary or secondary emotion, the component was adapted for deployment. This included the addition of a JavaScript API, which allows full control over the face's capabilities, and the construction of a project website.

5 Technical details

Grimace follows a muscle-based approach and thus mimics the way biological faces operate. In human and animal faces, facial expressions result from contraction of facial muscles. Facial muscles are, unlike muscles in any other part of the body, only fixed to bones at one end, while the other one is attached directly into the facial skin. This unique property allows the wide range of facial expressions humans are capable of displaying.

Our face model consists of three major components: emotions, muscles and features.

• Emotions, which are the high-level concept that influences a number of muscles in an arbitrary fashion. Each emotion affects specific regions of the face and results in familiar facial expressions.

• Muscles, which are the link between emotions and features. The shape of a muscle is defined by a spline, and when contracted it can move an arbitrary number of control points along its path.

• Features, which are the visible elements of a face. Typically, this includes dynamic features like eyes, eyebrows, mouth and wrinkles, as well as static features like nose and head outline. Features can be transformed by muscles.

In the following, each of these components and their underlying technologies are described. This is followed by a brief description of how Grimace can be put to use in other projects. A complete overview of all the classes is given in a UML-style class diagram.

Figure 8 illustrates how these three components work together to achieve a facial expression. It shows the mouth and its surrounding muscles for a neutral expression and four states of anger. The mouth is surrounded by several muscles, and the shape of the mouth is represented by two features, upper lip and lower lip. A muscle has a defined path and a current tension (the dot along its path). A feature consists of several control points, and each control point can be influenced by multiple muscles. When a muscle is contracted, it moves its tension dot along its path. Any control point which is influenced by the muscle is then moved, which results in a change of the feature's shape. Typically, when an emotion is present – the example shows the influence of anger – several muscles contract simultaneously.

Figure 8: Influence of anger on muscles surrounding the mouth, at intensities 0, 0.25, 0.5, 0.75 and 1.0.

5.1 Muscles

As explained before, facial muscles are fixed to a bone at one end and attached to skin at the other end. When muscles contract, they shorten and thus pull the skin towards the point where they are attached to the bone. We simulate this behaviour. However, unlike real muscles, muscles in Grimace have no width. The shape of a muscle is defined by a spline (see section 5.4). The tension parameter of a muscle corresponds to the position t ∈ [0, 1] along the spline: t = 0 is a completely relaxed muscle, while t = 1 represents maximum tension. The tension of a muscle is calculated from the emotions which exhibit an influence on the muscle (see section 5.3).

A muscle can be defined with the parameter initTension, which defines the neutral state for this muscle. This defaults to 0. An example is Levator palpebrae, which controls the upper eye lid. Since the eyes are halfway open in the neutral state, this muscle is defined with initTension. Thus, a neutral face – i.e. no emotion is active – results in contracted muscles. This is a point where we had to leave an accurate biological representation to achieve the desired facial expression gamut. In turn, the distance between the points Q(t = initTension) and Q(t = tension) influences the position of feature nodes (see section 5.2: Node influences).

Finally, muscles are grouped into instances of MuscleGroup. This grouping is optional, but currently muscles are divided into feature muscles and wrinkle muscles, the latter defining additional muscles which simulate the wrinkles that result when feature muscles contract.
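The following Actionscript 3 sketch illustrates this mechanism. It is not Grimace's actual Muscle class – the names and the loosely typed spline are assumptions of this example – but it shows how the displacement between Q(initTension) and Q(tension) can be derived, which is what moves the feature nodes described in the next section.

    package {
        import flash.geom.Point;

        public class MuscleSketch {
            public var spline:Object;       // anything exposing getPoint(t:Number):Point
            public var initTension:Number;  // neutral position along the spline
            public var tension:Number;      // current position, in [0, 1]

            public function MuscleSketch(spline:Object, initTension:Number = 0) {
                this.spline = spline;
                this.initTension = initTension;
                this.tension = initTension;
            }

            // Vector from the neutral point Q(initTension) to the current point
            // Q(tension); feature nodes are shifted by a weighted share of it.
            public function displacement():Point {
                var p0:Point = spline.getPoint(initTension);
                var p1:Point = spline.getPoint(tension);
                return new Point(p1.x - p0.x, p1.y - p0.y);
            }
        }
    }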

5.2 Features

Features, the visible parts of a face, can be transformed by muscles. The Feature class encapsulates distinct facial features, e.g. the upper lip, an eyebrow or a wrinkle. A feature is comprised of one or more segments, which are instances of the FeatureSegment class. The shape of a segment is defined by a spline. Thus, a feature can take an arbitrary shape by connecting several segments.

Node influences

A spline has two endpoints and 0 or more control points, referred to as nodes. Every point is represented by the FeatureNode class and can be influenced by an arbitrary number of muscles. For every node-muscle influence, a weight parameter is stored. For n registered muscles, the position of node N is evaluated in the following way: for each registered muscle M we calculate the distance between the muscle's position resulting from its current tension v and the position resulting from its initial tension t. The distance is scaled by the respective weight factor w. The node's initial position N0 is then translated by the resulting vector:

N = N0 + Σ_{i=1..n} w_i · (M_i(v) − M_i(t))

Fills

Features can also be filled arbitrarily; fills are represented by the FeatureFill class. Fills can also be influenced by muscles. For every fill, a FeatureNode represents a pivot point, which can be influenced by muscles like any other node and moves the whole fill when translated.

Alpha mapping

Not every feature is constantly visible. Wrinkles result from tightening of facial skin and thus only become visible when certain muscles are contracted. To simulate this behaviour, the visibility of features can be mapped to the tension of a muscle. In this way, the feature opacity can be controlled flexibly, which also adds a way to add animation.
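As a sketch of how this evaluation might look in code (again with assumed names, building on the MuscleSketch above rather than the actual FeatureNode class):

    package {
        import flash.geom.Point;

        public class NodeSketch {
            public var basePosition:Point;   // N0, the node's neutral position
            public var muscles:Array = [];   // registered muscle influences
            public var weights:Array = [];   // weight w_i per influence

            public function NodeSketch(basePosition:Point) {
                this.basePosition = basePosition;
            }

            public function addInfluence(muscle:MuscleSketch, weight:Number):void {
                muscles.push(muscle);
                weights.push(weight);
            }

            // N = N0 + sum_i( w_i * (M_i(tension) - M_i(initTension)) )
            public function currentPosition():Point {
                var p:Point = new Point(basePosition.x, basePosition.y);
                for (var i:int = 0; i < muscles.length; i++) {
                    var d:Point = muscles[i].displacement();
                    p.x += weights[i] * d.x;
                    p.y += weights[i] * d.y;
                }
                return p;
            }
        }
    }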

5.3 Emotions

Emotions are the high-level concept which we aim to display via facial expressions. Our emotion model subscribes to the idea that complex emotions are in fact mixtures of basic emotions. The 6 basic emotions we have implemented result in distinct facial expressions, which have been said to be recognisable cross-culturally (see section 1.1).

The presence of an emotion is represented by a parameter value ∈ [0, 1], where value = 0 means the emotion is not present and value = 1 represents maximum influence of an emotion. When an emotional state is present, it results in simultaneous contraction of a set of muscles. In real faces as well as in our implementation, this contraction does not follow value linearly: some features only start to be influenced when an emotion is strongly present, while others are continuously influenced, but more strongly in early than in later stages. Therefore, the relation is not direct but mediated through mappings – for every emotion-muscle influence, we have defined a mapping which allows flexible control over how a muscle is contracted for an emotion state (see section 5.5).

Since different emotions sometimes influence the same muscles, a muscle can be influenced by more than one emotion simultaneously. To handle this, influences have a priority parameter, which defines the influence of each emotion on the final tension of a muscle when two or more emotions are present simultaneously. For instance, a result of feeling surprised are widely-opened eyes, while a genuine smile not only influences the shape of the mouth but also results in squinting of the eyes. If joy and surprise are experienced together, the eyes remain open, because surprise has a stronger influence on the eyes than joy. This is represented by different priorities.

Given an influence of n emotions, with emotion values v_i, influence priorities p_i and raw emotion tensions t_i, the final tension of a muscle is calculated as:

t = ( Σ_{i=1..n} v_i · p_i · t_i ) / ( Σ_{i=1..n} v_i · p_i )
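A direct translation of this weighted average into Actionscript 3 might look as follows (a sketch with assumed names, not the project's actual source):

    package {
        public class TensionBlendSketch {
            // values[i]:     emotion value v_i in [0, 1]
            // priorities[i]: priority p_i of the emotion-muscle influence
            // tensions[i]:   raw tension t_i returned by the influence's mapping
            public static function finalTension(values:Array, priorities:Array, tensions:Array):Number {
                var numerator:Number = 0;
                var denominator:Number = 0;
                for (var i:int = 0; i < values.length; i++) {
                    numerator += values[i] * priorities[i] * tensions[i];
                    denominator += values[i] * priorities[i];
                }
                // With no active emotion the muscle falls back to a relaxed state.
                return denominator > 0 ? numerator / denominator : 0;
            }
        }
    }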

5.4 Splines

Spline is the common term for the use of piecewise parametric curves in computer graphics. All shapes in Grimace – facial features and muscles – are based on straight lines and Bézier curves. Bézier curves are a form of parametric curves which are commonly used in vector-drawing and animation software, and the selected technology offers native support for these types of splines. With splines, complex shapes can easily be described or approximated by very few points; they offer an easily understandable way to model the face. It is a notable property of Bézier curves that the curve does not run through the control points but is merely pulled towards them.

Facial features are all visible components of the face, e.g. the eyes, the mouth or wrinkles. Each feature consists of one or more segments, and the shape of each segment is defined by exactly one spline. In addition, muscles are also based on Bézier curves; the shape of each muscle is defined by exactly one spline.

Splines implement the ISpline interface. The interface defines the getPoint(t) method, which calculates the location of a point along the spline given the position t ∈ [0, 1], where t = 0 is the starting point of the spline and t = 1 is the endpoint. A spline has two endpoints and may have control points in between. The following splines are available for muscles and facial features:

Line

A spline which connects two endpoints with a straight line. Flash offers the native drawing method lineTo for this spline type.

Figure 9: Line

Quadratic Bézier

A Quadratic Bézier curve has one control point. The parametric form of a Quadratic Bézier curve is:

Q(t) = P0 · (1 − t)² + P1 · 2t(1 − t) + P2 · t²,   t ∈ [0, 1]

Flash offers the native drawing method curveTo for this spline type.

Figure 10: Quadratic Bézier

Cubic Bézier

A Cubic Bézier spline has two control points and offers great control over the curve form. The parametric form of a Cubic Bézier curve is:

Q(t) = P0 · (1 − t)³ + P1 · 3(1 − t)²t + P2 · 3(1 − t)t² + P3 · t³,   t ∈ [0, 1]

If two or more Cubic Bézier splines are concatenated, they offer enough flexibility to draw all necessary facial features, including the mouth, which demands the greatest flexibility.

Figure 11: Cubic Bézier
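The parametric forms above translate directly into an ISpline-style getPoint implementation. The sketch below (class name assumed for the example) evaluates a Cubic Bézier at a given t:

    package {
        import flash.geom.Point;

        public class CubicBezierSketch {
            public var p0:Point;
            public var p1:Point;
            public var p2:Point;
            public var p3:Point;

            public function CubicBezierSketch(p0:Point, p1:Point, p2:Point, p3:Point) {
                this.p0 = p0;
                this.p1 = p1;
                this.p2 = p2;
                this.p3 = p3;
            }

            // Q(t) = P0(1-t)^3 + P1*3(1-t)^2*t + P2*3(1-t)*t^2 + P3*t^3, t in [0, 1]
            public function getPoint(t:Number):Point {
                var u:Number = 1 - t;
                var b0:Number = u * u * u;
                var b1:Number = 3 * u * u * t;
                var b2:Number = 3 * u * t * t;
                var b3:Number = t * t * t;
                return new Point(
                    b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
                    b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y
                );
            }
        }
    }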

e. two or more curves may be joined together to form a curve with additional flexibility.C0 + C2 C + 1 4 2 C1 +C2 C2 +C3 C + C C 3 2 2 H1 = = 1 + 2 4 2 5C0 + 3C1 Q0 = C0 . S3 R1 R2 Q3=R0 Q0 C1 continuity S0=R3 S2 Q1 S1=RB Q2=RA Figure 13: Joiner 1 2 C1 1 8 1 2 1 2 18 7 8 1 2 C2 R1 S1 . Joiner For some facial features. a single Cubic Bézier curve does not suffice. S1 = 8 3C2 + 5C3 T1 = . connected Bézier curves only offer C0 continuity. Parametric continuity C n is a description of the smoothness of concatenated parametric curves: • C0 : curves are joined. however. one feature consists of more than one segment. at least C1 continuity is necessary. In these cases. i. Q1 = 8 7 H0 + H1 H0 + H1 R1 = . Without additional measures. T2 = C3 8 S + T1 S2 = T0 = 1 2 H0 = C0 +C1 2 + 2 + 2 C1 +C2 2 = In our implementation. • C1 : first derivatives are equal. while the approximation is handled internally by the class. the shape of the mouth or several wrinkles. In these cases. the spline can be used like a regular Cubic Bézier with two endpoints and two controlpoints. R2 = 8 2 Q1 + R1 Q2 = R0 = 2 H0 + 7 H1 S0 = R 2 . • C2 : first and second derivatives are equal. If two connected splines are to appear as a single and coherent curve.

Joiner

For some facial features, e.g. the shape of the mouth or several wrinkles, a single Cubic Bézier curve does not suffice. In these cases, one feature consists of more than one segment: two or more curves may be joined together to form a curve with additional flexibility. Without additional measures, connected Bézier curves only offer C0 continuity. Parametric continuity C^n is a description of the smoothness of concatenated parametric curves:

• C0: curves are joined.
• C1: first derivatives are equal.
• C2: first and second derivatives are equal.

If two connected splines are to appear as a single and coherent curve, at least C1 continuity is necessary. In these cases, the Joiner spline is used: a Cubic Bézier spline whose control points are calculated from the control points of adjacent splines to achieve C1 continuity.

Figure 13: Joiner

A Joiner spline R is constructed from two endpoints R0, R3 and two additional points RA, RB. Typically, RA and RB are set to the nearest control points of adjacent splines. For instance, if a Cubic Bézier Q ends in R0, then RA would be set to Q2; if a Cubic Bézier S starts in R3, then RB would be set to S1. These additional points are used to calculate the necessary control points R1, R2 to achieve C1 continuity in both endpoints. R1 and R2 lie on the lines formed by R0–RA and R3–RB. To ensure a smooth curve, the distance of the control points from the respective endpoints on their respective axis is derived from the distance between the endpoints. The concept is illustrated in figure 13.

The Joiner class is also used for mirroring. Assume a mirror through the vertical axis at position x = 0. If R0 = (x = 0, y = y0), the slope of R at x = 0 must be 0, which results in horizontal mirroring. This can be achieved by setting RA = (x < 0, y = y0), which places R1 at (x > 0, y = y0): R0 and RA form a horizontal line, resulting in zero slope at x = 0. When the curve is now horizontally mirrored at this point, C1 continuity is achieved.

5.5 Mappings

Each emotion influences a different set of muscles. Typically, the relation is a linear one – heightening the level of an emotion increases a muscle's tension. More often than not, however, the relation is much more complicated. In order to achieve credible muscle tensions, we used McCloud's drawings as references, which we wanted to match: McCloud offers drawings for each basic emotion in 4 intensity levels. For each emotion and each intensity level, muscles were adjusted to match the reference drawing, and the values of the muscle tensions were saved for each intensity level. Plots of the muscle tensions showed that the relationship is a different one for each combination of muscle and emotion. This relationship, however, is only indicated by 5 points (neutral and 4 intensity levels for each emotion) and needs to be interpolated.

We represent the relationships by a number of mathematical functions, which we call Mappings. A Mapping takes a few parameters which influence the resulting function in a flexible way to approximate the form of the underlying relationship. Every registered emotion-muscle influence is represented by a Mapping. Another relation represented by Mappings is the visibility of Features: some features – wrinkles – only become visible when a muscle is contracted. Representing this relationship through Mappings allows fine-grained control over the opacity.

The IMapping interface is merely a wrapper for a low-level mathematical function with one parameter and only has one method: function y(x:Number):Number. The y-method takes the current value of an emotion as parameter x and returns the current tension for the muscle. Three mapping types are currently available:

SineMapping

This form of Mapping is defined by four parameters. The function returns y0 for x < x0, and y1 for x ≥ x1. For x0 ≤ x < x1, the curve interpolates between y0 and y1, following the form of a sine function. This results in a smooth transition between the two states.

y(x) = y0                                                                      for x < x0
y(x) = (0.5 + 0.5 · sin(π · ((x − x0) / (x1 − x0) + 1.5))) · (y1 − y0) + y0    for x0 ≤ x < x1
y(x) = y1                                                                      for x ≥ x1

Figure 14: SineMapping

GaussMapping

This mapping represents the Gaussian function and is used in cases where a muscle is only contracted for intermediate values of an emotion, but not for low or high values. The mapping takes three parameters: value = a, mean = µ, variance = σ².

y(x) = a · (1 / (σ · √(2π))) · e^(−(x − µ)² / (2σ²))

Figure 15: GaussMapping – (a) influence of the scale factor a, (b) influence of the variance σ²

PolynomialMapping

This is a direct representation of a polynomial function. It can approximate any necessary form by increasing the order of the polynomial.

y(x) = a_n·x^n + a_(n−1)·x^(n−1) + · · · + a_2·x² + a_1·x + a_0

However, the function is hard to configure manually. In practice, we used the curve-fitting methods of Grapher.app, which calculates a polynomial interpolation of desired order for a given point set.
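In code, a mapping of this kind is just a small class exposing the y(x) method. The sketch below (an illustration with assumed class names, not the project's actual SineMapping source) implements the sine-based mapping above; the Gaussian and polynomial variants follow the same pattern with their respective formulas.

    package {
        public class SineMappingSketch {
            private var x0:Number;
            private var x1:Number;
            private var y0:Number;
            private var y1:Number;

            public function SineMappingSketch(x0:Number, x1:Number, y0:Number, y1:Number) {
                this.x0 = x0;
                this.x1 = x1;
                this.y0 = y0;
                this.y1 = y1;
            }

            // Takes the current emotion value x and returns the muscle tension.
            public function y(x:Number):Number {
                if (x < x0) return y0;
                if (x >= x1) return y1;
                var s:Number = 0.5 + 0.5 * Math.sin(Math.PI * ((x - x0) / (x1 - x0) + 1.5));
                return s * (y1 - y0) + y0;
            }
        }
    }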

5.6 Stroke styles

The shape of features is represented by splines. Stroke styles determine how the splines are visually represented. Stroke styles implement the IStrokeStyle interface; the interface's draw method supplies the style with the spline to be drawn. If no stroke style is set, the spline is simply stroked by a constant-width brush. However, in many cases this does not deliver favourable results.

BrushStyle

Currently, BrushStyle is the only stroke style available. It simulates the characteristic form of a brush: thin lines at the start, getting thicker towards the center, and again thinner towards the end. This corresponds to the parameters startWidth, maxWidth and endWidth. From the spline to be stroked, two splines are derived which define the shape of the stroke: one spline defines the upper edge, the other one defines the lower edge. In every point of the base spline, a normal is drawn. On each normal, the positions of the points of the upper and lower splines are shifted – points of the upper spline to the left, points of the lower spline to the right. Thus, maxWidth does not directly represent the actual thickness of the resulting stroke, but the distance of the control points. The concept is illustrated in figure 16.

Figure 16: BrushStyle applied to a Cubic Bézier spline
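As a rough sketch of the underlying geometry (not the actual BrushStyle code – the class name and the finite-difference tangent are assumptions of this example), a point of the upper or lower edge can be obtained by offsetting a point of the base spline along its normal:

    package {
        import flash.geom.Point;

        public class BrushOffsetSketch {
            // Returns the base point at t shifted along the normal by `offset`
            // (positive offset for one edge, negative for the other).
            public static function offsetPoint(spline:Object, t:Number, offset:Number):Point {
                var p:Point = spline.getPoint(t);
                // Approximate the tangent with a small finite difference.
                var q:Point = spline.getPoint(Math.min(1, t + 0.001));
                var dx:Number = q.x - p.x;
                var dy:Number = q.y - p.y;
                var len:Number = Math.sqrt(dx * dx + dy * dy);
                if (len == 0) return p;
                // The normal is the tangent rotated by 90 degrees.
                return new Point(p.x - (dy / len) * offset, p.y + (dx / len) * offset);
            }
        }
    }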

5.7 Facedata file format

Faces are entirely defined through external files which are loaded at runtime. Facedata is an XML-based file format. A complete set of Facedata defines the following:

• Features, which are the visible elements of a face. Typically, this includes dynamic features like eyes, eyebrows, mouth and wrinkles, as well as static features like nose and head outline. Features can be transformed by muscles.

• Muscles, which are the link between emotions and features. The shape of a muscle is defined by a spline and, when contracted, it can move an arbitrary number of control points along its path.

• Emotions, which are the high-level concept which influences a number of muscles in an arbitrary fashion. Each emotion affects specific regions of the face and results in familiar facial expressions.

• Overlays, which are optional graphical elements added on top of the face to add additional personality to the face. In the standard model, the hairdo is an overlayed vector graphic. Pixel-based graphics can also be included.

Facedata definitions can be spread across files. The loadFacedata API method takes an array of URLs as parameter, loading the files in the supplied order. A corresponding DTD is kept up-to-date³ with the current capabilities of Grimace and allows face developers to validate their files through an XML validation service⁴. Currently, no graphical editor is available; Facedata has to be edited manually.

5.8 Deployment and use

Grimace is a self-contained component which enables the addition of facial expressions to software projects. Typically, the component is deployed as a SWF file and can be opened by Adobe Flash Player 9 and upwards. The component can be downloaded from the project website and includes detailed instructions and demo files.

Control of the component is offered by an API which is compatible with JavaScript and Actionscript 3. The recommended method is to embed Grimace into web pages and control it through JavaScript via the API. Being written in Actionscript 3, Grimace can also be controlled via Actionscript 3; apart from pure AS3, this includes Flex and Flash (from version CS4 upwards). The AS3 API is basically identical to the JavaScript API but less tested.

Customisation

The download package includes a complete face in the form of a set of Facedata XML files. We encourage the development of new faces based on these definitions. This allows the development of faces which look entirely different to the standard face we developed. Additional emotions can also be implemented.

Currently, no graphical editor is available. However, since the definitions can become quite large and data have to be edited manually, the package also includes Facemap.swf, the tool we used to develop the face definition. The tool allows muscles and their current tension to be shown, underlayed pictures to be included as reference, and the current state of all components to be output. Values, however, still need to be edited manually.

5.9 Class diagram

Figure 17: Class diagram

³ The latest version of the DTD can always be found at http://grimace-project.net/dtd/latest.dtd
⁴ e.g. http://www.validome.org/xml/

6 Results

We have developed a software component which can display all primary and secondary emotions as depicted by McCloud. Furthermore, primaries can be blended in arbitrary intensities, thus covering states not covered before. The resulting face is shown in figure 18 with a neutral expression. Figure 19 shows the 6 emotional primaries at four intensity levels. In figure 20, any combination of two primaries (both at 75% intensity level) is shown.

The component has been released to the public under a Creative Commons licence. A project website⁵ has been implemented. The website features a demo application that allows visitors to express arbitrary blendings of any two emotions. A download package is available, which includes the component, demo applications for all supported programming environments and comprehensive documentation on how to use the component.

Public reactions to the project were notedly positive, shown in a large number of approving comments. Scott McCloud kindly featured our project on his blog on February 25, 2009, emphasising that facial expressions should be taught in school – a purpose for which our project might be very useful. We are also very thankful to Mr. McCloud for his encouraging words and useful comments about our work at an intermediate stage of the project.

Figure 18: Neutral expression

⁵ http://grimace-project.net

Figure 19: Primary emotions (joy, surprise, fear, sadness, disgust, anger) in 4 intensity levels

Figure 20: Secondary emotion blendings of intensity level 3

6.1 Conclusion and future directions

The described software component Grimace displays emotions through a comic-like face. The work of McCloud (2006) was used as guide and visual reference throughout the design and development process. First and foremost, we include all facial features which are necessary to convey an emotion while omitting the rest, and we believe to have found a useful compromise between simplicity and necessary detail.

The component was developed using web technology, which allows easy deployment. We defined an API which allows convenient integration into other projects without the need for knowledge about technical details. All configuration data is loaded from external files, which use an XML-based file format. The file format is fully documented and allows full customisation of all aspects – features, muscles and emotions. These are the parts which can be exchanged easily. A project website was implemented, from which the component and documentation can be downloaded. The component is stable and ready for use for the intended purpose. We believe that the display of emotional information is a valuable addition to information resources, and facial expressions are a natural way of expressing this kind of information.

While we believe that the goal of the project has been achieved in a satisfactory manner, there are many areas which remain to be addressed, a few of which will be outlined in the following.

First of all, the current face model can be further optimised. We had to add additional muscles to the principal facial muscles in a few cases to achieve the desired expressiveness; it might be possible to reduce the number of necessary muscles by optimising the definition of the actual muscles. Furthermore, it would be interesting to see if the intended emotions are actually recognized in the facial expressions Grimace produces. In order to verify this, an online experiment was conducted. The setup consisted of a number of facial expressions rendered by Grimace. Next to the face, the 6 basic emotions (see figure 1) were listed, and the participants had to specify which emotions they associated with the shown expression. A website was designed and built to make the survey easily available to a large audience. About 200 people from around the world participated in the experiment. The analysis of the collected data is in progress, and the findings may be incorporated in the next iteration of Grimace to improve the readability of the emotions.

Our model has a comic-like appearance. Calder et al. (2000) show that comprehensibility of facial expressions can be increased further if the characteristic features of an expression are exaggerated. It might thus be possible to make our model even more expressive if we allow a certain level of unrealistic, cartoon-like exaggerated expressions.

So far, the system can only display facial expressions which represent emotional states. Of course, humans can communicate much more through their faces, which can be easily observed by studying the wide range of facial expressions which actors can display. Facial expressions which cannot be expressed currently include doubt or agreement. The Facial Action Coding System, or FACS (Ekman et al., 1978), describes a comprehensive framework of all possible facial movements. If the range of possible facial expressions was to be extended, this framework would offer a good basis. In FACS, asymmetrical movement of features is possible. Right now, facial features are completely symmetrical, so extending the system in this direction would also mean a departure from the mirroring of facial features. Ideally, the system would still mirror those parts that are symmetrical and only consider the differences to the symmetrical state when necessary.

Customisation and extension of the current face model would become much easier if a graphical editor was available. Currently, the control points for features need to be entered manually in XML files. Ideally, such an editor should facilitate customisation of the visible features of a face. The relationships between muscles and emotions, however, need considerable attention and are quite tedious to change.

pdf. Basic emotions. In Smart Graphics: 5th International Symposium. and A. Facial action coding system. J. pages 45–60. 2005. 1978. Friesen. R. Seidel. Overcoming the uncanny valley. T. The Communication Theory Reader.pdf. P. 2002. pages 129–133. Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life. Virtual Reality. Haber. Magee. P. Pergamon. Kanda. Etcoff and J. How Convincing is Mr. Keane. and D. Categorical perception of facial expressions. J. P. Handbook of cognition and emotion. Schröder. RO-MAN. Calder. Mixed feelings: expression of non-basic emotions in a muscle-based talking head. Caricaturing facial expressions. B. and H. M.nih.gov/pubmed/1424493.htm. Denotation and connotation. 2007. 11(4):279–295. Cognition. H. 1972. IEEE Computer Graphics and Applications. Hager. URL http: //www. Groleau. URL http:// timotheegroleau. Germany. Watson-Guptill. Hagita. Face. Approximating cubic bezier curves in flash mx. 1999. Perrett. 29 .com/ retrieve/pii/S0010027700000743.ncbi. 8(4):201–212. 1990. pages 368–373. Ishiguro. Nimmo-Smith. Xface: Open Source Toolkit for Creating 3D Faces of an Embodied Conversational Agent. Ekman. D. Bartneck. Rowland. C. Cognition. C. I.elsevier. Barthes. 28(4):11–17. S. Connotation and Meaning. URL http://www. Ekman. 2000. August 22-24. Garza-Cuarón. Mouton De Gruyter. and N. Caricature generator: The dynamic exaggeration of faces by computer. 1996. Balci. URL http: //www. A. 2005: Proceedings. Times Books. Brennan. 18 (3):170–178. 2005. Emotion in the Human Face: Guidelines for Research and an Integration of Findings. Faigin. 1992.nlm. Springer. Leonardo. Frauenwörth Cloister. Young. G. J.com/index/G407T21751T81161.org/theses/wijayat/sources/writings/papers/basic_emotions. T. N.com/Flash/articles/cubic_bezier_in_flash. User Modeling and User-Adapted Interaction. Geller. Ekman.springerlink. SG 2005. 1985. The Artist’s Complete Guide to Facial Expression. Ellsworth. W. 44(3):227–40.vhml.References I. Is The Uncanny Valley An Uncanny Cliff? In Proceedings of the 16th IEEE International Symposium on Robot and Human Interactive Communication. P. 2003. 1991. 2001. URL http://linkinghub. W. and P. T. 2008. Ekman. Bartneck. Albrecht. Friesen. A. 76(2):105–146. Data’s Smile: Affective Expressions of Machines. K.

ieee. I. Wiley. Osgood.com/index/7007280gtq412j0h. Gillies. Sezgin. J. University of Illinois Press. edu/~staadt/ECS280/Mori1970OTU. 1999. M. McCloud. Psychological perspectives on music and emotion.springerlink. C.pdf. Ortony and T. FUZZ-IEEE’99. Niewiadomski. Forchheimer. On Auditory Displays (ICAD).jsp?arnumber=818417. Russell. A. I.cs. 2007.cs. URL http://books. Norman.ucdavis.apa. 1990. A comic emotional expression method and its applications. 2002.arts. MIT Press. Proceedings of the IEEE Region 10 Conference. Affective Computing. The Measurement of Meaning. 1999 IEEE International.. Journal of Personality and Social Psychology. In TENCON 99. 1999. 97(3):315–331. T. Richards. 2004. Turner. C. URL http://www. volume 1. 1969. Juslin. 39(6):1161– 1178. Routledge & Kegan Paul.com/index/l8607854jt5q23l9. MPEG-4 Facial Animation: The Standard. and U. 2006.org/journals/psp/39/6/1161. Basic Books.google. 1980. 1970. X. Takeda. Int. E. URL http://music. Pelachaud. volume 3. URL http://content. Psychological Review. A circumplex model of affect. Emotional Design: Why We Love (or Hate) Everyday Things. In 1st International Conference on Affective Computing and Intelligent Interaction ACII. springerlink. J. LECTURE NOTES IN COMPUTER SCIENCE. Ochs.S. 7(4):33–35. URL http://www. and D.jsp?arnumber=790143. R.pdf. Springer. Music and emotion: Theory and research. S. Tanahashi and Y. URL http://www. Making Comics: Storytelling Secrets of Comics. In Fuzzy Systems Conference Proceedings. Iwashita.org/xpls/abs_all. URL http://books.org/xpls/abs_all. Y. S. Schubert. URL http://ieeexplore.pdf. Implementation and Applications. Loscos. Ltd. In Proc. C. Sloboda and P. M. The Meaning of Meaning: A Study of the Influence of Language Upon Thought and of the Science of Symbolism.au/aboutus/ research/Schubert/ICAD04SchubertEmotionFace.pdf.ieee. M.unsw.google.com/books?hl=en&lr=&id=Qj8GeUrKZdAC&oi=fnd&pg=PA1&dq= osgood+measurement&ots=RFI2_XNI8d&sig=hv5zzkO69BJWzCIK-37hS8QoecU.at/books? hl=en&lr=&id=GaVncRTcb1gC&oi=fnd&pg=PP11&dq=picard+affective+computing&ots= F1k6rlAaab&sig=qxVU7LSWnrL3XWmOthw7YX3cC-U. Emotionface: Prototype facial expression display of emotion in music. Pan. What’s basic about basic emotions. and T. Sadek. Manga and Graphic Novels. 30 . Conf. Eco.northwestern. 4738:745.edu/~ortony/papers/basic%20emotions. R. 2004. 1999. Mori.edu. Energy. Expressive facial caricature drawing. URL http://ieeexplore. 1997. HarperPerennial. The uncanny valley. Expressing complex mental states through facial expressions.pdf. Onisawa. 1957. Kim. URL http://graphics. 2001. pages 71–104. Picard. 2005. Intelligent expressions of emotions. and C. D. Pandzic and R. Ogden.

Z. Cai. University of Calgary. Wang. URL http://www.com/index/118766717256766j. H. Meng. PhD thesis.C. 31 .springerlink. 4738:24. Wu. LECTURE NOTES IN COMPUTER SCIENCE. 1993. Langwidere: A Hierarchical Spline Based Facial Animation System with Simulated Muscles. Facial expression synthesis using pad emotional parameters for a chinese expressive avatar. S. 2007. and L.pdf. Zhang.