



A review of Facial Caricature Generator
Suriati Bte Sadimon, Mohd Shahrizal Sunar, and Habibollah Haron
Abstract— A caricature is a pictorial description of a person or subject that exaggerates the most distinguishing features and oversimplifies the common ones, making the subject 'unique' while preserving a recognizable likeness. A facial caricature generator is developed to assist the user in producing facial caricatures automatically or semi-automatically. It derives from rapid advances in computer graphics and computer vision and has been introduced as part of non-photorealistic rendering technology. Recently, the facial caricature generator has become a particularly interesting research topic owing to its advantages of privacy, security, simplification and amusement, and to its rapidly emerging real-world applications in magazines, digital entertainment, the Internet and mobile applications. This paper reviews the uses of caricature in a variety of applications, the theories and rules in the art of drawing caricature, how these theories are simulated in the development of caricature generation systems, and the current research trends in this field. There are two main categories of facial caricature generator based on their input data type: the human centered approach and the image centered approach. The paper also briefly explains the general process of generating a caricature. State-of-the-art techniques for generating caricatures are described in detail by classifying them into four approaches: interactive, regularity-based, learning-based and predefined database of caricature illustrations. Expressive caricature, which evolved from neutral caricature, is also introduced. Finally, the paper discusses relevant issues, problems and several promising directions for future research.

Index Terms—caricature, exaggeration, face image, facial feature extraction, computer graphics and vision.

1 Introduction

Caricature is a pictorial or literary representation of a person or thing that exaggerates the most distinctive features and simplifies the more common ones, in order to set the subject apart from others and to create an easily identifiable visual likeness. A facial caricature represents the features of an individual face in a simple and effective way, exhibiting the extraordinary characteristics of a person. A caricature can be drawn in a humorous, comical, laughable, insulting or offensive style depending on its purpose. The word 'caricature' is often confused with portrait and cartoon drawing. A portrait is any artistic representation of a person whose intent is to display the likeness, personality and mood of that person. A cartoon is a piece of art, usually humorous or satirical in intent. The proportions of the facial features in a caricature are exaggerated and the style is much simpler, whereas no exaggeration exists in portrait drawing: the proportions of the facial features in a portrait must be exactly those of the subject. A caricature should look like a real person even though it is exaggerated, but in a cartoon the picture does not refer to any particular real person, and the person might not exist in the real world. If someone known by the viewer is seen in a cartoon, the cartoon becomes a caricature. A caricature can thus be viewed as a combination of portrait drawing and cartoon drawing. However, a caricature does not necessarily have to be a cartoon. It could be a painting, or a


 Suriati Sadimon, Department of Computer Graphics and Multimedia, Universiti Teknologi Malaysia, Johor, Malaysia.  Mohd Shahrizal Sunar, Department of Computer Graphics and Multimedia, Universiti Teknologi Malaysia, Johor, Malaysia.  Habibollah Haron, Department of Industrial Computing, Universiti Teknologi Malaysia, Johor, Malaysia.

sculpture. As long as the proportions are incorrect yet the subject is recognizable as a particular person, it can be seen as a caricature [1]. A digital caricature is a caricature drawn with artistic skill using computer software such as Corel Painter, Flash, Photoshop and Illustrator. It is drawn completely freehand, directly on the computer, using a tablet, an electronic pencil or a mouse. Computers can genuinely aid the process of drawing caricatures by making it quicker and easier. However, some artists prefer to draw caricatures in the traditional, natural way using pencil and paper and then scan the drawing into the computer; this is called a scanned digital caricature. A facial caricature generator, on the other hand, produces facial caricatures automatically or semi-automatically using computer graphics techniques. There are very few software programs designed specifically for automatically creating caricatures, and they are still not really successful [2]. A facial caricature generator attempts to imitate the artist in drawing caricatures by converting the process into formulas and algorithms executed by the computer. It can assist users in producing caricatures, whether they are skilled or have no ability to draw caricatures at all. Therefore, this paper provides a comprehensive and critical review of recent research activity, approaches and applications in facial caricature generators. This paper is organized as follows: in section 2, we briefly survey the applications of caricature. Section 3 gives a brief review of the theories and rules in the art of drawing caricature, including the elements of caricature and the process of drawing a caricature. The categories of facial caricature generator based on their input data type are described in section 4, and the general process of generating a facial caricature is explained in detail in section 5.
In section 6, we provide a detailed review of the different approaches to generating facial caricatures from a face image. Section 7 briefly addresses expressive caricature. In section 8, we discuss several issues, problems and promising directions for facial caricature generator research. Lastly, section 9 summarizes and concludes this review paper.

© 2011 Journal of Computing Press, NY, USA, ISSN 2151-9617

2 Applications of Caricature

2.1 Caricature in Entertainment and Political Magazine
Caricature has been widely used in daily life for the past few decades. It can often be seen in magazines and newspapers for a variety of purposes. Caricatures of movie stars are often displayed in entertainment magazines, where they are used for entertainment through the combined use of fantasy and humor. Caricature is also used to express social and political perspectives, such as criticizing intolerance, injustice, political corruption and social evils through humor and satire. Caricatures of politicians commonly appear in editorial cartoons. Political caricatures are generally thought provoking and strive to educate the viewer about a current issue. A typical newspaper article needs a great many words to deliver information and ideas, whereas a political caricature can bring issues down to ground level and expose them to sharp ridicule by reducing an entire article to a simple picture. Political caricatures have proved to be a powerful vehicle for swaying public opinion and for criticizing or praising political figures [3].

2.2 Caricature in Internet and Mobile Application
With the emergence of Internet and mobile technology, caricature has been used for social communication and entertainment over the web or mobile phone. Users can protect their identity and real image from other users for security purposes while still keeping the basic facial gestures recognizable. Caricature is well suited to network environments because its simplicity in representing a person reduces the computational burden and the bandwidth requirement compared to a real photographic image. Caricature is used as a display picture to create a memorable impression of a particular user in visual messengers, visual chat rooms, bulletin boards and forums. It is also used as an avatar in virtual communities such as virtual museums, virtual classrooms, interactive movies and multi-user role-playing communities in virtual games. An avatar is a graphical representation of a person in a virtual environment. According to K. L. Nowak and C. Rauh [4], people prefer an avatar that matches their own gender and type (human) and shares other characteristics similar to their own. Thus, using a facial caricature of a person as an avatar may be a good choice, since the caricature can provide users with valuable information about their partners and can reduce uncertainty in their interactions. Caricature is also used as an avatar in blogs, websites and profile pages on social networking and dating sites. Beyond that, caricature can be used in e-business cards, e-greeting cards and e-logos, or as a gift idea for many occasions. Much research has been carried out concerning the use of caricature in Internet and mobile phone applications [5], [6], [7], [8], [9]. J. Liu et al. [5] and R. Q. Zhou and J. X. Liu [7] converted people's real photographs captured by a mobile camera into black-and-white caricatures, which were then used to synthesize multimedia animation messages transferred to the mobile terminal as entertainment. M. Lyons et al. [8] used caricatures of human faces in creating a personalized avatar system. J. Liu et al. [6] proposed a prototype system for online chatting in which caricature was used for people to express their emotions or to show their identity to those they communicate with. M. Sato et al. [9] used caricature with a small set of characteristic points for non-verbal communication on mobile phones.

2.3 Caricature in Face Recognition
The use of caricature for face recognition and perception has been extensively studied in the cognitive psychology, visual perception, computer vision and pattern recognition areas. Caricature is often used in facial recognition, especially to recognize criminal faces in police investigations or to identify subjects in forensic processes. Victims or witnesses are often asked to describe the facial features of the criminal, and an artist then draws the caricature according to the description. Caricatures are generally perceived as better likenesses of a person than the veridical images [10], because the exaggeration of the distinctive features in a caricature emphasizes the features of the face and enhances recognition. Distinctive features are important in face recognition, and faces with unusual features are better recognized than more typical faces [11]. Caricature is used not only during face recognition but also to increase the amount of memorable information about the face encoded for later recognition. People often recognize faces of their own race better than those of other races [12]. J. Rodriguez et al. [12] used caricature to reduce the other-race effect in facial recognition and to help individuals focus their attention on the features that are useful for recognition. G. Rhodes et al. [13] found that line-drawing caricatures can be recognized better than the veridical image, but photographic caricatures could not. C. Frowd et al. [10], [14] and J. Smitaveja et al. [15] employed caricature in face recognition systems using different approaches.

2.4 Caricature in Information Visualization and Education
The basic concept of caricature, which exaggerates the characteristic features of a subject and simplifies the overall structure, is adopted in information visualization for better understanding of the subject. It provides a powerful metaphor for illustrative visualization, called caricaturistic visualization [16]. Caricature is also used to visualize differences between subjects or concepts that are hard to notice, or to help the user understand those differences. Potential applications include quality control, comparative biology, case-based education for medical students and deformation surveillance. Caricature is also used to enhance learning, especially complex learning. It helps the cognitive system pick up the distinctive features of the learned material and minimizes any perceptual or representational confusion [17].

2.5 Caricature in User Interface Agent
Caricature is also used as an interface agent to assist a user in accomplishing daily computer tasks such as sorting email, filtering information and scheduling meetings. It can help the user interact with the agent and understand the agent's behavior or character through its external traits [18]. Caricature can also make a computer more human-like, engaging, entertaining, approachable and understandable to the user, establish relationships with users, and make users feel more comfortable with computers [19]. It also has strong implications for user interaction with the system, making the user more likely to be cooperative.



3 Theories and Rules in the Art of Drawing Caricature

3.1 Elements of Caricature
There are three essential elements that must be present in a caricature [20], [21]. The first element is likeness. All good caricatures must bear a good likeness to their subjects: if the caricature cannot represent who it is supposed to be, then it is not a caricature. The second element is exaggeration. Every caricature must have some form of exaggeration, a departure from the exact representation of the subject's features. Without exaggeration, what we have is a portrait, not a caricature. The degree of exaggeration employed by individual caricaturists varies enormously. Exaggeration is the essence of a caricature, and it is not distortion: exaggeration is the overemphasis of truth, while distortion is a complete denial of truth [22]. For instance, if the mouth of a person is prominent, make it enormous, not tiny. There are, however, extraordinarily creative caricaturists who succeed in distorting some facial features without losing the likeness. The last element is statement, or subjective impression. A caricature must have something to say about its subject. It might concern the situation the subject is drawn in, the subject's personality conveyed through expression or body language, or visual fun with some aspect of their image.

3.2 Process of Drawing Caricature
Drawing a caricature is an inborn talent that exists only in a few people [23]. This talent is embedded in the subconscious mind and hard to explain. Even talented people still have to be taught, because the talent may go unnoticed or undiscovered until later in life. Drawing a caricature needs eye-hand coordination: the eyes pay attention to the subject and the hands draw it based on the information received through the eyes. In addition, the brain is needed to find the most prominent facial features that make the person different from everyone else [21]. The process of drawing a caricature involves two basic steps, observation and exaggeration. Observation is the most important thing to do in drawing a caricature; the likeness can be lost very easily if the observation is not carefully accomplished. Caricaturists need to understand what they see in the individual face that makes it recognizable, observe which of the facial features are larger, smaller, sharper or rounder than most people's, identify the salient facial features that make the person unique, and extract their own personal interpretation. A person's distinguishing characteristics can be determined by comparing his or her facial features with a reference. According to psychologists [13], human beings have a "mean face" recorded in their brain, an average of the faces they have encountered in life, and this "mean face" acts as a reference. A distinguishable facial feature is one that is larger, smaller, sharper or rounder than average. However, a caricaturist considers the difference not only from the average face but also from the surrounding features of the same subject. L. Redman [22] employed the principle of relativity to define the distinctive facial features. Relativity in the arts can be thought of as having two parts: the relationship of things to others of their own kind, and the relationship of things to their surrounding and abutting elements. The first part gives information about the size or shape of a facial feature compared to others, such as an eye that is larger than most, whereas the second part concerns the distances or aspect ratios between facial features, such as the distance between the eyes or the ratio of mouth width to nose width. L. Redman [22] used an in-betweener as a frame of reference for determining how to exaggerate the subject's features, exaggerating any feature that differs from the in-betweener. T. Richmond [20] defined the relationship as the distances between five simple shapes, their sizes relative to one another, and the angles they form with respect to the center axis of the face. The five simple shapes are the left eye, right eye, nose, mouth and face shape. He used classical portrait proportions as a point of reference to observe where the subject's face might differ, and to decide which relationships to change and by how much. The width of an eye is used as the primary frame of reference. It is difficult to observe faces whose unique features are not obvious; to make observation easier, the face is simplified into the five basic shapes by eliminating details. The second step is exaggeration. The way caricaturists draw and exaggerate a caricature depends on their style. Most artists start drawing a caricature with the simplest form or outline of the subject after determining, from their observation, the unique features or relationships to be exaggerated. A simple shape is easier to draw, control and exaggerate to the desired form than one with many complex elements. The caricaturist often creates several small drawings of different exaggerated facial features and then picks the best one [1]. After that, the details of the facial features are added to create the likeness. Which facial feature is the most important and should be looked at and drawn first varies between artists: K. Bjorndahl [1] and Shafali [21] consider the eyes the most important feature of the face, whereas T. Richmond [20] believes the face shape is the most important part, followed by the eyes and nose. Exaggeration is the manipulation of the relationships of the features to one another, in terms of distance, size and angle, not of the features taken individually. Even if the eyes are bigger than other people's, they cannot simply be drawn bigger; the rest of the face must be taken into account. Every change to one feature affects the other features, which is called action and reaction. For example, if the eyes are moved farther apart, the nose moves closer to the eyes; if the chin is made bigger, the top of the head becomes smaller. This technique creates a higher-impact exaggeration. Beyond that, basic drawing skill, anatomical knowledge and experience in drawing human portraits are very helpful in drawing caricatures.
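The observation step described above is essentially arithmetic on proportions. As a hedged sketch of the idea, using the eye width as the primary frame of reference in the spirit of Richmond's approach (the feature names, sample measurements and "classical" reference ratios below are illustrative assumptions, not values taken from the cited artists):

```python
# Sketch: flag distinctive features by comparing their size in eye-width
# units against an assumed "classical" reference proportion.
# All numbers below are illustrative placeholders.

def distinctive_features(measurements, reference, threshold=0.2):
    """Return features whose ratio to the eye width deviates from the
    reference proportion by more than `threshold` (relative deviation)."""
    eye = measurements["eye_width"]
    flagged = {}
    for name, value in measurements.items():
        if name == "eye_width":
            continue
        ratio = value / eye                     # size in eye-width units
        deviation = (ratio - reference[name]) / reference[name]
        if abs(deviation) > threshold:
            flagged[name] = deviation           # positive = larger than typical
    return flagged

# Assumed reference proportions, e.g. a face about five eye-widths wide.
reference = {"face_width": 5.0, "mouth_width": 1.5, "nose_width": 1.0}
subject = {"eye_width": 30.0, "face_width": 150.0,
           "mouth_width": 60.0, "nose_width": 31.0}

print(distinctive_features(subject, reference))
# Only the mouth stands out: 2.0 eye-widths vs the assumed 1.5.
```

A caricaturist would then exaggerate the flagged relationship (here the mouth width) rather than any feature in isolation, consistent with the action-and-reaction principle above.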

4 Categories of Facial Caricature Generator

The facial caricature generator was originally intended to generate all types of caricature, not only human face caricatures. However, most previous studies focus only on producing or synthesizing facial caricatures, because the face is a unique feature of human beings and a substantial factor in providing a large part of human identity. Faces have been the subject of numerous studies, such as in image processing, computer vision, biometrics, pattern recognition, cognitive psychology, visual perception, and computer graphics and animation. Basically, there are two main purposes of studies dealing with faces: face recognition, which involves analysis tasks, and face reconstruction, which consists of synthesis tasks [24], [25]. Synthesizing facial caricatures can be classified under face reconstruction but also involves many techniques from other areas of study. In fact, it has also been introduced as part of non-photorealistic rendering technology, which attempts to mimic existing artistic styles, whether in image space or object space. Facial caricature generators can be divided into two main categories based on their input data: the human centered approach, which uses word descriptions as input data, and the image centered approach, which uses photographic face images as input data [26].

4.1 Human Centered Approach
A facial caricature is generated based on the user's verbal description of the facial features and impressions of the target face. This approach attempts to imitate how a suspect's face is drawn from the verbal description given by a witness in a police investigation. There have been several studies on generating facial caricatures from the user's linguistic description [27], [28], [29], [30]. These studies use fuzzy theory to express the meaning of the linguistic terms or words and to represent the parameter value of each facial feature. These parameter values are related to the linguistic terms: the increase and decrease of a parameter value are defined according to the linguistic term. Linguistic terms used to describe the facial features include "big", "small", "round", "thin" and "thick"; impression terms include "pretty", "severe", "honesty", "active" [28], "cheerful", "quiet", "mature" and "youthful" [30]. The general procedure for generating a facial caricature from a word description is shown in Fig 1. The user inputs words that describe the desired facial caricature image; the words have been defined in a fuzzy set. Calculations based on the input terms yield the feature parameter values. The corresponding image in the database is then located [30], or an average face used as an initial face is transformed into the desired facial caricature according to the parameter values [27], [29].

Fig 1 General procedure to generate facial caricature from word description (input linguistic terms/words, e.g. round eyes, big nose, little, more, very, slightly → relationship linguistic terms → facial feature parameter values → get the corresponding image according to the parameter values → output caricature).

Finally, the facial caricature with the obtained parameter values is presented to the user. If the output does not fit the desired facial caricature image, modifications can again be made using linguistic terms. S. Iwashita et al. [27], S. Iwashita et al. [29], and T. Onisawa and Y. Hirasawa [30] focused on how to seek the desired image of the face so that the output really reflects the face image in the user's mind, while J. Nishino et al. [28] concentrated on linguistic acquisition and emphasized the structure of the fuzzy set. The problems frequently encountered in these studies are the ambiguous meaning of the linguistic terms; displayed facial features that are not really similar to the defined term; output that seems unnatural; a limited number of defined linguistic terms, so that the desired facial caricature cannot be displayed; and linguistic words whose meaning does not express the size of the facial features relative to the size of the face.
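The mapping from linguistic terms to feature parameter values can be illustrated with a minimal fuzzy-set sketch. The term definitions, the parameter range and the hedge operators below are illustrative assumptions (the common "very = squared membership" convention), not the fuzzy sets used in the cited systems:

```python
# Minimal fuzzy-set sketch: map a linguistic term such as "big" (optionally
# hedged with "very" or "slightly") to a nose-width scale factor via
# triangular membership functions and centroid defuzzification.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative fuzzy sets over a normalized width parameter in [0, 2]
# (1.0 = average width).
TERMS = {
    "small": lambda x: triangular(x, 0.0, 0.6, 1.0),
    "big":   lambda x: triangular(x, 1.0, 1.4, 2.0),
}
HEDGES = {
    "very":     lambda m: m ** 2,    # concentration
    "slightly": lambda m: m ** 0.5,  # dilation
    None:       lambda m: m,
}

def defuzzify(term, hedge=None, grid=200):
    """Centroid defuzzification of the (hedged) term -> parameter value."""
    xs = [2.0 * i / grid for i in range(grid + 1)]
    ms = [HEDGES[hedge](TERMS[term](x)) for x in xs]
    return sum(x * m for x, m in zip(xs, ms)) / sum(ms)

print(round(defuzzify("big"), 2))          # a value somewhat above average
print(round(defuzzify("big", "very"), 2))  # pulled closer to the set's peak
```

The resulting parameter value would then drive the transformation of the average face, or the lookup of the closest database image, as described above.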


4.2 Image Centered Approach
This approach uses a 2D face image as input data to generate a facial caricature drawing. It attempts to imitate how caricaturists draw a caricature by looking at a face image or at the real person, or simply from memory. The approach takes advantage of image processing technology to process the input face images and perform other related analysis tasks. The first caricature generator was created by S. E. Brennan [31]. She extensively studied the style, theory and method of caricature generation, examined the perceptual phenomena related to distinctive facial features, and derived heuristics for the caricature generation process. Caricature generation was carried out on the basis of exaggerating the differences between the subject and the average face. Since then there have been many other works with different approaches. Generally, the process of generating a facial caricature can be described as shown in Fig 2. First, facial feature points are extracted from the original face image. Then, the distinctive features are found and exaggerated into new ones. Lastly, the face shape of the original face image, or a face sketch, is warped to the target positions given by the new points, producing a photographic caricature or a sketch caricature respectively. Some other works simply connect the exaggerated points using line or curve functions to generate the facial caricature. This process involves many areas of study: face processing and feature extraction is a research subject in image processing and computer vision, whereas line drawing, strokes, and image warping or deformation belong to computer graphics and animation; finding the distinctive facial features and the exaggeration rate requires knowledge of cognitive psychology and visual perception.

Fig 2 General procedure to generate facial caricature from input face image (input face image → face processing/feature extraction → extracted feature points → find distinctive features → exaggerate the features → new points/exaggerated feature points → transformation (morph/deform) applied to the face image, a face sketch or other sources → output caricature).
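The exaggeration at the heart of this pipeline follows Brennan's idea of scaling a face's deviation from the average [31], and can be sketched in a few lines. The landmark arrays and the exaggeration rate below are illustrative placeholders:

```python
import numpy as np

def exaggerate(points, mean_points, rate=1.5):
    """Brennan-style exaggeration: move each landmark away from the mean
    face by scaling its deviation. rate > 1 exaggerates, rate = 1
    reproduces the input, rate < 1 de-emphasizes (an anti-caricature)."""
    points = np.asarray(points, dtype=float)
    mean_points = np.asarray(mean_points, dtype=float)
    return mean_points + rate * (points - mean_points)

# Illustrative 2D landmarks (N x 2): subject vs. mean face.
subject = np.array([[10.0, 12.0], [30.0, 12.0], [20.0, 25.0]])
mean    = np.array([[12.0, 12.0], [28.0, 12.0], [20.0, 22.0]])

caricature = exaggerate(subject, mean, rate=2.0)
print(caricature)
# Each deviation from the mean is doubled: the first point, 2 units left
# of its mean position, ends up 4 units left of it.
```

The exaggerated points would then drive the warping or line-drawing step that produces the final caricature.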



5 General Process of Generating Facial Caricature

The details of the common steps in generating a facial caricature are explained as follows.

5.1 Face Extraction
Facial feature extraction can be performed manually or automatically. This step can take advantage of image processing technology to process the face image, including preprocessing, detection, localization, segmentation and feature extraction. P. Y. Chiang et al. [32] utilized the Canny edge detector and the Hough transform for iris localization and applied the Active Contour Model (ACM) to obtain the final face mesh. ACM only performs well when the shape of the object is simple, the background is monotonous and no prior knowledge is available [33]. H. Koshimizu [34] also extracted the iris using the Hough transform and extracted the other features using the K-L expansion. Most existing works on caricature generation use a modified Active Shape Model (ASM) [35], [36], [37], [38] or a modified Active Appearance Model (AAM) [39], [40], [41], [42] to extract the facial feature points. ASM and AAM model the shape and texture variations of facial images using Principal Component Analysis (PCA); the coefficients of the eigenshapes or eigenfaces are used as parameters to represent the face shape and face texture. By adjusting the model parameters, the model can be fitted to a new facial image and used to automatically extract the facial shape feature points [38]. The difference between ASM and AAM is that the ASM model uses local image textures in small regions around the landmark points, whereas the AAM model uses the whole appearance [37]. AAM generally performs better than ASM [33]. Normally, automatic extraction of facial feature points is done at run time, whereas manual extraction is employed during the learning or training phase to obtain an average face or the underlying features.

5.2 Facial Feature Point Definition
First, the facial feature points to be used must be defined; these defined points are also known as landmark points. In addition, we need to define which parameters are to be extracted. The parameters typically used in caricature generation are the facial feature points, size features, shape features, aspect ratios, distances between features and the curvature of each facial segment. The number of facial feature points used by existing researchers varies according to the purpose but usually ranges from 50 to 300. A higher number of facial feature points is commonly used for animation [32], and a lower number for mobile applications [9]. A high number of facial feature points can generate varied forms of exaggeration without destroying the likeness of the subject and gives more interesting results, but at high computation and storage cost, whereas a lower number of facial feature points saves computation and storage but gives less interesting results, limited to certain exaggerations. Table 1 below shows the landmark points used in some research works.

Table 1 Number of Landmark Points Used in Some Research Works

Research  eyes  eyebrows  nose  mouth  ears  face     hair   total
work                                         contour  style  points
[43]      16    20        12    17     -     19       -       84
[44]      16    20        12    20     -     19       -       87
[45]      16    20        12    20     -     24       -       92
[46]      40    14        25    24     12    25       31     171
[32]      44    16        22    18     -     19       -      119
[23]       8     8         5     6     -     10       9       46
[9]       12    16         3    10     -     10       9       60

5.3 Distinctive Facial Features and Exaggeration
After all the required facial feature points have been extracted from the input face image, the distinctive facial features must be determined, and those features are then exaggerated to the desired form. According to the theories and rules of drawing caricature, the input face image needs to be compared with a reference face in order to determine the distinctive facial features. The reference face can be either an average face or a standard face model. The average face, or mean face, is derived from a collection of face images in a database. The facial features are represented by N points with x and y coordinates. If there are M face images in the database, the mean face can be calculated by

x_i^(s) = (1/M) * sum_{j=1}^{M} x_i(P_j),   y_i^(s) = (1/M) * sum_{j=1}^{M} y_i(P_j),   i = 1, 2, ..., N

where x_i(P_j) and y_i(P_j) are the x and y coordinates of the i-th feature point of the j-th normalized face. A standard face model is a standard proportion of the ideal face. V. Boyer [47] proposed caricature modeling and rendering based on the canon model shown in Fig 3; the exaggeration is applied to the differences between the face image and the canon model. B. Gooch [48] used a norm face as a reference for finding the distinctive features to be exaggerated: four vertical lines are set to be equidistant, while the horizontal lines are assigned distance values of 4/9, 6/9 and 7/9 from the top of the frame, as shown in Fig 4. Furthermore, F. Ni et al. [49] proposed a Self Reference Model (SRM) for use in caricature generation systems; the distinctive features are estimated and quantified by evaluating the differences between the input face and the standard facial parameters according to the SRM.

Fig 3 Canon model [47]

Fig 4 Norm face grid

There are many other approaches employed by existing works to find the distinctive facial features and to assign the exaggeration rate, which are explained in detail in section 6. A summary of these approaches is shown in Table 2 below. There are three approaches to finding the distinctive features (interactive, regularity-based and learning-based) and three approaches to defining the exaggeration rate (interactive, empirical and learning-based). One further approach, the predefined database of caricature illustrations, does not use this step in producing a caricature.

Table 2 Different approaches in generating caricature

To define the       To determine the distinctive facial features
exaggeration rate   interactive                   regularity-based                learning-based
interactive         [53], [60]                    -                               -
empirical           [64], [32], [44], [54], [59]  [26], [31], [34], [38], [39],   [23], [65]
                                                  [41], [48], [55], [56], [57],
                                                  [58], [61], [62], [63]
learning-based      -                             [43], [36]                      [35]
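The mean-face formula in section 5.3 above is a per-landmark average, sketched below with numpy. The face data here is random placeholder data (a real system would use normalized landmark sets), and the two-standard-deviation threshold is an illustrative regularity-based criterion, not one from the cited works:

```python
import numpy as np

# Mean face: average each of the N feature points over M normalized faces,
# exactly as in the formula above. `faces` has shape (M, N, 2), holding the
# coordinates x_i(P_j), y_i(P_j). Data here is a random placeholder.

rng = np.random.default_rng(1)
M, N = 30, 60
faces = rng.uniform(0, 100, size=(M, N, 2))

mean_face = faces.mean(axis=0)   # shape (N, 2): the points x_i^(s), y_i^(s)

# A simple regularity-based notion of distinctiveness: landmarks of a new
# face whose deviation from the mean exceeds two standard deviations.
std_face = faces.std(axis=0)
new_face = faces[0]
distinctive = np.linalg.norm((new_face - mean_face) / std_face, axis=1) > 2.0
print(mean_face.shape, int(distinctive.sum()))
```

The flagged landmarks are the candidates for exaggeration, e.g. by scaling their deviation from the mean face.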

5.4 Image Transformation
This is the last step in generating caricature. The most common technique in computer graphics for transforming the input facial image into the desired facial caricature is image metamorphosis, or morphing. Image metamorphosis is implemented by combining image warping with color interpolation (cross-dissolve). There are two basic techniques of morphing: field morphing and mesh morphing [50]. The field morphing algorithm uses a set of control line segments to relate features in the source image to features in the target image, while the mesh morphing algorithm uses a rectangular grid of points that is aligned to the key feature locations of the image. The field morphing method is easier to apply than the mesh morphing method but takes more time to compute [50], [51]. The most common problems in morphing are specifying corresponding feature points of the images so as to give a reasonable result while limiting computational complexity [52], and morphing only specific parts of the face image while holding other parts constant [50]. The metamorphosis techniques implemented by previous caricature-generation works are mesh morphing [23], [35], [36], feature-based morphing [32], [53] (also known as field morphing), thin-plate spline warping [43], and Delaunay triangulation warping [54]. Furthermore, Arruda et al. [42] used a deformable mesh template, M. Obaid et al. [40] employed a quadratic deformation model, and E. Akleman et al. [53] used implicit free-form deformation. C. C. Tseng et al. [44], B. Gooch et al. [48] and Z. Mo et al. [55] did not state a specific morphing technique. The final caricatures produced by existing works come in a variety of styles, such as photographic caricature [23], [35], [36], [54], [55], [56] as shown in Fig 5(a), sketch caricature [43], [44], [55] in Fig 5(b), black-and-white illustration caricature [41], [48], and hand-drawn-like caricature [32] as shown in Fig 5(c). Some other works generate outline caricatures, as shown in Fig 5(d), by connecting the exaggerated facial feature points using lines [31], [34], [57], [58], [59], piecewise cubic Hermite interpolation [37], Bezier curves [39], and B-spline curves [9], [38], [46].
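Whatever warping scheme is chosen, the geometric half of morphing interpolates corresponding feature locations; a minimal sketch (not any cited system's implementation), where values of t above 1 extrapolate past the target, which is how exaggeration overshoots:

```python
import numpy as np

def blend_shapes(src_pts, dst_pts, t):
    """Linearly interpolate corresponding landmarks: t = 0 gives the
    source shape, t = 1 the target shape, t > 1 extrapolates."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    return (1.0 - t) * src + t * dst

# Toy example: two corresponding shapes with 2 feature points each.
src = [[0.0, 0.0], [10.0, 0.0]]
dst = [[0.0, 2.0], [14.0, 0.0]]
mid = blend_shapes(src, dst, 0.5)   # halfway shape -> [[0,1],[12,0]]
```

In a full morph, this shape blend drives the warp of both images, which are then cross-dissolved with the same parameter t.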

algorithm, which utilizes an interactive morphing tool to generate caricatures as shown in Fig 6. He starts with an extremely simple template that represents the essential features with a few lines. Only one feature is exaggerated at a time, and trial and error is used to find the distinctive feature to exaggerate: if an exaggeration does not create a likeness, it is pushed in the opposite direction, and if this too does not give the required result, the feature is returned to its original position. The user must correctly identify the distinctive features, which is difficult for an ordinary user, and the trial-and-error process takes a long time to reach the desired result. The user also risks producing an unrecognizable caricature. E. Akleman et al. [53] later came up with a new deformation technique that uses simplicial complexes to generate caricatures. It uses simplices (points and triangles) as basic deformation primitives; triangles can provide shear transformations, which cannot be supported by lines. The deformations are defined by a set of simplex pairs, and a blending function interpolates the translation vectors that define the source-to-target mapping. This system can intuitively and interactively produce an extreme caricature but requires experience to specify source and target simplices meaningfully. All these works require expert knowledge or skilled user input, which limits their applicability for everyday use [48].
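The idea of blending source-to-target translation vectors can be illustrated with a crude inverse-distance-weighted stand-in for the blending functions of [53] (the weighting scheme here is an illustrative assumption, not the one used in the paper):

```python
import numpy as np

def blend_translations(point, sources, translations, eps=1e-9):
    """Displace a point by an inverse-distance-weighted blend of the
    translation vectors attached to a set of deformation primitives
    (a crude stand-in for the simplicial blending of [53])."""
    p = np.asarray(point, dtype=float)
    S = np.asarray(sources, dtype=float)
    T = np.asarray(translations, dtype=float)
    w = 1.0 / (np.linalg.norm(S - p, axis=1) + eps)  # nearer = stronger
    w = w / w.sum()
    return p + w @ T

# A point next to one primitive mostly follows that primitive's vector.
disp = blend_translations([0.0, 0.0],
                          sources=[[0.1, 0.0], [10.0, 0.0]],
                          translations=[[0.0, 1.0], [0.0, -1.0]])
```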




Fig 6 (a) original face image and its simple template [53] (b) caricature image and its simple template [53]

Fig 5 (a) photographic caricature [55] (b) sketch caricature [44] (c) hand-drawn-like caricature [32] (d) outline caricature [34]

6.2 Regularity-based Approach
The regularity-based approach summarizes the rules of creating caricature and simulates them by computer to generate caricatures automatically or semi-automatically. The basic rule in drawing caricature is exaggerating the difference from a reference face, and the average face is widely accepted among caricaturists and researchers as that reference. S. E. Brennan [31] used an average face derived from a collection of face images in a database as the reference for finding distinctive features. The positions of 165 feature points on the input face image corresponding to the reference points in the average face are marked manually with mouse clicks. These points are then



6.1 Interactive Approach
In this approach, the distinctive facial features and the exaggeration rate are determined interactively by the user. The user can drag the points of the facial features to enlarge or reduce the size in order to generate the desired caricature. E. Akleman [60] came up with a very simple



compared with the positions of the corresponding points on the average face in order to determine the extent of deviation. This deviation is then exaggerated by moving the feature points away from the average to yield a caricature, as shown in Fig 7. The rate of exaggeration is defined interactively by hand control. The facial caricature is represented as a line drawing that connects all these feature points. This system still requires skilled user input to control the feature points in order to generate an interesting caricature.




Fig 7 (a) input face image (b) line representation of input face image [31] (c) caricature image [31]

This rule is also used by [23], [26], [32], [39], [41], [55], [56], [61]. P. Y. Chiang et al. [32] produced a different result, shown in Fig 8, while still using an average face as the reference: an artist's work is used as the source image and morphed according to the exaggerated points of the original face image to produce a caricature. The notion of "exaggerating the difference from the mean" (EDFM) is also used by [34], [57], [58], [62], [63]. This rule can be expressed as

Q = P + b(P − S)


pression to the user. T. Kamimura and Y. W. Chen [38], C. C. Tseng [44], S. J. Gibson et al. [54], and X. Guangzhe et al. [59] employed Principal Component Analysis (PCA) to find the major principal components of the features. The exaggeration rate parameter is determined empirically by performing an orthogonal expansion on the eigenvectors [59]; varying the coefficient of an eigenvector changes the face shape. However, Z. Mo et al. [55] argue that the EDFM method may not produce the best caricatures: the distinctiveness of a displaced feature depends not only on its distance from the mean but also on its variance. For example, even if the eye width and the mouth width deviate from the mean by the same distance, the eye will be more distinctive than the mouth if eye widths are much less widely distributed. Instead of using a mean face as the reference for finding distinctive features, some works use a standard face model. B. Gooch et al. [48] proposed an interactive method for deforming black-and-white facial illustrations into caricatures that relies much less on well-trained users. He uses a norm face grid (Fig 4) as the reference and a facial feature grid (FFG) for the input face image, which consists of a rectangular frame, four vertical lines marking the inner and outer corners of the eyes, and three horizontal lines marking the position of the eyes, the tip of the nose, and the mouth, as shown in Fig 9(a). The exaggeration is done by scaling the vectors between corresponding vertices of the norm face grid and the user-set grid. To allow more expressive freedom, vertices on the left and right of the grid, but not the internal features, can be manipulated individually to produce a caricature as shown in Fig 9(b) and Fig 9(c).
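The PCA-based exaggeration used by [38], [44], [54], [59] can be sketched as projecting a shape onto the leading eigenvectors of a set of training shapes and scaling the coefficients; a toy NumPy sketch (the function name and the uniform scaling scheme are illustrative assumptions):

```python
import numpy as np

def pca_exaggerate(shapes, x, k=1, scale=1.5):
    """Project a face-shape vector x into the top-k principal-component
    space of a set of training shapes, scale its coefficients, and
    reconstruct; scaling amplifies exactly the modes of variation in
    which x already deviates from the mean shape."""
    X = np.asarray(shapes, dtype=float)          # one flattened shape per row
    mean = X.mean(axis=0)
    # Principal directions from the SVD of the centred training data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                               # top-k eigenvectors
    coeffs = basis @ (np.asarray(x, dtype=float) - mean)
    return mean + (scale * coeffs) @ basis

# Toy data varying along one axis; the input deviates by +1 from the
# mean along that axis, and scale=1.5 pushes it to +1.5.
shapes = [[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]]
exaggerated = pca_exaggerate(shapes, [3.0, 0.0])   # -> [3.5, 0.0]
```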

where P is the input face, Q is the generated caricature, S is the average face, b is the exaggeration rate, and (P − S) is the difference between the input face and the average face.
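The EDFM rule translates directly into code; a minimal sketch in which P and S are arrays of corresponding landmark coordinates (names hypothetical):

```python
import numpy as np

def edfm(P, S, b):
    """Exaggerating the Difference From the Mean: Q = P + b*(P - S).
    b = 0 returns the input face unchanged; larger b pushes each
    feature point further from its position in the average face S."""
    P = np.asarray(P, dtype=float)
    S = np.asarray(S, dtype=float)
    return P + b * (P - S)

# A mouth corner 2 units right of its average position, exaggerated
# with b = 0.5, moves 1 further unit right:
Q = edfm(P=[[12.0, 30.0]], S=[[10.0, 30.0]], b=0.5)   # -> [[13.0, 30.0]]
```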




Fig 9 (a) Facial Feature Grid (FFG) (b) both grid and underlying image are warped interactively (c) the resulting caricature [48].




Fig 8 (a) artist’s work [32] (b) original face image [32] (c) caricature image [32].

H. Koshimizu et al. [34] proposed an interactive system (PICASSO) that takes the model, the caricaturist and the gallery into consideration. The area of the model's face that the gallery mainly looks at is detected by collecting the gaze distribution with an eye camera, and features in areas of high gaze density are exaggerated to a greater degree. The scale of the exaggeration rate depends on the caricaturist and the gallery. T. Fujiwara [57] proposed WEB-PICASSO, which employs a variety of mean faces that can be selected by the user. The use of a different mean face will produce a different caricature and give a different im-

Table 3 shows all the previous works in this approach. Most of the works implemented the first relativity principle [22]. Only C. C. Tseng [44] and C. Wenjuan et al. [39] apply both relativity principles, considering the difference not only from the average face but also from neighboring features of the same subject, to make the caricature more contrastive. C. Wenjuan et al. [39] employed the handcrafted rules of a particular caricaturist: the face is characterized in proportion form and a hierarchy controls the exaggeration, not only increasing the exaggeration of the major distinctive features but also decreasing the exaggeration of non-distinctive features. Most of the previous works in this approach defined the exaggeration rate interactively, by dragging the feature points or setting a parameter value, except [23], [32], [44], [54], [59]. K. H. Lai et al. [23] used a learning-based approach to capture the exaggeration rate from the



caricaturist. C. C. Tseng [44], S. J. Gibson [54], and X. Guangzhe [59] determined the exaggeration rate empirically from eigenvectors, and [32] used a simple linear model.

Table 3 Regularity-based approach

- Reference face, average face: [26], [31], [34], [38], [39], [41], [55], [56], [57], [58], [61], [62], [63]
- Reference face, standard face model: [47], [48], [49]
- Relativity, relationship to others: [26], [31], [34], [38], [39], [41], [55], [56], [57], [58], [61], [62], [63], [47], [48], [49]
- Relativity, relationship to the surrounding: [44], [39]

6.3 Learning-based Approach
This approach is used to capture the artist's style or to simulate the caricaturist's creativity that is difficult to codify algorithmically, such as the way a particular artist exaggerates the facial features, the style of stroke, and the style of sketch. The artist's products are learned using machine learning techniques or artificial intelligence methods. This approach requires extensive training databases containing pairs of an original face image and its corresponding caricatured face image drawn by a caricaturist. L. Liang et al. [43] proposed an example-based caricature generating system. He analyses 92 face image-caricature pairs and classifies them into 28 prototypes using Partial Least Squares (PLS). Each prototype contains samples with a similar direction of exaggeration corresponding to certain facial features, and any input face image is classified into one prototype. The facial features to be exaggerated and the rate of exaggeration are estimated using a local linear model (least squares). The exaggeration directions and the selected facial features determined by this method are limited, and the distinctive facial features selected by the artist may cover different prototypes. J. Liu et al. [35] proposed a mapping-learning approach to generate facial caricatures. He employs Principal Component Analysis (PCA) to obtain the principal components of the facial features and uses Support Vector Regression (SVR) to learn the mapping model in the principal component space. A new input face photograph is projected into the principal component space, and the vector that represents the position of the target caricature is predicted using the mapping model. This vector is then transformed from the principal component space back to the high-level feature space to get the caricature's shape. Both works [35] and [43] used linear methods to map the original face image to its corresponding facial caricature, whereas J. Liu et al.
[36] later came up with a nonlinear mapping model. He employs semi-supervised manifold regularization learning to reduce the dimensionality of the caricature and to build a regression model that predicts the caricature pattern from a given original face image. K. H. Lai [23] and N. S. Rupesh [65] also believe that generating facial caricature involves non-linear

exaggerations and not only the linear exaggeration, which scales the features by a factor, provided in most of the existing caricature generation systems. They proposed neural-network-based caricature generation: feed-forward back-propagation [23] and cascade correlation [65] networks are adopted to capture the relationship between the distinctive facial features and the changes from the original face image to the caricature. The distinctive facial features are determined by comparing the face image to the average face. Such a system cannot generate caricatures exactly matching the artist's drawing, but the results are still satisfactory, as shown in Fig 10. J. Liu et al. [35] and J. Liu et al. [36] learn a general artist style using hand-drawn caricatures created by many artists around the world, while K. H. Lai et al. [23], L. Liang et al. [43] and N. S. Rupesh [65] learn an individual artist's style from caricatures drawn by a particular artist. The works of [23], [43] and [65] are supervised learning, whereas [35] and [36] are semi-supervised learning.
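The linear variants of this mapping ([35], [43]) can be illustrated as ordinary least-squares regression from photograph landmarks to the artist's caricature landmarks; a toy sketch, not a reconstruction of any cited system:

```python
import numpy as np

def fit_linear_mapping(faces, caricatures):
    """Least-squares linear map from face-shape vectors to the
    artist's caricature-shape vectors (one training pair per row).
    An affine term is absorbed by appending a constant 1 feature."""
    X = np.hstack([np.asarray(faces, dtype=float),
                   np.ones((len(faces), 1))])
    Y = np.asarray(caricatures, dtype=float)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def apply_mapping(W, face):
    x = np.append(np.asarray(face, dtype=float), 1.0)
    return x @ W

# Toy training set in which the "artist" doubles deviation from 5.0:
faces = [[4.0], [5.0], [6.0]]
cars  = [[3.0], [5.0], [7.0]]
W = fit_linear_mapping(faces, cars)
res = apply_mapping(W, [4.5])   # the learned map gives about [4.0]
```

The nonlinear works [23], [36], [65] replace this linear map with a neural network or manifold-regularized regressor trained on the same kind of pairs.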




Fig 10: (a) original face image (b) hand-drawn caricature created by an artist (c) the resulting caricature [23]

6.4 Predefined Database of Caricature Illustration
The distinctive features and the exaggeration rate are not determined explicitly in this approach. Caricature illustrations corresponding to the original facial parts need to be defined beforehand [46], [66], [67]; this approach is also known as component-based [68]. Facial parts are extracted from an input face image, and the caricature illustrations closest to the extraction results are selected from a database. T. W. Pai and C. Y. Yang [46] concentrated on classification and drawing methodology in order to ease the matching between an input face image and the pre-generated caricatures in the facial bank; their resulting caricature is shown in Fig 11. The problem with this approach is obtaining all the required caricature illustrations in advance. If the correct caricature illustration is not available in the database, a facial part of the input face image might be replaced with an unsuitable one, producing an unrecognizable caricature. Sometimes, even when the individual parts of the input face image are matched flawlessly, the resulting caricature bears very little resemblance to the person [68].
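The matching step of this approach amounts to a nearest-neighbour lookup of facial-part descriptors in the predefined database; a toy sketch (the descriptor choice and names are illustrative assumptions):

```python
import numpy as np

def match_component(query, database):
    """Pick the pre-drawn caricature illustration whose stored shape
    descriptor is nearest (Euclidean) to the extracted facial part."""
    keys = list(database)
    descs = np.array([database[k] for k in keys], dtype=float)
    d = np.linalg.norm(descs - np.asarray(query, dtype=float), axis=1)
    return keys[int(np.argmin(d))]

# Toy "facial bank": mouth-shape descriptors (width, thickness).
bank = {"thin_wide": [6.0, 1.0], "full_narrow": [3.0, 2.5]}
best = match_component([5.5, 1.2], bank)   # -> "thin_wide"
```

The failure mode described above corresponds to the query falling far from every stored descriptor: the nearest entry is still returned, however unsuitable.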








Fig 11 (a) original face image (b) matched face contour (c) caricature image [46]

Fig 12 (a) input face image (b) natural expression (c) smile expression (d) surprise expression [40]



7 Expressive Caricature
All the works described in the previous sections generate facial caricatures without considering the expression of the face image, but there have been some efforts to produce expressive facial caricatures [27], [29], [40], [68], [69], [70]. In A. J. Calder et al. [69], a neutral-expression face image acts as the reference face: the expressive caricature is produced by exaggerating the difference between an expressive face image and the neutral-expression face image. W. H. Liao and Chien An Lai [68] used facial component action definition (FCAD) to control a subset of facial parameters to produce multiple expressive caricatures. M. Obaid et al. [40] defined the expression using the facial action coding system (FACS) and used a quadratic deformation model to synthesize an expressive caricature. A. J. Calder et al. [69] generated an expressive caricature from an expressive face image, whereas [40], [68], [69] produced multiple expressive caricatures from a neutral-expression face image. The multiple expressive caricatures produced by [40] are shown in Fig 12. However, the expressive caricatures produced by [40], [68] show little or no exaggeration compared to the original expressive face image. S. Iwashita [27] and J. Nishino [29] first generate an expressionless facial caricature and then change the facial features to the desired facial expression based on facial expression definitions obtained from questionnaires, which contain a parameter value for each facial feature for various types of facial expression. The process of producing an expressive caricature is generally similar to the general caricature-generation process explained in section 5, except that a method is needed to control the locations of the facial feature points according to the type of facial expression. Most of these works assumed that a particular expression looks the same on all people and used the same parameter values to change the facial feature points.
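The exaggeration step of [69] is the EDFM rule with the person's own neutral-expression face as the reference instead of the population mean; a minimal sketch (names hypothetical):

```python
import numpy as np

def exaggerate_expression(expressive, neutral, b):
    """Exaggerate an expression by pushing each feature point of the
    expressive face further from its neutral-expression position."""
    E = np.asarray(expressive, dtype=float)
    N = np.asarray(neutral, dtype=float)
    return E + b * (E - N)

# A mouth corner raised 1 unit by a smile, exaggerated with b = 1.0,
# ends up raised 2 units:
out = exaggerate_expression([[10.0, 21.0]], [[10.0, 20.0]], 1.0)
```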



8.1 Human Centered Approach
All the previous studies assume that linguistic terms have the same meaning for all people, but according to H. Benhidour and T. Onisawa [26], the same face can be interpreted differently by different people. The meaning of the linguistic terms used for drawing facial caricatures should therefore be represented according to the type of user and many other factors. Instead of using fuzzy theory to represent the meaning of the language terms, other suitable approaches or hybrid approaches might be investigated. In addition, the linguistic terms in those studies are input as text, so the use of voice input to generate facial caricatures could be an interesting subject of study.

8.2 Reference Face
The distinctive facial features can be determined by comparing a face with a reference face. The average face is well accepted as the reference face for caricature generation systems that are used by skilled users or that do not consider the artistic effect of the resulting caricature. However, if the system attempts to imitate the style of a particular artist, it begs the questions: "Does the average face obtained from a set of face images in a database equal the 'mean face' in the mind of the artist?" and "Does the set of face images in the database represent all the faces that the artist encounters in life?". If the average face does not represent the "mean face" in a particular artist's mind, it may affect the generated caricature, because a different average face will produce a different caricature [57] and each person interprets the human face differently [26]. Therefore, finding a method and the best reference face that exactly represents the 'mean face' in an artist's mind is one possible direction for future work.

8.3. Correspondence of Feature Points
In the caricature generation process, comparison of facial feature points between two different face images needs to be performed, for example between the input face image and the hand-drawn caricature image in the learning-based approach, or between the input face image and the reference image when finding the distinctive facial features. Most of the previous works manually labeled the positions of facial feature points in one face image corresponding to





the facial feature points in another face image. They used their experience and judgment to select the positions of the facial feature points, which is prone to human error and takes a lot of time. The questions here are: how can the corresponding feature locations in different faces be found, and do the locations of the facial feature points in one face accurately correspond to the facial feature points in another face?
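A common first step toward point-wise comparability is normalizing shapes for translation and scale (a Procrustes-style normalization, offered here as an illustration rather than a method from the surveyed works):

```python
import numpy as np

def normalize_shape(pts):
    """Remove translation and scale so that feature-point sets from
    different faces become directly comparable: centre the points at
    the origin and divide by the root-mean-square point distance."""
    P = np.asarray(pts, dtype=float)
    P = P - P.mean(axis=0)                        # remove translation
    rms = np.sqrt((P ** 2).sum(axis=1).mean())    # overall scale
    return P / rms

# Two faces that differ only in position and scale normalize to the
# same shape, so point-wise comparison becomes meaningful.
a = normalize_shape([[0, 0], [2, 0], [1, 2]])
b = normalize_shape([[10, 10], [14, 10], [12, 14]])
```

A full Procrustes alignment would also remove rotation, but translation and scale cover the normalization most caricature pipelines describe.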


8.7. Non-photorealistic Rendering Techniques
Non-photorealistic rendering techniques such as pen-and-ink, brush, hatching and painting need to be improved and enhanced to render facial caricature texture, and the artist's drawing style needs to be captured and combined with existing non-photorealistic rendering techniques in order to bring the generated caricature closer to the artist's product.

8.4. Learning-based Approach
The crucial problems in this approach are the limited collections of face image-caricature pairs and the inconsistency of artists' styles. Finding a method that can generate caricatures identical or very close to the artist's work under these constraints is one of the future challenges. This approach still needs to be explored, since the number of previous works using it is still small. Furthermore, none of them extracted the rules of the artist's style, explained how the artist exaggerates the facial features, or compared different artists' styles. If the rules of an artist's style could be summarized in formulas or words, the artistic effect of current facial caricature generators could be enhanced, and the style of a particular artist could be preserved even when the artist and his works are no longer available. In addition, if the art style of the caricaturist can be understood, it can bring many benefits to caricature applications [25]. A good facial caricature generator is not only able to simulate the artist's work and produce an interesting caricature but also allows users to interact easily with the system. Thus, how to integrate the learning-based approach with interactive control of the exaggeration rate is another possible direction for future work; users might be allowed to produce caricatures with their own style of exaggeration based on the styles of several artists.

This paper attempts to provide a comprehensive survey of the applications, theory and research on facial caricature generators, and to provide some structural categories for the approaches and methods described in previous papers. The use of caricature can be seen in various applications, such as entertainment and political magazines, Internet and mobile applications, face recognition, information visualization and education, and user interface agents. The important steps in the process of drawing caricature are observation and exaggeration: the challenge for the artist is to find the most distinctive facial features that make the person different from everyone else, to exaggerate those features, and at the same time to maintain the likeness of the person. Facial caricature generators can be categorized into two groups according to the input data: the image centered approach and the human centered approach. The first uses a face image as input data, and the second uses a word description as input; the first category has been studied by researchers, and is discussed in this paper, more than the second. The approaches to generating a facial caricature from an input face image can be divided based on how the distinctive facial features are determined and how the exaggeration rate is defined. There are three approaches according to how the distinctive facial features are determined: interactive, regularity-based, and learning-based; and there are also three approaches based on how the exaggeration rate is defined: interactive, empirical and learning-based. However, there is one approach that does not explicitly define the distinctive facial features and the exaggeration rate: the predefined database of caricature illustrations. The regularity-based approach is the most widely used for determining the distinctive features in previous works, whereas most previous works used the interactive approach for defining the exaggeration rate.
Nevertheless, the learning-based approach seems to be becoming significant, because researchers no longer intend only to produce a simple caricature but also consider the artistic effect of the caricature and endeavor to capture the style of the artist in their works. In addition, there is a lack of integration between the learning-based and interactive approaches in developing a good facial caricature generator system. Facial caricature generators have also been enhanced to generate multiple expressive facial caricatures, which can be employed in caricature animation and other applications in the future. Nonetheless, generating a facial caricature that

8.5. Pose of the Input Face Image
In previous works, the pose of the input face image and of the generated facial caricature is limited to the front view, whereas in the real world most caricatures drawn by artists are in multiple views, according to the prominent features of the person, and the pose of the original face image is also not only frontal. In fact, the artist sometimes uses more than one original face image, in various poses, to get more information about the subject in order to draw a very good caricature. Therefore, a multi-view facial caricature generation system needs to be developed in the future.

8.6. Elements of Good Caricature
There are three elements of a good caricature: likeness, exaggeration, and statement or subjective impression. The first two elements are successfully displayed in current caricature generation systems, but how to express the third element in a facial caricature generator can be an interesting subject of research.



exactly similar to the artist's product still remains an open problem for further investigation, and the existing issues in previous works could spur researchers to explore and enhance this field in the future.

bernetics, Part A: Systems and Humans, vol. 33, p. 407(6), 2003.
[15] J. Smitaveja, K. Sookhanaphibarn, and C. Lursinsap, "Facial Metrical and Caricature-Pattern-Based Learning in Neural Network System for Face Recognition," in Eighth IEEE/ACIS International Conference on Computer and Information Science, 2009.

[1] K. Bjorndahl, "Learn to draw caricature," http://www.learn-to-, 2002.
[2] E. Akleman, "Automatic Creation of Expressive Caricatures: A Grand Challenge for Computer Graphics," in Computational Aesthetics in Graphics, Visualization and Imaging, Girona, Spain, 2005.
[3] T. W. Neighbor, C. Karaca, and K. Lang, "Understanding the World of Political Cartoons," in Newspaper in Education, Seattle Post-Intelligencer, Seattle, Washington: World Affairs Council, 2004.
[4] K. L. Nowak and C. Rauh, "The influence of the avatar on online perceptions of anthropomorphism, androgyny, credibility, homophily, and attraction," Journal of Computer-Mediated Communication, vol. 11, article 8, 2005.
[5] J. Liu, Y. Chen, W. Gao, and R. Z. R. Fu, "Creative Cartoon Face Synthesis System for Mobile Entertainment," in Lecture Notes in Computer Science, Part II, Y. S. H. a. H. Kim, Ed., Springer-Verlag Berlin Heidelberg, pp. 1027-1038, 2005.
[6] J. Liu, Y. Chen, X. Gao, J. Xie, and W. Gao, "AAML Based Avatar Animation with Personalized Expression for Online Chatting System," in Advances in Multimedia Information Processing, PCM 2008, vol. 5353/2008, Springer Berlin/Heidelberg, 2008.
[7] R. Q. Zhou and F. X. Liu, "Cartoon facial animation system oriented mobile entertainment," in Computer Engineering and Applications, vol. 45, pp. 96-98, 2009.
[8] M. Lyons, A. Plante, S. Jehan, S. Inoue, and S. Akamatsu, "Avatar Creation using Automatic Face Recognition," in Proceedings of ACM Multimedia, Bristol, England, pp. 427-434, 1998.
[9] M. Sato, Y. Saigo, K. Hashima, and M. Kasuga, "An Automatic Facial Caricaturing Method for 2D Realistic Portraits Using Characteristic Points," Journal of the 6th Asian Design International Conference, Tsukuba, Japan, vol. 1, 2003.
[10] C. Frowd, V. Bruce, D. Ross, A. McIntyre, and P. J. B. Hancock, "An application of caricature: How to improve the recognition of facial composites," in Visual Cognition, vol. 15, pp. 954-984, 2007.
[11] R. Mauro and M. Kubovy, "Caricature and face recognition," Memory & Cognition, vol. 20, pp. 433-440, 1992.
[12] J. Rodríguez, H. Bortfeld, and R. Gutiérrez-Osuna, "Reducing the Other-Race Effect through Caricatures," in Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition, Amsterdam, Netherlands, 2008.
[13] G. Rhodes, G. Byatt, T. Tremewan, and A. Kennedy, "Facial distinctiveness and the power of caricatures," Perception, vol. 26, pp. 207-223, 1997.
[14] G. Yongsheng, "Facial expression recognition from line-based caricatures," in IEEE Transactions on Systems, Man, and Cy-

[16] P. Rautek, I. Viola, and M. E. Gröller, "Caricaturistic Visualization," IEEE Transactions on Visualization and Computer Graphics, vol. 12, pp. 1085-1092, 2006.
[17] E. D. Itiel, V. S. Sarah, and R. S. A. Alan, "Helping the cognitive system learn: exaggerating distinctiveness and uniqueness," in Applied Cognitive Psychology, vol. 22, pp. 573-584, 2008.
[18] T. Koda, "User Reactions to Anthropomorphized Interfaces," IEICE Transactions on Information and Systems, vol. E86-D, pp. 1369-1377, 2003.
[19] R. Catrambone, J. Stasko, and J. Xiao, "Anthropomorphic Agents as a User Interface Paradigm: Experimental Findings and a Framework for Research," in Proceedings of CogSci, pp. 166-171, 2002.
[20] T. Richmond, "How to draw caricature,", 2009 (accessed 30 February 2010).
[21] Shafali, "How to draw caricatures," in The Evolution of a Caricaturist,, 2009.
[22] L. Redman, How to Draw Caricatures. Chicago: McGraw-Hill, 1984.
[23] K. H. Lai, P. W. H. Chung, and E. A. Edirisinghe, "Novel approach to neural network based caricature generation," in IET International Conference on Visual Information Engineering, Bangalore, India, pp. 88-93, 2006.
[24] M. Kaneko and O. Hasegawa, "Processing of face images and its application," IEICE Trans. Inf. & Syst., vol. E82-D, pp. 589-599, 1999.
[25] S. N. Yanushkevich, "Synthetic Biometrics: A Survey," in International Joint Conference on Neural Networks, Vancouver, BC, Canada, pp. 676-683, 2006.
[26] H. Benhidour and T. Onisawa, "Interactive face generation from verbal description using conceptual fuzzy sets," Journal of Multimedia, vol. 3, pp. 52-59, 2008.
[27] S. Iwashita, Y. Takeda, and T. Onisawa, "Expressive facial caricature drawing," in IEEE International Conference on Fuzzy Systems FUZZ-IEEE '99, vol. 3, p. 1597, 1999.
[28] J. Nishino, T. Kamyama, H. Shira, T. Odaka, and H. Ogura, "Linguistic knowledge acquisition system on facial caricature drawing system," in Proceedings of IEEE International Conference on Fuzzy Systems, vol. 3, p. 1591, 1999.
[29] S. Iwashita, Y. Takeda, T. Onisawa, and M. Sugeno, "Addition of facial expressions to facial caricature drawn using linguistic expressions," in The 10th IEEE International Conference on Fuzzy Systems, vol. 2, p. 736, 2001.
[30] T. Onisawa and Y. Hirasawa, "Facial caricature drawing using subjective image of a face obtained by words," in IEEE International Conference on Fuzzy Systems, vol. 1, p. 45, 2004.
[31] S. E. Brennan, "Caricature Generator: The Dynamic Exaggeration of Faces by Computer," Leonardo, vol. 40, pp. 392-400,



2007.
[32] P. Y. Chiang, W.-H. Liao, and T.-Y. Li, "Automatic caricature generation by analyzing facial feature," Proceedings of the Asian Conference on Computer Vision, Jan 27-30, 2004.
[33] S. Wang, Y. Wang, and X. Chen, "Weighted Active Appearance Models," Lecture Notes in Computer Science, vol. 4681/2007, pp. 1295-1304, 2007.
[34] H. Koshimizu, M. Tominaga, T. Fujiwara, and K. Murakami, "On KANSEI facial image processing for computerized facial caricaturing system PICASSO," in IEEE International Conference on Systems, Man, and Cybernetics, vol. 6, p. 294, 1999.
[35] J. Liu, Y. Chen, and W. Gao, "Mapping Learning in Eigenspace for Harmonious Caricature Generation," in 14th ACM International Conference on Multimedia, Santa Barbara, USA, pp. 683-686, 2006.
[36] J. Liu, Y. Chen, J. Xie, X. Gao, and W. Gao, "Semi-supervised Learning of Caricature Pattern from Manifold Regularization," in Proceedings of the 15th International Multimedia Modeling Conference, LNCS 5371, Springer-Verlag Berlin Heidelberg, pp. 413-424, 2009.
[37] W. Gao, R. Mo, L. Wei, Y. Zhu, Z. Peng, and Y. Zhang, "Template-based portrait caricature generation with facial components analysis," in IEEE International Conference on Intelligent Computing and Intelligent Systems, vol. 4, pp. 219-223, 2009.
[38] T. Kamimura and Y. W. Chen, "Facial caricaturing system based on multi-view Active Shape Models," in 2009 Fifth International Conference on Natural Computation, IEEE Computer Society, pp. 17-21, 2009.
[39] C. Wenjuan, Y. Hongchuan, S. Minyong, and S. Qingjie, "Regularity-Based Caricature Synthesis," in International Conference on Management and Service Science, pp. 1-5, 2009.
[40] M. Obaid, D. Lond, R. Mukundan, and M. Billinghurst, "Facial caricature generation using a quadratic deformation model," in Proceedings of the International Conference on Advances in Computer Entertainment Technology, Athens, Greece: ACM, 2009.
[41] C. Yoon-Seok, L. In-Kwon, and K. Bon-Ki, "Painterly caricature maker," in SIGGRAPH '09: Posters, New Orleans, Louisiana: ACM, 2009.
[42] F. A. P. V. Arruda, V. A. Porto, J. B. O. Alencar, H. M. Gomes, J. E. R. Queiroz, and N. Moroney, "A New NPR Approach for Image Stylization: Combining Edges, Color Reduction and Facial Exaggerations," in International Conference on Computational Intelligence for Modelling, Control & Automation, pp. 1065-1070, 2008.
[43] L. Liang, H. Chen, Y. Q. Xu, and H. Y. Shum, "Example-based Caricature Generation with Exaggeration," Proceedings of the 10th Pacific Conference on Computer Graphics and Applications, p. 386, 2002.
[44] C. C. Tseng, "Synthesis of exaggerative caricature with inter and intra correlations," in Proceedings of Computer Vision - ACCV 2007, Pt I, vol. 4843, pp. 314-323, 2007.
[45] Y. Q. Xu et al., "Caricature exaggeration," United States Patent Application Publication, US 2005/0212821 A1, Sept 29, 2005.
[46] T. W. Pai and C. Y. Yang, "Facial model analysis for 2D caricature generating systems," in Workshop on Artificial Intelligence, National Taiwan Ocean University, 2006.
[47] V. Boyer, "An Artistic Portrait Caricature Model," in Advances in Visual Computing, pp. 595-600, 2005.
[48] B. Gooch, E. Reinhard, and A. Gooch, "Human Facial Illustrations: Creation and Psychophysical Evaluation," ACM Transactions on Graphics, vol. 23, pp. 27-44, 2004.
[49] F. Ni, Z. Fu, Q. Cao, and Y. Zhao, "A new method for facial features quantification of caricature based on self-reference model," International Journal of Pattern Recognition and Artificial Intelligence, vol. 22, pp. 1647-1668, 2008.
[50] M. Steyvers, "Morphing techniques for manipulating face images," Behavior Research Methods, Instruments & Computers, vol. 31, pp. 359-369, 1999.
[51] G. Wolberg, "Image morphing: a survey," The Visual Computer, vol. 14, pp. 360-372, 1998.
[52] L. Varshney, "Face Metamorphosis and Face Caricature: A User's Guide," in Applications of Signal Processing, School of Electrical and Computer Engineering, Cornell University, 2004.
[53] E. Akleman, J. Palmer, and R. Logan, "Making Extreme Caricatures with a New Interactive 2D Deformation Technique with Simplicial Complexes," in Proceedings of the Third International Conference on Visual Computing, Mexico, pp. 165-170, 2000.
[54] S. J. Gibson, C. J. Solomon, and A. Pallares-Bejarano, "Non-linear photo-realistic caricatures using a parametric facial appearance model," in Behavior Research Methods, vol. 37, pp. 170-181, 2005.
[55] Z. Mo, J. P. Lewis, and U. Neumann, "Improved Automatic Caricature by Feature Normalization and Exaggeration," in Proceedings of ACM SIGGRAPH Sketches, New York: ACM, pp. 57-57, 2004.
[56] A. Pujol, J. J. Villanueva, and H.
Wechsler, "Automatic view based caricaturing," in Proceedings of 15th International Conference on Pattern Recognition, vol. 1, pp. 1072-1075, 2000.
[57] T. Fujiwara, "Web-PICASSO: Internet implementation of facial caricature system PICASSO," in Proceedings of Advances in Multimodal Interfaces, vol. 1948, pp. 151-159, 2000.
[58] K. Murakami, M. Tominaga, and H. Koshimizu, "An interactive facial caricaturing system based on the gaze direction of gallery," in Proceedings of 15th International Conference on Pattern Recognition, vol. 4, p. 710, 2000.
[59] X. Guangzhe, K. Masahide, and K. Akira, "Synthesis of facial caricature using eigenspaces," Electronics and Communications in Japan, Part III: Fundamental Electronic Science, vol. 88, pp. 65-76, 2005.
[60] E. Akleman, "Making caricature with morphing," in Visual Proceedings of ACM SIGGRAPH, New York, pp. 145-145, 1997.
[61] R. Zhou, J. Zhou, Y. Chen, J. Liu, and L. Li, "Caricature generation based on facial feature analysis," Journal of Computer Aided Design & Computer Graphics, vol. 18, pp. 1362-1366, 2006.
[62] M. Tominaga, T. Asao, K. Murakami, and H. Koshimizu, "KANSEI facial caricaturing based on the eye-camera interface from gallery," in 26th Annual Conference of the IEEE Industrial Electronics Society (IECON 2000), vol. 1, pp. 307-312, 2000.
[63] T. Yamaguchi, M. Tominaga, and H. Koshimizu, "Interactive facial caricaturing system based on Eye Camera," in Proceedings of SPIE - The International Society for Optical Engineering, vol. 5132, pp. 390-399, 2003.
[64] Y. Q. Xu, H. Y. Shum, M. Cohen, L. Liang, and H. Zhong, "Caricature exaggeration," United States Patent Application Publication US 2005/0212821 A1, Sept. 29, 2005.
[65] N. S. Rupesh, H. L. Ka, A. E. Eran, and W. H. C. Paul, "Use of Neural Networks in Automatic Caricature Generation: An Approach Based on Drawing Style Capture," in Lecture Notes in Computer Science: Pattern Recognition and Image Analysis, vol. 3523, Springer Berlin, 2005.
[66] C. Y. Yang, T. W. Pai, Y. S. Hsiao, and C. H. Chiang, "Face shape classification for 2D caricature generation," in Proceedings of Workshop on Consumer Electronics, 2002.
[67] F. Min, J. L. Suo, S. C. Zhu, and N. Sang, "An Automatic Portrait System Based on And-Or Graph Representation," in Lecture Notes in Computer Science, 2007.
[68] W. H. Liao and C. A. Lai, "Automatic Generation of Caricatures with Multiple Expressions Using Transformative Approach," in Arts and Technology 2009, LNICST vol. 30, pp. 263-271, 2010.
[69] A. J. Calder, D. Rowland, A. W. Young, I. Nimmo-Smith, J. Keane, and D. I. Perrett, "Caricaturing facial expressions," Cognition, vol. 76, pp. 105-146, 2000.
[70] O. Mohammad, M. Ramakrishnan, and B. Mark, "Generating and rendering expressive caricatures," in ACM SIGGRAPH 2010 Posters, Los Angeles, California, ACM, 2010.

Suriati bte Sadimon received her BSc in Computer Science with Education from Universiti Teknologi Malaysia and her MSc in Distributed Multimedia Systems from University of Leeds, United Kingdom. She is currently pursuing her PhD at Universiti Teknologi Malaysia. Her research interests include computer graphics, face image processing, and machine learning.

Mohd Shahrizal Sunar received his BSc in Computer Science from Universiti Teknologi Malaysia, his MSc in Computer Graphics and Virtual Environments from University of Hull, United Kingdom, and his PhD in Computer Graphics from Universiti Kebangsaan Malaysia. He is currently a senior lecturer and head of the Department of Computer Graphics and Multimedia, Universiti Teknologi Malaysia. His research interests include computer graphics, virtual reality, game programming, and image processing.

Habibollah Haron received his BSc in Computer Science from Universiti Teknologi Malaysia, his MSc in Computer Technology in Manufacture from University of Sussex, United Kingdom, and his PhD in Computer Aided Geometric Design from Universiti Teknologi Malaysia. He is currently an Associate Professor and head of the Department of Modelling and Industrial Computing, Universiti Teknologi Malaysia. His research interests include computer aided geometric design and soft computing.