CHAPTER 1
INTRODUCTION
1.1 OBJECTIVE
The distribution of facial block colors in the form of a facial color gamut and
typical color centroids is important in facial block analysis. The facial color gamut
shows the range of facial block colors. The idea of the six color centroids is derived
from the principle of facial diagnosis in TCM, in which main color types are
commonly extracted from a set of color categories, and utilized as an important feature
for disease diagnosis.
Facial block color feature extraction is described in this section. First, color
feature extraction using the facial color gamut is presented, followed by six
centroids representing the main colors of the facial blocks. The six centroids are then
used to calculate a facial color feature vector for each block. There are two main types
of DM: Type 1 DM and Type 2 DM. People with Type 1 DM fail to produce insulin and
therefore require injections. Type 2 DM is characterized by insulin resistance and is
the most common type. Currently, there is no cure for either Type 1 or Type 2 DM. The
main component of this device is a SONY 3-CCD video camera, which is a high-end
industrial camera able to acquire 25 images. The size of each image is 640 × 480
pixels. A schematic diagram shows the viewing angle and imaging path: with the
camera placed in the center of the device, two fluorescent lamps are situated on
either side of the camera. The six centroids from the facial color gamut are depicted
as solid color squares, each with its label on top and its corresponding RGB value
below. These six centroids are red, yellow, light yellow, gloss, deep red, and black.
A fasting plasma glucose (FPG) test is the standard method practiced by many
medical professionals to diagnose DM. The FPG test is performed after the
patient has gone at least 12 h without food, and requires taking a sample of the
patient's blood (by piercing their finger) in order to analyze its glucose
level.
A noninvasive method to detect DM distinguishes Healthy and DM samples
(using facial block color features) via a sparse representation classifier (SRC).
A comparison was made between the color features of the two classes, and
classification between Healthy and DM was performed using a combination of
facial blocks and SRC.
sufficiently cropped, that is, if the face occupies most of the image space, then a high
correlation is expected within the class.
In contrast, dolphin whistles are narrowband signals, so the identifying parts
of the whistle spectrograms have very small support on the time-frequency plane. They
are embedded in ambient noise and mixed with echolocation clicks. Because the whistles
are not dense in the spectrogram, the training set of spectrogram images cannot be a
near-complete representation of its class. Such sparseness also begets a
very low signal-to-noise ratio (SNR) and signal-to-interference ratio (SIR) overall.
Here we distinguish between the underwater ambient noise, which is present nearly
everywhere in the time-frequency domain, and the localized echolocation clicks, which
we model as interference. The fortunate circumstance has to do with the localized
versions of the SNR and SIR: the former is very high (20 to 30 dB) in the vicinity of
the whistles, and the echolocation interference does not overlap the whistles except at
a negligible number of narrow regions. Cropping the spectrogram to a region that
contains only the defining segment of a whistle is akin to analyzing the signal over a
sub-band.
To prepare the whistle spectrogram data for the SRC classifier we chose to
preprocess it using the Local Binary Pattern (LBP) operator. Most preprocessing
procedures for the undertaken task involve contour tracing but the LBP technique does
not rely on whistle contours for obtaining salient information. The LBP operator
encodes both the global and the local characteristics of the calls into a compact
representation and eliminates the need for tedious formulations, parameter derivations,
denoising, and other prior processing. To establish identifying feature vectors, the
operation creates binary pattern templates of contours by exploiting the difference
between connected, line forming pixels, and diffuse textures.
For the undertaken task, it eases the classification by eliminating some
preprocessing algorithms as well as contour tracing. LBP operates on the spectrogram
defined over the time-frequency domain to extract the important features that are
directly fed to the SRC algorithm. Classes of dolphin calls can then be determined by
the linear basis pursuit algorithm or other procedures that minimize the ℓ1-norm of the
error vector. Introducing refinements to the simple SRC implementation can improve
significantly the classification performance. The results of our experimental studies
demonstrate that the SRC method coupled with LBP features is capable of
distinguishing classes of vocalizations with nearly perfect accuracy.
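The LBP preprocessing described above can be sketched as a basic 8-neighbor operator: each interior pixel is encoded by thresholding its eight neighbors against the center value, and the normalized histogram of codes becomes the compact feature vector fed to the classifier. This is a minimal sketch of the plain LBP operator, not the exact variant used in the study; the random array stands in for a spectrogram patch.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor Local Binary Pattern: each interior pixel is encoded
    by thresholding its 8 neighbors against the center value."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]
    # neighbor offsets in clockwise order starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: the compact feature vector."""
    codes = lbp_image(img)
    h = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return h / h.sum()

spec = np.random.default_rng(0).random((32, 32))  # stand-in spectrogram patch
feat = lbp_histogram(spec)
```

The histogram discards pixel positions but keeps local texture statistics, which is why no contour tracing or denoising is needed before classification.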
texture feature values. To obtain an optimal result, different Gabor filter and facial
block combinations were tested. The RBF kernel function with Blocks ABC achieved
the highest accuracy of 93%, a sensitivity of 94%, and a specificity of 92%. This
demonstrates that the system performs well, completing each examination in 1.69e-04 s
(without requiring the individual to fast), and all in a noninvasive manner.
It is common knowledge that healthy people have skin that looks and feels
smooth. Examining this skin up close will reveal tiny peaks around the hair follicles
and pores, while there will be tiny valleys in between these peaks. Throughout the skin
the peaks and valleys are consistent, which provides a uniform appearance.
Conversely, a person suffering from a disease will have non-uniform skin texture.
Hence, the texture value for Healthy is lower than that for Disease, and can therefore
be used in health status classification.
Fisherfaces, and Laplacianfaces have been used on full face images. Gabor filter banks,
which are used to approximately model the processing in the primary visual cortex,
have been successfully used as a feature extraction method. Moreover, these features
are locally concentrated and have been shown to be robust to block occlusion. Once the
feature vector has been extracted from an image, the vector is passed to a classifier
which then gives the recognized expression.
1.4.1 Feature Extraction and Classification
For performance comparison, two different sets of features were extracted from
the images and used to classify the facial expressions. Firstly, eigenvectors were
obtained using Principal Component Analysis (PCA) and projections on these basis
functions were used to form the feature vectors. These basis functions have been
termed Eigenfaces; the resulting features are denoted as Eigen in the tables below.
SRC is implemented as described: training image data (i.e., pixel values) are used to
form the dictionary A. Since an image of size 96 × 72 leads to dictionary vectors of
length 6912, we also experimented with using downsampled images as the dictionary vectors.
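The Eigenfaces step above can be sketched with a PCA on vectorized images: the top-k principal directions form the basis, and projections onto that basis are the feature vectors. This is a minimal sketch on random stand-in data; the sample count and k are illustrative assumptions, while the 96 × 72 image size comes from the text.

```python
import numpy as np

def eigenfaces(X, k):
    """PCA on vectorized face images.
    X: (n_samples, n_pixels) matrix, one flattened image per row.
    Returns the mean face and the top-k eigenvectors (basis functions)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

def project(X, mean, basis):
    """Feature vectors: projections of images onto the eigenface basis."""
    return (X - mean) @ basis.T

rng = np.random.default_rng(1)
X = rng.random((20, 96 * 72))      # 20 toy "images" of size 96 x 72
mean, basis = eigenfaces(X, k=10)
features = project(X, mean, basis)  # (20, 10) feature vectors
```

Projecting onto a small orthonormal basis reduces each 6912-pixel image to a short vector, which is the same dimensionality reduction motivating the downsampled dictionary vectors mentioned above.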
The current study investigated the influence of a low-level local feature (curvature) and
a high-level emergent feature (facial expression) on rapid search. These features
distinguished the target from the distractors and were presented either alone or
together. Stimuli were triplets of up and down arcs organized to form meaningless
patterns or schematic faces.
In the feature search, the target had the only down arc in the display. In the
conjunction search, the target was a unique combination of up and down arcs. When
triplets depicted faces, the target was also the only smiling face among frowning faces.
The face-level feature facilitated the conjunction search but, surprisingly, slowed the
feature search. These results demonstrated that an object inferiority effect could occur
even when the emergent feature was useful in the search. Rapid search processes
Finally, based on the above overviews, some current research highlights and
existing issues are discussed for further improvement of TCM diagnosis. Due
to the large amount of work on patient classification, the survey in this paper
may not be complete and needs further improvement. Nevertheless, it is sufficient to
reflect the current advances in patient classification for TCM. For a comprehensive
analysis of current TCM diagnosis for patient classification, we will complement our
reviews and complete the current overview tables in future work.
repeated measurements are made in similar but not identical experimental conditions,
the results are semiquantitative at best, and their use as indicators for follow-up is thus
limited. The method presented in this paper is based on analysis of an
electrocardiographic recording, taken for 5 min while the patient is comfortably at rest:
Like the ECG itself, repeated recordings may therefore be reliably compared to each
other and progression of autonomic neuropathy may be monitored over long periods of
time.
Main reasons for loss of vision in patients with diabetes mellitus are diabetic
macular edema and proliferative diabetic retinopathy. Incidence or progression of these
potentially blinding complications can be greatly reduced by adequate control of blood
glucose and blood pressure levels. Additionally, regular ophthalmic exams are
mandatory for detecting ocular complications and initiating treatments such as laser
photocoagulation in cases of clinically significant diabetic macular edema or early
proliferative diabetic retinopathy. In this way, the risk of blindness can be
considerably reduced. In advanced stages of diabetic retinopathy, pars-plana vitrectomy is
performed to treat vitreous hemorrhage and tractional retinal detachment. In recent
years, the advent of intravitreal medication has improved therapeutic options for
patients with advanced diabetic macular edema.
CHAPTER 2
LITERATURE SURVEY
J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, "Robust face recognition via
sparse representation": They consider the problem of automatically recognizing human
faces from frontal views with varying expression and illumination, as well as occlusion
and disguise. They cast the recognition problem as one of classifying among multiple
linear regression models and argue that new theory from sparse signal representation
offers the key to addressing this problem. Based on a sparse representation computed
by ℓ1-minimization, they propose a general classification algorithm for (image-based)
object recognition. This new framework provides new insights into two crucial issues
in face recognition: feature extraction and robustness to occlusion. For feature
extraction, they show that if sparsity in the recognition problem is properly harnessed,
the choice of features is no longer critical. What is critical, however, is whether the
number of features is sufficiently large and whether the sparse representation is
correctly computed. Unconventional features such as downsampled images and
random projections perform just as well as conventional features such as Eigenfaces
and Laplacianfaces, as long as the dimension of the feature space surpasses a certain
threshold predicted by the theory of sparse representation. This framework can handle
errors due to occlusion and corruption uniformly by exploiting the fact that these errors
are often sparse with respect to the standard (pixel) basis. The theory of sparse
representation helps predict how much occlusion the recognition algorithm can handle
and how to choose the training images to maximize robustness to occlusion. They
conduct extensive experiments on publicly available databases to verify the efficacy of
the proposed algorithm and corroborate the above claims. The paper introduces
the theory of sparse representation and its application to face recognition. They
verify that feature extraction is no longer critical to recognition once the sparsity of
the problem is properly harnessed. They improve the sparse description by
incorporating group sparseness. They also test the SRC algorithm on noisy and
occluded images. The experimental results show that SRC outperforms other
techniques under all circumstances.
X. Wang and D. Zhang proposed the paper "An optimized tongue image color
correction scheme." The color images produced by digital cameras are usually device-
dependent, i.e., the generated color information (usually presented in RGB color space)
depends on the imaging characteristics of the specific camera. This is a serious
problem in computer-aided tongue image analysis because it relies on the accurate
rendering of color information. In this paper, they propose an optimized correction
scheme that corrects tongue images captured in different device-dependent color
spaces to a target device-independent color space. The correction algorithm in this
scheme is generated by comparing several popular correction algorithms, i.e.,
polynomial-based regression, ridge regression, support vector regression, and neural
network mapping algorithms. They test the performance of the proposed scheme by
computing the CIE L*a*b* color difference (ΔE*ab) between estimated values and the
target reference values. The experimental results on the ColorChecker show that the
color difference is less than 5 (ΔE*ab < 5), while the experimental results on real
tongue images show that distorted tongue images (captured in various device-
dependent color spaces) become more consistent with each other. In fact, the average
color difference among them is reduced by more than 95%. This paper presents
an optimized color correction scheme for computer-aided tongue image analysis. The
proposed scheme first analyzes the particular color correction requirements of tongue
image analysis for the selection of a device-independent target color space, and then
optimizes the color correction algorithms accordingly. The proposed scheme is very
effective, reducing the color difference between images captured using different
cameras or under different lighting conditions to less than 0.0085, and the distances
between the color centers of tongue images by more than 95%, while images tend to
cluster toward an "ideal" or "standard" tongue image. In addition, they have
demonstrated the validity of the proposed method on real tongue images. In future
research, they intend to collect a much larger real tongue image database, including a
number of images of typical healthy and pathological tongues, to verify the validity of
the proposed scheme. In particular, they will seek to further address the ground-truth
problem by using physical measurements or feedback from TCM doctors. Utilizing
ColorCheckers other than the Munsell ColorChecker 24 as the reference target for color
correction will also be studied in future research.
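The ΔE*ab metric used above to score the correction can be sketched in its simplest (CIE76) form, as the Euclidean distance between two points in CIE L*a*b* space. The sample L*a*b* values below are made up for illustration only.

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIE L*a*b* space."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

estimated = (52.0, 21.0, 18.0)   # L*, a*, b* of a corrected patch (made-up values)
reference = (50.0, 20.0, 15.0)   # target reference value (made-up)
dE = delta_e_ab(estimated, reference)  # sqrt(2^2 + 1^2 + 3^2) ~= 3.74
```

A ΔE*ab below 5 is the acceptance threshold cited in the paper; the later refinements of the metric (ΔE94, ΔE2000) weight the lightness and chroma terms differently but keep the same idea.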
H. Wang, S. Li, and Y. Wang proposed the paper "Generalized Quotient Image."
They present a unified framework for modeling intrinsic properties of face images for
recognition. It is based on the quotient image (QI) concept, in particular on the existing
works of QI, Spherical Harmonic Image Ratio, and Retinex. Under this framework, they
generalize these previous works into two new algorithms: (1) Non-Point Light
Quotient Image (NPL-QI) extends QI to deal with non-point light sources by modeling
non-point light directions using spherical harmonic bases; (2) Self-Quotient Image (S-
QI) extends QI to perform illumination subtraction without the need for alignment or a
no-shadow assumption. Experimental results show that these algorithms can significantly
improve the performance of face recognition under varying illumination conditions.
The paper is organized as follows: they review the most related works on QI and
illumination modeling, describe the intrinsic factor of a face image, and propose the
generalized QI framework. Two new methods, NPL-QI and S-QI, are advanced. A
generalized QI framework based on previous works is presented. This unified framework
explains the essence of previous QI-based, Retinex-based, and image-ratio-based
algorithms without any assumption about illumination type or absence of shadow. Under
this framework, they derive two new algorithms, NPL-QI and S-QI. These algorithms
extend the original QI from a point lighting source to any type of lighting, without
restrictions on shadows.
Compared with the baseline algorithms of original QI and PCA, the two algorithms
demonstrate significant performance improvement. The reliability of facial recognition
techniques is often affected by variation in illumination, such as shadows and
changes in illumination direction. They present a novel framework, called the self-
quotient image, for the elimination of lighting effects in the image. Although this
method has a similar invariant form to the quotient image by Shashua, it does not need
the alignment and bootstrap images. The method combines the image processing
technique of edge-preserving filtering with the Retinex applications of Jobson and of
Gross and Brajovic. They have analyzed this algorithm with a 3D imaging model and
formulated the conditions under which illumination-invariant and -variant properties can
be realized, respectively. A fast anisotropic filter is also presented. The experimental
results show that the method is effective in removing the effect of illumination for
robust face recognition.
R. Basri and D. Jacobs proposed the paper "Lambertian Reflection and Linear
Subspaces." They prove that the set of all reflectance functions (the mapping from
surface normals to intensities) produced by Lambertian objects under distant, isotropic
lighting lies close to a 9D linear subspace. This implies that the images of a convex
Lambertian object obtained under a wide variety of lighting conditions can be
approximated accurately with a low-dimensional linear subspace, explaining prior
empirical results. It also gives a new and effective way of understanding the effects of
Lambertian reflectance as that of a low-pass filter on lighting. This description allows
the production of efficient recognition algorithms in which we know we are using an
accurate approximation to the model's images. Or, if we are willing to settle for a less
accurate approximation, we can compute the positive lighting that best matches a model
to an image by just solving a six-degree polynomial in one variable. They evaluate the
effectiveness of all these algorithms using a database of models and images of real
faces. Variations in lighting can have a significant impact on the appearance of an
object. A method for choosing
feature space. The significance of the nonlinear mapping is that it increases the
discriminating power of the KFA method, which is linear in the feature space but
nonlinear in the input space. The novelty of the KFA method comes from the fact that it
extends the two-class kernel Fisher methods by addressing multiclass pattern
classification problems, and improves upon the traditional generalized discriminant
analysis (GDA) method by deriving a unique solution (unlike the GDA solution,
which is not unique). The fractional power polynomial models further improve
performance of the proposed pattern recognition framework. Experiments on face
recognition using both the FERET database and the FRGC (face recognition grand
challenge) databases show the feasibility of the proposed framework. In particular,
experimental results using the FERET database show that the KFA method performs
better than the GDA method and the fractional power polynomial models help both the
KFA method and the GDA method improve their face recognition performance.
Experimental results using the FRGC databases show that the proposed pattern
recognition framework improves face recognition performance upon the BEE baseline
algorithm and the LDA-based baseline algorithm by large margins. Experimental
results show that the proposed framework improves face recognition performance by
large margins compared to the FRGC baseline algorithms. Applying the 2D Gabor
image representation and the labeled elastic graph matching method, Lyons then
proposed an algorithm for two-class categorization of gender, race, and facial
expression. One solution to this drawback is to analyze the reasons for overfitting and
propose new models with improved generalization abilities. In particular, the proposed
method achieves the rank one face recognition rate (face recognition rate of top
response being correct) of 78 percent, compared to the LDA-based baseline algorithm
rank one rate of 48 percent and the BEE baseline rank one rate of 37 percent.
acupuncture, and discusses in detail the use of the acupuncture points and the
principles of treatment. The material is based on rigorous reference to ancient and
modern Chinese texts, and explains the application of theory in a Western practice
context. The new edition includes additional new material, as well as revised chapters,
as detailed below. In particular, 50 more acupuncture points are discussed, and further
patterns (as well as some combined patterns) have been added. More case studies and
case histories have been included, and pinyin equivalents have been added to key
terms. Additional information is given on pathological processes and factors; principles
of point prescribing; diagnosis, especially pulse diagnosis; the relative weighting of
symptoms; identification of patterns according to the Four Levels and the Three
Burners; channel theory; Warm and Cold disease; and vital substances. The glossary
has been considerably expanded. In addition, the text presentation has been redesigned
to make it even easier and clearer for students to navigate around the chapters. A
second colour has been introduced, and more drawings, diagrams and tables now
supplement the text. Summaries are included at the beginning and end of each chapter,
and icons have been added to support the clarity of the text. A CD-ROM is also
included with the book, and it contains over 750 self-testing questions in a variety of
formats. The students have the option to be marked electronically on their answers.
Also included are 65 full colour surface anatomy images as an additional resource.
Full coverage of the basic tenets of Chinese Medicine, from its historical roots
to modern scientific research, methods, and findings.
Informative chapter on diagnosis in Chinese Medicine.
Practical discussion of Chinese herbs and their usage, including formulas for
various common ailments.
diagnose all kinds of diseases. Facial skin is sensitive and can reflect internal changes
faster than other parts of the body. Learn how you can use this ancient practice to
uncover clues about your health. In Medicine and Ayurveda, what shows up on your face
is connected directly or indirectly to the state of your internal organs and systems,
making our skin one of the greatest mirrors of inner imbalances. Chinese face mapping,
or 'Mien Shiang', literally means reading the face, and traditional Chinese medicine
firmly believes the face tells more stories than whether you are pretty or not. An
exciting new, full-colour edition of Face
Reading in Chinese Medicine featuring over 200 colour photographs and practical
instructions on how to conduct a face reading. Face reading has been part of
Traditional Chinese Medicine for many centuries, and Professor Lillian Bridges is a
popular academic and international lecturer on the subject who gained her fascinating
knowledge through her family line of Master Face Readers in China. Based on an
understanding of the shapes, markings and features of a face, practitioners can learn
about the health and life of a patient relating to the principles of Chinese medicine. In
addition to understanding how the body's internal functions - physical, psychological
and emotional - can be seen on a face, practitioners can also learn how to evaluate
Shen to understand non-verbal expressions. Technical and detailed information is
presented in an upbeat, insightful and highly readable manner. This was the first book
to focus on the deeper aspects of face reading and diagnosis; this edition includes
ancient Taoist knowledge regarding the Original Face and Facial Jing and Qi markers
which have previously only been taught through the oral tradition.
"Colorimetry for Dummies," the heart of the book covers the main topics in
colorimetry, including the space of beams, achromatic beams, edge colors, optimum
colors, color atlases, and spectra. Other chapters cover more specialized topics,
including implementations; metrics pioneered by Schrödinger and Helmholtz; and
extended color space. Our results highlight the importance of recognition by others for
women in the three science identity trajectories: research scientist; altruistic scientist;
and disrupted scientist. The women with research scientist identities were passionate
about science and recognized themselves and were recognized by science faculty as
science people. The women with altruistic scientist identities regarded science as a
vehicle for altruism and created innovative meanings of "science," "recognition by
others," and "woman of color in science." The women with disrupted scientist
identities sought, but did not often receive, recognition by meaningful scientific others.
Although they were ultimately successful, their trajectories were more difficult
because, in part, their bids for recognition were disrupted by the interaction with
gendered, ethnic, and racial factors.
will be refined in the coming years as new data become available. Since the color
appearance parameters and color appearance phenomena are numerous and the task is
complex, there is no single color appearance model that is universally applied; instead,
various models are used. Color science is a multidisciplinary field with broad
applications in industries such as digital imaging, coatings and textiles, food, lighting,
archiving, art, and fashion. Accurate definition and measurement of color appearance is
a challenging task that directly affects color reproduction in such applications. Color
Appearance Models addresses those challenges and offers insight into the preferred
solutions. Extensive research on the human visual system (HVS) and color vision has
been performed in the last century, and this book contains a good overview of the most
important and relevant literature regarding color appearance models. Overall, Color
Appearance Models is a suitable companion to the reference books in the field of
colorimetry, color reproduction, vision science, and digital imaging. It is also a good
starting point for those interested in color constancy, color gamut mapping, and color
management. The book is useful for students, scientists, and engineers in
multidisciplinary fields dealing with color issues. Those working in archiving and
entertainment industries can also benefit from it. Finally, the book can be used as a
tutorial with basic knowledge of mathematics and physics for learning about color
appearance.
CHAPTER 3
SYSTEM ANALYSIS
A fasting plasma glucose (FPG) test is the standard method practiced by many
medical professionals to diagnose DM. The FPG test is performed after the patient has
gone at least 12 h without food, and requires taking a sample of the patient’s blood (by
piercing their finger) in order to analyze its blood glucose level. Even though this
method is accurate, it can be considered invasive and slightly painful (the piercing
process). Therefore, there is a need to develop a noninvasive yet accurate detection
method.
color gamut was first applied such that each facial block is represented by six colors.
SRC with two sub dictionaries, one characterizing Healthy facial color features and the
other DM facial color features, was applied along with various values for sparse
coding. Given a test sample, its smallest reconstruction error calculated either from
Healthy or DM determines its class membership.
By evaluating a combination of seven different facial block groupings (formed
from three blocks) and various parameter values, the highest average accuracy of
97.54% was attained from Block A. This outperforms the traditional classifiers of k-NN
and SVM, and potentially provides a new way to detect DM, one which does not inflict
any harm or induce any pain. As part of our future work, more Healthy and DM samples
will be collected in order to further validate the statistical accuracy of the proposed
method.
Noninvasive method to detect Diabetes Mellitus by distinguishing Healthy and
DM samples (using facial block color features) via a sparse representation
classifier (SRC).
The origin of the disease can be reflected on the face through color changes.
The principle of SRC is to represent a test sample as a linear combination of the
training samples, or of a dictionary of atoms derived from the training samples.
Accurate, noninvasive detection with high accuracy.
An advantage of classification methods based on sparse representation is their
ability to deal with corrupted data within the same framework.
MODULES
Face Image → Binarization → Morphological Operation → Blocks A, B, C & D → 6 Color Features → SRC
The primary issue with automatic facial diagnosis is image capturing and its
representation. Facial images from various health statuses must be captured and
depicted in an accurate way under a standardized setting in order to ensure unbiased
feature extraction and analysis. The main component of this device is a SONY 3-CCD
video camera, which is a high-end industrial camera able to acquire 25 images. The
size of each image is 640 × 480 pixels. Blocks A, B, C, and D are located on the facial
image captured by the camera.
The angle between the incident light and emergent light is 45°, as recommended by
the Commission Internationale de l'Éclairage (CIE). In order to portray the color images in
a precise way so as to facilitate quantitative analysis, a color correction procedure is
performed before feature extraction and classification. This eliminates any variability
in color images caused by variations of illumination and device dependence, allowing
images taken in a variety of environments to be compared to each other. A
polynomial-based regression method was utilized to train the correction model based
on the corresponding values of a reference training set, obtained using the Munsell
ColorChecker in the device. Using this correction model, uncorrected facial images can
be corrected and rendered in the standard RGB (sRGB) color space.
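The polynomial-based regression correction described above can be sketched as follows: device RGB values are expanded into polynomial terms, and a correction matrix is fit by least squares against the reference ColorChecker values. This is a minimal sketch assuming a second-order term set; the actual term set and the 24-patch synthetic data here are illustrative assumptions.

```python
import numpy as np

def poly_terms(rgb):
    """Second-order polynomial expansion of an (N, 3) RGB array:
    [1, R, G, B, R*G, R*B, G*B, R^2, G^2, B^2]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * g, r * b, g * b, r * r, g * g, b * b], axis=1)

def fit_correction(device_rgb, target_srgb):
    """Least-squares fit of the matrix M so that
    poly_terms(device_rgb) @ M approximates target_srgb."""
    M, *_ = np.linalg.lstsq(poly_terms(device_rgb), target_srgb, rcond=None)
    return M

def correct(rgb, M):
    """Apply the trained correction model to uncorrected RGB values."""
    return poly_terms(rgb) @ M

rng = np.random.default_rng(2)
device = rng.random((24, 3))             # 24 patches, like a ColorChecker chart
true_M = rng.random((10, 3))
target = poly_terms(device) @ true_M     # synthetic "reference" sRGB values
M = fit_correction(device, target)
corrected = correct(device, M)
```

Once M is trained on the reference chart, the same `correct` call maps every uncorrected facial image into the device-independent target space.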
Four blocks (A, B, C, and D) of size 64 × 64, strategically located around the face,
are extracted (automatically) to better characterize a facial image. As long as the
blocks are located in the correct regions, analysis can be performed. If a patient is
positioned further away from the capture device, both the camera and block size will
require recalibration.
Binarization: applied first to each image; it converts the picture to black and white only.
Dilation: expands white regions to fill in small black holes within a block.
Erosion: expands black regions to fill in small white holes.
From there, the distance between the pupils is used to map out the blocks. Block A is
located on the forehead. Blocks B and D are symmetrical and found below the left and
right eyes, respectively.
Facial block color feature extraction using the facial color gamut is presented next; the facial color gamut refers to the range of facial block colors that can be reproduced by a device. It is followed by six centroids representing the main colors of the facial blocks. The six centroids are then used to calculate a facial color feature vector for each block. The distribution of facial block colors, in the form of a facial color gamut and typical color centroids, is important in facial block analysis. The six centroids characterize the most commonly found colors in the facial blocks (since they lie within the black boundary) and are spread out so that no two colors overlap. This is more efficient than using the entire distribution, which would make color feature extraction computationally inefficient. The six centroids are red, yellow, light yellow, gloss, deep red, and black.
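One way the centroid-based feature vector could be computed is sketched below: each pixel in a block is assigned to its nearest centroid, and the feature is the fraction of pixels per centroid. The RGB centroid values here are illustrative placeholders, not the values used in the report.

```python
import numpy as np

# Placeholder centroid colors (NOT the report's actual RGB values)
CENTROIDS = {
    "red":          (170,  80,  70),
    "yellow":       (200, 170,  90),
    "light yellow": (230, 210, 160),
    "gloss":        (240, 230, 220),
    "deep red":     (120,  40,  40),
    "black":        ( 60,  50,  50),
}

def color_feature(block_rgb):
    """Return a 6-vector: fraction of the block's pixels nearest each centroid."""
    pixels = block_rgb.reshape(-1, 3).astype(float)
    centers = np.array(list(CENTROIDS.values()), dtype=float)
    # Squared Euclidean distance from every pixel to every centroid
    d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    nearest = d.argmin(axis=1)
    counts = np.bincount(nearest, minlength=len(CENTROIDS))
    return counts / counts.sum()

# A 64x64 block colored exactly like the "black" centroid
block = np.full((64, 64, 3), (60, 50, 50), dtype=np.uint8)
f = color_feature(block)
print(f)   # all mass falls on the last ("black") centroid
```

The resulting 6-vector sums to one, so blocks of different sizes yield comparable features.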
Once the facial color feature vectors are extracted from the facial blocks, they are classified using SRC. SRC is first introduced in this section, followed by a discussion of how it can be applied to separate Healthy and DM samples. Given a test sample and a set of training samples, the idea of SRC is to represent the test sample as a linear combination of the training samples while requiring the representation coefficients to be as sparse as possible. If the test sample is from class i, then among its representation coefficients over all the training samples, only those from the samples in class i will be significant while the others will be insignificant; hence, the class label of the test sample can be determined.
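The decision rule can be sketched as follows. Full SRC solves an l1-regularized problem to obtain sparse coefficients; in this simplified sketch, ordinary least squares stands in for that step, keeping only the per-class reconstruction-residual comparison described above. The toy features and class dictionaries are invented for illustration.

```python
import numpy as np

def classify(test, dictionaries):
    """dictionaries: {label: (d x n_i) matrix whose columns are training features}.
    Returns the label whose dictionary reconstructs the test sample best."""
    residuals = {}
    for label, D in dictionaries.items():
        coeffs, *_ = np.linalg.lstsq(D, test, rcond=None)
        residuals[label] = np.linalg.norm(test - D @ coeffs)
    return min(residuals, key=residuals.get)   # smallest residual wins

# Toy 6-D features: Healthy samples lie near axis 1, DM samples near axis 2
healthy = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.9, 0.0, 0.0, 0.0, 0.1, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0, 0.1],
]).T
dm = np.array([
    [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.9, 0.1, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.1, 0.0, 0.0],
]).T

test = np.array([0.95, 0.05, 0.0, 0.0, 0.0, 0.0])
print(classify(test, {"Healthy": healthy, "DM": dm}))   # -> Healthy
```

Replacing the least-squares step with an l1 solver (e.g. basis pursuit) recovers the sparsity requirement of the actual method.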
CHAPTER 4
4.1 MATLAB
MATLAB supports structure data types. Since all variables in MATLAB are arrays, a more accurate name is "structure array", where each element of the array has the same field names. In addition, MATLAB supports dynamic field names (field look-ups by name, field manipulations, etc.).
Although MATLAB supports classes, the syntax and calling conventions differ significantly from those of other languages. MATLAB supports value classes and reference classes, depending on whether the class has handle as a superclass (for reference classes) or not (for value classes). MATLAB can call functions and subroutines written in the C programming language or Fortran. A wrapper function is created, allowing MATLAB data types to be passed and returned.
The Command Window is the window on the right hand side of the screen. This
window is used to both enter commands for MATLAB to execute, and to view the
results of these commands. The Command History window, in the lower left side of the
screen, displays the commands that have been recently entered into the Command
Window. In the upper left hand side of the screen there is a window that can contain
three different windows with tabs to select between them.
The first window is the Current Directory, which tells the user which M-files are
currently in use. The second window is the Workspace window, which displays which
variables are currently being used and how big they are. The third window is the
Launch Pad window, which is especially important since it contains easy access to the
available toolboxes, of which Image Processing is one. If these three windows do not all appear as tabs below the window space, simply go to View and select the ones you want to appear.
To gain some familiarity with the Command Window, note that if the result of a line of code should not reappear in the Command Window, a semicolon must be placed at the end of that line. If there is no semicolon, the result prints in the Command Window just below where the command is typed.
M-file- An M-file is a MATLAB document the user creates to store the code
they write for their specific application. An M-file is useful because it saves the code
that the user has written for their application. It can be manipulated and tested until it
meets the user’s specifications. The advantage of using an M-file is that the user, after
modifying their code, must only tell MATLAB to run the M-file, rather than reenter
each line of code individually.
Creating an M-file – To create an M-file, select File\New -> M-file.
Saving – The next step is to save the newly created M-file. In the M-file window, select File\Save As and choose a location that suits your needs, such as a disk, the hard drive, or the U drive. Working from a disk or the U drive is not recommended, so before editing and testing the M-file you may want to move it to the hard drive.
Opening an M-file – To open a previously created M-file, open MATLAB as described before, then go to File\Open and select the file.
Resaving – After writing code, the work must be saved before it can run. Save the code by going to File\Save.
Running Code – To run code, simply go to the main MATLAB window and type the name of the M-file after the >> prompt. Other ways to run the M-file are to press F5 while the M-file window is open, select Debug\Run, or press the Run button (see Figure 3.1) in the M-file window toolbar.
Images – The first step in MATLAB image processing is to understand that a digital image is composed of a two- or three-dimensional matrix of pixels. Individual pixels contain a number or numbers representing the grayscale or color value assigned to them.
Loading an Image – Many times the user wants to process a specific image; at other times they may just want to test a filter on an arbitrary matrix. In either case, MATLAB must load the image before processing can begin. If the image is in color but color is not important for the current application, convert the image to grayscale. Processing is then much simpler, since each pixel carries a single value rather than three. Color may not be important when trying to locate a specific object that has good contrast with its surroundings.
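The grayscale conversion mentioned above can be sketched as a weighted sum of the three color channels. The standard ITU-R BT.601 luminance weights are assumed here, since the report does not state which conversion is used.

```python
import numpy as np

def to_grayscale(rgb):
    """Collapse an H x W x 3 color image to one value per pixel."""
    weights = np.array([0.299, 0.587, 0.114])   # assumed BT.601 luminance weights
    return np.rint(rgb.astype(float) @ weights).astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 255, 255)          # one white pixel on a black image
gray = to_grayscale(img)
print(gray.shape)                     # (2, 2): a single value per pixel, not three
print(gray[0, 0])                     # 255
```

The weights sum to one, so pure white maps to 255 and pure black to 0.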
4.4 ADVANTAGES
Hardware : Pentium
Speed : 1.1 GHz
RAM : 1GB
Hard Disk : 20 GB
Floppy Drive : 1.44 MB
Key Board : Standard Windows Keyboard
The testing phase involves testing the system using various test data.
Preparation of test data plays a vital role in the system testing. After preparing the test
data, the system is tested using those test data. Errors are found and corrected by using
the following testing steps and corrections are recorded for future reference. Thus a
series of testing is performed on the system before it is ready for implementation.
Equivalence partitioning
Boundary-value analysis
Error guessing.
Here, elements are selected such that each edge of an equivalence class (EC) is the subject of a test. Example: if an input specifies a range of valid values, write test cases for the ends of the range and invalid-input test cases for conditions just beyond the ends. If the input requires a real number in the range 0.0 to 90.0 degrees, then write test cases for 0.0 and 90.0, plus invalid cases just outside these bounds. In this algorithm, however, integers are needed for the key length. Such test cases are more refined and are generally written with details such as 'Expected Result' and 'Test Data'.
4.6.3 Unit testing
Unit testing focuses on the verification effort of the smallest unit of design
module. Attention is diverted to individual modules, independently to locate errors.
This has enabled the detection of errors in coding and logic. The various modules of
the system are tested in unit testing method. Using the detailed description as a guide,
important control parts are tested to uncover errors within the boundary of the module.
The relative complexity of the tests and the errors detected as a result are limited by the
constrained scope established for unit testing. This test focuses on each module
individually, ensuring that it functions properly as a unit, and hence the name Unit
Testing.
Module tests seek to validate the code produced to create sets of logically connected subroutines and data which have been grouped together into modules. Module testing is concerned with testing the smallest piece of software for which a separate specification exists. After checking for errors, the modules can be integrated.
Integration testing is carried out after the modules are integrated. This test uncovers the errors associated with the interfaces between modules. The testing is done with sample data. The purpose of integration testing is to evaluate overall system performance. The objective is to take unit-tested modules and build a program structure from them.
CHAPTER 5
RESULTS
Figure: Input image, binarization result, dilation result, and erosion result for facial blocks C and D.
Figure: Average accuracy (%) of the k-NN, SVM, and SRC classifiers.
Number of diabetic subjects = 44
Number of non-diabetic subjects = 7
CHAPTER 6
6.1 CONCLUSION
A facial color gamut was first applied such that each facial block is represented by six colors. SRC with two sub-dictionaries, one characterizing Healthy facial color features and the other DM facial color features, was applied along with various parameter values for sparse coding. Given a test sample, the smaller of its reconstruction errors, calculated from either the Healthy or the DM sub-dictionary, determines its class membership. By evaluating a combination of seven different facial block groupings (formed from three blocks) and various parameter values, the highest average accuracy of 97.54% was attained from Block A. This outperforms the traditional classifiers k-NN and SVM, and potentially provides a new way to detect DM, one which does not inflict any harm or induce any pain. As part of our future work, more Healthy and DM samples will be collected in order to further validate the statistical accuracy of the proposed method.
REFERENCES
4. Liu B. and Wang T., "Inspection of Face and Body for Diagnosis of Diseases."
7. Wang X. and Zhang D., "An optimized tongue image color correction scheme."
9. Wright J., Yang A., Ganesh A., Sastry S., and Ma Y., "Robust face recognition via sparse representation."
10. Wyszecki G. and Stiles W.S., "Color Science: Concepts and Methods, Quantitative Data, and Formulae."