
2015 IEEE International Symposium on Technology in Society (ISTAS) Proceedings

ISBN: 978-1-4799-8283-7

A Semi-Automated Food Voting Classification System:
Combining User Interaction and Support Vector Machines
Patrick McAllister1, Huiru Zheng1, Raymond Bond1, Anne Moorhead2
School of Computing and Mathematics1
School of Communication2
Ulster University, Jordanstown
BT37, Newtownabbey, Northern Ireland

Abstract — Obesity is prevalent worldwide, including in the UK and Ireland, affecting all demographics. Obesity can have a detrimental effect on an individual's health, which can lead to chronic conditions. Different digital interventions have enabled users to photograph food items to be identified using different feature extraction methods. In this research, we propose a system that allows users to draw a polygon around a food item for segmentation. After segmentation, the region is classified using an automated voting system. Different features are extracted from the specified area, and a Support Vector Machine (SVM) is trained for each feature type. This system is a proof of concept, designed to investigate the effectiveness of employing multiple feature detection algorithms to classify food images. To classify food regions, a Bag-of-Features (BoF) approach is used for each feature type: Speeded Up Robust Features (SURF) point detection and descriptors, colour spatial features, and MSER region detection with SURF descriptors. Each of these methods has its own BoF to train an SVM. The aim of this research was to create a voting classification system that utilises each feature detection algorithm to identify the segmented food region through a plurality (or majority) vote. Testing showed that the system achieved 75% accuracy when combining the feature SVMs into a voting system. The voting system outperforms two of the individual feature classifiers (SURF, and MSER with SURF), while the LAB colour classifier slightly outperformed the voting mechanism. In regards to future work, further development and testing would be completed by increasing the variety of food items used in the training phase and by using a larger test dataset.

Keywords — Bag-of-features; classification; SURF; predict; semi-automated; voting

I. INTRODUCTION

Obesity is prevalent worldwide, including throughout the UK and Ireland [1, 2]. Obesity affects all age groups and brings with it various chronic conditions such as diabetes, hypertension, heart disease, and certain types of cancers [3]. Research has shown that children who are obese have a greater chance of being obese when they are adults [4]. Digital interventions are becoming more and more ubiquitous, and there has been an increase in calorie monitoring applications. For these applications, users must remember exact portion sizes to obtain nutritional content. This method can cause several accuracy issues: the user can miscalculate food portion size. To tackle this issue, image-based calorie and food identification techniques have been employed to determine food items and their content in images.

II. RELATED WORK

Different methods have been employed to identify food. In [5], SURF is incorporated with BoF, and colour histogram extraction is used to identify foods within an image. To extract separate food items for classification, GrabCut was used, and results show an 81.55% accuracy using these methods [5]. In [6], SURF and colour features were used to determine different food types; results show 86% precision and 83% recall. In [7], a food recognition system that utilised an optimised version of bag-of-words was created. This research involved collating and organising a visual dataset of 5000 images into categories; local features were collated to build a visual dictionary, and the system achieved 78% accuracy. In other research [8], colour segmentation, texture segmentation, and K-means clustering were utilised to segment foods, and feature extraction was used to classify foods with an SVM; using all feature extraction methods achieved an accuracy of 92.21%. In this research, we offer a method in which users are able to manually draw around a food item for segmentation and identification.

In [9], SIFT was investigated in combination with colour descriptors; the accuracy combining different feature descriptors, including SIFT, was 82.8%. Other research [10] utilises SIFT descriptors along with colour histograms to identify food items; it uses an SVM for each method (colour histogram and SIFT features) to classify major food types. In [11], SURF was used along with other interest point and blob detectors such as MSER and STAR to extract information from videos using a bag-of-words model; the system aims to identify food portions and estimate calorie content, and results show that SURF achieved a 95% accuracy rating. The aim of this research is to combine different



feature detection and descriptors to create a voting classification system. This research uses a semi-automated approach in that users are able to draw around a food portion within an image for classification.

III. OBJECTIVES

The objective of this work is to combine computer vision feature detection algorithms to identify food items within an image. The aim is to utilise a semi-automated approach that allows users to highlight the food item within the image using a polygonal tool. The highlighted portion of the image is cropped and analysed by the feature detection algorithms. A voting mechanism is applied using a plurality vote over each of the feature detection algorithms; the most popular label output by the algorithms becomes the final classification label for the food item. The system is then tested against a number of images to measure its accuracy and to find out whether using multiple feature detection algorithms is beneficial in identifying food items in images.

IV. METHODS

This section discusses the methods used in developing the system. The work proposed in this paper uses images as a way to document food logging using computer vision methods. Many popular food logging applications require the user to manually input the food item using a local or online database; this process can be cumbersome for some users. The research in this paper proposes a method to document food intake using computer vision technologies by allowing the user to draw around the food item within the image, which is then classified. A number of feature detection algorithms are used, and each has its own SVM. The purpose of using multiple feature detection algorithms is to find out if there is an increase in overall accuracy in correctly classifying food items through incorporating a voting classification mechanism. This section discusses the following approaches: (a) BoF, (b) SURF, (c) colour features, (d) MSER features, (e) K-means clustering, (f) SVM, (g) the semi-automated approach, and (h) voting classification.

A. Bag-of-Features

A BoF (Bag of Features) approach was used for each feature type extracted [12]. A dataset consisting of different food images was collected, and each image was categorised into a particular class. These images could then be used for training and validation sets. For training the classification system, the features were extracted from the training dataset. Each feature type was extracted from the same dataset consecutively to build a visual dictionary. Features are clustered to obtain a collection of key features (i.e. a visual dictionary), which can be used to classify test images. To extract features from the training image set, both Scale Invariant Feature Transform (SIFT) and SURF methods were considered as key point detectors. SURF was selected for detecting key points due to its greater speed when compared to the slower execution of SIFT. In addition, SURF is also scale and rotation invariant [13]. SURF utilises integral images in order to decrease computational time. SURF point selection was assigned to 'detector' as opposed to utilising a 'grid' layout, as the grid method takes much longer in comparison to the detector method [14]. It was also decided that colour features were needed to increase the information gain and accuracy of the classification system. Maximally Stable Extremal Regions (MSER) were also used along with SURF feature extraction. The dataset contained 4 categories, each representing a different vegetable. The images were collated by taking images with a smartphone, from Flickr, and from Google's free-to-use image search.

B. SURF Features

The SURF method differs from SIFT in that it operates using a Hessian matrix for identifying interest points. SURF utilises blob detection to attain interest points: blob elements are detected at locations where the determinant of the Hessian is at a maximum [13]. The determinant of the Hessian is also used for scale selection [13]. The Hessian matrix H(x, σ) at a point x in an image I at scale σ is defined in equation (1).

    H(x, σ) = | Lxx(x, σ)  Lxy(x, σ) |
              | Lxy(x, σ)  Lyy(x, σ) |                  (1)

Lxx(x, σ), Lxy(x, σ), and Lyy(x, σ) represent the convolution of the Gaussian second order derivative with the image I at point x [13]. SURF uses the Hessian matrix and integral images to allow for fast approximations; convolutions are expedited using integral images. An integral image value at a specific location represents the sum of all the pixels of the input image within the rectangular region above and to the left of that location, and can be computed with the recurrence stated in (2) [9, 11].

    IΣ(x, y) = i(x, y) + IΣ(x − 1, y) + IΣ(x, y − 1) − IΣ(x − 1, y − 1)          (2)

Here (x, y) is a point within the image, i(x, y) is the pixel value at that point, and IΣ(x − 1, y), IΣ(x, y − 1), and IΣ(x − 1, y − 1) are the previously computed integral image values at the neighbouring points. Once the integral image is built, the sum of any rectangular region can be calculated with only three operations [11]. Within this system the SURF feature vector was increased to 128 elements; a SURF descriptor size of 128 enables greater accuracy when matching features.

C. Colour Features

Colour features were extracted from the training dataset to increase classification accuracy. Each image within the training set was converted to the LAB colour space. Converting RGB images to LAB colour space enables colour values to be efficiently quantified, making it easier to distinguish between different colours. The next step is to compute the average colour value using pixel blocks. This is achieved by reducing the size of images by a factor of 16, so that colour is analysed and extracted over 16 x 16 blocks. The colour feature extracted from the image is augmented by appending the XY location from the original image; this takes into account the layout of the colour features across the original image [15]. Once the features are extracted, K-means clustering is then used on the features to create the visual dictionary.

D. Maximally Stable Extremal Regions (MSER)

The MSER detector is a blob detection algorithm used with images. The regions within this blob detector are classed as extremal regions of the intensity function [17]. MSER is able to extract different regions from an image; these regions are called MSER regions [16].

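The integral image recurrence in equation (2) above is straightforward to implement. The following is a minimal pure-Python sketch (illustrative only; the system described in this paper used MATLAB's Computer Vision toolbox). It builds the integral image with one pass and then sums an arbitrary rectangle with the three add/subtract operations described in Section IV-B.

```python
def integral_image(img):
    """Build an integral image: I[y][x] holds the sum of all pixels
    above and to the left of (x, y), inclusive, per equation (2)."""
    h, w = len(img), len(img[0])
    I = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            up = I[y - 1][x] if y > 0 else 0
            left = I[y][x - 1] if x > 0 else 0
            diag = I[y - 1][x - 1] if y > 0 and x > 0 else 0
            I[y][x] = img[y][x] + up + left - diag
    return I

def box_sum(I, x0, y0, x1, y1):
    """Sum of the rectangle with corners (x0, y0)..(x1, y1) inclusive,
    using three add/subtract operations on the integral image."""
    total = I[y1][x1]
    if y0 > 0:
        total -= I[y0 - 1][x1]
    if x0 > 0:
        total -= I[y1][x0 - 1]
    if y0 > 0 and x0 > 0:
        total += I[y0 - 1][x0 - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
I = integral_image(img)
print(box_sum(I, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```

This constant-time box sum is what lets SURF approximate its Gaussian second-order derivative filters with box filters at any scale without re-scanning pixels.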


MSER is described as a stable region detector due to a number of advantages: it is invariant to affine transformations of image intensities, it is stable in that detected regions remain unaffected across different thresholds, and regions are able to be detected across different scales. In this work MSER is used as a blob detector, and the MSER regions are passed to the SURF descriptor to extract SURF features [17].

E. K-means Clustering

K-means clustering is used to cluster features around centroid data points. K-means clustering is the process of partitioning data into clusters by determining the centroid point for each of the k clusters; data points are then assigned to their nearest centroid point. This process is used to obtain a visual dictionary consisting of visual words [17].

F. SVM Classifier

The key features are then encoded for each category to train an SVM multiclass classifier. The SVM has several parameters that are defined to optimise the classification of images, and different parameters were configured to train the SVM classifier. Data standardisation was enforced to standardise predictors, and SMO (Sequential Minimal Optimisation) was used for optimisation.

G. Semi-Automated Approach

The system is described as semi-automatic given that a user is required to manually draw a polygon around a food item. We hypothesise that this man-machine approach will increase the accuracy of the system: the user assists the system in locating the food, and the system is subsequently responsible for its classification. This will ultimately lead to better calorie estimation. The system utilises the semi-automatic approach to accurately pinpoint food boundary areas by using a polygon tool. Once the SVM classifier has been trained, the system can be used to categorise unseen images. In order to train the algorithm, training sets were partitioned from the overall dataset. The training image set contained a mixture of images of food items downloaded from Flickr as well as photographs captured using a camera on a smartphone. The classifier is mainly intended to be used on a region specified by the user; this area is classified as opposed to classifying the entire image.

H. Voting Classification

Once the features have been extracted from the training sets and encoded within each SVM (as described in Fig. 2), the classifier for each feature type is able to categorise test images. The system operates using a majority voting mechanism. When the classifiers for each feature type categorise an image, the result is saved to an array. This is completed for each feature type, so the resulting array consists of three food type labels. The array is then analysed to find the most common label, and the result is output to the user. This voting classification system is able to analyse the user-segmented area with a variety of different feature extraction methods: SURF, colour extraction, and MSER detection with SURF descriptors are used to classify the segmented area to increase information gain and to promote accuracy. The voting classification process is described in Fig. 3.

Fig. 1. Diagram Describing Feature Extraction Processes.

Once the classifiers had been trained using the different feature extraction methods, it was decided to test each feature classifier using a series of validation images. This process would produce a confusion matrix for each of the feature type classification tests and describe how accurate the feature extraction and descriptor methods were in assigning image categories to test data. Once the initial confusion matrix is generated from each feature type classifier, the results can be analysed to generate accuracy performance ratings as well as sensitivity performance ratings. Further tests will also be conducted using the voting classification system: a series of images will be collated and used to test the accuracy of the proposed system. These images were taken with a smartphone and were not used within the training stage to train the SVMs. The accuracy was determined for each confusion matrix using equation (3).

    Accuracy = (TP + TN) / (TP + TN + FP + FN)          (3)



This equation (3) describes how an accuracy rating is obtained for a two-class problem; it is a generalisation, as the confusion matrix for each classification test holds up to four classes. TN represents true negatives and TP represents true positives; TP is the number of images that were correctly classified. FP and FN represent the numbers of items that were incorrectly classified as positive and incorrectly classified as negative, respectively. To assess the precision rating for each confusion matrix, the following equation is utilised.

    Precision = TP / (TP + FP)          (4)

Equation (4) was used to find the percentage of predictions that were labelled correctly under each category. The accuracy was computed for the overall confusion matrix, and the precision was computed for each classification category; after the precision was calculated for each category, the overall precision average was calculated. The next section lists the confusion matrix accuracy and precision for each feature type classification. It is important to note that this testing phase classified entire food images without using the semi-automated approach. After the initial testing for each classification, 20 test images were gathered through Flickr to test the system.

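Equations (3) and (4) extend to the four-class confusion matrices used here by reading the diagonal as correct predictions and each column as one predicted class. A short illustrative Python sketch (not part of the described system, which computed these values in MATLAB) applied to the SURF matrix of Table I reproduces the reported figures of roughly 85.4% accuracy and 86.5% average precision:

```python
def accuracy(cm):
    """Overall accuracy per equation (3): correct predictions
    (the diagonal) over all predictions."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

def mean_precision(cm):
    """Average per-class precision per equation (4): for each predicted
    class (a column), TP over the column total, averaged over classes."""
    k = len(cm)
    precisions = []
    for j in range(k):
        col = sum(cm[i][j] for i in range(k))
        precisions.append(cm[j][j] / col if col else 0.0)
    return sum(precisions) / k

# Table I (SURF): rows are known labels, columns are predicted labels
# in the order beans, broccoli, corn, peas
surf_cm = [[11, 0, 1, 0],
           [1, 10, 1, 0],
           [3, 0, 9, 0],
           [0, 1, 0, 11]]
print(round(accuracy(surf_cm) * 100, 2))        # 85.42
print(round(mean_precision(surf_cm) * 100, 2))  # 86.52
```

The small discrepancy with the table (85.41% / 86.51%) is consistent with the paper truncating rather than rounding the final decimal.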
Fig. 2. Diagram Describing Voting SVM and Voting Classification System.

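The voting step of Fig. 2 reduces to taking the most common label among the three per-feature SVM outputs. A minimal sketch, assuming illustrative label names (the actual system stored the per-classifier results in an array and selected the plurality label):

```python
from collections import Counter

def plurality_vote(labels):
    """Return the most common label among the per-feature SVM outputs.
    Counter.most_common breaks ties by first occurrence, which with
    three voters acts as a simple plurality rule."""
    return Counter(labels).most_common(1)[0][0]

# e.g. the SURF, LAB colour, and MSER+SURF classifiers disagree two-to-one
print(plurality_vote(["peas", "corn", "peas"]))  # peas
```

With three voters over four classes a genuine three-way tie is possible; the sketch above falls back on the first classifier's output in that case, which is one of several reasonable tie-breaking policies.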
V. RESULTS

This section presents the results of each of the SVM classifiers for each feature type. The following tables present a confusion matrix stating the classification accuracy for each category. A validation dataset consisting of 12 images per category was used to test each SVM; this image dataset was not used within the training stage. In each confusion matrix, rows give the known category and columns give the predicted category.

TABLE I. CONFUSION MATRIX USING SURF FEATURES

Labels     beans  broccoli  corn  peas
beans        11      0        1     0
broccoli      1     10        1     0
corn          3      0        9     0
peas          0      1        0    11

TABLE II. CONFUSION MATRIX USING LAB COLOUR FEATURES

Labels     beans  broccoli  corn  peas
beans        12      0        0     0
broccoli      0      9        0     3
corn          0      0       12     0
peas          0      5        0     7

TABLE III. CONFUSION MATRIX USING MSER AND SURF FEATURES

Labels     beans  broccoli  corn  peas
beans        12      0        0     0
broccoli      1     10        1     0
corn          2      0        8     2
peas          1      0        1    10

TABLE IV. ACCURACY AND PRECISION ANALYSIS

Feature Type          Accuracy   Precision
SURF                   85.41%     86.51%
LAB Colour Feature     83.33%     83.57%
MSER + SURF            83.33%     84.58%
Average                84.02%     84.88%

Further testing was undertaken to assess the SVM classification for each feature type by incorporating a voting mechanism. A validation dataset was gathered; the images in this validation dataset were not used within the training stage and were collated through Flickr. Each image was used as input to the system, and the polygonal tool was used by the user to draw around the food portion. The region of interest is segmented and then saved as an image, and a trained SVM classifies the region for each feature type. The result of each SVM is saved into a variable, and the label with the highest occurrence is output to the user. Table V shows the results of these tests.



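The user-drawn polygon of the semi-automated approach ultimately yields a binary mask of pixels to classify. As a minimal illustration (not the paper's MATLAB implementation), a ray-casting point-in-polygon test is enough to turn polygon vertices into such a mask; the ROI coordinates below are hypothetical:

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: count crossings of a horizontal ray from (x, y)
    against the polygon edges; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_mask(width, height, poly):
    """Binary mask of the user-drawn region of interest,
    tested at pixel centres."""
    return [[point_in_polygon(x + 0.5, y + 0.5, poly) for x in range(width)]
            for y in range(height)]

# a hypothetical user-drawn square ROI inside a 6x6 image
roi = [(1, 1), (4, 1), (4, 4), (1, 4)]
mask = polygon_mask(6, 6, roi)
print(sum(sum(row) for row in mask))  # 9 pixel centres fall inside
```

Only the masked pixels would then be passed to the per-feature SVM classifiers, rather than the whole image.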
The aim of this research was to allow users to classify a region of interest (ROI), specified by the user, using different feature-based Support Vector Machine (SVM) classifiers. Different feature detection types were used within this process (SURF, LAB, and MSER with SURF). Each trained SVM feature classifier was tested using a dataset to compute a confusion matrix; the combined mean accuracy of these classifiers was 84.02% when classifying 12 images per category for each SVM. A multi-classifier approach was used: each SVM classifier categorised the ROI and the result was saved within a variable.

A plurality voting mechanism is used to determine the final classification. Much research has used classification techniques to automatically determine food regions by analysing an entire image [7, 21]; our approach instead allows the user to determine the ROI for classification, which should improve classification because the SVM is only classifying a particular section of the image. From the results (Table V), the voting system is able to achieve 75% accuracy by combining the SVM feature classifiers into a voting classification system. The final classification accuracy was greater than two of the SVM feature classifiers (SURF, and MSER with SURF). It was realised that more feature types were needed to correctly classify the ROI for information gain; colour features and MSER features were incorporated with SURF to develop a voting system, and results show that this system is able to improve upon the accuracy results of SURF and MSER + SURF (Table V).

The purpose of this research was to investigate computer vision methods to analyse food images for accurate and convenient food logging. Using images as a way to document nutritional intake allows users to obtain more information, such as portion size and calorie content. Many food applications require the user to use self-reporting techniques to document food portions, and many users may report the wrong portion size, which can then lead to inaccuracies in determining calorie content. This work allows users to identify food items using a semi-automated approach and incorporates a plurality system to determine the final classification. In regards to implications for obesity management, this system tries to promote self-management by accurately documenting food items: food regions specified by the user are classified instead of the user manually typing in the food item. The technique described in previous work [22] could be employed in this system to estimate calorie content by aligning calories with food portion area after classification.

VI. DISCUSSION AND FUTURE WORK

Obesity is a major health concern globally, including in the UK and Ireland [23, 24]. Being obese can have a detrimental effect on an individual's health, as it can cause chronic conditions such as diabetes, bowel cancer, heart disease, and hypertension [29]. In the UK, research conducted in 2013 revealed that 67.1% of men and 57.2% of women aged 16 and over were overweight or obese [30]. Technology has become very pervasive in society, notably in the use of smartphone devices, and this trend is prevalent across different ages and demographics. Research has shown that 78% of teens own a mobile phone and 47% of those own a smartphone [28]. Research also shows that six in ten adults use a smartphone [27]. This is an opportunity to use these technologies to create interventions that promote healthy eating and physical activity.

One of the most popular methods to monitor energy intake is using a food log application. This type of application allows users to document food intake; the most popular form allows users to search a local or online food database for food items. Research [25, 26] suggests that people who self-monitor their energy intake may improve their weight loss. Other research utilises computer vision methods to ascertain food type and to quantify portion size [5, 6]. Common food log applications use drop-down menus to select food items for documentation; computer vision methods offer an alternative that involves finding nutritional information from an image. This would be more convenient for the user than searching through food databases.

The system described in this research covers the identification process of such a system. In this work, different feature detection algorithms and descriptors were incorporated together to create a voting classification system. This system is a proof of concept to find out whether combining these feature detection algorithms improves accuracy in identifying food items. A voting mechanism was used, employing plurality to output the final classification label for the specified food portion. The specified food portion is selected using a polygonal tool, which allows the user to draw around the food item in the image. In terms of the impact of this system, initial testing has been completed and shows promising results, increasing accuracy in identifying food items over two of the feature detection algorithms used (SURF, and MSER with SURF). Table V states that using SURF features to train an SVM achieved 50% accuracy and the MSER with SURF feature SVM achieved 70%. LAB colour feature classification achieved marginally better results than the voting system, with 80%.

The overall voting classification result is 75%; however, more images will be used in the future for further accuracy testing to find out if there is a change in the results. The number of food item categories will also be increased, as will the number of images for each category during the training phase. The system would then be tested against a larger validation dataset. The initial testing phase used 20 images; future work will increase this number. In regards to feature extraction, more feature extraction algorithms will be incorporated, such as Histograms of Oriented Gradients (HOG) [19], texture features (SFTA [18], Gabor filters), and the Gray-Level Co-occurrence Matrix (GLCM) [20].



TABLE V. INITIAL TESTING OF VOTING CLASSIFICATION SYSTEM FOR 20 VALIDATION IMAGES

Image  Known Food Region  SURF      LAB       MSER + SURF  Classification  Correct
#1     Peas               Peas      Peas      Peas         Peas            ✓
#2     Peas               Beans     Peas      Peas         Peas            ✓
#3     Peas               Peas      Peas      Peas         Peas            ✓
#4     Peas               Peas      Corn      Peas         Peas            ✓
#5     Peas               Peas      Peas      Peas         Peas            ✓
#6     Broccoli           Broccoli  Broccoli  Broccoli     Broccoli        ✓
#7     Broccoli           Broccoli  Broccoli  Broccoli     Broccoli        ✓
#8     Broccoli           Broccoli  Peas      Broccoli     Broccoli        ✓
#9     Broccoli           Broccoli  Peas      Broccoli     Broccoli        ✓
#10    Broccoli           Broccoli  Corn      Broccoli     Broccoli        ✓
#11    Beans              Peas      Beans     Peas         Peas            ✗
#12    Beans              Peas      Beans     Peas         Peas            ✗
#13    Beans              Beans     Beans     Peas         Beans           ✓
#14    Beans              Peas      Beans     Beans        Beans           ✓
#15    Beans              Peas      Beans     Peas         Peas            ✗
#16    Corn               Peas      Corn      Peas         Peas            ✗
#17    Corn               Peas      Corn      Peas         Peas            ✗
#18    Corn               Peas      Corn      Corn         Corn            ✓
#19    Corn               Peas      Corn      Corn         Corn            ✓
#20    Corn               Peas      Corn      Corn         Corn            ✓

Accuracy                  50%       80%       70%          75%
Precision                 30.76%    84.52%    86.36%       87.5%

This system would also include methods to detect the portion area, measured in cm², by including a predetermined reference point in future test images, as described in previous work [22]. The reference point will be an object of known area, e.g. a coin, a thumb, or a square. The system will detect the pixels in the ROI specified by the user, and the reference point would be used to calculate the portion area in cm². The food portion can then be calculated and the region classified for accurate food logging. For further accuracy, depth sensor methods could be employed alongside this classification system to accurately determine food portion size for food logging. Instead of using an SVM to classify the ROI, other supervised classification techniques, such as Neural Networks (NN), could be employed within this system to investigate any changes or improvements in accuracy. A combination of such supervised techniques could be used along with feature detection algorithms to garner more results for further analysis.

ACKNOWLEDGEMENTS

Acknowledgement to DELNI for funding this research.

REFERENCES

[1] National Obesity Awareness Week, "State of the Nation's Waistline", Obesity in the UK, 2014.
[2] Keane, E., Kearney, P. M., Perry, I. J., Kelleher, C. C., & Harrington, J. M. (2014). Trends and prevalence of overweight and obesity in primary school aged children in the Republic of Ireland from 2002-2012: a systematic review. BMC Public Health, 14(1), 974. http://doi.org/10.1186/1471-2458-14-974
[3] NHLBI Obesity Education Initiative Expert Panel on the Identification, Evaluation, and Treatment of Obesity in Adults (US). Overweight and Obesity: Background. National Heart, Lung, and Blood Institute. Retrieved from http://www.ncbi.nlm.nih.gov/books/NBK1995/
[4] Ebbeling, C. B., Pawlak, D. B., & Ludwig, D. S. (2002). Childhood obesity: public-health crisis, common sense cure. Lancet, 360(9331), 473–82. http://doi.org/10.1016/S0140-6736(02)09678-2
[5] Kawano, Y., & Yanai, K. (2013). Real-Time Mobile Food Recognition System. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2013 IEEE Conference on (pp. 1–7). http://doi.org/10.1109/CVPRW.2013.5
[6] Pham, C., Thi, N., & Thuy, T. (2014). Fresh Food Recognition Using Feature Fusion, 298–302.

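The proposed cm² estimation amounts to scaling the ROI pixel count by the real-world area per pixel obtained from the reference object. A sketch with hypothetical pixel counts and a hypothetical 2 cm x 2 cm square marker, assuming the reference object and food lie in roughly the same plane:

```python
def portion_area_cm2(roi_pixels, ref_pixels, ref_area_cm2):
    """Estimate the real-world ROI area: each pixel covers
    ref_area_cm2 / ref_pixels square centimetres, assuming the
    reference object and the food are in the same image plane."""
    cm2_per_pixel = ref_area_cm2 / ref_pixels
    return roi_pixels * cm2_per_pixel

# hypothetical values: a 2 cm x 2 cm marker covers 400 pixels,
# and the user-drawn food region covers 12000 pixels
print(portion_area_cm2(12000, 400, 4.0))  # 120.0
```

This simple ratio ignores perspective distortion and food height, which is why the text above also suggests depth sensors for further accuracy.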


[7] Anthimopoulos, M. M., Gianola, L., Scarnato, L., Diem, P., & Mougiakakou, S. G. (2014). A Food Recognition System for Diabetic Patients Based on an Optimized Bag-of-Features Model, 18(4), 1261–1271.
[8] Alfanindya, A., Hashim, N., & Eswaran, C. (2013). Content Based Image Retrieval and Classification using speeded-up robust features (SURF) and grouped bag-of-visual-words (GBoVW). Proceedings of 2013 International Conference on Technology, Informatics, Management, Engineering and Environment, TIME-E 2013, vol. 1, 77–82. http://doi.org/10.1109/TIME-E.2013.6611968
[9] Y. He, C. Xu, N. Khanna, C. J. Boushey, and E. J. Delp, "Food image analysis: Segmentation, identification and weight estimation," Proc. IEEE Int. Conf. Multimed. Expo, 2013.
[10] W. Wang, P. Duan, W. Zhang, F. Gong, P. Zhang, and Y. Rao, "Towards a pervasive cloud computing based food image recognition," in Proceedings - 2013 IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, GreenCom-iThings-CPSCom 2013, 2013, pp. 2243–2244.
[11] N. Chen, Y. Y. Lee, M. Rabb, and B. Schatz, "Toward Dietary Assessment via Mobile Phone Video Cameras," AMIA Annu. Symp. Proc., vol. 2010, pp. 106–110, 2010.
[12] Uk.mathworks.com, 'Image Category Classification Using Bag of Features - MATLAB & Simulink Example', 2015. [Online]. Available: http://uk.mathworks.com/help/vision/examples/image-category-classification-using-bag-of-features.html. [Accessed: 01-Jun-2015].
[13] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," in 9th European Conference on Computer Vision, 2006.
[14] Uk.mathworks.com, 'Detect SURF features and return SURFPoints object - MATLAB detectSURFFeatures', 2015. [Online]. Available: http://uk.mathworks.com/help/vision/ref/detectsurffeatures.html. [Accessed: 08-Jun-2015].
[15] Uk.mathworks.com, 'Image Retrieval Using Customized Bag of Features - MATLAB & Simulink Example', 2015. [Online]. Available: http://uk.mathworks.com/help/vision/examples/image-retrieval-using-customized-bag-of-features.html. [Accessed: 01-Jun-2015].
[16] J. Matas, O. Chum, M. Urban and T. Pajdla, 'Robust wide-baseline stereo from maximally stable extremal regions', Image and Vision Computing, vol. 22, no. 10, pp. 761-767, 2004.
[17] Uk.mathworks.com, 'k-means clustering - MATLAB kmeans', 2015. [Online]. Available: http://uk.mathworks.com/help/stats/kmeans.html. [Accessed: 01-Jun-2015].
[18] Mathworks.com, 'SFTA Texture Extractor - File Exchange - MATLAB Central', 2015. [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/37933-sfta-texture-extractor. [Accessed: 01-Jun-2015].
[19] Dalal, N.; Triggs, B., "Histograms of oriented gradients for human detection," Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1, pp. 886-893, June 2005. doi: 10.1109/CVPR.2005.177
[20] Uk.mathworks.com, 'Gray-Level Co-Occurrence Matrix (GLCM) - MATLAB & Simulink', 2015. [Online]. Available: http://uk.mathworks.com/help/images/gray-level-co-occurrence-matrix-glcm.html. [Accessed: 01-Jun-2015].
[21] Y. He, C. Xu, N. Khanna, C. J. Boushey, and E. J. Delp, "Food image analysis: Segmentation, identification and weight estimation," Proc. IEEE Int. Conf. Multimed. Expo, 2013.
[22] P. McAllister, et al., 'Semi-Automated System for Predicting Calories in Photographs of Meals', ICE/IEEE International Technology Management Conference, 22-24 June 2015.
[23] Public Health England, "Childhood Obesity", noo.org.uk. [Online]. Available: http://www.noo.org.uk/NOO_about_obesity/child_obesity. [Accessed 1 June 2015].
[24] DHSSPSNI, Health Survey Northern Ireland, "First Results 2013/2014", 2013-2014.
[25] R. Wing and J. Hill, 'Successful Weight Loss Maintenance', Annu. Rev. Nutr., vol. 21, no. 1, pp. 323-341, 2001.
[26] R. Baker and D. Kirschenbaum, 'Self-monitoring may be necessary for successful weight control', Behavior Therapy, vol. 24, no. 3, pp. 377-394, 1993.
[27] Media.ofcom.org.uk, 'Half of UK homes turn to tablets - in just five years', 2015. [Online]. Available: http://media.ofcom.org.uk/news/2015/five-years-of-tablets/. [Accessed: 17-Aug-2015].
[28] M. Madden, et al., "Teens and Technology", Pew Research Center, Harvard University, Washington, D.C., 2013.
[29] NHLBI Obesity Education Initiative Expert Panel on the Identification, Evaluation, and Treatment of Obesity in Adults (US), "Overweight and Obesity: Background," National Heart, Lung, and Blood Institute, 1998.
[30] Public Health England, "Childhood Obesity", noo.org.uk. [Online]. Available: http://www.noo.org.uk/NOO_about_obesity/child_obesity. [Accessed 1 December 2014].
