JIN SUNG KIM, JAE YEOL SONG and JIN KOOK LEE
Department of Interior Architecture & Built Environment, Yonsei University, Republic of Korea
{wlstjd1320|songjy92}@gmail.com, leejinkook@yonsei.ac.kr
1. Introduction
1.1. RESEARCH OBJECTIVE
Artificial intelligence (AI) and machine learning have been applied in many fields of research and industry. In particular, deep artificial neural networks have achieved great success in many pattern recognition contests (Schmidhuber 2015). The advent of deep convolutional neural networks (CNNs) and powerful graphics processing units (GPUs) has played an important role in successful large-scale image recognition. General-purpose image recognition models such as AlexNet (Krizhevsky et al. 2012) and GoogLeNet (a.k.a. Inception) (Szegedy et al. 2015) have been proposed and made openly available for use in various domains.
Deep learning frameworks such as TensorFlow (Abadi et al. 2016) and their related libraries have also been released as open source and have helped people in various domains retrain image recognition models. Furthermore, deep learning models can detect objects in images or video. Through object detection approaches such as Faster R-CNN (Ren et al. 2015) and YOLO (Redmon et al. 2016), it is possible to analyze a scene and extract objects and their information, such as labels. This can be very useful for the various domains in which image processing is an important task.
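As an illustration of what such retraining looks like in practice, the following is a minimal transfer-learning sketch with TensorFlow/Keras. The dataset path "chair_images/train" and the class-per-folder layout are assumptions for illustration only, not the exact pipeline used later in this paper.

# Minimal transfer-learning sketch (assumed setup, not this paper's exact pipeline):
# reuse a pretrained general-purpose recognizer and retrain only a small classifier head.
import tensorflow as tf

# Hypothetical dataset directory: one sub-folder per class label.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chair_images/train", image_size=(299, 299), batch_size=32)
num_classes = len(train_ds.class_names)

# Pretrained InceptionV3 as a frozen feature extractor.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # InceptionV3 expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

Only the final dense layer is trained here; the convolutional layers keep the general-purpose features learned on ImageNet, which is what makes retraining feasible with only a few thousand domain images.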
To go beyond general-purpose image recognition and object detection, a domain knowledge-based approach is needed. In terms of interior design, furniture is one of the most common design elements handled by designers and clients in the design process. Analyzing each design element and its features from indoor scenes or product images is a common task in many activities related to interior design. In this regard, it is more meaningful to recognize the design features of each element, such as function, materials, and style, than to merely classify whether it is a chair or a table. The sub-features of furniture include implicit design features, such as design type, style, shape, and usage, as well as explicit properties, such as components, color, materials, and size. Interior design experts trained with sufficient, high-quality resources can successfully recognize such features. Likewise, successful automated extraction requires good training resources to learn what the features are and to train a system to recognize and classify them. The ultimate purpose of this research is to develop a system or applications to efficiently manage and utilize design information about interior design and its elements in digital images. Such applications may include a database system managing interior images together with interior design information, tools for interior design analysis, and design recommendation for interior designers and clients. The objective of this paper is to explore the possibilities of a deep learning-based approach to the automated recognition of design features of interior design elements. We developed an image dataset for training on several design features of chairs, retrained models using that dataset and a pre-trained model, and show the results of automated extraction through an application demonstration.
The scope of this work is as follows:
1. Training automated extraction of design features of chairs: furniture and the feature criteria of interest that can be checked visually are identified. Next, an image dataset is built for these features from various web-based sources. Multiple image recognition models are retrained, one for each objective.
2. Demonstration of automated extraction of a chair and some of its design features using the retrained models: by combining multiple automated extraction models, various design features are inferred from a given image.
Figure 1. Scope of work for automated extraction of interior design elements and their design
features.
2. Background
Fully reviewing deep learning and convolutional neural networks is out of the
scope of this paper. In this paper, the survey on important background and the
way to use such approach and tools for interior design domain are covered.
Figure 2. The structure of the typical convolutional neural networks for image recognition
(LeCun and Bengio 1995).
Figure 3. The framework combining multiple models for design feature extraction.
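For illustration, a framework like the one in Figure 3 can be realized by running several independently retrained classifiers over the same image and collecting their predictions. The sketch below uses hypothetical saved-model files and class lists; it is not the authors' exact implementation.

# Illustrative combination of several retrained single-purpose classifiers
# (hypothetical file names and class lists, not the implementation behind Figure 3).
import numpy as np
import tensorflow as tf

# One retrained model per design-feature category (assumed paths and labels).
CATEGORY_MODELS = {
    "armrest":  ("armrest_model.h5",  ["no_armrest", "armrest"]),
    "material": ("material_model.h5", ["fabric", "leather", "plastic", "velvet", "wood"]),
    "style":    ("style_model.h5",    ["classic", "eclectic", "modern", "scandinavian"]),
}

def extract_design_features(image_path):
    """Run every category model on one image and return the inferred label per category."""
    img = tf.keras.utils.load_img(image_path, target_size=(299, 299))
    batch = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)

    features = {}
    for category, (model_file, class_names) in CATEGORY_MODELS.items():
        model = tf.keras.models.load_model(model_file)
        probabilities = model.predict(batch)[0]
        features[category] = class_names[int(np.argmax(probabilities))]
    return features

print(extract_design_features("sample_chair.jpg"))

Each model answers one narrow question; combining their outputs yields the multi-feature description of the chair that the framework targets.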
frame could be included in the components. This feature is shown very explicitly. On the other hand, the other features are defined from various perspectives. This is because those non-functional features are subjective and implicit. Features related to human feeling, for example, are affected by an individual's characteristics and preferences. It is also difficult to clearly distinguish style, which is one of the most important features of appearance. However, these features obviously help people to understand the furniture.
Hence, this research aims to check whether a computer can learn these features from furniture images and then recognize them. Automated extraction of these features helps designers, customers, and users make decisions during interior design development. As a basic study toward that goal, the target is limited to the chair and its features. The categories of target chair features that can be identified in an image are as follows (figure 4):
1. Functional features that can be checked through additional components of the chair, such as armrest, backrest, and headrest
2. Materials, limited to five commonly used materials: leather, fabric, wood, plastic, and velvet
3. Seating capacity, which can be checked by the number of seats
4. Design style, limited to the most popular styles: classic, eclectic, modern, and Scandinavian
Figure 4. The categories and their detailed classes of target design features for training automated extraction.
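For reference, the taxonomy summarized in Figure 4 can be written down as a simple lookup structure; the seating-capacity class names below are an assumption of how the seat-division/length classes might be labelled.

# Categories of target design features and their detailed classes (after Figure 4);
# seating-capacity class names are assumed for illustration.
DESIGN_FEATURE_CLASSES = {
    "functional_features": ["armrest", "backrest", "headrest"],
    "materials": ["fabric", "leather", "plastic", "velvet", "wood"],
    "seating_capacity": ["1-seat", "2-seat", "3-seat", "4-seat"],
    "design_style": ["classic", "eclectic", "modern", "scandinavian"],
}

Such a mapping makes explicit which classes each retrained recognizer has to distinguish.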
processed according to the count of seat divisions and length, up to four in this paper. In the case of design style, we selected only four classes. Classification of design style is particularly controversial because its definition is implicit and may vary from person to person. Hence, we collected and classified not chairs formally catalogued as modern style, but chairs that can be seen as modern style according to the criteria of Miller (2010). A total of 3,933 images were prepared for training, and the number of training images per detailed class of each category is shown in figure 5.
Figure 5. The number of training images per detailed class of each category.
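As a practical note, a per-class image count such as the one plotted in Figure 5 can be produced directly from a class-per-folder dataset layout; the "dataset/<category>/<class>/*.jpg" layout below is an assumption for illustration, not the actual organization of the 3,933-image dataset.

# Count training images per detailed class, assuming a hypothetical
# dataset/<category>/<class>/*.jpg layout.
from pathlib import Path

def count_images_per_class(root="dataset"):
    counts = {}
    for category_dir in sorted(Path(root).iterdir()):
        if not category_dir.is_dir():
            continue
        counts[category_dir.name] = {
            class_dir.name: len(list(class_dir.glob("*.jpg")))
            for class_dir in sorted(category_dir.iterdir()) if class_dir.is_dir()
        }
    return counts

# Example output shape: {"materials": {"fabric": 210, "leather": 185, ...}, ...}
print(count_images_per_class())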
5. Conclusion
This paper proposed a deep learning-based approach for the automated extraction of design features from images. The research ultimately aims to automate the extraction of such features and to build applications for interior design, such as supporting case studies of interior design, an interior element database, and a recommendation system. As basic research toward that goal, we utilized deep learning-based CNN models to extract the design features of interior design elements from digital images. Starting from a well-trained model, we retrained multiple models to learn four different categories of design features. We categorized the design features of interest as functional features, materials, seating capacity, and design style, and collected an image dataset and subsets according to these categories. Each category includes detailed classes: functional features include armrest, backrest, and headrest; materials include fabric, leather, plastic, velvet, and wood; seating capacity covers seat division and length; and design style covers classic, eclectic, modern, and Scandinavian. A total of 3,933 images were used and six models were retrained. The mean training accuracy of the functional features is 94.3%, higher than that of the other models. This is because such features are more explicit, being determined by the existence or absence of components. In contrast, the detailed texture of materials, seating capacity, and design style were not recognized as well. We developed a GUI tool for extracting design features from given images. Through a demonstration of the tool, we confirmed that anyone can easily use it to extract features from a series of images. For future work, effective training methods will be needed for each feature, along with an expansion of the target design elements and features. With these, applications for interior design can be proposed, such as a digital image-based database of interior design elements and their information, or a recommendation system for interior design elements according to requirements.
Acknowledgements
This work was supported by a National Research Foundation of Korea Grant funded by the Korean Government (NRF-2015R1C1A1A01053497).
References
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A.,
Dean, J., Devin, M. and Ghemawat, S.: 2016, Tensorflow: Large-scale machine learning
on heterogeneous distributed systems, OSDI, 16, 265-283.
Bell, S. and Bala, K.: 2015, Learning visual similarity for product design with convolutional
neural networks, ACM Transactions on Graphics (TOG), 34(4), 98.
Dazkir, S.S. and Read, M.A.: 2012, Furniture forms and their influence on our emotional
responses toward interior environments, Environment and Behavior, 44(5), 722-732.
Hu, Z., Wen, Y., Liu, L., Jiang, J., Hong, R., Wang, M. and Yan, S.: 2017, Visual Classification
of Furniture Styles, ACM Transactions on Intelligent Systems and Technology (TIST), 8(5),
67.
Karayev, S., Trentacoste, M., Han, H., Agarwala, A., Darrell, T., Hertzmann, A. and
Winnemoeller, H.: 2013, Recognizing image style, arXiv preprint arXiv:1311.3715.
Krizhevsky, A., Sutskever, I. and Hinton, G.E.: 2012, Imagenet classification with deep
convolutional neural networks, Advances in neural information processing systems,
1097-1105.
LeCun, Y. and Bengio, Y. 1995, Convolutional networks for images, speech, and time series,
in M.A. Arbib (ed.), The handbook of brain theory and neural networks, MIT press.
LeCun, Y., Bengio, Y. and Hinton, G.: 2015, Deep learning, Nature, 521(7553), 436-444.
Lee, A.S.: 2010, A study on predicting consumers' satisfaction based on the features of furniture product designs, International Journal of Organizational Innovation (Online), 2(3), 138.
McDonagh, D., Bruseberg, A. and Haslam, C.: 2002, Visual product evaluation: exploring
users’ emotional relationships with products, Applied Ergonomics, 33, 231-240.
Miller, J.: 2010, Furniture: World Styles from Classical to Contemporary, Dorling Kindersley.
Redmon, J., Divvala, S., Girshick, R. and Farhadi, A.: 2016, You only look once: Unified,
real-time object detection, Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, 779-788.
Ren, S., He, K., Girshick, R. and Sun, J.: 2015, Faster R-CNN: Towards real-time object
detection with region proposal networks, Advances in neural information processing
systems, 91-99.
Schmidhuber, J.: 2015, Deep learning in neural networks: An overview, Neural networks, 61,
85-117.
Smardzewski, J.: 2015, Furniture design, Springer.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke,
V. and Rabinovich, A.: 2015, Going deeper with convolutions, Proceedings of the IEEE
conference on computer vision and pattern recognition, 1-9.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. and Wojna, Z.: 2016, Rethinking the inception
architecture for computer vision, Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 2818-2826.
Taigman, Y., Yang, M., Ranzato, M. and Wolf, L.: 2014, Deepface: Closing the gap to
human-level performance in face verification, Proceedings of the IEEE conference on
computer vision and pattern recognition, 1701-1708.
Zehtaban, L., Elazhary, O. and Roller, D.: 2016, A framework for similarity recognition of CAD
models, Journal of Computational Design and Engineering, 3(3), 274-285.