CONVR 2009

Proceedings of the 9th International Conference on Construction Applications of Virtual Reality, Sydney, Australia, 5-6 November 2009

Edited by

Xiangyu Wang, The University of Sydney, Australia
Ning Gu, The University of Newcastle, Australia

Sponsors

The University of Sydney

The University of Teesside

The University of Newcastle

FORUM8 CO.

The CONVR 2009 Conference Organising Committee
Xiangyu Wang, The University of Sydney, Australia
Ning Gu, The University of Newcastle, Australia
Michael Rosenman, The University of Sydney, Australia
Anthony Williams, The University of Newcastle, Australia
Nashwan Dawood, The University of Teesside, United Kingdom

ISBN 978-1-74210-145-3
© 2009. All rights reserved. No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.

Published and printed at The University of Sydney, Australia, by University Publishing Service.

TABLE OF CONTENTS

PREFACE vii

CONVR 2009 INTERNATIONAL SCIENTIFIC COMMITTEE viii

ACKNOWLEDGEMENTS ix

KEYNOTE SPEECH ABSTRACTS AND SPEAKER BIOS

Construction Synthetic Environments--------------------------------------------------------1 Simaan M. AbouRizk

Seeking How Visualization Makes Us Smart------------------------------------------------3 Phillip S. Dunston

intUBE Project-------------------------------------------------------------------------------------4 Nashwan Dawood

The Role of VR in Improving Collective Intelligence for AEC Processes-------------6 Mary Lou Maher

I. DESIGN COLLABORATION

Virtual Worlds and Tangible Interfaces: Collaborative Technologies that Change the Way Designers Think-----------------------------------------------------9 Mary Lou Maher, Ning Gu and Mijeong Kim

A Novel Camera-based System for Collaborative Interaction with
Multi-dimensional Data Models--------------------------------------------------------------19
Michael Van den Bergh, Jan Halatsch, Antje Kunze, Frédéric Bosché, Luc Van Gool and Gerhard Schmitt

Towards a Collaborative Environment for Simulation Based Design----------------29 Michele Fumarola, Stephan Lukosch, Mamadou Seck and Cornelis Versteegt

Empirical Study for Testing Effects of VR 3D Sketching on Designers’ Cognitive Activities----------------------------------------------------------------39 Farzad Pour Rahimian and Rahinah Ibrahim

Analysis of Display Luminance for Outdoor and Multi-user Use ---------------------49 Tomohiro Fukuda

A Proposed Approach to Analyzing the Adoption and Implementation of
Virtual Reality Technologies for Modular Construction--------------------------------59
Yasir Kadhim, Jeff Rankin, Joseph Neelamkavil and Irina Kondratova

Collaborative 4D Review through the Use of Interactive Workspaces---------------71 Robert Leicht and John Messner

Design Scenarios: Methodology for Requirements Driven Parametric Modelling of High-rises---------------------------------------------------------79 Victor Gane and John Haymaker

An Experimental System for Natural Collocated and Remote Collaboration--------91 Jian Li and Jingyu Chen

Urban Wiki and VR Applications--------------------------------------------------------------97 Wael Abdelhameed and Yoshihiro Kobayashi

II. AUTOMATION AND INTERACTION

Toward Affective Handsfree Human-machine Interface Approach in Virtual Environments-based Equipment Operation Training--------------------------107 Iman Mohammad Rezazadeh, Xiangyu Wang, Rui Wang and Mohammad Firoozabadi

Construction Dashboard: An Exploratory Information Visualization Tool for Multi-system Construction----------------------------------------------------------------117 Cheng-Han Kuo, Meng-Han Tsai, Shih-Chung Kang and Shang-Hsien Hsieh

Computer Gaming Technology and Porosity-----------------------------------------------127 Russell Lowe and Richard Goodwin

Virtual Reality User Interfaces for the Effective Exploration and Presentation of Archaeological Sites----------------------------------------------------------139 Daniel Keymer, Burkhard Wünsche and Robert Amor

Interactive Construction Documentation----------------------------------------------------149 Antony Pelosi

Case Studies on the Generation of Virtual Environments of Real World Facilities-----------------------------------------------------------------------------155 Michele Fumarola and Ronald Poelman

Evaluation of 3D City Models Using Automatic Placed Urban Agents----------------165 Gideon Aschwanden, Simon Haegler, Jan Halatsch, Rafaël Jeker, Gerhard Schmitt and Luc van Gool

Integration of As-built and As-designed Models for 3D Positioning Control and 4D Visualization during Construction---------------------177 Xiong Liang, Ming Lu and Jian-Ping Zhang

Augmenting Site Photos with 3D As-built Tunnel Models for Construction Progress Visualization----------------------------------------------------------187 Ming Fung Siu and Ming Lu

Automatic Generation of Time Location Plan in Road Construction Projects------197 Raj Kapur and Nashwan Dawood

Development of 3D-Simulation Based Genetic Algorithms to Solve Combinatorial Crew Allocation Problems-------------------------------------------207 Ammar Al-Bazi, Nashwan Dawood and John Dean

Integration of Urban Development and 5D Planning--------------------------------------217 Nashwan Dawood, Claudio Benghi, Thea Lorentzen and Yoann Pencreach

III. SIMULATION AND ANALYSIS

A Simulation System for Building Fire Development and
the Structural Response due to Fire----------------------------------------------------------229
Zhen Xu, Fangqin Tang and Aizhu Ren

Physics-based Crane Model for the Simulation of Cooperative Erections-----------237 Wei Han Hung and Shih Chung Kang

Interaction between Spatial and Structural Building Design:
A Finite Element Based Program for the Analysis of
Kinematically Indeterminable Structural Topologies-------------------------------------247
Herm Hofmeyer and Peter Russell

Virtual Environment on the Apple iPhone/iPod Touch-----------------------------------257 Jason Breland and Mohd Fairuz Shiratuddin

3D Visibility Analysis in Virtual Worlds: The Case of Supervisor---------------------267 Arthur van Bilsen and Ronald Poelman

Evaluation of Invisible Height for Landscape Preservation Using Augmented Reality-----------------------------------------------------------------------279 Nobuyoshi Yabuki, Kyoko Miyashita and Tomohiro Fukuda

An Experiment on Drivers’ Adaptability to Other-hand Traffic Using a Driving Simulator----------------------------------------------------------------------287 Koji Makanae and Maki Ujiie

C2B: Augmented Reality on the Construction Site----------------------------------------295 Léon van Berlo, Kristian Helmholt and Wytze Hoekstra

Development of a Road Traffic Noise Estimation System Using Virtual Reality Technology-------------------------------------------------------------305 Shinji Tajika, Kazuo Kashiyama and Masayuki Shimura

Application of VR Technique to Pre- and Post-Processing for Wind Flow Simulation in Urban Area--------------------------------------------------------315 Kazuo Kashiyama, Tomosato Takada, Tasuku Yamazaki, Akira Kageyama, Nobuaki Ohno and Hideo Miyachi

Construction Process Simulation Based on Significant Day-to-day Data-------------323 Hans-Joachim Bargstädt and Karin Ailland

Effectiveness of Simulation-based Operator Training------------------------------------333 John Hildreth and Michael Stec

IV. BUILDING INFORMATION MODELLING

BIM Server: Features and Technical Requirements--------------------------------------345 Vishal Singh and Ning Gu

LEED Certification Review in a Virtual Environment-----------------------------------355 Shawn O’Keeffe, Mohd Fairuz Shiratuddin and Desmond Fletcher

Changing Collaboration in Complex Building Projects through the Use of BIM-------------------------------------------------------------------------363 Saskia Gabriël

The Introduction of Building Information Modelling in Construction Projects:
An IT Innovation Perspective-----------------------------------------------------------------371
Arjen Adriaanse, Geert Dewulf and Hans Voordijk

Creation of a Building Information Modelling Course for Commercial Construction at Purdue University------------------------------------------383 Shanna Schmelter and Clark Cory

Preface

The Faculty of Architecture, Design and Planning at the University of Sydney and the School of Architecture and Built Environment at the University of Newcastle are proud to co-host CONVR 2009, the 9th International Conference on Construction Applications of Virtual Reality. The conference is the ninth gathering of an international body of scholars and professionals from across the Architecture, Engineering and Construction (AEC) disciplines who are dedicated to building knowledge about, and applications of, a broad range of advanced visualisation technologies in the AEC industry. Although the conference name features "Virtual Reality" (VR), it covers a much broader range of visualisation-related topics beyond VR, which has widened its audience and broadened its impact over the years.

The CONVR 2009 conference has attracted considerable attention and recognition among the research and professional communities involved in the AEC industry. The organising committee received close to 70 abstracts. After two rounds of rigorous double-blind review by the International Scientific Committee (the first round for abstracts and the second for full papers), the committee is pleased to include 39 high-quality full papers in this volume.

The CONVR 2009 conference provides a unique platform for experts in the field to report, discuss and exchange new knowledge resulting from the most current research and practice in advanced visualisation technologies. The Organising Committee is pleased to present selected papers that highlight state-of-the-art developments and research directions across the following four themes:

Design Collaboration

Automation and Interaction

Simulation and Analysis

Building Information Modelling

In the following pages, you will be able to find a range of quality papers that truly capture the quintessence of these concepts and will certainly challenge and inspire readers.

The CONVR 2009 Conference Organising Committee:

Xiangyu Wang (Chair), The University of Sydney, Australia Ning Gu (Co-chair), The University of Newcastle, Australia Michael Rosenman (Co-chair), The University of Sydney, Australia Anthony Williams, The University of Newcastle, Australia Nashwan Dawood, The University of Teesside, United Kingdom

November 2009

CONVR 2009 International Scientific Committee

Karin Ailland, Bauhaus-University Weimar
Robert Amor, The University of Auckland
Serafim Castro, The University of Teesside
Chiu-Shui Chan, Iowa State University
Clark Cory, Purdue University
Robert Cox, Purdue University
Nashwan Dawood, The University of Teesside
Paulo Dias, Instituto de Engenharia Electrónica e Telemática de Aveiro (IEETA)
Ning Gu, The University of Newcastle, Australia
Fátima Farinha, EST, Algarve University, Portugal
Michele Fumarola, Delft University of Technology
Jan Halatsch, ETH Zurich
David Heesom, The University of Wolverhampton
Wei-Han Hung, National Taiwan University
Rahinah Ibrahim, University Putra Malaysia
Vineet Kamat, The University of Michigan
Jeff Kan, Taylor College Malaysia
Shih-Chung Kang, National Taiwan University
MiJeong Kim, Kyung Hee University
Robert Lipman, National Institute of Standards and Technology (NIST)
Russell Lowe, The University of New South Wales
Koji Makanae, Miyagi University
John Messner, Penn State University
Esther Obonyo, The University of Florida
Svetlana Olbina, The University of Florida
Aizhu Ren, Tsinghua University
Enio Emanuel Ramos Russo, Catholic University in Rio
Iman Rezazadeh, Islamic Azad University
Michael Rosenman, The University of Sydney
Marc Aurel Schnabel, Hong Kong Chinese University
Mohd Fairuz Shiratuddin, The University of Southern Mississippi
Augusto de Sousa, Universidade do Porto
Andrew Strelzoff, Brown University
Walid Tizani, The University of Nottingham
Xiangyu Wang, The University of Sydney
Vaughn Whisker, Penn State University
Antony Williams, The University of Newcastle, Australia

ACKNOWLEDGEMENTS

We express our gratitude to all authors for their enthusiasm in contributing their research to these proceedings. The proceedings would not have been possible without the constructive comments and advice of all the International Scientific Committee members. We are also deeply grateful to the other members of the organising committee, Dr. Michael Rosenman, Professor Anthony Williams, and Professor Nashwan Dawood. Thanks and appreciation go to Ms Rui Wang for designing the proceedings and CD covers. We are also grateful to the conference assistants, Ms Mercedes Paulini, Mr Lei Hou and Mr Wei Wang, whose backup support was essential to the success of the conference. Financial aid came from the Design Lab at the Faculty of Architecture, Design and Planning at the University of Sydney; the School of Architecture and Built Environment at the University of Newcastle, Australia; and Forum8 Co.

KEYNOTE SPEECH 1

Dr. Simaan M. AbouRizk, Professor and NSERC Industrial Research Chair in Construction Engineering and Management Canada Research Chair in Operation Simulation Department of Civil and Environmental Engineering, University of Alberta, Canada.

“Construction Synthetic Environments”

The presentation describes our vision for a highly integrated, interoperable, distributed simulation framework for modeling and analyzing construction projects.

We first describe the evolution of simulation applications (over a period of 15 years) within the construction industry in Alberta by providing the attendees with an overview of select implementation of simulation-based systems in industrial applications. Systems were introduced through collaborations between major construction companies and the University of Alberta. Those systems were deployed by the partner companies in different ways, including planning for tunnel construction projects, scheduling of modules in a module yard with space constraints for an industrial contractor, analysis of fabrication shops for improvement, process improvement studies, and others.

The presentation then provides an overview of our vision of advanced simulation systems we call Construction Synthetic Environments (COSYE), the intent of which will be to achieve “a fully integrated, highly automated construction execution environment across all project phases and throughout the facility’s life cycle”, as articulated in Figure 1. The figure demonstrates a large-scale distributed simulation framework that provides a comprehensive representation of an entire construction project with all of its components, including: a model of the facility (product model), the production/construction operations (process models), the business models, the resources involved, and the environment under which the project takes place. The framework allows the simulation models to extend throughout the life of the project with real-time input and feedback to manage the project until it is handed over to operations. The goal is to provide a virtual world where a construction project is planned, executed, and controlled with minimum disruption to the actual project. The framework will provide means to establish:

“detailed and comprehensive modeling of the entire life cycle of facilities; collaboration amongst a variety of stakeholders in building the required virtual models that represent the project; seamless integration between various forms of simulation (discrete, continuous, heuristic, etc.) and simulation software and tools; reusable simulation components for many applications (e.g. weather generation, equipment breakdown processes etc); and man-machine interactions with the models.”
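The abstract describes COSYE only at the architectural level. Purely as an illustration of one of the "reusable simulation components" mentioned above, the following Python sketch models an equipment-breakdown process that could be plugged into different discrete-event models; the class name, parameters, and exponential distributions are invented for illustration and are not the COSYE API:

```python
import random

class EquipmentBreakdownProcess:
    """Hypothetical reusable component: samples breakdown and repair
    events for one piece of equipment over a simulated period.
    (Illustrative only -- not part of the COSYE framework itself.)"""

    def __init__(self, mtbf_hours, mean_repair_hours, seed=None):
        self.rng = random.Random(seed)   # own RNG so runs are repeatable
        self.mtbf = mtbf_hours           # mean time between failures
        self.mttr = mean_repair_hours    # mean time to repair

    def downtime(self, period_hours):
        """Total repair downtime sampled within one working period."""
        clock, down = 0.0, 0.0
        while True:
            clock += self.rng.expovariate(1.0 / self.mtbf)  # next failure
            if clock >= period_hours:
                return down
            repair = self.rng.expovariate(1.0 / self.mttr)
            down += repair
            clock += repair

# The same component can be reused for any equipment in any model:
crane = EquipmentBreakdownProcess(mtbf_hours=40, mean_repair_hours=2, seed=1)
print(round(crane.downtime(period_hours=80), 2))  # downtime over ten 8-hour shifts
```

Reuse here comes from the component owning its own random stream and exposing a single, model-agnostic interface, which is the spirit of the framework's "reusable simulation components".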

Over the past few years we have completed prototype synthetic environments using the COSYE framework, including ones for industrial construction, steel construction, a bidding game, and tunnel construction. We will provide an overview of these during the presentation and select one environment to demonstrate in greater detail.

Figure 1: The Construction Synthetic Environment Framework

Bio of Dr. AbouRizk:

Dr. AbouRizk currently holds the positions of “Canada Research Chair in Operation Simulation” and the “Industrial Research Chair in Construction Engineering and Management” in the Department of Civil and Environmental Engineering at the University of Alberta. He received his PhD degree from Purdue University in 1990 and his MSCE from Georgia Tech in 1985. He joined the University of Alberta in 1990 and was promoted to full professor in July 1997.

Dr. AbouRizk’s research accomplishments have been recognized through numerous awards for the quality of his research in the field of construction engineering and management, including the prestigious ASCE Peurifoy Construction Research Award, the E.W.R. Steacie Memorial Fellowship from the Natural Sciences and Engineering Research Council of Canada, the Thomas Fitch Rowland Prize for best paper in construction engineering management, the Killam Professorship, the Walter Shanly Award, and the E. Whitman Wright Award.

Dr. AbouRizk has led the development of the Hole School of Construction Engineering at the University of Alberta into one of the most reputable construction engineering and management programs in North America, boasting global recognition for the success of its graduate students and the strength of its faculty members. The success and distinctiveness of this program are based on strong industry collaboration in the areas of research, teaching, and overall practice. Dr. AbouRizk’s method has garnered wide support from funding agencies, policy makers, and industry practitioners, and has attracted some of the brightest students from around the world. He is renowned in the academic construction community for his research in computer simulation and its applications in construction planning, productivity improvement, constructability reviews and risk analysis.

KEYNOTE SPEECH 2

Dr. Phillip S. Dunston, Associate Professor in the Division of Construction Engineering and Management, School of Civil Engineering, at Purdue University, USA.

“Seeking How Visualization Makes Us Smart”

It was in 1996 that a computer science researcher brought a vision for applying Augmented Reality visualization technology to the attention of attendees at a civil engineering computing conference. The door was then opened for a new set of inquiring minds to join architects who were already taking a look at the possibilities of virtual visualization. The exciting visualization opportunities presented by Virtual Reality and Mixed Reality technologies have since captured the attention of a growing number of researchers from the broad architecture, engineering, construction and facilities management (AEC/FM) domain. Unlike computer science and computer engineering researchers, who have a technology development perspective, the AEC/FM community has a user perspective that must be developed as part of our contribution to shaping the development of these technologies and ultimately realizing their adoption into practice. Opportunities exist for improving practice through new efficiencies and through devising new ways of executing work tasks. Our attention to human resource capabilities and the attendant human factors can yield successful technology development decisions and integration. This keynote talk will review our experience in exploring this softer side as well as how we have inevitably had to confront the more technical challenges, and will also suggest how some future objectives might be pursued.

Bio of Dr. Dunston:

Phillip S. Dunston is an Associate Professor with appointments in the Division of Construction Engineering and Management and the School of Civil Engineering at Purdue University in West Lafayette, Indiana, USA. He is a 2003 US National Science Foundation CAREER grantee for research on Mixed Reality applications for the architecture, engineering and construction (AEC) industry. He directs the Advanced Construction Systems Laboratory (ACSyL) and is a Co-Director of the Center for Virtual Design of Healthcare Environments, both at Purdue. His research emphasizes the human factors related to virtual visualization and applying such principles in specifying the features and functions of visualization systems.

KEYNOTE SPEECH 3

Prof. Nashwan Dawood, Director of the Centre for Construction Innovation & Research and Cecil M Yuill Professor of Construction Management & IT, University of Teesside, UK.

“intUBE Project”: Intelligent Use of Buildings’ Energy Information (www.intube.eu)

It is a well-established fact that buildings are among the major contributors to energy use and CO2 emissions: the energy used in buildings accounts for 40% of total energy use in Europe. While some breakthroughs are expected in new buildings, the pace of these improvements is too slow given the EU's ambitious goal of improving energy efficiency by 20% before 2020. With over 80% of the European buildings that will be standing in 2020 already built, the main aim of the IntUBE project is to develop and apply information and communications technologies, including Virtual Reality, to improve the energy efficiency of these existing buildings in line with the EU's aims. IntUBE will develop tools for measuring and analysing building energy profiles based on user comfort needs. These will offer efficient solutions for better use and management of energy within buildings over their lifecycles. Intelligent Building Management Systems will be developed to enable real-time monitoring and optimisation of energy use. Through interactive visualisation of energy use, they will offer solutions for maximising user comfort and optimising energy use.
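The abstract does not describe the profiling tools at a code level. As a minimal, hypothetical sketch of the kind of energy-profile baselining and real-time flagging described (the function names, data shapes, and tolerance threshold below are all invented):

```python
from collections import defaultdict

def hourly_profile(readings):
    """Average energy use (kWh) per hour-of-day from (hour, kwh) samples."""
    sums, counts = defaultdict(float), defaultdict(int)
    for hour, kwh in readings:
        sums[hour] += kwh
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}

def flag_excess(profile, reading, tolerance=1.25):
    """Flag a live reading that exceeds the baseline by the tolerance factor."""
    hour, kwh = reading
    baseline = profile.get(hour)
    return baseline is not None and kwh > tolerance * baseline

# Build a baseline profile from historic meter data, then check a live reading:
history = [(9, 12.0), (9, 14.0), (10, 20.0), (10, 22.0)]
profile = hourly_profile(history)
print(flag_excess(profile, (10, 30.0)))  # → True (30 kWh vs a 21 kWh baseline)
```

A real system of the kind IntUBE proposes would of course work on continuous sensor streams and feed an interactive visualisation rather than a boolean flag; the sketch only shows the baseline-versus-live comparison at the core of such monitoring.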

intUBE concept

Bio of Prof. Dawood:

Prof. Dawood is the Director of the Centre for Construction Innovation & Research at the University of Teesside and holds the Cecil M Yuill Professorial Chair. He has spent many years as an academic and researcher in the field of construction management and the application of IT in the construction process. His work has ranged across a number of research topics, including information technologies and systems (4D, VR, integrated databases), risk management, and business processes. This has resulted in over 170 published papers in refereed international journals and conferences, and research grants from the British Council, industry, the Engineering Academy, EPSRC, DTI and construction industry companies, totalling about £2,500,000. Final reports of the last three EPSRC grants received a ‘Tending to Outstanding’ peer assessment review from EPSRC.

Prof. Dawood has been a visiting fellow/professor at VTT, Finland; University of Calgary, Canada; University of Bahrain, Bahrain; Central University of Taiwan; AIT, Thailand; Stanford University, USA; PWRI (Public Works Research Institute), Japan; Georgia Tech, USA; Virginia Tech, USA; UNSW, Australia; University of Parana, Brazil; University of Florida, USA; International Islamic University Malaysia; Gyeongsang National University, Korea; Miyagi University, Japan; and Osaka University, Japan.

Prof. Dawood originated the CONVR conference series (Construction Applications of Virtual Reality: Current Initiatives and Future Challenges). Its mission is to bring together national and international researchers and practitioners from all areas of the construction industry, and to promote the efficient exchange of ideas and a mutual understanding of the needs and potential applications of VR modelling. CONVR 2000 was organised at Teesside and attended by participants from 9 countries; CONVR 2001 was organised at Chalmers University, Sweden, and attended by participants from 12 countries; CONVR 2003 was organised by Virginia Tech, USA; CONVR 2004 by ADETTI (Portugal); CONVR 2005 in Durham, UK; CONVR 2006 by Florida State University; CONVR 2007 at Penn State University, USA; CONVR 2008 by IIUM Malaysia; and CONVR 2009 will be organised by the University of Sydney, Australia.

KEYNOTE SPEECH 4

Prof. Mary Lou Maher, the Deputy Division Director of the Information and Intelligent Systems Division at National Science Foundation and Professor at the University of Sydney, Australia.

“The Role of VR in Improving Collective Intelligence for AEC Processes”

Collective intelligence is a kind of intelligence that emerges from the collaboration and competition of individuals. While the concept of collective intelligence has been around for a long time, recent renewed interest in it is due to internet technologies that allow collective intelligence to emerge from remotely located and potentially very large numbers of individuals. Wikipedia is a product of collective intelligence as a source of knowledge that is continuously generated and updated by very large numbers of individuals. Similarly, Second Life is a product of collective intelligence as a 3D virtual world that is created and modified by the large numbers of individuals that enter the world. The AEC industry relies on the collective intelligence of many individuals and teams of professionals from different disciplines. Virtual reality in its many forms provides collaborative technologies that not only enable people to work together from a distance but also change the way we interact with each other and with the shared digital models that comprise the product of the collaboration. This presentation describes a new kind of collective intelligence enabled by emerging technologies, the research challenges, and the potential impact on design and creativity in AEC projects.

Bio of Prof. Maher:

Mary Lou Maher is the Deputy Division Director of the Information and Intelligent Systems Division at NSF. She joined the Human Centered Computing Cluster in July 2006 and initiated a funding emphasis at NSF on research in creativity and computing called CreativeIT. She is the Professor of Design Computing at the University of Sydney. She received her BS (1979) at Columbia University and her MS (1981) and PhD (1984) at Carnegie Mellon University. She was an Associate Professor at Carnegie Mellon University before joining the University of Sydney in 1990. She has held joint appointments in the Faculty of Architecture and the School of Information Technologies at the University of Sydney. Her own research includes empirical studies and new technologies for design in virtual worlds and other collaborative environments, behavior models for intelligent rooms, motivated reinforcement learning for non-player characters in MMORPGs, and tangible user interfaces for 3D design.

DESIGN COLLABORATION

Virtual Worlds and Tangible Interfaces: Collaborative Technologies That Change the Way Designers Think-----------------------------------------------------9 Mary Lou Maher, Ning Gu and Mijeong Kim

A Novel Camera-based System for Collaborative Interaction with
Multi-dimensional Data Models--------------------------------------------------------------19
Michael Van den Bergh, Jan Halatsch, Antje Kunze, Frédéric Bosché, Luc Van Gool and Gerhard Schmitt

Towards a Collaborative Environment for Simulation Based Design----------------29 Michele Fumarola, Stephan Lukosch, Mamadou Seck and Cornelis Versteegt

Empirical Study for Testing Effects of VR 3D Sketching on Designers’ Cognitive Activities----------------------------------------------------------------39 Farzad Pour Rahimian and Rahinah Ibrahim

Analysis of Display Luminance for Outdoor and Multi-user Use ---------------------49 Tomohiro Fukuda

A Proposed Approach to Analyzing the Adoption and Implementation of
Virtual Reality Technologies for Modular Construction--------------------------------59
Yasir Kadhim, Jeff Rankin, Joseph Neelamkavil and Irina Kondratova

Collaborative 4D Review through the Use of Interactive Workspaces---------------71 Robert Leicht and John Messner

Design Scenarios: Methodology for Requirements Driven Parametric Modelling of High-rises---------------------------------------------------------79 Victor Gane and John Haymaker

An Experimental System for Natural Collocated and Remote Collaboration-------91 Jian Li and Jingyu Chen

Urban Wiki and VR Applications-------------------------------------------------------------97 Wael Abdelhameed and Yoshihiro Kobayashi

9th International Conference on Construction Applications of Virtual Reality

Nov 5-6, 2009

VIRTUAL WORLDS AND TANGIBLE INTERFACES: COLLABORATIVE TECHNOLOGIES THAT CHANGE THE WAY DESIGNERS THINK

Mary Lou Maher, Professor, University of Sydney, Australia; mary@arch.usyd.edu.au, http://web.arch.usyd.edu.au/~mary

Ning Gu, Lecturer, University of Newcastle, Australia; ning.gu@newcastle.edu.au, http://www.newcastle.edu.au/school/arbe

Mi Jeong Kim, Lecturer, Kyung Hee University, Korea; mijeongkim@khu.ac.kr, http://housing.khu.ac.kr

ABSTRACT: Reflecting on the authors’ computational and cognitive studies of collaborative design, this paper characterizes recent research and applications of collaborative technologies for building design. The specific technologies considered are those that allow synchronous collaboration while planning, creating, and editing 3D models, including virtual worlds, augmented reality (AR) tabletop systems, and tangible user interfaces (TUIs). Based on the technical capabilities and potential of the technologies described in the first part, the second part of the paper considers the implications of these technologies for collaborative design, drawing on the results of two cognitive studies conducted by the authors. Both studies use protocol analysis to characterize designers’ cognitive actions, communication, and interaction in different collaborative design situations. The first study investigated collaborative design in a virtual world to better understand the changes in design behavior when designers are physically remote but virtually collocated as avatars in a 3D model of their design solution. The second study measured the effects of TUIs with AR on a tabletop system on designers’ cognitive activities and design process in co-located collaboration. The paper concludes by discussing the implications of the results of these studies for the future design of collaborative technologies.

KEYWORDS: Collaborative Design, 3D Virtual Worlds, Tangible Interfaces, Protocol Analysis, Design Cognition.

1. COLLABORATIVE TECHNOLOGIES AND DESIGN

Collaborative design is a process of dynamically communicating and working together within and across disciplines in order to collectively establish design goals, search through design problem spaces, determine design constraints, and construct a design solution. While each designer contributes to the development of the design solution, collaboration implies teamwork, negotiation, and shared models. Collaborative technologies support design in several ways, two of which are (1) the ability for collaboration to occur at the same time while the participants are remotely located and (2) the ability to augment the perception of the shared design drawings or models through new technologies for interacting with digital models. In this paper we show how two specific collaborative technologies change the way designers think.

We focus on the architectural design of buildings where eventually the design solution is a model of a 3D product that evolves as the record and the focus of the design process. Bringing designers into 3D virtual environments has the potential to improve their understanding of the design models during the collaborative process. Two such environments, 3D virtual worlds and tangible user interfaces (TUIs) to 3D models, are very different approaches to making the design model accessible to remote and collocated designers. Various virtual worlds and tangible interaction technologies have been developed for the AEC (Architecture, Engineering and Construction) domain, but most of them are still in the lab-based prototype development and validation stages. In this paper we reflect on the potential and implications of these collaborative technologies from a cognitive perspective in order to understand their role in design practice and to contribute a cognitive basis for the design of new collaborative technologies.


2. BACKGROUND

The background section reviews recent developments in collaborative technologies and introduces a research method - protocol analysis - for studying design cognition in collaborative design.

2.1 Developments in collaborative technologies for design

Collaborative technologies for design allow two or more designers to create shared drawings or models and design together while remote or collocated. Since the focus while designing is on the shared drawings or models of the design, the important aspects of collaborative technologies for design are: the type of digital media available to represent the design; the interaction technologies for creating, visualizing, and modifying the shared drawings or models; and the ways in which the designers communicate and interact with each other.

Research into digital technologies for supporting collaborative work started in the 1960s with the early work at the Stanford Research Institute on innovative interaction techniques. Later, developments at Xerox PARC in the 1970s and 1980s brought the field to what became known as user-centered design (Norman and Draper 1986), an important research focus in the emerging field of Human-Computer Interaction (HCI). In the early 1990s, a branch of HCI developed into the research area of Computer Supported Cooperative Work (CSCW) and groupware technologies. Like HCI, CSCW is a multi-disciplinary field that has made significant contributions to the design of collaborative technologies. All these technological developments changed the practice of many fields such as science, art and design. In terms of functions, the developments in computer-supported technologies for collaborative design can be classified into the categories of video conferencing, shared drawings, and shared models. The types of digital media for design representation include bit-mapped images, sketches, structured graph models, 2D and 3D geometric CAD models, and 3D object-oriented models. Some commonly used technologies for supporting the collaborative design of buildings are digital sketching tools such as GroupBoard (http://www.groupboard.com), SketchUp (http://sketchup.google.com), and the major CAD systems. The interaction technologies include the standard graphical user interfaces (GUIs) using keyboard and mouse, touch screens and tables such as the Microsoft Surface (http://www.microsoft.com/surface/), augmented reality, and game controllers such as the Nintendo Wii (http://wii.com). Designers can communicate with each other using text chat, voice over IP, and/or video.

With the technical advances in the field and the wider adoption of high-bandwidth internet, a new generation of collaborative technologies has emerged; two of these, 3D virtual worlds and TUIs to 3D models, are the focus of this paper.

3D virtual worlds are networked environments designed using the place metaphor. One of the main characteristics that distinguishes 3D virtual worlds from conventional virtual reality is that they allow multiple users to be immersed in the same environment, supporting a shared sense of place and presence (Singhal and Zyda 1999). Multi-user 3D virtual worlds have grown very rapidly, with examples such as Second Life (http://www.secondlife.com) having reached millions of residents and boasting a booming online economy. Through the use of the place metaphor, 3D virtual worlds have been associated with the physical world since the early conceptual formation of the field. On one hand, the rich knowledge and design examples in the physical world provide a good starting point for the development of 3D virtual worlds. On the other hand, designing in 3D virtual worlds is becoming an exciting territory for a new generation of designers to explore. For the AEC industry, recent developments in 3D virtual worlds and the proliferation of high-bandwidth networked technologies have shown great potential to transform the nature of remote design collaboration. In 3D virtual worlds, designers can remotely collaborate on projects without the barriers of location and time differences. With high-speed network access, real-time information sharing and modification of large data sets such as digital building models become possible over the World Wide Web. Distant design collaboration can significantly reduce relocation costs and help to increase efficiency in global design firms. Current systems such as DesignWorld (Maher et al. 2006a) support remote communication, collaborative 3D modeling and multidisciplinary building information sharing.

TUIs couple physical artifacts and architectural surfaces to correlated digital information, whereby the physical representations of digital data serve simultaneously as interactive controls. For design applications, TUIs are often combined with AR, which allows designers to explore design alternatives using modifiable models. Through direct tangible interaction, designers can use their hands and often their entire bodies for physical manipulation. More recently, a wide range of tangible input devices has been developed for collaborative design (Dietz and Leigh 2001; Fjeld et al. 1998; Ishii and Ullmer 1997; Moeslund et al. 2002). BUILD-IT provides physical 3D ‘bricks’ for concurrent access to multiple digital models, and the ARTHUR system employs a ‘wand’ for interacting with 3D virtual settings projected into the users’ common working environment. The metaDESK embodies many of the metaphorical devices of GUIs - windows, icons, and menus - as physical instantiations such as a lens with a wooden frame, phicons, and trays respectively. The InteracTable and DiamondTouch use touch-sensitive displays for supporting cooperative team work. These tangible input devices support multi-user interactions for co-located or remote collaboration by providing a sensory richness of meaning. Tangible interaction has recently become an important research issue in the domain of design computation. However, compared to the rapid development of tangible interaction technologies, relatively little is known about the effects of tangible interaction systems on design cognition.

2.2 Studying design cognition in collaborative design: protocol analysis

While usability is a critical feature of collaborative technologies, usability studies do not necessarily provide a critical assessment of the impact of technology on design processes. Rather than focus on usability, studying the perception, actions, and cognition of designers using collaborative technologies allows us to compare the technologies and their impact on human behavior while designing. Protocol analysis has been used to study how people solve problems and is widely used in the study of design cognition (Gero and McNeill 1997; Suwa et al. 1998). We adapt this method to study perception, action, and cognition while designers are using collaborative technologies.

A protocol is the recorded behavior of the problem solver, usually represented in the form of sketches, notes, video or audio recordings. Whilst the earlier studies dealt mainly with the verbal aspects of protocols, later studies acknowledge the importance of design drawing, associating it with design thinking that can be interpreted through verbal descriptions (Stempfle and Badke-Schaub 2002; Suwa and Tversky 1997). Recent design protocol studies employ analysis of actions, which provides a comprehensive picture of the physical actions involved during design (Brave et al. 1999). In design research, two kinds of protocols are used: concurrent protocols and retrospective protocols. Generally, concurrent protocols are collected during the task and utilized when focusing on the process-oriented aspect of designing, being based on the information processing view (Simon 1992). The ‘think-aloud’ technique is typically used, in which subjects are requested to verbalize their thoughts as they work on a given task (Ericsson and Simon 1993; Lloyd et al. 1995). Retrospective protocols are collected after the task and utilized when focusing on the content-oriented aspects of design, being concerned with the notion of reflection in action (Dorst and Dijkhuis 1995; Schön 1983).

The methodology involves developing an experiment in which one or more designers are asked to work on a design task while being recorded. The recording, or protocol data, is a continuous stream of data and can include video of the designers, and/or continuous video of the computer display showing the designers’ actions, and an audio stream of the verbalization of the designer(s). The protocol data is segmented into units that are then coded and analyzed to characterize the design session. The coding scheme is developed according to the theory, model, or framework that is being tested and can include cognitive, communication, gesture, or interactive actions.

The protocol analysis technique has been adopted to understand the interactions of design teams (Cross and Cross 1996; Stempfle and Badke-Schaub 2002) and the design behavior of teams (Goldschmidt 1996; Valkenburg and Dorst 1998). Protocol studies of collaborative architectural design focus on understanding team collaboration in terms of the use of communication channels and design behavior variables (Gabriel and Maher 2002). Protocol coding was conducted on professional architects and student architects respectively for the two studies described below in Sections 3 and 4. For both studies, the think-aloud method is not directly applicable to the protocol collection: the protocol data comprises the designers’ conversations, gestures, and interactions rather than the designers’ verbalization of their thoughts as in the think-aloud method. Such collaborative protocols provide data indicative of the cognitive activities being undertaken by the designers without interfering with the design process, as they are a natural part of the collaborative activities.
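To make the segmentation and coding step concrete, the following sketch is a purely illustrative example; the speakers, utterances, timings, and codes are invented and are not taken from the studies. It shows how utterance-based segments can carry coding-scheme categories and how duration percentages per category can be derived:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str
    utterance: str
    start: float        # seconds from session start
    end: float
    codes: set          # coding-scheme categories assigned to this segment

# Each utterance flags the start of a new segment; the codes below are
# hypothetical examples in the spirit of a multi-category coding scheme.
protocol = [
    Segment("A", "What if the desk faces the window?", 0.0, 6.5,
            {"communication content", "design process"}),
    Segment("B", "Then we lose wall space for shelves.", 6.5, 12.0,
            {"communication content", "function-structure"}),
    Segment("A", "(moves a block) Like this.", 12.0, 15.2,
            {"operations on external representations"}),
]

def duration_percentage(segments, code, total_time):
    """Time spent in segments carrying `code`, as a % of total elapsed time.
    Segments may carry several codes, so percentages need not sum to 100."""
    coded = sum(s.end - s.start for s in segments if code in s.codes)
    return 100.0 * coded / total_time

total = 15.2
print(round(duration_percentage(protocol, "communication content", total), 1))  # → 78.9
```

Because the categories are applied independently, each segment can contribute to several percentages at once, which is why duration totals across categories can exceed 100%.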

3. DESIGNING IN A 3D VIRTUAL WORLD

This study compares the collaborative design process in a 3D virtual world to collaborative design processes in a traditional face-to-face sketching environment and in a remote sketching environment. We set up three distinctive design environments and studied four pairs of professional architects¹, each pair collaborating on a different design task of similar complexity in each of the three design environments. Comparing the same pairs of designers across three different environments is assumed to provide a better indication of the impact of the environments on design cognition than using different designers or the same design task. A more detailed description of this study is reported in (Maher et al. 2006b).

3.1 Experiment setup and coding scheme design

A different design brief and a collage of photos showing the site and the surroundings were provided for each of the three experiment sessions. In the face to face session of the experiment, the pairs of designers used pen and paper as the sketching tools. In the remote sketching session, they sketched remotely and synchronously, with one designer using a Smart Board (http://www.smarttech.com) and the other using Mimio (http://www.mimio.com); both technologies provide a pen and digital ink interface. In the final 3D virtual world session, designers collaborated remotely and synchronously in a 3D virtual world - Active Worlds (http://www.activeworlds.com) - through 3D design and modeling. In the latter two sessions, remote and synchronous collaboration was simulated by locating the two designers in different parts of the same room, allowing them to talk to each other but to see each other only via web cams. Each session required the designers to complete the design task in 30 minutes. They were given training sessions on the use of the Smart Board, Mimio and Active Worlds prior to the experiment.

The basis of the coding scheme design for this research is a consideration of a set of expected results. We developed and applied a five-category coding scheme covering communication content, design process, operations on external representations, function-structure, and working modes. The communication content category partitions each session according to the content of the designers’ conversations, focusing on the differences in the amount of conversation devoted to discussing design development compared to other topics. The design process category characterizes the different kinds of designing tasks that dominate in the three different design environments. The operations on external representations category looks specifically at how the designers interacted with the external design representation, to see whether the use of 2D sketches or 3D models was significantly different. The function-structure category further classifies the design-related content as a reference to either the function of the design or the structure of the design. The working modes category characterizes each segment according to whether or not the designers were working on the same design task or on the same part of the design representations.

3.2 Protocol analysis result

The analysis of collaborative design behavior involves documenting and comparing the categories of codes. We looked at the frequencies of occurrence of the code categories in the three different sessions. We also documented the time spent on each category with respect to the total time elapsed during the session. This gives us the duration percentages of the codes in each main category. TABLE 1 provides an overview of the focus of activity in each of the three design environments by showing the average percentages for 4 of the 5 coding categories across the pairs of designers who participated in the experiment. We do not show the working mode category in TABLE 1 because it always totals 100% of the duration, since the designers are always working on either the same or different tasks. The categories of codes were applied independently; therefore each segment could be coded in more than one category.

TABLE 1: Durations of codes in each main category as average percentages of the total elapsed time.

| Categories | Face to Face Sketching | Remote Sketching | 3D Virtual Worlds |
|---|---|---|---|
| Communication Content | 72% | 72% | 61% |
| Design Process | 69% | 48% | 34% |
| Operations on External Representations | 96% | 90% | 93% |
| Function-Structure | 67% | 43% | 27% |

¹ While four pairs of designers are not considered a statistically significant number of participants in a cognitive study, we can use these results as an exploratory study to identify major differences that are common across this sample of designers.


We tested whether there are significant differences between the pairs across the three design sessions in terms of design behavior (the coded activity categories). The ANOVA result (ANOVA with replication, p < 0.05, between the three design sessions) shows that there is no significant difference between the pairs in terms of communication content (p = 0.15), operations on external representations (p = 0.80) and working mode (p = 0.99). The results are listed in TABLE 2. These results indicate that the collaborative behavior in these categories did not vary significantly amongst the different pairs. Note that the design process and function-structure categories are significantly different between the pairs. These differences are not surprising, and they are common in design studies, since the design activities of a particular designer can change due to different situations in the actual context, and the variance in individual design strategies can affect the collaborative design process.
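The mechanics of this kind of test can be illustrated with a simplified one-way ANOVA computed from scratch. The duration percentages below are invented (not the study's raw data), and the sketch returns only the F statistic; in practice a library routine such as scipy.stats.f_oneway, or ANOVA with replication in a spreadsheet as used here, would also supply the exact p-value:

```python
# Hypothetical duration percentages (illustrative only) for one coding
# category: four designer pairs in each of the three design environments.
from statistics import mean

groups = {
    "face-to-face": [70.0, 74.0, 71.0, 73.0],
    "remote":       [71.0, 73.0, 70.0, 74.0],
    "virtual":      [60.0, 63.0, 59.0, 62.0],
}

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group vs within-group variance."""
    data = [x for g in groups.values() for x in g]
    grand = mean(data)
    k = len(groups)                     # number of groups
    n = len(data)                       # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups.values())
    ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f = one_way_anova_f(groups)
print(f > 4.26)   # critical F(2, 9) at alpha = 0.05 is about 4.26 → True
```

A large F means the variation between environments dwarfs the variation within each environment, which is what a small p-value (such as 2.46E-08 above) reflects.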

TABLE 2: ANOVA test result on action categories of different designer pairs.

| | Communication Content | Design Process | Operations on External Representations | Function-Structure | Working Mode |
|---|---|---|---|---|---|
| P-value (between designer pairs in each design session) | 0.15 | 2.46E-08 | 0.80 | 3.26E-05 | 0.99 |

The details of the protocol analysis, reported in (Maher et al. 2006b), show that there are no significant differences among the three design environments in terms of the communication content and operations on external representations categories: 3D virtual worlds are able to support design communication and representation during collaboration. There is a significant decrease from face to face sketching, to remote sketching, to 3D design and modeling in virtual worlds, in terms of the design process and function-structure categories. The first interpretation of these differences is that during remote collaboration, designers are able to design both collectively and individually, due to the flexibility of digital media in modifying and integrating different parts or different versions of the design representations, as well as the physical separation of the designers. The evidence for this is the results of the working mode category, which show a significant difference between the face to face and remote design environments. We observed in our experiment that during the face to face sessions, designers mostly worked together, with over 95% of the duration devoted to the collaborative mode. Although sometimes one designer led the process while the other observed and critiqued, they always focused on the same tasks. In the remote sessions, and especially in 3D virtual worlds, an average of 40% of the duration was spent in individual design phases, where the designers worked on different tasks or different parts of the design representations. They often came together after an individual phase to review each other’s outcomes or to swap tasks. During these individual phases, they reduced, and some pairs even stopped, verbal communication (note the decrease in the communication content category for 3D virtual worlds). This explains the smaller number of design-related segments during the remote sketching and 3D virtual world sessions. The second interpretation of these differences is a possible change in the approach to design development when switching from 2D sketching to 3D modeling. In the sketching sessions, although designers constantly externalize their design ideas in separate sketch parts or over other existing sketches, there is usually a clear separation between the development of design concepts and the development of formal design representations. In 3D virtual worlds, the boundary between these two processes becomes blurred: the 3D virtual world objects designers used to explore concepts also become the 3D models for the final design representations. The design ideas are evolved, explored and externalized all through 3D modeling. Our current understanding of 3D modeling and the current setup of the experiment are inadequate for further explaining the different roles of 3D modeling in design collaboration; future research is needed in this regard.

A further analysis of each of the coded categories was also conducted; the main points are summarized below:

(1) Communication about awareness of the other designer increases during remote collaboration, with the highest duration percentage observed in 3D virtual worlds. There is a growing focus on communication about design representations, but the changes across the three design environments are not significant.

(2) The highest percentage of visual analysis for design development is observed in 3D virtual worlds. 3D modeling as the main design approach in virtual worlds does not fit the traditional “analysis-synthesis” model and should be studied further.

(3) As discussed, in virtual worlds designers use 3D models both to explore design concepts and to represent final outcomes, and the transformation of the 3D models often captures the design development process. The most frequent operations on external representations are change-related activities in 3D virtual worlds, while create-related activities occur most frequently in the two sketching environments.

Both similar and different patterns were observed when designers changed from face to face collaboration to remote collaboration, and from 2D sketching to 3D modeling. In a follow-up study, we further observed and surveyed the same designer pairs’ experiences and preferences when they were given the choice to collaborate using the typical 3D modeling features supported in virtual worlds or the integrated remote sketching features. It was noted that (1) different designers can have different preferences for applying 2D sketching, 3D modeling, or a combination of the two for design collaboration; (2) 3D models appear to be a more popular medium for final design representations compared to 2D sketches; (3) both 2D sketches and 3D models can be used to explore abstract design concepts such as spatial volumes and relationships, as well as to develop external representations such as design details; and (4) 3D virtual worlds support both collaborative and individual design tasks, while remote sketching environments tend to encourage collaborative tasks.

4. DESIGNING USING TANGIBLE USER INTERFACES

Until recently, research on TUIs focused on developing new systems and exploring technical options rather than addressing users’ interaction experience in the new hybrid environments. This study explored the potential of tangible interaction with an AR display for supporting design collaboration by measuring the effects of TUIs on designers’ cognitive activities and design process. The aim was to gain a deeper understanding of the designers’ experience of tangible interaction, specifically in terms of the collaborative affordance of TUIs, in the context of co-design. In this study, physical manipulation and spatial interaction were considered as tangible interaction. Physical manipulation of objects exploits intuitive human spatial skills, where movement and perception are tightly coupled (Hornecker and Buur 2006; Sharlin et al. 2004). Spatial interaction is engaging in interaction with the space created by the spatial arrangement of the design objects, and the use of this engagement for communication in collaboration.

4.1 Experiments and coding scheme design

A tabletop system with TUIs, including a horizontal surface and a vertical display, has been developed at the Design Lab at the University of Sydney to support problem solving, negotiation and establishing shared understanding in collaborative design. As multiple, specialized tangible input devices for TUIs, 3D blocks with tracking markers in ARToolKit (Billinghurst et al. 2001) can be attached to different functions, each independently accessible to 3D virtual objects (Fitzmaurice 1996). The tabletop system was compared to a typical desktop system with GUIs for 3D spatial planning tasks in a controlled laboratory experiment. Multiple designers can concurrently access the 3D blocks, whereas only one person can edit the model at a time using the mouse and keyboard. Compared to conventional input devices such as mouse and keyboard, it was assumed that the affordances of the physical handles of the TUIs facilitate two-handed interactions and thus offer significant benefits to collaborative design work (Granum et al. 2003; Moeslund et al. 2002).

The participants comprised three pairs of 2nd- or 3rd-year architecture students, each with a minimum of one year’s experience as CAD users. Each pair performed both sessions, a TUI and a GUI session, in one day, working on two different design tasks. The chosen scenario was the redesign of a studio into a home office or a design office by configuring spatial arrangements of furniture: in the tabletop system each 3D block can represent a piece of furniture, while in the GUI session pre-designed furniture can be imported from the library in ArchiCAD using a mouse and keyboard. The two design tasks were developed to be similar in complexity and type, and the systems were assessed by letting designers discuss the existing design and propose new ideas in co-located collaboration. Designers’ conversations, interactions, and gestures were videotaped while they designed the layout for four required areas according to the design requirements, and the analysis of the data was then carried out using the protocol analysis method. The collected data, in the form of collaboration protocols, were transcribed and then segmented using an utterance-based technique. Each utterance flagged the start of a new segment, and for each segment relevant codes were assigned according to a customized coding scheme.

The coding scheme comprised six categories at four levels: 3D modeling actions at the Action level, perceptual activities at the Perceptual level, set-up goal activities and co-evolution at the Process level, and cognitive synchronization and gesture actions at the Collaborative level. The Action level represents designers’ tangible interaction with the external representation through 3D modeling actions. The Perceptual level represents designers’ perception of visuo-spatial features in the external representation. Designers’ perceptual activities in 3D configurations relate to reconsidering different meanings and functions of the same objects rather than reconstructing unstructured images. The Process level represents designers’ ‘problem-finding’ behaviors associated with creative design. Specifically, set-up goal activities refer to introducing new functional issues as new design requirements. Co-evolution represents the exploration of the two design spaces, the problem and solution spaces. The Collaborative level reflects designers’ collective cognition. Cognitive synchronization represents the argumentative process in collaborative design, showing how designers construct a shared understanding of the problem and a shared representation of the solution through negotiating activities. Gesture actions represent non-verbal design communication in collaborative design. More information on the coding scheme can be found in Kim’s PhD thesis (Kim 2006).

4.2 Protocol analysis results

TABLE 3 shows the mean values of segment durations in the design sessions, ranging from 6.9 to 8.9 seconds. The standard deviations are rather high (4.8 to 7.2 seconds), which suggests that the distribution of segment durations is spread widely around the mean values. The average segment duration in the TUI sessions (7.3 seconds) was shorter than that in the GUI sessions (8.3 seconds); thus designers’ utterances during the TUI sessions were on average shorter than those in the GUI sessions. Designers in the GUI sessions did not make as much progress in developing design solutions as they did in the TUI sessions in the same amount of time.

TABLE 3: Duration of segments.

| | Pair 1: TUI 1/A | Pair 1: GUI 2/B | Pair 2: TUI 2/A | Pair 2: GUI 1/B | Pair 3: TUI 1/A | Pair 3: GUI 2/B |
|---|---|---|---|---|---|---|
| Task completion | Yes | Yes | Yes | No | Yes | No |
| Total time | 1065 sec | 1116 sec | 1206 sec | 1196 sec | 892 sec | 911 sec |
| Segment no. | 133 | 80 | 89 | 66 | 120 | 81 |
| Mean (sec) | 7.3 | 8.9 | 6.9 | 7.3 | 7.8 | 8.8 |
| Std. deviation | 4.8 | 6.6 | 4.9 | 7.2 | 5.4 | 5.8 |

Session: 1 – first session; 2 – second session / Design task: A – Home office; B – Design office
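As a quick cross-check, the session averages quoted in the text (7.3 seconds for TUI and 8.3 seconds for GUI) can be recomputed from the mean row of TABLE 3:

```python
from statistics import mean

# Mean segment durations (seconds) per session, taken from TABLE 3.
tui_means = [7.3, 6.9, 7.8]   # pairs 1-3, TUI sessions
gui_means = [8.9, 7.3, 8.8]   # pairs 1-3, GUI sessions

print(round(mean(tui_means), 1), round(mean(gui_means), 1))  # → 7.3 8.3
```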

The difference in task progression might also have affected the occurrence of cognitive actions, as shown in TABLE 4. With the direct, naïve manipulability of physical objects and rapid visualization, designers in the TUI sessions produced more cognitive actions overall than designers in the GUI sessions (on average 210 versus 127). The average occurrence of perceptual actions in the TUI sessions (105) was nearly twice that of the GUI sessions (57), and the average occurrence of set-up goal actions in the TUI sessions (36) was almost twice that of the GUI sessions (20). To identify how the different HCI environments influenced the proportions of the cognitive actions, the relatively higher occurrences in each action category were compared for each pair. Since we intended to identify the trend in the differences between the two design sessions through the results of the collaborative study, we assumed that a difference greater than 2% between the two design sessions would indicate a change according to the interaction mode. The trend shows that all pairs produced more perceptual actions in the TUI session (50.0% versus 44.9% on average), and more functional actions in the GUI session (39.4% versus 32.9%).

TABLE 4: Occurrence percentages of action categories.

| Pair | Session | Perceptual | Functional | Set-up goal | Total actions |
|---|---|---|---|---|---|
| Pair 1 | TUI | 136 (52.9%) | 81 (31.5%) | 40 (15.6%) | 257 (100%) |
| Pair 1 | GUI | 57 (39.0%) | 63 (43.2%) | 26 (17.8%) | 146 (100%) |
| Pair 2 | TUI | 100 (46.9%) | 77 (36.2%) | 36 (16.9%) | 213 (100%) |
| Pair 2 | GUI | 61 (45.2%) | 53 (39.3%) | 21 (15.5%) | 135 (100%) |
| Pair 3 | TUI | 77 (49.4%) | 54 (34.6%) | 25 (16.0%) | 156 (100%) |
| Pair 3 | GUI | 50 (47.6%) | 39 (37.2%) | 16 (15.2%) | 105 (100%) |
| Pairs (average) | TUI | 105 (50.0%) | 69 (32.9%) | 36 (17.1%) | 210 (100%) |
| Pairs (average) | GUI | 57 (44.9%) | 50 (39.4%) | 20 (15.7%) | 127 (100%) |

With the focus on designers’ spatial cognition, the six cognitive action categories were investigated in terms of the four levels of the coding scheme. The encoded protocols were analyzed using a Mann-Whitney U test to examine differences between the two sessions, and the structures of design behaviors were explored through interactive graphs. 3D modeling actions, perceptual activities and set-up goal activities were combined into generic activity components, highlighting the different patterns of design behaviors. To compare the two design sessions under the same conditions, the total time of each GUI session was truncated at the same time point as the corresponding TUI session.
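The Mann-Whitney U statistic used in this analysis can be computed directly from rank sums. The sketch below is a minimal stdlib implementation (mid-ranks for ties, p-value lookup omitted) run on invented per-segment action counts; in practice a library routine such as scipy.stats.mannwhitneyu would also supply the p-value:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a` versus sample `b`.
    Uses mid-ranks for tied values; significance lookup is omitted."""
    combined = a + b
    order = sorted(range(len(combined)), key=lambda i: combined[i])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and combined[order[j]] == combined[order[i]]:
            j += 1
        mid_rank = (i + j + 1) / 2.0      # 1-based average rank of the tie group
        for k in range(i, j):
            ranks[order[k]] = mid_rank
        i = j
    rank_sum_a = sum(ranks[:len(a)])      # `a` occupies the first len(a) slots
    return rank_sum_a - len(a) * (len(a) + 1) / 2.0

# Hypothetical per-segment action counts (illustrative only).
tui_counts = [5, 7, 6, 8, 7]
gui_counts = [3, 4, 2, 5, 3]
print(mann_whitney_u(tui_counts, gui_counts))  # → 24.5
```

A U near the maximum (len(a) * len(b), here 25) indicates that one sample's values consistently rank above the other's, which is the pattern the test looks for.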

9th International Conference on Construction Applications of Virtual Reality

Nov 5-6, 2009

Epistemic action refers to 'exploratory' motor activity, where users reach the final stage by performing many physical actions even without a specific goal (Fitzmaurice and Buxton 1997; Kirsh and Maglio 1994). The findings at the Action level suggested that, in order to develop design ideas, designers in the TUI session changed the external representation by performing more 'movement' modeling actions rather than computing it mentally. The short and frequent 3D modeling actions are considered epistemic actions: they reduce cognitive load and produce a 'conversation' style of interaction for expressing and negotiating design ideas, establishing shared understanding in design collaboration. These epistemic 3D modeling actions allowed designers to test their ideas quickly by proceeding in small steps, and this continuity brought more opportunities to execute creative leaps in designing. The findings at the Perception level imply that, by manipulating 3D blocks, designers produced more perceptual actions, whereby more new visuo-spatial features were created and discovered through performing these modeling actions. The new perceptual information could have brought about new interpretations of the external representation, since thinking about emergent properties evokes shifting focus to a new topic. Furthermore, designers using TUIs perceived more new and existing 'spatial relationships' among elements, while focusing more on the 'existing' elements themselves in the GUI session. Perceiving more 'spatial relationships' potentially encouraged designers to explore related functional thoughts, to go beyond retrieving visual information, and thus to make abstract inferences.

The findings at the Process level suggested that designers invented new design requirements by restructuring the problem space, based on the perceived information in a situated way, rather than synthesizing design solutions fixed on the initial requirements. The retrieval of knowledge for the new constraints, based on their expertise or past experience, suggests that designers' recall might be improved through the manipulation of 3D blocks. Furthermore, they developed the formulation of the problem and alternatives for a solution throughout the design session, showing a co-evolutionary process that is associated with creative outcomes in designing. Consequently, designers using these 3D blocks would gain more opportunities to discover key concepts through the 'problem-finding' process. The embodied facilitation theme highlights that tangible interaction embodies structure that allows or hinders certain actions, and thereby shapes emerging group behavior. Spatial interaction refers to the fact that tangible interaction is embedded in real space and thus has the potential to employ full-body interaction, acquiring a communicative function. The findings from the Collaborative level reveal that designers in the TUI session tended to establish more cognitive synchronization through active negotiation processes, especially the cycle of the three codes 'Propose', 'Argument' and 'Resolution'. The horizontal table and the 3D blocks on the tabletop system might facilitate collaborative interactions by acting as this embodied structure. Spatial interaction with spaces was also facilitated while using 3D blocks; designers in the TUI session thus produced more immersive gesture actions using hands and arms, leading to whole-body interaction with the external representation. 'Touch' actions seemed to be beneficial for designers' perceptual activities: designers in the TUI sessions kept touching the 3D blocks, which might have simplified their mental computation through epistemic actions.

To sum up, the protocol analysis of the four levels of designers' spatial cognition reveals that the epistemic 3D modeling actions using TUIs put much less load on designers' cognitive processes, resulting in the co-generation of new conceptual thoughts and perceptual discoveries in the external representation. This off-loading of designers' cognition affected the design process by increasing 'problem-finding' behaviors associated with creative design, and it supported design communication, negotiation and shared understanding. In conclusion, the results of this study suggest that the tangible interaction afforded by TUIs provides important benefits for designers' spatial cognition, and a cognitive and collaborative aid for supporting collaborative design. To verify these conclusions, more designers need to be observed and their protocols analyzed.

5. IMPACT OF COLLABORATIVE TECHNOLOGIES ON DESIGN

In this paper we describe collaborative technologies as enablers of collective intelligence in design in two ways: (1) the ability for collaboration to occur at the same time while the participants are remotely located and (2) the ability to augment the perception of shared design drawings or models through new technologies for interacting with digital models. Since the focus while designing is on the shared drawings or models of the design, critical aspects of collaborative technologies for design are: the type of digital media available to represent the design, the interaction technologies for creating, visualizing, and modifying the shared drawings or models, and the ways in which the designers communicate and interact with each other.


In our study of 3D virtual worlds, we show how a multi-user virtual world that allows co-editing of 3D models changes the behavior of designers in two important ways: given the same amount of time for the collaborative session, the designers worked on the same task most of the time while collocated, and only part of the time when remotely located in physical space but collocated in the virtual world; and the 3D model, as a focus for design development and communication, embodied concept and geometry in the 3D objects, whereas these appear to be separated in sketches. These results show that the type of digital media available (comparing 2D sketches and 3D models) changes what designers talk about when they are collaborating, and that the change from being physically located around a shared drawing to being remotely located within a 3D model changes the working modes and encourages designers to move smoothly between working on the same task and working on different aspects of the design.

In our study of TUIs compared to GUIs, we show how tangible interaction on a tabletop encourages the designers to engage in more exploratory design actions, and more cognitive actions in general. We attribute this change to the additional perceived affordances of the tangible blocks as interfaces to the digital model when compared to the keyboard and mouse as the interface to the digital model. The blocks became specific parts of the model for the designers as they moved the pieces on the tabletop, while the keyboard and mouse were negotiated to be different parts of the model at different times. These affordances effectively allowed the designers to focus their actions directly on the design alternatives and their development rather than on the interface to the design model. This difference affected the number of segments (an increased number when using TUIs, implying an epistemic approach to generating alternatives) and the number of cycles in the problem-finding process, implying a potential for more creative solutions.

The results from these two very different studies converge in a set of recommendations for the design of collaborative technologies for designers. The development of multi-user 3D virtual worlds for designers, when merged with current CAD capabilities, has the benefit of a smooth transition between working on the same task and working on separate tasks. When using CAD systems for collaboration, a significant amount of time is spent coordinating working on the same or different tasks at the same or different times. A 3D virtual world enables a group of designers to be more aware of each other's presence when they are focused on designing. While the main role of 3D modeling in traditional CAD systems is design documentation, our study shows that 3D modeling can play different roles in design collaboration, from early concept exploration to the final design representation. The development of TUIs for 3D modeling changes the perception of and interaction with digital models, and TUIs should become a standard alternative to the keyboard and mouse for design. Most collaborative technologies for designers still rely on the keyboard and mouse for interacting with digital models. There is significant benefit, and relatively little training needed, in incorporating TUIs into 3D modeling systems so that designers can choose the best interaction style for the stage of the design and modeling tasks.

6. REFERENCES

Billinghurst, M., Kato, H., and Poupyrev, I. (2001). "MagicBook: a Transitional AR Interface." Computers & Graphics 25, 745-753.

Brave, S., Ishii, H., and Dahley, A. (1999). "Tangible Interface for Remote Collaboration and Communication." CHI '99: Conference on Human Factors in Computing Systems, Pittsburgh, Pennsylvania, USA, 394-401.

Cross, N., Christiaans, H and Dorst, K (1996). Analysing Design Activity, John Wiley & Sons Ltd, Chichester, UK.

Cross, N., and Cross, A. (1996). "Observations of Teamwork and Social Processes in Design." Analysing Design Activity, N. Cross, H. Christiaans, and K. Dorst, eds., John Wiley & Sons Ltd, Chichester, UK.

Dietz, P., and Leigh, D. (2001). "DiamondTouch: A Multi-User Touch Technology." UIST 2001: The 14th Annual ACM Symposium on User Interface Software and Technology, Orlando, Florida, USA.

Dorst, K., and Dijkhuis, J. (1995). "Comparing Paradigms for Describing Design Activity." Design Studies 16(2), 261-275.

Ericsson, K., and Simon, H. (1993). Protocol Analysis: Verbal Reports as Data, MIT Press, Cambridge, Massachusetts, USA.

Fitzmaurice, G. (1996). "Graspable User Interfaces." PhD Thesis, University of Toronto, Toronto.


Fitzmaurice, G., and Buxton, W. (1997). "An Empirical Evaluation of Graspable User Interfaces: Towards Specialized, Space-multiplexed Input." CHI '97: Conference on Human Factors in Computing Systems, Los Angeles, USA, 43-50.

Fjeld, M., Bichsel, M., and Rauterberg, M. (1998). "BUILD-IT: an Intuitive Design Tool based on Direct Object Manipulation." Gesture Workshop '96: Gesture and Sign Language in Human-Computer Interaction, Bielefeld, Germany, 297-308.

Gabriel, G., and Maher, M. (2002). "Coding and Modeling Communication in Architectural Collaborative Design." Automation in Construction 11, 199-211.

Gero, J., and McNeill, T. (1997). "An Approach to the Analysis of Design Protocols." Design Studies 19(1), 21-61.

Goldschmidt, G. (1996). "The Designer as a Team of One." Analysing Design Activity, N. Cross, H. Christiaans, and K. Dorst, eds., John Wiley & Sons, Chichester, UK.

Granum, E., Moeslund, T., and Störring, M. (2003). "Facilitating the Presence of Users and 3D Models by the Augmented Round Table." Presence 2003: The 6th Annual International Workshop on Presence 19.

Ishii, H., and Ullmer, B. (1997). "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms." CHI '97: Conference on Human Factors in Computing Systems, Los Angeles, USA, 234-241.

Kim, M. (2006). "Effects of Tangible User Interfaces on Designers' Spatial Cognition," University of Sydney, Sydney.

Kirsh, D., and Maglio, P. (1994). "On Distinguishing Epistemic from Pragmatic Action." Cognitive Science 18, 513-549.

Lloyd, P., Lawson, B., and Scott, P. (1995). "Can Concurrent Verbalization Reveal Design Cognition?" Design Studies 16 (2), 237-259.

Maher, M., Rosenman, M., Merrick, K., and Macindoe, O. (2006a). "DesignWorld: An Augmented 3D Virtual World for Multidisciplinary Collaborative Design." CAADRIA 2006, Osaka, Japan, 133-142.

Maher, M. L., Bilda, Z., Gu, N., Gul, F., Huang, Y., Kim, M. J., Marchant, D., and Namprempree, K. (2006b). "CRC Design World Report." University of Sydney, Sydney.

Moeslund, T., Störring, M., and Liu, Y. (2002). "Towards Natural, Intuitive and Non-Intrusive HCI Devices for Roundtable Meetings." MU3I: Workshop on Multi-User and Ubiquitous User Interfaces, Funchal, Madeira, Portugal, 25-29.

Norman, D., and Draper, S. (1986). "User Centered System Design: New Perspectives on Human-Computer Interaction." Lawrence Erlbaum Associates, Hillsdale, NJ.

Schön, D. (1983). The Reflective Practitioner: How Professionals Think in Action, Basic Books, New York.

Sharlin, E., Watson, B., Kitamura, Y., Kishino, F., and Itoh, Y. (2004). "On Tangible User Interfaces, Humans and Spatiality." Personal and Ubiquitous Computing 8(5), 338-346.

Simon, H. (1992). The Sciences of the Artificial, The MIT Press, Cambridge, Massachusetts, USA.

Singhal, S., and Zyda, M. (1999). Networked Virtual Environments: Design and Implementation, ACM Press, New York.

Stempfle, J., and Badke-Schaub, P. (2002). "Thinking in Design Teams - an Analysis of Team Communication." Design Studies 23(5), 473-496.

Suwa, M., Purcell, T., and Gero, J. (1998). "Macroscopic Analysis of Design Processes based on A Scheme for Coding Designers' Cognitive Actions." Design Studies 19(4), 455-483.

Suwa, M., and Tversky, B. (1997). "What Do Architects and Students Perceive in Their Design Sketches?" Design Studies 18(4), 385-403.

Valkenburg, R., and Dorst, K. (1998). "The Reflective Practice of Design Teams." Design Studies 19(3), 249-271.


A NOVEL CAMERA-BASED SYSTEM FOR COLLABORATIVE INTERACTION WITH MULTI-DIMENSIONAL DATA MODELS *

Michael Van den Bergh, Computer Vision Laboratory, ETH Zurich, Zurich, Switzerland; vamichae@vision.ee.ethz.ch

Jan Halatsch, Chair of Information Architecture, ETH Zurich, Zurich, Switzerland; halatsch@arch.ethz.ch

Antje Kunze, Chair of Information Architecture, ETH Zurich, Zurich, Switzerland; kunze@arch.ethz.ch

Frédéric Bosché, PhD, Computer Vision Laboratory, ETH Zurich, Zurich, Switzerland; bosche@vision.ee.ethz.ch

Luc Van Gool, Prof., Computer Vision Laboratory, ETH Zurich, Zurich, Switzerland; vangool@vision.ee.ethz.ch

Gerhard Schmitt, Prof., Chair of Information Architecture, ETH Zurich, Zurich, Switzerland; schmitt@ia.arch.ethz.ch

ABSTRACT: In this paper, we address the problem of effective visualization of, and interaction with, multiple multi-dimensional data sets supporting communication between project stakeholders in an information cave. More precisely, our goal is to enable multiple users to interact with multiple screens from any location in an information cave. We present here our latest advancements in developing a novel human-computer interaction system that is specifically targeted at room setups with physically spread sets of screens. Our system consists of a set of video cameras overseeing the room, whose signals are processed in real-time to detect and track the participants, their poses and their hand gestures; interaction is thus driven by camera-based gesture recognition. Early experiments have been conducted in the Value Lab, recently introduced at ETH Zurich, and focus on enabling interaction with large urban 3D models being developed for the design and simulation of future cities. For the moment, the experiments consider only the interaction of a single user with multiple layers (points of view) of a large city model displayed on multiple screens. The results demonstrate the huge potential of the system, and of the principle of vision-based interaction for such environments. Work continues on the extension of the system to the multi-user level.

KEYWORDS: Information cave, interaction, vision, camera, hand gestures.

1. INTRODUCTION

1.1 Product Information Models for Design and Simulations

Future cities, standing for evolving medium-size and mega-cities, have to be understood as a dynamic system: a network that bridges different scales, such as the local, regional, and global scales. Since such a network comprises several dimensions (for example social, cultural, and economic dimensions), it is necessary to connect active research, project management, urban planning, as well as communication with the public, in order to establish a mutual vision and to map the desires of the involved participants.

*This work was supported by the Competence Center for Digital Design Modeling (DDM) at ETH Zurich.

In the last few decades, the use of computers, software and digital models has expanded within many fields related to Architecture, Engineering, Construction and Facility Management (AECFM), where Facility may refer to commercial, industrial or infrastructure building assets, and also to cities. However, this expansion occurred without wider project integration, and it is only recently that researchers have started tackling the problems caused by its compartmentalization within the different fields corresponding to the multiple stakeholders of such projects. For example, in urban planning, multiple different digital models are often used to perform different analyses, such as CO2 emissions, energy consumption and traffic load. Nonetheless, significant progress has recently been made in the integration of information models into what are now commonly referred to as Building Information Models (BIM), City Information Models (CIM), etc.

These integrated models enable earlier and more systematic (sometimes automated) detection of conflicts between the different analyses and processes. However, the resolution of these conflicts still requires human negotiation, and effective methods and technologies for interacting collaboratively with the information in order to resolve detected conflicts are still missing. The main complexity here is that large projects, such as large-scale planning projects, require the involvement of many technical experts and other stakeholders (e.g. owners, the public) who approach projects from many different viewpoints, which results in many different types of conflicts that must be resolved collaboratively.

In order to address this problem, holistic participative planning paradigms (governing process management, content creation as well as design evaluation) have to evolve, and consider new software and hardware solutions that will enable the different stakeholders to effectively work collaboratively.

1.2 Example: Dübendorf Urban Planning Project

Today’s urban planning and urban design rely mainly on static representations (e.g. key visuals, 3D models). Since the planning context and its data (for example scenario simulations) are dynamic, visual representations need to be dynamic and interactive too, resulting in the need for physical environments enabling such dynamic processes.

During the spring semester 2009, students researched how to establish design proposals in a more collaborative manner. The focus was on an urban planning project: the rehabilitation of the abandoned Swiss military airport in Dübendorf. The main goal of this research project was to develop an interactive shape grammar model (Müller, 2009), which was implemented with the CityEngine (http://www.procedural.com/cityengine). In combination with real-time visualization using Autodesk Showcase (http://usa.autodesk.com/adsk/servlet/index?id=6848305&siteID=123112), a better understanding of design interventions was achieved.

While this research project showed the feasibility of collaborative interactive design, the experiments, then conducted in the Value Lab (see section 2) showed that the interactivity offered by such information caves did not always meet the expectations of the users (see analysis in section 2).


FIG. 1: As a result of collaborative city design workshops, a new use for an abandoned military airport in the outskirts of Zurich was implemented with the collaborative interaction tools that are available at the Value Lab.


1.3 Information Visualization Caves

Information visualization caves have been investigated in order to enable stakeholders to sit in a single room and collaboratively solve conflicts, during planning, construction or operation. Such caves are typically designed with complex multimedia settings to enable participants to visualize the project model, and the possible conflicts at hand, from multiple points of view simultaneously (e.g. owner vs. user vs. contractor, contractor A vs. contractor B).

Traditional human-computer interaction devices (e.g. mouse, keyboard) are typically designed to fulfill single-user requirements, and are not adapted to the multiplicity of participants and the multi-dimensionality (as well as multiplicity) of the data sets representing large projects. Solutions have, however, been proposed to improve interactivity. A multi-screen setup can drastically enhance collaboration and participatory processes by keeping information present to all attendees, and such setups are common in information caves (Gross et al., 2003; König et al., 2007). Additionally, (multi-)touch screens are now available as more intuitive multi-user human-computer interaction devices. However, despite their definite advantages for interactions with multiple users, particularly in table settings, multi-touch screens remain inadequate for rooms with physically spread sets of screens, as they require the users to constantly move from one screen to another.

2. VALUE LAB

The ETH Value Lab (see figure 2) is a special kind of information visualization room. It was designed as a research platform to guide and visualize long-term planning processes while intensifying the focus on the optimization of buildings and infrastructures through new concepts, new technologies and new social behaviors, in order to cut down CO2 emissions, energy consumption and traffic load, and to increase the quality of life in urban environments (Halatsch and Kunze, 2007). It helps researchers and planners to combine existing realities with planned propositions, and to overcome the multiplicity (GIS, BIM, CAD) and multi-dimensionality of the data sets representing urban environments (Halatsch et al., 2008a and 2008b).

The Value Lab consists of a physical space with state-of-the-art hardware (supercomputer), software (e.g. urban simulation and CAD/BIM/GIS data visualization packages) and intuitive human-computer interaction devices. The interface consists of several high-resolution large-area displays, including:

Five large screens with a total of 16 megapixels, equipped with touch interface capabilities; and

Three Full HD projectors. Two projectors form a concatenated high-resolution projection display with a resolution of 4 megapixels. That particular configuration is, for example, used for real-time landscape visualization. The third projector delivers associated views for videoconferencing, presentation and screen sharing.

The computing resources, displays and interaction system produce a tremendous number of possible configurations, especially in combination with the connected computing resources. The system manages all computing resources, operating systems, displays, inputs, storage and backup functionality in the background, as well as lighting conditions and different ad hoc user modes.

As a result, the Value Lab forms the basis for knowledge discovery and the representation of potential transformations of the urban environment, using time-based scenario planning techniques to test the impact of varying parameters on the constitution of cities. It shows how the combination of concepts for hardware, software and interaction can help to manage digital assets and simulation feedback, as well as promote visual insights from urban planners to associated stakeholders in a human-friendly computer environment (Fox, 2000).

However, as discussed earlier, we found that besides direct on-screen manipulation of information, a technology was needed to steer larger moderated audiences inside a project, one that offers more integrated navigation and usability as well as a wider overview of the main contents presented.

Therefore we are investigating a novel touch-less interaction system with camera-based gesture recognition. This system is presented below and early experimental results are presented in section 4.



FIG. 2: The Value Lab represents the interface to advanced city simulation techniques and acts as the front-end of the ETH Simulation Platform.

3. VISION SYSTEM

In this section, we describe the vision system, which detects and tracks the hand gestures of a user in front of a camera mounted on top of a screen, as shown in figure 3. The goal of the system is to enable the interaction of the person with a 3D model.

A recent review of vision-based hand pose estimation (Erol et al., 2007) states that currently the only technology that satisfies the advanced requirements of hand-based input for human-computer interaction is glove-based sensing. In this paper, however, we aim to provide hand-based input without the requirement of such markers.

The first contribution is an improved skin color segmentation algorithm that combines an offline and an online model. The online skin model is updated at run-time based on color information taken from the face region of the user. This skin color segmentation is used to detect the location of the hands. The second contribution is a novel hand gesture recognition system, which combines the classification performance of average neighborhood margin maximization (ANMM) with the speed of 2D Haarlets. The system is example-based, matching the observations to predefined gestures stored in a database. The resulting system is real-time and does not require the use of special gloves or markers.


FIG. 3: Person interacting with a camera and screen.

3.1 Skin Color Segmentation

The hands of the user are located using skin color segmentation. The system is hybrid, combining two skin color segmentation methods. The first is a histogram-based method, which can be trained online, while the system is running. The advantage of this method is that it can be adapted in real-time to changes in illumination and to the person using the system. The second method is trained in advance with a Gaussian mixture model (GMM). The benefit of the offline-trained method is that it can be trained with much more training data and is more robust. However, it is not robust to changes in illumination or to changes in the user.

3.1.1 Online model

Every color can be represented as a point in a color space. A recent study (Schmugge et al., 2007) tested different color spaces and concluded that the HSI (hue, saturation and intensity) color space provides the highest performance for a three-dimensional color space, in combination with a histogram-based classifier.

A nice characteristic is that the histograms can be updated online, while the system is running. Two histograms are kept: one for the skin pixel color distribution (H_skin) and one for the non-skin colors (H_non-skin). For each frame in the incoming video stream, the face region is found using a face detector such as the one in OpenCV (http://opencvlibrary.sourceforge.net/), and the pixels inside the face region are used to update H_skin. Then, the skin color detection algorithm is run; it finds the face region as well as other skin regions such as the hands and arms. The pixels that are not classified as skin are then used to update H_non-skin.
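The online update loop described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the histogram bin counts, the decay-based update policy, and the likelihood-ratio decision rule are all our own assumptions.

```python
import numpy as np

# Hypothetical sketch of the online histogram skin model: two HSI
# histograms (H_skin, fed from face pixels; H_non-skin, fed from the
# rest) are updated each frame, and a pixel is labelled skin when its
# skin likelihood outweighs its non-skin likelihood.
BINS = (16, 16, 8)  # hue, saturation, intensity bins (assumed)

h_skin = np.zeros(BINS)
h_non_skin = np.zeros(BINS)

def to_bins(hsi_pixels):
    """Map HSI pixels in [0, 1) (shape (N, 3)) to histogram bin indices."""
    idx = (hsi_pixels * np.array(BINS)).astype(int)
    return tuple(np.clip(idx, 0, np.array(BINS) - 1).T)

def update(hist, hsi_pixels, decay=0.95):
    """Decay old evidence, then accumulate this frame's pixels."""
    hist *= decay
    np.add.at(hist, to_bins(hsi_pixels), 1.0)

def classify(hsi_pixels, eps=1e-6):
    """True where P(color|skin) outweighs P(color|non-skin)."""
    b = to_bins(hsi_pixels)
    p_skin = h_skin[b] / (h_skin.sum() + eps)
    p_bg = h_non_skin[b] / (h_non_skin.sum() + eps)
    return p_skin > p_bg
```

In use, `update(h_skin, face_pixels)` would be called with the face-detector output of each frame and `update(h_non_skin, other_pixels)` with the pixels rejected by the previous skin mask, so that the model keeps tracking illumination changes.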

3.1.2 Offline model

In the GMM-based approach, the pixels are transformed to the rg color space. A GMM is fitted to the distribution of the training skin color pixels using the expectation-maximization algorithm, as described in (Jedynak et al., 2002). Based on the GMM, the probabilities P(skin|color) can be computed offline and stored in a lookup table.
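The offline lookup-table construction can be illustrated as follows. The mixture parameters and the skin prior below are invented for illustration; in the actual system they would come from EM fitting on labelled training pixels.

```python
import numpy as np

# Illustrative sketch of the offline model: Gaussian mixtures over
# rg-chromaticity are evaluated on a grid to precompute a P(skin|r,g)
# lookup table via Bayes' rule.
def gmm_density(points, weights, means, variances):
    """Mixture of isotropic 2D Gaussians evaluated at points (N, 2)."""
    density = np.zeros(len(points))
    for w, mu, var in zip(weights, means, variances):
        diff = points - np.array(mu)
        norm = 1.0 / (2.0 * np.pi * var)
        density += w * norm * np.exp(-(diff ** 2).sum(axis=1) / (2 * var))
    return density

# Assumed (not fitted) skin and non-skin mixtures in rg space.
skin = ([0.6, 0.4], [[0.45, 0.32], [0.40, 0.30]], [0.003, 0.005])
non_skin = ([1.0], [[0.33, 0.33]], [0.02])
prior_skin = 0.2  # assumed prior probability of a skin pixel

# Precompute P(skin | r, g) on a 64x64 grid of rg values.
r, g = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
grid = np.stack([r.ravel(), g.ravel()], axis=1)
p_c_skin = gmm_density(grid, *skin)
p_c_bg = gmm_density(grid, *non_skin)
lut = (prior_skin * p_c_skin) / (prior_skin * p_c_skin +
                                 (1 - prior_skin) * p_c_bg + 1e-12)
lut = lut.reshape(64, 64)
```

At run-time, each pixel's rg value is quantized to a grid cell and the stored probability is read back, so classification costs one table lookup per pixel.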

3.1.3 Post processing

On the one hand, the histogram-based method performs rather well at detecting skin color pixels under varying lighting conditions; however, as it bases its classification on very little input data, it produces a lot of false positives. On the other hand, the GMM-based method performs well in constrained lighting conditions, but under varying lighting conditions it tends to falsely detect white and beige regions in the background. By combining the results of the histogram-based and GMM-based methods, many false positives can be eliminated. The resulting segmentation is further improved in additional post-processing steps, which include median filtering and connected-components analysis.
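A hedged sketch of this post-processing stage: the two masks are combined (a pixel must be accepted by both classifiers), then small blobs are removed with a connected-components pass. The median-filtering step is omitted for brevity, and the minimum blob size is an assumed parameter.

```python
import numpy as np
from collections import deque

def combine_masks(mask_hist, mask_gmm):
    """Keep only pixels both classifiers call skin (fewer false positives)."""
    return mask_hist & mask_gmm

def largest_components(mask, min_size=20):
    """4-connected component labelling; drop blobs below min_size pixels."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    out = np.zeros_like(mask, dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:  # breadth-first flood fill of one blob
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:  # keep only sizeable blobs
                    for y, x in comp:
                        out[y, x] = True
    return out
```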

3.2 Hand Gesture Recognition

The hand gesture recognition algorithm is based on the full body pose recognition system using 2D Haarlets described in (Van den Bergh et al., 2009). Instead of using silhouettes of a person as input for the classifier, hand images are used.

3.2.1 Classifier input

The hands are located using the skin color segmentation algorithm described in section 3.1. A cropped grayscale image of the hand is extracted, as well as a segmented silhouette; these are then concatenated into one input sample, as shown in figure 4. The benefit of using the cropped image without segmentation, as shown on the right, is that it is very robust to noisy segmentations. Using the silhouette based on skin color segmentation only, as shown on the left, the influence of the background is eliminated. The concatenation of both gives us the benefits of both input sample options.
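The construction of such an input sample can be sketched as below. The 32x32 crop size and the normalization are assumptions for illustration; the paper does not state the exact resolution.

```python
import numpy as np

def make_input_sample(gray_crop, silhouette):
    """Concatenate a grayscale hand crop and its binary silhouette
    into one normalized feature vector (figure 4)."""
    assert gray_crop.shape == silhouette.shape == (32, 32)
    gray = gray_crop.astype(np.float32) / 255.0   # robust to noisy masks
    sil = silhouette.astype(np.float32)           # background-independent
    return np.concatenate([gray.ravel(), sil.ravel()])  # length 2048
```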


FIG. 4: Example of an input sample.

3.2.2 Haarlet-based classifier

For details about the classifier we refer to (Van den Bergh et al., 2009). It is based on an average neighborhood margin maximization (ANMM) transformation T, which projects the input samples to a lower-dimensional space, as shown in figure 5. This transformation is approximated using Haarlets to improve the speed of the system. Using a nearest neighbors search, the coefficients are then matched to hand gestures stored in a database.


FIG. 5: Structure of the classifier illustrating the transformation T (dotted box), approximated using Haarlets. The Haarlet coefficients are computed on the input sample. The approximated coefficients (those that would result from T) are computed as a linear combination C of the Haarlet coefficients.
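The project-then-match step of figure 5 can be sketched as follows. The projection matrix and the gesture database here are random placeholders standing in for the trained ANMM/Haarlet values; only the structure (linear projection followed by nearest-neighbour lookup) follows the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((8, 2048))  # placeholder projection to 8 coefficients

def classify_gesture(sample, db_coeffs, db_labels):
    """Project the sample, then return the nearest database gesture."""
    coeffs = T @ sample                              # low-dimensional code
    dists = np.linalg.norm(db_coeffs - coeffs, axis=1)
    return db_labels[int(np.argmin(dists))]          # nearest neighbour
```

In the real system the projection is not applied as a dense matrix product: the Haarlet coefficients are computed cheaply on integral images and combined linearly (matrix C in figure 5), which is what makes the classifier fast enough for real-time use.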

4. EXPERIMENTS

In this section, we describe the demo application that allows for the visualization of 3D models loaded into the program. Using hand gestures, the user can zoom in on the model, and pan and rotate it.

4.1 Gestures

The hand gesture classifier is trained based on a set of training samples containing the gestures shown in figure 6. An example of the system detecting these static gestures is shown in figure 7.


FIG. 6: The gestures that are trained in the hand gesture classifier.


FIG. 7: Examples of the hand gesture recognition system detecting different hand gestures.

The hand gesture interaction in this application is composed of the hand gestures shown in figure 6. The system recognizes the gestures and movements of both hands to enable the manipulation of the object/model. Pointing with one hand selects the model to start manipulating it. By making two fists, the user can grab and rotate the model along the z-axis. By making a fist with just one hand, the user can pan through the model. By making a pointing gesture with both hands and pulling the hands apart, the user can zoom in and out of the model. Opening both hands releases the model, and nothing happens until the user makes a new gesture. An overview of these gestures is shown in figure 8.

(a) select (start interaction)

(b) rotating

(c) panning

(d) zooming

(e) release (do nothing)

FIG. 8: The hand gestures used for the manipulation of the 3D object on the screen.
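The gesture-to-action mapping described above can be sketched as a simple lookup table over the pose pair recognized for the two hands. The pose labels and the fallback for unlisted combinations are illustrative assumptions, not part of the authors' system.

```python
# Hypothetical mapping of the (left hand, right hand) pose pair to an
# interaction mode, following the gesture set of figures 6 and 8.
ACTIONS = {
    ("point", "open"): "select",
    ("open", "point"): "select",
    ("fist", "fist"): "rotate",
    ("fist", "open"): "pan",
    ("open", "fist"): "pan",
    ("point", "point"): "zoom",
    ("open", "open"): "release",
}

def interaction_mode(left, right):
    # Unlisted pose pairs are treated as "release" (do nothing) in this sketch
    return ACTIONS.get((left, right), "release")
```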

The average time delay of the vision system is 31 ms. This is the time recorded from sending the command to grab the image from the camera to sending the resulting interaction commands to the user interface. This allows us to run the system at a refresh rate of 30 Hz. The accuracy of the recognition system (using the three hand poses shown in figure 6) is 99.8%. Increasing the number of trained poses from three to ten results in a recognition accuracy of 98.2%. The accuracy of the hand localization based on the skin color segmentation is less than a pixel, provided that the hand is segmented correctly. The hand is not segmented correctly when it overlaps with the face of the user, or with a similarly colored person or object in the background. These cases are predictable and could be eliminated in future work with a form of depth estimation, for which, to our knowledge, no accurate real-time implementations exist at the time of writing.

4.2 Application

The interaction system above has been implemented as an extension of an open-source 3D model viewer, the GLC Player (http://www.glc-player.net/). This enables us to: (1) load models in multiple formats (OBJ, 3DS, STL, and OFF) and of different sizes, and (2) use our hand interaction system in combination with standard mouse and keyboard interaction. Pressing a button in the toolbar activates the hand interaction mode, after which the user can start gesturing to navigate through the model. Pressing the button again deactivates the hand interaction mode and returns to the standard mouse-keyboard interaction mode.

We conducted experiments by installing our system in the Value Lab and testing it with multiple 3D models, in particular with a model created as part of the Dübendorf urban planning project. This model represents an area of about 0.6 km² and consists of about 4,000 objects (buildings, street elements, trees) with a total of about 500,000 polygons. Despite this size, our system achieved frame rates of about 30 fps (frames per second), which is sufficient for smooth interaction. Examples of the user zooming, panning and rotating through the 3D model are shown in figures 9, 10 and 11 respectively. In each figure, the left column shows side and back views of the system in operation at the beginning of the gesture, and the right column shows the same views at the end of the gesture.

The hand interaction mode is currently only available for model navigation (rotation, panning and zooming), all the other features of the viewer being only accessible in mouse-keyboard interaction mode. Nonetheless, our implementation enables simple extensions of the hand interaction mode. In the near future, we for instance aim to enable the hand interaction mode for object selection (to view its properties).



FIG. 9: Zooming into the model.


FIG. 10: Panning the model.



FIG. 11: Rotating the model.

5. CONCLUSION

In this paper, we first described the need for novel human-computer interaction tools, enabling users in information visualization caves to simultaneously interact with large amounts of information displayed on multiple screens spread around the cave. Today's urban design tasks could be significantly enhanced in terms of interaction, especially when different stakeholders are involved. Currently available interaction devices, such as mouse-keyboard or screen (multi-)touch capabilities, are often not adapted to such requirements, and this was confirmed in an urban design project conducted in the Value Lab at ETH Zurich.

A novel, vision-based solution for human-computer interaction was then introduced. Compared to currently existing systems, it presents the advantage of being marker-less. Experiments, conducted in the Value Lab, investigated the usability of this system in a situation as realistic as possible. For these, our interaction system has been integrated into a 3D model viewer and tested with a large 3D model of an urban development project. The results show that our system enables stable, smooth and natural interaction with 3D models at refresh rates of 30 Hz.

Nonetheless, these results remain preliminary. The system is not always as robust as it should be, and its applicability to enable multiple users to simultaneously interact with multiple screens remains to be demonstrated. Future work will thus be targeted to: (1) extend the set of viewing features accessible through hand gestures (in particular object selection and de-selection); (2) further improve the robustness of the system, particularly with respect to different users; and (3) develop a larger system containing multiple cameras and enabling the interaction of multiple users with different screens.

6. REFERENCES

Erol, A., Bebis, G., Nicolescu, M., Boyle, R. D., and Twombly, X. (2007). “Vision-based hand pose estimation: a review.” Computer Vision and Image Understanding, vol. 108, 52-73.


Fox, A., Johanson, B., Hanrahan, P., and Winograd, T. (2000). “Integrating information appliances into an interactive workspace.” IEEE Computer Graphics & Applications, vol. 20, no. 3, 54-65.

Gross, M., Würmlin, S., Naef, M., Lamboray, E., Spagno, C., Kunz, A., Koller-Meier, E., Svoboda, T., Van Gool, L., Lang, S., Strehlke, K., Moere, AV., and Staadt, O. (2003). “Blue-c: a spatially immersive display and 3D video portal for telepresence.” ACM Transactions on Graphics, 819-827.

Halatsch, J., and Kunze, A. (2007). “Value Lab: Collaboration In Space.” IV 2007: 11th International Conference Information Visualization, Zurich, Switzerland, July 4-6, 376-381.

Halatsch, J., Kunze, A., Burkhard, R., and Schmitt, G. (2008a). “ETH Value Lab - A Framework For Managing Large-Scale Urban Projects.” 7th China Urban Housing Conference, Chongqing, China, Sept. 26-27.

Halatsch, J., Kunze, A., and Schmitt, G. (2008b). “Using Shape Grammars for Master Planning.” DCC 2009: 3rd Conference on Design Computing and Cognition, Atlanta, Sept. 21-26, 655-673.

Jedynak, B., Zheng, H., Daoudi, M., and Barret, D. (2002). “Maximum entropy models for skin detection.” ICVGIP 2002: 3rd Indian Conference on Computer Vision, Graphics and Image Processing, Ahmadabad, India, Dec. 16-18, 276-281.

König, W. A., Bieg, H.-J., Schmidt, T., and Reiterer, H. (2007). “Position-independent interaction for large high-resolution displays.” IHCI 2007: IADIS International Conference on Interfaces and Human Computer Interaction, Lisbon, Portugal, July 6-8, 117-125.

Müller, P., Wonka, P., Haegler, S., Ulmer, A., and Van Gool, L. (2006). “Procedural Modeling of Buildings.” ACM Transactions on Graphics, vol. 25, no. 3, 614-623.

Schmugge, S. J., Jayaram, S., Shin, M. C., and Tsap, L. V. (2007). “Objective evaluation of approaches of skin detection using ROC analysis.” Computer Vision and Image Understanding, vol. 108, 41–51.

Van den Bergh, M., Koller-Meier, E., and Van Gool, L. (2009). “Real-time body pose recognition using 2D or 3D Haarlets.” International Journal of Computer Vision, vol. 83, 72-84.


TOWARDS A COLLABORATIVE ENVIRONMENT FOR SIMULATION BASED DESIGN

Michele Fumarola, Ph.D. Candidate, Systems Engineering Group Faculty of Technology, Policy and Management, Delft University of Technology Jaffalaan 5, 2628 BX Delft, The Netherlands; Tel. +31 (0)15 27 89567 m.fumarola@tudelft.nl http://www.tudelft.nl/mfumarola

Stephan Lukosch, Assistant Professor, Systems Engineering Group Faculty of Technology, Policy and Management, Delft University of Technology Jaffalaan 5, 2628 BX Delft, The Netherlands; Tel. +31 (0)15 27 83403 s.g.lukosch@tudelft.nl http://www.tudelft.nl/sglukosch

Mamadou Seck, Assistant Professor, Systems Engineering Group Faculty of Technology, Policy and Management, Delft University of Technology Jaffalaan 5, 2628 BX Delft, The Netherlands; Tel. +31 (0)15 27 83709 m.d.seck@tudelft.nl http://www.tudelft.nl/mseck

Cornelis Versteegt, Senior Project Manager, APM Terminals Management BV Anna van Saksenlaan 71, 2593 HW The Hague, The Netherlands; cornelis.versteegt@apmterminals.com

ABSTRACT: Designing complex systems is a collaborative process wherein modeling and simulation can be used for support. Designing complex systems consists of several phases: specification, conceptual and detailed design, and evaluation. Modeling and simulation is currently mostly used in the evaluation phase. The goals, objectives and IT support for each phase differ. Furthermore, multi-disciplinary teams are involved in the design process. We aim at providing an integrated collaborative environment for modeling and simulation throughout entire design projects. The proposed architecture, called Virtual Design Environment, consists of three main components: a design, a visualization, and a simulation component. The layout of the design is made in the design component. The design component has been developed as an AutoCAD plug-in. This approach was chosen because AutoCAD is used in many complex design projects. The AutoCAD plug-in communicates the design decisions to the simulation component. The processes that will take place once the system is built are simulated by the simulation component. Finally, the results of the simulation are sent to the visualization component. The visualization component provides an interactive 3D environment of the design and can serve decision makers as a tool for communication, evaluation and reflection. In this paper, we present the architecture of this environment and show some preliminary results.

KEYWORDS: Collaborative design, modeling and simulation, virtual environment, knowledge sharing, complex systems

1. INTRODUCTION

As introduced by Simon (1977), decision making is composed of structuring the problem, evaluating alternatives upon criteria and selecting the best alternative. Modeling and simulation (M&S) is often seen as a tool to analyze (Zeigler & Praehofer, 2000) and is therefore mainly used in the evaluation phase of a decision making process. We propose that M&S can be introduced much earlier in the design process of complex systems, more precisely in the phase of structuring the problem. Many different design methodologies can be found in the literature for the design of complex systems (INCOSE, 2009; Pielage, 2005; Sage & Armstrong, 2000). There are many similarities among these design methodologies. We do not focus on an individual design methodology, but rather on the phases they share. We identify the following phases: specification, conceptual and detailed design, and evaluation. Simulation can add value in each of these phases. Currently, however, simulation is mostly used in the evaluation phase. When using simulation earlier in the design, the designers' and decision makers' ability to generate alternatives will be enhanced. The designers can generate more alternatives and study them more comprehensively. However, designing a complex system is commonly a collaborative process wherein multiple actors are involved. These actors have varying interests and fields of expertise. To achieve a fruitful process, these actors must first acquire a shared understanding of the problem domain and afterwards be able to collaborate effectively on this problem.

To design a complex system, support is therefore needed from different perspectives. An integrated environment to support the design of a complex system should not only be able to support design, but also simulation. As a shared understanding among all involved actors (Piirainen, Kolfschoten, & Lukosch, 2009) is a major challenge in collaborative design, the design and the simulation results need to be visualized to present them to the various actors active in the design process. Moreover, these actors need to be able to collaborate on the design by simulating and visualizing the result. These perspectives are shown in Figure 1. We therefore propose to use a collaborative design environment, based on M&S, which supports designers in all the design phases mentioned earlier.


FIG. 1: The design environment should cover different perspectives.

In this paper, we will present how such an environment can be achieved. We will begin by describing a case study we conducted at a large container operator which is currently dealing with the design of automated container terminals. Designing a container terminal is a complex problem and automation brings additional challenges as it is a novel approach wherein little experience has been gathered so far. From this problem, we will gather requirements which will be used to design the environment. After discussing the related work on this topic, we will present our approach for such an environment. Subsequently, some preliminary experiences will be presented based on interviews with domain experts. We will finally conclude the paper and present our future work.

2. REQUIREMENTS ANALYSIS

Starting from an exploratory case study performed at a container terminal operator, we identify requirements for the design environment that we propose. We will first present the case study and focus on a common approach on designing a complex system as for instance a container terminal. From there, we will perform an analysis and extract general requirements.


2.1 Case study

Automated container terminals are gaining momentum as the advantages in terms of costs and productivity are becoming clear. In this novel type of terminal, operations (e.g. picking up a container, placing it on a vessel, etc.) are computer operated, which has a number of advantages: lower life cycle costs, improvement in terms of safety, reduction of damage to containers, higher service levels, etc. Designing these terminals is however a complex endeavor: a large number of actors are involved in the design process and the design space (i.e. the number of alternative designs) is quite large. Moreover, experience in the design of automated container terminals is still limited as the number of these terminals around the globe is small. During the design process, two types of situations may occur: brainstorming sessions and reflection.

In the brainstorming sessions, the different people involved in the process gather in the same location and work together on the design of the new terminal. In these situations, a white board is mostly used to sketch the design and to present ideas. Sketching the design is however mostly coarse, and often the design is quickly described in words, e.g. stating the amount of equipment needed without specifying locations. Supporting documents are shared between the participants, notes are usually taken on paper or typed on a computer, and there is verbal communication.

During reflection, the actors involved in the design work separately to concentrate on their particular task: e.g. the CAD designer will make an initial drawing of the terminal, and the business analysts will use their business model. There is little interaction between these actors until they reach a certain goal, for which a new meeting will be scheduled.

2.2 Analysis

The brainstorming sessions are mainly paper based and unstructured. As certain expertise is needed to use CAD environments such as AutoCAD, these tools are seldom used in such sessions. Although CAD designers are skilled in these environments, the remaining actors are not. These environments do however offer the opportunity of making exact initial designs without running into misunderstandings. The first requirement is therefore the possibility of sharing a CAD environment without exposing non-experts to unneeded complexity. Thereby, non-experts have the possibility to gain a better understanding of the current design, which is one of the major challenges in complex collaborative design projects (Piirainen, et al., 2009).

In these designs, understanding the dynamics of the system can be a hard task. As the system comprises a large number of entities, the exploration of alternatives and the experimentation with these alternatives need to be supported. Inputting a decision in the design environment should therefore be facilitated by taking into account such things as contrasting decisions, physical feasibility, future outcomes, etc. The link between the design environment and a possible simulation environment is hereby made.

The decision making process is supported by a large amount of documentation which is mainly printed or shared through a computer network. Querying for a specific item of information can be challenging and inefficient. This is due to the lack of structure of the common approach of sharing documentation. Having the ability to easily find the right document to make a decision is important. This is however only possible if the right information is available in the decision making environment: having the possibility to easily share documentation is therefore required.

For enabling collaborative decision making, a collaborative environment has to support a number of functionalities. At the core of a collaborative decision-making process, actors have to reach a shared understanding of the problem domain. For that purpose, the actors need to communicate their understanding to the other actors and they need the possibility to discuss the feasibility of their decisions. The latter also requires understanding of recent developments and changes in the design. As a result, the collaborative environment has to offer various means for synchronous as well as asynchronous communication and mechanisms for achieving group awareness (Gutwin, Greenberg, Roseman, & Sasse, 1996).

From this analysis, a number of requirements can be extracted:

1. Visualization for non-experts of the system under investigation

2. Documentation sharing and structuring

3. Simulation-based experiments

4. Communication and awareness support for the participants

3. RELATED WORK

Modeling and simulation has often been used in the analysis of container terminal design. Stahlbock & Voss (2008) reviewed the existing literature on container terminal logistics and found simulation used as a tool to analyze different aspects of container terminals. Nevertheless, there is little reference to the use of M&S during the synthesis phase of the design process. Ottjes et al. (2006) discuss their solution using three main functions to make the conceptual design of a container terminal. This environment, however, mainly focuses on simulation, leaving out the collaborative aspects. More attention to collaboration in simulation projects for container terminal logistics has been paid by Versteegt, Vermeulen, & van Duin (2003), although existing tools were used which do not offer explicit support for collaboration.

On the design of complex systems, more work has been done which comes closer to fulfilling our stated requirements. Peak et al. (2007) presented an approach based on SysML in which components are used to design a system; they do not, however, concentrate on collaboration during the design. Paredis et al. (2001) also introduced a component-based simulation environment for the design of mechatronic systems. Their solution enables multiple users to collaborate on the design of such a system, but does not take into account the different skills of the actors involved, as their scenario assumes users with comparable backgrounds. Comparable environments exist for virtual prototyping in specialized engineering (e.g. automobile, aeronautical) industries.

On collaborative design in virtual environments, various examples are at hand. Shiratuddin & Breland (2008) present an environment for architectural design that uses a 3D game engine to present the final design. They argue that shared understanding is achieved across interdisciplinary groups. Rosenman, Smith, Maher, Ding, & Marchant (2007) discuss a solution that has different views (CAD and 3D virtual environment) on a given design in the AEC domain. However, simulation is out of the scope of this research. Further examples of collaborative design in virtual environments can be found in Conti, Ucelli, & Petric (2002), and Pappas, Karabatsou, Mavrikios, & Chryssolouris (2006).

The conclusions found in Sinha, Lian, Paredis, & Khosla (2001) suggest a lack of close integration of design and modeling & simulation tools. They also recommend the use of design repositories that would provide a way to share knowledge about the system that is being designed. Furthermore, the few integrated environments do not support collaboration across interdisciplinary groups.

4. APPROACH

Based on the requirements set in section 2, we will present our approach, which comprises four parts: visualization, sharing, simulation, and collaboration. These parts follow from the different requirements. Firstly, we discussed the need for non-experts to be able to understand the complex designs made in environments such as AutoCAD. In order to do so, an understandable way of visualizing such a design is needed. Secondly, we identified the requirement of sharing documents which are important for the decision making process. Thirdly, experimentation with a given design is desirable, for which simulation is needed to predict the workings of the system. Lastly, communication between the different actors needs to be supported, as well as collaboration while working on the design.

With these parts, an architecture can finally be constructed, which we will also discuss.

4.1 Visualization

During the design process, visualization plays an important role in order to understand the problem at hand. In the case of container terminals, it becomes even more important as the point of focus is a physical facility.

The design of a container terminal is commonly done in a CAD environment such as AutoCAD. This offers the designers the possibility to precisely specify the design and later to use the drawings for the construction process. CAD drawings are usually rather complex, making them hard to use by less proficient users such as business analysts. An alternative way of visualizing the future facility is therefore required.


Various studies have been performed showing 3D as the preferred approach for visualizing complex physical systems to increase the interest and understanding of a user and to facilitate dialogue (Fabri, et al., 2008; Terrington, Napier, Howard, Ford, & Hatton, 2008). It is therefore desirable to have a translation from the CAD drawings to a 3D environment understandable by the actors involved in the design process. According to Whyte et al. (2000), this translation can be achieved in several ways: the library approach, simple translation, and the database approach. The library approach uses a set of predefined 3D models to map to the CAD drawing. On the other hand, simple translation purely transforms the CAD drawing to a 3D model, using CAD drawings which are drawn with 3D vectors. Lastly, the database approach uses a central database with a description of an object from which a 3D model and a CAD drawing can be extracted.
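The library approach can be sketched minimally as a lookup from CAD entity names to predefined 3D models. The entity names, model file paths and scene representation below are illustrative assumptions, not an actual terminal library.

```python
# Hypothetical library mapping CAD entity names to predefined 3D models.
MODEL_LIBRARY = {
    "QUAY_CRANE": "models/quay_crane.obj",
    "STACK": "models/stack.obj",
    "AGV": "models/agv.obj",
}

def translate(cad_entities):
    # cad_entities: iterable of (name, x, y, rotation) tuples from the drawing
    scene = []
    for name, x, y, rotation in cad_entities:
        model = MODEL_LIBRARY.get(name)
        if model is None:
            continue  # entities without a library counterpart are skipped
        scene.append({"model": model, "pos": (x, y), "rot": rotation})
    return scene
```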

Once this translation has taken place, the visualization is ready to be used. The designers interact with the 3D environment to input their design decisions and to support their communication of ideas and possible issues. This relieves them from having to describe everything in words, as traditionally happens.

4.2 Sharing

The design process of a complex system is commonly accompanied by a large amount of documentation. A lack of structure and inability to look through this documentation slows down the design process. Furthermore, everyone involved in the design process should have access to the available documentation. Because of this, sharing and structuring documentation should be part of the design environment.

The structuring of documentation can exploit the structure of the actual system. An ontology of the system would provide the entities to which the documentation can be linked. In the case of a container terminal, the structure of the system would be the physical structure of the terminal: e.g. yard, stacks, and different types of equipment. If a user would want to find out information about a specific entity, the logical place to look for would be the visualized entity in the virtual environment.

Using this structure, sharing also becomes possible: documentation can be added by selecting a specific entity. Furthermore, additional information such as notes, ideas and issues can be shared using the same functionality.
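The entity-linked sharing described above could be sketched as follows; the class, method and entity names are hypothetical, not part of the proposed environment's API.

```python
from collections import defaultdict

class DocumentStore:
    """Documents are attached to the ontology entity they describe and
    retrieved by selecting that entity in the virtual environment."""

    def __init__(self):
        self._docs = defaultdict(list)

    def attach(self, entity, doc):
        # Link a document (or note, idea, issue) to a specific entity
        self._docs[entity].append(doc)

    def lookup(self, entity):
        # Return everything attached to the selected entity
        return list(self._docs[entity])
```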

4.3 Simulation

When a system is composed of many parts interacting dynamically in a non-trivial way, it becomes difficult to understand or predict their performance through mere static analysis. In this context, simulation is a powerful instrument for enabling experimentation with the system-to-be, in order to anticipate the implications of the design decisions at a relatively low cost. Simulation is routinely used in this way to evaluate design alternatives through what-if analysis.

For most clients involved in a modeling and simulation effort, only the experimentation results (and their interpretations) have added value. All preceding phases, although appraised as essential, are typically seen as technical work and are naturally externalized. This situation stems from the fact that advanced modeling and simulation requires specialized training and is not at the core of the client’s business. This state of affairs produces a gap between the client and the constructor of the simulation. On the one hand, the constructor’s only input is a system description document that could never capture the richness of the design activity. On the other hand, the client loses the benefits that would have accrued from using simulation earlier in the cycle to explore a larger design space.

A collaborative environment integrating design and simulation would bridge the proverbial gap, allowing the (CAD) designs to be translated into simulation models by an underlying simulation environment and thus facilitating experimentation within the design activity, fostering creativity and reactivity. There is of course no magic bullet. Such capability will only be possible if a comprehensive library of domain specific models has been constituted and individually validated beforehand. An ontology guaranteeing the compositional soundness of the design is also an essential asset. This ontology can be developed using the System Entity Structure formalism (Zeigler & Hammonds, 2007). The use of simulation formalisms supporting modularity and hierarchical construction of models is a strong requirement of the simulation part. A system theoretic formalism such as DEVS is particularly suited for this goal due to its property of closure under coupling.
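The idea of hierarchical composition with closure under coupling can be illustrated structurally: a coupled model groups components and couplings, and can itself be used as a component of a larger coupled model. This sketch shows only the structural part (no state transitions or event scheduling, so it is not a DEVS simulator), and the terminal components named are illustrative.

```python
class Atomic:
    """Leaf model with behavior defined elsewhere (omitted in this sketch)."""
    def __init__(self, name):
        self.name = name

class Coupled:
    """Groups components and couplings; usable as a component itself,
    reflecting closure under coupling."""
    def __init__(self, name, components, couplings):
        self.name = name
        self.components = components  # Atomic or Coupled instances
        self.couplings = couplings    # (source, destination) name pairs

    def leaves(self):
        # Flatten the hierarchy down to its atomic components
        out = []
        for c in self.components:
            out.extend(c.leaves() if isinstance(c, Coupled) else [c])
        return out

# Illustrative terminal decomposition: a berth subsystem nested in a terminal
berth = Coupled("berth", [Atomic("crane"), Atomic("queue")], [("queue", "crane")])
terminal = Coupled("terminal", [berth, Atomic("yard")], [("berth", "yard")])
```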


4.4 Communication and awareness support

The major requirements for enabling collaborative decision-making are related to achieving a shared understanding by means of communication and awareness support. According to Gerosa, Fuks, Raposo, & Lucena (2004), cooperating users must be able to communicate and to coordinate themselves. When communicating, users might generate commitments and define tasks that must be completed to accomplish the common group goal. These tasks must be coordinated so that they are accomplished in the correct order and at the correct time with respect to possible external restrictions. To accomplish these tasks the users have to cooperate in a shared environment. However, while cooperating, unexpected situations might emerge that demand new communication. In such communication new commitments and tasks might be defined, which again must be coordinated to be accomplished in cooperation. In this cooperation cycle, awareness plays a central role. Every user action that is performed during communication, coordination, or cooperation generates information. Some of this information involves two or even more users, and should be made available to all cooperating users so that they can become aware of each other. This helps to mediate further communication, coordination, and cooperation, build up a shared understanding of their common group goals and synchronize their cooperation.

Considering the above, we decided to link the different components of the system architecture by means of communication functionality and awareness widgets. In the following, we briefly describe the functionality we want to integrate by means of patterns 1 for computer-mediated interaction (Schümmer & Lukosch, 2007), which capture best practices in the design of collaborative environments. For awareness purposes, we want to integrate a USER LIST, an INTERACTIVE USER INFO, a CHANGE INDICATOR, a TELEPOINTER, and a REMOTE FIELD OF VISION. The USER LIST will show which decision-makers are currently present and looking at the design visualization. The INTERACTIVE USER INFO will be coupled with the USER LIST to allow decision-makers to directly select between different communication possibilities when choosing one decision-maker from the USER LIST. We will use the CHANGE INDICATOR pattern within the visualization environment to highlight changes which a decision-maker has not yet seen and thereby make decision-makers aware of the designer's recent changes. The TELEPOINTER will allow the stakeholders in the visualization environment to point to specific design parts and thereby support an ongoing discussion. Finally, the REMOTE FIELD OF VISION will allow decision-makers to identify which parts of the simulation the others are interested in. Thereby, the REMOTE FIELD OF VISION will foster discussion among the decision-makers as well as raise the level of shared understanding.
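The awareness widgets above can all be driven by one underlying event mechanism. The following Python sketch (all names hypothetical, not our implementation) illustrates the idea behind the USER LIST and CHANGE INDICATOR: presence and unseen changes are tracked per user and updated as actions are recorded:

```python
# Illustrative sketch (hypothetical names) of a shared awareness mechanism:
# every user action updates state that the widgets render for other users.

from dataclasses import dataclass, field

@dataclass
class AwarenessHub:
    present_users: set = field(default_factory=set)     # backs the USER LIST
    unseen_changes: dict = field(default_factory=dict)  # backs the CHANGE INDICATOR

    def join(self, user):
        self.present_users.add(user)

    def record_change(self, author, part):
        # Mark 'part' as changed for everyone except its author.
        for user in self.present_users - {author}:
            self.unseen_changes.setdefault(user, set()).add(part)

    def mark_seen(self, user, part):
        self.unseen_changes.get(user, set()).discard(part)

hub = AwarenessHub()
hub.join("designer")
hub.join("planner")
hub.record_change("designer", "quay-crane-3")
print(hub.unseen_changes["planner"])  # the planner has one unseen change
```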

To further increase the shared understanding, we decided to add functionality for synchronous as well as asynchronous communication. We want to integrate an EMBEDDED CHAT, a FORUM, SHARED ANNOTATIONS, as well as a FEEDBACK LOOP. The EMBEDDED CHAT will be available in the visualization environment. It will allow decision-makers to directly communicate with each other and discuss general questions concerning the design. We will also include a FORUM in which decision-makers can start asynchronous discussions. By using THREADED DISCUSSIONS these FORUMS can also serve as a knowledge base and repository for the VDE. To allow decision-makers an artifact-centred discussion we will support SHARED ANNOTATIONS. Decision-makers will be able to add annotations to specific points of interest within the visualization environment and share these annotations with the other decision-makers. Furthermore, these SHARED ANNOTATIONS will be pushed to the design environment so that the designer becomes aware of questions as well as discussions concerning the design of the complex environment. Thereby, we implement a FEEDBACK LOOP. In addition, the designer will be able to comment on the annotations so that a THREADED DISCUSSION can evolve within a SHARED ANNOTATION.
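A minimal sketch (hypothetical data model, not our implementation) of how a SHARED ANNOTATION anchored in the 3D scene can carry a THREADED DISCUSSION and thereby close the FEEDBACK LOOP between decision-makers and the designer:

```python
# Sketch (hypothetical data model) of a SHARED ANNOTATION anchored to a
# point in the 3D scene, with threaded replies forming the FEEDBACK LOOP.

from dataclasses import dataclass, field

@dataclass
class Annotation:
    author: str
    text: str
    position: tuple          # (x, y, z) anchor in the visualization
    replies: list = field(default_factory=list)

    def reply(self, author, text):
        # Replies nest at the same anchor, so a threaded discussion
        # can evolve within one annotation.
        child = Annotation(author, text, self.position)
        self.replies.append(child)
        return child

note = Annotation("planner", "Why two cranes on this berth?", (120.0, 4.5, 30.0))
note.reply("designer", "Peak load on berth 2 requires both.")
print(len(note.replies))  # → 1
```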

5. SOLUTION

5.1 Architecture

The different parts which result from the requirements serve as a basis to construct a software architecture for the proposed environment. Each part contains different components in order to achieve the required functionality. A component diagram of this architecture is shown in Figure 2.

The visualization part contains a 3D visualizer and 2D textual output. The 3D visualizer renders the 3D environment based on the CAD drawing. The visualization is currently handled in a rendering engine, namely Ogre3D. This enables us to accurately render the large number of objects commonly found in container terminals. The 2D textual output provides the possibility to visualize textual information as well as pictures, videos, etc. This is needed for documentation sharing and for some functionality enabling collaboration.

1 Please note that pattern names are set in SMALL CAPS.

The environment is fed by different sources. First, a database can feed documentation into the environment, and information can also be added by users who wish to share it. In addition, simulation results can be used as input via the simulation data feed.

The user interface gives access to the virtual environment with its documentation and simulation results. This is the main interface for the decision makers involved in the design process. The world loader reads the CAD drawing which is being edited by the designers. Once the designers finish working on the layout, the CAD drawing is sent to the 3D virtual environment, which visualizes the new design. This design subsequently serves to run the simulation and thereby study the performance of the new design. Feedback from non-designers can then be fed back to the CAD environment, making it possible for the designers to handle new requests. The interaction between designers and non-designers is supported as outlined in the previous section. This process can run multiple times throughout the design process. The process is sketched in Figure 3.

FIG. 2 An architecture for the proposed environment.
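The round trip between the world loader, the simulation, and the feedback channel can be sketched as follows (all functions are hypothetical stand-ins for the components of the architecture, with a trivial throughput measure in place of a real simulation):

```python
# Hypothetical sketch of one design-feedback cycle: the world loader turns
# a CAD layout into a scene, the simulation runs on it, and feedback from
# non-designers is folded back into the layout for the next iteration.

def load_world(cad_layout):
    # Stand-in for the world loader: one scene object per CAD entity.
    return [{"id": entity, "visible": True} for entity in cad_layout]

def simulate(scene):
    # Stand-in for the simulation data feed: a trivial throughput estimate.
    return {"throughput": 10 * len(scene)}

def design_iteration(cad_layout, feedback):
    # Feedback from non-designers is applied to the layout for the next cycle.
    return cad_layout + feedback

layout = ["berth", "crane", "stack"]
scene = load_world(layout)
results = simulate(scene)
layout = design_iteration(layout, ["extra-agv-lane"])  # one feedback cycle
print(results["throughput"], len(layout))  # → 30 4
```

The point of the sketch is the loop structure, not the stand-in bodies: the same three steps can run any number of times during the design process.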

5.2 Discussion

The environment presented here provides the means to collaboratively design a system using simulation across a multidisciplinary group of actors. To achieve this, we presented an architecture based on the different parts resulting from the requirements. This provides an environment wherein a complete design process can take place. In contrast to existing environments (for instance the ones discussed in the related-work section), the presented environment provides the possibility to design a complex system collaboratively, visualize the design to achieve shared understanding, and support the design by sharing existing knowledge. The strength of the environment is therefore not found in the individual components, which are already known and widespread, but in the integrated environment wherein the components reside.

The design of the system is supported by simulation, which is used to evaluate viable alternatives. Actors can gain insight into the workings of a system which has not been physically developed yet. The actors, who have different backgrounds, use a view on the system which they can understand (2D CAD or 3D) and can share existing and new knowledge on the given system. Lastly, communication between these actors is possible through the environment.


6. EXPERIENCES

Initial feedback from designers and decision makers that have worked with the Virtual Design Environment is very positive. First, the Virtual Design Environment provides more insight by using photorealistic 3D images and movies of high quality, where the designers used 2D images and movies in the past. The users especially appreciated the realism of the environment. The Virtual Design Environment uses models and CAD drawings to create a realistic representation, where schematic representations were used in the past. Secondly, the Virtual Design Environment is an environment in which the users can freely move around. Traditionally, the movies contained predefined flight paths that offered no flexibility in the point of view of the designers. The designers and decision makers indicated that they see high value in using the Virtual Design Environment. They expect a number of benefits that were not anticipated by the developers. The Virtual Design Environment can be used as a training tool before the terminal is implemented. Operators can be trained for their future job using the Virtual Design Environment, in a safe environment and without disrupting day-to-day operations. The decision makers also expect that the Virtual Design Environment will have high value for the commercial aspects of the terminal: it can be shown to customers and used as a “selling” tool. Finally, the Virtual Design Environment can be used for communicating the design to port authorities, governments and other stakeholders. The design can be presented in an understandable format to all stakeholders.

7. CONCLUSIONS

In this paper we presented our work towards the Virtual Design Environment, an environment meant to support the design of complex systems in a multi-actor setting. From an exploratory case study at a large container terminal operator, we gathered the requirements for the design environment. These requirements were grouped into five components: visualization, sharing, communication, collaboration, and simulation. Although the design environment is still under research, preliminary results could be gathered. Consultation with domain experts showed that the design environment can indeed result in a more effective and efficient design process. Nevertheless, extended evaluations have to be done to confirm these early expectations. Future work will therefore consist of further development of the environment. Moreover, generalizing the environment should be considered, instead of restricting it to the design of automated container terminals.

8. REFERENCES

Conti, G., Ucelli, G., & Petric, J. (2002). JCAD-VR: a collaborative design tool for architects. Paper presented at the 4th International Conference on Collaborative virtual environments.

Fabri, D., Falsetti, C., Iezzi, A., Ramazzotti, S., Rita Viola, S., & Leo, T. (2008). Virtual and Augmented Reality. In H. H. Adelsberger, Kinshuk, J. M. Pawlowski & D. G. Sampson (Eds.), Handbook on Information Technologies for Education and Training (pp. 113-132): Springer Berlin Heidelberg.

FIG. 3 Design process using the Virtual Design Environment



Gerosa, M. A., Fuks, H., Raposo, A. B., & Lucena, C. J. P. (2004). Awareness Support in The AulaNet Learning Environment. Paper presented at the International Conference on Web-based Education.

Gutwin, C., Greenberg, S., Roseman, M., & Sasse, M. (1996). Workspace Awareness in Real-Time Distributed Groupware: Framework, Widgets, and Evaluation. Paper presented at the People and Computers XI (Proceedings of the HCI'96).

INCOSE (2009). The International Council on Systems Engineering. Retrieved July 13, 2009, from http://www.incose.org

Ottjes, J. A., Veeke, H. P. M., Duinkerken, M. B., Rijsenbrij, J. C., & Lodewijks, G. (2006). Simulation of a multiterminal system for container handling. OR Spectrum, 28(4), 447-468.

Pappas, M., Karabatsou, V., Mavrikios, D., & Chryssolouris, G. (2006). Development of a web-based collaboration platform for manufacturing product and process design evaluation using virtual reality techniques. International Journal of Computer Integrated Manufacturing, 19(8), 805-814.

Paredis, C. J. J., Diaz-Calderon, A., Sinha, R., & Khosla, P. K. (2001). Composable Models for Simulation-Based Design. Engineering with Computers, 17(2), 112-128.

Peak, R. S., Burkhart, R. M., Friedenthal, S. A., Wilson, M. W., Bajaj, M., & Kim, I. (2007). Simulation-Based Design Using SysML - Part 2: Celebrating Diversity by Example. Paper presented at the INCOSE International Symposium.

Pielage, B.-J. (2005). Conceptual Design of Automated Freight Transport Systems: Methodology and Practice. Delft University of Technology, Delft.

Piirainen, K., Kolfschoten, G., & Lukosch, S. (2009). Unraveling Challenges in Collaborative Design: A Literature Study. Paper presented at the 15th Collaboration Researchers' International Workshop on Groupware.

Rosenman, M. A., Smith, G., Maher, M. L., Ding, L., & Marchant, D. (2007). Multidisciplinary collaborative design in virtual environments. Automation in Construction, 16(1), 37-44.

Sage, A., & Armstrong, J. J. (2000). Introduction to systems engineering. New York, NY, USA: John Wiley & Sons Inc.

Schümmer, T., & Lukosch, S. (2007). Patterns for Computer-Mediated Interaction: John Wiley & Sons.

Shiratuddin, M. F., & Breland, J. (2008). Development of a Collaborative Design Tool for Virtual Environment (CDT-VE) Utilizing a 3D Game Engine. Paper presented at the 8th International Conference on Construction Applications of Virtual Reality 2008.

Simon, H. (1977). The new science of management decision. Upper Saddle River, NJ, USA: Prentice Hall PTR.

Sinha, R., Lian, V. C., Paredis, C. J. J., & Khosla, P. K. (2001). Modeling and Simulation Methods for Design of Engineering Systems. Journal of Computing and Information Science in Engineering, 1(1), 84-91.

Stahlbock, R., & Voss, S. (2008). Operations research at container terminals: a literature update. OR Spectrum, 30(1), 1-52.

Terrington, R., Napier, B., Howard, A., Ford, J., & Hatton, W. (2008). Why 3D? The Need For Solution Based Modeling In A National Geoscience Organization. Paper presented at the 4th International Conference on GIS in Geology and Earth Sciences.

Versteegt, C., Vermeulen, S., & van Duin, E. (2003). Joint Simulation Modeling to Support Strategic Decision- Making Processes. Paper presented at the 15th European Simulation Symposium and Exhibition.

Whyte, J., Bouchlaghem, N., Thorpe, A., & McCaffer, R. (2000). From CAD to virtual reality: modelling approaches, data exchange and interactive 3D building design tools. Automation in Construction, 10(1), 43-55.

Zeigler, B. P., & Hammonds, P. E. (2007). Modeling & Simulation-Based Data Engineering: Introducing Pragmatics into Ontologies for Net-Centric Information Exchange: Academic Press.


Zeigler, B. P., & Praehofer, H. (2000). Theory of modeling and simulation: Academic Press.


EMPIRICAL STUDY FOR TESTING EFFECTS OF VR 3D SKETCHING ON DESIGNERS’ COGNITIVE ACTIVITIES

Farzad Pour Rahimian, Dr., Department of Architecture, Faculty of Design and Architecture, Universiti Putra Malaysia; ulud_rah@yahoo.com

Rahinah Ibrahim, Associate Professor Dr., Department of Architecture, Faculty of Design and Architecture, Universiti Putra Malaysia; rahinah@putra.upm.edu.my

ABSTRACT: To optimise the level of information integration in the critical conceptual architectural-engineering design stage, designers need to employ more flexible and intuitive digital design tools during this phase. We studied the feasibility of using 3D sketching in VR to replace current non-intuitive Computer Aided Design (CAD) tools that designers would rather not use during the conceptual architectural process. Using the capabilities of VR-based haptic devices, the proposed 3D sketching design interface relies on the sense of touch to simplify designing and to integrate designers’ cognition and actions in order to improve design creativity. Adopting a cognitive approach to designing, the study compares the effectiveness of the proposed VR-based design interface with common manual sketching design interfaces. For this purpose, we conducted a two-session experiment comprising the design activities of three pairs of 5th year architecture students. In comparing the designers’ collective cognitive and collaborative actions the study employs design protocol analysis as its research methodology. This study evaluated the designers’ spatial cognition at four different levels: physical actions, perceptual actions, functional actions, and conceptual actions. The results show that, compared to the traditional design interfaces, the utilized VR-based simple and tangible interface improved designers’ cognitive design activities. We claim that due to the capability of reversing any undesired changes, the 3D sketching design interface increases designers’ motivation and courage to perform more cognitive activities than the conventional approach. By increasing the occurrence frequency of designers’ perceptual actions, the 3D sketching interface associated cognition with action and supported the designers’ epistemic actions, which are expected to increase design creativity.

The rich graphical interface of the 3D sketching system has led to the occurrence of more ‘unexpected discoveries’ and ‘situative inventions’ that carried both problem and solution spaces towards maturity. Moreover, the increase in the percentage of new physical actions has decreased the amount of unnecessary physical actions and increased the possibility of shifting from pragmatic actions towards epistemic actions. Results of this study can help the development of cutting-edge information technologies in either design or education of architecture. They can also help in the creation of training programs for professional graduates who are competent in multidisciplinary teamwork and equally competent in utilizing IT/ICT in delivering their building projects within time and budget.

KEYWORDS: Conceptual Design, 3D Sketching, Multidisciplinary, Virtual Reality, Protocol Analysis.

1. INTRODUCTION

Early conceptual phases of the design process are characterized by fuzziness, coarse structures and elements, and a trial-and-error process. Craft and Cairns (2006) mentioned that searching for form and shape is the designers’ principal goal. During these stages the chances of correcting errors are the highest and the use of low-expenditure sketches and physical models is crucial. Cross (2007) believes that the thinking processes of the designer hinge around the relationship between internal mental processes and their external expression and representation in sketches. Cross (2007, p.33) is confident that the designer has to have a medium “which enables half formed ideas to be expressed and to be reflected upon: to be considered, revised, developed, rejected and returned to.”

Using current CAD tools has adverse effects on designers’ reasoning procedures and hampers their cognitive activities (Ibrahim and Pour Rahimian, in review). Existing literature (e.g. Bilda and Demirkan, 2003) has also shown that due to some inherent characteristics of current CAD tools, designers are not quite successful when working with such digital design tools during the conceptual design phase. However, the literature recommends that designers migrate from manual design tools to digital design systems in order to integrate the whole design process (Kwon et al. 2005). The literature also highlights that the intangible and arduous user interfaces of current CAD systems are two major issues which hamper designers’ creativity during the conceptual design phase (Kwon et al. 2005). Consequently, this paper proposes VR 3D sketching, a simple and tangible VR-based design interface, as an alternative solution to replace ordinary CAD systems during the conceptual design phase. Regenbrecht and Donath (1996) posit that 3D sketching in VR can be used for instantaneous reflection and feedback in the design procedure, in which the user can act with the digital design support tool in a spontaneous, game-like, empirical manner. Consequently, VR can offer the ideal interface for free artistic visualization, linking creative experimentation and accurate manufacturing-oriented modelling (Fiorentino et al. 2002).

This paper reports an empirical experiment conducted to reaffirm the efficiency of the proposed system in the conceptual architectural design phase. The paper presents the results of a comparison of design activities between a VR-based simple and tangible interface and a traditional pen-and-paper sketching interface. It focuses on designers’ collective cognitive activities when working on similar design tasks. Here the traditional sketching method is selected as a baseline to be compared to the proposed 3D sketching design methodology. The purpose is to reveal the cognitive impacts of the proposed design system. Five pairs of 5th year architecture students experienced with traditional design and CAD systems were selected as participants for this experiment. During the experiment, protocol analysis (Ericsson and Simon 1993; Lloyd et al. 1995; Schön 1983a) was selected as the research and data acquisition method to explore the effects of the different media on designers’ spatial cognition.

2. VR 3D SKETCHING AND COGNITIVE APPROACH TO DESIGNING

Goldschmidt and Porter (2004) defined designing as a cognitive activity which entails the production of sequential representations of a mental and physical artefact. Tversky (2005) believes that in constructing external or internal representations, designers are engaged in a spatial cognition process in which the representations serve as cognitive aids to memory and information processing. Schön (1992) asserted that through the execution of action and reflection, each level of representation makes designers evolve in their interpretations and ideas for design solutions. Such a cognitive approach to designing considers design media as something beyond mere presentation tools. In this approach, reflections caused by design media are expected to either stimulate or hamper designers’ creativity during design reasoning.

2.1 Creativity

The term ‘creative’ is usually used as a value judgment of a design artefact (Kim and Maher 2008). Yet, according to Visser (2004), in cognitive psychology discussions it is linked to design activity, which also comprises particular procedures that have the potential to produce creative artefacts. Cross and Dorst (1999) define the creative design procedure as a sort of non-routine designing activity that is usually differentiated from the others by the appearance of considerable events or unanticipated novel artefacts.

‘Situative inventions’ constitute a more evolved model for measuring design creativity. According to Suwa et al. (2000), the situated invention of new design requirements (S-invention) can be considered a key to inventing a creative artefact. Based on this model, when introducing new constraints for the design artefact, designers capture significant parts of the design problem and go beyond a synthesis of solutions that suits the given requirements. On the other hand, Cross and Dorst (1999) posit the modelling of design creativity as a co-evolution of problem and goal spaces. Co-evolutionary design is an approach to problem-solving (Kim and Maher 2008). In this approach the design requirements and design artefacts are formed disjointedly while mutually affecting each other. Kim and Maher (2008) believe that in this approach the changing of a problem causes changes in the designer’s insight into the problem situation.

‘Unexpected discoveries’ (Suwa et al. 2000) are a key for evaluating the creative design process. Suwa et al. define them as perceptual activities of articulating tacit design semantics into visuo-spatial forms in an unanticipated way for later inspection. They found that the appearance of unexpected discoveries of visuo-spatial forms and S-inventions are strongly related to each other. As another approach, Suwa and Tversky (2001), taking a constructive approach, posit that ‘co-evolution’ of new conceptual semantics and ‘perceptual discoveries’ improve designers’ understandings of external representations. In Gero and Damski’s (1997) opinion, constructive perceptions allow designers to change their focus and to understand a design problem in a different way, in which re-interpretation may be stimulated so that designers find the opportunity to be more creative.


2.2 Spatial cognition using tangible user interfaces (TUIs) in VR 3D sketching

Tangible user interfaces (TUIs) are a technology which combines digital information and physical objects to virtually mimic an absolute environment. According to Kim and Maher (2008), as opposed to the simple time-multiplexed technology used in ordinary input devices (e.g. a mouse), the main advantage of TUIs is space-multiplexed input technology, which is able to control various functions at different times. One instance of such machinery is haptic technology. In computer science, the term ‘haptic’ relates to the sense of touch. In other words, this is a technology which unites the user with a digital system by simulating the sense of touch and applying force-feedback, vibrations, and motions to the user (Basque Research 2007). We believe that this physical interaction with the real world is the quality that Stricker et al. (2001) describe as technology which augments our cognition and interaction in the physical world. Based on its capabilities we believe that haptic technology provides an advanced TUI for designers.

As mentioned above, in haptic technology the sense of touch is not limited to a feeling; it facilitates real-time interactivity with virtual objects. According to Brewster (2001), haptic technology is a huge step in the VR area since it allows users to utilize their sense of touch to feel virtual objects. He argues that although touch is an extremely powerful sense, it has so far been neglected in the digital world. In this research we focused on the role of force-feedback, facilitated by SensAble Technology TUIs, in forming a designer’s spatial cognition.

Fitzmaurice (1996) relates the effects of such interfaces to the quality of motor activities. He uses the definitions of epistemic and pragmatic actions (Kirsh and Maglio 1994) to classify designers’ motor activities. In his definition, epistemic actions are taken to reveal hidden information or information that is difficult for humans to compute mentally. He believes that physical activities help people perform internal cognitive computation more easily, faster, and more reliably, much like using the fingers when counting. According to Fitzmaurice (1996) epistemic actions can improve cognition by: 1) decreasing the involvement of memory in mental computation (space complexity), 2) decreasing the number of mental computation steps (time complexity), and 3) decreasing the rate of mental computation error (unreliability). On the other hand, Fitzmaurice (1996) defines pragmatic actions as physical actions performed primarily to bring the user closer to the goal.

Fitzmaurice (1996) believes that such activities can strongly support an integrated human cognitive model in which the necessary information for each step can be provided by both mental processing resources and physical modification. He argues that, consequently, this can support perceiving the external environment. Fitzmaurice (1996) concludes these arguments by positing that mental modules can trigger motor activity which leads to changes in the physical environment that assist cognitive processes. He argues that when using physical objects, users are able to manipulate and influence their environment. This paper examines the effects of the proposed VR 3D sketching interface on designers’ collective cognitive activities based on the above-mentioned cognitive approach to designing.

3. COMPARING VR 3D SKETCHING AND TRADITIONAL SKETCHING

This section presents the developed methodological framework along with the details of the experimental protocol analysis conducted to test the proposed VR 3D sketching interface. It follows the five-step protocol analysis methodology proposed by van Someren et al. (1994). The five steps are as follows: 1) conducting experiments, 2) transcribing protocols, 3) segmentation, 4) developing a coding scheme and encoding the protocol data alongside developing the research hypotheses, and 5) selecting strategies to analyze and interpret the encoded protocols. The section ends with an explanation of the adopted strategies for validity and reliability.
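Steps 2 to 4 can be illustrated with a toy Python fragment (invented utterances and a simplified coding scheme) that takes a transcript already split into segments and tallies the four action categories used in this study:

```python
# Illustrative sketch of protocol coding (invented segments, simplified
# codes): each transcribed segment is assigned one cognitive-action code,
# and occurrence frequencies are tallied for later analysis.

from collections import Counter

# Steps 2-3: the transcription, segmented into single utterances.
segments = [
    ("moves the entrance to the north side", "physical"),
    ("notices the atrium now blocks the light", "perceptual"),
    ("decides the hall should double as exhibition space", "functional"),
    ("reframes the brief around public circulation", "conceptual"),
    ("sketches a second stair", "physical"),
]

# Step 4: encode and tally occurrence frequencies per action category.
frequencies = Counter(code for _, code in segments)
print(frequencies["physical"])  # → 2
```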

3.1 Experimental protocol analysis for testing VR 3D sketching design interface

As discussed above, this study proposes a simple and tangible VR-based design system as an alternative solution to replace ordinary CAD systems during the conceptual design phase. This section reports an empirical experiment to reaffirm the efficiency of the proposed system in the conceptual architectural design phase. The study makes a comparison of design activities between a VR-based simple and tangible interface and a traditional pen-and-paper sketching interface. It focuses on designers’ collective cognitive activities when working on similar design tasks. Here the traditional sketching method is selected as a baseline to be compared to the proposed 3D sketching design methodology. The purpose is to reveal the cognitive impacts of the proposed design system. Five pairs of 5th year architecture students experienced with traditional design and CAD systems were selected as participants for this experiment. Each pair was required to complete two design tasks which utilized the traditional and 3D sketching design media sequentially. During the experiment, protocol analysis (Ericsson and Simon 1993; Lloyd et al. 1995; Schön 1983a) was selected as the research and data acquisition method to explore the effects of the different media on designers’ spatial cognition.

3.1.1 Development of research instrument, SensAble haptic devices vs. manual sketching

In order to compare the impacts of the proposed 3D sketching design methodology on the designers’ cognitive activities, we prepared a simple traditional conceptual design package as a baseline system and a VR-based digital design package as the 3D sketching environment. The traditional conceptual design package comprises design pencils and pens, butter papers, and simple mock-up materials (e.g. polystyrene) as well as drafting tables and chairs. The VR-based digital design package consists of a tablet PC for supporting designers’ preliminary ideations and a desktop PC for supporting the digital design process. Both systems are shown in Figure 1.

FIG.1: Prepared traditional (left) and 3D sketching (right) design settings

The commercial software Adobe Photoshop™ was installed on the tablet PC to provide the layering ability that the traditional sketching system offered through butter papers. Designers were therefore able to produce preliminary sketches directly on the screen of the tablet PC. The desktop PC comprised a monitor as the output system and a keyboard, a mouse, and a 6DF SensAble haptic device supporting force-feedback and vibration as the input system. During the experiment we used an evaluation version of the ClayTools™ software as the basic environment for modelling and spatial reasoning. ClayTools™ is VR-based software designed to be used with SensAble haptic devices. The experiment expects the 6DF coordination system to solve the 2D mouse coordination problems which designers faced in traditional CAD systems. Ultimately, the 3D sketching interface was set up to offer the expected simplicity and tangibility of using VR in design.

3.1.2 Design tasks

In order to test the effects of the interface on all aspects of conceptual design, the designers were required to perform two comprehensive conceptual design sessions of three full hours each. During these sessions, designers were asked to undergo all stages of conceptual design: initial bubble diagramming, developing the design idea, and preparing initial drawings. The goal of the first design task was to design a shopping centre with a maximum of 200,000 square feet of built area. The goal of the second design task was to design a culture and art centre with a maximum of 150,000 square feet of built area. In order to make designers concentrate on the design itself rather than presentation, during both sessions they were required to use no more than one colour in the presentations.

3.1.3 Experimental set-ups: traditional session vs. 3D sketching session

We started our experiment with five pairs of designers; however, since two groups failed to complete their training sessions, we performed the experiment with only three pairs. The traditional sessions were held in a design studio, while the 3D sketching sessions were held in an office which served as a VR lab during the experiment. Two digital cameras and one sound recorder were used to record all of the events during the design sessions. The first camera recorded all the drawings produced during the test; the other camera recorded the designers' design gestures and behaviours; finally, a digital sound recorder captured the designers' conversations for transcription purposes. The designers were asked to sit on the side of the table facing both cameras. Without interfering with the designers' thinking process, the experimenter was present in the design studio and the lab to resolve any technical problems. The resulting settings are shown in Figure 2 and Figure 3.

9 th International Conference on Construction Applications of Virtual Reality

Nov 5-6, 2009

FIG.2: Experimental set-up for traditional sessions

FIG.3: Experimental set-up for 3D sketching sessions

3.2 Protocol analysis

Due to a tendency towards objective methods for studying designers' problem-solving processes, protocol analysis has become the prevailing method in design studies (Cross et al. 1996). Kim and Maher (2008) have advocated using this methodology for analyzing and measuring designers' cognitive actions instead of subjective self-reports such as questionnaires and comments. Among the strategies of protocol analysis methodology, since our study focuses on designers' cognitive activities and the concurrent method is impossible for collaborative work (Kan 2008), retrospective content-oriented protocol analysis was selected as the data collection strategy for our research. The designers worked naturally while the entire processes were recorded. After finishing their sessions, the designers were required to transcribe their sessions with the aid of the recorded media. Their transcriptions, as well as the recorded media, provided the research data.

3.2.1 Unit of analysis and strategy in parsing the segments

Since this research focuses on designers' cognitive actions during both the traditional and the digital sessions, and the tested hypotheses rely on those actions, the codes assigned to the different segments are considered our units of analysis. We followed Ericsson and Simon's (1993) suggestion of segmenting the process according to the occurrences of the processes.

3.2.2 Coding Scheme

The coding scheme used in this study is borrowed and adapted from Suwa et al.'s (1998, 2000) studies. The developed scheme comprises the main categories from Suwa et al.'s (1998, 2000) coding scheme and our own sub-categories based on the designers' different actions particular to our study. Ultimately, the proposed coding scheme characterizes designers' spatial cognition at four different levels: physical-actions, perceptual-actions, functional-actions, and conceptual-actions. Although we do not claim that our sub-categories are the best possible choices for this kind of study, we are confident that this coding scheme is capable of embracing all cognitive codes that our designers produced during the experiment. Details are shown in Table 1.


TABLE. 1: Developed coding scheme for 4 action-categories and their sub-categories

Physical (P): directly related to the P-actions
  D-actions (Da): depicting actions which create or deal with any visual external representation of design
    CreateNew (Dacn, New): to create a new design element or a symbol (drawing circles, lines, textures, arrows, etc.)
    ModifyExisting (Dame, New): to edit the shape, size, texture, etc. of the depicted element
    CreateMask (Dacm, Old): to create a mask area for selecting something
    RelocateExisting (Dare, New): to change the location or the orientation of the depicted element
    CopyExisting (Dace, New): to duplicate an existing element (for digital work only)
    TracingExisting (Date, Old): to trace over the existing drawing
    RemoveExisting (Dave, New): to remove an existing object or (for digital work only) to undo any command or to turn off an object
  L-actions (La): look actions, which include inspecting previous depictions or any given information
    InspectBrief (Laib, Old): referring to the design brief
    TurnonObject (Lato, Old): turning on the invisible objects
    InspectScreen (Lais, Old): looking at the screen (for digital work only)
    InspectSheet (Laih, Old): looking at the design sheet (for manual work only)
    Inspect3DModel (Lai3, Old): looking at a virtual or physical 3D model while rotating it
  M-actions (Ma): other P-actions which fall into the motor activities
    MovePen (Mamp, New): to move the pen on the paper or board without drawing anything
    MoveElement (Mame, New): to move an element in the space arbitrarily for finding a new spatial relationship
    TouchModel (Matm, New): to touch either the physical or the virtual model to stimulate motor activities
    ThinkingGesture (Matg, New): any arbitrary gesture which motivates thinking about design

Perceptual (Pe): actions related to paying attention to the visuo-spatial features of designed elements or space
  P-visual (Pv): discovery of visual features (geometrical or physical attributes) of the objects and the spaces
    NewVisual (Pnv, Unexp.D): new attention to a physical attribute of an existing object or space (shape, size or texture)
    EditVisual (Pev, Other): editing or overdrawing of an element to define a new physical attribute
    NewLocation (Pnl, Unexp.D): new attention to the location of an element or a space
    EditLocation (Pel, Other): editing or overdrawing of the location of an element or a space to define a new physical attribute
  P-relation (Pr): discovery of spatial or organizational relations among objects or spaces
    NewRelation (Pnr, Unexp.D): new attention to a spatial or organizational relation among objects or spaces
    EditRelation (Per, Other): editing or overdrawing of a spatial or organizational relation among objects or spaces
  P-implicit (Pi): discovery of implicit spaces existing in between objects or spaces
    NewImplicit (Pni, Unexp.D): creating a new space or object in between the existing objects
    EditImplicit (Pei, Other): editing the implicit space or object in between the existing objects by editing or relocating the objects

Functional (F): associating visual or spatial attributes or relations of the elements or the spaces with meanings, etc.
  F-interactions (Fi): interactions between designed elements or spaces and people
    NewInteractive (Fni): associating an interactive function with a just-created element, space, or spatial relation
    ExistingInteractive (Fei): associating an interactive function with an existing element, space, or spatial relation
    ConsiderationInteractive (Fci): thinking of an interactive function to be implemented independently of visual features in the scene
  F-psychological (Fp): people's psychophysical or psychological interactions with designed elements or spaces
    NewPsychological (Fnp): associating a psychological function with a just-created element, space, or spatial relation
    ExistingPsychological (Fep): associating a psychological function with an existing element, space, or spatial relation
    ConsiderationPsychological (Fcp): thinking of a psychological function to be implemented independently of visual features

Conceptual (C): cognitive actions which are not directly caused by visuo-spatial features
  Co-evolution (Ce): preferential (like-dislike) or aesthetical (beautiful-ugly) assessment of the P-actions or F-actions
  Set-up-goal activities (Cg): abstracted issues out of particular situations in design representation which are general enough to be accepted throughout the design process as a major design necessity
    GoalBrief (Cgb, Other): goals based on the requirements of the design brief
    GoalExplicit (Cge, S-inv): goals introduced by explicit knowledge or previous cases
    GoalPast (Cgp, S-inv): goals coming out through past goals
    GoalTacit (Cgt, S-inv): goals that are not supported by explicit knowledge, given requirements, or previous goals
    GoalConflict (Chc, S-inv): goals devised to overcome problems which are caused by previous goals
    GoalReapply (Cgr, Other): goals to apply already introduced functions in a new situation
    GoalRepeated (Cgd, Other): goals repeated through segments
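As an illustration of how the scheme is applied, once every segment has been assigned code IDs from Table 1, the occurrence frequency of each top-level category can be tallied mechanically. The sketch below assumes a simple list-of-pairs representation of the coded transcript; the codes and counts are invented for illustration and are not the study's data:

```python
from collections import Counter

# Hypothetical coded transcript: one (segment_id, code_id) pair per action.
# Code IDs follow Table 1, e.g. "Dacn" = Physical/D-actions/CreateNew,
# "Pnv" = Perceptual/NewVisual, "Fni" = Functional/NewInteractive.
coded_segments = [
    (1, "Dacn"), (1, "Laib"), (2, "Dame"), (2, "Pnv"),
    (3, "Pnr"), (3, "Fni"), (4, "Cgb"), (4, "Dacn"),
]

def category(code: str) -> str:
    """Map a sub-code ID to its top-level action category."""
    if code[0] in ("D", "L", "M"):  # depict / look / motor sub-codes
        return "P"                  # physical actions
    if code[0] == "P":              # Pnv, Pev, Pnr, Pni, ...
        return "Pe"                 # perceptual actions
    return "FC"                     # functional + conceptual actions

counts = Counter(category(code) for _, code in coded_segments)
total = sum(counts.values())
percentages = {cat: round(100 * n / total, 1) for cat, n in counts.items()}
print(percentages)
```

Running the same tally separately for the manual and digital sessions would yield percentage tables of the kind shown in Table 3.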

3.2.3 Measurement of design protocols and testing hypotheses

After performing the segmentation process and developing the coding scheme, we assigned the related codes to every segment based on the recorded videos and transcribed media; protocol analysis and interpretation start only after this step. In this study we relied on both descriptive and inferential statistics to analyze and interpret the collected data. Graphs and charts were employed in the descriptive statistics to explore the meaningful patterns of change in designers' cognitive actions, while inferential statistics were employed for testing the assumed hypotheses. For comparison purposes we used the Wilcoxon signed-ranks test, which is the equivalent of the paired-sample t-test for non-parametrically distributed data. For testing relationships, the Chi-square test was selected for the cases in which the independent variables are categorical (nominal) and the dependent variables are non-parametrically distributed ratio-scaled values. For interpretation of the results, we compared the test statistics against the significance level of .05 conventional in the social sciences.
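To make the Chi-square comparison concrete, a 2x2 test of independence (df = 1) can be written out by hand as below. The counts are hypothetical stand-ins, not the study's protocol data, and the Wilcoxon signed-ranks test, which is normally run in a statistics package, is not re-implemented here:

```python
def chi_square_2x2(table):
    """table = [[a, b], [c, d]]; returns the chi-square statistic (df = 1)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, r in enumerate(row_totals):
        for j, c_tot in enumerate(col_totals):
            expected = r * c_tot / n          # expected count under independence
            observed = table[i][j]
            stat += (observed - expected) ** 2 / expected
    return stat

# Illustrative counts: P-action codes vs. other codes in two session types.
observed = [[749, 251],   # manual sessions (hypothetical counts)
            [691, 309]]   # digital sessions (hypothetical counts)

stat = chi_square_2x2(observed)
CRITICAL_05_DF1 = 3.841   # chi-square critical value at alpha = .05, df = 1
print(f"X2 = {stat:.3f}, significant: {stat > CRITICAL_05_DF1}")
```

Comparing the statistic against the critical value mirrors the decision rule described above.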

4. RESULTS AND ANALYSIS OF COLLECTED EMPIRICAL PROTOCOL DATA

This section presents the results and analysis of the data collected during the empirical study explained in Section 3. The analysis mostly hinges on three levels of designers' spatial cognition: physical-actions (P), perceptual-actions (Pe), and functional-conceptual-actions (FC).

4.1 Overview of the coded data

Table 2 shows the mean value and standard deviation of segment duration for the six sessions by the three designer groups, as well as the average value for each design method. To maintain homogeneity of the results, we tried to balance the total time which every group spent on each of the sessions. The Wilcoxon signed-ranks test conducted on the coded data shows a significant reduction in the average length of designers' utterances during the 3D sketching process compared to the traditional sketching process. According to Kim and Maher (2008), this can be considered a positive phenomenon, since it has the potential to decrease the load of designers' mental cognitive processes.

TABLE. 2: Duration of segments for both traditional (Man) and 3D sketching (Digi) sessions

Session              Total time (s)   Segment num.   Mean (s)/Std.
Pairs' Ave. (Man)    10148            1941           14.61/20.67
Pairs' Ave. (Digi)   10422            2581           10.78/14.80
Z value/Sig.: 8.77 (a)*** / .000

4.2 Analysis of designers’ spatial cognition

Table 3 presents the occurrence frequency percentages of Physical-actions (P-actions), Perceptual-actions (Pe-actions), and Functional-Conceptual actions (FC-actions) during both the manual and the 3D sketching design sessions. The analysis of the occurrence percentages of all three action categories is presented in the following related sections.

TABLE. 3: Occurrence frequency percentage of cognitive activities (CA) for Manual and 3D Digital design sessions

Mode      P-actions   Pe-actions   FC-actions   Total
Manual    74.9%       11.6%        13.5%        100.0%
Digital   69.1%       11.8%        19.1%        100.0%

(Percentages are within the whole CA of each mode.)

4.2.1 Physical-actions

So far, based on the results from the selected sample, we have one confirmed hypothesis, which infers that the proposed 3D sketching methodology can increase the total amount of the designers' external cognitive activities compared to the traditional design tools. To interpret this finding, the study needs to test the percentage of total P-actions among the whole set of cognitive actions. Table 3 shows a decrease in the percentage of P-actions in the 3D sketching sessions (69.1%) compared to the traditional design sessions (74.9%). The conducted Chi-square test confirms the assumed significance (X2 = 12.851, df = 1, p < .001). As a consequence, the second hypothesis can be confirmed: based on results from the selected sample, it can be inferred that, compared to the traditional design tools, the proposed 3D sketching methodology decreases the occurrence frequency percentage of P-actions among the entire set of designers' external cognitive activities.

4.2.2 Perceptual-actions

In our coding system two major concerns relate to Pe-actions. The first concern is the absolute number and occurrence frequency percentage of all Pe-actions; the other is the occurrence of the unexpected-discovery codes compared to the occurrence of the other perceptual codes. Figure 4 illustrates the occurrence frequency percentage of the designers' unexpected discoveries and the other Pe-actions during both sessions. Results show that in all 3D sketching sessions there is an obvious increase (X2 = 9.889, df = 1, p < .01) in the percentage of unexpected discoveries compared to the other Pe-actions. Moreover, according to the Wilcoxon signed-ranks test conducted to compare the total amount of perceptual activities within a similar allotted time, designers using the 3D sketching methodology performed significantly more Pe-actions (Z = -6.489, p < .001) than they did during the traditional design sessions. Finally, in terms of visual analysis of the processes, the occurrence frequency scatter bars of Pe-actions reveal that in two of the three cases the occurrence of Pe-actions is more consistent throughout the 3D sketching sessions than in the traditional sessions.


FIG. 4: Occurrence frequency percentage of the designers' unexpected discoveries (Unex. Disc.) and the other Pe-actions

4.2.3 FC-actions

In this section the designers' cognitive activities are analyzed in three major groups. The first group comprises all of the designers' evaluations of their previous physical, perceptual, and functional actions (co-evolutions). The second covers the designers' functional and set-up-goal activities; regardless of whether those actions concern assigned functions or set-up goals, the codes are analyzed as either 1) situative-inventions or 2) other FC-actions. Figure 5 illustrates the occurrence frequency percentage of all three groups of codes for all six traditional and 3D sketching design sessions. From the bar charts it can be concluded that although the occurrence tendency of the situative-invention codes is almost the same (X2 = 1.509, df = 1, p > .05) across the traditional and 3D sketching sessions, there is a large increase in the occurrence tendency of the co-evolution codes during the 3D sketching sessions compared to the traditional sessions (X2 = 53.555, df = 1, p < .001). This increase can be considered a consequence of the more explicit representations during the 3D sketching sessions. Moreover, the conducted Chi-square test shows a significant increase in the occurrence percentage of FC-actions among the total cognitive activities (X2 = 17.179, df = 1, p < .001). Finally, in analysing the processes visually, the occurrence frequency scatter bars of FC-actions reveal that the occurrence of FC-actions is more consistent throughout the 3D sketching sessions than in the traditional sessions.


FIG. 5: Occurrence frequency percentage of the designers’ co-evolutions (Co-evol.), situative-inventions (S-inv), and the other functional-conceptual (FC) actions during all six traditional (M) and 3D sketching (D) sessions

5. CONCLUSIONS

The main aim of this experiment was to provide objective and empirical evidence that the proposed VR-based 3D sketching interface improves the designers' spatial cognition during the conceptual architectural design phase. In this experiment the focus was on designers' cognitive actions, and the hypotheses were tested relying on the designers' actions. The codes assigned to the different segments were considered our units of analysis. Although this experiment comprised only three pairs of designers performing six design sessions in total, it provides adequate data for observing overall designerly trends and actions. Besides, we were guided by Clayton et al.'s (1998) recommendations in validating our results. Moreover, during our exploratory study we observed consistent improvements in the five main aspects of design sessions and spatial cognition across the three pairs, which further validated the claim that the 3D sketching interface facilitates a better quality of designing.

The study found that in the 3D sketching sessions the increased integration of physical actions with mental perceptions and conceptions leads to the occurrence of epistemic actions, which improve the designers' spatial cognition. The results support Kirsh and Maglio's (1994) argument that the epistemic actions facilitated by the rich interface offload the designers' mental cognition partly into the physical world, hence freeing their minds to create more design ideas. Moreover, the 3D sketching interface improves the designers' perception of visuo-spatial features, particularly in terms of unexpected discoveries of spatial features and relationships. The phenomenon we observed is explained by Schön (1983a), whereby there exists an association between mental cognition and the perception of physical attributes that can stimulate creativity and offload the mental load. Furthermore, the results support Suwa et al.'s (2000) arguments explaining how unexpected discoveries can lead to more creativity and to the occurrence of more situative inventions.

In terms of the functional-conceptual actions of the design process, we posit that the 3D sketching interface improves the designers' problem-finding behaviours as well as their co-evolutionary conceptions of their perceptions and problem findings. Suwa et al. (2000) describe these behaviours as 'situative-inventions' and argue how an increased percentage of co-evolutionary and situative-invention actions can lead towards improved creativity in the 3D sketching design sessions. In conclusion, we argue that the emerging VR technologies are capable of facilitating physical senses beyond the visual aspects of the design artefact, by offering a new generation of promising CAD tools which remain constantly in touch with designers' cognition during the conceptual architectural design process.

6. ACKNOWLEDGMENTS

We acknowledge that this research is part of a doctoral study by the first author at Universiti Putra Malaysia (UPM), partly sponsored by UPM's Graduate Research Fellowship (GRF). We would like to gratefully acknowledge the contributions of the fifth-year architectural students in Semester 2, 2008/2009, at the Faculty of Design and Architecture, UPM. We also acknowledge the contributions of Prof. Dr. Mohd Saleh B. Hj Jaafar, Associate Prof. Dr. Rahmita Wirza Binti O. K. Rahmat, and Dr. Muhamad Taufik B Abdullah during this study.

7. REFERENCES

Basque Research. (2007). "Using computerized sense of touch over long distances: Haptics for industrial applications." ScienceDaily.

Bilda, Z., and Demirkan, H. (2003). "An insight on designers' sketching activities in traditional versus digital media." Design Studies, 24(1), 27-50.

Brewster, S. (2001). "The impact of haptic 'touching' technology on cultural applications." Glasgow Interactive Systems Group, Department of Computing Science, University of Glasgow, Glasgow.

Craft, B., and Cairns, P. "Work interaction design: Designing for human work." IFIP TC 13.6 WG conference: Designing for human work, Madeira, 103-122.

Cross, N. (2007). Designerly Ways of Knowing (paperback edition), Birkhäuser, Basel, Switzerland.

Cross, N., Christiaans, H., and Dorst, K. (1996). Analysing Design Activity, Wiley & Sons, New York.

Cross, N., and Dorst, K. (1999). "Co-evolution of problem and solution space in creative design." Computational Models of Creative Design, J. S. Gero and M. L. Maher, eds., Key Centre of Design Computing, University of Sydney, Sydney, 243-262.

Ericsson, K. A., and Simon, H. A. (1993). Protocol Analysis: Verbal Reports as Data, MIT Press, Cambridge.

Fitzmaurice, G. W. (1996). "Graspable user interfaces," University of Toronto, Toronto.

Gero, J. S., and Damski, J. (1997). "A symbolic model for shape emergence." Environment and Planning B: Planning and Design, 24, 509-526.

Goldschmidt, G., and Porter, W. L. (2004). Design Representation, Springer, New York.

Ibrahim, R., and Pour Rahimian, F. (In review). "Empirical comparison on design synthesis strategies by architectural students when using traditional CAD and manual sketching tools." Automation in Construction, Special Issue for CONVR08's Selected Papers.

Kan, W. T. (2008). "Quantitative Methods for Studying Design Protocols," The University of Sydney, Sydney.

Kim, M. J., and Maher, M. L. (2008). "The impact of tangible user interfaces on spatial cognition during collaborative design." Design Studies, 29(3), 222-253.

Kirsh, D., and Maglio, P. (1994). "On distinguishing epistemic from pragmatic action." Cognitive Science, 18(4), 513-549.

Kwon, J., Choi, H., Lee, J., and Chai, Y. "Free-hand stroke based NURBS surface for sketching and deforming 3D contents." PCM 2005, Part I, LNCS 3767, 315-326.

Lloyd, P., Lawson, B., and Scott, P. (1995). "Can concurrent verbalization reveal design cognition?" Design Studies, 16(2), 237-259.

Regenbrecht, H., and Donath, D. (1996). "Architectural education and VRAD." Bauhaus-Universität Weimar.

Schön, D. (1983a). The Reflective Practitioner: How Professionals Think in Action, Temple Smith, London.

Schön, D. (1992). "Designing as reflective conversation with the materials of a design situation." Knowledge-Based Systems, 5(1), 3-14.

Stricker, D., Klinker, G., and Reiners, D. (2001). "Augmented reality for exterior construction applications." Augmented Reality and Wearable Computers, W. Barfield and T. Caudell, eds., Lawrence Erlbaum Press, 53.

Suwa, M., Gero, J. S., and Purcell, A. T. (2000). "Unexpected discoveries and S-inventions of design requirements: important vehicles for a design process." Design Studies, 21(6), 539-567.

Suwa, M., and Tversky, B. (2001). "How do designers shift their focus of attention in their own sketches?" Diagrammatic Reasoning and Representation, M. Anderson, B. Meyer, and P. Olivier, eds., Springer, Berlin, 241-260.

Tversky, B. (2005). "Functional significance of visuospatial representations." Handbook of Higher-Level Visuospatial Thinking, P. Shah and A. Miyake, eds., Cambridge University Press, Cambridge, 1-34.

van Someren, M. W., Barnard, Y. F., and Sandberg, J. A. C. (1994). The Think Aloud Method: A Practical Guide to Modelling Cognitive Processes, Academic Press, London.

Visser, W. (2004). "Dynamic aspects of design cognition: elements for a cognitive model of design." Theme 3A - Databases, Knowledge Bases and Cognitive Systems, Projet EIFFEL, France.


ANALYSIS OF DISPLAY LUMINANCE FOR OUTDOOR AND MULTI-USER USE

Tomohiro Fukuda, Associate Professor, Graduate School of Engineering, Osaka University; fukuda@see.eng.osaka-u.ac.jp and http://y-f-lab.jp/fukudablog/

ABSTRACT: The use of digital tools outdoors is anticipated, but an ordinary PC display is hard to see outdoors because of outside light. To clarify the cause, three elements of the display were evaluated, namely luminance, the contrast ratio, and the viewing angle. Five displays were assessed using a luminance meter, and the three elements of each display were measured in a darkroom. To reduce the various factors affecting luminance measurement outdoors, the illuminance change outdoors, the influence of sunlight, and the influence of the ambient surroundings were considered. Using an experimental methodology that reduced these factors, data for the three elements were also acquired outdoors. Future work will need to clarify the determinants of display luminance outdoors.

KEYWORDS: Display for outdoor, luminance, contrast ratio, viewing angle, digital tool.

1. INTRODUCTION

Digital tools for multi-media purposes, including MR (Mixed Reality), are expected to be used outdoors by many users. However, in general, the medium of paper is used in design studies and by tour guides due to the various problems that digital tools have (left and middle of FIG. 1). Of the various problems affecting digital tools, this study targets the problem of the display: in general, the display is not easy to see outdoors due to the influence of the outside light (right of FIG. 1).


FIG. 1: Sharing information outdoors using paper (left and middle); State of display outdoors (right).

Digital tools used outside by individuals, such as cellular phones, PDAs, or HMDs (Head-Mounted Displays), were not the target of this study. Instead, the focus was on digital tools with which a number of people can share information. Many papers have reported on digital tools used outdoors (Feiner, 1997; Behringer, 2000; Julier, 2000; Baillot, 2001; Kuo, 2004; Onohara, 2005). However, those papers described the feature development of the digital tools and did not consider the problem of the ease of viewing the display outdoors.

The author has developed an MR system which combines a video image displaying the present scene with a real-time 3DCG image displaying objects or scenes that do not exist, such as design proposals or demolished buildings (Fukuda, 2006; Kaga, 2007). The set-up of this system for outdoor, multi-user, and mobile use includes a tablet PC, MR software, a live camera, RTK-GPS (Real Time Kinematic - Global Positioning System), and a 3D motion sensor. This system is expected to be used for city tours or design studies in the areas of education, research, and practice.

In a previously presented paper (Fukuda, 2009), a problem of the developed MR system was identified, namely that the display is hard to see outdoors because of the influence of the outside light. To grasp the problem quantitatively, three displays were measured. However, this was a preliminary study, and further research on the conditions of the outdoor display experiment is needed.


This study aims to examine the problem of the display being hard to see outdoors. Two displays were added to those described in the previous paper (Fukuda, 2009), and the research method was improved. The experiment was then carried out on five displays after the problems and measurement conditions of the outdoor experiment had been clarified.

2. DISPLAY EVALUATION ELEMENT

The brightness of a display is expressed with a luminance and a contrast ratio. Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. Where F is the luminous flux or luminous power (lm), θ is the angle between the surface normal and the specified direction, A is the area of the surface (m^2), and Ω is the solid angle (sr), luminance L (cd/m^2) is defined by

    L = F / (A · cos θ · Ω)    (1)

The contrast ratio is the ratio of the maximum luminosity L_max {(R, G, B) = (255, 255, 255)} to the minimum luminosity L_min {(R, G, B) = (0, 0, 0)} on a display. The higher the contrast ratio, the greater the difference between L_max and L_min. To raise the contrast ratio, L_max is enlarged or L_min is brought as close as possible to 0. Contrast ratio CR is defined by

    CR = L_max / L_min    (2)

In addition, a viewing angle condition that assumes about ten people can see the display is set. The viewing angle is the ratio of the display front luminosity L_f to the luminosity L_s measured at 45 degrees diagonal to the display. The lower the value, the smaller the difference between the luminance from the front and the diagonal luminance; that is, the screen can be seen well even from a diagonal viewpoint. Viewing angle VA is defined by

    VA = L_f / L_s    (3)

3. EXPERIMENT AND RESULT

3.1 Whole image of experiment

To grasp the characteristics of a display for outdoor use, luminance L was measured in the darkroom and outdoors, and the contrast ratio CR and the viewing angle VA were calculated from the measured luminosity values. Display makers usually provide the dark-room contrast ratio, which is measured in a dark room where the illuminance is 0; it is the standard numerical value. However, since the display is significantly influenced by outdoor daylight, the dark-room contrast ratio alone is inadequate for evaluating an outdoor display. Therefore, it is also necessary to measure the ambient contrast ratio, a contrast ratio measured under fixed outdoor daylight conditions. The luminance meter used was an LS-100 by Konica Minolta Sensing, Inc., and the illuminance meter used was a T-10 by Konica Minolta Sensing, Inc.
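As a sketch of how CR and VA are derived from the raw meter readings, the snippet below averages five hypothetical readings per condition; all luminance values are invented for illustration, and VA is taken as L_f/L_s following the definition in Section 2:

```python
# Derived display quantities from hypothetical luminance readings (cd/m^2).
# L_max / L_min give the contrast ratio; L_f (front, white screen) and
# L_s (45-degree diagonal, white screen) give the viewing angle.

def mean(values):
    return sum(values) / len(values)

# Five repeated meter readings per condition, as in the measurement
# procedure (values invented for illustration).
white_front = [412.0, 415.3, 413.8, 411.2, 414.1]
black_front = [0.82, 0.79, 0.81, 0.80, 0.83]
white_diagonal = [268.5, 270.1, 267.9, 269.4, 268.8]

L_max = mean(white_front)     # white screen, front
L_min = mean(black_front)     # black screen, front
L_f, L_s = L_max, mean(white_diagonal)

contrast_ratio = L_max / L_min    # Eq. (2): higher is better
viewing_angle = L_f / L_s         # Eq. (3): lower (closer to 1) is better
print(f"CR = {contrast_ratio:.0f}:1, VA = {viewing_angle:.2f}")
```

In the outdoor sessions the same computation applies, but reflected daylight raises the measured L_min, which is what pulls the ambient contrast ratio far below the dark-room value.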

The procedure used in the experiment is shown below:

1. The display and the luminance meter were set up in the darkroom as shown in FIG. 2.

2. The power supply on the display was switched off.

3. Luminance at the center of the screen was measured five times.

4. The power supply on the display was switched on.

5. A black screen {(R, G, B) = (0, 0, 0)}, which has the lowest luminance, was displayed, and the luminance at the center of the screen was measured five times.

6. A white screen {(R, G, B) = (255, 255, 255)}, which has the highest luminance, was displayed, and the luminance at the center of the screen was measured five times.

The positions in which luminance was measured were the front and a diagonal 45-degree position relative to the display.

The specifications of the display are shown in TABLE 1. The measurement conditions are shown in TABLE 2. FIG. 2 shows the plans of the experiment and a photo.

TABLE 1: Specifications of the displays.

Display1: NEC VersaPro VY11F/GL-R; CPU: Intel Pentium M 1.1GHz; RAM: 512MB; graphics memory (VRAM): ATI MOBILITY RADEON 7500 (32MB); weight: 0.855 kg; display size: 10.4 inch; time to market: Aug. 2004.
Display2: Lenovo X41Tablet; CPU: Intel Pentium M 1.6GHz; RAM: 1.49GB; graphics memory (VRAM): Intel 915GM Express (96MB); weight: 1.88 kg; display size: 12.1 inch; time to market: Jul. 2005.
Display3: Sony VAIO VGN-SZ94PS; CPU: Intel Core2 Duo T7800 2.6GHz; RAM: 2GB; graphics memory (VRAM): NVIDIA GeForce 8400M GS (256MB); weight: 1.75 kg; display size: 13.3 inch; time to market: Dec. 2007.
Display4: NEC Shield PRO FC-N22A; CPU: Intel ULV U7500 1.06GHz; RAM: 2GB; graphics memory (VRAM): Intel GMA X3100; weight: 2.5 kg; display size: 12.1 inch; time to market: Aug. 2008.
Display5: Sony XEL-1; weight: 2.0 kg; display size: 11 inch; time to market: Oct. 2008.

TABLE 2: Measurement conditions.

Dark-room experiment: illuminance on the ground 0 lx; measured Jul. 4, 2009, 20:00-22:00; weather: fine.
Outdoor pilot experiment: illuminance on the ground 22,030-107,800 lx; measured Jul. 3 and 6, 2009, 8:00-11:30; weather: obscured sky.
Outdoor experiment: illuminance on the ground 29,470-83,300 lx; measured Jul. 9, 2009, 9:45-12:00; weather: obscured sky.
All experiments: latitude 34.822623, longitude 135.522781 (Osaka, Japan).
FIG. 2: Plan of dark-room experiment (upper left) and outdoor experiment (upper middle) (o: display power-off; k: black image; w: white image); photo of outdoor experiment (right).


3.2 Darkroom experiment

3.2.1 Experiment method

The method of the darkroom experiment was the same as that described in Section 3.1. It was confirmed using the illuminance meter that the illuminance of the darkroom was 0 lx.

3.2.2 Result

Each pattern was measured five times, and the mean of the five readings was used for the analysis. FIG. 3 compares the luminance of each display, FIG. 4 the contrast ratio, and FIG. 5 the viewing angle.

As FIG. 3 shows, the screen luminance with the power off was the lowest for every display, with each value close to 0. With a black screen, L_d5 was the same as the power-off luminance, which shows that jet black can be reproduced by Display5, the organic EL display. The black-screen luminance of Display1 through Display4 was higher than that of Display5, because the backlight, a basic part of the operating principle of a liquid crystal panel, cannot be completely blocked from the screen. With a white screen, L_d1, the display used in the MR system, indicated 77.854 cd/m², the lowest value, while L_d4 indicated 394.48 cd/m², the highest.

As FIG. 4 shows, the contrast ratios CR_d1 to CR_d4 of the liquid crystal displays (Display1 to Display4) ranged from 346.9 to 577.4. By contrast, CR_d5 of the organic EL display (Display5) was 147,857.1, a considerably higher value.

As FIG. 5 shows, the viewing angles of the displays ranged from 2.21 to 5.33, with VA_d5 the lowest at 2.21.

FIG. 3: Dark-room luminance (L_d) comparison on each display.

FIG. 4: Dark-room contrast ratio (CR_d) comparison on each display.


FIG. 5: Dark-room viewing angle (VA_d) comparison on each display.

3.3 Outdoor pilot experiment

Outdoors, there are many parameters that make it difficult to obtain a constant result. Such experimental problems were clarified by conducting a pilot study using display2 and display3.

3.3.1 Illuminance change outdoors

The illuminance of the ground was measured in pilot experiment 1. Illuminance was recorded for 2 hours and 30 minutes, from 9:00 to 11:30, on July 3 and July 6, 2009. The weather was overcast. The result is shown in FIG. 6.

FIG. 6 clearly shows that outdoor illuminance is not constant. On July 3, the illuminance meter indicated 35,100-107,800 lx; on July 6, it indicated 22,030-72,100 lx. These outdoor illuminance changes influence the luminance of the display, and are thought to be caused by changes in the weather and by the different times of the experiments. As countermeasures, dates with steady weather can be selected and the duration of the experiment shortened.

FIG. 6: Illuminance change outdoors.
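The scale of this variability can be quantified as the ratio of the highest to the lowest reading on each day (using the figures reported above):

```python
# Illuminance ranges (lx) reported for the two pilot days
july3 = (35_100, 107_800)
july6 = (22_030, 72_100)

def swing(lo_hi):
    """Ratio of maximum to minimum illuminance: how much daylight varied within one session."""
    lo, hi = lo_hi
    return hi / lo

# Daylight varied roughly threefold within each session
swing_july3 = swing(july3)   # about 3.1
swing_july6 = swing(july6)   # about 3.3
```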

3.3.2 Influence of sunlight

The influence of sunlight was examined in pilot experiment 2. To reduce the influence on the luminance of the display, the display showed a black screen, and a black board was set up in the background. Luminance was measured from the front, from a diagonal 45-degree position, and from a diagonal 135-degree position relative to the display. The result is shown in FIG. 7.


FIG. 7 clearly shows that the luminance of the display measured from the diagonal positions was higher than that measured from the front. This is thought to be due to specular reflection of sunlight at the diagonal measurement angles, given the angle of incidence of the sun. The luminance of the sun is about 2 × 10⁹ cd/m², overwhelming compared with the brightness of the display. The display was therefore set up with the sun behind it, so that the influence of direct sunlight could be reduced.

FIG. 7: Influence of sunlight (-fk: measurement from the front; -s1k: measurement from the diagonal 45 degrees; -s2k: measurement from the diagonal 135 degrees).
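To put the sun's quoted luminance in perspective, it can be compared with the brightest dark-room white level measured in this study (Display4's 394.48 cd/m²):

```python
sun_luminance = 2e9        # cd/m^2, as quoted in the text
display_white = 394.48     # cd/m^2, brightest dark-room white value (Display4)

# The sun is roughly five million times brighter than the brightest display tested
ratio = sun_luminance / display_white
```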

3.3.3 Influence of ambient surroundings

The influence of the ambient surroundings was examined in pilot experiment 3. Taking account of the result of pilot experiment 2, the display was set up with the sun behind it. The display then showed a black screen and was measured from the front under three different ambient surroundings reflected on the display: an unchanged state, a white board, and a black board. The result is shown in FIG. 8.

FIG. 8 clearly shows that the luminance of the display was highest with the white board and lowest with the black board; that is, the ambient surroundings influence the luminance of the display. The scenery reflected on the display can therefore be stabilized: the black board, which caused the least reflection, was adopted as the ambient surroundings.

FIG. 8: Influence of the ambient surroundings (-fn: a state without change; -fw: a state with a white board; -fk: a state with a black board).

As a result of these considerations, the following experimental conditions were applied:

The change in the illuminance was noted.


The display was set up with the sun behind it.

The black board was used for the background reflected on the display.

3.4 Outdoor experiment

3.4.1 Experiment method

The method of the outdoor experiment was the same as that described in Section 3.1. The illuminance of the ground was measured at the same time as the luminance.

3.4.2 Result

FIG. 9 shows the luminance of each display and the illuminance of the ground. With a black screen, the luminance was 224.3-239 cd/m² for Display1, 75.96-81.6 cd/m² for Display2, 99.98-103.9 cd/m² for Display3, 229.9-241.5 cd/m² for Display4, and 243-258.3 cd/m² for Display5. With a white screen, it was 320.2-337.3 cd/m² for Display1, 187.9-199.5 cd/m² for Display2, 229.4-231.5 cd/m² for Display3, 677.4-726.5 cd/m² for Display4, and 326.4-333.6 cd/m² for Display5. In both cases, all the values were higher than in the darkroom, which is thought to be the influence of the outside light.

The ambient contrast ratio CR_o of all the displays was 1.314-3.010, as shown in FIG. 10. The viewing angle VA_o of all the displays was 0.822-1.919, as shown in FIG. 11.
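As a sanity check, bounds on Display1's ambient contrast ratio can be computed from the reported luminance ranges (the paper pairs simultaneous readings, so this only brackets the possible values):

```python
# Outdoor luminance ranges for Display1 (cd/m^2), as reported above
black_lo, black_hi = 224.3, 239.0
white_lo, white_hi = 320.2, 337.3

# Extreme pairings bound the ambient contrast ratio
cr_lower = white_lo / black_hi   # about 1.34
cr_upper = white_hi / black_lo   # about 1.50
```

Both bounds fall inside the 1.314-3.010 range reported for all displays.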


FIG. 9: Outdoor luminance (L_b) and illuminance comparison on each display (-fo: display power-off; -fk: black screen; -fw: white screen).

FIG. 10: Outdoor contrast ratio (CR_o) comparison on each display.


FIG. 11: Outdoor viewing angle (VA_o) comparison on each display.

4. DISCUSSION

Outdoors, there are many factors that make it difficult to obtain a constant result. In this study, to reduce the variation, the outdoor illuminance change, the influence of sunlight, and the influence of the ambient surroundings were considered, as described in Section 3.3, and an experimental methodology that reduced these variation factors as much as possible was used in Section 3.4.

When FIG. 9 is compared with FIG. 3, the luminance L of the displays differs greatly between the darkroom and outdoors in sunlight. The luminance of a white screen is higher than that of a black screen on every display, though it must be remembered that the illuminance can be uneven. Moreover, the luminance of a switched-off screen is higher than that of a black screen on every display except Display5. The white-screen luminance is also higher than in the darkroom, because of the influence of the outside light. Whether the outdoor display luminance is dominated by the light emitted by the liquid crystal display or by reflected outside light cannot be judged from the luminance-measuring method used in this study. When FIG. 10 is compared with FIG. 4, the contrast ratio CR also differs greatly between the darkroom and outdoors; one cause is that the black-screen luminance is raised by the outside light. When FIG. 11 is compared with FIG. 5, the viewing angle VA likewise differs greatly. As described in Section 3.1, measurement of the ambient contrast ratio is therefore important when evaluating a display for outdoor use.

This experiment was executed at one specific time. Since outdoor illuminance changes constantly, the result of the experiment is difficult to reproduce. It is therefore necessary to build a display evaluation system that can reproduce and measure outdoor daylight conditions.

5. CONCLUSION AND FUTURE WORK

The results achieved in the present study are as follows:

The outdoor use of digital tools such as MR is anticipated, but the screen of a normal display is hard to see outdoors. To clarify the reason for this, the luminance, contrast ratio, and viewing angle of the displays were evaluated.

Five displays were measured using a luminance meter, and the luminance data, contrast ratio, and viewing angle of each display were acquired in the darkroom.

Outdoors, there are many factors that make it difficult to obtain a constant result. To decrease these variation factors, the outdoor illuminance change, the influence of sunlight, and the influence of the ambient surroundings were considered. Using an experimental methodology that reduced these variation factors as much as possible, data on the luminance, contrast ratio, and viewing angle of each display were acquired outdoors.

Future work could investigate the following areas:

Whether the outdoor display luminance is dominated by the light emitted by the liquid crystal display or by the outside light cannot be judged from the luminance-measuring method


used in this study. Although clarification was attempted by measuring the luminance in the switched-off state, a clear relation was not obtained. It is necessary to examine the characteristics of the materials that compose the display.

To understand the outdoor characteristics of each display, the construction of an evaluation system that artificially produces the outdoor environment is thought necessary.

6. ACKNOWLEDGEMENT

The author would like to thank Mr. Wei Cheng Lin and Mr. Tian Zhang, research students at Osaka University, for supporting these experiments.

7. REFERENCES

Behringer, R. et al. (2000). "A Wearable Augmented Reality Test-bed for Navigation and Control, Built Solely with Commercial-off-the-Shelf (COTS) Hardware.", Proc. 2nd Int'l Symp. Augmented Reality 2000 (ISAR 00), IEEE CS Press, Los Alamitos, Calif., 12-19.

Baillot, Y., Brown, D., and Julier, S. (2001). "Authoring of Physical Models Using Mobile Computers.", Proc. Int'l Symp. Wearable Computers, IEEE CS Press, Los Alamitos, Calif.

Feiner, S., MacIntyre, B., Höllerer, T., Webster, A. (1997). "A touring machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment.", Personal and Ubiquitous Computing, Volume 1, Number 4, Springer London, 208-217.

Fukuda, T., Kawaguchi, M., Yeo, W.H., and Kaga, A. (2006). "Development of the Environmental Design Tool "Tablet MR" on-site by Mobile Mixed Reality Technology.", Proceedings of The 24th eCAADe (Education and Research in Computer Aided Architectural Design in Europe), 84-87.

Fukuda, T. (2009). "Analysis of a Mixed Reality Display for Outdoor and Multi-user Implementation.", 4th ASCAAD Conference (Arab Society for Computer Aided Architectural Design), 323-334.

Julier, S. et al. (2000). "Information Filtering for Mobile Augmented Reality.", Proc. Int'l Symp. Augmented Reality 2000 (ISAR 00), IEEE CS Press, Los Alamitos, Calif., 3-11.

Kaga, A., Kawaguchi, M., Fukuda, T., Yeo, W.H. (2007). "Simulation of an Historic Building Using a Tablet MR System.", Proceedings of the 12th International Conference on Computer Aided Architectural Design Futures, Sydney (Australia), 45-58.

Kuo, C.G., Lin, H.C., Shen, Y.T., Jeng, T.S. (2004). "Mobile Augmented Reality for Spatial Information Exploration.", Proceedings of The 9th International Conference on Computer Aided Architectural Design Research in Asia (CAADRIA2004), 891-900.

Onohara, Y. and Kishimoto, T. (2005). "VR System by the Combination of HMD and Gyro Sensor for Streetscape Evaluation.", Proceedings of The 10th International Conference on Computer Aided Architectural Design Research in Asia (CAADRIA2005), vol. 2, 123-128.


A PROPOSED APPROACH TO ANALYZING THE ADOPTION AND IMPLEMENTATION OF VIRTUAL REALITY TECHNOLOGIES FOR MODULAR CONSTRUCTION

Yasir Kadhim, Graduate Research Student Department of Civil Engineering, University of New Brunswick, Canada;

y26f6@unb.ca

Jeff Rankin, Associate Professor and M. Patrick Gillin Chair in Construction Engineering and Management Department of Civil Engineering, University of New Brunswick, Canada; rankin@unb.ca

Joseph Neelamkavil, Senior Research Officer National Research Council of Canada, Centre for Computer-assisted Construction Technologies Joseph.Neelamkavil@nrc-cnrc.gc.ca

Irina Kondratova, Group Leader National Research Council of Canada, Institute for Information Technology Irina.Kondratova@nrc-cnrc.gc.ca

ABSTRACT: Achieving successful adoption and implementation of process technologies in the construction industry requires a better understanding of innovation management practices. Defining innovation as the process of applying something new, a research project is being undertaken to contribute to a better understanding of its concomitant practices. The project focuses on virtual reality (VR) technologies within a specific application of modular construction. From a potential adopter's perspective, the process of technology adoption and implementation is often less than satisfactory. The research project is addressing this by furthering the understanding of the innovation process in a series of case studies. This paper presents work in progress to this end by providing the background on assessing management practices, the link between VR technologies and modular construction, and early results of case study activities. To date, providing the functionality to enhance communication has been identified as the best fit for the case studies of applying virtual reality technologies to the process of modular construction engineering and management. The conceptual framework for assessing innovation management practices, which employs the concept of capability maturity, is presented as a predictive indicator for the adoption and implementation that is to follow.

KEYWORDS: Virtual reality, modular construction, innovation management, technology adoption

1. INTRODUCTION

Many practitioners and researchers alike agree that the architectural engineering and construction (AEC) industry can improve its overall performance (measured in terms of cost, time, safety, quality, sustainability, etc.) by creating a better business environment that encourages innovation. Innovation is defined in this context as "application of technology that is new to an organization and that significantly improves the design and construction of a living space by decreasing installed cost, increasing installed performance, and/or improving the business process (e.g., reduces lead time or increases flexibility)" (Toole 1998).

The research described focuses on process technologies within the AEC industry as a class of innovations. Process technologies are loosely defined as any tool or technique that supports the management of a civil engineering project during execution from concept, through design, construction and operation, to decommissioning. This focus area presents some interesting challenges and some corresponding gaps in the knowledge area. From a potential adopter’s perspective, it is difficult to objectively assess process technologies for adoption and implementation as there are not many decision-making tools and techniques for industry to properly identify needs and match corresponding solutions. Overcoming this challenge requires a direct link to performance, whether at the organization, project or industry level, whereas currently, the focus has been on operational savings. For example, it


is easy to measure the time savings realized by recording information electronically versus on paper; however, an assessment of the knock-on positive effects on performance, by having this information conveniently archived, is a bit more difficult to measure. Some of the questions to answer include: how do we improve the performance of the AEC industry through the effective development and appropriate adoption and implementation of process technologies; what are the techniques to support practitioners in the analysis of new process technologies; and what contributes to a strategy for increasing the impact and rate of process technology adoption within the AEC industry?

The approach that has been taken is to assess the performance of the process of construction while taking into account the management practices being applied. A modest research project is being conducted jointly by the University of New Brunswick’s Construction Engineering and Management Group (UNB CEM) and the National Research Council of Canada’s Centre for Computer-assisted Construction Technologies (NRC-CCCT). The short term research objectives are to study the implementation of a specific advanced process technology (i.e., virtual reality technologies) for a specific scenario in the industry (i.e., modular construction). The research is also intended to contribute to a broader research program of more formally assessing the impact of innovation management practices on industry performance. The research project hypothesis states that the maturity of managem