Shigeki Amitani, Ernest Edmonds: "A Dynamic Concept Base: A Component for Generative Systems", Digital Art Weeks Festival'07, 9-14 July, Zurich, Switzerland, 2007.
Designing a System for Supporting the Process of Making a Video Sequence
Shigeki Amitani
Creativity & Cognition Studios, Australasian CRC for Interaction Design, University of Technology, Sydney, AUSTRALIA, +61-(0)2-9514-4631
shigeki@shigekifactory.com

Ernest Edmonds
Creativity & Cognition Studios, Australasian CRC for Interaction Design, University of Technology, Sydney, AUSTRALIA, +61-(0)2-9514-4640
ernest@ernestedmonds.com
ABSTRACT
The aim of this research is to develop a system to support video artists. Design rationales for artists' software should be obtained by investigating artists' practice. In this study, we have analysed the process of making a video sequence in collaboration with an experienced video artist. Based on this analysis we identified design rationales for a system to support the process of making a video sequence. A prototype system, "Knowledge Nebula Crystallizer for Time-based Information (KNC4TI)", has been developed. Further development towards a generative system is also discussed.
Categories and Subject Descriptors
H5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.
General Terms
Design
Keywords
Video making, cognitive process, sketching, software, time-based information, generative system
1. INTRODUCTION
Artists have used information technologies since computers first became available (e.g. [14]). These tools help artists break new ground from an artistic perspective. However, these tools are not optimally designed for artists.
This paper presents:

- Results of an investigation into the process of making a video sequence, conducted in collaboration with a professional video artist, to identify design rationales for a supporting system
- Development of a prototype system called "Knowledge Nebula Crystallizer for Time-based Information (KNC4TI)" that supports the process, based on the investigation
- Plans for extending the KNC4TI system to a generative system
2. RELATED WORK
In composing a video sequence, an editing tool is indispensable. Traditionally, video editing equipment has been designed for industrial video production, whose needs differ from those of artists. Industrial video production needs tools to organise a video sequence along a storyboard devised in advance, whereas artistic video production tends to proceed through interactions between the artist and the work, rather than following a pre-defined storyboard. Even so, artists have adopted that equipment so that they can present their works. Recently, artists as well as industrial video producers have started to use computer software for their compositions. However, most video editing software has been developed as a metaphor of traditional editing equipment such as film and VCRs, much as general GUI operating systems adopted the desktop metaphor. This means that video editing software still does not provide interactive representations suitable for artists, whose editing process differs from that of industrial video producers.

In order to understand design processes in detail, a number of analyses have been conducted, especially of architects' design processes [3-5, 22]; however, most of these studies focused on the design of non-time-based information. Few analyses have examined time-based information, such as making a video sequence or composing music. Tanaka [23] has pointed out a problem with the analyses conducted so far in musical composition research: although generic models of the composition process have been proposed based on analyses of those processes (macroscopic models and analyses), little has been done to investigate how each stage in those models proceeds and how transitions between stages occur (microscopic models and analyses). Amitani et al. [1] have conducted a microscopic analysis of the process of musical composition. However, few microscopic analyses have been conducted on the process of making a video sequence.

From the viewpoint of human-computer interaction research, Shipman et al. [17] have developed a system called "Hyper-Hitchcock". This system has the flexibility required for video editing, but it aims to index video clips based on a detail-on-demand concept that facilitates efficient navigation of video sequences. Yamamoto et al. [24] have developed ARTWare, a library of components for building domain-oriented multimedia authoring environments; a system was developed particularly for empirical video analysis in usability studies. Although these systems were developed from a design perspective, they focus on supporting navigation processes and the analysis of video content. For authoring information artefacts, it is important to support the entire process, from the early stages where ideas are not yet clear to the final stages where a concrete work is produced.
 
Shibata [15, 16] has claimed the importance of an integrated environment that supports the entire process of creative activities, since the process is composed of sub-processes (e.g. the generating process and the exploring process [8]) that are inseparable. In our study, we also regard this concept of integration as important, and we implement the system to realise it.
3. A CASE STUDY
We have investigated the process of making a video sequence to identify design rationales for the development of a video authoring tool that fits designers' cognitive processes. The investigation was a collaboration with an experienced video artist (called "the participant" in this paper). As the participant already had a plan to compose a video clip, we could observe a quasi-natural process of making a video sequence. Data from retrospective protocol reporting [7], questionnaires and interviews were analysed. The overall tendencies are summarised as follows:
 
- Conceptual work, such as considering the whole structure of a piece and semantically segmenting a material movie, is conducted in the participant's sketchbook
- Software is mainly used for:
  - Observing what is really going on in a material video sequence
  - Implementing the results of the thoughts in his sketch, in response to what is seen on the software

The analysis shows that conceptual design processes are separated from implementation processes, even though the two cannot be separated from each other. The design process is regarded as a "dialogue" between the designer and his/her material [13]. Facilitating designers in going back and forth between the whole and a part, and between the conceptual and the represented world, will support this design process.
3.1 Roles of Sketching
Sketching plays significant roles that existing software does not cover. Sketching allows designers to:

- Externalise the designer's multiple viewpoints simultaneously, with written and diagrammatic annotations
- Visualise relationships between the viewpoints that the designer defines

In the following sections we discuss how these two features work in the process of making a video sequence.
3.1.1 Written and diagrammatic annotations for designers' multiple viewpoints
Figure 1 shows the participant's sketch. Each of the six horizontal rectangles in Figure 1 represents the entire material video. They all refer to the same video sequence, with different labels, so that he can plan what should be done regarding each element that he decided to label. From top to bottom they are labelled as follows (shown in (1) of Figure 2):
- Movements: movements of physical objects, such as a person coming in, a door opening, etc.
- Sound levels: changes of sound volume
- Sound image: types of sounds (e.g. "voices")
- Pic (= picture) level: changes of density of the image
- Pic image: types of the images
- Compounded movements: plans
Figure 1: The participant's sketch
 
These elements are visualised in the sketch using timeline conventions. Although some existing video authoring tools present sound levels on timelines, as the second rectangle shows, existing software allows only limited written annotation of a video sequence, and so does not provide sufficient functionality for externalising multiple viewpoints. In particular, the top sequence, labelled "movements", is conceptually important in making a video sequence, and it is not supported by any video authoring tool. As (1) in Figure 2 shows, a mixture of written and diagrammatic annotations works for analysing what is going on in the material sequence.
Figure 2: Annotations. (1) Designer's own annotations; (2) the same object with different annotations
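The sketch structure described above (one material video, several labelled viewpoints, free-form annotations along a shared timeline) can be modelled with a simple data structure. The sketch below is an illustration only: the class names, fields and example timings are our own assumptions, not part of the paper or the KNC4TI system.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    start: float  # seconds into the material video
    end: float    # seconds into the material video
    text: str     # written or symbolic annotation, e.g. "speak", "zero"

@dataclass
class ViewpointTrack:
    label: str    # e.g. "Movements", "Sound levels"
    annotations: list[Annotation] = field(default_factory=list)

@dataclass
class VideoSketch:
    duration: float  # total length of the material video, in seconds
    tracks: list[ViewpointTrack] = field(default_factory=list)

    def at(self, t: float) -> dict[str, list[str]]:
        """Return, per viewpoint, every annotation covering time t."""
        return {tr.label: [a.text for a in tr.annotations
                           if a.start <= t <= a.end]
                for tr in self.tracks}

# The six viewpoints from the participant's sketch (timings hypothetical):
sketch = VideoSketch(duration=90 * 60, tracks=[
    ViewpointTrack(label) for label in [
        "Movements", "Sound levels", "Sound image",
        "Pic level", "Pic image", "Compounded movements"]])
sketch.tracks[1].annotations.append(Annotation(120.0, 150.0, "zero"))
print(sketch.at(130.0)["Sound levels"])  # ['zero']
```

A key property of this model is that all tracks index into the same timeline, so a single query time yields one reading per viewpoint, which mirrors how the participant annotated the same span of video differently in each rectangle.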
3.1.2 Visualising relationships between multiple viewpoints
As shown at (2) in Figure 2, a certain part of the material sequence is annotated differently in each rectangle in order to describe the conditions represented by that rectangle, namely: speak; zero (with shading); null; black; T (or a T-shaped symbol); meter. This is the power of multiple viewpoints with written annotations. These annotations explain a certain part of the video sequence in terms of each corresponding viewpoint. For example, in terms of "Sound levels", the sketch shows that the sound level will be set to zero at this point of the sequence.

The participant also externalises relationships across the viewpoints in his sketch, using both written and diagrammatic annotations, as shown in Figure 3. Sketching supports designers in thinking about semantic relationships, such as "voices leads pics" shown at (1) in Figure 3, as well as relationships among physical features, such as the timing between sounds and pictures. (2) indicates that he visualised the relationships between picture images and his plan for a certain part of the material sequence by using written and diagrammatic annotations. (3) shows that he was thinking about relationships across the viewpoints.

The relationships that the participant visualised are both physical and semantic. Some authoring tools support visualising physical relationships; however, they have few functions to support the semantic relationships among designers' viewpoints. Sketching assists this process.
Figure 3: Relationships between multiple viewpoints. (1) Relationships between sound and vision; (2) relationships between vision and plan; (3) relationships across the viewpoints
Sketching also provides a holistic view of time-based information. Implementing these features of sketching in software will facilitate designers in going back and forth between the conceptual and the physical world, and between the whole and a part, so that the process of making a video sequence is supported.
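The cross-viewpoint semantic links described above (e.g. "voices leads pics") can be represented as labelled relations between annotated points on different viewpoint tracks. The sketch below is purely illustrative: the `Relation` structure, the track labels and the timings are our own assumptions, not taken from the paper or any existing tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    source: tuple[str, float]  # (viewpoint label, time in seconds)
    target: tuple[str, float]  # (viewpoint label, time in seconds)
    kind: str                  # semantic link, e.g. "leads"

relations = [
    # "voices leads pics": a sound event drives a later picture change
    Relation(source=("Sound image", 12.0),
             target=("Pic image", 13.5),
             kind="leads"),
]

def outgoing(label: str, rels: list[Relation]) -> list[Relation]:
    """All semantic links whose source lies on the given viewpoint."""
    return [r for r in rels if r.source[0] == label]

print([r.kind for r in outgoing("Sound image", relations)])  # ['leads']
```

Because each relation names both endpoints explicitly, the same structure covers physical relationships (timing between sound and picture) and semantic ones (one viewpoint "leading" another), which is the distinction the case study draws.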
3.2 Roles of Software
We investigated the process of making a video sequence with software. The participant was to edit a material video sequence composed of a single shot. The editing tool he used was Final Cut Pro HD, which the participant had been using for about five years. The duration was up to the participant (eventually it was 90 minutes). The video editing was conducted at the studio of the Creativity & Cognition Studios, University of Technology, Sydney. It was the first time he had engaged with the piece; that is, the process was the earliest stage of using video-authoring software for the new piece. The process of making a video sequence was recorded by digital video cameras. The following elements were recorded:
 
- The participant's physical actions while making a video sequence with the video editing software
- The participant's actions on the computer displays

After authoring a video sequence, the participant was asked to give a retrospective report on his authoring process while watching the recorded video data. We adopted the retrospective report method so that we could capture the cognitive processes in actual interactions as faithfully as possible. The recorded video data was used as a visual aid to minimise demands on the participant's memory [22]. The participant was also asked to report what he thought during editing while watching the recorded video data. Following this, the participant was asked to answer a free-form questionnaire via e-mail.
3.2.1 Observing "facts" in a material sequence
The participant reported that he was just looking at the film clip, as follows:

[00:01:30] At this stage, I'm just looking again at the film clip.
[00:02:32] So again, still operating on this kind of looking at the image over, the perceptual thing.
