Designing a System for Supporting the Process of Making a Video Sequence
Creativity & Cognition Studios, Australasian CRC for Interaction Design, University of Technology, Sydney, Australia
The aim of this research is to develop a system to support video artists. Design rationales for artists' software should be obtained by investigating artists' practice. In this study, we have analysed the process of making a video sequence in collaboration with an experienced video artist. Based on this analysis we identified design rationales for a system to support the process of making a video sequence. A prototype system, the "Knowledge Nebula Crystallizer for Time-based Information (KNC4TI)", has been developed. Further development towards a generative system is also discussed.
Categories and Subject Descriptors
H5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

Keywords
Video making, cognitive process, sketching, software, time-based information, generative system

1. INTRODUCTION
Artists have used information technologies since computers became available (e.g.). These tools help artists to break new ground from an artistic perspective. However, they are not optimally designed for artists. This paper presents:

• Results of an investigation of the process of making a video sequence, conducted in collaboration with a professional video artist, to identify design rationales for a supporting system
• The development of a prototype system called the "Knowledge Nebula Crystallizer for Time-based Information (KNC4TI)" that supports this process, based on the investigation
• Plans for extending the KNC4TI system into a generative system

2. RELATED WORK
In composing a video sequence, an editing tool is indispensable. Traditionally, video editing equipment has been designed for industrial video production, whose needs differ from those of artists. While industrial video production needs tools for organising a video sequence along with a storyboard devised in advance, artistic video production tends to proceed through interactions between the artist and the work rather than by following a predefined storyboard. Even so, artists have adopted such equipment in order to present their works. Recently, artists as well as industrial video producers have started to use computer software for their compositions. However, most video editing software has been developed as a metaphor of traditional editing equipment such as film and VCRs, much as general GUI operating systems adopted the desktop metaphor. This means that video editing software still does not provide suitable interactive representations for artists, given that the editing process of industrial video producers differs from that of artists.

In order to understand design processes in detail, a number of analyses, especially of architects' design processes, have been conducted [3-5, 22]; however, most of these studies focused on the design of non-time-based information. Few analyses have examined the design of time-based information, such as making a video sequence or composing music. Tanaka has pointed out that a problem with the analyses conducted so far in musical composition research is that, although generic models of composition processes have been proposed on the basis of analyses of those processes (macroscopic models and analyses), little has been done to investigate how each stage in those models proceeds and how transitions between stages occur (microscopic models and analyses). Amitani et al. have conducted a microscopic analysis of the process of musical composition. However, few microscopic analyses have been conducted on the process of making a video sequence.

From the viewpoint of human-computer interaction research, Shipman et al. have developed a system called "HyperHitchcock". This system has the flexibility required for video editing, but it aims to index video clips based on a detail-on-demand concept that lets users navigate video sequences efficiently. Yamamoto et al. have developed ARTWare, a library of components for building domain-oriented multimedia authoring environments; a system was developed particularly for the empirical video analysis used in usability studies. Although the systems above have been developed from a design perspective, their focus is on supporting navigation and the analysis of video content. For authoring information artefacts, it is important to support the entire process, from the early stages where ideas are not yet clear to the final stages where a concrete work is produced.
Shibata [15, 16] has claimed the importance of an integrated environment that supports the entire process of creative activities, as the process is composed of sub-processes (e.g. the generating process and the exploring process) that are inseparable. In our study, we also regard this concept of integration as important, and we have implemented our system to realise it.
3. A CASE STUDY
We investigated the process of making a video sequence in order to identify design rationales for the development of a video authoring tool that fits designers' cognitive processes. It was collaborative work with an experienced video artist (called "the participant" in this paper). As the participant already had a plan to compose a video clip, we could observe a quasi-natural process of making a video sequence. Retrospective protocol reports, questionnaires and interviews were analysed. The overall tendencies are summarised below:

• Conceptual work, such as considering the whole structure of a piece and the semantic segmentation of a material movie, is conducted in the participant's sketchbook
• Software is mainly used for:
  o Observing what is really going on in a material video sequence
  o Implementing the results of his thoughts in his sketch in response to what is seen on the software

The analysis shows that conceptual design processes are separated from implementation processes, although the two cannot be separated from each other. The design process is regarded as a "dialogue" between the designer and his/her material. Facilitating designers in going back and forth between whole and part, and between the conceptual and the represented world, will support this design process.

3.1 Roles of Sketching
Sketching plays significant roles that existing software does not cover. Sketching allows designers to:

• Externalise the designer's multiple viewpoints simultaneously with written and diagrammatic annotations
• Visualise relationships between the viewpoints that the designer defines

In the following sections we discuss how these two features work in the process of making a video sequence.

3.1.1 Written and diagrammatic annotations for designers' multiple viewpoints
Figure 1 shows the participant's sketch. Each of the six horizontal rectangles in Figure 1 represents the entire material video. They all refer to the same video sequence under different labels, so that he can plan what should be done regarding each element he decided to label. From top to bottom they are labelled as follows (shown in (1) in Figure 2):

• Movements: movements of physical objects, such as a person coming in, a door opening, etc.
• Sound levels: changes of sound volume
• Sound image: types of sounds (e.g. "voices")
• Pic (= picture) level: changes of density of the image
• Pic image: types of images
• Compounded movements: plans

Figure 1: The participant's sketch
These elements are visualised in the sketch using timeline conventions. Although some existing video authoring tools present the sound level on a timeline, as the second rectangle shows, existing software allows only limited written annotations on a video sequence and consequently does not provide sufficient functionality for externalising multiple viewpoints. In particular, the top sequence, labelled "movement", is conceptually important in making a video sequence but is not supported by any video authoring tool. As (1) in Figure 2 shows, a mixture of written and diagrammatic annotations works for analysing what is going on in the material sequence.
Figure 2: Annotations — (1) designer's own annotations; (2) the same object with different annotations

3.1.2 Visualising relationships between multiple viewpoints
As (2) in Figure 2 shows, a certain part of the material sequence is annotated differently in each rectangle in order to describe the conditions represented by that rectangle, that is: speak; zero (with shading); null; black; T (or a T-shaped symbol); meter. This is a power of multiple viewpoints with written annotations. These annotations explain a certain part of the video sequence in terms of the corresponding viewpoint. For example, in terms of "Sound levels", the sketch shows that the sound level will be set to zero at this point of the sequence. The participant also externalises relationships across the viewpoints in his sketch by using both written and diagrammatic annotations, as shown in Figure 3. Sketching supports designers in thinking about semantic relationships, such as "voices leads pics" shown in (1) of Figure 3, as well as relationships among physical features, such as the timing between sounds and pictures. (2) indicates that he visualised the relationships between picture images and his plan for a certain part of the material sequence by using written and diagrammatic annotations. (3) shows that he was thinking about relationships across the viewpoints. The relationships that the participant visualised are both physical and semantic. Some authoring tools support the visualisation of physical relationships; however, they have few functions to support semantic relationships among a designer's viewpoints. Sketching assists this process.

Figure 3: Relationships between multiple viewpoints — (1) between sound and vision; (2) between vision and plan; (3) across the viewpoints

Sketching also provides a holistic view of time-based information. Implementing these features of sketching in software will facilitate designers' movement between the conceptual and the physical world, and between whole and part, so that the process of making a video sequence is supported.

3.2 Roles of Software
We investigated the process of making a video sequence with software. The participant was to edit a material video sequence composed of a single shot. The editing tool he used was FinalCut Pro HD, which the participant had been using for about five years. The duration of the session was up to the participant (eventually it was 90 minutes). The video editing was conducted in a studio at the Creativity & Cognition Studios, University of Technology, Sydney. It was the first time he had engaged with the piece; that is, the process was the earliest stage of using video-authoring software for the new work. The process of making a video sequence was recorded by digital video cameras. The following elements were recorded:
• The participant's physical actions while making a video sequence on the video editing software
• The participant's actions on the computer displays

After authoring a video sequence, the participant was asked to give a retrospective report on his authoring process while watching the recorded video data. We adopted the retrospective report method so that we could capture the cognitive processes in the actual interactions as far as possible. The recorded video data was used as a visual aid to minimise the memory load on the participant. The participant was asked to report what he had been thinking during editing while watching the recorded video data. Following this, the participant was asked to answer a free-form questionnaire via e-mail.
3.2.1 Observing "facts" in a material sequence
The participant reported that he was just looking at the film clip, for example:

[00:01:30] At this stage, I'm just looking again at the film clip.
[00:02:32] So again, still operating on this kind of looking at the image over, the perceptual thing.
This was reported 18 times in his protocol data. These observations occurred in the early and late phases of the process, as shown in Figure 4.

Figure 4: The time distribution of the observation process (frequency of observation over time)

The observation of facts took 75 minutes and the exploration of possibilities 5 minutes; the rest was spent on other events, such as reading a manual to solve technical problems and talking to a person. During this observation process, the participant was also trying to find a "rhythm" in the material sequence, which he calls "metre".

[00:02:23] One of the things I've been thinking about ... is actually to, is actually well, what is the kind of metre, what is the rhythm that you are going to introduce into here

This type of observation serves to check the actual duration of each scene that the participant considered "a semantic chunk". The participant recorded the precise time durations of the semantic chunks and listed them in his sketchbook. This means that the participant was trying to refine his idea by mapping conceptual elements to a physical feature.

[00:08:32] It's a matter of analysing each, almost each frame to see what's going on and making a decision of. Having kind of analyse what's going on and making a decision of, well therefore this duration works out of something like this. The durations are in seconds and frames, so that [...] 20 unit [...]. It counts from 1 to 24 frames, 25th frame rolls over number second.

In the process of making a video sequence, the software plays the role of elaborating what the participant decided roughly in his sketch. This process has the features listed below:

• Transitions from the macroscopic viewpoints appearing in his sketch to microscopic actions, such as focusing on time durations, were frequently observed
• Almost no transition in the opposite direction was observed, such as seeing the effect of microscopic changes on the entire concept

3.2.2 Trial-and-error processes
Video authoring software supports trial-and-error processes with the "shot list" as well as the "undo" function. Existing video editing software usually allows designers to list the files and sequences used for their current video composition (this is called a shot list). The list function in a video editing tool supports the comparison of multiple alternatives: it allows a designer to list not only files that may potentially be used but also created sequences. In the retrospective report, the participant said:

[00:11:10] It would be a kind of parallel process where you make a shot list is causing what they call a shot list [pointing at the leftmost list-type window in FinalCut]. And essentially you go through on the list, the different shots, the different scenes as we would often call, um. Whereas I'm just working with one scene, dynamics within one scene. So, I'm working with a different kind of material, but it's related too.

Although this function helps designers conduct trial-and-error processes by comparing multiple possibilities, the participant mentioned a problem:

[A-4] Film dubbing interface metaphor [is inconvenient]. The assumption is that a TV program or a cinema film is being made, which forces the adoption of the system to other modes. For instance, why should there be only one Timeline screen? There are many instances where the moving image is presented across many screens.

Existing video editing software has adopted a metaphor of the tools used in the industrial film-making process. As a result, the software presents only the time axis of the sequence currently being composed. This problem has also been reported in the context of musical composition.

3.3 Identified Design Rationales
Three design rationales have been identified based on our analysis:

• Allowing seamless transition between a conceptual holistic viewpoint (overview) and a partial implementation of the concepts (detail)
• Visualising multiple viewpoints and timelines
• Enhancing trial-and-error processes

These three points are not mutually exclusive; we separated them in order to facilitate implementing a system based on the knowledge obtained through this study, and to contribute to a more generic design theory for creativity support tools.

3.3.1 Allowing Seamless Transition between Overview and Detail Representations
The process of making a video sequence, especially in an artistic context, is a design process with a hermeneutical feature: the whole defines the meaning of a part and, at the same time, the meanings of the parts decide the meaning of the whole. A video authoring tool should therefore be designed to support this transition between whole and part. Although the overview + detail concept is a generic design rationale applicable to many kinds of design problems, we consider it a particularly important strategy for the process of creating time-based information, because time-based information by nature takes a form that is difficult to overview. For example, in order to see the effect on the whole caused by a partial change, you have to watch and/or listen to the sequence through from beginning to end, or you have to memorise the whole and imagine what impact the partial change has on it. In architectural design, the effects of partial changes are immediately visualised on the sketch, which makes it easy for designers to move between whole and part. This transition should be supported in the process of making a video sequence.

The participant first conducted a conceptual design in the sketching process by overviewing the whole, using written and diagrammatic annotations to articulate relationships among the annotated elements. He then proceeded to the detailed implementation of the video sequence in the software, and this was a one-way process. As the conceptual design of the whole is inseparable from the detailed implementation in the software, they should be seamlessly connected. This one-way transition may occur partly because this was the early stage of the process of making a video sequence. However, we consider that it arises because the tools for conceptual design (the sketch) and for implementation (the software) are completely separated, with the result that a designer does not modify a sketch once it is completed. The same phenomenon was observed in a study of the musical composition process, where comparison between multiple possibilities occurred when an overview was provided with the traditional score-metaphor interface. Providing an overview is thus expected to support comparisons between the multiple possibilities derived from partial modifications.

3.3.2 Visualising Multiple Viewpoints and Timelines
It was observed that the participant visualises multiple viewpoints and timelines; however, existing software presents only one timeline. Amitani et al. have claimed, based on their experiment, that a musical composition process does not always proceed along the timeline of the piece, and have argued for the importance of presenting multiple timelines in musical composition. Some musicians do compose a piece along its timeline; however, we consider that tools should be designed to support both cases. The same applies to the process of making a video sequence.

3.3.3 Enhancing Trial-and-Error Processes
As mentioned before, a shot list helps designers to understand relationships among sequences. In the questionnaire, the participant described how he uses the list:

[A-5] Selecting short segments into a sequence on the Timeline, to begin testing noted possibilities with actual practice and their outcomes.

Although the shot list helps designers to some degree, the list representation only allows designers to sort the listed materials along a single axis, such as alphabetical order. This is useful; however, designers cannot arrange the materials in their own semantic ways. This makes it difficult for designers to grasp the relationships between files and sequences, so the list representation prevents designers from fully exploring multiple possibilities.

Instead of a list representation, a spatial representation is more suitable for this kind of information-intensive task. While this comparison has so far been conducted in the designer's mind, externalising the designer's mental space helps in deciding whether an information piece is to be used or not. Shoji et al. have investigated differences between list and spatial representations, and found that a spatial representation contributes to elaborating concepts better than a list representation. We believe that spatial representations will facilitate a designer's comparison of multiple possibilities. In the next section, a prototype system that supports the process of making a video sequence with spatial representations is presented.

4. KNOWLEDGE NEBULA CRYSTALLIZER FOR TIME-BASED INFORMATION
The "Knowledge Nebula Crystallizer (KNC)" was originally proposed by Hori et al. as a prototype knowledge management system with a repository called a "knowledge nebula". The knowledge nebula is an unstructured collection of small information pieces. The essential operations of the KNC system are crystallization and liquidization. During crystallization, information pieces from the nebula are selected and structured according to a particular context, resulting in a new information artefact. During liquidization, an information artefact is segmented into elements that are added to the knowledge nebula. The Knowledge Nebula Crystallizer for Time-based Information has been developed with Java 1.4.2 on the Mac OS X platform. Figure 5 shows a snapshot of the KNC4TI system.
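The crystallization and liquidization operations of the KNC can be sketched in a few lines. The following is an illustrative sketch only, not the actual KNC4TI implementation (which is written in Java); the class and method names, and the naive substring matching used to stand in for context-based selection, are our assumptions.

```python
class KnowledgeNebula:
    """A 'knowledge nebula': an unstructured pool of small information pieces."""

    def __init__(self):
        self.pieces = []

    def liquidize(self, artefact):
        """Segment an information artefact into elements and add them to the nebula."""
        self.pieces.extend(artefact.split(". "))

    def crystallize(self, context, limit=3):
        """Select pieces relevant to a context and structure them into a new artefact.

        Relevance here is naive substring matching, a stand-in for whatever
        context model a real implementation would use.
        """
        relevant = [p for p in self.pieces if context.lower() in p.lower()]
        return ". ".join(relevant[:limit])


nebula = KnowledgeNebula()
nebula.liquidize("Sound level drops to zero. The door opens. Sound of voices leads the pictures")
draft = nebula.crystallize("sound")
# draft now combines only the sound-related pieces into a new artefact,
# which could itself be liquidized again in the next loop.
```

The point of the sketch is the cycle: artefacts are broken down (liquidized) into the pool, and new artefacts are assembled (crystallized) from it under a context, so each output can become input for the next iteration.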
Figure 5: A Snapshot of the KNC4TI System

The interface of the KNC4TI system is composed of: (1) the OverviewEditor; (2) the DetailEditor; (3) the ElementViewer; and (4) the ElementEditor. For practical reasons, we have adopted FinalCut Pro HD as the ElementEditor; the reasons are described later in this section.

4.1 OverviewEditor
The OverviewEditor provides, as its name says, an overview of the movie objects available at hand. Objects are added either by choosing a folder that contains the movie files to be potentially used or by dragging & dropping movie files into the OverviewEditor. Figure 6 shows a snapshot of the OverviewEditor. Each object carries its own thumbnails, in addition to its file name, so that a designer can grasp what the movie is about. When a movie object is double-clicked, the ElementViewer pops up and the corresponding movie file is played so that the designer can check its contents (right in Figure 5). The ElementViewer is a simple QuickTime-based viewer that plays a selected movie object on demand.

Figure 6: OverviewEditor (showing movie objects, a grouping by the user, and a comment by the user)

The shot list in FinalCut Pro HD is a component similar to the OverviewEditor, in the sense that the available movie objects are listed in it; however, the following interactions are the advantages of adopting a spatial representation:

• Rearranging the positions of objects: While a list representation provides designers with a mechanically sorted file list, a two-dimensional space allows designers to arrange movie objects according to their own viewpoints. For example, movie files that might be used in a certain video work can be placed close together, so that the designer can incrementally formalise his or her ideas about the piece.
• Annotations: An annotation box appears on drag & drop in a blank space of the OverviewEditor. A designer can write annotations and freely place them anywhere on the OverviewEditor. This is an enhancement of the written annotation function.
• Grouping: A designer can explicitly group movie objects on the OverviewEditor. Grouped movie objects are moved as a group. Objects can be added to and removed from a group at any time by drag & drop.
• Copy & Paste: A movie object does not always belong to only one group while a designer is exploring which combinations work for a certain video piece. To facilitate this process, a copy & paste function was implemented. Whereas only one possibility can be explored in the timeline representation and shot list of normal video editing software, this visually allows a designer to examine multiple possibilities.

4.2 DetailEditor
The DetailEditor appears when a group on the OverviewEditor is double-clicked, and shows only the objects in the clicked group, as shown in Figure 5. Figure 7 shows a snapshot of the DetailEditor.

Figure 7: DetailEditor

In the DetailEditor, the horizontal axis is a timeline and the vertical axis is similar to tracks. It plays the grouped movies from left to right. If two objects overlap horizontally, as Figure 7 shows, the first movie (Hatayoku.mpg) is played first; then, in the middle of that movie, the next one (Impulse.mpg) is played. The timing of the switch from the first movie to the second is defined by the following rule: movie 1 has a time duration d1 and is represented as a rectangle of width l1 pixels located at x = x1; movie 2 has a duration d2 and is represented as a rectangle of width l2 pixels located at x = x2 (Figure 8). When the play button is pushed, movie 1 is played in the ElementViewer and, after time t1, the second movie is played. The playing duration t1 is defined by equation (1) in Figure 8:

t1 = d1 (x2 − x1) / l1    (1)

Figure 8: The Timing Rule for Playing Overlaps (object 1: width l1, duration d1, at x1; object 2: width l2, duration d2, at x2)

Following this rule, movie objects grouped into the DetailEditor are played from left to right in the ElementViewer. This allows designers to quickly check what a certain transition from one file to another looks like. Designers can open as many DetailEditors as they wish, so that they can compare and explore multiple possibilities.
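Equation (1) is a simple pixels-to-seconds conversion: the fraction of movie 1's rectangle that lies before movie 2's rectangle begins is mapped onto movie 1's duration. A minimal transcription (the function name is ours, not part of the system):

```python
def switch_time(d1, l1, x1, x2):
    """Seconds of movie 1 to play before switching to movie 2 (equation (1)).

    d1: duration of movie 1 in seconds
    l1: width of movie 1's rectangle in pixels
    x1: x position of movie 1's rectangle
    x2: x position of movie 2's rectangle (x2 >= x1, overlapping movie 1)
    """
    pixels_before_overlap = x2 - x1       # part of movie 1 shown before movie 2 starts
    return d1 * pixels_before_overlap / l1  # convert pixels to seconds at movie 1's scale


# Example: a 10 s movie drawn 200 px wide at x = 50; movie 2's rectangle starts
# at x = 150, i.e. halfway through movie 1's rectangle, so the switch is at 5.0 s.
t1 = switch_time(d1=10.0, l1=200, x1=50, x2=150)  # 5.0
```

Note that t1 depends only on movie 1's geometry; movie 2's duration d2 and width l2 only matter for when movie 2 itself ends.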
4.3 ElementEditor: Seamless connection with FinalCut Pro HD
Starting from the OverviewEditor, a designer narrows down his or her focus with the DetailEditor and the ElementViewer; the designer then needs to work on the video piece more precisely. For this purpose, we adopted FinalCut Pro as the ElementEditor, and the KNC4TI system is seamlessly connected with FinalCut Pro HD via XML. FinalCut Pro HD provides functions for importing and exporting video sequence information as .xml files. The DetailEditor likewise exports and imports .xml files in the Final Cut Pro XML Interchange Format when any point on a DetailEditor is double-clicked. An XML file exported by the DetailEditor is automatically fed to FinalCut Pro HD. Figure 9 shows the linkage between the DetailEditor and FinalCut.
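To make the linkage concrete, the sketch below generates a toy sequence description from a DetailEditor-style arrangement. This is a heavily simplified illustration in Python (the system itself is Java): the element names only gesture at the Final Cut Pro XML Interchange Format, and the real schema in Apple's specification is considerably richer.

```python
import xml.etree.ElementTree as ET


def export_sequence(name, clips):
    """Serialise a DetailEditor arrangement as a toy XML sequence.

    clips: list of (file_name, start_frame, end_frame) tuples, one per movie
    object in the DetailEditor, ordered left to right. The tag names below
    are illustrative placeholders, not the authoritative schema.
    """
    root = ET.Element("xmeml", version="1")
    seq = ET.SubElement(root, "sequence")
    ET.SubElement(seq, "name").text = name
    for file_name, start, end in clips:
        item = ET.SubElement(seq, "clipitem")
        ET.SubElement(item, "name").text = file_name
        ET.SubElement(item, "start").text = str(start)
        ET.SubElement(item, "end").text = str(end)
    return ET.tostring(root, encoding="unicode")


# Two overlapping objects as in Figure 7, expressed as frame ranges.
xml_text = export_sequence("Scene 1", [("Hatayoku.mpg", 0, 240), ("Impulse.mpg", 120, 360)])
```

The value of the XML round trip is that the spatial arrangement made in KNC4TI becomes an ordinary sequence that FinalCut Pro can open, edit and hand back.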
Figure 9: Linkage between the DetailEditor and FinalCut through XML import/export

Using FinalCut Pro is advantageous for the following reasons. First, it increases practicality. One of the most difficult things in bringing a new system into practice is that practitioners are reluctant to change their tools, and FinalCut Pro is one of the most widely used video authoring tools; potentially, the KNC4TI could therefore be used as an extension of an existing video authoring environment. Second, it reduces the development load. It is not efficient to develop a system that beats, or even matches, a well-developed system such as FinalCut Pro. We are not dismissing the existing sophisticated systems, but extending what they can do for human designers.

5. TOWARDS A GENERATIVE VIDEO AUTHORING SYSTEM
Edmonds has suggested that a computer can certainly be a stimulant for human creative activities. The important question is how we can design a computer system that supports people in increasing their capacity to take effective and creative actions. We are currently developing components that extend the current system into a generative system that stimulates designers' thinking. Figure 10 shows the model of the generative system. First, information artefacts (existing ones and/or new pieces of information, such as texts, videos and images) are collected and stored (left in Figure 10). A system (top in Figure 10) generates possible information artefacts, such as multimedia composites, web pages and documents (right in Figure 10). These outputs work in two ways: (1) as final products that a user (an artist, an information designer, or the public/active audience) can enjoy; and (2) as draft materials, working as stimulants, that the user can modify (centre of Figure 10). The output becomes an input for the next loop.

Figure 10: How a Generative System Works

In order to deliver possible information artefacts to users, a component called the Dynamic Concept Base (DCB) is being developed. It is a concept base that holds multiple similarity definition matrices which are dynamically reconfigured through interactions. The more the number of objects increases, the more difficult it becomes to grasp the relationships on a physically limited display. To assist a designer in grasping the overview of a movie file space, the way objects are arranged in the two-dimensional space is critical. Sugimoto et al. have shown statistically that a similarity-based arrangement works better than a random arrangement for comprehending information presented in a two-dimensional space. That is, the DCB potentially has the ability to help a designer understand an information space. Movie objects are arranged based on the similarities computed by the DCB.

Similarities between movies are computed from physical features such as brightness and hue. While the arrangement is produced by the system, it does not necessarily fit the designer's context. The system should therefore allow end-user modification for the incremental formalisation of information artefacts. The DCB is reconfigured through interactions such as rearranging, grouping and annotating objects. If two objects are grouped together by a designer, the DCB recomputes their similarity (Figure 11), so that the similarity definition becomes more contextually suitable.

Figure 11: Reconfiguring the DCB through Interactions (the user's regrouping turns the original similarity matrix into a new, reconfigured matrix)
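The DCB's two steps can be sketched as follows: an initial similarity matrix computed from physical feature vectors, then an update when the designer groups two objects. This is an illustrative Python sketch (the system itself is Java); the cosine measure, the toy (brightness, hue) features, and the update rule of pulling a grouped pair's similarity halfway towards 1.0 are all our assumptions, since the paper does not specify the exact formulas.

```python
import math


def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


# Hypothetical feature vectors: (brightness, hue) per movie object.
features = {
    "Hatayoku.mpg": (0.8, 0.2),
    "Impulse.mpg": (0.7, 0.3),
    "Door.mpg": (0.1, 0.9),
}
names = list(features)

# Original similarity matrix, computed purely from physical features.
sim = {(a, b): cosine(features[a], features[b]) for a in names for b in names}


def regroup(sim, a, b):
    """Designer grouped a and b: pull their similarity halfway towards 1.0.

    Assumed update rule, standing in for however the DCB actually
    reconfigures a similarity definition after a grouping interaction.
    """
    boosted = (sim[(a, b)] + 1.0) / 2.0
    sim[(a, b)] = sim[(b, a)] = boosted


before = sim[("Hatayoku.mpg", "Door.mpg")]
regroup(sim, "Hatayoku.mpg", "Door.mpg")
after = sim[("Hatayoku.mpg", "Door.mpg")]  # larger than before: the grouping is remembered
```

However the update rule is chosen, the design intent is the same: each grouping interaction nudges the similarity definition towards the designer's own context, so subsequent spatial arrangements reflect it.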
In this paper, we have presented: (1) an analysis of the process of making a video sequence to identify design requirements for a supporting system; (2) a prototype system developed based on this analysis; and (3) plans for a generative system.
Design rationales for an appropriate video-authoring tool were derived from our investigation and are summarised as three inter-related features: (1) allowing seamless transition between a conceptual, holistic viewpoint (overview) and a partial implementation of the concepts (detail); (2) visualising multiple viewpoints and timelines; and (3) enhancing trial-and-error processes. A prototype system, "Knowledge Nebula Crystallizer for Time-based Information (KNC4TI)", has been developed based on this analysis. We are going to evaluate the system through user studies and also implement the generative function.
This project was supported by the Japan Society for the Promotion of Science, and is supported by the Australasian CRC for Interaction Design and the Australian Centre for the Moving Image. The authors are also grateful to Dr. Linda Candy and Mr. Mike Leggett for their useful comments on improving this research.
[1] Amitani, S. and Hori, K. Supporting Musical Composition by Externalizing the Composer's Mental Space. In Proceedings of Creativity & Cognition 4, Loughborough University, Loughborough, 13-16 October 2002, 165-172.
[2] Apple, Inc. Final Cut Pro XML Interchange Format.
[3] Bilda, Z. and Gero, J. Analysis of a Blindfolded Architect's Design Session. In 3rd International Conference on Visual and Spatial Reasoning in Design, MIT, Cambridge, USA, 22-23 July 2004.
[4] Cross, N., Christiaans, H. and Dorst, K. Analysing Design Activity. John Wiley & Sons, 1997.
[5] Eckert, C., Blackwell, A., Stacey, M. and Earl, C. Sketching Across Design Domains. In 3rd International Conference on Visual and Spatial Reasoning in Design, MIT, Cambridge, USA, 2004.
[6] Edmonds, E. Artists augmented by agents (invited speech). In Proceedings of the 5th International Conference on Intelligent User Interfaces, New Orleans, Louisiana, United States, 2000, 68-73.
[7] Ericsson, A. and Simon, H. Protocol Analysis: Verbal Reports as Data. MIT Press, Cambridge, MA, 1993.
[8] Finke, R.A., Ward, T.B. and Smith, S.M. Creative Cognition: Theory, Research, and Applications. A Bradford Book, The MIT Press, 1992.
[9] Fischer, G. and Girgensohn, A. End-user modifiability in design environments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People, Seattle, Washington, United States, 1990, 183-192.
[10] Hori, K., Nakakoji, K., Yamamoto, Y. and Ostwald, J. Organic Perspectives of Knowledge Management: Knowledge Evolution through a Cycle of Knowledge Liquidization and Crystallization. Journal of Universal Computer Science, 10 (3), 2004, 252-261.
[11] Kasahara, K., Matsuzawa, K., Ishikawa, T. and Kawaoka, T. Viewpoint-Based Measurement of Semantic Similarity between Words. Journal of Information Processing Society of Japan, 35 (3), 1994, 505-509.
[12] Marshall, C. and Shipman, F. Spatial Hypertext: Designing for Change. Communications of the ACM, 38 (8), 1995, 88-97.
[13] Schoen, D.A. The Reflective Practitioner: How Professionals Think in Action. Basic Books, New York, 1983.
[14] Scrivener, S. and Edmonds, E. The computer as an aid to the investigation of art exploration. In Proceedings of Euro IFIP, Amsterdam, Netherlands, North-Holland Publishing Company, 1979, 483-490.
[15] Shibata, H. and Hori, K. A Framework to Support Writing as Design. Journal of Information Processing Society of Japan, 44 (3), 2003, 1000-1012.
[16] Shibata, H. and Hori, K. Toward an integrated environment for writing. In Proceedings of the Workshop on Chance Discovery, European Conference on Artificial Intelligence (ECAI), 2004.
[17] Shipman, F., Girgensohn, A. and Wilcox, L. HyperHitchcock: Towards the Easy Authoring of Interactive Video. In Proceedings of INTERACT 2003, 2003, 33-40.
[18] Shipman, F.M. and McCall, R.J. Incremental formalization with the hyper-object substrate. ACM Transactions on Information Systems, 17 (2), 1999, 199-227.
[19] Shoji, H. and Hori, K. S-Conart: an interaction method that facilitates concept articulation in shopping online. AI & Society: Social Intelligence Design for Mediated Communication, 19 (1), 2005, 65-83.
[20] Snodgrass, A. and Coyne, R. Is Designing Hermeneutical? Architectural Theory Review, 1 (1), 1997, 65-97.
[21] Sugimoto, M., Hori, K. and Ohsuga, S. An Application of Concept Formation Support System to Design Problems and a Model of Concept Formation Process. Journal of Japanese Society for Artificial Intelligence, 8 (5), 1993, 39-46.
[22] Suwa, M., Purcell, T. and Gero, J. Macroscopic analysis of design processes based on a scheme for coding designers' cognitive actions. Design Studies, 19 (4), 1998, 455-483.
[23] Tanaka, Y. Musical Composition as a Creative Cognition Process (in Japanese). Report of Cultural Sciences, Faculty of Tokyo Metropolitan University, 307 (41), 2000, 51-71.
[24] Yamamoto, Y., Nakakoji, K. and Aoki, A. Visual Interaction Design for Tools to Think with: Interactive Systems for Designing Linear Information. In Proceedings of the Working Conference on Advanced Visual Interfaces (AVI 2002), Trento, Italy, ACM Press, 2002, 367-372.