
UNIVERSITY OF COPENHAGEN DEPARTMENT OF COMPUTER SCIENCE, DIKU

Blender batch rendering of liquid

Final project presented by David Calabuig López. Supervisors: Morten Engell-Nørregård and Kenny Erleben. January 18, 2011

ABSTRACT
This project studies the problem of creating a command-line tool for rendering an animation in Blender. Blender is a cross-platform software tool devoted especially to modeling, animation, and graphics creation. Blender has a very powerful feature that is widely used today: a fully functional Python interpreter. This allows any user to add functionality to Blender by writing a simple Python script. Since Blender uses Python for scripting, it seems natural to use Python to build the tool. At DIKU the Image group invents new fluid simulation methods. One important aspect of successful publication of research results is the ability to render nice animation movies of the simulation results. This project proposes a solution for creating the animation of the simulation results: from the set of simulation results, the application will generate and render the video animation. The main features of the renderer are: the ability to specify simple parameters such as the number of frames in the sequence, the number of lights in the scene, or the material of the object; control over the codecs of the resulting animation movie; and support for three different platforms: Linux, Mac OS X, and Windows. In summary, this project explores the potential of creating a simple tool that allows researchers to quickly create an animated film.

RESUMEN
(Translated from Spanish.) This project studies the problem of creating a command-line tool for rendering an animation in Blender. Blender is a cross-platform software tool devoted especially to modeling, animation, and graphics creation. Blender has a very powerful feature that is widely used today: a fully functional Python interpreter. This allows any user to add functionality to Blender by writing a simple Python script. Since Blender uses Python for scripting, the most natural choice is to use that programming language to build the tool. The Image group at DIKU invents new fluid simulation methods. One important aspect of successful publication of research results is the ability to render animations of the simulation results. This project proposes a solution for creating the animation of the simulation results: from the set of simulation results, the application will generate a render of the video animation. The most notable features of the renderer are: the ability to specify simple parameters such as the time step between consecutive frames, the camera position, lights, or materials; control over the codecs of the resulting animation movie; and support for three different platforms: Linux, Mac OS X, and Windows. In summary, this project explores the potential of creating a simple tool that allows researchers to quickly create an animation movie.

GENERAL INDEX
1. Introduction
   1.1. Introduction to project
   1.2. Introduction to Blender
   1.3. Project objectives
2. Analysis
   2.1. Description of the workflow
        2.1.1. Types
        2.1.2. Python scripts
        2.1.3. Plugins
        2.1.4. Comparison of solutions
   2.2. Detailed description of the solution
   2.3. Requirements
3. Design
   3.1. How to build the animation?
   3.2. How to use the lights?
   3.3. How to use the materials?
   3.4. How will the graphical user interface be?
   3.5. Output file formats and their differences
        3.5.1. Image
        3.5.2. Video
4. Implementation
   4.1. Graphical user interface
        4.1.1. Using Blender
        4.1.2. Without using Blender
   4.2. Import figures
   4.3. Creation of video
   4.4. Insertion of lights
   4.5. Insertion of materials
   4.6. Formats chosen for the output file
        4.6.1. PNG
        4.6.2. JPEG
        4.6.3. AVI
        4.6.4. XVID
        4.6.5. MPEG1
        4.6.6. Formats comparison
5. Improvements, problems and examples
   5.1. Improvements in application
   5.2. Problems found
   5.3. Comparisons
   5.4. Examples
        5.4.1. Example using Blender
        5.4.2. Example without using Blender
6. Conclusions
   6.1. Conclusion
   6.2. Future Works
Appendix A. Source Code: Import figures
Appendix B. Source Code: Lights
Appendix C. Source Code: Materials
Bibliography

CHAPTER 1

INTRODUCTION

1.1. Introduction to project


At DIKU the Image group invents new fluid simulation methods. One important aspect of successful publication of research results is the ability to render nice animation movies of the simulation results. In this project I wish to explore the potential of creating a simple tool that allows researchers to quickly create an animation movie. This tool will be used by a group of researchers from the University of Copenhagen. The tool will take as input a sequence of numbered files with the extension .obj, such as frame_000.obj, frame_001.obj, ..., frame_099.obj, where each file contains the raw data for a single mesh frame in the animation. The application will output a video animation of the simulation results.
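For illustration, this input naming scheme can be generated with a short Python helper; `build_frame_paths` and its parameters are hypothetical names introduced here, not part of the tool itself:

```python
import os

def build_frame_paths(directory, prefix="frame_", count=100, digits=3, ext=".obj"):
    """Return the ordered list of mesh files the tool expects as input,
    e.g. frame_000.obj, frame_001.obj, ..., frame_099.obj."""
    return [os.path.join(directory, "%s%0*d%s" % (prefix, digits, i, ext))
            for i in range(count)]

# Build the sequence of 100 numbered frame files under a sample directory.
paths = build_frame_paths("sim_results", count=100)
print(paths[0])
print(paths[-1])
```

The zero-padded numbering keeps the frames in the correct order when the files are sorted lexicographically.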

1.2. Introduction to Blender


Originally, the program was developed as an in-house application by the Dutch animation studio NeoGeo; the main author, Ton Roosendaal, founded the company "Not a Number Technologies" (NaN) in June 1998 to develop and distribute the program. The company went bankrupt in 2002, and its creditors agreed to offer Blender as a free and open-source product under the GNU GPL in exchange for €100,000. In March 2002 Ton Roosendaal founded the non-profit organization Blender Foundation. The first objective of the Blender Foundation was to find a way to continue developing and promoting Blender as an open-source project based on the user community. In July 2002, Ton managed to get a "yes" from the NaN investors for the Blender


Foundation to carry out its plan of going open source. The "Free Blender" campaign had to raise €100,000 for the Foundation to buy the rights to the Blender source code and intellectual property from the NaN investors and subsequently release Blender to the open-source community. With an enthusiastic group of volunteers, among whom were several ex-NaN employees, the "Free Blender" campaign was launched. To the delight and surprise of everyone, the campaign reached the €100,000 goal in only 7 weeks. On Sunday, October 13, 2002, Blender was released to the world under the terms of the GNU General Public License (GPL). Blender development continues to this day, driven by a team of volunteers from various parts of the world led by Blender's creator, Ton Roosendaal. [1]

Figure 1. Screenshot of Blender 3D interface.


1.3. Project objectives


The specific objectives are:
I. Plan and design a tool to create animation movies.
II. Get a better understanding of the basics of computer graphics (such as lighting, materials, cameras and so on).
III. Evaluate the result of the rendering and improve the tool if possible.
IV. Get a better understanding of the Python programming language.
V. Learn the differences between the different kinds of formats.
These objectives are developed over 6 chapters. The first chapter gives an introduction to the application to be developed and a brief introduction to the history and origin of Blender. Chapter 2 contains a step-by-step description of the solution, the requirements the application must meet, and the description of the workflow within the application. In Chapter 3 I describe the design, that is, how I am going to solve the project; it also describes the user interface. Chapter 4 describes how I have implemented the application, introducing and explaining the code snippets of the tool that I consider most important. Chapter 5 covers the various problems encountered in creating the tool; the application is evaluated with timing graphs that explain why one solution was chosen over another, and examples of how to use the application are shown. Chapter 6 presents the conclusions of the project.


CHAPTER 2

ANALYSIS

2.1. Description of the workflow


This section outlines the different types of workflow that can be used in the application.

2.1.1. Types
Blender has a very important feature: a fully functional Python interpreter. This feature allows any user to add new functionality to Blender by writing a simple Python script. Python is an interactive, object-oriented, interpreted programming language. It incorporates modules, exceptions, dynamic typing, and very high-level dynamic data types and classes. Python combines great power with a very simple syntax. It is expressly designed to be used as an extension language for applications that need a programmable interface, and this is why Blender uses it. There are two ways of extending Blender: Python scripts and binary plugins. The following sections explain the advantages and disadvantages of each option.

2.1.2. Python scripts


The first option is to use scripting with the Python programming language. This option has advantages such as faster development: the user can write a program, save it, and run it. In a compiled language one has to go through the steps of compiling and linking the software, which can be a slow process. Another advantage is that it is multi-platform: the same code works on any architecture, the only condition being that the language interpreter is available; there is no need to compile the code once for each architecture. A common disadvantage is speed: interpreted programs are slower than compiled ones. However, interpreted programs are usually short, so the difference is negligible. Figure 2 shows the scheme of Python scripting. To interact with the main program, in our case Blender, the script uses the application programming interface, known as the API. Python is used because it is the most powerful, versatile, robust and easy-to-understand option.

Figure 2. First option: Scripts in Python.

2.1.3. Plugins
The second option uses plugins, as shown in Figure 3. Plugins are loaded (called) in Blender via dlopen(). For those unfamiliar with it, the dlopen system allows a program (Blender) to use a compiled object as if it were part of the program itself, similar to dynamically linked libraries, except that the object to load is determined while the program is running. The advantage of using the dlopen system for plugins is very fast access to a function: there is no overhead in the plugin interface, which is critical when the plugin can be called several million times in a single render. The disadvantage of this system is that the plugin works as if it really were part of Blender, so if the plugin crashes, Blender crashes. The header files found in the plugin/include/ subdirectory of the Blender installation document the features that are provided to plugins.
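As a rough illustration of the dlopen mechanism, Python's standard ctypes module loads a compiled shared object at run time in the same way; here the C math library merely stands in for a Blender plugin:

```python
import ctypes
import ctypes.util

# Locate a compiled shared object; CDLL loads it at run time via dlopen().
# The C math library is used purely as a stand-in for a real plugin.
libm_path = ctypes.util.find_library("m")
# Fall back to the symbols already loaded into the process if lookup fails.
libm = ctypes.CDLL(libm_path) if libm_path else ctypes.CDLL(None)

# A symbol in the loaded object can be called as if it were part of the program.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(16.0))
```

Exactly as the text describes, the loaded function is resolved at run time, with no compile-time link against the host program.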


Figure 3. Second option: Plugin.

2.1.4. Comparison of solutions


Of the two ways of extending Blender, Python scripting is clearly preferable to writing a plugin. I choose Python scripts because one of their advantages is faster development, since the user only has to write the program, save it, and run it. Another reason is that the same code can be used on other architectures; in other words, the application is multi-platform. Therefore, this is the option chosen to solve the problem of this project. Python scripting actually had rather limited functionality in Blender 2.25, the last version released by NaN. When Blender became open source, many of the developers who gathered around the Foundation chose to work on Python scripting and, together with the change of user interface, the Python API is probably the part of Blender that has seen the largest development. A complete reorganization of what existed was carried out and many new modules were added. This evolution is still in progress, and better integration is to come in future versions of Blender.


2.2. Detailed description of the solution


The solution has been written as a Python script that runs within Blender, as shown in the figure below (see Figure 4).

Figure 4. Screenshot of the Blender 3D interface with the scripting interface.

The steps taken in the solution are the following:
I. Import the .obj file into Blender as a figure.
II. Select the position of the camera.
III. Add the Blender object and activate it.
IV. When active, add a frame.
V. Add the lights.
VI. Add the material.
VII. Render.
VIII. Deactivate the objects and delete them.
IX. Repeat the 8 steps above for all the objects that make up the video.
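The per-frame loop of these steps can be sketched in plain Python, with stub functions standing in for the actual Blender API calls (all names below are placeholders, not real Blender functions):

```python
# Stubs standing in for Blender API calls; they only record the order of work.
log = []

def import_obj(path):  log.append("import " + path)
def activate(path):    log.append("activate " + path)
def add_frame():       log.append("add frame")
def add_lights():      log.append("add lights")
def add_material():    log.append("add material")
def render():          log.append("render")
def delete(path):      log.append("delete " + path)

obj_files = ["frame_000.obj", "frame_001.obj", "frame_002.obj"]
camera_position = (10.0, -10.0, 8.0)  # chosen once, before the loop

for path in obj_files:   # the loop body repeats for every input mesh
    import_obj(path)     # import the .obj file as a figure
    activate(path)       # add the object and activate it
    add_frame()          # add a frame while the object is active
    add_lights()
    add_material()
    render()
    delete(path)         # deactivate and delete before the next mesh

print(len(log))          # 7 operations per input file
```

The key structural point is that every imported mesh is rendered and then removed, so only one simulation frame is ever in the scene at a time.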

2.3. Requirements
As the application takes .obj files as input, these files must meet some requirements so that they can be imported by the program. The files must be in .obj format so that the tool can create the animation. Another requirement is that the user edits the script by hand to set the directory where the .obj files reside and their name; I hope to improve this later and avoid the manual edit.


Before importing the .obj files, the user must select the camera position; it is understood that the user enters the coordinates of the camera from which the imported object is viewed. On the other hand, the user cannot change the position of the lights; their positions are fixed in order to follow the three-point lighting method explained in section 3.2. Another important requirement, for users not using Blender, is the input text file. This text file contains the input features needed to create the animation. The user must edit the features in this file by hand; these features are: the directory where the .obj files reside and their name, the number of .obj files, the coordinates of the camera position, the number of lights, the path and type of the material, and the path and format of the output file. I chose these features because they are necessary and appropriate for converting the input .obj files into a rendered video animation. This functionality is the goal pursued in this work.


CHAPTER 3

DESIGN

This chapter discusses how information will flow between the user and the program, how I could create the animation, how objects such as lights or materials are handled, and what the graphical user interface will look like. That is, this chapter answers the question: how can I solve it? A very general solution is shown in Figure 5:

Figure 5. General outline of how the project will be addressed.

The following sections begin with a brief explanation of the object of the relevant section (animation, lights, materials or interface), and then I explain how I intend to solve it. The following sections answer the different "hows" posed above.

3.1. How to build the animation?


In the e-book "Fundamentals of 3D Image Synthesis: A Practical Approach to Blender", the authors Carlos González Morcillo and David Vallejo Fernández argue that we can define the animation stage as the generation, storage and presentation of images in rapid succession, producing a sensation of movement. The eye retains viewed images for about 40 ms. Because of this "defect" of the visual system, people perceive a sequence of (static) images displayed at more than 20 Hz as continuous. In computer graphics, each of these images is called a frame. [5]


To create the animation we need meshes, so first I import the .obj file objects. Once all input objects are imported, the application has to create the animation. The proposed solution is that object X is visible in frame X, while in frame X+1 object X+1 is visible, and so on for all objects imported from the input files. I use a type of curve called IPO (Interpolation) curves. These are animation curves used in 3D design environments to control the variation of object attributes over time. In other words, they are interpolation curves: from the values taken at control points (key frames), the parameter values are interpolated for the frames in between. In practice, motion channels are used, which make it possible to store the deformation of a mesh in morph targets that can then be activated at different frames. This definition of IPO curves is due to Carlos González Morcillo and David Vallejo Fernández. [5]
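The interpolation idea behind IPO curves can be illustrated with a minimal linear-interpolation routine; this is only a conceptual sketch, not Blender's implementation:

```python
def interpolate(keyframes, frame):
    """Linearly interpolate a parameter value between key frames.
    keyframes: sorted list of (frame, value) control points."""
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / float(f1 - f0)
            return v0 + t * (v1 - v0)

# A visibility channel for "object 2": fully visible only on frame 2,
# as in the object-X-visible-in-frame-X scheme described above.
visibility = [(1, 0.0), (2, 1.0), (3, 0.0)]
print(interpolate(visibility, 2))
print(interpolate(visibility, 2.5))
```

Values at the control points are reproduced exactly, while frames in between receive the interpolated values, which is exactly what the IPO curves provide for each animated attribute.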

3.2. How to use the lights?


Now let us turn to a very important part of all 3D images: lighting. Many times we need the light to come from elsewhere or from several places, to be more intense, or to have a different color. All this is done with lamps. I begin by describing the basic types of lights.
Lamp: This lighting object simulates an ordinary bulb. The light it emits spreads in all directions from the position of the lamp, which is known as an "omni-directional light." The illumination decays with distance; this factor is important in this type of lighting.
Area: This object is a directional light, a light that comes from a particular direction. It spreads from a region or area almost as in the previous case, but now the generator of the light is not a point; it is a rectangle that can be scaled to increase the radiating surface and thus the illuminated area. For these lamps the distance to the objects and the size of the light source are important.
Spot: Simulates a directional light source. The lit area increases as we move away from the spotlight. Examples that illustrate this kind of object are the headlights of a vehicle or the lights used at concerts.
Sun: Simulates the effect of sunlight. It illuminates the whole scene equally, regardless of position. The light is generated as parallel rays along a direction. It is therefore a directional light, but its intensity does not decay with the distance to the point where we place it.
Hemi: This light is similar to the Sun, but does not produce shadows cast by the object. It can be considered directional in many ways and, like the Sun, is independent of the point at which we place it.
In the pictures below the reader can see the direction of the light sources (see Figure 6) and the effects generated on a surface (see Figure 7):


Figure 6. Different types of light.

Figure 7. Effect of different types of lights.

Now I will discuss how I have used the lights. I could have used any number and type of lights, but I have chosen a specific lighting technique for the project. In the application I use the technique known as three-point lighting, which uses three light sources to illuminate a scene (hence the name). The first and most important is the key light [6]. The key can be placed anywhere in the scene. The next light is the fill, which does what its name implies: it fills in the areas where no light arrives. The last lamp is the back light, which is placed behind the object and illuminates its edges. The general outline of the position of these lights can be seen in the following image (see Figure 8). In the application I have chosen the positions of the lights so as to follow the three-point technique discussed above. The lights are oriented toward the center of the scene (0, 0, 0) but have different positions. The user can change these positions in the script of the application or in Blender in the Default screen layout.


Figure 8. Three point lighting.
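A fixed three-point setup of this kind can be sketched as follows; the coordinates are illustrative values, not the exact positions used in the script:

```python
import math

# Illustrative three-point setup: every lamp aims at the scene center (0, 0, 0).
lights = {
    "key":  (4.0, -4.0, 5.0),   # main source, the strongest
    "fill": (-4.0, -4.0, 2.0),  # softens the shadows left by the key
    "back": (0.0, 5.0, 4.0),    # rim light placed behind the object
}

def aim_at_origin(position):
    """Unit direction vector from a lamp position toward (0, 0, 0)."""
    length = math.sqrt(sum(c * c for c in position))
    return tuple(-c / length for c in position)

for name, pos in lights.items():
    print(name, aim_at_origin(pos))
```

Computing the direction as the normalized vector from the lamp to the origin is what keeps all three lamps pointed at the center of the scene regardless of where they are placed.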

3.3. How to use the materials?


So far I have focused on defining the properties of the light sources in the scene. The final representation of an object is determined by its material and how it reflects light. The way it reflects determines the color and appearance the object will have in the picture. In very general terms, a material absorbs certain frequencies of light and reflects others. The reflected part, which is complementary to the absorbed part, is used to calculate the color that corresponds to a certain point on the surface as seen from the camera. As I want to represent real textures, I downloaded materials from http://matrep.parastudios.de/ that simulate real textures and added them to the program; depending on the type of object, the user can choose different textures, as can be seen in Figure 9. The textures, from left to right and from top to bottom, are: amber, Voronoi bronze, crystal, dense clouds, fire, red-hot metal, water and wood. If the user is not interested in any of the materials listed above, the user can also create their own materials for the object.


Figure 9. Different types of materials [4].

3.4. How will the graphical user interface be?


The user interface is the means of interaction between the user and the program. The user communicates with the program via the keyboard and mouse, and the program provides feedback through the window system. The application can be run from Blender by users who use Blender; for users who do not use Blender, the application can be run from a terminal. Through this interaction the user supplies the information needed to create the animation. Depending on whether the user is using Blender or not, the input information will be different. Section 4.1 explains the differences in the input information for each case.

3.5. Output file formats and their differences


There are many image formats for many different uses. A format stores an image in a lossless or lossy fashion; with lossy formats you suffer some image degradation but save disk space because the image is saved using fewer bytes. A lossless format preserves the image exactly, pixel for pixel. Formats can be broken down into static images and movie clips. Within either category there are standards (static formats and clip codecs), which may be proprietary (developed and controlled by one company) or open (community- or consortium-controlled). Open standards generally outlive any one particular company and will always be royalty-free and freely obtainable by the viewer. Proprietary formats may only work with a specific video card, or, while the codec may be free, the viewer may cost money.


In Blender, there are two groups of output formats: images and video. The following sections explain the formats in each of these two categories. The explanation of these two categories, sections 3.5.1 and 3.5.2, has been adapted from the Blender wiki [13].

3.5.1. Image
Blender supports a wide mix of image formats. Some formats are produced by internal Blender code. Others (TIFF, for example) may require a dynamic load library (such as libtiff.dll) in your Blender installation folder. The output image formats are:

BMP: Bit-Mapped Paint, a lossless format used by early paint programs.
Cineon: format produced by a Kodak Cineon camera, used in high-end graphics software and directed more toward digital film.
DPX: Digital Moving-Picture eXchange format; an open professional format (close to Cineon) that also contains meta-information about the picture; 16-bit uncompressed bitmap (huge file size). Used in preservation.
Iris: the standard Silicon Graphics Inc (SGI) format used on SGI Unix machines.
JPEG: Joint Photographic Experts Group (the name of the consortium which defined it), an open format that supports very good compression with little loss of quality. Only saves RGB values. Re-saving images results in more and more compression and loss of quality.
MultiLayer: an OpenEXR format that supports storing multiple layers of images together in one file. Each layer stores a render pass, such as shadow, specularity, color, etc. You can specify the encoding used to save the MultiLayer file using the codec selector (ZIP (lossless) is shown and used by default).
OpenEXR: an open and non-proprietary extended and high dynamic range (HDR) image format, saving both Alpha and Z-depth buffer information.
PNG: Portable Network Graphics, a standard meant to replace the old GIF inasmuch as it is lossless, but it supports full true-color images. Supports an Alpha channel.
Radiance HDR: a High Dynamic Range image format that can store images with big changes in lighting and color.
TARGA and Targa raw: Truevision Advanced Raster Graphics Adapter, a simple raster graphics format established in 1984 and used by the original IBM PCs. Supports an Alpha channel.
TIFF: often used for teletype and facsimile (FAX) images.

3.5.2. Video
The output video formats are:

AVI Raw: saves an uncompressed AVI; lossless but huge in size.
AVI Jpeg: saves an AVI as a series of JPEG images. It is lossy, though less compact than formats with better compression algorithms. Also, most players cannot read the AVI JPEG format.
AVI Codec: saves an AVI compressed with a codec. Blender automatically provides a list of the codecs available on your system and lets you select the various parameters.
QuickTime: saves a QuickTime animation.


H.264: a standard capable of providing good image quality at bit rates substantially lower than previous standards (MPEG-2, H.263 or MPEG-4 Part 2) without greatly increasing the complexity of the design.
MPEG: a family of standards that use lossy compression codecs (coder-decoders) for video information.
Ogg Theora: a lossy video compression method. The compressed video can be stored in any suitable container.
Xvid: the name of a popular codec developed as a free software project. Xvid-encoded films offer high-quality video in small file sizes, smaller than MPEG-2 thanks to a more advanced compression algorithm.

In the created application, I have chosen the most popular and common formats of today, which are as follows: for the image category, PNG and JPEG; for the video category, Xvid and FFmpeg-encoded AVI and MPEG1. These 5 formats have been chosen because they are the best known and most used nowadays. In section 4.6 I explain the pros and cons of the formats selected for the output file. I also discuss why some are better than others in terms of image quality, file size, problems with proprietary viewers and so on.


CHAPTER 4

IMPLEMENTATION

This chapter describes and explains how the application has been made, step by step; as already mentioned, it converts some .obj files into a rendered animation movie. I begin with the first step, which is the creation of a graphical user interface through which the user enters everything the application requires to import the figures. Then I create the animation, and finally I explain the inclusion of lights and materials. In addition to the explanation of each step, each section introduces the pieces of code and pictures that I consider important to the creation of the application.

4.1. Graphical user interface


The first and most important point is the communication between the user and the application, in other words the graphical user interface. There are two ways to use the application: one is to run the script from Blender, and the other is from the console. Each of the two ways is explained below.

4.1.1. Using Blender


The first way is for users who use Blender. The user opens Blender, switches to the script editing window, opens the script, and executes the application. When the user runs the script containing the application, the user switches to the Default view and notes that the buttons needed to create the animation appear in the Object Tools window menu. The interface of the application from Blender, shown in Figure 10, is divided into several parts, which are discussed now:


Position of the Camera: here the user introduces the coordinates of the camera position.
Select obj files: a button which opens a file browser to select all the .obj files to be used to create the animation.
Light: here the user introduces the number of lights he wants in the scene, with a minimum of zero lights and a maximum of three. The position and color of the lights can be changed manually in the script before running the application. The maximum of three lights corresponds to the three-point lighting method mentioned in section 3.2.
Materials Path: in this browser the user writes the path where the materials that can be applied to the object in the scene are located.
Type of Material: clicking this, the user sees a list of materials that can be applied to the object in the scene. These materials were described in section 3.3. For example, if the selected material is Amber, the Amber material is applied to the object in the scene.
Type of output file: this button works like the one above, but the user selects the output format to generate for the rendering of the scene.
Output Path: in this browser the user enters the path where the generated output file or files should be saved.
Generate the output file: this button generates the output file or files with the characteristics discussed above.

Figure 10. Window of the graphical user interface of the application. If the user wants to introduce another type of material or another type of output file, he will need to add the new type to the types window of the interface and then add the code required to use it. Section 4.5 explains how to introduce a new type of material.


4.1.2. Without using Blender


The second way to use the application is for users who do not use Blender. In that case the user must enter a command at the console with the following pattern:
$> PathOfBlender -b -P scriptApplication PathOfInputData
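This pattern can also be wrapped in a small launcher script. The following is a minimal sketch using Python's subprocess module; the function name and all paths are placeholders, not part of the original application:

```python
import subprocess

def build_blender_command(blender_path, script_path, input_data_path):
    """Assemble the headless-rendering command: blender -b -P script input."""
    return [blender_path, "-b", "-P", script_path, input_data_path]

cmd = build_blender_command("/usr/bin/blender",
                            "applicationFinalCalabuig.py",
                            "dataInput.txt")
# To actually launch the render: subprocess.call(cmd)
```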

Each part of the previous command is explained now:
- PathOfBlender: The path where Blender is installed.
- -b: Flag for running in background.
- -P: Flag telling Blender to run a Python script.
- scriptApplication: The path and name of the script of the application.
- PathOfInputData: The path and file name of the input data used to make the animation.

The next point is to explain the input file (PathOfInputData) and what characteristics it should have in order to make the animation.
# Path of the files
# Name of files
# Number of files
# Position X of the Camera
# Position Y of the Camera
# Position Z of the Camera
# Lights
# Path of the Materials
# Material
# Path output
# Format output

The above table shows the features the input data file must contain and the order in which they should appear. Each of them is now explained:
- Path of the files: In this first line the user must put the path where the .obj files are.
- Name of files: In this second line the user puts the common name of the .obj files.
- Number of files: In this third line, as its name indicates, the user writes the number of files to use to make the animation.


- Position of the Camera: In the fourth, fifth and sixth lines the user enters the coordinates of the camera position.
- Lights: In the seventh line the user enters the number of lights to place in the scene, from a minimum of zero lights to a maximum of three. The maximum of three lights corresponds to the three-point illumination method mentioned in section 3.2.
- Path of the Materials: In the eighth line the user enters the path where the materials that can be applied to the object in the scene are located.
- Material: In the ninth line the user enters the material to apply to the object in the scene.
- Path output: In the tenth line the user enters the path where he wants to save the output file.
- Format output: In the eleventh line the user enters the output file format.

Note that the material can be: none, water, wood, amber, clouds, fireball, bronzeMetal, glass and hotmetal, as seen in section 3.3. The output file format can be: png, jpeg, avi, Xvid, mpeg1 and none, as seen in section 3.5. Section 5.4.2 shows an example of the application using this mode. Another thing to mention is that all the features of the input file are strings, except the number of lights, which is an integer (0, 1, 2 or 3). The following table shows a complete example of the input file with the characteristics discussed above.
# Path of the files
C:/Users/Calabuig/Desktop/Final Project/test/
# Name of files
cylinder
# Number of files
25
# Position X of the Camera
0.825
# Position Y of the Camera
-0.8
# Position Z of the Camera
0.51
# Lights
1
# Path of the Materials
C:/Users/Calabuig/Desktop/Final Project/Material/
# Material
clouds
# Path output
C:/Users/Calabuig/Desktop/Final Project/Results/
# Format output
png
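A parser for this file layout only needs to skip the `#` comment lines and collect the eleven value lines in order. The sketch below is a hedged illustration; the field names are mine and the actual script may read the file differently:

```python
def parse_input_file(lines):
    """Collect the value lines of the input file, skipping '#' comment lines."""
    values = [ln.strip() for ln in lines if ln.strip() and not ln.strip().startswith("#")]
    keys = ["path_files", "file_name", "number_files",
            "camera_x", "camera_y", "camera_z",
            "lights", "path_materials", "material",
            "path_output", "format_output"]
    data = dict(zip(keys, values))
    # All fields are strings except the file count and the light count
    data["number_files"] = int(data["number_files"])
    data["lights"] = int(data["lights"])  # integer between 0 and 3
    return data

example = """# Path of the files
C:/test/
# Name of files
cylinder
# Number of files
25
# Position X of the Camera
0.825
# Position Y of the Camera
-0.8
# Position Z of the Camera
0.51
# Lights
1
# Path of the Materials
C:/Material/
# Material
clouds
# Path output
C:/Results/
# Format output
png"""
settings = parse_input_file(example.splitlines())
```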


4.2. Import figures


In Blender any user can import files in formats such as COLLADA (.dae), Motion Capture (.bvh), Scalable Vector Graphics (.svg), Stanford (.ply), Stl (.stl), 3D Studio (.3ds), Wavefront (.obj) and X3D Extensible 3D (.x3d/.wrl). Blender already implements how to import files with any of these formats into a scene, and as the type of our input files is within this set, I use the instruction of the Blender API to import files with the .obj format. Therefore, to import all the object files, the application goes through a loop running from object 1 to the last object of the animation and, for each index, imports into the scene the file with name theFileName and path thePathFiles. This loop can be seen in the code in Appendix A.
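The file names follow a zero-padded index convention, so the import loop only needs to generate the sequence of paths and feed each one to the Blender importer. A small sketch of that sequence builder, with the naming convention assumed from the Appendix A code:

```python
def obj_file_sequence(path, name, count):
    """Build the ordered list of .obj file paths, one per animation frame,
    using the zero-padded five-digit index convention of the input files."""
    return [path + name + ("%05d" % i) + ".obj" for i in range(1, count + 1)]

files = obj_file_sequence("C:/test/", "cylinder", 3)
# Inside Blender, each path would then be passed to
# bpy.ops.import_scene.obj(filepath=...)
```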

4.3. Creation of video


Having seen how the figures in the .obj files are imported, I now turn to the next step: creating the animation. All the objects have been imported into the scene and each of them is a shape; as I want only one shape to exist, I join all the shapes of the introduced files into a single shape with the following object operator:
bpy.ops.object.join_shapes()

The next step is to get the effect explained in section 3.1 (How to build the animation?) of Chapter 3: Design. To achieve this effect, I loop over all the objects in the scene from the first object to the last. In other words, in iteration X the key value of object X is set to 1 to display it, while the previous and next objects, i.e. objects X-1 and X+1, have their key values set to 0. This produces the effect that the objects seem animated.
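The keying scheme can be expressed as a pure function, separate from the Blender API calls in Appendix A: at frame X, the Xth shape key gets the value 1 and its immediate neighbours get 0. A sketch of that logic (the function name is mine):

```python
def keyframes_for(frame, first, last):
    """Shape-key values that must receive a keyframe at `frame`:
    the frame's own key is set to 1.0 and, where they exist inside
    the [first, last] range, keys frame-1 and frame+1 are set to 0.0."""
    keys = {frame: 1.0}
    if frame - 1 >= first:
        keys[frame - 1] = 0.0
    if frame + 1 <= last:
        keys[frame + 1] = 0.0
    return keys
```

In the real script these values are written with `keyframe_insert("value", frame=...)` on the corresponding shape-key blocks, as shown in Appendix A.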


Figure 11. TimeLine window and IPO Curve Editor. The effect discussed in the previous paragraph is displayed in the image above (see Figure 11), where the reader can see the IPO curves used for displaying objects. These curves are shown in the interface of the IPO Curve Editor in Figure 12, where the viewing effect explained before can be observed: in the green curve, frame 6 displays the object of the file named theFileName_00006.obj. The IPO curves are represented on a graph in which the horizontal axis represents the frames (hence, time) and the vertical axis the values that the visibility magnitude of the objects can take. To achieve this effect I used the code in Appendix A, which can help understand this functionality.

Figure 12. Interface Editor of IPO Curve.

4.4. Insertion of lights


Now the next step is the lighting of the scene. As explained in section 3.2 (How are the lights?), I use the traditional lighting method called three-point lighting. The number of lights to insert into the scene, introduced by the user in the interface, is stored in the variable theLights. The lights are always oriented to the


point (0, 0, 0) of the scene. The snippet of code in Appendix B shows the position, rotation and characteristics of each inserted lamp.

4.5. Insertion of materials


The next step is the insertion of the material selected in the interface. The interface also provides the path where all the types of materials that can be inserted are found. The functionality described for the inclusion of materials is shown in the snippet in Appendix C, where thePathMaterials contains the path where the user placed the materials to be imported. For the selection of the materials to import, I use a boolean indicating whether a material is selected. For example, the code shows that if bool_Water is true, the application imports the Water material into Blender, and if it is false it does nothing; the same happens for all the other types of materials. I have inserted different materials, but the user may want to use another type of material. For that, the user must take this code pattern and introduce it in the function introduce_materials of the application (see Appendix C):
if materials == 'NAME_INTERFACE':
    bpy.ops.wm.link_append(filepath = thePathMaterials + "NAME_FILE.blend/",
                           directory = thePathMaterials + "NAME_FILE.blend/Material/",
                           link=False, filename="NAME_MATERIAL")
    ob.active_material = bpy.data.materials['NAME_MATERIAL']

I am going to explain the characteristics that must be changed in the previous pattern whenever a new material is introduced in the application:
- NAME_INTERFACE: Here the user introduces the key name of the material chosen in the interface.
- NAME_FILE: The name of the file of the material.
- NAME_MATERIAL: The name of the material inside the file NAME_FILE.

Now I present a complete example of inserting a new material from the Blender Material web [4] into the application. The texture chosen for the new material is "Beer". The first step is to add the name that the user will use in the interface, that is to say the NAME_INTERFACE:


bpy.types.Scene.MyMaterial = EnumProperty(
    items = [('none', 'No Material', 'Material: None'),
             ('amber', 'Amber', 'Material: Amber'),
             ('bronzeMetal', 'Bronze Metal Voronoi', 'Material: Bronze Metal Voronoi'),
             ('crystal', 'Crystal', 'Material: Crystal'),
             ('clouds', 'Dense Clouds', 'Material: Dense Clouds'),
             ('fireball', 'Fireball', 'Material: Fireball'),
             ('hotmetal', 'Red Hot Metal', 'Material: Red Hot Metal'),
             ('water', 'Water fresh', 'Material: Water fresh'),
             ('wood', 'Wood varnished', 'Material: Wood varnished'),
             ('beer', 'Beer', 'Material: Beer')],
    name = "Type of material")
scn['MyMaterial'] = 0

Next it is necessary to set the name of the file (NAME_FILE) and the name of the material (NAME_MATERIAL); these two characteristics depend on the downloaded material. Therefore, the code to insert in the function introduce_materials is the following:
if materials == 'beer':
    bpy.ops.wm.link_append(filepath = thePathMaterials + "beer.blend/",
                           directory = thePathMaterials + "beer.blend/Material/",
                           link=False, filename="beer")
    ob.active_material = bpy.data.materials['beer']

Now the user can use the newly downloaded material. This step must be repeated whenever the user wants to insert a new material from Blender Materials [4].
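Since the three placeholders always combine in the same way, the arguments of the link_append call can be derived from the material file name. The helper below is hypothetical, not part of the original script, and only illustrates how the strings fit together:

```python
def material_link_args(path_materials, file_name, material_name):
    """Build the filepath/directory/filename arguments used by
    bpy.ops.wm.link_append when appending a material from a .blend file."""
    blend = path_materials + file_name + ".blend/"
    return {"filepath": blend,
            "directory": blend + "Material/",
            "link": False,
            "filename": material_name}

args = material_link_args("C:/Material/", "beer", "beer")
# In Blender: bpy.ops.wm.link_append(**args)
```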

4.6. Formats chosen for the output file


Section 3.5 (Output file formats and their differences) explained all the types of formats that Blender supports for the output file. In the created application I have chosen the most popular and common formats today, which are as follows: for the image category, PNG and JPEG; for the video category, Xvid and FFMPEG encoded as AVI and MPEG1. In the following five sections I explain the pros and cons of the output formats I have chosen.
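One way to organize this choice in the script is a lookup table from the interface key to the category and codec. The table below mirrors the formats listed in this section, but the key names and the mapping structure are illustrative; the real script sets the corresponding Blender render properties instead:

```python
# Hypothetical mapping from the interface key to (category, codec/encoding).
OUTPUT_FORMATS = {
    "png":   ("image", "PNG"),
    "jpeg":  ("image", "JPEG"),
    "avi":   ("video", "FFMPEG/AVI"),
    "Xvid":  ("video", "XVID"),
    "mpeg1": ("video", "FFMPEG/MPEG1"),
}

def output_category(fmt):
    """Return 'image' or 'video' for a chosen output format key."""
    return OUTPUT_FORMATS[fmt][0]
```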

4.6.1. PNG
PNG stands for Portable Network Graphics (or, depending on whom you ask, the recursive PNG-Not-GIF). It was developed as an open alternative to GIF. PNG is an


excellent file type for internet graphics, as it supports transparency in browsers with an elegance that GIF does not possess. PNG supports 8-bit color, but also supports 24-bit RGB color, like JPG does. It is also a non-lossy format, compressing photographic images without degrading image quality. In addition to being an excellent format for transparency, the non-lossy nature of 24-bit PNG is ideal for screenshot software, allowing pixel-for-pixel reproduction of your desktop environment.

4.6.2. JPEG
JPG is a file type developed by the Joint Photographic Experts Group (JPEG) to be a standard for professional photographers. Like the method ZIP files use to find redundancies in files to compress data, JPGs compress image data by reducing sections of images to blocks of pixels or tiles. JPG compression has the unfortunate side effect of being permanent, however, as the technology was created for storing large photographic image files in surprisingly small spaces, not for photo editing. JPGs have become the standard image of the internet because they can be compressed so much. However, because of the lossy nature of JPG, it is not an ideal way to store art files. Even the highest quality setting for JPG is compressed, and will change the look of your image, if only slightly. JPG is also not an ideal medium for typography, crisp lines, or even photographs with sharp edges, as they are often blurred or smeared out by anti-aliasing. JPGs support 24-bit RGB and CMYK, as well as 8-bit Grayscale. It is also important to note that Grayscale JPGs do not compress nearly as much as color ones do.

4.6.3. AVI
AVI (Audio Video Interleave) format digital video files have been around since Windows 3.0 was released, many years ago. Due to its longer history compared with MPEG format files, the associated computer "drivers" for AVI tend to be more mature. Essentially, AVI programmatic control is more predictable (i.e. it does what it is told to do) than MPEG control. AVI driver architecture involves more layers than MPEG, but because of this, the underlying "decompression" algorithm becomes unimportant to the programmer. The "driver" interface (accessed through the Windows MCI protocol/command language) is more standardized and comes from one vendor (namely Microsoft). AVI files allow more control over 256-palette display settings. That is, for scenes such as computer-generated animations, the animation source images can be designed to use certain non-base palette positions so as to eliminate palette shifting in a program. AVI, depending on encoding size, color depth, compressor, frame encoding rate, and audio resolution, can have a significantly lower data rate than MPEG. AVI has another advantage over encoding via MPEG in that you can set the encoding size. That is, there


is no "native" size per se. With the right system, you could encode AVI (albeit at a higher data rate) to turn out much better than MPEG.

4.6.4. XVID
XviD is an open source MPEG-4 video codec library distributed under the terms of the GNU General Public License. It emerged to compete with the proprietary DivX Pro codec, which is subject to license restrictions and therefore only available to members of the DivX Advanced Research Centre (DARC). The XviD codec is intended for compressing video data in order to facilitate and speed up online video data exchange and improve storage on hard disks. The codec is capable of stripping video data of unnecessary junk and ensures higher compression rates. XviD-compressed videos can be 200 times smaller than the source video, with the visual quality well intact. XviD ensures fast compression and exceptional video quality and exceeds many expensive similar products. The codec is available for free, and it is incorporated in many hardware devices. The extensive hardware support eases data exchange between portable, home and other types of devices. There are no feature, testing or time restrictions for XviD, and it can be used safely and conveniently all the time. Since XviD is open-source software, its source code is available for public review, so anyone can check it and make sure there is no spyware or adware.

4.6.5. MPEG1
MPEG (Motion Pictures Experts Group) format video, as we know it, is technically called MPEG-1. The MPEG-1 standard specifies a "native" size of 352x240. Color depth is in millions of colors (usually 24-bit). Using a more advanced compression algorithm than AVI compressors, data rates as low as 150 KB/sec can deliver 352x240 video at 30 frames per second with stereo 16-bit 44 kHz audio playback. One drawback of this type of file is that the software required to run it is currently immature and proprietary; each vendor has a different set of bugs. Basically, a finished file would have to be "pre-edited" and then re-encoded. The main advantage of MPEG is high quality video at fairly low data rates. On properly equipped PCs, this is an excellent solution. The video can be sized and positioned anywhere on the screen in both AVI and MPEG formats.

4.6.6. Formats comparison


This section compares the advantages and disadvantages of the 5 chosen formats. I have chosen these 5 formats (png, jpeg, avi, Xvid and mpeg1) because they are the most used nowadays. First I start with the image category: images in png format have the advantage of being non-lossy, compressing photographic images without degrading image quality. On the other hand, the jpeg format is a standard for professional photographers. The two formats, jpeg and png, have become the standard images of the internet because they can be compressed so much, but the png format additionally supports transparency in browsers elegantly.


Now I comment on the pros and cons of the chosen formats in the video category: avi, Xvid and mpeg1. The advantages of using mpeg1 are great compression and the ability to deliver full-motion video with a relatively small file size, using a more advanced compression algorithm than avi compressors; a disadvantage is that software-based decompression is only just becoming available to the general public. For the avi format, an advantage is native support on Windows, and a disadvantage is that this format creates large files and often has problems syncing audio with video; still, with the right system, you could encode AVI (albeit at a higher data rate) to turn out much better than MPEG. Finally, for the Xvid format: XviD-compressed videos can be 200 times smaller than the source video, with the visual quality well intact.


CHAPTER 5

IMPROVEMENTS, PROBLEMS AND EXAMPLES

This chapter describes all the improvements made to the application, such as the improvement of the graphical user interface or of the functionalities of the application itself. There is also an explanation of the different issues I encountered when creating some functionalities and the application as a whole.

5.1. Improvements in application


This section explains the changes and improvements of the application. The first improvement is the graphical user interface shown when the user runs the application in Blender; the first draft of this interface is shown in Figure 13. In this first draft it can be seen that the user must introduce by hand the full paths where the .obj files and the materials are, as well as the names of the files the user wants to use to create the animation.


Figure 13. Window of the graphical user interface of the first draft. In the second draft of the Blender interface (see Figure 14) the user can select the path where the .obj files are using a file browser; likewise, the user can choose with a browser the path where the materials are and the path for the output file.

Figure 14. Window of the graphical user interface of the second draft. Another improvement in the application concerns the efficiency of importing figures into the Blender scene. There are two ways to import figures: one uses the Blender API and the other is a script that reads the vertices and faces of an .obj file. To measure the time, I used functions of the time library:
initiation = time.time()
# function - import .obj files
end = time.time()
total_time = end - initiation

After measuring the time it takes to import the figures of the input files with each of the two methods, these durations were obtained and a graph was created (see Figure 15), where the x-axis contains the number of .obj files chosen and the y-axis shows the time


in seconds taken by the application to import the files. Figure 15 therefore makes clear that the more efficient option is the Blender API import (in blue) as opposed to the script that reads the positions of the vertices and faces of the figure from an .obj file (in red).

Figure 15. Graph of importing 10, 20, 30, 40 and 50 .obj files. Note: the blue line is not really zero, but its points are quite small compared with the values of the red line. The blue line values are: 0.113, 0.352, 0.562, 0.814 and 1.169. All time values are in seconds.

5.2. Problems found


This section discusses the problems encountered in creating the application. The first of the problems is the API of Blender; I had to learn it and constantly look up information about it in order to introduce lights, materials, objects and so on. Another constant problem has been the Python programming language, for which I had to read several books. I needed around 3 weeks to read, learn and deepen my comprehension of Python; the books that helped me understand this programming language better are those in the bibliography with numbers [10], [11] and [12]. Other more specific problems emerged only in particular cases, such as importing materials into the scene and applying them to the figure. Another problem: I could not render the scene even though I thought the scene was correct; the error was that the camera of the scene was not active.


5.3. Comparisons
In this section I present a comparison between different input options. The first comparison is between the output files using one light and three lights; the other input options are the same, only the number of lights changes.

Figure 16. Comparison of the use of one light and three lights. Note in the previous figure (see Figure 16) that the first 8 frames correspond to the output of the application using only one light, and the remaining 8 frames use 3 lamps, i.e. the three-point method described in section 3.2. For this example I compared the rendering times: for the output using one light the average rendering time is 2.65 seconds per frame, while for the output using 3 lamps the average rendering time is 4.6 seconds per frame. The conclusion, therefore, is that if the user introduces more lights, the render will be slower but better lit. The next point is to compare the rendering time and quality and the size of the output files for a given set of input data. First I compare the quality, size and rendering time for the different image formats (png and jpeg).

Figure 17. Rendering a same frame in JPEG and PNG


The figure above (see Figure 17) shows the rendering of the same frame for the different image output formats. The frame on the left is rendered in the JPEG output format, for which the rendering time was 0.98 seconds and the image size is 16.7 KB. The frame on the right is rendered in the PNG output format, for which the rendering time was 0.92 seconds and the image size is 35.3 KB. Comparing the results, the rendering times are almost equal, while the size of the PNG images is more than double the size of the JPEG images. Also, the quality of the PNG image is better than that of the JPEG image. Now I compare the different video formats (avi, Xvid and mpeg1). The same input data has been used for the 3 video formats: the Water Fresh material, one light and 50 .obj files.
Format    Rendering Time    Size (KB)    Quality
avi       68.546            424          Low
Xvid      73.315            394          High
mpeg1     74.492            448          High

Figure 18. A table that compares the different video formats. The table in Figure 18 shows the rendering time, the output file size and the quality of the output file for each video format. Comparing the results, the table shows that the rendering times and video file sizes are very similar. However, the avi video format has worse image quality than the Xvid and mpeg1 formats. In my opinion the best of the three video formats is Xvid: its rendering time lies between those of avi and mpeg1, its file size is the smallest, and in terms of quality it achieves a high quality image.

5.4. Examples
In this section, as its name suggests, I present an example using Blender and another example without using Blender.

5.4.1. Example using Blender


This section explains the use of the application using Blender. The first step is to open Blender; the user then loads the script that contains the functionality of this project and executes it in Blender. Now the options of our application appear in the Object Tools of the Default view, in other words the buttons to create the animation explained in section 4.1.1 (Using Blender); this window corresponds to the red square in Figure 19.


Figure 19. Window of the graphical user interface of the application. Now the user introduces the input data: 1. The first step is to select the position of the camera (see Figure 20).

Figure 20. Window to select the position of the camera.


2. The second step is: Select the .obj files (see Figure 21).

Figure 21. Window to select .obj files. 3. The third step: Select the number of lights (see Figure 22).

Figure 22. Window to select the number of lights.


4. The fourth step: Choose the path where the materials are (see Figure 23).

Figure 23. Window to select the path of materials. 5. The fifth step: Select the type of the material and the output file (see Figure 24).

Figure 24. Window to select type of material and output file.


6. The sixth step: Select the path of the output file (see Figure 25).

Figure 25. Window to select the output path. 7. The seventh and last step is pressing the Generate button to generate the output file (see Figure 26).

Figure 26. Window with the button: Generate the output file. Once the application has generated the output file, we go to the directory the user chose for saving the output files and see the result (see Figure 27 and Figure 28).


Figure 27. Results of 8 consecutive frames with 1 light and dense clouds texture.

Figure 28. Results of 8 consecutive frames with 3 lights and wood texture. Note: the rendering time per frame for the result of Figure 27 is 0.98 seconds, while for the result of Figure 28 the average rendering time is 4.6 seconds per frame. Therefore, the choice of the number of lights and of the material of an object is very important in terms of rendering time.

5.4.2. Example without using Blender


As explained, there are users who do not use Blender, and for them section 4.1.2 showed how to use the application without opening Blender. Now I show an example. First of all, the user must enter the data into a file as explained in section 4.1.2; in this case the data was entered in a plain text file (see Figure 29).


# Path of the files
C:/Users/Calabuig/Desktop/Final Project/test/
# Name of files
cylinder
# Number of files
50
# Position X of the Camera
0.825
# Position Y of the Camera
-0.8
# Position Z of the Camera
0.51
# Lights
1
# Path of the Materials
C:/Users/Calabuig/Desktop/Final Project/Material/
# Material
hotmetal
# Path output
C:/Users/Calabuig/Desktop/Final Project/Results/
# Format output
png

Figure 29. Example of input file. Note: the user can enter the input data from any text editor and does not need to open Blender, because this option is for users who do not use Blender. Once the input data file is created, I open the console; I assume that any reader knows how to open a console (for example, in Windows: Start -> All Programs -> Accessories -> Command Prompt). The second step, when the input file is ready and the console is open, is simply to run the application with the command pattern (see Figure 30) explained in section 4.1.2.
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Users\Calabuig>"C:\Program Files\Blender Foundation\Blender\blender.exe" -b -P applicationFinalCalabuig.py C:\Users\Calabuig\dataInput2.txt

Figure 30. Window with the command to run the application without using Blender. When the application has finished, I go to the directory chosen for saving the output file or files and see the result for the example of this section in Figure 31.


Figure 31. Results of 8 consecutive frames with 1 light and Hot Metal texture.


CHAPTER 6

CONCLUSION

6.1. Conclusions
In this work I have studied various aspects of the creation of a tool to render an animation in Blender. Several objectives, named in the introduction of the study, have been pursued; these objectives are the following:

I. Plan and design a tool to create animation movies.
II. Get a better understanding of the basics of computer graphics (such as lighting, materials and so on).
III. Evaluate the result of the rendering and improve the tool if possible.
IV. Get a better understanding of the Python programming language.
V. Learn the differences between the different kinds of formats.

These objectives have been met throughout the chapters of the work. This report has proposed a method for creating an animation both for users who typically use Blender and for users who do not. Thus, from the set of input files and the properties of the output file, the application has the ability to render a set of .obj files into an animation movie.

6.2. Future works


As future lines of work, the following could be considered:
- Extending the method so that the input files can have other formats, not just the .obj format.
- Improving and optimizing the method of creating the animation.
- Being able to edit the parameters of the lights: choosing color, position, power and so on.
- Being able to set up other parameters such as sky color, raytrace versus shadow buffer and so on.


APPENDIX A. IMPORT FIGURES


In this part the reader can see the code for the creation of the animation and how the .obj files are imported into the scene.

# theFiles contains all the .obj files selected with the path
theFiles = []
if console:
    for index in range(1, theNumberFiles):
        theFiles.append(thePathFiles + theFileName + ("%05d" % index) + ".obj")
else:
    for file in theAllFiles:
        theFiles.append(theFilesPath + file.name)

# Cleaning up the scene first by deleting everything
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# The beginning and end of the animation
startframe = 1
endframe = len(theFiles)

# Make a pointer to the current scene
scene = bpy.data.scenes['Scene']

# We set the endframe of the animation to be this endframe
scene.frame_end = endframe

for index in range(len(theFiles)):
    bpy.ops.import_scene.obj(filepath = theFiles[index])

# Here all objects are selected and joined
# into one object as shapekeys
bpy.ops.object.select_all(action='DESELECT')
bpy.ops.object.select_all(action='SELECT')

# This must be done to get the context right
scene.objects.active = scene.objects['Mesh']

# Note there is no check to make sure we actually select all.
# This is a toggle so in theory we could choose none instead
bpy.ops.object.join_shapes()

# Make ob a pointer to the active object which should be the one we have just made
ob = bpy.context.active_object

# Find the one which has the shapekeys (the basis)


# Run through as many frames as there are keys and set them to 0/1 for the frame
# i-1, i, i+1 respectively for the ith shapekey
for i in range(1, endframe - 1):
    # scene.frame_current = i
    j = i + 1
    k = i - 1
    name = 'Mesh.%03d' % i
    name2 = 'Mesh.%03d' % j
    name3 = 'Mesh.%03d' % k
    ob.data.shape_keys.key_blocks[name].value = 1.0
    ob.data.shape_keys.key_blocks[name].keyframe_insert("value", frame=i)
    if k >= startframe:
        ob.data.shape_keys.key_blocks[name3].value = 0.0
        ob.data.shape_keys.key_blocks[name3].keyframe_insert("value", frame=i)
    if j <= endframe - 2:
        ob.data.shape_keys.key_blocks[name2].value = 0.0
        ob.data.shape_keys.key_blocks[name2].keyframe_insert("value", frame=i)

# Now we need to clean up and remove all the other meshes which are no longer used
bpy.ops.object.select_all(action='DESELECT')

# Selecting the first mesh
bpy.ops.object.select_name(name="Mesh")
bpy.ops.object.select_inverse()

# Deleting all other meshes
bpy.ops.object.delete()

# And select the mesh again
bpy.ops.object.select_name(name="Mesh")

# Automatic start animation
bpy.data.objects['Mesh'].active_shape_key_index = len(theFiles) - 2

# Initial frame
bpy.ops.screen.frame_jump(end=False)

# Put a camera in the scene
bpy.ops.object.camera_add(view_align=True, enter_editmode=False,
                          location=(cameraX, cameraY, cameraZ),
                          rotation=(1.109, 0.0108, 0.85))
bpy.data.objects['Camera'].draw_type = 'WIRE'

# Set the active camera
context = bpy.context
scene = context.scene
currentCameraObj = bpy.data.objects[bpy.context.active_object.name]
scene.camera = currentCameraObj

# Finally, selecting the mesh
bpy.ops.object.select_name(name="Mesh")


APPENDIX B. LIGHTS
This section lists the code that creates the lights in the scene using the three-point lighting method.

if theLights >= 1:
    # Add lamp into the scene
    bpy.ops.object.lamp_add(type='SPOT', view_align=False,
                            location=(-13.21, -16.96, 8.26),
                            rotation=(0.941318, 0.917498, -1.18762),
                            layers=(True, False, False, False, False, False,
                                    False, False, False, False, False, False,
                                    False, False, False, False, False, False,
                                    False, False))
    lamp1 = bpy.context.object

    # Configure the lighting setup
    lamp1.name = 'Key'
    lamp1.data.energy = 12.0
    lamp1.data.distance = 30.0
    lamp1.data.spot_size = 1.570797
    lamp1.data.spot_blend = 1
    lamp1.data.shadow_method = 'BUFFER_SHADOW'
    lamp1.data.shadow_buffer_type = 'HALFWAY'
    lamp1.data.shadow_filter_type = 'GAUSS'
    lamp1.data.shadow_buffer_soft = 20
    lamp1.data.shadow_buffer_size = 2048
    lamp1.data.shadow_buffer_bias = 1
    lamp1.data.shadow_buffer_samples = 16
    lamp1.data.use_auto_clip_start = True
    lamp1.data.use_auto_clip_end = True

if theLights >= 2:
    bpy.ops.object.lamp_add(type='SPOT', view_align=False,
                            location=(-12.85, 19.61, 0.057),
                            rotation=(1.53793, 1.53793, 3.68718),
                            layers=(True, False, False, False, False, False,
                                    False, False, False, False, False, False,
                                    False, False, False, False, False, False,
                                    False, False))
    lamp3 = bpy.context.object
    lamp3.name = 'Spot1'
    lamp3.data.energy = 4.0
    lamp3.data.distance = 25.0
    lamp3.data.spot_size = 1.396264
    lamp3.data.spot_blend = 1
    lamp3.data.shadow_method = 'BUFFER_SHADOW'
    lamp3.data.shadow_buffer_type = 'HALFWAY'
    lamp3.data.shadow_filter_type = 'GAUSS'
    lamp3.data.shadow_buffer_soft = 10
    lamp3.data.shadow_buffer_size = 2048
    lamp3.data.shadow_buffer_bias = 0.100
    lamp3.data.shadow_buffer_samples = 8
    lamp3.data.use_auto_clip_start = True
    lamp3.data.use_auto_clip_end = True


if theLights >= 3:
    bpy.ops.object.lamp_add(type='SPOT', view_align=False,
                            location=(19.825, -18.28, -0.93),
                            rotation=(1.61476, 0.709077, 0.853816),
                            layers=(True, False, False, False, False, False,
                                    False, False, False, False, False, False,
                                    False, False, False, False, False, False,
                                    False, False))
    lamp2 = bpy.context.object
    lamp2.name = 'Spot2'
    lamp2.data.energy = 12.0
    lamp2.data.distance = 25.0
    lamp2.data.spot_size = 1.047198
    lamp2.data.spot_blend = 1
    lamp2.data.shadow_method = 'BUFFER_SHADOW'
    lamp2.data.shadow_buffer_type = 'HALFWAY'
    lamp2.data.shadow_filter_type = 'GAUSS'
    lamp2.data.shadow_buffer_soft = 5
    lamp2.data.shadow_buffer_size = 2048
    lamp2.data.shadow_buffer_bias = 0.100
    lamp2.data.shadow_buffer_samples = 16
    lamp2.data.use_auto_clip_start = True
    lamp2.data.use_auto_clip_end = True
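The three nearly identical blocks in this appendix differ only in a handful of parameters. A data-driven sketch of the same setup (the numeric values are copied from the script; the table and helper function are hypothetical, not part of the tool):

```python
# (name, energy, distance, spot_size, location) for the three lamps
# created above, in the order the script adds them
THREE_POINT_LAMPS = [
    ('Key',   12.0, 30.0, 1.570797, (-13.21, -16.96, 8.26)),
    ('Spot1',  4.0, 25.0, 1.396264, (-12.85, 19.61, 0.057)),
    ('Spot2', 12.0, 25.0, 1.047198, (19.825, -18.28, -0.93)),
]

def lamps_for(the_lights):
    # The script creates the first theLights lamps (clamped to 0..3).
    return THREE_POINT_LAMPS[:max(0, min(the_lights, 3))]
```

Looping over such a table would avoid repeating the lamp_add call and the twenty-entry layers tuple three times.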


APPENDIX C. MATERIALS
This section lists the code that imports the default materials described in Section 3.3.

# Finally, selecting the mesh
bpy.ops.object.select_name(name="Mesh")
ob = bpy.context.active_object

# Add a slot for the material
bpy.ops.object.material_slot_add()

# Choose the material
if materials == 'none':
    print("No material")
if materials == 'water':
    bpy.ops.wm.link_append(filepath=thePathMaterials + "water_fresh_water.blend/",
                           directory=thePathMaterials + "water_fresh_water.blend/Material/",
                           link=False, filename="water")
    ob.active_material = bpy.data.materials['water']
if materials == 'wood':
    bpy.ops.wm.link_append(filepath=thePathMaterials + "wood_varnished_wood.blend/",
                           directory=thePathMaterials + "wood_varnished_wood.blend/Material/",
                           link=False, filename="Vanished_Wood")
    ob.active_material = bpy.data.materials['Vanished_Wood']
if materials == 'amber':
    bpy.ops.wm.link_append(filepath=thePathMaterials + "amber.blend/",
                           directory=thePathMaterials + "amber.blend/Material/",
                           link=False, filename="amber")
    ob.active_material = bpy.data.materials['amber']
if materials == 'clouds':
    bpy.ops.wm.link_append(filepath=thePathMaterials + "dense_clouds.blend/",
                           directory=thePathMaterials + "dense_clouds.blend/Material/",
                           link=False, filename="dense_clouds")
    ob.active_material = bpy.data.materials['dense_clouds']
if materials == 'fireball':
    bpy.ops.wm.link_append(filepath=thePathMaterials + "fireball.blend/",
                           directory=thePathMaterials + "fireball.blend/Material/",
                           link=False, filename="fireball")
    ob.active_material = bpy.data.materials['fireball']


if materials == 'bronzeMetal':
    bpy.ops.wm.link_append(filepath=thePathMaterials + "bronze_voronoi.blend/",
                           directory=thePathMaterials + "bronze_voronoi.blend/Material/",
                           link=False, filename="Bronze Voronoi")
    ob.active_material = bpy.data.materials['Bronze Voronoi']
if materials == 'crystal':
    bpy.ops.wm.link_append(filepath=thePathMaterials + "crystal.blend/",
                           directory=thePathMaterials + "crystal.blend/Material/",
                           link=False, filename="Crystal")
    ob.active_material = bpy.data.materials['Crystal']
if materials == 'hotmetal':
    bpy.ops.wm.link_append(filepath=thePathMaterials + "red_hot_metal.blend/",
                           directory=thePathMaterials + "red_hot_metal.blend/Material/",
                           link=False, filename="Red Hot Metal")
    ob.active_material = bpy.data.materials['Red Hot Metal']
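The chain of if statements in this appendix maps each material keyword to a .blend library file and the material name inside it. The same mapping expressed as a lookup table (a sketch only; the keyword/file/material triples are copied from the script above, the helper function is hypothetical):

```python
# material keyword -> (library .blend file, material name inside it)
MATERIALS = {
    'water':       ('water_fresh_water.blend',   'water'),
    'wood':        ('wood_varnished_wood.blend', 'Vanished_Wood'),
    'amber':       ('amber.blend',               'amber'),
    'clouds':      ('dense_clouds.blend',        'dense_clouds'),
    'fireball':    ('fireball.blend',            'fireball'),
    'bronzeMetal': ('bronze_voronoi.blend',      'Bronze Voronoi'),
    'crystal':     ('crystal.blend',             'Crystal'),
    'hotmetal':    ('red_hot_metal.blend',       'Red Hot Metal'),
}

def material_source(keyword):
    # Returns (blend file, material name), or None for 'none'
    # and any unknown keyword.
    return MATERIALS.get(keyword)
```

A single wm.link_append call driven by this table would replace the eight repeated branches.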

