Fast 3D Graphics in Processing for Android

By Andres Colubri, October 3rd, UIC Innovation Center, Chicago

7. OpenGL and Processing

OpenGL ES
3D drawing in Android is handled by the GPU (Graphics Processing Unit) of the device. The most direct way to program 3D graphics on Android is by means of OpenGL ES. OpenGL ES is a cross-platform API for programming 2D and 3D graphics on embedded devices (consoles, phones, appliances, etc.), and consists of a subset of OpenGL.
http://developer.android.com/guide/topics/graphics/opengl.html
http://www.khronos.org/opengles/

OpenGL ES is also the 3D API on other mobile platforms, such as Nokia and the iPhone.

The graphics pipeline on the GPU

The graphics pipeline is the sequence of steps in the GPU from the data (coordinates, textures, etc) provided through the OpenGL ES API to the final image on the screen (or Frame Buffer)

Relationship between OpenGL and OpenGL ES. Source: http://wiki.maemo.org/OpenGL-ES

Mobile GPUs: the main GPU families found in mobile devices are Qualcomm Adreno, PowerVR SGX and NVIDIA Tegra.

Comparison of current mobile GPUs:
Medium performance: Qualcomm Adreno 200 (Nexus One, Desire, HTC Evo), PowerVR SGX 530 (Droid, DroidX)
High performance: Qualcomm Adreno 205 (Desire HD), PowerVR SGX 540 (Galaxy S, Vibrant, Captivate), NVIDIA Tegra 250
Useful websites for benchmark information about mobile GPUs:
http://smartphonebenchmarks.com/
http://www.glbenchmark.com/latest_results.jsp

Hardware requirements for 3D in Processing for Android
1. In principle, any GPU that supports OpenGL ES 1.1.
2. However, certain OpenGL features were missing in Android 2.1, so Froyo (2.2) is needed to take full advantage of the hardware.
3. Older GPUs found on devices such as the G1 might work, but the performance is limited.
4. As a general requirement for Processing, GPUs such as the Adreno 200 or PowerVR SGX 540/530 are recommended.

The A3D renderer
1. In Processing for Android there is no need to use OpenGL ES directly (although it is possible).
2. The renderer in Processing is the module that executes all the drawing commands.
3. During the first part of this workshop we used the A2D renderer, which only supports 2D drawing.
4. The drawing API in Processing uses OpenGL internally when selecting the A3D (Android 3D) renderer.
5. The renderer can be specified when setting the resolution of the output screen with the size() command: size(width, height, renderer), where renderer = A2D or A3D (see the minimal sketch below).
6. If no renderer is specified, then A2D is used by default.
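A minimal sketch (added here for illustration, not from the original slides) showing the renderer selection in practice; the only assumption is a device where the A3D renderer is available:

// Selecting the A3D renderer in size() is all that is needed to enable 3D drawing.
void setup() {
  size(480, 800, A3D);  // with A2D (or no third argument) only 2D drawing is available
}

void draw() {
  background(0);
  translate(width/2, height/2, 0);
  rotateY(frameCount * 0.01);
  box(150);  // a 3D primitive that requires the A3D renderer
}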

Offscreen drawing
We can create an offscreen A3D surface by using the createGraphics() method:

PGraphicsAndroid3D pg;

void setup() {
  size(480, 800, A3D);
  pg = createGraphics(300, 300, A3D);
  ...
}

void draw() {
  ...
  pg.beginDraw();
  pg.rect(100, 50, 100, 40);
  pg.endDraw();
  ...
}

The offscreen drawing can later be used as an image to texture an object or to combine with other layers:

cube.setTexture(pg.getOffscreenImage());

We will see more of this at the end.

8. Geometrical transformations

1. The coordinate system in Processing is defined with the X axis running from left to right, the Y axis from top to bottom and negative Z pointing away from the screen. In particular, the origin is at the upper left corner of the screen (see the axes sketch below).
2. Geometrical transformations (translations, rotations and scalings) are applied to the entire coordinate system.

(Image from Casey Reas and Ben Fry, <Getting Started with Processing>, O'Reilly Media, 2010.)
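A small sketch (added here for illustration, not part of the original slides) that draws the three axes as colored lines; the origin is moved to the center of the screen so all axes are visible:

void setup() {
  size(240, 400, A3D);
}

void draw() {
  background(0);
  translate(width/2, height/2, 0);  // move the origin to the center of the screen
  stroke(255, 0, 0);
  line(0, 0, 0, 100, 0, 0);   // +X runs to the right
  stroke(0, 255, 0);
  line(0, 0, 0, 0, 100, 0);   // +Y runs downwards
  stroke(0, 0, 255);
  line(0, 0, 0, 0, 0, -100);  // -Z points away from the screen
}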

Translations
The translate(dx, dy, dz) function displaces the coordinate system by the specified amount on each axis.

void setup() {
  size(240, 400, A3D);
  noStroke();
  fill(255, 150);
}

void draw() {
  background(0);
  translate(50, 50, 0);
  rect(60, 100, 100, 200);
}

Rotations
Rotations always have a rotation axis that passes through the origin of the coordinate system. This axis can be the X, Y or Z axis, or an arbitrary vector:
rotateX(angle), rotateY(angle), rotateZ(angle), rotate(angle, vx, vy, vz)

void setup() {
  size(240, 400, A3D);
  noStroke();
  fill(255, 150);
}

void draw() {
  background(0);
  rotateZ(PI / 4);
  rect(60, 100, 100, 200);
}

Scaling
Scaling can be uniform (the same scale factor on each axis) or not, since the scale(sx, sy, sz) function allows specifying different factors along each direction.

void setup() {
  size(240, 400, A3D);
  noStroke();
  fill(255, 150);
}

void draw() {
  background(0);
  scale(1.0, 1.5, 3.0);
  rect(60, 60, 60, 200);
}

Just a couple of important points about geometrical transformations:
1. The order of the transformations is important.
2. By combining translate() with rotate(), the rotations can be applied around any desired point.

void setup() {
  size(240, 400, A3D);
  noStroke();
  fill(255, 150);
}

void draw() {
  background(0);
  translate(50, 50, 0);
  rotateZ(PI / 4);
  rect(60, 60, 60, 200);
}

The transformation stack
1. The transformation stack we have in the 2D mode is also available in A3D through the functions pushMatrix() and popMatrix().
2. All the geometric transformations issued between two consecutive calls to pushMatrix() and popMatrix() will not affect the objects drawn outside.

Without the transformation stack, the translations accumulate from one box to the next:

void setup() {
  size(240, 400, A3D);
}

void draw() {
  background(0);
  translate(width/2, height/2);
  rotateY(frameCount*PI/60);
  fill(255, 0, 0);
  translate(-50, -50);
  box(100, 100, 100);
  fill(0, 255, 0);
  translate(50, -50);
  box(100, 100, 100);
  fill(0, 0, 255);
  translate(50, 50);
  box(100, 100, 100);
  fill(255, 255, 0);
  translate(-50, 50);
  box(100, 100, 100);
}

With pushMatrix()/popMatrix(), each translation only affects the box drawn inside the corresponding block:

void setup() {
  size(240, 400, A3D);
}

void draw() {
  background(0);
  translate(width/2, height/2);
  rotateY(frameCount*PI/60);
  fill(255, 0, 0);
  pushMatrix();
  translate(-50, -50);
  box(100, 100, 100);
  popMatrix();
  fill(0, 255, 0);
  pushMatrix();
  translate(50, -50);
  box(100, 100, 100);
  popMatrix();
  fill(0, 0, 255);
  pushMatrix();
  translate(50, 50);
  box(100, 100, 100);
  popMatrix();
  fill(255, 255, 0);
  pushMatrix();
  translate(-50, 50);
  box(100, 100, 100);
  popMatrix();
}

9. Camera and perspective
1. Configuring the view of the scene in A3D requires setting the camera location and the viewing volume.
2. This can be compared with setting up a physical camera in order to take a picture:
camera(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ)
perspective(fov, aspect, zNear, zFar)
ortho(left, right, bottom, top, near, far)
(Image from the OpenGL Red Book, first edition.)

Camera placement
1. The camera placement is specified by the eye position, the center of the scene and which axis is facing upwards: camera(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ)
2. If camera() is not called, A3D automatically sets it with the following values: width/2.0, height/2.0, (height/2.0) / tan(PI*60.0 / 360.0), width/2.0, height/2.0, 0, 0, 1, 0

void setup() {
  size(240, 400, A3D);
  fill(204);
}

void draw() {
  lights();
  background(0);
  camera(30.0, mouseY, 220.0, 0, 0, 0, 0, 1, 0);
  noStroke();
  box(90);
  stroke(255);
  line(-100, 0, 0, 100, 0, 0);
  line(0, -100, 0, 0, 100, 0);
  line(0, 0, -100, 0, 0, 100);
}

Perspective view
The viewing volume is a truncated pyramid, and the convergence of the lines towards the eye point creates a perspective projection where objects located farther away from the eye appear smaller.
(Image from http://jerome.jouvie.free.fr/OpenGl/Lessons/Lesson1.php)
perspective(fov, aspect, zNear, zFar)
perspective(PI/3.0, width/height, cameraZ/10.0, cameraZ*10.0), where cameraZ is ((height/2.0) / tan(PI*60.0/360.0)) (default values)

Orthographic view
In this case the viewing volume is a parallelepiped. All objects with the same dimensions appear the same size, regardless of whether they are near or far from the camera.
ortho(left, right, bottom, top, near, far)
ortho(0, width, 0, height, -10, 10) (default)

void setup() {
  size(240, 400, A3D);
  noStroke();
  fill(204);
}

void draw() {
  background(0);
  lights();
  if (mousePressed) {
    float fov = PI/3.0;
    float cameraZ = (height/2.0) / tan(PI * fov / 360.0);
    perspective(fov, float(width)/float(height), cameraZ/2.0, cameraZ*2.0);
  } else {
    ortho(-width/2, width/2, -height/2, height/2, -10, 10);
  }
  translate(width/2, height/2, 0);
  rotateX(-PI/6);
  rotateY(PI/3);
  box(160);
}

10. Creating 3D objects
A3D provides some functions for drawing predefined 3D primitives: sphere(r), box(w, h, d)

void setup() {
  size(240, 400, A3D);
  stroke(0);
}

void draw() {
  background(0);
  translate(width/2, height/2, 0);
  pushMatrix();
  rotateY(frameCount*PI/185);
  fill(200, 40, 100);
  box(150, 150, 150);
  popMatrix();
  pushMatrix();
  rotateX(-frameCount*PI/200);
  fill(200, 200, 200);
  sphere(50);
  popMatrix();
}

beginShape()/endShape()
1. The beginShape()/endShape() functions allow us to create complex objects by specifying the vertices and their connectivity (and optionally the normals and texture coordinates for each vertex).
2. This functionality is already present in A2D, with the difference that in A3D we can specify vertices with z coordinates.

Closed polygon:
beginShape();
vertex(30, 20, 0);
vertex(85, 20, 0);
vertex(85, 75, 0);
vertex(30, 75, 0);
endShape(CLOSE);

Individual triangles:
beginShape(TRIANGLES);
vertex(30, 75, 0);
vertex(40, 20, 0);
vertex(50, 75, 0);
vertex(60, 20, 0);
vertex(70, 75, 0);
vertex(80, 20, 0);
endShape();

Triangle strip:
beginShape(TRIANGLE_STRIP);
vertex(30, 75, 0);
vertex(40, 20, 0);
vertex(50, 75, 0);
vertex(60, 20, 0);
vertex(70, 75, 0);
vertex(80, 20, 0);
vertex(90, 75, 0);
endShape();

Individual quads:
beginShape(QUADS);
vertex(30, 20, 0);
vertex(30, 75, 0);
vertex(50, 75, 0);
vertex(50, 20, 0);
vertex(65, 20, 0);
vertex(65, 75, 0);
vertex(85, 75, 0);
vertex(85, 20, 0);
endShape();

Check the Processing reference for more details: http://processing.org/reference/beginShape_.html

Texturing
Texturing is an important technique in computer graphics that consists of using an image to "wrap" a 3D object in order to simulate a specific material, realistic "skin", illumination effects, etc.

Basic texture mapping (adapted from wikipedia.org). UV mapping: http://en.wikipedia.org/wiki/UV_mapping

Texture mapping becomes a very complex problem when we need to texture complicated three-dimensional shapes (organic forms). Finding the correct mapping from a 2D image to a 3D shape requires mathematical techniques that take into account edges, folds, etc. Image from: Ptex: Per-Face Texture Mapping for Production Rendering, by Brent Burley and Dylan Lacewell.

Simple shape texturing
Objects created with beginShape()/endShape() can be textured using any image loaded into Processing with the loadImage() function or created procedurally by manipulating the pixels individually.

PImage img;

void setup() {
  size(240, 240, A3D);
  img = loadImage("beach.jpg");
  textureMode(NORMAL);
}

void draw() {
  background(0);
  beginShape(QUADS);
  texture(img);
  vertex(0, 0, 0, 0, 0);
  vertex(width, 0, 0, 1, 0);
  vertex(width, height, 0, 1, 1);
  vertex(0, height, 0, 0, 1);
  endShape();
}

The texture mode can be NORMAL or IMAGE. Depending on the texture mode, we use normalized UV values or values relative to the image resolution (an IMAGE-mode variant is shown below).
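For comparison, a variant of the same quad using textureMode(IMAGE); this version is added here as an illustration (it is not in the original slides) and reuses the beach.jpg file from the example above, with the last two arguments of vertex() given in pixels of the texture image:

PImage img;

void setup() {
  size(240, 240, A3D);
  img = loadImage("beach.jpg");
  textureMode(IMAGE);  // texture coordinates are expressed in image pixels
}

void draw() {
  background(0);
  beginShape(QUADS);
  texture(img);
  vertex(0, 0, 0, 0, 0);
  vertex(width, 0, 0, img.width, 0);
  vertex(width, height, 0, img.width, img.height);
  vertex(0, height, 0, 0, img.height);
  endShape();
}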

beginShape()/endShape() in A3D supports setting more than one texture for different parts of the shape:

PImage img1, img2;

void setup() {
  size(240, 240, A3D);
  img1 = loadImage("beach.jpg");
  img2 = loadImage("peebles.jpg");
  noStroke();
  textureMode(NORMAL);
}

void draw() {
  background(0);
  beginShape(TRIANGLES);
  texture(img1);
  vertex(0, 0, 0, 0, 0);
  vertex(width, 0, 0, 1, 0);
  vertex(width, height, 0, 1, 1);
  texture(img2);
  vertex(width, height, 0, 1, 1);
  vertex(0, height, 0, 0, 1);
  vertex(0, 0, 0, 0, 0);
  endShape();
}

Lighting
1. A3D offers a local illumination model based on OpenGL's model.
2. It is a simple real-time illumination model, where each light source has 4 components: ambient + diffuse + specular + emissive = total.
3. This model doesn't allow the creation of shadows.
4. We can define up to 8 light sources.
5. Proper lighting calculations require specifying the normals of an object.

Ambient, Diffuse, Specular (from http://www.falloutsoftware.com/tutorials/gl/gl8.htm)

Some more good resources about lights in OpenGL:
http://jerome.jouvie.free.fr/OpenGl/Lessons/Lesson6.php
http://jerome.jouvie.free.fr/OpenGl/Tutorials/Tutorial12.php - Tutorial15.php
http://www.sjbaker.org/steve/omniv/opengl_lighting.html

In diffuse lighting, the angle between the normal of the object and the direction to the light source determines the intensity of the illumination. From iPhone 3D Programming, by Philip Rideout: http://iphone-3d-programming.labs.oreilly.com/ch04.html
http://jerome.jouvie.free.fr/OpenGl/Lessons/Lesson6.php

Light types in A3D
Ambient: ambient light doesn't come from a specific direction; the rays of light have bounced around so much that objects are evenly lit from all sides. Ambient lights are almost always used in combination with other types of lights.
ambientLight(v1, v2, v3, x, y, z)
v1, v2, v3: rgb color of the light
x, y, z: position

Directional: directional light comes from one direction and is stronger when hitting a surface squarely and weaker if it hits at a gentle angle. After hitting a surface, a directional light scatters in all directions.
directionalLight(v1, v2, v3, nx, ny, nz)
v1, v2, v3: rgb color of the light
nx, ny, nz: the direction the light is facing

Point: point light irradiates from a specific position.
pointLight(v1, v2, v3, x, y, z)
v1, v2, v3: rgb color of the light
x, y, z: position

Spot: a spot light emits light into an emission cone by restricting the emission area of the light source.
spotLight(v1, v2, v3, x, y, z, nx, ny, nz, angle, concentration)
v1, v2, v3: rgb color of the light
x, y, z: position
nx, ny, nz: the direction of the spotlight cone
angle (float): the angle of the spotlight cone
concentration: exponent determining the center bias of the cone

(A combined example is shown below.)
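A short sketch (added for illustration, not from the original slides) that combines an ambient and a point light using the signatures listed above:

void setup() {
  size(240, 400, A3D);
  noStroke();
  fill(200);
}

void draw() {
  background(0);
  // a dim ambient term plus a white point light placed between the camera and the scene
  ambientLight(40, 40, 40, 0, 0, 0);
  pointLight(255, 255, 255, width/2, height/2, 200);
  translate(width/2, height/2, 0);
  rotateY(frameCount * PI/300);
  box(120);  // box() provides its own normals, so it is lit correctly
}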

php .jouvie.free.fr/OpenGl/Lessons/Lesson6.http://jerome.

Normals: each vertex needs to have a normal defined so the light calculations can be performed correctly.

PVector a = PVector.sub(v2, v1);
PVector b = PVector.sub(v3, v1);
PVector n = a.cross(b);
normal(n.x, n.y, n.z);
vertex(v1.x, v1.y, v1.z);
vertex(v2.x, v2.y, v2.z);
vertex(v3.x, v3.y, v3.z);

Polygon winding: the ordering of the vertices that define a face determines which side is inside and which one is outside. Processing uses CCW ordering of the vertices, and the normals we provide to it must be consistent with this. (A self-contained version of this snippet is shown below.)
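The snippet above assumes v1, v2 and v3 already exist; the following sketch (an illustration added here, with made-up vertex values) places it inside a complete lit triangle:

PVector v1 = new PVector(-80, 60, 0);
PVector v2 = new PVector(0, -80, 0);
PVector v3 = new PVector(80, 60, 0);  // ordered so that the computed normal faces the viewer

void setup() {
  size(240, 400, A3D);
  noStroke();
  fill(200, 50, 50);
}

void draw() {
  background(0);
  pointLight(255, 255, 255, width/2, height/2, 200);
  translate(width/2, height/2, 0);
  PVector a = PVector.sub(v2, v1);
  PVector b = PVector.sub(v3, v1);
  PVector n = a.cross(b);  // perpendicular to the triangle, consistent with the vertex order
  n.normalize();           // lighting expects unit-length normals
  beginShape(TRIANGLES);
  normal(n.x, n.y, n.z);
  vertex(v1.x, v1.y, v1.z);
  vertex(v2.x, v2.y, v2.z);
  vertex(v3.x, v3.y, v3.z);
  endShape();
}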

11. 3D Text
Text in A3D works exactly the same as in A2D:
1. load/create fonts with loadFont()/createFont()
2. set the current font with textFont()
3. write text using the text() function

PFont fontA;

void setup() {
  size(240, 400, A3D);
  background(102);
  String[] fonts = PFont.list();
  fontA = createFont(fonts[0], 32);
  textFont(fontA, 32);
}

void draw() {
  fill(0);
  text("An", 10, 60);
  fill(51);
  text("droid", 10, 95);
  fill(204);
  text("in", 10, 130);
  fill(255);
  text("A3D", 10, 165);
}

1. The main addition in A3D is that text can be manipulated in three dimensions.
2. Each string of text we print to the screen with text() is contained in a rectangle that we can rotate, translate, scale, etc.
3. The rendering of text is also very efficient because it is accelerated by the GPU (A3D internally uses OpenGL textures to store the font characters).

fill(0);
pushMatrix();
translate(rPos, 10+25);
char k;
for (int i = 0; i < buff.length(); i++) {
  k = buff.charAt(i);
  translate(-textWidth(k), 0, 0);
  rotateY(-textWidth(k)/70.0);
  rotateX(textWidth(k)/70.0);
  scale(1.1);
  text(k, 0, 0);
}
popMatrix();

(rPos and buff are defined elsewhere in the sketch this fragment was taken from.)

PFont font;
char[] sentence = { 'S', 'O', 'Q', 'p', '5', 'S', 'O', 'Q', 'p', '5',
                    'S', 'O', 'Q', 'p', '5', 'S', 'O', 'Q', 'p', '5',
                    'A', 'A', 'A', 'A' };

void setup() {
  size(240, 400, A3D);
  font = loadFont("Ziggurat-HTF-Black-32.vlw");
  textFont(font, 32);
}

void draw() {
  background(0);
  translate(width/2, height/2, 0);
  for (int i = 0; i < 24; i++) {
    rotateY(TWO_PI / 24 + frameCount * PI/5000);
    pushMatrix();
    translate(100, 0, 0);
    //box(10, 10, 10);
    text(sentence[i], 0, 0);
    popMatrix();
  }
}

12. Models: the PShape3D class

Vertex Buffer Objects
1. Normally, the data that defines a 3D object (vertices, colors, normals, texture coordinates) is sent to the GPU at every frame.
2. The current GPUs in mobile devices have limited bandwidth, so data transfers can be slow.
3. If the geometry doesn't change (often) we can use Vertex Buffer Objects.
4. A Vertex Buffer Object (VBO) is a piece of GPU memory where we can upload the data defining an object (vertices, colors, etc.).
5. The upload (slow) occurs only once, and once the VBO is stored in GPU memory, we can draw it without uploading it again.
6. This is similar to the concept of textures (upload once, use multiple times).

For a good tutorial about VBOs, see this page: http://www.songho.ca/opengl/gl_vbo.html

The PShape3D class in A3D encapsulates VBOs
1. The PShape3D class in A3D encapsulates a VBO and provides a simple way to create and handle VBO data, including updates, data fetches, texturing, loading from OBJ files, etc.
2. A PShape3D has to be created with the total number of vertices known beforehand. Resizing is possible, but slow.
3. How vertices are interpreted depends on the geometry type specified at creation (POINTS, TRIANGLES, etc.), in a similar way to beginShape()/endShape().
4. Vertices in a PShape3D can be divided in groups, to facilitate normal and color assignment, texturing, and the creation of complex shapes with multiple geometry types (lines, triangles, etc.).

Manual creation of PShape3D models
1. A PShape3D can be created by specifying each vertex and its associated data (normal, color, etc.) manually.
2. Remember that the normal specification must be consistent with the CCW vertex ordering.

Creation:
cube = createShape(36, TRIANGLES);

cube.beginUpdate(VERTICES);
cube.setVertex(0, -100, -100, -100);
cube.setVertex(1, +100, -100, -100);
...
cube.endUpdate();

cube.beginUpdate(COLORS);
cube.setVertex(0, color(200, 50, 50, 150));
cube.setVertex(1, color(200, 50, 50, 150));
...
cube.endUpdate();

cube.beginUpdate(NORMALS);
cube.setVertex(0, 0, 0, -1);
cube.setVertex(1, 0, 0, -1);
...
cube.endUpdate();

Drawing:
translate(width/2, height/2, 0);
shape(cube);

1. The vertices of a PShape3D can be organized in groups, which allows assigning a different texture to each group.
2. In this way, a PShape3D can be textured with one or more images.
3. Groups also facilitate the assignment of colors and normals.

cube.beginUpdate(VERTICES);
cube.setGroup(0);
cube.setVertex(0, -100, -100, +100);
cube.setVertex(1, +100, -100, +100);
...
cube.setGroup(1);
cube.setVertex(6, -100, +100, -100);
cube.setVertex(7, +100, +100, -100);
...
cube.endUpdate();

cube.setGroupTexture(0, face0);
cube.setGroupTexture(1, face1);
cube.setGroupColor(0, color(200, 50, 50, 150));
cube.setGroupColor(1, color(200, 50, 50, 150));
cube.setGroupNormal(0, 0, 0, -1);
cube.setGroupNormal(1, 0, +1, 0);

OBJ loading
1. The OBJ format is a text-based data format to store 3D geometries. There is an associated MTL format for material definitions.
2. It is supported by many tools for 3D modeling (Blender, Google Sketchup, Maya, 3D Studio). For more info: http://en.wikipedia.org/wiki/Obj
3. A3D supports loading OBJ files into PShape3D objects with the loadShape() function.
4. Depending on the extension of the file passed to loadShape() (.svg or .obj), Processing will attempt to interpret the file as either SVG or OBJ.
5. The styles active at the time of loading the shape are used to generate the geometry.

PShape object;
float rotX;
float rotY;

void setup() {
  size(480, 800, A3D);
  object = loadShape("rose+vase.obj");
  noStroke();
}

void draw() {
  background(0);
  ambient(250, 250, 250);
  pointLight(255, 255, 255, 0, 0, 200);
  translate(width/2, height/2, 400);
  rotateX(rotY);
  rotateY(rotX);
  shape(object);
}

Copying SVG shapes into PShape3D
Once we load an SVG file into a PShapeSVG object, we can copy it into a PShape3D for increased performance:

PShape bot;
PShape3D bot3D;

public void setup() {
  size(480, 800, A3D);
  bot = loadShape("bot.svg");
  bot3D = createShape(bot);
}

public void draw() {
  background(255);
  shape(bot3D, mouseX, mouseY, 100, 100);
}

Shape recording
1. The basic idea of shape recording is to save the result of standard Processing drawing calls into a PShape3D object.
2. Shape recording into a PShape3D object is a feature that can greatly increase the performance of sketches that use complex geometries.
3. There are two ways of using shape recording: one is saving a single shape with beginShapeRecorder()/endShapeRecorder(), and the second allows saving multiple shapes with beginShapesRecorder()/endShapesRecorder().

PShape3D recShape;

void setup() {
  size(480, 800, A3D);
  beginShapeRecorder(QUADS);
  vertex(50, 50);
  vertex(width/2, 50);
  vertex(width/2, height/2);
  vertex(50, height/2);
  recShape = endShapeRecorder();
  ...
}

void draw() {
  ...
  shape(recShape);
  ...
}

PShape3D objects;

void setup() {
  size(480, 800, A3D);
  beginShapesRecorder();
  rect(0, 0, 1, 1);
  box(1, 1, 1);
  ...
  objects = endShapesRecorder();
  ...
}

void draw() {
  ...
  shape(objects);
  ...
}

The performance gains of using shape recording are quite substantial. It usually increases the rendering framerate by 100% or more.
Textured sphere: without shape recording 19 fps, with shape recording 55 fps.
Birds flock: without shape recording 7 fps, with shape recording 18 fps.

Particle systems
1. PShape3D allows creating models with the POINT_SPRITES geometry type.
2. With this type, each vertex is treated as a textured point sprite.
3. At least one texture must be attached to the model in order to texture the sprites. More than one sprite texture can be attached, by dividing the vertices in groups.
4. The position and color of the vertices can be updated in the draw loop in order to simulate motion (we have to create the shape as DYNAMIC).

Creation/initialization:
particles = createShape(1000, POINT_SPRITES, DYNAMIC);
particles.beginUpdate(VERTICES);
for (int i = 0; i < particles.getNumVertices(); i++) {
  float x = random(-30, 30);
  float y = random(-30, 30);
  float z = random(-30, 30);
  particles.setVertex(i, x, y, z);
}
particles.endUpdate();
sprite = loadImage("particle.png");
particles.setTexture(sprite);

Dynamic update:
particles.beginUpdate(VERTICES);
for (int i = 0; i < particles.getNumVertices(); i++) {
  float[] p = particles.getVertex(i);
  p[0] += random(-1, 1);
  p[1] += random(-1, 1);
  p[2] += random(-1, 1);
  particles.setVertex(i, p);
}
particles.endUpdate();

A complete sketch assembling these two fragments is shown below.
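This minimal complete sketch assembles the two fragments above; the texture file name and the createShape() call are taken from the slide, and the surrounding structure is an illustrative assumption rather than verified code:

PShape3D particles;
PImage sprite;

void setup() {
  size(480, 800, A3D);
  sprite = loadImage("particle.png");
  particles = createShape(1000, POINT_SPRITES, DYNAMIC);
  particles.beginUpdate(VERTICES);
  for (int i = 0; i < particles.getNumVertices(); i++) {
    particles.setVertex(i, random(-30, 30), random(-30, 30), random(-30, 30));
  }
  particles.endUpdate();
  particles.setTexture(sprite);
}

void draw() {
  background(0);
  translate(width/2, height/2, 0);
  // jitter every particle a little on each frame (possible because the shape was created as DYNAMIC)
  particles.beginUpdate(VERTICES);
  for (int i = 0; i < particles.getNumVertices(); i++) {
    float[] p = particles.getVertex(i);
    p[0] += random(-1, 1);
    p[1] += random(-1, 1);
    p[2] += random(-1, 1);
    particles.setVertex(i, p);
  }
  particles.endUpdate();
  shape(particles);
}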

Wrapping up
Let's look at an example where we do:
1. offscreen rendering to texture
2. a particle system (rendered to the offscreen surface)
3. dynamic texturing of a 3D shape using the result of the offscreen drawing
(A rough sketch of how these pieces fit together is given below.)
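The following sketch is only an outline of the idea, built from calls that appear earlier in these notes (createGraphics(), beginDraw()/endDraw(), getOffscreenImage(), texture()); the offscreen content here is just a moving ellipse standing in for the particle system, and the exact behavior of getOffscreenImage() may differ from what is assumed here:

PGraphicsAndroid3D pg;

void setup() {
  size(480, 800, A3D);
  pg = createGraphics(256, 256, A3D);   // offscreen A3D surface
}

void draw() {
  background(0);

  // 1. draw something that changes every frame into the offscreen surface
  pg.beginDraw();
  pg.background(0, 0, 100);
  pg.noStroke();
  pg.fill(255, 200, 0);
  pg.ellipse(128 + 100*cos(frameCount*0.05), 128 + 100*sin(frameCount*0.05), 40, 40);
  pg.endDraw();

  // 2. use the offscreen result as the texture of an on-screen quad
  translate(width/2, height/2, 0);
  rotateY(frameCount * PI/300);
  textureMode(NORMAL);
  beginShape(QUADS);
  texture(pg.getOffscreenImage());
  vertex(-150, -150, 0, 0, 0);
  vertex(150, -150, 0, 1, 0);
  vertex(150, 150, 0, 1, 1);
  vertex(-150, 150, 0, 0, 1);
  endShape();
}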

google.org/ Processing resources: http://processing.de/articles/Android/article.glbenchmark.com/index.org/ Processing.anddev.html SDK: http://developer.android.com/guide/topics/graphics/opengl.appinventor.html Mailing list: http://groups.com/sdk/index.html OpenGL: http://developer.Very good Android tutorial: http://www.vogella.com/ GLES benchmarks: http://www.html Guide: http://developer.com/group/android-developers Developers forums: http://www.processing.hardkernel.Android forum: http://forum.html Links Official google resources: http://developer.org/w/Android .anddev.Android forum: http://wiki.android.com/guide/index.android.com/result.com/p/min3d/ Developer’s devices: http://www.org/ Book: http://andbook.processing.google.com/ AppInventor: http://www.cyanogenmod.org/android-processing Processing.android.jsp Min3D (framework 3D): http://code.org/ Cyanogenmod project: http://www.