
COMPUTER GRAPHICS AND VISUALISATION

COMPUTER SCIENCE AND ENGINEERING
DIVYASHREE J

SJB INSTITUTE OF TECHNOLOGY
CHAPTER 1
Input & interaction, Curves and Computer

SECTION 1

Input and Interaction

Input devices

In computing, an input device is a piece of computer hardware equipment used to provide data and control signals to an information processing system such as a computer or information appliance. Examples of input devices include keyboards, mice, scanners, digital cameras and joysticks.

Input devices in computer graphics

An input device is any hardware device that sends data to the computer. Without input devices, a computer would only be a display device and would not allow users to interact with it, much like a TV. A keyboard, for example, is an input device.

https://cs.appstate.edu/~rt/cs4465/notes/chap3.pdf
Input devices can be categorized based on:

• modality of input (e.g. mechanical motion, audio, visual, etc.)
• whether the input is discrete (e.g. pressing of a key) or continuous (e.g. a mouse's position, though digitized into a discrete quantity, is fast enough to be considered continuous)
• the number of degrees of freedom involved (e.g. two-dimensional traditional mice, or three-dimensional navigators designed for CAD applications)

Pointing devices, which are input devices used to specify a position in space, can further be classified according to:

• Whether the input is direct or indirect. With direct input, the input space coincides with the display space, i.e. pointing is done in the space where visual feedback or the pointer appears. Touchscreens and light pens involve direct input. Examples involving indirect input include the mouse and trackball.
• Whether the positional information is absolute (e.g. on a touch screen) or relative (e.g. with a mouse that can be lifted and repositioned).

Direct input is almost necessarily absolute, but indirect input may be either absolute or relative. For example, digitizing graphics tablets that do not have an embedded screen involve indirect input and sense absolute positions and are often run in an absolute input mode, but they may also be set up to simulate a relative input mode like that of a touchpad, where the stylus or puck can be lifted and repositioned.

Video input devices are used to digitize images or video from the outside world into the computer. The information can be stored in a multitude of formats depending on the user's requirement. Examples of types of video input devices include:

• Digital camera
• Digital camcorder
• Portable media player
• Webcam
• Microsoft Kinect Sensor
• Image scanner
• Fingerprint scanner
• Barcode reader
• 3D scanner
• Laser rangefinder
• Eye gaze tracker

Audio input devices are used to capture sound. In some cases, an audio output device can be used as an input device, in order to capture produced sound. Audio input devices allow a user to send audio signals to a computer for processing, recording, or carrying out commands. Devices such as microphones allow users to speak to the computer in order to record a voice message or navigate software. Aside from recording, audio input devices are also used with speech recognition software.

Examples of types of audio input devices include:

• Microphones
• MIDI keyboard or other digital musical instrument

Clients and Servers

The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.[1] Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.

The client-server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services. Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a service.

Whether a computer is a client, a server, or both, is determined by the nature of the application that requires the service functions. For example, a single computer can run web server and file server software at the same time to serve different data to clients making different kinds of requests. Client software can also communicate with server software within the same computer.[2] Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication.

In general, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the well-known application protocol, i.e. the content and the formatting of the data for the requested service. Clients and servers exchange messages in a request–response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol.

All client-server protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API).[3] The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange.[4]

A server may receive requests from many distinct clients in a short period of time. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, server software may limit the availability to clients. Denial of service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates.
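The request–response pattern described above can be made concrete with a few lines of socket code. The sketch below is illustrative and not part of the original text: it is a minimal TCP client written against the POSIX sockets API; the server address (127.0.0.1), the port (8080), and the request string are made-up placeholders, and error handling is kept to a minimum.

/* Sketch: a minimal request-response client (POSIX sockets). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);   /* TCP socket */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(8080);              /* assumed server port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        return 1;
    }

    const char *request = "GET /index.html\n";    /* the client initiates the session */
    write(sock, request, strlen(request));

    char response[1024];
    ssize_t n = read(sock, response, sizeof(response) - 1);   /* the server replies */
    if (n > 0) {
        response[n] = '\0';
        printf("server said: %s\n", response);
    }

    close(sock);
    return 0;
}

A matching server would bind to the same port, accept the connection, read the request, and write back a response, which is exactly the division of roles described above.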
Display Lists

A display list (or display file) is a series of graphics commands that define an output image. The image is created (rendered) by executing the commands to combine various primitives. This activity is most often performed by specialized display or processing hardware partly or completely independent of the system's CPU for the purpose of freeing the CPU from the overhead of maintaining the display, and may provide output features or speed beyond the CPU's capability.

For a display device without a frame buffer, such as the old vector graphics displays, the commands were executed every fraction of a second to maintain and animate the output. In modern systems, the commands need only be executed when they have changed or in order to refresh the output (e.g., when restoring a minimized window).

A display list can represent both two- and three-dimensional scenes. Systems that make use of a display list to store the scene are called retained mode systems, as opposed to immediate mode systems.
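OpenGL's legacy API has a direct counterpart to this idea: display lists compiled with glNewList()/glEndList() and replayed with glCallList(). The small complete program below is an illustrative sketch, not code from the text; it records a wire-frame square once and replays it in the display callback.

/* Sketch: compiling and replaying an OpenGL display list (legacy API). */
#include <GL/glut.h>

GLuint box_list;                        /* handle returned by glGenLists */

void build_list(void)
{
    box_list = glGenLists(1);           /* reserve one display-list name */
    glNewList(box_list, GL_COMPILE);    /* record commands without executing them */
        glBegin(GL_LINE_LOOP);          /* a simple wire-frame square */
            glVertex2f(-0.5f, -0.5f);
            glVertex2f( 0.5f, -0.5f);
            glVertex2f( 0.5f,  0.5f);
            glVertex2f(-0.5f,  0.5f);
        glEnd();
    glEndList();
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glCallList(box_list);               /* replay the retained commands */
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("display list demo");
    build_list();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

Because the geometry is retained by the library, the display callback only replays the list instead of re-issuing every vertex, which is the retained-mode behaviour described above.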

One of the earliest popular systems with a true display list was the Atari 8-bit family. The display list (actually called so in Atari terminology) is a series of instructions for ANTIC, the video co-processor used in these machines. This program, stored in the computer's memory and executed by ANTIC in real time, can specify blank lines, any of six text modes and eight graphics modes, which sections of the screen can be horizontally or vertically fine scrolled, and trigger Display List Interrupts (called Raster interrupts or HBI on other systems).

The Amstrad PCW family contains a Display List function called the 'Roller RAM'. This is a 512-byte RAM area consisting of 256 16-bit vectors into RAM, one for each line of the 720 × 256 pixel display. Each vector identifies the location of 90 bytes of monochrome pixels that hold the line's 720 pixel states. The 90 bytes of 8 pixel states are actually spaced at 8-byte intervals, so there are 7 unused bytes between each byte of pixel data. This suits how the text-orientated PCW constructs a typical screen buffer in RAM, where the first character's 8 rows are stored in the first 8 bytes, the second character's rows in the next 8 bytes and so on. The Roller RAM was implemented to speed up display scrolling, as it would have been unacceptably slow for its 3.4 MHz Z80 to move up the 23 KB display buffer 'by hand', i.e. in software. The Roller RAM starting entry used at the beginning of a screen refresh is controlled by a Z80-writable I/O register. Therefore, the screen can be scrolled simply by changing this I/O register.

Another system using a Display List-like feature in hardware is the Amiga, which, not coincidentally, was also designed by some of the same people who made the Atari 8-bits' custom hardware. The Amiga display hardware was extremely sophisticated for its time and, once directed to produce a display mode, it would continue to do so automatically for every following scan line. The computer also included a dedicated co-processor, called "Copper", which ran a simple program or 'Copper List' intended for modifying hardware registers in sync with the display. The Copper List instructions could direct the Copper to wait for the display to reach a specific position on the screen, and then change the contents of hardware registers. In effect, it was a processor dedicated to servicing Raster interrupts. The Copper was used by Workbench to mix multiple display modes (multiple resolutions and color palettes on the monitor at the same time), and by numerous programs to create rainbow and gradient effects on the screen. It was also capable of sprite multiplexing, repositioning a number of hardware sprites available per scanline.

In more primitive systems, the results of a display list can be simulated, though at the cost of CPU-intensive writes to certain display-mode, color-control, or other visual effect registers in the video device, rather than a series of rendering commands executed by the device. Thus, one must create the displayed image using some other rendering process, either before or while the CPU-driven display generation executes. In many cases, the image is also modified or re-rendered between frames. The image is then displayed in various ways, depending on the exact way in which the CPU-driven display code is implemented. Examples of the results possible on these older machines requiring CPU-driven video include effects such as the Commodore 64/128's FLI mode, or Rainbow Processing on the ZX Spectrum.

Display Lists and Modelling

3D computer graphics or three-dimensional computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real-time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and 3D may use 2D rendering techniques.

3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object. A model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations. With 3D printing, 3D models are similarly rendered into a 3D physical representation of the model, with limitations to how accurately the rendering can match the virtual model.
Programming Event Driven Input

In computer programming, event-driven programming is a programming paradigm in which the flow of the program is determined by events such as user actions (mouse clicks, key presses), sensor outputs, or messages from other programs/threads. Event-driven programming is the dominant paradigm used in graphical user interfaces and other applications (e.g. JavaScript web applications) that are centered on performing certain actions in response to user input. This is also true of programming for device drivers (e.g. in USB device driver stacks[1]).

In an event-driven application, there is generally a main loop that listens for events, and then triggers a callback function when one of those events is detected. In embedded systems the same may be achieved using hardware interrupts instead of a constantly running main loop. Event-driven programs can be written in any programming language, although the task is easier in languages that provide high-level abstractions, such as closures.
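In OpenGL programs the event loop and callbacks are supplied by GLUT. The sketch below is an illustrative example, not code from the text: it registers keyboard and mouse callbacks and then hands control to glutMainLoop(), which dispatches each incoming event to the matching function.

/* Sketch: event-driven input with GLUT callbacks. */
#include <stdio.h>
#include <stdlib.h>
#include <GL/glut.h>

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
}

/* Called whenever an ASCII key is pressed; (x, y) is the mouse position. */
void keyboard(unsigned char key, int x, int y)
{
    if (key == 'q' || key == 27)          /* 'q' or Esc quits */
        exit(0);
    printf("key '%c' pressed at (%d, %d)\n", key, x, y);
}

/* Called on every mouse button press or release. */
void mouse(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        printf("left click at (%d, %d)\n", x, y);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("event demo");
    glutDisplayFunc(display);             /* redraw callback */
    glutKeyboardFunc(keyboard);           /* key-press callback */
    glutMouseFunc(mouse);                 /* mouse-button callback */
    glutMainLoop();                       /* main loop: wait for events, dispatch callbacks */
    return 0;
}

glutMotionFunc() and glutSpecialFunc() register further callbacks for mouse motion and for non-ASCII keys such as the arrow keys.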
Menus Picking, Building Interactive Models, Animating Interactive Programs

Computer animation is the process used for generating animated images. The more general term computer-generated imagery (CGI) encompasses both static scenes and dynamic images, while computer animation only refers to the moving images. Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still used for stylistic, low bandwidth, and faster real-time renderings. Sometimes, the target of the animation is the computer itself, but sometimes film as well.

Computer animation is essentially a digital successor to the stop motion techniques using 3D models, and traditional animation techniques using frame-by-frame animation of 2D illustrations. Computer-generated animations are more controllable than other more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and they allow the creation of images that would not be feasible using any other technology. They can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props. To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new image that is similar to it, but advanced slightly in time (usually at a rate of 24, 25 or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.

For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used, with or without that virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered.[1]

For 3D animations, all frames must be rendered after the modeling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a different format or medium, like digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low bandwidth animations transmitted via the internet (e.g. Adobe Flash, X3D) often use software on the end-user's computer to render in real time as an alternative to streaming or pre-loaded high bandwidth animations.
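Tweening between key frames is, at its simplest, interpolation of the animated parameters over time. The fragment below is an illustrative sketch, not part of the original text: it linearly interpolates a 2D position between two key frames; real animation systems interpolate many more channels (rotation, color, shape) and often use splines rather than straight lines.

/* Sketch: linear tweening of a 2D position between two key frames. */
#include <stdio.h>

typedef struct { float time; float x, y; } Keyframe;

/* Linear interpolation of the position for any time t between two keys. */
void tween(const Keyframe *a, const Keyframe *b, float t, float *x, float *y)
{
    float u = (t - a->time) / (b->time - a->time);   /* 0.0 at key a, 1.0 at key b */
    *x = (1.0f - u) * a->x + u * b->x;
    *y = (1.0f - u) * a->y + u * b->y;
}

int main(void)
{
    Keyframe start = {0.0f,   0.0f,  0.0f};   /* key frame at t = 0 s */
    Keyframe end   = {1.0f, 100.0f, 50.0f};   /* key frame at t = 1 s */
    float x, y;
    /* Generate the in-between frames, as a 24/25/30 fps renderer would. */
    for (int frame = 0; frame <= 24; frame++) {
        tween(&start, &end, frame / 24.0f, &x, &y);
        printf("frame %2d: (%6.2f, %6.2f)\n", frame, x, y);
    }
    return 0;
}

Replacing u with a smoothed value (an ease-in/ease-out curve) gives less mechanical-looking motion.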
Design of Interactive Programs

Interactive Design is defined as a user-oriented field of study that focuses on meaningful communication of media through cyclical and collaborative processes between people and technology. Successful interactive designs have simple, clearly defined goals, a strong purpose and an intuitive screen interface.

In some cases Interactive Design is equated to Interaction Design; however, in the specialized study of Interactive Design there are defined differences. To assist in this distinction, Interaction Design can be thought of as:

• Making devices usable, useful, and fun, focusing on efficient and intuitive hardware[3]
• A fusion of product design, computer science, and communication design[3]
• A process of solving specific problems under a specific set of contextual circumstances[3]
• The creation of form for the behavior of products, services, environments, and systems[4]
• Making dialogue between technology and user invisible, i.e. reducing the limitations of communication through and with technology[5]
• About connecting people through various products and services[6]

Whereas Interactive Design can be thought of as:

• Giving purpose to Interaction Design through meaningful experiences[7]
• Consisting of six main components, including user control, responsiveness, real-time interactions, connectedness, personalization, and playfulness[8]
• Focusing on the use and experience of the software[9]
• Retrieving and processing information through on-demand responsiveness[10]
• Acting upon information to transform it[11]
• The constant changing of information and media, regardless of changes in the device[12]
• Providing interactivity through a focus on the capabilities and constraints of human cognitive processing[13]

While both definitions indicate a strong focus on the user, the difference arises from the purposes of Interactive Design and Interaction Design. In essence, Interactive Design involves the creation of meaningful uses of hardware and systems, and Interaction Design is the design of those hardware and systems. Interaction Design without Interactive Design provides only hardware or an interface. Interactive Design without Interaction Design cannot exist, for there is no platform for it to be used by the user.

Logic Operations

Programming languages typically support a set of operators: constructs which behave generally like functions, but which differ syntactically or semantically from usual functions. Common simple examples include arithmetic (addition with +), comparison (with >), and logical operations (such as AND or &&). More involved examples include assignment (usually = or :=), field access in a record or object (usually .), and the scope resolution operator (often ::). Languages usually define a set of built-in operators, and in some cases allow user-defined operators.
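As a small illustration (added here, not part of the original text), the following C fragment uses several of the built-in operators just listed: arithmetic, comparison, logical AND, assignment, and structure member access. C itself has no :: scope-resolution operator; that example comes from C++.

/* Sketch: common built-in operators in C. */
#include <stdio.h>

struct point { int x, y; };

int main(void)
{
    int a = 2 + 3;                 /* arithmetic (+) and assignment (=) */
    int b = 10;
    struct point p;
    p.x = a;                       /* field access with the . operator */
    p.y = b;

    if (a > 0 && b > a)            /* comparison (>) and logical AND (&&) */
        printf("p = (%d, %d)\n", p.x, p.y);
    return 0;
}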
Review 1.1

Question 1 of 5


Identify the input device.

A. monitor

B. keyboard

C. both

D. none

Movie 1.1: curve example


SECTION 2

Curved surfaces

Definition of surfaces of the solids or 3-D figures: ... The objects having plane surfaces are called plane objects. The surfaces of a book, match box, almirah, table, etc., are examples of plane surfaces. (ii) Curved surface: the surfaces which are not flat are called curved surfaces.

Types of Curves

A curve is an infinitely large set of points. Each point has two neighbors except endpoints. Curves can be broadly classified into three categories: explicit, implicit, and parametric curves.

Implicit Curves

Implicit curve representations define the set of points on a curve by employing a procedure that can test whether a point is on the curve. Usually, an implicit curve is defined by an implicit function of the form
f(x, y) = 0

It can represent multivalued curves (multiple y values for an x value). A common example is the circle, whose implicit representation is

x² + y² - R² = 0

Explicit Curves

A mathematical function y = f(x) can be plotted as a curve. Such a function is the explicit representation of the curve. The explicit representation is not general, since it cannot represent vertical lines and is also single-valued. For each value of x, only a single value of y is normally computed by the function.
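The difference between the two representations can be made concrete with a few lines of code. The sketch below is illustrative and not from the original text: it tests a point against the implicit form of a circle and evaluates an explicit curve y = f(x); the tolerance value is an arbitrary choice.

/* Sketch: implicit vs. explicit curve representations. */
#include <math.h>
#include <stdio.h>

/* Implicit form: f(x, y) = x^2 + y^2 - R^2.  A value of zero (within a
   tolerance) means the point lies on the circle of radius R. */
double circle_implicit(double x, double y, double R)
{
    return x * x + y * y - R * R;
}

/* Explicit form: y = f(x).  Here the upper half of the same circle;
   only one y value can be returned for each x. */
double circle_explicit_upper(double x, double R)
{
    return sqrt(R * R - x * x);
}

int main(void)
{
    double R = 1.0;
    double x = 0.6, y = 0.8;                       /* a point to test */

    if (fabs(circle_implicit(x, y, R)) < 1e-9)     /* arbitrary tolerance */
        printf("(%g, %g) lies on the circle\n", x, y);

    printf("explicit upper half at x = %g: y = %g\n", x, circle_explicit_upper(x, R));
    return 0;
}

The implicit test works for any point in the plane, while the explicit form can only return one y per x, which is why it cannot represent the full circle.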
Quadric Surfaces

Quadric surfaces are defined by quadratic equations in two-dimensional space. Spheres and cones are examples of quadrics. The quadric surfaces of RenderMan are surfaces of revolution in which a finite curve in two dimensions is swept in three-dimensional space about one axis to create a surface.

A frequently used class of objects is the quadric surfaces, which are described with second-degree equations (quadratics). They include spheres, ellipsoids, tori, paraboloids, and hyperboloids. Quadric surfaces, particularly spheres and ellipsoids, are common elements of graphics scenes, and they are often available in graphics packages as primitives from which more complex objects can be constructed.

Sphere

In Cartesian coordinates, a spherical surface with radius r centered on the coordinate origin is defined as the set of points (x, y, z) that satisfy the equation

x² + y² + z² = r²
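In OpenGL, such quadrics are available through the GLU utility library rather than the core API. The fragment below is an illustrative sketch, not taken from the text: it creates a GLU quadric object and renders a wire-frame sphere; the radius and tessellation counts are arbitrary.

/* Sketch: drawing a quadric sphere with the GLU utility library. */
#include <GL/glut.h>      /* pulls in GL/gl.h and GL/glu.h */

GLUquadric *quad;         /* GLU quadric object */

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    gluSphere(quad, 0.8, 24, 16);   /* radius 0.8, 24 slices, 16 stacks */
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("quadric sphere");
    quad = gluNewQuadric();
    gluQuadricDrawStyle(quad, GLU_LINE);   /* wire frame instead of filled */
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

GLUT also provides ready-made quadric shapes such as glutWireSphere(), glutSolidSphere(), and glutSolidTorus().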

OpenGL Quadric-Surface and Cubic-Surface Functions, Bezier Spline Curves and Bezier Surfaces

Bézier surfaces are a species of mathematical spline used in computer graphics, computer-aided design, and finite element modeling. As with the Bézier curve, a Bézier surface is defined by a set of control points.

Properties of Bezier Curves

Bezier curves have the following properties:

• They generally follow the shape of the control polygon, which consists of the segments joining the control points.
• They always pass through the first and last control points.
• They are contained in the convex hull of their defining control points.
• The degree of the polynomial defining the curve segment is one less than the number of defining polygon points. Therefore, for 4 control points, the degree of the polynomial is 3, i.e. a cubic polynomial.
• A Bezier curve generally follows the shape of the defining polygon.
• The direction of the tangent vector at the end points is the same as that of the vector determined by the first and last segments.
• The convex hull property for a Bezier curve ensures that the polynomial smoothly follows the control points.
• No straight line intersects a Bezier curve more times than it intersects its control polygon.
• They are invariant under an affine transformation.
• Bezier curves exhibit global control: moving a control point alters the shape of the whole curve.
• A given Bezier curve can be subdivided at a point t = t0 into two Bezier segments which join together at the point corresponding to the parameter value t = t0.
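For four control points the curve is the cubic Bézier p(t) = (1 - t)³P0 + 3t(1 - t)²P1 + 3t²(1 - t)P2 + t³P3 for t in [0, 1]. The sketch below is illustrative, not from the text: it evaluates this polynomial and prints a few points along the curve; the control points are arbitrary.

/* Sketch: evaluating a cubic Bezier curve from its four control points. */
#include <stdio.h>

typedef struct { double x, y; } Point;

/* Bernstein form: p(t) = (1-t)^3 P0 + 3t(1-t)^2 P1 + 3t^2(1-t) P2 + t^3 P3 */
Point bezier3(Point p0, Point p1, Point p2, Point p3, double t)
{
    double u  = 1.0 - t;
    double b0 = u * u * u;
    double b1 = 3.0 * t * u * u;
    double b2 = 3.0 * t * t * u;
    double b3 = t * t * t;
    Point p;
    p.x = b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x;
    p.y = b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y;
    return p;
}

int main(void)
{
    /* arbitrary control polygon */
    Point p0 = {0, 0}, p1 = {1, 2}, p2 = {3, 3}, p3 = {4, 0};
    for (int i = 0; i <= 10; i++) {
        Point p = bezier3(p0, p1, p2, p3, i / 10.0);
        printf("t = %.1f  ->  (%.3f, %.3f)\n", i / 10.0, p.x, p.y);
    }
    return 0;
}

At t = 0 and t = 1 the curve reproduces P0 and P3, which is the endpoint-interpolation property listed above.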
Corresponding OpenGL functions

/* Recursive subdivision of a tetrahedron to form the 3D
   Sierpinski gasket */                    /* tetra_vtu.cpp */

#include <stdlib.h>
#include <stdio.h>
#include <GL/glut.h>

typedef float point[3];

/* initial tetrahedron */
point v[] = {{0.0, 0.0, 1.0}, {0.0, 0.942809, -0.33333},
             {-0.816497, -0.471405, -0.333333},
             {0.816497, -0.471405, -0.333333}};

static GLfloat theta[] = {0.0, 0.0, 0.0};

int n;

void triangle(point a, point b, point c)
/* display one triangle using a line loop for wire frame, a
   single normal for constant shading, or three normals for
   interpolative shading */
{
    glBegin(GL_LINE_LOOP);
        glVertex3fv(a);
        glVertex3fv(b);
        glVertex3fv(c);
    glEnd();
}

/* The remaining routines of the program (the recursive subdivision,
   the display callback, and main) are not shown in this excerpt. */

OpenGL is a low-level graphics library specification. It makes available to the programmer a small set of geometric primitives - points, lines, polygons, images, and bitmaps. OpenGL provides a set of commands that allow the specification of geometric objects in two or three dimensions, using the provided primitives, together with commands that control how these objects are rendered (drawn).

Since OpenGL drawing commands are limited to those that generate simple geometric primitives (points, lines, and polygons), the OpenGL Utility Toolkit (GLUT) has been created to aid in the development of more complicated three-dimensional objects such as a sphere, a torus, and even a teapot. GLUT may not be satisfactory for full-featured OpenGL applications, but it is a useful starting point for learning OpenGL.

GLUT is designed to fill the need for a window system independent programming interface for OpenGL programs. The interface is designed to be simple yet still meet the needs of useful OpenGL programs. Removing window system operations from OpenGL is a sound decision because it allows the OpenGL graphics system to be retargeted to various systems including powerful but expensive graphics workstations as well as mass-production graphics systems like video games, set-top boxes for interactive television, and PCs.

GLUT simplifies the implementation of programs using OpenGL rendering. The GLUT application programming interface (API) requires very few routines to display a graphics scene rendered using OpenGL. The GLUT routines also take relatively few parameters.

Rendering Pipeline

Most implementations of OpenGL have a similar order of operations, a series of processing stages called the OpenGL rendering pipeline. Although this is not a strict rule of how OpenGL is implemented, it provides a reliable guide for predicting what OpenGL will do. Geometric data (vertices, lines, and polygons) follow a path through the row of boxes that includes evaluators and per-vertex operations, while pixel data (pixels, images and bitmaps) are treated differently for part of the process. Both types of data undergo the same final step (rasterization) before the final pixel data is written to the frame buffer.
Per-Vertex and Primitive Assembly: For vertex data, the next step converts the vertices into primitives. Some types of vertex data are transformed by 4×4 floating-point matrices. Spatial coordinates are projected from a position in the 3D world to a position on your screen. In some cases, this is followed by perspective division, which makes distant geometric objects appear smaller than closer objects. Then viewport and depth operations are applied. The results at this point are geometric primitives, which are transformed with related color and depth values and guidelines for the rasterization step.
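The per-vertex transform and perspective division mentioned above can be written out directly. The sketch below is illustrative, not code from the text: it multiplies a homogeneous vertex by a 4×4 column-major matrix, as the fixed-function pipeline conceptually does, and then divides by w.

/* Sketch: transforming a homogeneous vertex by a 4x4 matrix,
   followed by perspective division. Column-major storage, as in OpenGL. */
#include <stdio.h>

void transform_vertex(const float m[16], const float in[4], float out[4])
{
    for (int row = 0; row < 4; row++)
        out[row] = m[row + 0]  * in[0]     /* column 0 */
                 + m[row + 4]  * in[1]     /* column 1 */
                 + m[row + 8]  * in[2]     /* column 2 */
                 + m[row + 12] * in[3];    /* column 3 */
}

int main(void)
{
    /* identity matrix except for a translation of (2, 0, 0) */
    float m[16] = {1, 0, 0, 0,
                   0, 1, 0, 0,
                   0, 0, 1, 0,
                   2, 0, 0, 1};
    float v[4]  = {1.0f, 1.0f, -5.0f, 1.0f};   /* homogeneous vertex */
    float clip[4];

    transform_vertex(m, v, clip);

    /* perspective division: divide by w to reach normalized coordinates */
    float ndc_x = clip[0] / clip[3];
    float ndc_y = clip[1] / clip[3];
    float ndc_z = clip[2] / clip[3];
    printf("after transform and division: (%g, %g, %g)\n", ndc_x, ndc_y, ndc_z);
    return 0;
}

OpenGL stores its matrices in column-major order, which is why the translation above appears in the last four entries of the array.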

Pixel Operations: While geometric data takes one path through the OpenGL rendering pipeline, pixel data takes a different route. Pixels from an array in system memory are first unpacked from one of a variety of formats into the proper number of components. Next the data is scaled, biased, processed by a pixel map, and sent to the rasterization step.

Gallery 1.1: sample graphics images

Rasterization: Rasterization is the conversion of both geometric and pixel data into fragments. Each fragment square corresponds to a pixel in the frame buffer. Line width, point size, shading model, and coverage calculations to support antialiasing are taken into consideration as vertices are connected into lines or the interior pixels are calculated for a filled polygon. Color and depth values are assigned for each fragment square. The processed fragment is then drawn into the appropriate buffer, where it has finally advanced to be a pixel and achieved its final resting place.

The first thing we need to do is call the glutInit() procedure. It should be called before any other GLUT routine because it initializes the GLUT library. The parameters to glutInit() should be the same as those to main(), specifically main(int argc, char** argv) and glutInit(&argc, argv), where argcp is a pointer to the program's unmodified argc variable from main. Upon return, the value pointed to by argcp will be updated, and argv is the program's unmodified argv variable from main. Like argcp, the data for argv will be updated.

The next thing we need to do is call the glutInitDisplayMode() procedure to specify the display mode for a window. You must first decide whether you want to use an RGBA (GLUT_RGBA) or color-index (GLUT_INDEX) color model. The RGBA mode stores its color buffers as red, green, blue, and alpha color components. The fourth color component, alpha, corresponds to the notion of opacity. An alpha value of 1.0 implies complete opacity, and an alpha value of 0.0 complete transparency. Color-index mode, in contrast, stores color buffers in indices. Your decision on color mode should be based on hardware availability and what your application requires. More colors can usually be simultaneously represented with RGBA mode than with color-index mode. And for special effects, such as shading, lighting, and fog, RGBA mode provides more flexibility. In general, use RGBA mode whenever possible. RGBA mode is the default.
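Putting these calls together, a minimal GLUT program looks roughly as follows. This is an illustrative skeleton rather than code from the text; the window size, position, and title are arbitrary choices.

/* Sketch: a minimal GLUT/OpenGL program skeleton. */
#include <GL/glut.h>

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);   /* clear to the current clear color */
    glFlush();                      /* force the commands to be executed */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);                         /* must precede other GLUT calls */
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGBA);  /* single-buffered RGBA window */
    glutInitWindowSize(500, 500);                  /* arbitrary size */
    glutInitWindowPosition(100, 100);              /* arbitrary position */
    glutCreateWindow("minimal GLUT program");
    glutDisplayFunc(display);                      /* register the redraw callback */
    glutMainLoop();                                /* enter the event loop */
    return 0;
}

The display callback registered with glutDisplayFunc() is invoked whenever the window needs to be redrawn; glutMainLoop() never returns.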
Glossary

3D: three-dimensional

Bezier curve: a spline curve defined by a set of control points

CPU: central processing unit

Digital: the value is either 1 or 0

Link: https://cs.appstate.edu/~rt/cs4465/notes/chap3.pdf