
2015

Multimedia Systems Laboratory Manual

Wondim D
School of Computing, Bahir Dar Institute of Technology-BDU
3/9/2015
BAHIR DAR UNIVERSITY
BAHIR DAR INSTITUTE OF TECHNOLOGY

School of Computing

Approval Certificate of the Reviewer

I hereby certify and approve that the Multimedia Systems laboratory manual prepared by

Wondim Dessiye, which I have reviewed and commented on, carries out the objectives of the

course, completes its practical part, and fulfills the standard of the curriculum of the

course.

___________________ ____________________ __________________

Name Signature Date

Contents
1. Digital Images ........................................................................................................................................ 5
1.1. Introduction ...................................................................................................................................... 5
1.2. Reading, Writing and Displaying Digital Images ............................................................................... 5
1.2.1. Reading Digital Images .................................................................................................................. 5
1.2.2. Drawing/Displaying Digital Images ............................................................................................... 6
1.2.3. Creating Digital Image ................................................................................................................... 7
1.2.4. Writing Digital Images ................................................................................................................... 7
1.2.5. Experiment 1: Reading, Displaying, and Writing Digital Images ................................................... 7
1.3. Manipulating Pixels ........................................................................................................................... 8
1.3.1. Experiment 2: Manipulating Pixels (I) ........................................................................................... 8
1.3.2. Experiment 3: Manipulating Pixels (II) .......................................................................................... 9
1.4. Creating your own Image Format ................................................................................................... 10
1.4.1. Bahir Dar Pictures (BDP) ............................................................................................................. 10
1.4.2. Experiment 4: Creating Image Formats ...................................................................................... 12
2. Color Models in Image and Video ....................................................................................................... 13
2.1. Introduction .................................................................................................................................... 13
2.2. Experiment 5: Working on Color Models........................................................................................ 13
3. Audio ................................................................................................................................................... 14
3.1. Introduction .................................................................................................................................... 14
3.2. Reading and Writing Sound Files .................................................................................................... 15
3.3. Converting Audio Data Formats...................................................................................................... 16
3.4. Experiment 6: Reading, writing and converting Audio Data .......................................................... 16
3.5. Playing Back and Recording Audio Data ......................................................................................... 17
3.6. Experiment 7: Playing and Recording Audio Data ......................................................................... 18
4. Video ................................................................................................................................................... 19
4.1. Introduction .................................................................................................................................... 19
4.2. Experiment 8: Playing a Movie Using JMF ..................................................................................... 20
4.3. Experiment 9: Capturing Video from Webcam .............................................................................. 21
5. Image Compression............................................................................................................................. 23

5.1. Introduction .................................................................................................................................... 23
5.2. Experiment 10: Lossless Compression Techniques ......................................................................... 23
5.3. Experiment 11: Lossy Compression Techniques ............................................................................. 25
6. Animation............................................................................................................................................ 26
6.1. Introduction to Macromedia Flash ................................................................................................. 26
6.2. Using the Drawing Tools ................................................................................................................. 27
6.3. Working with Layers ....................................................................................................................... 28
6.4. Working with the Timeline.............................................................................................................. 29
6.5. Creating Animations........................................................................................................................ 30
6.6. Publishing and Exporting ................................................................................................................ 33
6.7. Experiment 12: Animation Basics (I) ............................................................................................... 34
6.8. Experiment 13: Animation Basics (II) ............................................................................... 34

1. Digital Images

1.1. Introduction

An image is typically a rectangular two-dimensional array of pixels, where each pixel represents
the color at that position of the image and where the dimensions represent the horizontal extent
(width) and vertical extent (height) of the image as it is displayed.

Below are the main classes that you must learn about to work with images:

1. java.awt.Image: superclass that represents graphical images as rectangular arrays of
pixels.
2. java.awt.image.BufferedImage: extends the Image class to allow the application to
operate directly with image data (for example, retrieving or setting the pixel color). A
BufferedImage is essentially an Image with an accessible data buffer. It has a
ColorModel and a Raster of image data.
1. The ColorModel provides a color interpretation of the image's pixel data.
2. The Raster performs the following functions:
 Represents the rectangular coordinates of the image
 Maintains image data in memory
 Provides a mechanism for creating multiple sub-images from a single
image data buffer
 Provides methods for accessing specific pixels within the image
3. javax.imageio.ImageIO: used to read and write recognized image formats such as
JPEG, PNG, GIF, BMP and WBMP

1.2. Reading, Writing and Displaying Digital Images

1.2.1. Reading Digital Images

External image formats are loaded into BufferedImage format using the javax.imageio.ImageIO
class. ImageIO class has built-in support for GIF, PNG, JPEG, BMP, and WBMP.

Example: Loading an external image

The following code shows how to load an image from a specific file:

BufferedImage img = null;
try {
    img = ImageIO.read(new File("myImage.jpg"));
} catch (IOException e) {
    e.printStackTrace();
}

Note: Image I/O recognizes the contents of the file as a JPEG format image, and decodes it into a
BufferedImage which can be directly used by Java 2D.

1.2.2. Drawing/Displaying Digital Images

An image can be drawn using methods of the Graphics or Graphics2D classes. An instance of
Graphics or Graphics2D is known as a graphics context. It represents a surface onto which we
can draw images, text or other graphics primitives.

A graphics context could be associated with an output device such as a printer, or it could be
derived from another image (allowing us to draw images inside other images); however, it is
typically associated with a GUI component that is to be displayed on the screen.

For example, to display an image using the Abstract Window Toolkit (AWT), we must extend
an existing AWT component and override its paint() method. In very simple applets or
applications, extending Applet or Frame would be sufficient.

Example: Displaying an image on a frame

public class MyFrame extends JFrame {

    BufferedImage img = null;

    public MyFrame() {
        try {
            img = ImageIO.read(new File("images/myImage.jpg"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void paint(Graphics g) {
        super.paint(g);
        g.drawImage(img, 0, 0, null);
    }

    public static void main(String[] args) {
        MyFrame f = new MyFrame();
        f.setSize(600, 600);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
    }
}

The general syntax for calling method drawImage of class Graphics is given below:

boolean Graphics.drawImage(Image img, int x, int y, ImageObserver observer);

where x and y specify the position of the top-left corner of the image. The observer parameter
notifies the application of updates to an image that is loaded asynchronously. The observer
parameter is not needed for the BufferedImage class, so it is usually null.

1.2.3. Creating Digital Image

We already know how to load an existing image stored on your system or at a network
location. However, you may also want to create a new image as a pixel data buffer.

You can create a BufferedImage object manually, using one of the three constructors of this
class, as follows:

BufferedImage(width, height, type) - constructs a BufferedImage of one of the
predefined image types.

Example 1: Creating a 200 by 200 image of type 3-byte BGR

BufferedImage img = new BufferedImage(200, 200, BufferedImage.TYPE_3BYTE_BGR);
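To put pixel data into a newly created image, you can draw into it through a Graphics2D context obtained from createGraphics(). A minimal sketch (the class name, colors, and sizes here are illustrative, not part of the manual's code):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CreateImageDemo {
    public static void main(String[] args) {
        // create an empty 200 x 200 image in memory
        BufferedImage img = new BufferedImage(200, 200, BufferedImage.TYPE_3BYTE_BGR);

        // obtain a graphics context for the image and draw into it
        Graphics2D g2 = img.createGraphics();
        g2.setColor(Color.RED);
        g2.fillRect(50, 50, 100, 100);
        g2.dispose();

        System.out.println(img.getWidth() + "x" + img.getHeight());  // → 200x200
    }
}
```

The same Graphics2D methods used for on-screen drawing work here, because the image itself acts as the drawing surface.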

1.2.4. Writing Digital Images

The ImageIO class provides a simple way to save images in a variety of image formats as shown
below: (Note: The BufferedImage class implements the RenderedImage interface).

boolean ImageIO.write(RenderedImage im, String formatName, File output)

Example: Saving an image in a “png” format

try {
BufferedImage bimg = getMyImage();
File outputfile = new File("savedImage.png");
ImageIO.write(bimg, "png", outputfile);
} catch (IOException e) {
...
}

1.2.5. Experiment 1: Reading, Displaying, and Writing Digital Images

1. Write a program that loads an external JPEG image into a BufferedImage bimg.
2. Modify your program so that it displays the width, height and type of the image. Use
the getWidth(), getHeight(), and getType() methods of BufferedImage.
3. Write a method that displays the image in Q(1) on a frame.
4. Modify your program so that it displays two different images side by side.
5. Write a method that writes the image in Q(1) in PNG format.

1.3. Manipulating Pixels

As we mentioned earlier in this chapter, a digital image can be represented by the
BufferedImage class of Java 2D. One of the objects used by this class is a Raster. The Raster
class provides methods for accessing specific pixels within the image. Some of these methods
are given below:

Return pixel values as array of integer, float or double

int[] getPixel(int x, int y, int[] data)

float[] getPixel(int x, int y, float[] data)

double[] getPixel(int x, int y, double[] data)

Return pixel value of a particular band as integer, float or double

int getSample(int x, int y, int band)

float getSampleFloat(int x, int y, int band)

double getSampleDouble(int x, int y, int band)

Note that Raster is a read-only class; its methods can be used to inspect pixel values but not to
modify them. A subclass of Raster, called WritableRaster, adds methods that change a pixel's
value. The basic methods provided by WritableRaster to modify pixel values are given below:

void setPixel(int x, int y, int[] data)

void setPixel(int x, int y, float[] data)

void setPixel(int x, int y, double[] data)

void setSample(int x, int y, int band, int value)

void setSample(int x, int y, int band, float value)

void setSample(int x, int y, int band, double value)
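The read and write methods listed above can be combined in a few lines. This sketch (class name and pixel values are illustrative) writes one pixel with WritableRaster.setPixel and reads it back per band and as a whole pixel:

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class PixelDemo {
    public static void main(String[] args) {
        // a small in-memory image; any BufferedImage with a byte raster works
        BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_3BYTE_BGR);
        WritableRaster raster = img.getRaster();

        // write one pixel: data[i] is stored in band i (here band 0 = red)
        raster.setPixel(0, 0, new int[]{255, 0, 0});

        // read it back, as a single band and as a whole pixel
        int red = raster.getSample(0, 0, 0);
        int[] data = raster.getPixel(0, 0, new int[3]);
        System.out.println(red + " " + data[0] + "," + data[1] + "," + data[2]);
        // → 255 255,0,0
    }
}
```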

1.3.1. Experiment 2: Manipulating Pixels (I)

1. Load a true color external image called “myImage.jpg” into a buffered image

2. Display the RGB values of the first row of the image, from column 1 to 10
3. Replace the first and last two rows of the image with red
4. Invert the image data (upside-down) and store the inverted data in a new buffered image
called bimg2
5. Display both the original and inverted images and observe their differences
6. Create a method that takes two buffered images as arguments and returns the average of
the two buffered images
7. Write a method that takes a buffered image as an argument, converts the buffered image
into grayscale and returns the converted image

1.3.2. Experiment 3: Manipulating Pixels (II)

1. Write a method for each of the following image operations
a. Flip an image vertically
b. Flip an image horizontally
c. Rotate an image by 90°
d. Rotate an image by -90°
2. Create a method that makes the input image 50% opaque

3. Write an application or applet that
a. Reads a GIF or JPEG image into an Image object
b. Extracts a square region of pixels from the centre of the image, the dimensions of
this region being half those of the image
c. Creates a new Image from the extracted data
d. Displays the new image
4. Write a program that reads JPEG-compressed grayscale image data into a BufferedImage
and then iterates over all pixels in the image to determine the minimum, maximum and
mean grey levels, writing this information to System.out.

5. Write a program that reads a color image from a JPEG file into a BufferedImage object
and then counts the number of pixels with a color similar to some reference color. This
reference color should be specified as red, green and blue values from the user interface.
'Similar' in this case means that the distance between a color and the reference color in
RGB space is less than 10. What happens when you attempt to run the program on a
grayscale image?

1.4. Creating your own Image Format

1.4.1. Bahir Dar Pictures (BDP)

In this section we will create a new image format called “BDP” which stands for Bahir Dar
Pictures. The BDP format supports 8-bit grayscale and 24-bit RGB color images, which may or
may not be compressed using a lossless compression technique.

A BDP file has a 12-byte header. The first four bytes are the signature, which indicates the
image type and the compression status used in the image [Table 1.1]. This is followed by a
pair of 32-bit integers representing the width and height of the image, respectively. All
remaining bytes in the file are compressed or uncompressed image data. The design of the
encoder and decoder of the BDP format is shown in Figure 1.1

Signature   Image Type
GIMG        8-bit grayscale, uncompressed
gIMG        8-bit grayscale, compressed
CIMG        24-bit RGB color, uncompressed
cIMG        24-bit RGB color, compressed

Table 1.1: Image Types supported by the BDP format
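The 12-byte header described above maps directly onto DataOutputStream calls. A small self-contained sketch (the class name and dimension values are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class HeaderDemo {
    public static void main(String[] args) throws IOException {
        // the BDP header: a 4-byte signature followed by two 32-bit integers
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.write("GIMG".getBytes());  // 8-bit grayscale, uncompressed
        out.writeInt(640);             // width
        out.writeInt(480);             // height
        out.flush();
        System.out.println(bytes.size());  // → 12
    }
}
```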

BDPEncoder
- DataOutputStream output
- boolean compression
+ BDPEncoder()
+ BDPEncoder(String fileName)
+ void encode(BufferedImage img)
+ void enableCompression()
+ void disableCompression()

BDPDecoder
- DataInputStream input
- byte[] signature
- int type
- int width
- int height
+ BDPDecoder()
+ BDPDecoder(String fileName)
+ BufferedImage decode()
+ int getType()
+ int getWidth()
+ int getHeight()

Figure 1.1: UML diagrams showing the design of BDPEncoder and BDPDecoder classes.

Example: Implementing the BDPEncoder class

public class BDPEncoder {

    private DataOutputStream output;
    private boolean compression = false;

    public BDPEncoder() { }

    public BDPEncoder(String fileName) throws IOException {
        output = new DataOutputStream(new FileOutputStream(fileName));
    }

    public void enableCompression() {
        compression = true;
    }

    public void disableCompression() {
        compression = false;
    }

    public void encode(BufferedImage img) throws IOException {
        writeHeader(img);
        if (img.getType() == BufferedImage.TYPE_BYTE_GRAY ||
            img.getType() == BufferedImage.TYPE_3BYTE_BGR) {
            DataBufferByte db = (DataBufferByte) img.getRaster().getDataBuffer();
            byte[] data = db.getData();
            if (compression) {
                // will be implemented in its own topic (Compression)
            } else {
                output.write(data);
                output.flush();
            }
        } else {
            System.err.println("Unsupported image type");
        }
    }

    private void writeHeader(BufferedImage img) throws IOException {
        if (img.getType() == BufferedImage.TYPE_BYTE_GRAY) {
            if (compression)
                output.write("gIMG".getBytes());
            else
                output.write("GIMG".getBytes());
        } else {
            if (compression)
                output.write("cIMG".getBytes());
            else
                output.write("CIMG".getBytes());
        }
        output.writeInt(img.getWidth());
        output.writeInt(img.getHeight());
        output.flush();
    }
}

1.4.2. Experiment 4: Creating Image Formats

1. Implement the BDPDecoder class based on the UML given in Figure 1.1

2. Color Models in Image and Video

2.1. Introduction

There are different color models used in image and video. The best-known color models are
RGB, CMY, HSV, YIQ, and YCbCr.
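As a concrete illustration of converting between two of these models, here is a sketch of the RGB-to-YCbCr transform using the common ITU-R BT.601 coefficients; the class name, method name, and the +128 offset convention are our own assumptions for illustration, not part of this manual's code:

```java
public class ColorConvert {
    // RGB (0-255) to YCbCr using ITU-R BT.601 coefficients;
    // Cb and Cr are offset by 128 so they also fall near the 0-255 range
    public static double[] rgbToYCbCr(int r, int g, int b) {
        double y  =  0.299    * r + 0.587    * g + 0.114    * b;
        double cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128;
        double cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128;
        return new double[]{y, cb, cr};
    }

    public static void main(String[] args) {
        double[] ycc = rgbToYCbCr(255, 0, 0);  // pure red
        System.out.printf("Y=%.1f Cb=%.1f Cr=%.1f%n", ycc[0], ycc[1], ycc[2]);
    }
}
```

The inverse transform (YCbCr to RGB) is obtained by inverting this 3x3 matrix, which is one way to approach Experiment 5.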

2.2. Experiment 5: Working on Color Models

1. Write a program to display colors based on input values for the parameters of the
underlying color model (could be RGB)
2. Write a method for each of the following operations
2.1. convert RGB to CMYK and vice versa
2.2. convert RGB to YCbCr
2.3. convert HSV to RGB

3. Audio

3.1.Introduction

This section introduces the basic concepts of the Java Sound API and applies the API in hands-
on examples that enable you to play a background music file (such as an MP3), pick up the
microphone and sing along, record your performance as an audio file, convert audio data from
one audio format or audio file format to another, and manipulate audio samples by applying
different effects, such as changing the volume, gain, or sample rate, to audio data.

A JavaSound Primer

The Java Sound API supports two types of audio: sampled and MIDI (MIDI will not be covered
in this experiment). It is a low-level API for manipulating audio playback, audio recording, and
MIDI music synthesizers: low level because you have direct access to the bits that represent the
audio data, and you can directly control many features of the underlying sound hardware.

Sampled audio is represented as a sequence of time-sampled data of the amplitude of a sound
wave. The samples can be either 8-bit or 16-bit, with sampling rates from 8 kHz to 48 kHz.
Java Sound directly supports the "wav", "au" and "aiff" file formats.

The main classes related to sound files and sound data are given below

• AudioInputStream: represents a stream of audio data. It can be read from or written to a
file.
• AudioFormat: represents the format of audio data, which specifies how the audio samples
themselves are arranged. It includes the following attributes:
– Encoding technique, usually pulse code modulation (PCM)
– Number of channels
– Sample rate (number of samples per second, per channel)
– Number of bits per sample (per channel)
– Frame rate
– Frame size in bytes
– Byte order (big-endian or little-endian)
• AudioFileFormat: specifies the structure of a sound file and contains the following
attributes:
– File type (WAVE, AIFF, etc.)
– File's length in bytes
– Length, in frames, of the audio data in the file
– AudioFormat object that specifies the data format of the audio data contained in
the file
• AudioSystem: provides methods for reading and writing sounds in different file
formats, converting between different data formats, and more.
• Mixer: in the Java Sound API a Mixer object represents either a hardware or a software
device. A mixer object can be used for input (capturing audio) or output (playing back
audio).
• In the case of input, the source from which the mixer gets audio for mixing is one
or more input ports. The mixer sends the captured and mixed audio streams to its
target, which is an object with a buffer from which an application program can
retrieve this mixed audio data.
• In the case of audio output, the situation is reversed. The mixer's source for audio
is one or more objects containing buffers into which one or more application
programs write their sound data, and the mixer's target is one or more output
ports.
• Ports are simple lines for input or output of audio to or from audio devices. Common
types of ports are: microphone, line input, CD-ROM drive, speaker, headphone, and line
output.
• TargetDataLine receives audio data from a mixer. It provides methods for reading
data from the target data line's buffer and determining how much data is currently
available for reading.
• SourceDataLine receives audio data for playback. It provides methods for writing data
to the source data line's buffer for playback, and determining how much data the line is
prepared to receive without blocking.
• Clip is a data line into which audio data can be loaded prior to playback.

3.2.Reading and Writing Sound Files

The AudioSystem class provides two types of file-reading services through the methods
getAudioFileFormat(InputStream/File/URL) and getAudioInputStream(InputStream/File/URL).

The following method, in class AudioSystem, creates a disk file of a specified file type:
write(AudioInputStream, AudioFileFormat.Type, File)

Example: The program below reads an audio file format and audio data from a given audio file.

import javax.sound.sampled.*;
import java.io.*;

public class AudioExample {
    AudioInputStream audioIn;
    AudioFileFormat fileFormat;

    public void read(File file) {
        try {
            fileFormat = AudioSystem.getAudioFileFormat(file);
            audioIn = AudioSystem.getAudioInputStream(file);
        } catch (Exception ex) {
            System.err.println(ex.getMessage());
        }
    }
}
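The write service mentioned above works the other way around. A sketch that synthesizes one second of silence (rather than assuming an input file exists) and saves it as a WAVE file; the class name and file name are illustrative:

```java
import javax.sound.sampled.*;
import java.io.*;

public class SaveAudio {
    public static void main(String[] args) throws Exception {
        // one second of 8 kHz, 16-bit, mono, signed little-endian audio
        AudioFormat fmt = new AudioFormat(8000.0f, 16, 1, true, false);
        byte[] silence = new byte[8000 * 2];   // 8000 frames * 2 bytes per frame
        AudioInputStream in = new AudioInputStream(
                new ByteArrayInputStream(silence), fmt, 8000);

        // AudioSystem.write saves the stream in the requested file type
        File out = new File("silence.wav");
        AudioSystem.write(in, AudioFileFormat.Type.WAVE, out);
        System.out.println(out.length() > 16000);  // → true (16000 data bytes + header)
    }
}
```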

3.3.Converting Audio Data Formats

To create a specific AudioFormat, we can use one of the two constructors of the AudioFormat
class shown below:

AudioFormat(float sampleRate, int sampleSizeInBits, int channels, boolean signed,
boolean bigEndian): constructs an AudioFormat with a linear PCM encoding and the given
parameters.

AudioFormat(AudioFormat.Encoding encoding, float sampleRate, int sampleSizeInBits,
int channels, int frameSize, float frameRate, boolean bigEndian): also constructs an
AudioFormat, but lets you specify the encoding, frame size, and frame rate in addition to the
other parameters.

Example: A method that converts the data format of a given audio data

AudioInputStream lowResAIS;

public void convert() {
    AudioFormat format = new AudioFormat(8000.0f, 16, 1, true, false);
    lowResAIS = AudioSystem.getAudioInputStream(format, audioIn);
}

3.4.Experiment 6: Reading, writing and converting Audio Data

1. Write a method that performs the following operations
a. Reads an audio file
b. Displays the encoding technique, number of channels, sample rate, number of bits
per sample, frame rate, and frame size used by the audio file in Q(a)

2. Write a method that creates a new audio file of type “AIFF” from the input audio file.
3. Write a method that converts the audio file in Q2 to a lower resolution

3.5.Playing Back and Recording Audio Data

There are two kinds of line that you can use for playing sound: a Clip and a SourceDataLine.
Use a Clip when you have non-real-time sound data that can be preloaded into memory. Use a
SourceDataLine for streaming data, such as a long sound file that won't all fit in memory at
once, or a sound whose data can't be known in advance of playback.

Example 1: Playing back audio using Clip

public void playClip() {
    try {
        Clip c = AudioSystem.getClip();
        c.open(audioIn);
        c.start();
    } catch (Exception ex) {
        System.err.println(ex.getMessage());
    }
}

Example 2: Playing back audio using SourceDataLine

public void playStream() {
    try {
        // formatIn is assumed to hold the AudioFormat of audioIn
        SourceDataLine sourceLine;
        sourceLine = AudioSystem.getSourceDataLine(formatIn);
        sourceLine.open(formatIn);
        sourceLine.start();
        int numRead = 0;
        byte[] buff = new byte[40];
        while ((numRead = audioIn.read(buff)) > 0)
            sourceLine.write(buff, 0, numRead);
    } catch (Exception ex) {
        System.err.println(ex.getMessage());
    }
}

Example 3: Capturing Audio Data from Microphone

public void record() {
    try {
        TargetDataLine targetLine;
        targetLine = AudioSystem.getTargetDataLine(formatIn);
        targetLine.open(formatIn);
        targetLine.start();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int numRead = 0;
        byte[] buff = new byte[40];
        while ((numRead = targetLine.read(buff, 0, buff.length)) > 0)
            out.write(buff, 0, numRead);
    } catch (Exception ex) {
        System.err.println(ex.getMessage());
    }
}

3.6.Experiment 7: Playing and Recording Audio Data

1. Write a method that performs each of the following operations
a. Read an audio stream and store it in an AudioInputStream object called
audioFromFile
b. Play half of the audio data
c. Record audio data from the microphone and store it in an AudioInputStream
object called audioFromMic
d. Take the average of the two audio data and store it in a new
AudioInputStream object called audioAvg
e. Play half of the audio data in audioAvg

4. Video

4.1.Introduction

Java Media Framework (JMF) is a framework for handling streaming media in Java programs.
JMF is an optional package of the Java 2 standard platform. JMF provides a unified architecture
and messaging protocol for managing the acquisition, processing and delivery of time-based media.

Representing media

All multimedia contents are invariably stored in a compressed form using one of the various
standard formats. Each format basically defines the method used to encode the media. Therefore
we need a class to define the format of the multimedia contents we are handling.

To this end JMF defines the class Format that specifies the common attributes of the media
Format. The class Format is further specialized into the classes AudioFormat and
VideoFormat.

Specifying the source of media

The next most important support an API should offer is the ability to specify the media data
source. Using a URL object we can specify the media source for a file. JMF provides
another class, called MediaLocator, to locate a media source from a hardware device such as a
microphone or webcam. The source of the media can be of varying nature. The JMF class
DataSource abstracts a source of media and offers a simple connect protocol to access the
media data.

Specifying the media destination

A DataSink abstracts the location of the media destination and provides a simple protocol for
rendering media into a destination. A DataSink can read the media from a DataSource and
render the media to a file or a stream.

Important components

Player: A Player takes as input a stream of audio or video data and renders it to a speaker or a
screen, much like a CD player reads a CD and outputs music to the speaker. A Player has
states, which exist naturally because a Player has to prepare itself and its data source before it
can start playing the media. The Player interface has many methods, such as
getVisualComponent(), getControlPanelComponent(), start(), stop(), and deallocate().

Processor: A Processor is a type of Player. In the JMF API, a Processor interface extends
Player. As such, a Processor supports the same presentation controls as a Player. Unlike a Player,
a Processor has control over what processing is performed on the input media stream.

In addition to rendering a data source, a Processor can also output media data through a
DataSource so it can be presented by another Player or Processor.

Manager: The Manager class is used to create players, processors, datasinks and so on. You can
imagine it as a mapper between JMF components.

Important notes on the use of components

There are two ways to get data: from a URL or from a MediaLocator.

 If you are getting your media from a file, use a URL.
 If you are getting your media from a hardware device, such as a microphone or a
webcam, use a MediaLocator.

After you choose one of these options, you have to extract a DataSource from it and use that in
the creation of either a Player or a Processor.

 If you only want to display your data, you can use a Player.
 If you want to make changes to the data before displaying it, or if you want to send it
over a network or save it to a file, you have to use a Processor.

4.2.Experiment 8: Playing a Movie Using JMF

Example: Load and play an external movie

File file = new File("videos/test.mov");
URL url = file.toURL();

// setting our manager to lightweight components, e.g. Swing
Manager.setHint(Manager.LIGHTWEIGHT_RENDERER, true);

// here we are using the Manager class to create a Player from our URL
final Player player = Manager.createRealizedPlayer(url);

// constructing a frame to display our player
JFrame f = new JFrame("test");
f.setLayout(new BorderLayout());

// adding the player's visual component to the frame
f.add(player.getVisualComponent(), BorderLayout.CENTER);

// adding the control component of the player
f.add(player.getControlPanelComponent(), BorderLayout.SOUTH);

// initialize the frame
f.setSize(400, 400);
f.setLocationRelativeTo(null);
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
f.setVisible(true);

player.start();

4.3.Experiment 9: Capturing Video from Webcam

Example: Capturing Video from Webcam

// Suppose the name of our camera connected to the computer is
// "myCam"; we can connect to the camera as follows
CaptureDeviceInfo webcamInfo = new CaptureDeviceInfo("Camera",
        new MediaLocator("myCam"), null);

// extract the MediaLocator of this device
MediaLocator webcamMediaLocator = webcamInfo.getLocator();

// creating a Player
Player player = Manager.createRealizedPlayer(webcamMediaLocator);

// getting the visual player component of the camera
Component comp;
if ((comp = player.getVisualComponent()) != null) {
    JFrame f = new JFrame("test");
    f.setLayout(new BorderLayout());
    f.add(comp, BorderLayout.CENTER);
    f.add(player.getControlPanelComponent(), BorderLayout.SOUTH);
    f.setSize(400, 400);
    f.setLocationRelativeTo(null);
    f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    f.setVisible(true);
    player.start();
}

5. Image Compression

5.1.Introduction

Compression is the process of coding that effectively reduces the total number of bits needed
to represent certain information. Generally, we can classify compression methods into two
categories: lossless and lossy compression.

Input data → Encoder (Compression) → Storage or networks → Decoder (Decompression) → Output data

Fig. 10.1: A General Data Compression Scheme

Lossless compression: data compressed by this method is digitally identical to the original data
when decoded. It achieves only a modest amount of compression. It is used for applications that
do not tolerate errors or losses, such as legal and medical documents and computer
programs. Some of the lossless compression methods are Run-Length Coding, Huffman Coding,
Dictionary-Based Coding, Arithmetic Coding, etc.

Lossy compression: discards components of the signal that are known to be redundant
(including psycho-visual redundancy), so the decoded signal differs from the input. It achieves
much higher compression; under normal viewing conditions, no visible loss is perceived
(visually lossless). It is used for applications in which some errors or losses are tolerated. Some
of the lossy compression methods are Block Transform Coding (such as Discrete Cosine
Transform Coding), Discrete Wavelet Transform Coding, Lossy Predictive Coding, etc.

5.2.Experiment 10: Lossless Compression Techniques

1. Write a method called “encode” that compresses a sequence of characters using
Run-Length Coding. Run-length coding is a very widely used and simple compression
technique: each run of identical symbols is replaced with a (run-length, symbol) pair.

public void encode(String original){
    // assumes original is non-empty
    char c[] = original.toCharArray();
    String compressed = "";
    int index = 0;
    int run = 0;
    char symbol = c[index];
    char nextSymbol;
    while (index < c.length) {
        // count how many times the current symbol repeats
        do {
            nextSymbol = c[index];
            if (nextSymbol == symbol) {
                run++;
            } else {
                break;
            }
            index++;
        } while (index < c.length);
        // emit the (run-length, symbol) pair
        compressed += run;
        compressed += symbol;
        symbol = nextSymbol;
        run = 0;
    }
    System.out.println("Original=" + original);
    System.out.println("Compressed=" + compressed);
}
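As a companion to encode, a decoder can reverse the (run-length, symbol) pairs. The sketch below is one possible implementation (the class and method names are our own, not part of the manual); it assumes the encoded symbols are non-digit characters, since the run length is written as decimal digits directly in front of each symbol.

```java
// Illustrative decoder for the run-length encode method above.
// Assumption: symbols are non-digit characters, so the run length is
// read as the maximal leading digit sequence of each (run, symbol) pair.
public class RunLengthDecoder {
    public static String decode(String compressed) {
        StringBuilder original = new StringBuilder();
        int i = 0;
        while (i < compressed.length()) {
            int run = 0;
            // read the run length (one or more decimal digits)
            while (i < compressed.length()
                    && Character.isDigit(compressed.charAt(i))) {
                run = run * 10 + (compressed.charAt(i) - '0');
                i++;
            }
            // the next character is the symbol; repeat it run times
            char symbol = compressed.charAt(i++);
            for (int k = 0; k < run; k++) {
                original.append(symbol);
            }
        }
        return original.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("2a1b3c")); // prints aabccc
    }
}
```

For example, decode("2a1b3c") returns "aabccc", undoing what encode produced.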

2. Implement a Run Length encoder for grayscale images

public void encode(BufferedImage img){
    ByteArrayOutputStream outStrm = new ByteArrayOutputStream();
    WritableRaster raster = img.getRaster();
    DataBufferByte db = (DataBufferByte) raster.getDataBuffer();
    byte[] buf = db.getData();

    int index = 0;
    int run = 0;
    byte symbol = buf[index];
    byte nextSymbol;
    while (index < buf.length) {
        // count how many times the current pixel value repeats
        do {
            nextSymbol = buf[index];
            if (nextSymbol == symbol) {
                run++;
            } else {
                break;
            }
            index++;
        } while (index < buf.length);
        // emit the (run-length, symbol) pair; note that write() stores
        // only the low 8 bits, so runs longer than 255 pixels would
        // need to be split into several pairs
        outStrm.write(run);
        outStrm.write(symbol);
        symbol = nextSymbol;
        run = 0;
    }

    for (int i = 0; i < img.getWidth(); i++)
        for (int j = 0; j < img.getHeight(); j++)
            System.out.println("Original=" + raster.getSample(i, j, 0));

    byte b[] = outStrm.toByteArray();
    for (int i = 0; i < b.length; i++)
        System.out.println("Compressed=" + b[i]);
}

3. Implement the decoder method for the encoder in Q(2)

5.3.Experiment 11: Lossy Compression Techniques

In this experiment we will be using our own lossy compression technique called “RowJumper”.
As shown in Figure 5.1, the RowJumper compression technique drops every second row of
an image. During reconstruction, the technique substitutes each missing row with the average of
the two immediate rows: the row above the dropped row and the row below it.

Original image      Compressed image     Reconstructed image
250   6  25         250   6  25          250   6  25
 20  40 100                              150  33  57
 50  60  90          50  60  90           50  60  90
 50 100   0                               50 130 145
 50 200 200          50 200 200           50 200 200

Figure 5.1: RowJumper
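The numbers in Figure 5.1 can be reproduced with a short sketch. The version below is illustrative only: it works on plain 2D int arrays (the class and method names are our own), while the exercises ask for the same logic on real gray-level images via BufferedImage. Note that integer division truncates, which is why (25 + 90) / 2 gives 57 in the figure.

```java
// A minimal sketch of the RowJumper scheme on plain 2D int arrays.
public class RowJumper {
    // encoder: keep rows 0, 2, 4, ... and drop every second row
    public static int[][] encode(int[][] img) {
        int[][] out = new int[(img.length + 1) / 2][];
        for (int r = 0; r < img.length; r += 2) {
            out[r / 2] = img[r].clone();
        }
        return out;
    }

    // decoder: rebuild each dropped row as the average of its neighbours
    public static int[][] decode(int[][] comp, int originalRows) {
        int cols = comp[0].length;
        int[][] out = new int[originalRows][];
        for (int r = 0; r < originalRows; r += 2) {
            out[r] = comp[r / 2].clone(); // a kept row
        }
        for (int r = 1; r < originalRows; r += 2) {
            out[r] = new int[cols];
            for (int c = 0; c < cols; c++) {
                if (r + 1 < originalRows) {
                    // average the row above and the row below
                    out[r][c] = (out[r - 1][c] + out[r + 1][c]) / 2;
                } else {
                    // last row was dropped: repeat the row above
                    out[r][c] = out[r - 1][c];
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] img = {{250, 6, 25}, {20, 40, 100}, {50, 60, 90},
                       {50, 100, 0}, {50, 200, 200}};
        int[][] rec = decode(encode(img), img.length);
        for (int[] row : rec) {
            System.out.println(java.util.Arrays.toString(row));
        }
    }
}
```

Running main on the original image of Figure 5.1 reproduces the reconstructed image shown there, including the averaged rows [150, 33, 57] and [50, 130, 145].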

Exercises:

1. Write a method called “encoder” that performs the above compression technique on an
input gray level image.
2. Write a method called “decoder” that takes an image compressed by the above technique
and returns a reconstructed image based on the above decompression technique
3. Display both the original and reconstructed images and observe their differences

6. Animation

6.1.Introduction to Macromedia Flash

Macromedia Flash is a program that is utilized to create movies that include graphics and
animation for Web sites. Flash movies consist primarily of vector graphics, but they can also
contain imported bitmap graphics and sounds. Flash movies can incorporate interactivity to
permit input from users. Flash can also be utilized to create nonlinear movies that can interact
with other Web applications. Web designers use Flash to create navigation controls, animated
logos, long-form animations with synchronized sound and even complete, sensory-rich Web
sites. Because Flash movies are built from compact vector graphics, they can be downloaded rapidly.

Flash files that are viewable on the Internet are in SWF (Shockwave Flash) file format. The SWF
file is created from an FLA file at the time of publication. An FLA file is the actual project used
to work with in Flash. The FLA holds all of the keyframes and individual movies that are
sandwiched together to make the final animation or SWF file.

Creating a new document


 Open the Flash application
 Click Flash Document from the Create New column on the Start Page to create a blank document.
 Select File > Save As from the main menu.
 Name the file myfirst.fla

Getting familiar with the flash work environment

Timeline: the timeline indicates where graphics are animated over time.
Stage: the area where the movie plays.

Work area: a place to work on objects, it is not viewable when you play your movie.
Toolbox: The toolbox contains all tools necessary for drawing, viewing, coloring and modifying
your objects. Each tool in the toolbox comes with a specific set of options to modify that tool.

Changing document properties


 Open the Property inspector (Window  Properties)
 Click the button next to the text that says Size to open the Document Properties dialog
box.
 Enter values for width, height, background color and frame rate
 Click OK when you finish entering the new values to the document properties.

6.2.Using the Drawing Tools

Snapping: Snapping automatically aligns objects with one another.

To turn Snapping on or off, do one of the following:


 Select the Arrow Tool, and click on the Snap to Objects option on the Tools panel. or
 Select ViewSnap to Objects.

To draw with the Pencil Tool:

 Select the Pencil Tool.


 Select a stroke color, line weight, and style. To specify stroke attributes:
o Open the Stroke and Fill panels by going through Window  Panels  Stroke.
o Select a Stroke style, height (in pixels), and color.
 Choose the drawing mode under Options.
o Select Straighten to convert approximations of triangles, ovals, circles, rectangles,
and squares into these shapes.
o Select Smooth to draw smooth curved lines.
o Select Ink to draw freehand lines with no modification applied.
 Drag on the stage to draw with the Pencil Tool. Hold down the <Shift> key while
dragging to constrain lines to vertical or horizontal directions.

To draw a straight line, oval, or rectangle:


 Select the Line, Oval, or Rectangle Tool.
 Select stroke and fill attributes. To select stroke and fill attributes:
o Select Stroke Color, and choose a color from the palette.
o Select Fill Color, and choose a color from the palette.
 For the Rectangle Tool, specify rounded corners by clicking the Rounded Rectangle
Radius option and entering a corner radius value. A corner radius value of zero
indicates square corners.
 Drag on the stage.
o If using the Rectangle Tool, press the Up and Down arrows while dragging to
adjust the corner radius value.

o If using the Rectangle or the Oval Tools, press the <Shift> key while dragging to
constrain shapes to squares and circles.
o If using the Line Tool, press the <Shift> key while dragging to constrain the line
angles to 45 degrees.

Using Strokes and Fill Colors: Rectangle and Oval Tools create shapes that have stroke
(outline color) and fill (interior color) areas.

To create a group:
 Select the objects to include in the group, such as shapes, symbols, and text.
 Select Modify Group, or press <Ctrl> + <G>.

6.3. Working with Layers

Layers are like transparencies stacked on top of each other. When a new Flash movie is created,
it contains one layer. More layers can be added to organize artwork, animation, and other movie
elements. Objects can be drawn and edited on one layer without affecting objects on another
layer.
An unlimited number of layers can be created, and layers do not increase the file size of a
published movie. You can hide layers, lock layers, or display layer contents as outlines. You can
also change the order of layers. Layers are controlled on the Timeline.

To create a new layer, do one of the following:


 Select Insert  Layer, OR
 Click on the Insert Layer button, OR
 Right click on the layer name, and select Insert Layer from the shortcut menu.

To show or hide a layer, do one of the following:


 Click on the Eye column to the right of the layer’s name to hide that layer. Click it again
to show the layer. OR
 Click the Eye icon to hide all layers. Click again to show layers.

To change a layer’s outline color:

 Right click on the layer name and select Properties from the shortcut menu.
 In the Layer Properties dialog, next to Outline Color, select a color from the palette.
Click OK.

Exercise: selecting, renaming, deleting and locking layers.

6.4.Working with the Timeline

The Timeline organizes and controls a movie’s content over time in layers and frames. The
major components of the Timeline are layers, frames, and the Play Head. Layers in a movie are
listed in a column on the left side of the Timeline. Frames contained in each layer appear in a
row to the right of the layer name. The Timeline header at the top of the Timeline indicates
frame numbers. The Play Head indicates the current frame displayed on the Stage.

Moving the Play Head: The Play Head moves through the Timeline to indicate the current
frame displayed on the stage. The Timeline Header shows the frame numbers of the animation.

To display a frame on the Stage:


 move the Play Head to the frame in the Timeline.
To go to the frame:
 click the location in the Timeline Header, or drag the Play Head to the frame.

To move the timeline:


 Drag from the area of the Timeline header.

Frame Labels and Movie Comments: Frame labels are useful for identifying keyframes in the
Timeline and should be used instead of frame numbers when targeting frames in actions.

To create a frame label or comment:


 Select the frame,
 Go to the Properties inspector.
 In the Frame panel, enter text for a frame label or comment in the Label text box. To
make the text a comment, enter two slashes (//) at the beginning of each line of the text.

Keyframes: A keyframe is a frame in which changes in animation are defined. With frame-by-
frame animation, every frame is a keyframe. In tweened animation, keyframes are defined only
at the points of significant change, and Flash creates the frames in between. Flash displays the
interpolated frames of a tweened animation as light blue or green with an arrow drawn between
keyframes.

Flash redraws shapes in each keyframe. Keyframes should only be created at the points in which
something in the artwork changes. Keyframes are indicated in the Timeline. A solid circle
represents a keyframe with content on it, and a vertical line before the frame represents an empty
keyframe. Subsequent frames added to the same layer will have the same content as the
keyframe.

To insert frames in a timeline, do one of the following:


 To insert a new frame select Insert New Frame.
 To create a new keyframe, select Insert Keyframe, or right click in the desired frame,
and select Insert Keyframe from the shortcut menu.
 To create a new blank keyframe, select Insert Blank Keyframe, or right click in the
desired frame and select Insert Blank Keyframe from the shortcut menu.

To delete or modify a keyframe, do one of the following:
 To delete a frame, keyframe, or frame sequence, select the frame, keyframe, or sequence,
and select Insert Remove Frames, or right click and select Remove Frames from the
shortcut menu.
 To move a keyframe or frame sequence and its contents, drag the keyframe or sequence
to the desired location.
 To extend the duration of a keyframe, press the <Alt> key and drag the keyframe to the
final frame of the new sequence duration.
 To copy a keyframe or frame sequence by dragging, press the <Alt> key and drag the
keyframe to a new location.
 To copy and paste a frame or frame sequence, select the frame or sequence and select
EditCopy Frames. Select the frame or sequence to replace, and select EditPaste
Frames.
 To convert a keyframe to a frame, select the keyframe and select Insert Clear
Keyframe, or right click on the keyframe and select Clear Keyframe from the shortcut
menu. The cleared keyframe and all frames up to the subsequent keyframe are replaced
with the contents of the frame preceding the cleared keyframe.

6.5.Creating Animations

Changing the content of successive frames creates animation. With animation, you can make an
object move across the stage, increase or decrease its size, rotate, change color, fade in or out, or
change shape. Changes can occur independently of or in concert with other changes. For
example, an object can be made to rotate and fade in while it moves across the stage.

There are two methods for creating an animation sequence in Flash: frame-by-frame animation
and tweened animation.

Frame-by-frame animation: In frame-by-frame animation you create the content of every
frame, and every frame is a keyframe. This is indicated on the timeline with a black circle in
every frame as shown in the figure below.

Tweened animation: In tweened animation, starting and ending frames are created, and Flash
creates the frames in between. Tweened animation is indicated on the timeline with a black circle
in the beginning and ending frames and an arrow over the interpolated frames. Flash varies the
object’s size, rotation, color, or other attributes evenly between the starting and ending frames to
create the appearance of movement. Tweened animation is an effective way to create movement
and changes over time while minimizing file size. In tweened animation, only the values for the
changes between frames are stored. In frame-by-frame animation, the values for each complete
frame are stored.

Flash can create two types of tween animation: shape tweening and motion tweening.

Shape tweening

In shape tweening, you draw a shape at one point in time, and then you change that shape or
draw another shape at another point in time. Flash interpolates the values or shapes for the
frames in between, creating the animation. Shape tweening has the effect of morphing shapes,
making one shape appear to change into another shape over time. If tweening is performed on
multiple shapes, all of the shapes must be on the same layer. The location, size and color of
shapes can also be changed. Tweening one shape at a time usually has the best results.

Note: Flash cannot tween the shape of groups, symbols, text blocks, or bitmap images.
To apply shape tweening to grouped objects, use Modify Break Apart.

To tween a shape:
 Click a layer name and make it the current layer, and select an empty keyframe where
you want the animation to start.
 Create the image for the first frame of the sequence. Use any of the drawing tools to
create the shape.
 Create a second keyframe after the desired number of frames from the first keyframe.
 Create an image for the last keyframe in the sequence.
 Go to the property inspector
 In the Frame panel, for Tweening, select Shape.

Motion Tweening

Motion tweening is a technique that tweens the changes in properties of instances, groups, and
type. Flash can tween position, size, rotation, and skew of instances, groups, and type.

Additionally, it can tween the color of instances and type, creating gradual color shifts or making
an instance fade in or out.

Note: Flash cannot apply motion tweening to shapes. Motion tweening only applies to instances,
groups, and text.

To tween an instance, group, or type:

1. Click a layer name and make it the current layer, and select an empty keyframe where
you want the animation to start.
2. Create the image for the first frame of the sequence. Create and arrange any instances,
groups, and types.
3. Create a second keyframe the desired number of frames after the first keyframe.
4. Do one of the following to modify the instance, group, or text block in the ending frame:
o Move the item to a new position.
o Modify the item’s size, rotation, or skew.
o Modify the item’s color (instance or text block only).
o To tween the color of elements other than instances or text blocks, use shape
tweening.
5. Go to the property inspector.
6. In the Frame panel, for Tweening, select Motion.
7. If the size of the item was modified in step 4, select Scale to tween the size of the
selected item.
8. Click and drag the arrow next to the Easing value or enter a value to adjust the rate of
change between tweened frames:
o To begin the motion tween slowly and accelerate the tween toward the end of the
animation, drag the slider up or enter a value between –1 and –100.
o To begin the motion tween rapidly and decelerate the tween toward the end of the
animation, drag the slider down or enter a value between 1 and 100.
9. To rotate the selected item while tweening, choose an option from the Rotate menu:
o Select None (the default setting) to apply no rotation.
o Select Auto to rotate the object once in the direction requiring the least motion.
o Select Clockwise (CW) or Counterclockwise (CCW) to rotate the object as
indicated, and then enter a number to specify the number of rotations.

Tweening Motion along a Path: on the Timeline, motion guide layers let you draw paths along
which tweened instances, groups, or text blocks can be animated. You can link multiple layers to
a motion guide layer to have multiple objects follow the same path. A normal layer that is linked
to a motion guide layer becomes a guided layer.

To create a motion path for tweened animation:


1. Create a motion-tweened animation sequence as described above. If you select Orient to
Path, the baseline of the tweened element will orient to the motion path. If you select
Snap, the registration point of the tweened element will snap to the motion path.
2. Do one of the following:
o Select the layer containing the animation and select Insert Motion Guide.
o Right click on the layer containing the animation and select Add Motion guide
from the shortcut menu.
Note: A new layer is created above the selected layer with a motion guide icon to the left
of the layer name.
3. Use the Pen, Pencil, Line, Circle, Rectangle, or Brush Tool to draw the desired path.
4. Snap the center to the beginning of the line in the first frame, and to the end of the line in
the last frame. Note: Drag the symbol by its registration point for the best snapping
results.
5. To hide the motion guide layer and the line so that only the object’s movement is visible
while you work, click in the Eye column on the motion guide layer.

To link layers to a motion guide layer, do one of the following:


o Drag an existing layer below the motion guide layer. The layer is indented under the
motion guide layer, and all objects in this layer automatically snap to the motion path.
o Create a new layer under the motion guide layer. Objects tweened on this layer are
automatically tweened along the motion path.
o Select a layer below a motion guide layer. Select ModifyLayer and select Guided in
the Layer Properties dialog box
o Press <Alt> and click on the layer.

To unlink layers from the motion guide layer:

1. Select the layer to unlink.


2. Do one of the following:
 Drag the layer above the motion guide layer.
 Select ModifyLayer and select Normal as the layer type in the Layer Properties
dialog box.
 Press <Alt> and click on the layer.

6.6.Publishing and Exporting

To deliver a Flash animation to an audience, the FLA file must first be published or exported to
another format for playback. The Flash Publish feature is designed for presenting animation on
the Web. The Publish command creates the Flash Player (SWF) file and an HTML document
that inserts the Flash Player file in a browser window.

Publishing Flash Movies

Publishing a Flash movie on the Web is a two-step process. First, prepare all required files for
the complete Flash application with the Publish Settings command. Then, publish the movie and
all of its files with the Publish command. The Publish settings command lets you choose formats
and specify settings for the individual files included in the movie – including GIF, JPEG, or

PNG, and then store these settings with the movie file. Depending on what you specified in the
Publish Settings dialog box, the Publish command then creates the following files:
 The Flash movie for the Web file (SWF).
 Alternate images in a variety of formats that appear automatically if the Flash Player is
not available (GIF, JPEG, PNG, and QuickTime).
 The supporting HTML document required to display the movie (or alternative image) in a
browser and control browser settings.
 Stand-alone projectors for both Windows and Macintosh systems and QuickTime videos
from Flash movies (EXE, HQX, or MOV files, respectively).

6.7.Experiment 12: Animation Basics (I)

1) Create a flash animation for each of the following operations


a) Simulate a bouncing ball
b) Simulate the collision of two balls
c) Change a circle into a square
2) Publish your work in html, gif, and shockwave format

6.8.Experiment 13: Animation Basics (II)

1) Create a flash animation for each of the following operations


a) Represent the growing moon
b) Simulate the movement of a cloud
2) Publish your work in html, gif, and shockwave format
