
SEMINAR REPORT

On

SUMMER TRAINING

Undergone at

Pyrotech Electronics Pvt. Ltd., Udaipur

Submitted

By

Hussain Cementwala

Department of Electronics & Communication Engineering

GLOBAL INSTITUTE OF TECHNOLOGY

ITS -1, IT Park, EPIP, SITAPURA, JAIPUR


DEPARTMENT

Of

ELECTRONICS & COMMUNICATION ENGINEERING

CERTIFICATE

This is to certify that Mr. Hussain Cementwala of the Electronics and Communication Department,

2008-2012 batch, presented his work in partial fulfillment of the requirements for the award

of the degree of BACHELOR OF TECHNOLOGY in Electronics & Communication,

submitted to Rajasthan Technical University, Kota.

He has undergone 30 days of industrial training, from 13-06-2011 to 13-07-2011, at

PYROTECH ELECTRONICS PVT. LTD., UDAIPUR, which is part of the curriculum

prescribed by RTU.

He has presented the full report of his work in a very efficient and attractive manner.

Date: / / 2011

Place: Jaipur

Mr. J.P. Agarwal Mr. Pranay Sharma

(HOD, ECE Dept.) (Asst. Professor)


ACKNOWLEDGEMENT

The beatitude, bliss and euphoria that accompany the successful completion of any task

would be incomplete without an expression of gratitude to the people who made it

possible.

I feel immense pleasure in conveying my heartiest thanks and gratitude to the respected

faculty member Mr. J.P. Agarwal for his guidance, suggestions and encouragement.

This acknowledgement would not be complete if I failed to express my deep sense of obligation

to almighty God and my family; without their help this work would not have been

completed.

Last but not least, I thank all the concerned ones who directly or indirectly helped me in this

work.

Signature

Hussain Cementwala

PAGE INDEX


Topic Page No.

ABSTRACT

1.

1.1 HEADING

1.2 HEADING

2. TITLE OF CHAPTER TWO

2.1 HEADING

2.2 HEADING

:

N-1

N. CONCLUSION

BIBLIOGRAPHY

APPENDIX – A. POWER POINT SLIDES


FIGURE INDEX

Figure Page No.

1.1 Figure 1 About

1.2 Figure 2 About

2.1 Figure 3 About

2.2 Figure 4 About

: :

N Figure N About


PYROTECH ELECTRONICS PVT. LTD.

About Pyrotech

Pyrotech, a renowned company working in the field of control room equipment,

electronics and sensors, has climbed up the value chain by collaborating with global

companies such as Planar (large video screens), Subklew (mosaic tile systems), Kraus &

Naimer (switches), Weigel Meters (meters) and others. Truly, it can now be referred to as a

“CONTROL ROOM SOLUTIONS PROVIDER”

History

Pyrotech was established in 1976 by a dedicated team of four energetic technocrats. A

small house in the small city of Udaipur witnessed the battling wits and wisdom of four

visionaries, capitalizing on their intellect to sketch out a fresh new concept.

The quantum of success can be measured from the fact that the company has registered

an average growth rate of 55% since its inception. The reason behind this success is a

strong emphasis on customer satisfaction in all respects. Over the past three decades the

organization has been certified to ISO 9001, EMS 14001 and OHSAS 18001.

Products

1. Turnkey Control Room Solutions

2. Mosaic Panels and Desks

3. Industrial and Office Furniture

4. Local Instrument Racks and Enclosures

5. Large Video Screen (LVS)

6. Explosion Proof and Purge Panels

7. Gas Analyzer Panels/Shelters

8. Relay Panels

9. Electrical Panels

10. Mimic Panels

11. Marshalling Panels

12. PLC panels

13. LT Switching Panels

14. MCC Panels

15. Pneumatic Panels

16. Centralized control desks

17. Computer controls and CRT desks

18. Test bench

19. 19” Rack Enclosure

Pyrotech Presence in India

Head Office:-

Pyrotech Electronics Pvt. Ltd. Unit-II

E-329, Road No. 12 MIA, Udaipur-313003, (Rajasthan) India

Tel no.: 0294-2492122/31/34

Fax No.: 0294-2492130,2414458

Mobile No.: 09352501210

Email: pyrotech@pyrotechindia.com

info@pyrotech-furniture.com


Head Marketing Office:-

Pyrotech Marketing & Projects Pvt. Ltd.

917-920 International Trade Tower Nehru Place, Delhi- 110019

Mobile No.: 09312269440

Tel No.: 011-26464922,26419702,26423648

Fax.: 011-26464922, 26965932

Email: pmppl@pyrotechindia.com

Southern Regional Office:-

Mr. P.D. Raghuveer

Pyrotech Southern Region office

No.22, NHCS Layout, 13th Main Road Vijaynagar, Bangalore-560040

Mobile No.: 09342816887

Tel No.:080-23509680

Fax No.:080-23509680

Email: bangalore@pyrotechindia.com

Our Regional Offices:-

Baroda

Mr. Y.K. Shah

Email: baroda@pyrotechindia.com

Ahmedabad

Mr. K.G. Singh

Email: ahmedabad@pyrotechindia.com

Bhopal

Mr. Sanjiv Bisht

Email: bhopal@pyrotechindia.com

Mumbai

Mr. Sanjiv P. Nambiar

Email: hp@pyrotechindia.com

Hyderabad

Mr. Hridesh Porwal

Email: hyderabad@pyrotechindia.com

Kolkata

Mr. Prasantha Mulujee

Email: kolkata@pyrotechindia.com


1. INTRODUCTION

Often the signals we wish to process are in the time domain, but in order to process them more

easily other information, such as frequency, is required. Mathematical transforms translate

the information of signals into different representations.

Image processing

Why do we use image processing?

Image processing has been developed in response to three major problems concerned

with pictures:

A.) Picture digitization and coding, to facilitate transmission, printing and storage of

pictures.

B.) Picture enhancement and restoration, in order, for example, to interpret more easily

pictures of the surface of other planets taken by various probes.

C.) Picture segmentation and description, as an early stage of machine vision.

Image processing nowadays refers mainly to the processing of digital images.

What is an image?

A panchromatic image is a 2-D light intensity function f(x,y), where x and y are spatial

coordinates and the value of f at (x,y) is proportional to the brightness of the scene at

that point. If we have a multispectral image, f(x,y) is a vector, each component of which

indicates the brightness of the scene at point (x,y) in the corresponding spectral band.

What is a digital image?

A digital image is an image f(x,y) that has been discretised both in spatial coordinates

and in brightness. It is represented by a 2-D integer array, or a series of 2-D arrays, one

for each colour band. The digitised brightness value is called the gray level. Each element

of the array is called a pixel or pel, derived from the term “picture element”.
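The digitised picture described above can be held directly as an integer array. A minimal sketch in Python with NumPy (the pixel values here are made up for illustration):

```python
import numpy as np

# A tiny grayscale digital image: a discretised 2-D array of 8-bit
# gray levels, one array element per pixel.
image = np.array([
    [  0,  64, 128],
    [ 64, 128, 192],
    [128, 192, 255],
], dtype=np.uint8)

height, width = image.shape   # the spatial sampling grid
gray = image[0, 2]            # gray level of the pixel at row 0, column 2
```

A multispectral image would simply use one such array per spectral band.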

1.1 History of image processing:-

Many of the techniques of digital image processing, or digital picture processing as it often

was called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts

Institute of Technology, Bell Laboratories, University of Maryland, and a few other

research facilities, with application to satellite imagery, wire-photo standards conversion,

medical imaging, videophone, character recognition, and photograph enhancement. The

cost of processing was fairly high, however, with the computing equipment of that era. That

changed in the 1970s, when digital image processing proliferated as cheaper computers and

dedicated hardware became available. Images then could be processed in real time, for

some dedicated problems such as television standards conversion. As general-purpose

computers became faster, they started to take over the role of dedicated hardware for all but

the most specialized and computer-intensive operations.


With the fast computers and signal processors available in the 2000s, digital image

processing has become the most common form of image processing and generally, is used

because it is not only the most versatile method, but also the cheapest.

Digital image processing technology for medical applications was inducted into the Space

Foundation Space Technology Hall of Fame in 1994.

1.2 Typical operations of image processing:-

• Euclidean geometry transformations such as enlargement, reduction, and rotation

• Color corrections such as brightness and contrast adjustments, color mapping, color

balancing, quantization, or color translation to a different color space

• Digital compositing or optical compositing (combination of two or more images),

which is used in film-making to make a "matte"

• Interpolation, demosaicing, and recovery of a full image from a raw image format

using a Bayer filter pattern

• Image registration, the alignment of two or more images

• Image differencing and morphing

• Image recognition, for example, extracting text from an image using optical

character recognition, or checkbox and bubble values using optical mark recognition

• Image segmentation

• High dynamic range imaging by combining multiple images

• Geometric hashing for 2-D object recognition with affine invariance


2. IMAGE PROCESSING:

Image processing is a technique to enhance raw images received from

cameras/sensors placed on satellites, space probes and aircraft, or pictures taken in

normal day-to-day life, in various applications.

Most image processing techniques involve treating the image as a two-dimensional

signal and applying standard signal processing techniques to it. Image processing may:

Remove noise.

Improve the contrast of the image.

Remove blurring caused by movement of the camera during image acquisition.
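As a small sketch of the noise-removal case (our own construction, not from the report): a 3x3 median filter implemented with NumPy, which replaces an isolated noisy pixel with the median of its neighbourhood:

```python
import numpy as np

def median3x3(img):
    # Pad by replicating border pixels, then take the median of each
    # 3x3 neighbourhood -- a simple impulse-noise smoothing step.
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i+3, j:j+3])
    return out

# A uniform image with one "salt" noise pixel.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255

smoothed = median3x3(img)   # the outlier is replaced by the local median
```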

2.1 DIGITAL IMAGE PROCESSING:-

A few decades ago, image processing was done largely in the analog domain,

chiefly by optical devices. Analog image processing refers to the alteration of an image

through electrical or optical means. These optical methods are still essential to applications

such as holography.

Due to the significant increase in computer speed, these techniques are increasingly

being replaced by digital image processing methods.

Digital image processing: problems and applications

The term digital image processing generally refers to the processing of a two-

dimensional picture by a digital computer. A digital image is an array of real

or complex numbers represented by a finite number of bits. An image given in the

form of a transparency, slide, photograph or chart is first digitized and stored as a

matrix of binary digits in computer memory. This digitized image can then be

processed and displayed on a high-resolution TV monitor.

Digital image processing has a broad spectrum of applications, such as remote

sensing via satellites and other spacecraft, image transmission and storage for

business applications, medical processing, radar, sonar, and robotics, and automated

inspection of industrial parts.


2.2 COMPONENT OF IMAGE PROCESSING SYSTEM

As recently as the mid-1980s, numerous models of image processing systems being

sold throughout the world were rather substantial peripheral devices that attached to

equally substantial host computers. In the late 1980s and early 1990s the market

shifted to image processing hardware in the form of single boards designed to be

compatible with industry-standard buses and to fit into engineering workstation

cabinets and personal computers.

Although large-scale image processing systems are still being sold for massive

imaging applications, such as the processing of satellite images, the trend continues

toward miniaturizing and blending general-purpose small computers with

specialized image processing hardware.


With reference to sensing, two elements are required to acquire digital images. The

first is a physical device that is sensitive to the energy radiated by the object we wish

to image. The second, called a digitizer, is a device for converting the output of

the physical sensing device into digital form. For instance, in a digital video camera the

sensor produces an electrical output proportional to light intensity, and the digitizer

converts these outputs to digital data.

Specialized image processing hardware usually consists of the digitizer just mentioned,

plus hardware that performs other primitive operations, such as an arithmetic logic unit

which performs arithmetic and logical operations. This type of hardware is called a

front-end subsystem, and operates at high speed.

The computer in an image processing system is chosen to achieve the required level of

performance. Almost any well-equipped PC-type machine is suitable for image processing tasks.


Software for image processing consists of specialized modules that perform specific

tasks. A well-designed package also includes the capability for the user to write code

that, as a minimum, utilizes the specialized modules.

Mass storage capability is a must in image processing applications. An uncompressed

image of 1024x1024 pixels, at 8 bits per pixel, requires 1 MB of storage.

Digital storage for image processing falls into three categories:

1. Short-term storage for use during image processing.

2. On-line storage for relatively fast recall.

3. Archival storage.
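The storage figure quoted above is simple arithmetic; a quick check:

```python
# Storage needed for an uncompressed 1024 x 1024 image at 8 bits per pixel.
width, height, bits_per_pixel = 1024, 1024, 8
size_bytes = width * height * bits_per_pixel // 8   # 8 bits per byte
size_mb = size_bytes / (1024 * 1024)                # 1.0 MB, as stated
```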

Image displays in use today are mainly colour TV monitors. Monitors are driven by

the outputs of image and graphics display cards. In some cases it is necessary to have

stereo displays.

Hardcopy devices for recording images include laser printers, inkjet printers, heat-

sensitive devices and digital units such as optical and CD-ROM disks. For

presentation, images are displayed on film transparencies or in a digital medium if

image projection equipment is used.

Networking is almost a default function in any computer system in use today. The

key consideration in image transmission is bandwidth. In dedicated networks this is

typically not a problem, but communication with remote sites via the internet is

not always as efficient. This situation is improving quickly as a result of optical fibre

and other broadband technologies.

3. STEPS IN IMAGE PROCESSING

• Image acquisition

• Preprocessing

• Segmentation


• Representation and Description

• Recognition

• Interpretation

• Knowledge base

3.1 IMAGE ACQUISITION

Image data is, conceptually, a three-dimensional array of pixels. Each of the three arrays in

the example is called a band. The number of rows specifies the image height of a band, and

the number of columns specifies the image width of a band.

Monochrome images, such as a grayscale image, have only one band. Color images have

three or more bands, although a band does not necessarily have to represent color. For

example, satellite images of the earth may be acquired in several different spectral bands,

such as red, green, blue, and infrared.

In a color image, each band stores the red, green, and blue (RGB) components of an

additive image, or the cyan, magenta, and yellow (CMY) components of a three-color

subtractive image, or the cyan, magenta, yellow, and black (CMYK) components of a

four-color subtractive image. Each pixel of an image is composed of a set of samples. For an

RGB pixel, there are three samples; one each for red, green, and blue.

An image is sampled into a rectangular array of pixels. Each pixel has an (x,y) coordinate

that corresponds to its location within the image. The x coordinate is the pixel's horizontal

location; the y coordinate is the pixel's vertical location. Within JAI, the pixel at location (0,0)

is in the upper left corner of the image, with x coordinates increasing in value to the

right and y coordinates increasing in value downward. Sometimes the x coordinate is

referred to as the pixel number and the y coordinate as the line number.
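The band-and-coordinate layout described here can be illustrated with a NumPy array (our own sketch; the values are arbitrary):

```python
import numpy as np

# A 2x2 RGB image: three samples per pixel, pixel (0, 0) at the
# upper left, x increasing to the right and y increasing downward.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)   # indexed as [y, x, band]
rgb[0, 1] = (255, 0, 0)                     # a pure red pixel at x=1, y=0

red_band = rgb[:, :, 0]                     # one band per colour component
```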


3.2 SEGMENTATION

Segmentation refers to the process of partitioning a digital image into

multiple segments (sets of pixels, also known as super pixels). The goal of segmentation is

to simplify and/or change the representation of an image into something that is more

meaningful and easier to analyze. Image segmentation is typically used to locate objects and

boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the

process of assigning a label to every pixel in an image such that pixels with the same label

share certain visual characteristics.

The result of image segmentation is a set of segments that collectively cover the entire

image, or a set of contours extracted from the image (see edge detection). Each of the pixels

in a region are similar with respect to some characteristic or computed property, such

as color, intensity, or texture.
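The simplest instance of assigning a label to every pixel is global thresholding on intensity; a minimal sketch (our own example values):

```python
import numpy as np

# Segment a grayscale image into two labels by thresholding: pixels
# with the same label share a visual characteristic (brightness).
img = np.array([[ 10,  20, 200],
                [ 15, 210, 220],
                [ 12,  18,  25]], dtype=np.uint8)

labels = (img > 128).astype(np.uint8)   # 1 = bright object, 0 = background
object_pixels = int(labels.sum())       # number of pixels labelled as object
```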

3.3 REPRESENTATION AND DESCRIPTION

After segmentation, the image needs to be described and interpreted.

• Representation: an object may be represented by its boundary.

• Description: the object boundary may be described by its length, orientation, or number of

concavities.

4. SIGNAL PROCESSING TECHNIQUE

4.1 ONE DIMENSIONAL TECHNIQUE

Resolution.

Dynamic range.

Bandwidth.

Filtering.

Differential operators.

Edge detection.

Domain modulation

4.2 TWO- DIMENSIONAL TECHNIQUE:-

Image representation.

Image preprocessing.

Image enhancement.

Image restoration.

Image analysis.

Image reconstruction.

Image data compression.


5. IMAGING GEOMETRY

• TRANSLATION

• SCALING

• ROTATION

• PERSPECTIVE TRANSFORMATION

5.1 TRANSLATION

Image translation is a term related to machine translation services for mobile devices

(mobile translation). Image translation refers to an additional service provided by mobile

translation applications where the user can take a photo of some printed text (menu list, road

sign, document etc.), apply optical character recognition (OCR) technology to it to extract

any text contained in the image, and then have this text translated into a language of their

choice.

5.2 MAGNIFICATION


• This is usually done to improve the scale of display for visual interpretation, or

sometimes to match the scale of one image to another.

5.3 REDUCTION

• Image reduction increases the incidence of high frequencies and causes several

pixels to collapse into one.

5.4 SCALING

Scaling is the process of resizing a digital image. Scaling is a non-trivial process that

involves a trade-off between efficiency, smoothness and sharpness. As the size of an

image is increased, the pixels which comprise the image become increasingly visible,

making the image appear "soft". Conversely, reducing an image will tend to enhance its

smoothness and apparent sharpness.

• The image may be magnified vertically and reduced horizontally.
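A minimal sketch of the scaling operation (our own construction, using nearest-neighbour sampling, the simplest and blockiest choice):

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    # Nearest-neighbour scaling: map each output pixel back to the
    # closest source pixel. Fast, but magnification looks blocky,
    # which is one side of the smoothness/sharpness trade-off.
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]

img = np.array([[  0, 255],
                [255,   0]], dtype=np.uint8)
big = resize_nearest(img, 4, 4)   # each source pixel becomes a 2x2 block
```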


5.5 ROTATION

• Image rotation is performed by computing the inverse transformation for every

destination pixel.

One technique of rotation is the 3-pass shear rotation.
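The inverse-transformation approach mentioned above can be sketched directly (our own minimal construction, with nearest-neighbour sampling and rotation about the image centre; not the 3-pass shear method):

```python
import numpy as np

def rotate_inverse(img, angle_deg):
    # For every destination pixel, apply the inverse rotation to find
    # which source pixel it came from; destinations that map outside
    # the source image are left at 0.
    h, w = img.shape
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            # Inverse transform of the destination coordinates.
            xs = (x - cx) * np.cos(theta) + (y - cy) * np.sin(theta) + cx
            ys = -(x - cx) * np.sin(theta) + (y - cy) * np.cos(theta) + cy
            xi, yi = int(round(xs)), int(round(ys))
            if 0 <= xi < w and 0 <= yi < h:
                out[y, x] = img[yi, xi]
    return out

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=np.uint8)
rotated = rotate_inverse(img, 90)
```

Iterating over destination pixels (rather than source pixels) guarantees every output pixel gets exactly one value, which is why rotation is usually implemented this way.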

5.6 IMAGE ENHANCEMENT

• Image enhancement is the improvement of digital image quality, without knowledge

about the source of degradation: making an image lighter or darker, increasing or

decreasing contrast, pseudo-colouring, noise filtering, sharpening and magnifying.

• Programs --> image enhancements --> image editors.

The aim of image enhancement is to improve the interpretability or perception of

information in the image.

EXAMPLE:


1. NOISE SMOOTHING.

2. CONTRAST MANIPULATION.

6. IMAGE RECTIFICATION AND REGISTRATION

Geometric distortions manifest themselves as errors in the position of a

pixel relative to other pixels in the scene and with respect to its absolute

position within some defined map projection. If left uncorrected, these

geometric distortions render any data extracted from the image useless. This

is particularly so if the information is to be compared to other data sets, be it

from another image or a GIS data set. Distortions occur for many reasons.

[Table: screen colour gun assignment -- Blue gun: green band; Green gun: infrared band; Red gun: red band]

6.1 DIGITAL IMAGE PROCESSING

For instance, distortions occur due to changes in platform attitude (roll, pitch and yaw),

altitude, earth rotation, earth curvature, panoramic distortion and detector delay. Most of

these distortions can be modeled mathematically and are removed before you buy an image.

Changes in attitude, however, can be difficult to account for mathematically, and so a

procedure called image rectification is performed. Satellite systems are, however,

geometrically quite stable, and geometric rectification is a simple procedure based on a

mapping transformation relating real ground coordinates, say in easting and northing, to

image line and pixel coordinates.

6.2 RECTIFICATION

Rectification is a process of geometrically correcting an image so that it can be represented

on a planar surface, conform to other images or conform to a map. That is, it is the process

by which the geometry of an image is made planimetric. It is necessary when accurate area,

distance and direction measurements are required to be made from the imagery. It is

achieved by transforming the data from one grid system into another grid system using a

geometric transformation. Rectification is not necessary if there is no distortion in the

image. For example, if an image file is produced by scanning or digitizing a paper map that

is in the desired projection system, then that image is already planar and

does not require rectification unless there is some skew or rotation of the image. Scanning

and digitizing produce images that are planar, but do not contain any map coordinate

information. These images need only to be geo-referenced, which is a much simpler process

than rectification. In many cases, the image header can simply be updated with new map

coordinate information. This involves redefining the map coordinate of the upper left corner

of the image and the cell size (the area represented by each pixel).

Ground Control Points (GCPs) are specific pixels in the input image for which the output

map coordinates are known. By using more points than necessary to solve the transformation

equations, a least-squares solution may be found that minimizes the sum of the squares of

the errors. Care should be exercised when selecting ground control points, as their number,

quality and distribution affect the result of the rectification.

Once the mapping transformation has been determined, a procedure called resampling is

employed. Resampling matches the coordinates of image pixels to their real-world

coordinates and writes a new image on a pixel-by-pixel basis: using the reference image, the

pixels are resampled so that new data file values for the output file can be calculated.
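The least-squares GCP fit described above can be sketched with NumPy (our own construction: a synthetic affine mapping and five hypothetical GCPs, one more than an affine fit strictly needs):

```python
import numpy as np

# Hypothetical GCPs: image (x, y) pixel coordinates and the ground
# (easting, northing) coordinates known for those same pixels.
pixel = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
ground = pixel @ np.array([[2.0, 0.0],
                           [0.0, 2.0]]) + np.array([500.0, 300.0])

# Fit an affine mapping ground = [x, y, 1] @ coef by least squares;
# with more GCPs than unknowns this minimises the squared residuals.
A = np.hstack([pixel, np.ones((len(pixel), 1))])    # design matrix [x, y, 1]
coef, *_ = np.linalg.lstsq(A, ground, rcond=None)   # 3x2 affine coefficients

pred = A @ coef   # predicted ground coordinates at the GCPs
```

In a real rectification the residuals at the GCPs would be inspected before resampling the image onto the new grid.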

7. IMAGE ENHANCEMENT TECHNIQUES

Image enhancement techniques improve the quality of an image as perceived by a human.

These techniques are most useful because many satellite images, when examined on a colour

display, give inadequate information for image interpretation. There is no conscious effort to

improve the fidelity of the image with regard to some ideal form of the image. There exists

a wide variety of techniques for improving image quality. Contrast stretching, density

slicing, edge enhancement, and spatial filtering are the more commonly used techniques.

Image enhancement is attempted after the image is corrected for geometric and radiometric

distortions. Image enhancement methods are applied separately to each band of a

multispectral image. Digital techniques have been found to be more satisfactory than

photographic techniques for image enhancement, because of the precision and wide variety

of digital processes.

[Figure: image rectification -- input and reference images with GCP locations; the grids are fitted together using polynomial equations, and output grid pixel values are assigned using a resampling method (source: modified from the ERDAS Field Guide)]

7.1 CONTRAST

Contrast generally refers to the difference in luminance or grey level values in an image and

is an important characteristic. It can be defined as the ratio of the maximum intensity to the

minimum intensity over an image. The contrast ratio has a strong bearing on the resolving

power and detectability of an image. The larger this ratio, the easier it is to interpret the image.

Satellite images often lack adequate contrast and require contrast improvement.

7.1.1 Contrast Enhancement

Contrast enhancement techniques expand the range of brightness values in an image so that

the image can be efficiently displayed in a manner desired by the analyst. The density

values in a scene are literally pulled farther apart, that is, expanded over a greater range.

The effect is to increase the visual contrast between two areas of different uniform densities.

This enables the analyst to discriminate easily between areas initially having a small

difference in density.

7.1.2 Linear Contrast Stretch

This is the simplest contrast stretch algorithm. The grey values in the original image and the

modified image follow a linear relation in this algorithm. A density number in the low range

of the original histogram is assigned to extremely black and a value at the high end is

assigned to extremely white. The remaining pixel values are distributed linearly between

these extremes. The features or details that were obscure on the original image will be clear

in the contrast stretched image. Linear contrast stretch operation can be represented

graphically. To provide optimal contrast and color variation in color composites the small

range of grey values in each band is stretched to the full brightness range of the output or

display unit.
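The linear stretch described here maps the image minimum to black and the maximum to white, with everything else distributed linearly in between; a minimal sketch (our own example values):

```python
import numpy as np

def linear_stretch(img):
    # Map the image's minimum gray value to 0 (extremely black) and
    # its maximum to 255 (extremely white); remaining values are
    # distributed linearly between these extremes.
    lo, hi = img.min(), img.max()
    return ((img.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)

# A low-contrast image: all gray values crowded into 100..130.
img = np.array([[100, 110],
                [120, 130]], dtype=np.uint8)
stretched = linear_stretch(img)   # now spans the full 0..255 range
```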

7.1.3 Non-Linear Contrast Enhancement

In these methods, the input and output data values follow a non-linear transformation. The

general form of the non-linear contrast enhancement is defined by y = f(x), where x is the

input data value and y is the output data value. Non-linear contrast enhancement

techniques have been found to be useful for enhancing the colour contrast between nearby

classes and subclasses of a main class.

One type of non-linear contrast stretch involves scaling the input data logarithmically. This

enhancement has its greatest impact on the brightness values found in the darker part of the

histogram. It can be reversed to enhance values in the brighter part of the histogram by

scaling the input data using an inverse logarithmic function.


7.2 SPATIAL FILTERING

A characteristic of remotely sensed images is a parameter called spatial frequency, defined

as the number of changes in brightness value per unit distance for any particular part of an

image. If there are very few changes in brightness value over a given area in an image, this

is referred to as a low-frequency area. Conversely, if the brightness value changes

dramatically over short distances, this is an area of high frequency. Spatial filtering is the

process of dividing the image into its constituent spatial frequencies, and selectively

altering certain spatial frequencies to emphasize some image features. This technique

increases the analyst's ability to discriminate detail. The three types of spatial filters used in

remote sensor data processing are: low-pass filters, band-pass filters and high-pass filters.

7.2.1 Low-Frequency Filtering in the Spatial Domain

Image enhancements that de-emphasize or block high spatial frequency detail are low-

frequency or low-pass filters. The simplest low-frequency filter evaluates a particular input

pixel brightness value, BVin, and the pixels surrounding the input pixel, and outputs a new

brightness value, BVout, that is the mean of this convolution. The size of the

neighbourhood convolution mask or kernel is usually 3x3, 5x5, 7x7, or 9x9. The simple

smoothing operation will, however, blur the image, especially at the edges of objects.

Blurring becomes more severe as the size of the kernel increases. Using a 3x3 kernel can

result in the low-pass image being two lines and two columns smaller than the original

image. Techniques that can be applied to deal with this problem include artificially

extending the original image beyond its border by repeating the original border pixel

brightness values, or replicating the averaged brightness values near the borders, based on

the image behaviour within a few pixels of the border. The most commonly used low-pass

filters are mean, median and mode filters.
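The mean low-pass filter just described, including the border-extension trick that keeps the output the same size as the input, can be sketched as follows (our own construction):

```python
import numpy as np

def mean_filter3x3(img):
    # BVout is the mean of the 3x3 convolution neighbourhood around
    # BVin. Replicating the border pixels ("edge" padding) avoids the
    # output shrinking by two lines and two columns.
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i+3, j:j+3].mean()
    return out

flat = np.full((4, 4), 7.0)
smoothed = mean_filter3x3(flat)   # a uniform region is unchanged
```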

7.2.2 High-Frequency Filtering in the Spatial Domain

High-pass filtering is applied to imagery to remove the slowly varying components and

enhance the high-frequency local variations. Brightness values tend to be highly correlated

in a nine-element window. Thus, the high-frequency filtered image will have a relatively

narrow intensity histogram. This suggests that the output from most high-frequency filtered

images must be contrast stretched prior to visual analysis.

7.2.3 Edge Enhancement in the Spatial Domain

For many remote sensing earth science applications, the most valuable information that may

be derived from an image is contained in the edges surrounding various objects of interest.

Edge enhancement delineates these edges and makes the shapes and details comprising the

image more conspicuous and perhaps easier to analyze. Generally, what the eyes see as

pictorial edges

are simply sharp changes in brightness value between two adjacent pixels. The edges may

be enhanced using either linear or nonlinear edge enhancement techniques.

7.2.4 Linear Edge Enhancement

A straightforward method of extracting edges in remotely sensed imagery is the application

of a directional first-difference algorithm, which approximates the first derivative between two

adjacent pixels. The algorithm produces the first difference of the image input in the

horizontal, vertical, and diagonal directions. The Laplacian operator generally highlights

points, lines, and edges in the image and suppresses uniform and smoothly varying regions.

Human vision physiological research suggests that we see objects in much the same way.

Hence, the use of this operation gives a more natural look than many of the other edge-

enhanced images.
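The Laplacian operator's behaviour (zero response on uniform regions, strong response at edges) is easy to verify with the standard 3x3 kernel (our own sketch):

```python
import numpy as np

# The classic 3x3 Laplacian kernel: its coefficients sum to zero, so
# the response on a uniform region is exactly zero, while sharp
# brightness changes between adjacent pixels produce large responses.
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def convolve_pixel(img, kernel, i, j):
    # Response of the kernel centred on interior pixel (i, j).
    return (img[i-1:i+2, j-1:j+2] * kernel).sum()

flat = np.full((3, 3), 50.0)                 # uniform region
step = np.array([[0.0, 0.0, 100.0]] * 3)     # vertical edge

flat_resp = convolve_pixel(flat, laplacian, 1, 1)   # zero on uniform area
edge_resp = convolve_pixel(step, laplacian, 1, 1)   # non-zero at the edge
```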

7.2.5 Band ratioing

Sometimes differences in brightness values from identical surface materials are caused by

topographic slope and aspect, shadows, or seasonal changes in sunlight illumination angle

and intensity. These conditions may hamper the ability of an interpreter or classification

algorithm to identify surface materials or land use correctly in a remotely sensed image.

Fortunately, ratio transformations of the remotely sensed data can, in certain instances, be

applied to reduce the effects of such environmental conditions. In addition to minimizing

the effects of environmental factors, ratios may also provide unique information not

available in any single band that is useful for discriminating between soils and vegetation.
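The shadow-suppressing property of a band ratio can be shown with two made-up pixels of the same surface, one sunlit and one shadowed (our own illustrative numbers; the near-infrared/red pair is a common choice for vegetation):

```python
import numpy as np

# Two pixels of the same vegetated surface: shading halves the
# brightness in both bands, so the per-pixel band ratio is unchanged.
nir = np.array([160.0, 80.0])   # near-infrared band: sunlit, shadowed
red = np.array([ 80.0, 40.0])   # red band:           sunlit, shadowed

ratio = nir / red               # identical for both pixels despite shading
```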

8. IMAGE TRANSFORMATION

8.1 INTRODUCTION

Two dimensional unitary transforms play an important role in image processing. The term

image transform refers to a class of unitary matrices used for representation of images.

In analogy with 1-D signals that can be represented by an orthogonal series of basis

functions, we can similarly represent an image in terms of a discrete set of basis arrays

called "basis images". These are generated by unitary matrices.

Alternatively, an N x N image can be represented as an N^2 x 1 vector. An image

transform provides a set of coordinates or basis vectors for this vector space.

8.2 1-D Transforms:

For a one-dimensional sequence {u(n), n = 0, 1, ..., N-1} representing a vector u of size N,

a unitary transform is:

v = A u  =>  v(k) = \sum_{n=0}^{N-1} a(k,n) u(n),  for 0 <= k <= N-1   (1)

where A^{-1} = A^{*T} (unitary).

This implies u = A^{*T} v,

or, u(n) = \sum_{k=0}^{N-1} v(k) a^{*}(k,n),  for 0 <= n <= N-1   (2)

Equation (2) can be viewed as a series representation of the sequence u(n). The columns of

A^{*T}, i.e. the vectors a_k^{*} = {a^{*}(k,n), 0 <= n <= N-1}, are called the "basis vectors" of A.

The series coefficients v(k) give a representation of the original sequence u(n) and are useful in

compression, filtering, feature extraction and other analysis.
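Equations (1) and (2) can be checked numerically with a concrete unitary matrix; here we use the unitary DFT matrix A(k,n) = exp(-2*pi*i*k*n/N)/sqrt(N) as the example transform (our choice, one of many valid unitary A):

```python
import numpy as np

# Build the N x N unitary DFT matrix and verify A^{-1} = A^{*T}.
N = 4
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
A = np.exp(-2j * np.pi * k * n / N) / np.sqrt(N)

unitary = np.allclose(A @ A.conj().T, np.eye(N))   # True: A is unitary

u = np.array([1.0, 2.0, 3.0, 4.0])
v = A @ u                # forward transform, equation (1)
u_back = A.conj().T @ v  # series representation of u(n), equation (2)
```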

8.3 2-D ORTHOGONAL & UNITARY TRANSFORMS

As applied to image processing, a general orthogonal series expansion for an N × N image is a pair of transformations of the form

    v(k,l) = Σ_{m,n=0}^{N-1} u(m,n) a_{k,l}(m,n),   0 ≤ k,l ≤ N-1        (3)

    u(m,n) = Σ_{k,l=0}^{N-1} v(k,l) a^*_{k,l}(m,n),   0 ≤ m,n ≤ N-1        (4)

where {a_{k,l}(m,n)} is called an "image transform". It is a set of complete orthonormal discrete basis functions satisfying the properties:

1) Orthonormality:   Σ_{m,n=0}^{N-1} a_{k,l}(m,n) a^*_{k',l'}(m,n) = δ(k-k', l-l')

2) Completeness:   Σ_{k,l=0}^{N-1} a_{k,l}(m,n) a^*_{k,l}(m',n') = δ(m-m', n-n')

The elements v(k,l) are the transform coefficients and V ≜ {v(k,l)} is the transformed image.

The orthonormality property assures that any truncated series expansion of the form

    U_{P,Q}(m,n) ≜ Σ_{k=0}^{P-1} Σ_{l=0}^{Q-1} v(k,l) a^*_{k,l}(m,n),   P ≤ N, Q ≤ N

will minimize the sum-of-squares error

    σ²_e = Σ_{m,n=0}^{N-1} |u(m,n) - U_{P,Q}(m,n)|²

when the coefficients v(k,l) are given by (3). The completeness property assures that this error will be zero for P = Q = N.
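The truncation behaviour above can be illustrated with a small experiment. The sketch below (an illustrative choice of transform, not from the report) uses an orthonormal DCT matrix as the unitary transform, zeroes all coefficients outside a P × Q block, and confirms that the reconstruction error shrinks as the block grows and is exactly zero at P = Q = N.

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix (an illustrative unitary transform)
idx = np.arange(N)
C = np.cos(np.pi * (2 * idx[None, :] + 1) * idx[:, None] / (2 * N)) * np.sqrt(2.0 / N)
C[0] /= np.sqrt(2.0)
assert np.allclose(C @ C.T, np.eye(N))

rng = np.random.default_rng(1)
U = rng.standard_normal((N, N))
V = C @ U @ C.T                    # separable 2-D transform coefficients

def truncated_error(P, Q):
    Vt = V.copy()
    Vt[P:, :] = 0.0                # keep only coefficients with k < P, l < Q
    Vt[:, Q:] = 0.0
    U_pq = C.T @ Vt @ C            # truncated series reconstruction
    return np.sum((U - U_pq) ** 2)

errs = [truncated_error(P, P) for P in range(1, N + 1)]
# Completeness: zero error for P = Q = N
assert np.isclose(errs[-1], 0.0)
```

Because the retained blocks nest as P grows, the error sequence is non-increasing; the final term vanishes exactly, as the completeness property demands.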

8.4 SEPARABLE UNITARY TRANSFORM

The number of multiplications and additions required to compute the transform coefficients v(k,l) in equation (3) is O(N⁴). This is too large for practical-size images. The cost is reduced if the transform is restricted to be separable, i.e.

    a_{k,l}(m,n) = a_k(m) b_l(n) ≜ a(k,m) b(l,n)

where {a_k(m), k = 0, 1, ..., N-1} and {b_l(n), l = 0, 1, ..., N-1} are 1-D complete orthonormal sets of basis vectors. On imposition of the completeness and orthonormality properties we can show that A ≜ {a(k,m)} and B ≜ {b(l,n)} are unitary matrices,

i.e.   A A^{*T} = I = A^{*T} A   and   B B^{*T} = I = B^{*T} B

Often B is chosen the same as A. Then

    v(k,l) = Σ_{m,n=0}^{N-1} a(k,m) u(m,n) a(l,n)   ↔   V = A U A^T        (5)

    u(m,n) = Σ_{k,l=0}^{N-1} a^*(k,m) v(k,l) a^*(l,n)   ↔   U = A^{*T} V A^*        (6)

Equation (5) can be written as V^T = A (A U)^T, which means that (5) can be performed by first transforming each column of U and then transforming each row of the result to obtain the rows of V.
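The two-pass (columns first, then rows) evaluation of equation (5) can be sketched as follows; the choice of the DFT matrix here is purely illustrative, and any unitary A would do.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
# Unitary DFT matrix as an example of A
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
U = rng.standard_normal((N, N))

# Direct evaluation of eq. (5): V = A U A^T
V_direct = F @ U @ F.T

# Two-pass evaluation: transform every column of U,
# then every row of the intermediate result
cols = F @ U                   # column pass
V_twopass = (F @ cols.T).T     # row pass: V^T = A (A U)^T

assert np.allclose(V_direct, V_twopass)
```

The two passes cost 2N matrix-vector products of length N each, which is the source of the 2N³ operation count quoted below.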

Basis Images: Let a^*_k denote the k-th column of A^{*T}. Define the matrices

    A^*_{k,l} = a^*_k (a^*_l)^T

and the matrix inner product of two N × N matrices F and G as

    ⟨F, G⟩ = Σ_{m,n=0}^{N-1} f(m,n) g^*(m,n)

Then equations (6) and (5) give the series representation

    U = Σ_{k,l=0}^{N-1} v(k,l) A^*_{k,l}   and   v(k,l) = ⟨U, A^*_{k,l}⟩

Any image U can thus be expressed as a linear combination of the N² matrices A^*_{k,l}, called "basis images". Therefore any N × N image can be expanded in a series using a complete set of N² basis images.

Example: Let

    A = (1/√2) [ 1   1 ]        U = [ 1  2 ]
               [ 1  -1 ]            [ 3  4 ]

The transformed image is

    V = A U A^T = [  5  -1 ]
                  [ -2   0 ]

The basis images are found as the outer products of the columns of A^{*T}:

    A^*_{0,0} = (1/2) [1] (1  1) = (1/2) [ 1  1 ]
                      [1]                [ 1  1 ]

    A^*_{0,1} = (1/2) [1] (1  -1) = (1/2) [ 1  -1 ] = (A^*_{1,0})^T
                      [1]                 [ 1  -1 ]

    A^*_{1,1} = (1/2) [ 1] (1  -1) = (1/2) [  1  -1 ]
                      [-1]                 [ -1   1 ]

The inverse transformation recovers the image:

    A^{*T} V A^* = (1/2) [ 1   1 ] [  5  -1 ] [ 1   1 ] = [ 1  2 ] = U
                         [ 1  -1 ] [ -2   0 ] [ 1  -1 ]   [ 3  4 ]
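The worked 2 × 2 example above can be reproduced numerically (an illustrative check using NumPy):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
U = np.array([[1.0, 2.0], [3.0, 4.0]])

# Forward transform V = A U A^T, as in the example
V = A @ U @ A.T
assert np.allclose(V, [[5.0, -1.0], [-2.0, 0.0]])

# Basis images: outer products of the columns of A^{*T}
# (A is real here, so these are outer products of the rows of A)
basis = {(k, l): np.outer(A[k], A[l]) for k in range(2) for l in range(2)}

# U is the sum of v(k, l) times the corresponding basis image
U_rec = sum(V[k, l] * basis[(k, l)] for k in range(2) for l in range(2))
assert np.allclose(U_rec, U)
```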

8.5 DIMENSIONALITY OF IMAGE TRANSFORMS

The 2N³ computations for V can also be reduced by restricting the choice of A to fast transforms. This implies that A has a structure that allows a factorization of the type

    A = A^(1) A^(2) ... A^(p)

where the A^(i), i = 1, ..., p (p << N), are matrices with only a few non-zero entries, say r, with r << N. A multiplication of the type y = A x is then accomplished in rpN operations. For several transforms, such as the Fourier, sine, cosine and Hadamard transforms, p = log₂ N, and the operations reduce to the order of N log₂ N (or N² log₂ N for N × N images). Depending on the transform, an operation is defined as one multiplication plus one addition, or as one addition or subtraction, as in the Hadamard transform.
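The FFT is exactly such a fast factorization, with p = log₂ N. The sketch below does not measure operation counts; it only confirms, illustratively, that the factored algorithm computes the same product as the explicit O(N²) DFT matrix multiplication.

```python
import numpy as np

N = 64
# Explicit (unnormalized) DFT matrix W[k, n] = exp(-i 2 pi k n / N)
W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)

rng = np.random.default_rng(3)
x = rng.standard_normal(N)

# O(N^2): explicit matrix-vector product with the DFT matrix
y_matrix = W @ x
# O(N log N): the FFT evaluates the same product via a factored form of W
y_fft = np.fft.fft(x)

assert np.allclose(y_matrix, y_fft)
```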

Kronecker products: If A and B are M₁ × M₂ and N₁ × N₂ matrices, we define the Kronecker product as

    A ⊗ B ≜ {a(m,n) B}

Consider the transform V = A U A^T, or

    v(k,l) = Σ_{m,n=0}^{N-1} a(k,m) u(m,n) a(l,n)        (7)

If v_k and u_m denote the k-th and m-th row vectors of V and U respectively, then (7) becomes

    v_k^T = Σ_m a(k,m) A u_m^T = Σ_m [A ⊗ A]_{k,m} u_m^T

where [A ⊗ A]_{k,m} is the (k,m)-th block of A ⊗ A. If U and V are row-ordered into the vectors u and v respectively, then V = A U A^T implies

    v = (A ⊗ A) u

The number of operations required for implementing equation (7) reduces from O(N⁴) to O(2N³).
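The row-ordering identity v = (A ⊗ A)u can be verified directly with NumPy's kron (an illustrative check; the identity holds for any square A, unitary or not):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4
A = rng.standard_normal((N, N))
U = rng.standard_normal((N, N))

V = A @ U @ A.T

# Row-order U and V into vectors (NumPy's ravel is row-major)
u_vec = U.ravel()
v_vec = V.ravel()

# v = (A kron A) u
assert np.allclose(v_vec, np.kron(A, A) @ u_vec)
```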

8.6 PROPERTIES OF UNITARY TRANSFORMS

1) Energy conservation: In the unitary transformation v = A u,

    ‖v‖² = ‖u‖²

Proof:   ‖v‖² ≜ Σ_{k=0}^{N-1} |v(k)|² = v^{*T} v = u^{*T} A^{*T} A u = u^{*T} u = ‖u‖²

Thus a unitary transformation preserves the signal energy, or equivalently the length of the vector u in N-dimensional vector space. That is, every unitary transformation is simply a rotation of u in N-dimensional vector space. Alternatively, a unitary transform is a rotation of the basis coordinates, and the components of v are the projections of u on the new basis. Similarly, for 2-D unitary transformations it can be proved that

    Σ_{m,n=0}^{N-1} |u(m,n)|² = Σ_{k,l=0}^{N-1} |v(k,l)|²

Example: Consider the vector x = [x₀, x₁]^T and the rotation matrix

    A = [  cos θ   sin θ ]
        [ -sin θ   cos θ ]

This gives y₀ = a₀^T x and y₁ = a₁^T x, so the transformation y = A x can be written as

    y = [ y₀ ] = [  cos θ   sin θ ] [ x₀ ] = [  x₀ cos θ + x₁ sin θ ]
        [ y₁ ]   [ -sin θ   cos θ ] [ x₁ ]   [ -x₀ sin θ + x₁ cos θ ]

with the new basis given by a₀ and a₁; that is, the transform rotates x through the angle θ.

2) Energy Compaction Property: Most unitary transforms have a tendency to pack a large fraction of the average energy of an image into relatively few transform coefficients. Since the total energy is preserved, this implies that many transform coefficients will contain very little energy. If μ_u and R_u denote the mean and covariance of a vector u, then the corresponding quantities for v are

    μ_v ≜ E[v] = E[A u] = A μ_u

    R_v = E[(v - μ_v)(v - μ_v)^{*T}] = E[A (u - μ_u)(u - μ_u)^{*T} A^{*T}] = A R_u A^{*T}

The variances of the transform coefficients are given by the diagonal elements of R_v, i.e.

    σ²_v(k) = [R_v]_{k,k} = [A R_u A^{*T}]_{k,k}

Since A is unitary, it follows that

    Σ_{k=0}^{N-1} |μ_v(k)|² = μ_v^{*T} μ_v = μ_u^{*T} A^{*T} A μ_u = Σ_{n=0}^{N-1} |μ_u(n)|²

and

    Σ_{k=0}^{N-1} σ²_v(k) = Tr[A R_u A^{*T}] = Tr[R_u] = Σ_{n=0}^{N-1} σ²_u(n)

so that

    Σ_{k=0}^{N-1} E[|v(k)|²] = Σ_{n=0}^{N-1} E[|u(n)|²]

The average energy E[|v(k)|²] of the transform coefficients v(k) tends to be unevenly distributed, even though it may be evenly distributed for the input sequence u(n).
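Energy compaction can be demonstrated on a concrete covariance model. The sketch below is an illustrative experiment; the first-order Markov covariance R_u(i,j) = ρ^{|i-j|} and the DCT are my choices, not the report's. It computes R_v = A R_u A^{*T} and shows that the trace is preserved while most of it concentrates in a few diagonal entries.

```python
import numpy as np

N = 16
rho = 0.95
# Covariance of a first-order Markov sequence: R_u[i, j] = rho^|i - j|
idx = np.arange(N)
R_u = rho ** np.abs(idx[:, None] - idx[None, :])

# Orthonormal DCT-II matrix (illustrative unitary transform)
C = np.cos(np.pi * (2 * idx[None, :] + 1) * idx[:, None] / (2 * N)) * np.sqrt(2.0 / N)
C[0] /= np.sqrt(2.0)

R_v = C @ R_u @ C.T
var_v = np.sort(np.diag(R_v))[::-1]   # coefficient variances, largest first

# Total energy (trace) is preserved ...
assert np.isclose(var_v.sum(), np.trace(R_u))
# ... but it is packed into few coefficients: the top quarter
# of the coefficients holds a large majority of the energy
assert var_v[: N // 4].sum() > 0.8 * var_v.sum()
```

For this highly correlated model the DCT behaves close to the optimal Karhunen-Loeve transform, which is one reason the DCT is used in practical image coders.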

For a 2-D random field u(m,n), with mean μ_u(m,n) and covariance r(m,n; m',n'), the transform coefficients v(k,l) satisfy the properties

    μ_v(k,l) = Σ_{m,n=0}^{N-1} a(k,m) a(l,n) μ_u(m,n)

    σ²_v(k,l) = E[|v(k,l) - μ_v(k,l)|²]
              = Σ_m Σ_n Σ_{m'} Σ_{n'} a(k,m) a(l,n) r(m,n; m',n') a^*(k,m') a^*(l,n')

If the covariance of u(m,n) is separable, i.e.

    r(m,n; m',n') = r₁(m,m') r₂(n,n')

then the variances of the transform coefficients can be written as a separable product

    σ²_v(k,l) = σ₁²(k) σ₂²(l) ≜ [A R₁ A^{*T}]_{k,k} [A R₂ A^{*T}]_{l,l}

where R₁ = {r₁(m,m')} and R₂ = {r₂(n,n')}.

3) Decorrelation: When the input vector elements are highly correlated, the transform coefficients tend to be uncorrelated. That is, the off-diagonal terms of the covariance matrix R_v tend to be small compared to the diagonal elements.

4) Other properties: (a) The determinant and the eigenvalues of a unitary matrix have unity magnitude. (b) The entropy of a random vector is preserved under a unitary transformation, i.e. the average information of the random vector is preserved.

Example: The entropy of an N × 1 Gaussian random vector u with mean μ and covariance R_u is

    H(u) = (N/2) log₂(2πe |R_u|^{1/N})

To show that H(u) is invariant under any unitary transformation, let v = A u, so that u = A^{-1} v = A^{*T} v. Then

    H(A u) = (N/2) log₂(2πe |R_v|^{1/N})

Using the definition of R_u, we have

    R_v = E[(A u - A μ)(A u - A μ)^{*T}] = A R_u A^{*T}

Since A A^{*T} = A^{*T} A = I and the determinant of a unitary matrix has unity magnitude,

    |R_v| = |A| |R_u| |A^{*T}| = |R_u|

Therefore

    H(A u) = (N/2) log₂(2πe |R_u|^{1/N}) = H(u)
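The invariance derived above can be checked numerically (illustrative sketch; the covariance and the orthogonal matrix below are randomly generated assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 6

# A random symmetric positive-definite covariance matrix
M = rng.standard_normal((N, N))
R_u = M @ M.T + N * np.eye(N)

def gaussian_entropy_bits(R):
    # H = (N/2) log2(2 pi e |R|^(1/N)) for an N-variate Gaussian
    n = R.shape[0]
    det = np.linalg.det(R)
    return 0.5 * n * np.log2(2 * np.pi * np.e * det ** (1.0 / n))

# A real unitary (orthogonal) matrix from a QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
R_v = Q @ R_u @ Q.T

assert np.isclose(abs(np.linalg.det(Q)), 1.0)   # unit-magnitude determinant
assert np.isclose(gaussian_entropy_bits(R_v), gaussian_entropy_bits(R_u))
```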

9. S-TRANSFORM

The S-transform is a time-frequency representation known for its local spectral phase properties. A key feature of the S-transform is that it uniquely combines a frequency-dependent resolution of the time-frequency space with absolutely referenced local phase information. This allows one to define the meaning of phase in a local spectrum setting, and results in many desirable characteristics. One drawback of the S-transform is the redundant representation of the time-frequency space and the computing resources this consumes. The cost of this redundancy is amplified in multidimensional applications such as image analysis. A more efficient representation is introduced here as an orthogonal set of basis functions that localizes the spectrum and retains the advantageous phase properties of the S-transform.

This approach allows one to directly collapse the orthogonal local spectral representation over time to the complex-valued Fourier transform spectrum. Because it maintains the phase properties of the S-transform, one can perform localized cross-spectral analysis to measure phase shifts between each of multiple components of two time series as a function of both time and frequency. In addition, one can define a generalized instantaneous frequency (IF) applicable to broadband nonstationary signals.

One popular method is the short-time Fourier transform and the related Gabor transform. A closely related method is complex demodulation, which produces a series of band-pass filtered voices and is also related to the filter-bank theory of wavelets. Another family of time-frequency representations is the Cohen class of generalized time-frequency distributions (GTFD).

The S-transform

The continuous S-transform of a function h(t) is

    S(τ, f) = ∫ h(t) (|f|/√(2π)) e^{-(τ-t)² f²/2} e^{-i2πft} dt

A voice S(τ, f₀) is defined as a one-dimensional function of time for a constant frequency f₀, which shows how the amplitude and phase for this exact frequency change over time. In the discrete case, there are computational advantages to using the equivalent frequency-domain definition of the S-transform,

    S(τ, f) = ∫ H(α + f) e^{-2π²α²/f²} e^{i2πατ} dα,   f ≠ 0

where H(f) is the Fourier transform of h(t).
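A minimal discrete S-transform can be sketched from the standard frequency-domain formulation (this is an illustrative implementation with my own variable names, not code from the report): for each voice n, shift the spectrum to H[m+n], apply the Gaussian window e^{-2π²m²/n²}, and inverse transform; the zero-frequency voice is the signal mean. The time-average property discussed in section 9.3 (summing a voice over τ yields H[n]) then holds exactly.

```python
import numpy as np

def s_transform(h):
    """Discrete S-transform via the frequency-domain definition (voices 0..N/2)."""
    N = len(h)
    H = np.fft.fft(h)
    m = np.arange(N)
    m_centered = (m + N // 2) % N - N // 2   # frequencies -N/2 .. N/2-1 in FFT order
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = np.mean(h)                     # zero-frequency voice: the mean of h
    for n in range(1, N // 2 + 1):
        window = np.exp(-2.0 * np.pi ** 2 * m_centered ** 2 / n ** 2)
        S[n] = np.fft.ifft(np.roll(H, -n) * window)   # voice for frequency index n
    return S

rng = np.random.default_rng(6)
h = rng.standard_normal(64)
S = s_transform(h)

# Time-average property: summing each voice over tau recovers the Fourier spectrum
H = np.fft.fft(h)
assert np.allclose(S.sum(axis=1), H[: len(h) // 2 + 1])
```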

Why use the S-transform approach?

What distinguishes the S-transform from the many time-frequency representations available is that it uniquely combines progressive resolution with absolutely referenced phase information. Daubechies has stated that progressive resolution gives a fundamentally more sound time-frequency representation. Absolutely referenced phase means that the phase information given by the S-transform is always referenced to time t = 0, which is also true for the phase given by the Fourier transform. This holds for each S-transform sample of the time-frequency space.

This is in contrast to a wavelet approach, where the phase of the wavelet transform is relative to the center (in time) of the analyzing wavelet. Thus as the wavelet translates, the reference point of the phase translates with it. This is called "locally referenced phase" to distinguish it from the phase properties of the S-transform.

From one point of view, local spectral analysis is a generalization of the global Fourier spectrum. In fact, since no record of observations is infinite, all discrete spectral analysis ever performed on measured data has been local (i.e., restricted to the time of observation). Thus there must be a direct relationship between a local spectral representation and the global Fourier spectrum. This philosophy can be stated as the fundamental principle of S-transform analysis: the time average of the local spectral representation should result identically in the complex-valued global Fourier spectrum. This leads to phase values of the local spectrum that are obvious and meaningful. While there can be several important definitions of "proper phase," a natural one would be as follows. Consider a signal h(t) = A exp(i(2πf₀t + φ)). The Fourier transform spectrum of this signal at the frequency f₀ would return the amplitude A and the phase constant φ.

In order to carry this understanding of phase into the realm of wavelet theory, time-frequency representations and local spectra, a transform is required that returns a voice (a function of time) for the frequency f₀ with constant amplitude A and constant phase φ. Such is the case with the S-transform, and thus it can be described as a generalization of the Fourier transform to the case of nonstationary signals. In summary, the S-transform has the following unique properties: it uniquely combines frequency-dependent resolution with absolutely referenced phase, and therefore the time average of the S-transform equals the Fourier spectrum. It simultaneously estimates the local amplitude spectrum and the local phase spectrum, whereas a wavelet approach is only capable of probing the local amplitude/power spectrum. It independently probes the positive and negative frequency spectra, whereas many wavelet approaches cannot be applied to a complex time series. It is sampled at the discrete Fourier transform frequencies, unlike the CWT, where the sampling is arbitrary.

9.1 DOST

9.1.1 The basis functions of the discrete orthonormal S-transform (DOST)

There are several reasons to desire an orthonormal time-frequency version of the S-

transform. An orthonormal transformation takes an N-point time series to an N-point time-

frequency representation, thus achieving the maximum efficiency of representation. Also,

each point of the result is linearly independent from any other point. The transformation

matrix (taking the time series to the DOST representation) is orthogonal, meaning that the

inverse matrix is equal to the complex conjugate transpose. Because the transformation is orthonormal, the vector norm is preserved. Thus a Parseval theorem applies, stating that the norm of the time series equals the norm of the DOST. An orthonormal transform is referred to as an energy-preserving transform.

a time series h[kT ] and the basis functions defined as a function of [kT ], with the

parameters ν (a frequency variable indicative of the center of a frequency band and

analogous to the “voice” of the wavelet transform), β (indicating the width of the frequency

band), and τ (a time variable indicating the time localization).

9.2 Derivation of the S-Transform from the Wavelet Transform

The continuous wavelet transform (CWT) can be defined as a series of correlations of the time series with a function called a wavelet:

    W(τ, d) = ∫ h(t) w(t - τ, d) dt

The S-transform of a function h(t) can be defined as a CWT with a specific mother wavelet, multiplied by a phase factor:

    S(τ, f) = e^{-i2πfτ} W(τ, f)

NOTE: in the original IEEE 1996 paper, the negative sign is omitted from the phase factor. It should read as it appears above.

The mother wavelet is defined as

    w(t, f) = (|f|/√(2π)) e^{-t²f²/2} e^{-i2πft}

The mother wavelet above does not satisfy the admissibility condition of having a zero mean, and therefore the S-transform is not strictly a CWT. Writing out the above explicitly gives the S-transform:

    S(τ, f) = ∫ h(t) (|f|/√(2π)) e^{-(τ-t)²f²/2} e^{-i2πft} dt

9.3 Properties of the S-Transform

The inverse of the S-transform is obtained through the Fourier transform. If the S-transform is indeed a representation of the local spectrum, one would expect that the simple operation of averaging the local spectra over time would give the Fourier transform spectrum. This is indeed the case with the S-transform:

    ∫ S(τ, f) dτ = H(f)

where H(f) is the Fourier transform of h(t). It follows that h(t) is exactly recoverable from S(τ, f):

    h(t) = ∫ [ ∫ S(τ, f) dτ ] e^{i2πft} df

9.3.1 Linearity

The S-transform is a linear operation on the time series h(t). This is important for the case of additive noise, in which one can model the data as data(t) = signal(t) + noise(t); the S-transform then gives

    S{data} = S{signal} + S{noise}

This is an advantage over the bilinear class of TFRs, where one finds, in addition, cross terms between the signal and the noise.

9.3.2 The S-Transform and Generalized Instantaneous Frequency

It can be shown that the S-transform provides an extension of instantaneous frequency (IF) to broadband signals. A particular voice of the S-transform can be written in amplitude-phase form as

    S(τ, f₀) = A(τ, f₀) e^{iΦ(τ, f₀)}

and a generalized instantaneous frequency for that voice can then be defined from the rate of change of the absolutely referenced phase:

    IF(τ, f₀) = (1/2π) ∂/∂τ [ 2πf₀τ + Φ(τ, f₀) ]

10. FIELDS OF IMAGE PROCESSING

• Medicine

• Astronomy

• Microscopy

• Seismology

• Defense

• Industrial quality control

• Publication and entertainment industries

Digital images are widely available from the Internet, CD-ROMs, and inexpensive charge-coupled-device (CCD) cameras, scanners, and frame grabbers. Software for manipulating images is also widely available.

10.1 Applications:

• Photography and printing

• Satellite image processing

• Medical image processing

• Face detection, feature detection, face identification

• Microscope image processing

• Car barrier detection

• Morphological image processing


11. CONCLUSION

Improvements:

• block artifact reduction

• mosquito noise reduction

• adaptive contrast enhancement

• sharpness and texture enhancement

• selective color correction

Benefits: refined images, faster report turnaround, easing of a growing workload, etc.

By embracing new image processing technologies and further refinements in image processing techniques, users are likely to find them more beneficial, not less, in the future, while further refinements in image processing techniques will be delivered at reduced cost.


References

1) www.google.com

2) Fundamentals of Digital Image Processing, Anil K. Jain

3) www.cs.dartmouth.edu/~farid

4) IEEE site
