
Image Processing Part 3

Accessing the Pixel Data

There is a one-to-one correspondence between pixel coordinates and the coordinates MATLAB® uses for matrix subscripting. This correspondence makes the relationship between an image's data matrix and the way the image is displayed easy to understand.

For example, the data for the pixel in the fifth row, second column is stored in the matrix element (5,2). You use normal MATLAB matrix subscripting to access the values of individual pixels. For example, the MATLAB code A(2,15) returns the value of the pixel at row 2, column 15 of the image A.
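A minimal illustration (assuming the sample image pout.tif, which ships with the Image Processing Toolbox):

A = imread('pout.tif');   % A is a 2-D matrix of uint8 intensity values
v = A(2,15)               % value of the pixel at row 2, column 15
A(2,15) = 0;              % individual pixels can be assigned the same way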

MIRROR IMAGE GENERATION


% This program produces the mirror image of the image passed to it and
% displays both the original and the mirror image.
a=imread('pout.tif');
[r,c]=size(a);
result=a;                   % preallocate output with the same size and class
for i=1:r
    k=1;
    for j=c:-1:1
        result(i,k)=a(i,j); % copy each row in reverse column order
        k=k+1;
    end
end
subplot(1,2,1),imshow(a)
subplot(1,2,2),imshow(result)
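For reference, the loop above is equivalent to a single call to MATLAB's built-in function:

result = fliplr(a);   % flip the matrix in the left/right direction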
TASK 1
Write MATLAB code that reads a grayscale image and generates the flipped image of the original image.
Your output should be like the one given below.
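A minimal sketch of one possible solution (assuming an upside-down flip of the sample image pout.tif; flipud is the vertical counterpart of fliplr):

a = imread('pout.tif');
flipped = flipud(a);     % reverse the row order (top-to-bottom flip)
subplot(1,2,1),imshow(a)
subplot(1,2,2),imshow(flipped)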
TASK 2
Write MATLAB code that will do the following (one possible solution is sketched below):
1. Read any grayscale image.
2. Display that image.
3. Display the image again such that pixels with intensity values below 50 are shown as black, pixels with intensity values above 150 are shown as white, and the pixels in between are displayed as they are.
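A minimal sketch of one possible solution, assuming an 8-bit grayscale image (so black is 0 and white is 255):

a = imread('pout.tif');      % any grayscale uint8 image
subplot(1,2,1),imshow(a)
b = a;
b(a<50) = 0;                 % intensities below 50 become black
b(a>150) = 255;              % intensities above 150 become white
subplot(1,2,2),imshow(b)     % values in between are displayed unchanged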

Edge Detection Using Operators

Aim:

To detect the edges of grayscale images.

Syntax
The following code demonstrates edge detection using first-difference operators:
% numbers of colors for display
sncols=128;
ncols=32;
% load the built-in 'trees' demo image (defines X and its colormap)
load('trees');
% show the original image
% (showgimg and spcnvmat below are helper functions assumed to be on the
% MATLAB path for this lab; they are not built-ins)
figure(1);
showgimg(real(X),sncols);
drawnow;
% construct first-difference convolution kernels, zero-padded to the image size
[m,n] = size(X);
gs = [1 -1]; ge = [];
hs = [1 -1]; he = [];
g = [gs,zeros(1,m-length(gs)-length(ge)),ge];
h = [hs,zeros(1,n-length(hs)-length(he)),he];
% construct convolution matrices as sparse matrices
Y = spcnvmat(g);
Z = spcnvmat(h);
Wg = Y*X;    % differences down the columns (highlights horizontal edges)
Wh = X*Z';   % differences along the rows (highlights vertical edges)
% show transformed images
figure(2);
showgimg(Wg,ncols);
drawnow;
figure(3);
showgimg(Wh,ncols);
drawnow;
% sum of absolute differences approximates the gradient magnitude
figure(4);
showgimg(abs(Wg)+abs(Wh),ncols);
drawnow;
Theory

Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Edges in images are areas with strong intensity contrasts: a jump in intensity from one pixel to the next. Detecting edges in an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of the image.

There are many ways to perform edge detection, but the majority of the methods may be grouped into two categories, gradient and Laplacian. The gradient method detects edges by looking for the maximum and minimum in the first derivative of the image. The Laplacian method searches for zero crossings in the second derivative of the image to find edges. An edge has the one-dimensional shape of a ramp, so calculating the derivative of the image can highlight its location: for a signal with an edge shown by a jump in intensity, the first derivative peaks at the edge and the second derivative crosses zero there.

In the multi-scale view of edge detection, the intensity changes discovered in each channel (each scale of filtering) are represented by oriented primitives called zero-crossing segments, and evidence is given that this representation is complete. Intensity changes in images arise from surface discontinuities or from reflectance or illumination boundaries, and these all have the property that they are spatially localized. Because of this, the zero-crossing segments from the different channels are not independent, and rules are deduced for combining them into a description of the image. This description is called the raw primal sketch.
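For comparison, the Image Processing Toolbox's built-in edge function implements both families of methods directly. A brief sketch (assuming the sample image pout.tif):

I = imread('pout.tif');
BW1 = edge(I,'sobel');   % gradient method: first-derivative extrema
BW2 = edge(I,'log');     % Laplacian of Gaussian: second-derivative zero crossings
subplot(1,3,1),imshow(I)
subplot(1,3,2),imshow(BW1)
subplot(1,3,3),imshow(BW2)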
