CHAPTER 1
INTRODUCTION TO DIGITAL IMAGES
1.1 Representation of a digital image
The information contained in images can be represented in entirely different ways. Among them, the most important is the spatial representation. A monochrome image is a 2-dimensional light intensity function, f(m, n), where m and n are spatial coordinates and the value of f at (m, n) is proportional to the brightness of the image at that point. A digital image is an image that has been discretized both in spatial coordinates and in brightness, and it is represented by a 2-dimensional integer array. The digitized brightness value is called the grey level value. Thus, a digital image can be represented by the following matrix:

    f = [ f(0,0)      f(0,1)      ...   f(0,N-1)
          f(1,0)      f(1,1)      ...   f(1,N-1)
          ...
          f(M-1,0)    f(M-1,1)    ...   f(M-1,N-1) ]
Each element of the array is called a pixel or a pel, derived from the term "picture element". Each pixel represents not just a point in the image but rather a rectangular region, the elementary cell of the grid. The position of the pixel is given in the common notation for matrices: the first index, m, denotes the position of the row, and the second, n, the position of the column. If the digital image contains M × N pixels, i.e., is represented by an M × N matrix, the index m runs from 0 to M-1 and the index n from 0 to N-1. M gives the number of rows, N the number of columns.
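As a concrete sketch of this indexing convention, the small NumPy array below (with made-up grey values) stands in for an M × N digital image; f[m, n] selects row m and column n:

```python
import numpy as np

# A hypothetical 3 x 4 (M x N) digital image; each entry is an 8-bit grey level.
f = np.array([[ 10,  20,  30,  40],
              [ 50,  60,  70,  80],
              [ 90, 100, 110, 120]], dtype=np.uint8)

M, N = f.shape            # M = number of rows, N = number of columns
print(M, N)               # 3 4
print(f[0, 0])            # row 0, column 0 -> 10
print(f[M - 1, N - 1])    # last row (m = M-1), last column (n = N-1) -> 120
```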
Figure 1.1: Representation of digital images by arrays of discrete points on a rectangular grid
1.2 Some Basic Concepts of Digital Images
1.2.1 Pixels and Bitmaps
Digital images are composed of pixels (short for picture elements). Each pixel represents the color (or gray level, for black and white photos) at a single point in the image, so a pixel is like a tiny dot of a particular color. By measuring the color of an image at a large number of points, we can create a digital approximation of the image from which a copy of the original can be reconstructed. Pixels are a little like grain particles in a conventional photographic image, but they are arranged in a regular pattern of rows and columns and store information somewhat differently. A digital image is a rectangular array of pixels, sometimes called a bitmap. How many pixels are sufficient? There is no general answer to this question. For visual observation of a digital image, the pixel size should be smaller than the spatial resolution of the visual system from a nominal observer distance. For a given task, the pixel size should be smaller than the finest scales of the objects that we want to study.
1.2.2 Resolution
Resolution is the number of pixels packed into a unit of measure (e.g. an inch), and it determines the quality of the image. Image resolution most commonly refers to the number of pixels per inch, called "dots per inch" or dpi. In most cases, higher resolution (higher dpi) results in better image quality: if the image contains sufficient pixels, it appears to be continuous. Remember, however, that final image quality is limited by the quality of your image source. While image resolution can always be reduced, increasing resolution will not improve image quality.
Figure 1.2: The figure shows the same image with (a) 3 × 4, (b) 12 × 16, (c) 48 × 64, and (d) 192 × 256 pixels
1.2.3 Bit-depth
Bit-depth refers to the number of bits assigned to a single pixel and determines the number of colors from which a particular pixel value can be selected. A bit is the lowest level of electronic value in a digital image; each bit can have one of two values, 1 or 0, and defines a pixel's color value in combination with other bits. For example, a one-bit image can assign only one of two values to a single pixel: 1 or 0 (black or white). An 8-bit grayscale image can assign one of 256 (2^8) grey levels to a single pixel. A 24-bit RGB image (8 bits each for the red, green and blue color channels) can assign one of 16.8 million (2^24) colors to a single pixel.

1.3 Types of Digital Image
There are three basic types of digital image: 1) binary, 2) grayscale, and 3) true color or RGB.
1) Binary: Each pixel is just black or white. Since there are only two possible values for each pixel, we only need one bit per pixel, so such images can be very efficient in terms of storage. Images for which a binary representation may be suitable include text (printed or handwritten), fingerprints, or architectural plans.
Figure 1.3: A binary image
2) Grayscale: Each pixel is a shade of grey, normally from 0 (black) to 255 (white). This range means that each pixel can be represented by eight bits, or exactly one byte. This is a very natural range for image file handling, and 256 different grey levels is sufficient for the recognition of most natural objects. Other grayscale ranges are used, but generally they are a power of 2. Such images arise in medicine (X-rays), images of printed works, and so on.
Figure 1.3: A grayscale image
3) True color or RGB: Here each pixel has a particular color, that color being described by the amount of red, green and blue in it. This means that for every pixel there correspond three values. If each of these components has a range 0-255, this gives a total of 256^3 = 16,777,216 different possible colors in the image. This is enough colors for any image. Since the total number of bits required for each pixel is 24, such images are also called 24-bit color images. Such an image may be considered as consisting of a "stack" of three matrices, representing the red, green and blue values for each pixel.
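The three types can be illustrated by the array shapes and element sizes they typically use in practice; the tiny arrays below are hypothetical examples, not data from this chapter:

```python
import numpy as np

# Binary: one bit of information per pixel (stored here as booleans).
binary = np.array([[0, 1],
                   [1, 0]], dtype=bool)

# Grayscale: one byte per pixel, grey levels from 0 (black) to 255 (white).
gray = np.array([[0, 128],
                 [200, 255]], dtype=np.uint8)

# True color (RGB): a "stack" of three 8-bit channels per pixel,
# giving 256**3 = 16,777,216 possible colors (24 bits per pixel).
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 0, 0)     # a pure red pixel

print(256 ** 3)             # 16777216
```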
Figure 1.4: A true color image

1.4 Image File Type and Size

1.4.1 Image File Type
There are lots of file types that are used to encode digital images, such as JPG, PNG, GIF, TIFF and BMP. One reason for so many file types is the need for compression. Image files can be quite large, and larger files mean more disk usage and slower download. Compression is a term used to describe ways of cutting the size of the file. There are two types of compression, "lossy" and "lossless". A lossless compressed file discards no information; it looks for more efficient ways to represent an image, while making no compromises in accuracy. In contrast, lossy compressed files accept some degradation in the image in order to achieve smaller file size. Another reason for the many file types is that images differ in the number of colors they contain. If an image has few colors, a file type can be designed to exploit this as a way of reducing file size.

The 5 most common digital image file types are as follows:
1) TIFF (Tagged Image File Format): It is a very flexible format that can be lossless or lossy. The details of the image storage algorithm are included as part of the file. In practice, TIFF is used almost exclusively as a lossless image storage format that uses no compression at all; most graphics programs that use TIFF do not compress. Consequently, file sizes are quite big.
2) PNG (Portable Network Graphics): It is also a lossless storage format. However, in contrast with common TIFF usage, it looks for patterns in the image that it can use to compress file size. The compression is exactly reversible, so the image is recovered exactly.
3) GIF (Graphics Interchange Format): It creates a table of up to 256 colors from a pool of 16 million. If the image has fewer than 256 colors, GIF can render the image exactly. When the image contains many colors, software that creates the GIF uses any of several algorithms to approximate the colors in the image with the limited palette of 256 colors available; better algorithms search the image to find an optimum set of 256 colors. Even so, GIF compression is unkind to images that contain many colors.
4) JPG (Joint Photographic Experts Group): It is optimized for photographs and similar continuous tone images that contain many colors. It stores information as 24-bit color. JPG works by analyzing images and discarding kinds of information that the eye is least likely to notice, and it can achieve astounding compression ratios even while maintaining very high image quality.
5) Camera RAW: It is a lossless compressed file format that is proprietary for each digital camera manufacturer and model. A camera RAW file contains the 'raw' data from the camera's imaging sensor. Some image editing programs have their own version of RAW too; however, camera RAW is the most common type of RAW file. The advantage of camera RAW is that it contains the full range of color information from the sensor: the RAW file contains 12 to 14 bits of color information for each pixel.

1.4.2 Image File Size
File size refers to the amount of memory needed to store a given image document. File size is directly proportional to the number of pixels in an image: the more pixels, the greater the file size. Since resolution measures dots per inch along each linear dimension, file size is proportional to the square of the image resolution; for instance, the file size of a 300 dpi image is 9 times that of a 100 dpi image. File size also depends on the kind of pixels that comprise the image, since a full-color pixel needs more memory than a black & white pixel. For example, a 100 dpi color image will consume more memory than a 100 dpi grayscale image; a good rule of thumb is that color images are approximately three times larger than grayscale images. The file format of an image document can also affect its file size.
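These proportionality rules are easy to check with arithmetic. The sketch below computes hypothetical uncompressed sizes as pixels × bytes per pixel (real files are usually smaller because of compression):

```python
# Uncompressed size in bytes: number of pixels times bytes per pixel.
def file_size_bytes(width_in, height_in, dpi, bytes_per_pixel):
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * bytes_per_pixel

# A 300 dpi scan is 9 times the size of a 100 dpi scan of the same page,
# because file size grows with the square of the resolution.
gray_100 = file_size_bytes(4, 6, 100, 1)   # 4 x 6 inch grayscale scan
gray_300 = file_size_bytes(4, 6, 300, 1)
print(gray_300 // gray_100)                # 9

# A 24-bit color image needs 3 bytes per pixel: three times the grayscale size.
color_100 = file_size_bytes(4, 6, 100, 3)
print(color_100 // gray_100)               # 3
```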
CHAPTER 2
IMAGE ENHANCEMENT

2.1 Objective of Image Enhancement
Millions of pictures, ranging from biomedical images to images of natural surroundings and activities around us, enrich our daily visual experience. All these images create elegant perception in our sensory organs. They also contain a lot of important information and convey specific meanings in diverse domains of application. When such pictures are converted from one form to another by processes such as imaging, scanning, or transmitting, the quality of the output image may be inferior to that of the original input picture. There is thus a need to improve the quality of such images, so that the output image is visually more pleasing to human observers from a subjective point of view. The growing need to develop automated systems for image interpretation also necessitates that the quality of the picture to be interpreted should be free from noise and other aberrations. Thus it is important to perform preprocessing operations on the image so that the resultant preprocessed image is better suited for machine interpretation. Image enhancement thus has both a subjective and an objective role, and may be viewed as a set of techniques for improving the subjective quality of an image and also for enhancing the accuracy rate in automated object detection and picture interpretation.

Enhancement refers to accentuation or sharpening of image features, such as contrast, boundaries, edges, and so on. It increases the dynamic range of the chosen features with the final aim of improving the image quality. The process of image enhancement, however, in no way increases the information content of the image data. Enhancement has another purpose as well: to undo the degradation effects which might have been caused by the imaging system or the channel. Modeling of the degradation process, in general, is not required for enhancement; however, knowledge of the degradation process may help in the choice of the enhancement technique. The realm of image enhancement covers contrast and edge enhancement, noise filtering, feature sharpening, and so on. These methods find applications in visual information display, feature extraction, object recognition, and so on.

Image enhancement techniques, such as contrast stretching, map each gray level into another gray level using a predetermined transformation function; to perform this task, it is important to increase the dynamic range of the chosen features in the image. One example is the histogram equalization method, where the input gray levels are mapped so that the output gray level distribution is uniform; this is a powerful method for the enhancement of low-contrast images. Other enhancement techniques may perform local neighborhood operations as in convolution, transform operations as in the discrete Fourier transform, and other operations as in pseudo-coloring, where a gray level image is mapped into a color image by assigning different colors to different features. These algorithms may employ linear or nonlinear, local or global filters, and are generally interactive and application dependent.

2.2 Mathematical Model for Image Enhancement
Image enhancement simply means transforming an image f into an image g using a transformation T. The values of pixels in images f and g are denoted by r and s, respectively. The pixel values r and s are related by the expression

    s = T(r)        (2.1)

where T is a transformation that maps a pixel value r into a pixel value s. The results of this transformation are mapped into the grey scale range, as we are dealing here only with grey scale digital images. As a consequence, the pixel values (intensities) of the output image are modified according to the transformation function applied to the input values. An important issue in image enhancement is quantifying the criterion for enhancement; many image enhancement techniques are empirical and require interactive procedures to obtain satisfactory results.

2.3 Classification of Image Enhancement Techniques
There exist many techniques that can enhance a digital image without spoiling it. These enhancement operations are performed in order to modify the image brightness, contrast, or the distribution of the grey levels. The enhancement methods can broadly be divided into the following two categories:
1. Spatial Domain Methods
2. Frequency Domain Methods
In spatial domain techniques, we directly deal with the image pixels; the pixel values are manipulated to achieve the desired enhancement. In frequency domain methods, the image is first transferred into the frequency domain: the Fourier transform of the image is computed first, all the enhancement operations are performed on the Fourier transform of the image, and then the inverse Fourier transform is performed to get the resultant image.
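As a minimal spatial-domain illustration of Eq. (2.1), the sketch below applies a hypothetical transformation T (the image negative, s = (L-1) - r) to every pixel of a small made-up image:

```python
import numpy as np

L = 256  # number of grey levels

def T(r):
    """A sample point transformation s = T(r): the image negative."""
    return (L - 1) - r

f = np.array([[0, 64],
              [128, 255]], dtype=np.int32)  # hypothetical input image
g = T(f)                                    # apply s = T(r) to every pixel
print(g)
# [[255 191]
#  [127   0]]
```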
CHAPTER 3
HISTOGRAM EQUALIZATION

3.1 Image Histogram
An image histogram is a graphical representation of the brightness distribution in a digital image. It plots the number of pixels for each grey level value. By looking at the histogram for a specific image, a viewer will be able to judge the entire brightness distribution at a glance.
Figure 3.1: A grayscale image and its histogram
3.1.1 Histogram Function
The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function

    h(r_k) = n_k,   k = 0, 1, ..., L-1        (3.1.1)

where r_k is the kth gray level, n_k is the number of pixels with grey level r_k in the image, and L is the total number of possible gray levels.

3.1.2 Normalised Histogram Function
The normalised histogram function is the histogram function divided by the total number of pixels of the image:

    p(r_k) = h(r_k)/n = n_k/n,   k = 0, 1, ..., L-1        (3.1.2)

where M and N are the row and column dimensions of the image and, consequently, n = MN is the total number of pixels in the image. The function p(r_k) represents the fraction of the total number of pixels with gray value r_k, and the sum of the normalised histogram function over the range of all gray values is 1. It gives a measure of how likely it is for a pixel to have a certain gray value. If we consider the gray values in the image as realizations of a random variable R with some probability density, the normalised histogram provides an approximation to this probability density. In other words, it gives the probability of occurrence of the intensity r_k:

    p(r_k) = Pr(R = r_k),   k = 0, 1, ..., L-1        (3.1.3)

The histogram gives primarily a global description of the image, but its shape reveals important contrast information, which can be used for image enhancement. For example, if the image histogram is narrow, the image is poorly visible because the difference in gray levels present in the image is generally low. Similarly, a widely distributed histogram means that almost all the gray levels are present in the image, and thus the overall contrast and visibility increase.
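Both functions are one line each in code. The tiny image below is a hypothetical example with L = 4 grey levels:

```python
import numpy as np

f = np.array([[0, 1, 1],
              [2, 2, 2]], dtype=np.uint8)   # hypothetical 2 x 3 image
L = 4                                       # number of possible grey levels
n = f.size                                  # n = MN = 6

h = np.bincount(f.ravel(), minlength=L)     # histogram function h(r_k) = n_k
p = h / n                                   # normalised histogram p(r_k) = n_k/n

print(h)          # [1 2 3 0]
print(p.sum())    # 1.0 -- the probabilities sum to one
```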
3.2 Different Types of Image Histogram
Four basic image types, namely dark, light, low contrast and high contrast, and their corresponding histograms are given below. In a dark image, the components of the histogram are concentrated on the low (dark) side of the gray scale. Similarly, the components of the histogram of a bright image are biased toward the high side of the gray scale. An image with low contrast has a histogram that will be narrow and centered toward the middle of the gray scale, with very few vertical lines being much higher than the others; for a monochrome image this implies a dull, washed-out gray look. Finally, we see that the components of the histogram in the high-contrast image cover a broad range of the gray scale and, further, that the distribution of pixels is not too far from uniform. The net effect will be an image that shows a great deal of gray-level detail and has high dynamic range. Intuitively, it is reasonable to conclude that an image whose pixels tend to occupy the entire range of possible gray levels and, in addition, tend to be distributed uniformly, will have an appearance of high contrast and will exhibit a large variety of gray tones.
Figure 3.2: A dark image and its histogram
Figure 3.3: A bright image and its histogram
Figure 3.4: A low contrast image and its histogram
Figure 3.5: A high contrast image and its histogram

3.3 Histogram Equalization
Histogram equalization is a technique which consists of adjusting the gray scale of the image so that the gray level histogram of the input image is mapped onto a uniform histogram. The basic assumption used here is that the information conveyed by an image is related to the probability of occurrence of the gray levels in the image, in the form of its histogram. By uniformly distributing the probability of occurrence of gray levels in the image, it becomes easier to perceive the information content of the image.

3.3.1 Development of the Method
Consider for a moment continuous intensity values, and let the variable r denote the intensities of an image to be processed. As usual, we assume that r is in the range [0, L-1], with r = 0 representing black and r = L-1 representing white. For any r satisfying these conditions, we focus attention on transformations (intensity mappings) of the form

    s = T(r),   0 ≤ r ≤ L-1        (3.3.1)

that produce a level s for every pixel value r in the original image. For reasons that will become obvious shortly, we assume that the transformation function T(r) satisfies the following conditions:
(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ L-1; and
(b) 0 ≤ T(r) ≤ L-1 for 0 ≤ r ≤ L-1.
The requirement in condition (a) that T(r) be monotonically increasing guarantees that the ordering of the output intensity values follows the ordering of the input values, thus preventing artifacts created by reversals of intensity. Condition (b) guarantees that the range of output intensities is the same as that of the input. The following figure gives an example of a transformation function that satisfies these two conditions.
Figure 3.6: A function that satisfies conditions (a) and (b).
The inverse transformation from s back to r is denoted by

    r = T^(-1)(s),   0 ≤ s ≤ L-1        (3.3.2)

It can be shown that even if T(r) satisfies conditions (a) and (b), it is possible that the corresponding inverse may fail to be single-valued.
The gray levels in an image may be viewed as random variables in the interval [0, L-1]. One of the most fundamental descriptors of a random variable is its probability density function (PDF). Let p_r(r) and p_s(s) denote the probability density functions of the random variables r and s, respectively, where the subscripts on p are used to denote that p_r and p_s are different functions. A basic result from elementary probability theory is that if p_r(r) and T(r) are known and T(r) satisfies condition (a), then the probability density function of the transformed variable s can be obtained using the rather simple formula

    p_s(s) = p_r(r) |dr/ds|        (3.3.3)

Thus, the probability density function of the transformed variable, s, is determined by the gray-level PDF of the input image and by the chosen transformation function. A transformation function of particular importance in image processing has the form

    s = T(r) = (L-1) ∫_0^r p_r(w) dw        (3.3.4)

where w is a dummy variable of integration. The right side of Eq. (3.3.4) is recognized as the cumulative distribution function (CDF) of the random variable r. Since probability density functions are always positive, and recalling that the integral of a function is the area under the function, it follows that this transformation function is single-valued and monotonically increasing, and therefore satisfies condition (a). When the upper limit in this equation is r = L-1, the integral evaluates to 1 (the area under a PDF curve always is 1), so the maximum value of s is L-1 and condition (b) is satisfied also.

To find the p_s(s) corresponding to the transformation just discussed, we use Eq. (3.3.3). We know from basic calculus (Leibniz's rule) that the derivative of a definite integral with respect to its upper limit is simply the integrand evaluated at that limit. That is,

    ds/dr = dT(r)/dr = (L-1) p_r(r)        (3.3.5)

Substituting this result for dr/ds into Eq. (3.3.3), and keeping in mind that all probability values are positive, yields
    p_s(s) = p_r(r) |dr/ds| = p_r(r) / ((L-1) p_r(r)) = 1/(L-1),   0 ≤ s ≤ L-1        (3.3.6)

We recognize the form of p_s(s) given in Eq. (3.3.6) as a uniform probability density function. Simply stated, we have demonstrated that performing the transformation given in Eq. (3.3.4) yields a random variable s characterized by a uniform probability density function. It is important to note from Eq. (3.3.4) that T(r) depends on p_r(r), but, as indicated by Eq. (3.3.6), the resulting p_s(s) always is uniform, independent of the form of p_r(r).

For discrete values we deal with probabilities and summations instead of probability density functions and integrals. The probability of occurrence of gray level r_k in an image is approximated by

    p_r(r_k) = n_k / n,   k = 0, 1, ..., L-1        (3.3.7)

where, as noted at the beginning of this chapter, n is the total number of pixels in the image, n_k is the number of pixels that have gray level r_k, and L is the total number of possible gray levels in the image. As indicated earlier, a plot of p_r(r_k) versus r_k is called a histogram. The discrete version of the transformation function given in Eq. (3.3.4) is

    s_k = T(r_k) = (L-1) Σ_{j=0}^{k} p_r(r_j),   k = 0, 1, ..., L-1        (3.3.8)

Thus, a processed (output) image is obtained by mapping each pixel with level r_k in the input image into a corresponding pixel with level s_k in the output image via Eq. (3.3.8). The transformation (mapping) given in Eq. (3.3.8) is called histogram equalization. Unlike its continuous counterpart, it cannot be proved in general that this discrete transformation will produce the discrete equivalent of a uniform probability density function, which would be a uniform histogram. However, as will be seen shortly, use of Eq. (3.3.8) does have the general tendency of spreading the histogram of the input image, so that the levels of the histogram-equalized image will span a fuller range of the gray scale.
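Eq. (3.3.8) translates almost directly into code. The function below is a straightforward sketch of discrete histogram equalization (the names are our own, not from a library):

```python
import numpy as np

def histogram_equalize(f, L=256):
    """Map each level r_k to s_k = round((L-1) * sum_{j<=k} p_r(r_j))."""
    n = f.size
    hist = np.bincount(f.ravel(), minlength=L)    # n_k for k = 0..L-1
    cdf = np.cumsum(hist) / n                     # running sums of p_r(r_j)
    s = np.round((L - 1) * cdf).astype(np.uint8)  # the transformation s_k
    return s[f]                                   # apply as a lookup table

# Hypothetical low-contrast image: levels crowded into 100..103.
f = np.array([[100, 100, 101],
              [102, 103, 103]], dtype=np.uint8)
g = histogram_equalize(f)
print(g.min(), g.max())   # 85 255 -- the levels now span a far wider range
```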
3.3.2 A Numerical Example of Histogram Equalization
Suppose a 4-bit grayscale image has the histogram shown in the figure, associated with the following table of the numbers n_i of grey values:

Gray level i : 0   1  2  3  4  5  6  7  8  9   10   11  12  13  14  15
n_i          : 15  0  0  0  0  0  0  0  0  70  110  45  80  40  0   0

Figure 3.7: A histogram indicating poor contrast
With n = MN = 360 and L = 16, we would expect this image to be uniformly bright, with a few dark dots on it. To equalize this histogram, we form running totals of the n_i and multiply each by (L-1)/n = 15/360 = 1/24:

Gray level i   n_i   Σ n_j   (1/24) Σ n_j   Rounded value
0              15    15      0.63           1
1              0     15      0.63           1
2              0     15      0.63           1
3              0     15      0.63           1
4              0     15      0.63           1
5              0     15      0.63           1
6              0     15      0.63           1
7              0     15      0.63           1
8              0     15      0.63           1
9              70    85      3.54           4
10             110   195     8.13           8
11             45    240     10.00          10
12             80    320     13.33          13
13             40    360     15.00          15
14             0     360     15.00          15
15             0     360     15.00          15

We now have the following transformation of grey values, obtained by reading off the first and last columns in the above table:

Original gray level i : 0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15
Final gray level j    : 1  1  1  1  1  1  1  1  1  4  8   10  13  15  15  15

Figure 3.8: The histogram of figure 3.7 after equalization
The histogram of the j values is shown in figure 3.8. This is far more spread out than the original histogram, and so the resulting image should exhibit greater contrast.
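The running totals in the table above can be re-checked with a few lines of arithmetic:

```python
# Grey-value counts n_i for the 4-bit example image (n = 360, L = 16).
n_i = [15, 0, 0, 0, 0, 0, 0, 0, 0, 70, 110, 45, 80, 40, 0, 0]

totals, running = [], 0
for count in n_i:            # running totals of the n_i
    running += count
    totals.append(running)

# Scale each total by (L-1)/n = 15/360 = 1/24 and round to the nearest level.
mapping = [round(t * 15 / 360) for t in totals]
print(mapping)  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 8, 10, 13, 15, 15, 15]
```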
3.9: A dark image and its histogram The result of histogram equalization on the above dark image is Figure 3. Figure 3.3.10: Histogram eqalized image and its histogram 20 .3 Results of Histogram Equalization istogram We have the following dark image.
CHAPTER 4
HISTOGRAM MATCHING

As indicated in the preceding chapter, histogram equalization automatically determines a transformation function that seeks to produce an output image that has a uniform histogram. When automatic enhancement is desired, this is a good approach because the results from this technique are predictable and the method is simple to implement. We show in this chapter that there are applications in which attempting to base enhancement on a uniform histogram is not the best approach. In particular, it is useful sometimes to be able to specify the shape of the histogram that we wish the processed image to have. This can be accomplished by specifying a particular histogram shape, or by calculating the histogram of a target image. The method used to generate a processed image that has a specified histogram is called histogram matching or histogram specification.

4.1 Development of the Method
Let us return for a moment to continuous gray levels r and z (considered continuous random variables), and let p_r(r) and p_z(z) denote their corresponding continuous probability density functions. In this notation, r and z denote the gray levels of the input and output (processed) images, respectively. We can estimate p_r(r) from the given input image, while p_z(z) is the specified probability density function that we wish the output image to have.

Let s be a random variable with the property

    s = T(r) = (L-1) ∫_0^r p_r(w) dw        (4.1.1)

where w is a dummy variable of integration. We recognize this expression as the continuous version of histogram equalization given in Eq. (3.3.4). Suppose next that we define a random variable z with the property

    G(z) = (L-1) ∫_0^z p_z(t) dt = s        (4.1.2)

where t is a dummy variable of integration. It then follows from these two equations that G(z) = T(r) and, therefore, that z must satisfy the condition

    z = G^(-1)(s) = G^(-1)(T(r))        (4.1.3)

The transformation T(r) can be obtained from Eq. (4.1.1) once p_r(r) has been estimated from the input image. Similarly, the transformation function G(z) can be obtained using Eq. (4.1.2) because p_z(z) is given. Equations (4.1.1) through (4.1.3) show that an image whose intensity levels have a specified probability density function can be obtained from a given image by using the following procedure:
1) Obtain p_r(r) from the input image and use Eq. (4.1.1) to obtain the values of s.
2) Use the specified probability density function p_z(z) in Eq. (4.1.2) to obtain the transformation function G(z).
3) Obtain the inverse transformation z = G^(-1)(s); because z is obtained from s, this process is a mapping from s to z, the latter being the desired values.
4) Obtain the output image by first equalizing the input image using Eq. (4.1.1); the pixel values in this image are the s values. For each pixel with value s in the equalized image, perform the inverse mapping z = G^(-1)(s) to obtain the corresponding pixel value in the output image. When all pixels have been thus processed, the probability density function of the output image will be equal to the specified probability density function.

The following diagram shows the process of histogram matching.
Figure 4.1: Process of histogram matching.
The discrete formulation of Eq. (4.1.1) is

    s_k = T(r_k) = (L-1) Σ_{j=0}^{k} p_r(r_j) = ((L-1)/MN) Σ_{j=0}^{k} n_j,   k = 0, 1, ..., L-1        (4.1.4)

where, as before, MN is the total number of pixels in the image, n_j is the number of pixels that have intensity value r_j, and L is the total number of possible intensity levels in the image. Similarly, given a specific value of s_k, the discrete formulation of Eq. (4.1.2) involves computing the transformation function

    G(z_q) = (L-1) Σ_{i=0}^{q} p_z(z_i)        (4.1.5)

for a value of q so that G(z_q) = s_k, where p_z(z_i) is the ith value of the specified histogram. We find the desired value z_q by obtaining the inverse transformation:

    z_q = G^(-1)(s_k)        (4.1.6)

In other words, this operation gives a value of z for each value of s; thus it performs a mapping from s to z.
4.2 Algorithm for Histogram Specification
1) Compute the histogram p_r(r) of the given image, and use it to find the histogram equalization transformation in Eq. (4.1.4). Round the resulting values s_k to the integer range [0, L-1].
2) Compute all values of the transformation function G using Eq. (4.1.5) for q = 0, 1, ..., L-1, where p_z(z_i) are the values of the specified histogram. Round the values of G to integers in the range [0, L-1] and store them in a table.
3) For every value of s_k, k = 0, 1, ..., L-1, use the stored values of G from step 2 to find the corresponding value of z_q so that G(z_q) is closest to s_k, and store these mappings from s to z. When more than one value of z_q satisfies the given s_k (i.e., the mapping is not unique), choose the smallest value by convention.
4) Form the histogram-specified image by first histogram-equalizing the input image and then mapping every equalized pixel value s_k of this image to the corresponding value z_q in the histogram-specified image, using the mappings found in step 3.

4.3 Numerical Example of Histogram Matching
Consider a 3-bit image (L = 8) of size 64 × 64 pixels (MN = 4096) that has the intensity distribution shown in the following table, where the intensity levels are integers in the range [0, L-1] = [0, 7]:

Gray level r : 0    1     2    3    4    5    6    7
n            : 790  1023  850  656  329  245  122  81

It is desired to transform this histogram so that it will have the values specified in the following table:

Gray level z : 0  1  2  3    4    5     6    7
n            : 0  0  0  614  819  1230  819  614
First we compute the scaled histogram-equalized values for the input image, as follows:

Gray level r   n     Σ n_j   (7/4096) Σ n_j   Equalized level s
0              790   790     1.35             1
1              1023  1813    3.10             3
2              850   2663    4.55             5
3              656   3319    5.67             6
4              329   3648    6.23             6
5              245   3893    6.65             7
6              122   4015    6.86             7
7              81    4096    7.00             7

Next we compute the values of the transformation function G, using Eq. (4.1.5):

Gray level z   n     Σ n_i   (7/4096) Σ n_i   G(z)
0              0     0       0                0
1              0     0       0                0
2              0     0       0                0
3              614   614     1.05             1
4              819   1433    2.45             2
5              1230  2663    4.55             5
6              819   3482    5.95             6
7              614   4096    7.00             7

In the third step of the procedure we find, for each value of s, the smallest value of z so that G(z) is closest to s. For example, for s = 1 we see that G(z) = 1 at z = 3, which is a perfect match in this case, so we have the correspondence s = 1 → z = 3. We do this for every value of s to create the required mapping from s to z. Continuing in this manner, we get the following mapping:

s : 1  3  5  6  7
z : 3  4  5  6  7

In the final step of the procedure we use the above mapping to map every pixel in the histogram-equalized image into a corresponding pixel in the newly created histogram-specified image. That is, every pixel whose value is 1 in the histogram-equalized image would map to a pixel valued 3 (in the corresponding location) in the histogram-specified image.
4.2 : Input image and its histogram. 26 . This is because we are dealing with discrete histograms.4 Result of Histogram Matching A low contrast image and its histogram are given below Figure 4.r s z r n 0 1 3 1 3 4 2 5 5 3 6 6 4 6 6 5 7 7 6 7 7 7 7 7 After that we get the following image histogram 0 0 1 0 2 0 3 4 5 6 7 790 1023 850 985 448 We see. the actual histogram of the output image does not exactly but only approximately matches with the specified histogram.
The histogram-equalized image of the input image and its histogram are given below. We see that the histogram-equalized image is not very satisfactory for visual perception.
Figure 4.3: Histogram equalized image and its histogram
Now we try to match the histogram of the input image with the following specified histogram.
Figure 4.4: Specified histogram for matching
The histogram-specified image of the input image and its histogram are given below. We see that the histogram-specified image is better than the histogram-equalized image.
Figure 4.5: Histogram specified image and its histogram
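The four-step procedure of Section 4.2 can be sketched compactly in code; the function below (our own naming, not a library routine) reproduces the level mapping of the numerical example in Section 4.3:

```python
import numpy as np

def specification_mapping(n_k, n_q, L):
    """Return the mapping r -> z for histogram specification.

    n_k are the input-histogram counts, n_q the specified counts.
    """
    s = np.round((L - 1) * np.cumsum(n_k) / np.sum(n_k)).astype(int)  # Eq. (4.1.4)
    G = np.round((L - 1) * np.cumsum(n_q) / np.sum(n_q)).astype(int)  # Eq. (4.1.5)
    # For each s_k, pick the smallest z whose G(z) is closest to s_k.
    return np.array([int(np.argmin(np.abs(G - sk))) for sk in s])

# The 3-bit (L = 8), 64 x 64 example from Section 4.3.
n_k = [790, 1023, 850, 656, 329, 245, 122, 81]
n_q = [0, 0, 0, 614, 819, 1230, 819, 614]
print(specification_mapping(n_k, n_q, 8))   # [3 4 5 6 6 7 7 7]
```

Because np.argmin returns the first index at which the minimum occurs, the "choose the smallest z" convention of step 3 falls out for free.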
CHAPTER 5
DISCUSSION AND CONCLUSION

Histogram equalization is useful in images with backgrounds and foregrounds that are both bright or both dark. In particular, the method can lead to better views of bone structure in x-ray images, and to better detail in photographs that are over- or under-exposed. A key advantage of the method is that it is a fairly straightforward technique and an invertible operator: in theory, if the histogram equalization function is known, then the original histogram can be recovered. The calculation is not computationally intensive. A disadvantage of the method is that it is indiscriminate: it may increase the contrast of background noise while decreasing the usable signal. Histogram equalization often produces unrealistic effects in photographs; however, it is very useful for scientific images like thermal, satellite or x-ray images, often the same class of images that a user would apply false-color to. Also, histogram equalization can produce undesirable effects (like a visible image gradient) when applied to images with low color depth. For example, if applied to an 8-bit image displayed with an 8-bit gray-scale palette, it will further reduce the color depth (the number of unique shades of gray) of the image. Histogram equalization works best when applied to images with much higher color depth than palette size, like continuous data or 16-bit gray-scale images.

Histogram matching is similar in concept to histogram equalization. The difference is that in histogram matching we do not use a uniform distribution, but a distribution that is supplied to us from another image, or perhaps from a subset of the same image. In other words, we want to manipulate the pixel distribution of one image to mirror the pixel distribution of another image; the pixel distribution of an image is, of course, its histogram. Histogram matching is a trial-and-error process. In general, there are no rules for specifying histograms, and one must resort to analysis on a case-by-case basis for any given enhancement task.