Audio – Signals representing sound and speech, which travel through a medium to transfer data, are called audio. The speed of the signal depends on the medium, which is normally air or metal. Sound signals are of two types:
1) Analog sound – It travels as a continuous wave whose value changes from the lowest level to the highest level and takes every value in between.
2) Digital sound – Digital sound consists of discrete values, storing only two levels: highest and lowest.
Characteristics of sound signal
1) Period (Oscillation) – A regular interval of the sound wave which repeats multiple times to construct the sound signal is called the period.
2) Frequency – The number of periods that occur in one second is called the frequency of the audio signal. It is measured in Hertz (Hz).
3) Amplitude – It is the maximum strength or intensity of the sound signal.
4) Pitch – Pitch is a subjective quantity which reflects the relative frequency of a sound signal. Frequency, the number of cycles per second, is an objective quantity and can be measured directly, while pitch is perceived and is judged by comparison with signals of known periodic frequency. Pitch is mostly used for musical notes.
5) Bandwidth – Bandwidth is the total frequency range of a medium, from its lowest frequency to its highest frequency. It indicates the data-handling capacity of the medium, which is measured in bps (bits/second).
6) Wavelength – The total distance covered by a wave in one cycle is called the wavelength. For a sound of frequency f travelling at speed c, the wavelength is: Wavelength = c / f.
7) Decibel system – Acoustics is the branch of science which studies sound. The decibel system is used to measure sound pressure, or the loudness of sound. The decibel (dB) is a logarithmic unit that indicates the ratio of a physical quantity (usually power or intensity) relative to a specified or implied reference level; a ratio in decibels is ten times the logarithm to base 10 of the ratio of two power quantities. It was first used to measure electrical loss in wires, but is now mostly used for measuring sound intensity or radio signal strength. It is written as "dB". Sometimes it is combined with
other units: to measure kilowatt loss we use "dBk", while to measure voltage we use "dBV". The comfortable range of audible sound for humans extends up to about 80 dB.
Computers represent sound in digital format, while sound normally travels in analog form, so the analog sound signal must be converted into digital format; this is called digitization of sound.
Analog-to-digital conversion – Analog signals are converted into digital signals so that they can be used in computers. Pulse Code Modulation (PCM) is used for this conversion. A PCM stream is a digital representation of an analog signal, in which the magnitude of the analog signal is sampled regularly at uniform intervals, with each sample quantized to the nearest value within a range of digital steps. The basic steps of pulse code modulation are:
1) Sampling
2) Quantization
3) Binary encoding
4) Line encoding
1) Sampling – In this stage, samples of the amplitude are taken at fixed time intervals. The interval between samples is kept small to increase the sampling rate; the quality of the digital signal increases with the sampling rate.
2) Quantization – The samples are valued comparatively: each sample is compared against a fixed scale and assigned the nearest value on that scale.
3) Binary encoding – The quantized values are then converted from decimal to binary. This produces a continuous string of bits, called the binary-encoded signal.
4) Line encoding – The bit string obtained from binary encoding is then converted into a digital waveform, where 1 is the maximum value and 0 is the minimum value.
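The first three PCM stages can be sketched in a few lines of Python. This is an illustrative sketch only: it assumes a pure 440 Hz sine tone as the analog input, and the function and parameter names are made up for the example, not part of any standard API.

```python
import math

def pcm_encode(signal, sample_rate, duration, bits=4):
    """Sketch of three PCM stages: sampling, quantization, binary encoding."""
    levels = 2 ** bits
    # 1) Sampling: read the analog amplitude at uniform time intervals.
    samples = [signal(n / sample_rate) for n in range(int(sample_rate * duration))]
    # 2) Quantization: map each amplitude (-1.0..1.0) to the nearest of `levels` steps.
    quantized = [min(levels - 1, int((s + 1.0) / 2.0 * levels)) for s in samples]
    # 3) Binary encoding: write each level as a fixed-width bit string.
    return "".join(format(q, f"0{bits}b") for q in quantized)

# 10 ms of a 440 Hz sine at 8000 samples/s -> 80 samples * 4 bits = 320 bits
bitstream = pcm_encode(lambda t: math.sin(2 * math.pi * 440 * t), 8000, 0.01, bits=4)
```

Raising `bits` or `sample_rate` increases both the fidelity and the size of the resulting bitstream, which is exactly the sampling-rate/quality trade-off described above.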
Advantages of digital signals over analog signals:
1) Digital sound can be stored easily on digital media such as CDs and DVDs.
2) Digital sound is less sensitive to interference than analog sound.
3) Digital sound can be easily regenerated.
4) Editing operations, such as cutting a track, copying it, or adding an echo, can be applied easily to digital sound.
Types of sound – Sound can be categorized into two types:
i) Periodic sound – Sound generated at a fixed time interval is called periodic sound. Its amplitude and frequency are constant over time.
ii) Aperiodic sound – In this type of sound, frequency and amplitude change with time, so it is also described as sound generated at varying time intervals.
Audio file formats

Audio Interchange File Format (AIFF) –
1. AIFF is the proprietary file format of Apple.
2. It has the extension .aif or .aiff.
3. An AIFF file stores raw audio data, channel information, bit depth, sample rate, and an application-specific data area.
4. It can use both mono and stereo channels for transferring data.
5. It does not itself support data compression, but it provides an alternative format which does, called the AIFF compressed file format.

Wave format (WAV) –
1. It has the extension .wav and is also called Audio for Windows.
2. It is the standard format of Microsoft and IBM PCs.
3. It is based on the RIFF format (Resource Interchange File Format).
4. Audio can be easily edited in this file format.
5. It uses an uncompressed format in which data is stored using Linear Pulse Code Modulation (LPCM).
6. It limits the file size to less than 4 GB.

Bit rate and data rate –
Data rate is the number of bits transferred in a unit of time, commonly measured in megabits per second or kilobits per second. When the data rate is measured in bits/second it is also called the bit rate. In digital multimedia, the bit rate indicates the amount of information stored in a file per unit of time, so it determines the size of an audio file. It depends on the following factors:
i) Sampling rate of the original data
ii) Number of bits used per sample
iii) Data encoding scheme
iv) Data compression algorithm
The formula for calculating file size from the bit rate is:
File size = bit rate * play time
Here the bit rate is in bits per second, play time is the total time taken to play the track, and the file size is in bits; to apply the formula we have to know the total play time of the file. The bit rate can be controlled by changing the sample rate: reducing the bit rate decreases the file size, while the sound quality is only negligibly affected. So decreasing the bit rate is used to reduce file size.
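The file-size formula above can be checked with a small calculation. CD-quality parameters (44,100 samples/s, 16 bits per sample, 2 channels) are used purely for illustration.

```python
def pcm_bit_rate(sample_rate_hz, bits_per_sample, channels):
    # Bit rate of uncompressed (LPCM) audio, in bits per second.
    return sample_rate_hz * bits_per_sample * channels

def file_size_bits(bit_rate_bps, play_time_s):
    # File size = bit rate * play time
    return bit_rate_bps * play_time_s

cd_bit_rate = pcm_bit_rate(44100, 16, 2)   # 1,411,200 bits/second
one_minute_mb = file_size_bits(cd_bit_rate, 60) / 8 / 1_000_000
# roughly 10.6 MB for one minute of uncompressed stereo CD audio
```

Halving the sample rate halves the bit rate and therefore the file size, which is the resampling trade-off the text describes.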
MP3 (MPEG-1 and MPEG-2) –
1. MP3 is an audio-specific format that was designed by the Moving Picture Experts Group (MPEG) as part of its MPEG-1 standard, and was later extended in the MPEG-2 standard.
2. MP3 uses a lossy compression algorithm designed to greatly reduce the amount of data required to represent the audio recording, while still sounding like a faithful reproduction of the original uncompressed audio to most listeners.
3. Typical MP3 encodings represent compression ratios of approximately 11:1, 9:1, and 7:1.
4. A sample rate of 44.1 kHz is almost always used, because this is also the rate of CD audio, the main source used for creating MP3 files.

Adaptive Multi-Rate (AMR) –
1. AMR was adopted as the standard speech codec by 3GPP in October 1999 and is now widely used in GSM and UMTS.
2. Its sampling frequency is 8 kHz at 13 bits.
3. AMR is also a file format for storing spoken audio using the AMR codec; the common filename extension is .amr.
4. Many modern mobile telephone handsets can store short audio recordings in the AMR format.
5. Both free and proprietary programs exist to convert between this and other formats.

RealAudio (.ra / .rm) –
1. It is a proprietary audio format developed by RealNetworks and first released in April 1995.
2. Its codecs range from low-bit-rate formats that can be used over dial-up modems to high-fidelity formats for music.
3. It can also be used as a streaming audio format that is played at the same time as it is downloaded.
4. RealAudio files were originally identified by a filename extension of .ra.
5. The combination of the audio and video formats is called RealMedia and uses the file extension .rm.

ASF (Advanced Streaming Format) –
1. This format was developed by Microsoft for storing synchronized streaming data.
2. It is created from the combination of more than one media stream.
3. It has a file header which stores all the properties of the whole file.
4. An ASF file synchronizes the entire stored data stream on a common timeline before presenting or delivering the information.
5. Data can be delivered over different networks with the help of this format.
6. The main goal of this format is to provide industry-wide multimedia interoperability.

MP4 (MPEG-4 Part 14) –
1. It is a multimedia container format standard specified as a part of MPEG-4.
2. It is most commonly used to store digital video and digital audio streams.
3. It can also be used to store other data such as subtitles and still images.
4. MPEG-4 Part 14 allows streaming over the Internet.
5. The only official filename extension for MPEG-4 Part 14 files is .mp4.
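The approximate 11:1 MP3 compression ratio quoted above can be related to the CD bit rate with one line of arithmetic (illustrative only; real encoders target fixed bit rates rather than exact ratios):

```python
cd_bit_rate = 44100 * 16 * 2      # uncompressed CD audio: 1,411,200 bits/s
mp3_bit_rate = cd_bit_rate / 11   # ~11:1 lossy compression
# lands close to the familiar 128 kbps MP3 encoding rate
```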
There are three major groups of audio file formats: uncompressed audio formats, formats with lossless compression, and formats with lossy compression.
1) Uncompressed audio formats (such as WAV and AIFF) encode both sound and silence with the same number of bits per unit of time. Encoding an uncompressed minute of absolute silence produces a file of the same size as encoding an uncompressed minute of music.
2) Lossless compressed audio formats (such as APE and TTA) enable the original uncompressed data to be recreated exactly. In a lossless compressed format, the music occupies a smaller portion of the file and the silence takes up almost no space at all. They provide a compression ratio of about 2:1 (i.e. their files take up half the space of the originals). This makes them suitable file formats for storing and archiving an original recording.
3) Lossy compressed audio formats (such as MP3 and WMA) enable even greater reductions in file size by removing some of the data. Lossy compression typically achieves far greater compression than lossless compression, but with somewhat reduced quality, by simplifying the complexities of the data. Most formats offer a range of degrees of compression, generally measured in bit rate: the lower the rate, the smaller the file and the more significant the quality loss. The popular MP3 format is probably the best-known example.

AVI (Audio Video Interleaved) –
1. It is used to store sound and moving pictures in the RIFF format.
2. It stores audio and video in a single frame.
3. It provides better synchronization of audio and video in less space.

COMPRESSION – Compression is used to reduce the size of a file. A compression/decompression algorithm (CODEC) is used for this purpose.

TRANSFER OF AUDIO OVER THE INTERNET – To send audio over the internet, a specific protocol called Voice over IP (VoIP) is used. Audio transmission is done in the form of data packets; this process is known as packetization. Downloading and playback can be done using two methods:
i) File downloading then streaming, where the complete file is first downloaded and then played.
ii) Progressive downloading, where the media is played while it is being downloaded.
The transmission follows these steps:
1) While recording, the sampled sound is compressed according to the data format, and the audio recording frequency is limited according to the recording.
2) A compression/decompression algorithm (CODEC) is used for the compression.
3) The sampled sound is then collected and converted into data packets.
4) The packets are then sent over the IP network.
5) After transmission, the packets are rearranged on the receiver side according to their packet numbers, where the receiver receives the data packets from the IP network.
6) At the time of rearrangement, lost packets are regenerated using a "filling the gaps" algorithm.
7) Sometimes packets are sent multiple times to prevent packet loss; this is called redundancy.
8) A forward error correction mechanism is also used to prevent packet loss. In this method every packet carries some information about the previous packet, which is matched at the time of rearrangement.
9) Delayed packets are treated as lost. Variations in delay are called jitter; to overcome this problem a buffering queue is used.
Voice over IP is used within the TCP/IP protocol suite. Some other audio transmission protocols are also used to locate the receiver and to synchronize the sender and receiver.

FUNCTIONS PERFORMED ON SOUND –
1) Trimming – It is used to remove blank space from the beginning or end of a recording. This blank space shows the gap between two consecutive recordings, but it can be removed by trimming to reduce file size.
2) Fade in / Fade out – At the end of a sound beat the sound is gradually softened to give a smooth ending.
3) Slicing – It is used to extract a sound beat from the recording at the desired position. It is also used to remove unwanted sound beats from a recording.
4) Volume control – Changing the volume of a sound beat over time is called volume control. This process is used to balance a recording.
5) Equalization – In this process long sections are smoothed by using the fade in / fade out effect.
6) Time stretching – This is used to change the overall length/time of a recording without changing the pitch of the sound; it increases the sound file size without changing the sound itself.
7) Reassembling – Rearranging multiple small sound portions to construct a new sound piece is called reassembling.
8) Resampling – Changing the sampling rate affects both the file quality and the file size. By decreasing the sampling rate we can decrease the file size; this is called resampling, but it also has a negative effect on sound quality.
9) Mix – Combining two or more sound tracks into one.
10) Special effects – Effects are used to enhance the sound. The most commonly used effects are the echo effect, where a mild copy of the same sound is inserted alongside the original, and the reverse effect, where the sound is played back in reverse.

SYNTHESIZER – Periodic electric signals can be converted into sound by amplifying them and driving a loudspeaker with them. A sound synthesizer (often abbreviated as "synthesizer" or "synth") is an electronic instrument capable of producing a wide range of sounds. Synthesizers generate electric signals (waveforms) which can finally be converted to sound through loudspeakers or headphones. Modern sound synthesis makes increasing use of MIDI for sequencing and communication between devices.

TYPES OF SYNTHESIS –
Additive synthesis builds sounds by adding together waveforms (which are usually harmonically related). It adds various amplitudes of the harmonics of a chosen pitch until the desired timbre is obtained. To implement real-time additive synthesis, wavetable synthesis is useful for reducing the required hardware/processing power, and is commonly used in low-end MIDI instruments (such as educational keyboards) and low-end sound cards.
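The additive synthesis described above can be sketched in a few lines. This is a minimal sketch: the harmonic amplitudes below (odd harmonics at amplitude 1/n) approximate a square-wave timbre, and all the names and parameter values are illustrative.

```python
import math

def additive_sample(t, fundamental_hz, harmonic_amplitudes):
    # Sum sine partials at integer multiples of the chosen fundamental pitch.
    return sum(a * math.sin(2 * math.pi * fundamental_hz * (k + 1) * t)
               for k, a in enumerate(harmonic_amplitudes))

# Odd harmonics at amplitude 1/n approximate the timbre of a square wave.
amps = [1.0 / n if n % 2 == 1 else 0.0 for n in range(1, 10)]
sample_rate = 8000
wave = [additive_sample(n / sample_rate, 220.0, amps) for n in range(80)]
```

Changing the amplitude list changes the timbre while the pitch stays fixed, which is exactly the "add harmonics until the desired timbre is obtained" idea.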
Subtractive synthesis is based on filtering harmonically rich waveforms. Generally, one starts with geometric waves, which are rich in harmonic content, and filters the harmonics to produce a new sound.
FM synthesis (frequency modulation synthesis) is a process that usually involves the use of at least two signal generators (sine-wave oscillators, commonly referred to as "operators" in FM-only synthesizers) to create and modify a voice. Often this is done through the analog or digital generation of a signal that modulates the tonal and amplitude characteristics of a base carrier signal.
Granular synthesis combines several small sound segments into a new sound.
Physical modelling synthesis is the synthesis of sound by using a set of equations and algorithms to simulate a real instrument or some other physical source of sound.
Sample-based synthesis is one of the easiest synthesis systems: record a real instrument as a digitized waveform, and then play back its recordings at different speeds to produce different tones. This is the technique used in "sampling".
Analysis/resynthesis is a form of synthesis that uses a series of band-pass filters or Fourier transforms to analyze the harmonic content of a sound. The resulting analysis data is then used in a second stage to resynthesize the sound using a band of oscillators. Often, a sound that does not change over time will include a fundamental partial or harmonic, and any number of partials.
Resynthesis is the modification of digitally sampled sounds before playback.
Imitative synthesis is sound synthesis used to mimic acoustic sound sources. Synthesis may attempt to mimic the amplitude and pitch of the partials in an acoustic sound source, and that takes much less memory than the digitally recorded image of the complex sound.

MIDI
• Musical Instrument Digital Interface (MIDI) is a data transfer protocol which is widely used with music synthesizers. It is an industry specification for encoding, storing, synchronizing, and transmitting the musical performance and control data of electronic musical instruments (synthesizers, drum machines, computers) and other electronic equipment (MIDI controllers, sound cards, samplers).
• It eliminates compatibility issues by using a standard set of commands and parameters.
• It uses a serial data connection with five leads.
• A MIDI file is just a digital representation of a sequence of notes, with information about pitch, duration, voice, etc.
• It uses two basic message types: channel and system. Channel messages can be sent from machine to machine over any one of 16 channels to control an instrument's voice parameters or to control the way the instrument responds to voice messages. System messages can be directed to all devices in the system (called "common" messages) or can be directed to a specific machine (exclusive).
• MIDI composition takes advantage of MIDI 1.0 and General MIDI (GM) technology to allow MIDI data files to be shared between multiple devices.
• To this end, a basic set of standards has been developed, called the General MIDI specification, or just GM. It attempts to standardize common practices within MIDI and make it more accessible to the general user.
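The two-operator FM synthesis described earlier can be sketched as follows. The carrier and modulator frequencies and the modulation index are arbitrary example values, not from any particular instrument.

```python
import math

def fm_sample(t, carrier_hz, modulator_hz, mod_index):
    # The modulator's output shifts the phase of the carrier sine wave,
    # which is what gives FM synthesis its characteristic timbres.
    modulator = math.sin(2 * math.pi * modulator_hz * t)
    return math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator)

sample_rate = 8000
tone = [fm_sample(n / sample_rate, 440.0, 110.0, mod_index=2.0) for n in range(160)]
```

Increasing `mod_index` spreads energy into more sidebands, producing a brighter, more complex voice from just two oscillators.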
The current MIDI specification includes:
• A hardware scheme for physically connecting electronic musical instruments and associated electronic equipment together (MIDI Interface, MIDI Adapter, MIDI Cable).
• A data encoding scheme for storage and transmission of musical performance and control event data as messages (MIDI messages, MIDI file).
• Communication protocols for transmitting and synchronizing musical performance and control event data (MIDI Machine Control, MIDI Show Control, MIDI time code, Song Position Pointer).
• Schemes for categorizing instrument and percussive sounds or timbres, also referred to as patches or programs (General MIDI, General MIDI Level 2).

MIDI files are typically created using computer-based sequencing software (or sometimes a hardware-based MIDI instrument or workstation) that organizes MIDI messages into one or more parallel "tracks" for independent recording and editing. The MIDI file is just a digital representation of the sequence of notes, with information about pitch, duration, and velocity; the synthesizer receiving the MIDI data must generate the actual sounds. A MIDI sequencer is a device which allows MIDI data sequences to be captured, stored, edited, combined, and replayed.

MIDI Basics
• MIDI information is transmitted in "MIDI messages", which can be thought of as instructions which tell a music synthesizer how to play a piece of music.
• Typical message types include musical notation and control signals for parameters (such as volume, panning, vibrato), cues, and clock signals.
• The MIDI data stream is a unidirectional asynchronous bit stream at 31.25 Kbits/sec, with 10 bits transmitted per byte (a start bit, 8 data bits, and one stop bit).
• The MIDI data stream is usually originated by a MIDI controller, such as a musical instrument keyboard, or by a MIDI sequencer. A MIDI controller is a device which is played as an instrument, and it translates the performance into a MIDI data stream in real time (as it is played).
• The MIDI interface on a MIDI instrument will generally include three different MIDI connectors, labelled IN, OUT, and THRU. The MIDI data output from a MIDI controller or sequencer is transmitted via the device's MIDI OUT connector.
• Note that many MIDI keyboard instruments include both the keyboard controller and the MIDI sound module functions within the same unit. In these units, there is an internal link between the keyboard and the sound module which may be enabled or disabled by setting the "local control" function of the instrument to ON or OFF respectively.

MIDI SYSTEM
The figure shows a simple MIDI system, consisting of a MIDI keyboard controller and a MIDI sound module.
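A channel message such as Note On can be built from the facts above: a status byte carrying the 4-bit channel number followed by two 7-bit data bytes, with each byte costing 10 bits on the 31.25 Kbits/sec wire. A minimal sketch (the helper name and example note values are illustrative):

```python
def note_on(channel, note, velocity):
    # MIDI Note On: status byte 0x90 plus the 4-bit channel (0-15),
    # then two data bytes: note number and velocity, each 0-127.
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

msg = note_on(channel=0, note=60, velocity=100)   # middle C, moderately loud
# Each byte takes 10 bits on the wire (start + 8 data + stop bits), so at
# 31250 bits/s this 3-byte message occupies 30 / 31250 seconds, about 0.96 ms.
transmit_ms = len(msg) * 10 / 31250 * 1000
```

This also shows why MIDI files are so small: three bytes describe an entire note event, versus thousands of bytes for the equivalent sampled audio.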
The single physical MIDI channel is divided into 16 logical channels by the inclusion of a 4-bit channel number within many of the MIDI messages. A musical instrument keyboard can generally be set to transmit on any one of the sixteen MIDI channels, and a MIDI sound source, or sound module, can be set to receive on specific MIDI channel(s). In the system depicted in Figure 1, the sound module would have to be set to receive the channel on which the keyboard controller is transmitting in order to play sounds.

Figure 2 shows a more elaborate MIDI system. In this case, a MIDI keyboard controller is used as an input device to a MIDI sequencer, and several sound modules are connected to the sequencer's MIDI OUT port. A composer might utilize a system like this to write a piece of music consisting of several different parts, where each part is written for a different instrument. The composer would play the individual parts on the keyboard one at a time, and these individual parts would be captured by the sequencer. Each part would be played on a different MIDI channel, and the sound modules would be set to receive different channels. The sequencer would then play the parts back together through the sound modules. For example, sound module number 1 might be set to play the part received on Channel 1 using a piano sound, while module 2 plays the information received on Channel 5 using an acoustic bass sound, and the drum machine plays the percussion part received on MIDI Channel 10.

Unit IV
IMAGES
An image (from Latin: imago) is an artifact that has a similar appearance to some subject, usually a physical object or a person; for example, a two-dimensional picture.

Types
i) A volatile image is one that exists only for a short period of time. This may be a reflection of an object in a mirror, a projection of a camera obscura, or a scene displayed on a cathode ray tube.
ii) A fixed image, also called a hard copy, is one that has been recorded on a material object, such as paper or textile, by photography or digital processes.
iii) A mental image exists in an individual's mind: something one remembers or imagines. The subject of an image need not be real; it may be an abstract concept, such as a graph, function, or "imaginary" entity.
iv) Range imaging is the name for a collection of techniques used to produce a 2D image showing the distance to points in a scene from a specific point, normally associated with some type of sensor device.
v) Intensity images measure the amount of light impinging on a photosensitive device. The input to the photosensitive device, typically a camera, is the incoming light, which enters the camera's lens and hits the image plane.

On the basis of motion, images are of two types:
i) A still image is a single static image, as distinguished from a kinetic image. This phrase is used in photography, visual media, and the computer industry to emphasize that one is not talking about movies, or in very precise or pedantic technical writing such as a standard.
ii) A film still is a photograph taken on the set of a movie or television program during production, used for promotional purposes.

Digital image representation – To represent an image in digital media we have to convert it into numeric (binary) form. Constructing an integer array during the digital representation of an image is called raster scanning, and this array is called a raster map. For representing a 2D image, an array of 3 columns is used, which stores each image pixel as an X coordinate, a Y coordinate, and an intensity. A pixel is the smallest individual element in an image, holding a quantized value that represents the brightness of a given color at a specific point.

Properties of an image –
i) Scalability generally refers to a quality reduction achieved by manipulation of the bit stream or file (without decompression and re-compression). Other names for scalability are progressive coding or embedded bit streams. Scalability is especially useful for previewing images while downloading them (e.g. in a web browser) or for providing variable-quality access to, for example, databases. Despite its contrary nature, scalability may also be found in lossless codecs, usually in the form of coarse-to-fine pixel scans. There are several types of scalability:
- Quality progressive or layer progressive: the bit stream successively refines the reconstructed image.
- Resolution progressive: first encode a lower image resolution, then encode the difference to higher resolutions.
- Component progressive: first encode grey, then color.
ii) Meta information – Compressed data may contain information about the image which may be used to categorize, search, or browse images. Such information may include color and texture statistics, small preview images, and author or copyright information.
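The raster-map representation described above, with each pixel stored as an (X coordinate, Y coordinate, intensity) triple, can be sketched as follows (a tiny made-up 2x3 greyscale image is used for illustration):

```python
def raster_scan(image_rows):
    """Flatten a 2-D raster map into (x, y, intensity) triples, row by row."""
    return [(x, y, value)
            for y, row in enumerate(image_rows)
            for x, value in enumerate(row)]

# A tiny 2-row by 3-column greyscale image; each pixel holds a
# quantized brightness value in the range 0-255 (8-bit depth).
image = [[0, 128, 255],
         [64, 192, 32]]
triples = raster_scan(image)
```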
iii) Region of interest coding – Certain parts of the image are encoded with higher quality than others. This may be combined with scalability (encode these parts first, others later).
iv) Processing power – Compression algorithms require different amounts of processing power to encode and decode; some high-compression algorithms require high processing power.
v) Image quality – Image quality is not a single factor but a composite of at least five factors: contrast, blur, noise, artifacts, and distortion.
Contrast: Contrast means difference. In an image, contrast can be in the form of different shades of grey, light intensities, or colors. Contrast is the most fundamental characteristic of an image. When a value is assigned to contrast, it refers to the difference between a specific structure or object in the image and the area around it or its background. The physical contrast of an object must represent a difference in one or more object characteristics.
Blur: Each imaging method has a limit on the smallest object that can be imaged, and thus on visibility of detail. Visibility of detail is limited because all imaging methods introduce blurring into the process. The primary effect of image blur is to reduce the contrast and visibility of small objects or detail. The amount of blur in an image can be quantified in units of length; this value represents the width of the blurred image of a small object.
Noise: Image noise, sometimes referred to as image mottle, gives an image a textured or grainy appearance. The source and amount of image noise depend on the imaging method. The general effect of increasing image noise is to reduce object visibility; noise affects the boundary between visible and invisible objects. Noise becomes less pronounced as the tones become brighter: brighter regions have a stronger signal due to more light, resulting in a higher overall SNR. This means that images which are underexposed will have more visible noise, even if you brighten them up to a more natural level afterwards. On the other hand, overexposed images will have less noise and can actually be advantageous, assuming that you can darken them later and that no region has become solid white where there should be texture.
Artifacts: Most imaging methods can create image features that do not represent a body structure or object; these are image artifacts. In many situations an artifact does not significantly affect object visibility and diagnostic accuracy, but artifacts can obscure a part of an image or may be interpreted as an anatomical feature.
Distortion: An image should not only make internal objects visible, but should give an accurate impression of their size, shape, and relative positions.
vi) Pixel bit depth – Pixel bit depth is the number of bits that have been made available in the digital system to represent each pixel in the image. Here we have an example of using only four bits. This is smaller than would be used in any actual medical image, because with four bits
a pixel would be limited to having only 16 different values (brightness levels or shades of grey), rather than, for example, the 8 bits or 256 levels that are typical of computer image files.
vii) Pixel size – When an image is in digital form, it is effectively blurred by the size of the pixel. This is because all anatomical detail within an individual pixel is "blurred together" and represented by one number. The physical size of a pixel, relative to the anatomical objects, is the amount of blurring added to the imaging process by the digitizing of the image. The size of a pixel (and of image detail) is determined by the ratio of the actual image size to the size of the image matrix. An image with small pixels (less blurring) displays much more detail than an image made up of larger pixels.
viii) Image resolution – This is an umbrella term that describes the detail an image holds. The term applies to raster digital images, film images, and other types of images. Higher resolution means more image detail. Image resolution can be measured in various ways; basically, resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch), to the overall size of a picture (lines per picture height, also known simply as lines, TV lines, or TVL), or to angular subtense. The resolution of an image can be of the following types:
i) Pixel resolution – The pixel resolution is given as a pair of positive integers, for example 640 by 480, where the first number is the number of pixel columns (width) and the second is the number of pixel rows (height). This describes the pixel count, not the pixel resolution in pixels per inch (ppi).
ii) Spatial resolution – The measure of how closely lines can be resolved in an image is called spatial resolution, and it depends on properties of the system creating the image. In effect, spatial resolution refers to the number of independent pixel values per unit length. A line is either a dark line or a light line; line pairs are often used instead of lines, where a line pair comprises a dark line and an adjacent light line. Photographic lenses and film are most often quoted in line pairs per millimetre: a resolution of 10 lines per millimetre means 5 dark lines alternating with 5 light lines, or 5 line pairs per millimetre (5 LP/mm).
iii) Spectral resolution – Color images distinguish light of different spectra. Multi-spectral images resolve even finer differences of spectrum or wavelength than is needed to reproduce color, so they can have higher spectral resolution: that is, the strength of each band that is created.
iv) Radiometric resolution – Radiometric resolution determines how finely a system can represent or distinguish differences of intensity, and is usually expressed as a number of levels or a number of bits. The higher the radiometric resolution, the better subtle differences of intensity or reflectivity can be represented. In practice, the effective radiometric resolution is typically limited by the noise level, rather than by the number of bits of representation.
ix) Numeric size – The numerical size (number of bits) of an image is the product of two factors: the number of pixels, which is found by multiplying the pixel length and width of the image, and the bit depth (bits per pixel). The bit depth is usually in the range of 8-16 bits, or 1-2 bytes, per pixel.
x) Image compression – Image compression is the process of reducing the numerical size of digital images. The level of compression is the factor by which the numerical size is reduced; it depends on the compression method and the selected level of compression. Lossless compression is when there is no loss of image quality. Lossy compression results in some loss of image quality and must be used with care for diagnostic images.

Image formats – Image file formats are standardized means of organizing and storing digital images. Image files are composed of pixels, vector (geometric) data, or a combination of the two.
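The numeric-size product above can be computed directly; the 640x480 dimensions and the 10:1 level of compression are assumed example values.

```python
def image_size_bits(width_px, height_px, bits_per_pixel):
    # Numeric size = (number of pixels) * (bit depth per pixel)
    return width_px * height_px * bits_per_pixel

raw_bits = image_size_bits(640, 480, 8)   # 2,457,600 bits = 307,200 bytes
compressed_bits = raw_bits // 10          # at a compression level of 10
```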
PNG (Portable Network Graphics) – The PNG file format was created as the free.differences of intensity or reflectivity can be represented. In practice. PNG provides a patent-free replacement for GIF and can also replace many common uses of TIFF. like JPG. Indexed-color.
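The numeric-size and compression-level definitions above can be sketched in Python; the image dimensions and compression ratio used here are hypothetical, for illustration only:

```python
# Numerical size of an image = (pixel count) x (bit depth), per the text.
# The example dimensions and ratio below are hypothetical.

def numeric_size_bits(width, height, bits_per_pixel):
    """Uncompressed numerical size of an image, in bits."""
    return width * height * bits_per_pixel

def compression_level(uncompressed_bits, compressed_bits):
    """Factor by which compression reduced the numerical size."""
    return uncompressed_bits / compressed_bits

size = numeric_size_bits(800, 600, 8)        # 8-bit greyscale example
print(size)                                  # 3840000 bits
print(compression_level(size, size // 4))    # 4.0 (a 4:1 compression level)
```

A lossless format would have to reach this level without discarding information; a lossy format trades some image quality for a higher level.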
TIFF (Tagged Image File Format) – The TIFF format is a flexible format that normally saves 8 bits or 16 bits per color (red, green, blue) for 24-bit and 48-bit totals, respectively, usually using either the TIFF or TIF filename extension. TIFFs can be lossy or lossless. TIFF's flexibility can be both an advantage and a disadvantage, since a reader that reads every type of TIFF file does not exist. TIFF can handle device-specific color spaces, such as the CMYK defined by a particular set of printing press inks, and remains widely accepted as a photograph file standard in the printing business. OCR (Optical Character Recognition) software packages commonly generate some (often monochromatic) form of TIFF image for scanned text pages. The TIFF image format is not widely supported by web browsers.
EXIF (Exchangeable image file format) – The Exif format is a file standard similar to the JFIF format with TIFF extensions. It is incorporated in the JPEG-writing software used in most cameras. Its purpose is to record and to standardize the exchange of images with image metadata between digital cameras and editing and viewing software.
BMP (Windows bitmap) – The BMP file format handles graphics files within the Microsoft Windows OS. BMP files are uncompressed, hence they are large; their advantage is their simplicity and wide acceptance in Windows programs.
GIF (Graphics Interchange Format) – GIF is limited to an 8-bit palette, or 256 colors. This makes the GIF format suitable for storing graphics with relatively few colors, such as simple diagrams, shapes, logos and cartoon-style images. It uses a lossless compression that is more effective when large areas have a single color, and ineffective for detailed images or dithered images. The GIF format supports animation and is still widely used to provide image animation effects.
JPEG (Joint Photographic Experts Group) – JPEG-compressed images are usually stored in the JFIF (JPEG File Interchange Format) file format; the JPEG/JFIF filename extension is JPG or JPEG. JPEG compression is (in most cases) lossy compression, producing relatively small files. It supports 8 bits per color (red, green, blue) for a 24-bit total. JPEG files suffer generational degradation when repeatedly edited and saved. The JPEG/JFIF format is also used as the image compression algorithm in many PDF files. The format stores meta information such as camera settings, image size, color information, shutter speed, exposure, name of camera, and time and date; the metadata are recorded for individual images, and when images are viewed or edited by image editing software, all of this image information can be displayed. JPEG uses the following steps for compression:
1. Scanning
2. DCT (Discrete Cosine Transform)
3. Quantization
4. Encoding
Let us take an image as an example. First, the image is arranged in a rectangular grid of pixels whose dimensions are 250 by 375, giving a total of 93,750 pixels. The color of each pixel is determined by specifying how much of the colors red, green and blue should be mixed together. Each color component is represented as an integer between 0 and 255 and so requires one byte of computer storage. Therefore, each pixel requires three bytes of storage, implying that the entire image should require 93,750 × 3 = 281,250 bytes.
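The arithmetic in this example can be checked with a few lines of Python:

```python
# Storage required for the 250 x 375 truecolor example above.
width, height = 250, 375
bytes_per_pixel = 3                # one byte each for red, green, blue

total_pixels = width * height      # 93,750 pixels
storage = total_pixels * bytes_per_pixel

print(total_pixels)   # 93750
print(storage)        # 281250 bytes
```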
However, the corresponding JPEG image is only 32,414 bytes; in other words, the image has been compressed by a factor of roughly nine.
The JPEG compression algorithm – First, we may think of the color of each pixel as represented by a three-dimensional vector (R, G, B) consisting of its red, green, and blue components. In a typical image, there is a significant amount of correlation between these components. For this reason, we will use a color space transform to produce a new vector whose components, Y, Cb and Cr, represent luminance, and blue and red chrominance. The luminance describes the brightness of the pixel, while the chrominance carries information about its hue. These three quantities are typically less correlated than the (R, G, B) components.
Next, the image is divided into 8 by 8 blocks of pixels. When we apply the color space transformation to each pixel in our block, we obtain three new blocks, one corresponding to each component. Since each block is processed without reference to the others, we'll concentrate on a single block.
The Discrete Cosine Transform – We will first focus on one of the three components in one row in our block and imagine that the eight values are represented by f0, f1, ..., f7. Instead of recording the individual values of the components, we could record, say, the average values and how much each pixel differs from this average value. We would like to represent these values in a way so that the variations become more apparent. This is the essence of the Discrete Cosine Transform (DCT), which will now be explained. We store the resulting coefficients in another 8 by 8 block.
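As a sketch of this step and the quantization that follows it, here is a one-dimensional type-II DCT applied to a single row of eight values, then quantized; the sample values are illustrative, and the quantizing row is the first row of a commonly cited JPEG luminance table (a real codec applies the 2-D DCT to all three components):

```python
import math

def dct_1d(f):
    """Type-II DCT of an 8-sample row: coefficient 0 reflects the average,
    higher coefficients capture increasingly rapid variations."""
    n = len(f)
    coeffs = []
    for u in range(n):
        c = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
        s = sum(f[x] * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                for x in range(n))
        coeffs.append(c * s)
    return coeffs

def quantize(coeffs, q):
    """Divide each coefficient by its quantizing factor and round."""
    return [round(c / step) for c, step in zip(coeffs, q)]

row = [52, 55, 61, 66, 70, 61, 64, 73]     # hypothetical luminance values
F = dct_1d(row)                            # F[0] = sum(row) / sqrt(8)
q_row = [16, 11, 10, 16, 24, 40, 51, 61]   # illustrative quantizing factors
small = quantize(F, q_row)                 # small integers, mostly zeros
```

After quantization most high-frequency entries become zero, which is exactly what makes the run-length and entropy encoding that follows so effective.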
The coefficients Fw,u are real numbers which will be stored as integers. This means that we will need to round the coefficients; rather than simply rounding the coefficients Fw,u, we do this in a way that facilitates greater compression: we first divide by a quantizing factor Qw,u and then record round(Fw,u / Qw,u). When a JPEG file is created, the algorithm asks for a parameter to control the quality of the image and how much the image is compressed. The quantization matrices are stored in the file so that approximate values of the DCT coefficients may be recomputed.
In the 8 by 8 block of coefficients, the entry in the upper left corner essentially represents the average over the block; moving to the right increases the horizontal frequency, while moving down increases the vertical frequency. We now order the coefficients in a zigzag pattern so that the lower frequencies appear first. What is important here is that there are lots of zeroes: instead of recording all the zeroes, we can simply say how many appear.
Reconstructing the image from this information is rather straightforward: the approximate DCT coefficients are recomputed, the (Y, Cb, Cr) vector is found through the Inverse Discrete Cosine Transform, and then the (R, G, B) vector is recovered by inverting the color space transform.
COLOR MODEL – A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components. When this model is associated with a precise description of how the components are to be interpreted (viewing conditions, etc.), the resulting set of colors is called a color space. This section describes ways in which human color vision can be modelled. The mainly used models are the following:
RGB (Red Green Blue) color model – Media that transmit light (such as television) use additive color mixing with primary colors of red, green, and blue, each of which stimulates one of the three types of the eye's color receptors with as little stimulation as possible of the other two. This is called "RGB" color space. The main purpose of the RGB color model is for the sensing, representation, and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography. RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently.
CMYK (Cyan, Magenta, Yellow, Key or black) – It is possible to achieve a large range of colors seen by humans by combining cyan, magenta, and yellow transparent dyes/inks on a white substrate.
These are the subtractive primary colors: cyan is green + blue, magenta is red + blue, and yellow is red + green. Often a fourth, black, is added to improve reproduction of some dark colors. This is called "CMY" or "CMYK" color space. Color printers, on the other hand, are not RGB devices, but subtractive color devices (typically using the CMYK color model).
HSB/HSL (Hue, Saturation, Brightness/Lightness) – It is a three-dimensional model used to define the color of an image. Hue can be defined as an angle on the color wheel, from 0 to 360. Saturation is used to define the intensity of a color. Brightness defines the amount of black or white color in an image. For describing a light source, hue means the dominant frequency, saturation means purity, and brightness or luminance is used for intensity.
Chromaticity Model – It shows color on the basis of frequency. It shows color on the x and y axes, and on the third dimension it shows the ……………… It is also an additive model; it generates color by combining the x and y axes. It has limited uses, as it is not used by scanners and similar devices.
Color Palette – A palette is a given, finite set of colors for the management of digital images. Due to video memory limitations, even if a system is able to generate many colors, it may not be able to display them all simultaneously.
Color Depth – It shows the total number of colors that a given system is able to generate or manage. It is used to define colors on the basis of the number of bits provided, and defines the number of standard colors in an image:
Color depth              Colors available
1 bit per pixel          2 colors
4 bits per pixel         16 colors
8 bits per pixel         256 colors
16 bits per pixel        65,536 colors
24 bits per pixel        16.7 million colors (truecolor)
Halftone – Halftone is the reprographic technique that simulates continuous tone imagery through the use of dots, varying either in size, in shape or in spacing. Where continuous tone imagery contains an infinite range of colors or greys, the halftone process reduces visual reproductions to a binary image that is printed with only one color of ink. This binary reproduction relies on a basic optical illusion. The resolution of a halftone screen is measured in lines per inch (lpi): the number of lines of dots in one inch, measured parallel with the screen's angle. Halftoning is also commonly used for printing color pictures, by varying the density of the four primary printing colors: cyan, magenta, yellow and black. Dots can have the following shapes:
1) Round dots: most common, suitable for light images, especially for skin tones. They meet at a tonal value of 70%.
2) Elliptical dots: appropriate for images with many objects. Elliptical dots meet at the tonal values of 40% (pointed ends) and 60% (long side), so there is a risk of a pattern.
3) Square dots: best for detailed images, not recommended for skin tones. The corners meet at a tonal value of 50%.
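The color-depth figures in the table above follow from a single rule: the number of available colors is 2 raised to the number of bits per pixel.

```python
# Colors available = 2 ** (bits per pixel), matching the color depth table.
for bits in (1, 4, 8, 16, 24):
    print(f"{bits:>2} bits per pixel -> {2 ** bits:,} colors")
```

At 24 bits this gives 16,777,216 combinations, the "16.7 million colors" of truecolor.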
Dynamic range correction – This term is used to show the measure over which a value can possibly change. The dynamic range of the human eye extends from reduced sunlight to bright sunlight; the eye can also view objects in moonlight, where the illumination is about 1/10, but the range used in digital cameras is normally less than that of the human eye. A dynamic range sensor is used to adjust the values of dark areas in digital photography; it is used to develop the range of luminance.
Dithering – Full-color photographs may contain an almost infinite range of color values. Dithering is the process of juxtaposing pixels of two colors to create the illusion that a third color is present. It is the most common means of reducing the color range of images down to the 256 (or fewer) colors seen in 8-bit GIF images. A simple example is an image with only black and white in the color palette: by combining black and white pixels in complex patterns, a graphics program like Adobe Photoshop can create the illusion of gray values.
White Balance – In photography and image processing, color balance is the global adjustment of the intensities of the colors (typically the red, green, and blue primary colors). An important goal of this adjustment is to render specific colors – particularly neutral colors – correctly; hence, the general method is sometimes called gray balance, neutral balance, or white balance. "Color balance" is normally reserved to refer to correction for differences in the ambient illumination conditions. The color balance operations in popular image editing applications usually operate directly on the red, green, and blue channel pixel values, without respect to any color sensing or reproduction model. In shooting film, color balance is typically achieved by using color correction filters over the lights or on the camera lens. Color temperature is used to measure a light source, as the ratio of blue light to red light. By adjusting this, a color cast can be created, which is used to define the lighting in an image.
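A minimal sketch of such a gray-balance correction operating directly on RGB channel values: sample a patch that should be neutral, then scale each channel so that the patch becomes an equal-valued gray. The function name and sample values below are hypothetical:

```python
def gray_balance(pixel, neutral_sample):
    """Scale R, G, B so that 'neutral_sample' maps to an equal-valued gray.
    A simplified per-channel scaling; values are 0-255 integers."""
    target = sum(neutral_sample) / 3                 # desired gray level
    return tuple(min(255, round(c * target / n))
                 for c, n in zip(pixel, neutral_sample))

# A patch that should be neutral reads warm (too much red):
sample = (200, 180, 160)
print(gray_balance(sample, sample))   # (180, 180, 180): balanced to gray
```

The same scaling factors would then be applied to every pixel in the image, neutralizing the color cast of the ambient illumination.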
Gamma Correction / Gamma Encoding / Gamma Nonlinearity – Gamma is the name of a nonlinear operation used to code and decode luminance in video or still image systems, defined by the following power-law expression:
Vout = A × Vin^γ
where A is a constant and the input and output values are non-negative real values; in the common case of A = 1, inputs and outputs are typically in the range 0–1. A gamma value γ < 1 is sometimes called an encoding gamma, and the process of encoding with this compressive power-law nonlinearity is called gamma compression; conversely, a gamma value γ > 1 is called a decoding gamma, and the application of the expansive power-law nonlinearity is called gamma expansion. For example, with A = 1 and an encoding gamma of γ = 0.45, an input of 0.5 encodes to 0.5^0.45 ≈ 0.73. Gamma encoding of images is required to compensate for properties of human vision, to maximize the use of the bits or bandwidth relative to how humans perceive light and color. Gamma encoding of floating point images is not required (and may be counterproductive), because the floating point format already provides a pseudo-logarithmic encoding.
Photo Retouching – Photo retouching is the application of image editing techniques to photographs in order to create an illusion (in contrast to mere enhancement or correction). It is used in advertising photography. There are several subtypes of digital image retouching: technical retouching (background improvement etc.) and creative retouching.
UNIT V – VIDEO AND ANIMATION
Video – Video is the technology of electronically capturing, recording, processing, storing, transmitting, and reconstructing a sequence of still images representing scenes in motion, through analog or digital means. Videos are mainly of the following types:
Component Video – Higher-end video makes use of three separate video signals for the Red, Green and Blue image planes. It gives the best color reproduction, since there is no crosstalk between the three different channels.
Composite Video – It is also called CVBS (colour video baseband signal, or colour, video, blanking and sync). Luminance (intensity), chrominance (colour) and sync information are combined in one signal. This type of signal is used in broadcast colour TV.
S-Video – Also called separated video or super video. It uses two wires, one for the luminance (intensity) signal and a second for the chrominance (colour) signal. The special S-Video connector also carries left and right signals for stereo.
Characteristics of video streams
Video frame – A video frame is one of the many still (or nearly still) images which compose the complete moving picture.
Video field – In video, a field is one of the many still images which are displayed sequentially to create the impression of motion on the screen. Two fields comprise one video frame.
Frame rate – Frame rate is the number of still pictures per unit of time of video. It ranges from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras. The minimum frame rate to achieve the illusion of a moving image is about fifteen frames per second.
Video scanning – It can be of two types:
1) Interlace scanning: Interlacing divides each frame into odd and even lines and then alternately scans and refreshes them at 30 frames per second. The slight delay between odd and even line refreshes creates some distortion or flicker, because only half the lines keep up with the moving image while the other half waits to be refreshed. This type of scanning can commonly be seen in traditional CRT monitors.
2) Progressive scanning: With the advent of the LCD (Liquid Crystal Display), a more efficient and better way of scanning the image was introduced, known as progressive scanning. Unlike an interlaced system, which scans odd lines (1, 3, 5, ...) and alternately even lines (2, 4, 6, ...) every 1/30th of a second, a progressive system scans the lines sequentially (1, 2, 3, ...) every 1/60th of a second and produces a complete and flicker-free picture. Using progressive scanning, a smoother and much more detailed image with finer details can be produced.
Differences between interlace and progressive scanning:
1) Interlace scans odd lines first and alternates to scan even lines, whereas a progressive system scans sequentially.
2) Interlace scans every 1/30th of a second and progressive every 1/60th.
3) Progressive produces much better and finer picture quality.
4) Less native source material is available in the 1080p format.
Aspect ratio – Aspect ratio describes the dimensions of video screens and video picture elements. All popular video formats are rectilinear, and so can be described by a ratio between width and height. The screen aspect ratio of a traditional television screen is 4:3, or about 1.33:1. High definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1.
Bit rate – Bit rate is a measure of the rate of information content in a video stream. It is quantified using the bit per second (bit/s or bps) unit or Megabits per second (Mbit/s). A higher bit rate allows better video quality. For example, a video with a duration (T) of 1 hour (3600 sec), a frame size of 640x480 (WxH), a color depth of 24 bits and a frame rate of 25 fps has the following properties:
pixels per frame = 640 * 480 = 307,200
bits per frame = 307,200 * 24 = 7,372,800 bits = 7.37 Mbits
bit rate (BR) = 7.37 Mbits * 25 = 184.25 Mbits/sec
video size (VS) = 184 Mbits/sec * 3600 sec = 662,400 Mbits = 82,800 Mbytes = 82.8 Gbytes
Display resolution – The display resolution of a digital television or display device is the number of distinct pixels in each dimension that can be displayed. It is usually quoted as width × height, with the units in pixels: for example, "1024×768" means the width is 1024 pixels and the height is 768 pixels.
Monitor refresh rate – To make and hold the image, it is reconstructed multiple times in order to maintain its appearance. The number of times a pixel is reconstructed on a monitor screen per unit of time is called the monitor refresh rate. A higher refresh rate generates flicker-free images.
Dot pitch – Dot pitch governs picture sharpness in a color monitor. It is the physical distance between two pixels, measured in mm. By decreasing the dot pitch distance we can increase the image quality.
Analog video – Analog video is a video signal transferred by an analog signal. An analog color video signal contains the luminance, brightness (Y) and chrominance (C) of an analog television image. When combined into one channel, it is called composite video. Analog video may also be carried in separate channels, as in two-channel S-Video (YC) and multi-channel component video formats. Analog video is used in both consumer and professional television production applications.
Digital video comprises a series of orthogonal bitmap digital images displayed in rapid succession at a constant rate. In the context of video these images are called frames, and we measure the rate at which frames are displayed in frames per second (FPS).
Video compression – Lossless, Lossy
File formats – AVI, MOV, DAT, SWF, MPEG (I/P/B frames), MPEG-1/2/4/21
Tape formats – Ampex, VERA, U-matic, Betamax, Betacam, VCR, CVC, camcorder, DV, VCD, DVD
Digital video is a type of digital recording system that works by using a digital rather than an analog video signal. Swf.800 = 7.
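The video size arithmetic given earlier (640×480, 24-bit color, 25 fps, one hour) can be reproduced in Python; using the exact, unrounded intermediate values gives roughly 82.9 Gbytes rather than the 82.8 Gbytes obtained from rounded figures:

```python
# Uncompressed video size from the worked example in the text.
width, height = 640, 480
bit_depth = 24           # bits per pixel
fps = 25
duration_s = 3600        # one hour

pixels_per_frame = width * height              # 307,200
bits_per_frame = pixels_per_frame * bit_depth  # 7,372,800 (~7.37 Mbit)
bit_rate = bits_per_frame * fps                # ~184 Mbit/s
size_bits = bit_rate * duration_s
size_gbytes = size_bits / 8 / 1_000_000_000

print(round(size_gbytes, 1))   # 82.9
```

Figures like this are why raw video is almost always compressed before storage or transmission.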
Animation topics:
1) Cel animation
2) Computer animation
3) Morphing