
CHP – 1

1) What is multimedia and its properties?


• Multimedia is a computer-based interactive communication process that
includes text, graphics, audio, animation and video.

Properties:
1. Independency –
• A multimedia system uses different media.
• These media should be independent of one another, though different levels
of independence are required.
• For example, a computer-recorded video couples audio and video information
through the common medium of the tape, so the two are not independent.

2. Computer Support Integration –


• Media types are bundled together for ease of delivery, storage, etc.
• A multimedia system should be programmable by a professional user or an
end user.
• When integrating different media, synchronization between them must be
considered.
• The need for interchange between different multimedia applications has led
to the evolution of common interchange file formats.

3. Communication System –
• A variety of multimedia applications running on different platforms need
to communicate with each other.
• A communication system is required for real-time delivery over a
distributed network.
• It is also required for inter-application exchange of data.

4. Interactivity –
• If the user has the ability to control what elements are delivered and when,
the system is called an interactive system.
• Traditional technologies delivered audio, graphics and text, but the
delivery was predefined and inflexible.
• Thus, while designing a multimedia system, we have to decide the level of
interactivity we wish to provide to the user of the system.
2) What is Global Structure of Multimedia?

1. Device Domain –
• It contains all multimedia elements such as graphics & images, video &
animation, audio etc.
• It also consists of compression, storage and network for these elements.
• It also specifies how these elements are digitized and processed.
• Network allows exchange of data.
• Compression schemes used are CCITT group 3 & 4, JPEG, MPEG etc.

2. System Domain –
• It contains three services.
• Operating System is used for interaction between computer hardware and
software and also with user.
• Database system is used to store data at different levels.
• Communication system is used to allow communication between different
multimedia services.
• Computer technology specifies the interface between device domain and
system domain.

3. Application Domain –
• A document consists of a set of structured information that can be in
different forms of media.
• Abstraction is the process of hiding the details and showing only essential
features of a particular concept.
• User interface should be responsive to user need.
• It also include multimedia tools and application such as special editor and
other document processing tools.

4. Cross Domain –
• Synchronization deals with the temporal relation between media objects
in a multimedia system.
• Hence, it must be considered at all levels.
3) What are Multimedia Objects and/or Elements?
1. Text –
• A broad term for something that contains words to express something.
• Text is the most basic element of multimedia.
• A good choice of words could help convey the intended message to the users.
• Used in contents, menus, navigational buttons etc.


2. Images –
I) Document images:
• Scanning and storing copies of business documents.
• Avoids paper work and making several copies becomes easier.

II) Facsimile:
• Transmission of document images over telephone line.
• Typical density is 100 to 200 dpi, enough for true representation and legibility.

III) Photographic images:


• Normally captured by a digital camera or a similar device and then stored.
• For photo identification, profile creation.

3. Graphics –
• A graphic, or graphical image, is a digital representation of non-text
information such as a drawing, chart, or figure.
• 2D or 3D figure or illustration.
• Used in multimedia to show more clearly what a particular piece of
information is about.
4. Holographic Images –
• A unique kind of photographic image made without the use of a lens.
• When illuminated by coherent light, such as a laser beam, it organizes the light
into a 3-D representation of the original object.

5. Animation –
• The illusion of motion created by the consecutive display of images of static
elements.
• In multimedia, animation is used to further enhance / enrich the experience of the
user to further understand the information conveyed to them.

6. Audio –
• Audio is sound within the acoustic range available to humans.
• An audio frequency (AF) is an electrical alternating current within the range 20 to
20,000 hertz that can be used to produce acoustic sound.
• In multimedia, audio could come in the form of speech, sound effects and also
music.
• Voice Commands and Voice Synthesis:
o For hands free operation of a computer program.
o Direct computer operations by spoken commands
• Audio messages
o Annotated voice mail which use audio or voice messages as attachments.

7. Video –
• It is the technology of capturing, recording, processing, transmitting, and
reconstructing moving pictures.
• Video is more towards photo realistic image sequence / live recording as in
comparison to animation.
• Video messages
o Can be used as an attachment with the mail.
o Full motion stored and live video
o Live video presentations, video conferencing,3D video techniques to
create concept of virtual reality.

8. Geographic information system maps –


• A system designed to capture, store, manipulate, analyze, manage, and present
spatial or geographic data.
• Graphical information of a location, highlighting certain map elements like
elevations, longitudes, latitudes, wildlife statistics etc.
• Example: At airports for tracking pavement condition, runway extensions, routing
purposes
• In tourism one application could be route optimization.

4) What is multimedia architecture?


ANSWER FROM TECHMAX.
5) What is IMA architecture framework?

• The Interactive Multimedia Association has a task group to define the architectural
framework for multimedia to provide the ability to exchange and use information.
• It defines necessary interchange formats across multivendor solutions
• The architectural approach taken by IMA is based on defining interfaces to a
multimedia interface bus.
• This bus would be the interface between systems and multimedia sources.
6) What is Network Architecture for Multimedia Systems?
• Multimedia systems need special networking requirements.
• Because not just small volumes of data but large volumes of images, voice and video
messages are being transmitted.
• To meet high-speed multimedia needs, the network technologies below are used.
• Simple document imaging and text can be handled even by Ethernet, but integrated
multimedia applications need higher-speed networks.
• ATM (622 Mbps): ATM is an acronym for Asynchronous Transfer Mode. Its
topology was originally designed for broadband applications in public networks.
Asynchronous Transfer Mode technology (ATM) simplifies transfers across LANs
and WANs.
• FDDI (100 Mbps): FDDI is an acronym of Fiber Distributed Data Interface. This
FDDI network is an excellent candidate that interconnects different types of LANs.
FDDI presents a potential for standardization for high speed networks. FDDI allows
large-distance networking.
• ATM + SONET (10 Gbps): Combining ATM with SONET allows data transfer at speeds of up to 10 Gbps across a LAN.
7) What are Types or classification of medium?
• Medium is the means of distribution and presentation of information.
• Medium can be classified into:

1. Perception medium –
• Perception medium is the medium through which information is perceived and
processed by the user.
• Eg: Sound as perceived by human ear; graphics as perceived by human eye.

2. Representation medium –
• Representation medium refers to the way information is constructed and
represented.
• E.g. Text is encoded using ASCII codes or UNICODE, audio is encoded using
PCM, Image is encoded using JPEG.

3. Presentation medium –
• Presentation medium is the way in which information is presented to the user.
This medium engages all the human senses.
• Eg: Keyboard, Mouse, Microphone, Screen, Speaker, Printer.

4. Storage medium –
• Storage medium is the medium where you store and from where you retrieve data.
• Both primary and secondary storage are considered.
• Eg: Hard drive, RAM, CD-ROM, DVDs etc.

5. Transmission medium –
• Transmission medium describes the physical system used to carry communication
signal from one system to another.
• Eg: guided transmission media like metallic cables, optical fibers. unguided media
like satellites and radio signals.

6. Information exchange medium –


• Information exchange medium is the application medium that is used to distribute
the information from one point to the other.
• Eg: Email service.
8) What are various Interaction Techniques?
• An interaction technique is a combination of hardware and software elements that
provides a way for computer users to accomplish a single task.
• It is a way of using physical input/output devices to facilitate human-computer
interaction to perform a task.
o Eg: For going back to a previously visited page on a web browser by either
clicking a button, pressing a key, performing a mouse gesture or uttering a
speech command.
• Any interaction technique involves:
o One or several input devices
o One or several output devices
• Interaction techniques can be viewed from two perspectives:
o 1) User’s view
o 2) Designer’s view
• Interaction Styles:
o 1) Command line
o 2) Direct manipulation
9) Explain Input devices – Electronic Pen, Scanner and Digital Camera?
Electronic Pen –
• The main components of an electronic pen system include:
o Electronic pen and digitizer: Generate the pen position and pen status.
o Pen driver: A device driver that collects all pen information and builds pen
packets for the recognition context manager.
o Recognition context manager: Works with the device driver, recognizer,
dictionary, and application to perform the recognition and the requested
tasks.
o Recognizer: Recognizes handwritten characters and figures.
o Dictionary: The recognizer feeds the characters to a dictionary, which
selects the most likely character-string combinations in the form of words.
o Display driver: Displays objects such as characters, symbols, or graphics on
the screen.

Scanner –
— A Scanner is used for converting a paper document to a digital image.
— It acts as a camera eye and takes a photograph of document.
— Types of Scanners:
1) Based on Size:
o Normally they come in A (8.5 inch × 11 inch) and B (11 inch × 17 inch)
sizes.
o They are also called A- and B-size scanners (portrait and landscape mode).
o Large Form factor scanners are available for capturing large drawings,
engineering and architectural drawings.

2) Based on scanning mechanism:


Flatbed scanner –
o Fixed scanning bed with a glass plate.
o Light source mounted on a traction mechanism.
o Mirror to reflect each illuminated scanline to a fixed CCD array.
o Charge in the CCD cell generates voltage which is fed to A/D converter
for conversion to a digital value.
o CCD determines the pixel intensity of each pixel in the selected line
being scanned.
o A high-end flatbed scanner can scan up to 5400 ppi.
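The voltage-to-digital step described above can be sketched in a few lines. This is a simplified illustration, assuming a hypothetical 3.3 V reference voltage and an 8-bit converter; real scanner A/D converters differ in range and bit depth.

```python
def quantize(voltage, v_ref=3.3, bits=8):
    """Map an analog CCD cell voltage (0..v_ref) to an n-bit digital pixel value."""
    voltage = max(0.0, min(voltage, v_ref))   # clamp to the converter's input range
    levels = (1 << bits) - 1                  # 255 for an 8-bit converter
    return round(voltage / v_ref * levels)

# One scanline of hypothetical CCD cell voltages -> 8-bit pixel intensities
scanline_volts = [0.0, 1.65, 3.3]
pixels = [quantize(v) for v in scanline_volts]
print(pixels)  # [0, 128, 255]
```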

Rotary drum scanner –


o Paper is fed from the feed tray and is held and wrapped around the drum.
o There are three sets of rollers to guide the paper.
o Two digital cameras with CCD arrays are mounted at fixed position near the
drum.
o Front side of the paper is scanned in position 1 as it rolls around the drum
and backside is scanned in position 2 as the transport mechanism pulls the
paper from the drum and ejects it out to the stacking tray.
o Drum moves along with paper and both sides can be scanned at the same
time.
o Drum scanners can scan between 3,000 and 24,000 ppi.

Handheld Scanners –
o Scans part of a page where width of scan area is about 3 to 6 inches.
o Convenient, portable and low cost.
o Not preferred for high volume, professional quality scanning.
o Light is reflected from document as user moves the scanner across the
document.
o CCDs absorb reflected light and generate voltage which is converted into
digital value by A/D converter and stored.
Digital Camera –

— In digital camera, one can take pictures without a roll of film.


— Images are stored on magnetic or optical disk in the camera.
— Images and videos can be loaded directly to the computer.
— Images can be viewed, copied and printed number of times, can be mailed, can be
altered or enhanced.
o A CCD array is located just behind the lens.
o Reflected light falls on the CCD array.
o The CCDs generate voltages.
o The voltages are converted into digital values and stored in the camera’s memory.
o Binary value generated can range anywhere from 1 bit/pixel to 8 bits/pixel for
black and white, gray scale or colored capture.
o Higher number of bits represent a better resolution in terms of the number of
gray scales or colors represented by the pixel.
o The clarity of the photos taken with a digital camera depends on the resolution
of the camera.
o This resolution is measured in pixels.
o The more pixels, the higher the resolution, and thereby the better the
picture quality.
— There are many types of resolutions available for cameras:
o 256×256 – This is the basic resolution a camera has. The images taken in such a
resolution will look blurred.
o 640×480 – This is a little more high-resolution. Though a clearer image than the
former resolution, they are frequently considered to be low end. These types of
cameras are suitable for posting pics and images on websites.
o 1216×912 – This resolution is normally used in studios for printing pictures.
o 1600×1200 – This is the high-resolution type. The pictures are in their high end
and can be used to get the same quality as you get from a photo lab.
o 2240×1680 – This high-resolution type is referred to as a 4 MP camera. With this
resolution you can easily take a high-quality photo print.
o There are even higher resolution cameras up to 20 million pixels or so.
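The "megapixel" figures above come directly from the pixel dimensions. A small sketch of the arithmetic:

```python
def megapixels(width, height):
    """Total pixel count of a sensor, expressed in megapixels (millions of pixels)."""
    return width * height / 1_000_000

# The 2240x1680 resolution listed above works out to about 3.76 million pixels,
# which is marketed as a "4 MP" camera.
print(round(megapixels(2240, 1680), 2))  # 3.76
```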

10) What are output devices Printer and Plotter?


Printer –
There are two types:
1. Laser Printer –
o Laser printing technology is the most common for multimedia systems.
o Resolutions ranging from 600 to 1200 dpi are useful for specialized multimedia
applications.
o A high-end production printer has a higher resolution.
o It can be attached to user workstations, workgroup LANs or as centralized
resources for high speed, high volume multimedia document output.
o Millions of bytes of data stream into the printer from the computer.
o An electronic circuit in the printer figures out how to print this data so it looks
correct on the page.
o The electronic circuit activates the corona wire.
o The corona wire charges up the photoreceptor drum so that the drum gains a
positive charge.
o At the same time, the circuit activates the laser to make it draw the image of
the page onto the drum.
o When the laser beam hits the drum, it erases the positive charge that was there
and creates an area of negative charge instead.
o Gradually, an image of the entire page builds up on the drum, the area of the page
that contains positive charge is kept white and the area of the page that contains
negative charge is kept black.
o An ink roller touching the photoreceptor drum coats it with powdered ink.
o No ink is coated to the parts of the drum that have a positive charge.
o The image is transferred from the drum onto the paper.
o The inked paper passes through two hot rollers.
o The heat and pressure from the rollers fuse the toner particles permanently into
the fibers of the paper.
o The printout emerges from the side of the printer, with the paper still warm.
2. Dye Sublimation printer –

— Dye-sublimation printer is a digital printing technology using full color artwork that
works with polyester and polymer-coated surface.
— This process is commonly used for signs and banners, as well as novelty items such as
cell phone covers, coffee mugs etc.
— It has a thermal printing head with thousands of tiny heating elements.
— A plastic transfer film roll is mounted on two rollers and a drum.
— The transfer roll contains panels of cyan, yellow, magenta and black dyes.
— Heating is carried out at 256 different temperature levels.
— It is applicable for multimedia applications because its print quality is very high.
— Graphic artists, advertising agencies use these for photographic quality print.
— The most common process lays only one color at a time, as the dye has each color
on a separate panel.
— During the printing cycle, the rollers will move the medium.
— Tiny heating elements on the head change temperature rapidly, laying different
amounts of dye.
— After the printer finishes printing the medium in one color, it shifts the ribbon on to
the next color panel to prepare for the next cycle.
— The entire process is repeated four or five times in total.
Plotter –
— A plotter is a computer vector graphic printer that gives a hard copy of the output
based on instructions from the system.
— A plotter is a special output device used to produce hard copies of large graphs and
designs on paper, such as construction maps, engineering drawings,
architectural plans and business charts.
— The plotter is either a peripheral component that you add to your computer system
or a standalone device with its own internal processor.
— Plotters work in combination with CAD software on the computer, to output line
drawings for plans, blueprints and other technical drawings.
— Due to the mechanical actions involved, compared to other types of printers such as
ink jet and laser printers, old plotters were slow.
— Only a small number of pen plotters are still in use commercially.

11) What are different storage devices?


Optical storage –
— It is the storage of data on an optically readable medium. Data is recorded by making
marks in a pattern that can be read back.
o Compact Disk (CD) are storage media that hold content in digital form and
that are written and read by a laser.
o Optical media have a number of advantages over magnetic media.
o Optical disk capacity ranges up to 6 GB.
o One optical disk holds about the equivalent of 500 floppies worth of data.
o Durability is another feature of optical media; they last up to seven times as
long as traditional storage media.
o Single CDs can hold around 700 MB.
o They are also very commonly used in computers to read software and
consumer media, for archival and data exchange purposes.
o In computing and optical disc recording technologies, an optical disc (OD) is
a flat, usually circular disc which encodes binary data in the form of ‘pits’ and
lands on a special material on one of its flat surfaces.
o The encoding pattern follows a continuous, spiral path covering the entire disc
surface from the innermost track to the outermost track.

— DVD "digital video disc" is a digital optical disc storage format invented and
developed by Philips and Sony.
— A DVD can store any kind of digital data and is widely used for software and other
computer files, as well as for video that is watched using DVD players.
— DVDs offer a higher storage capacity than CDs while having the same dimensions.
o Prerecorded DVDs are mass-produced, and the data is physically stamped onto the
DVD. Such discs are a form of DVD-ROM because the data can only be read, not
written or erased.
o Blank recordable DVD discs (DVD-R) can be recorded once using a DVD
recorder.
o Rewritable DVDs (DVD-RW) can be recorded and erased many times.
— DVD has enough capacity (1.36 to 15.9 GB) and speed to provide high quality, full
motion video and sound, and low-cost delivery mechanism.
— Games for Windows were also distributed on DVD.
— A Blu-ray Disc originally stored 25 GB per layer; newer discs can hold up to 100 GB.
o DVDs can store about 7 times more data than CDs, because the data on a DVD
is more tightly packed than on a CD.
o A Blu-ray disc can store about 5 times more data than a DVD.

Jukebox –
— An optical jukebox is a device used for robotic data storage which can be
automatically loaded and unloaded without any outside human assistance.
— These discs are normal data storage discs such as CD’s, DVDs, or Blu-ray discs, and
offer terabytes (TB) and petabytes (PB) of secondary storage options.
— Optical Jukeboxes are also known as optical disk libraries, robotic drives and
autochangers.
— An optical jukebox can have up to 2,000 slots for discs; its performance depends
on how quickly, efficiently and effectively it can move between those slots.
— Rate of transfer depends on a number of factors including sorting algorithms and
placement of discs in the slots.
— This kind of storage device is primarily used for the commercial and industrial
scale for backups.
— Jukeboxes are used in high-capacity archive storage environments such as
imaging, medical, and video.
— In this little-used or unused files are moved from fast magnetic storage to optical
jukebox devices in a process called migration.
— If the files are needed, they are migrated back to magnetic disk.
— Today one of the most important uses for jukeboxes is to store data that will last up
to 100 years.
— The data is usually written on Write Once Read Many (WORM) type discs so it
cannot be erased or changed.

12) Explain MMDB and MDBMS?

PPT 1 PG 110
CHP – 2
13) Explain RTF ?
— The Rich Text Format (often abbreviated RTF) is a document file format with
published specification developed by Microsoft.
— Most word processors are able to read and write some versions of RTF.
— Rich text is far more expressive than plain text.
— It supports text formatting, such as bold, italics, and underlining, as well as
different fonts, font sizes, and colored text. Rich text documents can also include
page formatting options, such as custom page margins, line spacing, and tab
widths.
— Most word processors, such as Microsoft Word, Lotus Word Pro, and AppleWorks,
create rich text documents. However, if you save a document in a program's native
format, it may only open with the program that created it. For example, Lotus Word
Pro will not be able to open an AppleWorks text document, even though both
programs are text editors. This is because each program uses its own method of
formatting and creating text files.
— Most word processors now allow you to save rich text documents in the
generic Rich Text Format. This file format, which uses the .rtf extension, keeps
most of the text formatting. Because it is a standard format, it can be opened
by just about any word-processing program and even most basic text editors.
— Key format information carried across in RTF document files:
o Character set: All the characters that are supported including ANSI, IBM PC,
Macintosh.
o Font table: Lists all the fonts used in the document. (Mapping at receiving
application)
o Color table: lists the colors used in the document for highlighting the text.
(Mapping at receiving application)
o Document Formatting: Provides true document margins and paragraph indents.
Printed page looks very similar to original page in receiving application.
o Section Formatting: Section breaks and page breaks, separation of groups of
paragraphs, specifies space above and below the section.
o Paragraph Formatting: Defines control characters for specifying paragraph
justification.
o General formatting: Footnotes, annotation, bookmarks and pictures.
o Character formatting: Bold, Italic, underline, subscript, superscript.
o Special characters: hyphens, backslashes, etc.
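Several of the structures listed above (character set, font table, character formatting) can be seen in a minimal RTF document. The sketch below assembles one by hand; the control words used (\rtf1, \ansi, \fonttbl, \b, \i, \par) come from the published RTF specification, while the file name and chosen font are just examples.

```python
# A minimal RTF document: header declaring the character set, a font table,
# then text with character formatting.
rtf = (
    r"{\rtf1\ansi"                  # RTF version 1, ANSI character set
    r"{\fonttbl{\f0 Arial;}}"       # font table: font 0 is Arial
    r"\f0\fs24 "                    # use font 0 at 12pt (\fs counts half-points)
    r"Plain, \b bold\b0  and \i italic\i0  text.\par"
    r"}"
)

with open("sample.rtf", "w") as f:  # most word processors can open this file
    f.write(rtf)
```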
14) Explain TIFF?
• TIFF (Tag Image File Format) is a common format for exchanging raster graphics
(bitmap) images between application programs. A TIFF file can be identified with a
".tiff" or ".tif" file name suffix.
• Used for data storage and interchange. The general nature of TIFF allows it to be
used in any operating environment, and it is found on most platforms requiring image
data storage.
• Supporting Applications are most paint, imaging, and desktop publishing programs
• Platforms MS-DOS, Macintosh, UNIX and other O.S.
• The TIFF format is perhaps the most versatile and diverse bitmap format in
existence. Its extensible nature and support for numerous data compression schemes
allow developers to customize the TIFF format to fit any unusual data storage
needs.
• The TIFF specification was originally released by Aldus Corporation as a standard
method of storing black-and-white images.
• TIFF 4.0 was released in 1987 and added support for uncompressed RGB
color images.
• TIFF 5.0, released in 1988, was the first revision to add the capability of storing
palette color images and support for the LZW compression algorithm.
• TIFF 6.0 was released in 1992 and added support for CMYK and YCbCr color
images and JPEG compression method.
• TIFF's extensible nature, allows storage of multiple bitmap images of any pixel
depth, makes it ideal for most image storage needs.
• TIFF documents have a maximum file size of 4 GB. Photoshop CS and later versions
supports large documents saved in TIFF format. However, most other applications
and older versions of Photoshop do not support documents with file sizes greater than
4 GB.
Three possible physical arrangements of data in a TIFF file

Logical organization of a TIFF file

● Each IFD is a road map where all the data associated with a bitmap can be found.
The data is found by reading it directly from within the IFD data structure or by
retrieving it from an offset location whose value is stored in the IFD.
● Each IFD contains one or more data structures called tags. Each tag is a 12-byte
record that contains a specific piece of information about the bitmapped data.
● The offset values used in a TIFF file are found in three locations:
o The last four bytes of the header give the offset to the position of
the first IFD.
o The last four bytes of each IFD give the offset to the next IFD.
o The last four bytes of each tag give either an offset to the data the tag
represents, or the data itself.
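The offset chain described above can be walked with a short parser. This is a sketch based on the TIFF layout stated in the notes (8-byte header, 12-byte tag records, 4-byte offsets); it reads only the (tag, type, count) fields and ignores tag values for brevity.

```python
import struct

def read_ifd_tags(data):
    """Walk the IFD chain of a TIFF byte string, returning (tag, type, count) triples."""
    order = "<" if data[:2] == b"II" else ">"              # II = little-, MM = big-endian
    assert struct.unpack(order + "H", data[2:4])[0] == 42  # TIFF magic number
    (offset,) = struct.unpack(order + "I", data[4:8])      # header's last 4 bytes -> first IFD
    tags = []
    while offset:                                          # offset 0 ends the chain
        (n,) = struct.unpack(order + "H", data[offset:offset + 2])
        for i in range(n):                                 # each tag is a 12-byte record
            entry = data[offset + 2 + 12 * i: offset + 14 + 12 * i]
            tag, typ, count = struct.unpack(order + "HHI", entry[:8])
            tags.append((tag, typ, count))
        # the IFD's last 4 bytes give the offset of the next IFD
        (offset,) = struct.unpack(order + "I", data[offset + 2 + 12 * n: offset + 6 + 12 * n])
    return tags

# A hand-built single-IFD TIFF with one tag (256 = ImageWidth, type 3 = SHORT)
tiny = (b"II" + struct.pack("<H", 42) + struct.pack("<I", 8)       # 8-byte header
        + struct.pack("<H", 1)                                     # IFD with 1 entry
        + struct.pack("<HHI", 256, 3, 1) + struct.pack("<I", 16)   # 12-byte tag record
        + struct.pack("<I", 0))                                    # no next IFD
print(read_ifd_tags(tiny))  # [(256, 3, 1)]
```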

15) Explain compression and it’s techniques?


• Data compression is reduction in the number of bits needed to represent data.
• Compressing data can save storage capacity, speed up file transfer, and decrease
costs for storage hardware and network bandwidth or the transmission cost.
• Compression is performed by a program that uses a formula or algorithm to determine
how to shrink the size of the data.
• For instance, an algorithm may represent a string of bits -- of 0s and 1s -- with a
smaller string of 0s and 1s by using a dictionary for the conversion between them.

• Lossless compression –
o It enables the restoration of a file to its original state, without the loss of a
single bit of data, when the file is uncompressed.
o Lossless compression is the typical approach with program executables, as
well as text and spreadsheet files, where the loss of words or numbers would
change the information.
Lossless compression schemes:
● Run-length encoding
● Huffman coding
● CCITT Group 3 1D
● CCITT Group 3 2 D
● CCITT Group 4

• Lossy compression –
o Lossy compression permanently eliminates bits of data that are
redundant, unimportant or invisible.
o Lossy compression is useful with graphics, images, audio, and video, where
the removal of some data bits has little or no noticeable effect on the
representation of the content.
Lossy compression schemes:
● Joint Photographic Experts Group (JPEG)
● Moving Picture Expert Group (MPEG)
● Intel DVI
● CCITT H.261
● Fractals
Comparison of Lossless and Lossy compression:
• Lossless compression is reversible; the original data can be reconstructed
exactly. Lossy compression is irreversible; the original data cannot be
reconstructed exactly.
• Lossless compression exploits statistical redundancy. Lossy compression
exploits human perception of the data.
• Lossless compression gives a lower compression ratio. Lossy compression
gives a higher compression ratio.
• Lossless compression is used for text and images. Lossy compression is
used for audio, video and images.
• Lossless schemes: PackBits encoding (run-length encoding), Huffman coding,
CCITT Group 3 1-D, CCITT Group 3 2-D, CCITT Group 4, Lempel-Ziv and
Welch (LZW).
• Lossy schemes: Joint Photographic Experts Group (JPEG), Moving Picture
Expert Group (MPEG), Intel DVI, CCITT H.261 video coding algorithm,
fractals.

16) Explain RLE?


• Run-length encoding (RLE) or pack bits encoding is a very simple form of lossless
data compression in which runs of data are stored as a single data value and
count, rather than as the original run.
• This is most useful on data that contains many runs.
• It is not useful with files that don't have many runs as it could greatly increase the
file size. It is primarily used to compress black and white (binary) images.
Example:
000000000000001111111111000000000111111111111
This is a run of 14 0’s, 10 1’s, 9 0’s and 12 1’s, so with 8-bit counts it is coded as
00001110#00001010#00001001#00001100
So instead of 45 bits, 32 bits are sent, saving 13 bits (more than a byte).
• For longer runs even better compression is achieved.
• The compressed output is typically around 1/2 to 1/5 of the original size.
• With a 7-bit count field, a maximum run length of 127 can be represented.
• In an image where adjacent pixels vary rapidly, the runs of black and white
pixels become short, which can lead to negative (reverse) compression.
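The scheme can be sketched as a pair of short functions. This is a minimal illustration working on a bit string, without the fixed-width count bytes or the 127-run cap a real PackBits-style coder would impose.

```python
def rle_encode(bits):
    """Encode a binary string as alternating (count, bit) runs."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                      # extend the current run
        runs.append((j - i, bits[i]))   # store (run length, pixel value)
        i = j
    return runs

def rle_decode(runs):
    return "".join(bit * count for count, bit in runs)

data = "000000000000001111111111000000000111111111111"
runs = rle_encode(data)
print(runs)  # [(14, '0'), (10, '1'), (9, '0'), (12, '1')]
assert rle_decode(runs) == data   # lossless: the round trip is exact
```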
17) What are different types of redundancies in digital image? Explain in detail.

• Coding Redundancy:
o Coding redundancy is associated with the representation of information.
o The information is represented in the form of codes.
o If the grey levels of an image are coded in a way that uses more code
symbols than absolutely necessary to represent each grey level then the
resulting image is said to contain coding redundancy.

• Inter-pixel Spatial Redundancy:


o Interpixel redundancy is due to the correlation between the neighbouring
pixels in an image.
o That means neighbouring pixels are not statistically independent.
o The value of any given pixel can be predicated from the value of its
neighbours, that is they are highly correlated.
o The information carried by individual pixel is relatively small.
o To reduce the interpixel redundancy the difference between adjacent pixels
can be used to represent an image.
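The difference-based representation mentioned in the last bullet can be sketched directly. The pixel values here are made-up examples; the point is that differences between correlated neighbours cluster near zero and are cheaper to entropy-code than the raw values.

```python
def to_differences(row):
    """Replace each pixel by its difference from the previous one (first kept as-is)."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def from_differences(diffs):
    row = [diffs[0]]
    for d in diffs[1:]:
        row.append(row[-1] + d)   # rebuild pixels by accumulating differences
    return row

row = [100, 102, 101, 105, 105, 106]   # highly correlated neighbouring pixels
diffs = to_differences(row)
print(diffs)  # [100, 2, -1, 4, 0, 1]  <- small values, near zero
assert from_differences(diffs) == row  # representation is fully reversible
```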

• Inter-pixel Temporal Redundancy:


o Interpixel temporal redundancy is the statistical correlation between pixels
from successive frames in video sequence.
o Temporal redundancy is also called interframe redundancy.
o Temporal redundancy can be exploited using motion compensated
predictive (MCP) coding.
o Removing a large amount of redundancy leads to efficient video
compression.

• Psychovisual Redundancy:
o Psychovisual redundancy exists because human perception does not
involve quantitative analysis of every pixel.
o Its elimination removes real visual information, but this is acceptable
because that information is not essential for normal visual processing.
18) Explain CCITT Group 3 1-D
• CCITT stands for Consultative Committee for International Telegraph and
Telephone which is an organization that sets International Communication
Standards.
● CCITT Group 3 is the universal protocol for sending fax documents through a
phone line.
● CCITT Group 3 is a lossless compressed data format for bi-level images.
● It comes in two main varieties, 1-dimensional and 2-dimensional, although both
are used on 2-dimensional images.
● The 1-dimensional variety uses Modified Huffman (MH) compression.
● The 2-dimensional variety uses Modified READ (Relative Element Address
Designate, MR) compression.
● In this group each scan line is encoded independently.
● A scan line is encoded as a set of runs, each representing a number of white or black
pixels, with white and black runs alternating.
● Every run is encoded using a different number of bits, which can be uniquely
identified when decoded.
o Frequently occurring lengths of runs will be encoded efficiently with shorter
codes.
o Infrequent occurring lengths runs cause the size to increase due to longer codes.
● Group 3 One-Dimensional coding encodes run lengths using a predefined Huffman
code.
● Algorithm:
o Accept the input.
o Scan the input for 0’s and 1’s.
o Divide each run-length for encoding purpose.
o Encode from make-over and terminating code table.
o The output is the compressed file.
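The run extraction and make-up/terminating split described above can be sketched as follows. This is a simplified illustration only: the actual Group 3 Huffman code tables are standardized and not reproduced here, and the function names are illustrative.

```python
def run_lengths(scan_line):
    """Split a bi-level scan line into alternating (color, length) runs.
    Group 3 assumes the first run is white (0); a line starting with
    black is encoded with a leading zero-length white run."""
    runs = []
    color = 0  # 0 = white, 1 = black
    i = 0
    while i < len(scan_line):
        j = i
        while j < len(scan_line) and scan_line[j] == color:
            j += 1
        runs.append((color, j - i))
        color ^= 1  # runs alternate white/black
        i = j
    return runs

def split_run(length):
    """Runs longer than 63 pixels are sent as a make-up code
    (a multiple of 64) followed by a terminating code (0-63)."""
    makeup = (length // 64) * 64
    terminating = length % 64
    return (makeup, terminating) if makeup else (None, terminating)

# A line of 2 white, 3 black, 1 white pixel:
print(run_lengths([0, 0, 1, 1, 1, 0]))  # [(0, 2), (1, 3), (0, 1)]
```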
Advantages:
It is simple to implement.
It is standard for document imaging application.

Disadvantages:
It does not provide any error protection.
It does only horizontal run-length coding.

19) Explain CCITT Group 4-2D


• Commonly used for software-based document imaging and facsimile systems.
• It is lossless and used for black and white images.
• Group 3 achieves 10 to 20% compression whereas Group 4 achieves even better
compression up to 40%.
20) Explain Digital Image Representation and its types and examples?
• A digital image is an image or picture represented digitally, i.e., as groups of
bits (0 or 1) organized into picture elements called pixels.
• The digital image itself is really a data structure within the computer, containing a
number or code for each pixel or picture element in the image. This code determines
the color of that pixel.
• The common ways that a digital image is created are via a digital camera, a scanner, a
3D rendering program, or a paint or drawing package.
21) Explain JPEG compression and its steps
• JPEG Compression is the name given to an algorithm developed by the Joint
Photographic Experts Group whose purpose is to minimize the file size of
photographic image files.
• While JPEG compression can help you greatly reduce the size of an image file, it can
also compromise the quality of an image as it is lossy.

• STEPS OF JPEG COMPRESSION –


CHP – 3
22) Explain audio/sound and its characteristics?
• Audio is one of the important components of multimedia.
• It is music, speech, or any other sound.
• Sound is produced by vibration leading to pressure variations in air surrounding it.
• This alternation of high and low pressure is propagated through air in a wave-like
motion.
• When the waves reach our ears, we hear sound.

Characteristics of Sound waves –


— Frequency is defined as number of vibrations, oscillations or cycles occurring
per unit time.
— Amplitude is the maximum change in the value of pressure during oscillation of
the wave.
— Phase of a wave is how far through its cycle of oscillation it has progressed.
— Period is the time required by a wave to pass a single point of reference.
— Wavelength is the distance between one peak of a sound wave and the next peak.
— Velocity(speed) of a wave = Wavelength X Frequency
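As a worked example of the velocity formula (the wavelength value is approximate):

```python
# Velocity = Wavelength x Frequency
# e.g. concert pitch A4 travelling in air at room temperature:
frequency = 440.0   # Hz (cycles per second)
wavelength = 0.78   # metres (approximate for A4 in air)
velocity = wavelength * frequency
print(round(velocity))  # ~343 m/s, the speed of sound in air
```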

23) How computer represent sound?


• A computer measures the amplitude of a sound waveform at regular intervals to
produce a series of numbers, rather than representing it as a continuous waveform.
Each of these measurements is called a sample.
• The rate at which a waveform is sampled is called sampling rate.
• Highest frequency that a digitally sampled audio signal can represent is equal to half
the sampling rate.
• Quantization is the number of bits used in measuring the height of the waveform.
• In case of 8-bit quantization this value is 0 to 255, in 16 bit it is 0 to 65535.
• Higher the number of bits used to store the sampled value leads to more accurate
information with less noise.
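The sampling and quantization ideas above can be sketched in Python (illustrative only; `sample_and_quantize` is not a standard API):

```python
import math

def sample_and_quantize(freq_hz, sample_rate, bits, duration_s=0.01):
    """Sample a sine wave and quantize each sample to `bits` bits.
    With n bits the amplitude maps onto 2**n levels (0..255 for 8-bit)."""
    levels = 2 ** bits
    n_samples = int(sample_rate * duration_s)
    samples = []
    for n in range(n_samples):
        x = math.sin(2 * math.pi * freq_hz * n / sample_rate)  # in [-1, 1]
        q = int((x + 1) / 2 * (levels - 1))  # scale to 0..levels-1
        samples.append(q)
    return samples

# Nyquist: the highest representable frequency is sample_rate / 2,
# so a 44.1 kHz recording can capture frequencies up to 22.05 kHz.
eight_bit = sample_and_quantize(440, 44100, 8)
assert all(0 <= s <= 255 for s in eight_bit)
```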
24) WAVE FILE FORMAT

• It is supported by all computers running Windows, and by all the most popular
web browsers .
• Sounds stored in the WAVE format have the extension .wav.
• WAV file can hold both compressed and uncompressed audio.
• The header of a WAV file is 44 bytes and has the following format:
o Chunk ID: It holds the letters "RIFF" in ASCII form.
o Chunk Size: This is the size of the entire file in bytes.
o WAVE file contains two types of chunks: Format chunk and Data chunk
o “fmt” Subchunk: It contains information about how waveform is stored, how
it could be played, what compression techniques are used.
o Data SubChunk: “data” this field indicates the size of the sound information
and contains the actual sound data.
o NumChannels: It shows, 1= mono sound, 2 = stereo sound.
o Sample Rate: It contains number of samples per second.
o AudioFormat: This field describes the type of compression format used.
o Byte Rate: This field indicates how many bytes of wave data must be sent per
second in order to play the wave file.
o Bits per sample: This field describes the number of bits used to define each sample.
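A minimal sketch of packing and parsing the 44-byte canonical header described above. It assumes an uncompressed PCM file with a single "fmt " and "data" chunk; real WAV files may carry additional chunks, so production code should walk the chunk list instead.

```python
import struct

def parse_wav_header(header):
    """Unpack the 44-byte canonical WAV header.
    '<' = little-endian, as the RIFF format requires."""
    fields = struct.unpack('<4sI4s4sIHHIIHH4sI', header[:44])
    return {
        'chunk_id':        fields[0],   # b'RIFF'
        'chunk_size':      fields[1],   # size of the rest of the file
        'format':          fields[2],   # b'WAVE'
        'audio_format':    fields[5],   # 1 = uncompressed PCM
        'num_channels':    fields[6],   # 1 = mono, 2 = stereo
        'sample_rate':     fields[7],   # samples per second
        'byte_rate':       fields[8],   # bytes to play per second
        'bits_per_sample': fields[10],
        'data_size':       fields[12],  # size of the sound data
    }

# Build a header for 1 s of 16-bit mono audio at 44.1 kHz, then parse it back.
hdr = struct.pack('<4sI4s4sIHHIIHH4sI',
                  b'RIFF', 36 + 88200, b'WAVE',
                  b'fmt ', 16, 1, 1, 44100, 88200, 2, 16,
                  b'data', 88200)
info = parse_wav_header(hdr)
assert info['sample_rate'] == 44100 and info['num_channels'] == 1
```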

25) MIDI FILE FORMAT


Musical Instrument Digital Interface (MIDI) is a protocol that enables electronic
musical instruments, computers and other electronic equipment to communicate and
synchronize with each other.

MIDI Message:
It consists of –
A. Channel Messages:
MIDI carries 16 channels of information, and a channel message affects each
channel independently.

There are two types of channel messages:


1. Channel Voice Message –
o A device will respond to voice messages sent on the channel it is
tuned to.
o It will ignore voice messages on all other channels.

2. Channel Mode Message –


o These messages determine how an instrument will respond to MIDI
messages.
o It is used to set device on MONO mode, POLY mode or OMNI mode.

B. System Message:
There are three types of system messages:
1. System real-time message –
o These messages are related to synchronization.
o These messages are used for setting values for real time parameters of
a system such as start or stop.

2. System common message –


o These messages are common to whole system.
o It includes selecting a song, setting song position or sending a tune.
3. System exclusive message –
o These messages contains exclusive data.
o It includes identification, model number and other information.
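A Channel Voice message such as Note On can be sketched as three bytes: a status byte (high nibble = message type, low nibble = channel) followed by two 7-bit data bytes. `note_on` is an illustrative helper, not a library API.

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On channel voice message.
    Status byte: high nibble 0x9 (Note On), low nibble = channel 0-15.
    Data bytes (note number, velocity) must each fit in 7 bits."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) at moderate velocity on channel 0:
msg = note_on(0, 60, 64)
assert msg == bytes([0x90, 60, 64])
```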

26) PCM
A signal is pulse code modulated to convert its analog information into binary
sequence.

PCM consists of:


1. Low Pass Filter –
It is used to eliminate high-frequency components present in the analog signal.

2. Sampler –
It is used to collect sample data at instantaneous values of message signals.
It is used to reconstruct original signal.

3. Quantizer –
Quantizing is the process of reducing the excessive bits and limiting the data.

4. Encoder –
It is used to digitize the analog signal.
It does sample and hold process.

5. Regenerative repeater –
It is used to compensate the lost signal and reconstruct it.
It is also used to increase its strength.

6. Decoder –
It is used to decode the pulse coded waveform to reproduce the original signal.

7. Reconstruction filter –
It is used to reconstruct and get the original signal back.
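The quantizer/encoder and decoder stages above can be sketched as a round trip (a minimal sketch: the low-pass filter, sampler and repeater stages are assumed to have already produced `samples`):

```python
def pcm_encode(samples, bits=8):
    """Quantize samples in [-1, 1] to unsigned integer codes
    (the quantizer + encoder stages of the PCM chain)."""
    levels = 2 ** bits
    return [min(levels - 1, int((x + 1) / 2 * levels)) for x in samples]

def pcm_decode(codes, bits=8):
    """Decoder + reconstruction: map each code back to the
    mid-point amplitude of its quantization interval."""
    levels = 2 ** bits
    return [(c + 0.5) / levels * 2 - 1 for c in codes]

codes = pcm_encode([-1.0, 0.0, 0.999])
restored = pcm_decode(codes)
# Reconstruction error is bounded by half a quantization step:
assert all(abs(a - b) < 1 / 2**7 for a, b in zip([-1.0, 0.0, 0.999], restored))
```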
27) ADPCM
• Adaptive Differential Pulse Code Modulation (ADPCM) is widely used.
• It is a lossy coding scheme.
• Instead of quantizing the sound signal directly, it quantizes the difference
between the sound signal and a prediction of the sound signal.
• If the prediction is accurate then the difference will have a lower variance
than real sound.
• At the decoder the quantized difference signal is added to the predicted
signal to reconstruct the original sound signal.
• This techniques achieves 40-80% compression.
• To process redundant information and to have better output, it is better to take
predicted value, assumed from its previous output.
• ADPCM or DPCM differs from PCM because it quantizes the difference
between the predicted value and the actual value.
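A minimal DPCM sketch of the idea above, using "previous reconstructed sample" as the predictor and a fixed quantization step (real ADPCM adapts the step size to the signal; the helpers are illustrative):

```python
def dpcm_encode(samples, step=0.05):
    """Quantize the difference between each sample and the prediction.
    The predictor here is simply the last reconstructed value."""
    codes, predicted = [], 0.0
    for x in samples:
        diff = x - predicted
        q = round(diff / step)   # quantized difference code
        codes.append(q)
        predicted += q * step    # track the decoder-side reconstruction
    return codes

def dpcm_decode(codes, step=0.05):
    """Add each quantized difference back onto the predicted signal."""
    out, predicted = [], 0.0
    for q in codes:
        predicted += q * step
        out.append(predicted)
    return out

signal = [0.0, 0.1, 0.22, 0.3]
restored = dpcm_decode(dpcm_encode(signal))
assert all(abs(a - b) <= 0.025 for a, b in zip(signal, restored))
```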

28) DM
• In Delta Modulation sampling rate is much higher.
• It has smaller step size.
• It takes over sampled input to make full use of signal correlation.
• The quantization design is simple.
• The input sampling rate is much higher than the Nyquist rate.
• The quality is moderate.
• The design of modulator and demodulator is simple.
• The bit rate can be decided by user.
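A 1-bit delta modulator can be sketched as a staircase that steps up or down by a fixed amount per sample (an illustrative sketch of the principle above):

```python
def delta_modulate(samples, step=0.1):
    """1-bit delta modulation: each output bit says whether the
    staircase approximation steps up (1) or down (0) by `step`."""
    bits, approx = [], 0.0
    for x in samples:
        if x >= approx:
            bits.append(1)
            approx += step
        else:
            bits.append(0)
            approx -= step
    return bits

def delta_demodulate(bits, step=0.1):
    """Rebuild the staircase approximation from the bitstream."""
    out, approx = [], 0.0
    for b in bits:
        approx += step if b else -step
        out.append(approx)
    return out

bits = delta_modulate([0.05, 0.15, 0.25, 0.2])
assert bits == [1, 1, 1, 0]
```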
CHP – 4

29) Types of Video Signals?


• Component Video –
o Component video signal is a video signal that has been split by two or more
components.
o It is transmitted or stored as three separate signals.
o It does not transmit RGB directly but uses a colorless component termed luma.
o The luminance (Y) is accompanied by two color-difference signals.
o It is used in professional video production and it provides best quality.

• Composite Video –
o Composite video signals are analog signals that combine luminance and
chrominance.
o This gives a single analog signal that can be transmitted.
o It uses three source signals: YUV, where Y carries the brightness of the
picture and U and V carry color information.

• S – Video –
o S – video is a method of separating video signal into different components for
transmission.
o S-video cables carry four or more wires wrapped together.
o The S-video connector has four pins: one for the chroma signal, one for luma
and two ground wires.

30) Explain MPEG Video/ MPEG/MPEG -1 compression

• MPEG format is the most popular format on the internet.


• It is cross-platform and supported by all the most popular web browsers.
• Videos stored in the MPEG format have the extension .mpg or .mpeg.

• MPEG is a method for video compression, which involves the compression of digital
images and sound, as well as synchronization of the two.

There currently are several MPEG standards:

• MPEG-1
• MPEG-2
• MPEG-3
• MPEG-4
• A motion picture is a rapid flow of a set of frames, where each frame is an image.
• Compressing video, then, means spatially compressing each frame and temporally
compressing a set of frames.
• Temporal Compression:
o In temporal compression, redundant frames are removed.
o To temporally compress data, the MPEG method first divides frames into
three categories:
o I-frames, P-frames, and B-frames.

I-frames:
o I-frame is an independent frame that is not related to any other frame.
o They are present at regular intervals.
o An I-frame must appear periodically to handle some sudden change in the frame
that the previous and following frames cannot show.
P-frames:
o P-frame is related to the preceding I-frame or P-frame.
o In other words, each P-frame contains only the changes from the preceding frame.
o The changes, however, cannot cover a big segment.
B-frames:
o B-frame is related to the preceding and following I-frame or P-frame.
o In other words, each B-frame is relative to the past and the future.
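The P-frame idea, storing only changes from the preceding frame, can be sketched on a flat list of pixel values (a deliberate simplification: real MPEG operates on motion-compensated 16 x 16 macroblocks, not individual pixels):

```python
def p_frame(previous, current):
    """Record only the pixels that differ from the preceding frame,
    as a {position: new_value} map."""
    return {i: v for i, (p, v) in enumerate(zip(previous, current)) if p != v}

def reconstruct(previous, changes):
    """Apply the recorded changes on top of the preceding frame."""
    frame = list(previous)
    for i, v in changes.items():
        frame[i] = v
    return frame

prev = [10, 10, 10, 10]
curr = [10, 20, 10, 30]
diff = p_frame(prev, curr)
assert diff == {1: 20, 3: 30}       # only 2 of 4 pixels are stored
assert reconstruct(prev, diff) == curr
```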

• Spatial Compression:
o The spatial compression of each frame is done with JPEG. Each frame is a picture
that can be independently compressed.
o The compression data takes advantage of redundancy within each block.
o Decoding is done using MPEG system codes which are put into the data.
o This compression gives good quality compression similar to images from storage
media.
o The quality is dependent on type of picture and level of redundancy.
o The quality is also dependent on how well the sequence has been coded.
o This compression allows true flexibility, retaining the format and ensuring the
compatibility in data stream.

31) Explain H.261


• It was designed for data rates which are multiples of 64 Kbit/s.
• It is called p * 64 (where p = 1 to 30).
• H.261 transfers video streams using the RTP protocol.
• H.261 supports motion compensation.
• In motion compensation, a search area is constructed in the previous frame to
determine the best matching block.
• H.261 supports two image resolutions – QCIF and CIF.
• The video multiplexer structures the compressed data into a hierarchical bitstream.
• The hierarchy has four layers:
I. Picture Layer – Corresponds to one frame.
II. Group of blocks – Corresponds to 1/3 of QCIF or 1/12 of CIF.
III. Macroblocks – Corresponds to 16 x 16 pixels of luminance and two spatially
corresponding 8 x 8 chrominance components.
IV. Blocks – Corresponds to 8 x 8 pixels.

• H.261 Encoder:
o Pictures are coded as luminance and two color difference components.
o (Y,Cb,Cr) where Cb and Cr matrices are half the size of Y matrix

• Prediction:
H.261 has two types of coding –
o INTRA coding where block of 8 x 8 pixels each are encoded and sent
directly to block transformation.
o INTER coding frames are encoded with respect to another reference
frame.
o A prediction error is calculated between 16 x 16 pixel region.
o Prediction errors of the transmitted block are sent to block transformation.

Quantization:
o The purpose of this step is to achieve further compression by representing the
DCT coefficients with no greater precision than required to achieve the
required quality.
o The number of quantizers is 1 for INTRA DC coefficients and 31 for all
others.

Entropy Encoding :
o Entropy encoding involves extra compression.
o It is done by assigning shorter code words to frequent events and longer code
words to less frequent events.
o Huffman coding is used to implement this step.
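The principle above, shorter code words for frequent events, can be sketched with a minimal Huffman code builder (illustrative only; H.261 uses fixed standardized tables rather than building codes at run time):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code: frequent symbols get shorter code words."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): '0'}
    # Heap entries: (frequency, tiebreak id, {symbol: code-so-far}).
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, uid, merged))
        uid += 1
    return heap[0][2]

codes = huffman_code('aaaabbc')
# 'a' (most frequent) gets a shorter code word than 'c' (least frequent):
assert len(codes['a']) < len(codes['c'])
```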

CHP – 5

32) Explain Quality of Service?


ANSWER FROM TECHMAX

33) Explain Authoring system. Why it is needed? Different design issues faced
• Authoring systems can also be defined as process of creating multimedia
application.
• Multimedia authoring tools provide the framework for organizing and editing the
elements of a multimedia project.
• Authoring software provides an integrated environment for combining the content
and functions of a project.
• Design Issues for Authoring System:
o Display Resolution
o Data Format
o Compression Algorithm
o Network Interface
o Storage Formats
• Needs:
o It provides lots of graphics, interactions and other tools education software
needed.
• Types of authoring systems:
• Dedicated authoring system
o Dedicated authoring systems are designed for a single user.
o In the case of dedicated authoring systems, users need not be experts in
multimedia or professional artists.
o Dedicated authoring systems are extremely simple since they provide drag
and drop concept.
o Authoring is done on objects captured by video camera, image scanner or
objects stored in multimedia library.
o It does not provide effective presentation due to single stream.
o Examples Paint, MS PowerPoint etc.
• Telephone Authoring Systems
o There are applications where the phone is linked into multimedia e-mail
applications.
o The telephone can be used as a reading device by providing full text-to-speech
synthesis capability.
o The phone can be used for voice command input for setting up and
managing voice mail messages.
o Digitized voice clips are captured via phone and embedded in e-mail
messages.
• Programmable authoring system
o Structured authoring tools were not able to allow the users to express
automatic function.
o But, programmable authoring system has improved in providing powerful
functions based on image processing and analysis.
o E.g. Visual Basic, Net beans, Visual Studio
• Timeline Based Authoring
o It has an ability to develop an application like movie.
o It can create complex animations and transitions.
o All the tracks can be played simultaneously carrying different data.
o Jumps to any location in a sequence

Explain RTP, RTSP, RTCP, RSVP


RTP - Real-time Transport Protocol
• Real-time transport protocol (RTP) is an IP-based protocol providing support for the
transport of real-time data such as video and audio streams.
• The services provided by RTP include time reconstruction, loss detection, security
and content identification.
• RTP is primarily designed for multicast of real-time data, but it can be also used in
unicast.
• It can be used for one-way transport such as video-on-demand.
• There are two transport layer protocols in the Internet protocol suite, TCP and UDP.
• UDP can be used, as it provides a connectionless unreliable datagram service.
• To use UDP, some functionality has to be added.

RTSP - Real-Time Streaming Protocol


• It provides an extensible framework to enable controlled delivery of real-time
data, such as audio and video.
• Sources of data can include both live data feeds, such as live audio and video,
and stored content, such as pre-recorded events.
• It is designed to work with established protocols such as RTP, HTTP, and others to
provide a complete solution for streaming media over the Internet.
• It supports multicast as well as unicast.

RTCP---Real-Time Control Protocol


• RTCP is the control protocol that works in conjunction with RTP.
• It provides support for real time conferencing for large groups within an internet,
including source identification and support for gateways.
• It provides information regarding the quality of data distribution.
• It is used to keep track of the participants in an RTP session.

RSVP - Resource Reservation Protocol


• RSVP is a network control protocol that allows Internet applications to obtain special
qualities of-service for their data flows.
• RSVP is used to set up reservations for network resources.
• When an application in a host requests a specific quality of service, it uses RSVP
to deliver its request.
• RSVP is responsible for the negotiation of connection parameters with the routers.
• If the reservation is set up, RSVP is also responsible for maintaining the
reservation state in the routers.

CHP – 6

34) Explain requirements of multimedia systems?


35) Explain Digital Signature
36) Explain Steganography and its types

Steganography –

• Steganography is a technique of hiding messages, files and images within other
messages, files or images.
• The goal is to fool the attacker and not even allow the attacker to detect that
there is another message hidden in the original message.
• The main aim of steganography is to achieve high security by encoding the
sensitive data in a cover medium like images, audio or video and sending it over an
insecure channel such as the Internet.
• A small change in steganographic data will completely change the meaning of the
message.

TYPES –

1. Image Steganography –
• Image Steganography is used to hide secret message inside the image.
• The most widely used technique is to hide inside the LSB of the cover
image.
• Because this method uses bits of each pixel, it is necessary to use
lossless compression, otherwise the hidden information will be lost.
• When using 24 bit color image, a bit of each of red, green and blue
color component can be used and can store up to 3 bits.
• Files like BMP, PNG, JPEG etc are used to hide data.
• In Spatial Domain Embedding steganography algorithm is based on
modifying LSB layer of image.
• This technique uses the fact that LSB in an image could be random
noise and making any changes to them won’t affect original image.
• The message bits are permuted, which distributes them evenly; thus, on
average, only half of the LSBs will be modified.
• Different techniques vary in their approach of hiding the information.
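The LSB embedding described above can be sketched on a flat list of 8-bit pixel values (illustrative helpers; as noted, this only survives lossless formats, and practical tools additionally permute the bit positions):

```python
def embed_lsb(pixels, message_bits):
    """Hide message bits in the least significant bit of each pixel value.
    Assumes a lossless format (e.g. BMP/PNG); lossy compression such as
    baseline JPEG would destroy the hidden bits."""
    assert len(message_bits) <= len(pixels)
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it
    return stego

def extract_lsb(pixels, n_bits):
    """Read the hidden bits back out of the LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 13, 77, 254, 9]
secret = [1, 0, 1, 1]
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, 4) == secret
# LSB changes alter each pixel value by at most 1, so the cover
# image looks unchanged to the eye:
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```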

2. Audio Steganography –
• Audio Steganography is based on modification of LSB.
• The main objective is to hide the maximum amount of information while
preventing audio degradation.
• The best format is the WAVE format, since reading the bits is easier and
distortion is less.
• In phase encoding techniques, it encodes the message bits as phase
shifts in the phase spectrum, achieving an inaudible encoding.
• Phase coding relies on the fact that phase components of sounds are
not perceptible to human ears.
• The basic Spread Spectrum method attempts to spread secret
information across the audio signal’s frequency spectrum.
37) Explain User Interface Design
38)Explain Distributed Multimedia System



A Distributed Multimedia System comprises several components:
1. Media Server –
o A media server is a device that stores and shares media.
o It is responsible for hardware as well as software aspects of successful
storing and retrieval as well as sharing of media files and data.
o A media server can be any device having network access and adequate
bandwidth for sharing and saving of media.
o A server, PC, or any other device with such storage capability can be used
as a media server.
o Commercial media servers act as aggregators of information: video,
audio, photos and books, and other types of media can all be accessed via
a network.
2. Proxy Server –
o A client initially connects with a proxy server to send a request, such as
accessing a file or opening a Web page.
o The proxy server filters and evaluates each IP address and request.
o The verified request is forwarded to the relevant server, which requests
the service on behalf of the client.
3. Meta-database –
It is a multimedia indexing framework.
It provides cost-based query optimization for range and k-nearest-neighbour
searches.
4. Media Player –
Supports different media streaming in different qualities.

Components of Distributed Multimedia System:


1. User Terminal –
User terminal consists of computer with special hardware such as microphone,
stereo speaker and HD graphics display.
Many user terminals still resemble traditional computers.
The terminal should include compression and decompression hardware.

2. Network and Communication –


Multimedia traffic requires transferring a high volume of data at high speed, which
can be done using high-speed networks.
Communication between multimedia objects must be end to end, as delay may result
in loss of data.

3. Multimedia Server –
Traditional computers work well as traditional servers.
But traditional computers cannot handle the role of multimedia servers, because
multimedia servers must deliver data at high speed.


39)Explain Information Based/Intelligent Multimedia System



• Input Mode: Information Based/Intelligent Multimedia System accepts inputs from
three input devices – speech input device, keyboard and mouse.

• Input Coordinator: It takes input from the input modes and fuses them into a
single compound stream.

• Multimedia Parser Interpreter: It accepts the compound input stream and
interprets it for execution.
• Multimedia Output Planner: It produces multimedia output stream with
components targeted from different devices.

• Output Coordinator: It produces multimedia output in coordinated manner.

• Output Mode: Information Based/Intelligent Multimedia System gives output from


three output devices – HD graphics display, monochrome display and speech output
device.

• Lexicon: It consists of words, graphics figures etc.

• Grammar: It is used to check how tokens and signals of lexicon can be combined.

• Discourse Model: It ensures continuity and relevance.

• User Model: It is used to show the stage of current task on which the user is
engaged.

• Domain Knowledge Base: It is used to show different types of plans that the user
would be engaged in constructing.

40)Explain Conference Multimedia System


ANSWER FROM TECHMAX
