Properties:
1. Independency –
• A multimedia system uses different media.
• These media should be independent of one another.
• Different levels of independence are required.
• Computer-recorded video, for example, carries the information of both audio and
video.
• Here audio and video are coupled through the common medium of the tape, so they
are not independent.
3. Communication System –
• A variety of multimedia applications running on different platforms will
need to communicate with each other.
• A communication system is required for real-time delivery over a
distributed network.
• It is also required for inter-application exchange of data.
4. Interactivity –
• If the user has the ability to control what elements are delivered and when,
the system is called an interactive system.
• Traditional technologies delivered audio, graphics and text, but in a predefined
and inflexible way.
• Thus, while designing a multimedia system, we have to decide the level of
interactivity we wish to provide to the user of the system.
2) What is the Global Structure of Multimedia?
1. Device Domain –
• It contains all multimedia elements such as graphics & images, video &
animation, audio etc.
• It also consists of compression, storage and network for these elements.
• It also specifies how these elements are digitized and processed.
• Network allows exchange of data.
• Compression schemes used are CCITT group 3 & 4, JPEG, MPEG etc.
2. System Domain –
• It contains three services.
• The operating system is used for interaction between computer hardware and
software, and also with the user.
• Database system is used to store data at different levels.
• Communication system is used to allow communication between different
multimedia services.
• Computer technology specifies the interface between device domain and
system domain.
3. Application Domain –
• Document consists of a set of structural information that can be in
different form of media.
• Abstraction is the process of hiding the details and showing only essential
features of a particular concept.
• The user interface should be responsive to user needs.
• This domain also includes multimedia tools and applications, such as special
editors and other document-processing tools.
4. Cross Domain –
• Synchronization is used to maintain temporal relations between media objects
in a multimedia system.
• Hence, it must be considered at all levels.
3) What are Multimedia Objects and/or Elements?
1. Text –
• A broad term for content that uses words to express something.
• Text is the most basic element of multimedia.
• A good choice of words could help convey the intended message to the users.
• Used in contents, menus, navigational buttons etc.
2. Images –
I) Document images:
• Scanning and storing copies of business documents.
• Avoids paperwork and makes producing several copies easier.
II) Facsimile:
• Transmission of document images over a telephone line.
• Typical density is 100 to 200 dpi for true representation and legibility.
3. Graphics –
• A graphic, or graphical image, is a digital representation of non-text
information such as a drawing, chart, or figure.
• 2D or 3D figure or illustration.
• Used in multimedia to show more clearly what a particular piece of information is
about.
4. Holographic Images –
• A unique photographic image made without the use of a lens.
• When illuminated by coherent light such as a laser beam, it organizes the light into
a 3-D representation of the original object.
5. Animation –
• The illusion of motion created by the consecutive display of images of static
elements.
• In multimedia, animation is used to further enhance / enrich the experience of the
user to further understand the information conveyed to them.
6. Audio –
• Audio is sound within the acoustic range available to humans.
• An audio frequency (AF) is an electrical alternating current within the range 20 to
20,000 hertz that can be used to produce acoustic sound.
• In multimedia, audio could come in the form of speech, sound effects and also
music.
• Voice Commands and Voice Synthesis:
o For hands-free operation of a computer program.
o Computer operations can be directed by spoken commands.
• Audio messages
o Annotated voice mail which use audio or voice messages as attachments.
7. Video –
• It is the technology of capturing, recording, processing, transmitting, and
reconstructing moving pictures.
• Video is more of a photo-realistic image sequence / live recording, in
comparison to animation.
• Video messages
o Can be used as an attachment with the mail.
o Full motion stored and live video
o Live video presentations, video conferencing, and 3D video techniques to
create the concept of virtual reality.
• The Interactive Multimedia Association has a task group to define the architectural
framework for multimedia to provide the ability to exchange and use information.
• It defines the necessary interchange formats across multivendor solutions.
• The architectural approach taken by IMA is based on defining interfaces to a
multimedia interface bus.
• This bus would be the interface between systems and multimedia sources.
6) What is Network Architecture for Multimedia Systems?
• Multimedia systems have special networking requirements,
• because not just small volumes of data but large volumes of images, voice and video
messages are being transmitted.
• To meet these high-speed multimedia needs, the network technologies below are used.
• Simple document imaging and text can be handled even by Ethernet, but integrated
multimedia applications need faster networks.
• ATM (622 Mbps): ATM is an acronym for Asynchronous Transfer Mode. Its
topology was originally designed for broadband applications in public networks.
Asynchronous Transfer Mode technology (ATM) simplifies transfers across LANs
and WANs.
• FDDI (100 Mbps): FDDI is an acronym of Fiber Distributed Data Interface. This
FDDI network is an excellent candidate that interconnects different types of LANs.
FDDI presents a potential for standardization for high speed networks. FDDI allows
large-distance networking.
• ATM + SONET (10 Gbps) – Transfers files at speeds of up to 10 Gbps across a LAN.
7) What are Types or classification of medium?
• Medium is the means of distribution and presentation of information.
• Medium can be classified into:
1. Perception medium –
• Perception medium is the medium through which information is perceived and
processed by the user.
• Eg: Sound as perceived by human ear; graphics as perceived by human eye.
2. Representation medium –
• Representation medium refers to the way information is constructed and
represented.
• E.g. Text is encoded using ASCII codes or UNICODE, audio is encoded using
PCM, Image is encoded using JPEG.
3. Presentation medium –
• Presentation medium is the way in which information is presented to the user.
This medium engages all the human senses.
• Eg: Keyboard, Mouse, Microphone, Screen, Speaker, Printer.
4. Storage medium –
• Storage medium is the medium where you store and from where you retrieve data.
• Both primary and secondary storages are considered.
• Eg: Hard drive, RAM, CD-ROM, DVDs etc.
5. Transmission medium –
• Transmission medium describes the physical system used to carry communication
signal from one system to another.
• Eg: guided transmission media such as metallic cables and optical fibers; unguided
media such as satellites and radio signals.
o Pen driver: A device driver that collects all pen information and builds pen
packets for the recognition context manager.
Scanner –
A Scanner is used for converting a paper document to a digital image.
It acts as a camera eye and takes a photograph of the document.
Types of Scanners:
1) Based on Size:
o They normally come in A (8.5 inch × 11 inch) and B (11 inch × 17 inch)
sizes.
o They are also called A- and B-size scanners (portrait and landscape mode).
o Large Form factor scanners are available for capturing large drawings,
engineering and architectural drawings.
Handheld Scanners –
o Scans part of a page where width of scan area is about 3 to 6 inches.
o Convenient, portable and low cost.
o Not preferred for high volume, professional quality scanning.
o Light is reflected from document as user moves the scanner across the
document.
o CCDs absorb the reflected light and generate a voltage, which is converted into a
digital value by an A/D converter and stored.
Dye-Sublimation Printer –
A dye-sublimation printer is a digital printing technology for full-color artwork that
works with polyester and polymer-coated surfaces.
This process is commonly used for signs and banners, as well as novelty items such as
cell phone covers, coffee mugs etc.
It has a thermal printing head with thousands of tiny heating elements.
A plastic film transfer roll is mounted on two rollers and a drum.
The transfer roll contains panels of cyan, yellow, magenta and black dyes.
Heating is carried out at 256 different temperature levels.
It is suitable for multimedia applications because its print quality is very high.
Graphic artists and advertising agencies use these printers for photographic-quality
prints.
The most common process lays only one color at a time, as the dye has each color
on a separate panel.
During the printing cycle, the rollers will move the medium.
Tiny heating elements on the head change temperature rapidly, laying different
amounts of dye.
After the printer finishes printing the medium in one color, it shifts the ribbon on to
the next color panel to prepare for the next cycle.
The entire process is repeated four or five times in total.
Plotter –
A plotter is a computer vector graphic printer that gives a hard copy of the output
based on instructions from the system.
A plotter is a special output device used to produce hard copies of large graphs and
designs on paper, such as construction maps, engineering drawings,
architectural plans and business charts.
The plotter is either a peripheral component that you add to your computer system
or a standalone device with its own internal processor.
Plotters work in combination with CAD software on the computer, to output line
drawings for plans, blueprints and other technical drawings.
Due to the mechanical actions involved, old plotters were slow compared to other
types of printers such as inkjet and laser printers.
Only a small number of pen plotters are still in use commercially.
DVD ("digital video disc", later "digital versatile disc") is a digital optical disc
storage format developed by Philips, Sony, Toshiba and Panasonic.
DVD can store any kind of digital data and is widely used for software and other
computer files, as well as for video watched using DVD players.
DVDs offer higher storage capacity than CDs while having the same dimensions.
o Prerecorded DVDs are mass-produced, and data is physically stamped onto the
DVD. Such discs are a form of DVD-ROM, because data can only be read and
not written or erased.
o Blank recordable DVD discs (DVD-R) can be recorded once using a DVD
recorder.
o Rewritable DVDs (DVD-RW) can be recorded and erased many times.
DVD has enough capacity (1.36 to 15.9 GB) and speed to provide high-quality,
full-motion video and sound, and a low-cost delivery mechanism.
Games for Windows were also distributed on DVD.
Blu-ray Discs originally stored 25 GB on each layer; newer discs can hold 100 GB.
o DVDs can store about 7 times more data than CDs.
o The data on a DVD is more tightly packed than that on a CD.
o A Blu-ray disc, in turn, can store about 5 times more data than a DVD.
Jukebox –
An optical jukebox is a robotic data-storage device whose discs can be
automatically loaded and unloaded without any outside human assistance.
These discs are normal data-storage discs such as CDs, DVDs or Blu-ray discs, and
offer terabytes (TB) to petabytes (PB) of secondary storage.
Optical Jukeboxes are also known as optical disk libraries, robotic drives and
autochangers.
An optical jukebox can have up to 2000 slots for discs and its performance depends
on how quickly, efficiently and effectively it crosses those slots.
Rate of transfer depends on a number of factors including sorting algorithms and
placement of discs in the slots.
This kind of storage device is primarily used at commercial and industrial
scale for backups.
Jukeboxes are used in high-capacity archive storage environments such as
imaging, medical and video archives.
Here, little-used or unused files are moved from fast magnetic storage to optical
jukebox devices in a process called migration.
If the files are needed again, they are migrated back to magnetic disk.
Today one of the most important uses for jukeboxes is to store data that will last up
to 100 years.
The data is usually written on Write Once Read Many (WORM) type discs so it
cannot be erased or changed.
CHP – 2
13) Explain RTF?
The Rich Text Format (often abbreviated RTF) is a document file format with
published specification developed by Microsoft.
Most word processors are able to read and write some versions of RTF.
Rich text is far more expressive than plain text.
It supports text formatting, such as bold, italics, and underlining, as well as
different fonts, font sizes, and colored text. Rich text documents can also include
page formatting options, such as custom page margins, line spacing, and tab
widths.
Most word processors, such as Microsoft Word, Lotus Word Pro, and AppleWorks,
create rich text documents. However, if you save a document in a program's native
format, it may only open with the program that created it. For example, Lotus Word
Pro will not be able to open an AppleWorks text document, even though both
programs are text editors. This is because each program uses its own method of
formatting and creating text files.
Now most word processors allow you to save rich text documents in the
generic Rich Text Format. This file format, which uses the .RTF extension, keeps
most of the text formatting. Because it is a standard format, it can be opened
by just about any word processing program and even most basic text editors.
Key format information carried across in RTF document files:
o Character set: All the characters that are supported including ANSI, IBM PC,
Macintosh.
o Font table: Lists all the fonts used in the document. (Mapping at receiving
application)
o Color table: lists the colors used in the document for highlighting the text.
(Mapping at receiving application)
o Document Formatting: Provides true document margins and paragraph indents.
Printed page looks very similar to original page in receiving application.
o Section Formatting: Section breaks and page breaks, separation of groups of
paragraphs, specifies space above and below the section.
o Paragraph Formatting: Defines control characters for specifying paragraph
justification.
o General formatting: Footnotes, annotation, bookmarks and pictures.
o Character formatting: Bold, Italic, underline, subscript, superscript.
o Special characters: hyphens, backslashes, etc.
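The control-word syntax behind the format information above can be illustrated with a tiny generated document. This is a minimal sketch only; the `make_rtf` helper and its output are illustrative, not part of any RTF tooling.

```python
# Build a minimal RTF document by hand: header, character set, font
# table, then bold character formatting around the body text.

def make_rtf(text: str) -> str:
    """Return a minimal RTF document with the given text in bold."""
    return (
        r"{\rtf1\ansi"                        # RTF version 1, ANSI character set
        r"{\fonttbl{\f0 Times New Roman;}}"   # font table with one font
        r"\f0\fs24 "                          # select font 0, 12 pt (\fs is in half-points)
        r"\b " + text + r"\b0"                # bold on, body text, bold off
        "}"
    )

print(make_rtf("Hello, RTF"))
```

Saving this string to a `.rtf` file should open in most word processors, since it only uses the basic control words described above.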
14) Explain TIFF?
• TIFF (Tag Image File Format) is a common format for exchanging raster graphics
(bitmap) images between application programs. A TIFF file can be identified with a
".tiff" or ".tif" file name suffix.
• Used for data storage and interchange. The general nature of TIFF allows it to be
used in any operating environment, and it is found on most platforms requiring image
data storage.
• Supporting Applications are most paint, imaging, and desktop publishing programs
• Platforms MS-DOS, Macintosh, UNIX and other O.S.
• The TIFF format is perhaps the most versatile and diverse bitmap format in
existence. Its extensible nature and support for numerous data compression schemes
allow developers to customize the TIFF format to fit any unusual data storage
needs.
• The TIFF specification was originally released by Aldus Corporation as a standard
method of storing black-and-white images.
• TIFF 4.0 was released in 1987 and added support for uncompressed RGB
color images.
• TIFF 5.0 was released in 1988; it was the first revision to add the capability of
storing palette color images and support for the LZW compression algorithm.
• TIFF 6.0 was released in 1992 and added support for CMYK and YCbCr color
images and JPEG compression method.
• TIFF's extensible nature, which allows storage of multiple bitmap images of any
pixel depth, makes it ideal for most image storage needs.
• TIFF documents have a maximum file size of 4 GB. Photoshop CS and later versions
supports large documents saved in TIFF format. However, most other applications
and older versions of Photoshop do not support documents with file sizes greater than
4 GB.
Three possible physical arrangements of data in a TIFF file:
● Each IFD is a road map showing where all the data associated with a bitmap can be
found. The data is read directly from within the IFD data structure, or retrieved
from an offset location whose value is stored in the IFD.
● Each IFD contains one or more data structures called tags. Each tag is a 12-byte
record that contains a specific piece of information about the bitmapped data.
● The offset values used in a TIFF file are found in three locations.
o The last four bytes of the header gives the first offset value to the position of
the first IFD.
o The last four bytes of each IFD gives offset value to the next IFD.
o The last four bytes of each tag gives an offset value to the data it represents, or
the data itself.
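The header-to-first-IFD chain described above can be sketched in a few lines with Python's standard `struct` module. The hand-built `sample` bytes and the `first_ifd_offset` helper are illustrative assumptions, not part of any TIFF library.

```python
import struct

def first_ifd_offset(header: bytes) -> int:
    """Read a TIFF header: byte order, magic number 42, first IFD offset."""
    order = header[:2]
    if order == b"II":            # Intel (little-endian) byte order
        endian = "<"
    elif order == b"MM":          # Motorola (big-endian) byte order
        endian = ">"
    else:
        raise ValueError("not a TIFF header")
    magic, offset = struct.unpack(endian + "HI", header[2:8])
    if magic != 42:
        raise ValueError("bad TIFF magic number")
    return offset                 # position of the first IFD in the file

# Hand-made little-endian header whose first IFD starts right after the header
sample = b"II" + struct.pack("<HI", 42, 8)
print(first_ifd_offset(sample))   # -> 8
```

A real reader would then seek to this offset, read the tag count and the 12-byte tags, and follow the trailing 4-byte offset to the next IFD.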
• Lossless compression –
o It enables the restoration of a file to its original state, without the loss of a
single bit of data, when the file is uncompressed.
o Lossless compression is the typical approach with program executables, as
well as text and spreadsheet files, where the loss of words or numbers would
change the information.
Lossless compression schemes:
● Run-length encoding
● Huffman coding
● CCITT Group 3 1D
● CCITT Group 3 2 D
● CCITT Group 4
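To see why Huffman coding is both lossless and efficient, here is a minimal sketch (not the fixed code tables any CCITT standard uses): more frequent symbols receive shorter codes, and no code is a prefix of another, so the bitstream decodes unambiguously.

```python
import heapq
from collections import Counter

# Minimal Huffman coder: repeatedly merge the two least frequent
# subtrees, prefixing '0' to codes in one and '1' in the other.

def huffman_codes(data: str) -> dict:
    freq = Counter(data)
    # heap entries: (frequency, unique tiebreaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
print(codes)   # 'a' is most frequent, so it gets the shortest code
```

Compression comes from the code-length assignment; decompression simply walks the same tree, so the original data is recovered bit-for-bit.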
• Lossy compression –
o Lossy compression permanently eliminates bits of data that are
redundant, unimportant or invisible.
o Lossy compression is useful with graphics, images, audio, and video, where
the removal of some data bits has little or no noticeable effect on the
representation of the content.
Lossy compression schemes:
● Joint Photographic Experts Group (JPEG)
● Moving Picture Expert Group (MPEG)
● Intel DVI
● CCITT H.261
● Fractals
Lossless vs Lossy:
● Lossless compression is reversible; the original data can be reconstructed
exactly. Lossy compression is irreversible; the original data cannot be
reconstructed exactly.
● Lossless compression exploits statistical redundancy. Lossy compression
exploits human perception of data.
● Lossless compression is used for text and images. Lossy compression is used for
audio, video and images.
• Coding Redundancy:
o Coding redundancy is associated with the representation of information.
o The information is represented in the form of codes.
o If the grey levels of an image are coded in a way that uses more code
symbols than absolutely necessary to represent each grey level, the
resulting image is said to contain coding redundancy.
• Psychovisual Redundancy:
o Psychovisual redundancies exist because human perception does not
involve quantitative analysis of every pixel.
o Eliminating this redundancy removes real visual information; it is acceptable
only because that information is not essential for normal visual processing.
18) Explain CCITT Group 3 1-D
• CCITT stands for Consultative Committee for International Telegraph and
Telephone which is an organization that sets International Communication
Standards.
● CCITT Group 3 is the universal protocol for sending fax documents through a
phone line.
● CCITT Group 3 is a lossless compressed data format for bi-level images.
● It comes in two main varieties, 1-dimensional and 2-dimensional, although
both are used on 2-dimensional images.
● The 1-dimensional variety uses Modified Huffman (MH) compression.
● The 2-dimensional variety uses Modified Relative Element Address Designate
(M-READ) compression.
● In this group each scan line is encoded independently.
● A scan line is encoded as a set of runs, each representing a number of white or black
pixels, with white and black runs alternating.
● Every run is encoded using a different number of bits, which can be uniquely
identified when decoded.
o Frequently occurring run lengths will be encoded efficiently with shorter
codes.
o Infrequently occurring run lengths increase the size due to longer codes.
● Group 3 One-Dimensional coding encodes run lengths using a predefined Huffman
code.
● Algorithm:
o Accept the input.
o Scan the input for 0’s and 1’s.
o Divide the input into run lengths for encoding purposes.
o Encode each run length using the make-up and terminating code tables.
o The output is the compressed file.
Advantages:
It is simple to implement.
It is standard for document imaging application.
Disadvantage:
It does not provide any error protection.
It does only horizontal (one-dimensional) run-length coding.
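The run-length stage of the algorithm above can be sketched as follows. The real standard then maps each run length to make-up and terminating Huffman codes; this sketch stops at the run lengths, and starts each line with a white run (possibly of length 0), as Group 3 does.

```python
# Turn one bi-level scan line into alternating white/black run lengths.

def run_lengths(line):
    """line: sequence of pixels, 0 = white, 1 = black."""
    runs = []
    current = 0              # G3 lines start with a white run by convention
    count = 0
    for pixel in line:
        if pixel == current:
            count += 1
        else:
            runs.append(count)   # close the current run
            current = pixel
            count = 1
    runs.append(count)           # close the final run
    return runs

# 3 white, 4 black, 2 white
print(run_lengths([0, 0, 0, 1, 1, 1, 1, 0, 0]))   # -> [3, 4, 2]
```

A line that begins with black pixels correctly yields a leading white run of length 0, which is how the alternating-run convention is preserved.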
WAV –
• The WAV format is supported by all computers running Windows, and by all the
most popular web browsers.
• Sounds stored in the WAVE format have the extension .wav.
• WAV file can hold both compressed and uncompressed audio.
• The header of a WAV file is 44 bytes and has the following format:
o Chunk ID: It holds the letters "RIFF" in ASCII form.
o Chunk Size: This is the size of the entire file in bytes.
o WAVE file contains two types of chunks: Format chunk and Data chunk
o “fmt” Subchunk: It contains information about how the waveform is stored, how
it should be played, and what compression techniques are used.
o Data SubChunk: “data” this field indicates the size of the sound information
and contains the actual sound data.
o NumChannels: It shows, 1= mono sound, 2 = stereo sound.
o Sample Rate: It contains number of samples per second.
o AudioFormat: This field describes the type of compression format used.
o Byte Rate: This field indicates how many bytes of wave data must be sent per
second in order to play the wave file.
o Bits per sample: This field describes the number of bits used to define each sample.
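As a sketch of the header fields above, the snippet below builds and parses a canonical 44-byte WAV header with Python's standard `struct` module. The sample values (8 kHz, 16-bit mono) are chosen purely for illustration.

```python
import struct

def parse_wav_header(h: bytes) -> dict:
    """Parse the canonical 44-byte RIFF/WAVE header."""
    assert h[0:4] == b"RIFF" and h[8:12] == b"WAVE"
    # fmt chunk: size, audio format, channels, sample rate,
    # byte rate, block align, bits per sample
    fields = struct.unpack("<IHHIIHH", h[16:36])
    return {
        "chunk_size":      struct.unpack("<I", h[4:8])[0],
        "audio_format":    fields[1],   # 1 = uncompressed PCM
        "num_channels":    fields[2],   # 1 = mono, 2 = stereo
        "sample_rate":     fields[3],
        "byte_rate":       fields[4],
        "bits_per_sample": fields[6],
        "data_size":       struct.unpack("<I", h[40:44])[0],
    }

data_len = 8000 * 2    # 1 second of 8 kHz, 16-bit mono audio
header = (b"RIFF" + struct.pack("<I", 36 + data_len) + b"WAVE"
          + b"fmt " + struct.pack("<IHHIIHH", 16, 1, 1, 8000, 16000, 2, 16)
          + b"data" + struct.pack("<I", data_len))
print(parse_wav_header(header))
```

Note the byte rate (16000) is sample rate × channels × bytes per sample, matching the playback-rate description above.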
MIDI Message:
It consists of –
A. Channel Messages:
MIDI carries 16 channels of information, and a channel message affects each
channel independently.
B. System Message:
There are three types of system messages:
1. System real-time message –
o These messages are related to synchronization.
o These messages are used for setting values for real time parameters of
a system such as start or stop.
26) PCM
A signal is pulse code modulated to convert its analog information into a binary
sequence.
2. Sampler –
The sampler collects sample data at instantaneous values of the message signal.
The sampling rate must be high enough that the original signal can be reconstructed.
3. Quantizer –
Quantizing is the process of reducing the excessive bits and limiting the data.
4. Encoder –
It is used to digitize the analog signal.
It performs the sample-and-hold process.
5. Regenerative repeater –
It is used to compensate for signal loss and reconstruct the signal.
It is also used to increase the signal's strength.
6. Decoder –
It is used to decode the pulse coded waveform to reproduce the original signal.
7. Reconstruction filter –
It is used to reconstruct and get the original signal back.
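The sample-and-quantize steps above can be sketched as follows; the 8 kHz sampling rate, 1 kHz tone and 8-bit depth are illustrative choices, not something mandated by PCM itself.

```python
import math

def pcm_encode(duration_s=0.001, fs=8000, f=1000, bits=8):
    """Sample a sine tone and quantize each sample to unsigned PCM codes."""
    levels = 2 ** bits
    samples = []
    for n in range(int(duration_s * fs)):
        x = math.sin(2 * math.pi * f * n / fs)    # sampler: x in [-1, 1]
        q = round((x + 1) / 2 * (levels - 1))     # quantizer: map to 0..levels-1
        samples.append(q)                         # encoder output (binary codes)
    return samples

print(pcm_encode())   # 8 codes, one per sample of the 1 ms tone
```

The decoder would reverse the mapping (code back to amplitude) and a reconstruction filter would smooth the staircase output, as in the block diagram above.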
27) ADPCM
• Adaptive Differential Pulse Code Modulation (ADPCM) is widely used.
• It is a lossy coding scheme.
• Instead of quantizing the sound signal directly, it quantizes the difference between
the sound signal and a prediction of the sound signal.
• If the prediction is accurate, then the difference will have a lower variance
than the real sound signal.
• At the decoder, the quantized difference signal is added to the predicted
signal to reconstruct the original sound signal.
• This technique achieves 40–80% compression.
• To remove redundant information and obtain better output, the predicted value
is derived from the previous output.
• ADPCM (and DPCM) differs from PCM in that it quantizes the difference
between the predicted value and the actual value.
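A much-simplified DPCM sketch of this idea (real ADPCM also adapts the quantizer step size, which is omitted here): quantize the difference from a prediction, and let the decoder accumulate the quantized differences. The fixed step size of 4 is an illustrative assumption.

```python
STEP = 4   # fixed quantizer step (illustrative; ADPCM would adapt this)

def dpcm_encode(samples):
    codes, predicted = [], 0
    for s in samples:
        diff = s - predicted               # difference from the prediction
        code = round(diff / STEP)          # quantize the difference
        codes.append(code)
        predicted += code * STEP           # track the decoder's state exactly
    return codes

def dpcm_decode(codes):
    out, predicted = [], 0
    for code in codes:
        predicted += code * STEP           # add quantized difference back
        out.append(predicted)
    return out

signal = [0, 3, 8, 14, 18, 20, 19, 15]
print(dpcm_decode(dpcm_encode(signal)))
```

Because the encoder predicts from its own quantized output (the same state the decoder sees), the reconstruction error never exceeds half the quantizer step.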
28) DM
• In Delta Modulation sampling rate is much higher.
• It has smaller step size.
• It takes over sampled input to make full use of signal correlation.
• The quantization design is simple.
• The input is sampled at a rate much higher than the Nyquist rate.
• The quality is moderate.
• The design of modulator and demodulator is simple.
• The bit rate can be decided by user.
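Delta modulation can be sketched as a one-bit version of the same idea: each oversampled input is compared with a running approximation, and a single bit says whether to step up or down by a fixed small delta. The step size of 1 is an illustrative choice.

```python
DELTA = 1   # fixed step size (illustrative)

def dm_encode(samples):
    """One bit per sample: 1 = step up by DELTA, 0 = step down."""
    bits, approx = [], 0
    for s in samples:
        if s >= approx:
            bits.append(1)
            approx += DELTA    # staircase approximation moves up
        else:
            bits.append(0)
            approx -= DELTA    # staircase approximation moves down
    return bits

print(dm_encode([0, 1, 2, 3, 3, 2, 1, 0]))   # -> [1, 1, 1, 1, 0, 0, 0, 0]
```

The oversampling mentioned above matters here: the staircase can only move by one delta per sample, so the input must change slowly relative to the sampling rate or the approximation falls behind (slope overload).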
CHP – 4
• Composite Video –
o Composite video signals are analog signals that combine luminance and
chrominance,
o giving a single analog signal that can be transmitted.
o It uses three source signals:
o YUV, where Y carries the brightness of the picture and U and V carry the color
information.
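The Y, U and V components can be derived from RGB with the standard BT.601 weights; the scale factors below are the classic analog-signal ones.

```python
def rgb_to_yuv(r, g, b):
    """Convert an RGB triple to YUV using BT.601 weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (brightness)
    u = 0.492 * (b - y)                     # blue colour difference
    v = 0.877 * (r - y)                     # red colour difference
    return y, u, v

y, u, v = rgb_to_yuv(255, 255, 255)   # white: full brightness, no colour
print(round(y), round(u), round(v))   # -> 255 0 0
```

Note that the Y weights sum to 1, so any grey pixel (R = G = B) has zero U and V, which is why a monochrome receiver can use Y alone.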
• S-Video –
o S-video is a method of separating a video signal into different components for
transmission.
o S-video cables carry four or more wires wrapped together.
o An S-video connector has four pins: one for the chroma signal, one for luma, and
two ground wires.
• MPEG is a method for video compression, which involves the compression of digital
images and sound, as well as synchronization of the two.
• MPEG-1
• MPEG-2
• MPEG-3
• MPEG-4
• A motion picture is a rapid flow of a set of frames, where each frame is an image.
• Compressing video, then, means spatially compressing each frame and temporally
compressing a set of frames.
• Temporal Compression:
o In temporal compression, redundant frames are removed.
o To temporally compress data, the MPEG method first divides frames into
three categories:
o I-frames, P-frames, and B-frames.
I-frames:
o I-frame is an independent frame that is not related to any other frame.
o They are present at regular intervals.
o An I-frame must appear periodically to handle some sudden change in the frame
that the previous and following frames cannot show
P-frames:
o P-frame is related to the preceding I-frame or P-frame.
o In other words, each P-frame contains only the changes from the preceding frame.
o The changes, however, cannot cover a big segment.
B-frames:
o B-frame is related to the preceding and following I-frame or P-frame.
o In other words, each B-frame is relative to the past and the future.
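A toy sketch of the frame categories above: keep the first frame as an I-frame, and store each later frame as pixel differences from its predecessor (standing in for P-frames). Real MPEG predicts per 16 × 16 block with motion compensation, and B-frames also look forward; plain differencing just shows the core idea. Frames here are flat lists of pixel values.

```python
def encode(frames):
    """First frame as an I-frame, the rest as P-frame differences."""
    encoded = [("I", frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        diff = [c - p for c, p in zip(cur, prev)]   # only the changes
        encoded.append(("P", diff))
    return encoded

def decode(encoded):
    frames = []
    for kind, payload in encoded:
        if kind == "I":
            frames.append(payload)                  # independent frame
        else:
            prev = frames[-1]                       # P: add differences back
            frames.append([p + d for p, d in zip(prev, payload)])
    return frames

frames = [[10, 10, 10], [10, 12, 10], [10, 12, 11]]   # three tiny "frames"
print(decode(encode(frames)))
```

In a mostly static scene the difference lists are nearly all zeros, which is exactly the redundancy that temporal compression removes.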
• Spatial Compression:
o The spatial compression of each frame is done with JPEG. Each frame is a picture
that can be independently compressed.
o The compression data takes advantage of redundancy within each block.
o Decoding is done using MPEG system codes which are put into the data.
o This compression gives good quality compression similar to images from storage
media.
o The quality is dependent on type of picture and level of redundancy.
o The quality is also dependent on how well the sequence has been coded.
o This compression allows true flexibility, retaining the format and ensuring the
compatibility in data stream.
• H.261 Encoder:
o Pictures are coded as luminance and two color difference components.
o (Y,Cb,Cr) where Cb and Cr matrices are half the size of Y matrix
• Prediction:
H.261 has two types of coding –
o INTRA coding where block of 8 x 8 pixels each are encoded and sent
directly to block transformation.
o INTER coding frames are encoded with respect to another reference
frame.
o A prediction error is calculated between 16 × 16 pixel regions.
o Prediction errors of the transmitted blocks are sent to block transformation.
Quantization:
o The purpose of this step is to achieve further compression by representing the
DCT coefficients with no greater precision than required to achieve the
required quality.
o The number of quantizers is 1 for the INTRA DC coefficient and 31 for all
others.
Entropy Encoding :
o Entropy encoding involves extra compression.
o It is done by assigning shorter code words to frequent events and longer code
words to less frequent events.
o Huffman coding is used to implement this step.
CHP – 5
33) Explain Authoring Systems. Why are they needed? What design issues are faced?
• An authoring system can be defined as the process of creating a multimedia
application.
• Multimedia authoring tools provide the framework for organizing and editing the
elements of a multimedia project.
• Authoring software provides an integrated environment for combining the content
and functions of a project.
• Design Issues for Authoring System:
o Display Resolution
o Data Format
o Compression Algorithm
o Network Interface
o Storage Formats
• Needs:
o It provides the graphics, interaction and other tools that educational software
needs.
• Types of authoring systems:
• Dedicated authoring system
o Dedicated authoring systems are designed for a single user.
o In the case of a dedicated authoring system, users need not be experts in
multimedia or professional artists.
o Dedicated authoring systems are extremely simple since they provide drag
and drop concept.
o Authoring is done on objects captured by video camera, image scanner or
objects stored in multimedia library.
o It does not provide effective presentation, since it handles only a single stream.
o Examples Paint, MS PowerPoint etc.
• Telephone Authoring Systems
o There is an application where the phone is linked into a multimedia e-mail
application.
o The telephone can be used as a reading device by providing full text-to-speech
synthesis capability.
o The phone can be used for voice command input for setting up and
managing voice mail messages.
o Digitized voice clips are captured via phone and embedded in e-mail
messages.
• Programmable authoring system
o Structured authoring tools did not allow users to express automated
functions.
o But, programmable authoring system has improved in providing powerful
functions based on image processing and analysis.
o E.g. Visual Basic, Net beans, Visual Studio
• Timeline Based Authoring
o It has an ability to develop an application like movie.
o It can create complex animations and transitions.
o All the tracks can be played simultaneously carrying different data.
o Jumps to any location in a sequence
CHP – 6
Steganography –
• The main aim of steganography is to achieve high security by encoding sensitive
data in a cover medium such as an image, audio or video file, and sending it over an
insecure channel such as the Internet.
• The goal is to fool the attacker: the attacker should not even be able to detect
that there is a hidden message.
TYPES –
1. Image Steganography –
• Image Steganography is used to hide secret message inside the image.
• The most widely used technique is to hide inside the LSB of the cover
image.
• Because this method uses bits of each pixel, it is necessary to use
lossless compression; otherwise the hidden information will be lost.
• When using a 24-bit color image, one bit of each of the red, green and blue
color components can be used, storing up to 3 bits per pixel.
• Files like BMP, PNG, JPEG etc are used to hide data.
• In Spatial Domain Embedding steganography algorithm is based on
modifying LSB layer of image.
• This technique uses the fact that LSB in an image could be random
noise and making any changes to them won’t affect original image.
• The messages are permuted which results in distributing the bits
evenly, thus on average only half of the LSB will be modified.
• Different techniques vary in their approach of hiding the information.
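The LSB technique above can be sketched on a flat list of pixel byte values (one message bit per value). Real tools operate on decoded image data from lossless formats; the `embed`/`extract` helpers here are illustrative.

```python
def embed(pixels, message_bits):
    """Hide one message bit in the LSB of each successive pixel value."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit      # clear the LSB, then set it to the bit
    return out

def extract(pixels, n_bits):
    """Read the hidden bits back out of the LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 13, 57, 94, 121, 34, 78, 250]
secret = [1, 0, 1, 1]
stego = embed(cover, secret)
print(extract(stego, 4))
```

Each pixel value changes by at most 1, which is why the modification is visually indistinguishable from sensor noise, as the text explains.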
2. Audio Steganography –
• Audio Steganography is based on modification of LSB.
• The main objective is to hide maximum amount of information and
prevent audio degradation.
• The best format is the WAVE format, since reading the bits is easier and
distortion is less.
• In phase encoding techniques, it encodes the message bits as phase
shifts in the phase spectrum, achieving an inaudible encoding.
• Phase coding relies on the fact that phase components of sounds are
not perceptible to human ears.
• The basic Spread Spectrum method attempts to spread secret
information across the audio signal’s frequency spectrum.
37) Explain User Interface Design
38)Explain Distributed Multimedia System
A Distributed Multimedia System comprises several components:
1. Media Server –
o A media server is a device that stores and shares media.
o It is responsible for the hardware and software aspects of successfully
storing, retrieving and sharing media files and data.
o A media server can be any device having network access and adequate
bandwidth for sharing and saving of media.
o A server, PC, or any other device with such storage capability can be used
as a media server.
o Commercial media servers act as aggregators of information: video,
audio, photos and books, and other types of media can all be accessed via
a network.
2. Proxy Server –
o A client initially connects with a proxy server to send a request, such as
accessing a file or opening a Web page.
o The proxy server filters and evaluates each IP address and request.
o The verified request is forwarded to the relevant server, which requests
the service on behalf of the client.
3. Meta-database –
It is a multimedia indexing framework.
It provides cost-based query optimization for range and k-nearest-neighbour
searches.
4. Media Player –
Supports different media streaming in different qualities.
5. Multimedia Server –
Traditional computers work well with traditional servers.
But traditional computers cannot handle multimedia servers, because multimedia
servers must operate at high speed.
39)Explain Information Based/Intelligent Multimedia System
• Input Mode: Information Based/Intelligent Multimedia System accepts inputs from
three input devices – speech input device, keyboard and mouse.
• Input Coordinator: It takes input from input mode and fuses into single compound
stream.
• Multimedia Parser Interpreter: It accepts the compound stream input and produces
execution of this compound stream.
• Multimedia Output Planner: It produces multimedia output stream with
components targeted from different devices.
• Grammar: It is used to check how tokens and signals of lexicon can be combined.
• User Model: It is used to show the stage of current task on which the user is
engaged.
• Domain Knowledge Base: It is used to show the different types of plans that the
user may be engaged in constructing.