
Multimedia Notes
Q-1 Applications of multimedia
1. Medicine: In medicine, doctors can be trained by watching a virtual surgery, or they can simulate how the human body is affected by diseases spread by viruses and bacteria and then develop techniques to prevent them.
2. Industry: In the industrial sector, multimedia is used to help present information to shareholders, superiors and co-workers. Multimedia is also helpful for providing employee training, and for advertising and selling products all over the world via virtually unlimited web-based technology.
3. Engineering: Software engineers may use multimedia in computer simulations for anything from entertainment to training, such as military or industrial training. Multimedia for software interfaces is often developed as a collaboration between creative professionals and software engineers.
4. Education: Multimedia has brought revolutionary changes to the field of education. It has made the system of education more attractive and effective than in any period in the past. Nowadays, multimedia presents educational content such as information, still and moving pictures, sounds and pronunciations of words clearly and attractively to students. It is now possible to gain knowledge even while staying at home, through various multimedia software.
5. Communication: Today, multimedia has brought audio and video conferencing. This computer- and electronics-based communication creates dynamism in business, social, political, economic and international activities.
6. Multimedia in public places: In hotels, railway stations, shopping malls, museums and grocery stores, multimedia is available at stand-alone terminals or kiosks to provide information and help. Such installations reduce demand on traditional information booths and personnel, add value, and can work around the clock, even in the middle of the night when live help is off duty.
7. Entertainment: The invention of multimedia has opened a new horizon in the field of entertainment. Nowadays, the various media of entertainment such as radio, television, VCR and VCD can be enjoyed through multimedia programs. Besides this, playing games, drawing pictures on the computer and online chatting are contributions of multimedia.
8. Research: Researchers require different kinds of information to conduct their research and development work. The Internet plays a vital role in obtaining the necessary information. By searching the Internet, researchers can gather the required information and carry out successful research.
9. Marketing: In order to increase sales, large companies develop CD-ROM software with information such as prices, uses and terms of sale for their products and services, and distribute it to prospective customers. Buyers can then purchase the products they need, knowing the prices, terms of sale, etc. from the CD, even while staying at home or at the office.
Q-2 Explain the various features of multimedia authoring tools
Editing Features- Most authoring environments and packages provide the capability to create, edit and transform the different kinds of media that they support. For example, Macromedia Flash comes bundled with its own sound editor, eliminating the need to buy dedicated software to edit sound data. Authoring systems thus include editing tools to create, edit and convert multimedia components such as animation and video clips.
Organizing Features- The process of organizing, designing and producing multimedia involves navigation diagrams, storyboarding and flowcharting. Some authoring tools provide a visual flowcharting or overview facility to show your project's structure at a macro level. Navigation diagrams help to organize a project. Many web-authoring programs such as Dreamweaver include tools that create helpful diagrams and links among the pages of a website.
Visual programming with icons or objects- This is the simplest and easiest authoring process. For example, if you want to play a sound, you just click on its icon.
Programming with a scripting language- Authoring software offers the ability to write scripts to build features that are not supported by the software itself. With scripts you can perform computational tasks, sense and respond to user input, create characters, animate, launch other applications and control external multimedia devices.
Document Development tools- Some authoring tools offer direct importing of pre-formatted text, indexing facilities, complex text-search mechanisms and hypertext linking tools.
Interactivity Features- Interactivity empowers the end users to control the content and flow
of information of the project. Authoring tools may provide one or more levels of
interactivity.
Simple branching- Offers the ability to go to another section of the multimedia production.
Conditional branching- Supports a go-to based on the result of an IF-THEN decision or event.
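These two levels of branching can be sketched in a few lines of Python (a toy, hypothetical authoring script; the page names and functions are illustrative and not taken from any real authoring tool):

```python
# Hypothetical sections of a multimedia project, keyed by name.
pages = {"menu": "Main menu", "quiz": "Quiz section", "help": "Help section"}

def goto(page):
    """Simple branching: jump unconditionally to another section."""
    print(f"Now showing: {pages[page]}")
    return page

def branch_on_score(score, threshold=50):
    """Conditional branching: an IF-THEN decision picks the target."""
    if score >= threshold:
        return goto("quiz")   # user passed, continue to the quiz
    else:
        return goto("help")   # user struggled, show the help section

current = branch_on_score(score=40)  # jumps to the "help" section
```

A real authoring tool's scripting language (e.g. Lingo or OpenScript) expresses the same idea with its own syntax, but the control flow is identical.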
Playback Features- When you are developing a multimedia project, you will continually be assembling elements and testing to see how the assembly looks and performs. Therefore, an authoring system should have a playback facility.
Supporting CD-ROM or Laser Disc Sources- This software allows overall control of CD drives and laser discs to integrate audio, video and computer files. CD-ROM drives, video and laserdisc sources are directly controlled by authoring programs.
Supporting Video for Windows- Video stored on the hard disk is often the right medium for a project. Authoring software should therefore be able to support multimedia elements such as Video for Windows.
Hypertext- Hypertext capabilities can be used to link graphics, some animation and other text. The Windows help system is an example of hypertext. Such systems are very useful when a large amount of textual information is to be presented or referenced.

Cross-Platform Capability- Some authoring programs are available on several platforms and
provide tools for transforming and converting files and programs from one to the other.
Run-time Player for Distribution- Run-time software is often included in authoring software to ease the distribution of your final product by packaging playback software with the content. Some advanced authoring programs provide special packaging and run-time distribution for use with media such as CD-ROM.
Internet Playability- Because the Web has become a significant delivery medium for multimedia, authoring systems typically provide a means to convert their output so that it can be delivered within the context of HTML or DHTML.

Q-3 Explain the types of authoring tools


1. Card- or page-based authoring tools
In these authoring systems, elements are organized as pages of a book or a stack of cards.
These tools are best used when the bulk of your content consists of elements that can be
viewed individually, like the pages of a book or cards in a card file. The authoring system lets
you link these pages or cards into organized sequences. You can jump, on command, to any
page you wish in the structured navigation pattern. It allows you to play sound elements
and launch animations and digital video.
One page may have a hyperlink to another page that comes at a much later stage; by clicking on it, you effectively skip the pages in between. Some examples of card- or page-based tools are:

• HyperCard (Mac)
• ToolBook (Windows)
• PowerPoint (Windows)
• SuperCard (Mac)

Advantages
Following are the advantages of card-based authoring tools.

• Easy to understand.
• One screen is equal to one card or one page.
• Easy to use, as these tools provide templates.
• Short development time.
Disadvantages
Following are the disadvantages of card-based authoring tools.

• Some run only on one platform.
• The tools are not as powerful as equivalent stand-alone programs.
2. Icon-based, event-driven tools

In these authoring systems, multimedia elements and interaction cues are organized as objects in a structural framework or process. Icon-based, event-driven tools simplify the organization of your project and typically display flow diagrams of activities along branching paths. In complicated structures, this charting is particularly useful during development.
Some examples of icon-based tools are:

• Authorware Professional (Mac/Windows)
• IconAuthor (Windows)
Advantages:
• It has a clear structure (appropriately designed flowcharts).
• Easy to edit and update the elements.


Disadvantages:
• The learning process is difficult.
• Very expensive.
3. Time-based tools
Time-based tools are best suited for a message with a beginning and an end, so that the message can be delivered within a stipulated time period. A few time-based tools facilitate navigation and interactive control. They provide branching, so that different loops can be formed for different multimedia applications and a time period can be set for each individual application. An example of such software is Adobe Director.
Advantages:

• These tools are good for creating animation.
• Branching and user-controlled interactivity.
Disadvantages:

• Steep learning curve for advanced features.
• Music and sound files embedded in Flash movies increase the file size and the download time.
• Very expensive.
Q-4 Steps for creating a multimedia presentation
1. Conceptual Analysis and Planning

The process of multimedia making begins with a conceptual ignition point. Conceptual analysis identifies an appropriate theme, the budget, and the availability of content on the selected theme. Additional criteria, such as copyright issues, are also considered in this phase.

2. Project design
Once the theme is finalized, objectives, goals and activities are drawn up for the multimedia project. General statements are termed goals, while the specific statements in the project are known as objectives. Activities are series of actions performed to implement an objective. These activities contribute to the project design phase.

3. Pre-production
Based on the planning and design, the project must now be developed. The following steps are involved in pre-production:

4. Budgeting
A budget for each phase (consultants, hardware, software, travel, communication and publishing) is estimated for every multimedia project.

5. Multimedia Production Team


The production of a high-end multimedia project requires a team effort. The team comprises members playing various roles, such as script writer, production manager, editor, graphics architect, multimedia architect and web master.

6. Hardware/Software Selection
All multimedia applications require appropriate tools to develop and play back the application. Hardware selection includes choosing a fast CPU, sufficient RAM, large monitors and enough disk space for storing the assets. Selection of suitable software and file formats depends on the funds available for the project being developed.
7. Defining the Content
Content is the "stuff" provided by the content specialist to the multimedia architect, who uses it to develop the application, preparing the narration, bullets, charts, tables, etc.

8. Preparing the structure


A detailed structure must have information about all the steps, along with the timeline of future actions. This structure defines the activities, the person responsible for each activity, and the start/end time for each activity.

9. Production
In a multimedia application, the production phase starts after the pre-production activities. This phase includes activities such as background music selection and sound recording. Text is incorporated using OCR software, pictures are shot with a digital camera, and video clips are shot, edited and compressed. A pilot project is ready by this time.

10. Testing
Complete testing of the pilot product is done before mass production to ensure that everything is in place, thereby avoiding failure after launch. If it is a web-based product, its functioning is tested with different browsers such as Internet Explorer, Chrome, Mozilla Firefox and Netscape Navigator. After the testing process is over, the suggested changes found valid are incorporated into the product.

11. Documentation
User documentation is a mandatory feature of all multimedia projects. The documentation contains all the valuable information, from the system requirements through to the completion of testing. Contact details, e-mail addresses and phone numbers are provided for technical support and for sending suggestions and comments.

12. Delivering the Multimedia Product


Multimedia applications are best delivered on CD/DVD or via a website. In reality, various challenges are faced while delivering over the Internet, such as bandwidth problems, the many plug-ins required to play audio and video, and long download times. A multimedia application is therefore delivered most effectively by integrating the two mediums, CD-ROM/DVD and the Internet.
Q-5 secondary storage in detail
A secondary storage device refers to any non-volatile storage device that is
internal or external to the computer. It can be any storage device beyond the
primary storage that enables permanent data storage. A secondary storage
device is also known as an auxiliary storage device, backup storage device, tier
2 storage, or external storage. These devices store virtually all programs and
applications on a computer, including the operating system, device drivers,
applications and general user data.

Characteristics of Secondary Storage Devices


Some characteristics of secondary memory, which distinguish it from primary memory, are:

o It is non-volatile, which means it retains data when power is switched off.


o It allows for the storage of data ranging from a few megabytes to petabytes.
o It is cheaper as compared to primary memory.
o Secondary storage devices like CDs and flash drives can transfer the data from
one device to another.

1. Fixed Storage

Fixed storage is an internal media device used by a computer system to store data. Usually, these are referred to as fixed disk drives or hard drives.

Fixed storage devices are not literally fixed in place: they can be removed from the system for repair work, maintenance or an upgrade. In general, though, this cannot be done without a proper toolkit to open up the computer system and provide physical access, which needs to be done by an engineer.

Technically, almost all data being processed on a computer system is stored on some built-in fixed storage device. We have the following types of fixed storage:

o Internal flash memory (rare)


o SSD (solid-state disk) units
o Hard disk drives (HDD)
2. Removable Storage

Removable storage is an external media device that is used by a computer system to store data. Usually, these are referred to as removable disk drives or external drives. Removable storage is any storage device that can be removed from a computer system while the system is running. Examples of external devices include CDs, DVDs, Blu-ray disc drives, diskettes and USB drives. Removable storage makes it easier for a user to transfer data from one computer system to another.

The main benefit of removable disks as a storage option is that they can provide the fast data transfer rates associated with storage area networks (SANs). We have the following types of removable storage:

o Optical discs (CDs, DVDs, Blu-ray discs)


o Memory cards
o Floppy disks
o Magnetic tapes
o Disk packs
o Paper storage (punched tapes, punched cards)
Q-6 DVD short note
DVD stands for Digital Versatile Disc; DVDs are also known as "Digital Video Discs". DVD is a digital optical disc storage format that allows a large amount of data, up to 17 GB, to be stored using digital technology, compared with the roughly 700 MB capacity of a compact disc (CD). DVD-Video became the dominant form of home video distribution in Japan soon after it first went on sale there in 1996. You must have a DVD disc drive or player to use DVD discs.
Physical formats of DVD:

• DVD-Video: a digital storage medium for motion pictures.
• DVD-Audio: an audio-only storage format, similar to CD audio.
• DVD-R: similar to CD-R; offers a write-once, read-many storage format.
• DVD-ROM: a high-capacity read-only storage medium.
• DVD-RAM: the first rewritable (erasable) DVD disc.
Characteristics

• DVDs are used to hold very large files, of several GB.
• DVDs are portable.
• DVDs have five to ten times the capacity of a CD.
• Increased capacity.
• Better interactivity.
• DVDs are used by software companies for distributing software programs and data.
Advantages

• Can store a large amount of data.
• Does not transmit viruses.
• Digital recording is reliable.
• Durable.
• Not susceptible to magnetic fields; resistant to heat.
• DVD players can read CDs.
Disadvantages

• Not fully supported by HDTV.
• Incompatibility between discs and players.
• There is no single standard for DVD.
• Copy protection.
• DVDs do not work in CD-ROM drives.
Q-7 magnetic media short note
Magnetic Storage Devices

(i) Floppy Disk: Also known as a floppy diskette, it is generally used on a personal computer to store data externally. A floppy disk is made up of a plastic cartridge secured within a protective case. Nowadays the floppy disk has been replaced by newer, more effective storage devices such as USB drives.

(ii) Hard Disk: A hard disk drive (HDD) is a storage device that stores and retrieves data using magnetic storage. It is a non-volatile storage device whose contents can be modified or deleted any number of times without any problem. Most computers and laptops have HDDs as their secondary storage device. An HDD is actually a set of stacked disks, much like phonograph records. On every hard disk, data is recorded electromagnetically in concentric circles, or tracks, and is read by a head rather like a phonograph arm (but fixed in position). The read-write speed of HDDs is not very fast, but it is decent. Capacities range from a few GB to several TB.

(iii) Magnetic Card: A card in which data is stored by modifying or rearranging the magnetism of tiny iron-based magnetic particles on the band of the card. It is also known as a swipe card. It is used as a passcode (to enter a house or hotel room), credit card, identity card, etc.

(iv) Tape Cassette: Also known as a music cassette, it is a rectangular flat container in which data is stored on an analog magnetic tape. It is generally used to store audio recordings.

(v) Super Disk: Also called LS-240 or LS-120, it was introduced by the Imation Corporation and was popular with OEM computers. It can store up to 240 MB of data.
Q-8 optical media short note
Optical Storage Devices

Optical storage devices are also secondary storage devices, of the removable kind. Following are some optical storage devices:

(i) CD: Known as a Compact Disc, it contains tracks and sectors on its surface to store data. It is made of polycarbonate plastic and is circular in shape. A CD can store up to 700 MB of data. It comes in two types:

CD-R: It stands for Compact Disc Recordable. In this type of CD, once the data is written it cannot be erased; it becomes read-only.

CD-RW: It stands for Compact Disc ReWritable. In this type of CD, you can write or erase data multiple times.

(ii) DVD: Known as a Digital Versatile Disc, DVDs are circular flat optical discs used to store data. They come in two sizes: 4.7 GB single-layer discs and 8.5 GB double-layer discs. DVDs look like CDs, but their storage capacity is much greater. They come in two types:

DVD-R: It stands for DVD Recordable. In this type of DVD, once the data is written it cannot be erased; it becomes read-only. It is generally used to distribute movies, etc.

DVD-RW: It stands for DVD ReWritable. In this type of DVD, you can write or erase data multiple times.

(iii) Blu-ray Disc: Similar to a CD or DVD, but with a storage capacity of up to 25 GB per layer. To play a Blu-ray disc you need a separate Blu-ray reader. Blu-ray technology reads the disc with a blue-violet laser whose shorter wavelength allows information to be stored at a greater density.
Q-9 DVD-Video and DVD-Audio
DVD-Audio (DVD-A) is a Digital Versatile Disc (DVD) format, developed by Panasonic, that is
specifically designed to hold audio data, and particularly, high-quality music. The DVD
Forum, consisting of 230 leading companies worldwide, released the final DVD-A
specification in March of 1999. The new DVD format is said to provide at least twice the
sound quality of audio CD on disks that can contain up to seven times as much information.
Various types of DVD-A-compatible DVD players are being manufactured, in addition to the
DVD-A players specifically developed for the format.

Almost all of the space on a DVD video disc is devoted to containing video data. As a
consequence, the space allotted to audio data, such as a Dolby Digital 5.1 soundtrack, is
severely limited. A lossy compression technique (so-called because some of the data is lost) is used to enable audio information to be stored in the available space, both on standard CDs and on DVD-Video disks. In addition to using lossless compression methods, DVD-A provides more complexity of sound by increasing the sampling rate and the frequency range beyond what is possible within the space limitations of CDs and DVD-Video. DVD-Audio is 24-bit, with a sampling rate of 96 kHz; in comparison, a DVD-Video soundtrack is 16-bit with a sampling rate of 48 kHz, and a standard audio CD is 16-bit with a sampling rate of 44.1 kHz.
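The quality gap behind these figures can be seen from the raw (uncompressed) PCM data rates they imply; a quick sketch, assuming two channels (stereo):

```python
def pcm_bitrate(bits_per_sample, sample_rate_hz, channels=2):
    """Raw PCM data rate in bits per second: bits x rate x channels."""
    return bits_per_sample * sample_rate_hz * channels

cd   = pcm_bitrate(16, 44_100)   # standard audio CD
dvdv = pcm_bitrate(16, 48_000)   # DVD-Video soundtrack
dvda = pcm_bitrate(24, 96_000)   # DVD-Audio

print(f"CD:        {cd / 1e6:.3f} Mbit/s")    # 1.411 Mbit/s
print(f"DVD-Video: {dvdv / 1e6:.3f} Mbit/s")  # 1.536 Mbit/s
print(f"DVD-Audio: {dvda / 1e6:.3f} Mbit/s")  # 4.608 Mbit/s
```

So a raw DVD-Audio stereo stream carries over three times the data of a CD stream, which is why the format also relies on (lossless) compression to fit on disc.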

DVD-Video is a consumer video format used to store digital video on DVD discs. DVD-Video
was the dominant consumer home video format in Asia, North America, Europe, and
Australia in the 2000s until it was supplanted by the high-definition Blu-ray Disc. Discs using
the DVD-Video specification require a DVD drive and an MPEG-2 decoder (e.g., a DVD
player, or a computer DVD drive with a software DVD player). Commercial DVD movies are encoded using a combination of MPEG-2-compressed video and audio of varying formats (often multi-channel formats). Typically, the data rate for DVD movies ranges from 3 to 9.5 Mbit/s, and the bit rate is usually adaptive. DVD-Video was first available in Japan on November 1, 1996 (with major releases beginning December 20, 1996).
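The 3 to 9.5 Mbit/s range above lets one estimate playback time per disc; a rough sketch, assuming a 4.7 GB single-layer disc (decimal gigabytes) and ignoring audio and filesystem overhead:

```python
def minutes_on_disc(capacity_gb, video_mbit_s):
    """Approximate playback minutes for a given capacity and video bitrate."""
    capacity_bits = capacity_gb * 1e9 * 8        # GB -> bits (decimal GB)
    seconds = capacity_bits / (video_mbit_s * 1e6)
    return seconds / 60

print(round(minutes_on_disc(4.7, 5.0)))  # ~125 minutes at an average 5 Mbit/s
print(round(minutes_on_disc(4.7, 9.5)))  # ~66 minutes at the 9.5 Mbit/s peak
```

This is why a typical feature film fits on a single-layer disc at average bitrates, while high-bitrate or longer titles need the 8.5 GB double layer.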
Q-10 Explain the use of text in multimedia
Text is a basic element of many multimedia titles. Wherever possible, this text should be
kept to a minimum unless the application includes a great deal of reference material.
Reading volumes of text on a computer screen is difficult and tiring. Moreover, it may not be
the best way to communicate an idea, concept, or even a fact.
Text can be used in many ways in multimedia:

o on a website
o in films, such as titles and credits
o as subtitles in a film or documentary, providing a translation
o in advertisements
o in text messaging
1 Use of Text in Websites
Text has benefits beyond simply drawing visitors' attention: it avoids spending time and energy on graphical elements that do not contribute to the understanding of the page.
To improve website speed-
When a website is built primarily of text, it loads much faster than one which uses:
o excessive images and graphics
o JavaScript (for menus, and various stat-tracking scripts such as Google Analytics)
o table-based layouts (which are about twice the file size of ones built in CSS)
o internal code (not placed in external CSS, JS, etc. files and linked to)
o sound and video on the page (especially without transcripts, which hurts accessibility; if you do use audio/video, do not auto-launch it, and provide a button to turn it on/off)
2 Use of Text in Films Such as Titles and Credits-
Most movies start with titles and end with credits. The text is sometimes over background video or pictures, and at other times over a plain coloured background. This has been traditional since the days of silent movies.
Decisions that you make when incorporating text in multimedia depend on a number of factors, including:
o the content of the information
o the amount of text needed
o the theme or look of the multimedia product
o the placement of the text (is it a heading, body text or a logo?)
o the format of the project (is it a video, website, blog, slideshow, etc.?)
3 Use of Text in Subtitles in a Film or Documentary

If you are starting the process of adding subtitles to your film, you will need to take into consideration font style, spacing, colour and size. Certain fonts lend themselves well to web design, others work better in print, and others are optimal for use against dynamic content such as moving images.

While web fonts such as Tahoma, Verdana and Georgia are great for use in web media, they
were designed to work well in static design environments where the background does not
change.
There are three fonts that are widely used for subtitles in films and documentaries. They
are:
o Univers 45
o Antique Olive
o Tiresias
These three fonts work well as subtitles over dynamic content and will allow you to
communicate most effectively with your audience.
4 Use of Text in Advertisements-
Event planners and party organizers:
Tell your event guests to text your keyword and their name to your short code. Now you
have a list of attendees that you can instantly reach with last minute changes, updates to
the schedule or great deals on merchandise.
Speakers, performers, bands, artists:
Let your followers know about your text program. Once they sign up, you can update them regarding shows, concerts, new releases, showings, guest appearances or anything else.
Professionals like plumbers and dentists:
Have your customers opt-in to your list and you have got an instant way to remind them of
their upcoming appointments or fill a hole in your schedule within minutes.
Any local business, such as a restaurant or bookstore, coffee shop, boutique or
convenience store:
You can place flyers and table tents around your business to show your current customers how to get great text coupons. Or add a one-liner in your other advertising to give outside potential customers even more incentive to walk into your business. Before you know it, you are attracting new customers and building repeat business with your current customers.
Q-11 Types of text in multimedia
Q-12 difference between hypertext and hypermedia
Hypertext
• It refers to a system of managing information in the form of plain text.
• It involves only text.
• It becomes a part of the link.
• It is the part of hypermedia.
• It allows the user to traverse through text in a non-linear fashion.
• It allows users to move from one document to another in a single click.
• The user can click on the hypertext or the ‘goto’ links.
• It helps the user move to the next document.
• It also helps the user move from one page of a document to the other
page.
• It doesn’t provide a great user experience to the user.
• Example: reading a blog on a website and clicking on 'goto' links to move to the next part.

Hypermedia
• It refers to connecting the hypertext with media such as graphics,
sounds, and animations.
• It involves graphics, image, video, and audio.
• It can be understood as the improved version of hypertext.
• Text with multimedia is a part of the link.
• It allows the user to click on the text or any other multimedia to move
from one page to another page.
• It gives flexibility of movement.
• It attracts a greater number of users.
• It provides a better user experience.
• Example: reading an article on a website, where clicking on an image takes the user to its associated page.

Q-13 images in multimedia

Images can be created by using different techniques of representing data, called data types, such as monochrome and coloured images. A monochrome image is created using a single colour, whereas a coloured image is created using multiple colours. Some important image data types are the following:

1-bit images- An image is a set of pixels, where a pixel is a picture element of the digital image. In 1-bit images, each pixel is stored as a single bit (0 or 1). A bit has only two states: on or off, white or black, true or false. Therefore, such an image is also referred to as a binary image. A 1-bit image is also known as a 1-bit monochrome image, because it contains a single colour: black for the off state and white for the on state.

A 1-bit image with resolution 640*480 needs a storage space of 640*480 bits.

640 x 480 bits = (640 x 480) / 8 bytes = (640 x 480) / (8 x 1024) KB = 37.5 KB.

The clarity or quality of 1-bit image is very low.

8-bit gray-level images- Each pixel of an 8-bit gray-level image is represented by a single byte (8 bits). Therefore, each pixel of such an image can hold one of 2^8 = 256 values between 0 and 255: a brightness value on a scale from black (0, for no brightness or intensity) to white (255, for full brightness or intensity). For example, a dark pixel might have a value of 15 and a bright one a value of 240.

An 8-bit image with resolution 640 x 480 needs a storage space of 640 x 480 bytes = (640 x 480) / 1024 KB = 300 KB. Therefore, an 8-bit image needs 8 times more storage space than a 1-bit image.

24-bit color images- In a 24-bit color image, each pixel is represented by three bytes, usually representing RGB (red, green and blue). True color is usually defined to mean 256 shades each of red, green and blue, for a total of 2^24 = 16,777,216 color variations. This provides a method of representing and storing graphical image information in an RGB color space such that a large number of colors, shades and hues can be displayed, as in high-quality photographic images or complex graphics.

Many 24-bit color images are stored as 32-bit images, with the extra byte for each pixel used to store an alpha value representing special-effect information.
A 24-bit color image with resolution 640 x 480 needs a storage space of 640 x 480 x 3 bytes = (640 x 480 x 3) / 1024 KB = 900 KB without any compression. Similarly, a 32-bit color image with resolution 640 x 480 needs a storage space of 640 x 480 x 4 bytes = 1200 KB without any compression.

Disadvantages
• Require large storage space
• Many monitors can display only 256 different colors at any one time. Therefore, in
this case it is wasteful to store more than 256 different colors in an image.

8-bit color images- 8-bit color graphics is a method of storing image information in a computer's memory or in an image file where one byte (8 bits) represents each pixel, so the maximum number of colors that can be displayed at once is 256. 8-bit color graphics come in two forms. In the first form, the image stores not the full 24-bit color value but an 8-bit index into a color map for each pixel. Therefore, such 8-bit image formats consist of two parts: a color map describing the colors present in the image, and the array of index values for each pixel. In most color maps each color is chosen from a palette of 16,777,216 colors (24 bits: 8 red, 8 green, 8 blue).

In the other form, the 8 bits use 3 bits for red, 3 bits for green and 2 bits for blue. This second form is often called 8-bit true color, as it does not use a palette at all. When a 24-bit full-color image is turned into an 8-bit image, some of the colors have to be eliminated; this is known as the color quantization process.

An 8-bit color image with resolution 640 x 480 needs a storage space of 640 x 480 bytes = (640 x 480) / 1024 KB = 300 KB without any compression.
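The 3-3-2 "true color" form described above amounts to simple bit packing; a minimal sketch (plain quantization by truncation, with no dithering):

```python
def pack_rgb332(r, g, b):
    """Quantize 24-bit RGB (0-255 per channel) into one 8-bit 3-3-2 value."""
    return (r >> 5) << 5 | (g >> 5) << 2 | (b >> 6)

def unpack_rgb332(byte):
    """Expand a 3-3-2 byte back to approximate 8-bit channels."""
    r = (byte >> 5) & 0b111
    g = (byte >> 2) & 0b111
    b = byte & 0b11
    # scale each field back up to the 0-255 range
    return (r * 255 // 7, g * 255 // 7, b * 255 // 3)

packed = pack_rgb332(255, 128, 0)        # a bright orange
print(packed, unpack_rgb332(packed))     # 240 (255, 145, 0)
```

The round trip shows where the quality loss happens: green's 128 comes back as 145, because only 8 green levels survive the packing.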
Q-14 image colour model
Additive Color Model

1. These types of models use light emitted directly from a source to display
colors.
2. These models mix different amounts of RED, GREEN, and BLUE (the primary colors)
of light to produce the rest of the colors.

3. Adding these three primary colors at full intensity results in a WHITE image.


4. Example: RGB model is used for digital displays such as laptops, TVs, tablets, etc.
Subtractive Color Model
1. These types of models use printing inks to display colors.
2. Subtractive color starts with an object that reflects light and uses colorants to
subtract portions of the white light illuminating the object to produce other colors.
3. If an object reflects all the white light back to the viewer, it appears white; if it
absorbs all the light, it appears black.
4. Example: Graphic designers use the CMYK model for printing purposes.

RGB Color Model (Additive Model)


Color images are encoded as integer triplet (R,G,B) values. These triplets encode how
much the corresponding phosphor should be excited in devices such as a monitor.
For images produced from computer graphics, we store integers proportional to
intensity in the frame buffer.

CMY Color Model (Subtractive Color)


Additive color: when two light beams impinge on a target, their colors add;
when two phosphors on a CRT screen are turned on, their colors add. However, for
ink deposited on paper, the opposite holds: yellow ink subtracts blue from
white illumination but reflects red and green, so it appears yellow. Instead of red,
green, and blue primaries, we need primaries that amount to -red, -green, and -blue;
i.e., we need to subtract R, G, or B. These subtractive color primaries are the Cyan (C),
Magenta (M) and Yellow (Y) inks.
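Since the subtractive primaries amount to "white minus RGB", converting between the two models (with channel values normalized to 0..1) is a simple complement; a minimal sketch, with illustrative function names:

```python
def rgb_to_cmy(r, g, b):
    """Convert normalized RGB (0..1) to CMY by subtracting from white."""
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    """Inverse conversion: subtract the ink amounts back out of white."""
    return (1 - c, 1 - m, 1 - y)

# Yellow ink subtracts blue: pure yellow (R=1, G=1, B=0) -> C=0, M=0, Y=1
print(rgb_to_cmy(1, 1, 0))  # (0, 0, 1)
```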
Q-15 image file formats in mm
TIFF (.tif, .tiff)

TIFF or Tagged Image File Format files are lossless image files, meaning that they do not
lose any image quality or information (there are options for compression, but they are
lossless), allowing for very high-quality images but also larger file sizes.
Compression: None by default; optional lossless compression. Very high-quality images.

Best For: High quality prints, professional publications, archival copies


Special Attributes: Can save transparencies

Bitmap (.bmp)
BMP or Bitmap Image File is a format developed by Microsoft for Windows. There is no
compression or information loss with BMP files, which allows images to have very high
quality but also very large file sizes. Because BMP is a proprietary format, it is generally
recommended to use TIFF files instead.
Compression: None
Best For: High quality scans, archival copies

JPEG (.jpg, .jpeg)


JPEG, which stands for Joint Photographic Experts Group, is a "lossy" format, meaning that
the image is compressed to make a smaller file. The compression does create a loss in
quality, but this loss is generally not noticeable. JPEG files are very common on the Internet,
and JPEG is a popular format for digital cameras - making it ideal for web use and
non-professional prints.
Compression: Lossy - some file information is compressed or lost
Best For: Web Images, Non-Professional Printing, E-Mail, PowerPoint
Special Attributes: Can choose amount of compression when saving in image editing
programs like Adobe Photoshop or GIMP.
GIF (.gif)
GIF or Graphics Interchange Format files are widely used for web graphics because they are
limited to only 256 colors, can allow for transparency, and can be animated. GIF files are
typically small in size and are very portable.
Compression: Lossless - compression without loss of quality
Best For: Web Images

Special Attributes: Can be Animated, Can Save Transparency

PNG (.png)
PNG or Portable Network Graphics files are a lossless image format originally designed to
improve upon and replace the GIF format. PNG files can handle up to 16 million
colors, unlike the 256 colors supported by GIF.
Compression: Lossless - compression without loss of quality
Best For: Web Images
Special Attributes: Save Transparency
EPS (.eps)
An EPS or Encapsulated PostScript file is a common vector file type. EPS files can be opened
in many illustration applications such as Adobe Illustrator or CorelDRAW.
Compression: None - uses vector information
Best For: Vector artwork, illustrations
Special Attributes: Saves vector information
RAW Image Files (.raw, .cr2, .nef, .orf, .sr2, and more)
RAW images are unprocessed images created by a camera or scanner. Many digital SLR
cameras can shoot in RAW, whether it be a .raw, .cr2, or .nef file.
These RAW images are the equivalent of a digital negative, meaning that they hold a lot of
image information, but still need to be processed in an editor such as Adobe Photoshop or
Lightroom.

Compression: None
Best For: Photography
Special Attributes: Saves metadata, unprocessed, lots of information
Q-16 difference between digital audio and midi
Q-17 audio file formats
Lossy formats.

Lossy audio formats discard data during compression. They don't decompress back to their
original file size, so they end up smaller, and some sound information is lost. Artists and
engineers who send audio files back and forth prefer not to use lossy formats, because the
files degrade every time they're re-exported.

MP3
MP3 (MPEG-1 Audio Layer III) is the most popular of the lossy formats. MP3 files work on
most devices, and the files can be as small as one-tenth the size of lossless files. MP3 is fine
for the consumer, since most of the sound it drops is inaudible, but that’s not the case when
it comes to bit depth. “MP3 files can only be up to 16-bit, which is not what you want to be
working in,” says producer, mixer, and engineer Gus Berry. “You want to be working in at
least 24-bit or higher when recording and mixing.”

AAC
Advanced Audio Coding, or AAC files (also known as MPEG-4 AAC), take up very little space
and are good for streaming, especially over mobile devices. Requiring less than 1 MB per
minute of music and sounding better than MP3 at the same bitrate, the AAC format is used
by iTunes/Apple Music, YouTube, and Android.
Ogg Vorbis
Ogg Vorbis is the free, open-source audio codec that Spotify uses. It’s great for streaming,
but the compression results in some data loss. Experts consider it a more efficient format
than MP3, with better sound at the same bitrate.
Lossless formats.
These files decompress back to their original size, keeping sound quality intact. Audio
professionals want all of the original sound waves, so they prefer lossless. These files can be
several times larger than MP3s. Lossless bitrates depend on the volume and density of the
music, rather than the quality of the audio.
FLAC
Free Lossless Audio Codec offers lossless compression, and it’s free and open-source.
ALAC

Apple’s Lossless Audio Codec allows for lossless compression, but it works only on Apple
devices.
Uncompressed formats.
These files remain the same size from origin to destination.
WAV
WAV (Waveform Audio File) retains all the original data, which makes it the ideal format for
sound engineers. “WAV has greater dynamic range and greater bit depth,” creative
producer and sound mixer Lo Boutillette says of her preferred format. “It’s the highest
quality,” Berry agrees. “It can be 24-bit, 32-bit, all the way up to 192kHz sample rate and
even higher these days.” If you’re collaborating and sending files back and forth, WAV holds
its time code. This can be especially useful for video projects in which exact synchronization
is important.
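The sample rates and bit depths quoted above translate directly into storage: uncompressed PCM audio such as WAV needs sample rate × bit depth × channels bits per second. A rough sketch (the function name is illustrative):

```python
def wav_size_mb(sample_rate, bit_depth, channels, seconds):
    """Approximate uncompressed WAV audio data size in megabytes."""
    bits = sample_rate * bit_depth * channels * seconds
    return bits / 8 / (1024 * 1024)

# One minute of CD-quality stereo (44.1 kHz, 16-bit)
print(round(wav_size_mb(44100, 16, 2, 60), 1))  # ~10.1 MB
```

This also illustrates why MP3s at roughly one-tenth the size are attractive for casual listening but unacceptable for mixing.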

AIFF
Originally created by Apple, AIFF (Audio Interchange File Format) files are like WAV files in
that they retain all of the original sound and take up more space than MP3s. They can play
on Macs and PCs, but they don’t hold time codes, so they’re not as useful for editing and
mixing.

DSD
Direct Stream Digital is an uncompressed, high-resolution audio format. These files encode
sound using pulse-density modulation. They are very large, with a sample rate as much as
64 times that of a regular audio CD, so they require top-of-the-line audio systems.

PCM

Pulse-Code Modulation, used for CDs and DVDs, captures analog waveforms and turns them
into digital bits. Until DSD, this was thought to be the closest you could get to capturing
complete analog audio quality.
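Pulse-code modulation can be illustrated by sampling an analog waveform at regular intervals and quantizing each sample to an integer level; a minimal sketch of signed 8-bit PCM applied to a sine wave (all names are illustrative):

```python
import math

def pcm_encode(frequency, sample_rate, duration, bits=8):
    """Sample a sine wave and quantize each sample to signed integers."""
    max_level = 2 ** (bits - 1) - 1  # e.g. 127 for 8-bit audio
    n_samples = int(sample_rate * duration)
    return [
        round(max_level * math.sin(2 * math.pi * frequency * n / sample_rate))
        for n in range(n_samples)
    ]

# Four samples of a wave that completes one cycle every 4 samples
print(pcm_encode(1, 4, 1))  # [0, 127, 0, -127]
```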
Q-18 various Animation techniques
Stop Motion

Cut-out animations are a great example of a stop motion style. A single drawing is made
rather than sequential images and it is then cut into pieces. Those cut-outs are joined together
with pins or wires. This allows the animator to move specific parts of the character to show
animation.

Traditional

This includes the oldest techniques that were used by Disney and other entertainment brands
in the earlier days. The idea is simple – you draw sequential images for each frame and show
them in series to give the illusion of motion. It's just like a flip book, which creates the
illusion of movement when you flip the pages quickly.

2D Motion

This can refer to two things: traditional animations made from paper or other similar
mediums and vector-based animations made on computers. We will focus on the latter here.
The idea is the same as the traditional one. However, the only major difference is the lack of
a solid medium. You make all the drawings digitally on a computer and play those images to
give an animation effect. So, it’s comparatively easier and quicker than the traditional
technique.

3D Animations

3D movies are quite popular entertainment these days. This technique is quite similar to
playing with puppets but in a digital environment. The underlying process is quite technical,
so let’s focus on the key principles. 3D animations combine the frames of 2D with the
modelling of characters. The computer prepares the images in a digital space and does all the
calculations to make the objects move in the desired way. Generally, this is quite an
expensive style of animation.

Motion Graphics
This one is quite different from the ones listed above. Why? Motion Graphics is not
dependent on any storyline. Rather it is just the movement of static images and texts to add
some special effects. For instance, the credits list at the end of a movie, animated logos, and
even explainer videos are all motion graphics.
Q-19 working of hard disk drive in detail
A hard drive consists of a stack of disks or platters that spin at a significantly high speed. A
recording head is typically attached to the top and the bottom portion of each platter.

A layer of microscopic magnetized metal grains is applied to the surface of the disks. The
main purpose of the coating of magnetized metal grains present on the surface of disks is to
form magnetic patterns to hold the information or store the data.

For this purpose, the grains tend to arrange themselves in the form of groups. Here, each
group formed by the grains is known as a bit. The two states in which the magnetization of
the grains can be achieved denote the binary bits 0 and 1. Data is stored on the disk
by converting the digital data (the binary combination of bits) into an analogue signal
(an electric current).

The transfer of bits takes place with the help of an electromagnet that is attached to the
internal mechanism of the hard drive. The magnetic field generated by the electromagnet is
highly intense and is capable of reversing or changing the direction of magnetization of the
metal grains.

To retrieve the information stored on the drive, a magnetic reader is used. The information
that is stored on the surface of the hard disk drive is arranged in a specific order. The data
bits containing the information are arranged in concentric circular paths. These paths are
known as tracks. The tracks can be further divided into smaller areas known as sectors.
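Because data is laid out in tracks and sectors, a location on older disks was addressed by its cylinder (track), head (platter surface), and sector, and converted to a flat sector number with the standard CHS-to-LBA formula. A sketch, with an illustrative disk geometry:

```python
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Standard CHS -> LBA conversion; CHS sector numbering starts at 1."""
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

# First sector of the disk, assuming a classic 16-head, 63-sector geometry
print(chs_to_lba(0, 0, 1, heads_per_cylinder=16, sectors_per_track=63))  # 0
```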

Whenever the user provides a command to save the data, the read-write head of the
device tries to locate the free sectors of the platter and establish magnetisation and
demagnetisation of the magnetic grains present in that particular area according to the
input signal. A portion of the hard disk drive is specifically dedicated to keeping track of
the free and used-up portions of the drive.

The map that displays the usage of the drive is known as the file allocation table or FAT.
When the user provides a command to the computer to save information on the surface of
the disk, then the computer approaches the file allocation table to find the appropriate
place required to save the data. Once the suitable place is located by the computer, the
read-write head is made to move on the surface of the platter accordingly.
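The file allocation table's role can be modelled as a simple map from sector number to owner, where saving a file means finding enough free sectors and marking them used. A toy sketch of the idea (not the actual on-disk FAT layout):

```python
# A toy file-allocation map: None marks a free sector,
# a filename marks a used one.
fat = [None] * 8  # an 8-sector disk

def save_file(name, sectors_needed):
    """Find free sectors in the allocation map and mark them used."""
    free = [i for i, owner in enumerate(fat) if owner is None]
    if len(free) < sectors_needed:
        raise OSError("disk full")
    allocated = free[:sectors_needed]
    for i in allocated:
        fat[i] = name
    return allocated  # where the read-write head should write

print(save_file("notes.txt", 3))  # [0, 1, 2]
print(save_file("image.bmp", 2))  # [3, 4]
```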
Q-20 working of DVD player
The pits and bumps on the DVD are hit by the laser from the optical
mechanism of the DVD player. The laser is reflected differently
according to whether it strikes a pit or a bump. Though the laser hits a single
spot at a time, the DVD spins so that the entire area is covered;
mirrors are also used to move the spot.

These reflected laser beams are then collected by a light sensor (e.g. a
photodetector), which converts the different signals into binary code. In short, the
optical system converts the data on the DVD into a digital code.
Q-21 Animation file formats

• MP4: is a file format created by the Moving Picture Experts Group (MPEG) as a
multimedia container format designed to store audiovisual data.

• MOV: is a multimedia container file format developed by Apple and compatible with
both Macintosh and Windows platforms. It can contain multiple tracks that store
different types of media data and is often used for saving movies and other video files.

• GIF: is an image encoded in Graphics Interchange Format (GIF) which contains a
number of images or frames in a single file and is described by its own graphic control
extension. The frames are presented in a specific order in order to convey animation.
An animated GIF can loop endlessly or stop after a few sequences.

• Adobe After Effects: is a digital visual effects, motion graphics, and compositing
application developed by Adobe Systems and used in the post-production process of
film making and television production. It can be used for keying, tracking,
compositing and animation.

• CSS (Cascading Style Sheets): make it possible to animate transitions from one CSS
style configuration to another. We use these a lot in our Headless CMS websites.
Essentially CSS animations consist of two components: a style describing the CSS
animation and a set of keyframes that indicate the start and end states of the
animation's style, as well as possible intermediate waypoints.
Q-22 explain video signal formats in detail
Component Video
Component video is a video signal that has been split into two or more component
channels. In popular use, it refers to a type of component analog video (CAV) information
that is transmitted or stored as three separate signals. Component video can be contrasted
with composite video (NTSC, PAL or SECAM), in which all the video information is combined
into a single line-level signal that is used in analog television. Like composite,
component-video cables do not carry audio and are often paired with audio cables.
When used without any other qualification, the term component video generally refers to
analogue YPbPr component video with sync on luma.
Composite Video
Composite video (1 channel) is an analogue video transmission (no audio) that carries
standard-definition video, typically at 480i or 576i resolution. Video information is encoded
on one channel, in contrast with slightly higher quality S-video (2 channels) and even higher
quality component video (3 channels).

Composite video is usually in standard formats such as NTSC, PAL, and SECAM and is often
designated by the CVBS initialism, meaning "Color, Video, Blanking, and Sync."
S-Video
Separate Video (2 channels), more commonly known as S-Video and Y/C, is an analogue
video transmission (no audio) that carries standard-definition video, typically at 480i or 576i
resolution. Video information is encoded on two channels: luma (luminance, intensity, "Y")
and chroma (colour, "C"). This separation is in contrast with slightly lower quality composite
video (1 channel) and higher quality component video (3 channels). It is often referred to by
JVC (who introduced the DIN connector) as both an S-VHS connector and
as Super Video.
Q-23 explain digital video standards
Digital video (DV) is video that is captured and stored in a digital format as ones and zeros,
rather than a series of still pictures captured in film. Digital, versus analog, signals are used.
Information is processed and stored as a sequence of digital data for easy manipulation by
computers, but the video is still presented to the viewer through a screen in analog form.

1. EDTV

EDTV evolved from SDTV, which incorporated 480 scan lines along with 45 extra blank lines
allotted for resetting the display. The display type for SDTVs is cited as interlaced
(480i or 525i). This display type did not prove successful for large-screen TVs, as it
caused poor picture quality due to visible jagged lines. Progressive-scan displays then came
into the picture, and these are usually known as enhanced-definition TVs, i.e. EDTV.

EDTV displays high-definition broadcasts by down-converting the signals to 480 scan lines,
thus resulting in a loss of clarity. On the other hand, HDTV broadcasts high-definition
programs very well, preserving the extra clarity.

1. EDTV supports only progressive scan displays, while HDTV supports both
progressive scan displays and interlaced displays.
2. EDTV is specified by 480p, whereas HDTV is specified by both 720p and 1080i,
which yield a better picture quality.
3. EDTV has less clarity while down-converting high definition broadcasts, and
contrarily, HDTV preserves this extra clarity.
4. While watching 480i standard broadcasting, a high-quality EDTV has a greater
capability to process the interlaced signals than a low-quality HDTV.
5. All TV programs, DVDs, and DVD players are compatible with an EDTV; whereas,
most of them are incompatible with HDTV.


2. CCIR

CCIR System I is an analogue broadcast television system. It was first used in the Republic of
Ireland starting in 1962 as the 625-line broadcasting standard to be used on VHF Band I and
Band III, sharing Band III with 405-line System A signals radiated in the north and east of
the country.

CCIR Standards for Digital Video

(CCIR -- Consultative Committee for International Radio)


                      CCIR 601     CCIR 601     CIF          QCIF
                      525/60       625/50
                      NTSC         PAL/SECAM
--------------------  -----------  -----------  -----------  -----------
Luminance resolution  720 x 485    720 x 576    352 x 288    176 x 144
Chrominance resolut.  360 x 485    360 x 576    176 x 144    88 x 72
Color subsampling     4:2:2        4:2:2        --           --
Fields/sec            60           50           30           30
Interlacing           Yes          Yes          No           No

• CCIR 601 uses interlaced scan, so each field only has half as much vertical
resolution (e.g., 243 lines in NTSC). The CCIR 601 (NTSC) data rate is
~165 Mbps.
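The quoted data rate follows from the table entries: with 4:2:2 subsampling each pixel averages 16 bits (8 for luminance plus 8 shared between the two chrominance channels), so the rate is width × lines × 16 × frames per second. A quick check (the function name is illustrative; the exact figure depends on which line count is treated as active):

```python
def ccir601_rate_mbps(width, lines, fps, bits_per_pixel=16):
    """Active-picture data rate in Mbps for 4:2:2 sampling (16 bits/pixel avg)."""
    return width * lines * bits_per_pixel * fps / 1e6

# CCIR 601 NTSC: 720 x 485 luminance samples, 30 frames (60 fields) per second
print(round(ccir601_rate_mbps(720, 485, 30)))  # 168, close to the ~165 Mbps cited
```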

• CIF (Common Intermediate Format) -- an acceptable temporary standard
o Approximately the VHS quality
o Uses progressive (non-interlaced) scan
o Uses NTSC frame rate, and half the active lines of PAL signals -->
To play on existing TVs, PAL systems need to do frame rate
conversion, and NTSC systems need to do line-number conversion.

• QCIF -- Quarter-CIF, with half the CIF resolution in each dimension (176 x 144)
