Course Title: Management Information Systems
Assignment Code: All Blocks
Note: Attempt all the questions and submit this assignment on or before 30th April, 2015 to the
coordinator of your study center.

Q1. What is the role played by business information in an organization? Define Management Information System and discuss various characteristics expected of a good MIS.

Role played by business information in an organization

Communication
For many companies, email is the principal means of communication between
employees, suppliers and customers. Email was one of the early drivers of the
Internet, providing a simple and inexpensive means to communicate. Over the years,
a number of other communications tools have also evolved, allowing staff to
communicate using live chat systems, online meeting tools and video-conferencing
systems. Voice over internet protocol (VOIP) telephones and smart-phones offer
even more high-tech ways for employees to communicate.
Inventory Management
When it comes to managing inventory, organizations need to maintain enough stock
to meet demand without investing in more than they require. Inventory
management systems track the quantity of each item a company maintains,
triggering an order of additional stock when the quantities fall below a predetermined amount. These systems are best used when the inventory management
system is connected to the point-of-sale (POS) system. The POS system ensures that
each time an item is sold, one of that item is removed from the inventory count,
creating a closed information loop between all departments.
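The closed loop described above can be sketched in a few lines of Python. This is only an illustrative model, not a real POS or inventory API; the class name, method name, and reorder thresholds are all invented for the example.

```python
# Minimal sketch of a POS-to-inventory loop: each sale decrements stock,
# and a reorder is triggered when quantity falls below a preset level.
# All names (Inventory, record_sale) are illustrative, not a real POS API.

REORDER_LEVEL = 5   # reorder when stock drops below this
REORDER_QTY = 20    # how much to reorder

class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)      # item -> quantity on hand
        self.pending_orders = []      # reorders triggered by low stock

    def record_sale(self, item, qty=1):
        """Called by the POS system each time an item is sold."""
        self.stock[item] -= qty
        if self.stock[item] < REORDER_LEVEL:
            self.pending_orders.append((item, REORDER_QTY))

inv = Inventory({"widget": 6})
inv.record_sale("widget")            # 5 left, no reorder yet
inv.record_sale("widget")            # 4 left -> reorder triggered
```

Because the sale itself drives the stock count, no department needs to reconcile the two figures by hand.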
Data Management
The days of large file rooms, rows of filing cabinets and the mailing of documents are
fading fast. Today, most companies store digital versions of documents on servers
and storage devices. These documents become instantly available to everyone in the
company, regardless of their geographical location. Companies are able to store and
maintain a tremendous amount of historical data economically, and employees
benefit from immediate access to the documents they need.
Management Information Systems
Storing data is only a benefit if that data can be used effectively. Progressive
companies use that data as part of their strategic planning process as well as the
tactical execution of that strategy. Management Information Systems (MIS) enable
companies to track sales data, expenses and productivity levels. The information can
be used to track profitability over time, maximize return on investment and identify
areas of improvement. Managers can track sales on a daily basis, allowing them to
immediately react to lower-than-expected numbers by boosting employee
productivity or reducing the cost of an item.
Customer Relationship Management
Companies are using IT to improve the way they design and manage customer
relationships. Customer Relationship Management (CRM) systems capture every
interaction a company has with a customer, so that a more enriching experience is
possible. If a customer calls a call center with an issue, the customer support
representative will be able to see what the customer has purchased, view shipping
information, call up the training manual for that item and effectively respond to the
issue. The entire interaction is stored in the CRM system, ready to be recalled if the
customer calls again. The customer has a better, more focused experience and the
company benefits from improved productivity.

Management Information System

Management information systems (MIS) is the study of people, technology, organizations, and the
relationships among them. MIS professionals help firms realize maximum benefit from investment
in personnel, equipment, and business processes. MIS is a people-oriented field with an emphasis
on service through technology. Management information systems are typically computer systems
used for managing data, making it easier to search, analyze and share information.
Management information systems are distinct from other information systems in that they are used
to analyze and facilitate strategic and operational activities.
Management information system, or MIS, broadly refers to a computer-based system that provides
managers with the tools to organize, evaluate and efficiently manage departments within an
organization. In order to provide past, present and predictive information, a management
information system can include software that helps in decision making, data resources such as
databases, the hardware resources of a system, decision support systems, people management and
project management applications, and any computerized processes that enable the department to
run efficiently.
Management Information System Managers

The role of the management information system (MIS) manager is to focus on the organization's
information and technology systems. The MIS manager typically analyzes business problems and
then designs and maintains computer applications to solve the organization's problems.

Various characteristics expected of a good MIS

Relevance: Information should be relevant to the strategic decision that company management is
currently reviewing. Because companies may review several business opportunities at one
time, avoiding information not relating to the decision is essential.
Accuracy: MIS information should be accurate and avoid any inclusion of estimates or probable
costs. Making decisions based on estimates can lead to cost overruns or lower profits from
future operations.
Timeliness: Many management decisions are based on information from a certain time period, such as
quarterly or annual periods. Information outside of the requested time frame may skew
information and lead to an improperly informed decision.
Exhaustiveness: MIS information gathering should resemble an upside-down triangle. The early stages of
information gathering should be exhaustive, including all types of company information. As
management narrows its decision-making process, the information is refined to include
only the most relevant pieces.
Cost-effectiveness: The MIS needs to be a cost-effective and efficient system for gathering information.
Most of these systems are developed internally, creating costs that cannot be passed to clients.

Q2. Explain and distinguish the following concepts with reference to their use in real-time systems:
i) Multiprocessing
ii) Time sharing

Multiprocessing
Multiprocessing is the use of two or more central processing units (CPUs) within a single
computer system. The term also refers to the ability of a system to support more than one
processor and/or the ability to allocate tasks between them. There are many variations on
this basic theme, and the definition of multiprocessing can vary with context, mostly as a
function of how CPUs are defined (multiple cores on one die, multiple dies in one package,
multiple packages in one system unit, etc.).
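The idea of allocating tasks between processors can be illustrated with Python's standard multiprocessing module, which distributes independent computations across a pool of worker processes. The function and numbers here are arbitrary examples, not part of any real system.

```python
# Illustrative use of multiple CPUs: a process pool distributes
# independent tasks across the available cores.
from multiprocessing import Pool, cpu_count

def square(n):
    # An independent unit of work that any CPU can execute.
    return n * n

def parallel_squares(nums):
    # One worker process per CPU (capped at 4 for the example);
    # pool.map splits the list among the workers.
    with Pool(processes=min(4, cpu_count())) as pool:
        return pool.map(square, nums)

if __name__ == "__main__":
    print(parallel_squares([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

The pool hides the task allocation described above: the programmer submits work, and the system decides which processor runs each piece.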
Time Sharing
In computing, time-sharing is the sharing of a computing resource among many users by
means of multiprogramming and multi-tasking. Its introduction in the 1960s, and
emergence as the prominent model of computing in the 1970s, represented a major
technological shift in the history of computing.
By allowing a large number of users to interact concurrently with a single computer, time-sharing dramatically lowered the cost of providing computing capability, made it possible
for individuals and organizations to use a computer without owning one, and promoted the
interactive use of computers and the development of new interactive applications.
Difference between Multiprocessing and Time Sharing
The earliest computers were extremely expensive devices, and very slow in comparison to
recent models. Machines were typically dedicated to a particular set of tasks and operated
by control panels, the operator manually entering small programs via switches in order to
load and run a series of programs. These programs might take hours, or even weeks, to run.
As computers grew in speed, run times dropped, and soon the time taken to start up the
next program became a concern. Batch processing methodologies evolved to decrease
these "dead periods" by queuing up programs so that as soon as one program completed,
the next would start.

To support a batch processing operation, a number of comparatively inexpensive card

punch or paper tape writers were used by programmers to write their programs "offline".
When typing (or punching) was complete, the programs were submitted to the operations
team, which scheduled them to be run. Important programs were started quickly; how long
before less important programs were started was unpredictable. When the program run
was finally completed, the output (generally printed) was returned to the programmer. The
complete process might take days, during which time the programmer might never see the computer.

The alternative of allowing the user to operate the computer directly was generally far too
expensive to consider. This was because users might have long periods of entering code
while the computer remained idle. This situation limited interactive development to those
organizations that could afford to waste computing cycles: large universities for the most
part. Programmers at the universities decried the behaviors that batch processing imposed,
to the point that Stanford students made a short film humorously critiquing it. They
experimented with new ways to interact directly with the computer, a field today known as
human-computer interaction.
Time Sharing
Time-sharing was developed out of the realization that while any single user would make
inefficient use of a computer, a large group of users together would not. This was due to the
pattern of interaction: Typically an individual user entered bursts of information followed
by long pauses but a group of users working at the same time would mean that the pauses
of one user would be filled by the activity of the others. Given an optimal group size, the
overall process could be very efficient. Similarly, small slices of time spent waiting for disk,
tape, or network input could be granted to other users.
Implementing a system able to take advantage of this would be difficult. Batch processing
was really a methodological development on top of the earliest systems; computers still ran
single programs for single users at any time, all that batch processing changed was the time
delay between one program and the next. Developing a system that supported multiple
users at the same time was a completely different concept; the "state" of each user and
their programs would have to be kept in the machine, and then switched between quickly.
This would take up computer cycles, and on the slow machines of the era this was a
concern. However, as computers rapidly improved in speed, and especially in size of core
memory in which users' states were retained, the overhead of time-sharing continually
decreased, relatively.
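The state-switching described above can be sketched as a toy round-robin scheduler: each user's job gets a fixed slice of processor time in turn, so pauses in one job are filled by work from another. This is purely illustrative; real time-sharing systems preempt jobs with hardware timers rather than simulating turns.

```python
# Toy simulation of time-sharing: a round-robin scheduler gives each
# job a fixed "quantum" of CPU time in turn, requeueing unfinished jobs.
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, units_of_work). Returns the execution order."""
    queue = deque(jobs)
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                   # this job gets one time slice
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # not done: back of the queue
    return order

print(round_robin([("alice", 2), ("bob", 1)], quantum=1))
# ['alice', 'bob', 'alice']
```

Saving and restoring each job's "state" between slices is exactly the overhead that shrinking core-memory costs made affordable.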
The concept was first described publicly in early 1957 by Bob Bemer as part of an article in
Automatic Control Magazine. The first project to implement a time-sharing system was
initiated by John McCarthy in late 1957, on a modified IBM 704, and later on an additionally
modified IBM 7090 computer. Although he left to work on Project MAC and other projects,
one of the results of the project, known as the Compatible Time-Sharing System or CTSS,
was demonstrated in November 1961. CTSS has a good claim to be the first time-sharing
system and remained in use until 1973. Another contender for the first demonstrated time-sharing system was PLATO II, created by Donald Bitzer at a public demonstration at Robert
Allerton Park near the University of Illinois in early 1961. Bitzer has long said that the
PLATO project would have gotten the patent on time-sharing if only the University of
Illinois had known how to process patent applications faster, but at the time university
patents were so few and far between, they took a long time to be submitted. The first
commercially successful time-sharing system was the Dartmouth Time Sharing System.
A familiar example of time-sharing is provided by flight-reservation systems for air travel.
A system may have thousands of terminals in offices of airlines and travel agents across the
country. These terminals are tied to a central computer facility through an extensive set of
communication links. A request from a travel agent for space on a certain flight is input
directly to the central computer, where data on all flights are stored in a large hard-disk
memory bank. After the request has been keyed in, the CPU of the computer system
processes the request using the flight information stored on the disk. The output, such as a
flight confirmation, is then transmitted back to the agent. If the agent's client buys a ticket,
the ticketing information is transmitted to the computer, which stores it for future use and
subtracts one seat from the space available.
Just as time-sharing systems let one computer work nearly simultaneously at many jobs,
multiprocessing systems have two or more CPUs assigned to a single function.
Multiprocessing systems are used for applications where the breakdown of a single main
computer cannot be tolerated, including flight-reservation systems and the strategic
defense systems used by the military. Often time-sharing and multiprocessing are
combined in the same system, as in the case of flight-reservation systems.

Q3. What are expert systems and how do they help in decision-making?
Can you give examples to illustrate the same? What kinds of decisions
can be appropriately programmed on expert systems? Give examples.
In artificial intelligence, an expert system is a computer system that emulates
the decision-making ability of a human expert. Expert systems are designed to
solve complex problems by reasoning about knowledge, represented
primarily as if-then rules rather than through conventional procedural
code. The first expert systems were created in the 1970s and then proliferated
in the 1980s. Expert systems were among the first truly successful forms of AI
software.
An expert system is divided into two sub-systems: the inference engine and
the knowledge base. The knowledge base represents facts and rules. The
inference engine applies the rules to the known facts to deduce new facts.
Inference engines can also include explanation and debugging capabilities.
An expert system is an example of a knowledge-based system. Expert
systems were the first commercial systems to use a knowledge-based
architecture. A knowledge-based system is essentially composed of two subsystems: the knowledge base and the inference engine.
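The interplay of the two sub-systems can be sketched as a tiny forward-chaining inference engine. The facts and rules here (a hypothetical car-fault diagnosis) are invented purely for illustration and do not come from any real knowledge base.

```python
# Minimal sketch of an expert system's two sub-systems: a knowledge
# base (facts + if-then rules) and an inference engine that applies
# rules to known facts to deduce new facts (forward chaining).

facts = {"engine_cranks", "no_fuel_smell"}           # known facts
rules = [
    # (conditions, conclusion): IF all conditions hold THEN conclude.
    ({"engine_cranks", "no_fuel_smell"}, "fuel_not_reaching_engine"),
    ({"fuel_not_reaching_engine"}, "check_fuel_pump"),
]

def infer(facts, rules):
    """Apply rules to the facts until no new fact can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # deduce a new fact
                changed = True
    return facts

print(infer(facts, rules))
```

Note how the second rule only fires after the first has added its conclusion; this chaining of deductions is what distinguishes an inference engine from a simple lookup.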
Expert system: Personality profiler
This expert system presents each person with a number of psychological
questions and builds a personality profile from the answers.
Decision to be made: HR (Human Resources) want to assign only one of a
large group of people to a new role. HR interrogates the expert system,
which provides a set of personality profiles based on each person's
responses. HR make a short list of which people are likely to be good at
the job and interview those people for a final decision. The expert
system assists in cutting down the workload but does not make the
ultimate decision.

Expert system: Sales mix modeler
The expert system has a complete record of what sold well in the past
with previous models and some trends for the future.
Decision to be made: A car company has a large number of options on a
new model they are about to launch: a dozen paint colour options, three
engine sizes and 24 optional accessories. The sales planner has the task
of picking three car combinations out of all the possible configurations
to make stock cars ready for sale. These are the three cars that will be
pre-built in their thousands by the factory, so he had better get it
right. He uses the sales mix expert system to put together a number of
scenarios, and each scenario is presented to management to make a final
decision.

Expert system: Medical diagnostics
This contains a body of knowledge about thousands of diseases. It is
used by the doctors at a hypothetical clinic to help diagnose patient
illnesses based on their symptoms.
Decision to be made: The doctor meets a patient in the clinic who has an
unusual set of symptoms. He loads those symptoms into the expert system,
which returns a number of possible illnesses. The doctor then phones a
specialist consultant to also ask their opinion before making a
decision. The expert system is assisting but not providing the final
answer.

Expert system: A loan approval system
The expert system has a body of knowledge about the results of past
loans made by the credit company. It has knowledge about all the factors
that point to a low-risk or high-risk loan.
Decision to be made: The credit approval manager meets the client and
poses a number of questions about their financial status along with a
history of their past loans. This information is loaded into the expert
system, which comes back with a risk factor for the loan. The manager
will use this as part of his decision to approve the loan (or not).
Kinds of decisions that can be appropriately programmed on expert systems

In general, there are five types of expert system tools.

Inductive tools
Simple rule based tools
Structured rule based tools
Hybrid tools
Domain specific tools

Inductive tools: Inductive tools generate rules from examples. Here a

developer feeds in a large number of examples from the machine's
information base. The tools use an algorithm to convert the examples into
rules and determine the order the system will follow when questioning the user.
Simple rule based tools: They use IF-THEN rules to represent knowledge.
They are useful for developing expert systems containing fewer than 500

rules. The only problem with these tools is that they lack a high-end editing
facility for designing rules.
Structured rule based tools: They offer context trees, multiple instantiation,
confidence factors, and more powerful editors compared to simple rule based
tools. Here IF-THEN rules are arranged into sets. These rule sets act as
separate knowledge bases. One set of rules can inherit the information
acquired when other rule sets are examined. These tools are more useful
when we need to process a large number of rules that can be subdivided
into sets.

Hybrid tools: Hybrid tools enable complex expert system development.

These tools use object oriented programming techniques to represent
elements of every problem as objects. Here graphical user interface can also
be provided to users.
Domain specific tools: They are specially designed to be used only to develop
expert systems for a particular domain. They provide a special development and
user interface that makes it possible to develop an expert system faster. They
are also referred to as narrow tools.
Selection criteria for expert system tool:
The following criteria will help us to select the right kind of tool for design and
development of expert system.

Type of knowledge representation

Inference and control
Developer interface
User interface
System interface
Training and support


The following are the examples of expert system shells.

Exsys
Knowledge Pro
Xi Plus
Xpert Rule

Q4. Identify the most important factors inhibiting an organization's move towards a DBMS. Why should an organization be careful about placing over-reliance on benchmark tests in selecting a DBMS?

Important factors inhibiting an organization's move towards a DBMS

Probably the most fundamental choice to make in the DBMS hierarchy is the model
used to store, manage, and query databases. Besides affecting what software you
need to acquire, this affects the very way you will think about the data, and can be a
surprisingly hard choice to undo later on. Note that the discussion here is kept to
database models that are in significant current use; this is not meant to be a
comprehensive survey of DBMS methodologies through history, so some
significant models, such as network databases, have been omitted because of a lack
of modern tools and practice.
Ad-hoc databases
The earliest databases were merely ordered aggregations of raw data, which one
could call an ad-hoc database. They are not stored so as to optimize storage or
queries (a query is a request for database records that conform to a given pattern),
but are usually designed to be read in as a whole by the application (although there
have been "random access" ad hoc database systems). Ad hoc databases are still
used quite often. From address books that are merely sequences of the address
information in files, to the bulk e-mail lists most notoriously used by spammers,
these are most useful when you have fully anticipated the use of the represented
data and you are certain that more efficient or query-friendly formats won't ever be
necessary. In this case, you needn't read much further. Your DBMS is encoded in the
standard library of your favorite language: fwrite for C users, BufferedFileStream for
Java users, etc. Ad-hoc databases are very efficient and convenient if all the data is of
low volume, is typically accessed together by an application, and a single application
at that (that is, there is not much current or future need for sharing the data
between applications). However, they become very unwieldy, inefficient and
unmanageable once they grow beyond the size of available memory, as access
patterns change, and if they need to be shared between applications.
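An ad-hoc database of the kind described above can be as simple as serializing the whole record set to a file and reading it back in one go. This is an illustrative sketch; the file name and record layout are arbitrary.

```python
# Sketch of an "ad-hoc database": the application writes its records
# to a file in its own chosen format and reads the whole file back.
# Fine for small, single-application data; unwieldy beyond that.
import json
import os
import tempfile

addresses = [{"name": "Ada", "city": "London"},
             {"name": "Grace", "city": "Arlington"}]

path = os.path.join(tempfile.gettempdir(), "addresses.json")
with open(path, "w") as f:
    json.dump(addresses, f)      # whole database written at once

with open(path) as f:
    loaded = json.load(f)        # whole database read back at once
print(loaded[0]["name"])         # Ada
```

There is no query facility: any filtering or cross-referencing must be coded by hand after the full read, which is exactly the limitation the later models address.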
Hash-based databases

Early algorithm specialists were quick to attack the problem of inefficient queries by
coming up with systems for creating hashes of data records, which are compact keys
that uniquely identify the record. Hashes are easy to manage and with a key, one can
rapidly retrieve the entire record. Hash-based DBMSes, which use these techniques,
are quite popular because of their simplicity and the fact that they come for free
with most UNIX systems. They are very fast, and almost every programming
language provides APIs for their access, but they tend to be quite bare on features.
They are very well-suited to situations where an application wants to pluck records
one at a time from the database, using a well-defined key. An example is for user
profiles and authentication; where the application looks up a record by user ID, does
its thing with the data, and moves on. They are less well suited to situations where
records need to be cross-referenced, or the information in the records needs to be
sliced and diced in some clever way.
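The "pluck one record by key" use case can be sketched with Python's dbm module, which wraps the UNIX-style hash database libraries mentioned above. The user-profile record here is a made-up example.

```python
# Sketch of a hash-based store: a key (here, a user ID) maps straight
# to one record. Python's dbm module wraps UNIX hash-database libraries;
# keys and values are stored as bytes, so the record is JSON-encoded.
import dbm
import json
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "profiles_db")
with dbm.open(path, "c") as db:                    # "c": create if absent
    db["user:42"] = json.dumps({"name": "Ada", "role": "admin"})
    record = json.loads(db["user:42"])             # fast lookup by key
print(record["name"])                              # Ada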
Hierarchical databases
An early development in DBMS was to organize information into regular records
containing other regular records in a more structured way. These are known as
hierarchical databases and have enjoyed a bit of a revival with XML's popularity,
because XML has a general hierarchical structure. Hierarchical databases can be
quite suitable for data such as purchase histories that consist of tightly coupled
records of information, for example, customer information to purchases made to
support calls placed. The problem with hierarchical databases is that they have a
way of accumulating redundant data (which was one of the main claims of relational
databases in their battle to wrest dominance from hierarchical databases in the
'70s). Another problem is that they can be hard to query flexibly in ways that go
against the tight coupling of the data hierarchies.
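The tight coupling and its query cost can be illustrated with nested structures; the customer data here is invented.

```python
# A hierarchical record: customer -> purchases -> support calls, each
# child nested inside its parent. Natural to read top-down, but awkward
# to query "across" the hierarchy.
customer = {
    "name": "Acme Ltd",
    "purchases": [
        {"item": "pump", "support_calls": [{"issue": "leak"}]},
        {"item": "valve", "support_calls": []},
    ],
}

# A cross-cutting query (all support-call issues, regardless of
# purchase) requires walking the whole tree:
calls = [c["issue"]
         for p in customer["purchases"]
         for c in p["support_calls"]]
print(calls)   # ['leak']
```

If the same part appeared under many customers, its details would be duplicated in each subtree, which is the redundancy problem the relational model was built to remove.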
Relational databases
Relational databases are, of course, the current king of the hill in database
technologies. This doesn't mean that more data is kept in relational databases than
any other model, but rather that when one goes about asking about what the "real"
database model of choice is, he or she is most likely to be told to get relational
religion. There is some good reason for this. Relational databases are wonderful for
discouraging redundant data and for the speed of complex queries; they also have a
huge number of tools and APIs to support them. They are best used in situations
where a lot of records are being combined and cross-referenced to synthesize
results. An example might be the production data of a manufacturing firm, where
information about inventory, part specifications, personnel availability, costs, sales
and supplies need to be thoroughly analyzed in order to make production decisions.
However, like any power tool, they can be quite dangerous. Relational DBMSes
(RDBMS) are designed to model very highly structured data which has been
modeled with mathematical precision. If one's database design is not up to snuff, not
only might the advantages of the relational model be lost, but the result can actually
be worse for maintainability than with less stringent models. If you do opt for
relational databases, be sure you understand concepts such as normalization and
referential integrity. These days, almost every RDBMS uses the Structured Query
Language (SQL) for description and querying of the records.
Object databases
Object databases emerged as a way to translate the techniques of object-oriented
programming to data storage models. The data are organized as distinct objects,
each of which belongs to a class, which might use inheritance to acquire aspects of
other classes. Each object can have a set of attributes of simple types such as integer
and string, and relationships to other objects. As you can imagine, they provide a
very natural API for access using object-oriented languages such as C++, Java and
Python. Object databases can be a great choice for this reason, but it can also seduce
programmers into poor data design: techniques that make sense when the data lives
in memory can be very slow and resource-intensive when the data is stored on disk.
Semi-structured databases

The emergence of XML has enlivened another corner of database modeling: semi-structured databases. As RDBMS took over the universe, many developers lamented
that their rigorousness made them unsuitable for modeling data designed directly
for human consumption, that is, more loosely organized records including
structured documents and systems that made it easy to make changes in the model.
Efforts to provide DBMSes that accommodated such "semi-structured" data thrived
in academia until XML took them to the mainstream. Most XML formats define semi-structured data, and so XML -- whether directly stored in files or in an XML
repository -- provides a great deal of flexibility, especially in web-based systems
where the documents are as important as the structured records. The main
drawback is lack of efficiency. The data typically take up much more of the available
resources than with other database models, and queries can be slow and
cumbersome to set up. Semi-structured databases are very strong where documents
and more structured data coexist, such as Intranets and web-based applications.
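A small illustration of querying semi-structured records, using XML where fields may be present on some records and absent on others; the documents are made up for the example.

```python
# Semi-structured sketch: records in XML, where structure can vary per
# record (optional attributes and child elements) and text mixes with data.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<notes>
  <note author="ada">Check the pump <priority>high</priority></note>
  <note>Unsigned note, no priority</note>
</notes>
""")

# Query: authors of notes that carry a priority element.
authors = [n.get("author") for n in doc.findall("note")
           if n.find("priority") is not None]
print(authors)   # ['ada']
```

The second record simply omits fields the first one has; a relational schema would force both into the same rigid shape, while the semi-structured model tolerates the variation at the cost of slower, more ad-hoc querying.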

Since the database is a vital organ of most complete applications, the interface between
the database and the application development language is quite important. The
DBMS of choice should have a natural and efficient API in your programming
language of choice, and preferably more than one, since competition improves
quality across the board. Because developer effort is usually more expensive than
run-time resources, it is probably most important that you choose a DBMS that
supports the APIs and languages with which you are comfortable. It's often worth
even going to a database model that is less suitable to the data in question if it is
more suitable to the skills of the available developers.
Of course the DBMS of choice must work on the platform used by the rest of the
application (or must at least be accessible from this platform), but there might be
other platform needs as well. Be sure to consider who might end up using your
system, and choose a DBMS that would run on other important platforms in future.
This is not always directly obvious; for instance, do you think your database might
grow until it is too much for your current hardware to handle? If so, does your
DBMS run on platforms that support clustering? Does it have special features to take
advantage of clustering?
Probably the most important general features to consider in your DBMS hunt are
security-related. Consider how thoroughly the DBMS requires authentication from
users and keeps an audit trail of the accesses. But security goes beyond keeping out
malicious users. Be sure your DBMS supports backup and restore, not just by
archiving your raw database files, but also the ability to integrate into incremental
backup regimens. It might be enough to have options to dump to or restore from
structured text (which can be incrementally backed up using tools such as diff), and
of course direct integration into the backup software for the system as a whole is
even better. Structured text dump and restore are also a boon for interchange with
other systems. Examples include comma-delimited formats and dumped sequences
of SQL commands.

Benchmark tests in selecting a DBMS

Benchmarking is a process of comparing an organization's or company's

performance to that of other organizations or companies using objective and
subjective criteria. The process compares programs and strategic positions of
competitors or exemplary organizations to those in the company reviewing its
status for use as reference points in the formation of organization decisions and
objectives. Comparing how an organization or company performs a specific activity
with the methods of a competitor or some other organization doing the same thing
is a way to identify the best practice and to learn how to lower costs, reduce defects,
increase quality, or improve outcomes linked to organization or company performance.
Organizations and companies use benchmarking to determine where inputs,
processes, outputs, systems, and functions are significantly different from those of
competitors or others. The common question is, "What is the best practice for a
particular activity or process?" Data obtained are then used by the organization or
company to introduce change into its activities in an attempt to achieve the best
practice standard if theirs is not best. Comparison with competitors and exemplary
organizations is helpful in determining whether the organization's or company's
capabilities or processes are strengths or weaknesses. Significant favorable input,
process, and output benchmark variances become the basis for strategies,
objectives, and goals. Often, a general idea that improvement is possible is the
reason for undertaking benchmarking. Benchmarking, then, means looking for and
finding organizations or companies that are doing something in the best possible
way and learning how they do it in order to emulate them. Organizations or
companies often attempt to benchmark against the best in the world rather than the
best in their particular industry.
Over-reliance on benchmarking carries several risks:
- It may restrict the focus to what is already being done.
- By emulating current exemplary processes, benchmarking is a catch-up
managerial tool or technique rather than a way for the organization or company
to gain managerial dominance or market share.
- It can kill creativity.
- It may not generate new ideas.
- It is not a substitute for a competitive analysis.
- What is best for someone else may not suit you.
- Poorly defined benchmarks may lead to wasted effort and meaningless results.
- Comparisons may be incorrect.
- Competitors may be reluctant to share information.

Q5. Differentiate among Trojans, Worms and Viruses. Give one example
for each. "Computer virus is a major threat to computer security." Justify
the statement.
Trojan: Trojans are malicious programs that perform actions that have not
been authorized by the user. These actions can include:

- Deleting data
- Blocking data
- Modifying data
- Copying data
- Disrupting the performance of computers or computer networks
In order for a Trojan horse to spread, you must, in effect, invite these
programs onto your computer, for example by opening an email attachment.
Examples include PWSteal.Trojan, a password-stealing Trojan, and so-called
govware Trojans used for government surveillance.

Trojans most often gain entry into a computer by way of an email
attachment. When a user opens the attachment, the embedded code executes,
allowing the malware to install itself. Another route is through
instant-messaging programs that create connections between computers.
When the code is executed, the virus goes to work infecting file after file.
The virus designer will ultimately gain control over the computer and
will be able to access all available files. An infected computer will begin
to operate slowly and will exhibit pop-ups from time to time. Eventually
the computer will cease to operate, or crash.
Antivirus software is a good way to protect your computer from Trojan
horse and other types of viruses. Regular updates to this software are the
key to preventing viruses. It is also good practice to delete emails from
unknown sources without opening them.
Once a computer is infected, the virus is extremely difficult to eradicate.
The best protection is prevention.
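One precaution mentioned above, treating email attachments with suspicion, can be sketched as a simple filter. The extension list below is an illustrative assumption, not a complete defense:

```python
# Flag email attachments whose final extension marks them as executable,
# a common delivery vehicle for Trojan horses.
# The extension list is an illustrative assumption, not exhaustive.

RISKY_EXTENSIONS = {".exe", ".scr", ".vbs", ".js", ".bat", ".com", ".pif"}

def is_risky_attachment(filename: str) -> bool:
    """Return True if the attachment name ends in an executable extension."""
    name = filename.lower()
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)

# Double extensions such as "LOVE-LETTER-FOR-YOU.TXT.vbs" still end in a
# risky extension, so this simple check catches them too.
for f in ["report.pdf", "invoice.exe", "LOVE-LETTER-FOR-YOU.TXT.vbs"]:
    print(f, "->", "risky" if is_risky_attachment(f) else "ok")
```

A filter like this only screens by name; it complements, but does not replace, antivirus software.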
I Love You Virus
One well-known example was the "I Love You" virus, which infected
millions of computers worldwide. The virus exploited users through a
Visual Basic script attached to an email. It infected computers in the
United States, Asia and Europe, forcing businesses to completely shut
down email servers.
Computer worms are similar to viruses in that they replicate functional
copies of themselves and can cause the same type of damage. In contrast
to viruses, which require the spreading of an infected host file, worms
are standalone software and do not require a host program or human
help to propagate. To spread, worms either exploit a vulnerability on the
target system or use some kind of social engineering to trick users into
executing them. A worm enters a computer through a vulnerability in the
system and takes advantage of file-transport or information-transport
features on the system, allowing it to travel unaided.
Because the worm travels inside the infected file, the entire document
moving from computer to computer should be considered the worm.
PrettyPark.Worm is a particularly prevalent example.
A computer virus is a type of malware that propagates by inserting a
copy of itself into and becoming part of another program. It spreads from
one computer to another, leaving infections as it travels. Viruses can
range in severity from causing mildly annoying effects to damaging data
or software and causing denial-of-service (DoS) conditions. Almost all
viruses are attached to an executable file, which means the virus may
exist on a system but will not be active or able to spread until a user runs
or opens the malicious host file or program. When the host code is
executed, the viral code is executed as well. Normally, the host program
keeps functioning after it is infected by the virus. However, some viruses
overwrite other programs with copies of themselves, which destroys the
host program altogether.
Viruses spread when the software or document they are attached to is
transferred from one computer to another using the network, a disk, file
sharing, or infected e-mail attachments.
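Because a virus rides inside the files it infects, antivirus software can recognize it by a known byte pattern, or "signature". A toy sketch of this idea in Python, using whole-file SHA-256 hashes in place of real signatures (the hash entry below is a made-up placeholder; real engines match byte patterns and heuristics, not whole-file hashes):

```python
import hashlib

# Toy signature scanner: hash a file's contents and compare against a set
# of known-bad hashes. The entry below is a made-up placeholder; real
# antivirus engines match byte patterns and heuristics, not file hashes.

KNOWN_BAD = {
    "0" * 64,  # hypothetical signature of a known virus
}

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def is_infected(data: bytes) -> bool:
    """Flag file contents whose hash matches a known-bad signature."""
    return sha256_of(data) in KNOWN_BAD

clean = b"Just an ordinary document."
print(is_infected(clean))  # False: hash is not in the signature set
```

A scanner like this is only as good as its signature set, which is why regular antivirus updates matter.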
Examples of macro viruses include W97M.Melissa and WM.NiceDay.
Computer virus is a major threat to computer security
Computer security threats are relentlessly inventive. Masters of disguise and
manipulation, these threats constantly evolve to find new ways to annoy, steal and
harm. Arm yourself with information and resources to safeguard against complex
and growing computer security threats and stay safe online.
Computer Virus Threats
Perhaps the most well-known computer security threat, a computer virus is a
program written to alter the way a computer operates, without the permission or
knowledge of the user. A virus replicates and executes itself, usually doing damage
to your computer in the process. Learn how to combat computer virus threats and
stay safe online.
Spyware Threats
A serious computer security threat, spyware is any program that monitors your
online activities or installs programs without your consent for profit or to capture
personal information. Knowing how spyware operates will help you combat
spyware threats and stay safe online.
Hackers & Predators
People, not computers, create computer security threats and malware. Hackers and
predators are programmers who victimize others for their own gain by breaking
into computer systems to steal, change or destroy information as a form of
cyber-terrorism. What scams are they using lately? Learn how to combat
dangerous malware and stay safe online.
Phishing Threats
Masquerading as a trustworthy person or business, phishers attempt to steal
sensitive financial or personal information through fraudulent email or instant
messages. How can you tell the difference between a legitimate message and a
phishing scam? Educate yourself on the latest tricks and scams.
