


A mathematical model is a description of a system using mathematical concepts and
language. The process of developing a mathematical model is termed mathematical modeling.
Mathematical models are used not only in the natural sciences (such as physics, biology, earth
science, meteorology) and engineering disciplines (e.g. computer science, artificial intelligence),
but also in the social sciences (such as economics, psychology, sociology and political science);
physicists, engineers, statisticians, operations research analysts and economists use mathematical
models most extensively. A model may help to explain a system and to study the effects of
different components, and to make predictions about behavior.
Mathematical models can take many forms, including but not limited to dynamical systems,
statistical models, differential equations, or game theoretic models. These and other types of
models can overlap, with a given model involving a variety of abstract structures. In general,
mathematical models may include logical models, as far as logic is taken as a part of
mathematics. In many cases, the quality of a scientific field depends on how well the
mathematical models developed on the theoretical side agree with results of repeatable
experiments. Lack of agreement between theoretical mathematical models and experimental
measurements often leads to important advances as better theories are developed.

Examples of mathematical models

Many everyday activities carried out without a thought are uses of mathematical models.
A geographical map projection of a region of the earth onto a small, plane surface is a
model which can be used for many purposes such as planning travel.

Another simple activity is predicting the position of a vehicle from its initial position,
direction and speed of travel, using the equation that distance travelled is the product of
time and speed. This is known as dead reckoning when used more formally.
Mathematical modeling in this way does not necessarily require formal mathematics;
animals have been shown to use dead reckoning.
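As a sketch, dead reckoning with constant speed and heading reduces to distance = speed × time, resolved into coordinate components. The function name and the heading convention (0 degrees = east, counter-clockwise) are illustrative choices, not part of the text:

```python
import math

def dead_reckon(x0, y0, heading_deg, speed, elapsed):
    """Estimate position from initial position, heading, speed, and elapsed time.

    Uses distance = speed * time, resolved into x/y components.
    heading_deg: 0 = east, measured counter-clockwise.
    """
    distance = speed * elapsed
    rad = math.radians(heading_deg)
    return (x0 + distance * math.cos(rad),
            y0 + distance * math.sin(rad))

# A vehicle starting at the origin, heading due east at 50 km/h for 2 hours:
print(dead_reckon(0.0, 0.0, 0.0, 50.0, 2.0))  # (100.0, 0.0)
```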

Population Growth. A simple model of population growth is the Malthusian growth

model. A slightly more realistic and widely used population growth model is the logistic
function and its extensions.
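A minimal sketch contrasting the two models via their closed-form solutions: the Malthusian model dP/dt = rP gives P(t) = P0·e^(rt), while the logistic model dP/dt = rP(1 − P/K) saturates at the carrying capacity K. The parameter values below are illustrative:

```python
import math

def malthusian(p0, r, t):
    """Malthusian model: dP/dt = r*P, so P(t) = P0 * exp(r*t)."""
    return p0 * math.exp(r * t)

def logistic(p0, r, K, t):
    """Logistic model: dP/dt = r*P*(1 - P/K); closed-form solution."""
    return K / (1.0 + (K / p0 - 1.0) * math.exp(-r * t))

# With r = 0.1 and carrying capacity K = 1000, the Malthusian model grows
# without bound while the logistic model levels off near K.
for t in (0, 50, 100):
    print(t, round(malthusian(10, 0.1, t)), round(logistic(10, 0.1, 1000, t)))
```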

Model of a particle in a potential field. In this model we consider a particle as a
point mass which describes a trajectory in space, modeled by a function giving
its coordinates in space as a function of time. The potential field is given by a function V :
R³ → R and the trajectory x(t) is a solution of the differential equation

    m d²x/dt²(t) = −∇V(x(t))


Note this model assumes the particle is a point mass, which is certainly known to be false
in many cases in which we use this model; for example, as a model of planetary motion.
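Assuming the standard equation of motion m·d²x/dt² = −∇V(x(t)), the trajectory can be integrated numerically. This sketch uses a one-dimensional harmonic potential V(x) = ½kx² for illustration and a velocity-Verlet scheme; none of these specific choices come from the text:

```python
def simulate(x0, v0, mass, grad_V, dt, steps):
    """Integrate m x'' = -grad V(x) with the velocity-Verlet scheme."""
    x, v = x0, v0
    a = -grad_V(x) / mass
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = -grad_V(x) / mass            # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update
        a = a_new
    return x, v

# Harmonic potential V(x) = 0.5 * k * x^2, so grad V = k * x; the particle
# oscillates with period 2*pi for k = m = 1.
k = 1.0
x_end, v_end = simulate(1.0, 0.0, 1.0, lambda x: k * x, 0.001, 6283)  # ~one period
print(round(x_end, 2), round(v_end, 2))
```

After integrating for roughly one period the particle returns close to its starting state, which is a quick sanity check on both the model and the integrator.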

Model of rational behavior for a consumer. In this model we assume a consumer faces a
choice of n commodities labeled 1,2,...,n each with a market price p1, p2,..., pn. The
consumer is assumed to have a cardinal utility function U (cardinal in the sense that it
assigns numerical values to utilities), depending on the amounts of commodities x1, x2,...,
xn consumed. The model further assumes that the consumer has a budget M which is used
to purchase a vector x1, x2,..., xn in such a way as to maximize U(x1, x2,..., xn). The problem
of rational behavior in this model then becomes an optimization problem:

    maximize U(x1, x2, ..., xn)

    subject to: p1·x1 + p2·x2 + ... + pn·xn ≤ M

This model has been used in general equilibrium theory, particularly to show the
existence and Pareto efficiency of economic equilibria. The fact that this particular
formulation assigns numerical values to levels of satisfaction has drawn criticism
(and even ridicule); however, it is not an essential ingredient of the theory, and again this
is an idealization.
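The text does not fix a particular utility function. As an illustration, for a Cobb-Douglas utility U(x) = Π xᵢ^aᵢ with Σaᵢ = 1, the constrained optimum has the closed form xᵢ = aᵢ·M/pᵢ, i.e. the consumer spends the fraction aᵢ of the budget on good i. The prices, shares, and budget below are made-up numbers:

```python
def cobb_douglas_demand(prices, shares, budget):
    """Optimal bundle for U(x) = prod x_i^a_i with sum(a_i) = 1:
    the consumer spends the fraction a_i of the budget M on good i,
    so x_i = a_i * M / p_i."""
    assert abs(sum(shares) - 1.0) < 1e-9
    return [a * budget / p for a, p in zip(shares, prices)]

prices = [2.0, 5.0]
shares = [0.4, 0.6]   # utility exponents (assumed for illustration)
bundle = cobb_douglas_demand(prices, shares, 100.0)
print(bundle)         # [20.0, 12.0]
spent = sum(p * x for p, x in zip(prices, bundle))
print(spent)          # the optimum exhausts the budget: 100.0
```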

The neighbour-sensing model explains mushroom formation from an initially chaotic
fungal network.

Computer science: models in computer networks, data models, surface models.

Mechanics: model of the movement of a rocket.

Some applications
A mathematical model usually describes a system by a set of variables and a set of equations that
establish relationships between the variables. Variables may be of many types: real or integer
numbers, boolean values or strings, for example. The variables represent some properties of the
system, for example, measured system outputs often in the form of signals, timing data, counters,
and event occurrence (yes/no). The actual model is the set of functions that describe the relations
between the different variables.

Building blocks



There are six basic groups of variables, namely: decision variables, input variables, state
variables, exogenous variables, random variables, and output variables. Since there can be many
variables of each type, the variables are generally represented by vectors.
Decision variables are sometimes known as independent variables. Exogenous variables are
sometimes known as parameters or constants. The variables are not independent of each other as
the state variables are dependent on the decision, input, random, and exogenous variables.
Furthermore, the output variables are dependent on the state of the system (represented by the
state variables).
Objectives and constraints of the system and its users can be represented as functions of the
output variables or state variables. The objective functions will depend on the perspective of the
model's user. Depending on the context, an objective function is also known as an index of
performance, as it is some measure of interest to the user. Although there is no limit to the
number of objective functions and constraints a model can have, using or optimizing the model
becomes more involved (computationally) as the number increases.

Classifying mathematical models

Many mathematical models can be classified in some of the following ways:
1. Linear vs. nonlinear: Mathematical models are usually composed of variables, which
are abstractions of quantities of interest in the described systems, and operators that act
on these variables, which can be algebraic operators, functions, differential operators, etc.
If all the operators in a mathematical model exhibit linearity, the resulting mathematical
model is defined as linear. A model is considered to be nonlinear otherwise.
The question of linearity and nonlinearity is dependent on context, and linear models may
have nonlinear expressions in them. For example, in a statistical linear model, it is
assumed that a relationship is linear in the parameters, but it may be nonlinear in the
predictor variables. Similarly, a differential equation is said to be linear if it can be
written with linear differential operators, but it can still have nonlinear expressions in it.
In a mathematical programming model, if the objective functions and constraints are
represented entirely by linear equations, then the model is regarded as a linear model. If
one or more of the objective functions or constraints are represented with a nonlinear
equation, then the model is regarded as a nonlinear model.
Nonlinearity, even in fairly simple systems, is often associated with phenomena such as
chaos and irreversibility. Although there are exceptions, nonlinear systems and models
tend to be more difficult to study than linear ones. A common approach to nonlinear
problems is linearization, but this can be problematic if one is trying to study aspects such
as irreversibility, which are strongly tied to nonlinearity.
2. Deterministic vs. probabilistic (stochastic): A deterministic model is one in which
every set of variable states is uniquely determined by parameters in the model and by sets



of previous states of these variables. Therefore, deterministic models perform the same
way for a given set of initial conditions. Conversely, in a stochastic model, randomness is
present, and variable states are not described by unique values, but rather by probability
distributions.
3. Static vs. dynamic: A static model does not account for the element of time, while a
dynamic model does. Dynamic models typically are represented with difference
equations or differential equations.
4. Discrete vs. continuous: A discrete model treats time in separate steps, usually
advancing from event to event (time-advance methods), while a continuous model tracks
the system over continuous time intervals, typically as a function f(t) whose changes are
reflected continuously.
5. Deductive, inductive, or floating: A deductive model is a logical structure based on a
theory. An inductive model arises from empirical findings and generalization from them.
The floating model rests on neither theory nor observation, but is merely the invocation
of expected structure. Application of mathematics in social sciences outside of economics
has been criticized for unfounded models.[4] Application of catastrophe theory in science
has been characterized as a floating model.[5]
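The point in item 1, that a model can be linear in its parameters while nonlinear in its predictors, can be made concrete: fitting y = a + b·x² is still a linear estimation problem, because the normal equations are linear in a and b. A minimal sketch (the data are made up):

```python
def fit_linear_in_params(xs, ys):
    """Least-squares fit of y = a + b * x**2.

    The model is nonlinear in the predictor x but linear in the
    parameters (a, b), so it is a *linear* statistical model with a
    closed-form solution via the normal equations.
    """
    n = len(xs)
    zs = [x * x for x in xs]          # transformed predictor z = x^2
    sz = sum(zs); sy = sum(ys)
    szz = sum(z * z for z in zs)
    szy = sum(z * y for z, y in zip(zs, ys))
    b = (n * szy - sz * sy) / (n * szz - sz * sz)
    a = (sy - b * sz) / n
    return a, b

# Data generated exactly from y = 3 + 2*x^2 is recovered exactly:
xs = [0.0, 1.0, 2.0, 3.0]
ys = [3.0 + 2.0 * x * x for x in xs]
print(fit_linear_in_params(xs, ys))   # (3.0, 2.0)
```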
Mathematical Relations
Consider two sets A = {x, y} and B = {2, 4, 6}. The Cartesian product of these sets, A × B, is
the set of ordered pairs whose first element belongs to set A and whose second element belongs
to set B, as shown below:
A × B = {(x, 2), (x, 4), (x, 6), (y, 2), (y, 4), (y, 6)}
A relation is some subset of this Cartesian product. Applying the same concept to a real-world
scenario, consider two sets Name and Age having the elements:
Name = {Ali, Sana, Ahmed, Sara}
Age = {15, 16, 17, 18, ..., 25}
Now consider a subset CLASS of the Cartesian product
CLASS = {(Ali, 18), (Sana, 17), (Ali, 20), (Ahmed, 19)}
This subset CLASS is, mathematically, a relation.
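The construction above maps directly onto Python's itertools.product, a sketch:

```python
from itertools import product

A = {"x", "y"}
B = {2, 4, 6}

cartesian = set(product(A, B))   # all ordered pairs (a, b)
print(len(cartesian))            # 6 pairs, |A| * |B|

# A relation is any subset of the Cartesian product:
Name = {"Ali", "Sana", "Ahmed", "Sara"}
Age = set(range(15, 26))
CLASS = {("Ali", 18), ("Sana", 17), ("Ali", 20), ("Ahmed", 19)}
assert CLASS <= set(product(Name, Age))  # CLASS is a subset, hence a relation
```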





Probabilistic graphical models are graphs in which nodes represent random variables, and the
(lack of) arcs represent conditional independence assumptions. Hence they provide a compact
representation of joint probability distributions. Undirected graphical models, also called Markov
Random Fields (MRFs) or Markov networks, have a simple definition of independence: two (sets
of) nodes A and B are conditionally independent given a third set, C, if all paths between the
nodes in A and B are separated by a node in C. By contrast, directed graphical models also called
Bayesian Networks or Belief Networks (BNs), have a more complicated notion of independence,
which takes into account the directionality of the arcs, as we explain below.
Undirected graphical models are more popular with the physics and vision communities, and
directed models are more popular with the AI and statistics communities. (It is possible to have a
model with both directed and undirected arcs, which is called a chain graph.) For a careful study
of the relationship between directed and undirected graphical models, see the books by Pearl88,
Whittaker90, and Lauritzen96.
Although directed models have a more complicated notion of independence than undirected
models, they do have several advantages. The most important is that one can regard an arc from
A to B as indicating that A ``causes'' B. This can be used as a guide to construct the graph
structure. In addition, directed models can encode deterministic relationships, and are easier to
learn (fit to data). In the rest of this tutorial, we will only discuss directed graphical models, i.e.,
Bayesian networks.
In addition to the graph structure, it is necessary to specify the parameters of the model. For a
directed model, we must specify the Conditional Probability Distribution (CPD) at each node. If
the variables are discrete, this can be represented as a table (CPT), which lists the probability that
the child node takes on each of its different values for each combination of values of its parents.
Consider the following example, in which all nodes are binary, i.e., have two possible values,
which we will denote by T (true) and F (false).




We see that the event "grass is wet" (W=true) has two possible causes: either the water sprinkler
is on (S=true) or it is raining (R=true). The strength of this relationship is shown in the table. For
example, we see that Pr(W=true | S=true, R=false) = 0.9 (second row), and hence, Pr(W=false |
S=true, R=false) = 1 - 0.9 = 0.1, since each row must sum to one. Since the C node has no
parents, its CPT specifies the prior probability that it is cloudy (in this case, 0.5). (Think of C as
representing the season: if it is a cloudy season, it is less likely that the sprinkler is on and more
likely that it is raining.)
The simplest conditional independence relationship encoded in a Bayesian network can be stated
as follows: a node is independent of its ancestors given its parents, where the ancestor/parent
relationship is with respect to some fixed topological ordering of the nodes.
By the chain rule of probability, the joint probability of all the nodes in the graph above is
P(C, S, R, W) = P(C) * P(S|C) * P(R|C,S) * P(W|C,S,R)

By using conditional independence relationships, we can rewrite this as

P(C, S, R, W) = P(C) * P(S|C) * P(R|C)

* P(W|S,R)

where we were allowed to simplify the third term because R is independent of S given its parent
C, and the last term because W is independent of C given its parents S and R.
We can see that the conditional independence relationships allow us to represent the joint more
compactly. Here the savings are minimal, but in general, if we had n binary nodes, the full joint



would require O(2^n) space to represent, but the factored form would require O(n · 2^k) space to
represent, where k is the maximum fan-in of a node. And fewer parameters make learning easier.
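The factored joint P(C) · P(S|C) · P(R|C) · P(W|S,R) can be checked numerically. Only P(C=true) = 0.5 and Pr(W=true | S=true, R=false) = 0.9 are stated in the text; the remaining CPT entries below are the values commonly used for this textbook sprinkler example and should be treated as illustrative:

```python
from itertools import product

P_C = {True: 0.5, False: 0.5}                    # prior P(C)
P_S_given_C = {True: 0.1, False: 0.5}            # P(S=T | C)
P_R_given_C = {True: 0.8, False: 0.2}            # P(R=T | C)
P_W_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.9, (False, False): 0.0}  # P(W=T | S, R)

def bernoulli(p_true, value):
    """Probability of a binary variable taking `value` given P(value=True)."""
    return p_true if value else 1.0 - p_true

def joint(c, s, r, w):
    """Factored joint: P(C) * P(S|C) * P(R|C) * P(W|S,R)."""
    return (bernoulli(P_C[True], c)
            * bernoulli(P_S_given_C[c], s)
            * bernoulli(P_R_given_C[c], r)
            * bernoulli(P_W_given_SR[(s, r)], w))

total = sum(joint(*vals) for vals in product([True, False], repeat=4))
print(round(total, 10))   # a valid distribution sums to 1.0
```

This uses 1 + 2 + 2 + 4 = 9 parameters instead of the 2^4 - 1 = 15 a full joint table would need, which is the space saving the text describes.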
Tree structure
A tree structure is a way of representing the hierarchical nature of a structure in a graphical
form. It is named a "tree structure" because the classic representation resembles a tree, even
though the chart is generally upside down compared to an actual tree, with the "root" at the top
and the "leaves" at the bottom.
A tree structure is conceptual, and appears in several forms. For a discussion of tree structures in
specific fields, see Tree (data structure) for computer science; insofar as it relates to graph theory,
see tree (graph theory) or tree (set theory). Other related pages are listed below.
Classical node-link diagrams connect nodes together with line segments.



A tree is an undirected simple graph G that satisfies any of the following equivalent conditions:

G is connected and has no cycles.

G has no cycles, and a simple cycle is formed if any edge is added to G.

G is connected, but is not connected if any single edge is removed from G.

G is connected and the 3-vertex complete graph K3 is not a minor of G.

Any two vertices in G can be connected by a unique simple path.

If G has finitely many vertices, say n of them, then the above statements are also equivalent to
any of the following conditions:

G is connected and has n - 1 edges.

G has no simple cycles and has n - 1 edges.

EXAMPLE: Organizational chart for company




The example tree shown to the right has 6 vertices and 6 - 1 = 5 edges. The unique simple path
connecting the vertices 2 and 6 is 2-4-5-6.
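The finite-graph characterization above (connected, with n - 1 edges and no cycles) can be checked with a short sketch. Since the figure is not reproduced here, the edge list for the six-vertex example is an assumption for illustration:

```python
def is_tree(n, edges):
    """A graph on n vertices is a tree iff it has n - 1 edges and no cycle.

    Cycle detection uses union-find: merging two vertices already in the
    same component would close a cycle.
    """
    if len(edges) != n - 1:
        return False
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # this edge would create a cycle
        parent[ru] = rv
    return True

# A 6-vertex tree (0-indexed, edges assumed for illustration):
print(is_tree(6, [(0, 1), (1, 3), (3, 4), (4, 5), (2, 3)]))  # True
print(is_tree(3, [(0, 1), (1, 2), (2, 0)]))                  # False: a cycle
```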

Hierarchical database model

A hierarchical database model is a data model in which the data is organized into a tree-like
structure. The structure allows representing information using parent/child relationships: each
parent can have many children, but each child has only one parent (also known as a 1-to-many
relationship). All attributes of a specific record are listed under an entity type.
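A minimal sketch of the parent/child idea: each record points to exactly one parent, giving the 1-to-many relationship described above. The record names are hypothetical:

```python
# Each record stores a single "parent" pointer; None marks the root.
records = {
    "company":    {"parent": None},
    "sales":      {"parent": "company"},
    "production": {"parent": "company"},
    "region_a":   {"parent": "sales"},
    "region_b":   {"parent": "sales"},
}

def children(name):
    """Each parent can have many children (the 'many' side)."""
    return [r for r, rec in records.items() if rec["parent"] == name]

def path_to_root(name):
    """Each child has exactly one parent, so the path to the root is unique."""
    path = [name]
    while records[name]["parent"] is not None:
        name = records[name]["parent"]
        path.append(name)
    return path

print(children("sales"))          # ['region_a', 'region_b']
print(path_to_root("region_a"))   # ['region_a', 'sales', 'company']
```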




Organizational Information Flow

Information flows in an organization in two ways:
1. Vertically - Flow up and down among managers
Example: Production supervisors constantly communicate with production-line
workers and their own managers.
2. Horizontally - Flow sideways among departments
Example: Regional sales managers from the marketing department set their sales goals by
coordinating with production managers in the production department.

Organizational Functions
Most organizations have departments that perform five basic functions:
1. Accounting - Keep track of all financial activities.
2. Production - Makes company product.
3. Marketing - Advertises, promotes, and sells the product.
4. Human Resources - Finds and hires people and handles personnel matters.
5. Research - Does product research and relates new discoveries to the firm's current or

new products.




Management Levels
There are three management levels in most organizations:
1. Supervisors
A. Manage and monitor the employees or workers.
B. Responsible for operational matters (day-to-day operations).
C. Example: production supervisor monitors materials needed to build a product.
2. Middle Management
A. Deal with control planning, tactical planning, and decision-making.
B. Implement long-term goals of the organization.
C. Example: regional sales manager sets sales goals for sales in several states.
3. Top Management



A. Concerned with long-range planning (strategic planning)

B. Need information to help them plan future growth and direction of the organization.
C. Example: vice president of marketing determines demand for current products and
sales strategies for new products.

Information flow
a. Information must flow in different directions to support the different information needs
of management.




b. Each level of management has different information needs.

1. Strategic Needs of Top-level managers
A. Information that reveals overall condition of the business in capsule form.
B. Information from all departments below and from outside the organization.
C. Information to plan for long-range events.
D. Example: planning for new facilities
2. Tactical Needs of Middle-level managers
A. Summarized information (weekly or monthly reports).
B. Information both horizontal and vertical across functional lines within the organization.
C. Historical, internal information to develop budgets and evaluate performance.
D. Example: developing production goals, concurring with top-level
managers and supervisors
3. Operational Needs of Supervisors



A. Detailed current day-to-day information.

B. Information flow is primarily vertical.
C. Communicate mainly with middle managers and workers beneath them.
D. Day-to-day internal information to keep operations running smoothly.
E. Example: monitoring current supplies, current inventory, and production levels.

Strategy Analysis: Process Flow

Understanding process flow within your organization allows for a greater visibility into how
things actually get done across different jobs and departments. It makes clear which activities
have "always been done" but aren't really adding any real value to your customers. And it
exposes things that you should be doing to help move your organization forward in its goals and objectives.




For some organizations, their primary resources are raw materials, and physical assets like
plant and equipment. They use their equipment to process raw materials for sale to
customers. At each step of the way as a piece of material or part goes down their
assembly line, there is some work that is done. Maybe connecting two parts together,
maybe polishing, maybe sorting. Smart companies analyze every step of the process and
ask themselves, "Is this activity adding value for the end customer?" Some call this "value
mapping", others call it "value stream engineering". Whatever the fancy term, the idea
was pioneered by Japanese manufacturing and companies like Toyota. To better
understand process flow, and implement process flow improvements in your organization,
there are three simple questions you must ask:
What is the real value that we provide our customers?
The best organizations are constantly asking their customers what they like and DON'T
like about the products and services they purchase. They are asking their customers what
else they are wanting. And most importantly, they are thinking through their customers
business and creatively finding new ways of solving problems for them. Ironically, this
means at times they decide not to service a customer if it means doing something that is
outside their core competency of what they can do excellently while still making money.
What activities should we eliminate that aren't adding value?
Once you identify which products and services that are creating real value for your
members, you then work backward down the value chain ending at the beginning of each
process, analyzing each point where there is some sort of "processing" going on. One tool
to accomplish this is to create a Process Flow diagram with swim lines. The process flow
describes the end value added product or service and then documents every step within the
organization where someone is involved in this process. Each process sits in a swim line;
each swim line represents a person (or maybe group) that is responsible for accomplishing
that particular work. So, if you were analyzing your event registration process, you would
document every place information flows through the organization to the end customer, either
through paper or digital means. Once you have created your process flow diagram,
you should have some pretty clear wasteful activities that you can get rid of. For Toyota,
and lean manufacturing companies, they have identified seven wasteful activities,
and they work hard to eliminate them from any process. The Toyota production
system identifies seven kinds of waste: overproduction, delay, transport, extra
processing, extra inventory, wasted motions, and making defective parts.[2] In an
office environment there are very similar wastes. You might have a customer waiting on




the phone to register. You might have a paper registration moving across two desks, and
then into a spreadsheet. You might find out that people are creating confirmation letters
manually. Don't assume anything as you do your analysis, really get the facts of how
people do their job.
How can software aid in adding value and eliminating waste?
A simple example of waste we see over and over in associations is that of duplicate
data entry. Sometimes this is caused by multiple software systems; other times it is
caused by a "double checking" process where one person does the work and another
redoes the work just to make sure it is correct. This activity leads to two potential wastes.
First is extra processing, because two people are doing the same thing. The second is
rework, where you have inaccurate data that needs to be corrected because the data is
being "touched" by too many people. These activities provide no value to your member,
and are not something that they would be willing to pay extra for in their membership.
You can't add a line item to their dues that says, "Added data entry".
It is important to note that good process flow organization must supersede software
implementation. Smart organizations realize that spending money on software and
computers to aid a broken process only makes things "break faster". Associations need to
learn this lesson as well. The number one reason for software implementation failure is
feature creep, which is the process of software trying to do too much. You end up with
software that is too expensive and is too complicated. If you have done good process
mapping in your organization, chances are you have already been putting your people
through changes that help them do their job better. Training people is much cheaper than
changing software to fit what you think you need. And many times what you think you
need is not what you actually need. A good honest process flow assessment ensures your
activities are really adding value to your members.




Process flow mapping

Many organizations, both large and small, carry on passing fragments of important
policies and processes to employees through one-off emails and the like. Process clarity
is one of the three key focuses in organizational design, along with people and technology.
With haphazard process management common in many companies, it is no wonder that
employees struggle to do a good job. Whether an invoice is processed, a customer complaint
managed, or an engineering drawing approved in many organizations often depends more on
who does it and what day of the week it is than on sound business reasoning. Where clarity
of process is lacking, personal idiosyncrasies and political maneuvering take over.
In addition, research indicates that less than 20% of product defects and service problems are
attributable to random factors, such as malicious employees, breakdown of machinery and raw
materials. The other 80% or more of the problems are attributable to systemic weaknesses in
processes. Thus, even though mapping business processes is relatively simple to do and does
not need expensive capital expenditures, it pays huge dividends in efficiency and employee
commitment. If you are thinking about mapping your processes, here are key pointers to keep
in mind:
1. Involve employees who actually do the work in mapping.
Employees who do the actual work are in the best position to know the detailed steps in each
process. They are also more familiar with the common roadblocks and bottlenecks, and with
the key contacts in your organization for getting things done. Ask the employees before
inviting them to join process-mapping teams. Keep managers and supervisors out of
process-mapping sessions, as they have a tendency to dominate the sessions with their own
experience.
2. Identify the process start and end activities.
For each process, clearly identify the start and the end. If the team ignores this important step
at the beginning of each mapping session, then in the enthusiasm of the team, redundant
activities will quickly creep into the picture until the process becomes unmanageable. Think
of an activity that triggers the process, such as an invoice appearing in a tray: this is the
beginning. Then think of the last activity, for example posting an item to the General Ledger.
3. Identify the objective of the process and its inputs and outputs.
This is where the work begins to take on a new meaning for employees. The team leader
should ask employees why they do each piece of work and what the expected results of each
process are. Not only does this help to focus attention on removing non-value-adding
activities, it also gives employees a sense of purpose in their working lives.
Asking teams to identify the inputs to the process and the outputs it provides will help to
clarify what must happen before the process can start, and what the next process needs before
it can begin. For example, agreeing that widget assembly cannot start until the connecting
screws are provided will avoid a whole lot of idle work in progress.





Heuristic Evaluation

Also called: Heuristic Review, Discount Usability Engineering, Usability

Evaluation, User Interface Inspection, Expert Review

Lifecycle stages: All

A usability evaluation method in which one or more reviewers, preferably experts, compare a
software, documentation, or hardware product to a list of design principles (commonly referred
to as heuristics) and list where the product does not follow those principles.

Heuristic evaluation falls within the category of usability engineering methods known as
Discount Usability Engineering (Nielsen, 1989). The primary benefits of these methods are that
they are less expensive than other types of usability engineering methods and they require fewer
resources (Nielsen, 1989). The beneficiaries are the stakeholders responsible for producing the
product: it costs less money to perform a heuristic evaluation than other forms of usability
evaluation, and this will reduce the cost of the project. Of course, the users benefit from a more
usable product.





Inexpensive relative to other evaluation methods (Nielsen & Molich, 1990).

Intuitive, and easy to motivate potential evaluators to use the method (Nielsen & Molich, 1990).

Advanced planning not required (Nielsen & Molich, 1990).

Evaluators do not have to have formal usability training. In their study, Nielsen and
Molich used professional computer programmers and computer science students (Nielsen
& Molich, 1990; Nielsen, 1992).

Can be used early in the development process (Nielsen & Molich, 1990).

Faster turnaround time than laboratory testing (Kantner & Rosenbaum, 1997).


As originally proposed by Nielsen and Molich, the evaluators would have

knowledge of usability design principles, but were not usability experts
(Nielsen & Molich, 1990). However, Nielsen subsequently showed that
usability experts would identify more issues than non-experts, and that
"double experts" (usability experts who also had expertise with the type of
interface or the domain being evaluated) identified the most issues (Nielsen, 1992).
Such double experts may be hard to come by, especially for small companies
(Nielsen, 1992).

Individual evaluators identify a relatively small number of usability issues

(Nielsen & Molich, 1990). Multiple evaluators are recommended since a single
expert is likely to find only a small percentage of problems. The results from
multiple evaluators must be aggregated. (Nielsen & Molich, 1990).

Heuristic evaluations and other discount methods may not identify as many
usability issues as other usability engineering methods, for example, usability
testing. (Nielsen, 1989).

Heuristic evaluation may identify more minor issues and fewer major issues
than would be identified in a think-aloud usability test (Jeffries and Desurvire).

Heuristic reviews may not scale well for complex interfaces (Slavkovic &
Cross, 1999). In complex interfaces, a small number of evaluators may not
find a majority of the problems in an interface and may miss some serious problems.

Does not always readily suggest solutions for usability issues that are
identified (Nielsen & Molich).

Biased by the preconceptions of the evaluators (Nielsen & Molich, 1990).




As a rule, the method will not create eureka moments in the design process
(Nielsen & Molich, 1990).

In heuristic evaluations, the evaluators only emulate the users; they are not
the users themselves. Actual user feedback can only be obtained from
laboratory testing (Kantner and Rosenbaum, 1997) or by involving users in
the heuristic evaluation (Muller, Matheson, Page, & Gallup, 1995).

Heuristic evaluations may be prone to reporting false alarms: problems that
are reported but are not actual usability problems in application (Jeffries).

Appropriate Uses
Heuristic evaluation can be used throughout the design life cycle at any point where it is
desirable to evaluate the usability of a product or product component. Of course, the closer the
evaluation is to the end of the design lifecycle, the more it is like traditional quality assurance
and further from usability evaluation. So, as a matter of practicality, if the method is going to
have an impact on the design of the interface (i.e. the usability issues are to be resolved before
release) the earlier in the lifecycle the review takes place the better. Specifically, heuristic
reviews can be used as part of requirements gathering (to evaluate the usability of the
current/early versions of the interface), competitive analysis (to evaluate your competitors to find
their strengths and weaknesses) and prototyping (to evaluate versions of the interface as the
design evolves).
Nielsen and Molich described heuristic evaluation as an informal method of usability analysis
in which a number of evaluators are presented with an interface design and asked to comment on
it (Nielsen & Molich, 1990). In this paper, they presented nine usability heuristics:

Simple and natural dialog

Speak the users' language

Minimize user memory load

Be consistent

Provide feedback

Provide clearly marked exits

Provide shortcuts

Good error messages

Prevent errors

Heuristic evaluation is not limited to one of the published lists of heuristics. The list of heuristics
can be as long as the evaluators deem appropriate for the task at hand. For example, you can
develop a specialized list of heuristics for specific audiences, like senior citizens, children, or
disabled users, based on a review of the literature.

1. Decide which aspects of a product and what tasks you want to review. For
most products, you cannot review the entire user interface so you need to
consider what type of coverage will provide the most value.
2. Decide which heuristics will be used.
3. Select a team of three to five evaluators (you can have more, but the time to
aggregate and interpret the results will increase substantially) and give them
some basic training on the principles and process.
4. Create a list of representative tasks for the application or component you are
evaluating. You might also describe the primary and secondary users of your
product if the team is not familiar with the users.
5. Ask each evaluator to perform the representative tasks individually and list
where the product violates one or more heuristics. After the evaluators work
through the tasks, they are asked to review any other user interface objects
that were not involved directly in the tasks and note violations of heuristics.
You may also ask evaluators to rate how serious the violations would be from
the users' perspective.
6. Compile the individual evaluations and ratings of seriousness.
7. Categorize and report the findings so they can be presented effectively to the
product team.
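Steps 5 through 7 above (recording violations, rating their seriousness, and compiling the individual evaluations) can be sketched as a small script. This is an illustrative sketch only; the field names and the 0-4 severity scale are assumptions, not a prescribed part of the method:

```python
from collections import defaultdict

# Each evaluator reports (heuristic_violated, description, severity).
# Severity scale assumed here: 0 = not a problem ... 4 = usability catastrophe.
reports = [
    ("Provide feedback", "No progress indicator on upload", 3),
    ("Provide feedback", "No progress indicator on upload", 2),
    ("Good error messages", "Error 27 does not say how to recover", 4),
]

def compile_findings(reports):
    """Group duplicate findings and average the severity ratings."""
    grouped = defaultdict(list)
    for heuristic, description, severity in reports:
        grouped[(heuristic, description)].append(severity)
    findings = [
        {"heuristic": h, "description": d,
         "evaluators": len(sevs),
         "mean_severity": sum(sevs) / len(sevs)}
        for (h, d), sevs in grouped.items()
    ]
    # Report the most serious problems first.
    return sorted(findings, key=lambda f: f["mean_severity"], reverse=True)

for f in compile_findings(reports):
    print(f["mean_severity"], f["heuristic"], "-", f["description"])
```

Sorting by mean severity gives the product team a prioritized list, which is what the categorize-and-report step needs as input.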

Participants and Other Stakeholders

The basic heuristic inspection does not involve users of the product under consideration. As
originally proposed by Nielsen and Molich (1990), the heuristic review method was intended for
use by people with no formal training or expertise in usability. However, Nielsen (1992) and
Desurvire, Kondziela, and Atwood (1992) found that usability experts find more issues
than non-experts. For some products, a combination of usability practitioners and domain experts
is recommended.
The stakeholders are those who will benefit from the cost savings that may be realized from
using a "discount" (i.e., low-cost) usability method. These stakeholders may include the
ownership and management of the company producing the product and the users who will
purchase the product.

Materials Needed


A list of heuristics with a brief description of each heuristic.



A list of tasks and/or the components of the product that you want inspected
(for example, for a major Web site, you might designate 10 tasks, plus 10
pages that you want reviewed).

Access to the specification, screen shots, prototypes, or working product.

A standard form for recording violations of the heuristics.

Who Can Facilitate

Heuristic evaluations are generally organized by a usability practitioner who introduces the
method and the principles, though with some training, other members of a product team could
facilitate them.

Common Problems

Insufficient resources (too few evaluators) are committed to the evaluation.

As a result, major usability issues may be overlooked.

Evaluators do not fully understand the heuristics.

Evaluators may report problems at different levels of granularity (for
example, "The error messages are bad" versus "Error message 27 does not
state how to resolve this problem").

Some organizations find heuristic evaluation such a popular method that they
are reluctant to use other methods like usability testing or participatory
design.

Data Analysis Approach

The data are collected in a list of usability problems and issues. Analysis can include assignment
of severity codes and recommendations for resolving the usability issues. The problems should
be organized in a way that is efficient for the people who will be fixing the problems.

Practitioners in usability and user-centered design (UCD) employ a wide range of methods for
gathering information about users and their tasks, analyzing needs, creating design solutions,
iterating designs through testing and evaluation, and measuring efficiency, effectiveness, and
satisfaction. This section presents descriptions of methods, including procedures, resources needed,
outcomes, appropriate uses, benefits, and costs. These descriptions will form the core of a
knowledge base that defines our field and will help communicate usability methods to clients,
project managers, and team members. Usability practitioners will also benefit from cross-referencing of related methods and pointers to outside resources for more details.

Affinity Diagramming



Also called: Affinity mapping, affinity process, KJ method (the process of affinity diagramming
is derived from the KJ method)
Lifecycle stage: Requirements
Contributors: Nigel Bevan, Karen Shor, Chauncey Wilson. Originally based on the UsabilityNet
Version: 6/2009
Affinity diagramming is a participatory method in which concepts written on cards are sorted into
related groups and sub-groups. The original intent of affinity diagramming was to help diagnose
complicated problems by organizing qualitative data to reveal themes associated with the data.
Existing items and new items identified by individuals are written on cards or sticky notes, which
are sorted into categories as a workshop activity. Affinity diagramming can be used to:

Analyse findings from field studies

Identify and group user functions as part of design

Analyse findings from a usability evaluation

Building an affinity diagram is a way to interpret customer data and:

Show the range of a problem

Uncover similarity among problems from multiple customers

Give boundaries to a problem

Identify areas for future study
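The output of an affinity-diagramming workshop is simply a set of named groups of notes; the grouping itself is done by people, not software. A trivial sketch of recording the result (the notes and theme names here are invented for illustration):

```python
# Result of a workshop, recorded as theme -> supporting sticky notes.
affinity_groups = {
    "Search": ["Search results load slowly", "Search ignores typos"],
    "Export": ["Can't find the export button", "Export fails for large files"],
}

# A quick summary of the diagram: how many notes support each theme,
# which hints at the range and weight of each problem area.
summary = {theme: len(notes) for theme, notes in affinity_groups.items()}
print(summary)  # {'Search': 2, 'Export': 2}
```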

Methods: Brainstorming
Also called: Creative Thinking, Thought Showers, Lateral Thinking
Lifecycle stages: All
Version: 1/2010
Brainstorming is an individual or group process for generating alternative ideas or solutions for a
specific topic.
Good brainstorming focuses on the quantity and creativity of ideas: the quality of ideas is much
less important than the sheer quantity. After ideas are generated, they are often grouped into
categories and prioritized for subsequent research or application. The outcomes of brainstorming
can include:
A list of ideas or solutions related to a particular problem.

The ideas or solutions organized into groups.

Some form of prioritization based on attributes like cost and feasibility.

Benefits, Advantages and Disadvantages


Many ideas can be generated in a short time.

Requires few material resources.

The results can be used immediately or "preserved" for possible use in other
projects.

Is a "democratic" way of generating ideas (assuming a good facilitator).

Is a useful way to get over "design" blocks that are slowing development.

The concept of brainstorming is easy to understand.


Requires an experienced and sensitive facilitator who understands the social
psychology of small groups.

Requires a dedication to quantity rather than quality.

Can be chaotic and intimidating to introverts.

May not be appropriate for some business or international cultures.

Card Sorting
Types: Closed card sort, Open card sort, Reverse card sort
Lifecycle stage: Requirements, Design
See also: Affinity diagramming, Information architecture
Contributors: Bill Killam, Alice Preston, Shannon McHarg, Chauncey Wilson. Based on Jacek
Wachowicz contributions in COST Action 294: MAUSE.
Version: 6/2009
The card sorting method is used to generate information about the associations and grouping of
specific data items. Participants in a card sort are asked to organize individual, unsorted items
into groups and may, depending on the technique, also provide labels for these groups. In a
user-centered design process, it is commonly used when developing a site architecture but has also
been applied to developing workflows, menus, toolbars, and other elements of system design.

Card sorting may be conducted as a low-tech method using index cards or
post-it notes, or may be automated using one of several software packages.

Card sorting may be conducted as a series of individual exercises, as a
concurrent activity of a small group, or as a hybrid approach where individual
activity is followed by group discussion of individual differences.

Card sorting is usually conducted as a specific activity in the early design
phase of a project for defining an architecture, but can similarly be used
during a product evaluation to determine if usability issues are due to
problems with grouping or group labels.

Sorting and grouping have long been studied within psychology and the research dates back at
least to the 1950s. Numerous non-peer-reviewed descriptions, case studies, and blogs have been
written in the last several years on the technique and its use in the user-centered design process,
but only a few peer-reviewed articles on the technique have been published, and little is known of
its validity or reliability as a means of directly producing a useful and usable architecture.
Instead, card sorts are generally used to provide insight that is used by a practitioner to generate
an architecture.

Benefits, Advantages and Disadvantages


Simple: card sorts are easy for the organizer and the participants.

Cheap: typically the cost is a stack of index cards, sticky notes, a pen or
printing labels, and some time.

Quick to execute: it is possible to perform several sorts in a short period of
time, which provides a significant amount of data.

Established: the technique has been used for over 10 years, by many
practitioners.

Involves users: the information structure suggested by a card sort is based
on real user input.

Provides a good foundation for the structure of a site or product.


Does not consider users' tasks: card sorting is an inherently content-centric
technique. If used without considering users' tasks, it may lead to an
information structure that is not usable when users are attempting real tasks.

Results may vary: the card sort may provide fairly consistent results between
participants, or may vary widely.




Analysis can be time consuming: the sorting is quick, but the analysis of the
data can be difficult and time consuming, particularly if there is little
consistency between participants.

May capture surface characteristics only: participants may not consider
what the content is about or how they would use it to complete a task, and
may just sort it by surface characteristics.

Appropriate Uses
Card sorting can be used to:

Identify themes or patterns from qualitative data

Develop the information and navigational architecture for a Web site or
application

Design or redesign a site or application

Organize icons, images, menu items, and other objects into related groups

Determine how a specific individual classifies items from a particular domain

Examine how different groups (users versus developers, for example) view
the same subject matter

Rank or rate items on specific dimensions.
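The analysis step described above (finding agreement between participants' sorts) is often done with a co-occurrence matrix: for each pair of cards, count how many participants placed them in the same group. A minimal sketch of that analysis, with invented card names and sorts:

```python
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups, each group a set of card labels.
sorts = [
    [{"Invoices", "Receipts"}, {"Contacts", "Calendar"}],
    [{"Invoices", "Receipts", "Calendar"}, {"Contacts"}],
    [{"Invoices", "Contacts"}, {"Receipts", "Calendar"}],
]

def cooccurrence(sorts):
    """Count, for every pair of cards, how many sorts grouped them together."""
    counts = Counter()
    for groups in sorts:
        for group in groups:
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

counts = cooccurrence(sorts)
# Pairs grouped together by most participants suggest stable categories.
print(counts[("Invoices", "Receipts")])  # grouped together in 2 of 3 sorts
```

Low counts spread across many pairs are a sign of the inconsistency between participants that makes analysis time consuming.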


June 5th, 2010

The process of decomposition and aggregation of credit risks involves more judgment than that
required for market risk. As a general rule of thumb, the better the relationship between the
likelihood of a default occurring and the factor, the lower down the decomposition tree it should
be placed. Historic correlation coefficients should be applied as we move up the tree, although at
the higher levels, where correlations start to become weaker and less reliable, it is probably
better to simply make a policy decision as to whether to treat these as independent or assume
perfect positive correlation.
The most popular approach to developing mathematical models of organizational decision
processes, the decomposition approach, begins with a mathematical statement of an ideal
organizational problem and follows a process of decomposition to derive sub-problems solved by
separate units at (possibly) different levels of the organization. Conversely, the composition
approach starts with mathematical statements of the subproblems solved by the separate units
and proceeds to develop a solution algorithm as a means of coordinating the activities of the
separate units. A process of composition must then be followed if one is to discover the derived


Page 26

organizational problem actually solved. This paper offers an operational definition of an

organizational decision process and relates this process to the anatomy of mathematical models
purported as being representative of it. Implications regarding the potential of the decision
process models as tools of organizational design are explored.
Aggregation: Selecting data in groups of records is called aggregation. There
are five aggregate system functions, viz. Sum, Min, Max, Avg, and Count; each
has its own purpose.
Decomposition: Selecting all data, without any grouping or aggregate
functions, is called decomposition. The data is selected as it is present in the
table.
Generalization: Generalization is a simplification of data, i.e. bringing the
data from un-normalized form to normalized form.
Aggregation (in data modeling): A concept which is used to model a
relationship between a collection of entities and relationships. It is used when
we need to express a relationship among relationships.

Design pattern: aggregation

Sometimes a class type really represents a collection of individual components. Although this
pattern can be modeled by an ordinary association, its meaning becomes much clearer if we use
the UML notation for an aggregation.
Example: a small business needs to keep track of its computer systems. They want to record
information such as model and serial number for each system and its components.
A very naïve way to do this would be to put all of the information into a single class type. You
should recognize that this class contains a set of repeated attributes, with all of the problems of
the phone book pattern. You could fix it as we show here:

Incorrect model (with improvement)

The improved model will accommodate the addition of more types of components (a scanner,
perhaps), a system with more than one monitor or printer, or a replacement component on the
shelf that doesn't belong to any system right now. But UML allows us to show the association in a
more semantically correct way.



Better model (with UML aggregation)

The system is an aggregation of components. In UML, aggregation is shown by an open
diamond on the end of the association line that points to the parent (aggregated) class. There is
an implied multiplicity on this end of 0..1, with the multiplicity of the other end shown in the
diagram as usual. To describe this association, we would say that each system is composed of
one or more components and each component is part of zero or one system.
Since the component can exist by itself in this model (without being part of a system), the
system name can't be part of its PK. We'll use the only candidate key {type, mfgr, model, SN} as
the PK, since this class is not a parent. The system name, just an FK here, will be filled in if this
component is installed as part of a system; it will be null otherwise.

In UML, there is a stronger form of aggregation that is called composition. The notation is
similar, using a filled-in diamond instead of an open one. In composition, component instances
cannot exist on their own without a parent; they are created with (or after) the parent and they are
deleted if the parent is deleted. The implied multiplicity on the diamond end of the association
is therefore 1..1.
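The semantics of aggregation (open diamond, a component may exist with no parent) versus composition (filled diamond, a component lives and dies with its parent) can be sketched in code. The class and attribute names below are illustrative, not taken from the example above:

```python
class Component:
    """Aggregation: a component may exist without a system (system is None)."""
    def __init__(self, type_, mfgr, model, sn):
        # Candidate key {type, mfgr, model, SN}; 'system' acts as a nullable FK.
        self.type, self.mfgr, self.model, self.sn = type_, mfgr, model, sn
        self.system = None  # 0..1 multiplicity on the aggregate end

class System:
    def __init__(self, name):
        self.name = name
        self.components = []  # each system is composed of 1..* components

    def install(self, component):
        component.system = self
        self.components.append(component)

# Aggregation: the monitor exists on the shelf before being installed.
monitor = Component("monitor", "Acme", "M27", "SN123")
office = System("front-desk")
office.install(monitor)

class Engine:
    """Composition: created with its parent, never exists on its own (1..1)."""
    def __init__(self, car, serial):
        self.car, self.serial = car, serial

class Car:
    def __init__(self, vin):
        self.vin = vin
        # The composed part is created with the parent; when the Car is
        # garbage-collected, the Engine goes with it.
        self.engine = Engine(self, serial=vin + "-E")
```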
Information Architecture is a discipline and a set of methods that aim to identify and
organize information in a purposeful and service-oriented way. It is also a term used
to describe the resulting document or documents that define the facets of a given
information domain. The goal of Information Architecture is to improve information
access, relevancy, and usefulness to a given audience, as well as improve the
publishing entity's ability to maintain and develop the information over time. It is
primarily associated with website design and it is directly related to the following
professional disciplines: User interface design, content development, content
management, usability engineering, interaction design, and user experience design.




It is also indirectly related to database design, document design, and knowledge
management.

What is information architecture?

Organising functionality and content into a structure that people are able to navigate intuitively
doesn't happen by chance. Organisations must recognise the importance of information
architecture; otherwise they run the risk of creating great content and functionality that no one
can ever find.
This article also discusses the relationship between information architecture and usability, in the
context of real-world projects.

The problem: finding is the new doing

Computer systems used to be frustrating because they did very little, quite badly. People using
systems became frustrated because the systems simply weren't capable of doing what they were
required to do.
But technology has progressed, and now technology can do practically whatever people want it to
do. So why doesn't everyone using a computer have a large smile on their face?
The sheer wealth of functionality and information has become the new problem. The challenge
facing organisations is how to guide people through the vast amount of information on offer, so
they can successfully find the information they want and thus find value in the system.
Intuitive navigation doesn't happen by chance.

The cost of failure

Not only is this extremely frustrating for users, but it has serious repercussions for organisations.

For intranets it means low adoption rates and staff reverting to unsupported
off-line resources.

For websites with online shopping facilities it has a significant impact on
revenue. Research suggests that a significant number of shopping attempts
fail not because the user has evaluated the products on offer and decided
against a purchase, but because the navigation system has failed and users
can't find the product they are interested in.

This problem is only set to get worse as the quantity of information available through sites
increases. What can an organisation do to increase the chances that people can successfully
navigate their site and find the information they require?

What is information architecture?

Information architecture is the term used to describe the structure of a system, i.e. the way
information is grouped, the navigation methods, and the terminology used within the system.



An effective information architecture enables people to step logically through a system confident
they are getting closer to the information they require.
Most people only notice information architecture when it is poor and stops them from finding the
information they require.
Information architecture is most commonly associated with websites and intranets, but it can be
used in the context of any information structures or computer systems.

The evolution of information architecture

The term information architecture was first coined by Richard Saul Wurman in 1975. Wurman
was trained as an architect, but became interested in the way information is gathered, organised
and presented to convey meaning. Wurman's initial definition of information architecture was
"organising the patterns in data, making the complex clear".
The term was largely dormant until in 1996 it was seized upon by a couple of library scientists,
Lou Rosenfeld and Peter Morville. They used the term to define the work they were doing
structuring large-scale websites and intranets.
In Information Architecture for the World Wide Web: Designing Large-Scale Web Sites, they
define information architecture as:
1. The combination of organisation, labelling, and navigation schemes within an
information system.
2. The structural design of an information space to facilitate task completion
and intuitive access to content.
3. The art and science of structuring and classifying web sites and intranets to
help people find and manage information.
4. An emerging discipline and community of practice focused on bringing
principles of design and architecture to the digital landscape.

Today Wurman's influence on information architecture is fairly minimal, but many of the
metaphors used to describe the discipline echo the work done by architects. For example,
information architecture is described as the blueprint developers and designers use to build the
system.

Common problems
The most common problem with information architectures is that they simply mimic a
company's organisational structure.
Although this can often appear logical, and an easy solution for those involved in defining the
architecture, people using systems (even intranets) often don't know or think in terms of the
organisational structure when trying to find information.




How to create an effective information architecture

An effective information architecture comes from understanding business objectives and
constraints, the content, and the requirements of the people that will use the site.
Information architecture is often described as the interplay of business context, content and
users.

Understanding an organisation's business objectives, politics, culture, technology, resources and
constraints is essential before considering development of the information architecture.
Techniques for understanding context include:

Reading existing documentation

Mission statements, organisation charts, previous research and vision
documents are a quick way of building up an understanding of the context in
which the system must work.

Stakeholder interviews

Speaking to stakeholders provides valuable insight into business context and
can unearth previously unknown objectives and issues.
For further information about stakeholder interviews, see our article
"Selecting staff for stakeholder interviews".




The most effective method for understanding the quantity and quality of content (i.e.
functionality and information) proposed for a system is to conduct a content inventory.
Content inventories identify all of the proposed content for a system, where the content currently
resides, who owns it and any existing relationships between content.
Content inventories are also commonly used to aid the process of migrating content between the
old and new systems.
Effective IA must reflect the way people think

An effective information architecture must reflect the way people think about the subject matter.
Techniques for getting users involved in the creation of an information architecture include:

Card sorting

Card sorting involves representative users sorting a series of cards, each labelled with a
piece of content or functionality, into groups that make sense to them. Card sorting
generates ideas for how information could be grouped and labelled.
For further information about card sorting, see the article "Card sorting: a definitive
guide".

Card-based classification evaluation

Card-based classification evaluation is a technique for testing an information architecture
before it has been implemented.
The technique involves writing each level of an information architecture on a large card,
and developing a set of information-seeking tasks for people to perform using the cards.
For further information about card-based classification evaluation, see the article
"Card-based classification evaluation".

Styles of information architecture

There are two main approaches to defining an information architecture. These are:

Top-down information architecture

This involves developing a broad understanding of the business strategies and user needs,
before defining the high-level structure of the site, and finally the detailed relationships
between content.


Bottom-up information architecture



This involves understanding the detailed relationships between content, creating
walkthroughs (or storyboards) to show how the system could support specific user
requirements, and then considering the higher level structure that will be required to
support these requirements.
Both of these techniques are important in a project. A project that ignores top-down approaches
may result in well-organised, findable content that does not meet the needs of users or the
business. A project that ignores bottom-up approaches may result in a site that allows people to
find information but does not allow them the opportunity to explore related content.
Take a structured approach to creating an effective IA

Creating an effective information architecture

The following steps define a process for creating an effective information architecture.
1. Understand the business/contextual requirements and the proposed content
for the system. Read all existing documentation, interview stakeholders and
conduct a content inventory.
2. Conduct card sorting exercises with a number of representative users.
3. Evaluate the output of the card sorting exercises. Look for trends in grouping
and labelling.
4. Develop a draft information architecture (i.e. information groupings and
hierarchy).
5. Evaluate the draft information architecture using the card-based classification
evaluation technique.
6. Don't expect to get the information architecture right the first time. Capturing
the right terminology and hierarchy may take several iterations.
7. Document the information architecture in a site map. This is not the final site
map; the site map will only be finalised after page layouts have been defined.
8. Define a number of common user tasks, such as finding out about how to
request holiday leave. On paper sketch page layouts to define how the user
will step through the site. This technique is known as storyboarding.
9. Walk other members of the project team through the storyboards and leave
them in shared workspaces for comments.
10. If possible within the constraints of the project, conduct task-based usability
tests on paper prototypes, as they provide valuable feedback without going
to the expense of creating higher-quality designs.
11. Create detailed page layouts to support key user tasks. Page layouts should
be annotated with guidance for visual designers and developers.




Developing information architecture in this way enables you to design and build a system
confident that it will be successful.
There are many ways to document an IA

Products from the information architecture process

Various methods are used to capture and define an information architecture. Some of the most
common methods are:

Site maps

Annotated page layouts

Content matrices

Page templates

There are also a number of other possible by-products of the process, such as:

Personas

Prototypes

Storyboards

Each of these methods and by-products is explained in detail below.

Site maps
Site maps are perhaps the most widely known and understood deliverable from the process of
defining an information architecture.
A site map is a high level diagram showing the hierarchy of a system. Site maps reflect the
information structure, but are not necessarily indicative of the navigation structure.
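A site map's hierarchy can be captured as a simple tree structure. A minimal sketch, with invented page names:

```python
# Each node: (page title, list of child nodes).
site_map = ("Home", [
    ("Products", [("Laptops", []), ("Phones", [])]),
    ("Support", [("Contact", [])]),
])

def list_pages(node, depth=0):
    """Flatten the hierarchy into indented titles, depth-first."""
    title, children = node
    lines = ["  " * depth + title]
    for child in children:
        lines.extend(list_pages(child, depth + 1))
    return lines

print("\n".join(list_pages(site_map)))
```

Note that, as the text says, this tree records the information structure only; the navigation actually offered on each page may cut across it.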

Annotated page layouts

Page layouts define page level navigation, content types and functional elements. Annotations
are used to provide guidance for the visual designers and developers who will use the page
layouts to build the site.
Page layouts are alternatively known as wireframes, blueprints or screen details.

Content matrix
A content matrix lists each page in the system and identifies the content that will appear on that
page.



Page templates
Page templates may be required when defining large-scale websites and intranets. Page templates
define the layout of common page elements, such as global navigation, content and local
navigation. Page templates are commonly used when developing content management systems.

Personas

Personas are a technique for defining archetypical users of the system. Personas are a cheap
technique for evaluating the information architecture without conducting user research.
Prototypes can be used to bring an IA to life

Prototypes are models of the system. Prototypes can be as simple as paper-based sketches, or as
complex as fully interactive systems. Research shows that paper-based prototypes are just as
effective for identifying issues as fully interactive systems.
Prototypes are often developed to bring the information architecture to life, enabling users
and other members of the project team to comment on the architecture before the system is built.

Storyboards

Storyboards are another technique for bringing the information architecture to life without
building it. Storyboards are sketches showing how a user would interact with a system to
complete a common task.
Storyboards enable other members of the project team to understand the proposed information
architecture before the system is built.

Information architecture and usability

Some people find the relationship and distinction between information architecture and usability
confusing. Information architecture is not the same as usability, but the two are closely related.
As described in a previous KM Column ("What is usability", November 2004), usability
encompasses two related concepts:

Usability is an attribute of the quality of a system:

"we need to create a usable intranet"

Usability is a process or set of techniques used during a design and

development project:

"we need to include usability activities in this project"

In both cases usability is a broader concept, whereas information architecture is far more
specific.


Information architecture as an attribute of the quality of a system

An effective information architecture is one of a number of attributes of a usable system. Other
factors affecting the usability of a system include:

visual design

interaction design


content writing.

Information architecture as a process

The process for creating an effective information architecture is a sub-set of the usability
activities involved in a project.
Although weighted to the beginning of the project, usability activities should continue
throughout a project and evaluate issues beyond simply the information architecture.

Who creates the information architecture?

Increasingly companies are realising the importance of information architecture and are
employing specialist information architects to perform this role.
But information architecture is also defined by:

intranet designers and managers

website designers and managers

visual designers

other people designing information systems



technical writers


