
Simple fuzzy propositions fall into four classes:

1. Unconditional and unqualified propositions; e.g. p: temperature (V) is high (F).
2. Unconditional and qualified propositions; e.g. "Tina is young" is very true.
3. Conditional and unqualified propositions.
4. Conditional and qualified propositions.

Conditional Propositions. A proposition of the form "if p then q" or "p implies q", written p → q, is called a conditional
proposition. For instance: "if John is from Chicago then John is from Illinois". The proposition p is called the hypothesis or
antecedent, and q the conclusion or consequent.

Generalized Eigenvectors
Let A be an n×n matrix and let λ be an eigenvalue of A. A generalized eigenvector of rank k corresponding to the eigenvalue λ is
a vector v such that (A − λI)^k v = 0 and (A − λI)^(k−1) v ≠ 0. Note that ordinary eigenvectors are exactly the
generalized eigenvectors of rank 1.
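The rank condition can be checked directly on a small Jordan block. The sketch below is in plain Python; the matrix A and the vector v are illustrative choices, not from the notes:

```python
def mat_vec(M, v):
    """Multiply a square matrix (given as a list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# A is a 2x2 Jordan block with eigenvalue 5, so (A - 5I) is nilpotent.
A = [[5.0, 1.0],
     [0.0, 5.0]]
lam = 5.0
N = [[A[i][j] - (lam if i == j else 0.0) for j in range(2)] for i in range(2)]

v = [0.0, 1.0]      # candidate generalized eigenvector of rank 2
w1 = mat_vec(N, v)  # (A - λI)^1 v -> nonzero, so v is not an ordinary eigenvector
w2 = mat_vec(N, w1) # (A - λI)^2 v -> zero, so v has rank exactly 2

print(w1, w2)
```

Here w1 ≠ 0 while w2 = 0, which is precisely the rank-2 condition from the definition.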

The Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product
of a lower triangular matrix and its conjugate transpose. The Cholesky decomposition is roughly twice as efficient as the LU
decomposition for solving systems of linear equations.
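The factorization itself can be sketched with the Cholesky–Banachiewicz recurrence. This is a plain-Python sketch; the 3×3 positive-definite test matrix is an illustrative choice:

```python
from math import sqrt

def cholesky(A):
    """Return lower-triangular L with A = L L^T, for real SPD A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = sqrt(A[i][i] - s)        # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

A = [[4.0, 12.0, -16.0],
     [12.0, 37.0, -43.0],
     [-16.0, -43.0, 98.0]]
L = cholesky(A)
print(L)  # lower triangular; L L^T reproduces A
```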

In linear algebra, the singular-value decomposition (SVD) is a factorization of a real or complex matrix. It is the generalization of
the eigendecomposition of a positive semidefinite normal matrix.
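For a symmetric positive-semidefinite matrix, the singular values coincide with the eigenvalues, which is the sense in which the SVD generalizes the eigendecomposition. The sketch below checks this on an illustrative 2×2 matrix using the closed-form eigenvalues from the trace and determinant:

```python
from math import sqrt

# Illustrative symmetric PSD matrix (not from the notes).
A = [[2.0, 1.0],
     [1.0, 2.0]]

tr = A[0][0] + A[1][1]                       # trace
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant
disc = sqrt(tr * tr - 4 * det)
eigvals = sorted([(tr + disc) / 2, (tr - disc) / 2], reverse=True)

# For a PSD matrix the singular values are the (nonnegative) eigenvalues.
singular_values = [abs(v) for v in eigvals]
print(eigvals, singular_values)
```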

In linear algebra, a QR decomposition (also called a QR factorization) of a matrix is a decomposition of a matrix A into a product
A = QR of an orthogonal matrix Q and an upper triangular matrix R. QR decomposition is often used to solve the linear least
squares problem and is the basis for a particular eigenvalue algorithm, the QR algorithm.
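A minimal sketch of A = QR via classical Gram–Schmidt follows; in numerical practice Householder reflections or a library routine are preferred for stability, and the input matrix here is an illustrative choice:

```python
from math import sqrt

def qr_gram_schmidt(A):
    """Classical Gram-Schmidt: A (m x n, full column rank) -> Q (orthonormal cols), R (upper triangular)."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]  # columns of A
    Q_cols, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(j):
            # project out the directions already in Q
            R[i][j] = sum(Q_cols[i][k] * cols[j][k] for k in range(m))
            v = [v[k] - R[i][j] * Q_cols[i][k] for k in range(m)]
        R[j][j] = sqrt(sum(x * x for x in v))
        Q_cols.append([x / R[j][j] for x in v])
    Q = [[Q_cols[j][i] for j in range(n)] for i in range(m)]
    return Q, R

A = [[1.0, 1.0],
     [0.0, 1.0]]
Q, R = qr_gram_schmidt(A)
print(Q, R)
```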

Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining
decisions must constitute an optimal policy with regard to the state resulting from the first decision.
fN(x) = max over dn ∈ {x} of [ r(dn) + fN−1(T(x, dn)) ]
where fN(x) = the optimal return from an N-stage process when the initial state is x
r(dn) = the immediate return due to decision dn
{x} = the set of admissible decisions
T(x, dn) = the transfer function, which gives the resulting state

Dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler sub-problems,
solving each sub-problem just once, and storing its solution in a table (a memory-based data structure) so it is never
recomputed.
Applications: scheduling, determining inventory levels, allocation of scarce resources, finding an optimal combination.
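The backward recursion can be sketched on a toy N-stage allocation problem: split x identical units across N stages, with transfer function T(x, d) = x − d. The return function r below is an illustrative assumption, not from the notes:

```python
from functools import lru_cache

def r(d):
    # illustrative diminishing-returns table: r(0)=0, r(1)=3, r(2)=5, r(3)=6, r(d>=4)=6
    return [0, 3, 5, 6, 6][min(d, 4)]

@lru_cache(maxsize=None)
def f(N, x):
    """Optimal return of an N-stage process starting in state x."""
    if N == 0:
        return 0
    # each sub-problem f(N-1, x-d) is solved once and cached (memoization)
    return max(r(d) + f(N - 1, x - d) for d in range(x + 1))

print(f(3, 4))
```

The `lru_cache` table is exactly the "saves its answer in a table" idea: every state (N, x) is evaluated at most once.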

Little's Formula, for the M/M/1 queue (Ls = number in system, Lq = number in queue, λ = arrival rate, μ = service rate,
Ws = waiting time in system, Wq = waiting time in queue):
Ls = λ/(μ − λ) = λWs = Lq + λ/μ
Ws = 1/(μ − λ) = Wq + 1/μ
Lq = λ²/[μ(μ − λ)] = λWq
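These relations can be checked numerically for illustrative rates, say λ = 3 and μ = 5 customers per unit time:

```python
lam, mu = 3.0, 5.0  # illustrative arrival and service rates, lam < mu

Ls = lam / (mu - lam)            # expected number in system
Ws = 1.0 / (mu - lam)            # expected time in system
Lq = lam**2 / (mu * (mu - lam))  # expected number in queue
Wq = lam / (mu * (mu - lam))     # expected time in queue

# Little's formula L = λW, plus the system/queue identities
assert abs(Ls - lam * Ws) < 1e-12
assert abs(Lq - lam * Wq) < 1e-12
assert abs(Ls - (Lq + lam / mu)) < 1e-12
assert abs(Ws - (Wq + 1.0 / mu)) < 1e-12
print(Ls, Lq, Ws, Wq)
```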

Kendall Notation
A queueing model can generally be specified in the symbolic form (a/b/c) : (d/e), where
a = probability law of the arrivals (inter-arrival time distribution)
b = probability law according to which customers are served (service-time distribution)
c = number of channels (servers)
d = capacity of the system
e = queue discipline

Binomial distribution: E(Y) = np, V(Y) = np(1 − p)


The mean and variance of the geometric distribution: E(X) = 1/p, V(X) = (1 − p)/p². For the negative binomial
distribution (number of trials until the r-th success): E(X) = r/p, V(X) = r(1 − p)/p².
e t ( t) x 
The Poisson distribution: p(x, t)  , x = 0, 1, 2, ...Mean and Variance  
x!
 1
 a xb 2
Uniform: f ( x)   b  a E(x) = (a+b)/2 V(x) = (b-a) /12

 0 elsewhere

Normal: f(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²)); E(X) = μ, V(X) = σ².
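The moment formulas above can be sanity-checked numerically straight from the series and integral definitions. Pure-Python sketch; the parameter values are illustrative:

```python
from math import exp, factorial

# Poisson with λt = 2: the mean should come out as λt.
lt = 2.0
poisson_mean = sum(x * exp(-lt) * lt**x / factorial(x) for x in range(100))

# Geometric with p = 0.25: the mean should come out as 1/p = 4.
p = 0.25
geom_mean = sum(k * p * (1 - p)**(k - 1) for k in range(1, 2000))

# Uniform on [a, b] = [1, 5]: mean (a+b)/2 = 3, variance (b-a)^2/12 = 4/3,
# approximated by a midpoint-rule integral against f(x) = 1/(b-a).
a, b, n = 1.0, 5.0, 10000
xs = [a + (b - a) * (i + 0.5) / n for i in range(n)]
unif_mean = sum(xs) / n
unif_var = sum((x - unif_mean) ** 2 for x in xs) / n

print(poisson_mean, geom_mean, unif_mean, unif_var)
```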
