rubenszmm@gmail.com
http://github.com/RubensZimbres
NAÏVE BAYES
P(a|c) = \frac{P(c|a)\,P(a)}{P(c)}

MIXTURE MODELS
P(B) = \sum_A P(B|A)\,P(A)
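A minimal sketch of the Bayes rule above; the prior, likelihood, and evidence values are made up for illustration, with the evidence obtained via total probability.

```python
# Bayes' rule: P(a|c) = P(c|a) * P(a) / P(c); illustrative numbers only.
def bayes(p_c_given_a, p_a, p_c):
    """Posterior P(a|c) from likelihood, prior and evidence."""
    return p_c_given_a * p_a / p_c

p_a, p_c_given_a, p_c_given_not_a = 0.01, 0.9, 0.1
# Evidence via total probability: P(c) = P(c|a)P(a) + P(c|~a)P(~a)
p_c = p_c_given_a * p_a + p_c_given_not_a * (1 - p_a)
posterior = bayes(p_c_given_a, p_a, p_c)
```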
BAYES OPTIMAL CLASSIFIER
\arg\max_x \sum_T P(x|T)\,P(T|D)

MIXTURE OF GAUSSIANS
ANOMALY DETECTION
P(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\!\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right)
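A sketch of Gaussian anomaly scoring: fit the density above to "normal" data and flag points with very low density. The sample and the comparison point are illustrative assumptions.

```python
import math

# Univariate Gaussian density fitted to toy "normal" data (illustrative).
def gaussian_pdf(x, mu, sigma):
    return (1.0 / math.sqrt(2 * math.pi * sigma ** 2)) * \
        math.exp(-0.5 * ((x - mu) / sigma) ** 2)

data = [9.8, 10.1, 10.0, 9.9, 10.2]
mu = sum(data) / len(data)
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / (len(data) - 1))
# A point far from the mean gets a much lower density than a typical one.
typical = gaussian_pdf(10.0, mu, sigma)
anomaly = gaussian_pdf(15.0, mu, sigma)
```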
NAÏVE BAYES CLASSIFIER
Z_{ij} = \frac{N_i C_i + N_j C_j}{N_i + N_j}
\arg\max P(Spo|Tot)\,P(Soc|Spo)
P(Z_{ij}) \to 0.50
BAYES MAP (maximum a posteriori)
h_{MAP} = \arg\max_a P(c|a)\,P(a)

MAXIMUM LIKELIHOOD
h_{ML} = \arg\max P(c|a)

EM ALGORITHM
E-step: P(x'|x) = \frac{P(x')\,P(x|x')}{\sum_{x'} P(x')\,P(x|x')}
M-step: P(x') = \frac{\sum_x P(x'|x)}{n}
E-step (hard assignment): assign each value to a cluster
M-step (Bayesian network): P(x') = P(B=1\,|\,A=1, C=0)

TOTAL PROBABILITY
P(B) = \sum_A P(B|A)\,P(A)
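A compact sketch of the E-step / M-step loop for a two-component 1-D Gaussian mixture. The data, initial means, and the fixed unit variance are assumptions for illustration.

```python
import math

# Unit-variance Gaussian density (a simplifying assumption).
def pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

data = [0.1, -0.2, 0.3, 4.9, 5.2, 5.1]
mu = [0.5, 4.0]          # initial mean guesses
pi = [0.5, 0.5]          # mixing weights
for _ in range(20):
    # E-step: responsibility of each component for each point
    resp = []
    for x in data:
        w = [pi[k] * pdf(x, mu[k]) for k in range(2)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M-step: re-estimate means and mixing weights
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        pi[k] = nk / len(data)
```

With well-separated clusters the means converge to the per-cluster averages.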
LAPLACE ESTIMATE (small samples)
P(A) = \frac{A + 0.5}{A + B + 1}

BAYESIAN NETWORKS
count the tuples (and negations \neg) for y = 0 \wedge y = 1

QUOTIENT RULE
\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right] = \frac{f'(x)\,g(x) - f(x)\,g'(x)}{g(x)^2}

SUM AND CONSTANT-MULTIPLE RULES
\frac{d}{dx}\,2f(x) = 2\,\frac{d}{dx}f(x)
\frac{d}{dx}\,[f(x) + g(x)] = \frac{d}{dx}f(x) + \frac{d}{dx}g(x)
\frac{d}{dx}\,[f(x) + 2g(x)] = \frac{d}{dx}f(x) + 2\,\frac{d}{dx}g(x)

LIMITS
\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}, \quad h = \Delta x = x' - x

CHAIN RULE
\frac{d}{dx}\,g(f(x)) = g'(f(x)) \cdot f'(x)
(solve f(x), then apply it in g'(x))
DERIVATIVES
\frac{\partial}{\partial x}\,x^n = n\,x^{n-1}
\frac{\partial}{\partial x}\,y^n = \frac{\partial y^n}{\partial y} \cdot \frac{\partial y}{\partial x}

VARIANCE
Var = \frac{\sum (x - \bar{x})^2}{n - 1}
PRODUCT RULE
\frac{d}{dx}\,[f(x)\,g(x)] = f'(x)\,g(x) + f(x)\,g'(x)

STANDARD DEVIATION
\sigma = \sqrt{Variance}

COVARIANCE
Cov = \frac{\sum (x - \bar{x})(y - \bar{y})}{n - 1}

SUM OF SQUARED ERRORS
E_w = \frac{\sum (y - \hat{y})^2}{2}
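Sample variance, standard deviation, and covariance with the n−1 denominator, matching the formulas above; the data are made up for illustration.

```python
import math

x = [2.0, 4.0, 6.0, 8.0]
y = [1.0, 3.0, 5.0, 7.0]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
# Sample variance and standard deviation (n - 1 denominator)
var_x = sum((xi - mean_x) ** 2 for xi in x) / (n - 1)
std_x = math.sqrt(var_x)
# Sample covariance between x and y
cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (n - 1)
```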
COST FUNCTION (gradient descent update)
\theta_j := \theta_j - \eta \cdot \frac{\partial J}{\partial \theta_j}, \quad J = \frac{\sum (y - \hat{y})^2}{2}

CONFIDENCE INTERVAL
\bar{x} \pm 1.96\,\frac{\sigma}{\sqrt{N}}
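The 95% confidence interval above, \bar{x} ± 1.96·σ/√N, computed on an illustrative sample.

```python
import math

sample = [10.0, 12.0, 9.0, 11.0, 13.0, 10.0, 12.0, 11.0]
n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in sample) / (n - 1))
margin = 1.96 * sd / math.sqrt(n)      # half-width of the 95% CI
ci = (mean - margin, mean + margin)
```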
NUMBER OF EXAMPLES
m \geq \frac{\log(N_H) + \log(1/\delta)}{\epsilon}

CHI SQUARED
\chi^2 = \sum \frac{(y - \hat{y})^2}{\hat{y}} = \sum \frac{\delta^2}{\hat{y}}, \quad \text{where } \epsilon = \frac{\delta}{\hat{y}} \wedge \delta = y - \hat{y}
R SQUARED
r = \frac{n\sum xy - \sum x \sum y}{\sqrt{n\sum x^2 - (\sum x)^2} \cdot \sqrt{n\sum y^2 - (\sum y)^2}}, \quad R^2 = r^2

MARKOV CHAINS
P_{t+1}(X = x') = \sum_x P_t(X = x) \cdot T(x \to x')

LOSS
Loss = Bias^2 + Variance + Noise
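One step of the Markov-chain update above, P_{t+1}(x') = Σ_x P_t(x)·T(x → x'), with a made-up 2-state transition matrix.

```python
T = [[0.9, 0.1],   # T[x][x'] = P(x -> x'); illustrative values
     [0.5, 0.5]]
p = [1.0, 0.0]     # start in state 0

def step(p, T):
    # Distribution after one transition
    return [sum(p[x] * T[x][xp] for x in range(len(p)))
            for xp in range(len(T[0]))]

p1 = step(p, T)
p2 = step(p1, T)
```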
K NEAREST NEIGHBOR
f(x) \leftarrow \frac{\sum_{i=1}^{k} f(x_i)}{k}
DE(x_i, x_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}

LINEAR REGRESSION
m_1 = \frac{\sum x_2^2 \sum x_1 y - \sum x_1 x_2 \sum x_2 y}{\sum x_1^2 \sum x_2^2 - (\sum x_1 x_2)^2}
b = \bar{y} - m_1 \bar{x}_1 - m_2 \bar{x}_2
f(x) = \sum_{i=1}^{n} m_i x_i + b

WEIGHTED NEAREST NEIGHBOR
f(x) = \frac{\sum_i f(x_i) / D(x_i, x_q)^2}{\sum_i 1 / D(x_i, x_q)^2}
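A k-nearest-neighbor regression sketch: average f(x_i) over the k closest points by Euclidean distance. The toy points and the query are illustrative.

```python
import math

# (point, value) pairs; values stand in for f(x_i)
points = [((1.0, 1.0), 10.0), ((2.0, 2.0), 12.0),
          ((9.0, 9.0), 50.0), ((1.5, 1.0), 11.0)]

def knn_predict(q, points, k):
    dist = lambda a, b: math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)
    nearest = sorted(points, key=lambda p: dist(q, p[0]))[:k]
    return sum(v for _, v in nearest) / k

pred = knn_predict((1.2, 1.1), points, k=3)
```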
LOGISTIC REGRESSION
Odds ratio: \log\frac{P}{1-P} = mx + b
\frac{P}{1-P} = e^{mx+b}
P(a|c) = \frac{e^{mx+b}}{e^{mx+b} + 1}
J(\theta) = -\frac{\sum \left[ y \log(\hat{y}) + (1 - y)\log(1 - \hat{y}) \right]}{n}, \quad \text{where } \hat{y} = \frac{1}{1 + e^{-(mx+b)}}, \ \text{for } y = 0 \wedge y = 1
-2LL \to 0

PRINCIPAL COMPONENTS ANALYSIS
x' = x - \bar{x}
eigenvalues: \det(A - \lambda I) = 0
eigenvectors: A\,v = \lambda\,v
f(x) = v^T \cdot [x_{11} \dots x_{1n}]

ENTROPY
H(A) = -\sum P(A) \log P(A)

JOINT ENTROPY
H(A, B) = -\sum P(A, B) \log P(A, B)

CONDITIONAL ENTROPY
H(A|B) = -\sum P(A, B) \log P(A|B)

MUTUAL INFORMATION
I(A, B) = H(A) - H(A|B)

DECISION TREES
Entropy = -\sum_{i=1}^{c} P_i \log P_i
InfoGain = H(parent) - \sum_k P_k \cdot H(child_k)
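Entropy and information gain for a toy binary split, matching H = −Σ p·log₂(p) and InfoGain = H(parent) − Σ P_k·H(child_k); the labels and split are illustrative.

```python
import math

def entropy(labels):
    n = len(labels)
    h = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        h -= p * math.log2(p)
    return h

parent = [1, 1, 1, 0, 0, 0, 0, 0]
left, right = [1, 1, 1, 0], [0, 0, 0, 0]     # a candidate split
gain = entropy(parent) \
    - (len(left) / len(parent)) * entropy(left) \
    - (len(right) / len(parent)) * entropy(right)
```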
RULE INDUCTION
Gain = P \cdot \left[ (-P_{new} \log P_{new}) - (-P_{old} \log P_{old}) \right]

EIGENVECTOR CENTRALITY = PAGE RANK
PR(A) = \frac{1 - d}{n} + d\left( \frac{PR(B)}{Out(B)} + \dots + \frac{PR(n)}{Out(n)} \right)
where d = damping factor (d \to 1 for few connections)

RULE VOTE
Weight = accuracy \cdot coverage
RATING
R = \bar{R}_u + \alpha \sum_j w_{uj} \cdot (R_{jn} - \bar{R}_j)

BATCH GRADIENT DESCENT
\theta_j := \theta_j - \frac{\eta}{n} \sum (\hat{y} - y) \cdot x

SIMILARITY
w_{uj} = \frac{\sum_i (R_{ui} - \bar{R}_u)(R_{ji} - \bar{R}_j)}{\sqrt{\sum_i (R_{ui} - \bar{R}_u)^2} \cdot \sqrt{\sum_i (R_{ji} - \bar{R}_j)^2}}

STOCHASTIC GRADIENT DESCENT
\theta_j := \theta_j - \eta \cdot (\hat{y} - y) \cdot x
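A gradient-descent sketch fitting y = m·x to noiseless toy data with the batch update (average gradient over all examples); the learning rate 0.05 is an arbitrary choice.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # true slope is m = 2

m, eta, n = 0.0, 0.05, len(xs)
for _ in range(200):
    # Batch update: average gradient of the squared error
    grad = sum((m * x - y) * x for x, y in zip(xs, ys)) / n
    m -= eta * grad
```

The stochastic variant would apply the same update per example instead of averaging.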
CONTENT-BASED RECOMMENDATION
Rating = \sum_{class} \sum_i x_i\,y_i

NEURAL NETWORKS
f(x) = o = w_0 + \sum_i w_i x_i

COLLABORATIVE FILTERING
R_{un} = \bar{R}_u + \alpha \cdot \sum_j (R_{jn} - \bar{R}_j) \cdot \frac{\sum_i (R_{ui} - \bar{R}_u)(R_{ji} - \bar{R}_j)}{\sqrt{\sum_i (R_{ui} - \bar{R}_u)^2 \cdot \sum_i (R_{ji} - \bar{R}_j)^2}}

LOGIT
\log(odds) = wx + b = \log\frac{p}{1-p}

SOFTMAX NORMALIZATION
S(f(x)) = \frac{e^{wx+b}}{\sum e^{wx+b}}
CROSS ENTROPY
H(S(f(x)), f(x)) = -\sum f(x) \cdot \log S(f(x))

LOSS
Loss = \frac{\sum H(S(f(x)), f(x))}{N}

PERCEPTRON
f(x) = sign\left( \sum_{i=1}^{n} w_i x_{id} \right)

PERCEPTRON TRAINING
w_i \leftarrow w_i + \Delta w_i, \quad \Delta w_i = \eta \cdot (t - o) \cdot x

L2 REGULARIZATION
w \leftarrow w - \eta \cdot (\delta \cdot x + \lambda w), \quad \text{penalty} = \frac{\lambda w^2}{2}

ERROR FOR A SIGMOID
\epsilon = (t - o) \cdot o \cdot (1 - o) \cdot x
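The perceptron training rule w_i ← w_i + η(t − o)·x_i, run on the linearly separable AND function; the learning rate and epoch count are assumptions.

```python
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b, eta = 0.0, 0.1

def predict(x):
    # Threshold unit: fire if the weighted sum is positive
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                 # a few epochs suffice here
    for x, t in data:
        o = predict(x)
        w[0] += eta * (t - o) * x[0]
        w[1] += eta * (t - o) * x[1]
        b += eta * (t - o)
```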
SIGMOID
\frac{1}{1 + e^{-(wx+b)}}

AVOID OVERFIT (NEURAL NETWORKS, L2)
E = \frac{\sum (t - o)^2}{2} + F \cdot \sum w_{ji}^2, \quad \text{where } F = \text{penalty}

RADIAL BASIS FUNCTION
h(x) = e^{-\frac{(x - c)^2}{2\sigma^2}}
BACKPROPAGATION
\delta_k = o_k (1 - o_k)(t - o_k)
\delta_h = o_h (1 - o_h) \sum_k w_{kh}\,\delta_k
w_{ji} \leftarrow w_{ji} + \eta \cdot \delta_j \cdot x_{ji}
w_0 = 1 + (t - o_0)
with momentum: \Delta w_{ji}(n) = \eta\,\delta_j\,x_{ji} + M \cdot \Delta w_{ji}(n-1), where M = momentum

NEURAL NETWORKS COST FUNCTION
J_\theta = -\frac{\sum_{i=1}^{N} \sum_k \left[ t_k \log(o) + (1 - t)\log(1 - o) \right]}{N} + \frac{\lambda}{2N} \sum_l \sum_i \sum_j \theta_{ji}^2

MOMENTUM
\theta = \theta - (\gamma v_{t-1} + \eta \cdot \nabla J(\theta))

NESTEROV
\theta = \theta - (\gamma v_{t-1} + \eta \cdot \nabla J(\theta - \gamma v_{t-1}))

ADAGRAD
\theta = \theta - \frac{\eta}{\sqrt{SSG_{past} + \epsilon}} \cdot \nabla J(\theta)

ADADELTA
\theta = \theta - \frac{RMS[\Delta\theta]_{t-1}}{RMS[\nabla J(\theta)]} \cdot \nabla J(\theta), \quad RMS[\Delta\theta] = \sqrt{E[\Delta\theta^2] + \epsilon}

RMSprop
\theta = \theta - \frac{\eta}{\sqrt{E[g^2] + \epsilon}} \cdot \nabla J(\theta)

ADAM
\theta = \theta - \frac{\eta}{\sqrt{\hat{v}} + \epsilon} \cdot \hat{m}
\hat{m} = \frac{\beta_1 m_{t-1} + (1 - \beta_1)\,\nabla J(\theta)}{1 - \beta_1^t}, \quad \hat{v} = \frac{\beta_2 v_{t-1} + (1 - \beta_2)\,\nabla J(\theta)^2}{1 - \beta_2^t}

DOT PRODUCT (from distances)
x_i \cdot x_j = \frac{\|x_i\|^2 + \|x_j\|^2 - \|x_i - x_j\|^2}{2}
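A one-parameter Adam sketch minimising J(θ) = θ², following the bias-corrected updates above; the objective and hyperparameters (the usual defaults) are illustrative assumptions.

```python
import math

theta, m, v = 5.0, 0.0, 0.0
eta, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
grad = lambda t: 2 * t          # gradient of J = theta^2

for t in range(1, 501):
    g = grad(theta)
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta -= eta * m_hat / (math.sqrt(v_hat) + eps)
```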
SUPPORT VECTOR REGRESSION
Y = \sum \lambda \cdot K(x_i \cdot x_j) + b

SUPPORT VECTOR MACHINES
f(x) = sign\left( \sum \lambda \cdot y \cdot K(x_i \cdot x_j) \right), \quad y = 1 \wedge y = -1
K(x_i \cdot x_j) = \exp\left( -\frac{(x_i - x_j)^2 + (y_i - y_j)^2}{width_{RBF}} \right)
\lambda \to \arg\min \nabla L, \quad \lambda \to \nabla L = 0
DotProduct = \|x\| \cos\theta
\cos^2\theta + \sin^2\theta = 1, \quad \sin\theta = \frac{\sqrt{(x_i - x_j)^2 + (y_{iq} - y_{jq})^2}}{\|x_q\|}

RIDGE REGRESSION - REGULARIZATION
m := m - \eta \left( \frac{\sum (\hat{y} - y)\,x}{N} + \frac{\lambda m}{N} \right)
y = \lambda\,mx + b - \frac{\lambda}{N}

LASSO REGRESSION - REGULARIZATION
b := \frac{\sum (y - \hat{y})^2}{N} + \frac{\lambda b}{N}
y = mx + \lambda b + \frac{\lambda}{N}, \quad m \to 0 (weak coefficients are driven to zero)

GEOMETRIC MEAN
\{1, 2, 4\}: \sqrt[3]{1 \cdot 2 \cdot 4} = 2
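A sketch of the shrinkage effect of the λ penalty: gradient descent for y = m·x with and without an L2 term. The data, learning rate, and λ values are illustrative.

```python
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]           # unregularised best fit is m = 2

def fit(lam, eta=0.05, steps=2000):
    m, n = 0.0, len(xs)
    for _ in range(steps):
        # Squared-error gradient plus the L2 penalty gradient lam*m/n
        grad = sum((m * x - y) * x for x, y in zip(xs, ys)) / n + lam * m / n
        m -= eta * grad
    return m

m_plain = fit(lam=0.0)
m_ridge = fit(lam=10.0)        # closed form: sum(xy) / (sum(x^2) + lam)
```

The penalised slope lands at Σxy/(Σx²+λ), strictly smaller than the unregularised fit.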
MEDIAN
\frac{Max - Min}{2}

SKEWNESS
Skewness < 1
KOLMOGOROV-SMIRNOV
Normal if sig > .05

t TEST
t = \frac{\bar{x}_1 - \bar{x}_2 - (\mu_1 - \mu_2)}{SE(\bar{x}_1 - \bar{x}_2)}
Significant difference if sig < .05
TWO-SAMPLE t TEST
t test = normal distribution assumed

NON-PARAMETRIC
Mann-Whitney U test: sig < .05
LEVENE
equality of variances

CRONBACH'S ALPHA
> .60 to .70

ANOVA (3+ groups)
F = \frac{\text{variance between groups}}{\text{variance within groups}}, \quad \text{sig} < .05
ARITHMETIC MEAN
\bar{x} = \frac{\sum x}{N}
TOLERANCE
Tolerance = \frac{1}{VIF}, \quad \text{Tolerance} > .1

DISCRIMINANT ANALYSIS
Box's M: sig < .05 rejects H0
Wilks' Lambda: sig < .05
VARIANCE INFLATION FACTOR
VIF < 10

(normality assumption)
P(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\!\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right)
ENTER METHOD
15+ cases per variable
Z_{ij} = \frac{N_1 C_1 + N_2 C_2}{N_1 + N_2}

STEPWISE METHOD
50+ cases per variable

ERROR MARGIN
1.96\,\frac{\sigma}{\sqrt{N}}
VARIABLE SELECTION / ACCURACY
F test = 47, sig < .05
Confidence interval ~ P value
MISSING DATA
Delete if > 15%; transformation OK

HYPOTHESES TESTING
P value < .05
MAHALANOBIS DISTANCE
M = \frac{(x_i - x_j)^2}{\sigma^2}, \quad \text{same variable: } \frac{x}{\sigma} < 4
MULTICOLLINEARITY
Correlation > .90; VIF < 10; Tolerance > .1

MANHATTAN DISTANCE
Manh = |x_1 - x_2| + |y_1 - y_2|
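Manhattan versus Euclidean distance between two illustrative 2-D points; the former sums coordinate differences, the latter takes the straight-line length.

```python
import math

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

a, b = (1.0, 2.0), (4.0, 6.0)
d_manh = manhattan(a, b)   # 3 + 4 = 7
d_eucl = euclidean(a, b)   # the 3-4-5 triangle: 5
```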
NET PRESENT VALUE
P_t = P_0 \cdot \theta^t, \quad P_0 = P_t \cdot \theta^{-t}

SUM OF SQUARES (explained)
F_{ratio} = \frac{SS_{regression} \cdot (N - coef)}{(coef - 1) \cdot SS_{residual}}
MARKOV DECISION PROCESS
U_s = R_s + \delta \max_a \sum_{s'} T(s, a, s') \cdot U(s')
\pi_s = \arg\max_a \sum_{s'} T(s, a, s') \cdot U(s')
Q_{s,a} = R_s + \delta \sum_{s'} T(s, a, s') \cdot \max_{a'} Q(s', a')
Q-learning: Q_{s,a} \leftarrow R_s + \delta \max_{a'} Q(s', a')

STANDARD ERROR ESTIMATE (SEE)
SEE = \sqrt{\frac{SumSquaredErrors}{n - 2}} = \sqrt{\frac{\sum (y - \hat{y})^2}{n - 2}}
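A tabular Q-learning sketch of the update Q(s,a) ← R + δ·max_{a'} Q(s',a') on an assumed 1-D corridor of 4 states, where only reaching the last state pays reward 1.

```python
import random

random.seed(0)
n_states, actions, delta = 4, [-1, +1], 0.9
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

for _ in range(500):
    s = random.randrange(n_states - 1)           # any non-terminal state
    a = random.choice(actions)
    s2 = min(max(s + a, 0), n_states - 1)        # deterministic move
    r = 1.0 if s2 == n_states - 1 else 0.0
    Q[(s, a)] = r + delta * max(Q[(s2, ap)] for ap in actions)

# Greedy policy: best action per non-terminal state
best = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)}
```

With deterministic transitions the values converge to 0.81, 0.9, 1.0 along the path, so the greedy policy always moves right.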
PROBABILITY (coins)
P(A \cup B \cup C)_{not\ mutually\ exclusive} = P(A) + P(B) + P(C) - P(A \cap B) - P(A \cap C) - P(B \cap C) + P(A \cap B \cap C)

COMPLEMENTARY EVENT
P(\bar{A}) = 1 - P(A)

FREQUENTIST
\lim_{n \to \infty} \frac{m}{n} = \frac{successes}{all\ possibilities} = \frac{events}{sample\ space}

MARGINAL PROBABILITY
P(A = a), \quad P(a) = \frac{P(a)}{P(A)}
AXIOMATIC
P(A) \geq 0, \quad \sum P(A, B, C) = 1

CONDITIONAL PROBABILITY (A and B)
P(A|B) = \frac{P(A \cap B)}{P(B)}

PROBABILITY THEOREMS
Union (A or B):
P(A \cup B)_{mutually\ exclusive} = P(A) + P(B)
P(A \cup B)_{not\ mutually\ exclusive} = P(A) + P(B) - P(A \cap B)
P(A|B)_{independent} = P(A)
BAYES (52 cards, cancer)
P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(B|A) \cdot P(A)}{P(B)}

INTEGRALS
\int_a^b f(x)\,dx = F(b) - F(a)
\int_1^2 x^2\,dx = \frac{1}{3}x^3 \Big|_1^2 = \frac{1}{3}(2^3 - 1^3)

CONSTANT-MULTIPLE RULE
\int c \cdot f'(x)\,dx = c \int f'(x)\,dx

SUM RULE
\int [f(x) + g(x)]\,dx = \int f(x)\,dx + \int g(x)\,dx

BINOMIAL DISTRIBUTION (0/1 success)
P(D) = \binom{sample\ space}{successes} \cdot P(s)^k \cdot (1 - P(s))^{n-k}
P(Success) = \sum_{s \in S} P(s)
P(D) = \frac{c!}{a!\,(c - a)!} \cdot P(a)^a \cdot (1 - P(a))^{c-a}
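The binomial formula above, evaluated for an illustrative case: the chance of exactly 2 heads in 4 fair-coin tosses.

```python
from math import comb

# P(k successes in n trials) = C(n,k) * p^k * (1-p)^(n-k)
def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

p_two_heads = binom_pmf(2, 4, 0.5)               # 6/16
total = sum(binom_pmf(k, 4, 0.5) for k in range(5))  # pmf sums to 1
```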
INTEGRATION
\sum f'(x)\,\Delta x, \quad \Delta x \to 0, \ N \to \infty

DIFFERENTIATION
\lim_{\Delta x \to 0} \frac{f(a + \Delta x) - f(a)}{\Delta x}

TOTAL PROBABILITY (urns)
P(B) = \sum P(A \cap B) = \sum P(A) \cdot P(B|A)

PROBABILITY OF k SUCCESSES IN n TRIALS
P(k\ in\ n) = \binom{n}{k} \cdot p^k \cdot (1 - p)^{n-k}
LINEAR ALGEBRA

MATRIX x VECTOR (rows x columns; columns of A = rows of the vector)
[1 2 3]   [1]   [1*1 + 2*2 + 3*0]   [5]
[1 4 5] * [2] = [1*1 + 4*2 + 5*0] = [9]
[0 3 2]   [0]   [0*1 + 3*2 + 2*0]   [6]
(equivalently, 1*col1 + 2*col2 + 0*col3)

[1 3] * [1; 2] = 1*1 + 3*2 = 7

ADDITION (element-wise, same shape)
[1 2]   [2 2]   [3 4]
[4 3] + [5 3] = [9 6]

SCALAR MULTIPLY
    [2 2]   [6  6]
3 * [5 3] = [15 9]

MATRIX MULTIPLICATION (A_{m,n} * B_{n,p} = C_{m,p}; columns of A = rows of B)
C_{2,3} = 2nd row of A x 3rd column of B
[1 2 3]   [0 3]   [8  24]
[0 4 5] * [1 3] = [14 37]
          [2 5]

ROW VECTOR x MATRIX
          [1 2 3]
[1 2 0] * [4 5 6] = [9 12 15]
          [7 8 9]

ELIMINATION (left-multiplying by an elementary matrix)
[ 1 0 0]   [1 2 1]   [1 2  1]
[-3 1 0] * [3 8 1] = [0 2 -2]
[ 0 0 1]   [0 4 1]   [0 4  1]
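The row-times-column rule can be sketched in plain Python; the matrices reuse the 2x3-by-3x2 example above.

```python
# C[i][j] = sum over k of A[i][k] * B[k][j]
def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

A = [[1, 2, 3],
     [0, 4, 5]]
B = [[0, 3],
     [1, 3],
     [2, 5]]
C = matmul(A, B)   # [[8, 24], [14, 37]]
```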
PERMUTATION
LEFT multiply = exchange rows:
[0 1]   [a b]   [c d]
[1 0] * [c d] = [a b]
RIGHT multiply = exchange columns:
[a b]   [0 1]   [b a]
[c d] * [1 0] = [d c]

PROPERTIES
Not commutative: A*B != B*A
Associative: (A*B)*C = A*(B*C)
Inverse (square matrices only): A^{-1} != 1/A, \quad A^{-1} * A = I

IDENTITY
[1 0]   [1 0 0]
[0 1] , [0 1 0]
        [0 0 1]

DIAGONAL
[2 0 0]
[0 2 0]
[0 0 2]

DETERMINANT
|1 3|
|4 2| = 1*2 - 3*4 = -10

3x3 (rule of Sarrus, repeating the first two columns):
|1 4 7| 1 4
|2 5 8| 2 5 = 1*5*9 + 4*8*3 + 7*2*6 - 7*5*3 - 1*8*6 - 4*2*9
|3 6 9| 3 6
TRANSPOSE
A = [1 2 3]          [1 4]
    [4 5 6]    A^T = [2 5]
                     [3 6]

DEMAND ELASTICITY
\rho = \frac{(Q_1 - Q_0)}{(Q_1 + Q_0)} \cdot \frac{(P_1 + P_0)}{(P_1 - P_0)}