
Discourse presented at the Faculty of Information & Communication Technology, UTeM, 25 March 2009

IS DATA NORMALIZATION AND DISCRETIZATION

NECESSARY FOR INTELLIGENT CLASSIFIERS?

Associate Professor Dr. Siti Mariyam Shamsuddin


Soft Computing Research Group
Universiti Teknologi Malaysia, JOHOR

MIND EXERCISE
MIND #1

MIND #2

KEYWORDS

INTELLIGENT CLASSIFIER
ARTIFICIAL NEURAL NETWORK

DATA NORMALIZATION

DATA DISCRETIZATION
ACCURACY

WHAT IS AN INTELLIGENT CLASSIFIER?
An Intelligent Classifier:
An autonomous system which, on analysis of its target problem, automatically selects and implements a suitable classification of its input data using dynamic classifiers.

These are processes that modify themselves over time
• to improve performance,
• to exploit unsupervised as well as supervised pattern recognition, and
• to apply reinforcement learning techniques.

The resulting classification can be used not only for data-mining purposes or for action selection in agent environments, but also in other fields.

MATERIAL MOTIVATION
IN-HOUSE PRODUCTS
1. ImmA-D
2. Bio-Cache

WHAT IS THE SPIRITUAL MOTIVATION BEHIND ESTABLISHING THE CONCEPT OF THE INTELLIGENT CLASSIFIER?
ARTIFICIAL NEURAL NETWORKS (ANN)…cont

Biological Brain vs ANN

[Figure: a biological neuron (soma/nucleus, dendrites, axon) compared side-by-side with an ANN (input layer, hidden layer, output layer, and weights)]

MOTIVATION…

Why have the structure of the HUMAN BODY (cells, emotion, perception, spiritual??) and LIVING ORGANISMS become the CATALYST for coining the term INTELLIGENCE?

THE BRAIN'S NERVOUS SYSTEM

ARTIFICIAL NEURAL NETWORK

1. Neurons, Dendrites and Pattern Classification
2. The cerebellum as a liquid state machine
3. Dendritic Computing

THE BODY'S IMMUNE SYSTEM

ARTIFICIAL IMMUNE SYSTEM (AIS)

NEGATIVE SELECTION (2007)

CROSS-DISCIPLINE AIS (2008)

MEMBRANE SYSTEM
MEMBRANE COMPUTING

MOTIVATION…

INTELLIGENT WEAPON AND SOLDIER OF THE FUTURE


One of the greatest advantages an army can have over an adversary is technological superiority. Examples in history abound. Imagine how it was when the first sword of iron met one of bronze on the field of combat, or the first chariots ran down foot soldiers, or the first airplane destroyed something on the ground. A historical study of technology is also one of warfare. Currently, the West's best troops have small, powerful automatic weapons, night-vision capability, individual radios, and some of the best training on the planet. Our forces are enhanced (the military calls them "force multipliers") by devices ranging from battlefield artillery computers to radio jammers to tanks with computers that check windage, distance, and the relative motions of both the shooter and the target for a near-perfect shot.

The American military is currently researching next-generation weapons and infrastructure under programs with names like Land Warrior and the Joint Expeditionary Digital Information (JEDI) project. Tomorrow's soldiers will have individual satellite location and communications, and a wearable computer with a head-up monitor providing data on all their teammates. They will be able to link to a gun camera, enabling them to fire it around corners without exposing their bodies. (I believe that this is the primary reason the Iridium satellite system was rescued.) The next-generation infantry weapon will also have a smart grenade launcher. It will initially be distributed at the squad level, but will eventually get into every soldier's hands, at least in the elite units (unless they choose another weapon based on mission requirements).

(Al-Quran – Surah At-Tin (95) : 4)

‫ﺑﺴﻢ اﷲ اﻟﺮﺣﻤﻦ اﻟﺮﺣﻴﻢ‬


‫وﻟﻘﺪ ﺧﻠﻘﻨﺎ اﻻﻧﺴﺎن ﻓﻲ اﺣﺴﻦ ﺗﻘﻮﻳﻢ‬
‫ﺻﺪق اﷲ اﻟﻌﻈﻴﻢ‬
“Surely WE created man of the BEST stature.”



PREREQUISITE KNOWLEDGE IN ARTIFICIAL INTELLIGENCE

WHAT IS INTELLIGENCE?
There are probably as many definitions of intelligence as there are experts who study it. Simply put, however, intelligence is the ability to learn about, learn from, understand, and interact with one's environment. This general ability consists of a number of specific abilities:

• Adaptability to a new environment or to changes in the current environment
• Capacity for knowledge and the ability to acquire it
• Capacity for reason and abstract thought
• Ability to comprehend relationships
• Ability to evaluate and judge
• Capacity for original and productive thought

TYPES OF INTELLIGENCE

INTELLECTUAL INTELLIGENCE (INTELLIGENCE QUOTIENT - IQ), by William Stern, 1912.
The main form of intelligence is measured against specific scores.
It typically involves mathematical and language aspects.

EMOTIONAL INTELLIGENCE (EQ), by Daniel Goleman, 1996.
The main form of intelligence is measured by the strength of emotional aspects such as empathy, self-motivation, self-awareness and social cooperation.

MULTIPLE INTELLIGENCE (MI), by Howard Gardner, 1999.
The main form of intelligence is measured by the strength of cognitive aspects such as music and sports, as well as linguistic, mathematical, visual, kinesthetic, interpersonal and intrapersonal aspects.

Thus, in the context of RELIGION, it is MULTIPLE INTELLIGENCE that is emphasized:

THE DEVELOPMENT OF QUALITY HUMAN CAPITAL

WHAT IS ARTIFICIAL INTELLIGENCE (AI)?

AI is a broad field, and means different things to different people. It is concerned with getting computers to perform tasks that require human intelligence, i.e., tasks that demand complex and sophisticated reasoning processes and knowledge.

If John McCarthy were to coin a new phrase for “artificial intelligence” today, he would probably use “computational intelligence.”

(IEEE Intelligent Systems, 2002)

HOWEVER,

L.A. Zadeh claimed that computational intelligence is actually Soft Computing techniques
(1994)

ROULETTE WHEEL OF AI

[Figure: a roulette wheel of AI-related fields: SOFT COMPUTING, COMPUTATIONAL INTELLIGENCE, NATURAL COMPUTING, MEMBRANE COMPUTING, BIO-INSPIRED COMPUTING, QUANTUM COMPUTING, INTELLIGENT SYSTEMS, ARTIFICIAL INTELLIGENCE, DNA COMPUTING, CELL COMPUTING, and OTHERS]


INTELLIGENT SYSTEMS

Computational systems and methods which simulate aspects of intelligent behaviour. The intention is to learn from nature and human performance in order to build more powerful systems. The aim is to learn from cognitive science, neuroscience, biology, engineering, and linguistics for building more powerful computational system architectures.

ARTIFICIAL NEURAL NETWORK


ARTIFICIAL NEURAL NETWORKS (ANN)…cont

Biological Brain vs ANN

[Figure: a biological neuron (soma/nucleus, dendrites, axon) compared side-by-side with an ANN (input layer, hidden layer, output layer, and weights)]

ARTIFICIAL NEURAL NETWORK

LOOSE INSPIRATION FROM THE BIOLOGICAL NERVOUS SYSTEM

It is estimated that there are on the order of 10-500 BILLION neurons in the human cortex, with 60 TRILLION synapses. The neurons are arranged in approximately 1,000 main modules, each module having about 500 neural networks.

How the Brain Works

• Signals are propagated from neuron to neuron by electrochemical reactions. Chemical transmitter substances are released from the synapses and enter the dendrite, raising or lowering the electrical potential of the cell body. When the potential reaches a threshold, an electrical pulse or action potential is sent down the axon.
• Synapses that increase the potential are excitatory. Those that decrease the potential are inhibitory.
• Synaptic connections exhibit plasticity: long-term changes in the strength of connections in response to patterns of stimulation.
• Over time, neurons may form new connections with other neurons, and sometimes entire collections of neurons can migrate from one place to another.

Biological Neuron Structure

• Neurons (~100 billion): nerve cells that generate an output dependent on the sum of their inputs.
• Connections: synapses (~100 trillion), axons & dendrites.
• A very complex system: current supercomputers cannot model a single biological neuron!

[Figure: a biological neuron with its axon, dendrites, cell body, and synapses labelled]

Biological Neuron

[Figure: two connected biological neurons, with the soma, dendrites, axons, and synapses labelled]

A Typical Neuron

[Figure: a typical neuron, with the dendrites and nucleus labelled]

HENCE, THE BIOLOGICAL TERMS


OF BRAIN SYSTEMS ARE
INTRODUCED FOR DESIGNING
THE ARTIFICIAL BRAIN SYSTEM

Biological Nervous System Artificial NN

Soma Neuron
Dendrite Input
Axon Output
Synapse Weight

THE MODELLING OF THE BIOLOGICAL NERVOUS SYSTEM INTO ARTIFICIAL NEURAL NETWORKS

• Information processing occurs at many neurons.
• Signals are passed between neurons over connection links.
• Each connection link has an associated weight which multiplies the signal transmitted; each neuron applies an activation function to its net input to determine its output signal.

An ANN is further characterized by:
• its method of determining the weights on the connections (learning algorithm),
• its activation function, and
• its applications.

Biological Nervous System    Artificial NN
Soma                         Neuron
Dendrite                     Input
Axon                         Output
Synapse                      Weight

[Figure: the biological-to-ANN mapping overlaid on a network diagram: soma/neuron (nucleus), dendrites/input, axon/output, and synapses/weights across the input, hidden, and output layers]

ARTIFICIAL NEURAL NETWORK

ARCHITECTURE
OF ANN, I.E.,
MULTILAYER
PERCEPTRON

ANN NOTATIONS

NEUROCOMPUTING, or
BRAIN-LIKE COMPUTATION, or
NEUROCOMPUTATION,
but more often referred to as ARTIFICIAL NEURAL NETWORK,
can be defined as:

Definition of ANN

Information processing systems designed with inspiration taken from the nervous system, specifically the brain, and with particular emphasis on problem solving.

S. Haykin (1991) defined an ANN as:

A massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use.

Nomenclature of ANN
Architectures
– Single layer versus multilayer
– Feedforward versus recurrent

Supervised Learning

Supervised training is accomplished by presenting a sequence of training vectors, each with an associated target output vector. Multilayer ANNs can be trained to perform nonlinear mappings from an n-dimensional space of input vectors to an m-dimensional output space.

Unsupervised Learning

The aim is to discover patterns or features in the input data with no assistance from an external structure. It basically performs a clustering of the training patterns.

Reinforcement Learning

The aim is to reward the neuron for good performance, and to penalize the neuron for bad performance.

A Simple Neuron

[Figure: a simple artificial neuron. Input signals (raw data or outputs of other neurons) arrive through weights w1 … wn, the neuron computes its activation level, and the output signal Y becomes the final solution or the input of other neurons. An inset maps the parts onto a biological neuron: synapses, dendrites, soma, axon.]

CALCULATING THE NET INPUT

The net input signal to an artificial neuron is usually computed as the weighted sum of all input signals,

net = Σ_{i=0}^{I} x_i w_i

Artificial neurons that compute the net input signal as the weighted sum of input signals are referred to as summation units (SU).
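As a minimal sketch of a summation unit (the input and weight values below are illustrative, not from the slides):

```python
# Net input of a summation unit: the weighted sum of its input signals.
# By convention x[0] can be a bias input fixed at 1 (an assumption here).
def net_input(x, w):
    """Compute net = sum_i x_i * w_i."""
    assert len(x) == len(w)
    return sum(xi * wi for xi, wi in zip(x, w))

net = net_input([1.0, 0.5, 0.2], [0.1, 0.4, -0.3])  # 1*0.1 + 0.5*0.4 + 0.2*(-0.3)
print(round(net, 6))  # 0.24
```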

Activation functions of a neuron:

• Identity function
• Binary step function
• Bipolar step function
• Binary sigmoid function
• Bipolar sigmoid function
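The listed activation functions can be sketched in code (the threshold and gain defaults are illustrative assumptions):

```python
import math

# Common neuron activation functions.
def identity(net):
    return net

def binary_step(net, theta=0.0):        # outputs {0, 1}
    return 1 if net >= theta else 0

def bipolar_step(net, theta=0.0):       # outputs {-1, 1}
    return 1 if net >= theta else -1

def binary_sigmoid(net, lam=1.0):       # smooth, range (0, 1)
    return 1.0 / (1.0 + math.exp(-lam * net))

def bipolar_sigmoid(net, lam=1.0):      # smooth, range (-1, 1)
    return 2.0 / (1.0 + math.exp(-lam * net)) - 1.0

print(binary_step(0.3), binary_sigmoid(0.0), bipolar_sigmoid(0.0))  # 1 0.5 0.0
```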



ARTIFICIAL NEURAL NETWORK


ARCHITECTURE DEVELOPMENT

SINGLE LAYER PERCEPTRON

The aim of the perceptron is to classify inputs x1, x2, ..., xn into one of two classes, say A1 and A2. The decision boundary

Σ_{i=1}^{n} x_i w_i − θ = 0 (θ: threshold)

defines a linearly separable function, and the output is computed as

y = f( Σ_i w_i x_i − θ )

[Figure: inputs x1, x2, x3 with weights w1, w2, w3 feeding a single output unit]
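A sketch of the perceptron decision rule above (the weights and threshold are illustrative assumptions):

```python
# Classify an input vector into class A1 if the weighted sum exceeds
# the threshold theta, and into class A2 otherwise.
def perceptron_classify(x, w, theta):
    net = sum(wi * xi for wi, xi in zip(w, x))
    return "A1" if net - theta > 0 else "A2"

# Hypothetical weights and threshold, chosen for illustration:
print(perceptron_classify([1.0, 0.0], [0.6, 0.6], 0.5))  # A1 (0.6 > 0.5)
print(perceptron_classify([0.0, 0.0], [0.6, 0.6], 0.5))  # A2
```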

The Perceptron Model I

• Frank Rosenblatt's model consists of only an input and an output layer, with no hidden layer.

• Perceptron Learning Theorem:

  – If a set of patterns is learnable by a perceptron, then the perceptron is guaranteed to find the appropriate weight set.
  – It uses a threshold function as the transfer function.

The Perceptron Model II

• An object will be classified by neuron j into Class A if

Σ_i w_ij x_i > θ

where w_ij is the weight from neuron i to neuron j, x_i is the input from neuron i, and θ is the threshold on neuron j.

If not, the object will be classified as Class B.

• The weights of a perceptron model are adjusted by

w_ij(t + 1) = w_ij(t) + Δw_ij

where w_ij(t) is the weight from neuron i to neuron j at time t (the t-th iteration) and Δw_ij is the weight adjustment.

The Perceptron Model III


The weight change is computed by using the delta rule:

Δw_ij = η δ_j x_i

where η is the learning rate (0 < η < 1) and δ_j is the error at neuron j:

δ_j = T_j − O_j

where T_j is the target output value and O_j is the actual output of the network at neuron j.

The process is repeated iteratively until convergence is achieved. Convergence is the process whereby the errors are minimized to an acceptable level.
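The delta-rule update can be sketched as a small training loop. This is an illustrative assumption, not the slides' own code: a fixed threshold of 0.5, a learning rate of 0.1, and the linearly separable AND function as the training set:

```python
# Perceptron training with the delta rule: w <- w + eta * (T - O) * x.
def step(net, theta=0.5):
    return 1 if net > theta else 0

def train_perceptron(patterns, eta=0.1, epochs=50):
    w = [0.0, 0.0]                                    # initial weights
    for _ in range(epochs):
        for x, target in patterns:
            out = step(sum(wi * xi for wi, xi in zip(w, x)))
            delta = target - out                      # delta_j = T_j - O_j
            w = [wi + eta * delta * xi for wi, xi in zip(w, x)]
    return w

and_patterns = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = train_perceptron(and_patterns)
print([step(w[0] * x1 + w[1] * x2) for (x1, x2), _ in and_patterns])  # [0, 0, 0, 1]
```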

Problems with the Perceptron

• Can only learn linear patterns, thus severely limiting the type of problems it can solve.
• Cannot solve any 'interesting', i.e. linearly non-separable, problems, e.g. the exclusive-or function (XOR), the simplest non-separable function.

X1  X2  Output
0   0   0
0   1   1
1   0   1
1   1   0

A plot of the Exclusive-Or function showing that the two groups of inputs (represented by squares and circles) cannot be separated with a single line.

[Figure: XOR inputs plotted in the (X1, X2) plane together with a single candidate decision line θ = w1x1 + w2x2]

A Possible Solution to the XOR Problem by Using Two Lines to Separate the Plane into Three Regions

[Figure: two parallel lines, crossing the axes near 0.5 and 1.5, divide the (X1, X2) plane into three regions; the middle region has Output = 1 and the two outer regions have Output = 0]
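The two-line idea can be realized with a hand-crafted two-layer network of step units. This is a sketch: the thresholds 0.5 and 1.5 follow the figure, while the output-layer weights are illustrative assumptions:

```python
def step(net):
    return 1 if net > 0 else 0

# Hidden unit h1 fires when x1 + x2 > 0.5 (first line);
# hidden unit h2 fires when x1 + x2 > 1.5 (second line).
# The output fires only in the middle region: h1 on, h2 off.
def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)
    h2 = step(x1 + x2 - 1.5)
    return step(h1 - 2 * h2 - 0.5)

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```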

HOW TO SOLVE THIS???



Multilayer Perceptron (MLP)

• A multilayer perceptron (MLP) is a feedforward neural network with one or more hidden layers.
• The network consists of an input layer of source neurons, at least one middle or hidden layer of computational neurons, and an output layer of computational neurons.
• The input signals are propagated in a forward direction on a layer-by-layer basis.

Multilayer perceptron with two hidden layers

[Figure: input signals flow from the input layer through the first and second hidden layers to the output layer, which emits the output signals]

MULTILAYER PERCEPTRON LEARNING MECHANISM

✓ Learning in a multilayer network proceeds the same way as for a perceptron.

✓ A training set of input patterns is presented to the network.

✓ The network computes its output pattern, and if there is an error (in other words, a difference between the actual and desired output patterns), the weights are adjusted to reduce this error.

MLP Learning Sequence

The following is an algorithm for a 3-layer MLP network:

1) A data pattern, consisting of input and output data values for a specific event in the study domain, is presented to the network.

2) Input parameter information is scaled in the input layer:
   – one neuron per input
   – scaling is done according to a linear scaling function; the scaling range is generally 0 to 1

3) Output information from each input layer neuron is weighted and transferred to the hidden layer.

4) Using the propagation rule, the incoming weighted inputs to each hidden layer neuron are summed.

5) The sum is processed through the hidden layer activation function:
   – the resulting output is the new state of activation for the neuron.

6) Output information from each hidden layer neuron is weighted and transferred to the output layer.

7) Using the propagation rule, the incoming weighted inputs to the output layer neuron are summed.

8) The sum is processed through the output layer activation function and is scaled to the output parameter's numeric range.

9) The resulting model-predicted value of the output parameter is compared to the actual value of the output parameter.
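Steps 1) to 9) can be sketched as a forward pass in code. The network sizes, weights, biases, and input values below are illustrative assumptions, not values from the slides:

```python
import math

# A forward pass through a 3-layer MLP (input -> hidden -> output).
def sigmoid(net, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * net))

def layer_forward(inputs, weights, biases):
    """weights[j][k] is the weight from input k to neuron j of this layer."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.2, 0.7]                          # scaled inputs (step 2)
w_hidden = [[0.5, -0.4], [0.3, 0.8]]    # steps 3-5: weighted sums + activation
b_hidden = [0.1, -0.2]
w_output = [[1.0, -1.0]]                # steps 6-8: same for the output layer
b_output = [0.05]

hidden = layer_forward(x, w_hidden, b_hidden)
output = layer_forward(hidden, w_output, b_output)
print(len(hidden), len(output))  # 2 1
```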

Apply Inputs From A Pattern

Apply the value of each input parameter to each input node.

[Figure: feedforward flow from the input nodes to the outputs]

Calculate Outputs For Each Neuron Based On The Pattern

The output from neuron j for pattern p is O_pj, where

O_pj(net_j) = 1 / (1 + e^(−λ·net_j))

and

net_j = bias · W_bias + Σ_k O_pk W_jk

k ranges over the input indices, W_jk is the weight on the connection from input k to neuron j, and λ is the gain term.

Is Data Normalization and Discretization Necessary for Classification?

SAMPLE OF DATA WITH LARGER VALUES



POST-NORMALIZATION

THE EFFECT OF PRE-NORMALIZATION AND POST-NORMALIZATION ON INTELLIGENT CLASSIFIERS
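As a sketch of the normalization step under discussion (min-max scaling to [0, 1] is assumed here; the column of large values is illustrative):

```python
def min_max_normalize(values, new_min=0.0, new_max=1.0):
    """Linearly rescale a column of values into [new_min, new_max]."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # avoid division by zero on a constant column
        return [new_min for _ in values]
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo) for v in values]

raw = [120.0, 455.0, 2010.0, 890.0]    # a sample attribute with large values
print(min_max_normalize(raw))
```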

POST-NORMALIZATION EFFECT



POST-NORMALIZATION EFFECT



POST-NORMALIZATION EFFECT



POST-NORMALIZATION EFFECT



DATA DISCRETIZATION

THE EFFECT OF DATA DISCRETIZATION

What is Discretization?



DATA DISCRETIZATION

Discretization can be described as an operation that searches for a partition of an attribute's domain into intervals and merges the attribute's values into their corresponding intervals.

The discretization process involves searching for the cut points that define these intervals. All values falling within a given interval are mapped to the same value, i.e., the corresponding attribute values are converted into a single numeric value.

Discretization Process

Pre-discretization              Post-discretization

attr  a    b    d               attr  a   b   d
u1    0.6  0.5  0               u1    0   0   0
u2    0.8  0.7  1        ⇒      u2    1   0   1
u3    0.4  0.4  0               u3    0   0   0
u4    0.6  0.9  1               u4    0   1   1
u5    0.8  0.9  1               u5    1   1   1
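The example above can be reproduced with a simple cut-point discretizer. This is a sketch: cut points of 0.7 for attribute a and 0.8 for attribute b are assumptions consistent with the example (the decision attribute d is left unchanged):

```python
def discretize(value, cuts):
    """Map a continuous value to the index of the interval it falls into."""
    return sum(1 for c in cuts if value >= c)

a_cuts, b_cuts = [0.7], [0.8]          # assumed cut points for attributes a, b
rows = [(0.6, 0.5), (0.8, 0.7), (0.4, 0.4), (0.6, 0.9), (0.8, 0.9)]
print([(discretize(a, a_cuts), discretize(b, b_cuts)) for a, b in rows])
# [(0, 0), (1, 0), (0, 0), (0, 1), (1, 1)]
```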

Stock Data

Discretization of Stock Data (PDF)

PERFORMANCE MEASUREMENT
OF INTELLIGENT CLASSIFIER

PERFORMANCE MEASUREMENT

PERFORMANCE MEASUREMENT OF
ARTIFICIAL NEURAL NETWORK

PERFORMANCE MEASUREMENT
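The accuracy metric named in the keywords can be sketched as the fraction of correctly classified test patterns (an assumption about the measure intended; the predictions and targets below are illustrative):

```python
def accuracy(predicted, actual):
    """Classification accuracy: proportion of predicted labels matching targets."""
    assert len(predicted) == len(actual)
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

print(accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))  # 0.8
```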

THANK YOU
