AI2 Lecture 6 Notes
Author: Emad Abdelhamied
Course: English Commerce
Institution: Mansoura University




Single-Input Neuron (figure omitted)

Multiple-Input Neuron (figure omitted)

Layer of Neurons (figure omitted)

Multilayer Network (figure omitted)

Using a Neural Network
§ Choose a neural network architecture
  ► Feed-forward network
  ► Recurrent network
§ Specify the network architecture
  ► How many layers?
  ► How many neurons in each layer?
§ Choose a learning algorithm
  ► Specify the parameters of the learning algorithm
  ► Decide how to train the network
§ Testing
  ► A trained network is tested on data that were not used during training
  ► A network's ability to process data outside the training set is called generalization

Designing a Neural Network
1. Number of network inputs (R) = number of problem inputs
2. Number of neurons in the output layer (S) = number of problem outputs
• A single neuron can classify input vectors into two categories.
• S neurons in the output layer can classify input vectors into 2^S categories.
• Example: 2 neurons can classify input vectors into 4 categories:
  First output neuron 0, second output neuron 0 (00) → Rectangle
  First output neuron 0, second output neuron 1 (01) → Triangle
  First output neuron 1, second output neuron 0 (10) → Square
  First output neuron 1, second output neuron 1 (11) → Circle
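The output-coding idea above can be sketched in a few lines of Python. This is only an illustration of the mapping from S = 2 output neurons to 2^S = 4 categories; the category names follow the example in the notes, and the `decode` helper is a hypothetical name, not part of any library.

```python
# Mapping from a pair of binary neuron outputs to one of 2**2 = 4 categories,
# following the rectangle/triangle/square/circle example in the notes.
CATEGORIES = {
    (0, 0): "Rectangle",
    (0, 1): "Triangle",
    (1, 0): "Square",
    (1, 1): "Circle",
}

def decode(outputs):
    """Map a tuple of S=2 output-neuron values to its category."""
    return CATEGORIES[tuple(outputs)]
```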

• If we have two classes:
  § We can use 2 output neurons (one for each class), or
  § We can use one output neuron.
• Example: with 2 classes, class (X) and class (O), use one neuron:

  1 → belongs to X
  0 (or -1) → belongs to O

• How to determine the number of neurons in the input layer?
  Number of neurons in the input layer = number of values in one pattern,
  not the number of patterns.

Example
We have a classification problem with four classes of input vectors, where each input vector contains two values.

Number of patterns = 4
Number of values in a pattern = 2
Number of input neurons = 2
Number of output neurons = 2 (since 2^2 = 4 categories)

Net Input, Learning, Testing, Data
Consider the following figure (figure omitted).

§ Net input:

  y_in = b + Σ_{i=1}^{n} x_i w_i
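The net-input formula can be written directly as a small Python helper. This is a minimal sketch; the function name `net_input` is chosen here for illustration.

```python
def net_input(x, w, b):
    """Net input of a single neuron: y_in = b + sum_i x_i * w_i."""
    return b + sum(xi * wi for xi, wi in zip(x, w))
```

For example, with weights [1, 1], bias 1, and input [1, 0], the net input is 1*1 + 0*1 + 1 = 2.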

Training in classification
o Given input patterns and output patterns (targets)
o Compute the weights using a training algorithm
o Use all input patterns in training
Training on all input patterns one time is called one epoch (iteration).

Testing in classification
o Given the weights from training
o Test using each of the training patterns, or other patterns
o Testing means computing the output

How to compute the output of a neuron?
ü Compute the net input = Σ x_i w_i
ü Apply the activation function using the given threshold θ

Data representation
ü Binary representation: 0 or 1
ü Bipolar representation: -1 or 1

Bipolar representation gives better results than binary representation (with binary data, a 0 input or a 0 target produces no weight update under the Hebb rule).


Hebb Net Algorithm
§ Hebb rule: a supervised training algorithm, so it is used for classification.
§ Hebbian learning law: the weight w_ij increases only if the outputs of both connected units, x_i and y_j, have the same sign.

§ Algorithm:

Step 0. Initialization (weights and bias): W = 0, b = 0
Step 1. For each input training vector s : t, do steps 2-4
        /* s is the input pattern, t is the target output of the sample */
Step 2. Set the input units: x = s
Step 3. Set the output unit to the target: y = t
Step 4. Update the weights: W(new) = W(old) + ΔW, where ΔW = x·t
        Update the bias: b(new) = b(old) + Δb, where Δb = t
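The steps above can be sketched as a short Python function. This is a minimal illustration of one epoch of Hebb training, assuming list-based vectors; the name `hebb_train` is chosen here for illustration.

```python
def hebb_train(patterns, targets):
    """One epoch of the Hebb rule over all (input, target) training pairs.

    Step 0: W = 0, b = 0.
    Step 4 per pair: W(new) = W(old) + x*t, b(new) = b(old) + t.
    """
    n = len(patterns[0])
    w = [0] * n          # Step 0: initialize weights to zero
    b = 0                # Step 0: initialize bias to zero
    for x, t in zip(patterns, targets):            # Step 1: each pair s : t
        w = [wi + xi * t for wi, xi in zip(w, x)]  # Step 4: delta_w = x*t
        b = b + t                                  # Step 4: delta_b = t
    return w, b
```

Running this on the bipolar AND data used later in the notes gives W = [2, 2] and b = -2.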


§ Activation function
o If binary output: binary step function

  y = f(net) = 1 if net ≥ θ, 0 if net < θ

o If bipolar output: bipolar sign function

  y = f(net) = 1 if net ≥ θ, -1 if net < θ
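Both activation functions are one-liners in Python. A minimal sketch, with the threshold θ passed as a parameter (defaulting to 0, as in the worked examples below):

```python
def binary_step(net, theta=0):
    """Binary step function: 1 if net >= theta, else 0."""
    return 1 if net >= theta else 0

def bipolar_step(net, theta=0):
    """Bipolar sign function: 1 if net >= theta, else -1."""
    return 1 if net >= theta else -1
```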

1) Hebb net for the AND function (binary input and output)

Input (x1, x2, 1)    Target (y = t)
(1, 1, 1)            1
(1, 0, 1)            0
(0, 1, 1)            0
(0, 0, 1)            0

Initial weights: w1 = 0, w2 = 0, b = 0

Weight update

  T = [1 0 0 0],   X = ( 1 1
                         1 0
                         0 1
                         0 0 )

  W_new = W_old + ΔW,   ΔW = T·X,   W_old = [0 0]

  W_new = T·X = [1 0 0 0] ( 1 1
                            1 0
                            0 1
                            0 0 ) = [1 1]

Bias update

  b_new = b_old + Δb,   Δb = T·(1 1 1 1)^T,   b_old = 0

  b_new = [1 0 0 0]·(1 1 1 1)^T = 1

Testing the second pattern (1, 0):

  W = [1 1],   b = 1,   X = [1 0],   t = 0

  y = ActivationFunction(W·X^T + b)
  y = ActivationFunction([1 1]·(1 0)^T + 1)
  y = ActivationFunction(2) = 1

  t ≠ y → incorrect classification
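The binary AND example can be reproduced in a few lines of Python. This sketch assumes a binary step function with threshold θ = 0; it shows why binary data is a poor fit for the Hebb rule: only the first pattern (target 1) contributes to the update, and the test on (1, 0) then fails.

```python
# Binary AND training data: (x1, x2) patterns and their targets.
patterns = [(1, 1), (1, 0), (0, 1), (0, 0)]
targets = [1, 0, 0, 0]

# One epoch of the Hebb rule: delta_w = x*t, delta_b = t.
w = [0, 0]
b = 0
for x, t in zip(patterns, targets):
    w = [wi + xi * t for wi, xi in zip(w, x)]  # zero update whenever t = 0
    b += t
# Only the first pattern changed anything, so w = [1, 1] and b = 1.

# Test the second pattern (1, 0), whose target is 0.
net = sum(xi * wi for xi, wi in zip((1, 0), w)) + b  # net = 1 + 0 + 1 = 2
y = 1 if net >= 0 else 0                             # binary step, theta = 0
# y = 1 but t = 0: incorrect classification, as in the notes.
```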

2) Hebb net for the AND function (bipolar input and output)

Input (x1, x2, 1)     Target (y = t)
(1, 1, 1)             1
(1, -1, 1)            -1
(-1, 1, 1)            -1
(-1, -1, 1)           -1

Initial weights: w1 = 0, w2 = 0, b = 0


  T = [1 -1 -1 -1],   X = (  1  1
                             1 -1
                            -1  1
                            -1 -1 )

Weight update

  W_new = W_old + ΔW,   ΔW = T·X,   W_old = [0 0]

  W_new = T·X = [1 -1 -1 -1] (  1  1
                                1 -1
                               -1  1
                               -1 -1 ) = [2 2]

Bias update

  b_new = b_old + Δb,   Δb = T·(1 1 1 1)^T,   b_old = 0

  b_new = [1 -1 -1 -1]·(1 1 1 1)^T = -2

Testing the second pattern (1, -1):

  W = [2 2],   b = -2,   X = [1 -1],   t = -1

  y = ActivationFunction(W·X^T + b)
  y = ActivationFunction([2 2]·(1 -1)^T + (-2))
  y = ActivationFunction(-2) = -1

  t = y → correct classification
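The bipolar version can be reproduced the same way. This sketch (again assuming a sign function with θ = 0) checks all four patterns, not just the second one: with bipolar data every training pair contributes to the update, and all four patterns are classified correctly.

```python
# Bipolar AND training data.
patterns = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
targets = [1, -1, -1, -1]

# One epoch of the Hebb rule.
w = [0, 0]
b = 0
for x, t in zip(patterns, targets):
    w = [wi + xi * t for wi, xi in zip(w, x)]  # every pair updates the weights
    b += t
# Result: w = [2, 2], b = -2, matching the derivation above.

# Test every training pattern with the bipolar sign function (theta = 0).
outputs = []
for x in patterns:
    net = sum(xi * wi for xi, wi in zip(x, w)) + b
    outputs.append(1 if net >= 0 else -1)
# outputs == targets: all four patterns are classified correctly.
```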


3) Hebb net: classify pattern "X" and pattern "O" (character recognition)

  # . . . #        . # # # .
  . # . # .        # . . . #
  . . # . .        # . . . #
  . # . # .        # . . . #
  # . . . #        . # # # .
   "X" → 1          "O" → -1

Convert each character to a pattern: # → 1, . (dot) → -1

First pattern (X), target t = 1:
  1 -1 -1 -1 1, -1 1 -1 1 -1, -1 -1 1 -1 -1, -1 1 -1 1 -1, 1 -1 -1 -1 1 → 1

Second pattern (O), target t = -1:
  -1 1 1 1 -1, 1 -1 -1 -1 1, 1 -1 -1 -1 1, 1 -1 -1 -1 1, -1 1 1 1 -1 → -1

Weight update: W_new = W_old + ΔW, ΔW = x·t

  W_new = (1)·(X pattern) + (-1)·(O pattern)
        = 2 -2 -2 -2 2, -2 2 0 2 -2, -2 0 2 0 -2, -2 2 0 2 -2, 2 -2 -2 -2 2
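The character example can also be sketched end to end in Python: flatten each 5×5 glyph into a 25-element bipolar vector, train with the Hebb rule, and classify. The helper names (`to_vector`, `classify`) and the string-based glyph encoding are illustrative choices, not part of the notes.

```python
def to_vector(glyph):
    """Flatten a 5x5 glyph of '#' and '.' into a bipolar vector (# -> 1, . -> -1)."""
    return [1 if c == "#" else -1 for row in glyph for c in row]

X_GLYPH = ["#...#", ".#.#.", "..#..", ".#.#.", "#...#"]  # target  1
O_GLYPH = [".###.", "#...#", "#...#", "#...#", ".###."]  # target -1

# One epoch of the Hebb rule over the two training samples: delta_w = x*t.
samples = [(to_vector(X_GLYPH), 1), (to_vector(O_GLYPH), -1)]
w = [0] * 25
for x, t in samples:
    w = [wi + xi * t for wi, xi in zip(w, x)]
# First five weights: 2 -2 -2 -2 2, matching the weight matrix above.

def classify(glyph):
    """Return 1 for 'X', -1 for 'O' using the bipolar sign function (theta = 0)."""
    net = sum(xi * wi for xi, wi in zip(to_vector(glyph), w))
    return 1 if net >= 0 else -1
```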

Disadvantages of Hebb Net...

