Digit Interpreter

Introduction

A three-layer neural network that interprets the numerical value of handwritten digits from their visual representations.

This is an introductory project in ML and neural network architecture.

Made using numpy and math.

Neural Network Architecture

Dataset: Built using the MNIST handwritten digit database, consisting of 'm' training images, each 28 x 28 pixels.

Input Layer (l=[0]): The input layer has 784 nodes, one for each pixel of a 28 x 28 image.

Output Layer (l=[2]): The output layer has 10 nodes, each representing a possible numerical prediction from 0 to 9.
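
To make the layer shapes concrete, here is a minimal numpy sketch of parameter initialization. The hidden-layer width of 10 and the function name init_params are assumptions for illustration; the README does not state the hidden-layer size.

```python
import numpy as np

def init_params(hidden_units=10):
    # Hypothetical hidden-layer width of 10; adjust to match the actual model.
    W1 = np.random.rand(hidden_units, 784) - 0.5  # weights: input (784) -> hidden
    b1 = np.random.rand(hidden_units, 1) - 0.5    # biases for the hidden layer
    W2 = np.random.rand(10, hidden_units) - 0.5   # weights: hidden -> output (10)
    b2 = np.random.rand(10, 1) - 0.5              # biases for the output layer
    return W1, b1, W2, b2
```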

Forward Propagation:

Description:

Z[i] : The pre-activation of each layer, computed as a linear combination of the previous layer's activations: Z[i] = W[i] · A[i-1] + b[i]. The weight matrix W[i] describes the relative significance of each input from A[i-1], and the constant bias term b[i] shifts the input to the activation function at each node.

A[0] : input layer with the set of inputs, 'X'.

A[1] : ReLU activation function applied to Z[1]. ReLU returns 0 for any negative input x and returns x unchanged for any positive input.

A[2] : Softmax activation function applied to Z[2], which produces a multinomial probability distribution over the possible numerical outputs 0 through 9.
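
A minimal numpy sketch of this forward pass follows, assuming the parameter shapes from the initialization sketch above; the function names relu, softmax, and forward_prop are illustrative.

```python
import numpy as np

def relu(Z):
    # Returns 0 for negative values and the value itself for positive values
    return np.maximum(Z, 0)

def softmax(Z):
    # Multinomial probability distribution over the 10 digit classes (columns sum to 1)
    expZ = np.exp(Z - np.max(Z, axis=0, keepdims=True))  # shift for numerical stability
    return expZ / np.sum(expZ, axis=0, keepdims=True)

def forward_prop(W1, b1, W2, b2, X):
    A0 = X                  # A[0]: the input layer, shape (784, m)
    Z1 = W1.dot(A0) + b1    # Z[1] = W[1] · A[0] + b[1]
    A1 = relu(Z1)           # A[1]: ReLU activation of Z[1]
    Z2 = W2.dot(A1) + b2    # Z[2] = W[2] · A[1] + b[2]
    A2 = softmax(Z2)        # A[2]: softmax activation of Z[2]
    return Z1, A1, Z2, A2
```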

Backward Propagation:

Description:

dZ[i] : The error term for each layer.

dW[i] & db[i] : The contribution of the weights and biases to the error in each layer.
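
A sketch of these gradient computations, assuming a softmax output with cross-entropy loss and the shapes used above; one_hot and backward_prop are illustrative names.

```python
import numpy as np

def one_hot(Y, num_classes=10):
    # Encode integer labels 0-9 as one-hot column vectors
    encoded = np.zeros((num_classes, Y.size))
    encoded[Y, np.arange(Y.size)] = 1
    return encoded

def backward_prop(Z1, A1, A2, W2, X, Y):
    m = Y.size
    dZ2 = A2 - one_hot(Y)                               # dZ[2]: error at the output layer
    dW2 = (1 / m) * dZ2.dot(A1.T)                       # dW[2]: weight gradients, layer 2
    db2 = (1 / m) * np.sum(dZ2, axis=1, keepdims=True)  # db[2]: bias gradients, layer 2
    dZ1 = W2.T.dot(dZ2) * (Z1 > 0)                      # dZ[1]: error through the ReLU derivative
    dW1 = (1 / m) * dZ1.dot(X.T)                        # dW[1]: weight gradients, layer 1
    db1 = (1 / m) * np.sum(dZ1, axis=1, keepdims=True)  # db[1]: bias gradients, layer 1
    return dW1, db1, dW2, db2
```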

Updating Parameters:

Each parameter is updated by gradient descent using a user-defined learning rate, α.
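
A sketch of the update step, with alpha as the user-defined learning rate; update_params is an illustrative name.

```python
def update_params(W1, b1, W2, b2, dW1, db1, dW2, db2, alpha):
    # One gradient descent step: parameter := parameter - alpha * gradient
    W1 = W1 - alpha * dW1
    b1 = b1 - alpha * db1
    W2 = W2 - alpha * dW2
    b2 = b2 - alpha * db2
    return W1, b1, W2, b2
```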
