
Vanilla Neural Network from scratch

Description:

Here we will implement backpropagation and gradient descent from scratch, using only numpy.

We will solve a textbook problem, the XOR problem, namely:

1 ^ 0 = 1;

1 ^ 1 = 0;

0 ^ 0 = 0;

0 ^ 1 = 1.
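As a rough sketch (the exact input format the kata expects may differ), the truth table above can be encoded as numpy arrays, one sample per row; the names X and Y are just my choice:

    import numpy as np

    # Hypothetical encoding of the XOR truth table, one sample per row.
    X = np.array([[1, 0],
                  [1, 1],
                  [0, 0],
                  [0, 1]])                # shape (4, 2): the two input bits
    Y = np.array([[1], [0], [0], [1]])    # shape (4, 1): XOR of each row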

But XOR is not a linearly separable problem,

and neural networks are good at non-linear problems!

a two-hidden-layer vanilla neural network is

pred_y = lambda x: _sigmoid(w2 * f(w1 * (f(w0 * x + b0)) + b1) + b2)

where the w's and b's are the parameters we hope the neural network will learn.
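To make that formula concrete, here is a minimal numpy sketch of the forward pass. It assumes * in the formula stands for matrix multiplication, uses the row-vector convention (inputs as rows, so x @ w0 rather than w0 * x), and picks f = sigmoid for the hidden activations; the shapes and the names forward, a0, a1 are my assumptions, not part of the kata:

    import numpy as np

    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Assumed shapes: x is (n, 2), w0 is (2, h0), w1 is (h0, h1), w2 is (h1, 1);
    # b0, b1, b2 are the matching bias vectors.
    def forward(x, w0, b0, w1, b1, w2, b2, f=_sigmoid):
        a0 = f(x @ w0 + b0)            # first hidden layer
        a1 = f(a0 @ w1 + b1)           # second hidden layer
        return _sigmoid(a1 @ w2 + b2)  # output squashed into (0, 1)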

let's backprop and optimize it!
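In other words, assuming plain full-batch gradient descent: backpropagation gives us the gradient of the loss L with respect to every parameter, and each parameter theta is then nudged against that gradient,

    theta <- theta - alpha * dL/dtheta

where alpha is the learning rate, one of the hyperparameters we will look for below.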

In this kata,

  1. We will implement the train function to learn the w's and b's automatically (a minimal sketch follows this list).
  2. We will look for proper hyperparameters. I think this will help us find out what's happening inside the neural network.
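Here is the promised training sketch, under some stated assumptions: full-batch gradient descent, sigmoid activations in every layer, a squared-error loss, and hypothetical names (train, hidden, lr, epochs) that need not match the kata's required signature:

    import numpy as np

    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train(X, Y, hidden=(4, 4), lr=0.5, epochs=10000, seed=0):
        # Randomly initialise the two hidden layers and the output layer.
        rng = np.random.default_rng(seed)
        w0 = rng.normal(size=(X.shape[1], hidden[0]))
        b0 = np.zeros(hidden[0])
        w1 = rng.normal(size=(hidden[0], hidden[1]))
        b1 = np.zeros(hidden[1])
        w2 = rng.normal(size=(hidden[1], Y.shape[1]))
        b2 = np.zeros(Y.shape[1])
        for _ in range(epochs):
            # Forward pass.
            a0 = _sigmoid(X @ w0 + b0)
            a1 = _sigmoid(a0 @ w1 + b1)
            pred = _sigmoid(a1 @ w2 + b2)
            # Backward pass: chain rule through each sigmoid layer,
            # using sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)).
            d_out = (pred - Y) * pred * (1 - pred)
            d_a1 = (d_out @ w2.T) * a1 * (1 - a1)
            d_a0 = (d_a1 @ w1.T) * a0 * (1 - a0)
            # Gradient descent step on every parameter.
            w2 -= lr * (a1.T @ d_out)
            b2 -= lr * d_out.sum(axis=0)
            w1 -= lr * (a0.T @ d_a1)
            b1 -= lr * d_a1.sum(axis=0)
            w0 -= lr * (X.T @ d_a0)
            b0 -= lr * d_a0.sum(axis=0)
        return w0, b0, w1, b1, w2, b2

With the toy X and Y above, something like w0, b0, w1, b1, w2, b2 = train(X, Y) followed by the forward pass should, for most seeds, push the four predictions close to 1, 0, 0, 1; which hyperparameters work well is exactly what the kata asks you to explore.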

if you don't know backprop and gradient descent, start from

ufldl forward propagate

ufldl back propagate

if you want to learn in depth,

deep learning book

Tags: Machine Learning, Neural Networks, Algorithms, Data Science

Stats:

Created: Mar 22, 2018
Published: Mar 23, 2018
Warriors Trained: 151
Total Skips: 3
Total Code Submissions: 255
Total Times Completed: 11
Python Completions: 11
Total Stars: 9
% of votes with a positive feedback rating: 70% of 5
Total "Very Satisfied" Votes: 2
Total "Somewhat Satisfied" Votes: 3
Total "Not Satisfied" Votes: 0
Total Rank Assessments: 4
Average Assessed Rank: 2 kyu
Highest Assessed Rank: 1 kyu
Lowest Assessed Rank: 2 kyu
Contributors
  • Aeolian