Beta
Vanilla Neural Network from scratch
11Aeolian
Description:
Here we will implement backpropagation and gradient descent from scratch, using only numpy.
We will solve a textbook problem, the XOR problem, namely:
1 ^ 0 = 1;
1 ^ 1 = 0;
0 ^ 0 = 0;
0 ^ 1 = 1.
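As a quick, hedged sketch of the inputs (the array names `X` and `y` and the row ordering are my own choice, not something the kata fixes), the truth table above can be written with numpy like so:

```python
import numpy as np

# The four XOR input pairs and their targets as numpy arrays.
# Names and layout are illustrative; the kata may expect a different shape.
X = np.array([[1, 0],
              [1, 1],
              [0, 0],
              [0, 1]])
y = np.array([[1],
              [0],
              [0],
              [1]])

# Sanity check against Python's bitwise XOR operator.
assert all(int(a) ^ int(b) == int(t) for (a, b), t in zip(X, y[:, 0]))
```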
XOR is not a linearly separable problem, but
neural networks are good at non-linear problems!
A two-hidden-layer vanilla neural network is
pred_y = lambda x: _sigmoid(w2 * f(w1 * f(w0 * x + b0) + b1) + b2)
whose weights w and biases b are the parameters we want the neural network to learn.
Let's backprop and optimize it!
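Before optimizing anything, here is a minimal sketch of that forward pass in numpy, reading `*` above as a matrix product. It assumes the hidden activation `f` is also a sigmoid and uses illustrative parameter names and shapes; it is not the kata's reference implementation.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w0, b0, w1, b1, w2, b2, f=_sigmoid):
    # Mirrors pred_y above, with `*` read as matrix multiplication.
    # NOTE: using a sigmoid for the hidden activation f is an assumption,
    # not something the kata specifies.
    h0 = f(x @ w0 + b0)            # first hidden layer
    h1 = f(h0 @ w1 + b1)           # second hidden layer
    return _sigmoid(h1 @ w2 + b2)  # output squashed into (0, 1)
```

With `x` of shape (4, 2), `w0` of shape (2, hidden), `w1` of shape (hidden, hidden) and `w2` of shape (hidden, 1), the result has shape (4, 1): one prediction per XOR row.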
In this kata:
- We will implement the train function to learn the w's and b's automatically (a minimal sketch follows this list).
- We will look for proper hyperparameters. I think this will help us find out what's happening inside the neural network.
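The kata leaves the training details to us, so the following is only one possible sketch of such a `train` function: sigmoid activations everywhere, a mean-squared-error loss, and full-batch gradient descent. The function name, signature, and hyperparameter defaults are illustrative, not the kata's required API.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=4, lr=1.0, epochs=5000, seed=0):
    """Full-batch gradient descent on a two-hidden-layer sigmoid net with MSE loss.

    Assumptions (mine, not the kata's): X has shape (n, 2) and y is a
    column vector of shape (n, 1).
    """
    rng = np.random.default_rng(seed)
    w0, b0 = rng.normal(size=(X.shape[1], hidden)), np.zeros(hidden)
    w1, b1 = rng.normal(size=(hidden, hidden)), np.zeros(hidden)
    w2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)

    for _ in range(epochs):
        # Forward pass; keep the activations, backprop needs them.
        h0 = _sigmoid(X @ w0 + b0)
        h1 = _sigmoid(h0 @ w1 + b1)
        out = _sigmoid(h1 @ w2 + b2)

        # Backward pass: chain rule, layer by layer.
        # sigmoid'(z) = s * (1 - s); d(MSE)/d(out) is (out - y) up to a constant.
        d_out = (out - y) * out * (1 - out)
        d_h1 = (d_out @ w2.T) * h1 * (1 - h1)
        d_h0 = (d_h1 @ w1.T) * h0 * (1 - h0)

        # Gradient descent step on every weight and bias.
        w2 -= lr * (h1.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        w1 -= lr * (h0.T @ d_h1);  b1 -= lr * d_h1.sum(axis=0)
        w0 -= lr * (X.T @ d_h0);   b0 -= lr * d_h0.sum(axis=0)

    return w0, b0, w1, b1, w2, b2
```

With the `X` and `y` arrays from the first sketch, `params = train(X, y)` followed by `forward(X, *params)` should push the four outputs toward 1, 0, 0, 1; how quickly (or whether) it gets there depends on the hidden size, learning rate, number of epochs, and initialization, which is exactly the hyperparameter hunt the second bullet point describes.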
If you don't know backprop and gradient descent, start from the introductory material linked in the original kata; if you want to learn in depth, it links further reading as well.
Tags: Machine Learning, Neural Networks, Algorithms, Data Science
Stats:
| Stat | Value |
| --- | --- |
| Created | Mar 22, 2018 |
| Published | Mar 23, 2018 |
| Warriors Trained | 151 |
| Total Skips | 3 |
| Total Code Submissions | 255 |
| Total Times Completed | 11 |
| Python Completions | 11 |
| Total Stars | 9 |
| % of votes with a positive feedback rating | 70% of 5 |
| Total "Very Satisfied" Votes | 2 |
| Total "Somewhat Satisfied" Votes | 3 |
| Total "Not Satisfied" Votes | 0 |
| Total Rank Assessments | 4 |
| Average Assessed Rank | 2 kyu |
| Highest Assessed Rank | 1 kyu |
| Lowest Assessed Rank | 2 kyu |