Commit b124977: Created ReadMe (1 parent: d613272)

README.md: 1 file changed, 81 additions, 2 deletions
# NumPy CNN

This is a small neural-net framework implemented with NumPy only.
It contains linear, convolution, transposed-convolution and pooling layers.
The framework is not really intended for serious use, since you can't save a model (yet) and it's poorly optimized;
it is aimed at curious GitHub users who want to learn more about the popular layers, activations, etc. in neural nets.
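To give a feel for what such a framework boils down to, here is a minimal linear layer with a forward and a backward pass in plain NumPy. The class and method names are illustrative only, not the repo's actual API:

```python
import numpy as np

class Linear:
    """Fully connected layer: y = x @ W + b (illustrative sketch)."""
    def __init__(self, in_features, out_features):
        # small random weights, zero bias
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        # gradients w.r.t. parameters and w.r.t. the input
        self.dW = self.x.T @ grad_out
        self.db = grad_out.sum(axis=0)
        return grad_out @ self.W.T

layer = Linear(4, 3)
out = layer.forward(np.ones((2, 4)))
grad_in = layer.backward(np.ones((2, 3)))
print(out.shape, grad_in.shape)         # (2, 3) (2, 4)
```

The same forward/backward pattern repeats for every layer type; convolutions just replace the matrix product with a sliding-window sum.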
### Requirements

For the core CNN framework you only need NumPy.
```
pip install numpy
```
If you want to run the FashionNet example, you also need Matplotlib (for live plotting) and tqdm (for the progress bar); gzip, used to read the compressed training data, is part of the Python standard library.
```
pip install matplotlib
pip install tqdm
```
For the test_CNN script you will also need PyTorch, because the results are verified against the PyTorch autograd engine.
```
pip install torch==1.4.0
```
or follow the install instructions on [the PyTorch website](https://pytorch.org/).
### Testing

The ```test_CNN.py``` script runs the forward and backward passes of all layers, activations and losses with randomly shaped inputs
and checks the results against the PyTorch autograd engine.
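The script itself uses PyTorch for the reference gradients; a torch-free sketch of the same idea, checking an analytic backward pass against central finite differences (the function names here are illustrative, not the script's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_backward(x, grad_out):
    # analytic gradient: sigmoid'(x) = s * (1 - s)
    s = sigmoid(x)
    return grad_out * s * (1.0 - s)

def numerical_grad(f, x, eps=1e-6):
    """Central finite differences of sum(f(x)), element by element."""
    grad = np.zeros_like(x)
    for i in np.ndindex(x.shape):
        old = x[i]
        x[i] = old + eps; fp = f(x).sum()
        x[i] = old - eps; fm = f(x).sum()
        x[i] = old
        grad[i] = (fp - fm) / (2 * eps)
    return grad

x = np.random.randn(3, 5)
analytic = sigmoid_backward(x, np.ones_like(x))
numeric = numerical_grad(sigmoid, x.copy())
print(np.allclose(analytic, numeric, atol=1e-5))   # True
```

Checking against autograd (as the script does) is faster and exact, but a finite-difference check like this needs no second framework.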
I also wrote a small network in the ```FashionNet.py``` file, which trains a small model on the FashionMNIST dataset.
The model was trained for only one epoch and gave decent results. They aren't the best, but a test with the same model in PyTorch reached a similar result, so the limiting factors are the simple architecture and the short training of only one epoch.

![Plot of Loss and Accuracy](FashionMNIST_model_graph.png)

*Note: the testing loss and accuracy are more stable because the testing batch was four times the size of the training batch.*
### Features

**Layers:**
- Linear
- Convolution (2D)
- Transposed Convolution (2D)
- MaxPool (2D)
**Activations:**
- ReLU
- LeakyReLU
- Tanh
- Sigmoid
- Softmax
- LogSoftmax
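Activations are the easiest place to see the forward/backward pattern, since they are elementwise; a minimal LeakyReLU sketch in NumPy (illustrative, not the repo's exact class):

```python
import numpy as np

class LeakyReLU:
    def __init__(self, slope=0.01):
        self.slope = slope

    def forward(self, x):
        self.mask = x > 0               # remember where the input was positive
        return np.where(self.mask, x, self.slope * x)

    def backward(self, grad_out):
        # derivative is 1 for positive inputs, slope otherwise
        return np.where(self.mask, grad_out, self.slope * grad_out)

act = LeakyReLU(slope=0.1)
y = act.forward(np.array([-2.0, 3.0]))
print(y)                                # [-0.2  3. ]
```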
**Losses:**
- MSELoss, MAELoss (L1 loss)
- BinaryCrossEntropyLoss
- CrossEntropyLoss
- SoftmaxLoss (Softmax + CrossEntropy)
64+
**Optimizer:**
65+
- SGD (Momentum, Nesterov)
66+
- Adagrad
67+
- RMSprop
68+
- Adam
69+
70+
71+
72+
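For reference, the classic SGD update with momentum (and its Nesterov look-ahead variant) only takes a few lines of NumPy; this is a generic textbook sketch, not the repo's optimizer class:

```python
import numpy as np

def sgd_step(param, grad, velocity, lr=0.01, momentum=0.9, nesterov=False):
    """One SGD update; returns the updated parameter and velocity."""
    velocity = momentum * velocity - lr * grad
    if nesterov:
        # look-ahead: apply the momentum term once more on top of the velocity
        param = param + momentum * velocity - lr * grad
    else:
        param = param + velocity
    return param, velocity

w = np.array([1.0, -1.0])
v = np.zeros_like(w)
g = np.array([0.5, -0.5])
w, v = sgd_step(w, g, v, lr=0.1, momentum=0.9)
print(w)   # [ 0.95 -0.95]
```

Adagrad, RMSprop and Adam follow the same shape, but additionally keep running statistics of the squared gradients to scale the step size per parameter.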
### Acknowledgments

For the Softmax, LogSoftmax and CrossEntropyLoss modules I used the numerically more stable functions implemented in the PyTorch library!
You should definitely check this amazing library out ;) luv u :*
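The trick behind those stable implementations is the max-shift / log-sum-exp identity: subtracting the row maximum before exponentiating changes nothing mathematically but keeps ```exp()``` from overflowing. A minimal sketch of the idea:

```python
import numpy as np

def log_softmax(x):
    # subtracting the per-row max is mathematically a no-op,
    # but prevents exp() from overflowing for large logits
    shifted = x - x.max(axis=1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

def cross_entropy(logits, targets):
    # mean negative log-likelihood of the target classes
    logp = log_softmax(logits)
    return -logp[np.arange(len(targets)), targets].mean()

logits = np.array([[1000.0, 0.0], [0.0, 1000.0]])  # would overflow a naive softmax
print(cross_entropy(logits, np.array([0, 1])))     # finite loss, no overflow
```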
[The CS231n course notes](http://cs231n.github.io/) were also a great source for convolutions and optimizers.
