
Commit e836ac2: added Future Updates and small changes
1 parent: cccbb90

File tree: 1 file changed (+16, -10 lines)

README.md (16 additions, 10 deletions)
@@ -24,19 +24,21 @@ For the test_CNN script you will also need PyTorch, because I confirmed my resul
 ```
 pip install torch===1.4.0
 ```
-or use [the PyTorch website](https://pytorch.org/)
+Alternatively, use [the PyTorch website](https://pytorch.org/).



-### Testing
+### Testing and Example

+**Testing:**
 The ```test_CNN.py``` script runs the forward and backward pass of all Layers, Activations and Losses with randomly shaped inputs
 and checks the results against the PyTorch Autograd engine.

+**Example:**
 I also wrote a small Network in the ```FashionNet.py``` file, which trains a small Model on the FashionMNIST dataset.
 The Model was trained for only one epoch and returned decent results. They aren't the best, but my test with the same Model in PyTorch got a similar result, so it must be due to the simple architecture and the short training of only one epoch.
 ![Plot of Loss and Accuracy](FashionMNIST_model_graph.png)
-*Note: the Testing Loss and Accuracy is more stable because the testing batch was four times the size of the training batch*
+*NOTE: The Testing Loss and Accuracy are more stable because the testing batch was four times the size of the training batch.*



@@ -69,16 +71,20 @@ The Model was trained for only one epoch and returned decent results. They



-### Acknowledgments
-For the Softmax, LogSoftmax and CrossEntropyLoss-Module I used
-the numerical more stable functions implemented in the PyTorch Library!
-You should definetly check [this amazing Library out](https://pytorch.org/) ;) luv u :*
+### Future Updates

-Also a great source for Convolutions and Optimizer were [the CS231n course notes](http://cs231n.github.io/)
+- update BinaryCrossEntropyLoss for more numerical stability
+- add BatchNorm and Dropout-Layer
+- add CELU and ELU activations

-To learn more about Transposed Convolutions: [Paper](https://arxiv.org/pdf/1603.07285.pdf) and [Animations](https://github.com/vdumoulin/conv_arithmetic)

-
+### Acknowledgments
+For the Softmax, LogSoftmax and CrossEntropyLoss modules I used
+the numerically more stable functions implemented in the PyTorch library!
+You should definitely check out [this amazing library](https://pytorch.org/)! ;)
+
+Also, a great source for Convolutions and Optimizers were [the CS231n course notes](http://cs231n.github.io/).

+To learn more about Transposed Convolutions, see this [Paper](https://arxiv.org/pdf/1603.07285.pdf) and these [Animations](https://github.com/vdumoulin/conv_arithmetic).
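The "update BinaryCrossEntropyLoss for more numerical stability" item under Future Updates usually comes down to the log-sum-exp rewrite that PyTorch uses in `BCEWithLogitsLoss`. A minimal sketch of that idea (my own illustration on raw logits, not code from this repository):

```python
import numpy as np

def bce_with_logits(z, y):
    # numerically stable binary cross-entropy on raw logits z:
    #   loss = max(z, 0) - z*y + log(1 + exp(-|z|))
    # the naive -y*log(sigmoid(z)) - (1-y)*log(1-sigmoid(z)) form
    # produces inf when the sigmoid saturates to exactly 0 or 1
    return np.maximum(z, 0.0) - z * y + np.log1p(np.exp(-np.abs(z)))

# extreme logits that would break the naive formulation
z = np.array([-100.0, 0.0, 100.0])
y = np.array([0.0, 1.0, 1.0])

print(bce_with_logits(z, y))  # all values finite, no inf/nan
```

Taking logits instead of probabilities lets the `exp` and `log` cancel analytically, which is the same trick behind the numerically stable LogSoftmax mentioned in the Acknowledgments.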