[](https://pypi.org/project/osculari/)
[](https://pypi.org/project/osculari/)
[](https://osculari.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/ArashAkbarinia/osculari)
[](https://app.codecov.io/gh/ArashAkbarinia/osculari)
[](https://pytorch.org/get-started/locally/)
[](LICENSE)
[](https://zenodo.org/doi/10.5281/zenodo.10214005)

Exploring artificial neural networks with psychophysical experiments.

## Overview

The `osculari` package provides an easy interface for different techniques to explore and interpret
the internal representation of deep neural networks.

- Supporting the following pretrained models:
  * All classification and segmentation networks
    from [PyTorch's official website](https://pytorch.org/vision/stable/models.html).
  * All OpenAI [CLIP](https://github.com/openai/CLIP) language-vision models.

…achieved by calling the `paradigm_2afc_merge_concatenate` from the `osculari.mod`…
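The merge-by-concatenation idea behind such a 2AFC (two-alternative forced choice) readout can be sketched as a toy in plain Python. This is only an illustration of the concept — `twoafc_concatenate` and its hand-picked weights are made up here and are not the library's implementation:

``` python
def twoafc_concatenate(features_a, features_b, weights, bias=0.0):
    """Toy 2AFC readout: concatenate the feature vectors of the two
    stimuli and apply a linear classifier whose sign picks one of the
    two alternatives."""
    merged = features_a + features_b  # list concatenation stands in for torch.cat
    score = sum(w * x for w, x in zip(weights, merged)) + bias
    return 0 if score > 0 else 1  # 0 -> first stimulus chosen, 1 -> second

# Hand-picked weights that prefer whichever stimulus has the larger feature sum.
weights = [1.0, 1.0, -1.0, -1.0]
print(twoafc_concatenate([0.9, 0.8], [0.1, 0.2], weights))  # chooses stimulus 0
print(twoafc_concatenate([0.1, 0.2], [0.9, 0.8], weights))  # chooses stimulus 1
```

In the real package the concatenated features come from a frozen pretrained backbone and only the linear readout is trained, which is what makes the paradigm useful for probing internal representations.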

``` python
architecture = 'resnet50'  # network's architecture
weights = 'resnet50'  # the pretrained weights
img_size = 224  # network's input size
layer = 'block0'  # the readout layer
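# Note (illustrative, not verbatim from the library's docs): settings like the
# above are the typical inputs to the 2AFC paradigm constructor mentioned
# earlier, along the lines of
# `osculari.models.paradigm_2afc_merge_concatenate(...)`; check the package
# documentation for the exact signature.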