
Commit 3664996

Minor commit correcting the readme and index.rst
1 parent a57dd32 commit 3664996

2 files changed: +10 -9 lines changed


README.md

Lines changed: 5 additions & 5 deletions
@@ -5,20 +5,20 @@
 [![PyPi Status](https://img.shields.io/pypi/v/osculari.svg)](https://pypi.org/project/osculari/)
 [![Python version](https://img.shields.io/pypi/pyversions/osculari)](https://pypi.org/project/osculari/)
 [![Documentation Status](https://readthedocs.org/projects/osculari/badge/?version=latest)](https://osculari.readthedocs.io/en/latest/?badge=latest)
-[![Documentation Status](https://static.pepy.tech/badge/osculari)](https://pypi.org/project/osculari/)
-[![Documentation Status](https://codecov.io/gh/ArashAkbarinia/osculari/branch/main/graph/badge.svg)](https://app.codecov.io/gh/ArashAkbarinia/osculari)
+[![Number of downloads](https://static.pepy.tech/badge/osculari)](https://github.com/ArashAkbarinia/osculari)
+[![Test Status](https://codecov.io/gh/ArashAkbarinia/osculari/branch/main/graph/badge.svg)](https://app.codecov.io/gh/ArashAkbarinia/osculari)
 [![Pytorch version](https://img.shields.io/badge/PyTorch_1.9.1+-ee4c2c?logo=pytorch&logoColor=white)](https://pytorch.org/get-started/locally/)
 [![Licence](https://img.shields.io/pypi/l/osculari.svg)](LICENSE)
 [![DOI](https://zenodo.org/badge/717052640.svg)](https://zenodo.org/doi/10.5281/zenodo.10214005)
 
-Exploring and interpreting pretrained deep neural networks.
+Exploring artificial neural networks with psychophysical experiments.
 
 ## Overview
 
 The `osculari` package provides an easy interface for different techniques to explore and interpret
 the internal presentation of deep neural networks.
 
-- Supporting for following pretrained models:
+- Supporting the following pretrained models:
   * All classification and segmentation networks
     from [PyTorch's official website](https://pytorch.org/vision/stable/models.html).
   * All OpenAI [CLIP](https://github.com/openai/CLIP) language-vision models.

@@ -78,7 +78,7 @@ achieved by calling the `paradigm_2afc_merge_concatenate` from the `osculari.mod
 
 ``` python
 
-architecture = 'resnet50'  # networks' architecture
+architecture = 'resnet50'  # network's architecture
 weights = 'resnet50'  # the pretrained weights
 img_size = 224  # network's input size
 layer = 'block0'  # the readout layer
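
The second hunk touches the README's quickstart snippet, whose context names `paradigm_2afc_merge_concatenate` from the `osculari.mod` module (truncated in the hunk header, presumably `osculari.models`). As a rough illustration of where those four parameters end up, here is a minimal sketch of the call; the keyword names (`layers`, `img_size`) are assumptions, since the diff does not show the function's signature:

``` python
import osculari

architecture = 'resnet50'  # network's architecture
weights = 'resnet50'       # the pretrained weights
img_size = 224             # network's input size
layer = 'block0'           # the readout layer

# Build a 2AFC readout on top of the pretrained backbone. The function
# name comes from the hunk header above; the keyword names are assumed,
# not confirmed by this diff.
net = osculari.models.paradigm_2afc_merge_concatenate(
    architecture=architecture,
    weights=weights,
    layers=layer,
    img_size=img_size,
)
```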

docs/source/index.rst

Lines changed: 5 additions & 4 deletions
@@ -15,7 +15,7 @@ Osculari
 .. image:: https://img.shields.io/pypi/pyversions/osculari.svg
    :target: https://pypi.org/project/osculari/
 .. image:: https://static.pepy.tech/badge/osculari
-   :target: https://pypi.org/project/osculari/
+   :target: https://github.com/ArashAkbarinia/osculari
 .. image:: https://codecov.io/gh/ArashAkbarinia/osculari/branch/main/graph/badge.svg
    :target: https://app.codecov.io/gh/ArashAkbarinia/osculari
 .. image:: https://img.shields.io/badge/PyTorch_1.9.1+-ee4c2c?logo=pytorch&logoColor=white

@@ -26,10 +26,11 @@ Osculari
    :target: https://zenodo.org/doi/10.5281/zenodo.10214005
 
 
-**Osculari** (ōsculārī; Latin; to embrace, kiss) is a Python library providing an easy interface
-for different techniques to explore and interpret the internal presentation of deep neural networks.
+**Osculari** (ōsculārī; Latin; to embrace, kiss) is a Python package providing an easy interface
+for different psychophysical techniques to explore and interpret the internal presentation of
+artificial neural networks.
 
-- Supporting for following pretrained models:
+- Supporting the following pretrained models:
   * All classification and segmentation networks
     from `PyTorch's official website <https://pytorch.org/vision/stable/models.html>`_.
   * All OpenAI `CLIP <(https://github.com/openai/CLIP>`_ language-vision models.
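
Both files now describe the package in terms of psychophysical experiments. In a two-alternative forced-choice (2AFC) paradigm, the observer (here the readout network sketched after the README diff above) must pick one of two stimuli on every trial. A minimal sketch of a single trial follows, assuming the 2AFC network accepts two image batches and returns one logit per alternative; that forward interface is not confirmed by this diff:

``` python
import torch

# Two placeholder stimuli for one 2AFC trial, reusing img_size and net
# from the earlier sketch.
img0 = torch.rand(1, 3, img_size, img_size)
img1 = torch.rand(1, 3, img_size, img_size)

# Assumed forward interface: two images in, one logit per alternative out.
logits = net(img0, img1)       # expected shape: (1, 2)
choice = logits.argmax(dim=1)  # index of the stimulus the network picks
```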
