Experiments in Joint Embedding Predictive Architectures (JEPAs).
PyTorch implementation of the JEDi metric described in "Beyond FVD: Enhanced Evaluation Metrics for Video Generation Quality".
A Video Joint Embedding Predictive Architecture (JEPA) that runs on a personal computer.
Project for Yann LeCun's Deep Learning class: we train a JEPA world model on a set of pre-collected trajectories from a toy environment in which an agent moves between two rooms.
Train a JEPA world model on a set of pre-collected trajectories from a toy environment involving an agent in two rooms.
Joint Embedding Predictive Architecture (JEPA) world model trained on agent trajectories to predict future latent states from pixel inputs and actions. Uses VICReg loss with RNN dynamics to evaluate how well learned embeddings reflect spatial behavior in toy environments.
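A minimal sketch of what such a setup can look like in PyTorch: a CNN encoder, a GRU dynamics model conditioned on actions, and a VICReg-style loss between predicted and encoded future latents. All module names, shapes, and hyperparameters below are illustrative assumptions, not code from any of the repositories listed here.

```python
# Illustrative sketch only: a JEPA-style world model with a CNN encoder,
# a GRU dynamics model conditioned on actions, and a VICReg-style loss.
# All names, shapes, and hyperparameters are assumptions, not repo code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Map a pixel observation (B, C, H, W) to a latent vector (B, D)."""
    def __init__(self, in_channels=3, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class JEPAWorldModel(nn.Module):
    """Roll a GRU forward from the first frame's latent, conditioned on actions."""
    def __init__(self, latent_dim=128, action_dim=2):
        super().__init__()
        self.encoder = Encoder(latent_dim=latent_dim)
        self.dynamics = nn.GRU(action_dim, latent_dim, batch_first=True)

    def forward(self, frames, actions):
        # frames: (B, T, C, H, W); actions: (B, T-1, action_dim)
        B, T = frames.shape[:2]
        latents = self.encoder(frames.flatten(0, 1)).view(B, T, -1)
        h0 = latents[:, 0].unsqueeze(0).contiguous()   # initial hidden state = first latent
        preds, _ = self.dynamics(actions, h0)          # (B, T-1, D) predicted future latents
        return preds, latents[:, 1:]                   # predictions and target latents


def vicreg_loss(pred, target, sim_w=25.0, var_w=25.0, cov_w=1.0):
    """VICReg-style objective: invariance (MSE) + variance + covariance terms."""
    pred, target = pred.flatten(0, 1), target.flatten(0, 1)
    invariance = F.mse_loss(pred, target)
    variance = sum(F.relu(1 - torch.sqrt(z.var(dim=0) + 1e-4)).mean() for z in (pred, target))

    def off_diag_cov(z):
        z = z - z.mean(dim=0)
        c = (z.T @ z) / (z.shape[0] - 1)
        return (c - torch.diag(torch.diag(c))).pow(2).sum() / z.shape[1]

    covariance = off_diag_cov(pred) + off_diag_cov(target)
    return sim_w * invariance + var_w * variance + cov_w * covariance


if __name__ == "__main__":
    model = JEPAWorldModel()
    frames = torch.randn(4, 5, 3, 64, 64)   # 4 toy trajectories, 5 frames each
    actions = torch.randn(4, 4, 2)          # one 2-D action between consecutive frames
    preds, targets = model(frames, actions)
    print(vicreg_loss(preds, targets).item())
```

The VICReg variance and covariance terms discourage the latent space from collapsing to a constant, which is the usual failure mode when a predictor and encoder are trained jointly without negatives.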
A simple and efficient implementation of Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture (I-JEPA)
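For contrast with the world-model variant above, here is a heavily condensed sketch of the I-JEPA training idea: a context encoder sees only the unmasked patch tokens, an EMA target encoder sees all patches, and a predictor regresses the target's features of the masked patches. Every module size and name below is an assumption made for illustration, not code from this repository.

```python
# Illustrative sketch only: the I-JEPA idea in a few dozen lines.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchEncoder(nn.Module):
    """Patchify an image and run a small transformer over the patch tokens."""
    def __init__(self, dim=96, patch=8, img=64):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, (img // patch) ** 2, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x, keep=None):
        tokens = self.patchify(x).flatten(2).transpose(1, 2) + self.pos  # (B, N, dim)
        if keep is not None:            # context encoder: drop the masked patches
            tokens = tokens[:, keep]
        return self.blocks(tokens)


def ijepa_step(context_enc, target_enc, predictor, images, mask_ratio=0.5):
    """One training step: predict the target encoder's features of masked patches."""
    N = context_enc.pos.shape[1]
    perm = torch.randperm(N)
    n_keep = int(N * (1 - mask_ratio))
    keep, masked = perm[:n_keep], perm[n_keep:]
    with torch.no_grad():               # targets come from the EMA encoder, no gradients
        targets = target_enc(images)[:, masked]
    context = context_enc(images, keep=keep).mean(dim=1, keepdim=True)
    # Condition the prediction on the (learned) positions of the masked patches.
    preds = predictor(context + context_enc.pos[:, masked])
    return F.smooth_l1_loss(preds, targets)


@torch.no_grad()
def ema_update(target_enc, context_enc, momentum=0.996):
    """Keep the target encoder as an exponential moving average of the context encoder."""
    for t, c in zip(target_enc.parameters(), context_enc.parameters()):
        t.mul_(momentum).add_(c, alpha=1 - momentum)


if __name__ == "__main__":
    context_enc = PatchEncoder()
    target_enc = copy.deepcopy(context_enc)
    predictor = nn.Sequential(nn.Linear(96, 96), nn.GELU(), nn.Linear(96, 96))
    loss = ijepa_step(context_enc, target_enc, predictor, torch.randn(2, 3, 64, 64))
    loss.backward()
    ema_update(target_enc, context_enc)
    print(loss.item())
```

Unlike the VICReg example above, collapse here is avoided by the stop-gradient on the EMA target encoder rather than by explicit variance and covariance penalties.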
Training backend and some self-supervised pretraining methods for Cell Observatory models