Project Page | Paper | Bibtex
Milad Abdollahzadeh*, Guimeng Liu*, Touba Malekzadeh†, Christopher T. H. Teo†, Keshigeyan Chandrasegaran†, Ngai-Man Cheung‡
(* Equal first-author contribution, † Equal second-author contribution, ‡ Corresponding author)
This repo contains the list of papers with public code implementations for Generative Modeling under Data Constraint (GM-DC). For each work, we determine the generative task(s) addressed, the approach, and the type of generative model used.
Below, we first define the generative tasks and the approach taxonomy, and then provide our comprehensive list of GM-DC works with these details for each work.
In machine learning, generative modeling aims to learn to generate new data statistically similar to the training data distribution. In this paper, we survey learning generative models under limited data, few shots, and zero shot, referred to as Generative Modeling under Data Constraint (GM-DC). This is an important topic when data acquisition is challenging, e.g., in healthcare applications. We discuss the background and challenges, and propose two taxonomies: one on GM-DC tasks and another on GM-DC approaches. Importantly, we study the interactions between different GM-DC tasks and approaches. Furthermore, we highlight research gaps, research trends, and potential avenues for future exploration.
- July 15, 2025: 120 new works added (233 works total)!
- Oct 28, 2024: The slides for our ICIP tutorial on "Generative Modeling for Limited Data, Few Shots and Zero Shot" can be found here.
- July 28, 2023: First release (113 works included)!
We define 8 different generative tasks under data constraints based on a rigorous review of the literature. These tasks are described in the following table:
Task | Description & Example
---|---
uGM-1 | Description: Given limited training samples from a target domain, learn an unconditional generator for that domain. Example: ADA learns a StyleGAN2 using 1k images from AFHQ-Dog.
uGM-2 | Description: Given a pre-trained generator on a source domain and a few samples from a target domain, adapt the generator to the target domain. Example: CDC adapts a GAN pre-trained on FFHQ (Human Faces) to Sketches using 10 samples.
uGM-3 | Description: Given a pre-trained generator on a source domain and only a description of the target domain (e.g., a text prompt or a single reference image), adapt the generator to the target domain. Example: StyleGAN-NADA adapts a GAN pre-trained on FFHQ to the painting domain using "Fernando Botero Painting" as input.
cGM-1 | Description: Given limited training samples for a set of classes, learn a class-conditional generator. Example: CbC trains a conditional generator on 20 classes of ImageNet Carnivores using 100 images per class.
cGM-2 | Description: Given a pre-trained generator on seen classes and a few samples from an unseen class, generate diverse images of the unseen class. Example: LoFGAN learns from 85 classes of Flowers to generate images for an unseen class with only 3 samples.
cGM-3 | Description: Given a pre-trained generator on a source domain and limited samples per class from a target domain, adapt the conditional generator to the target domain. Example: VPT adapts a conditional generator pre-trained on ImageNet to Places365 with 500 images per class.
IGM | Description: Given a single image, learn a generator that produces diverse variants of that image. Example: SinDDM trains a generator using a single image of Marina Bay Sands and generates variants of it.
SGM | Description: Given a pre-trained generator and a few images of a particular subject, adapt the generator to synthesize that subject in new contexts. Example: DreamBooth trains a generator using 4 images of a particular backpack and adapts it with a text prompt to place the backpack in the Grand Canyon.
Please refer to our survey for a more detailed discussion of these generative tasks, including the attributes of each task and the data limitation range addressed by each task.
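To give a concrete feel for these tasks, below is a minimal, hypothetical PyTorch sketch of a uGM-2-style adaptation loop: a stand-in "pre-trained" generator is fine-tuned on a handful of target-domain images with a standard non-saturating GAN loss. The toy models, image size, and hyper-parameters are illustrative placeholders rather than the recipe of any paper listed below.

```python
# Hypothetical uGM-2 sketch: adapt a (stand-in) pre-trained generator to a
# target domain using only a few images. All models and settings are toy
# placeholders for illustration, not any surveyed method's actual recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

Z_DIM, IMG = 64, 32  # assumed latent size and image resolution

# Stand-in for a generator pre-trained on a large source domain.
generator = nn.Sequential(
    nn.Linear(Z_DIM, 256), nn.ReLU(),
    nn.Linear(256, 3 * IMG * IMG), nn.Tanh(),
)
# Small discriminator trained against the few target-domain images.
discriminator = nn.Sequential(
    nn.Linear(3 * IMG * IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

# e.g., 10 target-domain images in [-1, 1] (random tensors as placeholders).
few_shot_images = torch.rand(10, 3 * IMG * IMG) * 2 - 1

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.99))
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0.0, 0.99))

for step in range(200):
    # Discriminator step: real few-shot images vs. current fakes.
    z = torch.randn(few_shot_images.size(0), Z_DIM)
    fake = generator(z).detach()
    d_loss = (F.softplus(-discriminator(few_shot_images)).mean()
              + F.softplus(discriminator(fake)).mean())
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: move the pre-trained generator toward the target domain.
    z = torch.randn(few_shot_images.size(0), Z_DIM)
    g_loss = F.softplus(-discriminator(generator(z))).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In practice, the transfer-learning works below start from a real pre-trained checkpoint (e.g., StyleGAN2 on FFHQ) and add their own constraints on top of such a loop, such as freezing parts of the network, cross-domain consistency losses, or modulating a small subset of parameters, to avoid overfitting the few target samples.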
Click to expand/collapse 131 works
- Transferring GANs: generating images from limited data
ECCV 2018
[Paper] [Official Code] - Image Generation from Small Datasets via Batch Statistics Adaptation
ICCV 2019
[Paper] [Official Code] - Freeze the Discriminator: a Simple Baseline for Fine-tuning GANs
CVPR 2020-W
[Paper] [Official Code] - On Leveraging Pretrained GANs for Generation with Limited Data
ICML 2020
[Paper] [Official Code] - Few-Shot Image Generation with Elastic Weight Consolidation
NeurIPS 2020
[Paper] - GAN Memory with No Forgetting
NeurIPS 2020
[Paper] [Official Code] - Few-Shot Adaptation of Generative Adversarial Networks
arXiv 2020
[Paper] [Official Code] - Effective Knowledge Transfer from GANs to Target domains with Few Images
CVPR 2021
[Paper] [Official Code] - Few-Shot Image Generation via Cross-domain Correspondence
CVPR 2021
[Paper] [Official Code] - Efficient Conditional GAN Transfer with Knowledge Propagation across Classes
CVPR 2021
[Paper] [Official Code] - CAM-GAN: Continual Adaptation Modules for Generative Adversarial Networks
NeurIPS 2021
[Paper] [Official Code] - Contrastive Learning for Cross-domain Correspondence in Few-shot Image Generation
NeurIPS 2021-W
[Paper] - Instance-Conditioned GAN
NeurIPS 2021
[Paper] [Official Code] - When, Why, and Which Pre-trained GANs are useful?
ICLR 2022
[Paper] [Official Code] - Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks
ICLR 2022
[Paper] [Official Code] - A Closer Look at Few-Shot Image Generation
CVPR 2022
[Paper] - Few shot generative model adaption via relaxed spatial structural alignment
CVPR 2022
[Paper] [Official Code] - One Shot Face Stylization
ECCV 2022
[Paper] [Official Code] - Few-shot Image Generation via Adaptation-Aware Kernel Modulation
NeurIPS 2022
[Paper] [Official Code] - Universal Domain Adaptation for Generative Adversarial Networks
NeurIPS 2022
[Paper] [Official Code] - Generalized One-shot Domain Adaptation of Generative Adversarial Networks
NeurIPS 2022
[Paper] [Official Code] - Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks
NeurIPS 2022
[Paper] [Official Code] - CLIP-Guided Domain Adaptation of Image Generators
ACM-TOG 2022
[Paper] [Official Code] - Dynamic Few-shot Adaptation of GANs to Multiple Domains
SIGGRAPH-Asia 2022
[Paper] [Official Code] - Exploiting Knowledge Distillation for Few-Shot Image Generation
arXiv 2022
[Paper] - Few-shot Artistic Portraits Generation with Contrastive Transfer Learning
arXiv 2022
[Paper] - Dynamic Weighted Semantic Correspondence for Few-Shot Image Generative Adaptation
ACM-MM 2022
[Paper] - Fine-tuning Diffusion Models with Limited Data
NeurIPS 2022 Workshop
[Paper] - Fair Generative Models via Transfer Learning
AAAI 2023
[Paper] [Official Code] - Progressive Few-Shot Adaptation of Generative Model with Align-Free Spatial Correlation
AAAI 2023
[Paper] [Official Code] - Few-shot Cross-domain Image Generation via Inference-time Latent-code Learning
ICLR 2023
[Paper] [Official Code] - Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
IJCV 2023
[Paper] [Official Code] - One-Shot Generative Domain Adaptation
ICCV 2023
[Paper] [Official Code] - Exploring Incompatible Knowledge Transfer in Few-shot Image Generation
CVPR 2023
[Paper] [Official Code] - Zero-shot Generative Model Adaptation via Image-specific Prompt Learning
CVPR 2023
[Paper] [Official Code] - Visual Prompt Tuning for Generative Transfer Learning
CVPR 2023
[Paper] [Official Code] - SINgle Image Editing with Text-to-Image Diffusion Models
CVPR 2023
[Paper] [Official Code] - DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
CVPR 2023
[Paper] - Multi-Concept Customization of Text-to-Image Diffusion
CVPR 2023
[Paper] [Official Code] - Plug-and-Play Sample-Efficient Fine-Tuning of Text-to-Image Diffusion Models to Learn Any Unseen Style
CVPR 2023
[Paper] - Target-Aware Generative Augmentations for Single-Shot Adaptation
ICML 2023
[Paper] [Official Code] - MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
ICML 2023
[Paper] [Official Code] - Data-Dependent Domain Transfer GANs for Image Generation with Limited Data
ACM TOMM 2023
[Paper] - One-Shot Adaptation of GAN in Just One CLIP
TPAMI 2023
[Paper] [Official Code] - Few-shot Image Generation via Masked Discrimination
arXiv 2023
[Paper] - Domain Re-Modulation for Few-Shot Generative Domain Adaptation
NeurIPS 2023
[Paper] [Official Code] - Improving Diversity in Zero-Shot GAN Adaptation with Semantic Variations
ICCV 2023
[Paper] [Official Code] - Smoothness Similarity Regularization for Few-Shot GAN Adaptation
ICCV 2023
[Paper] [Official Code] - Lifelong Few-Shot Image Generation
ICCV 2023
[Paper] [Official Code] - ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation
ICCV 2023
[Paper] [Official Code] - E4T: Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models
ACM TOG 2023
[Paper] [Official Code] - Efficient Transfer Learning in Diffusion Models via Adversarial Noise
arXiv 2023
[Paper] - Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs
ECML PKDD 2023
[Paper] - Multi-national COVID-19 CT Image-label Pairs Synthesis via Few-Shot GANs Adaptation
ISBI 2023
[Paper] - Faster Few-Shot Face Image Generation with Features of Specific Group Using Pivotal Tuning Inversion and PCA
ICAIIC 2023
[Paper] - Phasic Content Fusing Diffusion Model with Directional Distribution Consistency for Few-Shot Model Adaption
ICCV 2023
[Paper] [Official Code] - Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation
ICCV 2023
[Paper] - Ablating Concepts in Text-to-Image Diffusion Models
ICCV 2023
[Paper] [Official Code] - Domain Expansion of the Image Generators
CVPR 2023
[Paper] [Official Code] - DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model
CVPR 2023
[Paper] [Official Code] - Personalized Image Generation for Color Vision Deficiency Population
ICCV 2023
[Paper] [Official Code] - Zero-shot generation of coherent storybook from plain text story using diffusion models
arXiv 2023
[Paper] - ProSpect: Prompt Spectrum for Attribute-Aware Personalization of Diffusion Models
SIGGRAPH-ASIA 2023
[Paper] [Official Code] - Highly Personalized Text Embedding for Image Manipulation by Stable Diffusion
arXiv 2023
[Paper] [Official Code] - StyO: Stylize Your Face in Only One-Shot
arXiv 2023
[Paper] - Diffusion in Style
ICCV 2023
[Paper] [Official Code] - Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models
NeurIPS 2023
[Paper] [Official Code] - NICE: NoIse-modulated Consistency rEgularization for Data-Efficient GANs
NeurIPS 2023
[Paper] [Official Code] - Few-shot Image Generation with Diffusion Models
arXiv 2023
[Paper] - Rethinking cross-domain semantic relation for few-shot image generation
Applied Intelligence 2023
[Paper] [Official Code] - An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
ICLR 2023
[Paper] [Official Code] - BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
NeurIPS 2023
[Paper] [Official Code] - Few-shot Image Generation via Latent Space Relocation
AAAI 2024
[Paper] - SoLAD: Sampling Over Latent Adapter for Few-Shot Generation
IEEE Signal Processing Letters 2024
[Paper] - DomainGallery: Few-shot Domain-driven Image Generation by Attribute-centric Finetuning
NeurIPS 2024
[Paper] [Official Code] - Lightweight dual-path octave generative adversarial networks for few-shot image generation
Multimedia Systems 2024
[Paper] - Few-Shot Image Generation by Conditional Relaxing Diffusion Inversion
ECCV 2024
[Paper] [Official Code] - High-Quality and Diverse Few-Shot Image Generation via Masked Discrimination
TIP 2024
[Paper] - Few-Shot Generative Model Adaption via Optimal Kernel Modulation
TCSVT 2024
[Paper] - Deformable One-shot Face Stylization via DINO Semantic Guidance
CVPR 2024
[Paper] [Official Code] - Few-Shot Generative Model Adaptation via Style-Guided Prompt
TMM 2024
[Paper] [Official Code] - Few-shot Hybrid Domain Adaptation of Image Generator
ICLR 2024
[Paper] [Official Code] - CFTS-GAN: Continual Few-Shot Teacher Student for Generative Adversarial Networks
ICPR 2024
[Paper] - Enhancing Fine-Tuning Performance of Text-to-Image Diffusion Models for Few-Shot Image Generation Through Contrastive Learning
IGTA 2024
[Paper] - HyperGAN-CLIP: A Unified Framework for Domain Adaptation, Image Synthesis and Manipulation
SIGGRAPH-ASIA 2024
[Paper] [Official Code] - AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model
AAAI 2024
[Paper] [Official Code] - TF²: Few-Shot Text-Free Training-Free Defect Image Generation for Industrial Anomaly Inspection
TCSVT 2024
[Paper] - StyleGAN-Fusion: Diffusion Guided Domain Adaptation of Image Generators
WACV 2024
[Paper] [Official Code] - Few-shot adaptation of GANs using self-supervised consistency regularization
Knowledge-Based Systems 2024
[Paper] - Towards Lifelong Few-Shot Customization of Text-to-Image Diffusion
arXiv 2024
[Paper] - Bridging Data Gaps in Diffusion Models with Adversarial Noise-Based Transfer Learning
ICML 2024
[Paper] - Few-Shot Image Generation via Style Adaptation and Content Preservation
TNNLS 2024
[Paper] - Dual-path hypernetworks of style and text for one-shot domain adaptation
Applied Intelligence 2024
[Paper] - A Unified and Versatile Framework for Multi-Modal Hybrid Domain Adaptation
arXiv 2024
[Paper] - Enhancing DreamBooth With LoRA for Generating Unlimited Characters With Stable Diffusion
IJCNN 2024
[Paper] - FairQueue: Rethinking Prompt Learning for Fair Text-to-Image Generation
NeurIPS 2024
[Paper] [Official Code] - FairTL: A Transfer Learning Approach for Bias Mitigation in Deep Generative Models
J-STSP 2024
[Paper] - OMG: Occlusion-Friendly Personalized Multi-concept Generation in Diffusion Models
ECCV 2024
[Paper] [Official Code] - Tuning-Free Image Customization with Image and Text Guidance
ECCV 2024
[Paper] [Official Code] - HybridBooth: Hybrid Prompt Inversion for Efficient Subject-Driven Generation
ECCV 2024
[Paper] [Official Code] - MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation
ECCV 2024
[Paper] [Official Code] - ComFusion: Enhancing Personalized Generation by Instance-Scene Compositing and Fusion
ECCV 2024
[Paper] - Lego: Learning to Disentangle and Invert Personalized Concepts Beyond Object Appearance in Text-to-Image Diffusion Models
ECCV 2024
[Paper] [Official Code] - Powerful and Flexible: Personalized Text-to-Image Generation via Reinforcement Learning
ECCV 2024
[Paper] [Official Code] - MultiGen: Zero-Shot Image Generation from Multi-modal Prompts
ECCV 2024
[Paper] - MasterWeaver: Taming Editability and Face Identity for Personalized Text-to-Image Generation
ECCV 2024
[Paper] [Official Code] - Customized Generation Reimagined: Fidelity and Editability Harmonized
ECCV 2024
[Paper] [Official Code] - LogoSticker: Inserting Logos Into Diffusion Models for Customized Generation
ECCV 2024
[Paper] [Official Code] - DreamBlend: Advancing Personalized Fine-tuning of Text-to-Image Diffusion Models
WACV 2025
[Paper] - HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models
CVPR 2024
[Paper] [Official Code] - AnyDoor: Zero-shot Object-level Image Customization
CVPR 2024
[Paper] [Official Code] - FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention
IJCV 2024
[Paper] [Official Code] - ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs
ECCV 2024
[Paper] [Official Code] - Orthogonal Adaptation for Modular Customization of Diffusion Models
CVPR 2024
[Paper] [Official Code] - DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization
CVPR 2024
[Paper] [Official Code] - SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation
CVPR 2024
[Paper] [Official Code] - PortraitBooth: A Versatile Portrait Model for Fast Identity-preserved Personalization
CVPR 2024
[Paper] [Official Code] - Instantbooth: Personalized text-to-image generation without test-time finetuning
CVPR 2024
[Paper] [Official Code] - IDAdapter: Learning Mixed Features for Tuning-Free Personalization of Text-to-Image Models
CVPR 2024 Workshop
[Paper] - Create Your World: Lifelong Text-to-Image Diffusion
TPAMI 2024
[Paper] [Official Code] - Decoupled Textual Embeddings for Customized Image Generation
AAAI 2024
[Paper] [Official Code] - PALP: Prompt Aligned Personalization of Text-to-Image Models
SIGGRAPH-ASIA 2024
[Paper] [Official Code] - MagiCapture: High-Resolution Multi-Concept Portrait Customization
AAAI 2024
[Paper] [Official Code] - Attention Calibration for Disentangled Text-to-Image Personalization
CVPR 2024
[Paper] [Official Code] - RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization
CVPR 2024
[Paper] [Official Code] - FaceChain-SuDe: Building Derived Class to Inherit Category Attributes for One-shot Subject-Driven Generation
CVPR 2024
[Paper] [Official Code] - Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance
CVPR 2024
[Paper] - Cross Initialization for Face Personalization of Text-to-Image Models
CVPR 2024
[Paper] [Official Code] - Few-Shot Diffusion Models Escape the Curse of Dimensionality
NeurIPS 2024
[Paper]
Click to expand/collapse 16 works
- Consistency Regularization for Generative Adversarial Networks
ICLR 2019
[Paper] [Official Code] - Training generative adversarial networks with limited data
NeurIPS 2020
[Paper] [Official Code] - Differentiable Augmentation for Data-efficient GAN Training
NeurIPS 2020
[Paper] [Official Code] - Image Augmentations for GAN Training
arXiv 2020
[Paper] - Improved Consistency Regularization for GANs
AAAI 2021
[Paper] - DeceiveD: Adaptive pseudo augmentation for gan training with limited data
NeurIPS 2021
[Paper] [Official Code] - Data-efficient gan training beyond (just) augmentations: A lottery ticket perspective
NeurIPS 2021
[Paper] [Official Code] - Self-Supervised GANs with Label Augmentation
NeurIPS 2021
[Paper] [Official Code] - On Data Augmentation for GAN Training
TIP 2021
[Paper] [Official Code] - Adaptive Feature Interpolation for Low-Shot Image Generation
ECCV 2022
[Paper] [Official Code] - Feature Statistics Mixing Regularization for Generative Adversarial Networks
CVPR 2022
[Paper] [Official Code] - Training GANs with Diffusion
ICLR 2023
[Paper] [Official Code] - Faster and More Data-Efficient Training of Diffusion Models
NeurIPS 2023
[Paper] - Augmentation-Aware Self-Supervision for Data-Efficient GAN Training
NeurIPS 2023
[Paper] [Official Code] - Improving the Leaking of Augmentations in Data-Efficient GANs via Adaptive Negative Data Augmentation
WACV 2024
[Paper] [Official Code] - Improving the Training of the GANs with Limited Data via Dual Adaptive Noise Injection
ACM MM 2024
[Paper] [Official Code]
Click to expand/collapse 18 works
- Towards faster and stabilized gan training for high-fidelity few-shot image synthesis
ICLR 2021
[Paper] [Official Code] - Data-efficient gan training beyond (just) augmentations: A lottery ticket perspective
NeurIPS 2021
[Paper] [Official Code] - Projected GANs Converge Faster
NeurIPS 2021
[Paper] [Official Code] - Prototype Memory and Attention Mechanisms for Few Shot Image Generation
ICLR 2022
[Paper] [Official Code] - Collapse by conditioning: Training class-conditional GANs with limited data
ICLR 2022
[Paper] [Official Code] - Ensembling Off-the-shelf Models for GAN Training
CVPR 2022
[Paper] [Official Code] - Hierarchical Context Aggregation for Few-Shot Generation
ICML 2022
[Paper] [Official Code] - Improving GANs with A Dynamic Discriminator
NeurIPS 2022
[Paper] [Official Code] - Conditional GAN for Small Datasets
IEEE ISM 2022
[Paper] - Data InStance Prior (DISP) in Generative Adversarial Networks
WACV 2022
[Paper] [Official Code] - Data-Efficient GANs Training via Architectural Reconfiguration
CVPR 2023
[Paper] [Official Code] - Introducing editable and representative attributes for few-shot image generation
Engineering Applications of AI 2023
[Paper] [Official Code] - Toward a better image synthesis GAN framework for high-fidelity few-shot datasets via NAS and contrastive learning
Elsevier KBS 2023
[Paper] [Official Code] - Peer is Your Pillar: A Data-unbalanced Conditional GANs for Few-shot Image Generation
TCSVT 2024
[Paper] - CNN hybrid vits for training GANs under limited data
Pattern Recognition 2024
[Paper] - Efficient Variant Convolution for Few-Shot Image Generation
ICPR 2024
[Paper] - RG-GAN: Dynamic Regenerative Pruning for Data-Efficient Generative Adversarial Networks
AAAI 2024
[Paper] [Official Code] - P2D: Plug and Play Discriminator for accelerating GAN frameworks
WACV 2024
[Paper]
Click to expand/collapse 32 works
- Image Augmentations for GAN Training
arXiv 2020
[Paper] - Regularizing generative adversarial networks under limited data
CVPR 2021
[Paper] [Official Code] - Contrastive Learning for Cross-domain Correspondence in Few-shot Image Generation
NeurIPS 2021-W
[Paper] - Data-Efficient Instance Generation from Instance Discrimination
NeurIPS 2021
[Paper] [Official Code] - Diffusion-Decoding Models for Few-Shot Conditional Generation
NeurIPS 2021
[Paper] [Official Code] - Generative Co-training for Generative Adversarial Networks with Limited Data
AAAI 2022
[Paper] [Official Code] - Prototype Memory and Attention Mechanisms for Few Shot Image Generation
ICLR 2022
[Paper] [Official Code] - A Closer Look at Few-Shot Image Generation
CVPR 2022
[Paper] - Few-shot Image Generation with Mixup-based Distance Learning
ECCV 2022
[Paper] [Official Code] - Exploring Contrastive Learning for Solving Latent Discontinuity in Data-Efficient GANs
ECCV 2022
[Paper] [Official Code] - Any-resolution Training for High-resolution Image Synthesis
ECCV 2022
[Paper] [Official Code] - Discriminator gradIent Gap Regularization for GAN Training with Limited Data
NeurIPS 2022
[Paper] [Official Code] - Masked Generative Adversarial Networks are Data-Efficient Generation Learners
NeurIPS 2022
[Paper] - Exploiting Knowledge Distillation for Few-Shot Image Generation
arXiv 2022
[Paper] - Few-shot Artistic Portraits Generation with Contrastive Transfer Learning
arXiv 2022
[Paper] - Few-Shot Diffusion Models
arXiv 2022
[Paper] [Official Code] - Few-shot image generation based on contrastive meta-learning generative adversarial network
Visual Computer 2022
[Paper] - Training GANs with Diffusion
ICLR 2023
[Paper] [Official Code] - Data Limited Image Generation via Knowledge Distillation
CVPR 2023
[Paper] - Adaptive IMLE for Few-shot Pretraining-free Generative Modelling
ICML 2023
[Paper] [Official Code] - Few-shot Image Generation via Masked Discrimination
arXiv 2023
[Paper] - Faster and More Data-Efficient Training of Diffusion Models
NeurIPS 2023
[Paper] - Towards high diversity and fidelity image synthesis under limited data
Information Sciences 2023
[Paper] [Official Code] - Regularizing Label-Augmented Generative Adversarial Networks Under Limited Data
IEEE Access 2023
[Paper] - Dynamically Masked Discriminator for Generative Adversarial Networks
NeurIPS 2023
[Paper] [Official Code] - Leveraging Friendly Neighbors to Accelerate GAN Training
CVPR 2023
[Paper] [Official Code] - Few-Shot Defect Image Generation via Defect-Aware Feature Manipulation
AAAI 2023
[Paper] [Official Code] - Few-shot image generation with reverse contrastive learning
Neural Networks 2024
[Paper] [Official Code] - Mutual Information Compensation for High-Fidelity Image Generation With Limited Data
IEEE Signal Processing Letters 2024
[Paper] - CHAIN: Enhancing Generalization in Data-Efficient GANs via lipsCHitz continuity constrAIned Normalization
CVPR 2024
[Paper] [Official Code] - BK-SDM: A Lightweight, Fast, and Cheap Version of Stable Diffusion
ECCV 2024
[Paper] [Official Code] - Designing Priors for Better Few-Shot Image Synthesis
ECCV 2024
[Paper] [Official Code]
Click to expand/collapse 6 works
- Generative Co-training for Generative Adversarial Networks with Limited Data
AAAI 2022
[Paper] [Official Code] - Frequency-Aware GAN for High-Fidelity Few-Shot Image Generation
ECCV 2022
[Paper] [Official Code] - Improving GANs with A Dynamic Discriminator
NeurIPS 2022
[Paper] [Official Code] - Exploiting Frequency Components for Training GANs under Limited Data
NeurIPS 2022
[Paper] [Official Code] - Improving Few-shot Image Generation by Structural Discrimination and Textural Modulation
ACM MM 2023
[Paper] [Official Code] - Frequency-Auxiliary One-Shot Domain Adaptation of Generative Adversarial Networks
Electronics 2024
[Paper]
Click to expand/collapse 27 works
- Data Augmentation Generative Adversarial Networks
arXiv 2017
[Paper] [Official Code] - Few-shot Generative Modelling with Generative Matching Networks
AISTATS 2018
[Paper] - Few-shot Image Generation with Reptile
arXiv 2019
[Paper] [Official Code] - A domain adaptive few shot generation framework
arXiv 2020
[Paper] - Matching-based Few-shot Image Generation
ICME 2020
[Paper] [Official Code] - Fusing-and-Filling GAN for Few-shot Image Generation
ACM-MM 2020
[Paper] [Official Code] - Fusing Local Representations for Few-shot Image Generation
ICCV 2021
[Paper] [Official Code] - Fast Adaptive Meta-Learning for Few-Shot Image Generation
TMM 2021
[Paper] [Official Code] - Few Shot Image Generation via Implicit Autoencoding of Support Sets
NeurIPS 2021 Workshop
[Paper] - Frequency-Aware GAN for High-Fidelity Few-Shot Image Generation
ECCV 2022
[Paper] [Official Code] - Towards Diverse Few-shot Image Generation with Sample-Specific Delta
ECCV 2022
[Paper] [Official Code] - Few-shot image generation based on contrastive meta-learning generative adversarial network
Visual Computer 2022
[Paper] - Few-shot Image Generation Using Discrete Content Representation
ACM MM 2022
[Paper] - Learning to Memorize Feature Hallucination for One-shot Image Generation
CVPR 2022
[Paper] - The Euclidean Space is Evil: Hyperbolic Attribute Editing for Few-shot Image Generation
ICCV 2023
[Paper] [Official Code] - Where is My Spot? Few-shot Image Generation via Latent Subspace Optimization
CVPR 2023
[Paper] [Official Code] - Attribute Group Editing for Reliable Few-shot Image Generation
CVPR 2023
[Paper] [Official Code] - Adaptive multi-scale modulation generative adversarial network for few-shot image generation
Applied Intelligence 2023
[Paper] - Stable Attribute Group Editing for Reliable Few-shot Image Generation
arXiv 2023
[Paper] [Official Code] - Improving Few-shot Image Generation by Structural Discrimination and Textural Modulation
ACM MM 2023
[Paper] [Official Code] - IoT-Enabled Few-Shot Image Generation for Power Scene Defect Detection Based on Self-Attention and Global–Local Fusion
Sensors 2023
[Paper] - SAGAN: Skip attention generative adversarial networks for few-shot image generation
Digital Signal Processing 2024
[Paper] [Official Code] - TAGE: Trustworthy Attribute Group Editing for Stable Few-shot Image Generation
ICSPS 2024
[Paper] - Conditional Distribution Modelling for Few-Shot Image Synthesis with Diffusion Models
ACCV 2024
[Paper] - Exact Fusion via Feature Distribution Matching for Few-shot Image Generation
CVPR 2024
[Paper] [Official Code] - Semantic Mask Reconstruction and Category Semantic Learning for few-shot image generation
Neural Networks 2025
[Paper] - EqGAN: Feature Equalization Fusion for Few-shot Image Generation
ICASSP 2025
[Paper]
Click to expand/collapse 17 works
- Learning a Generative Model from a Single Natural Image
ICCV 2019
[Paper] [Official Code] - Learning to generate samples from single images and videos
CVPR 2021-W
[Paper] [Official Code] - Improved techniques for training single image gans
WACV 2021
[Paper] [Official Code] - Recurrent SinGAN: Towards Scale-agnostic Single Image GANs
ACM EITCE 2021
[Paper] - Sa-SinGAN: Self-Attention for Single-Image Generation Adversarial Networks
Machine Vision and Applications 2021
[Paper] - ExSinGAN: Learning an Explainable Generative Model from a Single Image
BMVC 2021
[Paper] [Official Code] - PetsGAN: Rethinking Priors for Single Image Generation
AAAI 2022
[Paper] [Official Code] - CCASinGAN: Cascaded Channel Attention Guided Single-Image GANs
ICSP 2022
[Paper] - Training Diffusion Models on a Single Image or Video
ICML 2023
[Paper] [Official Code] - Learning and Blending the Internal Distributions of Single Images by Spatial Image-Identity Conditioning
arXiv 2022
[Paper] - A Single Image Denoising Diffusion Model
ICML 2023
[Paper] [Official Code] - Diverse Attribute Transfer for Few-Shot Image Synthesis
VISIGRAPP 2023
[Paper] [Official Code] - TCGAN: Semantic-aware and Structure-preserved GANs with Individual Vision Transformer for Fast Arbitrary One-shot Image Generation
arXiv 2023
[Paper] - Prompt-Based Learning for Image Variation Using Single Image Multi-Scale Diffusion Models
IEEE Access 2024
[Paper] - A Faster Single-Image Denoising Diffusion Model: Emphasizing the Role of the Latent Image Code
SSRN 2024
[Paper] - A single-image GAN model using self-attention mechanism and DenseNets
Neurocomputing 2024
[Paper] - Learning a Diffusion Model from a Single Natural Image
TPAMI 2025
[Paper] [Official Code]
If you find this repo useful, please cite our paper:
@article{abdollahzadeh2023survey,
title={A Survey on Generative Modeling with Limited Data, Few Shots, and Zero Shot},
author={Milad Abdollahzadeh and Touba Malekzadeh and Christopher T. H. Teo and Keshigeyan Chandrasegaran and Guimeng Liu and Ngai-Man Cheung},
year={2023},
eprint={2307.14397},
archivePrefix={arXiv},
primaryClass={cs.CV}
}