Commit 85279df

Merge branch 'main' into local-model-info
2 parents: 2d993b7 + 20e0740

29 files changed: +1669, -99 lines

.github/workflows/mirror_community_pipeline.yml
Lines changed: 2 additions & 2 deletions

@@ -79,14 +79,14 @@ jobs:
 
       # Check secret is set
       - name: whoami
-        run: huggingface-cli whoami
+        run: hf auth whoami
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN_MIRROR_COMMUNITY_PIPELINES }}
 
       # Push to HF! (under subfolder based on checkout ref)
       # https://huggingface.co/datasets/diffusers/community-pipelines-mirror
       - name: Mirror community pipeline to HF
-        run: huggingface-cli upload diffusers/community-pipelines-mirror ./examples/community ${PATH_IN_REPO} --repo-type dataset
+        run: hf upload diffusers/community-pipelines-mirror ./examples/community ${PATH_IN_REPO} --repo-type dataset
        env:
          PATH_IN_REPO: ${{ env.PATH_IN_REPO }}
          HF_TOKEN: ${{ secrets.HF_TOKEN_MIRROR_COMMUNITY_PIPELINES }}
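The change above swaps the `huggingface-cli` entry point for the newer `hf` CLI from `huggingface_hub`. The same two steps can also be done from Python; below is a minimal sketch, assuming `HF_TOKEN` and `PATH_IN_REPO` are set in the environment exactly as in the workflow:

```py
import os

from huggingface_hub import upload_folder, whoami

# Check the secret is set and the token is valid (equivalent to `hf auth whoami`)
print(whoami(token=os.environ["HF_TOKEN"])["name"])

# Mirror the community pipelines folder into the dataset repo
# (equivalent to the `hf upload ... --repo-type dataset` step above)
upload_folder(
    repo_id="diffusers/community-pipelines-mirror",
    repo_type="dataset",
    folder_path="./examples/community",
    path_in_repo=os.environ["PATH_IN_REPO"],
    token=os.environ["HF_TOKEN"],
)
```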

docs/source/en/_toctree.yml
Lines changed: 1 addition & 1 deletion

@@ -179,7 +179,7 @@
   isExpanded: false
   sections:
   - local: quantization/overview
-    title: Getting Started
+    title: Getting started
   - local: quantization/bitsandbytes
     title: bitsandbytes
   - local: quantization/gguf

docs/source/en/api/quantization.md
Lines changed: 4 additions & 4 deletions

@@ -27,19 +27,19 @@ Learn how to quantize models in the [Quantization](../quantization/overview) gui
 
 ## BitsAndBytesConfig
 
-[[autodoc]] BitsAndBytesConfig
+[[autodoc]] quantizers.quantization_config.BitsAndBytesConfig
 
 ## GGUFQuantizationConfig
 
-[[autodoc]] GGUFQuantizationConfig
+[[autodoc]] quantizers.quantization_config.GGUFQuantizationConfig
 
 ## QuantoConfig
 
-[[autodoc]] QuantoConfig
+[[autodoc]] quantizers.quantization_config.QuantoConfig
 
 ## TorchAoConfig
 
-[[autodoc]] TorchAoConfig
+[[autodoc]] quantizers.quantization_config.TorchAoConfig
 
 ## DiffusersQuantizer

docs/source/en/index.md
Lines changed: 16 additions & 29 deletions

@@ -12,37 +12,24 @@ specific language governing permissions and limitations under the License.
 
 <p align="center">
   <br>
-  <img src="https://raw.githubusercontent.com/huggingface/diffusers/77aadfee6a891ab9fcfb780f87c693f7a5beeb8e/docs/source/imgs/diffusers_library.jpg" width="400"/>
+  <img src="https://raw.githubusercontent.com/huggingface/diffusers/77aadfee6a891ab9fcfb780f87c693f7a5beeb8e/docs/source/imgs/diffusers_library.jpg" width="400" style="border: none;"/>
   <br>
 </p>
 
 # Diffusers
 
-🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](conceptual/philosophy#usability-over-performance), [simple over easy](conceptual/philosophy#simple-over-easy), and [customizability over abstractions](conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).
-
-The library has three main components:
-
-- State-of-the-art diffusion pipelines for inference with just a few lines of code. There are many pipelines in 🤗 Diffusers, check out the table in the pipeline [overview](api/pipelines/overview) for a complete list of available pipelines and the task they solve.
-- Interchangeable [noise schedulers](api/schedulers/overview) for balancing trade-offs between generation speed and quality.
-- Pretrained [models](api/models) that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.
-
-<div class="mt-10">
-  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
-    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/tutorial_overview"
-      ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
-      <p class="text-gray-700">Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time!</p>
-    </a>
-    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./using-diffusers/loading_overview"
-      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
-      <p class="text-gray-700">Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques.</p>
-    </a>
-    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual/philosophy"
-      ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
-      <p class="text-gray-700">Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library.</p>
-    </a>
-    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./api/models/overview"
-      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
-      <p class="text-gray-700">Technical descriptions of how 🤗 Diffusers classes and methods work.</p>
-    </a>
-  </div>
-</div>
+Diffusers is a library of state-of-the-art pretrained diffusion models for generating videos, images, and audio.
+
+The library revolves around the [`DiffusionPipeline`], an API designed for:
+
+- easy inference with only a few lines of code
+- flexibility to mix-and-match pipeline components (models, schedulers)
+- loading and using adapters like LoRA
+
+Diffusers also comes with optimizations - such as offloading and quantization - to ensure even the largest models are accessible on memory-constrained devices. If memory is not an issue, Diffusers supports torch.compile to boost inference speed.
+
+Get started right away with a Diffusers model on the [Hub](https://huggingface.co/models?library=diffusers&sort=trending) today!
+
+## Learn
+
+If you're a beginner, we recommend starting with the [Hugging Face Diffusion Models Course](https://huggingface.co/learn/diffusion-course/unit0/1). You'll learn the theory behind diffusion models, and learn how to use the Diffusers library to generate images, fine-tune your own models, and more.
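The rewritten page leans on [`DiffusionPipeline`] for the "few lines of code" claim. Below is a minimal sketch of what that looks like in practice; the SDXL checkpoint and the CUDA device are illustrative assumptions, not part of the page:

```py
import torch
from diffusers import DiffusionPipeline

# Load a pretrained text-to-image pipeline from the Hub
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.to("cuda")
# On memory-constrained devices, offloading can replace .to("cuda"):
# pipe.enable_model_cpu_offload()

image = pipe("An astronaut riding a horse on Mars").images[0]
image.save("astronaut.png")
```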

docs/source/en/quantization/overview.md
Lines changed: 17 additions & 11 deletions

@@ -11,27 +11,33 @@ specific language governing permissions and limitations under the License.
 -->
 
-# Quantization
+# Getting started
 
 Quantization focuses on representing data with fewer bits while also trying to preserve the precision of the original data. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size which makes it easier to store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations with fewer bits.
 
 Diffusers supports multiple quantization backends to make large diffusion models like [Flux](../api/pipelines/flux) more accessible. This guide shows how to use the [`~quantizers.PipelineQuantizationConfig`] class to quantize a pipeline during its initialization from a pretrained or non-quantized checkpoint.
 
 ## Pipeline-level quantization
 
-There are two ways you can use [`~quantizers.PipelineQuantizationConfig`] depending on the level of control you want over the quantization specifications of each model in the pipeline.
+There are two ways to use [`~quantizers.PipelineQuantizationConfig`] depending on how much customization you want to apply to the quantization configuration.
 
-- for more basic and simple use cases, you only need to define the `quant_backend`, `quant_kwargs`, and `components_to_quantize`
-- for more granular quantization control, provide a `quant_mapping` that provides the quantization specifications for the individual model components
+- for basic use cases, define the `quant_backend`, `quant_kwargs`, and `components_to_quantize` arguments
+- for granular quantization control, define a `quant_mapping` that provides the quantization configuration for individual model components
 
-### Simple quantization
+### Basic quantization
 
 Initialize [`~quantizers.PipelineQuantizationConfig`] with the following parameters.
 
 - `quant_backend` specifies which quantization backend to use. Currently supported backends include: `bitsandbytes_4bit`, `bitsandbytes_8bit`, `gguf`, `quanto`, and `torchao`.
-- `quant_kwargs` contains the specific quantization arguments to use.
+- `quant_kwargs` specifies the quantization arguments to use.
+
+> [!TIP]
+> These `quant_kwargs` arguments are different for each backend. Refer to the [Quantization API](../api/quantization) docs to view the arguments for each backend.
+
 - `components_to_quantize` specifies which components of the pipeline to quantize. Typically, you should quantize the most compute-intensive components like the transformer. The text encoder is another component to consider quantizing if a pipeline has more than one, such as [`FluxPipeline`]. The example below quantizes the T5 text encoder in [`FluxPipeline`] while keeping the CLIP model intact.
 
+The example below loads the bitsandbytes backend with the following arguments from [`~quantizers.quantization_config.BitsAndBytesConfig`]: `load_in_4bit`, `bnb_4bit_quant_type`, and `bnb_4bit_compute_dtype`.
+
 ```py
 import torch
 from diffusers import DiffusionPipeline

@@ -56,13 +62,13 @@ pipe = DiffusionPipeline.from_pretrained(
 image = pipe("photo of a cute dog").images[0]
 ```
 
-### quant_mapping
+### Advanced quantization
 
-The `quant_mapping` argument provides more flexible options for how to quantize each individual component in a pipeline, like combining different quantization backends.
+The `quant_mapping` argument provides more options for how to quantize each individual component in a pipeline, like combining different quantization backends.
 
 Initialize [`~quantizers.PipelineQuantizationConfig`] and pass a `quant_mapping` to it. The `quant_mapping` allows you to specify the quantization options for each component in the pipeline, such as the transformer and text encoder.
 
-The example below uses two quantization backends, [`~quantizers.QuantoConfig`] and [`transformers.BitsAndBytesConfig`], for the transformer and text encoder.
+The example below uses two quantization backends, [`~quantizers.quantization_config.QuantoConfig`] and [`transformers.BitsAndBytesConfig`], for the transformer and text encoder.
 
 ```py
 import torch

@@ -85,7 +91,7 @@ pipeline_quant_config = PipelineQuantizationConfig(
 There is a separate bitsandbytes backend in [Transformers](https://huggingface.co/docs/transformers/main_classes/quantization#transformers.BitsAndBytesConfig). You need to import and use [`transformers.BitsAndBytesConfig`] for components that come from Transformers. For example, `text_encoder_2` in [`FluxPipeline`] is a [`~transformers.T5EncoderModel`] from Transformers, so you need to use [`transformers.BitsAndBytesConfig`] instead of [`diffusers.BitsAndBytesConfig`].
 
 > [!TIP]
-> Use the [simple quantization](#simple-quantization) method above if you don't want to manage these distinct imports or aren't sure where each pipeline component comes from.
+> Use the [basic quantization](#basic-quantization) method above if you don't want to manage these distinct imports or aren't sure where each pipeline component comes from.
 
 ```py
 import torch

@@ -129,4 +135,4 @@ Check out the resources below to learn more about quantization.
 
 - The Transformers quantization [Overview](https://huggingface.co/docs/transformers/quantization/overview#when-to-use-what) provides an overview of the pros and cons of different quantization backends.
 
-- Read the [Exploring Quantization Backends in Diffusers](https://huggingface.co/blog/diffusers-quantization) blog post for a brief introduction to each quantization backend, how to choose a backend, and combining quantization with other memory optimizations.
+- Read the [Exploring Quantization Backends in Diffusers](https://huggingface.co/blog/diffusers-quantization) blog post for a brief introduction to each quantization backend, how to choose a backend, and combining quantization with other memory optimizations.
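The doc's first code block is cut off mid-hunk above. For context, here is a sketch of the basic-quantization setup the prose describes, assuming `PipelineQuantizationConfig` is imported from `diffusers.quantizers`, FLUX.1-dev as the checkpoint, and `nf4`/`torch.bfloat16` as illustrative `quant_kwargs` values:

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# Quantize the compute-intensive transformer and the T5 text encoder
# (text_encoder_2); the CLIP text encoder is left intact.
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    components_to_quantize=["transformer", "text_encoder_2"],
)

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("photo of a cute dog").images[0]
```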
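And a sketch of the `quant_mapping` form from the Advanced quantization section, combining the Diffusers `QuantoConfig` with the Transformers `BitsAndBytesConfig` as the prose describes; the int8 and 4-bit settings are assumptions for illustration:

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.quantizers.quantization_config import QuantoConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        # Diffusers model -> Diffusers quantization config
        "transformer": QuantoConfig(weights_dtype="int8"),
        # Transformers model (T5) -> Transformers quantization config
        "text_encoder_2": TransformersBitsAndBytesConfig(
            load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
        ),
    }
)

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")
```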

docs/source/en/tutorials/using_peft_for_inference.md
Lines changed: 16 additions & 1 deletion

@@ -319,6 +319,19 @@ If you expect to varied resolutions during inference with this feature, then mak
 
 There are still scenarios where recompilation is unavoidable, such as when the hotswapped LoRA targets more layers than the initial adapter. Try to load the LoRA that targets the most layers *first*. For more details about this limitation, refer to the PEFT [hotswapping](https://huggingface.co/docs/peft/main/en/package_reference/hotswap#peft.utils.hotswap.hotswap_adapter) docs.
 
+<details>
+<summary>Technical details of hotswapping</summary>
+
+The [`~loaders.lora_base.LoraBaseMixin.enable_lora_hotswap`] method converts the LoRA scaling factor from floats to torch.tensors and pads the shape of the weights to the largest required shape to avoid reassigning the whole attribute when the data in the weights are replaced.
+
+This is why the `max_rank` argument is important. The results are unchanged even when the values are padded with zeros. Computation may be slower though, depending on the padding size.
+
+Since no new LoRA attributes are added, each subsequent LoRA is only allowed to target the same layers, or a subset of the layers, that the first LoRA targets. Choosing the LoRA loading order is important because if the LoRAs target disjoint layers, you may end up creating a dummy LoRA that targets the union of all target layers.
+
+For more implementation details, take a look at the [`hotswap.py`](https://github.com/huggingface/peft/blob/92d65cafa51c829484ad3d95cf71d09de57ff066/src/peft/utils/hotswap.py) file.
+
+</details>
+
 ## Merge
 
 The weights from each LoRA can be merged together to produce a blend of multiple existing styles. There are several methods for merging LoRAs, each of which differs in *how* the weights are merged (may affect generation quality).

@@ -673,4 +686,6 @@ Browse the [LoRA Studio](https://lorastudio.co/models) for different LoRAs to us
   height="450"
 ></iframe>
 
-You can find additional LoRAs in the [FLUX LoRA the Explorer](https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer) and [LoRA the Explorer](https://huggingface.co/spaces/multimodalart/LoraTheExplorer) Spaces.
+You can find additional LoRAs in the [FLUX LoRA the Explorer](https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer) and [LoRA the Explorer](https://huggingface.co/spaces/multimodalart/LoraTheExplorer) Spaces.
+
+Check out the [Fast LoRA inference for Flux with Diffusers and PEFT](https://huggingface.co/blog/lora-fast) blog post to learn how to optimize LoRA inference with methods like FlashAttention-3 and fp8 quantization.
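The new collapsible block explains why hotswap capacity must be reserved before the first LoRA loads. A minimal sketch of that flow, where the LoRA repo ids are placeholders and `target_rank` is assumed to be the argument that sets the padded maximum rank:

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Reserve capacity before the first LoRA loads: scales become tensors and
# weights are padded up to this rank so later swaps avoid recompilation.
pipe.enable_lora_hotswap(target_rank=64)

# Load the LoRA targeting the most layers first
pipe.load_lora_weights("placeholder/lora-a", adapter_name="default")
pipe.unet = torch.compile(pipe.unet)  # optional; hotswapping keeps the graph valid

# Swap a second LoRA into the same slot without recompiling
pipe.load_lora_weights("placeholder/lora-b", hotswap=True, adapter_name="default")
```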

examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py
Lines changed: 14 additions & 0 deletions

@@ -13,6 +13,20 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 
+# /// script
+# dependencies = [
+#     "diffusers @ git+https://github.com/huggingface/diffusers.git",
+#     "torch>=2.0.0",
+#     "accelerate>=0.31.0",
+#     "transformers>=4.41.2",
+#     "ftfy",
+#     "tensorboard",
+#     "Jinja2",
+#     "peft>=0.11.1",
+#     "sentencepiece",
+# ]
+# ///
+
 import argparse
 import copy
 import itertools
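The comment block added at the top of the script is inline script metadata in the PEP 723 format: runners that understand it, such as uv (`uv run train_dreambooth_lora_flux_advanced.py`), can resolve and install the listed dependencies automatically before executing the script. The three training scripts below gain the identical block.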

examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py
Lines changed: 14 additions & 0 deletions

@@ -13,6 +13,20 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 
+# /// script
+# dependencies = [
+#     "diffusers @ git+https://github.com/huggingface/diffusers.git",
+#     "torch>=2.0.0",
+#     "accelerate>=0.31.0",
+#     "transformers>=4.41.2",
+#     "ftfy",
+#     "tensorboard",
+#     "Jinja2",
+#     "peft>=0.11.1",
+#     "sentencepiece",
+# ]
+# ///
+
 import argparse
 import gc
 import hashlib

examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py
Lines changed: 14 additions & 0 deletions

@@ -13,6 +13,20 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 
+# /// script
+# dependencies = [
+#     "diffusers @ git+https://github.com/huggingface/diffusers.git",
+#     "torch>=2.0.0",
+#     "accelerate>=0.31.0",
+#     "transformers>=4.41.2",
+#     "ftfy",
+#     "tensorboard",
+#     "Jinja2",
+#     "peft>=0.11.1",
+#     "sentencepiece",
+# ]
+# ///
+
 import argparse
 import gc
 import itertools

examples/dreambooth/train_dreambooth_flux.py
Lines changed: 14 additions & 0 deletions

@@ -13,6 +13,20 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 
+# /// script
+# dependencies = [
+#     "diffusers @ git+https://github.com/huggingface/diffusers.git",
+#     "torch>=2.0.0",
+#     "accelerate>=0.31.0",
+#     "transformers>=4.41.2",
+#     "ftfy",
+#     "tensorboard",
+#     "Jinja2",
+#     "peft>=0.11.1",
+#     "sentencepiece",
+# ]
+# ///
+
 import argparse
 import copy
 import gc
