
Integrate Bria 3.1/3.2 Models and ControlNet Pipelines into InvokeAI #8248


Draft: ilanbria wants to merge 12 commits into main from ilan/support_bria_3.2_pipeline

Conversation

@ilanbria commented Jul 10, 2025

Summary

This feature PR integrates Bria 3.1/3.2 text-to-image models and their ControlNet pipelines into InvokeAI.


How

| Area | Key Changes |
| --- | --- |
| Backend | Added `invokeai/backend/bria/**`, including `pipeline_bria.py` and `pipeline_bria_controlnet.py` (diffusers workflows), `transformer_bria.py`, `controlnet_bria.py`, helper & utility scripts, and `controlnet_aux` detectors (Canny/OpenPose) |
| Model Manager | Added `model_loaders/bria.py` and updated the probe/taxonomy for auto-discovery of Bria Main & ControlNet models |
| Invocations / Nodes | New nodes under `invokeai/nodes/bria_nodes/*`, including a model loader, encoder, decoder, denoiser, latent sampler, and ControlNet handlers (see the sketch below) |
| UI / API | New field components (`BriaMainModelFieldInputComponent.tsx`, `BriaControlNetModelFieldInputComponent.tsx`); extended the `UIType` enum for Node Editor integration; updated Model Manager badges and schemas |
| Dependencies | Added `scikit-image` (required for the OpenPose detector); updated `uv.lock` |
| Scope | 46 files changed, ~5,500 LOC added, 3 LOC removed |
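For orientation, here is a minimal sketch of the shape one of the new nodes takes under InvokeAI's invocation conventions. The import paths, decorator arguments, and field names below are assumptions for illustration, not this PR's exact code:

```python
# Illustrative sketch only: import paths, decorator arguments, and field names
# are assumptions based on InvokeAI's node conventions, not this PR's code.
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.primitives import LatentsOutput


@invocation("bria_latent_sampler", title="Bria Latent Sampler", version="1.0.0")
class BriaLatentSamplerInvocation(BaseInvocation):
    """Samples the initial noise latents for the Bria denoiser."""

    seed: int = InputField(default=0, description="Random seed for noise sampling")
    width: int = InputField(default=1024, description="Output width in pixels")
    height: int = InputField(default=1024, description="Output height in pixels")

    def invoke(self, context) -> LatentsOutput:
        # Sample noise latents and pass them on to the denoiser node.
        ...
```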

Representative commits:

  • Add Bria text to image model and controlnet support

  • Setup Probe and UI to accept bria main/controlnet models

  • Added scikit-image required for Bria's OpenposeDetector model
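
Context for the last commit above: the OpenPose detector is what pulls in scikit-image. Below is a hedged sketch of typical detector usage, mirroring the public controlnet_aux API; the detector copies vendored under invokeai/backend/bria may expose a different interface.

```python
# Hedged sketch mirroring the public controlnet_aux API; the detector copies
# vendored under invokeai/backend/bria may differ.
# Assumes: pip install controlnet_aux scikit-image pillow
from controlnet_aux import OpenposeDetector
from PIL import Image

detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_map = detector(Image.open("input.png"))  # pose image fed to the ControlNet node
pose_map.save("pose.png")
```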


Related Issues / Discussions


QA Instructions

Requires Bria model weights and a CUDA 11+ GPU (CPU is supported, but slower).

Steps:

  1. Install Models (a hedged access-check sketch follows these steps)

    • Set an HF token with access to Bria's models.

    • Install briaai/BRIA-3.2.

    • Install briaai/BRIA-3.2-ControlNet-Union.

  2. Load Workflow

    • Load the workflows from the attached files:

      [Bria's ControlNet workflow.json](https://github.com/user-attachments/files/21248912/Bria.s.ControlNet.workflow.json)
      [Bria's text-to-image workflow.json](https://github.com/user-attachments/files/21248924/Bria.s.text-to-image.workflow.json)
  3. Run Workflow

    • Run the workflow without ControlNet.

    • Run the workflow with ControlNet.
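
For step 1, a minimal access-check sketch (assumes the huggingface_hub package and an HF_TOKEN environment variable; this helper is not part of the PR):

```python
# Minimal sketch: verify the token has been granted access to the gated Bria
# repos before installing them through the InvokeAI Model Manager.
# Assumptions: `pip install huggingface_hub` and HF_TOKEN set in the environment.
import os

from huggingface_hub import login, model_info

login(token=os.environ["HF_TOKEN"])
for repo_id in ("briaai/BRIA-3.2", "briaai/BRIA-3.2-ControlNet-Union"):
    info = model_info(repo_id)  # raises if the token lacks access to the gated repo
    print(f"{repo_id}: accessible (revision {info.sha[:7]})")
```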


Merge Plan

  • Rebase onto main after the current release.


Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)
  • Updated What's New copy (if doing a release after this PR)

@github-actions bot added labels: python, invocations, backend, frontend (Jul 10, 2025)
@ilanbria force-pushed the ilan/support_bria_3.2_pipeline branch from 7d2a666 to 1076d8e (July 10, 2025 20:15)
@ilanbria force-pushed the ilan/support_bria_3.2_pipeline branch from 1076d8e to 2d55dbe (July 14, 2025 13:32)
@github-actions bot added labels: Root, python-deps (Jul 14, 2025)
@ilanbria changed the title from "Add Bria text to image model and controlnet support" to "Integrate Bria 3.1/3.2 Models and ControlNet Pipelines into InvokeAI" (Jul 14, 2025)
@hipsterusername (Member) commented:

Hey! Confirming: have you tested that this works with the new location? I ran into some issues and may need to poke around in my local install to see what's going on, if you were able to run these without issue.

@ilanbria force-pushed the ilan/support_bria_3.2_pipeline branch from 3af3004 to c08a6a8 (July 15, 2025 21:01)
@ilanbria (Author) commented:

> Hey! Confirming: have you tested that this works with the new location? I ran into some issues and may need to poke around in my local install to see what's going on, if you were able to run these without issue.

Hey! Yeah, I had an import error and fixed it. It should work now.

@ilanbria force-pushed the ilan/support_bria_3.2_pipeline branch from 37ebc28 to 29aed4b (July 17, 2025 11:03)
@hipsterusername (Member) commented:

Thanks - trying to get this to run in between a lot of other stuff. I'm getting an error:

Device-side assertions were explicitly omitted for this error check; the error probably arose while initializing the DSA handlers.
[2025-07-17 11:54:32,972]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/kent/InvokeAI/invokeai/app/services/session_processor/session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "/home/kent/InvokeAI/invokeai/app/invocations/baseinvocation.py", line 241, in invoke_internal
    output = self.invoke(context)
  File "/home/kent/InvokeAI/invokeai/app/invocations/bria_latent_sampler.py", line 54, in invoke
    generator = torch.Generator(device=device).manual_seed(self.seed)
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Device-side assertions were explicitly omitted for this error check; the error probably arose while initializing the DSA handlers.


Looking at the above, it may be wise to mirror the device handling used elsewhere in the codebase. Is the noise generation something that needs to happen on GPU? (One device-robust pattern is sketched below.)
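
A hedged, illustrative sketch of that pattern: seed the generator on CPU, then move the sampled noise to the execution device. The helper below is not from this PR.

```python
# Illustrative sketch (not this PR's code): seeding on CPU avoids creating a
# CUDA generator on machines where CUDA initialization is flaky, and keeps
# seeds reproducible across CPU and GPU runs; the noise is moved to the
# execution device afterwards.
import torch

def sample_noise(shape: tuple[int, ...], seed: int, device: torch.device) -> torch.Tensor:
    generator = torch.Generator(device="cpu").manual_seed(seed)
    noise = torch.randn(shape, generator=generator)
    return noise.to(device)
```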

@ilanbria (Author) commented Jul 17, 2025:

> Thanks - trying to get this to run in between a lot of other stuff. I'm getting an error:
>
> [traceback quoted above]
>
> Looking at the above, it may be wise to mirror the device handling used elsewhere in the codebase. Is the noise generation something that needs to happen on GPU?

It would be significantly faster on the GPU, but I suspect that if it fails here it will also fail in the denoiser node.
I'm not hitting it on my g6e.xlarge instance (which uses an L40S), though; what machine are you running it on?
I've set the device to match the transformer's device; can you check again?

@ilanbria ilanbria force-pushed the ilan/support_bria_3.2_pipeline branch from 29aed4b to c296fd2 Compare July 17, 2025 17:54
@hipsterusername (Member) commented Jul 18, 2025:

I am able to run the node now (progress), although it's extremely slow. I suspect that there are a number of missing hooks into existing device references, memory-management tooling, etc.

Did you reference other inference nodes for conventions, or is this primarily a simple code port?

(Re: Device I'm on - testing on a box with 4 ADA 6000s)

@ilanbria (Author) commented:

> I am able to run the node now (progress), although it's extremely slow. I suspect that there are a number of missing hooks into existing device references, memory-management tooling, etc.
>
> Did you reference other inference nodes for conventions, or is this primarily a simple code port?

I referenced the FLUX and SD3 inference nodes, but I didn't find anything special beyond the use of model_on_device() (sketched below), which didn't improve VRAM usage or loading time in my testing.
Aside from the initial model-load slowdown, once the model is cached the denoiser node completes a 1024 × 1024 image at 50 steps in about 20 s, and the text-encoder and noise nodes finish within a few hundred milliseconds on my L40S.
If there are other hooks you rely on or further optimizations I should include, just let me know and I'll wire them in.
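
For reference, a minimal sketch of the model_on_device() pattern mentioned above, as used by the FLUX/SD3 nodes; the names and the shape of the yielded tuple are assumptions, and exact signatures vary across InvokeAI versions.

```python
# Assumption-labeled sketch of the model_on_device() pattern from InvokeAI's
# FLUX/SD3 nodes; exact signatures vary across InvokeAI versions. The model
# manager caches weights in RAM/VRAM, so only the first run pays the load cost.
def invoke(self, context):
    loaded_model = context.models.load(self.transformer.transformer)
    with loaded_model.model_on_device() as (cached_weights, transformer):
        # `transformer` is now on the execution device; run the denoise loop here.
        latents = self._denoise(transformer)
    return latents
```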
