
Conversation


@songkey songkey commented Jan 26, 2026

What does this PR do?

The Flux2 Klein series shares the same underlying architecture as the original Flux2 model but differs in the number of transformer blocks; the block count is also what separates the series' two variants, the 4B and 9B versions, which have distinct computational footprints.

This architectural mismatch was previously the root cause of errors when loading LoRAs fine-tuned on Flux2 Klein models (context: ostris/ai-toolkit#667).

This PR adaptively enumerates the number of transformer blocks rather than assuming a fixed count, enabling seamless LoRA loading across both the original Flux2 and the entire Flux2 Klein series.
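The exact diff is not reproduced here, but the core idea can be sketched as follows. This is a minimal illustration under stated assumptions, not the PR's actual code: the helper name infer_num_blocks and the key layout are hypothetical.

import re

def infer_num_blocks(state_dict, prefix="transformer_blocks"):
    # Hypothetical helper: scan LoRA keys such as
    # "transformer.transformer_blocks.12.attn.to_q.lora_A.weight"
    # and derive the block count from the highest index present,
    # instead of hard-coding the count of any single variant.
    pattern = re.compile(rf"{re.escape(prefix)}\.(\d+)\.")
    indices = [int(m.group(1)) for key in state_dict for m in pattern.finditer(key)]
    return max(indices) + 1 if indices else 0

# Example: a LoRA whose highest block index is 11 implies 12 blocks.
sd = {
    "transformer.transformer_blocks.0.attn.to_q.lora_A.weight": None,
    "transformer.transformer_blocks.11.attn.to_q.lora_A.weight": None,
}
print(infer_num_blocks(sd))  # 12

Deriving the count from the checkpoint keys means the same loading path works whether the target has the original Flux2 block count or either Klein variant's.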

Who is this for?

Users trying to load LoRAs trained on Flux2 Klein (4B/9B variants) into diffusers Flux2 pipelines.

import torch
from diffusers import Flux2KleinPipeline

# Load base model
pipe = Flux2KleinPipeline.from_pretrained("black-forest-labs/FLUX.2-klein-4B", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Before: This would raise a shape mismatch error due to block count differences
# After: This now loads correctly by adaptively matching blocks
# Replace with actual path to a FLUX.2-klein-base-4B-trained LoRA
pipe.load_lora_weights("path/to/flux2-klein-base-4b-lora.safetensors")
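
For completeness, a typical generation call after the LoRA loads might look like the following; the prompt and settings are placeholders rather than values from this PR.

# Run inference with the LoRA applied (illustrative settings).
image = pipe(
    prompt="a watercolor fox in a snowy forest",
    num_inference_steps=28,
).images[0]
image.save("flux2_klein_lora_sample.png")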

Checklist:

  • I have checked that the script runs locally.
  • I have run make style and make quality.

@songkey songkey marked this pull request as draft January 27, 2026 01:48
@songkey songkey changed the title Resolve Flux2 Klein 4B/9B LoRA loading errors [Flux2] Fix LoRA loading for Flux2 Klein by adaptively enumerating transformer blocks Jan 27, 2026
@songkey songkey marked this pull request as ready for review January 27, 2026 02:08
