Conversation

@Pfannkuchensack (Contributor)

Summary

Add support for loading Flux LoRA models in the xlabs format.

The xlabs format uses a different key structure than other Flux LoRA formats:

  • double_blocks.X.processor.{qkv|proj}_lora{1|2}.{down|up}.weight

Where:

  • lora1 → image attention stream (img_attn)
  • lora2 → text attention stream (txt_attn)
  • qkv → query/key/value projection
  • proj → output projection
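
For illustration, a minimal detection sketch is shown below. The function name and the all-keys-must-match heuristic are assumptions for this sketch, not necessarily what `flux_xlabs_lora_conversion_utils.py` actually does.

```python
import re

# Hypothetical sketch: recognize the xlabs key structure by pattern-matching
# every key in the state dict. The PR's actual heuristics may differ.
_XLABS_KEY_RE = re.compile(
    r"^double_blocks\.(\d+)\.processor\.(qkv|proj)_lora[12]\.(down|up)\.weight$"
)

def is_state_dict_likely_xlabs(state_dict: dict) -> bool:
    """Return True if every key matches the xlabs double-block pattern."""
    return bool(state_dict) and all(_XLABS_KEY_RE.match(k) for k in state_dict)
```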

Changes

  • Add FluxLoRAFormat.XLabs enum value to taxonomy
  • Add flux_xlabs_lora_conversion_utils.py with format detection and conversion (sketched after this list)
  • Update formats.py to include xlabs in the detection cascade
  • Update lora.py loader to handle xlabs format
  • Update model probe in configs/lora.py to accept recognized Flux LoRA formats (fixes installation of xlabs LoRAs)
  • Add unit tests for xlabs format detection and conversion
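
As a rough sketch of the remapping implied by the key structure above (the target key naming and the helper below are hypothetical; the PR's conversion utilities may emit a different internal representation):

```python
import re

# Hypothetical sketch of the xlabs -> internal key remapping. lora1/lora2
# selects the image vs. text attention stream; qkv/proj selects which
# projection the LoRA applies to. Target key names are illustrative only.
_STREAM = {"1": "img_attn", "2": "txt_attn"}

def convert_xlabs_key(key: str) -> str:
    m = re.match(
        r"^double_blocks\.(\d+)\.processor\.(qkv|proj)_lora([12])\.(down|up)\.weight$",
        key,
    )
    if m is None:
        raise ValueError(f"Not an xlabs LoRA key: {key}")
    block, proj, stream, direction = m.groups()
    return f"double_blocks.{block}.{_STREAM[stream]}.{proj}.lora_{direction}.weight"
```

Under this assumed naming, `double_blocks.0.processor.qkv_lora2.down.weight` would map to `double_blocks.0.txt_attn.qkv.lora_down.weight`.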

Related Issues / Discussions

Adds support for xlabs-format Flux LoRAs, which were previously rejected with the error "model does not match LyCORIS LoRA heuristics".

Example LoRA using this format: Flux Realism LoRA

QA Instructions

  1. Download an xlabs-format Flux LoRA (e.g., flux-RealismLora.safetensors from XLabs-AI; see the key-inspection snippet after this list to confirm the format)
  2. Install the LoRA via Model Manager → Import Models
  3. Verify it's recognized as a FLUX LoRA
  4. Use the LoRA in a Flux generation
  5. Verify the LoRA effect is applied to the output
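
To double-check that a downloaded file really uses the xlabs key structure before installing it, the keys can be listed directly. This snippet assumes safetensors is installed and uses the example filename from step 1:

```python
from safetensors import safe_open

# Print a few keys; xlabs files should show the
# double_blocks.X.processor.{qkv|proj}_lora{1|2}.{down|up}.weight pattern.
with safe_open("flux-RealismLora.safetensors", framework="pt") as f:
    for key in sorted(f.keys())[:8]:
        print(key)
```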

Merge Plan

Standard merge, no special considerations.

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • ❗Changes to a redux slice have a corresponding migration
  • Documentation added / updated (if applicable)
  • Updated What's New copy (if doing a release after this PR)

@github-actions bot added the python, backend, and python-tests labels on Dec 19, 2025
@lstein (Collaborator) left a comment:

I tested four different XLabs-format Flux LoRAs downloaded from the HuggingFace XLabs AI site. All were correctly recognized as Flux LoRAs and had the expected influence on generated images. In addition, I compared an XLabs format LoRA (disney_lora) to one that XLabs preconverted to standard format (disney_lora_comfy_converted), and both generated exactly the same image, confirming that the PR's conversion code is working properly.

@lstein enabled auto-merge (squash) on December 24, 2025, 20:13
@lstein merged commit ac245cb into invoke-ai:main on Dec 24, 2025
13 checks passed
@Pfannkuchensack deleted the feature/xlabs-flux-lora-support branch on December 24, 2025, 20:19