Conversation

@Jayce-Ping

What does this PR do?

Fixes #13034

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

Collaborator

@yiyixuxu yiyixuxu left a comment

thanks!

@yiyixuxu yiyixuxu requested a review from dg845 January 28, 2026 04:22
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@dg845
Collaborator

dg845 commented Jan 28, 2026

@bot /style

@github-actions
Contributor

github-actions bot commented Jan 28, 2026

Style bot fixed some files and pushed the changes.

Collaborator

@dg845 dg845 left a comment

Thanks for the PR! Left a comment about using self.transformer.config.patch_size instead of hardcoding the patch_size to 2.

As an aside, I think this solution is superior to forcing the height and width to be multiples of 32, because WanPipeline is meant to work with both Wan 2.1 and Wan 2.2 models. Since the VAE spatial downsample factor is 8 for the Wan 2.1 models but 16 for the Wan 2.2 models, a multiple-of-32 check would be too restrictive for Wan 2.1 models (which can accept heights/widths that are multiples of 16 but not 32).
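Roughly, the patch_size-based adjustment under discussion could look like the following standalone sketch. The helper name, the round-down behavior, and passing `vae_scale_factor_spatial` and the spatial `patch_size` as plain integers are assumptions for illustration, not the exact code merged in this PR:

```python
import logging

logger = logging.getLogger(__name__)


def adjust_height_width(height, width, vae_scale_factor_spatial, patch_size):
    """Round height/width down to the nearest size the model can accept.

    `patch_size` is the transformer's spatial patch size (2 for Wan models),
    and `vae_scale_factor_spatial` is 8 for Wan 2.1 VAEs but 16 for Wan 2.2
    VAEs, so the required multiple differs between the two model families.
    """
    multiple = vae_scale_factor_spatial * patch_size
    if height % multiple != 0 or width % multiple != 0:
        # Round down to the nearest valid size instead of erroring out.
        new_height = height // multiple * multiple
        new_width = width // multiple * multiple
        logger.warning(
            f"height and width must be divisible by {multiple}; "
            f"resizing {height}x{width} to {new_height}x{new_width}."
        )
        height, width = new_height, new_width
    return height, width
```

For example, with a Wan 2.1 setup (`vae_scale_factor_spatial=8`, `patch_size=2`) a 480x720 request passes through unchanged, while with a Wan 2.2 setup (`vae_scale_factor_spatial=16`) the same request would be rounded down to 480x704.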

I agree with the idea of using `patch_size` instead. Thanks! 😊

Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
@Jayce-Ping
Author

Jayce-Ping commented Jan 28, 2026

@dg845 Hi! Thanks for your suggestion. Sorry that I hardcoded 2 instead of using patch_size. I agree that auto-resizing is better than just emitting a warning.

Another potential issue (though unlikely) is that Wan 2.2 has two transformers. What if some future model has two transformers with different config.patch_size values? But I think that is unlikely, since both need to process latents of the same shape. Anyway, the current solution is elegant enough. 😊
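For that hypothetical two-transformer case, a simple guard could assert that both configs agree before the value is used. The attribute names `transformer` / `transformer_2` mirror the Wan 2.2 pipeline layout, but this snippet is only an illustration, not code from this PR:

```python
def get_spatial_patch_size(pipe):
    """Return the shared patch_size, failing loudly on a mismatch.

    Hypothetical helper: assumes the pipeline exposes `transformer` and,
    optionally, `transformer_2`, each with a comparable `config.patch_size`.
    """
    patch_size = pipe.transformer.config.patch_size
    transformer_2 = getattr(pipe, "transformer_2", None)
    if transformer_2 is not None and transformer_2.config.patch_size != patch_size:
        raise ValueError(
            "Expected both transformers to use the same patch_size, got "
            f"{patch_size} and {transformer_2.config.patch_size}."
        )
    return patch_size
```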


Successfully merging this pull request may close these issues.

[Bug] WanPipeline height and width