@rStableDiffusion
New Wan 2.2 dstill model

I'm a little confused that no one has discussed or uploaded a test run for the new distill models.


My understanding is that this model is fine-tuned with lightx2v baked in, which means when you use it you do not need the lightx2v LoRA on the low-noise model.


But I don't know how its speed and results compare to the native fp8 or the GGUF versions.


If you have any information or comparison about this model please share.


https://huggingface.co/lightx2v/Wan2.2-Distill-Models/tree/main

https://redd.it/1o9v767
@rStableDiffusion
About that WAN T2V 2.2 and "speed up" LORAs.

I don't have big problems with I2V, but T2V? I'm lost. I have about ~20 random speed-up LoRAs; some of them work, and some (rCM, for example) don't work at all. So here is my question: what exact setup of speed-up LoRAs do you use with T2V?

https://redd.it/1o9wyqj
@rStableDiffusion
Brie's Qwen Edit Lazy Repose workflow

Hey everyone~

I've released a new version of my Qwen Edit Lazy Repose. It does what it says on the tin.

The main new feature is the replacement of Qwen Edit 2509 with the All-in-One finetune. This simplifies the workflow a bit and also improves quality.

Take note that the first gen, which involves the model load, will take some time, because the LoRAs, VAE and CLIP are all shoved in there. Once you get past the initial image, gen times are typical for Qwen Edit.

Get the workflow here:
https://civitai.com/models/1982115

The new AIO model is by the venerable Phr00t, found here:
https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO/tree/main/v5

Note that there are both a SFW version and the other version.
The other version is very horny; even if your character is fully clothed, something may just slip out. Be warned.

Stay cheesy and have a good one!~


Here are some examples:


Frolicking about. Both pose and expression are transferred.

Works if the pose image is blank. Sometimes the props carry over too.

Works when the character image is on a blank background too.


All character images generated by me (of me).
All pose images yoinked from the venerable Digital Pastel, maker of the SmoothMix series of models, which I cherish.

https://redd.it/1o9zqer
@rStableDiffusion
Best way to iterate through many prompts in comfyui?
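One common approach is to script ComfyUI's HTTP API: export the workflow in API format, then loop over a prompt list and queue one job per prompt. A minimal sketch, assuming ComfyUI is running locally on the default port; the node id "6" for the positive CLIPTextEncode node is a placeholder you'd replace with the id from your own exported graph:

```python
import copy
import json
import urllib.request

PROMPT_NODE_ID = "6"  # placeholder: id of your positive CLIPTextEncode node


def build_jobs(base_workflow, prompts):
    """One API payload per prompt, each a deep copy of the base graph."""
    jobs = []
    for text in prompts:
        wf = copy.deepcopy(base_workflow)
        wf[PROMPT_NODE_ID]["inputs"]["text"] = text
        jobs.append({"prompt": wf})
    return jobs


def queue_jobs(jobs, url="http://127.0.0.1:8188/prompt"):
    """POST each job to ComfyUI's /prompt endpoint."""
    for job in jobs:
        req = urllib.request.Request(
            url,
            data=json.dumps(job).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)


# Usage (not run here): load your API-format export, then queue.
# with open("workflow_api.json") as f:
#     queue_jobs(build_jobs(json.load(f), ["a red fox", "a grey wolf"]))
```

Each queued job renders independently, so the queue keeps the GPU busy while you walk away.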
https://redd.it/1o9zlqq
@rStableDiffusion
Training a Qwen Image LORA on a 3080ti in 2 and a half hours on Onetrainer.

With the latest update of OneTrainer I noticed close to a 20% performance improvement training Qwen Image LoRAs (from 6.90 s/it to 5 s/it).
Using a 3080 Ti (12 GB, 11.4 GB peak utilization), 30 images, 512 resolution and batch size 2 (around 1400 steps at 5 s/it), a training run takes about 2 and a half hours.
I use the included 16 GB VRAM preset and change the layer offloading fraction to 0.64. I have 48 GB of 2.9 GHz DDR4 RAM; during training, total system RAM utilization stays just below 32 GB in Windows 11, and preparing for training goes up to 97 GB (including virtual memory). I'm still playing with the values, but in general I am happy with the results. I notice that with maybe 40 images the LoRA responds better to prompts.
I shared specific numbers to show why I'm so surprised at the performance.
Thanks to the OneTrainer team; the level of optimisation is incredible.
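As a rough sanity check on those numbers (values taken from the post; the helper name is mine), pure step time accounts for most of the reported wall clock:

```python
def train_time_hours(steps, sec_per_it):
    # Pure stepping time, ignoring model load and latent caching overhead.
    return steps * sec_per_it / 3600


# ~1400 steps at 5 s/it is about 1.94 hours of pure stepping;
# setup and caching plausibly make up the rest of the ~2.5 h run.
print(round(train_time_hours(1400, 5.0), 2))
```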


https://redd.it/1oa1wp3
@rStableDiffusion
Wan 2.2 i2V Quality Tip (For Noobs)

Lots of new users out there, so I'm not sure if everyone already knows this (I just started in wan myself), but I thought I'd share a tip.

If you're using a high-resolution image for your input, don't downscale it to match your target resolution before running Wan. Just leave it as-is and let Wan do the downscale on its own. I've found that you get much better quality. There is a slight trade-off in speed - I don't know if it's doing some extra processing or whatever - but it only puts a few extra seconds on the clock for me. I'm running an RTX 3090 Ti, though, so I'm not sure how that would affect smaller cards. But it's worth it.

Otherwise, if you want some speed gains, downscale the image to the target resolution and it should run faster, at least in my tests.

Also, increasing steps on the speed LoRAs can boost quality too. When I started, I thought 4-step meant only 4 steps, but I regularly use 8 steps and get noticeable quality gains with only a little sacrifice in speed. 8-10 seems to be the sweet spot. Again, it's worth it.
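For the speed-oriented path, the resize-then-center-crop geometry can be sketched like this (pure Python; the 832x480 target is just an example Wan-friendly size, not something the post prescribes):

```python
def fit_to_target(src_w, src_h, dst_w=832, dst_h=480):
    """Resize dims and center-crop box to map an image onto a target size.

    Scales so the image fully covers the target while keeping aspect
    ratio, then crops the overhang equally from both sides.
    """
    scale = max(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    # Crop box as (left, top, right, bottom) within the resized image.
    left = (new_w - dst_w) // 2
    top = (new_h - dst_h) // 2
    return (new_w, new_h), (left, top, left + dst_w, top + dst_h)


# A 4K (3840x2160) input maps to an 853x480 resize with a 10 px
# horizontal crop on each side.
print(fit_to_target(3840, 2160))
```

The resulting dims and box plug straight into any image library's resize and crop calls (e.g. Pillow's `Image.resize` and `Image.crop`).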

https://redd.it/1o9zcyj
@rStableDiffusion
2025/10/21 09:53:13