Change Image Style With Qwen Edit 2509 + Qwen Image + Fsampler + LoRA
https://youtu.be/_XOV4KMxdug
https://redd.it/1o9sil7
@rStableDiffusion
YouTube
ComfyUI Tutorial: How To Change Image Style With Qwen Edit 2509 #comfyui #comfyuitutorial #qwenimage
In this tutorial, I show how to do style transfer using Qwen Image Edit 2509 combined with a LoRA model to create a raw image that then goes through a fine-tuning step to improve the quality and detail of the image. The workflow is optimized…
Perplexity AI PRO - 1 YEAR at 90% Discount – Don’t Miss Out!
https://redd.it/1o9vghr
@rStableDiffusion
New Wan 2.2 distill model
I'm a little confused why no one has discussed or uploaded a test run of the new distill models.
My understanding is that this model is fine-tuned with lightx2v baked in, which means that when you use it you do not need a lightx2v LoRA on the low-noise model.
But I don't know how the speed/results compare to the native fp8 or the GGUF versions.
If you have any information or comparisons about this model, please share.
https://huggingface.co/lightx2v/Wan2.2-Distill-Models/tree/main
https://redd.it/1o9v767
@rStableDiffusion
About that WAN T2V 2.2 and "speed-up" LoRAs.
I don't have big problems with I2V, but T2V? I'm lost. I have about 20 random speed-up LoRAs; some of them work, some of them (rCM, for example) don't work at all. So here is my question: what exact setup of speed-up LoRAs do you use with T2V?
https://redd.it/1o9wyqj
@rStableDiffusion
Brie's Qwen Edit Lazy Repose workflow
Hey everyone~
I've released a new version of my Qwen Edit Lazy Repose. It does what it says on the tin.
The main new feature is the replacement of Qwen Edit 2509 with the All-in-One finetune. This simplifies the workflow a bit and also improves quality.
Take note that the first gen involving the model load will take some time, because the LoRAs, VAE and CLIP are all shoved in there. Once you get past the initial image, gen times are typical for Qwen Edit.
Get the workflow here:
https://civitai.com/models/1982115
The new AIO model is by the venerable Phr00t, found here:
https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO/tree/main/v5
Note that there's both an SFW and the other version.
The other version is very horny; even if your character is fully clothed, something may just slip out. Be warned.
Stay cheesy and have a good one!~
Here are some examples:
Frolicking about. Both pose and expression are transferred.
Works if the pose image is blank. Sometimes the props carry over too.
Works when the character image is on a blank background too.
All character images were generated by me (of me).
All pose images were yoinked from the venerable Digital Pastel, maker of the SmoothMix series of models, which I cherish.
https://redd.it/1o9zqer
@rStableDiffusion
Training a Qwen Image LoRA on a 3080 Ti in two and a half hours with OneTrainer.
With the latest update of OneTrainer, I notice close to a 20% performance improvement training Qwen Image LoRAs (from 6.90 s/it to 5 s/it).
Using a 3080 Ti (12 GB, 11.4 GB peak utilization), 30 images, 512 resolution and batch size 2 (around 1400 steps at 5 s/it), it takes about two and a half hours to complete a training run.
I use the included 16 GB VRAM preset and change the layer offloading fraction to 0.64. I have 48 GB of 2.9 GHz DDR4 RAM; during training, total system RAM utilization is just below 32 GB in Windows 11, and preparing for training goes up to 97 GB (including virtual memory). I'm still playing with the values, but in general I am happy with the results. I notice that with 40 images the LoRA maybe responds better to prompts?
I shared specific numbers to show why I'm so surprised at the performance.
Thanks to the OneTrainer team, the level of optimisation is incredible.
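A quick back-of-the-envelope check of the numbers quoted above. All figures come from the post itself; this is plain arithmetic and does not touch the OneTrainer API, and attributing the leftover half hour to the preparation phase is a guess:

```python
def training_time_hours(steps: int, secs_per_it: float) -> float:
    """Wall-clock estimate for the stepping phase of a training run."""
    return steps * secs_per_it / 3600

# ~1400 steps at 5 s/it is just under 2 hours of pure stepping; the rest of
# the quoted 2.5 hours would be model load and the preparation phase.
print(f"{training_time_hours(1400, 5.0):.2f} h")  # -> 1.94 h

# Per-iteration time drop from 6.90 s/it to 5.00 s/it
# (the post calls this "close to 20%"; the raw numbers work out a bit higher):
drop = 1 - 5.0 / 6.90
print(f"{drop:.0%}")  # -> 28%
```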
https://redd.it/1oa1wp3
@rStableDiffusion
Wan 2.2 I2V Quality Tip (For Noobs)
Lots of new users out there, so I'm not sure if everyone already knows this (I just started with Wan myself), but I thought I'd share a tip.
If you're using a high-resolution image as your input, don't downscale it to match the resolution you're going for before running Wan. Just leave it as-is and let Wan do the downscale on its own. I've found that you get much better quality. There is a slight trade-off in speed (I don't know if it's doing some extra processing or whatever), but it only adds a "few" extra seconds to the clock for me. I'm running an RTX 3090 Ti, though, so I'm not sure how that would affect smaller cards. It's worth it.
Otherwise, if you want some speed gains, downscale the image to the target resolution and it should run faster, at least in my tests.
Also, increasing steps on the speed LoRAs can boost quality too. When I started, I thought 4-step meant only 4 steps, but I regularly use 8 steps and get noticeable quality gains with only a little sacrifice in speed. 8-10 seems to be the sweet spot. Again, it's worth it.
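For anyone who wants the "downscale first for speed" route from the tip above, here is a minimal Pillow sketch. The 832x480 target is just an example Wan resolution, not something the post specifies:

```python
from PIL import Image

def downscale_to_target(img: Image.Image, target_w: int, target_h: int) -> Image.Image:
    """Shrink img so it fits within target_w x target_h, keeping aspect ratio.
    Images already at or below the target are returned unchanged (no upscaling)."""
    scale = min(target_w / img.width, target_h / img.height)
    if scale >= 1.0:
        return img
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.LANCZOS)

# Example: a 4096x2304 source shrunk to fit an 832x480 run.
src = Image.new("RGB", (4096, 2304))
print(downscale_to_target(src, 832, 480).size)  # -> (832, 468)
```

Note the aspect ratio is preserved, so one dimension may land slightly under the target; Wan/ComfyUI resize nodes typically handle that remainder.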
https://redd.it/1o9zcyj
@rStableDiffusion
Open-source release! Face-to-Photo Transform ordinary face photos into stunning portraits.
Built on Qwen-Image-Edit, the Face-to-Photo model excels at precise facial detail restoration. Unlike previous models (e.g., InfiniteYou), it captures fine-grained facial features across angles, sizes, and positions, producing natural, aesthetically pleasing portraits.
Model download: https://modelscope.cn/models/DiffSynth-Studio/Qwen-Image-Edit-F2P
Try it online: https://modelscope.cn/aigc/imageGeneration?tab=advanced&imageId=17008179
Inference code: https://github.com/modelscope/DiffSynth-Studio/blob/main/examples/qwen_image/model_inference/Qwen-Image-Edit.py
It can be used easily in ComfyUI with the qwen-image-edit v1 model.
https://preview.redd.it/4l8vnu4gawvf1.jpg?width=1398&format=pjpg&auto=webp&s=27d80eff424cf8ced9153f641896da1fbb573d2b
https://preview.redd.it/76ai6q4gawvf1.jpg?width=1398&format=pjpg&auto=webp&s=b895f8dfc16aa0dbf437d5de0b58193e64c1c570
https://preview.redd.it/dyg1gf2gawvf1.jpg?width=1398&format=pjpg&auto=webp&s=b01a227a115881a5ef7d1886dccd290d6c52287b
https://preview.redd.it/kcf67h2gawvf1.jpg?width=2592&format=pjpg&auto=webp&s=1de1b763c6ac0486e8e9a43214193bdd89d22914
https://preview.redd.it/5cpzbi2gawvf1.png?width=2216&format=png&auto=webp&s=1dae933989e8bd1086a895e0b187866dc5231547
https://redd.it/1o9zxe2
@rStableDiffusion
Qwen Edit - Sharing prompts: Rotate camera - shot from behind
https://redd.it/1oa8qde
@rStableDiffusion