Qwen-Image-Edit-AIO-FP8

Qwen-Image-Edit-AIO-FP8 is an FP8-compressed edition of the Qwen-Image-Edit series developed by the Qwen team at Alibaba. It is designed to deliver high-precision, diffusion-based multimodal image editing with significantly reduced memory consumption and faster inference.

The model builds on the capabilities introduced in Qwen-Image-Edit-2509, including native multi-image conditioning that fuses multiple references (identity, product, environment) into a single coherent generation, and on the large-scale MMDiT architecture of Qwen-Image-Edit-2511, whose 20B-parameter backbone provides strong identity consistency and minimal drift during iterative refinement. The FP8 release preserves structural control, text-in-image editing accuracy, and ControlNet compatibility while markedly improving deployment efficiency on modern hardware.

Optimized for industrial design workflows, high-fidelity multi-person composition, material replacement, geometric reasoning, and annotation-aware generation, Qwen-Image-Edit-AIO-FP8 maintains professional-grade lighting and viewpoint stability, supports community LoRA adaptations, and enables production-ready editing pipelines with lower VRAM requirements and minimal quality loss relative to full-precision checkpoints.
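To make the memory-savings claim concrete, here is a back-of-envelope estimate of weight storage for a 20B-parameter backbone in bf16 versus FP8. These are approximations for weights only; activations, the text encoder, and the VAE add further overhead, and actual checkpoints may mix precisions.

```python
# Rough VRAM estimate for a 20B-parameter backbone (weights only).
# bf16 uses 2 bytes per parameter; FP8 uses 1 byte per parameter.
params = 20e9

bf16_gb = params * 2 / 2**30  # full precision
fp8_gb = params * 1 / 2**30   # FP8-compressed

print(f"bf16 weights: ~{bf16_gb:.1f} GiB")  # ~37.3 GiB
print(f"fp8 weights:  ~{fp8_gb:.1f} GiB")   # ~18.6 GiB
```

Halving the bytes per parameter roughly halves the weight footprint, which is what moves the model into reach of a single consumer or workstation GPU.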



Quick Start with Diffusers 🧨

Install the required packages

transformers   # v4.57.6
torch          # v2.9.1+cu128
diffusers      # v0.37.0.dev0
accelerate     # v1.12.0
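A minimal install sketch for the versions above, assuming a CUDA 12.8 build of PyTorch and the diffusers development branch (the `dev0` version is only available from source):

```shell
pip install "transformers==4.57.6" "accelerate==1.12.0"
pip install torch --index-url https://download.pytorch.org/whl/cu128
pip install git+https://github.com/huggingface/diffusers
```

Exact pins are optional; the versions listed above are the ones noted in this card.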

Qwen-Image-Edit-AIO-FP8 [Demo]

import torch
from diffusers import QwenImageEditPipeline  # or your compatible pipeline

model_path = "./Qwen-Image-Edit-AIO-FP8"

# Load the FP8 checkpoint; compute runs in bfloat16.
pipe = QwenImageEditPipeline.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Example usage
# from PIL import Image
# input_image = Image.open("input.png").convert("RGB")
# output = pipe(image=input_image, prompt="Your edit prompt")
# output.images[0].save("output.png")

This repository follows the same release notes, terms and conditions, and license as the original model page, Qwen-Image-Edit.

This repository also contains experimental compressed-tensors editions of the Qwen image-edit models. Feel free to ignore those editions if you run into issues loading them.
