# UniPic3-Teacher-Model

## Introduction
UniPic3-Teacher-Model is the high-quality teacher diffusion model used in the UniPic 3.0 framework. It is trained with full multi-step diffusion sampling and optimized for maximum perceptual quality, semantic consistency, and realism.
This model serves as the teacher backbone for:
- Distribution Matching Distillation (DMD)
- Consistency / trajectory distillation
- Few-step student model training
Rather than being optimized for fast inference, the teacher model prioritizes generation fidelity and stability, providing a strong and reliable supervision signal for downstream distilled models.
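For intuition, here is a minimal sketch of how a frozen multi-step teacher such as this one can supervise a few-step student under Distribution Matching Distillation. All names (`dmd_student_step`, `student`, `teacher`, `fake_score`) and the exact loss form are illustrative assumptions, not the UniPic 3.0 training code.

```python
import torch
import torch.nn.functional as F

# Illustrative pseudocode only: `student`, `teacher`, and `fake_score` are
# placeholder callables, not the actual UniPic3 training API.

def dmd_student_step(student, teacher, fake_score, optimizer, noise, cond):
    """One simplified Distribution Matching Distillation update.

    teacher    -- this repository's frozen multi-step diffusion model
    fake_score -- an auxiliary score model fit to the student's own samples
    """
    # 1) The few-step student maps noise (and conditioning) to an image/latent.
    x_student = student(noise, cond)

    # 2) Re-noise the student output at a random noise level sigma in (0, 1).
    sigma = torch.rand(x_student.shape[0], device=x_student.device)
    sigma = sigma.view(-1, *([1] * (x_student.dim() - 1)))
    eps = torch.randn_like(x_student)
    x_t = (1 - sigma) * x_student + sigma * eps

    # 3) The distribution-matching gradient is the gap between the "fake" score
    #    (student distribution) and the teacher's "real" score at x_t.
    with torch.no_grad():
        grad = fake_score(x_t, sigma, cond) - teacher(x_t, sigma, cond)

    # 4) Surrogate loss whose gradient w.r.t. the student output is
    #    proportional to `grad`; only the student receives updates.
    loss = 0.5 * F.mse_loss(x_student, (x_student - grad).detach())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```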
## Model Characteristics

- Role: Teacher model (not a distilled student)
- Sampling: Multi-step diffusion (high-fidelity)
- Architecture: Unified UniPic3 Transformer
- Tasks Supported:
  - Single-image editing
  - Multi-image composition (2–6 images)
  - Human–Object Interaction (HOI)
- Resolution: Flexible, within pixel budget constraints
- Training Objective:
  - Flow Matching / Diffusion loss (see the loss sketch below)
  - Used as teacher for DMD & consistency training
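As a reference for the training objective above, the following is a minimal, generic sketch of a conditional flow-matching loss. The `model` callable, tensor shapes, and loss weighting are assumptions; this is not the UniPic3 training code.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(model, x0, cond):
    """Conditional flow-matching loss on clean latents x0 (illustrative only).

    The model is trained to predict the velocity that transports noise to data
    along the straight-line path x_t = (1 - t) * x0 + t * eps.
    """
    eps = torch.randn_like(x0)                      # Gaussian noise endpoint
    t = torch.rand(x0.shape[0], device=x0.device)   # uniform timestep per sample
    t_ = t.view(-1, *([1] * (x0.dim() - 1)))        # broadcast to latent shape

    x_t = (1 - t_) * x0 + t_ * eps                  # point on the linear path
    v_target = eps - x0                             # target velocity field

    v_pred = model(x_t, t, cond)                    # model predicts velocity
    return F.mse_loss(v_pred, v_target)
```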
## Benchmarks
This teacher model achieves state-of-the-art performance on:
- Image editing benchmarks
- Multi-image composition benchmarks
It provides high-quality supervision targets for distilled UniPic3 student models.
## Important Note
This repository hosts the teacher model.
It is not optimized for few-step inference.
If you are looking for:
- 4–8 step fast inference
- Deployment-friendly distilled models
please refer to the UniPic3-DMD / distilled checkpoints instead.
## Usage (Teacher Model)
### 1. Clone the Repository

```bash
git clone https://github.com/SkyworkAI/UniPic
cd UniPic-3
```
### 2. Set Up the Environment

```bash
conda create -n unipic3 python=3.10
conda activate unipic3
pip install -r requirements.txt
```
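Optionally, verify that the installed PyTorch build can see a GPU before launching distributed inference; the batch-inference command in the next step assumes a CUDA-capable setup.

```python
# Quick environment check (assumes a CUDA-capable GPU for distributed inference).
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```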
### 3. Batch Inference

```bash
transformer_path="Skywork/Unipic3"

python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 --use_env \
  qwen_image_edit_fast/batch_inference.py \
  --jsonl_path data/val.jsonl \
  --output_dir work_dirs/output \
  --distributed \
  --num_inference_steps 50 \
  --true_cfg_scale 4.0 \
  --transformer ${transformer_path} \
  --skip_existing
```
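The schema of `data/val.jsonl` is defined by the repository's data tooling and is not documented here. Purely as an illustration of the line-per-sample JSONL layout, a file could be assembled like this; the field names are hypothetical, so check `qwen_image_edit_fast/batch_inference.py` for the actual keys it expects.

```python
# Hypothetical example of building data/val.jsonl; the real field names and
# schema are defined by the repository's batch_inference script.
import json

samples = [
    {"image": "examples/input.png", "prompt": "replace the background with a snowy street"},
]
with open("data/val.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```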
## License
This model is released under the MIT License.
## Citation
If you use Skywork-UniPic in your research, please cite:
```bibtex
@misc{wang2025skyworkunipicunifiedautoregressive,
  title={Skywork UniPic: Unified Autoregressive Modeling for Visual Understanding and Generation},
  author={Peiyu Wang and Yi Peng and Yimeng Gan and Liang Hu and Tianyidan Xie and Xiaokun Wang and Yichen Wei and Chuanxin Tang and Bo Zhu and Changshi Li and Hongyang Wei and Eric Li and Xuchen Song and Yang Liu and Yahui Zhou},
  year={2025},
  eprint={2508.03320},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.03320},
}

@misc{wei2025skyworkunipic20building,
  title={Skywork UniPic 2.0: Building Kontext Model with Online RL for Unified Multimodal Model},
  author={Hongyang Wei and Baixin Xu and Hongbo Liu and Cyrus Wu and Jie Liu and Yi Peng and Peiyu Wang and Zexiang Liu and Jingwen He and Yidan Xietian and Chuanxin Tang and Zidong Wang and Yichen Wei and Liang Hu and Boyi Jiang and William Li and Ying He and Yang Liu and Xuchen Song and Eric Li and Yahui Zhou},
  year={2025},
  eprint={2509.04548},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.04548},
}
```