# GLIGEN (Grounded Language-to-Image Generation)

The GLIGEN model was created by researchers and engineers from [University of Wisconsin-Madison, Columbia University, and Microsoft](https://github.com/gligen/GLIGEN). The [StableDiffusionGLIGENPipeline](/docs/diffusers/v0.37.1/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENPipeline) and [StableDiffusionGLIGENTextImagePipeline](/docs/diffusers/v0.37.1/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENTextImagePipeline) can generate photorealistic images conditioned on grounding inputs. [StableDiffusionGLIGENPipeline](/docs/diffusers/v0.37.1/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENPipeline) is grounded on text and bounding boxes, while [StableDiffusionGLIGENTextImagePipeline](/docs/diffusers/v0.37.1/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENTextImagePipeline) can additionally be grounded on reference images. If an input image is given, the pipeline inserts the objects described by the grounding inputs into the regions defined by the bounding boxes; otherwise, it generates an image described by the caption/prompt and inserts the objects into those regions. The model is trained on the COCO2014D and COCO2014CD datasets, and it uses a frozen CLIP ViT-L/14 text encoder to condition on the grounding inputs.

The abstract from the [paper](https://huggingface.co/papers/2301.07093) is:

*Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin.*

> [!TIP]
> Make sure to check out the Stable Diffusion [Tips](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently!
>
> If you want to use one of the official checkpoints for a task, explore the [gligen](https://huggingface.co/gligen) Hub organization!

[StableDiffusionGLIGENPipeline](/docs/diffusers/v0.37.1/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENPipeline) was contributed by [Nikhil Gajendrakumar](https://github.com/nikhil-masterful) and [StableDiffusionGLIGENTextImagePipeline](/docs/diffusers/v0.37.1/en/api/pipelines/stable_diffusion/gligen#diffusers.StableDiffusionGLIGENTextImagePipeline) was contributed by [Nguyễn Công Tú Anh](https://github.com/tuanh123789).

## StableDiffusionGLIGENPipeline[[diffusers.StableDiffusionGLIGENPipeline]]

#### diffusers.StableDiffusionGLIGENPipeline[[diffusers.StableDiffusionGLIGENPipeline]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen.py#L111)

Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN).

This model inherits from [DiffusionPipeline](/docs/diffusers/v0.37.1/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).

#### __call__[[diffusers.StableDiffusionGLIGENPipeline.__call__]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen.py#L539)

- **prompt** (`str` or `list[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **gligen_phrases** (`list[str]`) --
  The phrases to guide what to include in each of the regions defined by the corresponding
  `gligen_boxes`. There should only be one phrase per bounding box.
- **gligen_boxes** (`list[list[float]]`) --
  The bounding boxes that identify rectangular regions of the image that are going to be filled with the
  content described by the corresponding `gligen_phrases`. Each rectangular box is defined as a
  `list[float]` of 4 elements `[xmin, ymin, xmax, ymax]` where each value is between [0,1] (see the helper
  sketch after the example below).
- **gligen_inpaint_image** (`PIL.Image.Image`, *optional*) --
  The input image, if provided, is inpainted with objects described by the `gligen_boxes` and
  `gligen_phrases`. Otherwise, it is treated as a generation task on a blank input image.
- **gligen_scheduled_sampling_beta** (`float`, defaults to 0.3) --
  Scheduled Sampling factor from [GLIGEN: Open-Set Grounded Text-to-Image
  Generation](https://huggingface.co/papers/2301.07093). The grounding conditioning is only applied for the
  first `gligen_scheduled_sampling_beta` fraction of the denoising steps; this scheduled sampling improves
  quality and controllability.
- **negative_prompt** (`str` or `list[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).

The call function to the pipeline for generation.

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionGLIGENPipeline
>>> from diffusers.utils import load_image

>>> # Insert objects described by text at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained(
...     "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> input_image = load_image(
...     "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png"
... )
>>> prompt = "a birthday cake"
>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]]
>>> phrases = ["a birthday cake"]

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_inpaint_image=input_image,
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg")

>>> # Generate an image described by the prompt and
>>> # insert objects described by text at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained(
...     "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage"
>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]]
>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"]

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-1-4-generation-text-box.jpg")
```
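
A small, hypothetical helper (not part of `diffusers`) can convert pixel-space boxes into the normalized `[xmin, ymin, xmax, ymax]` format expected by `gligen_boxes`:

```py
# Hypothetical helper: convert pixel-space boxes into the normalized
# [xmin, ymin, xmax, ymax] coordinates expected by `gligen_boxes`.
def to_normalized_boxes(pixel_boxes, image_width, image_height):
    return [
        [xmin / image_width, ymin / image_height, xmax / image_width, ymax / image_height]
        for xmin, ymin, xmax, ymax in pixel_boxes
    ]


# For a 512x512 image, a box from (137, 312) to (337, 412) in pixels becomes:
boxes = to_normalized_boxes([[137, 312, 337, 412]], image_width=512, image_height=512)
# [[0.267578125, 0.609375, 0.658203125, 0.8046875]]
```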

**Parameters:**

vae ([AutoencoderKL](/docs/diffusers/v0.37.1/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) : Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

text_encoder ([CLIPTextModel](https://huggingface.co/docs/transformers/v5.3.0/en/model_doc/clip#transformers.CLIPTextModel)) : Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).

tokenizer ([CLIPTokenizer](https://huggingface.co/docs/transformers/v5.3.0/en/model_doc/clip#transformers.CLIPTokenizer)) : A `CLIPTokenizer` to tokenize text.

unet ([UNet2DConditionModel](/docs/diffusers/v0.37.1/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) : A `UNet2DConditionModel` to denoise the encoded image latents.

scheduler ([SchedulerMixin](/docs/diffusers/v0.37.1/en/api/schedulers/overview#diffusers.SchedulerMixin)) : A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [DDIMScheduler](/docs/diffusers/v0.37.1/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/v0.37.1/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/v0.37.1/en/api/schedulers/pndm#diffusers.PNDMScheduler).

safety_checker (`StableDiffusionSafetyChecker`) : Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for more details about a model's potential harms.

feature_extractor ([CLIPImageProcessor](https://huggingface.co/docs/transformers/v5.3.0/en/model_doc/clip#transformers.CLIPImageProcessor)) : A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.

**Returns:**

[StableDiffusionPipelineOutput](/docs/diffusers/v0.37.1/en/api/pipelines/stable_diffusion/gligen#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`

If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/v0.37.1/en/api/pipelines/stable_diffusion/gligen#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.
#### enable_vae_slicing[[diffusers.StableDiffusionGLIGENPipeline.enable_vae_slicing]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/pipeline_utils.py#L2257)

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
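
A minimal sketch of toggling sliced VAE decoding, assuming `pipe` is a `StableDiffusionGLIGENPipeline` loaded from the generation checkpoint shown in the example above:

```py
# Decode the batch slice by slice to save memory; larger batches benefit most.
pipe.enable_vae_slicing()
images = pipe(
    prompt="a waterfall in a beautiful forest with fall foliage",
    gligen_phrases=["a waterfall"],
    gligen_boxes=[[0.1387, 0.2051, 0.4277, 0.7090]],
    num_images_per_prompt=4,
).images

# Revert to decoding the whole batch in one step.
pipe.disable_vae_slicing()
```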
#### disable_vae_slicing[[diffusers.StableDiffusionGLIGENPipeline.disable_vae_slicing]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/pipeline_utils.py#L2270)

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.
#### enable_vae_tiling[[diffusers.StableDiffusionGLIGENPipeline.enable_vae_tiling]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/pipeline_utils.py#L2283)

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
#### disable_vae_tiling[[diffusers.StableDiffusionGLIGENPipeline.disable_vae_tiling]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/pipeline_utils.py#L2297)

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.
#### enable_model_cpu_offload[[diffusers.StableDiffusionGLIGENPipeline.enable_model_cpu_offload]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/pipeline_utils.py#L1179)

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the accelerator when its
`forward` method is called, and the model remains on the accelerator until the next model runs. Memory savings are
lower than with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution
of the `unet`.

**Parameters:**

gpu_id (`int`, *optional*) : The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.

device (`torch.device` or `str`, *optional*, defaults to `None`) : The PyTorch device type of the accelerator that shall be used in inference. If not specified, the available accelerator will be automatically detected and used.
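
A short sketch of model offloading, assuming `accelerate` is installed; note that the pipeline is not moved to the accelerator manually:

```py
import torch

from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16
)
# Instead of `pipe.to("cuda")`, let offloading move each sub-model to the
# accelerator only while its forward pass runs.
pipe.enable_model_cpu_offload()
```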
#### prepare_latents[[diffusers.StableDiffusionGLIGENPipeline.prepare_latents]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen.py#L490)
#### enable_fuser[[diffusers.StableDiffusionGLIGENPipeline.enable_fuser]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen.py#L512)
#### encode_prompt[[diffusers.StableDiffusionGLIGENPipeline.encode_prompt]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen.py#L220)

Encodes the prompt into text encoder hidden states.

**Parameters:**

prompt (`str` or `list[str]`, *optional*) : prompt to be encoded

device (`torch.device`) : torch device

num_images_per_prompt (`int`) : number of images that should be generated per prompt

do_classifier_free_guidance (`bool`) : whether to use classifier free guidance or not

negative_prompt (`str` or `list[str]`, *optional*) : The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).

prompt_embeds (`torch.Tensor`, *optional*) : Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument.

negative_prompt_embeds (`torch.Tensor`, *optional*) : Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.

lora_scale (`float`, *optional*) : A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

clip_skip (`int`, *optional*) : Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
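
A minimal sketch of pre-computing prompt embeddings and reusing them, assuming `pipe` is a `StableDiffusionGLIGENPipeline` on `"cuda"` and that `encode_prompt` returns a `(prompt_embeds, negative_prompt_embeds)` tuple matching the parameters above:

```py
# Encode the prompt once and reuse the embeddings across calls.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="a waterfall in a beautiful forest with fall foliage",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality",
)

images = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    gligen_phrases=["a waterfall"],
    gligen_boxes=[[0.1387, 0.2051, 0.4277, 0.7090]],
).images
```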

## StableDiffusionGLIGENTextImagePipeline[[diffusers.StableDiffusionGLIGENTextImagePipeline]]

#### diffusers.StableDiffusionGLIGENTextImagePipeline[[diffusers.StableDiffusionGLIGENTextImagePipeline]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L163)

Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN).

This model inherits from [DiffusionPipeline](/docs/diffusers/v0.37.1/en/api/pipelines/overview#diffusers.DiffusionPipeline). Check the superclass documentation for the generic methods the
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).

#### __call__[[diffusers.StableDiffusionGLIGENTextImagePipeline.__call__]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L714)

- **prompt** (`str` or `list[str]`, *optional*) --
  The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`) --
  The width in pixels of the generated image.
- **num_inference_steps** (`int`, *optional*, defaults to 50) --
  The number of denoising steps. More denoising steps usually lead to a higher quality image at the
  expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 7.5) --
  A higher guidance scale value encourages the model to generate images closely linked to the text
  `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- **gligen_phrases** (`list[str]`) --
  The phrases to guide what to include in each of the regions defined by the corresponding
  `gligen_boxes`. There should only be one phrase per bounding box.
- **gligen_images** (`list[PIL.Image.Image]`) --
  The images to guide what to include in each of the regions defined by the corresponding `gligen_boxes`.
  There should only be one image per bounding box.
- **input_phrases_mask** (`int` or `list[int]`) --
  Mask for the phrase inputs; use `0` to ignore the corresponding phrase (for example a placeholder) and `1`
  to keep it (see the style-transfer example below).
- **input_images_mask** (`int` or `list[int]`) --
  Mask for the image inputs; use `0` to ignore the corresponding image (for example a placeholder) and `1`
  to keep it (see the style-transfer example below).
- **gligen_boxes** (`list[list[float]]`) --
  The bounding boxes that identify rectangular regions of the image that are going to be filled with the
  content described by the corresponding `gligen_phrases`. Each rectangular box is defined as a
  `list[float]` of 4 elements `[xmin, ymin, xmax, ymax]` where each value is between [0,1].
- **gligen_inpaint_image** (`PIL.Image.Image`, *optional*) --
  The input image, if provided, is inpainted with objects described by the `gligen_boxes` and
  `gligen_phrases`. Otherwise, it is treated as a generation task on a blank input image.
- **gligen_scheduled_sampling_beta** (`float`, defaults to 0.3) --
  Scheduled Sampling factor from [GLIGEN: Open-Set Grounded Text-to-Image
  Generation](https://huggingface.co/papers/2301.07093). The grounding conditioning is only applied for the
  first `gligen_scheduled_sampling_beta` fraction of the denoising steps; this scheduled sampling improves
  quality and controllability.
- **negative_prompt** (`str` or `list[str]`, *optional*) --
  The prompt or prompts to guide what to not include in image generation. If not defined, you need to
  pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).

The call function to the pipeline for generation.

Examples:
```py
>>> import torch
>>> from diffusers import StableDiffusionGLIGENTextImagePipeline
>>> from diffusers.utils import load_image

>>> # Insert objects described by image at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
...     "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> input_image = load_image(
...     "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png"
... )
>>> prompt = "a backpack"
>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]]
>>> phrases = None
>>> gligen_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg"
... )

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_inpaint_image=input_image,
...     gligen_boxes=boxes,
...     gligen_images=[gligen_image],
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-inpainting-text-image-box.jpg")

>>> # Generate an image described by the prompt and
>>> # insert objects described by text and image at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
...     "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a flower sitting on the beach"
>>> boxes = [[0.0, 0.09, 0.53, 0.76]]
>>> phrases = ["flower"]
>>> gligen_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg"
... )

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_images=[gligen_image],
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-generation-text-image-box.jpg")

>>> # Generate an image described by the prompt and
>>> # transfer style described by image at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
...     "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a dragon flying on the sky"
>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]]  # Set `[0.0, 1.0, 0.0, 1.0]` for the style

>>> gligen_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
... )

>>> gligen_placeholder = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
... )

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=[
...         "dragon",
...         "placeholder",
...     ],  # Can use any text instead of `placeholder` token, because we will use mask here
...     gligen_images=[
...         gligen_placeholder,
...         gligen_image,
...     ],  # Can use any image in gligen_placeholder, because we will use mask here
...     input_phrases_mask=[1, 0],  # Set 0 for the placeholder token
...     input_images_mask=[0, 1],  # Set 0 for the placeholder image
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg")
```

**Parameters:**

vae ([AutoencoderKL](/docs/diffusers/v0.37.1/en/api/models/autoencoderkl#diffusers.AutoencoderKL)) : Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

text_encoder ([CLIPTextModel](https://huggingface.co/docs/transformers/v5.3.0/en/model_doc/clip#transformers.CLIPTextModel)) : Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).

tokenizer ([CLIPTokenizer](https://huggingface.co/docs/transformers/v5.3.0/en/model_doc/clip#transformers.CLIPTokenizer)) : A `CLIPTokenizer` to tokenize text.

processor ([CLIPProcessor](https://huggingface.co/docs/transformers/v5.3.0/en/model_doc/clip#transformers.CLIPProcessor)) : A `CLIPProcessor` to process the reference image.

image_encoder ([CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/v5.3.0/en/model_doc/clip#transformers.CLIPVisionModelWithProjection)) : Frozen image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).

image_project (`CLIPImageProjection`) : A `CLIPImageProjection` to project the image embedding into the phrase embedding space.

unet ([UNet2DConditionModel](/docs/diffusers/v0.37.1/en/api/models/unet2d-cond#diffusers.UNet2DConditionModel)) : A `UNet2DConditionModel` to denoise the encoded image latents.

scheduler ([SchedulerMixin](/docs/diffusers/v0.37.1/en/api/schedulers/overview#diffusers.SchedulerMixin)) : A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [DDIMScheduler](/docs/diffusers/v0.37.1/en/api/schedulers/ddim#diffusers.DDIMScheduler), [LMSDiscreteScheduler](/docs/diffusers/v0.37.1/en/api/schedulers/lms_discrete#diffusers.LMSDiscreteScheduler), or [PNDMScheduler](/docs/diffusers/v0.37.1/en/api/schedulers/pndm#diffusers.PNDMScheduler).

safety_checker (`StableDiffusionSafetyChecker`) : Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the [model card](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) for more details about a model's potential harms.

feature_extractor ([CLIPImageProcessor](https://huggingface.co/docs/transformers/v5.3.0/en/model_doc/clip#transformers.CLIPImageProcessor)) : A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.

**Returns:**

[StableDiffusionPipelineOutput](/docs/diffusers/v0.37.1/en/api/pipelines/stable_diffusion/gligen#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or `tuple`

If `return_dict` is `True`, [StableDiffusionPipelineOutput](/docs/diffusers/v0.37.1/en/api/pipelines/stable_diffusion/gligen#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) is returned,
otherwise a `tuple` is returned where the first element is a list with the generated images and the
second element is a list of `bool`s indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.
#### enable_vae_slicing[[diffusers.StableDiffusionGLIGENTextImagePipeline.enable_vae_slicing]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/pipeline_utils.py#L2257)

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
#### disable_vae_slicing[[diffusers.StableDiffusionGLIGENTextImagePipeline.disable_vae_slicing]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/pipeline_utils.py#L2270)

Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
computing decoding in one step.
#### enable_vae_tiling[[diffusers.StableDiffusionGLIGENTextImagePipeline.enable_vae_tiling]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/pipeline_utils.py#L2283)

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
#### disable_vae_tiling[[diffusers.StableDiffusionGLIGENTextImagePipeline.disable_vae_tiling]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/pipeline_utils.py#L2297)

Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
computing decoding in one step.
#### enable_model_cpu_offload[[diffusers.StableDiffusionGLIGENTextImagePipeline.enable_model_cpu_offload]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/pipeline_utils.py#L1179)

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the accelerator when its
`forward` method is called, and the model remains on the accelerator until the next model runs. Memory savings are
lower than with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution
of the `unet`.

**Parameters:**

gpu_id (`int`, *optional*) : The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.

device (`torch.device` or `str`, *optional*, defaults to `None`) : The PyTorch device type of the accelerator that shall be used in inference. If not specified, the available accelerator will be automatically detected and used.
#### prepare_latents[[diffusers.StableDiffusionGLIGENTextImagePipeline.prepare_latents]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L528)
#### enable_fuser[[diffusers.StableDiffusionGLIGENTextImagePipeline.enable_fuser]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L550)
#### complete_mask[[diffusers.StableDiffusionGLIGENTextImagePipeline.complete_mask]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L587)

Mask the features corresponding to the phrases and images, based on the input mask value (`0` or `1`) for each
phrase and image.
#### crop[[diffusers.StableDiffusionGLIGENTextImagePipeline.crop]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L567)

Crop the input image to the specified dimensions.
#### draw_inpaint_mask_from_boxes[[diffusers.StableDiffusionGLIGENTextImagePipeline.draw_inpaint_mask_from_boxes]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L555)

Create an inpainting mask based on given boxes. This function generates an inpainting mask using the provided
boxes to mark the regions that need to be inpainted.
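
A simplified sketch of the idea (not the library implementation): start from a mask of ones at the latent resolution and zero out every box region:

```py
import torch


# `boxes` are normalized [xmin, ymin, xmax, ymax] coordinates and `size` is the
# (height, width) of the latent grid; 0 marks the regions to inpaint.
def inpaint_mask_from_boxes(boxes, size):
    height, width = size
    mask = torch.ones(height, width)
    for xmin, ymin, xmax, ymax in boxes:
        mask[int(ymin * height) : int(ymax * height), int(xmin * width) : int(xmax * width)] = 0
    return mask
```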
#### encode_prompt[[diffusers.StableDiffusionGLIGENTextImagePipeline.encode_prompt]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L251)

Encodes the prompt into text encoder hidden states.

**Parameters:**

prompt (`str` or `list[str]`, *optional*) : prompt to be encoded

device (`torch.device`) : torch device

num_images_per_prompt (`int`) : number of images that should be generated per prompt

do_classifier_free_guidance (`bool`) : whether to use classifier free guidance or not

negative_prompt (`str` or `list[str]`, *optional*) : The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).

prompt_embeds (`torch.Tensor`, *optional*) : Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument.

negative_prompt_embeds (`torch.Tensor`, *optional*) : Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument.

lora_scale (`float`, *optional*) : A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

clip_skip (`int`, *optional*) : Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
#### get_clip_feature[[diffusers.StableDiffusionGLIGENTextImagePipeline.get_clip_feature]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L603)

Get the image and phrase embeddings using the pretrained CLIP model. The image embedding is projected into the
phrase embedding space.
#### get_cross_attention_kwargs_with_grounded[[diffusers.StableDiffusionGLIGENTextImagePipeline.get_cross_attention_kwargs_with_grounded]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L627)

Prepare the cross-attention kwargs containing information about the grounded inputs (boxes, mask, image
embedding, phrase embeddings).
#### get_cross_attention_kwargs_without_grounded[[diffusers.StableDiffusionGLIGENTextImagePipeline.get_cross_attention_kwargs_without_grounded]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L691)

Prepare the cross-attention kwargs without information about the grounded inputs (boxes, mask, image embedding,
phrase embeddings); all of them are zero tensors.
#### target_size_center_crop[[diffusers.StableDiffusionGLIGENTextImagePipeline.target_size_center_crop]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L578)

Crop and resize the image to the target size while keeping the center.

## StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

#### diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput[[diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/pipelines/stable_diffusion/pipeline_output.py#L10)

Output class for Stable Diffusion pipelines.

**Parameters:**

images (`list[PIL.Image.Image]` or `np.ndarray`) : list of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, num_channels)`.

nsfw_content_detected (`list[bool]`) : list indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or `None` if safety checking could not be performed.
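
When `return_dict=True` (the default), the pipeline returns this class; a short sketch of consuming it, assuming `pipe` is one of the GLIGEN pipelines loaded as in the examples above:

```py
output = pipe(
    prompt="a waterfall in a beautiful forest with fall foliage",
    gligen_phrases=["a waterfall"],
    gligen_boxes=[[0.1387, 0.2051, 0.4277, 0.7090]],
)

# `nsfw_content_detected` may be None when the safety checker is disabled.
nsfw_flags = output.nsfw_content_detected or [False] * len(output.images)
for i, (image, flagged) in enumerate(zip(output.images, nsfw_flags)):
    if not flagged:
        image.save(f"gligen_{i}.png")
```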

