Elastic model: FLUX.1-dev

Overview

ElasticModels are models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA lets you control model size, latency, and quality with a simple slider movement, routing different compression algorithms to different layers. For each model, we have produced a series of optimized variants:

  • XL: Mathematically equivalent neural network, optimized with our DNN compiler.
  • L: Near-lossless model, with less than 1% degradation on the corresponding benchmarks.
  • M: Faster model, with accuracy degradation of less than 1.5%.
  • S: The fastest model, with accuracy degradation of less than 2%.

Models can be accessed via the TheStage AI Python SDK (ElasticModels) or deployed as Docker containers with REST API endpoints (see the Deploy section).


Installation

System Requirements

| Property | Value |
| --- | --- |
| GPU | L40s, RTX 5090, H100, B200 |
| Python Version | 3.10-3.12 |
| CPU | Intel/AMD x86_64 |
| CUDA Version | 12.8+ |
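
To quickly verify that your environment meets these requirements, here is a minimal sketch using PyTorch (assumes PyTorch with CUDA support is already installed):

import torch

# Print whether CUDA is usable, the CUDA version PyTorch was built with,
# and the name of the detected GPU.
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)
print("GPU:", torch.cuda.get_device_name(0))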

TheStage AI Access token setup

Install the TheStage AI CLI and set up your API token:

pip install thestage
thestage config set --access-token <YOUR_ACCESS_TOKEN>

ElasticModels installation

Install the TheStage Elastic Models package:

pip install 'thestage-elastic-models[nvidia]' \
    --extra-index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple

Usage example

Elastic Models provides the same interface as HuggingFace Diffusers. Here is an example of how to use the FLUX.1-dev model:

import torch
from elastic_models.diffusers import FluxPipeline

model_name = 'black-forest-labs/FLUX.1-dev'
hf_token = ''  # your Hugging Face access token
device = torch.device("cuda")

pipeline = FluxPipeline.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    token=hf_token,
    # 'original' for original model
    # 'S', 'M', 'L', 'XL' for accelerated models
    mode='S'
)
pipeline.to(device)

prompts = ["Kitten eating a banana"]
output = pipeline(prompt=prompts)

for prompt, output_image in zip(prompts, output.images):
    output_image.save(prompt.replace(' ', '_') + '.png')

Quality Benchmarks

We have used the PartiPrompts and DrawBench datasets to evaluate the quality of images generated by different sizes of the FLUX.1-dev model (S, M, L, XL) compared to the original model. The evaluation metrics include ARNIQA, CLIP IQA, PSNR, SSIM, and VQA Faithfulness.

Quality Benchmark Results

| Metric / Model Size | S | M | L | XL | Original |
| --- | --- | --- | --- | --- | --- |
| ARNIQA (PartiPrompts) | 64.1 | 63.2 | 61.9 | 66.8 | 66.9 |
| ARNIQA (DrawBench) | 64.3 | 63.5 | 63.6 | 68.2 | 68.5 |
| CLIP IQA (PartiPrompts) | 85.5 | 86.4 | 83.8 | 88.3 | 87.9 |
| CLIP IQA (DrawBench) | 86.4 | 86.5 | 84.5 | 89.5 | 90.0 |
| VQA Faithfulness (PartiPrompts) | 87.5 | 85.5 | 85.5 | 85.5 | 88.6 |
| VQA Faithfulness (DrawBench) | 69.3 | 64.7 | 64.8 | 67.8 | 65.2 |
| PSNR (PartiPrompts) | 30.22 | 30.24 | 30.38 | N/A | N/A |
| SSIM (PartiPrompts) | 0.72 | 0.72 | 0.76 | 1.0 | 1.0 |

Datasets

  • PartiPrompts: A benchmark dataset created by Google Research, containing 1,632 diverse and challenging prompts that test various aspects of text-to-image generation models. It includes categories such as abstract concepts, complex compositions, properties and attributes, counting and numbers, text rendering, artistic styles, and fine-grained details.

  • DrawBench: A comprehensive benchmark dataset developed by Google Research, containing 200 carefully curated prompts designed to test specific capabilities and challenge areas of diffusion models. It includes categories such as colors, counting, conflicting requirements, DALL-E inspired prompts, detailed descriptions, misspellings, positional relationships, rare words, Reddit user prompts, and text generation.


Metrics

  • ARNIQA: No-reference image quality assessment metric that predicts perceptual quality without reference images.
  • CLIP IQA: No-reference image quality metric using contrastive learning to assess image quality without references.
  • VQA Faithfulness: Metric measuring how accurately generated images represent the text prompts.
  • PSNR: Peak Signal-to-Noise Ratio, measuring pixel-level similarity between images generated by an accelerated model and by the original model.
  • SSIM: Structural Similarity Index, measuring perceptual similarity between images generated by an accelerated model and by the original model.
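
PSNR and SSIM compare paired outputs: an image from an accelerated model against the image the original model produces for the same prompt. A minimal sketch of such a comparison, assuming torchmetrics is installed (img_fast and img_orig are hypothetical placeholders for the two images):

import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure

# Both metrics expect float tensors of shape (N, C, H, W) with values in [0, 1].
psnr = PeakSignalNoiseRatio(data_range=1.0)
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)

def compare(img_fast: torch.Tensor, img_orig: torch.Tensor):
    # Higher values mean the accelerated model's image is closer to the
    # original model's image; SSIM reaches 1.0 for identical images.
    return psnr(img_fast, img_orig).item(), ssim(img_fast, img_orig).item()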

Latency Benchmarks

We have measured the latency of different sizes of the FLUX.1-dev model (S, M, L, XL, original) on various GPUs. The measurements were taken for generating images of size 1024x1024 pixels.

Latency Benchmark Results

Latency (in seconds) for generating a 1024x1024 image using different model sizes on various hardware setups.

| GPU / Model Size | S | M | L | XL | Original |
| --- | --- | --- | --- | --- | --- |
| H100 | 2.88 | 3.06 | 3.25 | 4.18 | 6.46 |
| L40s | 9.22 | 10.07 | 10.67 | 14.39 | 16 |
| B200 | 1.93 | 2.04 | 2.15 | 2.77 | 4.52 |
| GeForce RTX 5090 | 5.79 | N/A | N/A | N/A | N/A |

Benchmarking Methodology

The benchmarking was performed on a single GPU with a batch size of 1. Each model was run for 10 iterations, and the average latency was calculated.

Algorithm summary:

  1. Load the FLUX.1-dev model with the specified size (S, M, L, XL, original).
  2. Move the model to the GPU.
  3. Prepare a sample prompt for image generation.
  4. Run the model for a number of iterations (e.g., 10) and measure the time taken for each iteration. On each iteration:
    • Synchronize the GPU to flush any previous operations.
    • Record the start time.
    • Generate the image using the model.
    • Synchronize the GPU again.
    • Record the end time and calculate the latency for that iteration.
  5. Calculate the average latency over all iterations.

Reproduce benchmarking

import time

import torch
from elastic_models.diffusers import FluxPipeline

model_name = 'black-forest-labs/FLUX.1-dev'
hf_token = ''  # your Hugging Face access token
device = torch.device("cuda")

pipeline = FluxPipeline.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    token=hf_token,
    # 'original' for original model
    # 'S', 'M', 'L', 'XL' for accelerated models
    mode='S'
)
pipeline.to(device)

prompt = ["Kitten eating a banana"]
generate_kwargs = {
    "height": 1024,
    "width": 1024,
    "num_inference_steps": 28,
    "cfg_scale": 0.0
}

def evaluate_pipeline():
    torch.cuda.synchronize()
    start_time = time.time()
    output = pipeline(
        prompt=prompt,
        **generate_kwargs
    )
    torch.cuda.synchronize()
    end_time = time.time()

    return end_time - start_time

# Warm-up
for _ in range(5):
    evaluate_pipeline()

# Benchmarking
num_runs = 10
total_time = 0.0

for _ in range(num_runs):
    latency = evaluate_pipeline()
    total_time += latency

average_latency = total_time / num_runs
print(f"Average Latency over {num_runs} runs: {average_latency} seconds")

Serving with Docker Image

For serving on Nvidia GPUs, we provide ready-to-go Docker containers with OpenAI-compatible API endpoints. Using our containers, you can set up an inference endpoint on any cloud or serverless provider, as well as on on-premise servers. You can also use this container to run inference through TheStage AI platform.

Prebuilt image from ECR

| GPU | Docker image name |
| --- | --- |
| H100, L40s | public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-nvidia-24.09b |
| B200, RTX 5090 | public.ecr.aws/i3f7g5s7/thestage/elastic-models:0.1.2-diffusers-blackwell-24.09b |

Pull the Docker image for your Nvidia GPU and start the inference container:

docker pull <IMAGE_NAME>
docker run --rm -ti \
  --name serving_thestage_model \
  -p 8000:80 \
  -e AUTH_TOKEN=<AUTH_TOKEN> \
  -e MODEL_REPO=black-forest-labs/FLUX.1-dev \
  -e MODEL_SIZE=<MODEL_SIZE> \
  -e MODEL_BATCH=<MAX_BATCH_SIZE> \
  -e HUGGINGFACE_ACCESS_TOKEN=<HUGGINGFACE_ACCESS_TOKEN> \
  -e THESTAGE_AUTH_TOKEN=<THESTAGE_ACCESS_TOKEN> \
  -v /mnt/hf_cache:/root/.cache/huggingface \
  <IMAGE_NAME>

| Parameter | Description |
| --- | --- |
| <MODEL_SIZE> | Available: S, M, L, XL. |
| <MAX_BATCH_SIZE> | Maximum batch size to process in parallel. |
| <HUGGINGFACE_ACCESS_TOKEN> | Hugging Face access token. |
| <THESTAGE_ACCESS_TOKEN> | TheStage token generated on the platform (Profile -> Access tokens). |
| <AUTH_TOKEN> | Token for endpoint authentication. You can set it to any random string; it must match the value used by the client. |
| <IMAGE_NAME> | The image name you pulled. |

Invocation

You can invoke the endpoint using curl as follows:

curl -X POST http://127.0.0.1:8000/v1/images/generations \
    -H "Authorization: Bearer <AUTH_TOKEN>"  \
    -H "Content-Type: application/json" \
    -H "X-Model-Name: flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>" \
    -d '{
          "prompt": "Cat eating banana",
          "seed": 12,
          "aspect_ratio": "1:1",
          "guidance_scale": 6.5,
          "num_inference_steps": 28
        }' \
    --output cat.webp -D -

Or using Python requests:

import requests

url = "http://127.0.0.1:8000/v1/images/generations"
payload = {
    "prompt": "sunset",
    "seed": 12,
    "aspect_ratio": "1:1",
    "guidance_scale": 6.5,
    "num_inference_steps": 28
}
headers = {
    "Authorization": "Bearer <AUTH_TOKEN>",
    "Content-Type": "application/json",
    "X-Model-Name": "flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>"
}

# json=payload serializes the body to JSON automatically.
response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()  # fail early on authentication or model errors

with open("sunset.webp", "wb") as f:
    f.write(response.content)

Or using OpenAI python client:

from openai import OpenAI

BASE_URL = "http://<your_ip>/v1"
API_KEY  = ""
MODEL    = "flux-1-dev-<MODEL_SIZE>-bs<MAX_BATCH_SIZE>"

client = OpenAI(
    api_key=API_KEY,
    base_url=BASE_URL,
    default_headers={"X-Model-Name": MODEL}
)

response = client.with_raw_response.images.generate(
    model=MODEL,
    prompt="Cat eating banana",
    n=1,
    extra_body={
        "seed": 111,
        "aspect_ratio": "1:1",
        "guidance_scale": 3.5,
        "num_inference_steps": 28
    },
)

with open("thestage_image.webp", "wb") as f:
    f.write(response.content)

Endpoint Parameters

Method

POST /v1/images/generations

Header Parameters

Authorization: string

Bearer token for authentication. Should match the AUTH_TOKEN set during container startup.

Content-Type: string

Must be set to application/json.

X-Model-Name: string

Specifies the model to use for generation. Format: flux-1-dev-<size>-bs<batch_size>, where <size> is one of S, M, L, XL, original and <batch_size> is the maximum batch size configured during container startup. For example, flux-1-dev-S-bs1 selects the S model served with a maximum batch size of 1.

Input Body

prompt: string

The text prompt to generate an image for.

seed: int32

Random seed for generation.

num_inference_steps: int32

Number of diffusion steps to use for generation. Higher values yield better quality but take longer. Default is 28.

aspect_ratio: string

Aspect ratio of the generated image. Supported values, mapped to (width, height) in pixels:

"1:1": (1024, 1024),
"16:9": (1280, 736),
"21:9": (1280, 544),
"3:2": (1248, 832),
"2:3": (832, 1248),
"4:3": (1184, 896),
"3:4": (896, 1184),
"5:4": (1152, 928),
"4:5": (928, 1152),
"9:16": (736, 1280),
"9:21": (544, 1280)

guidance_scale: float32

Guidance scale for classifier-free guidance. Higher values increase adherence to the prompt.
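
Putting the fields together, here is a hypothetical request body expressed as a Python dict (values are illustrative; send it as the JSON body of the POST request):

payload = {
    "prompt": "A lighthouse at dusk",  # illustrative prompt
    "seed": 42,
    "num_inference_steps": 28,
    "aspect_ratio": "16:9",            # rendered as a 1280x736 image
    "guidance_scale": 3.5,
}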


Deploy on Modal

For more details, please see the Modal deployment tutorial.

Clone the Modal serving code

git clone https://github.com/TheStageAI/ElasticModels.git
cd ElasticModels/examples/modal

Configuration of environment variables

Set your environment variables in modal_serving.py:

# modal_serving.py

ENVS = {
    "MODEL_REPO": "black-forest-labs/FLUX.1-dev",
    "MODEL_BATCH": "4",
    "THESTAGE_AUTH_TOKEN": "",
    "HUGGINGFACE_ACCESS_TOKEN": "",
    "PORT": "80",
    "PORT_HEALTH": "80",
    "HF_HOME": "/cache/huggingface",
}

Configuration of GPUs

Set your desired GPU type and autoscaling variables in modal_serving.py:

# modal_serving.py

@app.function(
    image=image,
    gpu="B200",
    min_containers=8,
    max_containers=8,
    timeout=10000,
    ephemeral_disk=600 * 1024,
    volumes={"/opt/project/.cache": HF_CACHE},
    startup_timeout=60*20
)
@modal.web_server(
    80,
    label="black-forest-labs/FLUX.1-dev-test",
    startup_timeout=60*20
)
def serve():
    pass

Run serving

modal serve modal_serving.py
