Instructions for using Kwaipilot/KAT-Dev-72B-Exp-FP8 with libraries, inference providers, notebooks, and local apps. The sections below show how to get started.
- Libraries
- Transformers
How to use Kwaipilot/KAT-Dev-72B-Exp-FP8 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Kwaipilot/KAT-Dev-72B-Exp-FP8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Kwaipilot/KAT-Dev-72B-Exp-FP8")
model = AutoModelForCausalLM.from_pretrained("Kwaipilot/KAT-Dev-72B-Exp-FP8")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Kwaipilot/KAT-Dev-72B-Exp-FP8 with vLLM:
Install from pip and serve the model:

```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Kwaipilot/KAT-Dev-72B-Exp-FP8"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Kwaipilot/KAT-Dev-72B-Exp-FP8",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
- SGLang
How to use Kwaipilot/KAT-Dev-72B-Exp-FP8 with SGLang:
Install from pip and serve the model:

```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Kwaipilot/KAT-Dev-72B-Exp-FP8" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Kwaipilot/KAT-Dev-72B-Exp-FP8",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images:
```bash
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "Kwaipilot/KAT-Dev-72B-Exp-FP8" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Kwaipilot/KAT-Dev-72B-Exp-FP8",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use Kwaipilot/KAT-Dev-72B-Exp-FP8 with Docker Model Runner:
```bash
docker model run hf.co/Kwaipilot/KAT-Dev-72B-Exp-FP8
```
This repository contains an FP8-quantized version of the Kwaipilot/KAT-Dev-72B-Exp model. The FP8 version achieves 68.5% on SWE-Bench Verified.
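Even at FP8, the 72B weights occupy roughly 72 GB, so serving typically spans multiple GPUs. A minimal sketch, assuming vLLM on a four-GPU node (the tensor-parallel degree is an assumption, not a stated requirement):

```bash
# The FP8 checkpoint loads as-is; adjust --tensor-parallel-size to your GPU count.
vllm serve "Kwaipilot/KAT-Dev-72B-Exp-FP8" --tensor-parallel-size 4
```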
News
🔥 We’re thrilled to announce the release of KAT-Dev-72B-Exp, our latest and most powerful model yet!
🔥 You can now try our strongest proprietary coder model KAT-Coder directly on the StreamLake platform for free.
Highlights
KAT-Dev-72B-Exp is an open-source 72B-parameter model for software engineering tasks.
On SWE-Bench Verified, KAT-Dev-72B-Exp achieves 74.6% accuracy ⚡ when evaluated strictly with the SWE-agent scaffold.
KAT-Dev-72B-Exp is the experimental reinforcement-learning version of the KAT-Coder model. Through this open-source release, we aim to reveal the technical innovations behind KAT-Coder’s large-scale RL to developers and researchers.
Introduction
We rewrote the attention kernel and redesigned the training engine around shared-prefix trajectories to achieve highly efficient RL training, especially for scaffolds that leverage context management.
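The kernel and engine themselves are not part of this release. As a rough sketch of the idea they exploit (helper names are ours, not from the codebase), the snippet below factors a batch of rollout token sequences into one shared prompt prefix plus per-trajectory suffixes, so the prefix only has to be processed once:

```python
def longest_common_prefix(a: list[int], b: list[int]) -> int:
    """Length of the common leading run of token ids in a and b."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def split_shared_prefix(trajectories: list[list[int]]):
    """Factor rollouts from the same prompt into (shared prefix, suffixes).

    The shared prefix can be encoded once and its KV cache reused by every
    trajectory; only the suffixes need per-trajectory attention compute.
    """
    shared = trajectories[0]
    for t in trajectories[1:]:
        shared = shared[:longest_common_prefix(shared, t)]
    return shared, [t[len(shared):] for t in trajectories]
```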
Furthermore, to prevent exploration collapse observed in RL training, we reshaped advantage distribution based on pass rates: amplifying the advantage scale of highly exploratory groups while reducing that of low-exploration ones.
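The exact reshaping schedule is not published. A minimal sketch of one plausible reading, assuming GRPO-style groups of rollouts per prompt, binary pass/fail rewards, and placeholder band edges and scale factors:

```python
import torch

def reshape_advantages(advantages: torch.Tensor,
                       rewards: torch.Tensor,
                       group_ids: torch.Tensor,
                       low: float = 0.2, high: float = 0.8,
                       amplify: float = 1.2, dampen: float = 0.8) -> torch.Tensor:
    """Rescale advantages group-by-group based on each group's pass rate.

    All thresholds and scale factors are illustrative placeholders,
    not the values used in training.
    """
    scaled = advantages.clone()
    for g in group_ids.unique():
        mask = group_ids == g
        # Pass rate: fraction of rollouts in this group that succeeded.
        pass_rate = rewards[mask].float().mean().item()
        # Intermediate pass rates are read here as "highly exploratory";
        # near-0 or near-1 groups carry little exploration signal.
        scale = amplify if low < pass_rate < high else dampen
        scaled[mask] = advantages[mask] * scale
    return scaled
```

Here `group_ids` marks which trajectories were sampled from the same prompt and `rewards` holds their binary pass/fail outcomes, as in GRPO-style group rollouts.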
Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Kwaipilot/KAT-Dev-72B-Exp-FP8"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=65536
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
SWE-agent Evaluation Parameters
- temperature: 0.6
- max_turns: 150
- history_processors.n: 100

For the full settings, please refer to inference.yaml.
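The actual inference.yaml ships with the evaluation scaffold and is not reproduced here; as a rough, hypothetical illustration of how these three values might sit in a SWE-agent-style config (key paths are assumptions, not copied from the real file):

```yaml
# Hypothetical sketch only; key names and nesting are assumed,
# not taken from the released inference.yaml.
agent:
  model:
    temperature: 0.6        # sampling temperature used for evaluation
  max_turns: 150            # maximum agent-environment interaction turns
  history_processors:
    - n: 100                # keep only the most recent 100 history entries
```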
