Instructions for using JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic")
model = AutoModelForCausalLM.from_pretrained("JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic
```
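Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python. A minimal sketch using the openai client package (an assumption about your setup: `pip install openai`; the api_key is a placeholder, since the local server does not check it):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```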
- SGLang
How to use JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic with Docker Model Runner:
```shell
docker model run hf.co/JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic
```
Model Overview
- Model Optimizations:
- Weight quantization: FP8
- Activation quantization: FP8
- Release Date: 1/28/2025
Quantized version of deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B with weights and activations stored in the FP8 data type, ready for inference with SGLang >= 0.3 or vLLM >= 0.5.2. This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within the transformer blocks are quantized.
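As a quick sanity check on the ~50% figure, a back-of-the-envelope estimate (a sketch only; exact sizes depend on the unquantized layers such as lm_head and on the overhead of the quantization scales):

```python
# Rough checkpoint-size estimate for ~1.5e9 parameters.
params = 1.5e9
bf16_gb = params * 2 / 1e9  # 16-bit: 2 bytes per parameter
fp8_gb = params * 1 / 1e9   # 8-bit: 1 byte per parameter
print(f"BF16: ~{bf16_gb:.1f} GB, FP8: ~{fp8_gb:.1f} GB, "
      f"saving ~{100 * (1 - fp8_gb / bf16_gb):.0f}%")
```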
Deployment
Use with SGLang
```shell
python -m sglang.launch_server --model-path JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic \
  --port 30000 --host 0.0.0.0
```
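Besides the OpenAI-compatible routes shown earlier, the SGLang server also exposes a native /generate endpoint. A minimal sketch with the requests library (the sampling parameters here are illustrative, not from the model card):

```python
import requests

# Query the native SGLang /generate endpoint of the server started above.
resp = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "What is the capital of France?",
        "sampling_params": {"max_new_tokens": 32, "temperature": 0},
    },
)
print(resp.json()["text"])
```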
Creation
This model was created with llm-compressor by running the code snippet below.
Model Creation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

# 1) Load model.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# 2) Configure the quantization algorithm and scheme.
# In this case, we:
#   * quantize the weights to FP8 with per-channel scales via PTQ
#   * quantize the activations to FP8 with dynamic per-token scales
recipe = QuantizationModifier(
    targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)

# 3) Apply quantization and save in compressed-tensors format.
OUTPUT_DIR = MODEL_ID.split("/")[1] + "-FP8-Dynamic"
oneshot(
    model=model,
    recipe=recipe,
    tokenizer=tokenizer,
    output_dir=OUTPUT_DIR,
)

# Confirm generations of the quantized model look sane.
print("========== SAMPLE GENERATION ==============")
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to("cuda")
output = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(output[0]))
print("==========================================")
```
Evaluation
TBA
Play Retail Mage
Retail Mage (Steam) is an immersive sim that uses online LLM inference in almost every gameplay feature!
Reviews
“A true to life experience detailing how customer service really works.” 10/10 – kpolupo
“I enjoyed how many things were flammable in the store.” 5/5 – mr_srsbsns
“I've only known that talking little crow plushie in MageMart for a day and a half but if anything happened to him I would petrify everyone in this store and then myself.” 7/7 – neondenki
Model tree for JamAndTeaStudios/DeepSeek-R1-Distill-Qwen-1.5B-FP8-Dynamic
Base model
deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B