How to use upstage/llama-30b-instruct with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="upstage/llama-30b-instruct")
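
For instance, a minimal generation call through the pipeline might look like this. The example prompt follows the "### User:/### Assistant:" template shown later in this card, and the sampling settings are illustrative:

# Hypothetical prompt and sampling settings, for illustration only
prompt = "### User:\nWhat is the capital of France?\n\n### Assistant:\n"
result = pipe(prompt, max_new_tokens=64, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])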

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("upstage/llama-30b-instruct")
model = AutoModelForCausalLM.from_pretrained("upstage/llama-30b-instruct")

How to use upstage/llama-30b-instruct with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "upstage/llama-30b-instruct"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "upstage/llama-30b-instruct",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
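
Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python. A minimal sketch, assuming the openai Python package (v1 or later) is installed:

from openai import OpenAI

# Point the client at the local vLLM server; no real API key is needed
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="upstage/llama-30b-instruct",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)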
How to use upstage/llama-30b-instruct with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "upstage/llama-30b-instruct" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "upstage/llama-30b-instruct",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Alternatively, run the SGLang server via Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "upstage/llama-30b-instruct" \
--host 0.0.0.0 \
--port 30000
# Call the server using the same curl command as shown above.
How to use upstage/llama-30b-instruct with Docker Model Runner:
docker model run hf.co/upstage/llama-30b-instruct
The model uses the following prompt template:

### System:
{System}

### User:
{User}

### Assistant:
{Assistant}
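
A minimal sketch of filling the template in Python; the system and user messages here are illustrative, and the exact spacing between turns is an assumption:

# Hypothetical messages, for illustration only
system = "You are a helpful assistant."
user = "Why might a healthy person visit a hospital?"
prompt = f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"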
Thanks to the rope_scaling option, the model can handle inputs longer than its original training context:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("upstage/llama-30b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "upstage/llama-30b-instruct",
    device_map="auto",
    torch_dtype=torch.float16,
    load_in_8bit=True,
    rope_scaling={"type": "dynamic", "factor": 2},  # allows handling of longer inputs
)

prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
inputs.pop("token_type_ids", None)  # drop token_type_ids if the tokenizer returns them

# Stream tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=2048)  # generate() expects a finite integer cap
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
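
Note that with "type": "dynamic", transformers rescales the rotary position embeddings on the fly as the input grows, so a factor of 2 lets the model accept inputs up to roughly twice the context length it was trained with; this is a general property of the option, and quality on very long inputs is not guaranteed.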
We evaluated the model on four benchmark datasets: ARC-Challenge, HellaSwag, MMLU, and TruthfulQA, using the lm-evaluation-harness repository, specifically commit b281b0921b636bc36ad05c0b0b0763bd6dd43463.

| Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | MT_Bench |
|---|---|---|---|---|---|---|
| Llama-2-70b-instruct-v2 (Ours, Open LLM Leaderboard) | 73 | 71.1 | 87.9 | 70.6 | 62.2 | 7.44063 |
| Llama-2-70b-instruct (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | 69.8 | 61 | 7.24375 |
| llama-65b-instruct (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | |
| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | |
| llama-30b-instruct-2048 (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | |
| llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | |
| llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | |
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git

# change to the repository directory
cd lm-evaluation-harness

# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
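
From there, an evaluation run might look like the following. This is a sketch under the assumption that the harness at that commit uses the main.py entry point with the hf-causal model type; flags and task names may differ across harness versions, and the 25-shot setting shown matches the Open LLM Leaderboard convention for ARC-Challenge:

# Hypothetical invocation; verify flags against the checked-out harness version
python main.py \
    --model hf-causal \
    --model_args pretrained=upstage/llama-30b-instruct \
    --tasks arc_challenge \
    --num_fewshot 25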