## Use this model

Instructions to use solidrust/WizardLM-2-7B-AWQ with libraries and local apps.

### Transformers

How to use solidrust/WizardLM-2-7B-AWQ with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="solidrust/WizardLM-2-7B-AWQ")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("solidrust/WizardLM-2-7B-AWQ")
model = AutoModelForCausalLM.from_pretrained("solidrust/WizardLM-2-7B-AWQ")
```
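Because this model uses the ChatML prompt format (see the template at the end of this card), the pipeline can also be called with a list of chat messages. A minimal sketch, assuming the repository's tokenizer config ships a ChatML chat template; the example prompt strings are illustrative:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="solidrust/WizardLM-2-7B-AWQ")

# With message input, recent transformers versions apply the tokenizer's
# chat template (ChatML here) automatically.
messages = [
    {"role": "system", "content": "You are WizardLM, incarnated as a powerful AI."},
    {"role": "user", "content": "Explain AWQ quantization in one sentence."},
]
out = pipe(messages, max_new_tokens=256)
# generated_text holds the whole conversation; the last entry is the reply.
print(out[0]["generated_text"][-1]["content"])
```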
### vLLM

How to use solidrust/WizardLM-2-7B-AWQ with vLLM. Install from pip and serve the model:

```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "solidrust/WizardLM-2-7B-AWQ"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "solidrust/WizardLM-2-7B-AWQ",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
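The server speaks the OpenAI API, so any OpenAI-compatible client works in place of curl. A minimal sketch with the official `openai` Python package; the `api_key` value is a placeholder, since the local server does not check it:

```python
from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="solidrust/WizardLM-2-7B-AWQ",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```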
### SGLang

How to use solidrust/WizardLM-2-7B-AWQ with SGLang. Install from pip and serve the model:

```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "solidrust/WizardLM-2-7B-AWQ" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "solidrust/WizardLM-2-7B-AWQ",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Or use the Docker image:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "solidrust/WizardLM-2-7B-AWQ" \
    --host 0.0.0.0 \
    --port 30000
```
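The SGLang server is likewise OpenAI-compatible and also exposes the chat endpoint, which lets the server apply the model's ChatML template for you. A minimal sketch, assuming the `openai` package is installed and the server is running on the port chosen above:

```python
from openai import OpenAI

# SGLang serves an OpenAI-compatible API; the key is a placeholder.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="solidrust/WizardLM-2-7B-AWQ",
    messages=[
        {"role": "system", "content": "You are WizardLM, incarnated as a powerful AI."},
        {"role": "user", "content": "Tell me a short story."},
    ],
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].message.content)
```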
### Docker Model Runner

How to use solidrust/WizardLM-2-7B-AWQ with Docker Model Runner:

```bash
docker model run hf.co/solidrust/WizardLM-2-7B-AWQ
```
---
license: apache-2.0
tags:
- transformers
- safetensors
- mistral
- finetuned
- 4-bit
- AWQ
- text-generation
- text-generation-inference
- autotrain_compatible
- endpoints_compatible
- chatml
- arxiv:2304.12244
- arxiv:2306.08568
- arxiv:2308.09583
model_creator: microsoft
model_name: WizardLM-2-7B
base_model: microsoft/WizardLM-2-7B
inference: false
pipeline_tag: text-generation
quantized_by: Suparious
---
# microsoft/WizardLM-2-7B AWQ

- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B)
## Model Summary

We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
## How to use

### Install the necessary packages

```bash
pip install --upgrade accelerate autoawq autoawq-kernels transformers
```
### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/WizardLM-2-7B-AWQ"
system_message = "You are WizardLM, incarnated as a powerful AI."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
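If the repository's tokenizer config ships the ChatML template shown at the end of this card, the same prompt can also be built with `tokenizer.apply_chat_template` instead of a hand-written template string. A minimal sketch, reusing `model`, `tokenizer`, `streamer`, `system_message`, and `prompt` from the example above:

```python
# Build the ChatML prompt from structured messages instead of a format string.
messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": prompt},
]
# add_generation_prompt appends the opening assistant tag so the model
# continues with its reply.
tokens = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True,
                                       return_tensors='pt').cuda()

generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```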
### About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.
It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, for support of all model types (see the sketch after this list)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 or later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
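As a concrete illustration of the vLLM support mentioned above, here is a minimal offline-inference sketch that loads the checkpoint without running a server; the explicit `quantization="awq"` argument is a belt-and-braces choice, as recent vLLM versions detect AWQ checkpoints from the repository config:

```python
from vllm import LLM, SamplingParams

# Load the AWQ checkpoint directly into vLLM's offline engine.
llm = LLM(model="solidrust/WizardLM-2-7B-AWQ", quantization="awq")

params = SamplingParams(temperature=0.5, max_tokens=512)
outputs = llm.generate(["Once upon a time,"], params)
print(outputs[0].outputs[0].text)
```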
## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
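To confirm that the tokenizer reproduces this template, you can render a conversation to a string rather than to token IDs. A minimal sketch, assuming the repository's tokenizer config includes the ChatML chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/WizardLM-2-7B-AWQ")

messages = [
    {"role": "system", "content": "{system_message}"},
    {"role": "user", "content": "{prompt}"},
]
# tokenize=False returns the rendered prompt string instead of token IDs.
text = tokenizer.apply_chat_template(messages,
                                     tokenize=False,
                                     add_generation_prompt=True)
print(text)  # Should match the ChatML template above.
```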