Exl2 version of maywell/PiVoT-MoE
Branches:
- `main` : 2.4bpw h8
- `3bh8` : 3bpw h8
- `4bh8` : 4bpw h8
- `6bh8` : 6bpw h8
- `8bh8` : 8bpw h8
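As a rough guide, the on-disk weight size of each branch scales linearly with its bits per weight (bpw). A minimal sketch of that estimate — the 36e9 parameter count used in the example is a hypothetical illustration, not a figure published for this model:

```python
def exl2_size_gb(n_params: float, bpw: float) -> float:
    """Approximate quantized weight size in GB: n_params weights
    stored at bpw bits each, 8 bits per byte, 1e9 bytes per GB."""
    return n_params * bpw / 8 / 1e9

# Hypothetical 36e9-parameter model at each branch's bpw.
for branch, bpw in [("main", 2.4), ("3bh8", 3.0), ("4bh8", 4.0),
                    ("6bh8", 6.0), ("8bh8", 8.0)]:
    print(f"{branch}: ~{exl2_size_gb(36e9, bpw):.1f} GB")
```

This covers weights only; actual repository size also includes the tokenizer, config files, and quantization metadata.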
Calibration dataset: The Pile (`0007.parquet`).
Quantization settings:

```sh
python convert.py -i models/maywell_PiVoT-MoE -o PiVoT-MoE-temp -cf PiVoT-MoE-8bpw-h8-exl2 -c 0007.parquet -l 8192 -b 8 -hb 8 -ml 8192
python convert.py -i models/maywell_PiVoT-MoE -o PiVoT-MoE-temp2 -cf PiVoT-MoE-6bpw-h8-exl2 -c 0007.parquet -l 8192 -b 6 -hb 8 -m PiVoT-MoE-temp/measurement.json -ml 8192
python convert.py -i models/maywell_PiVoT-MoE -o PiVoT-MoE-temp3 -cf PiVoT-MoE-4bpw-h8-exl2 -c 0007.parquet -l 8192 -b 4 -hb 8 -m PiVoT-MoE-temp/measurement.json -ml 8192
python convert.py -i models/maywell_PiVoT-MoE -o PiVoT-MoE-temp4 -cf PiVoT-MoE-3bpw-h8-exl2 -c 0007.parquet -l 8192 -b 3 -hb 8 -m PiVoT-MoE-temp/measurement.json -ml 8192
python convert.py -i models/maywell_PiVoT-MoE -o PiVoT-MoE-temp5 -cf PiVoT-MoE-2.4bpw-h8-exl2 -c 0007.parquet -l 8192 -b 2.4 -hb 8 -m PiVoT-MoE-temp/measurement.json -ml 8192
```
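The commands above follow a fixed pattern: the first (8bpw) run produces `measurement.json`, and every later run reuses it via `-m` to skip the measurement pass. A small sketch that reproduces these command lines (paths and flags copied from the commands above; `convert_cmd` is an illustrative helper name, not part of exllamav2):

```python
def convert_cmd(bpw: float, out_dir: str, reuse_measurement: bool = True) -> str:
    """Build one exllamav2 convert.py command line, mirroring the
    settings used above: 8-bit head (-hb 8), 8192-token calibration
    rows (-l/-ml 8192), The Pile shard 0007.parquet as calibration data."""
    bpw_str = f"{bpw:g}"  # renders 8 -> "8", 2.4 -> "2.4"
    cmd = (
        f"python convert.py -i models/maywell_PiVoT-MoE -o {out_dir} "
        f"-cf PiVoT-MoE-{bpw_str}bpw-h8-exl2 -c 0007.parquet "
        f"-l 8192 -b {bpw_str} -hb 8"
    )
    if reuse_measurement:
        # Later runs reuse the measurement pass from the first (8bpw) run.
        cmd += " -m PiVoT-MoE-temp/measurement.json"
    return cmd + " -ml 8192"

print(convert_cmd(8, "PiVoT-MoE-temp", reuse_measurement=False))
print(convert_cmd(2.4, "PiVoT-MoE-temp5"))
```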
Below this line is the original README.
PiVoT-MoE
Model Description
PiVoT-MoE is an advanced AI model specifically designed for roleplaying purposes. It has been trained using a combination of four 10.7B-sized experts, each with its own specialized characteristics, all fine-tuned to bring a unique and diverse roleplaying experience.
The Mixture of Experts (MoE) technique is utilized in this model, allowing the experts to work together synergistically, resulting in a more cohesive and natural conversation flow. The MoE architecture allows for a higher level of flexibility and adaptability, enabling PiVoT-MoE to handle a wide variety of roleplaying scenarios and characters.
Based on the PiVoT-10.7B-Mistral-v0.2-RP model, PiVoT-MoE takes it a step further with the incorporation of the MoE technique. This means that not only does the model have an expansive knowledge base, but it also has the ability to mix and match its expertise to better suit the specific roleplaying scenario.
Prompt Template - Alpaca (ChatML works)
{system}
### Instruction:
{instruction}
### Response:
{response}
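For scripting, the Alpaca template above can be filled programmatically. A minimal helper sketch (the function name and example strings are illustrative), which stops at `### Response:` so the model generates the reply:

```python
def alpaca_prompt(system: str, instruction: str) -> str:
    """Fill the Alpaca template above for a single turn, leaving the
    response section empty for the model to complete."""
    return f"{system}\n### Instruction:\n{instruction}\n### Response:\n"

prompt = alpaca_prompt(
    "You are a helpful roleplaying assistant.",
    "Introduce yourself in character.",
)
print(prompt)
```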