Instructions to use liswei/Taiwan-ELM-270M-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use liswei/Taiwan-ELM-270M-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="liswei/Taiwan-ELM-270M-Instruct", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("liswei/Taiwan-ELM-270M-Instruct", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use liswei/Taiwan-ELM-270M-Instruct with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "liswei/Taiwan-ELM-270M-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "liswei/Taiwan-ELM-270M-Instruct",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
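The same OpenAI-compatible endpoint can be called from Python. Below is a minimal sketch that builds the request with the standard library only; the URL assumes the default `vllm serve` port (8000) on localhost:

```python
import json
from urllib import request

# Chat-completions payload in the OpenAI-compatible format,
# mirroring the curl example above.
payload = {
    "model": "liswei/Taiwan-ELM-270M-Instruct",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
}

req = request.Request(
    "http://localhost:8000/v1/chat/completions",  # default vLLM serve port (assumption)
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the vLLM server is running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request shape works against the SGLang server below; only the port changes.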
- SGLang
How to use liswei/Taiwan-ELM-270M-Instruct with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "liswei/Taiwan-ELM-270M-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "liswei/Taiwan-ELM-270M-Instruct",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "liswei/Taiwan-ELM-270M-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "liswei/Taiwan-ELM-270M-Instruct",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use liswei/Taiwan-ELM-270M-Instruct with Docker Model Runner:
```shell
docker model run hf.co/liswei/Taiwan-ELM-270M-Instruct
```
Efficient LLM for Taiwan
Taiwan ELM
Taiwan ELM is a family of efficient LLMs for Taiwan based on apple/OpenELM. The project aims to provide an efficient model for researchers without access to large-scale computing resources.
The model is trained using a custom fork of LLaMA-Factory on 2B Traditional Chinese tokens and 500K instruction samples. We will extend the model to larger datasets and different base models if there is sufficient demand.
What is being released?
We release both pre-trained base models and instruction-tuned variants with 270M and 1.1B parameters. Along with the models, we also release the datasets used to train the base and instruction-tuned models.
List of released models:
List of released datasets:
Usage Examples
We adapt the LLaMA2 template:

```
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message }} [/INST]
```
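As a sketch, the template above can be filled in with a small helper function (the function name is illustrative, not part of the released code):

```python
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Format a single-turn prompt using the LLaMA2 chat template."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_llama2_prompt("You are a helpful assistant.", "Who are you?")
print(prompt)
```

In practice the tokenizer's built-in chat template (if one is bundled with the checkpoint) should take precedence over hand-rolled formatting.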
The model can be loaded via AutoModelForCausalLM with trust_remote_code=True:

```python
from transformers import AutoModelForCausalLM

taiwanelm_270m = AutoModelForCausalLM.from_pretrained("liswei/Taiwan-ELM-270M", trust_remote_code=True)
```
We also support additional generation methods and speculative generation; please see OpenELM#usage for reference.
Model tree for liswei/Taiwan-ELM-270M-Instruct

- Base model: apple/OpenELM-270M