Tags: Text Generation, Transformers, TensorBoard, Safetensors, opt, trl, sft, Generated from Trainer, text-generation-inference
Instructions for using gnumanth/tmp_trainer with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use gnumanth/tmp_trainer with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="gnumanth/tmp_trainer")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gnumanth/tmp_trainer")
model = AutoModelForCausalLM.from_pretrained("gnumanth/tmp_trainer")
```
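Neither snippet above generates text by itself. Continuing from that snippet (reusing the `pipe`, `tokenizer`, and `model` objects loaded there), a minimal generation sketch could look like this; the prompt and sampling settings are illustrative, not taken from the model card:

```python
# Generate with the high-level pipeline (prompt and settings are illustrative)
print(pipe("Once upon a time,", max_new_tokens=50, do_sample=True)[0]["generated_text"])

# Or generate with the directly loaded model and tokenizer
inputs = tokenizer("Once upon a time,", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```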
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use gnumanth/tmp_trainer with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "gnumanth/tmp_trainer"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "gnumanth/tmp_trainer",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
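Because the vLLM server exposes an OpenAI-compatible API, the same request can be made from Python. A minimal sketch, assuming the `openai` package is installed (`pip install openai`); the `api_key` value is a placeholder, since vLLM does not check it by default:

```python
# Call the vLLM server started above via its OpenAI-compatible completions endpoint
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key
response = client.completions.create(
    model="gnumanth/tmp_trainer",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```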
Use Docker:

```shell
docker model run hf.co/gnumanth/tmp_trainer
```
- SGLang
How to use gnumanth/tmp_trainer with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "gnumanth/tmp_trainer" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "gnumanth/tmp_trainer",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
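The same request can be issued from Python. A small sketch that mirrors the curl call above, assuming the `requests` package is installed:

```python
# Send the completion request from the curl example to the SGLang server
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "gnumanth/tmp_trainer",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```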
Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "gnumanth/tmp_trainer" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "gnumanth/tmp_trainer",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use gnumanth/tmp_trainer with Docker Model Runner:
```shell
docker model run hf.co/gnumanth/tmp_trainer
```
---
license: other
base_model: facebook/opt-125m
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: tmp_trainer
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer

This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unknown dataset.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
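The trl and sft tags suggest TRL's SFTTrainer was used, but only the hyperparameters listed above are grounded in this card. The sketch below maps them onto `transformers.TrainingArguments`; the `output_dir`, the dataset, and the commented SFTTrainer call are assumptions for illustration only:

```python
# Sketch: the listed hyperparameters expressed as transformers.TrainingArguments.
# output_dir and the commented SFTTrainer usage are assumptions, not taken from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tmp_trainer",        # assumed; chosen to match the model name
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                  # "Adam with betas=(0.9,0.999) and epsilon=1e-08"
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)

# Assumed usage with TRL (the training dataset is not specified in this card):
# from trl import SFTTrainer
# trainer = SFTTrainer(model="facebook/opt-125m", args=training_args, train_dataset=my_dataset)
# trainer.train()
```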
### Training results
### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
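A quick way to check whether a local environment matches these versions is to print each library's reported version; a minimal sketch:

```python
# Print installed versions next to the versions listed above
import datasets
import tokenizers
import torch
import transformers

expected = {
    "Transformers": (transformers, "4.38.1"),
    "Pytorch": (torch, "2.1.0+cu121"),
    "Datasets": (datasets, "2.17.1"),
    "Tokenizers": (tokenizers, "0.15.2"),
}
for name, (module, version) in expected.items():
    print(f"{name}: installed {module.__version__}, card lists {version}")
```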