## How to use Peter/uploadtestsmallstep2

Instructions to use Peter/uploadtestsmallstep2 with libraries and local apps.

### PEFT

Load the adapter on top of its base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b")
model = PeftModel.from_pretrained(base_model, "Peter/uploadtestsmallstep2")
```
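The repository is also tagged for 4-bit precision with bitsandbytes, so the base model can be loaded quantized before attaching the adapter. A minimal sketch; the quantization settings shown are assumptions, not taken from the training configuration:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit settings; requires the bitsandbytes package.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "Peter/uploadtestsmallstep2")
```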
### Unsloth Studio

How to use Peter/uploadtestsmallstep2 with Unsloth Studio:

**Install Unsloth Studio (macOS, Linux, WSL)**

```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Peter/uploadtestsmallstep2 to start chatting
```

**Install Unsloth Studio (Windows)**

```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Peter/uploadtestsmallstep2 to start chatting
```
**Using Hugging Face Spaces for Unsloth**

```
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Peter/uploadtestsmallstep2 to start chatting
```
**Load the model with FastModel**

```python
# pip install unsloth
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="Peter/uploadtestsmallstep2",
    max_seq_length=2048,
)
```
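The returned model and tokenizer follow the standard transformers interface, so generation works as usual. A minimal sketch; the prompt is purely illustrative:

```python
# Continues from the FastModel.from_pretrained(...) call above.
inputs = tokenizer("Example prompt", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```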
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- unsloth
- generated_from_trainer
datasets:
- zeta-labs/mind2web_combined_236_18_01
base_model: unsloth/mistral-7b
model-index:
- name: uploadtestsmallstep2
  results: []
---
# uploadtestsmallstep2

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the zeta-labs/mind2web_combined_236_18_01 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3438
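For reference, the training dataset named above can be pulled from the Hub with the `datasets` library. A sketch, assuming the dataset repository is publicly accessible:

```python
from datasets import load_dataset

dataset = load_dataset("zeta-labs/mind2web_combined_236_18_01")
```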
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5
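The total train batch size follows from train_batch_size × gradient_accumulation_steps = 8 × 2 = 16. A minimal `TrainingArguments` sketch that mirrors these settings; `output_dir` is illustrative and the original training script may differ:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="uploadtestsmallstep2",      # illustrative output path
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,          # effective batch size: 8 * 2 = 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=5,
    seed=42,
)
```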
### Training results

### Framework versions

- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1