Instructions for using mbruton/spa_XLM-R with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mbruton/spa_XLM-R with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="mbruton/spa_XLM-R")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("mbruton/spa_XLM-R")
model = AutoModelForTokenClassification.from_pretrained("mbruton/spa_XLM-R")
```
- Notebooks
- Google Colab
- Kaggle