Instructions for using bertin-project/bertin-base-random with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use bertin-project/bertin-base-random with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="bertin-project/bertin-base-random")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-base-random")
model = AutoModelForMaskedLM.from_pretrained("bertin-project/bertin-base-random")
```
- Notebooks
- Google Colab
- Kaggle
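The snippet above only loads the model. As a minimal sketch of actually running it, the pipeline can fill in a masked token; since BERTIN is a Spanish RoBERTa-style model, its mask token is `<mask>`. The example sentence here is an assumption for illustration, not taken from the model card.

```python
# Minimal fill-mask usage sketch for bertin-project/bertin-base-random.
# Assumes the model can be downloaded from the Hugging Face Hub.
from transformers import pipeline

pipe = pipeline("fill-mask", model="bertin-project/bertin-base-random")

# RoBERTa-style models use "<mask>" as the mask token.
# This Spanish example sentence is illustrative only.
results = pipe("Me gusta mucho la comida <mask>.")

# Each result carries the filled-in token and its score, best first.
for r in results:
    print(r["token_str"], round(r["score"], 4))
```

The pipeline returns a list of candidate completions sorted by score; the same tokenizer and model objects loaded directly via `AutoTokenizer`/`AutoModelForMaskedLM` can be used for lower-level control over inputs and logits.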