Tags: Text Generation · fastText · Latin · wikilangs · nlp · tokenizer · embeddings · n-gram · markov · wikipedia · feature-extraction · sentence-similarity · tokenization · n-grams · markov-chain · text-mining · babelvec · vocabulous · vocabulary · monolingual · family-romance_other
Instructions to use wikilangs/la with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- fastText
How to use wikilangs/la with fastText:
```python
from huggingface_hub import hf_hub_download
import fasttext

# Download model.bin from the Hub and load it with fastText
model = fasttext.load_model(hf_hub_download("wikilangs/la", "model.bin"))
```

- Notebooks
- Google Colab
- Kaggle
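Once the model is loaded, its word and sentence vectors can be compared with cosine similarity, the usual approach behind the model's sentence-similarity tag. The sketch below is illustrative: the stand-in vectors replace what `model.get_word_vector(...)` or `model.get_sentence_vector(...)` would return for Latin text, and the example sentences are not taken from the model card.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In practice the vectors would come from the loaded model, e.g.:
#   v1 = model.get_sentence_vector("gallia est omnis divisa in partes tres")
#   v2 = model.get_sentence_vector("alea iacta est")
# Stand-in vectors for illustration:
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([1.0, 1.0, 0.0])
print(round(cosine_similarity(v1, v2), 3))  # → 0.5
```

Scores near 1.0 indicate semantically similar sentences; scores near 0.0 indicate unrelated ones.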