How to use valurank/distilbert-quality with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="valurank/distilbert-quality")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("valurank/distilbert-quality")
model = AutoModelForSequenceClassification.from_pretrained("valurank/distilbert-quality")
```
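As a minimal usage sketch, the pipeline can be applied to a single article; the sample text below is invented, and the exact label strings returned depend on the model's config:

```python
from transformers import pipeline

# Load the quality classifier (downloads weights on first use)
pipe = pipeline("text-classification", model="valurank/distilbert-quality")

# Classify one article (hypothetical sample text)
result = pipe(
    "The city council approved the new budget on Tuesday after a lengthy debate."
)
print(result)  # a list with one {"label": ..., "score": ...} dict
```

By default the pipeline returns only the top label; passing `top_k=None` returns scores for all categories.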
DistilBERT fine-tuned for news classification
This model is based on distilbert-base-uncased pretrained weights, with a classification head fine-tuned to classify news articles into three quality categories: bad, medium, and good.
Training data
The model was fine-tuned on news-small, a dataset of 300 news articles manually annotated by Alex.
Inputs
Like its base model, this model accepts inputs of at most 512 tokens; longer articles must be truncated before inference.