How to use CreativeLang/metaphor_detection_roberta_seq with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="CreativeLang/metaphor_detection_roberta_seq")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("CreativeLang/metaphor_detection_roberta_seq")
model = AutoModelForTokenClassification.from_pretrained("CreativeLang/metaphor_detection_roberta_seq")
```
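The pipeline returns one dict per token with fields such as `word`, `entity`, and `score`. A minimal post-processing sketch for collecting metaphor-tagged tokens; note the sample output and the label names (`LABEL_0`/`LABEL_1`) are assumptions here, so check `model.config.id2label` for the actual mapping:

```python
# Hypothetical pipeline output for "He attacked every weak point in my argument."
# (label names LABEL_0/LABEL_1 are an assumption; check model.config.id2label)
sample_output = [
    {"word": "He", "entity": "LABEL_0", "score": 0.99},
    {"word": "attacked", "entity": "LABEL_1", "score": 0.97},
    {"word": "every", "entity": "LABEL_0", "score": 0.98},
    {"word": "weak", "entity": "LABEL_1", "score": 0.91},
    {"word": "point", "entity": "LABEL_0", "score": 0.95},
]

def metaphor_tokens(token_preds, metaphor_label="LABEL_1", threshold=0.5):
    """Collect tokens tagged with the metaphor label above a confidence threshold."""
    return [t["word"] for t in token_preds
            if t["entity"] == metaphor_label and t["score"] >= threshold]

print(metaphor_tokens(sample_output))  # -> ['attacked', 'weak']
```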
Creative Language Toolkit (CLTK) Metadata
This model is an easy-to-use metaphor detection baseline, built by fine-tuning roberta-base on the CreativeLang/vua20_metaphor dataset.
To use this model, please use inference.py in the FrameBERT repo. Just run:

```shell
python inference.py CreativeLang/metaphor_detection_roberta_seq
```
Check out inference.py to learn how to apply the model on your own data.
For the details of this model and the dataset used, we refer you to the release paper.
| Metric | Value |
|---|---|
| eval_loss | 0.2656 |
| eval_accuracy_score | 0.9142 |
| eval_precision | 0.9142 |
| eval_recall | 0.9142 |
| eval_f1 | 0.9142 |
| eval_f1_macro | 0.7315 |
| eval_runtime | 8.9802 |
| eval_samples_per_second | 411.7960 |
| eval_steps_per_second | 51.5580 |
| epoch | 3.0000 |
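The gap between eval_f1 (0.9142) and eval_f1_macro (0.7315) is what class imbalance looks like in token-level metaphor detection: literal tokens far outnumber metaphorical ones, so the minority metaphor class pulls the macro average down even when overall accuracy is high. A toy illustration in plain Python (the numbers are illustrative, not taken from the evaluation set):

```python
def per_class_f1(y_true, y_pred, label):
    """F1 for a single class from parallel label lists."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# 0 = literal (majority class), 1 = metaphor (minority class) -- toy labels
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
macro_f1 = (per_class_f1(y_true, y_pred, 0) + per_class_f1(y_true, y_pred, 1)) / 2

# One error on each class costs the small metaphor class far more,
# so macro F1 (0.69) sits well below accuracy (0.80).
print(f"accuracy={accuracy:.2f}  macro_f1={macro_f1:.2f}")
```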
If you find this work helpful, please cite:
```bibtex
@article{Li2023FrameBERTCM,
  title={FrameBERT: Conceptual Metaphor Detection with Frame Embedding Learning},
  author={Yucheng Li and Shunyu Wang and Chenghua Lin and Frank Guerin and Lo{\"i}c Barrault},
  journal={ArXiv},
  year={2023},
  volume={abs/2302.04834}
}
```
If you have any queries, please open an issue or contact us by email.