---
license: apache-2.0
language:
- en
tags:
- ColBERT
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:497901
- loss:Contrastive
base_model: google/bert_uncased_L-2_H-128_A-2
datasets:
- sentence-transformers/msmarco-bm25
pipeline_tag: sentence-similarity
---

# Model card for ColBERT v2 BERT Tiny

This is a [ColBERT](https://github.com/stanford-futuredata/ColBERT) model finetuned from [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the [msmarco-bm25](https://huggingface.co/datasets/sentence-transformers/msmarco-bm25) dataset. It maps sentences and paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.

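To illustrate the MaxSim operator mentioned above: each query token embedding is matched against its most similar document token embedding, and the per-token maxima are summed into a relevance score. The sketch below uses synthetic, L2-normalized 128-dimensional arrays in place of real model outputs, and the `maxsim_score` helper name is illustrative, not part of any library API.

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """MaxSim: sum over query tokens of the maximum similarity to any doc token.

    Both inputs are (num_tokens, dim) arrays of L2-normalized embeddings,
    so the dot products below are cosine similarities.
    """
    sims = query_vecs @ doc_vecs.T          # (num_query_tokens, num_doc_tokens)
    return float(sims.max(axis=1).sum())    # best doc match per query token, summed

# Synthetic stand-ins for the model's token embeddings (hypothetical data).
rng = np.random.default_rng(0)

def normed(num_tokens: int, dim: int = 128) -> np.ndarray:
    vecs = rng.normal(size=(num_tokens, dim))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

query = normed(4)    # e.g. a 4-token query
doc = normed(12)     # e.g. a 12-token passage

print(maxsim_score(query, doc))
```

Because the embeddings are unit-normalized, a document identical to the query scores exactly the number of query tokens (each per-token maximum is 1), which gives a natural upper bound for ranking.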
This model is primarily designed for unit tests in limited compute environments such as GitHub Actions, but it also works to an extent for basic use cases.