Abliterated using a Householder transformation to remove the refusal direction from the model's weights.
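A minimal sketch of the idea, assuming the standard abliteration setup where a unit-norm "refusal direction" `v` has already been estimated from activations. Instead of projecting that direction out, a Householder reflection `H = I - 2vvᵀ` reflects the weights across the hyperplane orthogonal to `v`. The function name and shapes here are illustrative, not the actual implementation:

```python
import numpy as np

def householder_abliterate(W: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Reflect weight matrix W across the hyperplane orthogonal to v.

    W: (d, d) weight matrix (e.g. an output projection), v: (d,) refusal
    direction. Hypothetical helper for illustration only.
    """
    v = v / np.linalg.norm(v)                      # ensure unit norm
    H = np.eye(len(v)) - 2.0 * np.outer(v, v)      # Householder matrix, orthogonal
    return H @ W                                   # component along v is negated

rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d))
v = rng.normal(size=d)
W_new = householder_abliterate(W, v)
```

After the reflection, `vᵀ W_new = -(vᵀ W)`: the refusal direction's contribution is inverted rather than merely zeroed out, while all directions orthogonal to `v` are left untouched (since `H` is orthogonal, the rest of the weight geometry is preserved).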

In testing benchmarks, the model produced no refusals on dangerous questions.

I ran a zero-shot benchmark on HellaSwag using lm-eval-harness; the model scored 0.7601 acc_norm vs. 0.7758 for the base model.
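For reference, a run like this can be reproduced with the lm-eval-harness CLI roughly as follows; `<model-id>` is a placeholder for this repository's Hugging Face id, and the `dtype` argument is an assumption based on the BF16 tensor type:

```shell
lm_eval --model hf \
  --model_args pretrained=<model-id>,dtype=bfloat16 \
  --tasks hellaswag \
  --num_fewshot 0
```

The `acc_norm` column in the resulting table is the length-normalized accuracy quoted above.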

Model size: 31B params · Tensor type: BF16 · Format: Safetensors