gorilla-llm/gorilla-falcon-7b-hf-v0-gguf
by Gorilla LLM (UC Berkeley)
GGUF
No model card provided.
Downloads last month: 115
Format: GGUF
Model size: 7B params
Architecture: falcon
Quantized files:
- Q2_K (2-bit): 3.86 GB
- Q3_K_S (3-bit): 4.13 GB
- Q3_K_M (3-bit): 4.37 GB
- Q3_K_L (3-bit): 4.56 GB
- Q4_K_S (4-bit): 4.75 GB
- Q4_K_M (4-bit): 4.98 GB
- Q5_K_S (5-bit): 5.34 GB
- Q5_K_M (5-bit): 5.73 GB
- Q6_K (6-bit): 7.03 GB
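The file sizes above give a rough way to compare quantization levels when deciding which file fits your hardware. A minimal sketch (not part of the model card, and assuming the reported 7B parameter count and decimal gigabytes) that estimates the average bits stored per weight for each file:

```python
# Hypothetical helper: estimate bits-per-weight for each GGUF quantization
# listed above. Assumes ~7e9 parameters (the page reports "7B params";
# Falcon-7B's exact count is slightly higher) and decimal GB file sizes.
QUANT_SIZES_GB = {
    "Q2_K": 3.86, "Q3_K_S": 4.13, "Q3_K_M": 4.37, "Q3_K_L": 4.56,
    "Q4_K_S": 4.75, "Q4_K_M": 4.98, "Q5_K_S": 5.34, "Q5_K_M": 5.73,
    "Q6_K": 7.03,
}
N_PARAMS = 7e9  # approximate

def bits_per_weight(size_gb: float, n_params: float = N_PARAMS) -> float:
    """Average bits per parameter: file bytes * 8 / parameter count."""
    return size_gb * 1e9 * 8 / n_params

if __name__ == "__main__":
    for name, gb in QUANT_SIZES_GB.items():
        print(f"{name}: {gb:.2f} GB ~ {bits_per_weight(gb):.2f} bits/weight")
```

Note that the estimates come out above the nominal bit width in each file's label; k-quant formats mix block types and store per-block scales, and the file also contains non-quantized tensors and metadata, so some overhead is expected.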
Inference Providers: this model isn't deployed by any Inference Provider.
Included in the Gorilla collection (10 items, updated Apr 5, 2024).