---
license: apache-2.0
tags:
- text-generation
- instruction-tuned
- maincoder
- gguf
- chatbot
library_name: llama.cpp
language: en
datasets:
- custom
model-index:
- name: Corelyn Leonicity Leon
  results: []
base_model:
- yourGGUF/Maincoder-1B_GGUF
---

# Corelyn Leon GGUF Model

## Specifications

- Model Name: Corelyn Leonicity Leon
- Base Name: Leon_1B
- Type: Instruct / Fine-tuned
- Architecture: Maincoder
- Size: 1B parameters
- Organization: Corelyn

## Model Overview

Corelyn Leonicity Leon is a 1-billion-parameter instruction-tuned model built on the Maincoder architecture, designed for general-purpose assistant tasks and knowledge extraction. It is a fine-tuned variant optimized for instruction-following use cases.

- Fine-tuning type: Instruct
- Base architecture: Maincoder
- Parameter count: 1B

### Suitable Applications

This model is suitable for tasks such as:

- Implementing algorithms
- Building websites
- Writing code in Python, JavaScript, Java, and other languages
- General code and text generation

## Usage

Download the model file from: [LeonCode_1B](https://huggingface.co/CorelynAI/LeonCode/blob/main/LeonCode_1B.gguf)

```python
# pip install llama-cpp-python

from llama_cpp import Llama

# Load the model (update the path to where your .gguf file is)
llm = Llama(model_path="path/to/the/file/LeonCode_1B.gguf")

# Create a chat completion
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Create a Python sorting algorithm"}]
)

# Print the generated text (the response is a dict, not an object,
# so subscript access is required)
print(response["choices"][0]["message"]["content"])
```
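`create_chat_completion` returns an OpenAI-style dictionary rather than an object with attributes, which is why the text is reached via nested subscripts. The sketch below shows that navigation on an illustrative response; the `sample` dict is a hand-written stand-in for real model output, not something this model actually produced.

```python
def extract_text(response: dict) -> str:
    """Pull the assistant's message text out of a chat-completion dict."""
    return response["choices"][0]["message"]["content"]

# Illustrative response shape (hand-written stand-in, not actual model output):
sample = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "def sort_list(xs):\n    return sorted(xs)",
            }
        }
    ]
}

print(extract_text(sample))
```

Using a small helper like this keeps the nested-key access in one place if you call the model from several spots in your code.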