Instructions for using WithinUsAI/WithIn-Us-Coder-4B.gguf with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use WithinUsAI/WithIn-Us-Coder-4B.gguf with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="WithinUsAI/WithIn-Us-Coder-4B.gguf",
    filename="WithIn-Us-Coder-4B.Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
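`create_chat_completion` returns an OpenAI-style response dict; a minimal sketch for pulling the assistant's reply out of it, reusing the `llm` object loaded above (the prompt is just an illustration):

```python
# The reply text lives under choices[0].message.content in the response dict
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python one-liner that reverses a string."}]
)
print(response["choices"][0]["message"]["content"])
```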
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use WithinUsAI/WithIn-Us-Coder-4B.gguf with llama.cpp:
Install with Homebrew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
Install with WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
Use a pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
Build from source
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
Use Docker
```bash
docker model run hf.co/WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
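However it was installed, `llama-server` exposes an OpenAI-compatible API, by default on port 8080. A minimal sketch for calling it from Python with `requests`; the port and the prompt are assumptions based on the defaults above:

```python
import requests

# Ask the local llama-server for a completion (default: http://localhost:8080)
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
        ]
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```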
- LM Studio
- Jan
- vLLM
How to use WithinUsAI/WithIn-Us-Coder-4B.gguf with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "WithinUsAI/WithIn-Us-Coder-4B.gguf"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "WithinUsAI/WithIn-Us-Coder-4B.gguf",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
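The vLLM server above can also be called with the official `openai` Python client; a minimal sketch, assuming the server is running on its default port 8000 (the `api_key` value is arbitrary for a local server):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="WithinUsAI/WithIn-Us-Coder-4B.gguf",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)
```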
- Ollama
How to use WithinUsAI/WithIn-Us-Coder-4B.gguf with Ollama:
```bash
ollama run hf.co/WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
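Once pulled, the model can also be reached through Ollama's local REST API; a minimal sketch with `requests`, assuming Ollama's default port 11434 and an illustrative prompt:

```python
import requests

# Chat via Ollama's REST API (default: http://localhost:11434)
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M",
        "messages": [{"role": "user", "content": "Explain Python list comprehensions briefly."}],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```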
- Unsloth Studio
How to use WithinUsAI/WithIn-Us-Coder-4B.gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for WithinUsAI/WithIn-Us-Coder-4B.gguf to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for WithinUsAI/WithIn-Us-Coder-4B.gguf to start chatting
```
Use Hugging Face Spaces for Unsloth
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for WithinUsAI/WithIn-Us-Coder-4B.gguf to start chatting
```
- Pi
How to use WithinUsAI/WithIn-Us-Coder-4B.gguf with Pi:
Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add the following to ~/.pi/agent/models.json:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M" }
      ]
    }
  }
}
```
Run Pi
```bash
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use WithinUsAI/WithIn-Us-Coder-4B.gguf with Hermes Agent:
Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
Run Hermes
```bash
hermes
```
- Docker Model Runner
How to use WithinUsAI/WithIn-Us-Coder-4B.gguf with Docker Model Runner:
```bash
docker model run hf.co/WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
- Lemonade
How to use WithinUsAI/WithIn-Us-Coder-4B.gguf with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull WithinUsAI/WithIn-Us-Coder-4B.gguf:Q4_K_M
```
Run and chat with the model
```bash
lemonade run user.WithIn-Us-Coder-4B.gguf-Q4_K_M
```
List all available models
```bash
lemonade list
```
WithIn-Us-Coder-4B.gguf
WithIn-Us-Coder-4B.gguf is a GGUF release from WithIn Us AI, built for local inference and coding-focused assistant use cases. It is based on Qwen/Qwen3.5-4B and distributed in quantized GGUF formats for efficient deployment in llama.cpp-compatible runtimes.
Model Summary
This model is intended as a coding-oriented conversational assistant with emphasis on:
- code generation
- code reasoning
- implementation planning
- debugging assistance
- instruction following
- general assistant-style chat for development workflows
This repository currently provides the following GGUF variants:
- WithIn-Us-Coder-4B.Q4_K_M.gguf
- WithIn-Us-Coder-4B.Q5_K_M.gguf
Creator
WithIn Us AI created this model release, including the model packaging, the fine-tuning / merging concept and process, the naming, and the GGUF distribution.
Base Model
This model is based on:
- Qwen/Qwen3.5-4B
Credit and appreciation go to the original creators of the base LLM architecture and weights.
Training Data
The current repository metadata lists the following datasets as part of the model’s training / fine-tuning lineage:
- WithinUsAI/Python_GOD_Coder_50k
- reedmayhew/gemini-3.1-pro-2048-reasoning-1100x
- m-a-p/Code-Feedback
- crownelius/Opus-4.6-Reasoning-2100x-formatted
- crownelius/Opus4.6-No-Reasoning-260x
- crownelius/Creative_Writing_Multiturn_Enhanced
- HuggingFaceH4/llava-instruct-mix-vsft
- Roman1111111/gemini-3-pro-10000x-hard-high-reasoning
Attribution note:
WithIn Us AI does not claim ownership over third-party base models or third-party datasets. Full credit, thanks, and attribution belong to the original model and dataset creators.
Intended Use
This model is intended for:
- local coding assistants
- offline development help
- code explanation
- bug-fixing support
- prompt-based code generation
- experimentation in llama.cpp and GGUF-compatible environments
Suggested Use Cases
- generating Python, JavaScript, C++, and other programming language snippets
- explaining code blocks
- rewriting or improving functions
- brainstorming implementation strategies
- creating scaffolding and prototypes
- assisting with debugging and refactoring
Out-of-Scope Use
This model is not guaranteed to be reliable for:
- high-stakes legal advice
- medical advice
- financial decision-making
- autonomous execution without review
- security-critical production decisions without human verification
Users should always validate generated code before deployment.
Quantization Formats
This repository currently includes:
- Q4_K_M for smaller memory footprint and faster local inference
- Q5_K_M for improved quality while remaining efficient
Choose the quant level based on your hardware budget and quality needs.
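With llama-cpp-python, for example, the quant is selected by the `filename` argument; a minimal sketch for loading the higher-quality Q5_K_M variant instead of Q4_K_M:

```python
from llama_cpp import Llama

# Trade a larger memory footprint for better output quality with Q5_K_M
llm = Llama.from_pretrained(
    repo_id="WithinUsAI/WithIn-Us-Coder-4B.gguf",
    filename="WithIn-Us-Coder-4B.Q5_K_M.gguf",
)
```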
Prompting Notes
Because this is a coding-focused conversational model, the best results usually come from prompts that are:
- specific
- structured
- explicit about language, framework, or goal
- clear about desired output format
Example prompt style:
Write a Python function that parses a CSV file, removes duplicate rows by email, and saves the cleaned result. Include error handling and comments.
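One way to make such a prompt explicit and structured in code is to pair it with a system message; a minimal sketch using the llama-cpp-python `llm` object from the usage section above (the system message wording is an illustrative assumption):

```python
# A structured prompt: explicit language, goal, and desired output format
response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are a careful Python coding assistant. Reply with commented code.",
        },
        {
            "role": "user",
            "content": (
                "Write a Python function that parses a CSV file, removes duplicate "
                "rows by email, and saves the cleaned result. Include error handling "
                "and comments."
            ),
        },
    ]
)
print(response["choices"][0]["message"]["content"])
```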
Limitations
Like other language models, this model may:
- hallucinate APIs or library behavior
- generate insecure or inefficient code
- make reasoning mistakes
- produce outdated patterns
- require prompt iteration for best results
Human review is strongly recommended, especially for production code.
License
This repository uses a custom WithIn Us AI license approach.
- The base model may be subject to its original upstream license and terms.
- Third-party datasets remain the property of their respective creators / licensors.
- WithIn Us AI claims authorship of the fine-tuning / merging concept, process, packaging, naming, and release structure for this model distribution.
- This repository does not claim ownership over third-party datasets or the underlying upstream base model.
See the LICENSE file in this repository, if present, for the exact custom terms.
Acknowledgments
Special thanks to:
- Qwen for the base model
- all third-party dataset creators listed above
- the open-source GGUF / llama.cpp ecosystem
- the broader Hugging Face community
Files
Current repository files include:
- WithIn-Us-Coder-4B.Q4_K_M.gguf
- WithIn-Us-Coder-4B.Q5_K_M.gguf
Disclaimer
This model may generate incorrect, biased, insecure, or incomplete outputs.
Use responsibly, validate important results, and review all generated code before real-world use.