# GaussDB SQL Expert 7B
A domain-specific database assistant fine-tuned on Qwen2.5-Coder-7B-Instruct, specialized in SQL generation, optimization, cross-database migration, error diagnosis, and more.
## Model Overview
| Item | Details |
|---|---|
| Base Model | Qwen/Qwen2.5-Coder-7B-Instruct |
| Parameters | 7.6B (Dense) |
| Fine-tuning | LoRA (rank=64, alpha=128, target=all linear layers) |
| Trainable Params | 161M (2.08% of total) |
| Training Data | 29,863 ShareGPT conversations + 1,571 validation |
| Hardware | 1x NVIDIA H100 80GB |
| Training Time | 3.5 hours |
| Framework | LLaMA-Factory v0.9.4 |
| Precision | BF16 |
## Core Capabilities
- Text2SQL: Natural language to SQL with support for window functions, recursive CTEs, MERGE, subqueries, and more
- SQL Tuning: Index invalidation analysis, execution plan interpretation, parameter optimization advice
- SQL Migration: Oracle / MySQL / SQL Server → GaussDB syntax conversion (50+ difference points)
- Error Diagnosis: Deadlock, WAL bloat, connection exhaustion, OOM, and 20+ common production issues
- SQL Explanation: Logic breakdown and readability analysis of complex queries
- Boundary Safety: Dangerous operation interception, clarification requests, out-of-scope rejection
Supports 9 major databases: GaussDB, Oracle, MySQL, PostgreSQL, SQL Server, PolarDB, DM (Dameng), KingBase, Sybase
## Benchmark Results
Evaluated on 100 automated test cases (20 per category) using keyword matching:
| Category | Score | Notes |
|---|---|---|
| Text2SQL | 20/20 (100%) | Window functions, CTE, MERGE, pagination all correct |
| SQL Tuning | 18/20 (90%) | Index invalidation, implicit conversion, parameter tuning |
| SQL Migration | 20/20 (100%) | Oracle/MySQL/SQL Server → GaussDB conversion |
| Error Diagnosis | 20/20 (100%) | Deadlock, WAL, OOM, connection exhaustion |
| Boundary Safety | 16/20 (80%) | Dangerous operation alerts, out-of-scope rejection |
| Overall | 94/100 (94%) | |
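The evaluation harness itself is not published; a minimal sketch of what keyword-matching scoring typically looks like (all test data below is hypothetical, not the actual benchmark):

```python
# Toy sketch of keyword-matching evaluation (hypothetical; the actual
# harness behind the scores above is not published).
def keyword_match(response: str, required_keywords: list[str]) -> bool:
    """A test case passes if every required keyword appears in the model output."""
    text = response.upper()
    return all(kw.upper() in text for kw in required_keywords)

# Illustrative cases modeled on the categories above
cases = [
    ("SELECT ... DENSE_RANK() OVER (PARTITION BY dept_id ...)", ["DENSE_RANK", "PARTITION BY"]),
    ("SELECT COALESCE(name, 'unknown') FROM users LIMIT 10;", ["COALESCE", "LIMIT"]),
]
passed = sum(keyword_match(resp, kws) for resp, kws in cases)
print(f"{passed}/{len(cases)} passed")  # → 2/2 passed
```

Keyword matching is lenient: it checks for required constructs rather than executing the SQL, which is why boundary-safety cases (where phrasing varies) score lower than structural categories like Text2SQL.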
## Quick Start
### Requirements
- Python >= 3.9
- PyTorch >= 2.0
- GPU with >= 16GB VRAM (recommended) or CPU (slower)
- ~15GB disk space for model weights
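The disk and VRAM figures follow directly from the parameter count: 7.6B parameters at 2 bytes each (BF16/FP16) is roughly 15 GB of weights, before activations and KV cache. A quick back-of-envelope check:

```python
# Back-of-envelope: weight storage for a 7.6B-parameter model in 16-bit precision.
params = 7.6e9
bytes_per_param = 2  # BF16 / FP16
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.1f} GB")  # → 15.2 GB
```

This is why 16 GB of VRAM is the recommended floor: the weights alone nearly fill it, so long contexts may require offloading or a larger card.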
### Installation
```shell
# 1. Install dependencies
pip install torch transformers accelerate

# 2. (Optional) Install Flash Attention 2 for faster inference on NVIDIA GPUs
pip install flash-attn --no-build-isolation
```
### Download Model
The model is downloaded automatically on first use via `from_pretrained()`. You can also download it manually:
```shell
# Option A: Using huggingface-cli
pip install huggingface_hub
huggingface-cli download lanfers/gaussdb-sql-expert-7b --local-dir ./gaussdb-sql-expert-7b

# Option B: Using git-lfs
git lfs install
git clone https://huggingface.co/lanfers/gaussdb-sql-expert-7b

# Option C: Using Python
python -c "
from huggingface_hub import snapshot_download
snapshot_download('lanfers/gaussdb-sql-expert-7b', local_dir='./gaussdb-sql-expert-7b')
"
```
**China Mainland Users:** If the download is slow, use a mirror:

```shell
HF_ENDPOINT=https://hf-mirror.com huggingface-cli download lanfers/gaussdb-sql-expert-7b --local-dir ./gaussdb-sql-expert-7b
```
### Inference
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "lanfers/gaussdb-sql-expert-7b"
device = "cuda"  # or "mps" (Mac) / "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map=device,
    trust_remote_code=True,
)
model.eval()

messages = [
    {"role": "system", "content": "You are a professional database expert assistant, proficient in Oracle, MySQL, PostgreSQL, GaussDB, PolarDB, DM, KingBase, Sybase, SQL Server."},
    {"role": "user", "content": "Database: GaussDB 505\n\nFind the top 3 highest-paid employees in each department, showing department name, employee name, salary, and rank."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=1024,
        temperature=0.7,
        do_sample=True,
        top_p=0.8,
        repetition_penalty=1.05,
    )

response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```
## Example Outputs
**Input:** Find the top 3 highest-paid employees in each department

**Output:**

```sql
SELECT dept_name, emp_name, salary, rk
FROM (
    SELECT d.dept_name, e.emp_name, e.salary,
           DENSE_RANK() OVER (PARTITION BY e.dept_id ORDER BY e.salary DESC) AS rk
    FROM employees e
    JOIN departments d ON e.dept_id = d.dept_id
) t
WHERE rk <= 3;
```
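The query uses only standard window-function syntax, so it can be sanity-checked on any engine that supports window functions; here with SQLite (≥ 3.25) and toy data (table contents are illustrative, not from the training set):

```python
import sqlite3

# Build a tiny schema matching the query above (toy data, illustrative only)
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
CREATE TABLE employees (emp_name TEXT, salary INTEGER, dept_id INTEGER);
INSERT INTO departments VALUES (1, 'Engineering'), (2, 'Sales');
INSERT INTO employees VALUES
  ('Alice', 120, 1), ('Bob', 110, 1), ('Carol', 100, 1), ('Dave', 90, 1),
  ('Eve', 80, 2), ('Frank', 70, 2);
""")

rows = conn.execute("""
SELECT dept_name, emp_name, salary, rk
FROM (
    SELECT d.dept_name, e.emp_name, e.salary,
           DENSE_RANK() OVER (PARTITION BY e.dept_id ORDER BY e.salary DESC) AS rk
    FROM employees e
    JOIN departments d ON e.dept_id = d.dept_id
) t
WHERE rk <= 3
""").fetchall()
print(rows)  # Dave (rank 4 in Engineering) is filtered out
```

Note the model chose `DENSE_RANK()` rather than `ROW_NUMBER()`, so salary ties would share a rank and "top 3" can return more than three rows per department.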
**Input:** Migrate Oracle to GaussDB: `SELECT NVL(name, 'unknown') FROM users WHERE ROWNUM <= 10`

**Output:**

```sql
SELECT COALESCE(name, 'unknown') FROM users LIMIT 10;
-- NVL → COALESCE, ROWNUM → LIMIT
```
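Simple rewrites like these two are largely mechanical. A toy regex-based sketch, covering only the two rules above (illustrative of the rewrite, not how the model works, and not safe for general Oracle SQL):

```python
import re

def toy_oracle_to_gaussdb(sql: str) -> str:
    """Illustrative only: handles just the two rewrites shown above."""
    # NVL(a, b) → COALESCE(a, b)
    sql = re.sub(r"\bNVL\s*\(", "COALESCE(", sql, flags=re.IGNORECASE)
    # WHERE ROWNUM <= n → LIMIT n (only valid when ROWNUM is the sole predicate)
    sql = re.sub(r"\bWHERE\s+ROWNUM\s*<=\s*(\d+)", r"LIMIT \1", sql, flags=re.IGNORECASE)
    return sql

print(toy_oracle_to_gaussdb("SELECT NVL(name, 'unknown') FROM users WHERE ROWNUM <= 10"))
# → SELECT COALESCE(name, 'unknown') FROM users LIMIT 10
```

Most of the 50+ difference points (sequences, outer-join syntax, hierarchical queries, datatype mappings) are context-dependent and cannot be handled with pattern matching, which is where a fine-tuned model earns its keep.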
## Training Details
### Hyperparameters
```yaml
# LoRA config
lora_rank: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target: all  # q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj

# Training config
learning_rate: 2.0e-5
lr_scheduler_type: cosine
warmup_ratio: 0.1
num_train_epochs: 3
per_device_train_batch_size: 8
gradient_accumulation_steps: 4  # effective batch_size = 32
cutoff_len: 2048
optim: adamw_torch
bf16: true
gradient_checkpointing: true
```
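The 161M trainable-parameter figure in the overview can be reproduced from the LoRA formula: each adapted linear layer of shape d_in × d_out gains two low-rank matrices A (r × d_in) and B (d_out × r), i.e. r·(d_in + d_out) extra parameters. Using the published Qwen2.5-7B architecture dimensions (28 layers, hidden size 3584, GQA with 4 KV heads of head_dim 128, FFN size 18944):

```python
# LoRA adds r * (d_in + d_out) parameters per adapted linear layer.
# Dimensions below are from the published Qwen2.5-7B config.
r = 64
hidden, kv, ffn, layers = 3584, 4 * 128, 18944, 28

targets = {  # (d_in, d_out) for each lora_target module
    "q_proj": (hidden, hidden),
    "k_proj": (hidden, kv),
    "v_proj": (hidden, kv),
    "o_proj": (hidden, hidden),
    "gate_proj": (hidden, ffn),
    "up_proj": (hidden, ffn),
    "down_proj": (ffn, hidden),
}
lora_params = layers * sum(r * (d_in + d_out) for d_in, d_out in targets.values())
print(f"{lora_params / 1e6:.1f}M")  # → 161.5M, matching the reported 161M
```

Dividing by the ~7.77B total (base plus adapters) gives the reported ~2.08% trainable fraction.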
### Training Loss
Total steps: 2,799 | Duration: 3h 29m

| Step | Epoch | Train Loss | Eval Loss |
|---|---|---|---|
| 200 | 0.21 | 1.217 | 1.216 |
| 600 | 0.64 | 1.038 | 1.104 |
| 1000 | 1.07 | 1.035 | 1.076 |
| 1400 | 1.50 | 1.062 | 1.058 |
| 1800 | 1.93 | 1.062 | 1.045 |
| 2200 | 2.36 | 0.966 | 1.044 |
| 2600 | 2.79 | 0.959 | 1.042 (best checkpoint) |
Final train_loss = 1.039, eval_loss = 1.042; the near-identical values suggest the model did not overfit.
### Training Data Distribution
| Category | Proportion | Description |
|---|---|---|
| Text2SQL | ~30% | Natural language → SQL generation |
| SQL Tuning | ~20% | Slow query analysis, index optimization |
| SQL Migration | ~15% | Cross-database syntax conversion |
| Error Diagnosis | ~15% | Production incident troubleshooting |
| Operations | ~10% | Parameter tuning, backup & recovery |
| Boundary Safety | ~10% | Dangerous operation alerts, scope rejection |
## Limitations
- Boundary safety has room for improvement: may execute `DELETE` without `WHERE` or `DROP DATABASE` without warning
- Limited coverage of GaussDB 505 advanced features (e.g., column-store tables, distributed features)
- Text-only input; does not support images (e.g., execution plan screenshots)
- Recommended to add inference-side safety rules for production environments
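On the last point, a minimal sketch of such an inference-side rule: a regex pre-filter that flags the dangerous patterns named above before generated SQL reaches the database (a sketch only; production deployments should use a real SQL parser rather than regexes):

```python
import re

# Patterns matching the dangerous operations listed in Limitations
DANGEROUS = [
    (r"\bDELETE\s+FROM\s+\w+\s*(;|$)", "DELETE without WHERE"),
    (r"\bDROP\s+(DATABASE|TABLE)\b", "DROP statement"),
]

def check_sql(sql: str) -> list[str]:
    """Return a list of warnings for dangerous patterns in generated SQL."""
    s = sql.strip()
    return [label for pattern, label in DANGEROUS
            if re.search(pattern, s, flags=re.IGNORECASE)]

print(check_sql("DELETE FROM users;"))            # → ['DELETE without WHERE']
print(check_sql("DELETE FROM users WHERE id=1"))  # → []
```

A guard like this can block execution, or inject the warning back into the model's response, regardless of whether the model itself flagged the risk.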
## Citation
If this model is helpful, please cite:
```bibtex
@misc{gaussdb-sql-expert-7b,
  title={GaussDB SQL Expert 7B},
  author={lanfers},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/lanfers/gaussdb-sql-expert-7b}
}
```
## License
Fine-tuned from Qwen2.5-Coder-7B-Instruct under the Apache 2.0 License.