# Zagreus-0.4B-por
Zagreus-0.4B-por is a bilingual English/Portuguese foundational Small Language Model (SLM) trained from scratch by the mii-llm community (Made in Italy – Large Language Model) on the Seeweb HPC infrastructure.
This is a base (pre-trained) model — it is not instruction-tuned and is intended for researchers, developers, and practitioners who want to fine-tune or build upon a high-quality bilingual English/Portuguese foundation.
The Zagreus family is one of the few openly released, high-performing small language model families dedicated to European Romance languages, trained entirely from scratch with a fully transparent pipeline.
## Model Details
| Property | Value |
|---|---|
| Architecture | Modified Llama-3.2 (fully dense) |
| Parameters | ~400M |
| Hidden size | 960 |
| Intermediate size | 2560 |
| Layers | 32 |
| Attention heads | 15 (KV heads: 5) |
| Activation | SiLU |
| Context length | 4096 tokens |
| Tokenizer | Llama-3.2 (vocab_size: 128,256) |
| Positional encoding | RoPE (theta: 10000.0) |
| Tied embeddings | Yes |
| Precision | BF16 |
| Languages | English, Portuguese |
| Training tokens | ~1 trillion |
| Training framework | Nanotron (mii-llm fork) |
| Infrastructure | 64× NVIDIA A100 GPUs (8 nodes × 8 GPUs), Seeweb HPC |
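The advertised ~400M parameter count follows directly from the table. A back-of-the-envelope check (a sketch assuming a standard Llama layer layout, not the authors' exact accounting):

```python
# Rough parameter estimate from the configuration above (standard Llama layout assumed).
hidden, inter, layers, vocab = 960, 2560, 32, 128256
heads, kv_heads = 15, 5
head_dim = hidden // heads          # 64
kv_dim = kv_heads * head_dim        # 320 (grouped-query attention)

attn = 2 * hidden * hidden + 2 * hidden * kv_dim   # Q and O projections + K and V
mlp = 3 * hidden * inter                           # gate, up, down (SiLU-gated MLP)
norms = 2 * hidden                                 # two RMSNorm weight vectors per layer
per_layer = attn + mlp + norms

total = layers * per_layer + vocab * hidden + hidden  # + tied embeddings + final norm
print(f"~{total / 1e6:.0f}M parameters")              # ~438M, i.e. the advertised ~0.4B
```

Tied embeddings keep the 128k-entry vocabulary matrix counted only once, which is what brings a 32-layer model in at ~0.44B rather than ~0.56B.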
## Training Data
All datasets used are fully open source, released by Hugging Face and the BigCode project:
| Dataset | Tokens | Description |
|---|---|---|
| FineWeb-Edu (350BT sample) | ~350B | High-quality educational English web text |
| FineWeb-2 (por_Latn) | — | Portuguese web text |
| FinePDFs (por_Latn) | — | Portuguese PDF documents |
| StarCoder Data | ~250B | Multilingual code |
Token distribution: ~400B English + ~400B Portuguese + ~200B Code ≈ 1 trillion tokens total
### Tokenization
Raw datasets were tokenized using the Llama-3.2 tokenizer (meta-llama/Llama-3.2-1B) via the datatrove library. The process ran for over three weeks of continuous computation on CPU nodes via Slurm, generating approximately 3–5 TB of tokenized data shards.
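The quoted shard volume is consistent with simple arithmetic, assuming token ids are stored as 4-byte integers (the 128k vocabulary does not fit in 2 bytes; the storage width is my assumption, not a documented detail):

```python
# Sanity check on the reported 3-5 TB of tokenized shards.
tokens = 1e12               # ~1 trillion training tokens
bytes_per_token = 4         # uint32 ids; vocab size 128,256 exceeds the uint16 range
tb = tokens * bytes_per_token / 1e12
print(f"~{tb:.0f} TB of tokenized shards")  # ~4 TB, in line with the reported 3-5 TB
```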
## Architecture Choice
We adopted a modified Llama-3.2 fully dense architecture. The choice of a dense model over Mixture-of-Experts (MoE) in the small-parameter regime (~400M) was deliberate: in tightly constrained capacity settings, the routing overhead and expert under-utilization typical of MoE architectures may offset their theoretical efficiency advantages. Dense models provide better compute utilization and more stable training dynamics at this scale.
## Pre-training Configuration
Full Nanotron YAML configuration used for training:
```yaml
checkpoints:
  checkpoint_interval: 5000
  checkpoints_path: checkpoints_zagreus_por
  checkpoints_path_is_shared_file_system: false
  save_final_state: false
  save_initial_state: false
data_stages:
- data:
    dataset:
      dataset_folder:
      - /training/pretraining/fineweb-por/tokenized
      - /training/pretraining/fineweb-edu-350BT/000_tokenized_output
      - /training/pretraining/fineweb-edu-350BT/011_tokenized_output
      - /training/pretraining/fineweb-edu-350BT/012_tokenized_output
      - /training/pretraining/fineweb-edu-350BT/013_tokenized_output
      - /training/pretraining/fineweb-edu-350BT/014_tokenized_output
      - /training/pretraining/fineweb-edu-350BT/015_tokenized_output
      - /training/pretraining/fineweb-edu-350BT/016_tokenized_output
      - /training/pretraining/finepdf-por/000_tokenized_output
      - /training/pretraining/starcoder_tokenized/000_tokenized_output
    num_loading_workers: 0
    seed: 8
  name: stable phase
  start_training_step: 1
general:
  benchmark_csv_path: null
  consumed_train_samples: null
  ignore_sanity_checks: true
  project: zagreus
  run: zagreus-350M-por
  seed: 8
  step: null
logging:
  iteration_step_info_interval: 1
  log_level: info
  log_level_replica: info
model:
  ddp_bucket_cap_mb: 100
  dtype: bfloat16
  init_method:
    std: 0.03227
  make_vocab_size_divisible_by: 1
  model_config:
    bos_token_id: 128000
    eos_token_id: 128001
    hidden_act: silu
    hidden_size: 960
    initializer_range: 0.02
    intermediate_size: 2560
    is_llama_config: true
    max_position_embeddings: 4096
    num_attention_heads: 15
    num_hidden_layers: 32
    num_key_value_heads: 5
    pad_token_id: null
    pretraining_tp: 1
    rms_norm_eps: 1.0e-05
    rope_interleaved: false
    rope_scaling: null
    rope_theta: 10000.0
    tie_word_embeddings: true
    use_cache: true
    vocab_size: 128256
optimizer:
  accumulate_grad_in_fp32: true
  clip_grad: 1.0
  learning_rate_scheduler:
    learning_rate: 0.003
    lr_decay_starting_step: 750000
    lr_decay_steps: 50000
    lr_decay_style: linear
    lr_warmup_steps: 4000
    lr_warmup_style: linear
    min_decay_lr: 1.0e-7
  optimizer_factory:
    adam_beta1: 0.9
    adam_beta2: 0.95
    adam_eps: 1.0e-08
    name: adamW
    torch_adam_is_fused: true
  weight_decay: 0.01
  zero_stage: 0
parallelism:
  dp: 64
  expert_parallel_size: 1
  pp: 1
  pp_engine: 1f1b
  recompute_layer: false
  tp: 1
  tp_linear_async_communication: true
  tp_mode: REDUCE_SCATTER
  tp_recompute_allgather: true
profiler: null
tokenizer:
  tokenizer_max_length: null
  tokenizer_name_or_path: meta-llama/Llama-3.2-1B
  tokenizer_revision: null
tokens:
  batch_accumulation_per_replica: 1
  limit_test_batches: 0
  limit_val_batches: 0
  micro_batch_size: 4
  sequence_length: 4096
  train_steps: 2000000
  val_check_interval: 5000
```
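The `optimizer` block encodes a warmup-plateau-decay learning-rate schedule, and the `parallelism` and `tokens` blocks fix the global batch. A small sketch reproducing both (my reading of the config values, not Nanotron's internal implementation):

```python
def lr_at(step, peak=3e-3, warmup=4000, decay_start=750_000,
          decay_steps=50_000, min_lr=1e-7):
    """Linear warmup, constant plateau, then linear decay to min_decay_lr."""
    if step < warmup:
        return peak * step / warmup
    if step < decay_start:
        return peak
    if step < decay_start + decay_steps:
        frac = (step - decay_start) / decay_steps
        return peak + frac * (min_lr - peak)
    return min_lr

# Global batch: dp * micro_batch_size * batch_accumulation * sequence_length
tokens_per_step = 64 * 4 * 1 * 4096
print(tokens_per_step)   # 1048576 tokens (~1M) per optimizer step
print(lr_at(2_000))      # mid-warmup: 0.0015
print(lr_at(100_000))    # plateau: 0.003
```

At ~1M tokens per step, the ~1T-token budget corresponds to roughly one million optimizer steps, which matches the decay window starting at step 750,000.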
### Slurm Launch Script
```bash
#!/bin/bash
#SBATCH --job-name=350_pt
#SBATCH --account=YOUR_ACCOUNT
#SBATCH --partition=PARTITION
#SBATCH --nodes=8
#SBATCH --gres=gpu:8              # 8 A100 per node = 64 total
#SBATCH --cpus-per-task=32
#SBATCH --time=4-00:00:00
#SBATCH --output=slurm-%j.out

################ 0. Environment ################
module purge
module load profile/global
module load python/3.11 cuda/12.2 cudnn nccl gcc
source /path/to/venv/nanotron/bin/activate

export HF_HOME=/path/to/hf_home
export TRANSFORMERS_OFFLINE=1
export HF_HUB_OFFLINE=1
export HF_DATASETS_OFFLINE=1
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export NCCL_IB_DISABLE=0
export NCCL_SOCKET_IFNAME="ib0,eno,eth"
export WANDB_MODE=disabled

################ 1. Distributed vars ############
GPUS_PER_NODE=8                   # must match --gres=gpu:8
NNODES=$SLURM_JOB_NUM_NODES
NODE_RANK=$SLURM_NODEID
MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n1)
MASTER_PORT=29400
RDZV_ID=$SLURM_JOB_ID

################ 2. Launch ######################
srun torchrun \
  --nnodes $NNODES \
  --nproc_per_node $GPUS_PER_NODE \
  --rdzv_id $RDZV_ID \
  --rdzv_backend c10d \
  --rdzv_endpoint $MASTER_ADDR:$MASTER_PORT \
  /path/to/nanotron/run_train.py \
  --config-file smollm2/zagreus_350M_por.yaml
```
### Checkpoint Conversion to Hugging Face Format
```bash
torchrun --nproc_per_node=1 -m examples.llama.convert_nanotron_to_hf \
  --checkpoint_path=checkpoints/<step> \
  --save_path=hf_checkpoints/<step> \
  --tokenizer_name meta-llama/Llama-3.2-1B
```
## Evaluation
### Standard Benchmarks
#### Evaluation Commands
```bash
lm-eval --model hf --model_args pretrained=<checkpoint> \
  --tasks m_mmlu_pt --num_fewshot 5 --device cuda:0 --batch_size 1

lm-eval --model hf --model_args pretrained=<checkpoint> \
  --tasks hellaswag_pt,arc_pt --device cuda:0 --batch_size 1
```
### Checkpoint Progression
The table below tracks benchmark scores across training checkpoints, demonstrating steady model improvement throughout pre-training:
| Checkpoint | ARC PT ↑ | HellaSwag PT ↑ | MMLU PT ↑ | Average |
|---|---|---|---|---|
| 153k | 0.2667 | 0.3732 | 0.2685 | 0.3028 |
| 207k | 0.2705 | 0.3768 | 0.2671 | 0.3048 |
| 276k | 0.2718 | 0.3789 | 0.2664 | 0.3057 |
| 345k | 0.2564 | 0.3796 | 0.2669 | 0.3009 |
| 414k | 0.2682 | 0.3842 | 0.2673 | 0.3066 |
| 483k | 0.2667 | 0.3865 | 0.2658 | 0.3063 |
| 582k | 0.2786 | 0.3865 | 0.2688 | 0.3113 |
Best overall checkpoint: 582k with an average of 0.3113 across all three benchmarks. HellaSwag shows the strongest and most consistent improvement throughout training.
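The Average column is a plain three-way mean and can be recomputed as a sanity check:

```python
# Re-derive the Average column (ARC PT, HellaSwag PT, MMLU PT) for each checkpoint.
scores = {
    "153k": (0.2667, 0.3732, 0.2685),
    "207k": (0.2705, 0.3768, 0.2671),
    "276k": (0.2718, 0.3789, 0.2664),
    "345k": (0.2564, 0.3796, 0.2669),
    "414k": (0.2682, 0.3842, 0.2673),
    "483k": (0.2667, 0.3865, 0.2658),
    "582k": (0.2786, 0.3865, 0.2688),
}
averages = {ckpt: sum(vals) / 3 for ckpt, vals in scores.items()}
best = max(averages, key=averages.get)
print(best, round(averages[best], 4))  # 582k 0.3113
```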
### Portuguese-Specific Evaluation: lm-evaluation-harness-pt
For a more comprehensive and Brazil/Portugal-focused assessment, we evaluated Zagreus-0.4B-por using the lm-evaluation-harness-pt framework, a dedicated Portuguese fork of lm-eval maintained by Eduardo Garcia. This framework powers the Open Portuguese LLM Leaderboard, covering a broad set of real-world Portuguese NLP tasks spanning education, law, sentiment, hate speech, and reading comprehension.
#### Evaluation Command
```bash
lm_eval \
  --model huggingface \
  --model_args "pretrained=giux78/zagreus-3B-165000,revision=main" \
  --tasks enem_challenge,bluex,oab_exams,assin2_rte,assin2_sts,faquad_nli,hatebr_offensive,portuguese_hate_speech,tweetsentbr \
  --device cuda:0 \
  --output_path "./"
```
#### Task Descriptions
| Task | Description |
|---|---|
| ENEM Challenge | Brazilian national university entrance exam (multidisciplinary) |
| BLUEX | Brazilian university entrance exam for top universities |
| OAB Exams | Brazilian Bar Association exam (legal reasoning) |
| ASSIN2 RTE | Recognizing Textual Entailment in Portuguese |
| ASSIN2 STS | Semantic Textual Similarity in Portuguese |
| FaQuAD NLI | Natural Language Inference over Brazilian higher-education FAQ data |
| HateBR | Offensive language detection in Brazilian Portuguese |
| Portuguese Hate Speech | Cross-domain hate speech classification |
| TweetSentBR | Sentiment analysis on Brazilian Portuguese tweets |
#### Results vs. Qwen3-0.6B-Base
| Rank | Model | RTE ↑ | STS ↑ | BLUEX ↑ | ENEM ↑ | FaQuAD NLI ↑ | HateBR ↑ | OAB ↑ | PT Hate ↑ | TweetSent ↑ | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 🥇 | Zagreus 483k | 0.4624 | 0.1650 | 0.2434 | 0.2071 | 0.4397 | 0.3327 | 0.2528 | 0.4817 | 0.3220 | 0.3230 |
| 🥈 | Zagreus 582k | 0.3361 | 0.0449 | 0.2100 | 0.1903 | 0.4397 | 0.3825 | 0.2392 | 0.4444 | 0.1542 | 0.2713 |
| 🥉 | Qwen3-0.6B-Base | 0.3333 | 0.0726 | 0.1057 | 0.0077 | 0.4397 | 0.3333 | 0.0428 | 0.4123 | 0.5646 | 0.2569 |
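The Average column is the unweighted mean over the nine tasks and can be recomputed from the table as a sanity check:

```python
# Per-task scores in table order: RTE, STS, BLUEX, ENEM, FaQuAD NLI,
# HateBR, OAB, PT Hate, TweetSent.
zagreus_483k = [0.4624, 0.1650, 0.2434, 0.2071, 0.4397,
                0.3327, 0.2528, 0.4817, 0.3220]
qwen3_base = [0.3333, 0.0726, 0.1057, 0.0077, 0.4397,
              0.3333, 0.0428, 0.4123, 0.5646]
avg_z = sum(zagreus_483k) / len(zagreus_483k)
avg_q = sum(qwen3_base) / len(qwen3_base)
print(round(avg_z, 4), round(avg_q, 4), round(avg_z - avg_q, 4))
```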
### Discussion
At checkpoint 483k, Zagreus-0.4B-por ranks first in this comparison with an average score of 0.3230, outperforming the larger Qwen3-0.6B-Base (0.2569) by a significant margin of +0.0661.
Key strengths at checkpoint 483k:
- RTE (0.4624): strongest textual entailment performance, nearly 13 points above Qwen3
- BLUEX (0.2434): best performance on Brazilian university entrance exams
- ENEM (0.2071): strongest result on the national university entrance exam
- OAB (0.2528): best legal reasoning score, more than 21 points ahead of Qwen3
- PT Hate Speech (0.4817): best hate speech classification
The 582k checkpoint leads on HateBR (0.3825) but scores lower on several other tasks, suggesting that 483k represents the best general-purpose checkpoint for Portuguese NLP tasks. The variability between checkpoints on this benchmark suite highlights the importance of checkpoint selection for downstream task performance.
Qwen3-0.6B-Base shows stronger TweetSentBR performance (0.5646), suggesting better sentiment classification on Twitter-style informal Portuguese text — likely due to broader multilingual pretraining data coverage.
Overall, these results confirm that a targeted bilingual pretraining strategy on Portuguese, even at the ~400M parameter scale, can surpass larger general-purpose multilingual models on Portuguese-specific benchmarks.
## Usage
This is a base model — it performs causal language modelling (text completion) and is not instruction-tuned. It is best suited as a starting point for fine-tuning.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "mii-llm/zagreus-0.4B-por"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Base model: text completion, not instruction following
prompt = "A inteligência artificial é uma disciplina que"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.8,
    do_sample=True,
    repetition_penalty=1.1
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Full Model Family
### Base Models (Zagreus)
| Model | Languages | HuggingFace |
|---|---|---|
| Zagreus-0.4B-ita | English + Italian | 🤗 Link |
| Zagreus-0.4B-spa | English + Spanish | 🤗 Link |
| Zagreus-0.4B-por (this model) | English + Portuguese | 🤗 Link |
| Zagreus-0.4B-fra | English + French | 🤗 Link |
### Post-trained Models (Nesso) — English/Italian
| Model | Use Case | HuggingFace |
|---|---|---|
| Nesso-0.4B-instruct | Conversational / Instruction following | 🤗 Link |
| Nesso-0.4B-agentic | Function calling / Agentic | 🤗 Link |
| Open-Zagreus-0.4B | Fully open source | 🤗 Link |
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{zagreus2025,
  title        = {The Joy and Pain of Training an LLM from Scratch:
                  A Technical Report on the Zagreus and Nesso Model Families},
  author       = {mii-llm community},
  year         = {2025},
  howpublished = {\url{https://github.com/mii-llm/zagreus-nesso-slm}},
}
```
## Acknowledgements
- Antonio Baldassarra (CEO, Seeweb) and Marco Cristofanilli (Head of AI, Seeweb) for commissioning and sponsoring the infrastructure
- Eduardo Garcia for the lm-evaluation-harness-pt framework and the Open Portuguese LLM Leaderboard
- The Hugging Face team for Nanotron, datatrove, FineWeb, FineWeb-2, and FinePDFs
- The mii-llm open-source community for contributions to multilingual evaluation harnesses and the Nanotron fork
## License
Released under the Apache 2.0 license.
Made with ❤️ in Italy by mii-llm