---
base_model:
- GSAI-ML/LLaDA-8B-Instruct
language:
- en
library_name: transformers
---
|
# Large Language Diffusion with Ordered Unmasking (LLaDOU)

<a href="https://arxiv.org/abs/2505.10446"><img src="https://img.shields.io/badge/arXiv-2505.10446-b31b1b.svg" alt="arXiv"></a>
<a href="https://github.com/maple-research-lab/LLaDOU"><img src="https://img.shields.io/badge/GitHub-LLaDOU-777777.svg" alt="GitHub"></a>
|
We introduce **L**arge **La**nguage **D**iffusion with **O**rdered **U**nmasking (**LLaDOU**), which is trained by reinforcing a new reasoning paradigm, the **D**iffusion **C**hain **o**f **L**ateral **T**hought (**DCoLT**), for diffusion language models.
|
Compared to standard CoT, DCoLT is distinguished by several notable features:
- **Bidirectional Reasoning**: allows global refinement throughout generation via bidirectional self-attention masks.
- **Format-Free Reasoning**: imposes no strict rules on grammatical correctness in its intermediate steps of thought.
- **Nonlinear Generation**: generates tokens at varying positions in different steps, rather than strictly left to right (see the toy sketch below).
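
The snippet below is only a toy illustration of nonlinear, ordered unmasking; the stand-in model, constants, and confidence-based selection rule are assumptions for illustration. LLaDOU instead learns which positions to unmask via reinforcement learning rather than using a fixed heuristic.

```python
# Toy sketch of ordered unmasking: at each step, decode a few positions
# anywhere in the sequence instead of generating strictly left to right.
# The "model" here returns random logits; it is NOT the LLaDOU policy.
import torch

VOCAB, MASK_ID, SEQ_LEN, STEPS = 100, 0, 16, 4

def toy_logits(tokens: torch.Tensor) -> torch.Tensor:
    # Placeholder for a bidirectional diffusion LM forward pass.
    return torch.randn(tokens.shape[0], VOCAB)

seq = torch.full((SEQ_LEN,), MASK_ID)              # start fully masked
masked = torch.ones(SEQ_LEN, dtype=torch.bool)     # positions still masked
per_step = SEQ_LEN // STEPS                        # tokens unmasked per step

for _ in range(STEPS):
    logits = toy_logits(seq)                       # (SEQ_LEN, VOCAB)
    conf, pred = logits.softmax(-1).max(-1)        # confidence and argmax token
    conf[~masked] = -1.0                           # never re-select decoded slots
    chosen = conf.topk(per_step).indices           # most confident masked positions
    seq[chosen] = pred[chosen]                     # unmask them, in any order
    masked[chosen] = False

print(seq)
```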
|
| |  |
| |
|
## Instructions
|
**LLaDOU-v0-Math** is a math-specific model trained on GSM8K and MATH.
|
For inference code and detailed instructions, please refer to our GitHub page: [maple-research-lab/LLaDOU](https://github.com/maple-research-lab/LLaDOU).
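
The full ordered-unmasking sampler lives in the GitHub repository above; as a rough orientation only, the sketch below shows how the checkpoint might be loaded with `transformers`. The repository id `maple-research-lab/LLaDOU-v0-Math` and the `trust_remote_code` / `bfloat16` settings are assumptions carried over from the LLaDA base model's packaging, not confirmed details.

```python
# Loading sketch only -- NOT the official inference script.
# The model id below is hypothetical; see maple-research-lab/LLaDOU on GitHub
# for the actual checkpoint name and the diffusion sampling code.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "maple-research-lab/LLaDOU-v0-Math"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,        # diffusion LM ships custom modeling code
    torch_dtype=torch.bfloat16,
).to("cuda").eval()

# Generation uses the ordered-unmasking sampler from the GitHub repo,
# not model.generate(); refer to the repo's instructions for usage.
```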