PR #6: "I need pull" — opened by visssoftjsc
- AGENTS.md +0 -92
- CLAUDE.md +79 -421
- README.md +3 -109
- dots-mocr.py → dots-ocr-1.5.py +60 -95
- falcon-ocr-bucket.py +0 -278
- falcon-ocr.py +0 -433
- firered-ocr.py +7 -10
- glm-ocr-bucket.py +0 -364
- pp-doclayout.py +0 -1159
- qianfan-ocr.py +0 -628
AGENTS.md
DELETED
@@ -1,92 +0,0 @@
-# For coding agents
-
-This repo is a curated collection of ready-to-run OCR scripts — each one self-contained
-via UV inline metadata, runnable over the network via `hf jobs uv run`. No clone, no
-install, no setup.
-
-## Don't rely on this doc — discover the current state
-
-This file will go stale. Prefer these sources of truth:
-
-- `hf jobs uv run --help` — job submission flags (volumes, secrets, flavors, timeouts)
-- `hf jobs hardware` — current GPU flavors and pricing
-- `hf auth whoami` — check that an HF token is set
-- `hf jobs ps` / `hf jobs logs <id>` — monitor running jobs
-- `ls` the repo to see which scripts actually exist (bucket variants especially)
-- [README.md](./README.md) — the table of scripts with model sizes and notes
-
-## Picking a script
-
-The [README.md](./README.md) table lists every script with model size, backend, and
-a short note. Axes that matter:
-
-- **Model size** vs accuracy vs GPU cost. Smaller = cheaper per doc.
-- **Backend**: vLLM scripts are usually fastest at scale. `transformers` and
-  `falcon-perception` are alternatives for specific models.
-- **Task support**: most scripts do plain text; some expose `--task-mode`
-  (table, formula, layout, etc.) — check the script's own docstring.
-
-For the authoritative benchmark numbers on any model in the table, query the model
-card programmatically — every OCR model publishes eval results on its card:
-
-    from huggingface_hub import HfApi
-    info = HfApi().model_info("tiiuae/Falcon-OCR", expand=["evalResults"])
-    for r in info.eval_results:
-        print(r.dataset_id, r.value)
-
-See the [leaderboard data guide](https://huggingface.co/docs/hub/en/leaderboard-data-guide)
-for the full API. This is more reliable than any markdown table that might drift.
-
-## Getting help from a specific script
-
-Each script has a docstring at the top with a description and usage examples. To read it
-without downloading:
-
-    curl -s https://huggingface.co/datasets/uv-scripts/ocr/raw/main/<script>.py | head -100
-
-Or open the URL in a browser. Running `uv run <url> --help` locally may fail if the
-script has GPU-only dependencies — reading the docstring is more reliable.
-
-## The main pattern: dataset → dataset
-
-Most scripts take an input HF dataset ID and push results to an output HF dataset ID:
-
-    hf jobs uv run --flavor l4x1 -s HF_TOKEN \
-      https://huggingface.co/datasets/uv-scripts/ocr/raw/main/<script>.py \
-      <input-dataset-id> <output-dataset-id> [--max-samples N] [--shuffle]
-
-The script adds a `markdown` column to the input dataset and pushes the merged result
-to the output dataset ID on the Hub.
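A minimal sketch of that merge-and-push step with the `datasets` library (repo names illustrative; `run_ocr` is a hypothetical per-image helper):

```python
from datasets import load_dataset

ds = load_dataset("your-user/input-dataset", split="train")
markdown = [run_ocr(row["image"]) for row in ds]  # hypothetical OCR helper

ds = ds.add_column("markdown", markdown)    # merge results into the input rows
ds.push_to_hub("your-user/output-dataset")  # push the merged dataset to the Hub
```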
-
-## Alternative: directory → directory (bucket variants)
-
-A couple of scripts have `-bucket.py` variants (currently `falcon-ocr-bucket.py` and
-`glm-ocr-bucket.py`) that read from a mounted directory and write one `.md` per image
-(or per PDF page). Useful with HF Buckets via `-v`:
-
-    hf jobs uv run --flavor l4x1 -s HF_TOKEN \
-      -v hf://buckets/<user>/<input>:/input:ro \
-      -v hf://buckets/<user>/<output>:/output \
-      https://huggingface.co/datasets/uv-scripts/ocr/raw/main/<script>-bucket.py \
-      /input /output
-
-`ls` the repo to check whether a `-bucket.py` variant exists for the model you want
-before assuming it's available.
-
-## Common flags across dataset-mode scripts
-
-Most scripts support: `--max-samples`, `--shuffle`, `--seed`, `--split`, `--image-column`,
-`--output-column`, `--private`, `--config`, `--create-pr`, `--verbose`. Read the script's
-docstring for the authoritative list — individual scripts may add model-specific options
-like `--task-mode`.
-
-## Gotchas
-
-- **Secrets**: pass `-s HF_TOKEN` to forward the user's token into the job.
-- **GPU required**: all scripts exit if CUDA isn't available. `l4x1` is the cheapest
-  GPU flavor and works for models up to ~3B. Check `hf jobs hardware` for current options.
-- **First run is slow**: model download + `torch.compile` / vLLM warmup dominates small
-  runs. Cost per doc drops sharply past a few hundred images — test with `--max-samples 10`
-  first, then scale.
-- **Don't poll jobs**: jobs run async. Submit once, check status later with
-  `hf jobs ps` or `hf jobs logs <id>`.
CLAUDE.md
CHANGED
@@ -3,17 +3,10 @@
 ## Active Scripts
 
 ### DeepSeek-OCR v1 (`deepseek-ocr-vllm.py`)
-✅ **Production Ready**
-- See: https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-OCR.html
-
-**Known issue (vLLM nightly, 2026-02-12):** Some images trigger a crop dimension validation error:
-```
-ValueError: images_crop dim[2] expected 1024, got 640. Expected shape: ('bnp', 3, 1024, 1024), but got torch.Size([0, 3, 640, 640])
-```
-This is a vLLM bug: the preprocessor defaults to gundam mode (image_size=640), but the tensor validator expects 1024x1024 even when the crop batch is empty (dim 0). Hit 2/10 on `davanstrien/ufo-ColPali`, 0/10 on NLS Medical History. Likely depends on image aspect ratios. No upstream issue filed yet. Related feature request: [vllm#28160](https://github.com/vllm-project/vllm/issues/28160) (no way to control resolution mode via mm-processor-kwargs).
+✅ **Production Ready**
+- Fully supported by vLLM
+- Fast batch processing
+- Tested and working on HF Jobs
 
 ### LightOnOCR-2-1B (`lighton-ocr2.py`)
 ✅ **Production Ready** (Fixed 2026-01-29)
@@ -82,117 +75,90 @@ hf jobs uv run --flavor l4x1 \
 - Backend: Transformers (single image processing)
 - Requires: `transformers>=5.0.0`
 
-##
-✅ **Production Ready** (Fixed 2026-03-14)
-
-**Status:** Working with vLLM 0.17.1 stable
-
-**Model availability:** The v1.5 model is NOT on HF from the original authors. We mirrored it from ModelScope to `davanstrien/dots.ocr-1.5`. Original: https://modelscope.cn/models/rednote-hilab/dots.ocr-1.5. License: MIT-based (with supplementary terms for responsible use).
-
-**Key fix (2026-03-14):** Must pass `chat_template_content_format="string"` to `llm.chat()`. The model's `tokenizer_config.json` chat template expects string content (not openai-format lists). Without this, the model generates empty output (~1 token then EOS). The separate `chat_template.json` file handles multimodal content but vLLM uses the tokenizer_config template by default.
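A minimal sketch of the call shape described by that key fix (message format assumed from vLLM's OpenAI-style chat API):

```python
import base64
import io

from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(model="davanstrien/dots.ocr-1.5", trust_remote_code=True)

img = Image.open("page.png").convert("RGB")
buf = io.BytesIO()
img.save(buf, format="PNG")
data_uri = "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()

messages = [{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": data_uri}},
        {"type": "text", "text": "Extract the text content from this image."},
    ],
}]
outputs = llm.chat(
    [messages],
    SamplingParams(temperature=0.1, max_tokens=24000),
    # The fix: render message content as a plain string so the
    # tokenizer_config chat template receives what it expects.
    chat_template_content_format="string",
)
print(outputs[0].outputs[0].text)
```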
+## Pending Development
 
+### DeepSeek-OCR-2 (Visual Causal Flow Architecture)
+
+**Status:** ⏳ Waiting for vLLM upstream support
+
+**Context:**
+DeepSeek-OCR-2 is the next-generation OCR model (3B parameters) with a Visual Causal Flow architecture offering improved quality. We attempted to create a UV script (`deepseek-ocr2-vllm.py`) but encountered a blocker.
+
+**Blocker:**
+vLLM does not yet support the `DeepseekOCR2ForCausalLM` architecture in an official release.
+
+**PR to Watch:**
+🔗 https://github.com/vllm-project/vllm/pull/33165
+
+This PR adds DeepSeek-OCR-2 support but is currently:
+- ⚠️ **Open** (not merged)
+- Has unresolved review comments
+- Pre-commit checks failing
+- Issues: hardcoded parameters, device mismatch bugs, missing error handling
+
+**What's Needed:**
+1. PR #33165 needs to be reviewed, fixed, and merged
+2. vLLM needs to cut a release that includes the merge
+3. Then we can add these dependencies to our script:
+```python
+# dependencies = [
+#     "datasets>=4.0.0",
+#     "huggingface-hub",
+#     "pillow",
+#     "vllm",
+#     "tqdm",
+#     "toolz",
+#     "torch",
+#     "addict",
+#     "matplotlib",
+# ]
+```
+
+**Implementation Progress:**
+- ✅ Created `deepseek-ocr2-vllm.py` script
+- ✅ Fixed dependency issues (pyarrow, datasets>=4.0.0)
+- ✅ Tested script structure on HF Jobs
+- ❌ Blocked: vLLM doesn't recognize the architecture
+
+**Partial Implementation:**
+The file `deepseek-ocr2-vllm.py` exists in this repo but is **not functional** until vLLM support lands. Consider it a draft.
+
+**Testing Evidence:**
+When we ran on HF Jobs, we got:
 ```
-- 3/3 samples on L4: OCR mode working, ~147 toks/s output
-- 3/3 samples on L4: layout-all mode working, structured JSON with bboxes
-- 10/10 samples on A100: layout-only mode on NLS Highland News, ~670 toks/s output
-- Output datasets: `davanstrien/dots-ocr-1.5-smoke-test-v3`, `davanstrien/dots-ocr-1.5-layout-test`, `davanstrien/dots-ocr-1.5-nls-layout-test`
-
-**Prompt modes:**
-- `ocr` — text extraction (default)
-- `layout-all` — layout + bboxes + categories + text (JSON)
-- `layout-only` — layout + bboxes + categories only (JSON)
-- `web-parsing` — webpage layout analysis (JSON) [new in v1.5]
-- `scene-spotting` — scene text detection [new in v1.5]
-- `grounding-ocr` — text from bounding box region [new in v1.5]
-- `general` — free-form (use with `--custom-prompt`) [new in v1.5]
-
-**Example usage:**
-```bash
-hf jobs uv run --flavor l4x1 \
-  -s HF_TOKEN \
-  /path/to/dots-ocr-1.5.py \
-  davanstrien/ufo-ColPali output-dataset \
-  --model davanstrien/dots.ocr-1.5 \
-  --max-samples 10 --shuffle --seed 42
+ValidationError: Model architectures ['DeepseekOCR2ForCausalLM'] are not supported for now.
+Supported architectures: [...'DeepseekOCRForCausalLM'...]
 ```
 
-- GitHub: https://github.com/rednote-hilab/dots.ocr
-
----
-
-## Pending Development
-
-### DeepSeek-OCR-2 (`deepseek-ocr2-vllm.py`)
-✅ **Production Ready** (2026-02-12)
-
-**Status:** Working with vLLM nightly (requires nightly for `DeepseekOCR2ForCausalLM` support, not yet in stable 0.15.1)
-
-**What was done:**
-- Rewrote the broken draft script (which used base64/llm.chat/resolution modes)
-- Uses the same proven pattern as v1: `llm.generate()` with PIL images + `NGramPerReqLogitsProcessor`
-- Key v2 addition: `limit_mm_per_prompt={"image": 1}` in LLM init
-- Added `addict` and `matplotlib` as dependencies (required by the model's HF custom code)
-
-**Test results (2026-02-12):**
-- 10/10 samples processed successfully on L4 GPU
-- Processing time: 6.4 min (includes model download + warmup)
-- Model: 6.33 GiB, ~475 toks/s input, ~246 toks/s output
-- Output dataset: `davanstrien/deepseek-ocr2-nls-test`
-
-**Example usage:**
-```bash
-hf jobs uv run --flavor l4x1 \
-  -s HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr2-vllm.py \
-  NationalLibraryOfScotland/medical-history-of-british-india output-dataset \
-  --max-samples 10 --shuffle --seed 42
-```
+**Next Steps (when PR merges):**
+1. Update `deepseek-ocr2-vllm.py` dependencies to include `addict` and `matplotlib`
+2. Test on HF Jobs with a small dataset (10 samples)
+3. Verify output quality
+4. Update README.md with a DeepSeek-OCR-2 section
+5. Document v1 vs v2 differences
 
-- Uses same API pattern as v1: `NGramPerReqLogitsProcessor`, `SamplingParams(temperature=0, skip_special_tokens=False)`, `extra_args` for ngram settings
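A sketch of that shared v1/v2 pattern (import path as in the vLLM DeepSeek-OCR recipe; prompt string and ngram values are illustrative):

```python
from PIL import Image
from vllm import LLM, SamplingParams
from vllm.model_executor.models.deepseek_ocr import NGramPerReqLogitsProcessor

llm = LLM(
    model="deepseek-ai/DeepSeek-OCR-2",
    logits_processors=[NGramPerReqLogitsProcessor],
    limit_mm_per_prompt={"image": 1},  # the key v2 addition
    trust_remote_code=True,
)
sampling = SamplingParams(
    temperature=0,
    max_tokens=8192,
    skip_special_tokens=False,
    # Consumed per-request by the ngram repetition blocker.
    extra_args={"ngram_size": 30, "window_size": 90},
)
outputs = llm.generate(
    [{
        "prompt": "<image>\nFree OCR.",
        "multi_modal_data": {"image": Image.open("page.png").convert("RGB")},
    }],
    sampling,
)
print(outputs[0].outputs[0].text)
```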
+**Alternative Approaches (if urgent):**
+- Create a transformers-based script (slower, no vLLM batching; sketched below)
+- Use DeepSeek's official repo setup (complex, not UV-script compatible)
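A rough sketch of the transformers route (generic `trust_remote_code` loading only — DeepSeek-OCR-2's custom code defines its own inference entry point, so consult the model card for the actual call):

```python
from transformers import AutoModel, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-OCR-2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).cuda().eval()
# The repo's custom code exposes a model-specific inference method;
# single-image processing only, so no vLLM-style batching.
```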
 
 **Model Information:**
 - Model ID: `deepseek-ai/DeepSeek-OCR-2`
 - Model Card: https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
 - GitHub: https://github.com/deepseek-ai/DeepSeek-OCR-2
 - Parameters: 3B
+- Resolution: (0-6)×768×768 + 1×1024×1024 patches
+- Key improvement: Visual Causal Flow architecture
+
+**Resolution Modes (for v2):**
+```python
+RESOLUTION_MODES = {
+    "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
+    "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
+    "base": {"base_size": 1024, "image_size": 768, "crop_mode": False},  # v2 optimized
+    "large": {"base_size": 1280, "image_size": 1024, "crop_mode": False},
+    "gundam": {"base_size": 1024, "image_size": 768, "crop_mode": True},  # v2 optimized
+}
+```
 
 ## Other OCR Scripts
 
@@ -242,314 +208,6 @@ uv run glm-ocr.py uv-scripts/ocr-smoke-test smoke-out --max-samples 5
 
 ---
 
-## OCR Benchmark Coordinator
-
-**Status:** Working end-to-end (2026-02-14)
-
-Launches N OCR models on the same dataset via `run_uv_job()`, each pushing to a shared repo as a separate config via `--config`/`--create-pr`. Eval is done separately with `ocr-elo-bench.py`.
-
-### Model Registry (4 models)
-
-| Slug | Model ID | Size | Default GPU | Notes |
-|------|----------|------|-------------|-------|
-| `glm-ocr` | `zai-org/GLM-OCR` | 0.9B | l4x1 | |
-| `deepseek-ocr` | `deepseek-ai/DeepSeek-OCR` | 4B | l4x1 | Auto-passes `--prompt-mode free` (no grounding tags) |
-| `lighton-ocr-2` | `lightonai/LightOnOCR-2-1B` | 1B | a100-large | |
-| `dots-ocr` | `rednote-hilab/dots.ocr` | 1.7B | l4x1 | Stable vLLM (>=0.9.1) |
-
-Each model entry has a `default_args` list for model-specific flags (e.g., DeepSeek uses `["--prompt-mode", "free"]`).
-
-### Workflow
-```bash
-# Launch all 4 models on the same data
-uv run ocr-bench-run.py source-dataset --output my-bench --max-samples 50
-
-# Evaluate directly from PRs (no merge needed)
-uv run ocr-elo-bench.py my-bench --from-prs --mode both
-
-# Or merge + evaluate
-uv run ocr-elo-bench.py my-bench --from-prs --merge-prs --mode both
-
-# Other useful flags
-uv run ocr-bench-run.py --list-models                   # Show registry table
-uv run ocr-bench-run.py ... --dry-run                   # Preview without launching
-uv run ocr-bench-run.py ... --wait                      # Poll until complete
-uv run ocr-bench-run.py ... --models glm-ocr dots-ocr   # Subset of models
-```
-
-### Eval script features (`ocr-elo-bench.py`)
-- `--from-prs`: Auto-discovers open PRs on the dataset repo, extracts config names from the PR title's `[config-name]` suffix, and loads data from `refs/pr/N` without merging (sketched after this list)
-- `--merge-prs`: Auto-merges discovered PRs via `api.merge_pull_request()` before loading
-- `--configs`: Manually specify which configs to load (for merged repos)
-- `--mode both`: Runs pairwise ELO + pointwise scoring
-- Flat mode (original behavior) still works when `--configs`/`--from-prs` are not used
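A minimal sketch of that discovery flow with `huggingface_hub` (repo ID illustrative):

```python
import re

from datasets import load_dataset
from huggingface_hub import HfApi

repo_id = "davanstrien/ocr-bench-test"
api = HfApi()
for disc in api.get_repo_discussions(repo_id, repo_type="dataset"):
    if disc.is_pull_request and disc.status == "open":
        # Config name encoded as a "[config-name]" suffix in the PR title.
        match = re.search(r"\[([^\]]+)\]\s*$", disc.title)
        if match:
            ds = load_dataset(
                repo_id,
                match.group(1),
                revision=f"refs/pr/{disc.num}",  # read the PR branch without merging
                split="train",
            )
```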
-
-### Scripts pushed to Hub
-All 4 scripts have been pushed to `uv-scripts/ocr` on the Hub with `--config`/`--create-pr` support:
-- `glm-ocr.py` ✅
-- `deepseek-ocr-vllm.py` ✅
-- `lighton-ocr2.py` ✅
-- `dots-ocr.py` ✅
-
-### Benchmark Results
-
-#### Run 1: NLS Medical History (2026-02-14) — Pilot
-
-**Dataset:** `NationalLibraryOfScotland/medical-history-of-british-india` (10 samples, shuffled, seed 42)
-**Output repo:** `davanstrien/ocr-bench-test` (4 open PRs)
-**Judge:** `Qwen/Qwen2.5-VL-72B-Instruct` via HF Inference Providers
-**Content:** Historical English, degraded scans of medical texts
-
-**ELO (pairwise, 5 samples evaluated):**
-1. DoTS.ocr — 1540 (67% win rate)
-2. DeepSeek-OCR — 1539 (57%)
-3. LightOnOCR-2 — 1486 (50%)
-4. GLM-OCR — 1436 (29%)
-
-**Pointwise (5 samples):**
-1. DeepSeek-OCR — 5.0/5.0
-2. GLM-OCR — 4.6
-3. LightOnOCR-2 — 4.4
-4. DoTS.ocr — 4.2
-
-**Key finding:** DeepSeek-OCR's `--prompt-mode document` produces grounding tags (`<|ref|>`, `<|det|>`) that the judge penalizes heavily. Switching to `--prompt-mode free` (now the default in the registry) made it jump from last place to the top 2.
-
-**Caveat:** 5 samples is far too few for stable rankings. The judge VLM is called once per comparison (pairwise) or once per model-sample (pointwise) via the HF Inference Providers API.
-
-#### Run 2: Rubenstein Manuscript Catalog (2026-02-15) — First Full Benchmark
-
-**Dataset:** `biglam/rubenstein-manuscript-catalog` (50 samples, shuffled, seed 42)
-**Output repo:** `davanstrien/ocr-bench-rubenstein` (4 PRs)
-**Judge:** Jury of 2 via `ocr-vllm-judge.py` — `Qwen/Qwen2.5-VL-7B-Instruct` + `Qwen/Qwen3-VL-8B-Instruct` on A100
-**Content:** ~48K typewritten + handwritten manuscript catalog cards from Duke University (CC0)
-
-**ELO (pairwise, 50 samples, 300 comparisons, 0 parse failures):**
-
-| Rank | Model | ELO | W | L | T | Win% |
-|------|-------|-----|---|---|---|------|
-| 1 | LightOnOCR-2-1B | 1595 | 100 | 50 | 0 | 67% |
-| 2 | DeepSeek-OCR | 1497 | 73 | 77 | 0 | 49% |
-| 3 | GLM-OCR | 1471 | 57 | 93 | 0 | 38% |
-| 4 | dots.ocr | 1437 | 70 | 80 | 0 | 47% |
-
-**OCR job times** (all 50 samples each):
-- dots-ocr: 5.3 min (L4)
-- deepseek-ocr: 5.6 min (L4)
-- glm-ocr: 5.7 min (L4)
-- lighton-ocr-2: 6.4 min (A100)
-
-**Key findings:**
-- **LightOnOCR-2-1B dominates** on manuscript catalog cards (67% win rate, 100-point ELO gap over 2nd place) — a very different result from the NLS pilot, where it placed 3rd
-- **Rankings are dataset-dependent**: NLS historical medical texts favored DoTS.ocr and DeepSeek-OCR; Rubenstein typewritten/handwritten cards favor LightOnOCR-2
-- **A jury of small models works well**: 0 parse failures on 300 comparisons thanks to vLLM structured output (xgrammar). Majority voting between 2 judges provides robustness
-- **50 samples gives meaningful separation**: Clear ELO gaps (1595 → 1497 → 1471 → 1437), unlike the noisy 5-sample pilot
-- This validates the multi-dataset benchmark approach — no single dataset tells the whole story
-
-#### Run 3: UFO-ColPali (2026-02-15) — Cross-Dataset Validation
-
-**Dataset:** `davanstrien/ufo-ColPali` (50 samples, shuffled, seed 42)
-**Output repo:** `davanstrien/ocr-bench-ufo` (4 PRs)
-**Judge:** `Qwen/Qwen3-VL-30B-A3B-Instruct` via `ocr-vllm-judge.py` on A100 (updated prompt)
-**Content:** Mixed modern documents (invoices, reports, forms, etc.)
-
-**ELO (pairwise, 50 samples, 294 comparisons):**
-
-| Rank | Model | ELO | W | L | T | Win% |
-|------|-------|-----|---|---|---|------|
-| 1 | DeepSeek-OCR | 1827 | 130 | 17 | 0 | 88% |
-| 2 | dots.ocr | 1510 | 64 | 83 | 0 | 44% |
-| 3 | LightOnOCR-2-1B | 1368 | 77 | 70 | 0 | 52% |
-| 4 | GLM-OCR | 1294 | 23 | 124 | 0 | 16% |
-
-**Human validation (30 comparisons):** DeepSeek-OCR #1 (same as judge), LightOnOCR-2 #3 (same). The middle pack (GLM-OCR #2 human / #4 judge; dots.ocr #4 human / #2 judge) shuffled.
-
-#### Cross-Dataset Comparison (Human-Validated)
-
-| Model | Rubenstein Human | Rubenstein Kimi | UFO Human | UFO 30B |
-|-------|:---------------:|:---------------:|:---------:|:-------:|
-| DeepSeek-OCR | **#1** | **#1** | **#1** | **#1** |
-| GLM-OCR | #2 | #3 | #2 | #4 |
-| LightOnOCR-2 | #4 | #2 | #3 | #3 |
-| dots.ocr | #3 | #4 | #4 | #2 |
-
-**Conclusion:** DeepSeek-OCR is consistently #1 across datasets and evaluation methods. Middle-pack rankings are dataset-dependent. The updated prompt fixed the LightOnOCR-2 overrating seen with the old prompt and small judges.
-
-*Note: NLS pilot results (5 samples, 72B API judge) omitted — not comparable with the newer methodology.*
-
-### Known Issues / Next Steps
-
-1. ✅ **More samples needed** — Done. The Rubenstein run (2026-02-15) used 50 samples and produced clear ELO separation across all 4 models.
-2. ✅ **Smaller judge model** — Tested with Qwen2.5-VL 7B + Qwen3-VL 8B via `ocr-vllm-judge.py`. Works well with structured output (0 parse failures). A jury of small models compensates for individual model weakness. See the "Offline vLLM Judge" section below.
-3. **Auto-merge in coordinator** — `--wait` could auto-merge PRs after successful jobs. Not yet implemented.
-4. **Adding more models** — `rolm-ocr.py` exists but needs `--config`/`--create-pr` added. `deepseek-ocr2-vllm.py`, `paddleocr-vl-1.5.py`, etc. could also be added to the registry.
-5. **Leaderboard Space** — See the future section below.
-6. ✅ **Result persistence** — `ocr-vllm-judge.py` now has a `--save-results REPO_ID` flag. First dataset: `davanstrien/ocr-bench-rubenstein-judge`.
-7. **More diverse datasets** — Rankings are dataset-dependent (LightOnOCR-2 wins on Rubenstein; DoTS.ocr won the pilot on NLS). Need benchmarks on tables, formulas, multilingual, and modern documents for a complete picture.
-8. ✅ **Human validation** — `ocr-human-eval.py` completed on Rubenstein (30/30). Tested 3 judge configs. **Kimi K2.5 (170B) via Novita + updated prompt = best human agreement** (the only judge to match the human's #1). Now the default in `ocr-jury-bench.py`. See `OCR-BENCHMARK.md` for the full comparison.
-
----
-
-## Offline vLLM Judge (`ocr-vllm-judge.py`)
-
-**Status:** Working end-to-end (2026-02-15)
-
-Runs pairwise OCR quality comparisons using a local VLM judge via vLLM's offline `LLM()` pattern. Supports jury mode (multiple models vote sequentially on the same GPU) with majority voting.
-
-### Why use this over the API judge (`ocr-jury-bench.py`)?
-
-| | API judge (`ocr-jury-bench.py`) | Offline judge (`ocr-vllm-judge.py`) |
-|---|---|---|
-| Parse failures | Needs retries for malformed JSON | 0 failures — vLLM structured output guarantees valid JSON |
-| Network | Rate limits, timeouts, transient errors | Zero network calls |
-| Cost | Per-token API pricing | Just GPU time |
-| Judge models | Limited to Inference Providers catalog | Any vLLM-supported VLM |
-| Jury mode | Sequential API calls per judge | Sequential model loading, batch inference per judge |
-| Best for | Quick spot-checks, access to 72B models | Batch evaluation (50+ samples), reproducibility |
-
-**Pushed to Hub:** `uv-scripts/ocr` as `ocr-vllm-judge.py` (2026-02-15)
-
-### Test Results (2026-02-15)
-
-**Test 1 — Single judge, 1 sample, L4:**
-- Qwen2.5-VL-7B-Instruct, 6/6 comparisons, 0 parse failures
-- Total time: ~3 min (including model download + warmup)
-
-**Test 2 — Jury of 2, 3 samples, A100:**
-- Qwen2.5-VL-7B + Qwen3-VL-8B, 15/15 comparisons, 0 parse failures
-- GPU cleanup between models: successful (nanobind warnings are cosmetic)
-- Majority-vote aggregation working (`[2/2]` unanimous, `[1/2]` split)
-- Total time: ~4 min (including both model downloads)
-
-**Test 3 — Full benchmark, 50 samples, A100 (Rubenstein Manuscript Catalog):**
-- Qwen2.5-VL-7B + Qwen3-VL-8B jury, 300/300 comparisons, 0 parse failures
-- Input: `davanstrien/ocr-bench-rubenstein` (4 PRs from `ocr-bench-run.py`)
-- Produced clear ELO rankings with meaningful separation
-- See "Benchmark Results → Run 2" in the OCR Benchmark Coordinator section above
-
-### Usage
-
-```bash
-# Single judge on L4
-hf jobs uv run --flavor l4x1 -s HF_TOKEN \
-  ocr-vllm-judge.py davanstrien/ocr-bench-nls-50 --from-prs \
-  --judge-model Qwen/Qwen2.5-VL-7B-Instruct --max-samples 10
-
-# Jury of 2 on A100 (recommended for jury mode)
-hf jobs uv run --flavor a100-large -s HF_TOKEN \
-  ocr-vllm-judge.py davanstrien/ocr-bench-nls-50 --from-prs \
-  --judge-model Qwen/Qwen2.5-VL-7B-Instruct \
-  --judge-model Qwen/Qwen3-VL-8B-Instruct \
-  --max-samples 50
-```
-
-### Implementation Notes
-- Comparisons are built upfront on CPU as `NamedTuple`s, then batched to vLLM in a single `llm.chat()` call
-- Structured output via a compatibility shim: `StructuredOutputsParams` (vLLM >= 0.12) → `GuidedDecodingParams` (older) → prompt-based fallback (sketched after this list)
-- GPU cleanup between jury models: `destroy_model_parallel()` + `gc.collect()` + `torch.cuda.empty_cache()`
-- Position-bias mitigation: A/B order randomized per comparison
-- A100 recommended for jury mode; L4 works for a single 7B judge
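A sketch of that compatibility shim (schema and token budget illustrative; the prompt-based fallback is omitted):

```python
from vllm import SamplingParams

JUDGE_SCHEMA = {
    "type": "object",
    "properties": {"winner": {"type": "string", "enum": ["A", "B", "tie"]}},
    "required": ["winner"],
}

def judge_sampling_params() -> SamplingParams:
    try:
        from vllm.sampling_params import StructuredOutputsParams  # vLLM >= 0.12
        return SamplingParams(
            max_tokens=256,
            structured_outputs=StructuredOutputsParams(json=JUDGE_SCHEMA),
        )
    except ImportError:
        from vllm.sampling_params import GuidedDecodingParams  # older vLLM
        return SamplingParams(
            max_tokens=256,
            guided_decoding=GuidedDecodingParams(json=JUDGE_SCHEMA),
        )
```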
-
-### Next Steps
-1. ✅ **Scale test** — Completed on Rubenstein Manuscript Catalog (50 samples, 300 comparisons, 0 parse failures). Rankings differ from the API-based pilot (different dataset + judge), validating the multi-dataset approach.
-2. ✅ **Result persistence** — Added a `--save-results REPO_ID` flag. Pushes 3 configs to the HF Hub: `comparisons` (one row per pairwise comparison), `leaderboard` (ELO + win/loss/tie per model), `metadata` (source dataset, judge models, seed, timestamp). First dataset: `davanstrien/ocr-bench-rubenstein-judge`.
-3. **Integrate into `ocr-bench-run.py`** — Add an `--eval` flag that auto-runs the vLLM judge after OCR jobs complete
-
----
-
-## Blind Human Eval (`ocr-human-eval.py`)
-
-**Status:** Working (2026-02-15)
-
-Gradio app for blind A/B comparison of OCR outputs. Shows the document image + two anonymized OCR outputs; the human picks a winner or a tie. Computes ELO rankings from human annotations and optionally compares them against automated judge results.
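The ELO computation itself is the standard update rule; a minimal sketch (K-factor illustrative):

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0) -> tuple[float, float]:
    """score_a: 1.0 = A wins, 0.0 = B wins, 0.5 = tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    return r_a + k * (score_a - expected_a), r_b + k * ((1.0 - score_a) - (1.0 - expected_a))

ratings = {"model-a": 1500.0, "model-b": 1500.0}
ratings["model-a"], ratings["model-b"] = elo_update(ratings["model-a"], ratings["model-b"], 1.0)
```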
-
-### Usage
-
-```bash
-# Basic — blind human eval only
-uv run ocr-human-eval.py davanstrien/ocr-bench-rubenstein --from-prs --max-samples 5
-
-# With judge comparison — loads automated judge results for agreement analysis
-uv run ocr-human-eval.py davanstrien/ocr-bench-rubenstein --from-prs \
-  --judge-results davanstrien/ocr-bench-rubenstein-judge --max-samples 5
-```
-
-### Features
-- **Blind evaluation**: Two-tab design — the Evaluate tab never shows model names; the Results tab reveals rankings
-- **Position-bias mitigation**: A/B order randomly swapped per comparison
-- **Resume support**: JSON annotations saved atomically after each vote; restart the app to resume where you left off
-- **Live agreement tracking**: Per-vote feedback shows running agreement with the automated judge (when `--judge-results` is provided)
-- **Split-jury prioritization**: Comparisons where the automated judges disagreed ("1/2" agreement) are shown first — the highest annotation value per vote
-- **Image variety**: Round-robin interleaving by sample so you don't see the same document image repeatedly
-- **Soft/hard disagreement analysis**: Distinguishes harmless tie-vs-winner disagreements from genuine opposite-winner errors
-
-### First Validation Results (Rubenstein, 30 annotations)
-
-Tested 3 judge configs against 30 human annotations. **Kimi K2.5 (170B) via Novita** is the only judge to match the human's #1 pick (DeepSeek-OCR). Small models (7B/8B/30B) all overrate LightOnOCR-2 due to a bias toward its commentary style. The updated prompt (prioritizing faithfulness > completeness > accuracy) helps, but model size is the bigger factor.
-
-Full results and analysis are in `OCR-BENCHMARK.md` → "Human Validation" section.
-
-### Next Steps
-1. **Second dataset** — Run on NLS Medical History for cross-dataset human validation
-2. **Multiple annotators** — Currently single-user; could support an annotator ID for inter-annotator agreement
-3. **Remaining LightOnOCR-2 gap** — Still #2 (Kimi) vs #4 (human). May need more samples, or stripping commentary in preprocessing
-
----
-
-## Future: Leaderboard HF Space
-
-**Status:** Idea (noted 2026-02-14)
-
-Build a Hugging Face Space with a persistent leaderboard that gets updated after each benchmark run. This would give a public-facing view of OCR model quality.
-
-**Design ideas:**
-- Gradio or static Space displaying ELO ratings + pointwise scores
-- `ocr-elo-bench.py` could push results to a dataset that the Space reads
-- Or the Space itself could run evaluation on demand
-- Show per-document comparisons (image + side-by-side OCR outputs)
-- Historical tracking — how scores change across model versions
-- Filter by document type (historical, modern, tables, formulas, multilingual)
-
-**Open questions:**
-- Should the eval script push structured results to a dataset (e.g., `uv-scripts/ocr-leaderboard-data`)?
-- Static leaderboard (updated by CI/scheduled job) vs interactive (evaluate on demand)?
-- Include sample outputs for qualitative comparison?
-- How to handle different eval datasets (NLS medical history vs UFO vs others)?
-
----
-
-## Incremental Uploads / Checkpoint Strategy — ON HOLD
-
-**Status:** Waiting on HF Hub Buckets (noted 2026-02-20)
-
-**Current state:**
-- `glm-ocr.py` (v1): Simple batch-then-push. Works fine for most jobs.
-- `glm-ocr-v2.py`: Adds CommitScheduler-based incremental uploads + checkpoint/resume. ~400 extra lines. Works but has tradeoffs (commit noise, `--create-pr` incompatible, complex resume metadata). A sketch of the underlying pattern follows below.
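For reference, the v2 pattern is built on `huggingface_hub.CommitScheduler` (paths and interval illustrative):

```python
from pathlib import Path

from huggingface_hub import CommitScheduler

results_dir = Path("ocr-results")
results_dir.mkdir(exist_ok=True)
scheduler = CommitScheduler(
    repo_id="your-user/ocr-output",
    repo_type="dataset",
    folder_path=results_dir,
    every=5,  # minutes between background commits
)
# Files written under results_dir get committed in the background;
# checkpoint/resume logic still has to track completed batches itself.
```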
-
-**Decision: Do NOT port the v2 pattern to other scripts.** Wait for HF Hub Buckets instead.
-
-**Why:** Two open PRs will likely make the v2 CommitScheduler approach obsolete:
-- [huggingface_hub#3673](https://github.com/huggingface/huggingface_hub/pull/3673) — Buckets API: S3-like mutable object storage on HF, no git versioning overhead
-- [huggingface_hub#3807](https://github.com/huggingface/huggingface_hub/pull/3807) — HfFileSystem support for buckets: fsspec-compatible, so pyarrow/pandas/datasets can read/write `hf://buckets/` paths directly
-
-**What Buckets would replace:** Once landed, incremental saves become one line per batch:
-```python
-batch_ds.to_parquet(f"hf://buckets/{user}/ocr-scratch/shard-{batch_num:05d}.parquet")
-```
-No CommitScheduler, no CleanupScheduler, no resume metadata, no completed-batch scanning. Just write to the bucket path via fsspec. Final step: read back from the bucket, then `push_to_hub` to a clean dataset repo (compatible with `--create-pr`).
-
-**Action items when Buckets ships:**
-1. Test `hf://buckets/` fsspec writes on one script (glm-ocr is the guinea pig)
-2. Verify: write performance, atomicity (are partial writes visible?), auth propagation in HF Jobs
-3. If it works, adopt it as the standard pattern for all scripts — simple enough to inline (~20 lines)
-4. Retire the `glm-ocr-v2.py` CommitScheduler approach
-
-**Until then:** v1 scripts stay as-is. `glm-ocr-v2.py` exists if someone needs resume on a very large job today.
-
----
-
-**Last Updated:** 2026-02-20
+**Last Updated:** 2026-02-12
 **Watch PRs:**
-- **HfFileSystem Buckets** ([#3807](https://github.com/huggingface/huggingface_hub/pull/3807)): fsspec support for `hf://buckets/` paths. Key for zero-boilerplate writes from scripts.
-- DeepSeek-OCR-2 stable vLLM release: Currently only in nightly. Watch for the vLLM 0.16.0 stable release on PyPI to remove the nightly dependency.
-- nanobind leak warnings in vLLM structured output (xgrammar): Cosmetic only; does not affect results. May be fixed in a future xgrammar release.
+- DeepSeek-OCR-2: https://github.com/vllm-project/vllm/pull/33165
README.md
CHANGED
@@ -7,7 +7,7 @@ tags: [uv-script, ocr, vision-language-model, document-processing, hf-jobs]
 
 > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.
 
-
+14 OCR models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
 
 ## 🚀 Quick Start
 
@@ -33,7 +33,6 @@ That's it! The script will:
 
 | Script | Model | Size | Backend | Notes |
 |--------|-------|------|---------|-------|
-| `falcon-ocr.py` | [Falcon-OCR](https://huggingface.co/tiiuae/Falcon-OCR) | 0.3B | falcon-perception | Smallest in collection. #1 on multi-column docs and tables (olmOCR), Apache 2.0 |
 | `smoldocling-ocr.py` | [SmolDocling](https://huggingface.co/ds4sd/SmolDocling-256M-preview) | 256M | Transformers | DocTags structured output |
 | `glm-ocr.py` | [GLM-OCR](https://huggingface.co/zai-org/GLM-OCR) | 0.9B | vLLM | 94.62% OmniDocBench V1.5 |
 | `paddleocr-vl.py` | [PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL) | 0.9B | Transformers | 4 task modes (ocr/table/formula/chart) |
@@ -44,34 +43,17 @@ That's it! The script will:
 | `dots-ocr.py` | [DoTS.ocr](https://huggingface.co/Tencent/DoTS.ocr) | 1.7B | vLLM | 100+ languages |
 | `firered-ocr.py` | [FireRed-OCR](https://huggingface.co/FireRedTeam/FireRed-OCR) | 2.1B | vLLM | Qwen3-VL fine-tune, Apache 2.0 |
 | `nanonets-ocr.py` | [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) | 2B | vLLM | LaTeX, tables, forms |
-| `dots-
+| `dots-ocr-1.5.py` | [DoTS.ocr-1.5](https://huggingface.co/Tencent/DoTS.ocr-1.5) | 3B | vLLM | Updated multilingual model |
 | `nanonets-ocr2.py` | [Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-s) | 3B | vLLM | Next-gen, Qwen2.5-VL base |
 | `deepseek-ocr-vllm.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | vLLM | 5 resolution + 5 prompt modes |
 | `deepseek-ocr.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | Transformers | Same model, Transformers backend |
 | `deepseek-ocr2-vllm.py` | [DeepSeek-OCR-2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2) | 3B | vLLM | Newer, requires nightly vLLM |
-| `qianfan-ocr.py` | [Qianfan-OCR](https://huggingface.co/baidu/Qianfan-OCR) | 4.7B | vLLM | #1 OmniDocBench v1.5 (93.12), Layout-as-Thought, 192 languages |
 | `olmocr2-vllm.py` | [olmOCR-2-7B](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) | 7B | vLLM | 82.4% olmOCR-Bench |
 | `rolm-ocr.py` | [RolmOCR](https://huggingface.co/reducto/RolmOCR) | 7B | vLLM | Qwen2.5-VL based, general-purpose |
 | `numarkdown-ocr.py` | [NuMarkdown-8B](https://huggingface.co/numind/NuMarkdown-8B-Thinking) | 8B | vLLM | Reasoning-based OCR |
 
 </details>
 
-## Layout detection (not OCR)
-
-`pp-doclayout.py` runs PaddleOCR's [PP-DocLayout-L](https://huggingface.co/PaddlePaddle/PP-DocLayout-L) (or M / S / plus-L) and emits per-image **bounding boxes + region classes** (text, title, table, figure, formula, list, header, footer, ...) — it does NOT extract text. Useful for filtering pages, cropping regions for downstream OCR, dataset analysis, and training-data prep.
-
-| Script | Model | Size | Backend | Notes |
-|--------|-------|------|---------|-------|
-| `pp-doclayout.py` | [PP-DocLayout-L](https://huggingface.co/PaddlePaddle/PP-DocLayout-L) | 123M | paddleocr | Layout bboxes (no text). Bucket support: incremental parquet shards, resumable. |
-
-```bash
-hf jobs uv run --flavor l4x1 -s HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \
-  your-dataset your-layout-output --max-samples 10
-```
-
-Source/sink can be either an HF dataset repo OR an `hf://buckets/...` URL (auto-detected). Bucket output writes incremental zstd parquet shards via the buckets API — resumable across runs (snapshot-backed source listing) with no git/commit overhead. See the script's `--help` for all flags.
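A minimal sketch of that source/sink auto-detection (prefix check only; the script's real logic may do more):

```python
def is_bucket_url(path: str) -> bool:
    # Bucket sources/sinks look like hf://buckets/<user>/<bucket>/...;
    # anything else is treated as a Hub dataset repo ID.
    return path.startswith("hf://buckets/")
```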
-
 ## Common Options
 
 All scripts accept the same core flags. Model-specific defaults (batch size, context length, temperature) are tuned per model based on model card recommendations and can be overridden.
@@ -409,46 +391,7 @@ Advanced reasoning-based OCR using [numind/NuMarkdown-8B-Thinking](https://huggi
 - 🔍 **Multi-column Layouts** - Handles complex document structures
 - ✨ **Thinking Traces** - Optional inclusion of reasoning process with `--include-thinking`
 
-###
-
-Advanced multilingual OCR and SVG generation using [rednote-hilab/dots.mocr](https://huggingface.co/rednote-hilab/dots.mocr) with 3B parameters:
-
-- 🌍 **100+ Languages** - Extensive multilingual support
-- 📝 **Document OCR** - Clean text extraction (default mode)
-- 📊 **Layout Analysis** - Structured output with bboxes and categories
-- 📐 **Formula recognition** - LaTeX format support
-- 🖼️ **SVG generation** - Convert charts, UI layouts, figures to editable SVG code
-- 🔀 **8 prompt modes** - OCR, layout-all, layout-only, web-parsing, scene-spotting, grounding-ocr, svg, general
-- 📄 **[Paper](https://arxiv.org/abs/2603.13032)** - 83.9% on olmOCR-Bench
-
-**SVG variant:** Use `--model rednote-hilab/dots.mocr-svg` with `--prompt-mode svg` for best SVG results.
-
-**Quick start:**
-
-```bash
-# Basic OCR
-hf jobs uv run --flavor l4x1 \
-  -s HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
-  your-input-dataset your-output-dataset \
-  --max-samples 100
-
-# SVG generation from charts/figures
-hf jobs uv run --flavor l4x1 \
-  -s HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
-  your-charts svg-output \
-  --prompt-mode svg --model rednote-hilab/dots.mocr-svg
-
-# Layout analysis with bounding boxes
-hf jobs uv run --flavor l4x1 \
-  -s HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-mocr.py \
-  your-documents layout-output \
-  --prompt-mode layout-all
-```
-
-### DoTS.ocr v1 (`dots-ocr.py`)
+### DoTS.ocr (`dots-ocr.py`)
 
 Compact multilingual OCR using [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) with only 1.7B parameters:
 
@@ -479,55 +422,6 @@ hf jobs uv run --flavor l4x1 \
   --max-samples 100
 ```
 
-### Qianfan-OCR (`qianfan-ocr.py`) — #1 on OmniDocBench v1.5
-
-End-to-end document intelligence using [baidu/Qianfan-OCR](https://huggingface.co/baidu/Qianfan-OCR) with 4.7B parameters:
-
-- **93.12 on OmniDocBench v1.5** — #1 end-to-end model
-- **79.8 on OlmOCR Bench** — #1 end-to-end model
-- 🧠 **Layout-as-Thought** — Optional reasoning phase for complex layouts (`--think`)
-- 🌍 **192 languages** — Latin, CJK, Arabic, Cyrillic, and more
-- 📝 **OCR mode** — Document parsing to markdown (default)
-- 📊 **Table mode** — HTML table extraction
-- 📐 **Formula mode** — LaTeX recognition
-- 📈 **Chart mode** — Chart understanding and analysis
-- 🔍 **Scene mode** — Scene text extraction
-- 🔑 **KIE mode** — Key information extraction with custom prompts
-
-**Prompt Modes:**
-
-- `ocr`: Document parsing to markdown (default)
-- `table`: Table extraction to HTML
-- `formula`: Formula recognition to LaTeX
-- `chart`: Chart understanding
-- `scene`: Scene text extraction
-- `kie`: Key information extraction (requires `--custom-prompt`)
-
-**Quick start:**
-
-```bash
-# Basic OCR
-hf jobs uv run --flavor l4x1 \
-  -s HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
-  your-input-dataset your-output-dataset \
-  --max-samples 100
-
-# Layout-as-Thought for complex documents
-hf jobs uv run --flavor l4x1 \
-  -s HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
-  your-input-dataset your-output-dataset \
-  --think --max-samples 50
-
-# Key information extraction
-hf jobs uv run --flavor l4x1 \
-  -s HF_TOKEN \
-  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
-  invoices extracted-fields \
-  --prompt-mode kie --custom-prompt "Extract: name, date, total. Output as JSON."
-```
-
 ### olmOCR2 (`olmocr2-vllm.py`)
 
 High-quality document OCR using [allenai/olmOCR-2-7B-1025-FP8](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) optimized with GRPO reinforcement learning:
dots-mocr.py → dots-ocr-1.5.py
RENAMED
@@ -13,27 +13,23 @@
 # ///
 
 """
-Convert document images to markdown using
 
-
-on 100+ languages.
-
-grounding, recognition, semantic understanding, and interactive dialogue.
 
 Features:
 - Multilingual support (100+ languages)
 - Table extraction and formatting
 - Formula recognition
 - Layout-aware text extraction
-- Web screen parsing
-- Scene text spotting
-- SVG code generation (
-
-Model: rednote-hilab/dots.
-
-vLLM: Officially integrated since v0.11.0
-GitHub: https://github.com/rednote-hilab/dots.mocr
-Paper: https://arxiv.org/abs/2603.13032
 """
 
 import argparse
@@ -60,8 +56,8 @@ logger = logging.getLogger(__name__)
 
 
 # ────────────────────────────────────────────────────────────────
-#
-# Source: https://github.com/rednote-hilab/dots.
 # ────────────────────────────────────────────────────────────────
 
 PROMPT_TEMPLATES = {
@@ -84,19 +80,11 @@ PROMPT_TEMPLATES = {
 
 5. Final Output: The entire output must be a single JSON object.
 """,
-    # NOTE: Bboxes from layout-all/layout-only are in the resized image coordinate
-    # space (Qwen2VLImageProcessor smart_resize: max_pixels=11289600, factor=28),
-    # NOT original image coordinates. To map back, compute:
-    #   resized_h, resized_w = smart_resize(orig_h, orig_w)
-    #   scale_x, scale_y = orig_w / resized_w, orig_h / resized_h
     "layout-only": """Please output the layout information from this PDF image, including each layout's bbox and its category. The bbox should be in the format [x1, y1, x2, y2]. The layout categories for the PDF document include ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title']. Do not output the corresponding text. The layout result should be in JSON format.""",
     "web-parsing": """Parsing the layout info of this webpage image with format json:\n""",
     "scene-spotting": """Detect and recognize the text in the image.""",
     "grounding-ocr": """Extract text from the given bounding box on the image (format: [x1, y1, x2, y2]).\nBounding Box:\n""",
-    # SVG code generation — {width} and {height} are replaced with actual image dimensions.
-    # For best results, use --model rednote-hilab/dots.mocr-svg
-    # Uses higher temperature (0.9) and top_p (1.0) per official recommendation.
-    "svg": """Please generate the SVG code based on the image. viewBox="0 0 {width} {height}" """,
     "general": """ """,
 }
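A sketch of mapping those layout bboxes back to original-image coordinates, following the removed note above (assumes the Qwen2-VL `smart_resize` shipped with transformers):

```python
from transformers.models.qwen2_vl.image_processing_qwen2_vl import smart_resize

def bbox_to_original(bbox, orig_w, orig_h, factor=28, max_pixels=11289600):
    # smart_resize returns the (height, width) the processor resized to.
    resized_h, resized_w = smart_resize(orig_h, orig_w, factor=factor, max_pixels=max_pixels)
    scale_x, scale_y = orig_w / resized_w, orig_h / resized_h
    x1, y1, x2, y2 = bbox
    return [x1 * scale_x, y1 * scale_y, x2 * scale_x, y2 * scale_y]
```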
@@ -129,12 +117,6 @@ def make_ocr_message(
     # Convert to RGB
     pil_img = pil_img.convert("RGB")
 
-    # For SVG mode, inject actual image dimensions into the prompt
-    if "{width}" in prompt and "{height}" in prompt:
-        prompt = prompt.replace("{width}", str(pil_img.width)).replace(
-            "{height}", str(pil_img.height)
-        )
-
     # Convert to base64 data URI
     buf = io.BytesIO()
     pil_img.save(buf, format="PNG")
@@ -172,7 +154,7 @@ def create_dataset_card(
 tags:
 - ocr
 - document-processing
-- dots-
 - multilingual
 - markdown
 - uv-script
@@ -181,7 +163,7 @@ tags:
 
 # Document OCR using {model_name}
 
-This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using
 
 ## Processing Details
 
@@ -204,14 +186,13 @@ This dataset contains OCR results from images in [{source_dataset}](https://hugg
 
 ## Model Information
 
-
 - 100+ Languages — Multilingual document support
 - Table extraction — Structured data recognition
 - Formulas — Mathematical notation preservation
 - Layout-aware — Reading order and structure preservation
 - Web screen parsing — Webpage layout analysis
 - Scene text spotting — Text detection in natural scenes
-- SVG code generation — Charts, UI layouts, scientific figures to SVG
 
 ## Dataset Structure
 
@@ -241,10 +222,10 @@ for info in inference_info:
 
 ## Reproduction
 
-This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr)
 
 ```bash
-uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-
 {source_dataset} \\
 <output-dataset> \\
 --image-column {image_column} \\
@@ -264,7 +245,7 @@ def main(
     output_dataset: str,
     image_column: str = "image",
     batch_size: int = 16,
-    model: str = "rednote-hilab/dots.
     max_model_len: int = 24000,
     max_tokens: int = 24000,
     gpu_memory_utilization: float = 0.9,
@@ -283,7 +264,7 @@
     top_p: float = 0.9,
     verbose: bool = False,
 ):
-    """Process images from HF dataset through
 
     # Check CUDA availability first
     check_cuda_availability()
@@ -334,12 +315,6 @@
         gpu_memory_utilization=gpu_memory_utilization,
     )
 
-    # SVG mode uses higher temperature/top_p per official recommendation
-    if prompt_mode == "svg" and temperature == 0.1 and top_p == 0.9:
-        logger.info("SVG mode: using recommended temperature=0.9, top_p=1.0")
-        temperature = 0.9
-        top_p = 1.0
-
     sampling_params = SamplingParams(
         temperature=temperature,
         top_p=top_p,
@@ -355,7 +330,7 @@
     for batch_indices in tqdm(
         partition_all(batch_size, range(len(dataset))),
         total=(len(dataset) + batch_size - 1) // batch_size,
-        desc="
     ):
         batch_indices = list(batch_indices)
         batch_images = [dataset[i][image_column] for i in batch_indices]
@@ -364,12 +339,8 @@
         # Create messages for batch
         batch_messages = [make_ocr_message(img, prompt) for img in batch_images]
 
-        # Process with vLLM
-        outputs = llm.chat(
-            batch_messages,
-            sampling_params,
-            chat_template_content_format="string",
-        )
 
         # Extract outputs
         for output in outputs:
@@ -392,7 +363,7 @@
     # Handle inference_info tracking (for multi-model comparisons)
     inference_entry = {
         "model_id": model,
-        "model_name": "
        "column_name": output_column,
        "timestamp": datetime.now().isoformat(),
        "prompt_mode": prompt_mode if not custom_prompt else "custom",
@@ -473,7 +444,7 @@
     card = DatasetCard(card_content)
     card.push_to_hub(output_dataset, token=HF_TOKEN)
 
-    logger.info("
     logger.info(
|
| 478 |
f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
|
| 479 |
)
|
|
@@ -495,83 +466,77 @@ if __name__ == "__main__":
|
|
| 495 |
# Show example usage if no arguments
|
| 496 |
if len(sys.argv) == 1:
|
| 497 |
print("=" * 80)
|
| 498 |
-
print("
|
| 499 |
print("=" * 80)
|
| 500 |
-
print("\n3B multilingual OCR model
|
| 501 |
print("\nFeatures:")
|
| 502 |
print("- Multilingual support (100+ languages)")
|
| 503 |
print("- Fast processing with vLLM")
|
| 504 |
print("- Table extraction and formatting")
|
| 505 |
print("- Formula recognition")
|
| 506 |
print("- Layout-aware text extraction")
|
| 507 |
-
print("- Web screen parsing")
|
| 508 |
-
print("- Scene text spotting")
|
| 509 |
-
print("- SVG code generation (charts, UI, figures)")
|
| 510 |
print("\nPrompt modes:")
|
| 511 |
-
print(" ocr
|
| 512 |
-
print(" layout-all
|
| 513 |
-
print(" layout-only
|
| 514 |
-
print(" web-parsing
|
| 515 |
print(" scene-spotting - Scene text detection")
|
| 516 |
-
print(" grounding-ocr
|
| 517 |
-
print("
|
| 518 |
-
print(" general - Free-form (use with --custom-prompt)")
|
| 519 |
print("\nExample usage:")
|
| 520 |
print("\n1. Basic OCR:")
|
| 521 |
-
print(" uv run dots-
|
| 522 |
-
print("\n2.
|
| 523 |
-
print(
|
| 524 |
-
|
| 525 |
-
)
|
| 526 |
-
print("\n3. Web screen parsing:")
|
| 527 |
-
print(" uv run dots-mocr.py screenshots parsed --prompt-mode web-parsing")
|
| 528 |
print("\n4. Layout analysis with structure:")
|
| 529 |
-
print(" uv run dots-
|
| 530 |
print("\n5. Running on HF Jobs:")
|
| 531 |
print(" hf jobs uv run --flavor l4x1 \\")
|
| 532 |
print(" -s HF_TOKEN \\")
|
| 533 |
print(
|
| 534 |
-
" https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-
|
| 535 |
)
|
| 536 |
print(" input-dataset output-dataset")
|
| 537 |
print("\n" + "=" * 80)
|
| 538 |
-
print("\nFor full help, run: uv run dots-
|
| 539 |
sys.exit(0)
|
| 540 |
|
| 541 |
parser = argparse.ArgumentParser(
|
| 542 |
-
description="Document OCR using
|
| 543 |
formatter_class=argparse.RawDescriptionHelpFormatter,
|
| 544 |
epilog="""
|
| 545 |
-
Prompt Modes (official
|
| 546 |
ocr - Simple text extraction (default)
|
| 547 |
layout-all - Layout analysis with bboxes, categories, and text (JSON output)
|
| 548 |
layout-only - Layout detection with bboxes and categories only (JSON output)
|
| 549 |
-
web-parsing - Webpage layout analysis (JSON output)
|
| 550 |
-
scene-spotting - Scene text detection and recognition
|
| 551 |
-
grounding-ocr - Extract text from bounding box region
|
| 552 |
-
|
| 553 |
-
general - Free-form QA (use with --custom-prompt)
|
| 554 |
|
| 555 |
SVG Code Generation:
|
| 556 |
-
|
| 557 |
-
--
|
| 558 |
-
SVG mode automatically uses temperature=0.9, top_p=1.0 unless overridden.
|
| 559 |
|
| 560 |
Examples:
|
| 561 |
# Basic text OCR (default)
|
| 562 |
-
uv run dots-
|
| 563 |
-
|
| 564 |
-
# SVG generation with optimized variant
|
| 565 |
-
uv run dots-mocr.py charts svg-out --prompt-mode svg --model rednote-hilab/dots.mocr-svg
|
| 566 |
|
| 567 |
# Web screen parsing
|
| 568 |
-
uv run dots-
|
|
|
|
|
|
|
|
|
|
| 569 |
|
| 570 |
# Full layout analysis with structure
|
| 571 |
-
uv run dots-
|
| 572 |
|
| 573 |
# Random sampling for testing
|
| 574 |
-
uv run dots-
|
| 575 |
""",
|
| 576 |
)
|
| 577 |
|
|
@@ -590,8 +555,8 @@ Examples:
|
|
| 590 |
)
|
| 591 |
parser.add_argument(
|
| 592 |
"--model",
|
| 593 |
-
default="rednote-hilab/dots.
|
| 594 |
-
help="Model to use (default: rednote-hilab/dots.
|
| 595 |
)
|
| 596 |
parser.add_argument(
|
| 597 |
"--max-model-len",
|
|
|
|
| 13 |
# ///
|
| 14 |
|
| 15 |
"""
|
| 16 |
+
Convert document images to markdown using DoTS.ocr-1.5 with vLLM.
|
| 17 |
|
| 18 |
+
DoTS.ocr-1.5 is a 3B multilingual document parsing model with SOTA performance
|
| 19 |
+
on 100+ languages. Compared to v1 (1.7B), it adds web screen parsing, scene text
|
| 20 |
+
spotting, SVG code generation, and stronger multilingual document parsing.
|
|
|
|
| 21 |
|
| 22 |
Features:
|
| 23 |
- Multilingual support (100+ languages)
|
| 24 |
- Table extraction and formatting
|
| 25 |
- Formula recognition
|
| 26 |
- Layout-aware text extraction
|
| 27 |
+
- Web screen parsing (NEW in v1.5)
|
| 28 |
+
- Scene text spotting (NEW in v1.5)
|
| 29 |
+
- SVG code generation (requires dots.ocr-1.5-svg variant)
|
| 30 |
+
|
| 31 |
+
Model: rednote-hilab/dots.ocr-1.5
|
| 32 |
+
vLLM: Officially supported (same DotsOCRForCausalLM architecture as v1)
|
|
|
|
|
|
|
|
|
|
| 33 |
"""
|
| 34 |
|
| 35 |
import argparse
|
|
|
|
| 56 |
|
| 57 |
|
| 58 |
# ────────────────────────────────────────────────────────────────
|
| 59 |
+
# DoTS OCR 1.5 Prompt Templates (from official dots.ocr repo)
|
| 60 |
+
# Source: https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py
|
| 61 |
# ────────────────────────────────────────────────────────────────
|
| 62 |
|
| 63 |
PROMPT_TEMPLATES = {
|
|
|
|
| 80 |
|
| 81 |
5. Final Output: The entire output must be a single JSON object.
|
| 82 |
""",
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 83 |
"layout-only": """Please output the layout information from this PDF image, including each layout's bbox and its category. The bbox should be in the format [x1, y1, x2, y2]. The layout categories for the PDF document include ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title']. Do not output the corresponding text. The layout result should be in JSON format.""",
|
| 84 |
+
# NEW in v1.5:
|
| 85 |
"web-parsing": """Parsing the layout info of this webpage image with format json:\n""",
|
| 86 |
"scene-spotting": """Detect and recognize the text in the image.""",
|
| 87 |
"grounding-ocr": """Extract text from the given bounding box on the image (format: [x1, y1, x2, y2]).\nBounding Box:\n""",
|
|
|
|
|
|
|
|
|
|
|
|
|
| 88 |
"general": """ """,
|
| 89 |
}
|
| 90 |
|
|
|
|
| 117 |
# Convert to RGB
|
| 118 |
pil_img = pil_img.convert("RGB")
|
| 119 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 120 |
# Convert to base64 data URI
|
| 121 |
buf = io.BytesIO()
|
| 122 |
pil_img.save(buf, format="PNG")
|
|
|
|
| 154 |
tags:
|
| 155 |
- ocr
|
| 156 |
- document-processing
|
| 157 |
+
- dots-ocr-1.5
|
| 158 |
- multilingual
|
| 159 |
- markdown
|
| 160 |
- uv-script
|
|
|
|
| 163 |
|
| 164 |
# Document OCR using {model_name}
|
| 165 |
|
| 166 |
+
This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using DoTS.ocr-1.5, a 3B multilingual model with SOTA document parsing.
|
| 167 |
|
| 168 |
## Processing Details
|
| 169 |
|
|
|
|
| 186 |
|
| 187 |
## Model Information
|
| 188 |
|
| 189 |
+
DoTS.ocr-1.5 is a 3B multilingual document parsing model that excels at:
|
| 190 |
- 100+ Languages — Multilingual document support
|
| 191 |
- Table extraction — Structured data recognition
|
| 192 |
- Formulas — Mathematical notation preservation
|
| 193 |
- Layout-aware — Reading order and structure preservation
|
| 194 |
- Web screen parsing — Webpage layout analysis
|
| 195 |
- Scene text spotting — Text detection in natural scenes
|
|
|
|
| 196 |
|
| 197 |
## Dataset Structure
|
| 198 |
|
|
|
|
| 222 |
|
| 223 |
## Reproduction
|
| 224 |
|
| 225 |
+
This dataset was generated using the [uv-scripts/ocr](https://huggingface.co/datasets/uv-scripts/ocr) DoTS OCR 1.5 script:
|
| 226 |
|
| 227 |
```bash
|
| 228 |
+
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr-1.5.py \\
|
| 229 |
{source_dataset} \\
|
| 230 |
<output-dataset> \\
|
| 231 |
--image-column {image_column} \\
|
|
|
|
| 245 |
output_dataset: str,
|
| 246 |
image_column: str = "image",
|
| 247 |
batch_size: int = 16,
|
| 248 |
+
model: str = "rednote-hilab/dots.ocr-1.5",
|
| 249 |
max_model_len: int = 24000,
|
| 250 |
max_tokens: int = 24000,
|
| 251 |
gpu_memory_utilization: float = 0.9,
|
|
|
|
| 264 |
top_p: float = 0.9,
|
| 265 |
verbose: bool = False,
|
| 266 |
):
|
| 267 |
+
"""Process images from HF dataset through DoTS.ocr-1.5 model."""
|
| 268 |
|
| 269 |
# Check CUDA availability first
|
| 270 |
check_cuda_availability()
|
|
|
|
| 315 |
gpu_memory_utilization=gpu_memory_utilization,
|
| 316 |
)
|
| 317 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 318 |
sampling_params = SamplingParams(
|
| 319 |
temperature=temperature,
|
| 320 |
top_p=top_p,
|
|
|
|
| 330 |
for batch_indices in tqdm(
|
| 331 |
partition_all(batch_size, range(len(dataset))),
|
| 332 |
total=(len(dataset) + batch_size - 1) // batch_size,
|
| 333 |
+
desc="DoTS.ocr-1.5 processing",
|
| 334 |
):
|
| 335 |
batch_indices = list(batch_indices)
|
| 336 |
batch_images = [dataset[i][image_column] for i in batch_indices]
|
|
|
|
| 339 |
# Create messages for batch
|
| 340 |
batch_messages = [make_ocr_message(img, prompt) for img in batch_images]
|
| 341 |
|
| 342 |
+
# Process with vLLM
|
| 343 |
+
outputs = llm.chat(batch_messages, sampling_params)
|
|
|
|
|
|
|
|
|
|
|
|
|
| 344 |
|
| 345 |
# Extract outputs
|
| 346 |
for output in outputs:
|
|
|
|
| 363 |
# Handle inference_info tracking (for multi-model comparisons)
|
| 364 |
inference_entry = {
|
| 365 |
"model_id": model,
|
| 366 |
+
"model_name": "DoTS.ocr-1.5",
|
| 367 |
"column_name": output_column,
|
| 368 |
"timestamp": datetime.now().isoformat(),
|
| 369 |
"prompt_mode": prompt_mode if not custom_prompt else "custom",
|
|
|
|
| 444 |
card = DatasetCard(card_content)
|
| 445 |
card.push_to_hub(output_dataset, token=HF_TOKEN)
|
| 446 |
|
| 447 |
+
logger.info("DoTS.ocr-1.5 processing complete!")
|
| 448 |
logger.info(
|
| 449 |
f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
|
| 450 |
)
|
|
|
|
| 466 |
# Show example usage if no arguments
|
| 467 |
if len(sys.argv) == 1:
|
| 468 |
print("=" * 80)
|
| 469 |
+
print("DoTS.ocr-1.5 Document Processing")
|
| 470 |
print("=" * 80)
|
| 471 |
+
print("\n3B multilingual OCR model supporting 100+ languages")
|
| 472 |
print("\nFeatures:")
|
| 473 |
print("- Multilingual support (100+ languages)")
|
| 474 |
print("- Fast processing with vLLM")
|
| 475 |
print("- Table extraction and formatting")
|
| 476 |
print("- Formula recognition")
|
| 477 |
print("- Layout-aware text extraction")
|
| 478 |
+
print("- Web screen parsing (NEW in v1.5)")
|
| 479 |
+
print("- Scene text spotting (NEW in v1.5)")
|
|
|
|
| 480 |
print("\nPrompt modes:")
|
| 481 |
+
print(" ocr - Text extraction (default)")
|
| 482 |
+
print(" layout-all - Layout + bboxes + text (JSON)")
|
| 483 |
+
print(" layout-only - Layout + bboxes only (JSON)")
|
| 484 |
+
print(" web-parsing - Webpage layout analysis (JSON)")
|
| 485 |
print(" scene-spotting - Scene text detection")
|
| 486 |
+
print(" grounding-ocr - Text from bounding box region")
|
| 487 |
+
print(" general - Free-form (use with --custom-prompt)")
|
|
|
|
| 488 |
print("\nExample usage:")
|
| 489 |
print("\n1. Basic OCR:")
|
| 490 |
+
print(" uv run dots-ocr-1.5.py input-dataset output-dataset")
|
| 491 |
+
print("\n2. Web screen parsing:")
|
| 492 |
+
print(" uv run dots-ocr-1.5.py screenshots parsed --prompt-mode web-parsing")
|
| 493 |
+
print("\n3. Scene text spotting:")
|
| 494 |
+
print(" uv run dots-ocr-1.5.py photos detected --prompt-mode scene-spotting")
|
|
|
|
|
|
|
| 495 |
print("\n4. Layout analysis with structure:")
|
| 496 |
+
print(" uv run dots-ocr-1.5.py papers analyzed --prompt-mode layout-all")
|
| 497 |
print("\n5. Running on HF Jobs:")
|
| 498 |
print(" hf jobs uv run --flavor l4x1 \\")
|
| 499 |
print(" -s HF_TOKEN \\")
|
| 500 |
print(
|
| 501 |
+
" https://huggingface.co/datasets/uv-scripts/ocr/raw/main/dots-ocr-1.5.py \\"
|
| 502 |
)
|
| 503 |
print(" input-dataset output-dataset")
|
| 504 |
print("\n" + "=" * 80)
|
| 505 |
+
print("\nFor full help, run: uv run dots-ocr-1.5.py --help")
|
| 506 |
sys.exit(0)
|
| 507 |
|
| 508 |
parser = argparse.ArgumentParser(
|
| 509 |
+
description="Document OCR using DoTS.ocr-1.5 (3B multilingual model)",
|
| 510 |
formatter_class=argparse.RawDescriptionHelpFormatter,
|
| 511 |
epilog="""
|
| 512 |
+
Prompt Modes (official DoTS.ocr-1.5 prompts):
|
| 513 |
ocr - Simple text extraction (default)
|
| 514 |
layout-all - Layout analysis with bboxes, categories, and text (JSON output)
|
| 515 |
layout-only - Layout detection with bboxes and categories only (JSON output)
|
| 516 |
+
web-parsing - Webpage layout analysis (JSON output) [NEW in v1.5]
|
| 517 |
+
scene-spotting - Scene text detection and recognition [NEW in v1.5]
|
| 518 |
+
grounding-ocr - Extract text from bounding box region [NEW in v1.5]
|
| 519 |
+
general - Free-form QA (use with --custom-prompt) [NEW in v1.5]
|
|
|
|
| 520 |
|
| 521 |
SVG Code Generation:
|
| 522 |
+
For SVG output, use --model rednote-hilab/dots.ocr-1.5-svg with:
|
| 523 |
+
--custom-prompt 'Please generate the SVG code based on the image.'
|
|
|
|
| 524 |
|
| 525 |
Examples:
|
| 526 |
# Basic text OCR (default)
|
| 527 |
+
uv run dots-ocr-1.5.py my-docs analyzed-docs
|
|
|
|
|
|
|
|
|
|
| 528 |
|
| 529 |
# Web screen parsing
|
| 530 |
+
uv run dots-ocr-1.5.py screenshots parsed --prompt-mode web-parsing
|
| 531 |
+
|
| 532 |
+
# Scene text spotting
|
| 533 |
+
uv run dots-ocr-1.5.py photos spotted --prompt-mode scene-spotting
|
| 534 |
|
| 535 |
# Full layout analysis with structure
|
| 536 |
+
uv run dots-ocr-1.5.py papers structured --prompt-mode layout-all
|
| 537 |
|
| 538 |
# Random sampling for testing
|
| 539 |
+
uv run dots-ocr-1.5.py large-dataset test --max-samples 50 --shuffle
|
| 540 |
""",
|
| 541 |
)
|
| 542 |
|
|
|
|
| 555 |
)
|
| 556 |
parser.add_argument(
|
| 557 |
"--model",
|
| 558 |
+
default="rednote-hilab/dots.ocr-1.5",
|
| 559 |
+
help="Model to use (default: rednote-hilab/dots.ocr-1.5)",
|
| 560 |
)
|
| 561 |
parser.add_argument(
|
| 562 |
"--max-model-len",
|
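The NOTE stripped from the prompt-template hunk above still matters to consumers of `layout-all`/`layout-only` output: bboxes come back in the resized-image coordinate space, not the original image's. A minimal sketch of the mapping that comment described — this `smart_resize` is a re-implementation of the Qwen2VL processor's resizing rule under the comment's stated constants (factor=28, max_pixels=11289600; the min_pixels floor is an assumption), so treat it as illustrative rather than the model's canonical preprocessing:

```python
import math

def smart_resize(height: int, width: int, factor: int = 28,
                 min_pixels: int = 56 * 56, max_pixels: int = 11289600):
    # Round both sides to a multiple of `factor`, then rescale so the
    # total pixel count stays within [min_pixels, max_pixels].
    h_bar = round(height / factor) * factor
    w_bar = round(width / factor) * factor
    if h_bar * w_bar > max_pixels:
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = math.floor(height / beta / factor) * factor
        w_bar = math.floor(width / beta / factor) * factor
    elif h_bar * w_bar < min_pixels:
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar

def bbox_to_original(bbox: list[float], orig_w: int, orig_h: int) -> list[float]:
    # Model bboxes are [x1, y1, x2, y2] in resized coordinates; scale back
    # by orig/resized per axis, as the removed comment described.
    resized_h, resized_w = smart_resize(orig_h, orig_w)
    sx, sy = orig_w / resized_w, orig_h / resized_h
    x1, y1, x2, y2 = bbox
    return [x1 * sx, y1 * sy, x2 * sx, y2 * sy]
```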
falcon-ocr-bucket.py
DELETED

@@ -1,278 +0,0 @@
-# /// script
-# requires-python = ">=3.11"
-# dependencies = [
-# "pillow",
-# "pymupdf",
-# "torch>=2.5",
-# "torchvision",
-# "falcon-perception[ocr]",
-# ]
-# ///
-
-"""
-OCR images and PDFs from a directory using Falcon OCR, writing markdown files.
-
-Designed to work with HF Buckets mounted as volumes via `hf jobs uv run -v ...`.
-Reads images/PDFs from INPUT_DIR, runs Falcon OCR via the optimized falcon-perception
-engine (CUDA graphs + paged inference), and writes one .md file per image (or per
-PDF page) to OUTPUT_DIR, preserving directory structure.
-
-Input:                      Output:
-    /input/page1.png     -> /output/page1.md
-    /input/report.pdf    -> /output/report/page_001.md
-      (3 pages)             /output/report/page_002.md
-                            /output/report/page_003.md
-    /input/sub/photo.jpg -> /output/sub/photo.md
-
-Examples:
-
-    # Local test
-    uv run falcon-ocr-bucket.py ./test-images ./test-output
-
-    # HF Jobs with bucket volumes
-    hf jobs uv run --flavor l4x1 \\
-        -s HF_TOKEN \\
-        -v hf://buckets/user/ocr-input:/input:ro \\
-        -v hf://buckets/user/ocr-output:/output \\
-        https://huggingface.co/datasets/uv-scripts/ocr/raw/main/falcon-ocr-bucket.py \\
-        /input /output
-
-Model: tiiuae/Falcon-OCR (0.3B, 80.3% olmOCR, Apache 2.0)
-Backend: falcon-perception (OCRInferenceEngine with CUDA graphs)
-"""
-
-import argparse
-import logging
-import sys
-import time
-from pathlib import Path
-
-import torch
-from PIL import Image
-
-logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
-logger = logging.getLogger(__name__)
-
-MODEL_ID = "tiiuae/Falcon-OCR"
-IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".tiff", ".tif", ".bmp", ".webp"}
-
-
-def check_cuda_availability():
-    if not torch.cuda.is_available():
-        logger.error("CUDA is not available. This script requires a GPU.")
-        sys.exit(1)
-    logger.info(f"CUDA available. GPU: {torch.cuda.get_device_name(0)}")
-
-
-def discover_files(input_dir: Path) -> list[Path]:
-    files = []
-    for path in sorted(input_dir.rglob("*")):
-        if not path.is_file():
-            continue
-        ext = path.suffix.lower()
-        if ext in IMAGE_EXTENSIONS or ext == ".pdf":
-            files.append(path)
-    return files
-
-
-def prepare_images(
-    files: list[Path], input_dir: Path, output_dir: Path, pdf_dpi: int
-) -> list[tuple[Image.Image, Path]]:
-    import fitz  # pymupdf
-
-    items: list[tuple[Image.Image, Path]] = []
-
-    for file_path in files:
-        rel = file_path.relative_to(input_dir)
-        ext = file_path.suffix.lower()
-
-        if ext == ".pdf":
-            pdf_output_dir = output_dir / rel.with_suffix("")
-            try:
-                doc = fitz.open(file_path)
-                num_pages = len(doc)
-                logger.info(f"PDF: {rel} ({num_pages} pages)")
-                for page_num in range(num_pages):
-                    page = doc[page_num]
-                    zoom = pdf_dpi / 72.0
-                    mat = fitz.Matrix(zoom, zoom)
-                    pix = page.get_pixmap(matrix=mat)
-                    img = Image.frombytes("RGB", [pix.width, pix.height], pix.samples)
-                    md_path = pdf_output_dir / f"page_{page_num + 1:03d}.md"
-                    items.append((img, md_path))
-                doc.close()
-            except Exception as e:
-                logger.error(f"Failed to open PDF {rel}: {e}")
-        else:
-            try:
-                img = Image.open(file_path).convert("RGB")
-                md_path = output_dir / rel.with_suffix(".md")
-                items.append((img, md_path))
-            except Exception as e:
-                logger.error(f"Failed to open image {rel}: {e}")
-
-    return items
-
-
-def main():
-    parser = argparse.ArgumentParser(
-        description="OCR images/PDFs from a directory using Falcon OCR, output markdown files.",
-        formatter_class=argparse.RawDescriptionHelpFormatter,
-        epilog=__doc__,
-    )
-    parser.add_argument("input_dir", help="Directory containing images and/or PDFs")
-    parser.add_argument("output_dir", help="Directory to write markdown output files")
-    parser.add_argument(
-        "--batch-size", type=int, default=8, help="Images per batch (default: 8)",
-    )
-    parser.add_argument(
-        "--pdf-dpi", type=int, default=300,
-        help="DPI for PDF page rendering (default: 300)",
-    )
-    parser.add_argument(
-        "--no-compile", action="store_true", help="Disable torch.compile",
-    )
-    parser.add_argument(
-        "--no-cudagraph", action="store_true", help="Disable CUDA graph capture",
-    )
-    parser.add_argument(
-        "--verbose", action="store_true", help="Print resolved package versions",
-    )
-
-    args = parser.parse_args()
-
-    check_cuda_availability()
-
-    input_dir = Path(args.input_dir)
-    output_dir = Path(args.output_dir)
-
-    if not input_dir.is_dir():
-        logger.error(f"Input directory does not exist: {input_dir}")
-        sys.exit(1)
-
-    output_dir.mkdir(parents=True, exist_ok=True)
-
-    start_time = time.time()
-
-    # Discover files
-    logger.info(f"Scanning {input_dir} for images and PDFs...")
-    files = discover_files(input_dir)
-    if not files:
-        logger.error(f"No image or PDF files found in {input_dir}")
-        sys.exit(1)
-
-    pdf_count = sum(1 for f in files if f.suffix.lower() == ".pdf")
-    img_count = len(files) - pdf_count
-    logger.info(f"Found {img_count} image(s) and {pdf_count} PDF(s)")
-
-    # Prepare images
-    logger.info("Preparing images (rendering PDFs)...")
-    items = prepare_images(files, input_dir, output_dir, args.pdf_dpi)
-    if not items:
-        logger.error("No processable images after preparation")
-        sys.exit(1)
-
-    logger.info(f"Total images to OCR: {len(items)}")
-
-    # Load model
-    logger.info(f"Loading {MODEL_ID} via falcon-perception engine...")
-    from falcon_perception import load_and_prepare_model
-    from falcon_perception.data import ImageProcessor
-    from falcon_perception.paged_ocr_inference import OCRInferenceEngine
-
-    do_compile = not args.no_compile
-    do_cudagraph = not args.no_cudagraph
-
-    model, tokenizer, model_args = load_and_prepare_model(
-        hf_model_id=MODEL_ID,
-        device="cuda",
-        dtype="bfloat16",
-        compile=do_compile,
-    )
-
-    image_processor = ImageProcessor(patch_size=16, merge_size=1)
-    engine = OCRInferenceEngine(
-        model, tokenizer, image_processor, capture_cudagraph=do_cudagraph
-    )
-    logger.info(f"Engine loaded. compile={do_compile}, cudagraph={do_cudagraph}")
-
-    # Process in batches
-    errors = 0
-    processed = 0
-    total = len(items)
-    batch_size = args.batch_size
-
-    for batch_start in range(0, total, batch_size):
-        batch_end = min(batch_start + batch_size, total)
-        batch = items[batch_start:batch_end]
-        batch_num = batch_start // batch_size + 1
-        total_batches = (total + batch_size - 1) // batch_size
-
-        logger.info(f"Batch {batch_num}/{total_batches} ({processed}/{total} done)")
-
-        try:
-            batch_images = [img for img, _ in batch]
-            texts = engine.generate_plain(images=batch_images, use_tqdm=False)
-
-            for (_, md_path), text in zip(batch, texts):
-                md_path.parent.mkdir(parents=True, exist_ok=True)
-                md_path.write_text(text.strip(), encoding="utf-8")
-                processed += 1
-
-        except Exception as e:
-            logger.error(f"Batch {batch_num} failed: {e}")
-            for _, md_path in batch:
-                md_path.parent.mkdir(parents=True, exist_ok=True)
-                md_path.write_text(f"[OCR ERROR: {e}]", encoding="utf-8")
-            errors += len(batch)
-            processed += len(batch)
-
-    elapsed = time.time() - start_time
-    elapsed_str = f"{elapsed / 60:.1f} min" if elapsed > 60 else f"{elapsed:.1f}s"
-
-    logger.info("=" * 50)
-    logger.info(f"Done! Processed {total} images in {elapsed_str}")
-    logger.info(f" Output: {output_dir}")
-    logger.info(f" Errors: {errors}")
-    if total > 0:
-        logger.info(f" Speed: {total / elapsed:.2f} images/sec")
-
-    if args.verbose:
-        import importlib.metadata
-
-        logger.info("--- Package versions ---")
-        for pkg in ["falcon-perception", "torch", "pillow", "pymupdf"]:
-            try:
-                logger.info(f" {pkg}=={importlib.metadata.version(pkg)}")
-            except importlib.metadata.PackageNotFoundError:
-                logger.info(f" {pkg}: not installed")
-
-
-if __name__ == "__main__":
-    if len(sys.argv) == 1:
-        print("=" * 60)
-        print("Falcon OCR Bucket Script")
-        print("=" * 60)
-        print(f"\nModel: {MODEL_ID} (0.3B, Apache 2.0)")
-        print("OCR images/PDFs from a directory -> markdown files.")
-        print("Designed for HF Buckets mounted as volumes.")
-        print()
-        print("Usage:")
-        print(" uv run falcon-ocr-bucket.py INPUT_DIR OUTPUT_DIR")
-        print()
-        print("Examples:")
-        print(" uv run falcon-ocr-bucket.py ./images ./output")
-        print()
-        print("HF Jobs with bucket volumes:")
-        print(" hf jobs uv run --flavor l4x1 -s HF_TOKEN \\")
-        print(" -v hf://buckets/user/ocr-input:/input:ro \\")
-        print(" -v hf://buckets/user/ocr-output:/output \\")
-        print(
-            " https://huggingface.co/datasets/uv-scripts/ocr/raw/main/falcon-ocr-bucket.py \\"
-        )
-        print(" /input /output")
-        print()
-        print("For full help: uv run falcon-ocr-bucket.py --help")
-        sys.exit(0)
-
-    main()
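Both bucket scripts in this PR rasterize PDF pages the same way before OCR. The `pdf_dpi / 72.0` step works because PDF user space is defined at 72 points per inch, so the ratio is exactly the zoom factor `get_pixmap` needs. A self-contained sketch of that rendering step (the function name and arguments are illustrative, not part of either script):

```python
import fitz  # pymupdf
from PIL import Image

def render_pdf_page(pdf_path: str, page_index: int, dpi: int = 300) -> Image.Image:
    # PDF user space is 72 points per inch, so dpi / 72 is the
    # rasterization zoom for the requested output resolution.
    doc = fitz.open(pdf_path)
    zoom = dpi / 72.0
    pix = doc[page_index].get_pixmap(matrix=fitz.Matrix(zoom, zoom))
    # Copy the raw samples into a PIL image before closing the document.
    img = Image.frombytes("RGB", [pix.width, pix.height], pix.samples)
    doc.close()
    return img
```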
falcon-ocr.py
DELETED

@@ -1,433 +0,0 @@
-# /// script
-# requires-python = ">=3.11"
-# dependencies = [
-# "datasets",
-# "huggingface-hub",
-# "pillow",
-# "torch>=2.5",
-# "torchvision",
-# "falcon-perception",
-# ]
-# ///
-
-"""
-Convert document images to text using Falcon OCR with the falcon-perception engine.
-
-Uses the optimized OCRInferenceEngine with CUDA graphs and paged inference
-for much faster throughput than the raw transformers API.
-
-Features:
-- Compact: Only 0.3B parameters
-- Fast: Optimized inference with CUDA graphs
-- Multi-format: Plain text, LaTeX formulas, HTML tables
-- Layout-aware: Optional 2-stage pipeline (layout detection + per-region OCR)
-
-Model: tiiuae/Falcon-OCR
-Backend: falcon-perception (OCRInferenceEngine)
-License: Apache 2.0
-
-Examples:
-    # Basic text OCR
-    uv run falcon-ocr.py input-dataset output-dataset
-
-    # Test with small sample
-    uv run falcon-ocr.py dataset test --max-samples 5 --shuffle
-
-    # Run on HF Jobs with GPU
-    hf jobs uv run --flavor l4x1 \\
-        -s HF_TOKEN \\
-        falcon-ocr.py \\
-        input-dataset output-dataset --max-samples 10
-"""
-
-import argparse
-import io
-import json
-import logging
-import os
-import sys
-import time
-from datetime import datetime
-from typing import Any, Dict, Union
-
-import torch
-from datasets import load_dataset
-from huggingface_hub import DatasetCard, login
-from PIL import Image
-
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-MODEL_ID = "tiiuae/Falcon-OCR"
-
-TASK_MODES = {
-    "plain": "Full-page text extraction",
-}
-
-
-def check_cuda_availability():
-    if not torch.cuda.is_available():
-        logger.error("CUDA is not available. This script requires a GPU.")
-        logger.error("For cloud execution, use HF Jobs with --flavor l4x1 or similar.")
-        sys.exit(1)
-    else:
-        logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")
-
-
-def prepare_image(image: Union[Image.Image, Dict[str, Any], str]) -> Image.Image:
-    if isinstance(image, Image.Image):
-        pil_img = image
-    elif isinstance(image, dict) and "bytes" in image:
-        pil_img = Image.open(io.BytesIO(image["bytes"]))
-    elif isinstance(image, str):
-        pil_img = Image.open(image)
-    else:
-        raise ValueError(f"Unsupported image type: {type(image)}")
-    return pil_img.convert("RGB")
-
-
-def create_dataset_card(
-    source_dataset: str,
-    task_mode: str,
-    num_samples: int,
-    processing_time: str,
-    image_column: str = "image",
-    split: str = "train",
-) -> str:
-    task_description = TASK_MODES[task_mode]
-    return f"""---
-tags:
-- ocr
-- document-processing
-- falcon-ocr
-- {task_mode}
-- uv-script
-- generated
----
-
-# Document Processing using Falcon OCR ({task_mode} mode)
-
-This dataset contains OCR results from images in [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using [Falcon OCR](https://huggingface.co/tiiuae/Falcon-OCR), a 0.3B early-fusion vision-language model.
-
-## Processing Details
-
-- **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
-- **Model**: [{MODEL_ID}](https://huggingface.co/{MODEL_ID})
-- **Task Mode**: `{task_mode}` - {task_description}
-- **Number of Samples**: {num_samples:,}
-- **Processing Time**: {processing_time}
-- **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}
-- **Backend**: falcon-perception (OCRInferenceEngine)
-
-## Reproduction
-
-```bash
-uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/falcon-ocr.py \\
-    {source_dataset} \\
-    <output-dataset> \\
-    --task-mode {task_mode} \\
-    --image-column {image_column}
-```
-
-Generated with [UV Scripts](https://huggingface.co/uv-scripts)
-"""
-
-
-def main(
-    input_dataset: str,
-    output_dataset: str,
-    image_column: str = "image",
-    task_mode: str = "plain",
-    hf_token: str = None,
-    split: str = "train",
-    max_samples: int = None,
-    private: bool = False,
-    shuffle: bool = False,
-    seed: int = 42,
-    output_column: str = "markdown",
-    config: str = None,
-    create_pr: bool = False,
-    compile: bool = True,
-    cudagraph: bool = True,
-    progress: bool = False,
-    verbose: bool = False,
-):
-    check_cuda_availability()
-    start_time = datetime.now()
-
-    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
-    if HF_TOKEN:
-        login(token=HF_TOKEN)
-
-    if task_mode not in TASK_MODES:
-        raise ValueError(
-            f"Invalid task_mode '{task_mode}'. Choose from: {list(TASK_MODES.keys())}"
-        )
-
-    logger.info(f"Task mode: {task_mode} - {TASK_MODES[task_mode]}")
-    logger.info(f"Output column: {output_column}")
-
-    # Load dataset
-    logger.info(f"Loading dataset: {input_dataset}")
-    dataset = load_dataset(input_dataset, split=split)
-
-    if image_column not in dataset.column_names:
-        raise ValueError(
-            f"Column '{image_column}' not found. Available: {dataset.column_names}"
-        )
-
-    if shuffle:
-        logger.info(f"Shuffling dataset with seed {seed}")
-        dataset = dataset.shuffle(seed=seed)
-
-    if max_samples:
-        dataset = dataset.select(range(min(max_samples, len(dataset))))
-        logger.info(f"Limited to {len(dataset)} samples")
-
-    # Load model using falcon-perception
-    logger.info(f"Loading model: {MODEL_ID} via falcon-perception engine")
-    from falcon_perception import load_and_prepare_model
-    from falcon_perception.data import ImageProcessor
-    from falcon_perception.paged_ocr_inference import OCRInferenceEngine
-
-    model, tokenizer, model_args = load_and_prepare_model(
-        hf_model_id=MODEL_ID,
-        device="cuda",
-        dtype="bfloat16",
-        compile=compile,
-    )
-
-    image_processor = ImageProcessor(patch_size=16, merge_size=1)
-    engine = OCRInferenceEngine(
-        model, tokenizer, image_processor, capture_cudagraph=cudagraph
-    )
-    logger.info(f"Engine loaded. compile={compile}, cudagraph={cudagraph}")
-
-    # Prepare all images
-    logger.info(f"Processing {len(dataset)} images...")
-    all_outputs = []
-
-    # Batch plain OCR for better throughput
-    batch_size = 8
-    total_batches = (len(dataset) + batch_size - 1) // batch_size
-    for batch_idx, batch_start in enumerate(range(0, len(dataset), batch_size), 1):
-        batch_end = min(batch_start + batch_size, len(dataset))
-        logger.info(f"Batch {batch_idx}/{total_batches} ({batch_start}/{len(dataset)} done)")
-        batch_images = []
-        for i in range(batch_start, batch_end):
-            try:
-                batch_images.append(prepare_image(dataset[i][image_column]))
-            except Exception as e:
-                logger.error(f"Error preparing image {i}: {e}")
-                batch_images.append(Image.new("RGB", (100, 100)))
-
-        try:
-            texts = engine.generate_plain(
-                images=batch_images, use_tqdm=progress
-            )
-            all_outputs.extend(texts)
-        except Exception as e:
-            logger.error(f"Error processing batch {batch_start}-{batch_end}: {e}")
-            all_outputs.extend(
-                [f"[OCR ERROR: {str(e)[:200]}]"] * len(batch_images)
-            )
-
-    # Calculate processing time
-    processing_duration = datetime.now() - start_time
-    processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
-
-    # Add output column
-    logger.info(f"Adding '{output_column}' column to dataset")
-    dataset = dataset.add_column(output_column, all_outputs)
-
-    # Track inference info
-    inference_entry = {
-        "model_id": MODEL_ID,
-        "model_name": "Falcon-OCR",
-        "model_size": "0.3B",
-        "task_mode": task_mode,
-        "column_name": output_column,
-        "timestamp": datetime.now().isoformat(),
-        "backend": "falcon-perception",
-    }
-
-    if "inference_info" in dataset.column_names:
-        def update_inference_info(example):
-            try:
-                existing_info = (
-                    json.loads(example["inference_info"])
-                    if example["inference_info"]
-                    else []
-                )
-            except (json.JSONDecodeError, TypeError):
-                existing_info = []
-            existing_info.append(inference_entry)
-            return {"inference_info": json.dumps(existing_info)}
-
-        dataset = dataset.map(update_inference_info)
-    else:
-        inference_list = [json.dumps([inference_entry])] * len(dataset)
-        dataset = dataset.add_column("inference_info", inference_list)
-
-    # Push to hub
-    logger.info(f"Pushing to {output_dataset}")
-    max_retries = 3
-    for attempt in range(1, max_retries + 1):
-        try:
-            if attempt > 1:
-                logger.warning("Disabling XET (fallback to HTTP upload)")
-                os.environ["HF_HUB_DISABLE_XET"] = "1"
-            dataset.push_to_hub(
-                output_dataset,
-                private=private,
-                token=HF_TOKEN,
-                max_shard_size="500MB",
-                **({"config_name": config} if config else {}),
-                create_pr=create_pr,
-                commit_message=f"Add {MODEL_ID} OCR results ({len(dataset)} samples)"
-                + (f" [{config}]" if config else ""),
-            )
-            break
-        except Exception as e:
-            logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
-            if attempt < max_retries:
-                delay = 30 * (2 ** (attempt - 1))
-                logger.info(f"Retrying in {delay}s...")
-                time.sleep(delay)
-            else:
-                logger.error("All upload attempts failed. OCR results are lost.")
-                sys.exit(1)
-
-    # Create and push dataset card
-    logger.info("Creating dataset card")
-    card_content = create_dataset_card(
-        source_dataset=input_dataset,
-        task_mode=task_mode,
-        num_samples=len(dataset),
-        processing_time=processing_time_str,
-        image_column=image_column,
-        split=split,
-    )
-    card = DatasetCard(card_content)
-    card.push_to_hub(output_dataset, token=HF_TOKEN)
-
-    logger.info("Falcon OCR processing complete!")
-    logger.info(
-        f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
-    )
-    logger.info(f"Processing time: {processing_time_str}")
-    logger.info(
-        f"Speed: {len(dataset) / processing_duration.total_seconds():.2f} images/sec"
-    )
-
-    if verbose:
-        import importlib.metadata
-
-        logger.info("--- Resolved package versions ---")
-        for pkg in [
-            "falcon-perception", "transformers", "torch", "datasets", "pillow"
-        ]:
-            try:
-                logger.info(f" {pkg}=={importlib.metadata.version(pkg)}")
-            except importlib.metadata.PackageNotFoundError:
-                logger.info(f" {pkg}: not installed")
-
-
-if __name__ == "__main__":
-    if len(sys.argv) == 1:
-        print("=" * 70)
-        print("Falcon OCR - 0.3B Document OCR (falcon-perception engine)")
-        print("=" * 70)
-        print(f"\nModel: {MODEL_ID}")
-        print("License: Apache 2.0")
-        print("\nTask Modes:")
-        for mode, description in TASK_MODES.items():
-            print(f" {mode:10} - {description}")
-        print("\nExamples:")
-        print(" uv run falcon-ocr.py input-dataset output-dataset")
-        print(" uv run falcon-ocr.py dense-docs output --task-mode layout")
-        print("\nFor full help: uv run falcon-ocr.py --help")
-        sys.exit(0)
-
-    parser = argparse.ArgumentParser(
-        description="Document OCR using Falcon OCR (0.3B, falcon-perception engine)",
-        formatter_class=argparse.RawDescriptionHelpFormatter,
-        epilog=__doc__,
-    )
-
-    parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
-    parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
-    parser.add_argument(
-        "--image-column", default="image",
-        help="Column containing images (default: image)",
-    )
-    parser.add_argument(
-        "--task-mode", choices=list(TASK_MODES.keys()), default="plain",
-        help="Task type: plain (default), layout",
-    )
-    parser.add_argument("--hf-token", help="Hugging Face API token")
-    parser.add_argument(
-        "--split", default="train", help="Dataset split (default: train)",
-    )
-    parser.add_argument(
-        "--max-samples", type=int,
-        help="Maximum number of samples to process (for testing)",
-    )
-    parser.add_argument(
-        "--private", action="store_true", help="Make output dataset private",
-    )
-    parser.add_argument(
-        "--shuffle", action="store_true", help="Shuffle dataset before processing",
-    )
-    parser.add_argument(
-        "--seed", type=int, default=42, help="Random seed for shuffling (default: 42)",
-    )
-    parser.add_argument(
-        "--output-column", default="markdown",
-        help="Column name for output text (default: markdown)",
-    )
-    parser.add_argument(
-        "--config",
-        help="Config/subset name for Hub (for benchmarking multiple models)",
-    )
-    parser.add_argument(
-        "--create-pr", action="store_true",
-        help="Create a pull request instead of pushing directly",
-    )
-    parser.add_argument(
-        "--no-compile", action="store_true",
-        help="Disable torch.compile",
-    )
-    parser.add_argument(
-        "--no-cudagraph", action="store_true",
-        help="Disable CUDA graph capture",
-    )
-    parser.add_argument(
-        "--progress", action="store_true",
-        help="Show per-image progress bar from the inference engine",
-    )
-    parser.add_argument(
-        "--verbose", action="store_true", help="Log resolved package versions",
-    )
-
-    args = parser.parse_args()
-
-    main(
-        input_dataset=args.input_dataset,
-        output_dataset=args.output_dataset,
-        image_column=args.image_column,
-        task_mode=args.task_mode,
-        hf_token=args.hf_token,
-        split=args.split,
-        max_samples=args.max_samples,
-        private=args.private,
-        shuffle=args.shuffle,
-        seed=args.seed,
-        output_column=args.output_column,
-        config=args.config,
-        create_pr=args.create_pr,
-        compile=not args.no_compile,
-        cudagraph=not args.no_cudagraph,
-        progress=args.progress,
-        verbose=args.verbose,
-    )
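The `inference_info` bookkeeping deleted with falcon-ocr.py is the same pattern the surviving dataset scripts use: a JSON-encoded list column with one entry appended per OCR run, which is what makes multi-model comparisons on a single output dataset possible. A minimal consumer sketch (the dataset ID is hypothetical; the keys match the entries the scripts write):

```python
import json
from datasets import load_dataset

ds = load_dataset("user/ocr-output", split="train")  # hypothetical output dataset
for info in json.loads(ds[0]["inference_info"]):
    # Each run appends an entry with model_id, model_name, column_name, timestamp, ...
    print(f"{info['model_id']} wrote column '{info['column_name']}' at {info['timestamp']}")
```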
firered-ocr.py
CHANGED

@@ -104,10 +104,10 @@ def make_ocr_message(
     # Convert to RGB
     pil_img = pil_img.convert("RGB")
 
-    # Convert to base64 data URI
+    # Convert to base64 data URI
     buf = io.BytesIO()
-    pil_img.save(buf, format="
-    data_uri = f"data:image/
+    pil_img.save(buf, format="PNG")
+    data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
 
     # Return message in vLLM format
     return [
@@ -228,7 +228,7 @@ def main(
     image_column: str = "image",
     batch_size: int = 16,
     model: str = "FireRedTeam/FireRed-OCR",
-    max_model_len: int =
+    max_model_len: int = 8192,
     max_tokens: int = 8192,
     gpu_memory_utilization: float = 0.8,
     hf_token: str = None,
@@ -335,10 +335,7 @@ def main(
     processing_duration = datetime.now() - start_time
     processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"
 
-    # Add output column to dataset
-    if output_column in dataset.column_names:
-        logger.info(f"Removing existing '{output_column}' column before adding new results")
-        dataset = dataset.remove_columns([output_column])
+    # Add output column to dataset
     logger.info(f"Adding '{output_column}' column to dataset")
     dataset = dataset.add_column(output_column, all_outputs)
 
@@ -483,8 +480,8 @@ Examples:
     parser.add_argument(
         "--max-model-len",
         type=int,
-        default=
-        help="Maximum model context length (default:
+        default=8192,
+        help="Maximum model context length (default: 8192)",
     )
     parser.add_argument(
         "--max-tokens",
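One behavioral note on the firered-ocr.py hunk above: it drops the guard that removed a pre-existing output column, and `datasets.Dataset.add_column` refuses to add a column that already exists, so re-running the script against its own output dataset will now error rather than overwrite. A sketch of the pattern the removed guard implemented (using the script's `output_column`/`all_outputs` names):

```python
# Dataset.add_column raises if the column already exists, so drop any stale
# results first -- this is the guard the diff removes.
if output_column in dataset.column_names:
    dataset = dataset.remove_columns([output_column])
dataset = dataset.add_column(output_column, all_outputs)
```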
glm-ocr-bucket.py
DELETED
|
@@ -1,364 +0,0 @@
|
|
| 1 |
-
# /// script
|
| 2 |
-
# requires-python = ">=3.11"
|
| 3 |
-
# dependencies = [
|
| 4 |
-
# "pillow",
|
| 5 |
-
# "pymupdf",
|
| 6 |
-
# "vllm",
|
| 7 |
-
# "torch",
|
| 8 |
-
# ]
|
| 9 |
-
#
|
| 10 |
-
# [[tool.uv.index]]
|
| 11 |
-
# url = "https://wheels.vllm.ai/nightly/cu129"
|
| 12 |
-
#
|
| 13 |
-
# [tool.uv]
|
| 14 |
-
# prerelease = "allow"
|
| 15 |
-
# override-dependencies = ["transformers>=5.1.0"]
|
| 16 |
-
# ///
|
| 17 |
-
|
| 18 |
-
"""
|
| 19 |
-
OCR images and PDFs from a directory using GLM-OCR, writing markdown files.
|
| 20 |
-
|
| 21 |
-
Designed to work with HF Buckets mounted as volumes via `hf jobs uv run -v ...`
|
| 22 |
-
(requires huggingface_hub with PR #3936 volume mounting support).
|
| 23 |
-
|
| 24 |
-
The script reads images/PDFs from INPUT_DIR, runs GLM-OCR via vLLM, and writes
|
| 25 |
-
one .md file per image (or per PDF page) to OUTPUT_DIR, preserving directory structure.
|
| 26 |
-
|
| 27 |
-
Input: Output:
|
| 28 |
-
/input/page1.png → /output/page1.md
|
| 29 |
-
/input/report.pdf → /output/report/page_001.md
|
| 30 |
-
(3 pages) /output/report/page_002.md
|
| 31 |
-
/output/report/page_003.md
|
| 32 |
-
/input/sub/photo.jpg → /output/sub/photo.md
|
| 33 |
-
|
| 34 |
-
Examples:
|
| 35 |
-
|
| 36 |
-
# Local test
|
| 37 |
-
uv run glm-ocr-bucket.py ./test-images ./test-output
|
| 38 |
-
|
| 39 |
-
# HF Jobs with bucket volumes (PR #3936)
|
| 40 |
-
hf jobs uv run --flavor l4x1 \\
|
| 41 |
-
-s HF_TOKEN \\
|
| 42 |
-
-v bucket/user/ocr-input:/input:ro \\
|
| 43 |
-
-v bucket/user/ocr-output:/output \\
|
| 44 |
-
glm-ocr-bucket.py /input /output
|
| 45 |
-
|
| 46 |
-
Model: zai-org/GLM-OCR (0.9B, 94.62% OmniDocBench V1.5, MIT licensed)
|
| 47 |
-
"""
|
| 48 |
-
|
| 49 |
-
import argparse
|
| 50 |
-
import base64
|
| 51 |
-
import io
|
| 52 |
-
import logging
|
| 53 |
-
import sys
|
| 54 |
-
import time
|
| 55 |
-
from pathlib import Path
|
| 56 |
-
|
| 57 |
-
import torch
|
| 58 |
-
from PIL import Image
|
| 59 |
-
from vllm import LLM, SamplingParams
|
| 60 |
-
|
| 61 |
-
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
|
| 62 |
-
logger = logging.getLogger(__name__)
|
| 63 |
-
|
| 64 |
-
MODEL = "zai-org/GLM-OCR"
|
| 65 |
-
|
| 66 |
-
TASK_PROMPTS = {
|
| 67 |
-
"ocr": "Text Recognition:",
|
| 68 |
-
"formula": "Formula Recognition:",
|
| 69 |
-
"table": "Table Recognition:",
|
| 70 |
-
}
|
| 71 |
-
|
| 72 |
-
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".tiff", ".tif", ".bmp", ".webp"}
|
| 73 |
-
|
| 74 |
-
|
| 75 |
-
def check_cuda_availability():
|
| 76 |
-
if not torch.cuda.is_available():
|
| 77 |
-
logger.error("CUDA is not available. This script requires a GPU.")
|
| 78 |
-
sys.exit(1)
|
| 79 |
-
logger.info(f"CUDA available. GPU: {torch.cuda.get_device_name(0)}")
|
| 80 |
-
|
| 81 |
-
|
| 82 |
-
def make_ocr_message(image: Image.Image, task: str = "ocr") -> list[dict]:
|
| 83 |
-
"""Create chat message for GLM-OCR from a PIL Image."""
|
| 84 |
-
image = image.convert("RGB")
|
| 85 |
-
buf = io.BytesIO()
|
| 86 |
-
image.save(buf, format="PNG")
|
| 87 |
-
data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
|
| 88 |
-
|
| 89 |
-
return [
|
| 90 |
-
{
|
| 91 |
-
"role": "user",
|
| 92 |
-
"content": [
|
| 93 |
-
{"type": "image_url", "image_url": {"url": data_uri}},
|
| 94 |
-
{"type": "text", "text": TASK_PROMPTS.get(task, TASK_PROMPTS["ocr"])},
|
| 95 |
-
],
|
| 96 |
-
}
|
| 97 |
-
]


def discover_files(input_dir: Path) -> list[Path]:
    """Walk input_dir recursively, returning sorted list of image and PDF files."""
    files = []
    for path in sorted(input_dir.rglob("*")):
        if not path.is_file():
            continue
        ext = path.suffix.lower()
        if ext in IMAGE_EXTENSIONS or ext == ".pdf":
            files.append(path)
    return files


def prepare_images(
    files: list[Path], input_dir: Path, output_dir: Path, pdf_dpi: int
) -> list[tuple[Image.Image, Path]]:
    """
    Convert discovered files into (PIL.Image, output_md_path) pairs.

    Images map 1:1. PDFs expand to one image per page in a subdirectory.
    """
    import fitz  # pymupdf

    items: list[tuple[Image.Image, Path]] = []

    for file_path in files:
        rel = file_path.relative_to(input_dir)
        ext = file_path.suffix.lower()

        if ext == ".pdf":
            # PDF → one .md per page in a subdirectory named after the PDF
            pdf_output_dir = output_dir / rel.with_suffix("")
            try:
                doc = fitz.open(file_path)
                num_pages = len(doc)
                logger.info(f"PDF: {rel} ({num_pages} pages)")
                for page_num in range(num_pages):
                    page = doc[page_num]
                    # Render at specified DPI
                    zoom = pdf_dpi / 72.0
                    mat = fitz.Matrix(zoom, zoom)
                    pix = page.get_pixmap(matrix=mat)
                    img = Image.frombytes("RGB", [pix.width, pix.height], pix.samples)
                    md_path = pdf_output_dir / f"page_{page_num + 1:03d}.md"
                    items.append((img, md_path))
                doc.close()
            except Exception as e:
                logger.error(f"Failed to open PDF {rel}: {e}")
        else:
            # Image → single .md
            try:
                img = Image.open(file_path).convert("RGB")
                md_path = output_dir / rel.with_suffix(".md")
                items.append((img, md_path))
            except Exception as e:
                logger.error(f"Failed to open image {rel}: {e}")

    return items
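
# Worked example of the mapping above (pdf_dpi affects rendering only, not paths):
#   /input/a/scan.pdf (2 pages) -> [(img, /output/a/scan/page_001.md),
#                                   (img, /output/a/scan/page_002.md)]
#   /input/b/photo.jpg          -> [(img, /output/b/photo.md)]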


def main():
    parser = argparse.ArgumentParser(
        description="OCR images/PDFs from a directory using GLM-OCR, output markdown files.",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Task modes:
  ocr       Text recognition to markdown (default)
  formula   LaTeX formula recognition
  table     Table extraction (HTML)

Examples:
  uv run glm-ocr-bucket.py ./images ./output
  uv run glm-ocr-bucket.py /input /output --task table --pdf-dpi 200

HF Jobs with bucket volumes (requires huggingface_hub PR #3936):
  hf jobs uv run --flavor l4x1 -s HF_TOKEN \\
    -v bucket/user/input-bucket:/input:ro \\
    -v bucket/user/output-bucket:/output \\
    glm-ocr-bucket.py /input /output
""",
    )
    parser.add_argument("input_dir", help="Directory containing images and/or PDFs")
    parser.add_argument("output_dir", help="Directory to write markdown output files")
    parser.add_argument(
        "--task",
        choices=["ocr", "formula", "table"],
        default="ocr",
        help="OCR task mode (default: ocr)",
    )
    parser.add_argument(
        "--batch-size", type=int, default=16, help="Batch size for vLLM (default: 16)"
    )
    parser.add_argument(
        "--max-model-len",
        type=int,
        default=8192,
        help="Max model context length (default: 8192)",
    )
    parser.add_argument(
        "--max-tokens",
        type=int,
        default=8192,
        help="Max output tokens (default: 8192)",
    )
    parser.add_argument(
        "--gpu-memory-utilization",
        type=float,
        default=0.8,
        help="GPU memory utilization (default: 0.8)",
    )
    parser.add_argument(
        "--pdf-dpi",
        type=int,
        default=300,
        help="DPI for PDF page rendering (default: 300)",
    )
    parser.add_argument(
        "--temperature",
        type=float,
        default=0.01,
        help="Sampling temperature (default: 0.01)",
    )
    parser.add_argument(
        "--top-p", type=float, default=0.00001, help="Top-p sampling (default: 0.00001)"
    )
    parser.add_argument(
        "--repetition-penalty",
        type=float,
        default=1.1,
        help="Repetition penalty (default: 1.1)",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Print resolved package versions",
    )

    args = parser.parse_args()

    check_cuda_availability()

    input_dir = Path(args.input_dir)
    output_dir = Path(args.output_dir)

    if not input_dir.is_dir():
        logger.error(f"Input directory does not exist: {input_dir}")
        sys.exit(1)

    output_dir.mkdir(parents=True, exist_ok=True)

    # Discover and prepare
    start_time = time.time()

    logger.info(f"Scanning {input_dir} for images and PDFs...")
    files = discover_files(input_dir)
    if not files:
        logger.error(f"No image or PDF files found in {input_dir}")
        sys.exit(1)

    pdf_count = sum(1 for f in files if f.suffix.lower() == ".pdf")
    img_count = len(files) - pdf_count
    logger.info(f"Found {img_count} image(s) and {pdf_count} PDF(s)")

    logger.info("Preparing images (rendering PDFs)...")
    items = prepare_images(files, input_dir, output_dir, args.pdf_dpi)
    if not items:
        logger.error("No processable images after preparation")
        sys.exit(1)

    logger.info(f"Total images to OCR: {len(items)}")

    # Init vLLM
    logger.info(f"Initializing vLLM with {MODEL}...")
    llm = LLM(
        model=MODEL,
        trust_remote_code=True,
        max_model_len=args.max_model_len,
        gpu_memory_utilization=args.gpu_memory_utilization,
        limit_mm_per_prompt={"image": 1},
    )

    sampling_params = SamplingParams(
        temperature=args.temperature,
        top_p=args.top_p,
        max_tokens=args.max_tokens,
        repetition_penalty=args.repetition_penalty,
    )

    # Process in batches
    errors = 0
    processed = 0
    total = len(items)

    for batch_start in range(0, total, args.batch_size):
        batch_end = min(batch_start + args.batch_size, total)
        batch = items[batch_start:batch_end]
        batch_num = batch_start // args.batch_size + 1
        total_batches = (total + args.batch_size - 1) // args.batch_size

        logger.info(f"Batch {batch_num}/{total_batches} ({processed}/{total} done)")

        try:
            messages = [make_ocr_message(img, task=args.task) for img, _ in batch]
            outputs = llm.chat(messages, sampling_params)

            for (_, md_path), output in zip(batch, outputs):
                text = output.outputs[0].text.strip()
                md_path.parent.mkdir(parents=True, exist_ok=True)
                md_path.write_text(text, encoding="utf-8")
                processed += 1

        except Exception as e:
            logger.error(f"Batch {batch_num} failed: {e}")
            # Write error markers for failed batch
            for _, md_path in batch:
                md_path.parent.mkdir(parents=True, exist_ok=True)
                md_path.write_text(f"[OCR ERROR: {e}]", encoding="utf-8")
            errors += len(batch)
            processed += len(batch)

    elapsed = time.time() - start_time
    elapsed_str = f"{elapsed / 60:.1f} min" if elapsed > 60 else f"{elapsed:.1f}s"

    logger.info("=" * 50)
    logger.info(f"Done! Processed {total} images in {elapsed_str}")
    logger.info(f"  Output: {output_dir}")
    logger.info(f"  Errors: {errors}")
    if total > 0:
        logger.info(f"  Speed: {total / elapsed:.2f} images/sec")

    if args.verbose:
        import importlib.metadata

        logger.info("--- Package versions ---")
        for pkg in ["vllm", "transformers", "torch", "pillow", "pymupdf"]:
            try:
                logger.info(f"  {pkg}=={importlib.metadata.version(pkg)}")
            except importlib.metadata.PackageNotFoundError:
                logger.info(f"  {pkg}: not installed")


if __name__ == "__main__":
    if len(sys.argv) == 1:
        print("=" * 60)
        print("GLM-OCR Bucket Script")
        print("=" * 60)
        print("\nOCR images/PDFs from a directory → markdown files.")
        print("Designed for HF Buckets mounted as volumes (PR #3936).")
        print()
        print("Usage:")
        print("  uv run glm-ocr-bucket.py INPUT_DIR OUTPUT_DIR")
        print()
        print("Examples:")
        print("  uv run glm-ocr-bucket.py ./images ./output")
        print("  uv run glm-ocr-bucket.py /input /output --task table")
        print()
        print("HF Jobs with bucket volumes:")
        print("  hf jobs uv run --flavor l4x1 -s HF_TOKEN \\")
        print("    -v bucket/user/ocr-input:/input:ro \\")
        print("    -v bucket/user/ocr-output:/output \\")
        print("    glm-ocr-bucket.py /input /output")
        print()
        print("For full help: uv run glm-ocr-bucket.py --help")
        sys.exit(0)

    main()
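
Because the output tree mirrors the input tree, a finished run can be sanity-checked with a few lines of pathlib. A minimal sketch, assuming the `/input` and `/output` mount points from the examples above; PDF pages land in per-PDF subdirectories, so only plain images are compared here:

    from pathlib import Path

    # Same extension set the script itself scans for.
    exts = {".png", ".jpg", ".jpeg", ".tiff", ".tif", ".bmp", ".webp"}
    inputs = {p.relative_to("/input").with_suffix("")
              for p in Path("/input").rglob("*") if p.suffix.lower() in exts}
    outputs = {p.relative_to("/output").with_suffix("")
               for p in Path("/output").rglob("*.md")}
    print("missing:", sorted(str(p) for p in inputs - outputs))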
pp-doclayout.py
DELETED
@@ -1,1159 +0,0 @@
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "paddlepaddle-gpu>=3.0.0",
#     "paddleocr>=3.0.0",
#     "opencv-contrib-python-headless",
#     "datasets>=4.0.0",
#     "huggingface-hub>=1.6.0",
#     "pyarrow>=15.0",
#     "pillow",
#     "numpy",
#     "tqdm",
# ]
#
# [tool.uv]
# # PaddleOCR/PaddleX pull in opencv-contrib-python (full) which needs system
# # libGL.so.1 — not present in the slim uv-on-bookworm image used by HF Jobs.
# # Swap to the headless cv2 variant (same `import cv2`, no GUI deps).
# override-dependencies = [
#     "opencv-contrib-python ; python_version < '0'",
#     "opencv-python ; python_version < '0'",
# ]
#
# [[tool.uv.index]]
# name = "paddle"
# url = "https://www.paddlepaddle.org.cn/packages/stable/cu126/"
# explicit = true
#
# [tool.uv.sources]
# paddlepaddle-gpu = { index = "paddle" }
# ///

"""
Detect document layout regions (text/title/table/figure/formula/...) with PP-DocLayout-L.

Runs PaddleOCR's PP-DocLayout-L (or M / S / plus-L variant) over an image source
and emits per-image bounding-box predictions. Unlike the OCR scripts in this repo,
this does NOT extract text — it only locates and classifies regions.

Source can be:
  - HF dataset repo (default): "namespace/dataset"
  - HF bucket of image files: "hf://buckets/namespace/bucket/optional/prefix"

Sink can be:
  - HF dataset repo (default): "namespace/dataset" (one push at end + dataset card)
  - HF bucket: "hf://buckets/namespace/bucket/run-name" (incremental parquet
    shards, resumable, no git overhead)

Output schema (column `layout` is a JSON string):
  [{"bbox": [x1, y1, x2, y2], "label": "text", "score": 0.97, "cls_id": 2}, ...]

Coordinates are in the original input-image pixel space.

Example commands:

    # Dataset -> dataset (smoke on L4)
    hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\
        davanstrien/ufo-ColPali pp-doclayout-smoke \\
        --max-samples 3 --shuffle --seed 42 --private

    # Dataset -> bucket (incremental shards, resumable)
    hf buckets create davanstrien/pp-doclayout-scratch --exist-ok
    hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\
        davanstrien/ufo-ColPali \\
        hf://buckets/davanstrien/pp-doclayout-scratch/run1 \\
        --max-samples 20 --shard-size 5

    # Bucket of images -> dataset
    hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\
        hf://buckets/davanstrien/pp-doclayout-images \\
        pp-doclayout-from-bucket --private
"""

import argparse
import io
import json
import logging
import os
import sys
import time
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union

import numpy as np
from PIL import Image
from tqdm.auto import tqdm

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------

VALID_MODELS = [
    "PP-DocLayout-L",
    "PP-DocLayout-M",
    "PP-DocLayout-S",
    "PP-DocLayout_plus-L",
]

MODEL_SIZES = {
    "PP-DocLayout-L": "~123M params (RT-DETR-L backbone)",
    "PP-DocLayout-M": "~22M params (PicoDet-M)",
    "PP-DocLayout-S": "~4M params (PicoDet-S)",
    "PP-DocLayout_plus-L": "~123M params, 20-class plus variant",
}

IMAGE_EXTENSIONS = {
    ".jpg", ".jpeg", ".png", ".tif", ".tiff", ".webp", ".bmp", ".jp2", ".j2k",
}

BUCKET_PREFIX = "hf://buckets/"


# ---------------------------------------------------------------------------
# URL helpers
# ---------------------------------------------------------------------------


def is_bucket_url(s: str) -> bool:
    return s.startswith(BUCKET_PREFIX)


def parse_bucket_url(url: str) -> Tuple[str, str]:
    """Split `hf://buckets/ns/bucket/path/in/bucket` into (`ns/bucket`, `path/in/bucket`)."""
    if not is_bucket_url(url):
        raise ValueError(f"Not a bucket URL: {url}")
    rest = url[len(BUCKET_PREFIX) :].strip("/")
    parts = rest.split("/", 2)
    if len(parts) < 2:
        raise ValueError(
            f"Bucket URL must include namespace and bucket name: {url}"
        )
    bucket_id = f"{parts[0]}/{parts[1]}"
    prefix = parts[2] if len(parts) > 2 else ""
    return bucket_id, prefix
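
# e.g. parse_bucket_url("hf://buckets/user/scratch/run1") -> ("user/scratch", "run1")
#      parse_bucket_url("hf://buckets/user/scratch")      -> ("user/scratch", "")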


# ---------------------------------------------------------------------------
# Image helpers
# ---------------------------------------------------------------------------


def to_pil(image: Union[Image.Image, Dict[str, Any], str, bytes]) -> Image.Image:
    if isinstance(image, Image.Image):
        return image.convert("RGB")
    if isinstance(image, dict) and "bytes" in image:
        return Image.open(io.BytesIO(image["bytes"])).convert("RGB")
    if isinstance(image, (bytes, bytearray)):
        return Image.open(io.BytesIO(image)).convert("RGB")
    if isinstance(image, str):
        return Image.open(image).convert("RGB")
    raise ValueError(f"Unsupported image type: {type(image)}")


def pil_to_array(pil_img: Image.Image) -> np.ndarray:
    """RGB PIL -> uint8 ndarray. PaddleOCR's predict() accepts numpy arrays directly."""
    return np.asarray(pil_img, dtype=np.uint8)


# ---------------------------------------------------------------------------
# Result extraction
# ---------------------------------------------------------------------------


def extract_detections(result: Any) -> List[Dict[str, Any]]:
    """Pull a clean list of detections out of a paddleocr LayoutDetection result."""
    payload = result.json
    res = payload.get("res", payload) if isinstance(payload, dict) else {}
    boxes = res.get("boxes", []) if isinstance(res, dict) else []
    detections = []
    for box in boxes:
        coord = box.get("coordinate") or box.get("bbox") or []
        coord = [float(x) for x in coord]
        detections.append(
            {
                "bbox": coord,
                "label": box.get("label"),
                "score": float(box.get("score", 0.0)),
                "cls_id": int(box.get("cls_id", -1)),
            }
        )
    return detections
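
# One extracted detection, per the schema in the module docstring
# (values illustrative):
#   {"bbox": [14.0, 28.5, 580.2, 96.0], "label": "title", "score": 0.97, "cls_id": 1}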


# ---------------------------------------------------------------------------
# Sources
# ---------------------------------------------------------------------------


@dataclass
class SourceItem:
    key: str  # stable identifier per image (used for dedup/resume)
    image: Image.Image
    extras: Dict[str, Any]  # original row fields (only populated for dataset source)


def iter_dataset_images(
    dataset_id: str,
    image_column: str,
    split: str,
    shuffle: bool,
    seed: int,
    max_samples: Optional[int],
):
    """Iterate (key, PIL) pairs from an HF dataset repo.

    Returns: (iterator, total, dataset_reference). The dataset reference is the
    post-shuffle/post-select Dataset, kept around so the dataset-repo sink can
    `add_column("layout", ...)` and preserve the original schema (especially
    Image-type columns).
    """
    from datasets import load_dataset

    logger.info(f"Loading dataset: {dataset_id} (split={split})")
    ds = load_dataset(dataset_id, split=split)

    if image_column not in ds.column_names:
        raise ValueError(
            f"Column '{image_column}' not found. Available: {ds.column_names}"
        )

    if shuffle:
        logger.info(f"Shuffling with seed {seed}")
        ds = ds.shuffle(seed=seed)
    if max_samples:
        ds = ds.select(range(min(max_samples, len(ds))))
        logger.info(f"Limited to {len(ds)} samples")

    total = len(ds)

    def gen() -> Iterator[SourceItem]:
        for i in range(total):
            row = ds[i]
            yield SourceItem(
                key=f"row-{i:08d}",
                image=to_pil(row[image_column]),
                extras={},  # original schema is preserved by the sink via the dataset ref
            )

    return gen(), total, ds


SOURCE_PATHS_SNAPSHOT = "_source_paths.json"


def _bucket_snapshot_path(output_url: str) -> Tuple[str, str]:
    """Return (bucket_id, key) for the source-paths snapshot inside an output bucket."""
    out_bucket_id, out_prefix = parse_bucket_url(output_url)
    snapshot_key = (
        f"{out_prefix}/{SOURCE_PATHS_SNAPSHOT}".lstrip("/")
        if out_prefix
        else SOURCE_PATHS_SNAPSHOT
    )
    return out_bucket_id, snapshot_key


def iter_bucket_images(
    bucket_url: str,
    shuffle: bool,
    seed: int,
    max_samples: Optional[int],
    hf_token: Optional[str],
    output_url: Optional[str] = None,
) -> Tuple[Iterator[SourceItem], int]:
    """Glob image files under a bucket prefix and stream them via HfFileSystem.

    If `output_url` is a bucket, the resolved source-path list is snapshotted to
    `<output>/_source_paths.json` on first run. Subsequent runs against the same
    output prefix reuse that snapshot, so resume stays consistent even if the
    source bucket grows or `--shuffle`/`--max-samples` would otherwise pick a
    different subset on the second run.
    """
    from huggingface_hub import HfApi, HfFileSystem

    bucket_id, prefix = parse_bucket_url(bucket_url)
    fs = HfFileSystem(token=hf_token)
    base = f"{BUCKET_PREFIX}{bucket_id}/{prefix}".rstrip("/")

    snapshot_bucket_id: Optional[str] = None
    snapshot_key: Optional[str] = None
    cached_paths: Optional[List[str]] = None

    if output_url and is_bucket_url(output_url):
        snapshot_bucket_id, snapshot_key = _bucket_snapshot_path(output_url)
        snapshot_url = f"{BUCKET_PREFIX}{snapshot_bucket_id}/{snapshot_key}"
        try:
            with fs.open(snapshot_url, "rb") as f:
                snapshot = json.load(f)
            if snapshot.get("source_url") != bucket_url:
                logger.warning(
                    f"Output prefix already has a snapshot referencing a "
                    f"different source ({snapshot.get('source_url')!r} vs "
                    f"{bucket_url!r}). Ignoring and re-listing."
                )
            else:
                cached_paths = snapshot["paths"]
                logger.info(
                    f"Reusing existing snapshot of {len(cached_paths)} source paths "
                    f"(written {snapshot.get('created_at', 'unknown')})"
                )
        except FileNotFoundError:
            pass
        except Exception as e:
            logger.warning(f"Could not read existing snapshot ({e}); re-listing.")

    if cached_paths is not None:
        all_paths = cached_paths
    else:
        logger.info(f"Listing images under {base}")
        all_paths = []
        try:
            for entry in fs.find(base, detail=False):
                ext = Path(entry).suffix.lower()
                if ext in IMAGE_EXTENSIONS:
                    all_paths.append(entry)
        except FileNotFoundError as e:
            raise ValueError(f"Bucket prefix not found: {base}") from e

        if not all_paths:
            raise ValueError(
                f"No image files (any of {sorted(IMAGE_EXTENSIONS)}) under {base}"
            )

        all_paths.sort()
        if shuffle:
            rng = np.random.default_rng(seed)
            rng.shuffle(all_paths)
        if max_samples:
            all_paths = all_paths[:max_samples]

        # Persist the chosen list so resume runs see exactly this set.
        if snapshot_bucket_id is not None and snapshot_key is not None:
            api = HfApi(token=hf_token)
            payload = {
                "source_url": bucket_url,
                "shuffle": shuffle,
                "seed": seed,
                "max_samples": max_samples,
                "created_at": datetime.now(timezone.utc).isoformat(),
                "paths": all_paths,
            }
            api.batch_bucket_files(
                snapshot_bucket_id,
                add=[(json.dumps(payload).encode(), snapshot_key)],
                token=hf_token,
            )
            logger.info(
                f"Wrote source-path snapshot ({len(all_paths)} paths) to "
                f"hf://buckets/{snapshot_bucket_id}/{snapshot_key}"
            )

    total = len(all_paths)
    logger.info(f"Found {total} images in bucket")

    def key_for(path: str) -> str:
        # Use the full bucket path (`buckets/<id>/<rel>`) as returned by
        # fs.find. This is stable across reruns (so resume works), and the
        # stored value in `source_path` is fully addressable — open via
        # HfFileSystem directly with `hf://` re-prepended.
        return path

    def gen() -> Iterator[SourceItem]:
        for path in all_paths:
            with fs.open(path, "rb") as f:
                data = f.read()
            yield SourceItem(
                key=key_for(path),
                image=to_pil(data),
                extras={"__source_path": key_for(path)},
            )

    return gen(), total


# ---------------------------------------------------------------------------
# Sinks
# ---------------------------------------------------------------------------


class DatasetRepoSink:
    """Buffer all results in memory, push once at end with dataset card + inference_info.

    Two modes:
    - `original_dataset` provided (dataset-repo source): preserve the source
      schema (including Image-type columns) and just `add_column("layout", ...)`.
    - `original_dataset` is None (bucket-image source): build a Dataset from
      collected rows containing __source_path + layout.
    """

    def __init__(
        self,
        repo_id: str,
        *,
        hf_token: Optional[str],
        private: bool,
        config: Optional[str],
        create_pr: bool,
        source_id: str,
        original_dataset=None,
    ):
        self.repo_id = repo_id
        self.hf_token = hf_token
        self.private = private
        self.config = config
        self.create_pr = create_pr
        self.source_id = source_id
        self.original_dataset = original_dataset
        # Used when original_dataset is None: row-by-row buffer.
        self._rows: List[Dict[str, Any]] = []
        # Used when original_dataset is set: ordered layouts aligned with dataset rows.
        self._layouts: List[str] = []

    @property
    def kind(self) -> str:
        return "dataset"

    def already_done(self) -> set:
        return set()  # dataset sink does a single push, no resume

    def write(self, key: str, layout: List[Dict[str, Any]], extras: Dict[str, Any]) -> None:
        layout_json = json.dumps(layout, ensure_ascii=False)
        if self.original_dataset is not None:
            self._layouts.append(layout_json)
            return
        row = {"__source_key": key, "layout": layout_json}
        for k, v in extras.items():
            if isinstance(v, (str, int, float, bool)) or v is None:
                row[k] = v
        self._rows.append(row)

    def finalize(self, model_id: str, args_dict: Dict[str, Any]) -> None:
        from datasets import Dataset

        if self.original_dataset is not None:
            if len(self._layouts) != len(self.original_dataset):
                logger.warning(
                    f"Layout count ({len(self._layouts)}) != dataset rows "
                    f"({len(self.original_dataset)}); padding with empty layouts."
                )
                # Pad to keep add_column happy.
                while len(self._layouts) < len(self.original_dataset):
                    self._layouts.append("[]")
            ds = self.original_dataset.add_column("layout", self._layouts)
        else:
            if not self._rows:
                logger.warning("No rows produced; nothing to push.")
                return
            ds = Dataset.from_list(self._rows)
            if "__source_key" in ds.column_names:
                ds = ds.rename_column("__source_key", "source_path")

        inference_entry = build_inference_entry(model_id, args_dict)

        if "inference_info" in ds.column_names:
            logger.info("Updating existing inference_info column")

            def _update(example):
                try:
                    existing = (
                        json.loads(example["inference_info"])
                        if example["inference_info"]
                        else []
                    )
                except (json.JSONDecodeError, TypeError):
                    existing = []
                existing.append(inference_entry)
                return {"inference_info": json.dumps(existing)}

            ds = ds.map(_update)
        else:
            ds = ds.add_column(
                "inference_info", [json.dumps([inference_entry])] * len(ds)
            )

        logger.info(f"Pushing {len(ds)} rows to {self.repo_id}")
        push_kwargs = {
            "private": self.private,
            "token": self.hf_token,
            "max_shard_size": "500MB",
            "create_pr": self.create_pr,
            "commit_message": f"Add PP-DocLayout layout predictions ({len(ds)} samples)"
            + (f" [{self.config}]" if self.config else ""),
        }
        if self.config:
            push_kwargs["config_name"] = self.config

        max_retries = 3
        for attempt in range(1, max_retries + 1):
            try:
                if attempt > 1:
                    logger.warning("Disabling XET (fallback to HTTP upload)")
                    os.environ["HF_HUB_DISABLE_XET"] = "1"
                ds.push_to_hub(self.repo_id, **push_kwargs)
                break
            except Exception as e:
                logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
                if attempt == max_retries:
                    logger.error("All upload attempts failed.")
                    raise
                time.sleep(30 * (2 ** (attempt - 1)))

        # Dataset card
        from huggingface_hub import DatasetCard

        card = DatasetCard(
            create_dataset_card(
                source=self.source_id,
                model_name=args_dict["model_name"],
                num_samples=len(ds),
                processing_time=args_dict["processing_time"],
                output_column="layout",
                threshold=args_dict["threshold"],
                layout_nms=args_dict["layout_nms"],
            )
        )
        card.push_to_hub(self.repo_id, token=self.hf_token)
        logger.info(
            f"Done: https://huggingface.co/datasets/{self.repo_id}"
        )


class BucketShardSink:
    """Write incremental parquet shards to a bucket prefix. Resumable."""

    METADATA_FILE = "_metadata.json"
    SHARD_PATTERN = "shard-{:05d}.parquet"

    def __init__(
        self,
        bucket_url: str,
        *,
        hf_token: Optional[str],
        shard_size: int,
        include_images: bool,
        resume: bool,
        source_id: str,
    ):
        from huggingface_hub import HfApi, HfFileSystem, create_bucket

        self.bucket_url = bucket_url
        self.bucket_id, self.prefix = parse_bucket_url(bucket_url)
        self.hf_token = hf_token
        self.shard_size = shard_size
        self.include_images = include_images
        self.resume = resume
        self.source_id = source_id

        self._api = HfApi(token=hf_token)
        self._fs = HfFileSystem(token=hf_token)

        # Make sure the bucket exists. Path inside the bucket is created lazily on first write.
        try:
            create_bucket(self.bucket_id, exist_ok=True, token=hf_token)
        except Exception as e:
            # If we don't have create rights but the bucket already exists, that's fine.
            logger.warning(f"create_bucket('{self.bucket_id}') warning: {e}")

        self._buffer: List[Dict[str, Any]] = []
        self._next_shard_idx = self._discover_next_shard_idx()
        self._completed_keys = self._discover_completed_keys() if resume else set()
        if self._completed_keys:
            logger.info(
                f"Resume: found {len(self._completed_keys)} already-processed keys, will skip them"
            )

    @property
    def kind(self) -> str:
        return "bucket"

    def already_done(self) -> set:
        return self._completed_keys

    # --- internal helpers ---

    def _shard_path(self, idx: int) -> str:
        return self._join(self.SHARD_PATTERN.format(idx))

    def _join(self, name: str) -> str:
        return f"{self.prefix}/{name}".lstrip("/") if self.prefix else name

    def _list_existing_shards(self) -> List[str]:
        try:
            tree = self._api.list_bucket_tree(
                self.bucket_id, prefix=self.prefix or None, recursive=True
            )
        except Exception:
            return []
        shards: List[str] = []
        for item in tree:
            path = getattr(item, "path", None)
            ftype = getattr(item, "type", None)
            if not path or ftype not in (None, "file"):
                continue
            base = Path(path).name
            if base.startswith("shard-") and base.endswith(".parquet"):
                shards.append(path)
        return sorted(shards)

    def _discover_next_shard_idx(self) -> int:
        shards = self._list_existing_shards()
        max_idx = -1
        for s in shards:
            stem = Path(s).stem  # shard-00007
            try:
                max_idx = max(max_idx, int(stem.split("-")[-1]))
            except ValueError:
                continue
        return max_idx + 1

    def _discover_completed_keys(self) -> set:
        import pyarrow.parquet as pq

        keys: set = set()
        for shard_path in self._list_existing_shards():
            full = f"{BUCKET_PREFIX}{self.bucket_id}/{shard_path}"
            try:
                with self._fs.open(full, "rb") as f:
                    table = pq.read_table(f, columns=["__source_key"])
                keys.update(table.column("__source_key").to_pylist())
            except Exception as e:
                logger.warning(f"Could not read keys from {shard_path}: {e}")
        return keys

    def _flush(self) -> None:
        if not self._buffer:
            return
        import pyarrow as pa
        import pyarrow.parquet as pq

        # Build a stable schema. Skip the image column if not requested.
        columns = ["__source_key", "layout"]
        if self.include_images:
            columns.append("__image_bytes")
        # Carry through any extra string-coercible fields (e.g. __source_path).
        extra_keys = sorted(
            {k for row in self._buffer for k in row.keys() if k not in columns}
        )
        columns.extend(extra_keys)

        table_dict = {c: [row.get(c) for row in self._buffer] for c in columns}
        # pyarrow infers types from python objects; strings/bytes/lists handled fine.
        table = pa.Table.from_pydict(table_dict)

        buf = io.BytesIO()
        pq.write_table(table, buf, compression="zstd")
        data = buf.getvalue()

        shard_remote = self._shard_path(self._next_shard_idx)
        logger.info(
            f"Writing shard {self._next_shard_idx} ({len(self._buffer)} rows, "
            f"{len(data) / 1024 / 1024:.1f} MiB) to {shard_remote}"
        )
        self._api.batch_bucket_files(
            self.bucket_id, add=[(data, shard_remote)], token=self.hf_token
        )
        self._next_shard_idx += 1
        self._buffer.clear()

    def write(self, key: str, layout: List[Dict[str, Any]], extras: Dict[str, Any]) -> None:
        row: Dict[str, Any] = {
            "__source_key": key,
            "layout": json.dumps(layout, ensure_ascii=False),
        }
        if self.include_images and "__image_bytes" in extras:
            row["__image_bytes"] = extras["__image_bytes"]
        # Pass through string/numeric extras (skip raw PIL Image objects which
        # the dataset source never injects directly into extras anyway).
        for k, v in extras.items():
            if k in row or k == "__image_bytes":
                continue
            if isinstance(v, (str, int, float, bool)) or v is None:
                row[k] = v
        self._buffer.append(row)
        if len(self._buffer) >= self.shard_size:
            self._flush()

    def finalize(self, model_id: str, args_dict: Dict[str, Any]) -> None:
        # Flush trailing rows.
        self._flush()
        # Write/update the metadata file alongside the shards.
        meta = {
            "model_id": model_id,
            "model_name": args_dict["model_name"],
            "task_mode": "layout-detection",
            "source": self.source_id,
            "threshold": args_dict["threshold"],
            "layout_nms": args_dict["layout_nms"],
            "shard_size": args_dict["shard_size"],
            "include_images": self.include_images,
            "last_run_at": datetime.now(timezone.utc).isoformat(),
            "processing_time": args_dict.get("processing_time"),
        }
        meta_bytes = json.dumps(meta, indent=2).encode("utf-8")
        meta_path = self._join(self.METADATA_FILE)
        self._api.batch_bucket_files(
            self.bucket_id, add=[(meta_bytes, meta_path)], token=self.hf_token
        )
        logger.info(
            f"Done: https://huggingface.co/buckets/{self.bucket_id}"
            + (f"/{self.prefix}" if self.prefix else "")
        )
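
# Finished shards can be read back the same way _discover_completed_keys does.
# Sketch, assuming read access and a hypothetical "user/scratch/run1" prefix:
#   from huggingface_hub import HfFileSystem
#   import pyarrow.parquet as pq
#   fs = HfFileSystem()
#   with fs.open("hf://buckets/user/scratch/run1/shard-00000.parquet", "rb") as f:
#       table = pq.read_table(f)
#   print(table.column("layout")[0])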
|
| 707 |
-
|
| 708 |
-
|
| 709 |
-
# ---------------------------------------------------------------------------
|
| 710 |
-
# inference_info + dataset card
|
| 711 |
-
# ---------------------------------------------------------------------------
|
| 712 |
-
|
| 713 |
-
|
| 714 |
-
def build_inference_entry(model_id: str, args_dict: Dict[str, Any]) -> Dict[str, Any]:
|
| 715 |
-
return {
|
| 716 |
-
"model_id": "PaddlePaddle/" + args_dict["model_name"],
|
| 717 |
-
"model_name": args_dict["model_name"],
|
| 718 |
-
"model_size": MODEL_SIZES.get(args_dict["model_name"], "unknown"),
|
| 719 |
-
"task_mode": "layout-detection",
|
| 720 |
-
"column_name": "layout",
|
| 721 |
-
"timestamp": datetime.now(timezone.utc).isoformat(),
|
| 722 |
-
"threshold": args_dict["threshold"],
|
| 723 |
-
"layout_nms": args_dict["layout_nms"],
|
| 724 |
-
"backend": "paddleocr",
|
| 725 |
-
}
|
| 726 |
-
|
| 727 |
-
|
| 728 |
-
def create_dataset_card(
|
| 729 |
-
source: str,
|
| 730 |
-
model_name: str,
|
| 731 |
-
num_samples: int,
|
| 732 |
-
processing_time: str,
|
| 733 |
-
output_column: str,
|
| 734 |
-
threshold: float,
|
| 735 |
-
layout_nms: bool,
|
| 736 |
-
) -> str:
|
| 737 |
-
"""Render the dataset card markdown for the dataset-repo sink."""
|
| 738 |
-
if is_bucket_url(source):
|
| 739 |
-
source_link = f"[{source}]({source})"
|
| 740 |
-
else:
|
| 741 |
-
source_link = f"[{source}](https://huggingface.co/datasets/{source})"
|
| 742 |
-
|
| 743 |
-
return f"""---
|
| 744 |
-
tags:
|
| 745 |
-
- layout-detection
|
| 746 |
-
- document-processing
|
| 747 |
-
- paddleocr
|
| 748 |
-
- pp-doclayout
|
| 749 |
-
- uv-script
|
| 750 |
-
- generated
|
| 751 |
-
viewer: false
|
| 752 |
-
---
|
| 753 |
-
|
| 754 |
-
# Layout detection with {model_name}
|
| 755 |
-
|
| 756 |
-
Bounding-box layout predictions for images from {source_link}, produced by
|
| 757 |
-
PaddleOCR's [{model_name}](https://huggingface.co/PaddlePaddle/{model_name}).
|
| 758 |
-
|
| 759 |
-
## Processing details
|
| 760 |
-
|
| 761 |
-
- **Source**: {source_link}
|
| 762 |
-
- **Model**: PaddlePaddle/{model_name} ({MODEL_SIZES.get(model_name, "unknown")})
|
| 763 |
-
- **Samples**: {num_samples:,}
|
| 764 |
-
- **Processing time**: {processing_time}
|
| 765 |
-
- **Processing date**: {datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")}
|
| 766 |
-
- **Confidence threshold**: {threshold}
|
| 767 |
-
- **Layout NMS**: {"on" if layout_nms else "off"}
|
| 768 |
-
- **Output column**: `{output_column}` (JSON-encoded list of detections)
|
| 769 |
-
|
| 770 |
-
## Schema
|
| 771 |
-
|
| 772 |
-
Each row contains the original columns plus:
|
| 773 |
-
|
| 774 |
-
- `{output_column}`: JSON string. List of detections:
|
| 775 |
-
```json
|
| 776 |
-
[
|
| 777 |
-
{{"bbox": [x1, y1, x2, y2], "label": "text", "score": 0.97, "cls_id": 2}},
|
| 778 |
-
{{"bbox": [x1, y1, x2, y2], "label": "table", "score": 0.92, "cls_id": 5}}
|
| 779 |
-
]
|
| 780 |
-
```
|
| 781 |
-
Coordinates are in **original input-image pixel space** (top-left origin,
|
| 782 |
-
`[xmin, ymin, xmax, ymax]`).
|
| 783 |
-
- `inference_info`: JSON list tracking every model that has been applied to
|
| 784 |
-
this dataset (appended on each run).
|
| 785 |
-
|
| 786 |
-
## Usage
|
| 787 |
-
|
| 788 |
-
```python
|
| 789 |
-
import json
|
| 790 |
-
from datasets import load_dataset
|
| 791 |
-
|
| 792 |
-
ds = load_dataset("{{output_dataset_id}}", split="train")
|
| 793 |
-
detections = json.loads(ds[0]["{output_column}"])
|
| 794 |
-
for det in detections:
|
| 795 |
-
print(det["label"], det["score"], det["bbox"])
|
| 796 |
-
```
|
| 797 |
-
|
| 798 |
-
## Reproduction
|
| 799 |
-
|
| 800 |
-
```bash
|
| 801 |
-
hf jobs uv run --flavor l4x1 -s HF_TOKEN \\
|
| 802 |
-
https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\
|
| 803 |
-
{source} <output> --model-name {model_name}
|
| 804 |
-
```
|
| 805 |
-
|
| 806 |
-
Generated with [UV Scripts](https://huggingface.co/uv-scripts).
|
| 807 |
-
"""
|
| 808 |
-
|
| 809 |
-
|
| 810 |
-
# ---------------------------------------------------------------------------
|
| 811 |
-
# Main
|
| 812 |
-
# ---------------------------------------------------------------------------
|
| 813 |
-
|
| 814 |
-
|
| 815 |
-
def resolve_device(device: str) -> str:
|
| 816 |
-
if device == "gpu":
|
| 817 |
-
try:
|
| 818 |
-
import paddle # noqa: F401
|
| 819 |
-
|
| 820 |
-
if paddle.device.is_compiled_with_cuda() and paddle.device.cuda.device_count() > 0:
|
| 821 |
-
logger.info(
|
| 822 |
-
f"GPU available: {paddle.device.cuda.device_count()} device(s)"
|
| 823 |
-
)
|
| 824 |
-
return "gpu"
|
| 825 |
-
logger.warning("No CUDA GPU detected; falling back to CPU.")
|
| 826 |
-
return "cpu"
|
| 827 |
-
except Exception as e:
|
| 828 |
-
logger.warning(f"GPU check failed ({e}); falling back to CPU.")
|
| 829 |
-
return "cpu"
|
| 830 |
-
return device
|
| 831 |
-
|
| 832 |
-
|
| 833 |
-
def main(args: argparse.Namespace) -> None:
|
| 834 |
-
from huggingface_hub import login
|
| 835 |
-
|
| 836 |
-
start_time = datetime.now()
|
| 837 |
-
hf_token = args.hf_token or os.environ.get("HF_TOKEN")
|
| 838 |
-
if hf_token:
|
| 839 |
-
login(token=hf_token)
|
| 840 |
-
|
| 841 |
-
device = resolve_device(args.device)
|
| 842 |
-
|
| 843 |
-
# ---------- source ----------
|
| 844 |
-
original_dataset = None
|
| 845 |
-
if is_bucket_url(args.input_source):
|
| 846 |
-
src_iter, total = iter_bucket_images(
|
| 847 |
-
args.input_source,
|
| 848 |
-
shuffle=args.shuffle,
|
| 849 |
-
seed=args.seed,
|
| 850 |
-
max_samples=args.max_samples,
|
| 851 |
-
hf_token=hf_token,
|
| 852 |
-
output_url=args.output_target,
|
| 853 |
-
)
|
| 854 |
-
else:
|
| 855 |
-
src_iter, total, original_dataset = iter_dataset_images(
|
| 856 |
-
args.input_source,
|
| 857 |
-
image_column=args.image_column,
|
| 858 |
-
split=args.split,
|
| 859 |
-
shuffle=args.shuffle,
|
| 860 |
-
seed=args.seed,
|
| 861 |
-
max_samples=args.max_samples,
|
| 862 |
-
)
|
| 863 |
-
|
| 864 |
-
# ---------- sink ----------
|
| 865 |
-
if is_bucket_url(args.output_target):
|
| 866 |
-
sink: Union[BucketShardSink, DatasetRepoSink] = BucketShardSink(
|
| 867 |
-
args.output_target,
|
| 868 |
-
hf_token=hf_token,
|
| 869 |
-
shard_size=args.shard_size,
|
| 870 |
-
include_images=args.include_images,
|
| 871 |
-
resume=not args.no_resume,
|
| 872 |
-
source_id=args.input_source,
|
| 873 |
-
)
|
| 874 |
-
else:
|
| 875 |
-
sink = DatasetRepoSink(
|
| 876 |
-
args.output_target,
|
| 877 |
-
hf_token=hf_token,
|
| 878 |
-
private=args.private,
|
| 879 |
-
config=args.config,
|
| 880 |
-
create_pr=args.create_pr,
|
| 881 |
-
source_id=args.input_source,
|
| 882 |
-
original_dataset=original_dataset,
|
| 883 |
-
)
|
| 884 |
-
|
| 885 |
-
completed = sink.already_done()
|
| 886 |
-
|
| 887 |
-
# ---------- model ----------
|
| 888 |
-
if args.model_name not in VALID_MODELS:
|
| 889 |
-
raise ValueError(
|
| 890 |
-
f"Invalid model {args.model_name!r}. Choose from: {VALID_MODELS}"
|
| 891 |
-
)
|
| 892 |
-
logger.info(f"Loading PaddleOCR LayoutDetection model: {args.model_name} on {device}")
|
| 893 |
-
# PaddleX gates `import cv2` at module load time on
|
| 894 |
-
# `is_dep_available("opencv-contrib-python")`, which checks
|
| 895 |
-
# `importlib.metadata.version(...)`. We ship `opencv-contrib-python-headless`
|
| 896 |
-
# (same `cv2`, no system libGL.so.1 needed) — but that's a different
|
| 897 |
-
# distribution name, so the gate fails and `cv2` is never bound, causing
|
| 898 |
-
# NameErrors deep inside paddlex modules. Patch the metadata lookup to
|
| 899 |
-
# alias the GUI cv2 distros to the headless variant before importing
|
| 900 |
-
# paddleocr; this lets paddlex's own `import cv2` succeed naturally.
|
| 901 |
-
import importlib.metadata as _metadata
|
| 902 |
-
|
| 903 |
-
_orig_metadata_version = _metadata.version
|
| 904 |
-
|
| 905 |
-
def _patched_metadata_version(dep_name):
|
| 906 |
-
if dep_name in ("opencv-contrib-python", "opencv-python"):
|
| 907 |
-
for headless_alias in (
|
| 908 |
-
"opencv-contrib-python-headless",
|
| 909 |
-
"opencv-python-headless",
|
| 910 |
-
):
|
| 911 |
-
try:
|
| 912 |
-
return _orig_metadata_version(headless_alias)
|
| 913 |
-
except _metadata.PackageNotFoundError:
|
| 914 |
-
continue
|
| 915 |
-
return _orig_metadata_version(dep_name)
|
| 916 |
-
|
| 917 |
-
_metadata.version = _patched_metadata_version
|
| 918 |
-
|
| 919 |
-
from paddleocr import LayoutDetection
|
| 920 |
-
|
| 921 |
-
model = LayoutDetection(model_name=args.model_name, device=device)
|
| 922 |
-
|
| 923 |
-
# ---------- loop ----------
|
| 924 |
-
processed = 0
|
| 925 |
-
skipped = 0
|
| 926 |
-
errors = 0
|
| 927 |
-
pbar = tqdm(src_iter, total=total, desc=f"Layout {args.model_name}")
|
| 928 |
-
for item in pbar:
|
| 929 |
-
if item.key in completed:
|
| 930 |
-
skipped += 1
|
| 931 |
-
continue
|
| 932 |
-
try:
|
| 933 |
-
arr = pil_to_array(item.image)
|
| 934 |
-
results = model.predict(
|
| 935 |
-
arr,
|
| 936 |
-
batch_size=args.batch_size,
|
| 937 |
-
layout_nms=args.layout_nms,
|
| 938 |
-
)
|
| 939 |
-
if not results:
|
| 940 |
-
detections: List[Dict[str, Any]] = []
|
| 941 |
-
else:
|
| 942 |
-
detections = extract_detections(results[0])
|
| 943 |
-
if args.threshold and args.threshold > 0:
|
| 944 |
-
detections = [d for d in detections if d["score"] >= args.threshold]
|
| 945 |
-
except Exception as e:
|
| 946 |
-
logger.error(f"Error on {item.key}: {e}")
|
| 947 |
-
detections = []
|
| 948 |
-
errors += 1
|
| 949 |
-
|
| 950 |
-
extras = dict(item.extras)
|
| 951 |
-
if isinstance(sink, BucketShardSink) and args.include_images:
|
| 952 |
-
buf = io.BytesIO()
|
| 953 |
-
item.image.save(buf, format="PNG")
|
| 954 |
-
extras["__image_bytes"] = buf.getvalue()
|
| 955 |
-
|
| 956 |
-
sink.write(item.key, detections, extras)
|
| 957 |
-
processed += 1
|
| 958 |
-
|
| 959 |
-
duration = datetime.now() - start_time
|
| 960 |
-
processing_time_str = f"{duration.total_seconds() / 60:.2f} min"
|
| 961 |
-
logger.info(
|
| 962 |
-
f"Processed {processed} (skipped {skipped}, errors {errors}) in {processing_time_str}"
|
| 963 |
-
)
|
| 964 |
-
|
| 965 |
-
args_dict = {
|
| 966 |
-
"model_name": args.model_name,
|
| 967 |
-
"threshold": args.threshold,
|
| 968 |
-
"layout_nms": args.layout_nms,
|
| 969 |
-
"shard_size": args.shard_size,
|
| 970 |
-
"processing_time": processing_time_str,
|
| 971 |
-
}
|
| 972 |
-
sink.finalize(model_id=f"PaddlePaddle/{args.model_name}", args_dict=args_dict)
|
| 973 |
-
|
| 974 |
-
if args.verbose:
|
| 975 |
-
import importlib.metadata
|
| 976 |
-
|
| 977 |
-
logger.info("--- Resolved package versions ---")
|
| 978 |
-
for pkg in [
|
| 979 |
-
"paddlepaddle",
|
| 980 |
-
"paddlepaddle-gpu",
|
| 981 |
-
"paddleocr",
|
| 982 |
-
"huggingface-hub",
|
| 983 |
-
"datasets",
|
| 984 |
-
"pyarrow",
|
| 985 |
-
"pillow",
|
| 986 |
-
"numpy",
|
| 987 |
-
]:
|
| 988 |
-
try:
|
| 989 |
-
logger.info(f" {pkg}=={importlib.metadata.version(pkg)}")
|
| 990 |
-
except importlib.metadata.PackageNotFoundError:
|
| 991 |
-
logger.info(f" {pkg}: not installed")
|
| 992 |
-
logger.info("--- End versions ---")
|
| 993 |
-
|
| 994 |
-
|
| 995 |
-
# ---------------------------------------------------------------------------
|
| 996 |
-
# CLI
|
| 997 |
-
# ---------------------------------------------------------------------------
|
| 998 |
-
|
| 999 |
-
|
| 1000 |
-
def _print_usage_banner() -> None:
|
| 1001 |
-
print("=" * 80)
|
| 1002 |
-
print("PP-DocLayout layout detection")
|
| 1003 |
-
print("=" * 80)
|
| 1004 |
-
print(
|
| 1005 |
-
"\nDetect document layout regions (text/title/table/figure/formula/...)"
|
| 1006 |
-
)
|
| 1007 |
-
print("with PaddleOCR's PP-DocLayout-L (or M / S / plus-L variant).")
|
| 1008 |
-
print("\nModels:")
|
| 1009 |
-
for m in VALID_MODELS:
|
| 1010 |
-
print(f" {m:24s} {MODEL_SIZES.get(m, '')}")
|
| 1011 |
-
print("\nSources:")
|
| 1012 |
-
print(" - HF dataset repo: namespace/dataset")
|
| 1013 |
-
print(" - HF bucket of images: hf://buckets/namespace/bucket[/prefix]")
|
| 1014 |
-
print("\nSinks:")
|
| 1015 |
-
print(" - HF dataset repo (one push + dataset card):")
|
| 1016 |
-
print(" namespace/dataset")
|
| 1017 |
-
print(" - HF bucket (incremental shards, resumable):")
|
| 1018 |
-
print(" hf://buckets/namespace/bucket/run-name")
|
| 1019 |
-
print("\nExamples:")
|
| 1020 |
-
print("\n # Smoke test on L4 (dataset -> dataset)")
|
| 1021 |
-
print(" hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\")
|
| 1022 |
-
print(" davanstrien/ufo-ColPali pp-doclayout-smoke \\")
|
| 1023 |
-
print(" --max-samples 3 --shuffle --seed 42 --private")
|
| 1024 |
-
print("\n # Dataset -> bucket (incremental shards)")
|
| 1025 |
-
print(
|
| 1026 |
-
" hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\"
|
| 1027 |
-
)
|
| 1028 |
-
print(" davanstrien/ufo-ColPali \\")
|
| 1029 |
-
print(
|
| 1030 |
-
" hf://buckets/davanstrien/pp-doclayout-scratch/run1 \\"
|
| 1031 |
-
)
|
| 1032 |
-
print(" --max-samples 20 --shard-size 5")
|
| 1033 |
-
print("\n # Bucket of images -> dataset")
|
| 1034 |
-
print(
|
| 1035 |
-
" hf jobs uv run --flavor l4x1 -s HF_TOKEN https://huggingface.co/datasets/uv-scripts/ocr/raw/main/pp-doclayout.py \\"
|
| 1036 |
-
)
|
| 1037 |
-
print(
|
| 1038 |
-
" hf://buckets/davanstrien/pp-doclayout-images \\"
|
| 1039 |
-
)
|
| 1040 |
-
print(" pp-doclayout-from-bucket --private")
|
| 1041 |
-
print("\nFor full help, run: uv run pp-doclayout.py --help")
|
| 1042 |
-
print("=" * 80)
|
| 1043 |
-
|
| 1044 |
-
|
| 1045 |
-
def build_parser() -> argparse.ArgumentParser:
|
| 1046 |
-
p = argparse.ArgumentParser(
|
| 1047 |
-
description="PP-DocLayout layout detection over an HF dataset or bucket.",
|
| 1048 |
-
formatter_class=argparse.RawDescriptionHelpFormatter,
|
| 1049 |
-
)
|
| 1050 |
-
p.add_argument(
|
| 1051 |
-
"input_source",
|
| 1052 |
-
help="HF dataset id (namespace/dataset) OR hf://buckets/ns/bucket[/prefix]",
|
| 1053 |
-
)
|
| 1054 |
-
p.add_argument(
|
| 1055 |
-
"output_target",
|
| 1056 |
-
help="HF dataset id (namespace/dataset) OR hf://buckets/ns/bucket/run-name",
|
| 1057 |
-
)
|
| 1058 |
-
p.add_argument(
|
| 1059 |
-
"--model-name",
|
| 1060 |
-
default="PP-DocLayout-L",
|
| 1061 |
-
choices=VALID_MODELS,
|
| 1062 |
-
help="PaddleOCR layout model variant (default: PP-DocLayout-L)",
|
| 1063 |
-
)
|
| 1064 |
-
p.add_argument(
|
| 1065 |
-
"--device",
|
| 1066 |
-
default="gpu",
|
| 1067 |
-
choices=["gpu", "cpu"],
|
| 1068 |
-
help="Device for inference (default: gpu, falls back to cpu if CUDA missing)",
|
| 1069 |
-
)
|
| 1070 |
-
p.add_argument(
|
| 1071 |
-
"--batch-size",
|
| 1072 |
-
type=int,
|
| 1073 |
-
default=1,
|
| 1074 |
-
help="Per-image batch size passed to model.predict (default: 1)",
|
| 1075 |
-
)
|
| 1076 |
-
p.add_argument(
|
| 1077 |
-
"--threshold",
|
| 1078 |
-
type=float,
|
| 1079 |
-
default=0.5,
|
| 1080 |
-
help="Drop detections below this confidence (default: 0.5; 0 disables)",
|
| 1081 |
-
)
|
| 1082 |
-
p.add_argument(
|
| 1083 |
-
"--layout-nms",
|
| 1084 |
-
dest="layout_nms",
|
| 1085 |
-
action="store_true",
|
| 1086 |
-
default=True,
|
| 1087 |
-
help="Enable layout NMS (default: on)",
|
| 1088 |
-
)
|
| 1089 |
-
p.add_argument(
|
| 1090 |
-
"--no-layout-nms",
|
| 1091 |
-
dest="layout_nms",
|
| 1092 |
-
action="store_false",
|
| 1093 |
-
help="Disable layout NMS",
|
| 1094 |
-
)
|
| 1095 |
-
# Dataset-source-specific
|
| 1096 |
-
p.add_argument(
|
| 1097 |
-
"--image-column",
|
| 1098 |
-
default="image",
|
| 1099 |
-
help="Column containing images (dataset-repo source only, default: image)",
|
| 1100 |
-
)
|
| 1101 |
-
p.add_argument(
|
| 1102 |
-
"--split",
|
| 1103 |
-
default="train",
|
| 1104 |
-
help="Dataset split (dataset-repo source only, default: train)",
|
| 1105 |
-
)
|
| 1106 |
-
p.add_argument(
|
| 1107 |
-
"--max-samples", type=int, help="Limit number of samples (for testing)"
|
| 1108 |
-
)
|
| 1109 |
-
p.add_argument(
|
| 1110 |
-
"--shuffle", action="store_true", help="Shuffle source before processing"
|
| 1111 |
-
)
|
| 1112 |
-
p.add_argument(
|
| 1113 |
-
"--seed", type=int, default=42, help="Random seed for shuffle (default: 42)"
|
| 1114 |
-
)
|
| 1115 |
-
# Dataset-sink-specific
|
| 1116 |
-
p.add_argument(
|
| 1117 |
-
"--private", action="store_true", help="Private dataset output (dataset sink only)"
|
| 1118 |
-
)
|
| 1119 |
-
p.add_argument(
|
| 1120 |
-
"--config",
|
| 1121 |
-
help="Config/subset name when pushing to Hub (dataset sink only)",
|
| 1122 |
-
)
|
| 1123 |
-
p.add_argument(
|
| 1124 |
-
"--create-pr",
|
| 1125 |
-
action="store_true",
|
| 1126 |
-
help="Create PR instead of direct push (dataset sink only)",
|
| 1127 |
-
)
|
| 1128 |
-
# Bucket-sink-specific
|
| 1129 |
-
p.add_argument(
|
| 1130 |
-
"--shard-size",
|
| 1131 |
-
type=int,
|
| 1132 |
-
default=256,
|
| 1133 |
-
help="Rows per parquet shard for bucket sink (default: 256)",
|
| 1134 |
-
)
|
| 1135 |
-
p.add_argument(
|
| 1136 |
-
"--include-images",
|
| 1137 |
-
action="store_true",
|
| 1138 |
-
help="Embed source image bytes in bucket output shards (off by default)",
|
| 1139 |
-
)
|
| 1140 |
-
p.add_argument(
|
| 1141 |
-
"--no-resume",
|
| 1142 |
-
action="store_true",
|
| 1143 |
-
help="Disable resume scan when writing to a bucket sink",
|
| 1144 |
-
)
|
| 1145 |
-
# Auth + diagnostics
|
| 1146 |
-
p.add_argument("--hf-token", help="Hugging Face API token (else uses HF_TOKEN env)")
|
| 1147 |
-
p.add_argument(
|
| 1148 |
-
"--verbose",
|
| 1149 |
-
action="store_true",
|
| 1150 |
-
help="Log resolved package versions at the end",
|
| 1151 |
-
)
|
| 1152 |
-
return p
|
| 1153 |
-
|
| 1154 |
-
|
| 1155 |
-
if __name__ == "__main__":
|
| 1156 |
-
if len(sys.argv) == 1:
|
| 1157 |
-
_print_usage_banner()
|
| 1158 |
-
sys.exit(0)
|
| 1159 |
-
main(build_parser().parse_args())
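
The positional arguments above accept either a plain `namespace/dataset` id or an `hf://buckets/...` URI, so the deleted script had to dispatch on that prefix to pick a dataset-repo or a bucket backend. A minimal sketch of that URI convention, based only on the help strings above (the `parse_input_source` helper and `BucketRef` tuple are illustrative names, not the deleted implementation):

```python
from typing import NamedTuple, Optional, Union

BUCKET_PREFIX = "hf://buckets/"

class BucketRef(NamedTuple):
    namespace: str
    bucket: str
    prefix: Optional[str]  # optional path/run-name inside the bucket

def parse_input_source(value: str) -> Union[str, BucketRef]:
    """Return a BucketRef for hf://buckets/ns/bucket[/prefix] URIs, else the dataset id."""
    if not value.startswith(BUCKET_PREFIX):
        return value  # plain namespace/dataset -> dataset-repo source
    parts = value[len(BUCKET_PREFIX):].split("/", 2)
    if len(parts) < 2:
        raise ValueError(f"Expected hf://buckets/ns/bucket[/prefix], got {value!r}")
    return BucketRef(parts[0], parts[1], parts[2] if len(parts) == 3 else None)

# parse_input_source("davanstrien/ufo-ColPali") -> "davanstrien/ufo-ColPali"
# parse_input_source("hf://buckets/davanstrien/pp-doclayout-scratch/run1")
#   -> BucketRef("davanstrien", "pp-doclayout-scratch", "run1")
```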
qianfan-ocr.py
DELETED
@@ -1,628 +0,0 @@

# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "datasets>=4.0.0",
#     "huggingface-hub",
#     "pillow",
#     "vllm>=0.15.1",
#     "tqdm",
#     "toolz",
#     "torch",
# ]
# ///

"""
Convert document images to markdown using Qianfan-OCR with vLLM.

Qianfan-OCR is a 4.7B end-to-end document intelligence model from Baidu,
built on InternVL architecture with Qianfan-ViT encoder + Qwen3-4B LLM.

Features:
- #1 end-to-end model on OmniDocBench v1.5 (93.12) and OlmOCR Bench (79.8)
- Layout-as-Thought: optional reasoning phase for complex layouts via --think
- 192 language support (Latin, CJK, Arabic, Cyrillic, and more)
- Multiple task modes: OCR, table (HTML), formula (LaTeX), chart, scene text
- Key information extraction with custom prompts
- 1.024 PPS on A100 with W8A8 quantization

Model: baidu/Qianfan-OCR
License: Apache 2.0
Paper: https://arxiv.org/abs/2603.13398
"""

import argparse
import base64
import io
import json
import logging
import os
import sys
import time
from datetime import datetime
from typing import Any, Dict, List, Union

import torch
from datasets import load_dataset
from huggingface_hub import DatasetCard, login
from PIL import Image
from toolz import partition_all
from tqdm.auto import tqdm
from vllm import LLM, SamplingParams

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

MODEL = "baidu/Qianfan-OCR"

PROMPT_TEMPLATES = {
    "ocr": "Parse this document to Markdown.",
    "table": "Extract tables to HTML format.",
    "formula": "Extract formulas to LaTeX.",
    "chart": "What trends are shown in this chart?",
    "scene": "Extract all visible text from the image.",
    "kie": None,  # requires --custom-prompt
}


def check_cuda_availability():
    """Check if CUDA is available and exit if not."""
    if not torch.cuda.is_available():
        logger.error("CUDA is not available. This script requires a GPU.")
        sys.exit(1)
    else:
        logger.info(f"CUDA is available. GPU: {torch.cuda.get_device_name(0)}")


def extract_content_from_thinking(text: str, include_thinking: bool = False) -> str:
    """
    Extract final content from Qianfan-OCR's Layout-as-Thought output.

    When --think is enabled, the model generates layout analysis inside
    <think>...</think> tags before the final markdown output.
    """
    if include_thinking:
        return text.strip()

    # If no thinking tags, return as-is
    if "<think>" not in text:
        return text.strip()

    # Extract everything after </think>
    think_end = text.find("</think>")
    if think_end != -1:
        return text[think_end + 8 :].strip()

    # Thinking started but never closed — return full text
    logger.warning("Found <think> but no </think>, returning full text")
    return text.strip()

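# A worked example of the contract above: with --think set, main() appends
# "<think>" to the prompt, so the raw completion arrives as e.g.
# "<think>two columns, table first</think>\n# Title\n...". The function then
# returns only the markdown after "</think>" (len("</think>") == 8, hence the
# +8 offset), while include_thinking=True keeps the full trace verbatim.
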

def make_ocr_message(
    image: Union[Image.Image, Dict[str, Any], str],
    prompt: str,
) -> List[Dict]:
    """Create vLLM chat message with image and prompt."""
    if isinstance(image, Image.Image):
        pil_img = image
    elif isinstance(image, dict) and "bytes" in image:
        pil_img = Image.open(io.BytesIO(image["bytes"]))
    elif isinstance(image, str):
        pil_img = Image.open(image)
    else:
        raise ValueError(f"Unsupported image type: {type(image)}")

    pil_img = pil_img.convert("RGB")

    buf = io.BytesIO()
    pil_img.save(buf, format="PNG")
    data_uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"

    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": data_uri}},
                {"type": "text", "text": prompt},
            ],
        }
    ]


def create_dataset_card(
    source_dataset: str,
    model: str,
    num_samples: int,
    processing_time: str,
    batch_size: int,
    max_model_len: int,
    max_tokens: int,
    gpu_memory_utilization: float,
    prompt_mode: str,
    think: bool,
    include_thinking: bool,
    image_column: str = "image",
    split: str = "train",
) -> str:
    """Create a dataset card documenting the OCR process."""
    model_name = model.split("/")[-1]

    return f"""---
tags:
- ocr
- document-processing
- qianfan-ocr
- markdown
- uv-script
- generated
---

# Document OCR using {model_name}

This dataset contains OCR results from [{source_dataset}](https://huggingface.co/datasets/{source_dataset}) using Qianfan-OCR, Baidu's 4.7B end-to-end document intelligence model.

## Processing Details

- **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
- **Model**: [{model}](https://huggingface.co/{model})
- **Number of Samples**: {num_samples:,}
- **Processing Time**: {processing_time}
- **Processing Date**: {datetime.now().strftime("%Y-%m-%d %H:%M UTC")}

### Configuration

- **Image Column**: `{image_column}`
- **Output Column**: `markdown`
- **Dataset Split**: `{split}`
- **Batch Size**: {batch_size}
- **Prompt Mode**: {prompt_mode}
- **Layout-as-Thought**: {"Enabled" if think else "Disabled"}
- **Thinking Traces**: {"Included" if include_thinking else "Excluded"}
- **Max Model Length**: {max_model_len:,} tokens
- **Max Output Tokens**: {max_tokens:,}
- **GPU Memory Utilization**: {gpu_memory_utilization:.1%}

## Model Information

Qianfan-OCR key capabilities:
- #1 end-to-end model on OmniDocBench v1.5 (93.12)
- #1 on OlmOCR Bench (79.8)
- 192 language support
- Layout-as-Thought reasoning for complex documents
- Document parsing, table extraction, formula recognition, chart understanding
- Key information extraction

## Dataset Structure

The dataset contains all original columns plus:
- `markdown`: The extracted text in markdown format
- `inference_info`: JSON list tracking all OCR models applied

## Reproduction

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \\
    {source_dataset} \\
    <output-dataset> \\
    --image-column {image_column} \\
    --prompt-mode {prompt_mode} \\
    --batch-size {batch_size}{" --think" if think else ""}
```

Generated with [UV Scripts](https://huggingface.co/uv-scripts)
"""


def main(
    input_dataset: str,
    output_dataset: str,
    image_column: str = "image",
    batch_size: int = 8,
    max_model_len: int = 16384,
    max_tokens: int = 8192,
    temperature: float = 0.0,
    top_p: float = 1.0,
    gpu_memory_utilization: float = 0.85,
    hf_token: str = None,
    split: str = "train",
    max_samples: int = None,
    private: bool = False,
    shuffle: bool = False,
    seed: int = 42,
    prompt_mode: str = "ocr",
    think: bool = False,
    include_thinking: bool = False,
    custom_prompt: str = None,
    output_column: str = "markdown",
    config: str = None,
    create_pr: bool = False,
    verbose: bool = False,
):
    """Process images from HF dataset through Qianfan-OCR model."""

    check_cuda_availability()
    start_time = datetime.now()

    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN")
    if HF_TOKEN:
        login(token=HF_TOKEN)

    # Build prompt
    if custom_prompt:
        prompt = custom_prompt
        logger.info(f"Using custom prompt: {prompt[:80]}...")
    else:
        if prompt_mode == "kie":
            logger.error("--prompt-mode kie requires --custom-prompt")
            sys.exit(1)
        prompt = PROMPT_TEMPLATES[prompt_mode]
        logger.info(f"Using prompt mode: {prompt_mode}")

    if think:
        prompt = prompt + "<think>"
        logger.info("Layout-as-Thought enabled (appending <think> to prompt)")

    logger.info(f"Using model: {MODEL}")

    # Load dataset
    logger.info(f"Loading dataset: {input_dataset}")
    dataset = load_dataset(input_dataset, split=split)

    if image_column not in dataset.column_names:
        raise ValueError(
            f"Column '{image_column}' not found. Available: {dataset.column_names}"
        )

    if shuffle:
        logger.info(f"Shuffling dataset with seed {seed}")
        dataset = dataset.shuffle(seed=seed)

    if max_samples:
        dataset = dataset.select(range(min(max_samples, len(dataset))))
        logger.info(f"Limited to {len(dataset)} samples")

    # Initialize vLLM
    logger.info("Initializing vLLM with Qianfan-OCR")
    logger.info("This may take a few minutes on first run...")
    llm = LLM(
        model=MODEL,
        trust_remote_code=True,
        max_model_len=max_model_len,
        gpu_memory_utilization=gpu_memory_utilization,
        limit_mm_per_prompt={"image": 1},
        enforce_eager=False,
    )

    sampling_params = SamplingParams(
        temperature=temperature,
        top_p=top_p,
        max_tokens=max_tokens,
    )

    logger.info(f"Processing {len(dataset)} images in batches of {batch_size}")
    logger.info(f"Output will be written to column: {output_column}")

    # Process images in batches
    all_outputs = []

    for batch_indices in tqdm(
        partition_all(batch_size, range(len(dataset))),
        total=(len(dataset) + batch_size - 1) // batch_size,
        desc="Qianfan-OCR processing",
    ):
        batch_indices = list(batch_indices)
        batch_images = [dataset[i][image_column] for i in batch_indices]

        try:
            batch_messages = [make_ocr_message(img, prompt) for img in batch_images]
            outputs = llm.chat(batch_messages, sampling_params)

            for output in outputs:
                text = output.outputs[0].text.strip()
                if think:
                    text = extract_content_from_thinking(text, include_thinking)
                all_outputs.append(text)

        except Exception as e:
            logger.error(f"Error processing batch: {e}")
            all_outputs.extend(["[OCR ERROR]"] * len(batch_images))

    # Calculate processing time
    processing_duration = datetime.now() - start_time
    processing_time_str = f"{processing_duration.total_seconds() / 60:.1f} min"

    # Add output column
    logger.info(f"Adding '{output_column}' column to dataset")
    dataset = dataset.add_column(output_column, all_outputs)

    # Handle inference_info tracking
    inference_entry = {
        "model_id": MODEL,
        "model_name": "Qianfan-OCR",
        "column_name": output_column,
        "timestamp": datetime.now().isoformat(),
        "prompt_mode": prompt_mode if not custom_prompt else "custom",
        "think": think,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

    if "inference_info" in dataset.column_names:
        logger.info("Updating existing inference_info column")

        def update_inference_info(example):
            try:
                existing_info = (
                    json.loads(example["inference_info"])
                    if example["inference_info"]
                    else []
                )
            except (json.JSONDecodeError, TypeError):
                existing_info = []
            existing_info.append(inference_entry)
            return {"inference_info": json.dumps(existing_info)}

        dataset = dataset.map(update_inference_info)
    else:
        logger.info("Creating new inference_info column")
        inference_list = [json.dumps([inference_entry])] * len(dataset)
        dataset = dataset.add_column("inference_info", inference_list)

    # Push to hub with retry and XET fallback
    logger.info(f"Pushing to {output_dataset}")
    commit_msg = f"Add Qianfan-OCR results ({len(dataset)} samples)" + (
        f" [{config}]" if config else ""
    )
    max_retries = 3
    for attempt in range(1, max_retries + 1):
        try:
            if attempt > 1:
                logger.warning("Disabling XET (fallback to HTTP upload)")
                os.environ["HF_HUB_DISABLE_XET"] = "1"
            dataset.push_to_hub(
                output_dataset,
                private=private,
                token=HF_TOKEN,
                max_shard_size="500MB",
                **({"config_name": config} if config else {}),
                create_pr=create_pr,
                commit_message=commit_msg,
            )
            break
        except Exception as e:
            logger.error(f"Upload attempt {attempt}/{max_retries} failed: {e}")
            if attempt < max_retries:
                delay = 30 * (2 ** (attempt - 1))
                logger.info(f"Retrying in {delay}s...")
                time.sleep(delay)
            else:
                logger.error("All upload attempts failed. OCR results are lost.")
                sys.exit(1)

    # Create and push dataset card (skip when creating PR to avoid conflicts)
    if not create_pr:
        logger.info("Creating dataset card")
        card_content = create_dataset_card(
            source_dataset=input_dataset,
            model=MODEL,
            num_samples=len(dataset),
            processing_time=processing_time_str,
            batch_size=batch_size,
            max_model_len=max_model_len,
            max_tokens=max_tokens,
            gpu_memory_utilization=gpu_memory_utilization,
            prompt_mode=prompt_mode if not custom_prompt else "custom",
            think=think,
            include_thinking=include_thinking,
            image_column=image_column,
            split=split,
        )
        card = DatasetCard(card_content)
        card.push_to_hub(output_dataset, token=HF_TOKEN)

    logger.info("Qianfan-OCR processing complete!")
    logger.info(
        f"Dataset available at: https://huggingface.co/datasets/{output_dataset}"
    )
    logger.info(f"Processing time: {processing_time_str}")
    logger.info(
        f"Processing speed: {len(dataset) / processing_duration.total_seconds():.2f} images/sec"
    )

    if verbose:
        import importlib.metadata

        logger.info("--- Resolved package versions ---")
        for pkg in ["vllm", "transformers", "torch", "datasets", "pyarrow", "pillow"]:
            try:
                logger.info(f" {pkg}=={importlib.metadata.version(pkg)}")
            except importlib.metadata.PackageNotFoundError:
                logger.info(f" {pkg}: not installed")
        logger.info("--- End versions ---")


if __name__ == "__main__":
    if len(sys.argv) == 1:
        print("=" * 80)
        print("Qianfan-OCR - End-to-End Document Intelligence")
        print("=" * 80)
        print("\n4.7B model from Baidu, #1 on OmniDocBench v1.5 (93.12)")
        print("\nFeatures:")
        print("- #1 end-to-end model on OmniDocBench v1.5 and OlmOCR Bench")
        print("- Layout-as-Thought reasoning for complex documents (--think)")
        print("- 192 language support")
        print("- Multiple modes: OCR, table (HTML), formula (LaTeX), chart, scene text")
        print("- Key information extraction with custom prompts")
        print("\nExample usage:")
        print("\n1. Basic OCR:")
        print(" uv run qianfan-ocr.py input-dataset output-dataset")
        print("\n2. With Layout-as-Thought (complex documents):")
        print(" uv run qianfan-ocr.py docs output --think")
        print("\n3. Table extraction:")
        print(" uv run qianfan-ocr.py docs output --prompt-mode table")
        print("\n4. Formula extraction:")
        print(" uv run qianfan-ocr.py docs output --prompt-mode formula")
        print("\n5. Key information extraction:")
        print(
            ' uv run qianfan-ocr.py invoices output --prompt-mode kie --custom-prompt "Extract: name, date, total. Output JSON."'
        )
        print("\n6. Running on HF Jobs:")
        print(" hf jobs uv run --flavor l4x1 \\")
        print(" -s HF_TOKEN \\")
        print(
            " https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \\"
        )
        print(" input-dataset output-dataset --max-samples 10")
        print("\nFor full help, run: uv run qianfan-ocr.py --help")
        sys.exit(0)

    parser = argparse.ArgumentParser(
        description="Document OCR using Qianfan-OCR (4.7B, #1 on OmniDocBench v1.5)",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Prompt modes:
  ocr      Document parsing to Markdown (default)
  table    Table extraction to HTML format
  formula  Formula recognition to LaTeX
  chart    Chart understanding and analysis
  scene    Scene text extraction
  kie      Key information extraction (requires --custom-prompt)

Examples:
  uv run qianfan-ocr.py my-docs analyzed-docs
  uv run qianfan-ocr.py docs output --think --max-samples 50
  uv run qianfan-ocr.py docs output --prompt-mode table
  uv run qianfan-ocr.py invoices data --prompt-mode kie --custom-prompt "Extract: name, date, total."
""",
    )

    parser.add_argument("input_dataset", help="Input dataset ID from Hugging Face Hub")
    parser.add_argument("output_dataset", help="Output dataset ID for Hugging Face Hub")
    parser.add_argument(
        "--image-column",
        default="image",
        help="Column containing images (default: image)",
    )
    parser.add_argument(
        "--batch-size",
        type=int,
        default=8,
        help="Batch size for processing (default: 8)",
    )
    parser.add_argument(
        "--max-model-len",
        type=int,
        default=16384,
        help="Maximum model context length (default: 16384, reduce to 8192 if OOM on L4)",
    )
    parser.add_argument(
        "--max-tokens",
        type=int,
        default=8192,
        help="Maximum tokens to generate (default: 8192)",
    )
    parser.add_argument(
        "--temperature",
        type=float,
        default=0.0,
        help="Sampling temperature (default: 0.0, deterministic)",
    )
    parser.add_argument(
        "--top-p",
        type=float,
        default=1.0,
        help="Top-p sampling parameter (default: 1.0)",
    )
    parser.add_argument(
        "--gpu-memory-utilization",
        type=float,
        default=0.85,
        help="GPU memory utilization (default: 0.85)",
    )
    parser.add_argument("--hf-token", help="Hugging Face API token")
    parser.add_argument(
        "--split", default="train", help="Dataset split to use (default: train)"
    )
    parser.add_argument(
        "--max-samples",
        type=int,
        help="Maximum number of samples to process (for testing)",
    )
    parser.add_argument(
        "--private", action="store_true", help="Make output dataset private"
    )
    parser.add_argument(
        "--shuffle", action="store_true", help="Shuffle dataset before processing"
    )
    parser.add_argument(
        "--seed",
        type=int,
        default=42,
        help="Random seed for shuffling (default: 42)",
    )
    parser.add_argument(
        "--prompt-mode",
        choices=list(PROMPT_TEMPLATES.keys()),
        default="ocr",
        help="Prompt mode (default: ocr)",
    )
    parser.add_argument(
        "--think",
        action="store_true",
        help="Enable Layout-as-Thought reasoning (appends <think> to prompt)",
    )
    parser.add_argument(
        "--include-thinking",
        action="store_true",
        help="Include thinking traces in output (default: only final content)",
    )
    parser.add_argument(
        "--custom-prompt",
        help="Custom prompt text (overrides --prompt-mode)",
    )
    parser.add_argument(
        "--output-column",
        default="markdown",
        help="Column name for output text (default: markdown)",
    )
    parser.add_argument(
        "--config",
        help="Config/subset name when pushing to Hub (for benchmarking multiple models)",
    )
    parser.add_argument(
        "--create-pr",
        action="store_true",
        help="Create a pull request instead of pushing directly",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Log resolved package versions after processing",
    )

    args = parser.parse_args()

    main(
        input_dataset=args.input_dataset,
        output_dataset=args.output_dataset,
        image_column=args.image_column,
        batch_size=args.batch_size,
        max_model_len=args.max_model_len,
        max_tokens=args.max_tokens,
        temperature=args.temperature,
        top_p=args.top_p,
        gpu_memory_utilization=args.gpu_memory_utilization,
        hf_token=args.hf_token,
        split=args.split,
        max_samples=args.max_samples,
        private=args.private,
        shuffle=args.shuffle,
        seed=args.seed,
        prompt_mode=args.prompt_mode,
        think=args.think,
        include_thinking=args.include_thinking,
        custom_prompt=args.custom_prompt,
        output_column=args.output_column,
        config=args.config,
        create_pr=args.create_pr,
        verbose=args.verbose,
    )
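
Since this PR removes qianfan-ocr.py entirely, the core inference pattern it used is worth keeping on record: encode the page as a base64 PNG data URI, send one OpenAI-style chat message per image through `llm.chat`, and strip any Layout-as-Thought trace from the completion. A condensed, self-contained sketch under the same assumptions as the deleted code above (same model id and default prompt; batching, error handling, and the Hub upload are omitted, and `ocr_page` is an illustrative name):

```python
import base64
import io

from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(
    model="baidu/Qianfan-OCR",
    trust_remote_code=True,
    max_model_len=16384,
    limit_mm_per_prompt={"image": 1},
)
params = SamplingParams(temperature=0.0, max_tokens=8192)

def ocr_page(img: Image.Image) -> str:
    # Re-encode the image as a PNG data URI, as the deleted script did
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="PNG")
    uri = f"data:image/png;base64,{base64.b64encode(buf.getvalue()).decode()}"
    messages = [{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": uri}},
            {"type": "text", "text": "Parse this document to Markdown."},
        ],
    }]
    out = llm.chat([messages], params)[0].outputs[0].text
    # Drop a <think>...</think> trace if the prompt requested one
    return out.split("</think>", 1)[-1].strip()
```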