# Genesis_FineTune_Core_25k (Master Scholar)

**Developer / Brand:** **Within Us AI**

A Genesis-style, end-to-end **advanced fine-tuning starter pack**: five datasets (5,000 records each) packaged together to train an assistant’s *behavioral core*.

## What this pack trains
1) **Instruction-following (multi-turn)** — constraints, formatting, spec compliance
2) **Tool / function calling** — schema-accurate tool calls + observations
3) **Preference learning (DPO-style)** — chosen vs. rejected responses
4) **Long-context retrieval** — cite-the-source answering from multiple docs
5) **Code patch + debugging** — unified diffs and test intent (including defensive fixes)
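
Preference records of the kind described in item 3 are usually mapped into (prompt, chosen, rejected) triples before DPO-style training. A minimal sketch, assuming the chosen/rejected pair lives under the record's `artifacts` field (that layout, and every value below, is a hypothetical illustration, not documented by the pack):

```python
# Hypothetical preference_pair record; the `artifacts` keys `chosen` and
# `rejected` are an assumed layout for illustration only.
record = {
    "id": "pref-000042",
    "type": "preference_pair",
    "prompt": "Explain what a unified diff is in one paragraph.",
    "response": "",  # unused for preference pairs in this sketch
    "meta": {"safety_label": "allowed"},
    "artifacts": {
        "chosen": "A unified diff shows changed lines with context...",
        "rejected": "idk google it",
    },
}

def to_dpo_triple(rec: dict) -> tuple[str, str, str]:
    """Map a preference_pair record to the (prompt, chosen, rejected)
    triple that DPO-style trainers typically consume."""
    a = rec["artifacts"]
    return rec["prompt"], a["chosen"], a["rejected"]

prompt, chosen, rejected = to_dpo_triple(record)
```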

## Files (25,000 records total)
- instruct_multiturn_5k.jsonl
- tool_use_functioncalling_5k.jsonl
- preference_dpo_5k.jsonl
- longcontext_retrieval_5k.jsonl
- codepatch_debug_5k.jsonl
- README.md
- dataset_card.md

## Unified schema
All records share the same top-level schema: `id`, `type`, `prompt`, `response`, `meta`, and an optional `artifacts` field.
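
Under this schema, a single JSONL line might look like the sketch below. Only the top-level key names come from the section above; the example values and the `safety_label` key inside `meta` are hypothetical:

```python
# Hypothetical example record (values are illustrative, not taken from
# the actual dataset); top-level key names follow the unified schema.
record = {
    "id": "instruct-000001",
    "type": "instruction_multiturn",
    "prompt": "Summarize the following text in exactly two sentences: ...",
    "response": "Sentence one. Sentence two.",
    "meta": {"safety_label": "allowed"},  # assumed meta key
    "artifacts": None,  # optional; may be omitted entirely
}

REQUIRED_KEYS = {"id", "type", "prompt", "response", "meta"}

def is_valid(rec: dict) -> bool:
    """Check that a record carries the required top-level keys and
    that the only extra key, if any, is the optional `artifacts`."""
    keys = set(rec)
    return REQUIRED_KEYS <= keys and keys - REQUIRED_KEYS <= {"artifacts"}

print(is_valid(record))                    # True
print(is_valid({"id": "x", "type": "y"}))  # False
```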

## Composition
By type:

```json
{
  "instruction_multiturn": 5000,
  "tool_use": 5000,
  "preference_pair": 5000,
  "long_context_retrieval": 5000,
  "code_patch": 5000
}
```

By safety label:

```json
{
  "allowed": 24011,
  "refuse": 989
}
```
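
The tallies above can be reproduced from the JSONL files with a short script. A minimal sketch, assuming each record's `type` sits at the top level and its safety label under `meta["safety_label"]` (the label's location is an assumption about the layout):

```python
import json
from collections import Counter

def composition(paths):
    """Tally records by `type` and by safety label across JSONL files."""
    by_type, by_safety = Counter(), Counter()
    for path in paths:
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                rec = json.loads(line)
                by_type[rec["type"]] += 1
                # `meta.safety_label` is an assumed location for the label
                by_safety[rec["meta"].get("safety_label", "allowed")] += 1
    return dict(by_type), dict(by_safety)

# Usage: composition(["instruct_multiturn_5k.jsonl", ...])
```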

## Attribution
**Within Us AI — Genesis_FineTune_Core_25k**

## License
Apache-2.0