Upload 2 files
- README.md (+14 −0)
- README_CN.md (+14 −0)
README.md (CHANGED)

@@ -146,6 +146,8 @@ Each instance is a JSON object with the following fields:
 
 ## 💻 Usage
 
+### 1. Load the Dataset
+
 ```python
 from datasets import load_dataset
 
@@ -159,6 +161,18 @@ skill_tasks = [d for d in dataset["train"] if d["category"] == "Skill"]
 claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "claudecode"]
 ```
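The filtering pattern shown above generalizes to one-pass bucketing. A minimal sketch using only the `category` and `scaffold.name` fields from the snippet above — the sample records here are made up for illustration; real records come from `dataset["train"]`:

```python
from collections import Counter

# Hypothetical sample records mirroring the fields used above;
# real records come from dataset["train"].
records = [
    {"category": "Skill", "scaffold": {"name": "claudecode"}},
    {"category": "Skill", "scaffold": {"name": "claudecode"}},
    {"category": "Tool", "scaffold": {"name": "other"}},
]

# One pass over the split, counting (category, scaffold) pairs.
counts = Counter((d["category"], d["scaffold"]["name"]) for d in records)
print(counts[("Skill", "claudecode")])  # 2
```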
 
+### 2. Evaluation Pipeline
+
+The evaluation consists of three steps:
+
+| Step | Description |
+|------|-------------|
+| **Environment Setup** | Pull the Docker image and start the task-environment container |
+| **Trajectory Collection** | Send the system_prompt and user_query to the agent under test and collect the full interaction trajectory |
+| **Scoring** | Use an LLM-as-Judge to perform a binary evaluation against the checklist |
+
+> ⚠️ **Note**: The complete evaluation scripts are under active development and will be open-sourced soon. Stay tuned for updates.
+
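The three-step pipeline described above can be sketched end to end. Everything below is a hypothetical illustration, not the unreleased official scripts: the environment, agent, and judge are injected as callables so the Docker and LLM calls are stubbed out.

```python
def run_evaluation(task, start_environment, agent, judge):
    # Step 1: Environment Setup — in practice, pull the task's Docker
    # image and start the container; here any callable stands in.
    env = start_environment(task["image"])

    # Step 2: Trajectory Collection — drive the agent under test with
    # the task's system_prompt and user_query; record the trajectory.
    trajectory = agent(env, task["system_prompt"], task["user_query"])

    # Step 3: Scoring — one binary LLM-as-Judge verdict per checklist
    # item; the task passes only if every item is judged satisfied.
    verdicts = [judge(trajectory, item) for item in task["checklist"]]
    return {"trajectory": trajectory, "passed": all(verdicts)}

# Stubs for illustration only; all names here are hypothetical.
task = {
    "image": "example/task:latest",
    "system_prompt": "You are a coding agent.",
    "user_query": "Create hello.txt",
    "checklist": ["hello.txt exists"],
}
result = run_evaluation(
    task,
    start_environment=lambda image: {"image": image},
    agent=lambda env, sp, uq: [("user", uq), ("agent", "done")],
    judge=lambda trajectory, item: True,
)
print(result["passed"])  # True
```

Injecting the three stages as callables keeps the control flow testable without Docker or an LLM endpoint.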
 ## ⚖️ Evaluation Metrics
 
 | Metric | Definition | What it measures |
README_CN.md (CHANGED)

@@ -146,6 +146,8 @@ docker run -it --rm minimaxai/feedfeed:<tag> /bin/bash
 
 ## 💻 Usage
 
+### 1. Load the Dataset
+
 ```python
 from datasets import load_dataset
 
@@ -159,6 +161,18 @@ skill_tasks = [d for d in dataset["train"] if d["category"] == "Skill"]
 claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "claudecode"]
 ```
 
+### 2. Evaluation Pipeline
+
+The evaluation consists of three steps:
+
+| Step | Description |
+|------|------|
+| **Environment Setup** | Pull the Docker image and start the task-environment container |
+| **Trajectory Collection** | Send the system_prompt and user_query to the agent under test and collect the full interaction trajectory |
+| **Scoring** | Use an LLM-as-Judge to make a binary judgment of the trajectory against the checklist |
+
+> ⚠️ **Note**: The complete evaluation scripts are being finalized and will be open-sourced soon. Watch this repository for updates.
+
 ## ⚖️ Evaluation Metrics
 
 | Metric | Definition | What it measures |