Dataset: snap-stanford/stark
How to use GagaLey/MoR with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="GagaLey/MoR")
```

```python
# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("GagaLey/MoR", dtype="auto")
```

This is the model card for our paper *Mixture of Structural-and-Textual Retrieval over Text-rich Graph Knowledge Bases*.
Code: https://github.com/Yoega/MoR
To set up the environment, you can install dependencies using Conda or pip:

```shell
conda env create -f mor_env.yml
conda activate your_env_name  # Replace with the actual environment name
pip install -r requirements.txt
```
To run the inference script, execute the following command in the terminal:

```shell
bash eval_mor.sh
```

This script automatically processes three datasets using the pre-trained planning graph generator and the pre-trained reranker.
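The per-dataset behavior of the evaluation script can be sketched as a small driver loop. This is a hypothetical reconstruction, not the repository's actual code: the dataset names (`amazon`, `mag`, `prime`) are assumed from the STaRK benchmark, and the `eval.py` entry point and its flags are illustrative.

```python
# Hypothetical sketch of what eval_mor.sh does: loop over the three
# datasets and invoke an evaluation entry point for each.
# Dataset names are assumed from the STaRK benchmark; the module name
# and flags below are illustrative, not the repository's actual CLI.

DATASETS = ["amazon", "mag", "prime"]  # assumed STaRK datasets

def build_eval_commands(datasets):
    """Return one shell command per dataset, using the pre-trained
    planner and reranker checkpoints."""
    return [
        f"python eval.py --dataset_name {ds} "
        f"--use_pretrained_planner --use_pretrained_reranker"
        for ds in datasets
    ]

if __name__ == "__main__":
    for cmd in build_eval_commands(DATASETS):
        print(cmd)
```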
To train MoR yourself rather than using the pre-trained checkpoints, run the training and reasoning scripts:

```shell
bash train_planner.sh   # train the planning graph generator
bash run_reasoning.sh   # run the reasoning stage
bash train_reranker.sh  # train the reranker
```
If you use Azure OpenAI, pass your Azure credentials:

```shell
python script.py --model "model_name" \
    --dataset_name "dataset_name" \
    --azure_api_key "your_azure_key" \
    --azure_endpoint "your_azure_endpoint" \
    --azure_api_version "your_azure_version"
```
If you use the OpenAI API instead, pass your OpenAI credentials:

```shell
python script.py --model "model_name" \
    --dataset_name "dataset_name" \
    --openai_api_key "your_openai_key" \
    --openai_endpoint "your_openai_endpoint"
```
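The flags above suggest a command-line interface along these lines. The following is a hypothetical sketch of the argument parser implied by those commands, not the repository's actual `script.py`:

```python
import argparse

# Hypothetical reconstruction of the CLI implied by the commands above;
# the real script.py in the MoR repository may differ.
def build_parser():
    p = argparse.ArgumentParser(description="MoR driver (sketch)")
    p.add_argument("--model", required=True)
    p.add_argument("--dataset_name", required=True)
    # Azure OpenAI credentials (optional)
    p.add_argument("--azure_api_key")
    p.add_argument("--azure_endpoint")
    p.add_argument("--azure_api_version")
    # OpenAI credentials (optional)
    p.add_argument("--openai_api_key")
    p.add_argument("--openai_endpoint")
    return p

if __name__ == "__main__":
    # Example invocation with OpenAI-style credentials
    args = build_parser().parse_args(
        ["--model", "model_name", "--dataset_name", "dataset_name",
         "--openai_api_key", "your_openai_key"]
    )
    print(args.model, args.dataset_name)
```

Only one credential set (Azure or OpenAI) needs to be supplied per run, which is why both groups are optional in the sketch.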
Base model: meta-llama/Llama-3.2-3B-Instruct