Q-Heart: ECG Question Answering via Knowledge-Informed Multimodal LLMs
Paper • arXiv:2505.06296 • Published
How to use Manhph2211/Q-HEART with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="Manhph2211/Q-HEART", trust_remote_code=True)

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("Manhph2211/Q-HEART", trust_remote_code=True, dtype="auto")

How to use Manhph2211/Q-HEART with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Manhph2211/Q-HEART"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Manhph2211/Q-HEART",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
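The same endpoint can also be called from Python. A minimal sketch using only the standard library; the payload mirrors the curl call above, and `http://localhost:8000` assumes vLLM's default port:

```python
import json
from urllib import request

def build_completion_request(base_url: str, prompt: str) -> request.Request:
    """Build an OpenAI-compatible /v1/completions request for the vLLM server."""
    payload = {
        "model": "Manhph2211/Q-HEART",
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }
    return request.Request(
        url=f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("http://localhost:8000", "Once upon a time,")
# Sending the request requires a running server:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```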
How to use Manhph2211/Q-HEART with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "Manhph2211/Q-HEART" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Manhph2211/Q-HEART",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Or run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "Manhph2211/Q-HEART" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Manhph2211/Q-HEART",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

How to use Manhph2211/Q-HEART with Docker Model Runner:
docker model run hf.co/Manhph2211/Q-HEART
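Whichever server is used, the completion endpoints above return JSON in the OpenAI completions schema. A short sketch of extracting the generated text; the response below is an illustrative stub, not real model output:

```python
import json

def extract_text(response_json: str) -> str:
    """Pull the first generated completion out of an OpenAI-compatible response."""
    response = json.loads(response_json)
    return response["choices"][0]["text"]

# Illustrative stub in the OpenAI completions schema (not real Q-HEART output).
stub = json.dumps({
    "object": "text_completion",
    "model": "Manhph2211/Q-HEART",
    "choices": [{"index": 0, "text": "example completion", "finish_reason": "length"}],
})
print(extract_text(stub))  # example completion
```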
After gaining access to the meta-llama/Llama-3.2-1B-Instruct model and installing compatible package versions, we can run:
# transformers==4.43.3 accelerate==1.0.1 peft==0.13.2
from transformers import AutoModel
model = AutoModel.from_pretrained("Manhph2211/Q-HEART", trust_remote_code=True, torch_dtype="auto")
Or, install and run from source:
git clone https://github.com/manhph2211/Q-HEART.git && cd Q-HEART
conda create -n qheart python=3.9
conda activate qheart
pip install torch --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
Download the checkpoint and place it at ckpts/pytorch_model.bin, then run evaluation:
python main.py --model_type meta-llama/Llama-3.2-1B-Instruct --mapping_type Transformer

Citation:
@article{pham2025q,
title={Q-Heart: ECG Question Answering via Knowledge-Informed Multimodal LLMs},
author={Pham, Hung Manh and Tang, Jialu and Saeed, Aaqib and Ma, Dong},
journal={arXiv preprint arXiv:2505.06296},
year={2025}
}
@inproceedings{pham2025qheart,
title = {Q-HEART: ECG Question Answering via Knowledge-Informed Multimodal LLMs},
author = {Pham, Hung Manh and Tang, Jialu and Saeed, Aaqib and Ma, Dong},
booktitle = {Proceedings of the European Conference on Artificial Intelligence (ECAI)},
series = {Frontiers in Artificial Intelligence and Applications},
volume = {413},
pages = {4545--4552},
year = {2025},
publisher = {IOS Press},
doi = {10.3233/FAIA251356}
}
Base model
meta-llama/Llama-3.2-1B-Instruct