AuriStream Parallel - Speech Language Model
AuriStream Parallel is a discrete diffusion speech language model by Greta Tuckute and Klemen Kotar.
This repository contains shared model code for AuriStream Parallel checkpoints.
Overview
AuriStream Parallel uses:
- bidirectional transformer attention
- grouped token projection (`group_size=4` by default)
- parallel token heads
- partial-masking diffusion objective
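The two less common ingredients here are grouped token projection and the partial-masking objective. The sketch below illustrates both ideas in plain Python; the helper functions `group_tokens` and `partial_mask` are hypothetical illustrations, not part of the repository's code:

```python
import random

def group_tokens(tokens, group_size=4):
    """Split a flat token sequence into fixed-size groups.

    Illustrates the shape of grouped token projection with group_size=4:
    each group of tokens is handled jointly rather than one at a time.
    """
    if len(tokens) % group_size != 0:
        raise ValueError("sequence length must be divisible by group_size")
    return [tokens[i:i + group_size] for i in range(0, len(tokens), group_size)]

def partial_mask(tokens, mask_id, mask_ratio, rng=random):
    """Replace a random subset of tokens with mask_id.

    Sketches a partial-masking diffusion objective: the model sees the
    partially corrupted sequence and is trained to recover the original
    tokens at the masked positions (kept here in `targets`).
    """
    corrupted, targets = [], []
    for t in tokens:
        if rng.random() < mask_ratio:
            corrupted.append(mask_id)
            targets.append(t)
        else:
            corrupted.append(t)
            targets.append(None)  # unmasked positions carry no loss
    return corrupted, targets

tokens = [3, 7, 1, 9, 4, 2, 8, 5]
groups = group_tokens(tokens, group_size=4)
corrupted, targets = partial_mask(tokens, mask_id=-1, mask_ratio=0.5)
```

With bidirectional attention, the model can use both left and right unmasked context when filling in the masked positions, which is what distinguishes this setup from a causal, left-to-right language model.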
Usage
Load a checkpoint repository that references this base code:
```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "TuKoResearch/AuriStreamParallel100M_Group4_BigAudioDataset_180k",
    trust_remote_code=True,
)
```
Files
- `configuration_auristream_parallel.py` - Configuration class
- `modeling_auristream_parallel.py` - Model implementation
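Checkpoint repositories typically reference shared remote code like this through an `auto_map` entry in their `config.json`, which is why loading them requires `trust_remote_code=True`. A sketch of what such a mapping could look like, assuming the class names `AuriStreamParallelConfig` and `AuriStreamParallelModel` (these names are not confirmed by this repository):

```json
{
  "model_type": "auristream_parallel",
  "auto_map": {
    "AutoConfig": "TuKoResearch/AuriStreamParallel-base--configuration_auristream_parallel.AuriStreamParallelConfig",
    "AutoModel": "TuKoResearch/AuriStreamParallel-base--modeling_auristream_parallel.AuriStreamParallelModel"
  }
}
```

The `repo--module.Class` form tells Transformers to fetch the module from the named base repository rather than from the checkpoint repository itself.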