io.net - Inference Provider

#83
by katel13 - opened

Hi HF team,

We'd love to be considered as an Inference Provider on the Hub.

We're io.net, a GPU compute platform with an inference layer called IO Intelligence. We serve a broad catalog of open-source models via an OpenAI-compatible API, with a focus on competitive pricing and performance at scale. For HF's developer community, that means a cost-efficient option for open-source model inference without sacrificing speed or reliability.
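For readers unfamiliar with the term, "OpenAI-compatible" means the provider accepts the standard chat-completions request schema, so existing client code only needs a different base URL and API key. A minimal sketch of that request shape is below; the base URL and model name are placeholders for illustration, not confirmed io.net endpoints or catalog entries.

```python
# Sketch of the OpenAI-compatible chat-completions request shape.
# BASE_URL and the model name are hypothetical placeholders, not
# confirmed io.net values.
import json

BASE_URL = "https://api.example-provider.ai/v1"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                          # placeholder credential

def build_chat_request(model: str, user_message: str) -> dict:
    """Build a chat-completions payload in the OpenAI-compatible shape."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }

payload = build_chat_request("example-org/example-model", "Hello!")
print(json.dumps(payload, indent=2))
```

Because the schema matches, a client would POST this payload to `{BASE_URL}/chat/completions` with a `Bearer` token, exactly as it would against any other OpenAI-compatible backend.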

We've reviewed the provider registration documentation and our engineering team is ready to start the integration work. Happy to move quickly.

You can reach me at kate@io.net. Looking forward to the conversation.
