GPT-4 Tokenizer
A 🤗-compatible version of the GPT-4 tokenizer (adapted from openai/tiktoken). This means it can be used with Hugging Face libraries including Transformers, Tokenizers, and Transformers.js.
Example usage:
Transformers/Tokenizers

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('Xenova/gpt-4')
assert tokenizer.encode('hello world') == [15339, 1917]
```
Transformers.js

```javascript
import { AutoTokenizer } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/gpt-4');
const tokens = tokenizer.encode('hello world'); // [15339, 1917]
```