Instructions for using JuIm/ProteinLM with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use JuIm/ProteinLM with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="JuIm/ProteinLM")

# Or load model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("JuIm/ProteinLM")
model = AutoModelForCausalLM.from_pretrained("JuIm/ProteinLM")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use JuIm/ProteinLM with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "JuIm/ProteinLM"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "JuIm/ProteinLM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Or use Docker:

```shell
docker model run hf.co/JuIm/ProteinLM
```
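The completion request shown above with curl can also be made from Python. Below is a minimal sketch using only the standard library; the endpoint URL and payload mirror the curl call, and actually sending the request (commented out) assumes a vLLM server is already running locally:

```python
import json
import urllib.request

# Same request body as the curl example
# (OpenAI-compatible /v1/completions endpoint).
payload = {
    "model": "JuIm/ProteinLM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Requires a running server; the generated text is in
# response["choices"][0]["text"].
# with urllib.request.urlopen(req) as resp:
#     response = json.loads(resp.read())
#     print(response["choices"][0]["text"])
```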
- SGLang
How to use JuIm/ProteinLM with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "JuIm/ProteinLM" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "JuIm/ProteinLM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Or use the Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "JuIm/ProteinLM" \
  --host 0.0.0.0 \
  --port 30000

# Then call the server with the same curl command as above.
```

- Docker Model Runner
How to use JuIm/ProteinLM with Docker Model Runner:
```shell
docker model run hf.co/JuIm/ProteinLM
```
ProteinLM
This is a custom 336M-parameter configuration of Google’s Gemma 2 LLM, pre-trained on amino acid sequences of 512 amino acids (AA) or fewer in length. This page is updated periodically as training reaches new checkpoints.
The purpose of this model is to investigate the differences between ProGemma and ProtGPT (GPT-2 architecture) with respect to sequence generation. Training loss is currently ~1.6. Perplexity scores, as well as AlphaFold 3’s pTM, pLDDT, and ipTM scores, are generally in line with ProtGPT’s for sequence lengths < 250 AA, although testing is still at a very early stage. Sequence lengths > 250 AA have not yet been tested, and more robust testing is also required for lengths < 250 AA. In very preliminary testing, HHblits e-values of ~0.1 are achieved without much guidance.
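For context on the ~1.6 training loss quoted above: per-token perplexity is simply the exponential of the mean cross-entropy loss, so a loss of 1.6 corresponds to a perplexity of roughly 5. A quick check of this relationship (the conversion is standard; no model-specific details are assumed):

```python
import math

def perplexity(loss: float) -> float:
    # Perplexity is the exponential of the mean
    # per-token cross-entropy loss.
    return math.exp(loss)

print(f"loss 1.6 -> perplexity {perplexity(1.6):.2f}")
```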
Controlled generation is not a capability of this model. Adding it would significantly improve generation, since, in principle, it would allow generating a sequence that performs a given function or resides in a particular cellular location.
In sequence generation, a top_k of 950 appears to work well, as it prevents repetition. The same setting is used with ProtGPT.
Below is code using the Transformers library to generate sequences using ProGemma.
```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("JuIm/ProGemma")
tokenizer = AutoTokenizer.from_pretrained("JuIm/Amino-Acid-Sequence-Tokenizer")
progemma = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Sample a single sequence starting from the <bos> token
sequence = progemma(
    "<bos>",
    top_k=950,
    max_length=100,
    num_return_sequences=1,
    do_sample=True,
    repetition_penalty=1.2,
    eos_token_id=21,
    pad_token_id=22,
    bos_token_id=20,
)

s = sequence[0]["generated_text"]
print(s)
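The generated text may still contain special tokens or whitespace. Below is a small hypothetical post-processing helper, not part of the model itself: it assumes the special tokens appear as the literal strings `<bos>`, `<eos>`, and `<pad>` (an assumption, not taken from the tokenizer's actual vocabulary), strips them, and checks that the result contains only the 20 canonical amino acids:

```python
# Hypothetical cleanup helper; token strings below are assumptions.
CANONICAL_AA = set("ACDEFGHIKLMNPQRSTVWY")
SPECIAL_TOKENS = ("<bos>", "<eos>", "<pad>")

def clean_sequence(text: str) -> str:
    # Remove special tokens, then collapse all whitespace.
    for tok in SPECIAL_TOKENS:
        text = text.replace(tok, "")
    return "".join(text.split())

def is_canonical(seq: str) -> bool:
    # True only for a non-empty sequence of canonical residues.
    return bool(seq) and set(seq) <= CANONICAL_AA

raw = "<bos>MKT AIAK QRQI SVR<eos>"
seq = clean_sequence(raw)
print(seq, is_canonical(seq))
```

A check like `is_canonical` is useful before handing sequences to downstream tools such as AlphaFold 3 or HHblits, which expect plain residue strings.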
Framework versions
- Transformers 4.44.2
- PyTorch 2.4.0+cu121
- Tokenizers 0.19.1