Instructions to use nferruz/ProtGPT2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use nferruz/ProtGPT2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="nferruz/ProtGPT2")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nferruz/ProtGPT2")
model = AutoModelForCausalLM.from_pretrained("nferruz/ProtGPT2")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use nferruz/ProtGPT2 with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nferruz/ProtGPT2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nferruz/ProtGPT2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker:
```shell
docker model run hf.co/nferruz/ProtGPT2
```
- SGLang
How to use nferruz/ProtGPT2 with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "nferruz/ProtGPT2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nferruz/ProtGPT2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "nferruz/ProtGPT2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nferruz/ProtGPT2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use nferruz/ProtGPT2 with Docker Model Runner:
```shell
docker model run hf.co/nferruz/ProtGPT2
```
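The vLLM and SGLang servers above both expose the same OpenAI-compatible `/v1/completions` endpoint, so you can also call them from Python instead of curl. Below is a minimal sketch; the `completion_payload` helper name is my own, and the port is an assumption (8000 for vLLM, 30000 for SGLang):

```python
import json
import urllib.request


def completion_payload(prompt, model="nferruz/ProtGPT2", max_tokens=512, temperature=0.5):
    """Build the JSON body for the OpenAI-compatible /v1/completions endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


# Send the request to a locally running server (assumes vLLM is up on port 8000):
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=json.dumps(completion_payload("M")).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```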
Use model for calculating sequence perplexity
Hi,
Thank you for sharing your awesome work. I wonder whether this model can be used to reliably estimate the perplexity of designed sequences, as with its parent GPT-2 model? If so, could you please give me some hints on how to do it? As shown in this guide (https://huggingface.co/docs/transformers/perplexity), taking the average loss over the whole sequence seems suboptimal.
Best,
Ai
Hi avuhong!
Thanks for reaching out! Yes, you can compute perplexity for each sequence separately, and in fact I'd recommend doing so as a good way to evaluate them.
You can easily do that with the following HuggingFace package: https://huggingface.co/spaces/evaluate-measurement/perplexity:
```python
from evaluate import load

perplexity = load("perplexity", module_type="measurement")
results = perplexity.compute(data=input_texts, model_id='nferruz/ProtGPT2')
```
As you will see from the link, you can also pass several sequences to the computation as a list (copied from the link above):
```python
import evaluate

perplexity = evaluate.load("perplexity", module_type="measurement")
input_texts = ["lorem ipsum", "Happy Birthday!", "Bienvenue"]
results = perplexity.compute(model_id='gpt2',
                             add_start_token=False,
                             data=input_texts)
print(list(results.keys()))
>>>['perplexities', 'mean_perplexity']
print(round(results["mean_perplexity"], 2))
>>>646.74
print(round(results["perplexities"][0], 2))
>>>32.25
```
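For intuition, a sequence's perplexity is the exponential of the average negative log-likelihood of its tokens, which is what scoring each sequence separately gives you. A minimal sketch of that relationship (the per-token log-probabilities below are made up for illustration, not real model outputs):

```python
import math


def perplexity_from_logprobs(token_logprobs):
    """Perplexity = exp(-(1/N) * sum(log p_i)) over the N scored tokens."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)


# Hypothetical per-token log-probabilities for one short sequence:
logprobs = [-1.2, -0.7, -2.3, -0.9]
print(round(perplexity_from_logprobs(logprobs), 2))  # → 3.58
```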
I hope this helps! Let me know if there are further questions.
Ah! You can also find the code here: https://huggingface.co/spaces/evaluate-metric/perplexity/blob/main/perplexity.py
That's perfect, thank you very much ;)