Veiled Series
Collection
Models trained to be better at RP • 2 items
How to use soob3123/Veiled-Calla-12B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("image-text-to-text", model="soob3123/Veiled-Calla-12B")
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
{"type": "text", "text": "What animal is on the candy?"}
]
},
]
pipe(text=messages)

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText
processor = AutoProcessor.from_pretrained("soob3123/Veiled-Calla-12B")
model = AutoModelForImageTextToText.from_pretrained("soob3123/Veiled-Calla-12B")
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
{"type": "text", "text": "What animal is on the candy?"}
]
},
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

How to use soob3123/Veiled-Calla-12B with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "soob3123/Veiled-Calla-12B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "soob3123/Veiled-Calla-12B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
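The server replies in the standard OpenAI chat-completions format, so the assistant text can be pulled out the same way with any client. A minimal sketch — the sample body below is illustrative, not real model output:

```python
import json

# Illustrative response in the OpenAI chat-completions shape that an
# OpenAI-compatible server returns (the "content" is a placeholder).
raw = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "soob3123/Veiled-Calla-12B",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant",
                        "content": "The capital of France is Paris."},
            "finish_reason": "stop",
        }
    ],
})

def first_reply(body: str) -> str:
    """Extract the first assistant message from a chat-completions response."""
    data = json.loads(body)
    return data["choices"][0]["message"]["content"]

print(first_reply(raw))
```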
How to use soob3123/Veiled-Calla-12B with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "soob3123/Veiled-Calla-12B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "soob3123/Veiled-Calla-12B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Or run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "soob3123/Veiled-Calla-12B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "soob3123/Veiled-Calla-12B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

How to use soob3123/Veiled-Calla-12B with Docker Model Runner:
docker model run hf.co/soob3123/Veiled-Calla-12B
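Since this collection targets roleplay, a chat history sent to any of the OpenAI-compatible endpoints above would typically carry a system persona plus alternating turns. A sketch of how such a history might be assembled — the persona wording and helper are purely illustrative, not prescribed by the model card:

```python
# Illustrative RP-style chat history for a /v1/chat/completions request.
persona = ("You are Calla, a soft-spoken narrator who answers "
           "in atmospheric, second-person prose.")

history = [{"role": "system", "content": persona}]

def add_turn(history, role, text):
    """Append a user or assistant turn to the running chat history."""
    assert role in ("user", "assistant")
    history.append({"role": role, "content": text})
    return history

add_turn(history, "user", "I push open the chapel door.")
add_turn(history, "assistant", "The hinges sigh; candlelight wavers.")
add_turn(history, "user", "I step inside.")

# The resulting list drops straight into the "messages" field of the
# request bodies shown in the sections above.
print(len(history))
```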
Mystery is at the heart of creativity. That, and surprise... As creative channels, we need to trust the darkness.
Beneath moonlight's gentle glow, Veiled Calla emerges: an enigmatic presence designed to weave immersive roleplay experiences through mysterious narratives and atmospheric storytelling. Shrouded in secrets and whispers, Veiled Calla crafts evocative scenarios in which unspoken truths and subtle emotional undertones drive each unfolding tale.
Base model
google/gemma-3-12b-pt