How to use SmallDoge/Doge-60M-checkpoint with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="SmallDoge/Doge-60M-checkpoint", trust_remote_code=True)
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("SmallDoge/Doge-60M-checkpoint", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("SmallDoge/Doge-60M-checkpoint", trust_remote_code=True)
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
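# Decode only the newly generated tokens, slicing the prompt off the output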
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

How to use SmallDoge/Doge-60M-checkpoint with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "SmallDoge/Doge-60M-checkpoint"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "SmallDoge/Doge-60M-checkpoint",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
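You can also call the server from Python with the openai client (a minimal sketch, assuming pip install openai; a local vLLM server does not check the API key, so any placeholder value works):

# Query the local vLLM server through its OpenAI-compatible API
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="SmallDoge/Doge-60M-checkpoint",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)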
How to use SmallDoge/Doge-60M-checkpoint with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "SmallDoge/Doge-60M-checkpoint" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "SmallDoge/Doge-60M-checkpoint",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Or run the SGLang server with Docker instead:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "SmallDoge/Doge-60M-checkpoint" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "SmallDoge/Doge-60M-checkpoint",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

How to use SmallDoge/Doge-60M-checkpoint with Docker Model Runner:
docker model run hf.co/SmallDoge/Doge-60M-checkpoint
Doge uses wsd_scheduler as the training scheduler, which divides the learning-rate schedule into three stages: warmup, stable, and decay. This makes it possible to continue training on any new dataset from any checkpoint in the stable stage without spikes in the training loss.
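The shape of such a schedule is easy to see in code. Below is a minimal sketch of a piecewise WSD schedule; the stage lengths and peak learning rate are illustrative placeholders, not the values used to train Doge-60M:

# Minimal sketch of a warmup-stable-decay (WSD) learning-rate schedule.
# All stage lengths and rates below are illustrative placeholders.
def wsd_lr(step, warmup=2000, stable=50000, decay=8000, peak_lr=8e-4, min_lr=0.0):
    if step < warmup:
        # Warmup: linear ramp from 0 up to the peak learning rate
        return peak_lr * step / warmup
    if step < warmup + stable:
        # Stable: constant learning rate; checkpoints saved in this stage
        # can be resumed on new data without a loss spike
        return peak_lr
    # Decay: linear anneal from peak_lr down to min_lr
    progress = min(1.0, (step - warmup - stable) / decay)
    return peak_lr + (min_lr - peak_lr) * progress

Because the learning rate is constant throughout the stable stage, resuming from a stable-stage checkpoint only requires restoring that same rate, which is why each checkpoint lists the initial learning rate to resume with.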
Here are the initial learning rates required to continue training at each checkpoint: