How to use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ubitech-edg/commandr-35b-cpt")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ubitech-edg/commandr-35b-cpt")
model = AutoModelForCausalLM.from_pretrained("ubitech-edg/commandr-35b-cpt")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
	messages,
	add_generation_prompt=True,
	tokenize=True,
	return_dict=True,
	return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
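
Command-R 35B is large enough that a plain from_pretrained call in full precision may not fit on a single GPU. A minimal sketch of a more memory-friendly load, assuming bfloat16 weights and an accelerate install for automatic device placement (these choices are suggestions, not part of the original card):

# Load in bfloat16 and let Accelerate place layers across available devices
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ubitech-edg/commandr-35b-cpt")
model = AutoModelForCausalLM.from_pretrained(
    "ubitech-edg/commandr-35b-cpt",
    torch_dtype=torch.bfloat16,   # assumption: bf16 is acceptable for inference
    device_map="auto",            # requires the accelerate package
)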

Command-R 35B – CPT (Continual Pretraining with LoRA)

Model type: Causal Language Model
Base model: CohereLabs/c4ai-command-r-v01
License: Apache 2.0
Framework: Axolotl


Overview

commandr-35b-cpt is a continual-pretrained version of Cohere's Command-R 35B model, trained with LoRA adapters for efficient energy-domain adaptation. The goal of CPT is to extend the model's general reasoning, factual grounding, and domain knowledge across science, governance, and energy-domain text.

Training was performed on the Leonardo EuroHPC system using Axolotl with DeepSpeed ZeRO-1 optimization.


Training Setup

Objective: Language modeling (unsupervised continual pretraining)
Adapter type: LoRA
Precision: bfloat16
Hardware: 8 nodes × 2 NVIDIA A100 64GB GPUs (16 GPUs total)
Framework: DeepSpeed ZeRO-1, Axolotl, PyTorch 2.5.1+cu121
Runtime: ~24 hours
Checkpoints: Saved every 1/5 of an epoch
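
With the micro batch size and gradient accumulation listed under Hyperparameters below, the effective global batch size works out as a simple product (a back-of-the-envelope sketch, assuming all 16 GPUs act as data-parallel ranks):

# Effective batch size under the settings reported in this card
micro_batch_size = 1            # sequences per GPU per forward pass
gradient_accumulation = 4       # micro-steps per optimizer update
data_parallel_ranks = 8 * 2     # 8 nodes x 2 A100 GPUs

global_batch = micro_batch_size * gradient_accumulation * data_parallel_ranks
tokens_per_update = global_batch * 2048   # sequence length from the table below
print(global_batch, tokens_per_update)    # 64 sequences, 131072 tokens per update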


Dataset

Public energy domain text sources:

  • arxiv.jsonl – scientific and technical papers
  • gov.jsonl – public governmental documents
  • news.jsonl – news articles
  • wiki.jsonl – Wikipedia text
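
The preprocessing pipeline is not reproduced in this card. If the files are plain JSONL with one document per line, they could be combined with the datasets library roughly as sketched here (the file paths come from the list above; the "text" field name is an assumption):

# Sketch: merge the JSONL sources into a single training corpus
from datasets import load_dataset

data_files = {"train": ["arxiv.jsonl", "gov.jsonl", "news.jsonl", "wiki.jsonl"]}
corpus = load_dataset("json", data_files=data_files, split="train")
print(corpus)   # expects a "text" column if the files follow the usual layout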

Hyperparameters

Sequence length: 2048
Micro batch size: 1
Gradient accumulation steps: 4
Epochs: 1
Max steps: 10000
Learning rate: 0.0002
LR scheduler: cosine
Optimizer: AdamW (8-bit)
Warmup steps: 10
Weight decay: 0.0
LoRA rank (r): 16
LoRA alpha: 32
LoRA dropout: 0.05
LoRA target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
Gradient checkpointing: enabled
Flash attention: enabled
Auto resume: enabled
Loss watchdog threshold: 5.0
Loss watchdog patience: 3
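
The training run itself was driven by an Axolotl YAML config that is not reproduced here. An approximately equivalent PEFT setup for the LoRA values in the list above would look like the sketch below; the flash-attention and gradient-checkpointing calls are standard Transformers usage and stand in for Axolotl's corresponding flags rather than copying the original config.

# Sketch: PEFT LoRA configuration mirroring the hyperparameters above
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "CohereLabs/c4ai-command-r-v01",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # flash attention, as listed above
)
base.gradient_checkpointing_enable()          # gradient checkpointing, as listed above

lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()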

Tokenizer

Tokenizer type: AutoTokenizer
Special token: <|end_of_text|> as pad_token
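
If a freshly loaded tokenizer does not already expose a pad token, the card's choice can be applied explicitly before any batched tokenization (a small sketch; whether the released tokenizer ships with this set is worth verifying rather than assuming):

# Sketch: apply the pad token named in this card if it is not already set
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ubitech-edg/commandr-35b-cpt")
if tokenizer.pad_token is None:
    tokenizer.pad_token = "<|end_of_text|>"
print(tokenizer.pad_token, tokenizer.pad_token_id)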
