We’re excited to share that RapidFire AI now has native integration with Trackio for real-time experiment tracking — and we wanted to highlight it here since Trackio is built by the Hugging Face community.
The problem: When you’re sweeping across multiple fine-tuning or RAG configurations, keeping track of what’s working (and what’s not) gets messy fast, especially when runs are executing in parallel.
The solution: RapidFire AI runs your experiments in hyperparallel (16-24x throughput, no extra resources), and Trackio gives you a live, local dashboard to monitor and compare every run as it happens. No accounts, no server, no cost — just run `pip install rapidfireai` and set one environment variable.
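In shell form, that setup is just two commands (the package name and environment variable are the ones referenced in this post; setting the variable in your shell is an alternative to setting it in Python):

```shell
pip install rapidfireai
export RF_TRACKIO_ENABLED=true  # turns on Trackio experiment tracking
```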
What gets tracked automatically:
- Fine-tuning: training loss, eval loss, learning rate, custom metrics (ROUGE-L, BLEU, etc.)
- RAG pipelines: Precision, Recall, F1, NDCG@K, MRR, plus LLM-as-judge and code-based eval metrics
- Run configs: all hyperparameters, LoRA settings, chunking strategies — everything you need to reproduce results
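As a quick refresher on the ranking metrics above, here is a minimal, dependency-free sketch of MRR and NDCG@K. The function names and the list-of-relevance input format are our own illustration for this post, not RapidFire AI's API:

```python
import math

def mrr(ranked_relevance):
    # Mean reciprocal rank: average over queries of 1/rank of the
    # first relevant result; a query with no relevant result scores 0.
    total = 0.0
    for rels in ranked_relevance:
        for rank, rel in enumerate(rels, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

def ndcg_at_k(relevances, k):
    # NDCG@K for one ranked list of graded relevance scores:
    # DCG of the top-K ranking divided by DCG of the ideal ordering.
    def dcg(rels):
        return sum(r / math.log2(i + 1) for i, r in enumerate(rels, start=1))
    ideal = dcg(sorted(relevances, reverse=True)[:k])
    return dcg(relevances[:k]) / ideal if ideal > 0 else 0.0
```

For example, `mrr([[0, 1, 0], [1, 0, 0]])` averages ranks 2 and 1 into 0.75, and a list already in ideal order gets NDCG@K of 1.0.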
Quick setup:
```python
import os
os.environ["RF_TRACKIO_ENABLED"] = "true"

from rapidfireai import Experiment

# Define your configs and run — Trackio handles the rest
```
Then view your dashboard with:
```shell
trackio show --project "my-experiment"
```
Links:
- Official integration guide: RapidFire AI Integration
- Tutorial notebooks (Colab-friendly): SFT | RAG
- RapidFire AI GitHub
- Trackio GitHub