kth8/gemma-3-270m-it-Docker-Compose is a full fine-tune of unsloth/gemma-3-270m-it on the kth8/docker-compose-20000x dataset.

Usage example

System prompt

You are a helpful assistant.

User prompt

Show me the docker-compose.yml for this command: docker container run --name fba_blue-chip --cpuset-cpus 3,3 --workdir /home/proconsulates/sandor --log-driver gelf --log-opt gelf-address=udp://localhost:53559 --blkio-weight-device /dev/sdg8:439 ghcr.io/asparagine/gesturing:nightly --warn --full --read-only
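For reference, a hand-written compose equivalent of that command (an illustration of the expected output shape, not the model's actual generation) might look like the following. Note that `--warn --full --read-only` come after the image name, so they are arguments to the container's command rather than `docker run` flags:

```yaml
services:
  fba_blue-chip:
    image: ghcr.io/asparagine/gesturing:nightly
    container_name: fba_blue-chip
    cpuset: "3,3"
    working_dir: /home/proconsulates/sandor
    logging:
      driver: gelf
      options:
        gelf-address: udp://localhost:53559
    blkio_config:
      weight_device:
        - path: /dev/sdg8
          weight: 439
    command: ["--warn", "--full", "--read-only"]
```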

Model Details

  • Base Model: unsloth/gemma-3-270m-it
  • Parameter Count: 268098176
  • Training Method: Full fine-tune (FFT); all parameters updated.
  • Precision: torch.bfloat16
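Since the checkpoint is stored in bfloat16 (2 bytes per parameter), the raw weight footprint follows directly from the parameter count; a quick sanity check:

```python
# Raw weight size for a bfloat16 checkpoint: 2 bytes per parameter.
PARAMS = 268_098_176          # parameter count from the model details above
BYTES_PER_PARAM = 2           # bfloat16 is 16 bits

size_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"~{size_gb:.2f} GB of weights")  # roughly 0.54 GB
```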

Training stats

  • global_step: 2307
  • training_loss: 0.03397511645447863
  • train_runtime: 6456.2772
  • train_samples_per_second: 2.858
  • train_steps_per_second: 0.357
  • total_flos: 3357685821331968.0
  • epoch: 1.0
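The throughput figures above are mutually consistent; for instance, the effective batch size falls out of the ratio of samples per second to steps per second (an inferred figure, not one reported by the trainer):

```python
# Derive the effective batch size from the reported throughput stats.
train_samples_per_second = 2.858
train_steps_per_second = 0.357

effective_batch_size = train_samples_per_second / train_steps_per_second
print(round(effective_batch_size))  # ~8 samples per optimizer step
```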

Hardware

  • GPU: NVIDIA L4

Framework versions

  • Unsloth: 2026.3.4
  • TRL: 0.22.2
  • Transformers: 4.56.2
  • PyTorch: 2.10.0+cu128
  • Datasets: 4.3.0
  • Tokenizers: 0.22.2

License

This model is released under the Gemma license. See the Gemma Terms of Use for details.
