SAGE FRED-T5 — ONNX (INT8)
ONNX INT8-quantized version of ai-forever/FRED-T5-large-spell (SAGE spelling correction) for on-device Russian text correction in macOS apps.
Model Details
- Architecture: T5ForConditionalGeneration (FRED-T5-large fine-tuned for spelling)
- Format: ONNX INT8 quantized, encoder-decoder with KV-cache
- Size: ~488 MB total (INT8), ~1.0 GB (FP32)
- Input: Tokenized Russian text (BPE, vocab size 50364)
- Output: Corrected Russian text (seq2seq generation)
- Task prefix: "Исправьте: " (Russian for "Correct: ") prepended to the input text
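The model input is simply the task prefix concatenated with the raw text. A minimal sketch (`makeSpellInput` is a hypothetical helper name, not part of the released code):

```swift
// Sketch: build the model input by prepending the SAGE task prefix.
// `makeSpellInput` is an illustrative helper, not part of this repo.
func makeSpellInput(_ text: String) -> String {
    "Исправьте: " + text  // "Исправьте" = "Correct" (imperative) in Russian
}

let input = makeSpellInput("превет мир")
// input == "Исправьте: превет мир"
```

The resulting string is what gets tokenized and fed to the encoder.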
Model Files
INT8 (recommended for on-device)
- int8/encoder_model_int8.onnx — Encoder (43 MB)
- int8/decoder_with_past_model_int8.onnx — Decoder with KV-cache (71 MB)
- int8/decoder_model_int8.onnx — Decoder without cache (74 MB)
- int8/decoder_model_merged_int8.onnx — Merged decoder (293 MB)
- int8/vocab.json, int8/merges.txt — BPE tokenizer
- int8/config.json, int8/tokenizer.json, etc. — configuration files
FP32 (full precision)
- fp32/encoder_model.onnx — Encoder (171 MB)
- fp32/decoder_with_past_model.onnx — Decoder with KV-cache (281 MB)
- fp32/decoder_model.onnx — Decoder without cache (293 MB)
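Which decoder files you load depends on the decoding setup: the merged decoder is one graph that branches internally on cache presence, while the split variant uses a first-step graph plus a cached-step graph. A hedged sketch of that choice (the enum and function are illustrative, not part of the released code; file names match the INT8 table above):

```swift
// Sketch: pick the decoder ONNX files for a given decoding setup.
// `DecoderVariant` and `decoderFiles` are illustrative names only.
enum DecoderVariant {
    case merged   // one graph handling both first and cached steps
    case split    // separate first-step and KV-cached graphs
}

func decoderFiles(for variant: DecoderVariant) -> [String] {
    switch variant {
    case .merged:
        return ["int8/decoder_model_merged_int8.onnx"]
    case .split:
        return ["int8/decoder_model_int8.onnx",
                "int8/decoder_with_past_model_int8.onnx"]
    }
}
```

The split variant keeps the on-disk footprint smaller (74 MB + 71 MB vs 293 MB for the merged graph), at the cost of managing two sessions.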
Usage
Intended for on-device correction of Russian speech transcripts. Runs via the ONNX Runtime C API with greedy decoding.
// Swift — seq2seq inference (greedy decoding)
let engine = Seq2SeqOnnxEngine()
try engine.loadModels(from: "/path/to/sage-fredt5/int8/")
// `tokenizer` encodes text to input IDs; prepend the task prefix first.
let corrected = try engine.generate(inputIDs: tokenizer.encode("Исправьте: " + text))
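The generation step above runs a standard greedy loop: at each position, take the arg-max token from the decoder logits and stop at end-of-sequence. A self-contained sketch of that loop, where `step` stands in for the real ONNX decoder call and the token IDs are toy values (assumptions, not this model's actual special tokens):

```swift
// Sketch of greedy seq2seq decoding. `step` stands in for the ONNX
// decoder-with-cache call; BOS/EOS ids here are illustrative only.
let eosID = 1

func greedyDecode(step: (_ prev: Int, _ pos: Int) -> [Float],
                  bosID: Int = 0, maxLength: Int = 16) -> [Int] {
    var output: [Int] = []
    var prev = bosID
    for pos in 0..<maxLength {
        let logits = step(prev, pos)
        // Greedy: pick the highest-scoring token at this step.
        let next = logits.indices.max(by: { logits[$0] < logits[$1] })!
        if next == eosID { break }  // stop at end-of-sequence
        output.append(next)
        prev = next
    }
    return output
}

// Toy decoder: emits tokens 3, 4, then EOS.
let script = [[0, 0, 0, 9, 0],
              [0, 0, 0, 0, 9],
              [0, 9, 0, 0, 0]].map { $0.map(Float.init) }
let ids = greedyDecode(step: { _, pos in script[min(pos, script.count - 1)] })
// ids == [3, 4]
```

In the real engine, `step` feeds the previous token and the accumulated KV-cache into `decoder_with_past_model_int8.onnx` and returns the next-token logits.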
Attribution
Base model FRED-T5-large and spelling fine-tune FRED-T5-large-spell by AI Forever / Sber AI. Part of the SAGE spelling correction framework. ONNX conversion and INT8 quantization by @smkrv.