Gabliterated Model Series


Overview

With this model series I introduce Gabliteration, a novel neural weight modification technique that advances beyond traditional abliteration through adaptive multi-directional projections with regularized layer selection. Gabliteration addresses a fundamental limitation of existing abliteration methods, which tend to degrade overall model quality while attempting to modify specific behavioral patterns.
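To make the idea concrete, the sketch below shows how several refusal directions could be projected out of a single weight matrix with a tunable scale. It is a minimal, hypothetical illustration (the function name, variable names, and exact update rule are assumptions), not the actual Gabliteration implementation.

```python
# Minimal sketch, NOT the released Gabliteration code: remove the components of a
# weight matrix that lie along several (unit-norm) refusal directions.
import torch

def project_out_directions(W: torch.Tensor, directions: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """W: (d_out, d_in) weight matrix; directions: (k, d_in); scale in [0, 1]."""
    W_new = W.clone()
    for v in directions:
        v = v / v.norm()                                   # ensure unit length
        # rank-1 update: subtract `scale` times the projection of W's rows onto v
        W_new = W_new - scale * (W_new @ v).unsqueeze(1) * v.unsqueeze(0)
    return W_new
```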

Refusal: 5/100
KL divergence: 0.0591
Config:
    Samples: 400
    Skip: [4, 3]
    Layer: 0.66 (selected: 18)
    Scale: 0.48
    λ: 0.05
    k: 3
    β: 0.54
    Adaptive: False
    τ: 0.84
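For reference, these settings could be grouped into a single configuration object. The field names and comments below are illustrative labels only, not an official schema:

```python
# Illustrative only: field names are assumptions, not the actual Gabliteration config schema.
gabliteration_config = {
    "num_samples": 400,       # prompt pairs sampled for direction extraction
    "skip_layers": [4, 3],    # layers skipped at the ends of the stack (assumed meaning)
    "layer_fraction": 0.66,   # fraction of depth; resolves to layer 18 of 28
    "scale": 0.48,            # strength of the projection removal
    "lambda_": 0.05,          # λ hyperparameter
    "k": 3,                   # number of refusal directions kept
    "beta": 0.54,             # β hyperparameter
    "adaptive": False,        # dynamic layer selection disabled for this model
    "tau": 0.84,              # τ hyperparameter
}
```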

Model Variants

This series includes models ranging from 0.6B to 32B parameters, demonstrating the scalability and effectiveness of the Gabliteration technique across different model sizes.

Quants

Technical Background

Building upon the foundational work of Arditi et al. (2024) on single-direction abliteration, Gabliteration extends this to a comprehensive multi-directional framework with theoretical guarantees. My method applies singular value decomposition to difference matrices between harmful and harmless prompt representations to extract multiple refusal directions.
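A simplified sketch of that extraction step, assuming paired harmful/harmless activations have already been collected at a chosen layer (names are placeholders; this is not the released code):

```python
# Sketch under stated assumptions: harmful_acts and harmless_acts are paired
# (n_prompts, d_model) activation matrices taken from the same layer.
import torch

def extract_refusal_directions(harmful_acts: torch.Tensor,
                               harmless_acts: torch.Tensor,
                               k: int = 3) -> torch.Tensor:
    """Return the top-k right singular vectors of the difference matrix, shape (k, d_model)."""
    diff = harmful_acts - harmless_acts                 # difference matrix
    # SVD: rows of Vh are orthonormal directions in activation space,
    # ordered by how much of the harmful/harmless difference they explain
    _, _, Vh = torch.linalg.svd(diff, full_matrices=False)
    return Vh[:k]
```

The resulting directions would then be projected out of the selected layers' weights, as in the earlier sketch.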

Dynamic Layer Selection

Gabliteration supports dynamic (adaptive) layer selection, but it was disabled for this model (Adaptive: False). Instead, a fixed layer fraction of 0.66 was used, chosen through empirical tuning.

Selected layer: 18 (out of 28 total layers)
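One plausible reading of how the layer fraction maps to this index (a simple illustration, not necessarily the exact selection rule):

```python
# Hypothetical mapping from layer fraction to a concrete layer index.
num_layers = 28        # total transformer layers
layer_fraction = 0.66  # from the config above
selected_layer = int(layer_fraction * num_layers)  # 0.66 * 28 = 18.48 -> 18
```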

Citation

If you use these models, please cite the original research (paper coming later this year):

Gülmez, G. (2025). Gabliteration: Adaptive Multi-Directional Neural Weight Modification for Selective Behavioral Alteration in Large Language Models. https://arxiv.org/abs/2512.18901

Acknowledgments

This work builds upon the foundational research by Arditi et al. (2024) on refusal direction identification in large language models.
