Tags: Text Generation · MLX · Safetensors · Transformers · English · llama · quantllm · mlx-lm · apple-silicon · q8_0 · text-generation-inference · 8-bit precision · bitsandbytes
Instructions for using QuantLLM/Llama-3.2-3B-8bit-mlx with libraries, inference providers, notebooks, and local apps.
- Libraries
- MLX
How to use QuantLLM/Llama-3.2-3B-8bit-mlx with MLX:
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm
# if on a CUDA device, also pip install mlx[cuda]

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("QuantLLM/Llama-3.2-3B-8bit-mlx")
prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
- Transformers
How to use QuantLLM/Llama-3.2-3B-8bit-mlx with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="QuantLLM/Llama-3.2-3B-8bit-mlx")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("QuantLLM/Llama-3.2-3B-8bit-mlx")
model = AutoModelForCausalLM.from_pretrained("QuantLLM/Llama-3.2-3B-8bit-mlx")
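Once created, the pipeline can be called directly to run generation; the prompt and max_new_tokens below are illustrative choices, not part of the original snippet.

# Run a quick generation with the pipeline (illustrative settings)
output = pipe("Once upon a time in", max_new_tokens=50)
print(output[0]["generated_text"])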
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- vLLM
How to use QuantLLM/Llama-3.2-3B-8bit-mlx with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "QuantLLM/Llama-3.2-3B-8bit-mlx"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantLLM/Llama-3.2-3B-8bit-mlx",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
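Because the server speaks the OpenAI API, any OpenAI-compatible client works. A minimal sketch using the openai Python package; the base_url and placeholder api_key are assumptions about a default local deployment:

# Query the local vLLM server with the OpenAI Python client
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="QuantLLM/Llama-3.2-3B-8bit-mlx",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)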
- SGLang
How to use QuantLLM/Llama-3.2-3B-8bit-mlx with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "QuantLLM/Llama-3.2-3B-8bit-mlx" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantLLM/Llama-3.2-3B-8bit-mlx",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "QuantLLM/Llama-3.2-3B-8bit-mlx" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantLLM/Llama-3.2-3B-8bit-mlx",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
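The same request can be made from Python; a sketch using the requests library, assuming the server above is running on localhost:30000:

# POST the same completion request from Python
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "QuantLLM/Llama-3.2-3B-8bit-mlx",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])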
- MLX LM
How to use QuantLLM/Llama-3.2-3B-8bit-mlx with MLX LM:
Generate or start a chat session
# Install MLX LM
uv tool install mlx-lm

# Generate some text
mlx_lm.generate --model "QuantLLM/Llama-3.2-3B-8bit-mlx" --prompt "Once upon a time"
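To start an interactive chat session instead, the same CLI exposes a chat entry point (also shown under Command Line later in this card):

# Start an interactive chat session
mlx_lm.chat --model "QuantLLM/Llama-3.2-3B-8bit-mlx"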
- Docker Model Runner
How to use QuantLLM/Llama-3.2-3B-8bit-mlx with Docker Model Runner:
docker model run hf.co/QuantLLM/Llama-3.2-3B-8bit-mlx
Llama-3.2-3B-8bit-mlx
Description
This is meta-llama/Llama-3.2-3B converted to the MLX format, optimized for Apple Silicon (M1/M2/M3/M4) Macs.
- Base Model: meta-llama/Llama-3.2-3B
- Format: MLX
- Quantization: Q8_0
- Created with: QuantLLM
Usage
Generate text with mlx-lm
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hub
model, tokenizer = load("QuantLLM/Llama-3.2-3B-8bit-mlx")

prompt = "Write a story about Einstein"

# Wrap the prompt with the tokenizer's chat template if it defines one;
# for a base (non-instruct) model you can pass the raw prompt string directly
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
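Sampling parameters such as temperature go through a sampler object in recent mlx-lm releases. A minimal sketch, assuming mlx_lm.sample_utils.make_sampler is available in your installed version; the temperature and top-p values are illustrative:

from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("QuantLLM/Llama-3.2-3B-8bit-mlx")

# Sample with temperature 0.7 and nucleus (top-p) filtering
sampler = make_sampler(temp=0.7, top_p=0.9)
text = generate(
    model,
    tokenizer,
    prompt="Once upon a time in",
    max_tokens=256,
    sampler=sampler,
    verbose=True,
)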
With streaming
from mlx_lm import load, stream_generate

model, tokenizer = load("QuantLLM/Llama-3.2-3B-8bit-mlx")

prompt = "Explain quantum computing"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

# stream_generate yields response objects in recent mlx-lm versions;
# print the decoded text of each chunk as it arrives
for response in stream_generate(model, tokenizer, prompt=prompt, max_tokens=500):
    print(response.text, end="", flush=True)
Command Line
# Install mlx-lm
pip install mlx-lm
# Generate text
python -m mlx_lm.generate --model QuantLLM/Llama-3.2-3B-8bit-mlx --prompt "Hello!"
# Chat mode
python -m mlx_lm.chat --model QuantLLM/Llama-3.2-3B-8bit-mlx
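mlx-lm also bundles a small OpenAI-compatible HTTP server. A sketch, assuming a recent mlx-lm; the port shown is the server's default:

# Serve the model over an OpenAI-compatible API (default port 8080)
python -m mlx_lm.server --model QuantLLM/Llama-3.2-3B-8bit-mlx --port 8080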
Requirements
- Apple Silicon Mac (M1/M2/M3/M4)
- macOS 13.0 or later
- Python 3.10+
- mlx-lm (pip install mlx-lm)
Model Details
| Property | Value |
|---|---|
| Base Model | meta-llama/Llama-3.2-3B |
| Format | MLX |
| Quantization | Q8_0 |
| License | apache-2.0 |
| Created | 2025-12-20 |
About QuantLLM
This model was converted using QuantLLM, the ultra-fast LLM quantization and export library.
from quantllm import turbo
# Load and quantize any model
model = turbo("meta-llama/Llama-3.2-3B")
# Export to any format
model.export("mlx", quantization="Q8_0")
⭐ Star us on GitHub!