M-Courtyard: Fine-tune LLMs on macOS with zero code (MLX + Ollama GUI)

Hi everyone!

I'd like to share M-Courtyard, an open-source macOS desktop app for fine-tuning
LLMs on Apple Silicon: no code, no cloud, everything runs locally.

It wraps the full pipeline in a native GUI:
📄 Import docs → 🤖 Generate training data (Ollama) → 🔧 LoRA fine-tune (mlx-lm)
→ 🧪 Chat-test → 📦 One-click export to Ollama (Q4/Q8/F16)
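For anyone curious what the export step corresponds to under the hood: once the LoRA adapter is fused and the model is quantized, registering it with Ollama boils down to writing a Modelfile and running `ollama create`. A minimal sketch (the file path and parameter here are illustrative, not what the app actually emits):

```
# Modelfile -- hypothetical path for illustration;
# FROM points at the fused, quantized model the app exports
FROM ./my-finetune-q4.gguf

# Optional generation defaults
PARAMETER temperature 0.7
```

Then `ollama create my-finetune -f Modelfile` makes the model available in `ollama run`. The app performs the equivalent of this for you on export.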

Supports models from mlx-community: Qwen 3, DeepSeek R1, GLM, Llama 3, and more.

Requirements: macOS 14+, Apple Silicon (M1/M2/M3/M4), 16GB+ RAM recommended

Feedback and suggestions welcome!
