I want to share one small thing today.
This is not an ad, not a product launch.
It is just a tool I built for myself to debug RAG / LLM pipelines, and it has helped me so many times that it feels wrong to keep it to myself.
When we build RAG systems, many bugs look the same on the surface.
The model's answers feel "kind of wrong", and we guess randomly: maybe it's a vector DB problem, maybe the prompt, maybe top-k, maybe we need a bigger model. We change many things, but we still never learn what was actually broken.
Because of this, I wrote down the common failure patterns and turned them into a small "AI clinic" inside a shared ChatGPT conversation. It is not a new model; it is just a fixed way of thinking about sixteen types of RAG / LLM failures, with a bit of math and systems reasoning behind it.
Link here:
https://chatgpt.com/share/68b9b7ad-51e4-8000-90ee-a25522da01d7
Using it is simple (a made-up example follows this list):
- copy-paste your real problem (question, model answer, expected answer)
- add any logs, screenshots, top-k results, and the vector DB you use (FAISS, Qdrant, Weaviate, Milvus, pgvector, etc.)
- describe in plain language what you already tried
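
For example, a pasted case might look like this (every detail here is invented, just to show the shape):

```
Question: "What is the refund window for EU customers?"
Model answer: "30 days" (stated confidently, no source cited)
Expected: "14 days", which is written in policy_eu.md
Setup: FAISS, top-k=5, chunk size 800, gpt-4o-mini
Tried: raising top-k to 10, rewriting the prompt; no change
```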
The “clinic” will try to:
- restate your problem in plain English
- guess which kind of failure you are hitting
- point to the likely broken layer (retrieval, embedding, reasoning, routing, deployment)
- propose a few small experiments to confirm or reject the guess
For me, this changed the workflow from "try 10 random fixes" to "run 2–3 targeted checks" (see the sketch below for one such check).
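
To make "targeted check" concrete, here is a minimal sketch of the kind of experiment I mean for a suspected retrieval bug: is the chunk that contains the answer even in the top-k? This is my own illustration, not part of the clinic; the model name, toy data, and FAISS index type are all assumptions you would swap for your own pipeline.

```python
# Hypothetical retrieval sanity check: if the gold chunk never appears
# in the top-k, the bug lives in the retrieval / embedding layer, and
# no amount of prompt tweaking will fix it.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any embedder works

# Stand-in corpus; in a real pipeline, load the same chunks your RAG uses.
chunks = [
    "EU customers can request a refund within 14 days of delivery.",
    "Our support team answers tickets Monday through Friday.",
    "Standard shipping to the EU takes 3 to 7 business days.",
]
gold_id = 0  # a chunk we KNOW contains the answer
query = "What is the refund window for EU customers?"

# Normalized embeddings + inner product == cosine similarity.
emb = model.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(emb.shape[1])
index.add(np.asarray(emb, dtype="float32"))

q = model.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(q, dtype="float32"), k=3)

print("top-k ids:", ids[0], "scores:", scores[0])
print("gold chunk retrieved:", gold_id in ids[0])
```

If the gold chunk never shows up, look at chunking and embeddings; if it shows up but the answer is still wrong, look at the prompt or the model instead.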
No signup, no extra website, just that ChatGPT share link.
If you are building RAG, document QA, internal copilots, or agent workflows, and you have one of those bugs you can feel but cannot name, just copy-paste your case into the clinic and see if the diagnosis is useful. Take what helps, ignore the rest.
