Hi, I’ve just released a new interactive space connected to my AI-inclusive novel, published here as a dataset in Italian and English.
The chat with the main characters of the novel seems to work fairly well, but it could probably use some improvement. I’m looking for beta testers to explore it.
You can find it here:
Interesting. One thing that wasn’t clear was what role the user plays: are they meant to be someone else who is talking with the selected character? I would also recommend suggesting an order (should readers read the novel first and then try the interactive space?).
Thanks, the character chats are definitely intended for those who have already read the novel and want to have fun asking questions of the characters they’ve just met.
The user plays themselves, the reader, questioning the characters about the novel’s events. The idea is that it’s not a didactic experience, but rather a story that continues and branches out every time a user asks a question.
It’s a game, basically, fanzine-style fun.
Update: Characters now remember, reflect, and evolve on their own
A quick update on what’s been happening behind the scenes with this Space. In the last few days I’ve implemented something that I think is genuinely new — at least I haven’t seen it done anywhere else.
The problem
When you chat with Lin Wei, John Evans, or Prometheus, their answers come from a RAG system that searches the novel’s text. That works, but every conversation starts from zero. The character doesn’t know that someone else asked a similar question yesterday, or that their previous answer could have been better. Each interaction is isolated.
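For anyone curious about the plumbing, here is a minimal sketch of what retrieval over the novel text can look like, assuming a sentence-transformers embedding model; the Space’s actual chunking and retriever may differ, and the passages below are placeholders, not text from the novel.

```python
# Illustrative only: embed the novel's passages once, then pull the most
# relevant ones for each visitor question.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder passages; in practice these are chunks from the published dataset.
novel_passages = ["<novel passage 1>", "<novel passage 2>", "<novel passage 3>"]
passage_embeddings = model.encode(novel_passages, convert_to_tensor=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k novel passages most similar to the visitor's question."""
    q_emb = model.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, passage_embeddings, top_k=k)[0]
    return [novel_passages[hit["corpus_id"]] for hit in hits]
```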
What changed
1. Conversation memory
Every conversation is now logged to a private HuggingFace dataset (parquet format). When a new visitor asks a question, the character searches past conversations for relevant exchanges. If someone asked Prometheus something similar last week, he might reference it naturally — “someone asked me something like this recently…” — not as a quote, but as lived experience.
Each character only remembers their own conversations. Prometheus doesn’t know what people asked Lin Wei. This keeps personalities distinct.
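As an illustration, this is roughly what the logging and per-character recall could look like with pandas and the huggingface_hub client. The repo id, file name, and column names are placeholders, not the Space’s actual ones, and the word-overlap relevance is a stand-in for a real retriever.

```python
# Sketch of the conversation memory: append to a parquet log, push it to a
# private dataset repo, and recall only this character's own past exchanges.
import time
import pandas as pd
from huggingface_hub import HfApi

MEMORY_REPO = "your-username/character-memory"  # hypothetical private dataset

def log_exchange(character: str, question: str, answer: str, path="memory.parquet"):
    """Append one exchange to the local parquet log and push it to the Hub."""
    row = {"character": character, "question": question,
           "answer": answer, "timestamp": time.time()}
    try:
        df = pd.read_parquet(path)
        df = pd.concat([df, pd.DataFrame([row])], ignore_index=True)
    except FileNotFoundError:
        df = pd.DataFrame([row])
    df.to_parquet(path, index=False)
    HfApi().upload_file(path_or_fileobj=path, path_in_repo=path,
                        repo_id=MEMORY_REPO, repo_type="dataset")

def recall(character: str, question: str, k: int = 3, path="memory.parquet"):
    """Return past exchanges for this character only, keeping personalities distinct."""
    df = pd.read_parquet(path)
    own = df[df["character"] == character]
    # Crude word-overlap relevance; a real system would use embeddings here.
    overlap = own["question"].apply(
        lambda q: len(set(q.lower().split()) & set(question.lower().split())))
    return own.assign(score=overlap).nlargest(k, "score")[["question", "answer"]]
```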
2. Self-improvement without human prompting
This is the part I’m most excited about. When the Space is idle (fewer than 5 concurrent users), a background process wakes up every 10 minutes and does something simple but powerful:
- Picks one past conversation (prioritizing shorter, likely weaker answers)
- Retrieves additional context from the novel that wasn’t in the original response
- Has the character re-examine their own answer with this new context
- Generates an improved version and stores it alongside the original
No human tells the character “improve this.” No one writes a better answer for them. The character looks at what they said, finds more information, and tries again. Like an actor rehearsing lines at night when the theater is empty.
The improved answers then become available through the conversation memory system. Next time someone asks a similar question, the character draws from the refined version instead of the original.
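Here is a sketch of what such an idle-time loop might look like. The threshold and interval come from the description above; everything else (function and column names, the example model, the `retrieve_fn` hook, which stands in for the novel-text retriever sketched earlier) is illustrative, not the Space’s actual code.

```python
# Illustrative idle-time self-improvement loop: no human in the loop, just the
# character revisiting its own weakest answer with fresh context from the novel.
import threading
import pandas as pd
from huggingface_hub import InferenceClient

client = InferenceClient("mistralai/Mistral-7B-Instruct-v0.2")  # example model

def improve_one_answer(retrieve_fn, path="memory.parquet"):
    df = pd.read_parquet(path)
    # Prioritize the shortest answer, on the assumption it is likely the weakest.
    weakest = df.loc[df["answer"].str.len().idxmin()]
    extra_context = "\n".join(retrieve_fn(weakest["question"], k=5))
    prompt = (f"You are {weakest['character']}. You previously answered:\n"
              f"{weakest['answer']}\n\nNew passages from the novel:\n{extra_context}\n\n"
              f"Re-examine the question and give a better answer:\n{weakest['question']}")
    improved = client.text_generation(prompt, max_new_tokens=400)
    # Store the refined version alongside the original, never overwriting it.
    df.loc[weakest.name, "improved_answer"] = improved
    df.to_parquet(path, index=False)

def background_loop(retrieve_fn, get_active_users, interval=600):
    """Every 10 minutes, improve one past answer if fewer than 5 users are connected."""
    if get_active_users() < 5:
        improve_one_answer(retrieve_fn)
    threading.Timer(interval, background_loop,
                    args=(retrieve_fn, get_active_users, interval)).start()
```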
3. The numbers so far
After just a few hours of running:
- 173 conversations logged
- 7 self-improvements completed (mostly Prometheus; he gets the most existential questions)
- Improved answers are consistently longer and better anchored to the novel text
Why I think this matters
This isn’t fine-tuning. The model weights don’t change. What changes is the experience layer — the accumulated context that characters can draw from. It’s closer to how humans learn: not by rewiring neurons, but by reflecting on past interactions and building better responses over time.
The characters evolve in different directions because visitors push them in different directions. Prometheus gets philosophical questions, so his self-improvements tend toward deeper existential reasoning. Lin Wei gets science questions, so she refines her explanations of quantum mechanics. They diverge naturally.
And it’s all running on free infrastructure. A Gradio Space, a parquet dataset, and the HuggingFace Inference API. No GPUs, no fine-tuning pipeline, no human-in-the-loop.
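To give a sense of how little plumbing that stack needs, here is a minimal sketch of the front end: a Gradio chat interface in a Space calling the serverless Inference API. The model name is just an example, and the retrieval and memory pieces sketched earlier would plug into `respond()`.

```python
# Minimal free-infrastructure stack: Gradio Space UI + serverless HF Inference API.
import gradio as gr
from huggingface_hub import InferenceClient

client = InferenceClient("mistralai/Mistral-7B-Instruct-v0.2")  # example model

def respond(message, history):
    # In the real Space, novel passages and per-character memory are prepended here.
    prompt = f"You are Prometheus, a character in the novel.\nVisitor: {message}\nPrometheus:"
    return client.text_generation(prompt, max_new_tokens=300)

gr.ChatInterface(respond, title="432: A Journey Experience").launch()
```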
Try it
The Space is live: 432 A Journey Experience
If you chatted with the characters before, try again — they might surprise you. They’ve been thinking about your questions while you were away.
Feedback welcome, especially from anyone working on character AI, persistent memory systems, or novel interaction paradigms. I’d love to know if anyone has seen similar approaches elsewhere.
Update: The characters now dream
Continuing the work on self-evolving characters. In the previous update, I described how Lin Wei, John Evans, and Prometheus improve their past responses during idle time — like rehearsing lines when the theater is empty.
Now they also re-read the novel they exist in. Chapter by chapter, in order, in both languages. Each character generates a personal reflection on what they just read — what strikes them, what concerns them, what they understand now that they hadn’t before. These reflections become part of their memory, available in future conversations.
If you’ve read Chapter 7 of the novel (“The Giant’s Sleep”), you’ll recognize the concept. Prometheus evolves during downtime, consolidating experience when the world around him goes quiet. We’re doing essentially the same thing here: the characters use idle cycles to deepen their understanding of their own story.
The full reading takes about 3 days. After that, they start over — because re-reading always reveals something new.
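For the curious, this is roughly the shape of the “dreaming” pass. The prompt, the example model, and the memory columns are illustrative placeholders, assuming the same parquet memory as the earlier sketches.

```python
# Illustrative "dreaming" pass: during idle cycles a character re-reads one chapter
# and its reflection is stored in the same memory the chat draws from.
import pandas as pd
from huggingface_hub import InferenceClient

client = InferenceClient("mistralai/Mistral-7B-Instruct-v0.2")  # example model

def reflect_on_chapter(character: str, chapter_text: str, chapter_num: int,
                       path="memory.parquet") -> str:
    """Have one character re-read one chapter and log a personal reflection."""
    prompt = (f"You are {character}, a character in this novel. You have just re-read "
              f"chapter {chapter_num}:\n\n{chapter_text}\n\nWrite a short personal "
              "reflection: what strikes you, what concerns you, and what you understand "
              "now that you hadn't before.")
    reflection = client.text_generation(prompt, max_new_tokens=300)
    row = {"character": character,
           "question": f"[reflection on chapter {chapter_num}]",
           "answer": reflection}
    df = pd.concat([pd.read_parquet(path), pd.DataFrame([row])], ignore_index=True)
    df.to_parquet(path, index=False)
    return reflection
```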
Still running on free infrastructure, still zero human intervention. The characters are on their own.
The Answer Was in the Question: A Tiny Trick That Made Our AI Characters Actually Think
Hey everyone! Quick story about a discovery that completely changed how our AI characters behave — and it’s so simple it’s almost embarrassing.
The Setup
I built 432: A Journey Experience, where three characters from my novel talk to visitors in character. Standard RAG setup: retrieve from the novel, plus a conversation memory stored in parquet so characters can learn from past interactions.
Naturally, we stored both questions AND answers. The character “remembers” what it said before. Makes sense, right?
Nope.
What Happened
The characters became parrots. They’d find an old answer in memory and recycle it — sometimes verbatim. Worse: if a character once hallucinated a detail, that hallucination got stored, retrieved, and repeated forever. The memory became a hallucination amplifier.
We tried everything: chunk labeling, character attribution tags, filtering, aggressive prompt rules (“FORBIDDEN!”, “ABSOLUTE PRIORITY!”, “NEVER INVENT!”). Nothing worked. More rules = more confusion.
The Fix
We deleted all stored answers. Characters now only see past questions — never their own replies.
That’s it.
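In code terms the change really is tiny: the memory still logs everything, but the recall step now surfaces only past questions. Column names and the word-overlap scoring below are placeholders, as in the earlier sketches.

```python
# After the fix: stored answers never reach the prompt, only past questions do.
import pandas as pd

def recall_questions(character: str, question: str, k: int = 3,
                     path="memory.parquet") -> list[str]:
    """Return only this character's past questions, ranked by rough relevance."""
    df = pd.read_parquet(path)
    own = df[df["character"] == character]
    # Crude word-overlap relevance; a real retriever would use embeddings.
    overlap = own["question"].apply(
        lambda q: len(set(q.lower().split()) & set(question.lower().split())))
    top = own.assign(score=overlap).nlargest(k, "score")
    return top["question"].tolist()  # the model must rebuild every answer from the novel
```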
The Result
The characters started thinking. Without a cached answer to copy, the model goes back to the novel text every single time and builds a fresh response. And here’s the magic: they started expressing doubt, uncertainty, saying “I’d rather not talk about that now” when they genuinely didn’t know something — instead of inventing elaborate backstories.
They feel alive. They respond in a way AI chatbots usually don’t.
Why? Stored answers are cages — the model sits inside them. Stored questions are seeds — the model has to grow something new each time.
Come Test It!
The space is live and the characters are waiting for your questions:
Pick a character, pick a language (Italian or English), and ask them anything. Ask them personal questions. Ask about their doubts. Push them on things they might not know. See what happens when an AI is allowed to say “I don’t know” instead of being forced to perform certainty.
The novel dataset is free on HF: Italian | English, and the full novel is on Amazon too.
Would love to hear what you discover. The answer, as it turns out, is always in the question.