How Fishbrain actually remembers your work.

Fishbrain is a scoped memory layer that sits between you and your LLM—so your AI remembers projects, not just chats.

[Figure] Fishbrain's memory pipeline — from input through extraction, scoped organization, retrieval, and reflection to output

The Scoped Memory Model

Fishbrain organizes your AI's memory into a clear hierarchy: Global → Domain → Topic. Each level scopes what your model remembers and when; there's a short code sketch after the three levels below.

Global

Always-on memories about you as a person. Your preferences, writing style, communication habits—context that applies everywhere.

Example: "I prefer concise responses" or "My timezone is PST"

Domain

Projects, areas, or contexts. "Client A", "Personal Life", "Side Project"—each domain has its own isolated memory pool.

Example: Your startup's product specs, a client's brand guidelines, or your novel's character sheets

Topic

Individual threads or workstreams within a domain. Focused contexts for specific tasks that inherit from their parent domain.

Example: "Q4 Marketing Campaign" under your "Work" domain
The Ocean & Bay Metaphor

Think of your domain as an entire ocean. The water from that ocean can flow into bays (topics), but the water from one bay never flows into another bay unless you explicitly move it. This keeps your projects clean and prevents memory bleed.

What happens when you send a message

Four steps, every time. Automatic, transparent, and under your control.

[Figure] How Fishbrain builds context for every request: user message, relevant memory retrieval, scoring and filtering, and context assembly
1. You ask a question

Type your question into any of your favorite models: GPT, Claude, Gemini, Grok, or Memphish. Just a normal chat, nothing special on your end.

2. Fishbrain pulls relevant memories

We search your global + domain + topic memories and build a scoped context bundle. Only what's relevant to your current scope is included.
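
Here's an illustrative sketch of that retrieval step, reusing the Scope and Memory shapes from the earlier sketch. The embed and cosineSimilarity helpers are stand-ins for whatever scoring Fishbrain actually uses:

```typescript
// Placeholder scoring primitives (assumed, not Fishbrain's API).
declare function embed(text: string): Promise<number[]>;
declare function cosineSimilarity(a: number[], b: number[]): number;

// Collect every memory visible to the current scope, rank by
// relevance to the message, and keep the top few.
async function buildContextBundle(
  message: string,
  scope: Scope,
  store: Memory[],
  limit = 8,
): Promise<string> {
  const queryVec = await embed(message);
  const scored = await Promise.all(
    store
      .filter((m) => visibleTo(m, scope)) // scope filter first: no bleed
      .map(async (m) => ({
        memory: m,
        score: cosineSimilarity(queryVec, await embed(m.content)),
      })),
  );
  return scored
    .sort((a, b) => b.score - a.score) // most relevant first
    .slice(0, limit)
    .map((s) => `- ${s.memory.content}`)
    .join("\n");
}
```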

3. The model responds with context

Your LLM sees the context and answers like it actually knows your ongoing work. No more re-explaining. No more context amnesia.
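
A common way to hand that context to the model is to prepend it as a system message. This is a sketch of the general pattern, not Fishbrain's exact injection format:

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Prepend the scoped memory bundle ahead of the user's turn.
// The preamble wording is an assumption for illustration.
function withMemory(contextBundle: string, userMessage: string): ChatMessage[] {
  return [
    {
      role: "system",
      content: `Relevant memories for this user and scope:\n${contextBundle}`,
    },
    { role: "user", content: userMessage },
  ];
}
```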

4. Reflection updates your memory

After the response, the Reflection Engine can extract new factlets and store them for next time. Your AI gets smarter with every conversation.
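
A reflection pass like this is a well-known pattern: ask the model to distill durable facts from the turn, then file each one under the current scope. The prompt and the save callback below are illustrative assumptions, reusing the types from the earlier sketches:

```typescript
// Placeholder chat-completion call (assumed, not Fishbrain's API).
declare function complete(messages: ChatMessage[]): Promise<string>;

async function reflect(
  transcript: ChatMessage[],
  scope: Scope,
  save: (m: Omit<Memory, "id">) => Promise<void>,
): Promise<void> {
  const prompt =
    "List any new, lasting facts about the user or their project from this " +
    "conversation, one per line. Reply with an empty message if there are none.";
  const raw = await complete([...transcript, { role: "user", content: prompt }]);
  // Store each extracted factlet under the current scope for next time.
  for (const line of raw.split("\n").map((l) => l.trim()).filter(Boolean)) {
    await save({ scope, content: line, importance: "medium" });
  }
}
```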

"Why did my AI say that?"

If your model says something weird, Fishbrain can show you exactly why.

Chat Response
AI: "Based on your previous discussions with Sarah about the Q4 deadline, I'd recommend moving the launch to November 15th instead of the original October date. This aligns with the buffer time you mentioned preferring."

3 memories used

Wait, who's Sarah? And when did I mention a buffer?

Context Manager

Memories Injected (3)

Meeting with Sarah - Q4 Planning (importance: High)

Discussed Q4 launch timeline. Sarah prefers October but acknowledged we might need buffer...

My work preferences (scope: Global)

I prefer 2-week buffer times before major launches to handle unexpected issues...

Project timeline (importance: Medium)

Original launch date: October 1st. Dependencies: design review, QA sign-off...

With the Context Manager, you can do all of the following (see the sketch after this list):

See exactly what was used

View the exact memories injected into each response

Edit or adjust importance

Correct mistakes or lower/raise memory priority

Delete entirely

Remove memories you don't want used anymore
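
Those three actions map naturally onto a small API surface. Here's a hypothetical sketch; the method names are ours, not Fishbrain's documented API:

```typescript
// Hypothetical Context Manager interface (illustrative names).
interface ContextManager {
  // See exactly which memories were injected into a given response.
  memoriesUsed(responseId: string): Promise<Memory[]>;
  // Correct a memory's content.
  edit(memoryId: string, content: string): Promise<void>;
  // Raise or lower how heavily a memory is weighted.
  setImportance(memoryId: string, importance: Memory["importance"]): Promise<void>;
  // Remove a memory from all future retrieval.
  delete(memoryId: string): Promise<void>;
}
```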

Built for transparency and control

Scoped, structured memory

No topic bleed between projects

Multi-provider support

OpenAI, Claude, Gemini, Grok

Reflection engine

Turns chats into reusable factlets

Full transparency

See and edit exactly what your AI uses

Want more detail?

Read the docs for a deeper look at how Fishbrain works.

Open Docs

Ready to give your AI a real memory?

Start with your first month free. No credit card required.