Fishbrain API for your own apps
Fishbrain can act as a memory and context OS behind any LLM-backed application. Use the same scoped memory, reflections, and context inspection from your own code.
Drop-in memory layer, not another model.
Fishbrain sits between your app and your LLM provider. You handle the user experience; we handle the memory.
Your App
Sends user requests with context identifiers:
- User ID / account identifier
- Domain & Topic identifiers (project/thread)
- The user's message + optional metadata
Fishbrain
Handles memory retrieval and context composition:
- Looks up relevant memories for that user/domain/topic
- Builds a scoped context bundle
- Calls provider (or returns context for you to call)
LLM Provider
Generates a response from the composed context.
Your app receives the response:
- Show the answer to your user
- Optionally show a "why did it say that?" view
- Fishbrain's reflection engine learns for next time
You focus on your product. Fishbrain handles memory persistence, retrieval, token budgeting, and context transparency—across any LLM provider.
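As a rough sketch, the exchange above can be written as TypeScript types. The field names here mirror the integration example and are assumptions, not the final API contract:

```typescript
// Hypothetical shapes for a Fishbrain chat call (assumed, not the final API).
interface FishbrainChatRequest {
  userId: string;   // your user/account identifier
  domainId: string; // project-level scope
  topicId: string;  // thread-level scope
  message: string;  // the user's message
  provider?: string; // optional: which LLM provider to call
  model?: string;    // optional: which model to use
}

interface FishbrainChatResponse {
  assistantMessage: string; // answer with context already injected
  usedMemories: { id: string; text: string }[]; // for a transparency UI
  usage?: { promptTokens: number; completionTokens: number };
}

// Example request your app might send:
const exampleRequest: FishbrainChatRequest = {
  userId: "user_123",
  domainId: "project_acme",
  topicId: "onboarding_flow",
  message: "How should we handle user verification?",
};
```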
Basic integration flow
Here's a simplified example of how your app talks to Fishbrain.
// Example integration with Fishbrain API
// Note: Endpoint URLs and payload shape should match the actual Fishbrain API.
// This is illustrative logic, not a live SDK snippet.

const FISHBRAIN_API_URL = "https://api.fishbrain.ai/v1/chat"; // placeholder URL
const FISHBRAIN_API_KEY = process.env.FISHBRAIN_API_KEY;

async function sendMessageWithMemory({
  userId,
  domainId,
  topicId,
  message,
}: {
  userId: string;
  domainId: string;
  topicId: string;
  message: string;
}) {
  const response = await fetch(FISHBRAIN_API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${FISHBRAIN_API_KEY}`,
    },
    body: JSON.stringify({
      userId,
      domainId,
      topicId,
      message,
      // Optional: specify provider, model, etc.
      provider: "openai",
      model: "gpt-4o",
    }),
  });

  if (!response.ok) {
    throw new Error(`Fishbrain request failed: ${response.status}`);
  }

  const data = await response.json();

  return {
    // The assistant's response (with context already injected)
    assistantMessage: data.assistantMessage,
    // Metadata about which memories were used (for transparency UI)
    usedMemories: data.usedMemories || [],
    // Token usage info
    usage: data.usage,
  };
}
// Usage in your app:
const result = await sendMessageWithMemory({
  userId: "user_123",
  domainId: "project_acme",
  topicId: "onboarding_flow",
  message: "How should we handle user verification?",
});

console.log(result.assistantMessage);
// → Response informed by all relevant memories for this user/project/topic

This is a conceptual example. Actual endpoint URLs, authentication methods, and payload structures will be provided when you get API access.
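For the optional "why did it say that?" view, one approach is to render the usedMemories array from the response. The { id, text } memory shape below is an assumption for illustration, not a documented payload:

```typescript
// Render a simple "why did it say that?" panel from used memories.
// The memory shape ({ id, text }) is an assumption for illustration.
interface UsedMemory {
  id: string;
  text: string;
}

function renderWhyPanel(memories: UsedMemory[]): string {
  if (memories.length === 0) {
    return "No stored memories influenced this answer.";
  }
  return memories
    .map((m, i) => `${i + 1}. [${m.id}] ${m.text}`)
    .join("\n");
}

const panel = renderWhyPanel([
  { id: "mem_42", text: "User prefers email-based verification." },
]);
// panel === "1. [mem_42] User prefers email-based verification."
```

Showing users which memories shaped an answer pairs naturally with the Context Inspector described below.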
What Fishbrain handles for you
Scoped Memory
Per-user, per-project memory isolation
Reflection Engine
Auto-extract factlets from conversations
Token Budgeting
Automatic context window management
Context Inspector
See exactly what memories were used
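One way to picture the scoping model above: each memory lives under a composite key of user, domain, and topic. This is a conceptual sketch of the isolation idea, not Fishbrain's internal implementation:

```typescript
// Conceptual sketch: a memory scope as a composite key.
// Illustrates per-user/per-project isolation, not Fishbrain internals.
function memoryScopeKey(userId: string, domainId: string, topicId: string): string {
  return `${userId}::${domainId}::${topicId}`;
}

// Two users in the same project still get distinct scopes:
const scopeA = memoryScopeKey("user_123", "project_acme", "onboarding_flow");
const scopeB = memoryScopeKey("user_456", "project_acme", "onboarding_flow");
// scopeA !== scopeB, so their memories never mix
```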
Request API access
The Fishbrain API is available for teams and products that want a dedicated memory layer. Enterprise and Pro users can reach out for access, rate limits, and integration help.
For high-volume or embedded use cases, Enterprise pricing applies.
Security & data isolation
Every API call is scoped to your application and users. Row-level security ensures your users' memories never mix with other accounts. API keys are stored securely, and all traffic is encrypted.
Learn more about our security approach →