
Why every AI agent suffers from amnesia — and how to fix it

Every agent you build starts from zero. It doesn't have to.

The problem: every session starts from zero

Today's AI agents are brilliant — and completely forgetful. Every new chat, every new tab, every new ticket starts the same way: from a blank slate.

The agent doesn't remember your stack, the last bug you shipped, or the onboarding flow you've been iterating on all week. It doesn't know that you prefer bullet points over paragraphs, or that you're on a tight deadline.

Why this matters

When every session starts from zero, users end up repeating themselves. Again. And again.

The result: agents feel dumb, experiences feel generic, and users lose trust. The magic of "this thing knows me" never shows up.

The naive fix: stuff everything into context

The first instinct is obvious: just stuff more into the model's context window. Dump every previous message, every summary, every user fact into the system prompt.

That works, for a while. But it's expensive (you pay for the same tokens on every call), slow, capped by the context window, and unstructured: the model has to re-mine raw history for the few facts that actually matter.
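A minimal sketch of the context-stuffing pattern makes the problem visible. The names here are illustrative, not part of any SDK: every prior turn is replayed verbatim, so the payload grows linearly with the conversation.

```typescript
// Naive approach: replay the entire conversation history on every call.
// The messages array (and therefore cost and latency) grows without bound.

type Msg = { role: "system" | "user" | "assistant"; content: string }

const history: Msg[] = []

function buildNaivePrompt(systemPrompt: string, userMessage: string): Msg[] {
  // Every prior turn is re-sent verbatim.
  return [
    { role: "system", content: systemPrompt },
    ...history,
    { role: "user", content: userMessage },
  ]
}

// Simulate three turns to show the linear growth.
for (let turn = 1; turn <= 3; turn++) {
  const messages = buildNaivePrompt("You are a helpful assistant.", `question ${turn}`)
  console.log(`turn ${turn}: ${messages.length} messages re-sent`) // 2, then 4, then 6
  history.push({ role: "user", content: `question ${turn}` })
  history.push({ role: "assistant", content: `answer ${turn}` })
}
```

Ten turns in, you're paying for nine turns of history on every request, most of it irrelevant to the question at hand.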

The right fix: a persistent, structured context layer

Instead of re-sending the entire history every time, you want a living profile per user:

Stored once, updated over time, and re-used across every agent and every model you run.
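As a sketch, such a profile might look like the following. The shape is hypothetical, not Threadline's actual schema; the point is that a compact, structured summary replaces raw history:

```typescript
// Hypothetical shape for a per-user profile (illustrative only,
// not Threadline's actual schema).
interface UserProfile {
  stack: string[]       // e.g. languages and frameworks the user works with
  preferences: string[] // e.g. "prefers bullet points over paragraphs"
  activeWork: string[]  // e.g. "iterating on the onboarding flow"
}

// Render the stored profile into a compact system-prompt preamble
// instead of replaying raw history.
function renderProfile(p: UserProfile): string {
  return [
    `Stack: ${p.stack.join(", ")}`,
    `Preferences: ${p.preferences.join("; ")}`,
    `Current work: ${p.activeWork.join("; ")}`,
  ].join("\n")
}

const profile: UserProfile = {
  stack: ["TypeScript", "Next.js"],
  preferences: ["prefers bullet points over paragraphs"],
  activeWork: ["iterating on the onboarding flow"],
}

console.log(renderProfile(profile))
```

A few hundred tokens of distilled profile carries more signal than thousands of tokens of replayed transcript.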

How Threadline solves it

Threadline gives you a persistent, structured context layer that sits next to your existing AI stack. It works with any LLM, any framework, any product.

There are two primitives: inject, which merges the user's stored profile into your prompt before each call, and update, which folds the latest exchange back into the profile afterward.

Context is user-owned: users can see, edit, and delete their profile in the trust dashboard. Agents only access what they've been explicitly granted.
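To make the two primitives concrete, here is a toy in-memory version of the inject/update loop. This is a sketch of the pattern only, not the Threadline SDK itself:

```typescript
// Toy in-memory stand-in for the inject/update pattern.
// Illustrative only — not the real Threadline SDK.
class MemoryLayer {
  private profiles = new Map<string, string[]>()

  // inject: prepend what we know about the user to the base prompt.
  inject(userId: string, basePrompt: string): string {
    const facts = this.profiles.get(userId) ?? []
    if (facts.length === 0) return basePrompt
    return `${basePrompt}\nKnown about this user:\n- ${facts.join("\n- ")}`
  }

  // update: fold what the latest exchange revealed back into the profile.
  update(userId: string, fact: string): void {
    const facts = this.profiles.get(userId) ?? []
    this.profiles.set(userId, [...facts, fact])
  }
}

const memory = new MemoryLayer()
const base = "You are a helpful assistant."

// First session: nothing is known yet, so the base prompt passes through.
console.log(memory.inject("u1", base) === base) // true

// The agent learns something; the next session starts ahead.
memory.update("u1", "prefers bullet points over paragraphs")
console.log(memory.inject("u1", base))
```

The real service adds the parts that are hard to build yourself: extraction of durable facts from conversations, cross-agent sharing, and the user-facing controls described above.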

Before vs after

Here's what a typical stateless agent looks like:

// Before: stateless agent
const system = "You are a helpful assistant."

const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: system },
    { role: "user", content: userMessage },
  ],
})

And here's the same flow with Threadline:

// After: with Threadline
import { Threadline } from "threadline-sdk"

const tl = new Threadline({ apiKey: process.env.THREADLINE_KEY! })

const basePrompt = "You are a helpful assistant."
const { injectedPrompt, cacheHint } = await tl.inject(userId, basePrompt)

const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: injectedPrompt },
    { role: "user", content: userMessage },
  ],
  // Spread any cache params directly into the request body (Node SDK style;
  // extra_body is a Python SDK convention).
  ...(cacheHint?.recommended ? cacheHint.openaiParam : {}),
})

const agentResponse = completion.choices[0]?.message?.content ?? ""

await tl.update({ userId, userMessage, agentResponse })

Same model, same base prompt — but now every interaction compounds. The agent remembers who it's talking to.

Get started

If you want your agents to stop forgetting everything, the quickest path is the Threadline quickstart. Drop in the inject + update pattern and ship a memory layer in minutes, not weeks.

Built by Threadline · threadline.to