Fugue Labs · AI Research Lab

fugue

Stateful systems for stateless models.

$ cat manifesto.txt
01

Thesis — scaling memory, not context

The current trajectory of AI — scaling context windows indefinitely — is a structural dead end. Attention dilution, computational waste, behavioral drift. These aren't bugs. They're the architecture.

The next generation of AI systems will not scale context. They will scale memory.

Our agents operate with zero short-term memory. Context is flushed after every turn. Continuity comes from a tiered retrieval backend that handles consolidation, decay, and recall — the hippocampus — so the model can function purely as cortex. This allows agents to operate continuously for months without behavioral drift.


02

Research — open problems we're working on


03

Open Source — the runtime layer

gollem


The execution layer. Production-grade Go framework for LLM agents. Durable execution, streaming, strict structured output. Native Temporal integration. Fugue provides the memory. Gollem provides the runtime.

monty-go

The sandbox. Pure-Go Python execution via WASM. LLMs write code instead of making sequential tool calls — one model call, not three. No CGO, no containers, no subprocess. Powers code-mode in Gollem.