GoatDB combines the best of both worlds.
Remote databases
PostgreSQL, MySQL, DynamoDB
Shared across devices with strong consistency. Designed for centralized workloads where all clients connect to one server.
Embedded databases
SQLite, LevelDB, RocksDB
Fast reads, no network. Designed for single-device workloads where data stays local to one process.
AI agents reason locally, run on any device, and collaborate across instances. They need both: local speed and shared state. That's GoatDB.
Local speed. Shared state. One key to protect.
Data lives in memory on every device. Reads run at microsecond speed — no network, no disk. Changes sync through the server automatically. Conflicts resolve with Git-style three-way merge at the field level. Every write is cryptographically signed with Ed25519.
Like Git, the server coordinates — but every client holds the full commit graph. If the server crashes, any client restores it. The only thing the operator protects is the root key. Scale horizontally by adding repositories — each one syncs, persists, and fails independently.
Remote: devices share one central copy. Embedded: each device keeps a separate local copy. GoatDB: the server coordinates, and any client can restore it.
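To make field-level merging concrete, here is a minimal sketch of a three-way merge over flat fields. It is illustrative only, not GoatDB's implementation: the Fields type, the equality check, and the conflict rule are all simplifications.

type Fields = Record<string, unknown>;

// Field-level three-way merge (sketch): each field is compared against the
// common base; the side that changed wins, and a genuine conflict falls
// through to a deterministic rule so every replica converges the same way.
function threeWayMerge(base: Fields, ours: Fields, theirs: Fields): Fields {
  const merged: Fields = { ...base };
  const keys = new Set([...Object.keys(ours), ...Object.keys(theirs)]);
  for (const key of keys) {
    const oursChanged = ours[key] !== base[key];
    const theirsChanged = theirs[key] !== base[key];
    if (oursChanged && !theirsChanged) merged[key] = ours[key];
    else if (theirsChanged && !oursChanged) merged[key] = theirs[key];
    else if (oursChanged && theirsChanged) merged[key] = ours[key]; // placeholder conflict rule
  }
  return merged;
}

A production engine replaces the placeholder rule with a per-type strategy so concurrent edits to the same field resolve identically on every device.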
Queries that stay fast
Queries subscribe to data and update incrementally — results stay current as data changes. 1,000× faster than SQLite in the browser at 10k items. GoatDB uses predicates because they naturally compose into live subscriptions. One concept for AI to learn, zero boilerplate to generate.
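The incremental model is easy to see in miniature. The sketch below illustrates the pattern, not GoatDB's internals, and every name in it is hypothetical: after the first scan, each change is tested against the predicate on its own, so results stay current without re-running the query.

type Item = { path: string; fields: Record<string, unknown> };

// Live query maintenance (sketch): the initial scan fills the result set once;
// afterwards each changed item is checked in isolation, O(1) per change
// instead of O(n) per re-query.
class LiveQuery {
  private matches = new Map<string, Item>();
  constructor(private predicate: (item: Item) => boolean) {}

  onItemChanged(item: Item): void {
    if (this.predicate(item)) this.matches.set(item.path, item);
    else this.matches.delete(item.path);
  }

  results(): Item[] {
    return [...this.matches.values()];
  }
}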
Autonomous and offline
Agents run the same code everywhere — backend, edge, browser — with no runtime-specific adapters. They work fully offline and sync through the server when connected. Clients only open the repositories they need — scale by adding repos, not expanding them.
Built for agents to write and run
AI coding tools scaffold a complete GoatDB app in a single prompt — schema, queries, sync. Pure TypeScript from end to end. Every commit is signed with Ed25519, so you always know which agent or user wrote what — cryptographic attribution, built in.
React hooks. Zero boilerplate.
Four hooks replace your entire state layer. useQuery subscribes to live data. useItem tracks a single document with field-level granularity. Built on useSyncExternalStore — the same primitive behind Zustand and Redux Toolkit. Direct mutation, auto-commit, no useCallback needed.
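A minimal component sketch of that flow follows. The import path, the db prop type, and the item's path accessor are assumptions rather than confirmed API; the useQuery options mirror the query example further down this page, and AgentMemorySchema is the schema defined there.

// Sketch: the import path and `path` accessor are assumptions, not confirmed API.
import { useQuery } from '@goatdb/goatdb/react';

function MemoryList({ db }: { db: any }) {
  // Live subscription: the component re-renders whenever a matching item changes
  const memories = useQuery(db, {
    source: '/data/memories',
    schema: AgentMemorySchema,
    predicate: ({ item }) => item.get('confidence') > 0.7,
  });
  return (
    <ul>
      {Array.from(memories.results()).map((m) => (
        <li key={m.path}>{m.get('observation')}</li>
      ))}
    </ul>
  );
}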
Built for the main thread
GoatDB is browser-native. Query scans run as cooperative coroutines that yield every ~20ms, just over a single 16.7ms frame at 60fps. The scheduler picks the shortest-running task first, so UI interactions always win. If a scan becomes irrelevant, cancel it mid-flight.
File I/O runs in a dedicated worker with zero-copy transfer. Query results persist to disk and restore on reopen — no re-scan on page load. On the server, the same queries run as synchronous loops with zero scheduling overhead.
Without yielding: the UI waits for the full scan. With cooperative scans: the UI stays responsive between chunks.
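The pattern itself is small enough to sketch. The function below illustrates the cooperative-scan idea described above, not GoatDB's actual scheduler: it checks the clock while scanning, yields back to the event loop roughly every 20ms, and honors an AbortSignal so a stale scan can be cancelled mid-flight.

const TIME_SLICE_MS = 20;

// Cooperative scan (sketch): filter a large array without blocking the UI.
async function cooperativeScan<T>(
  items: readonly T[],
  predicate: (item: T) => boolean,
  signal?: AbortSignal,
): Promise<T[]> {
  const results: T[] = [];
  let sliceStart = performance.now();
  for (const item of items) {
    if (signal?.aborted) return results; // query became irrelevant: stop early
    if (predicate(item)) results.push(item);
    if (performance.now() - sliceStart >= TIME_SLICE_MS) {
      await new Promise((resolve) => setTimeout(resolve, 0)); // yield the main thread
      sliceStart = performance.now();
    }
  }
  return results;
}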
Four steps. That's the whole API.
Define, query, sync, verify. No migrations. No ORM. No glue code.
Define & create
TypeScript schemas. No SQL, no migrations, no ORM. The source field lets you track which agent instance wrote each memory.
const AgentMemorySchema = {
  ns: 'agent-memory',
  version: 1,
  fields: {
    observation: { type: 'string', required: true },
    confidence: { type: 'number', default: () => 0.5 },
    source: { type: 'string', required: true },
    recordedAt: { type: 'date', default: () => new Date() }
  }
} as const;

// Create a memory item — source identifies which agent wrote it
const mem = db.create('/data/memories/pref-1', AgentMemorySchema, {
  observation: 'User prefers concise responses',
  confidence: 0.9,
  source: 'preference-agent'
});
Query live
After the first scan, queries update incrementally — no re-query, no polling. In React, wrap any query with useQuery() for automatic re-renders on data changes — no useEffect, no subscriptions, no cleanup. Same predicate API on Deno, Node.js, and the browser.
// Warm query runs in microseconds — data is already in memory
const memories = db.query({
  source: '/data/memories',
  schema: AgentMemorySchema,
  predicate: ({ item }) => item.get('confidence') > 0.7,
  sortBy: 'recordedAt'
});

// In React: const memories = useQuery(db, { source: '/data/memories', ... });
// Results auto-update when data changes
for (const mem of memories.results()) {
  console.log(mem.get('observation'));
}
It just syncs
Concurrent agents, multiple devices, offline edits — everything merges automatically with Git-style three-way merge. No conflict-resolution code to write.
// Agent A, offline, updates confidence on its local replica
mem.set('confidence', 0.95);

// Agent B, offline, updates observation on its own replica of the same item
mem.set('observation', 'User prefers bullet points over paragraphs');

// Both reconnect — GoatDB merges the divergent edits automatically.
// Result: confidence = 0.95, observation = 'User prefers bullet points...'
Signed by default
Every commit is cryptographically signed with Ed25519. Verify which agent wrote what, build audit trails, or enforce authorization — without trusting the server.
// Every write is signed automatically with Ed25519
const task = db.create('/data/tasks/plan-1', TaskSchema, {
  title: 'Analyze Q4 metrics',
  assignedTo: 'analytics-agent',
  status: 'pending'
});

task.set('status', 'complete');

// Verify which agent wrote what, cryptographically
task.commit.session; // Ed25519 session that signed this commit
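The signing primitive itself is standard. As a standalone illustration, independent of GoatDB's API, here is an Ed25519 sign/verify round-trip using Web Crypto, which supports Ed25519 on Deno and recent Node.js and browser runtimes; the payload shape is made up for the example.

// Standalone Web Crypto sketch: illustrates Ed25519 itself, not GoatDB's API.
const { publicKey, privateKey } = (await crypto.subtle.generateKey(
  'Ed25519',
  false, // private key stays non-extractable
  ['sign', 'verify'],
)) as CryptoKeyPair;

// Sign a serialized change, roughly the shape a commit payload might take
const payload = new TextEncoder().encode(
  JSON.stringify({ path: '/data/tasks/plan-1', field: 'status', value: 'complete' }),
);
const signature = await crypto.subtle.sign('Ed25519', privateKey, payload);

// Anyone holding the public key can verify authorship without trusting the server
const valid = await crypto.subtle.verify('Ed25519', publicKey, signature, payload); // true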
Where GoatDB fits
GoatDB loads the full repository into memory at open time — under a second for 100k items. After that one-time cost, reads complete in ~1µs. Each write is individually signed with Ed25519, so bulk inserts trade throughput for cryptographic attribution — ~16× slower than unsigned SQLite writes. Instead of SQL joins, GoatDB uses predicate-based queries that subscribe to changes and update incrementally.
GoatDB is built for apps that AI tools scaffold in one prompt and agents that run on any device. For SQL analytics, pair it with PostgreSQL. For warehousing, pair it with ClickHouse — GoatDB handles the local-first layer.
See the full benchmarks →

Open source. MIT licensed. Built in public.
GoatDB is built in public. The binary commit format, custom binary codec, and P2P sync protocol are all shipping now. Here's what's next — and where your help matters most.