Inside DotDB: Why Deterministic Systems Start Local
Why DotDB begins as a local storage contract, not a remote service
Deterministic systems rarely break first at the scheduler or the network layer. They break when nobody can reconstruct what a run actually did.
A run crashes and its logs are scattered across files. State reflects one moment, but the events that produced it reflect another. An engineer can reproduce the code path, but not the exact execution context. For conventional software that is annoying. For AI-native systems, it is a structural problem.
Dotlanth is trying to make execution replayable, inspectable, and eventually reliable enough for autonomous systems. That goal sounds lofty until you hit the first practical question: where do runs, logs, and state live while the system is still local-first? DotDB is our answer, and SQLite is its first backend.
Most modern systems are optimized for elasticity, not replayability. Logs are emitted asynchronously. State is spread across caches, databases, and queues. Retries blur the difference between an event happening once and merely appearing once. By the time a failure matters, engineers can usually prove that a run went wrong, but not reconstruct the exact path that led there.
That gets worse with agents. An agent run is not just a stateless request. It has intent, intermediate decisions, tool calls, logs, and evolving state. Once an agent touches the world, debugging needs stronger guarantees than “we have some logs somewhere.”
The key insight is that deterministic compute does not start with a distributed database. It starts with a durable local ground truth for execution history.
That is the role of DotDB. DotDB is the logical store for run metadata, ordered logs, and state. It is not a claim that one database engine should rule every deployment forever. It is a storage contract: one model for runs, one model for ordered events, and one model for state snapshots and updates.
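To make the idea of a storage contract concrete, here is a minimal sketch of what such a contract could look like. The method names and signatures are hypothetical, not DotDB's actual API; the point is that any backend must cover the same three models: runs, ordered logs, and state.

```python
from typing import Optional, Protocol


class DotDBBackend(Protocol):
    """Hypothetical sketch of the DotDB storage contract.

    Method names are illustrative. Any backend (SQLite today,
    something else later) must expose the same three models:
    runs, ordered logs, and state snapshots/updates.
    """

    def create_run(self, run_id: str, project_path: str, mode: str) -> None: ...
    def append_log(self, run_id: str, seq: int, level: str, message: str) -> None: ...
    def put_state(self, namespace: str, key: str, value: bytes) -> None: ...
    def get_state(self, namespace: str, key: str) -> Optional[bytes]: ...
```

Swapping SQLite for another engine then means writing a new implementation of this contract, not redefining what a run, a log, or a state update is.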
Mental model
DotVM executes intent
-> DotDB records the run
-> ordered logs preserve the story
-> state_kv preserves the latest durable state
-> replay and inspection start from evidence, not guesswork

That distinction matters because it separates the semantics from the substrate. Today, DotDB uses SQLite locally. Tomorrow, the same DotDB model can sit on other backends if the platform needs remote coordination or stronger replication. The engine can change later. The contract should not.
In other words: DotDB uses concrete databases as implementation backends. SQLite is the first one because it best matches the current problem shape. The deeper asset is the storage model itself.
For v26.1, DotDB is implemented as a local SQLite database stored under the project workspace, for example in .dotlanth/dotdb.sqlite.
The schema is intentionally narrow. It only covers what the runtime needs right now: durable runs, append-only logs, and a foundation for state persistence.
v26.1 schema shape
runs(id, created_at, updated_at, project_path, mode, status)
run_logs(run_id, seq, ts, level, message)
state_kv(namespace, key, value_blob, updated_at)
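The shape above can be expressed as SQLite DDL. This is a sketch derived from the schema shape, not the exact production schema; column types and constraints are assumptions.

```python
import sqlite3

# v26.1 schema shape as SQLite DDL (illustrative; types and
# constraints are assumptions, table/column names follow the post).
SCHEMA = """
CREATE TABLE IF NOT EXISTS runs (
    id           TEXT PRIMARY KEY,
    created_at   TEXT NOT NULL,
    updated_at   TEXT NOT NULL,
    project_path TEXT NOT NULL,
    mode         TEXT NOT NULL,
    status       TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS run_logs (
    run_id  TEXT NOT NULL REFERENCES runs(id),
    seq     INTEGER NOT NULL,
    ts      TEXT NOT NULL,
    level   TEXT NOT NULL,
    message TEXT NOT NULL,
    PRIMARY KEY (run_id, seq)
);
CREATE TABLE IF NOT EXISTS state_kv (
    namespace  TEXT NOT NULL,
    key        TEXT NOT NULL,
    value_blob BLOB,
    updated_at TEXT NOT NULL,
    PRIMARY KEY (namespace, key)
);
"""

# In-memory database for demonstration; the real store lives
# under the project workspace (e.g. .dotlanth/dotdb.sqlite).
conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```

Note the composite primary key on `run_logs(run_id, seq)`: it makes the ordering of a run's events part of the schema itself, not an application-level convention.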
We are also using migrations from the beginning. That sounds like a small implementation detail until the first schema change lands. A deterministic platform cannot ask users to hand-edit runtime storage every time the internal model evolves. If DotDB is going to be a durable part of the developer experience, upgrades have to be automatic and predictable.
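One lightweight way to make upgrades automatic is SQLite's built-in `PRAGMA user_version` counter. The sketch below is an assumption about mechanism, not DotDB's actual migration code, and the migration list is hypothetical; the point is that pending migrations apply in order without the user hand-editing anything.

```python
import sqlite3

# Hypothetical migration list: index position + 1 is the schema version.
MIGRATIONS = [
    # v1: initial schema
    "CREATE TABLE runs (id TEXT PRIMARY KEY, status TEXT)",
    # v2: a later schema change lands without hand-editing
    "ALTER TABLE runs ADD COLUMN mode TEXT",
]


def migrate(conn: sqlite3.Connection) -> None:
    """Apply any pending migrations, tracking progress in user_version."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version, sql in enumerate(MIGRATIONS[current:], start=current + 1):
        with conn:  # commit each migration together with its version bump
            conn.execute(sql)
            conn.execute(f"PRAGMA user_version = {version}")


conn = sqlite3.connect(":memory:")
migrate(conn)  # applies both migrations
migrate(conn)  # no-op: already at the latest version
```

Because the version lives inside the database file itself, an upgraded runtime can open any older workspace database and bring it forward predictably.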
SQLite is the right fit here because it gives us transactions, ad hoc querying, low setup cost, and a debugging surface that normal engineers can actually inspect with standard tools.
An embedded KV store like sled looks appealing because it is lightweight and local. The problem is shape mismatch. DotDB does not only need key-value state. It also needs ordered run logs and ad hoc inspection by humans. A KV abstraction pushes too much of that structure into application code.
Plain files are great right up until they are not. Once you need atomic writes, schema evolution, reliable ordering, and structured queries by run ID, file-based persistence starts acting like a fragile homemade database. JSONL remains useful for export and tooling, but not as the authoritative store for replayable execution history.
A remote database may make sense later, especially if DotDB needs to coordinate across machines or support hosted workflows. But for a local-first alpha, that would optimize for future scale before we have stabilized the semantics of a single run. We would be adding ops, network dependency, and setup friction before proving the core model.
The upside is immediate. Local debugging gets better because DotDB is queryable and inspectable. The runtime gets a clear place to persist run identity, ordered logs, and state. Transactions give the system a more reliable story around durability and event ordering than loose files ever would.
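To illustrate what transactions buy here, the sketch below appends a log line and touches the run row in a single transaction, so the sequence number and the run's `updated_at` can never drift apart. The helper function and the simplified table shapes are illustrative assumptions, not DotDB's real internals.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE runs (id TEXT PRIMARY KEY, updated_at TEXT);
CREATE TABLE run_logs (run_id TEXT, seq INTEGER, ts TEXT, level TEXT,
                       message TEXT, PRIMARY KEY (run_id, seq));
""")


def append_log(run_id: str, level: str, message: str, ts: str) -> int:
    """Hypothetical helper: allocate the next seq and append atomically."""
    with conn:  # one transaction: next seq, insert log, touch the run
        (last,) = conn.execute(
            "SELECT COALESCE(MAX(seq), 0) FROM run_logs WHERE run_id = ?",
            (run_id,),
        ).fetchone()
        seq = last + 1
        conn.execute(
            "INSERT INTO run_logs (run_id, seq, ts, level, message) "
            "VALUES (?, ?, ?, ?, ?)",
            (run_id, seq, ts, level, message),
        )
        conn.execute(
            "UPDATE runs SET updated_at = ? WHERE id = ?", (ts, run_id)
        )
    return seq


with conn:
    conn.execute("INSERT INTO runs VALUES ('run-1', 't0')")
append_log("run-1", "info", "agent started", "t1")
append_log("run-1", "info", "tool call issued", "t2")
```

With loose files, either half of that write could land without the other after a crash; inside a transaction, the log entry and the run update commit together or not at all.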
The tradeoffs are real too. SQLite adds a dependency. Migrations become part of the product. Concurrency has to be handled carefully as features expand. Local persistence also creates a security surface: logs may contain sensitive information, so the workspace database has to be treated as private runtime data.
Choosing SQLite does not magically solve determinism. What it does is create the first honest substrate for it. With DotDB in place, Dotlanth can move toward replayable execution, better divergence analysis between runs, richer state tooling, and eventually backend portability without abandoning its local-first debugging model.
That is the important part. Reliable AI agents will need more than traces and dashboards. They will need execution histories that are durable, inspectable, and grounded in a storage contract that survives backend changes. DotDB is where that contract begins.
This is not a scale-first decision. It is a correctness-first one.
DotDB starts with SQLite because a deterministic compute platform first needs a single run to be understandable. Once that foundation is real, larger backends can extend it instead of redefining it.