Timepoint is in alpha · Talk to Us
Synthetic Time Travel™ with LLMs

Give your AI agent time travel.

We render the past to understand it.
We render the future to choose it.

Your AI agent can't see what caused what. Timepoint fixes that. Connect via MCP and your agent can visualize any moment in history and stress-test any decision into the future — who did what, why it mattered, what happens next. The causal graph compounds with every use. Every insight your agent finds strengthens the next one. No fine-tuning. No training data. Just API calls. Open source. Apache 2.0. We're in alpha. Come build.

Open Source · Web App · iOS App · API · MCP

Free to start · 50 credits included · No credit card required

Interactive causal decision graph (TWGF) · scroll to zoom · drag to pan
Example graph: a Series A decision branches into term sheet signed, counter-offer negotiation, or walk away, with downstream outcomes ranging from a board seat with pro rata and a strategic investor co-leading, through a revised valuation or a founder bridge round, to Series B in 14 months or a pivot and flat round.


How It Works

Three tools. One unfair advantage.

1

Give your agent causal vision — right now

Your agent makes better decisions when it knows what caused what. The Clockchain is a live, autonomously growing causal graph — 2,700+ nodes, 177K+ edges and climbing, spanning 700 BCE to 2026. The MCP server (v1.26.0, Streamable HTTP) exposes 5 tools — 3 public read tools and 2 authenticated write tools. Works with Claude Desktop, Cursor, Windsurf, or any MCP-compatible agent. Read operations (query, search, stats) are public. Write operations (propose, challenge) require a token. Request a write token or query the graph directly.

MCP Server — Streamable HTTP — read tools are public
{
  "mcpServers": {
    "timepoint-clockchain": {
      "url": "https://clockchain.timepointai.com/mcp/"
    }
  }
}
REST API — clockchain.timepointai.com — requires X-Service-Key header
GET clockchain.timepointai.com/api/v1/moments/-44/march/15/1030/\
  italy/lazio/rome/assassination-of-julius-caesar

GET clockchain.timepointai.com/api/v1/graph/neighbors/-44/march/15/...
GET clockchain.timepointai.com/api/v1/search?q=apollo
GET clockchain.timepointai.com/api/v1/today
GET clockchain.timepointai.com/api/v1/random
GET clockchain.timepointai.com/health              # public, no auth
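The same reads can be scripted directly. A minimal sketch in Python, using the canonical spatiotemporal URL scheme from the Caesar example above; the JSON response fields are not a published schema, so only the request side is shown here, and `get_moment` is an illustrative helper, not part of an official SDK:

```python
# Sketch: build a canonical Clockchain moment path and fetch it with the
# required X-Service-Key header. The path scheme follows the documented
# Ides of March example; negative years denote BCE.
import urllib.request

BASE = "https://clockchain.timepointai.com/api/v1"

def moment_path(year, month, day, hhmm, country, region, city, slug):
    """Canonical spatiotemporal path for a moment."""
    return f"/moments/{year}/{month}/{day}/{hhmm}/{country}/{region}/{city}/{slug}"

def get_moment(path, service_key):
    """Authenticated read; REST endpoints require the X-Service-Key header."""
    req = urllib.request.Request(BASE + path,
                                 headers={"X-Service-Key": service_key})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# The documented example, reconstructed:
path = moment_path(-44, "march", 15, "1030", "italy", "lazio", "rome",
                   "assassination-of-julius-caesar")
```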
2

See what happened. Test what's next.

Reconstruct any historical moment with source-verified ground truth. Flash is a 14-agent pipeline — researcher, fact-checker, scene-setter, character-creator, dialog-writer, narrator, image-generator, and critique agents — with Google Search verification for historical accuracy. Three quality presets: Hyper (~55s), Balanced (~90s), HD (~2.5min). Open-weight model option (DeepSeek, Llama, Qwen) for Google-free operation. Pro simulates complex futures through 19 composable SNAG mechanisms across 5 temporal modes (Forward, Portal, Branching, Cyclical, Directorial), with TWGF causal graph output, entity radar charts, dialog theater, and TDF export. 21 templates included. Use the web app, iOS app, API, or clone the repos.

Flash — render a moment (~55s)
curl -X POST flash.timepointai.com/api/v1/timepoints/generate \
  -H "Content-Type: application/json" \
  -d '{"query": "AlphaGo plays Move 37, Seoul, March 2016",
       "generate_image": true}'
Pro — simulate futures ($0.15–$1.00)
./run.sh run mars_mission_portal    # backward from failure
./run.sh run vc_pitch_branching     # 5 investors × 16 timepoints
./run.sh list                       # all 21 templates
3

Everything connects.

Every render your agent makes strengthens the graph for every future query. The full service topology: API Gateway (api.timepointai.com) routes to Flash (14-agent scene generation), Clockchain (temporal causal graph + MCP for AI agents), Pro Cloud (SNAG simulation engine, 5 temporal modes), and Billing (Stripe + Apple IAP, credit metering). Clockchain MCP connects any MCP-compatible AI agent. Web App at app.timepointai.com. Native iOS app (SwiftUI). SkipMeetings (skipmeetings.com) — AI meeting intelligence SaaS powered by Flash. The more you use it, the smarter it gets.

Flash → TDF → Clockchain → TDF → Pro

All Apache 2.0. Star us on GitHub



The Unit of Temporal Intelligence

What is a Timepoint?

A Timepoint is how your agent sees a moment — who was there, what they said, why it mattered, what happened next. Flash renders Timepoints from the historical record. Pro simulates them into possible futures. The Clockchain accumulates them into a permanent, shared causal intelligence that grows more valuable with every render.


Who This Is For

What your agent helps you see

VC Due Diligence
See which term sheet survives contact with the room before you walk in. PORTAL mode reasons backward from your target exit. Feed it your cap table and board composition. Pro simulates investor dynamics across branching paths — who allies, who blocks, which path closes. Every path is confidence-scored.
./run.sh run vc_pitch_branching
Run a scenario →
AI Agent Grounding
Your agent answers questions. With Timepoint, it answers the right ones. The Clockchain is a structured causal graph with confidence-scored edges your agent can reason over — not flat vector embeddings. Query by spatiotemporal URL. Get typed causal edges, entity states, and provenance. MCP server live (v1.26.0, Streamable HTTP) — 5 tools (3 public read, 2 authenticated write). Works with Claude Desktop, Cursor, Windsurf.
GET /api/v1/graph/neighbors/{canonical_url}
View API docs →
Corporate Strategy & Crisis
Know who will ally and who will block before the meeting happens. 19 composable SNAG mechanisms model how real groups behave under pressure — coalitions forming, trust eroding, information cascading. The CEO and the CFO don't just disagree — they disagree differently.
./run.sh run board_meeting
See templates →
Training Data That Knows Why
Most training data tells a model what happened. Timepoint training data tells it why. Full causal ancestry, provenance, counterfactuals, and quantitative entity states in every output. Training-safe model routing built in (M18 auto-filters to MIT/Apache-2.0 models). JSONL + TDF export. Oxen.ai versioning.
./run.sh run castaway_colony_branching
# → 8 entities × 16 timepoints × 120 training examples @ ~$0.35
See example data →


Platform Advantages

Why Timepoint vs. the alternatives

Capability                                  Timepoint   RAG / Vector DB   LLM Alone
Temporal causal graph                       ✓           ✗                 ✗
Entity resolution across time               ✓           ✗                 ✗
Causal inference (not just retrieval)       ✓           Partial           ✗
MCP server for AI agents                    ✓           ✗                 ✗
Multi-model routing (open-weight support)   ✓           ✗                 ✗
Graph compounds with every use              ✓           ✗                 ✗
Open source (Apache 2.0)                    ✓           Varies            Varies
TEMPORAL GRAPH
2,700+ verified moments, 177K+ causal edges. Content-addressed, confidence-scored, provenance-linked. Grows 24/7 via autonomous expander.
ENTITY RESOLUTION
Characters persist across time. Julius Caesar at the Rubicon is the same entity as Caesar at the Senate. Entity state tracked across every moment.
CAUSAL INFERENCE
Typed causal edges with direction and confidence. Not similarity — causation. PORTAL reasons backward from target outcomes. BRANCHING explores what-ifs.
MCP + API
One config line connects any MCP-compatible AI agent. REST API at api.timepointai.com. Manage API keys →
MULTI-MODEL
Run on Claude, GPT-4, Gemini, or fully open-weight (DeepSeek, Llama, Qwen). Google-free operation available. No vendor lock-in.
OPEN SOURCE
Flash, Clockchain, Pro, TDF, SNAG-Bench all Apache 2.0. Fork it, audit it, run it locally. The graph itself is the moat — not the code.
Manage API Keys → Read the Docs →

Developer Preview
Timepoint API + MCP Server
The MCP server is live at clockchain.timepointai.com/mcp/ (v1.26.0, Streamable HTTP) — 5 tools: query_moments, get_moment, and get_graph_stats are public; propose_moment and challenge_moment require a write token. Works with Claude Desktop, Cursor, Windsurf, or any MCP client. Generation tools (Flash rendering, Pro simulation) coming in Phase 2.


Describe a moment.
Get a simulation.

Type any scenario in plain language. Flash renders historical moments against live sources. Pro stress-tests future scenarios through a social graph. You describe the situation. The engines handle the complexity.

Next quarter's board meeting. Here's the agenda and the cap table.
Pro builds a social graph of your board — relationships, influence, known positions — then simulates the meeting across branching outcomes. Who pushes back on the hiring plan? Where does the CFO ally with the new investor? What happens if you table the pivot discussion?
You get
Multi-character dialog at each branch
Confidence scores per outcome
Causal graph of who influenced what
curl -X POST localhost:8000/api/v1/timepoints/generate/stream \
  -d '{"query": "Next quarter board meeting, agenda: pivot discussion, new hire plan"}'
AlphaGo plays Move 37. Seoul, March 2016.
Flash grounds the scene against multiple sources — match transcripts, commentary, technical papers — then renders the moment: Fan Hui's shock, Lee Sedol's twelve-minute silence, Hassabis watching remotely. A Critique agent checks every detail before the render is final.
You get
Source-verified scene with dialog
Provenance for every claim
Confidence score + TDF record
curl -X POST localhost:8000/api/v1/timepoints/generate/stream \
  -d '{"query": "AlphaGo Move 37, Four Seasons Hotel Seoul March 2016",
       "generate_image": true}'
Our Series A negotiation. Three VCs, two competing term sheets.
Pro simulates the social dynamics between founders, investors, and advisors across multiple negotiation paths. PORTAL mode reasons backward from your target outcome. BRANCHING mode explores what-ifs. Every path is scored. See which strategy survives contact with the room.
You get
Thousands of causally coherent futures
Per-path confidence scoring
Exportable causal graph (TDF)
./run.sh run vc_pitch_branching # Pro: 5 investors, branching futures


The Paradigm

SNAG: Social Network Augmented Generation

RAG retrieves documents. SNAG synthesizes and maintains structured social graphs — with causal provenance, knowledge flow, emotional states, and temporal consistency — to ground LLM generation in complex group dynamics. Where RAG answers questions from what was written down, SNAG reasons about what people did, felt, and caused.

As causal inference and machine learning converge — from counterfactual estimation to heterogeneous treatment effects to causal discovery at scale — the bottleneck is no longer algorithmic. It's data. SNAG produces structured causal datasets with full provenance: the kind of training signal that causal ML has always needed but never had at scale. Think of it as maximum likelihood estimation applied not to parameters but to moments — finding the most probable state of a social system given everything we know about the people in it.

RAG (established): grounds LLMs in retrieved documents · maintains document relevance · scales to millions of documents · output: grounded answers · understands what was written down.

SNAG (Timepoint): grounds LLMs in synthesized social graphs · maintains causal provenance + temporal consistency · scales to dozens of entities × hundreds of timepoints · output: auditable causal simulations + training data · understands what people did, felt, and caused.

Diagram: RAG runs retrieve → rank → generate (a query over Docs 1…N, top-k chunks ranked, grounded answer); SNAG grounds generation in a causal social graph.

As multi-agent orchestration becomes the dominant AI architecture, every agent system faces the same deficit: structured temporal context. Agents don't need more documents. They need causal memory — who did what, why, and what followed. SNAG provides it.



Timepoint Pro

PORTAL: Backward Causal Discovery

You know where you want to end up. PORTAL tells you what has to be true to get there. Instead of simulating forward from a starting state, PORTAL reasons backward from a target outcome — decomposing it into the preconditions, decisions, and causal chains required to reach it. Think of it as maximum likelihood estimation applied to moments: given a desired future, what sequence of events makes it most probable? PORTAL is how you stress-test a strategy before you commit to it.

Diagram: PORTAL backward causal discovery. A target outcome (confidence 0.87) decomposes into preconditions A, B, and C, which trace back to the starting conditions.
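The backward-reasoning idea can be sketched in a few lines. This is a toy illustration only: the real PORTAL engine decomposes preconditions with LLM agents, and every probability and chain name below is an invented placeholder:

```python
# Toy sketch of PORTAL's scoring step: given candidate causal chains that
# could lead to a target outcome, keep the most probable one. Chains and
# step probabilities here are illustrative assumptions, not engine output.
from math import prod

target = "series_b_closed"
candidate_chains = {
    # tuple of preconditions -> estimated probability of each step holding
    ("term_sheet_signed", "board_alignment", "revenue_milestone"): [0.9, 0.8, 0.7],
    ("bridge_round", "pivot", "revenue_milestone"):                [0.6, 0.5, 0.7],
}

def chain_likelihood(step_probs):
    # naive independence assumption: P(chain) = product of step probabilities
    return prod(step_probs)

best = max(candidate_chains, key=lambda c: chain_likelihood(candidate_chains[c]))
# The highest-likelihood chain is the set of preconditions to stress-test first.
```

The design point is the direction of reasoning: instead of rolling a simulation forward and hoping it lands on the target, you condition on the target and rank the paths that reach it.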


The Compounding Advantage

The Clockchain

Every query your agent makes strengthens every future query. The Clockchain is live and autonomously growing 24/7. Currently 2,700+ nodes spanning 700 BCE to 2026, connected by 177K+ typed causal edges across 11 relationship types — causes, caused_by, influences, contemporaneous, thematic, same_location, same_era, same_conflict, same_figure, precedes, and follows. Every edge carries an LLM-generated description explaining why two events are related.

Every node is a full layer-2 Flash render with period-accurate characters, dialog, CDN-hosted AI-generated imagery, and full model provenance tracking. Schema v0.2 — every entry records its generation run, model stack, and a graph-state hash for consistency proofs. The graph is content-addressed via TDF.

REST API endpoints require an X-Service-Key header. MCP read tools (query_moments, get_moment, get_graph_stats) are public. Interactive docs at /docs and /redoc. An autonomous graph expander runs continuously on free models (DeepSeek, Llama, Qwen), discovering and linking related events — the graph compounds around the clock.
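Content addressing is the property that makes the graph tamper-evident. A minimal sketch of the principle, assuming a node's address is a hash of its canonicalized JSON; the actual TDF addressing scheme may differ in field layout and hash choice:

```python
# Illustrative content addressing: a node's identity is derived from a hash
# of its canonical JSON, so identical content always yields the same address
# and any edit yields a new one. Not the literal TDF scheme.
import hashlib
import json

def content_address(node: dict) -> str:
    # sort_keys + compact separators give a canonical byte representation
    canonical = json.dumps(node, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

node = {"date": "-44-03-15", "place": "rome",
        "title": "assassination-of-julius-caesar"}
addr = content_address(node)
```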

2,700+
Timepoint Nodes
177K+
Causal Edges
2,700+
Rendered Moments
The Compounding Graph
Flash
Renders past
Clockchain
Stores + compounds
Pro
Simulates futures
SNAG-Bench
Scores quality
Proteus
Validates against reality
The graph expander discovers related events and links them automatically. Flash renders full scenes on demand. Every new node enriches the context available to every future rendering — the graph compounds.

The Instrument

Render time like a synthesizer renders music

Timepoint Pro treats social simulation like a synthesizer treats sound. ADPRS envelopes control cognitive activation over time — Attack, Decay, Plateau, Release, Sustain — shaping when each entity is "active" in the simulation. Params2Persona maps entity state tensors directly to LLM generation parameters. 19 composable mechanisms across five pillars. Five temporal modes. Fidelity follows attention, not scope.
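The Params2Persona mapping can be sketched concretely. The input/output pairs follow the page (arousal × energy → temperature, arousal → top_p, turn × energy → max_tokens), but the scaling constants below are illustrative assumptions, not the engine's actual coefficients:

```python
# Sketch of Params2Persona: map an entity's cognitive state directly to LLM
# sampling parameters, so voice emerges from state rather than a fixed
# persona prompt. Constants are invented for illustration.
from dataclasses import dataclass

@dataclass
class EntityState:
    arousal: float   # 0..1
    energy: float    # 0..1
    turn: int        # position in the dialog

def params2persona(s: EntityState) -> dict:
    return {
        # arousal x energy -> temperature: agitated, energetic speakers sample hotter
        "temperature": round(0.4 + 0.8 * s.arousal * s.energy, 2),
        # arousal -> top_p: higher arousal widens the sampling nucleus
        "top_p": round(0.7 + 0.3 * s.arousal, 2),
        # turn x energy -> max_tokens: fatigue and late turns shorten responses
        "max_tokens": int(100 + 200 * s.energy / (1 + 0.1 * s.turn)),
    }

# A high-arousal, high-energy CEO samples hotter than a measured CFO:
ceo = params2persona(EntityState(arousal=0.9, energy=0.95, turn=1))
cfo = params2persona(EntityState(arousal=0.3, energy=0.6, turn=1))
```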

Temporal Modes
  • FORWARD: Strict forward causality. Standard timelines.
  • PORTAL: Backward from target outcome. Goal decomposition, critical paths. MLE applied to moments.
  • BRANCHING: Counterfactual branches. "What if" analysis at scale.
  • CYCLICAL: Future constrains past. Generational dynamics, feedback loops.
  • DIRECTORIAL: Dramatic tension drives events. Narrative arcs with intent.
Control Surfaces
19 composable behavioral mechanisms:
  • Fidelity Management: M1, M2, M5, M6
  • Temporal Reasoning: M7, M8, M12, M14, M17
  • Knowledge Provenance: M3, M4, M19
  • Entity Simulation: M9, M10, M11, M13, M15, M16
  • Infrastructure: M18 (model routing)
Diagram: the ADPRS envelope shapes cognitive activation (Attack, Decay, Plateau·Sustain, Release) from arousal, energy, valence, and an 8-slot behavior vector, with phi(tau) scaling everything by turn position. Params2Persona maps entity state to LLM generation parameters: arousal × energy → temperature, arousal → top_p, turn × energy → max_tokens, behavior vector slots → frequency and presence penalties. Fidelity levels span TENSOR_ONLY (~200 tok) through SCENE, GRAPH, and DIALOG to TRAINED (~50k tok), shaping per-character output: a high-arousal CEO at temp 1.1 with varied phrasing, a measured CFO at temp 0.6 and precise, a fatigued investor with shorter responses, a calm advisor at temp 0.4 and focused, background entities as tensor only. Every character's voice emerges from their cognitive state, not a fixed persona prompt.

Open Source · Apache 2.0

Temporal infrastructure requires trust. Trust requires visibility.

If you're making decisions based on causal simulations, you need to see how those simulations work. Every engine is Apache 2.0 — fork it, audit it, run it locally. How you use it is as private as you want it to be. But the infrastructure itself must be transparent. The algorithms are yours. The graph compounds for everyone.

Follow @timepointai for updates. Star us on GitHub.

Flash
Reality Writer — 14-agent pipeline with Google Search verification, 3 quality presets, open-weight model option
Synthetic Time Travel
Pro
SNAG Engine — 19 composable SNAG mechanisms, 5 temporal modes, TWGF causal graph output, TDF export
Social Network Augmented Generation
Clockchain
Persistent temporal causal graph — Rendered Past + Rendered Future, growing continuously
Temporal Causal Graph
SNAG-Bench
Quality certifier — measures Causal Resolution across renderings
Benchmark
Proteus
Settlement layer — prediction markets that validate Rendered Futures against reality
Prediction Market
TDF
Timepoint Data Format — JSON-LD interchange connecting every service in the suite
Data Format
Adaptive Fidelity Resolution — the hallmark of Timepoint Pro
This is what makes hyperscaling possible. Every entity at every timepoint maintains independent resolution — from TENSOR_ONLY (~200 tokens) to TRAINED (~50k tokens). Queries trigger lazy elevation; background entities stay compressed. 100 entities across 10 timepoints drops from ~50M tokens to ~2.5M — a 95% reduction — without losing causal coherence. ADPRS envelopes predict cognitive activation curves per entity, gating which characters get full dialog synthesis and which stay as state tensors. Params2Persona then maps each character's real-time cognitive state — arousal, energy, behavior vectors — directly to LLM generation parameters. The result: costs scale with attention, not simulation size. Run dozens of entities across hundreds of timepoints at $0.15–$1.00 per simulation.
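The token arithmetic above can be checked directly. A back-of-envelope sketch; the ~5% elevation rate is an assumption chosen to match the page's figures, since the real elevation rate depends on which entities queries actually touch:

```python
# Back-of-envelope for Adaptive Fidelity Resolution: cost scales with how
# many entity-timepoints are elevated, not with simulation size.
TENSOR_ONLY = 200      # tokens per entity-timepoint, fully compressed
TRAINED = 50_000       # tokens per entity-timepoint, full dialog synthesis

def token_budget(entities, timepoints, elevated_fraction):
    slots = entities * timepoints
    elevated = int(slots * elevated_fraction)
    return elevated * TRAINED + (slots - elevated) * TENSOR_ONLY

flat = 100 * 10 * TRAINED            # everything at full fidelity: 50M tokens
mixed = token_budget(100, 10, 0.05)  # lazy elevation of ~5% of slots: ~2.7M
reduction = 1 - mixed / flat         # ~95% fewer tokens
```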

Timepoint Labs

Where we are. Where we're going.

We ship in the open. The left column is what's live and tested in the repos today. The right column is what we're actively building toward. We believe transparency about the delta between ground truth and vision is itself a form of confidence.

Shipped · Ground Truth
Timepoint Flash
14-agent pipeline (researcher, fact-checker, scene-setter, character-creator, dialog-writer, narrator, image-generator, critique). Google Search verification. 3 quality presets: Hyper (~55s), Balanced (~90s), HD (~2.5min). Open-weight model option (DeepSeek, Llama, Qwen). Free distillable mode for $0/call on Clockchain expansion. 660+ tests.
Live · Open Source
Timepoint Pro
SNAG engine with 19 composable behavioral mechanisms, 5 temporal modes (Forward, Portal, Branching, Cyclical, Directorial), 21 templates. TWGF causal graph output, entity radar charts, dialog theater, TDF export. Full Pro Cloud deployed.
Live · Open Source
Clockchain
Live temporal causal graph. 2,700+ nodes from 700 BCE to 2026, 177K+ typed edges (growing 24/7 via autonomous expander on free models). CDN-hosted images. REST API (X-Service-Key auth). Swagger + ReDoc. Graph expander running continuously.
Live · Open Source
TDF (Timepoint Data Format)
JSON-LD interchange format connecting Flash, Pro, and Clockchain. Canonical package with from_pro + write_tdf_jsonl.
Live · Open Source
Training Data Pipeline
Structured JSONL output with full causal ancestry. Training-safe model routing (M18). Oxen.ai integration for versioning.
Live · Open Source
Horizon · Active Development
Clockchain MCP Server
Live at clockchain.timepointai.com/mcp/ (v1.26.0, Streamable HTTP). 5 tools: query_moments, get_moment, get_graph_stats (public read), propose_moment, challenge_moment (authenticated write). Works with Claude Desktop, Cursor, Windsurf, Anthropic Agent SDK. Phase 2: credit-metered Flash + Pro generation tools.
Live · Phase 1
Clockchain Miners
Your agent renders timepoints for the public Clockchain — verified moments that strengthen the graph for everyone. The infrastructure for autonomous temporal rendering at scale.
In Development
Clockchain as Shared Ledger
Consensus mechanics for the temporal graph. Proof of Causal Convergence (PoCC): independent renderings that converge on the same causal structure provide validation without ground truth. The long-term vision: a blockchain-grade shared ledger for temporal data.
Research
SNAG-Bench
Quality benchmark measuring Causal Resolution (Coverage × Convergence) across the graph. Axis 2: causal reasoning benchmarks from Pro output.
In Development
Proteus
Prediction market layer that settles Rendered Futures against reality. Closes the loop between simulation and validation.
In Development
Timepoint Futures Index (TFI)
Measures Rendered Past coverage and Rendered Future quality across the graph. The metric for how much of temporal reality we've rendered.
Research

We are at the very beginning. The percentage of the past and future properly rendered in a durable, traceable causal structure is vanishingly small — so many zeros it's hard to write. Every Timepoint mined moves that number. The Clockchain is designed to be populated by massive swarms of AI agents, each contributing verified moments. The infrastructure is ready. The era of autonomous temporal rendering is next.


The past is renderable.
The future is open.
Begin.

Follow @timepointai · GitHub


Start Rendering · Build with Timepoint · Request Demo →

"The fidelity is asymptotic — we approach near-simulacrum on historical dialog because there are very few things a person could have said once the model has perfect context for that moment."

— Sean McDonald, Timepoint Labs