Fullstack

Aleksandr Vechenkov


Clew, a thread through anything you’re learning

Open-source study tool. AI proposes a prerequisite graph, you review every mutation before it lands, and the graph itself is what the system reasons over, not a chat transcript.

2026  ·  live at clew.my  ·  open source on GitHub  ·  Python · FastAPI · TypeScript · React · SQLite
Numbers that come from the repo, not the marketing draft.
21 · days solo, frontend to backend
25k · LOC across the repo (~10k Python backend, ~14k TypeScript frontend)
5 · formal ADRs documenting product non-negotiables (mutation review, graph-first UI, edition boundaries)
6 · hard rules in the proposal validator (connectivity, zone integrity, fidelity, no silent state, edge sanity, no deletes)
231 · lines of JSON schema for proposal contracts; nothing flows into the graph un-typed
97 · unit tests covering planner, validation, quiz closure, MCP tools, repository ops

How it started

I wanted to learn machine learning alongside school. I loaded the school curriculum and the ML topic list into an LLM and asked for a dependency analysis: what do I actually need from the curriculum to reach a given ML target?

The answer was four topics, not three months. Most of what I was being told to learn was orthogonal to the goal I cared about. The script that produced that answer became the project.

The category inversion

Every existing knowledge tool solves a variant of “organize more knowledge.” Clew goes the opposite way: cut a curriculum down to the minimum viable path to a specific goal. A different category, not a better note-taker.

Tool · Direction · Strategy
Obsidian / Roam · Organize knowledge · Linking + personal knowledge graph
Heptabase / Scrintal · Visual note-taking expansion · Spatial cards + canvases
Notion / Tana · Connected workspace · Structure + relational
Clew · Goal-directed minimum-path learning · Reduce curriculum to minimum viable path to a specific goal

What's in the repo

The interesting part isn’t the graph itself; it’s the scale of dependency reasoning behind it. Until very recently, an AI could not hold and reason over this many interconnected concepts at once. The latest generation of models made that feasible, which is the only reason a tool like this is practical to build now; a year ago it would have been near-impossible.

Proposal pipeline, not free-form chat
Generation → typed validation → snapshot → user review → repository apply. Six hard rules in the validator (connectivity, zone integrity, source fidelity, edge enum sanity, no deletions, no silent completion). Violations fail explicitly with diagnostics.
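
A minimal sketch of what explicit-failure validation can look like, assuming dict-shaped ops. The rule names come from the list above; the checks and data shapes are illustrative, not the repo's actual code:

```python
from dataclasses import dataclass

@dataclass
class Violation:
    rule: str     # which hard rule fired
    op_id: str    # which proposal op caused it
    detail: str   # human-readable diagnostic

# Two of the six rules, shown as explicit checks: a violation is returned
# with diagnostics rather than the op being silently dropped or repaired.
def validate_proposal(ops: list[dict]) -> list[Violation]:
    violations: list[Violation] = []
    for op in ops:
        if op.get("action") == "delete":  # rule: no deletions
            violations.append(Violation(
                "no_deletions", op["op_id"],
                f"proposal tries to delete {op.get('entity_kind')}"))
        if (op.get("entity_kind") == "edge"
                and op.get("edge_type") not in {"requires", "refines"}):  # rule: edge enum sanity
            violations.append(Violation(
                "edge_enum_sanity", op["op_id"],
                f"unknown edge type {op.get('edge_type')!r}"))
    return violations  # empty list means the proposal moves on to snapshot + review
```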
Schema-first AI contract
Every graph mutation is a typed payload against a 231-line JSON schema (op_id, entity_kind, rationale per op). Proposals are inspectable and auditable, not a blob.
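
To make the contract concrete, here is a hedged fragment of what such a schema and a conforming op could look like, using the jsonschema package. The required fields op_id, entity_kind, and rationale come from the description above; everything else is assumed:

```python
import jsonschema  # pip install jsonschema

# Illustrative fragment; the real contract is a 231-line schema.
OP_SCHEMA = {
    "type": "object",
    "required": ["op_id", "entity_kind", "rationale"],
    "properties": {
        "op_id": {"type": "string"},
        "entity_kind": {"enum": ["topic", "edge", "zone"]},
        "rationale": {"type": "string", "minLength": 1},
    },
}

op = {
    "op_id": "add-topic-07",
    "entity_kind": "topic",
    "rationale": "Eigenvalues are a prerequisite for PCA.",
}
jsonschema.validate(op, OP_SCHEMA)  # raises ValidationError on any un-typed blob
```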
Closure gating that actually traverses the graph
A topic cannot claim mastery unless every prerequisite is closed. Enforced in the quiz service via depth-first prerequisite traversal, not a UI toggle.
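
A sketch of that gate, assuming prerequisites are stored as an adjacency map; the traversal shape is the point, the names are hypothetical:

```python
def mastery_allowed(topic: str,
                    prereqs: dict[str, list[str]],
                    closed: set[str]) -> bool:
    # Depth-first walk over transitive prerequisites: any open node blocks mastery.
    stack = list(prereqs.get(topic, []))
    seen: set[str] = set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node not in closed:
            return False  # an unclosed prerequisite anywhere below the topic
        stack.extend(prereqs.get(node, []))
    return True

# mastery_allowed("pca", {"pca": ["eigenvalues"], "eigenvalues": ["matrices"]},
#                 closed={"eigenvalues"})  -> False, "matrices" is still open
```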
Read-only MCP tools
External clients (Claude, Cursor) can query graph state and progress through the MCP server. None of those tools mutate the graph; mutation has to go through the proposal pipeline. The read-only contract is what makes it safe to expose.
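
Assuming the official MCP Python SDK's FastMCP helper, the read-only contract amounts to registering only query tools; the tool names and in-memory state below are illustrative, not Clew's actual API:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("clew-readonly")

GRAPH = {"topics": {"linear-algebra": {"closed": True}}, "edges": []}  # stand-in state

# Only queries are registered; no tool here writes, so a connected client
# can inspect the graph but mutation still has to go through proposals.
@mcp.tool()
def graph_state() -> dict:
    """Read-only view of the current topic graph."""
    return GRAPH

@mcp.tool()
def topic_progress(topic_id: str) -> dict:
    """Read-only closure state for a single topic."""
    return GRAPH["topics"].get(topic_id, {})

if __name__ == "__main__":
    mcp.run()
```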
Bidirectional Obsidian bridge
Importer reads explicit relation markers (e.g. [[Linear Algebra]]::requires) and shows a preview before commit. Exporter writes markdown with optional zones, artifacts, and progress state. No forced migration, no lock-in.
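
The marker syntax implies a parser along these lines; a hedged sketch, with only the [[Topic]]::relation shape taken from the description above:

```python
import re

MARKER = re.compile(r"\[\[([^\]]+)\]\]::(\w+)")

def extract_relations(markdown: str) -> list[tuple[str, str]]:
    """Collect (topic, relation) pairs for the pre-commit preview."""
    return MARKER.findall(markdown)

extract_relations("Builds on [[Linear Algebra]]::requires and [[Calculus]]::requires")
# -> [('Linear Algebra', 'requires'), ('Calculus', 'requires')]
```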
Snapshot history per apply
Every applied proposal writes an immutable snapshot. Rollback restores the previous state from the snapshot store; snapshots are how persistence is implemented, so there is no separate undo system.
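
A minimal sketch of snapshot-as-persistence, assuming one JSON file per apply; the directory layout and function names are illustrative:

```python
import json
import time
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")

def apply_proposal(graph: dict, apply_ops) -> Path:
    """Run already-validated ops, then write an immutable snapshot of the result."""
    apply_ops(graph)  # mutate in place
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    path = SNAPSHOT_DIR / f"{int(time.time() * 1000)}.json"
    path.write_text(json.dumps(graph, indent=2))  # never rewritten afterwards
    return path

def rollback(snapshot_path: Path) -> dict:
    """Restoring a previous state is just reading its snapshot back."""
    return json.loads(snapshot_path.read_text())
```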

From the product

Workspace screens from the live build: graph surface, proposals, path navigation.

Learning workspace
Settings tab

Also in these roles

Fullstack is the primary lens: solo across frontend, backend, schema, and the AI layer. The other roles still apply.

AI engineer
Three concrete AI surfaces. Proposal planner: takes a learning goal plus the current graph, returns a typed payload (231-line JSON schema) of topics, edges, zones, ops, with a rationale per op. Validator: six hard rules that fail proposals explicitly, no silent application. Memory + thinking presets: tunable token budgets and selective context (graph, progress, quiz, frontier, selected topic) so the model never gets a wall of text it cannot use.
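
The preset idea can be sketched as budgeted, selective context assembly; the budgets, the token estimate, and the function names are all assumptions, not the shipped values:

```python
PRESETS = {
    "fast": {"budget_tokens": 2_000, "context": ("graph", "selected_topic")},
    "deep": {"budget_tokens": 12_000,
             "context": ("graph", "progress", "quiz", "frontier", "selected_topic")},
}

def build_context(preset: str, sources: dict[str, str]) -> str:
    cfg = PRESETS[preset]
    parts, used = [], 0
    for name in cfg["context"]:            # only the slices this preset selects
        chunk = sources.get(name, "")
        cost = len(chunk) // 4             # crude chars-per-token estimate
        if used + cost > cfg["budget_tokens"]:
            break                          # budget enforced: no unusable wall of text
        parts.append(f"## {name}\n{chunk}")
        used += cost
    return "\n\n".join(parts)
```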
System builder
Held the whole product inside its original root idea; the build never sprawled and stays legible end-to-end. Five formal ADRs hold the boundaries: mutation must be reviewable and reversible, the frontend stays graph-first rather than dashboard-style, local and hosted editions are separate code paths, and external bridges (MCP, Obsidian) preserve the graph contract. Two providers (Gemini + OpenAI) behind one seam, with BYO keys.
Founder
Started from a concrete problem: I wanted to learn ML alongside school, and the dependency analysis showed that four topics mattered, not 90 percent of the curriculum. Shipped solo, MIT-licensed, live at clew.my.
Open source, MIT. Code public on GitHub, hosted at clew.my. Pull request a topic graph and I’ll walk a live path through it.