Living Documentation

From messy product reality to disciplined product judgment.

Product OS is a local-first system for turning local source files, decisions, and work-in-flight into a durable product memory that can support prioritization, decomposition, validation, explanation, and curated outputs like weekly updates.

Core promise: Reduce rote PM admin by making context reusable, inspectable, and strategically connected.
System of record: Local files and approved canonical memory, not a pile of summaries.
Quality bar: Reasoning must be evidence-backed, auditable, and able to explain itself.

The Ecosystem

The Product OS is a composed system. Each layer narrows ambiguity and increases judgment quality. Raw context never generates polished artifacts directly; it must pass through memory and reasoning first.

Layer 1 Inputs

CLI-first local file intake with normalized artifacts for multiple source types.

  • `inputs/`
  • markdown and text docs
  • pdf, docx, pptx
  • image files with OCR-aware fallback notes

Layer 2 Signals

Atomic observations extracted from source artifacts.

  • customer pain
  • decisions
  • constraints
  • metric movement

Layer 3 Review

Promotion queue where the system proposes durable meaning.

  • create
  • update
  • merge
  • edge proposals

Layer 4 Memory

The actual product brain, backed by evidence and human approval.

  • outcomes
  • initiatives
  • bets
  • decisions and assumptions

Layer 5 Reasoning

Explicit methodologies that operate on canonical memory first.

  • prioritize
  • decompose
  • validate
  • explain

Layer 6 Rendering

Outputs that are downstream of judgment, not substitutes for it.

  • weekly update
  • future bet briefs
  • future decision logs
  • future roadmaps
Local-first · Human-approved memory · Judgment over summarization · Reasoning before rendering

The Memory System

Product OS is not a note archive. The memory layer is the real advantage because it preserves product reality in a form that later reasoning can trust. Signals are small, but promoted memory is durable and strategic.

Canonical entities

The system currently uses `problem`, `initiative`, and `work_item` internally. User-facing reasoning and rendering translate those into `outcome`, `initiative`, and `bet` where useful.

  • Problems capture meaningful outcome-level issues.
  • Initiatives capture strategic responses.
  • Work items capture tactical bets and execution moves.
  • Decisions, assumptions, metrics, and edges remain part of the broader graph.
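
One way to picture the internal-to-user-facing translation is a small lookup from internal entity types to the labels reasoning and rendering use. This is a hypothetical sketch: `INTERNAL_TO_FACING` and `facing_name` are illustrative names, and only the three type pairs come from the text above.

```python
# Illustrative sketch of the internal -> user-facing entity translation.
# Only the three pairs come from the doc; the function name is an assumption.
INTERNAL_TO_FACING = {
    "problem": "outcome",
    "initiative": "initiative",
    "work_item": "bet",
}

def facing_name(internal_type: str) -> str:
    """Translate an internal entity type to its user-facing label.

    Types outside the core three (decisions, assumptions, metrics, edges)
    pass through unchanged.
    """
    return INTERNAL_TO_FACING.get(internal_type, internal_type)
```

Keeping the translation in one place means reasoning can speak "outcome" and "bet" to the user while storage stays on the internal vocabulary.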

Why review exists

Nothing durable should quietly mutate. The review layer is where the system proposes meaning and the human decides what becomes truth.

  • Promotions can create, update, merge, or reject.
  • Edges are proposed explicitly rather than inferred invisibly.
  • Evidence is attached before memory is committed.
  • Canonical memory always outranks rendered narratives.

inputs -> signals -> review -> memory

atomic context -> proposed meaning -> approved product reality
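
The review gate in that flow can be sketched as a promotion record that cannot be committed without attached evidence. `Promotion` and `approve` are hypothetical names for illustration; only the allowed actions and the evidence-first rule come from the text above.

```python
from dataclasses import dataclass, field

@dataclass
class Promotion:
    """A proposed change to canonical memory, awaiting human review."""
    action: str                 # "create" | "update" | "merge" (reject = not approving)
    target_type: str            # e.g. "problem", "initiative", "work_item"
    evidence: list[str] = field(default_factory=list)  # signal ids backing the proposal

def approve(promotion: Promotion) -> bool:
    """Allow a commit only for known actions with evidence attached.

    Nothing durable mutates quietly: no evidence means no promotion.
    """
    if promotion.action not in {"create", "update", "merge"}:
        return False
    return len(promotion.evidence) > 0
```

The human decision stays outside this check; the sketch only encodes the floor below which a proposal cannot even be approved.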

The Reasoning Workflow

The reasoning stack exists so the system can decide, structure, test, and articulate product judgment without improvising a different logic each time.

1. Prioritize

Decide what matters now. Default tactical use is a ranked bet view, but the system can also reason at the initiative and outcome horizons.

  • Uses importance, urgency, evidence strength, and judgment.
  • Keeps tactical work linked back to initiatives and outcomes.
  • Surfaces success signals and suggested time horizons.
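
The blend of importance, urgency, and evidence strength can be sketched as a weighted score. This is illustrative, not the system's actual formula: `priority_score`, its default weights, and the bet ids are all assumptions, and human judgment still reorders the ranked view afterwards.

```python
def priority_score(importance: float, urgency: float, evidence_strength: float,
                   weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Blend the three inputs into a single rank score in [0, 1].

    The weights are illustrative defaults, not a real tuning.
    """
    wi, wu, we = weights
    return wi * importance + wu * urgency + we * evidence_strength

# Hypothetical bets scored and ranked for a tactical bet view.
bets = {
    "work-item-a": priority_score(0.9, 0.5, 0.8),
    "work-item-b": priority_score(0.6, 0.9, 0.4),
}
ranked = sorted(bets, key=bets.get, reverse=True)
```

Keeping the score a pure function makes the ranking inspectable: changing a weight or an input visibly changes the order, which fits the "no hidden logic" principle.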

2. Decompose

Decide how to break a chosen object down. The system is lens-aware, so it can decompose by leverage points, behavior, structure, or a mixed frame when needed.

  • Outcome to opportunities and initiatives.
  • Initiative to workstreams, hypotheses, and bets.
  • Recommends the lens for analysis, not the answer itself.
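
The lens recommendation can be pictured as a small mapping from target type to analysis frame. `LENSES` and `recommend_lens` are hypothetical names; only the lens ideas (leverage, behavior, structure) come from the text above, and the specific pairings are assumptions.

```python
# Hypothetical lens catalog; the real system's lens names may differ.
LENSES = {
    "leverage": "Where does a small change move the outcome most?",
    "behavior": "Which user behaviors must change, step by step?",
    "structure": "Which components or workstreams does this touch?",
}

def recommend_lens(target_type: str) -> str:
    """Suggest an analysis lens for a decomposition target.

    The recommendation is for the analysis frame only; the
    decomposition itself remains a human judgment.
    """
    return {"outcome": "leverage", "initiative": "structure", "bet": "behavior"}.get(
        target_type, "behavior"
    )
```

A mixed frame would simply combine prompts from more than one lens rather than inventing a fourth.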

3. Validate

Check whether the chain is actually coherent enough to act on. Validation exists to stop the system from becoming fluent at weak product logic.

  • Tests outcome to initiative to bet coherence.
  • Checks evidence quality, assumptions, and success clarity.
  • Returns structured corrections, not just a score.
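
A coherence check that returns corrections rather than a bare score might look like this sketch. The field names (`initiative_id`, `evidence`, `success_signal`) are assumptions for illustration, not the system's real schema.

```python
def validate_chain(bet: dict) -> list[str]:
    """Return structured corrections for an outcome -> initiative -> bet chain.

    An empty list means the chain is coherent enough to act on.
    """
    corrections = []
    if not bet.get("initiative_id"):
        corrections.append("Link this bet to an initiative.")
    if not bet.get("evidence"):
        corrections.append("Attach at least one supporting signal.")
    if not bet.get("success_signal"):
        corrections.append("State what success looks like.")
    return corrections
```

Returning named corrections instead of a number keeps validation actionable: the output tells you what to fix, not just that something is weak.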

4. Explain

Translate the reasoning for yourself, a team, or leadership without changing the actual substance. Narrative is an adaptation layer, not a replacement for logic.

  • Supports tactical, strategic, and full-chain horizons.
  • Can optionally add a spoken narrative for influence and understanding.
  • Preserves evidence, uncertainty, and tradeoffs.
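
Audience adaptation as a thin layer over unchanged substance can be sketched like this. The framing strings and the `explain` signature are illustrative; only the idea that tradeoffs survive every audience comes from the text above.

```python
def explain(summary: str, tradeoffs: list[str], audience: str) -> str:
    """Adapt framing per audience without dropping substance.

    Audience names loosely mirror the CLI's --audience flag; the
    lead-in phrases are assumptions, not real output.
    """
    lead = {"exec": "Bottom line:", "team": "Where we are:", "self": "Notes:"}.get(
        audience, "Summary:"
    )
    body = f"{lead} {summary}"
    if tradeoffs:  # tradeoffs are preserved for every audience, never trimmed away
        body += " Tradeoffs: " + "; ".join(tradeoffs)
    return body
```

The key design point: the audience changes only the lead framing, so no audience ever receives a version with the uncertainty edited out.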

The Rendering Layer

Renderers turn memory plus reasoning into usable artifacts. The rule is simple: outputs must be downstream of approved memory and explicit judgment, never a shortcut around them.

Weekly update

The first renderer is a curated weekly narrative. It favors connected outcome and initiative framing, uses timeframe-aware curation, and stores approved updates as continuity artifacts in `memory/weekly_updates/`.

  • headline
  • what we learned
  • what we decided
  • what’s at risk
  • what’s next
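
The five-section shape above can be sketched as a fixed-order renderer. The function name, placeholder text, and section formatting are assumptions; the real renderer also curates by timeframe and stores approved updates under `memory/weekly_updates/`.

```python
SECTIONS = [
    "headline",
    "what we learned",
    "what we decided",
    "what's at risk",
    "what's next",
]

def render_weekly_update(content: dict[str, str]) -> str:
    """Render the five-section weekly narrative in a fixed order.

    Missing sections get a visible placeholder rather than being
    silently dropped, so gaps in the week stay inspectable.
    """
    lines = []
    for section in SECTIONS:
        lines.append(section.capitalize())
        lines.append(content.get(section, "(nothing this week)"))
        lines.append("")
    return "\n".join(lines).rstrip()
```

Fixing the order keeps consecutive weekly updates comparable, which is what makes them useful as continuity artifacts.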

Likely next renderers

Bet briefs, decision log entries, and later roadmap views all fit here. They should reuse the same connected chain rather than inventing a separate story per artifact.

  • bet brief
  • decision log entry
  • story or backlog draft

Roadmap role

A roadmap is a renderer, not a memory primitive. It should come after stronger sequencing, validation, and continuity, so it reflects real confidence instead of polished overstatement.

  • prioritized outcomes
  • sequenced initiatives
  • bets and dependencies

App Surface

The local app is now the primary reading and review surface for the MVP. Home / Today integrates review pressure, continuity, and next-action nudges.

Launch locally

cd /path/to/product-os
uv run uvicorn productos.app.main:create_app --factory --reload

Then open http://localhost:8000 in the browser.
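
For orientation, here is a minimal, framework-free sketch of what an application factory looks like under uvicorn's `--factory` contract. The real `create_app` in `productos.app.main` almost certainly uses a web framework, so treat this purely as an illustration of the factory pattern, not the actual app.

```python
def create_app():
    """A minimal ASGI application factory (stdlib only, illustrative).

    uvicorn's --factory flag calls this function once at startup and
    serves whatever callable it returns.
    """
    async def app(scope, receive, send):
        if scope["type"] != "http":
            return  # ignore lifespan/websocket scopes in this sketch
        await send({
            "type": "http.response.start",
            "status": 200,
            "headers": [(b"content-type", b"text/plain")],
        })
        await send({"type": "http.response.body", "body": b"Product OS app shell"})
    return app
```

The factory form matters for `--reload`: uvicorn re-invokes the function on each restart instead of importing a stale module-level app object.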

Current app workspaces

  • Home / Today
  • review workspace
  • reason workspace
  • render workspace

Interaction Layer

The next major layer is not more core logic. It is a better operating surface. The CLI is good at orchestration, but it is weak at review, comparison, approval, and polished reading. The interaction layer should solve that without replacing the engine.

Recommended direction

A local workspace app that emphasizes guided workflows early and settles into a persistent workspace shape over time.

  • shared methodologies
  • context-aware shell
  • review and approval surfaces
  • reasoning inspection
  • rendering review

Why context matters

Contexts like `job` and `asteroid-belt` should share methods, not a single blended graph. The app should always make the active context visible and isolate memory by default.

  • separate inputs
  • separate review queues
  • separate memory graphs
  • optional explicit sharing later

shared methodology
        + context-aware local workspace app
        + isolated context memory

contexts/job/*
contexts/asteroid-belt/*
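
Context isolation can be sketched as path scoping: every read and write resolves through the active context's root. The subdirectory names below are assumptions that mirror the `contexts/job/*` layout above, not the real on-disk schema.

```python
from pathlib import Path

def context_root(base: Path, context: str) -> Path:
    """Resolve the isolated storage root for one context.

    Each context (e.g. "job", "asteroid-belt") gets its own subtree,
    so inputs, review queues, and memory never blend by accident.
    """
    return base / "contexts" / context

def memory_dir(base: Path, context: str) -> Path:
    """Resolve a context's canonical memory directory (name is an assumption)."""
    return context_root(base, context) / "memory"
```

Because isolation lives in path resolution rather than in each command, "optional explicit sharing later" can be added in one place without touching the methodologies themselves.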

Workflow-first early

Early on, the system needs help creating memory, so guided flows matter more than a dense dashboard.

Workspace-first later

As memory accumulates, browsing, continuity, comparison, and review become more valuable than simple action launching.

App calls engine

The UI should orchestrate the existing core engine, not fork business logic into a second hidden system.

Current Commands

The current CLI already covers the full local-first path from ingestion to reasoning and the first renderer.

Ingest and extract

uv run productos ingest path/to/file-or-folder --source-date 2026-03-24
uv run productos ingest path/to/folder --recursive --source-date 2026-03-24

uv run productos signals extract transcript-weekly-product-review-2026-03-20
uv run productos review generate transcript-weekly-product-review-2026-03-20
uv run productos review list

Reason

uv run productos reason prioritize --view bet
uv run productos reason decompose --target-type initiative --target-id initiative-rewrite-onboarding-copy
uv run productos reason validate --target-type bet --target-id work-item-ship-onboarding-copy-test
uv run productos reason explain --target-type bet --target-id work-item-ship-onboarding-copy-test --audience exec --horizon full-chain --include-narrative

Render

uv run productos render weekly-update --from 2026-03-16 --to 2026-03-22
uv run productos render weekly-update --last 7d
uv run productos render weekly-update --this-week

Operating Principles

Memory before reasoning

Approved canonical memory is the default substrate. Raw artifacts are fallback material, not the first thing the system reasons over.

Reasoning before rendering

The renderer should never be the place where the real thinking first happens. Otherwise outputs become polished but unreliable.

No hidden logic

Important conclusions should expose evidence, confidence, tradeoffs, and what would change the conclusion.
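
The principle can be pictured as a conclusion record that carries its own audit trail. All field and method names here are illustrative assumptions; only the four ingredients (evidence, confidence, tradeoffs, change conditions) come from the text above.

```python
from dataclasses import dataclass, field

@dataclass
class Conclusion:
    """An important conclusion that exposes its own audit trail."""
    claim: str
    evidence: list[str]                  # signal or artifact ids backing the claim
    confidence: str                      # e.g. "low" | "medium" | "high"
    tradeoffs: list[str] = field(default_factory=list)
    would_change_if: list[str] = field(default_factory=list)  # explicit falsifiers

    def is_auditable(self) -> bool:
        """A conclusion with no evidence or no falsifiers is hidden logic."""
        return bool(self.evidence) and bool(self.would_change_if)
```

Forcing `would_change_if` to be non-empty is the interesting constraint: a conclusion that cannot name what would overturn it is an assertion, not reasoning.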

Current State

The v1 ecosystem is already real enough to test end-to-end on local artifacts. The current emphasis is on making memory and reasoning durable before expanding the output surface too aggressively.

Implemented now

  • cli-first ingestion for markdown, pdf, docx, pptx, and image inputs
  • single-file and directory ingestion with recursive mode
  • normalized artifact storage with metadata, structure hints, and extraction notes
  • atomic signal extraction
  • review queue and promotion flow
  • canonical memory storage
  • prioritize, decompose, validate, explain
  • weekly update renderer with continuity storage
  • local app shell with home, review, reason, and render workspaces

Likely next moves

  • app trigger for local ingestion workflow
  • review and next-action guidance that reduces cli hopping
  • bet brief or decision log renderer
  • hardening around staleness and contradiction detection
  • real-artifact evaluation on a fuller week of work
  • later roadmap rendering once sequencing confidence improves