Today we’re open-sourcing PlanOpticon, our video analysis and knowledge extraction tool. It’s MIT licensed, available on PyPI, and ships with standalone binaries for macOS, Linux, and Windows.
```shell
pip install planopticon
```
That’s the whole install. One command to analyze a video:
```shell
planopticon analyze -i recording.mp4 -o ./output
```
## What you get
Point PlanOpticon at any video — meeting recordings, training sessions, conference talks, screen shares — and it extracts:
- Full transcript with speaker identification
- Diagrams detected and recreated as editable Mermaid code
- Action items with assignees and deadlines
- Key points and structured summaries
- A knowledge graph of entities and relationships
- Reports in Markdown, HTML, and PDF
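If you want to post-process the extracted knowledge graph yourself, a loader can be as small as the sketch below. The filename and JSON shape here are assumptions for illustration, not PlanOpticon's documented output format — check the files the tool actually writes.

```python
import json
from pathlib import Path

def graph_summary(path):
    """Count entities and relationships in an extracted knowledge graph.

    Assumed JSON shape: {"nodes": [...], "edges": [...]} -- illustrative
    only; adapt to whatever PlanOpticon actually emits.
    """
    graph = json.loads(Path(path).read_text())
    return len(graph["nodes"]), len(graph["edges"])
```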
It works with OpenAI, Anthropic, and Google Gemini. Set whichever API keys you have; it auto-discovers models and routes each task to the best available provider. Transcription can run locally with Whisper — no API needed.
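To make the auto-discovery concrete, here is a minimal sketch of the idea — not PlanOpticon's actual implementation. The environment variable names are each provider's conventional ones, and the provider list is illustrative:

```python
import os

# Conventional API-key environment variables per provider (assumption:
# PlanOpticon reads these standard names).
PROVIDERS = [
    ("openai", "OPENAI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("gemini", "GOOGLE_API_KEY"),
]

def available_providers(env=None):
    """Return the providers whose API keys are present in the environment."""
    env = os.environ if env is None else env
    return [name for name, var in PROVIDERS if env.get(var)]
```

With only one key set, only that provider is available; with several set, a router can pick the best provider per task from this list.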
## Why we built this
We’ve written about this in detail over the past week (Part 1, Part 2, Part 3). The short version: recorded meetings are the largest untapped source of institutional knowledge in most organizations, and nobody has time to rewatch them.
## What’s in the box
- 222 tests, CI across Python 3.10–3.13
- 8 open issues tagged `good first issue` and `help wanted`
- Pre-commit hooks, contributing guide, issue templates
- Docs at planopticon.dev
- Standalone binaries — no Python required
## Batch mode
Process entire folders of videos at once. PlanOpticon merges knowledge graphs across recordings, so patterns and relationships that span multiple sessions become visible.
```shell
planopticon batch -i ./recordings -o ./output
```
Works with local files, Google Drive, or Dropbox.
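Conceptually, the cross-recording merge is a union of nodes and edges, with entities deduplicated by identifier. A minimal sketch of that idea (the graph shape is assumed; this is not PlanOpticon's actual merge logic):

```python
def merge_graphs(graphs):
    """Union per-recording knowledge graphs into one.

    Illustrative sketch: assumes each graph is {"nodes": [...], "edges": [...]}
    with nodes keyed by "id" and edges as source/relation/target dicts.
    """
    nodes, edges = {}, set()
    for g in graphs:
        for n in g["nodes"]:
            nodes.setdefault(n["id"], n)  # keep first occurrence of each entity
        for e in g["edges"]:
            edges.add((e["source"], e["relation"], e["target"]))  # dedupe edges
    return {
        "nodes": list(nodes.values()),
        "edges": [{"source": s, "relation": r, "target": t}
                  for s, r, t in sorted(edges)],
    }
```

A relationship mentioned in three separate meetings collapses to one edge, which is what makes patterns spanning multiple sessions visible.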
## Get involved
We have issues ranging from YouTube URL support to S3 integration to custom prompt templates. If you work with video content and care about knowledge extraction, we’d love contributions.
