Today we’re open-sourcing PlanOpticon, our video analysis and knowledge extraction tool. It’s MIT licensed, available on PyPI, and ships with standalone binaries for macOS, Linux, and Windows.
pip install planopticon
That’s the whole install. One command to analyze a video:
planopticon analyze -i recording.mp4 -o ./output
Point PlanOpticon at any video — meeting recordings, training sessions, conference talks, screen shares — and it extracts structured knowledge, including transcripts and knowledge graphs.
It works with OpenAI, Anthropic, and Google Gemini. Set whichever API keys you have; it auto-discovers models and routes each task to the best available provider. Transcription can run locally with Whisper — no API needed.
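The routing described above can be sketched roughly as follows. This is an illustrative assumption about the general technique, not PlanOpticon's actual internals: the environment-variable names, task names, and preference order here are hypothetical.

```python
import os

# Hypothetical mapping of provider -> API key env var (illustrative names).
PROVIDER_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GOOGLE_API_KEY",
}

# Hypothetical per-task provider preference order. Transcription can run
# locally with Whisper, so it needs no API key at all.
TASK_PREFERENCES = {
    "summarize": ["anthropic", "openai", "gemini"],
    "transcribe": ["local-whisper"],
}

def available_providers(env=os.environ):
    """Providers whose API key is set in the environment."""
    return {name for name, key in PROVIDER_KEYS.items() if env.get(key)}

def route(task, env=os.environ):
    """Pick the first preferred provider that is usable: local providers
    need no key; remote ones need their key present."""
    for provider in TASK_PREFERENCES.get(task, []):
        if provider.startswith("local-") or provider in available_providers(env):
            return provider
    raise RuntimeError(f"no provider available for task {task!r}")
```

With only an Anthropic key set, `route("summarize")` would pick Anthropic, while `route("transcribe")` would stay local regardless of keys.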
We’ve written about this in detail over the past week (Part 1, Part 2, Part 3). The short version: recorded meetings are the largest untapped source of institutional knowledge in most organizations, and nobody has time to rewatch them.
Process entire folders of videos at once. PlanOpticon merges knowledge graphs across recordings, so patterns and relationships that span multiple sessions become visible.
planopticon batch -i ./recordings -o ./output
Works with local files, Google Drive, or Dropbox.
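Merging graphs across recordings can be sketched like this. The data model is an assumption for illustration (entity-relation triples keyed by name), not PlanOpticon's actual representation:

```python
from collections import defaultdict

def merge_graphs(graphs):
    """Merge per-recording knowledge graphs.

    graphs: iterable of (recording_id, edges), where edges is an iterable
    of (source_entity, relation, target_entity) triples. Returns a dict
    mapping each triple to the set of recordings it appeared in.
    """
    merged = defaultdict(set)
    for recording_id, edges in graphs:
        for triple in edges:
            merged[triple].add(recording_id)
    return dict(merged)

def cross_session_patterns(merged, min_recordings=2):
    """Relationships that only become visible across multiple sessions."""
    return {t: ids for t, ids in merged.items() if len(ids) >= min_recordings}
```

The key idea is that identical triples from different recordings collapse into one edge that remembers its sources, so a relationship mentioned in several meetings surfaces as a recurring pattern rather than isolated facts.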
We have open issues labeled good first issue and help wanted, ranging from YouTube URL support to S3 integration to custom prompt templates. If you work with video content and care about knowledge extraction, we’d love contributions.