Capture → Connect → See

Your knowledge graph, built automatically as you browse.

Ocno turns pages, ideas, and recurring themes into a local-first knowledge graph you can revisit, inspect, and grow without manual PKM rituals.

Visual knowledge map · Local-first memory · Auto-linked context · PKM without upkeep · Chrome extension

A visual map of pages, notes, and ideas

Connections appear automatically as you browse

A second brain you can actually explore


Auto-linked

Related pages and ideas connect themselves

Visual graph

See structure and context instead of a flat history log

Local-first

Your knowledge map stays on your machine

No upkeep

No tagging, copying, or backlink chores required

SECOND BRAIN

A second brain that works while you browse.

Click a capability to see how Ocno turns browsing into memory, on-page context, and a living knowledge graph.


Ask and inspect your graph

01

Automatically generated visual graph

Ocno turns pages and ideas into a living map you can inspect, traverse, and grow without manual upkeep.

02

Ask your brain in plain English

Query your memory naturally and jump into the right source, cluster, or connection without folders, tags, or exact keywords.

Context and recovery while you browse

03

Context right where you browse

Unlike traditional second-brain tools, Ocno brings relevant memory back into the page you are reading instead of pushing you into a separate workspace.

04

Recover what you saw before

Earlier pages, sources, and related threads resurface when they matter, so recall becomes part of browsing instead of a later search task.

KNOWLEDGE MAP

Your graph is built
locally and privately.

Ocno builds the visual graph from the pages you actually open, links them on-device, and keeps the resulting knowledge map private to you.

Architecture diagram showing local on-device data flow

The graph is built from your browsing trail

When you open pages, Ocno captures their local context, titles, and relationships so they can appear as nodes in your graph. You do not need to manually save, tag, or rewrite what you read to preserve it.

Connections are computed on-device

Shared entities, repeated topics, and adjacent ideas are compared locally so the graph can cluster related pages and reveal structure without sending your reading history away for processing.
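Ocno's actual linking pipeline is not public, but the idea of comparing pages on-device and drawing an edge when their topics overlap can be illustrated with a toy sketch. Everything here is hypothetical: the word-count "topic vectors," the cosine comparison, and the `0.2` threshold all stand in for whatever entity extraction and similarity model a real implementation would use.

```python
from collections import Counter
from math import sqrt

def topic_vector(text):
    # Toy stand-in for real entity/topic extraction:
    # lowercase word counts as a sparse vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def link_pages(pages, threshold=0.2):
    # Compare every pair of visited pages locally and emit an
    # edge whenever their topic overlap clears the threshold.
    # Nothing leaves the device: input and output are plain dicts/lists.
    vecs = {title: topic_vector(text) for title, text in pages.items()}
    titles = list(vecs)
    edges = []
    for i, a in enumerate(titles):
        for b in titles[i + 1:]:
            score = cosine(vecs[a], vecs[b])
            if score >= threshold:
                edges.append((a, b, round(score, 2)))
    return edges
```

Run on three pages, two about Rust and one about baking, only the two Rust pages get connected; the recipe stays an isolated node in the graph.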

Your visual knowledge graph stays private

The resulting graph lives on your device as your own knowledge layer. It is not uploaded as a cloud profile, and the structure of what you know remains on your side of the boundary.

KNOWLEDGE

Built for visual thinking.

A useful PKM layer should show structure, preserve context, and remove the maintenance burden that makes most systems decay.

01 Representation

See structure, not just history

  • Turn scattered tabs into a map of concepts and sources
  • Understand how ideas cluster instead of recalling isolated URLs
  • Use visual context to recover where a thought came from
  • Make browsing memory explorable instead of invisible

02 Connection

Let ideas link themselves

  • Surface related pages through meaning and proximity
  • See recurring themes as they reappear across sessions
  • Build context continuity without manual curation
  • Navigate sideways through adjacent knowledge, not just backward

03 PKM

A second brain without upkeep

  • No copy-paste pipeline required to preserve useful context
  • No need to tag every page just to find it later
  • Keep a living map of your reading without another inbox to manage
  • Make PKM fit browsing instead of interrupting it

What enabled this?

5–10× Faster

Thanks to its compiler-optimized engine and full GPU utilization, Ocno delivers up to 5–10× higher performance than traditional CPU-based in-browser model execution.

Smaller models are catching up 🔥

Smaller models like Qwen3 4B (GPU and 4 GB RAM required) can match the quality of larger earlier models such as GPT-4o on many tasks, as shown by various benchmarks.

Model Compression & Quantization

Advances in quantization and model compression have drastically reduced memory requirements, making powerful models practical on everyday devices. You can now run a model comparable to GPT-4o in just 3–4 GB of RAM, enabling capabilities once limited to large cloud systems.
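The memory arithmetic behind that claim is simple: weight memory is roughly parameter count times bits per weight. This back-of-the-envelope sketch (weights only, ignoring activations, KV cache, and runtime overhead) shows how a 4B-parameter model drops from about 8 GB at 16-bit precision to about 2 GB at 4-bit quantization, which is what puts it within reach of a laptop with 8 GB of RAM.

```python
def weight_memory_gb(params_billion, bits_per_weight):
    # Rough lower bound on model size: weights only.
    # Real runtime usage adds activations, KV cache, and overhead.
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 4B-parameter model (e.g. Qwen3 4B) at different precisions:
fp16_gb = weight_memory_gb(4, 16)  # ~8 GB
int4_gb = weight_memory_gb(4, 4)   # ~2 GB
```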

What hardware do you need?

Apple / Windows

MacBook M1 (Apple Silicon)

  • Designed, developed, and extensively tested on Apple Silicon devices, including MacBook Pro M1 and MacBook Air M1.
  • Apple Silicon unified memory architecture and high-efficiency GPU acceleration provide excellent local AI inference performance.
  • Newer Apple Silicon chips (M2, M3, and beyond) deliver smoother performance and support larger, more capable models.

RAM

Minimum: 8GB

Good: 16GB

Best: 32GB+

Chip

Minimum: M1

Good: M1 Pro

Best: M3 Max+

Recommended Baseline

M1 Pro or newer · 16GB RAM · Apple Silicon

MAKER

Meet the Builder

I replaced my social life with shipping.

Boris Haviar

Founder & Software Engineer

Be among the first and get special perks!

Get early access to Ocno, help shape what we build next through direct feedback, and unlock founder-circle perks reserved for early users.

We use Brevo as our marketing platform. By submitting this form you agree that the personal data you provided will be transferred to Brevo for processing in accordance with Brevo's Privacy Policy.