Signal -> Prioritize -> Discover

A feed for your mind, not for the algorithm.

Ocno helps you monitor what matters by ranking sources around your interests instead of generic engagement signals.

Local-first ranking · Private interest profile · No social feed dependency · Works offline · Chrome extension

A feed ranked by your actual interests

Less noise and more relevant sources

Discover adjacent signals without doomscrolling


Signal-first

A feed ranked around your real interests

Lower noise

Avoid generic engagement bait and obvious hoaxes

Cross-source

See relevant articles and posts in one stream

Stay current

Monitor topics without endless manual filtering

HOW FEED RANKING WORKS

From browsing signal to a personal feed.

Ocno splits collection from ranking so the system can stay useful without turning the server into a profile of you.

Browse
1

You browse. Ocno listens and reads with privacy in mind.

As you browse, Ocno pays attention to the pages you open and the topics you spend time with. That taste signal stays local-first, so your raw browsing history does not need to become a cloud profile.

Privacy highlight: your personal taste layer stays on your device.
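As an illustration of what an on-device taste signal could look like, here is a minimal sketch. The field names, the dwell-time weighting, and the `update_profile` helper are assumptions for the example, not Ocno's actual implementation:

```python
import math
from collections import Counter

def update_profile(profile: Counter, page_terms: list[str], dwell_seconds: float) -> Counter:
    # Weight each term by time spent on the page; longer reads signal stronger interest.
    # log1p dampens the effect so one very long visit doesn't dominate the profile.
    weight = math.log1p(dwell_seconds)
    for term in page_terms:
        profile[term] += weight
    return profile

# All of this state lives on-device; nothing here is uploaded anywhere.
profile = Counter()
update_profile(profile, ["rust", "compilers"], dwell_seconds=120)
update_profile(profile, ["celebrity", "gossip"], dwell_seconds=3)
```

A two-minute read on compilers now outweighs a three-second glance at gossip, which is the basic shape of a taste signal built from attention rather than clicks.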

Precompute + Pull
2

Each day, Ocno precomputes the full set of public updates into embeddings.

The server scans the public internet, turns the full day's updates into embeddings, and your device pulls the entire bundle in about 3 seconds. Because you pull the whole day instead of a personalized slice, the server still cannot tell what you like.

Everyone gets the full prepared day. Preference only appears when ranking happens locally.
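The key privacy property is that every client downloads the same bundle. A rough sketch of the idea, where the bundle format and the `fake_embed` stand-in are purely illustrative assumptions:

```python
import json

def fake_embed(text: str) -> list[float]:
    # Stand-in for a real embedding model: hash characters into a tiny vector.
    # A production system would use a proper sentence-embedding model here.
    vec = [0.0] * 8
    for i, ch in enumerate(text):
        vec[i % 8] += ord(ch) / 1000
    return vec

def build_daily_bundle(articles: list[dict]) -> str:
    # Server side: embed every public article for the day, no per-user filtering.
    # Because the output is identical for all clients, downloading it reveals nothing.
    bundle = [
        {"url": a["url"], "title": a["title"], "embedding": fake_embed(a["title"])}
        for a in articles
    ]
    return json.dumps(bundle)

articles = [{"url": "https://example.com/a", "title": "Rust compiler news"}]
bundle = build_daily_bundle(articles)
```

Shipping the whole day trades bandwidth for privacy: the server never learns which slice of the bundle any given user cares about.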

Rank
3

Your local ranker orders the day according to your taste.

Once the daily bundle is on-device, the local ranker scores it against your reading patterns and interests. The feed you see is shaped by your taste, not by a generic engagement algorithm.

Result: one daily stream, locally ranked for you.
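The local ranking step can be sketched as a similarity search between the bundle's embeddings and an on-device interest vector. This is a toy version; the 2-dimensional vectors and cosine scoring are assumptions for illustration:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity; returns 0.0 for degenerate zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_locally(bundle: list[dict], interest_vec: list[float]) -> list[dict]:
    # Score every item in the daily bundle against the on-device interest vector,
    # most relevant first. Neither the vector nor the ordering leaves the machine.
    return sorted(bundle, key=lambda item: cosine(item["embedding"], interest_vec), reverse=True)

bundle = [
    {"title": "Compilers deep dive", "embedding": [0.9, 0.1]},
    {"title": "Celebrity gossip",    "embedding": [0.1, 0.9]},
]
interests = [1.0, 0.0]  # this user cares about the first topic dimension
ranked = rank_locally(bundle, interests)
# ranked[0]["title"] == "Compilers deep dive"
```

Because the scoring function and the interest vector both live on-device, the server's view stops at "this client downloaded today's bundle."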

ARCHITECTURE

Everything stays on your machine.

No data leaves your device. Ever. Here's exactly what talks to what.

Architecture diagram showing local on-device data flow

Public articles are delivered in bulk

Each day, the Ocno server crawls public news sources and delivers the full article bundle to your device in bulk. Because everything arrives upfront, the server does not know what you actually read or like.

Your local model does the ranking

A local ranker compares that bundle against your own browsing context, memory, and reading profile, entirely on-device.


Personalization stays local

The server provides the inputs, but it does not see the local signals or ranking logic that shape your feed. The relevance layer lives on your side of the boundary.

FEED

Designed for high-signal discovery.

A good feed should narrow the field, respect your interests, and keep the ranking logic on your side.

01 Relevance

Rank what actually matters

  • Surface updates aligned with your real interests
  • Prioritize signal over generic popularity
  • Spend less time filtering and more time learning
  • Let curiosity shape ranking instead of engagement loops

02 Breadth

Pull from across the web

  • Blend multiple public sources into one reading stream
  • Track adjacent topics without manual monitoring
  • Discover relevant angles you would not have searched for
  • Keep research discovery active and wide without losing focus

03 Privacy

Keep ranking local-first

  • Your interest profile stays on-device
  • No dependency on external social ranking systems
  • Personalization without exposing your intent map
  • A private feed that still feels smart and current

What enabled this?

5-10x Faster

Thanks to its compiler-optimized engine and full GPU utilization, Ocno delivers up to 5–10× the performance of traditional CPU-based in-browser model execution.

Smaller models are catching up 🔥

Smaller models like Qwen3 4B (requires a GPU and 4 GB of RAM) can achieve quality comparable to larger earlier models such as GPT-4o on many tasks, as shown by various benchmarks.

Model Compression & Quantization

Advances in quantization and model compression have drastically reduced memory requirements, making powerful models practical on everyday devices. You can now run a model comparable to GPT-4o on just 3–4 GB of RAM, enabling capabilities once limited to large cloud systems.
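To make the memory savings concrete, here is a minimal sketch of symmetric int8 quantization, the basic trick behind running large models in a few gigabytes. The helper names are assumptions; real frameworks quantize per-channel and handle outliers more carefully:

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    # Symmetric int8 quantization: store one float scale plus 1-byte weights,
    # roughly a 4x memory saving over 4-byte float32 weights.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    # Recover approximate float weights at inference time.
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.02]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# restored approximates weights to within roughly scale/2 per value
```

Each weight shrinks from 4 bytes to 1, which is how a model that once needed 16 GB can fit in 4, at a small and usually acceptable accuracy cost.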

MAKER

Meet the Builder

I replaced my social life with shipping.


Boris Haviar

Founder & Software Engineer

Be among the first and get special perks!

Get early access to Ocno, help shape what we build next through direct feedback, and unlock founder-circle perks reserved for early users.

We use Brevo as our marketing platform. By submitting this form you agree that the personal data you provided will be transferred to Brevo for processing in accordance with Brevo's Privacy Policy.