Signal-first
A feed ranked around your real interests
Less noise and more relevant sources
Discover adjacent signals without doomscrolling


Avoid generic engagement bait and obvious hoaxes
See relevant articles and posts in one stream
Monitor topics without endless manual filtering
Get priority onboarding, founder-circle perks, and direct influence on what we build next.
HOW FEED RANKING WORKS
Ocno splits collection from ranking so the system can stay useful without turning the server into a profile of you.
As you browse, Ocno pays attention to the pages you open and the topics you spend time with. That taste signal stays local-first, so your raw browsing history does not need to become a cloud profile.
Privacy highlight: your personal taste layer stays on your device.
The server crawls the public web, converts the full day of articles into embeddings, and your device pulls the entire bundle in about 3 seconds. Because you pull the whole day instead of a personalized slice, the server still cannot tell what you like.
Everyone gets the full prepared day. Preference only appears when ranking happens locally.
Once the daily bundle is on-device, the local ranker scores it against your reading patterns and interests. The feed you see is shaped by your taste, not by a generic engagement algorithm.
Result: one daily stream, locally ranked for you.
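The local ranking step described above can be sketched as cosine similarity between a locally kept taste vector and each article embedding in the daily bundle. All names and shapes here are illustrative assumptions, not Ocno's actual API:

```typescript
// Hypothetical sketch: rank a daily bundle of article embeddings
// against a taste vector that never leaves the device.

type Article = { id: string; embedding: number[] };

function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function norm(a: number[]): number {
  return Math.sqrt(dot(a, a));
}

// Cosine similarity between the user's taste vector and one article.
function cosine(a: number[], b: number[]): number {
  return dot(a, b) / (norm(a) * norm(b));
}

// Score every article locally and sort by relevance, highest first.
// The server never sees tasteVector or the resulting order.
function rankBundle(tasteVector: number[], bundle: Article[]): Article[] {
  return [...bundle].sort(
    (x, y) =>
      cosine(tasteVector, y.embedding) - cosine(tasteVector, x.embedding)
  );
}
```

Because the whole bundle is already on-device, this sort runs against every article of the day, and only the ordering (not the inputs) differs per user.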
ARCHITECTURE
No personal data leaves your device. Ever. Here's exactly what talks to what.
Each day, the OCNO server crawls public news sources and delivers the full article bundle to your device in bulk. Because everything arrives upfront, the server does not know what you actually read or like.
A local ranker compares that bundle against your own browsing context, memory, and reading profile, entirely on-device.
The server provides the inputs, but it does not see the local signals or ranking logic that shape your feed. The relevance layer lives on your side of the boundary.
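The boundary is easiest to see in the request itself: the only thing the client asks the server for is a date-keyed bulk download, identical for every user. The URL scheme and types below are hypothetical placeholders, not Ocno's actual endpoints:

```typescript
// Hypothetical sketch of the privacy boundary: the client's one
// request carries no user ID and no preference parameters.

type Bundle = {
  date: string;
  articles: { id: string; embedding: number[] }[];
};

// Same URL for every reader on a given day, so the server cannot
// distinguish users by interest from the request alone.
function bundleUrl(date: string): string {
  return `https://example.com/bundle/${date}.json`;
}

// Bulk fetch of the prepared day; ranking happens after this returns.
async function fetchDailyBundle(date: string): Promise<Bundle> {
  const res = await fetch(bundleUrl(date));
  return res.json() as Promise<Bundle>;
}
```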
FEED
A good feed should narrow the field, respect your interests, and keep the ranking logic on your side.
Thanks to its compiler-optimized engine and full GPU utilization, Ocno delivers up to 5–10× higher performance than traditional CPU-based in-browser model execution.
Smaller models like Qwen3 4B (a GPU and 4 GB of RAM required) can match the quality of larger earlier models such as GPT-4o on many tasks, as various benchmarks show.
Advances in quantization and model compression have drastically reduced memory requirements, making powerful models practical on everyday devices. You can now run a model comparable to GPT-4o in just 3–4 GB of RAM, enabling capabilities once limited to large cloud systems.
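The 3–4 GB figure follows from simple weight arithmetic. The sketch below is back-of-the-envelope math for the weights alone (KV cache and activations add overhead on top), not a measured figure:

```typescript
// Rough memory estimate for quantized model weights.
// params: number of parameters; bitsPerWeight: quantization width.
function weightMemoryGB(params: number, bitsPerWeight: number): number {
  const bytes = params * (bitsPerWeight / 8);
  return bytes / 1024 ** 3;
}

// For a 4B-parameter model:
//   fp16 (16-bit): ~7.45 GB of weights alone, beyond most spare RAM
//   int4  (4-bit): ~1.86 GB of weights, leaving room inside a
//                  3-4 GB budget for KV cache and activations
```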
MAKER
I replaced my social life with shipping.
