Show HN: Open source alternative to Perplexity Comet (Score: 151+ in 8 hours)

Link: https://readhacker.news/s/6xxwT
Comments: https://readhacker.news/c/6xxwT

Hey HN, we're a YC startup building an open-source, privacy-first alternative to Perplexity Comet.
Unlike a bunch of others, there's no invite system – you can download it today from our website or GitHub: https://github.com/browseros-ai/BrowserOS
--- Why bother building an alternative? We believe browsers will become the new operating systems, where we offload much of our work to AI agents. But these agents will have access to all your sensitive data – emails, docs, on top of your browser history. Open-source, privacy-first alternatives need to exist.
We're not a search or ad company, so no weird incentives. Your data stays on your machine. You can use local LLMs with Ollama. We also support BYOK (bring your own keys), so no $200/month plans.
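To make the "local LLMs with Ollama" point concrete, here is a minimal sketch of talking to a locally running Ollama server over its default loopback HTTP API – the model name is a placeholder, and this is an illustration of the pattern, not BrowserOS's actual integration code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a request against a locally running Ollama server.

    Nothing leaves the machine: the endpoint is loopback by default.
    """
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


req = build_generate_request("llama3.2", "Summarize this page in one sentence.")
# Sending it requires `ollama serve` to be running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

BYOK works the same way in spirit: the request goes straight from your machine to the provider you configured, with no middleman server.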
Another big difference vs Perplexity Comet: our agent runs locally in your browser (not on their server). You can actually watch it click around and do stuff, which is pretty cool! Short demo here: https://bit.ly/browserOS-demo
--- How we built it: We patch Chromium's C++ source code with our changes, so we have the same security model as Google Chrome. We also ship an auto-updater for security patches and regular updates.
Working with Chromium's 15M lines of C++ has been another fun adventure that I'm writing a blog post about. Cursor/VSCode break at this scale, so we're back to using grep to find things and make changes. Claude Code works surprisingly well too.
Building the binary takes ~3 hours on our M4 Max MacBook.
--- Next: We're just 2 people with a lot of work ahead (Firefox started with 3 hackers, history rhymes!). But we strongly believe that a privacy-first browser with local LLM support is more important than ever – since agents will have access to so much sensitive data.
Looking forward to any and all comments!
Matt Trout has died (Score: 150+ in 19 hours)

Link: https://readhacker.news/s/6xvT7
Comments: https://readhacker.news/c/6xvT7
Show HN: Pangolin – Open source alternative to Cloudflare Tunnels (Score: 153+ in 8 hours)

Link: https://readhacker.news/s/6xymr
Comments: https://readhacker.news/c/6xymr

Pangolin is an open source self-hosted tunneled reverse proxy management server with identity and access control, designed to securely expose private resources through encrypted WireGuard tunnels running in user space.
We made Pangolin so you retain full control over your infrastructure while providing a user-friendly and feature-rich solution for managing proxies, authentication, and access, all with a clean and simple dashboard web UI.
GitHub: https://github.com/fosrl/pangolin
Deployment takes about 5 minutes on a VPS: https://docs.fossorial.io/Getting%20Started/quick-install
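Conceptually, a tunneled reverse proxy like this routes each incoming request by hostname to a private backend reachable only over the WireGuard tunnel. A minimal sketch of that routing step, with hypothetical hostnames and addresses (this is not Pangolin's internal code):

```python
# Hostname-based routing in a tunneled reverse proxy (conceptual sketch).
# The table entries are hypothetical examples, not Pangolin internals.

ROUTES = {
    # public hostname       -> private address reachable over the WireGuard tunnel
    "app.example.com":        "10.0.0.2:8080",
    "grafana.example.com":    "10.0.0.3:3000",
}


def resolve_backend(host_header: str):
    """Pick the private backend for an incoming Host header (port stripped)."""
    host = host_header.split(":")[0].lower()
    return ROUTES.get(host)
```

Because the backends live on tunnel-only private addresses, they are never directly exposed to the internet; the proxy node is the single public entry point where authentication and access control are enforced.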
Demo by Lawrence Systems (YouTube): https://youtu.be/g5qOpxhhS7M?si=M1XTWLGLUZW0WzTv&t=723
Some use cases:
- Grant users access to your apps from anywhere using just a web browser
- Proxy services behind CGNAT
- One application load balancer across multiple clouds and on-premises
- Easily expose services on IoT and edge devices for field monitoring
- Bring localhost online for easy access

A few key features:
- No port forwarding, and your public IP stays hidden for self-hosting
- Create proxies to multiple different private networks
- OAuth2/OIDC identity providers
- Role-based access control
- Raw TCP and UDP support
- Resource-specific PIN codes, passwords, and email OTP
- Self-destructing shareable links
- API for automation
- WAF with CrowdSec and geoblocking
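A self-destructing shareable link can be implemented as a signed token with an embedded expiry, so the server can verify it statelessly and reject it after the deadline. A minimal sketch of that idea – the secret and token format here are hypothetical, not Pangolin's actual scheme:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical key held only by the server


def make_share_link(resource: str, ttl_seconds: int, now=None) -> str:
    """Create a link token that self-destructs after ttl_seconds."""
    expires = int((now or time.time()) + ttl_seconds)
    msg = f"{resource}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{resource}:{expires}:{sig}"


def verify_share_link(token: str, now=None) -> bool:
    """Reject tokens that are expired or tampered with."""
    resource, expires, sig = token.rsplit(":", 2)
    msg = f"{resource}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and (now or time.time()) < int(expires)
```

The HMAC binds the expiry to the resource, so a recipient cannot extend the link's lifetime by editing the timestamp.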
Final report on Alaska Airlines Flight 1282 in-flight exit door plug separation (Score: 150+ in 11 hours)

Link: https://readhacker.news/s/6xybE
Comments: https://readhacker.news/c/6xybE
Show HN: Cactus – Ollama for Smartphones (Score: 151+ in 14 hours)

Link: https://readhacker.news/s/6xxTa
Comments: https://readhacker.news/c/6xxTa

Hey HN, Henry and Roman here - we've been building a cross-platform framework for deploying LLMs, VLMs, Embedding Models and TTS models locally on smartphones.
Ollama enables deploying LLMs locally on laptops and edge servers; Cactus enables deploying them on phones. Deploying directly on phones lets you build AI apps and agents capable of phone use without compromising privacy, supports real-time inference with no network latency, and enables things like personalised per-user RAG pipelines, and more.
Apple and Google have both recently moved into local AI models with the launch of Apple Foundation Frameworks and Google AI Edge respectively. However, both are platform-specific and only support specific models from each company. To this end, Cactus:
- Is available in Flutter, React-Native & Kotlin Multi-platform for cross-platform developers, since most apps are built with these today.
- Supports any GGUF model you can find on Hugging Face: Qwen, Gemma, Llama, DeepSeek, Phi, Mistral, SmolLM, SmolVLM, InternVLM, Jan Nano, etc.
- Accommodates everything from FP32 down to 2-bit quantized models, for better efficiency and less device strain.
- Provides MCP tool-calls to make agents performant and truly helpful (setting reminders, searching the gallery, replying to messages) and more.
- Falls back to big cloud models for complex, constrained, or large-context tasks, ensuring robustness and high availability.
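The local-first / cloud-fallback policy in the last bullet can be sketched as a simple routing decision. The thresholds and backend names below are hypothetical illustrations, not Cactus's actual logic:

```python
# Illustrative routing policy for local-first inference with cloud fallback.
# Thresholds and labels are hypothetical, not Cactus's real implementation.

LOCAL_CONTEXT_LIMIT = 4096   # tokens a small on-device model handles comfortably
LOW_BATTERY = 0.15           # below this fraction, avoid heavy on-device inference


def pick_backend(prompt_tokens: int, battery_level: float) -> str:
    """Prefer the on-device model; fall back to a cloud model when constrained."""
    if prompt_tokens > LOCAL_CONTEXT_LIMIT:
        return "cloud"       # context too large for the local model
    if battery_level < LOW_BATTERY:
        return "cloud"       # preserve battery on a constrained device
    return "local"           # default: private, zero-network-latency inference
```

The point of such a policy is that the common case stays private and fast on-device, while the rare oversized or resource-starved request still succeeds.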
It's completely open source. Would love to have more people try it out and tell us how to make it great!
Repo: https://github.com/cactus-compute/cactus
FP8 is ~100 tflops faster when the kernel name has "cutlass" in it (Score: 152+ in 4 hours)

Link: https://readhacker.news/s/6xzNX
Comments: https://readhacker.news/c/6xzNX
OpenFront: Realtime Risk-like multiplayer game in the browser (Score: 150+ in 10 hours)

Link: https://readhacker.news/s/6xzhH
Comments: https://readhacker.news/c/6xzhH