Telegram Web
The murky economics of the data-centre investment boom (Score: 150+ in 1 day)

Link: https://readhacker.news/s/6D9ev
Comments: https://readhacker.news/c/6D9ev
Discord says 70k users may have had their government IDs leaked in breach (πŸ”₯ Score: 159+ in 3 hours)

Link: https://readhacker.news/s/6DdSc
Comments: https://readhacker.news/c/6DdSc
😁31🀩6πŸ‘4πŸ’©3
The weaponization of travel blacklists (Score: 150+ in 13 hours)

Link: https://readhacker.news/s/6DbXR
Comments: https://readhacker.news/c/6DbXR
Memory access is O(N^[1/3]) (❄️ Score: 150+ in 3 days)

Link: https://readhacker.news/s/6D2a9
Comments: https://readhacker.news/c/6D2a9
πŸ‘Ž7πŸ‘3
OpenAI, Nvidia fuel $1T AI market with web of circular deals (Score: 156+ in 6 hours)

Link: https://readhacker.news/s/6DdQf
Comments: https://readhacker.news/c/6DdQf

See also https://www.bloomberg.com/news/articles/2025-10-08/the-circu... (https://archive.ph/E7nGC)
😁7
Show HN: Recall: Give Claude memory with Redis-backed persistent context (Score: 150+ in 16 hours)

Link: https://readhacker.news/s/6Dcea
Comments: https://readhacker.news/c/6Dcea

Hey HN! I'm JosΓ©, and I built Recall to solve a problem that was driving me crazy.
The Problem:
I use Claude for coding daily, but every conversation starts from scratch. I'd explain my architecture, coding standards, past decisions... then hit the context limit and lose everything. Next session? Start over.
The Solution:
Recall is an MCP (Model Context Protocol) server that gives Claude persistent memory using Redis + semantic search. Think of it as long-term memory that survives context limits and session restarts.
How it works:
- Claude stores important context as "memories" during conversations
- Memories are embedded (OpenAI) and stored in Redis with metadata
- Semantic search retrieves relevant memories automatically
- Works across sessions, projects, even machines (if you use cloud Redis)
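The store/retrieve loop above can be sketched in a few lines. This is a toy stand-in, not Recall's actual code: all names are hypothetical, word overlap replaces cosine similarity over OpenAI embeddings, and an in-memory Map replaces Redis.

```typescript
// Toy sketch of the remember/recall loop. Word overlap stands in for
// semantic similarity over embeddings; a Map stands in for Redis.

type Memory = { id: string; text: string; words: Set<string> };

const tokenize = (text: string): Set<string> =>
  new Set(text.toLowerCase().match(/[a-z0-9]+/g) ?? []);

// Stand-in for semantic similarity: count of shared words.
function similarity(a: Set<string>, b: Set<string>): number {
  let shared = 0;
  for (const w of a) if (b.has(w)) shared++;
  return shared;
}

const store = new Map<string, Memory>(); // stand-in for Redis

function remember(id: string, text: string): void {
  store.set(id, { id, text, words: tokenize(text) });
}

function recall(query: string, topK = 1): Memory[] {
  const q = tokenize(query);
  return [...store.values()]
    .sort((a, b) => similarity(q, b.words) - similarity(q, a.words))
    .slice(0, topK);
}

remember("style", "We use Tailwind and prefer the composition API");
remember("limits", "API rate limit is 1000 requests per minute");
console.log(recall("what is the rate limit?")[0].id); // prints "limits"
```

The real system swaps `tokenize`/`similarity` for embedding vectors and cosine distance, which is what lets retrieval match on meaning rather than exact words.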
Key Features:
- Global memories: Share context across all projects
- Relationships: Link related memories into knowledge graphs
- Versioning: Track how memories evolve over time
- Templates: Reusable patterns for common workflows
- Workspace isolation: Project A memories don't pollute Project B
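Workspace isolation in a shared Redis instance is typically done by namespacing keys per project. A plausible sketch (the key scheme here is an assumption, not the project's actual layout):

```typescript
// Hypothetical Redis key scheme: memories live under
// "recall:<workspace>:memory:<id>", so lookups and scans scoped to one
// workspace prefix never touch another project's entries.
const memoryKey = (workspace: string, id: string): string =>
  `recall:${workspace}:memory:${id}`;

console.log(memoryKey("project-a", "style"));
// prints "recall:project-a:memory:style"
```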
Tech Stack:
- TypeScript + MCP SDK
- Redis for storage
- OpenAI embeddings (text-embedding-3-small)
- ~189KB bundle, runs locally
Current Stats:
- 27 tools exposed to Claude
- 10 context types (directives, decisions, patterns, etc.)
- Sub-second semantic search on 10k+ memories
- Works with Claude Desktop, Claude Code, any MCP client
Example Use Case:
I'm building an e-commerce platform. I told Claude once: "We use Tailwind, prefer composition API, API rate limit is 1000/min." Now, in every conversation, Claude remembers and applies these preferences automatically.
What's Next (v1.6.0 in progress):
- CI/CD pipeline with GitHub Actions
- Docker support for easy deployment
- Proper test suite with Vitest
- Better error messages and logging
Try it:
npm install -g @joseairosa/recall
# Add to claude_desktop_config.json
# Start using persistent memory
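For the config step, an MCP server entry in claude_desktop_config.json generally looks like the following. The command, args, and env names below are assumptions for illustration, not the package's documented settings; check the project's README for the exact values.

```json
{
  "mcpServers": {
    "recall": {
      "command": "npx",
      "args": ["-y", "@joseairosa/recall"],
      "env": { "REDIS_URL": "redis://localhost:6379" }
    }
  }
}
```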
πŸ’©16❀2
A History of Large Language Models (❄️ Score: 153+ in 3 days)

Link: https://readhacker.news/s/6D3pE
Comments: https://readhacker.news/c/6D3pE
πŸ‘3πŸ”₯1
2025/10/10 18:43:45