💡 Daily feed summarizer agent
==tl;dr: A Claude Code skill + scheduled routine that reads Twitter (and ideally Reddit) once a day and produces a digest, so the slop never has to be consumed directly.==
I’ve been thinking about implementing a new skill and a Claude Code routine that reads my Twitter feed every day and summarizes it, so I don’t have to consume the slop myself. The same goes for Reddit, though I’m not sure the Reddit API allows it. For X, there’s a recently released CLI tool that could probably serve as the data source; for Reddit, I’m less sure.
Why?
- Consuming Twitter/Reddit directly is net-negative — high volume, low signal, actively engineered for engagement.
- A daily digest keeps the signal (interesting threads, relevant posts from followed accounts) while dropping the slop (outrage bait, recommendation algorithm drift).
- Fits the AI-native PKM vision: the agent consumes feeds on the human’s behalf, surfaces what matters, stores the rest.
- Parallels the existing Feediary idea (scrape-based aggregator) but routes through the Claude Code agent loop instead of a standalone app.
What?
- Skill: `feed-digest` (or similar) — fetches a feed source, filters/summarizes, writes a dated digest note into the vault.
- Routine: scheduled daily (e.g., morning) via the existing Claude Code routine mechanism.
- Output: a single daily digest note per source (`0_Inbox/feeds/twitter-YYYY-MM-DD.md`, `reddit-YYYY-MM-DD.md`) or a merged feed. Processed by `process-inbox` like any other inbox item — the agent can link interesting items into related pages, drop the rest.
How?
- Twitter (X): a CLI tool for X was recently released — likely usable as the data source. Needs investigation: which tool, auth model, rate limits, whether the home timeline is accessible or only public search.
- Reddit: unclear whether the Reddit API still permits this post-2023-API-changes. Options:
- Official Reddit API (needs auth, subject to rate limits + access tier).
- Personal RSS feeds (Reddit exposes `.rss` per-subreddit / per-user — cheaper path, no API key).
- Third-party scraper — fragile, TOS risk.
- Summarization: Claude via the existing Claude Code agent. Each item scored (interesting vs. slop), then a digest produced with the interesting items summarized and the slop discarded.
- Inbox integration: digest is routed like any other inbox note. If an item warrants its own page (a thread worth keeping), `process-inbox` promotes it.
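The `.rss` route above needs nothing beyond the standard library, since Reddit serves those feeds as Atom XML. A minimal sketch (function names are mine; the URL shape follows Reddit’s public `.rss` endpoints):

```python
import xml.etree.ElementTree as ET

# Reddit's feeds are Atom documents under this namespace
ATOM = "{http://www.w3.org/2005/Atom}"

def subreddit_feed_url(sub: str) -> str:
    # Per-subreddit Atom feed; no API key or OAuth involved
    return f"https://www.reddit.com/r/{sub}/.rss"

def parse_feed(xml_text: str) -> list[dict]:
    """Pull (title, url) pairs out of a Reddit Atom feed document."""
    root = ET.fromstring(xml_text)
    items = []
    for entry in root.iter(f"{ATOM}entry"):
        link = entry.find(f"{ATOM}link")
        items.append({
            "title": entry.findtext(f"{ATOM}title", default=""),
            "url": link.get("href", "") if link is not None else "",
        })
    return items
```

The resulting item list is what the agent would then score (interesting vs. slop) before writing the digest note; the fetch itself is a plain HTTP GET of the feed URL.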
Open questions
- What’s the CLI tool for X/Twitter?
- Does the Reddit API still allow personal-feed reads in 2026?
- Per-source digests or merged “morning digest” across all feeds?
- Where do digests live long-term — `0_Inbox/feeds/` (processed and deleted) or `3_Resources/Feeds/<source>/<date>` (kept as archive)?
Related
- 👨‍💻 Apps — Feediary (same problem, standalone-app framing).
- OpenClaw for PKM — “Proactive Task Execution” + “Controlled Feedback Loop” are the conceptual parents (daily digest, not random interruptions).
- 2026-01-10-Unified social feed with topic curation — earlier private memo raising the same frustration (Reddit + X aggregation, topic curation, API costs blocking it). This idea is the agent-loop-era answer.
- AGENTS.md — skill + routine contract.
- SKILL — would route the daily digests.
- SKILL — precedent for a skill that processes raw external input into wiki-ready form (Phase 0.5 action-item detection, routing, graph enrichment).
- Open-Source Model-Agnostic AI Platform — this skill would be a concrete routine on that kind of open platform.
- Agent-Native Software Architecture — the architectural pattern this skill sits inside: a skill (function) called by a sub-agent, using MCP/CLI connectors.