Vibe-Coding an Attention Firewall, w/ Steve Newman, creator of The Curve
TL;DR
Steve Newman, co-creator of the product that became Google Docs and founder of the Golden Gate Institute for AI, shares the suite of 15+ bespoke AI tools he built to filter overwhelming information flows and reclaim deep-focus time, demonstrating an iterative 'vibe coding' approach that prioritizes personal utility over agent optimization.
📰 Taming Information Overload
Static LLM summarization pipeline
Newman processes 50+ daily newsletters and podcasts through a simple RSS reader that feeds full text to an LLM with static prompts, generating two-level summaries without context memory or cross-referencing against past reads.
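A minimal sketch of that pipeline shape, assuming the LLM is just a callable that maps a prompt string to a response (stubbed here; all names and prompt wording are illustrative, not Newman's actual code): every item gets the same static prompts, once for a skim-level summary and once for a fuller digest, with no memory carried between items.

```python
# Hypothetical sketch: two-level summaries from static prompts.
# No context memory, no cross-referencing -- each item is summarized
# in isolation, exactly as described in the episode.

from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

# Static prompts, reused verbatim for every item and never updated.
SKIM_PROMPT = "Summarize this article in one paragraph:\n\n{text}"
DIGEST_PROMPT = "Summarize this article in detail, section by section:\n\n{text}"

@dataclass
class Summary:
    title: str
    skim: str    # one-paragraph version, for 10-second triage
    digest: str  # longer version, read only if the skim looks novel

def summarize_item(title: str, text: str,
                   llm: Callable[[str], str]) -> Summary:
    """Run both static prompts through the provided LLM callable."""
    return Summary(
        title=title,
        skim=llm(SKIM_PROMPT.format(text=text)),
        digest=llm(DIGEST_PROMPT.format(text=text)),
    )

def summarize_feed(items: Iterable[Tuple[str, str]],
                   llm: Callable[[str], str]) -> List[Summary]:
    """items: (title, full_text) pairs, e.g. pulled from an RSS reader."""
    return [summarize_item(title, text, llm) for title, text in items]
```

Keeping the LLM as an injected callable is what makes the pipeline "static": swapping models or prompts changes nothing structural, and there is deliberately no state to maintain between reads.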
Rapid triage over perfect filtering
Rather than building complex systems to detect novelty against his reading history, he relies on 10-second skims of one-paragraph summaries to identify new angles on topics like AI model releases.
🛡️ The Attention Firewall
Aggregated urgency filtering
A system pulls email, Slack, WhatsApp, and Signal messages into a unified pipeline, where an LLM evaluates each one against a one-page rubric of exceptions and surfaces only truly urgent items.
Dedicated monitor for interruptions
After 40 years of single-monitor use, Newman added a second screen dedicated to displaying his calendar and a rolling feed of urgent messages, eliminating the habit of checking messaging apps 30 times a day.
💻 Vibe Coding Philosophy
Anti-token maxing mindset
Newman adheres to the principle that 'the agent's not important, I'm important,' prioritizing tools that reduce his time in the chair rather than optimizing agent performance or capability.
Iterative discovery over planning
He describes building his attention firewall through fumbling iteration without a clear initial vision, noting that we're collectively experiencing a 'Cambrian explosion' of personal tooling with no playbooks to follow.
Conservative security boundaries
Despite historically lax, even borderline negligent, security practices, he now maintains strict boundaries against auto-responding or autonomous action, citing a duty of care for the data others have entrusted to him.
Bottom Line
Build AI tools iteratively to reclaim your own time and attention rather than optimizing for agent performance, while maintaining strict security guardrails when handling communications containing others' data.
More from Cognitive Revolution
Welcome to AI in the AM: RL for EE, Oversight w/out Nationalization, & the first AI-Run Retail Store
This episode explores the radicalizing public response to AI existential risk through recent attacks on lab leaders, while featuring interviews on reinforcement learning for circuit design, independent AI governance models, and San Francisco's first fully AI-operated retail store.
It's Crunch Time: Ajeya Cotra on RSI & AI-Powered AI Safety Work, from the 80,000 Hours Podcast
AI safety researcher Ajeya Cotra warns that we are entering "crunch time"—a critical window where AI systems become capable of recursive self-improvement and automating AI R&D, potentially compressing 10,000 years of technological progress into decades while remaining briefly within human control.
Calm AI for Crazy Days: Inside Granola's Design Philosophy, with co-founder Sam Stephenson
Granola co-founder Sam Stephenson shares how the $1.5B AI note-taking app achieves rapid growth through a 'surprisingly unambitious' design philosophy that prioritizes frazzled users operating in 'System 1' thinking, leveraging organic viral loops from note-sharing rather than feature bloat.
Training the AIs' Eyes: How Roboflow is Making the Real World Programmable, with CEO Joseph Nelson
Joseph Nelson, CEO of Roboflow, explains that computer vision is roughly three years behind language models in capability, facing unique challenges due to the chaotic, heterogeneous nature of the physical world that demands specialized low-latency edge deployment rather than cloud-only inference.