Vibe Engineering Effect Apps — Michael Arnaldi, Effectful
TL;DR
Michael Arnaldi demonstrates "vibe engineering" by building a TypeScript project with AI agents, revealing that cloning library repositories directly into your codebase—rather than using npm packages—enables AI to learn patterns from source code, while strict TypeScript and custom lint rules act as essential guardrails.
🧠 LLM Architecture Constraints
Context windows act as fixed memory arrays
Unlike human brains, LLMs cannot form long-term memories from conversations; they operate solely within a limited token window, so earlier instructions fall out of context and are forgotten.
Post-training freezes knowledge updates
Models receive no continuous learning after deployment, meaning they lack awareness of libraries released after their training cutoff dates.
📁 Repository Cloning Strategy
Clone dependencies into project directories
Placing library source code directly in your repo (e.g., `repos/effect`) allows AI agents to analyze implementation patterns, whereas node_modules and gitignored files remain invisible to coding agents.
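A minimal sketch of that layout, assuming the Effect GitHub repository URL (clone failure is tolerated so the script can be re-run or used offline):

```shell
# Clone the library's source inside the project so coding agents can read it.
mkdir -p repos
git clone --depth 1 https://github.com/Effect-TS/effect.git repos/effect || true
# Note: do NOT add repos/ to .gitignore -- gitignored files (like
# node_modules) are invisible to most coding agents.
ls repos
```

The Bottom Line below suggests `git subtree` as an alternative to a plain clone; either way, the point is that the source sits inside the agent-visible tree.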
Source code supersedes documentation
Because coding agents are trained largely on source code, they extract patterns more reliably from actual TypeScript implementations than from human-written documentation or MCP servers.
🛡️ Safety Guardrails for AI Coding
Maximize TypeScript strictness
Configure all diagnostics as errors and enable format-on-save to force AI agents to fix type issues immediately rather than accumulating technical debt.
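A `tsconfig.json` along these lines is one way to do it (a sketch; all flags are real TypeScript compiler options, the exact set is a choice):

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "noFallthroughCasesInSwitch": true,
    "noImplicitOverride": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true
  }
}
```

Format-on-save itself is an editor setting (e.g. VS Code's `editor.formatOnSave`) rather than a compiler option.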
Implement custom ESLint rules
Ban explicit type assertions (`as X`) and `any`/`unknown` types to prevent AI from circumventing Effect's type safety when it encounters complex type machinery.
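Both bans can be expressed with stock typescript-eslint rules rather than hand-written ones — a sketch of a flat config, assuming the `typescript-eslint` package:

```javascript
// eslint.config.mjs -- a sketch using typescript-eslint's flat config
import tseslint from "typescript-eslint";

export default tseslint.config({
  rules: {
    // Ban `any`, so the agent cannot opt out of type checking
    "@typescript-eslint/no-explicit-any": "error",
    // Ban `as X` assertions outright (assertionStyle: "never")
    "@typescript-eslint/consistent-type-assertions": [
      "error",
      { assertionStyle: "never" },
    ],
  },
});
```

With both rules at error severity, an agent that hits Effect's type machinery has to solve the types rather than assert its way past them.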
⚙️ Development Workflow Setup
Create agents.md instruction files
Maintain a living document listing available commands (type check, test) and repository locations to guide AI behavior without context pollution.
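An `agents.md` along these lines illustrates the idea (the specific commands are hypothetical, not from the talk):

```markdown
# agents.md

## Commands
- Type check: `pnpm tsc --noEmit`
- Test: `pnpm vitest run`

## Repositories
- Effect source lives in `repos/effect` -- read it before writing Effect code.

## Rules
- Never use `as X` assertions or `any`; fix the types instead.
```

Because the file is short and stable, it guides the agent without eating meaningful context-window space.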
Use Effect v4 for agent safety
Effect's structured error handling and type safety keep AI-generated code from degenerating into superficially simple snippets that silently drop error handling.
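The underlying idea — errors carried in the type so they cannot be silently dropped — can be sketched in plain TypeScript (this is an illustration of a typed error channel, not Effect's actual API):

```typescript
// A typed error channel: failure is part of the return type, so any
// caller (human- or AI-written) must handle it to satisfy the compiler.
type Result<A, E> =
  | { _tag: "Success"; value: A }
  | { _tag: "Failure"; error: E };

type ParseError = { _tag: "ParseError"; input: string };

function parsePort(input: string): Result<number, ParseError> {
  const n = Number(input);
  return Number.isInteger(n) && n > 0 && n < 65536
    ? { _tag: "Success", value: n }
    : { _tag: "Failure", error: { _tag: "ParseError", input } };
}
```

Effect generalizes this pattern: its `Effect` type tracks the error channel as a type parameter, so a lint-constrained agent cannot make an error "disappear" without the compiler objecting.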
Bottom Line
Clone library repositories directly into your project as subtree sources rather than npm dependencies, while enforcing strict TypeScript and custom ESLint rules to constrain AI agents to type-safe patterns.