Viktor: AI Coworker That Lives in Slack — Fryderyk Wiatrowski
TL;DR
Fryderyk Wiatrowski presents Viktor, an AI employee that lives natively in Slack, automating complex cross-functional tasks by drawing on shared company context and 3,000+ tool integrations. The product evolved from early browser-based agents and now tackles the memory and permission challenges unique to multi-user enterprise environments.
🚀 Product Evolution & Architecture
From unreliable browsers to Slack-native
In 2023, the early browser agent JCAI achieved only 60% reliability on 3-5 step tasks using DOM snapshots, prompting an evolution into the email agent Jace and the February 2024 launch of Viktor, a company-wide AI employee that found immediate product-market fit.
Shared integration model
Unlike personal agents that require each user to connect their own accounts, Viktor inherits permissions when any one employee connects an integration, giving the entire team shared, PhD-level context across codebase, analytics, and marketing tools.
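The talk doesn't show implementation details, but the shared-integration model can be sketched as a workspace-level registry: one employee's connection makes the tool available to everyone. The `Workspace` and `Integration` names below are hypothetical illustrations, not Viktor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Integration:
    """A third-party connection (e.g. GitHub, analytics), shared workspace-wide."""
    name: str
    connected_by: str  # the employee who authorized it

@dataclass
class Workspace:
    """Hypothetical registry: one employee's connection grants team-wide access."""
    integrations: dict = field(default_factory=dict)

    def connect(self, user: str, name: str) -> None:
        # The first connection registers the tool for the whole workspace.
        self.integrations[name] = Integration(name=name, connected_by=user)

    def available_tools(self) -> list[str]:
        # Every employee sees the same shared tool surface.
        return sorted(self.integrations)

ws = Workspace()
ws.connect("alice", "github")
ws.connect("bob", "amplitude")
print(ws.available_tools())  # ['amplitude', 'github']
```

This is the key contrast with personal agents, where each user would have to repeat every OAuth flow before the agent could act on their behalf.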
🏗️ Technical Challenges in Slack
Multi-user memory isolation
Solved the 'cluttered memory' problem, which compounds rapidly with 100+ users, by enforcing strict context isolation between Slack channels and DMs based on team hierarchy, preventing executive data from leaking into engineering channels.
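One way to picture hierarchy-based isolation is memory partitioned per channel, with retrieval allowed only from ancestors in the team tree. This is a minimal sketch under that assumption; `ScopedMemory` and the path-prefix scheme are illustrative, not the described system.

```python
class ScopedMemory:
    """Hypothetical sketch: memories are partitioned per Slack channel/DM,
    and recall only crosses partitions along the team hierarchy."""

    def __init__(self) -> None:
        self._store: dict[str, list[str]] = {}  # channel -> remembered notes
        self._team: dict[str, str] = {}         # channel -> team path, e.g. "eng/platform"

    def register(self, channel: str, team_path: str) -> None:
        self._team[channel] = team_path
        self._store.setdefault(channel, [])

    def remember(self, channel: str, note: str) -> None:
        self._store[channel].append(note)

    def recall(self, channel: str) -> list[str]:
        # Only surface memories from channels whose team path is an ancestor
        # of (or equal to) this channel's path: exec notes never reach eng.
        path = self._team[channel]
        return [
            note
            for ch, notes in self._store.items()
            if path.startswith(self._team[ch])
            for note in notes
        ]

mem = ScopedMemory()
mem.register("#exec", "exec")
mem.register("#eng", "eng")
mem.register("#platform", "eng/platform")
mem.remember("#exec", "layoff plan")
mem.remember("#eng", "deploy friday")
print(mem.recall("#platform"))  # ['deploy friday'] -- exec data is isolated
```

The point of the sketch is the invariant, not the data structure: any retrieval path must be checked against the org hierarchy before a memory enters the agent's context.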
Non-linear interaction handling
Engineered systems that interpret Slack's social signals (edited and deleted messages, abandoned threads, emoji reactions) as task modifications or cancellations, converting them into a linear agent context.
🧠 UX Psychology & Deployment
Latency arbitrage
Selected Slack because users tolerate 10-minute response times for complex tasks from 'coworkers', versus the instant responses they expect from web apps, making powerful agentic workflows feel natural rather than frustrating.
Personality drives adoption
In A/B testing, users rejected GPT-4 in favor of Claude Opus for its 'sassy', human-like tone, demonstrating that emotional resonance affects workplace AI adoption more than raw capability.
Graduated proactivity
Recommends restricting proactive suggestions to an initial group of power users before a company-wide rollout, to avoid security-team backlash from sudden autonomous DMs and thread participation.
💼 Integration Philosophy
Treat as a hire, not a tool
After users mistakenly granted Viktor access to personal Gmail accounts, scoping controls were added so each integration can be marked personal or shared, underscoring that AI employees need appropriate permission boundaries just like human hires.
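The personal-vs-shared scoping can be sketched as a simple access check on each connection. The `Scope` enum and `IntegrationACL` class are hypothetical names, a minimal sketch of the boundary the talk describes rather than the real implementation.

```python
from enum import Enum

class Scope(Enum):
    PERSONAL = "personal"  # only the connecting user's tasks may use it
    SHARED = "shared"      # any teammate's task may use it

class IntegrationACL:
    """Hypothetical scoping check: the AI employee only uses a connection
    if its declared scope permits the requesting user."""

    def __init__(self) -> None:
        self._conns: dict[str, tuple[str, Scope]] = {}  # name -> (owner, scope)

    def connect(self, owner: str, name: str, scope: Scope) -> None:
        self._conns[name] = (owner, scope)

    def can_use(self, requester: str, name: str) -> bool:
        owner, scope = self._conns[name]
        return scope is Scope.SHARED or requester == owner

acl = IntegrationACL()
acl.connect("alice", "gmail", Scope.PERSONAL)
acl.connect("alice", "github", Scope.SHARED)
print(acl.can_use("bob", "gmail"))   # False -- personal stays personal
print(acl.can_use("bob", "github"))  # True  -- shared is team-wide
```

Defaulting sensitive integrations like personal email to `PERSONAL` is the "hire, not a tool" principle in code: access is granted per role, not globally.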
Bottom Line
Successful AI coworkers require shared company context, distinct personality, and graduated deployment in Slack to leverage social latency tolerance while solving multi-user memory and permission architectures.
More from AI Engineer
You can't just one shot it — Mehedi Hassan, Granola
Mehedi Hassan explains why simply adding AI features with a single prompt ('one-shotting') fails in production, advocating instead for tight feedback loops through custom tracing infrastructure and rapid iteration frameworks to refine LLM behavior for specific use cases.
Agentic Search for Context Engineering — Leonie Monigatti, Elastic
Leonie Monigatti from Elastic argues that context engineering is fundamentally 80% agentic search, evolving from rigid RAG pipelines to dynamic agent-driven retrieval that must navigate diverse context sources through carefully curated, specialized search tools.
Playground in Prod - Optimising Agents in Production Environments — Samuel Colvin, Pydantic
Samuel Colvin demonstrates optimizing AI agent prompts in production using Jepper, a genetic algorithm library that breeds high-performing prompt variations, combined with Logfire's managed variables for structured configuration and deterministic evaluation against golden datasets.
Vibe Engineering Effect Apps — Michael Arnaldi, Effectful
Michael Arnaldi demonstrates "vibe engineering" by building a TypeScript project with AI agents, revealing that cloning library repositories directly into your codebase—rather than using npm packages—enables AI to learn patterns from source code, while strict TypeScript and custom lint rules act as essential guardrails.