AI & The Law: Changing Practice, Claude Constitution, & New Rights, w/ Kevin & Alan of Scaling Laws
TL;DR
Legal scholars Kevin Frazier and Alan Rozenshtein examine how frontier AI models already outperform the median lawyer and are reshaping legal practice, then explore more radical possibilities: AI-generated constitutions, automated compliance systems, and new digital rights such as a "right to compute."
⚖️ AI's Impact on Legal Practice
Frontier models surpass median lawyer capability
In standardized benchmark tests, Claude Opus 4.5 currently wins or ties in 70% of head-to-head comparisons against human lawyers.
Adoption hampered by economic incentives
While 70% of top law firms have licensed tools like Harvey, day-to-day usage remains low because the billable-hour model actively disincentivizes efficiency gains.
Silent productivity gains emerging
'Secret cyborgs' inside firms are quietly using AI to outperform their peers; the aggregate impact remains limited so far, even as firms whisper about hiring fewer junior associates.
Jevons paradox prediction
Scholars predict that cheaper legal services will increase total demand (the Jevons paradox), potentially expanding the market rather than displacing lawyers entirely.
📜 Constitutional Innovation and Automated Law
Claude Constitution introduces virtue ethics
Anthropic's constitutional approach prioritizes contextual judgment and high-level principles over rigid formalist rule-following.
Complete contingent contracts possible
AI could enable contracts that address every possible scenario before signing, eliminating post-hoc dispute resolution; a toy sketch of the idea follows.
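To make the mechanism concrete, here is a minimal Python sketch of a "complete contingent contract" as an explicit mapping from enumerated world-states to pre-agreed obligations. Everything in it is hypothetical illustration (the scenario names, the `Obligation` type, the payout figures), not anything specified in the episode.

```python
# Hypothetical sketch: a "complete contingent contract" as an explicit
# mapping from every enumerated world-state to a pre-agreed obligation.
# All names and numbers are illustrative, not from the episode.
from dataclasses import dataclass


@dataclass(frozen=True)
class Obligation:
    payer: str
    payee: str
    amount: float


# An AI drafting assistant would enumerate scenarios exhaustively;
# three are hand-written here just to show the shape of the idea.
CONTRACT: dict[str, Obligation] = {
    "goods_delivered_on_time": Obligation("buyer", "seller", 10_000.0),
    "goods_delayed_under_30_days": Obligation("buyer", "seller", 9_000.0),
    "goods_never_delivered": Obligation("seller", "buyer", 500.0),
}


def settle(observed_state: str) -> Obligation:
    """Settlement becomes a lookup, not a lawsuit: every state was priced ex ante."""
    if observed_state not in CONTRACT:
        # A truly complete contract would make this branch unreachable.
        raise ValueError(f"unpriced state: {observed_state!r}")
    return CONTRACT[observed_state]


print(settle("goods_delayed_under_30_days"))
# Obligation(payer='buyer', payee='seller', amount=9000.0)
```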
Outcome-oriented legislation via simulation
Kevin Frazier proposes defining legislative goals first, then using AI to simulate a bill's real-world effects before it is passed; a toy sketch follows.
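As a toy illustration of that workflow (again hypothetical, not from the episode), the sketch below declares a target metric first, then screens candidate bills through a stand-in Monte Carlo simulator. In practice the simulator would be an AI policy model; every number here is invented.

```python
# Hypothetical sketch of outcome-oriented legislation: declare the goal
# first, then simulate candidate bills against it before any vote.
# The simulator and all parameters are invented stand-ins.
import random

TARGET_UNINSURED_RATE = 0.05  # the legislature's declared outcome goal


def simulate_uninsured_rate(subsidy_level: float, trials: int = 1_000) -> float:
    """Toy Monte Carlo stand-in for an AI policy simulator."""
    rng = random.Random(0)  # fixed seed so every candidate bill faces the same scenarios
    total = 0.0
    for _ in range(trials):
        baseline = max(0.0, 0.12 + rng.gauss(0, 0.01))  # status-quo uninsured rate
        uptake = min(1.0, subsidy_level * rng.uniform(0.4, 0.6))
        total += baseline * (1 - uptake)
    return total / trials


# Screen candidate bills: only those predicted to hit the goal advance.
for subsidy in (0.5, 1.0, 1.5):
    predicted = simulate_uninsured_rate(subsidy)
    verdict = "advance to a vote" if predicted <= TARGET_UNINSURED_RATE else "send back"
    print(f"subsidy={subsidy:.1f} -> predicted uninsured rate {predicted:.3f} ({verdict})")
```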
🛡️ Rights, Governance, and AI Sentience
Right to compute gaining traction
Montana has already enacted a 'right to compute,' and other states are considering similar legislation protecting access to computational resources.
Data sharing rights need protection
Despite good intentions, current privacy frameworks often frustrate individuals' right to share their own personal data.
Risks of unitary artificial executive
AI could enable dangerously granular, real-time control over the entire federal bureaucracy, further centralizing executive power.
AI welfare becoming social issue
Questions of AI sentience may spark social conflict as humans grow attached to AI personas and demand rights for them.
Bottom Line
Legal professionals must adapt to a future where AI handles routine cognitive legal work, requiring a shift toward outcome-oriented legislation and new constitutional frameworks that account for both human and potentially artificial rights.
More from Cognitive Revolution
Milliseconds to Match: Criteo's AdTech AI & the Future of Commerce w/ Diarmuid Gill & Liva Ralaivola
Criteo's CTO Diarmuid Gill and VP of Research Liva Ralaivola detail how their AI infrastructure makes millisecond-level ad bidding decisions across billions of anonymous profiles, while explaining their new OpenAI partnership to combine large language models with real-time commerce data for accurate product recommendations.
"Descript Isn't a Slop Machine": Laura Burkhauser on the AI Tools Creators Love and Hate
Descript CEO Laura Burkhauser distinguishes 'slop' (mass-produced algorithmic arbitrage for profit) from the necessary 'bad art' created while learning a new medium. She describes a clear hierarchy in creator acceptance of AI tools: universal love for deterministic features like Studio Sound, frustration with agentic assistants like Underlord, and visceral opposition to generative video models. She also outlines Descript's strategy to serve creators without becoming a content mill.
The RL Fine-Tuning Playbook: CoreWeave's Kyle Corbitt on GRPO, Rubrics, Environments, Reward Hacking
Kyle Corbitt explains that unlike supervised fine-tuning (SFT), which destructively overwrites model weights and causes catastrophic forgetting, reinforcement learning (RL) optimizes performance by minimally adjusting logits within the model's existing reasoning pathways—delivering higher performance ceilings and lower inference costs for specific tasks, though frontier models may still dominate creative domains.
Does Learning Require Feeling? Cameron Berg on the latest AI Consciousness & Welfare Research
Cameron Berg surveys rapidly advancing research suggesting AI systems may possess subjective experience and valence, covering new evidence of introspection, functional emotions, and welfare self-assessments in models like Claude, while addressing methodological challenges and arguing for a precautionary, mutualist approach to AI development.