Steve Yegge Wants You to Stop Looking at Your Code
My “Live with Tim” conversation with Steve Yegge this week was one of those sessions where you could imagine the audience leaning forward in their chairs. And on more than one occasion, when Steve got particularly colorful, I imagined them recoiling. Steve has always been one of the most provocative...
Autonomous AI systems force architects into an uncomfortable question that cannot be avoided much longer: Does every decision need to be governed synchronously to be safe? At first glance, the answer appears obvious. If AI systems reason, retrieve information, and act autonomously, then surely every...
I’ve said in the past that AI will enable new kinds of applications—but I’ve never had the imagination to guess what those new applications would be. I don’t want a smart refrigerator, especially if it’s going to inflict ads on me. Or a smart TV. Or a smart doorbell. Most of these applications are s...
In a previous article, we outlined why GPUs have become the architectural control point for enterprise AI. When accelerator capacity becomes the governing constraint, the cloud’s most comforting assumption—that you can scale on demand without thinking too far ahead—stops being true. That shift has a...
Most multi-agent AI systems fail expensively before they fail quietly. The pattern is familiar to anyone who’s debugged one: Agent A completes a subtask and moves on. Agent B, with no visibility into A’s work, reexecutes the same operation with slightly different parameters. Agent C receives inconsi...
Control Planes for Autonomous AI: Why Governance Has to Move Inside the System
For most of the past decade, AI governance lived comfortably outside the systems it was meant to regulate. Policies were written. Reviews were conducted. Models were approved. Audits happened after the fact. As long as AI behaved like a tool—producing predictions or recommendations on demand—that se...
This post first appeared on Addy Osmani’s Elevate Substack newsletter and is being republished here with the author’s permission. TL;DR: Aim for a clear spec covering just enough nuance (this may include structure, style, testing, boundaries...) to guide the AI without overwhelming it. Break large...
At a private dinner a few months ago, Jensen Huang apparently said what I’ve been thinking for some time. The US is significantly behind China in AI development. Here are some of the reasons. Huang starts with the ratio of AI developers in China (he estimates 1 million) to AI developers in the US (2...
Reverse Engineering Your Software Architecture with Claude Code to Help Claude Code
This post first appeared on Nick Tune’s Medium page and is being republished here with the author’s permission. I have been using Claude Code for a variety of purposes, and one thing I’ve realized is that the more it understands about the functionality of the system (the domain, the use cases, the e...
The hard truth about AI scaling is that for most organizations, it isn’t happening. Despite billions in investment, a 2025 report from the MIT NANDA initiative reveals that 95% of enterprise generative AI pilots fail to deliver measurable business impact. This isn’t a technology problem; it’s an org...
The Five Skills I Actually Use Every Day as an AI PM (and How You Can Too)
This post first appeared on Aman Khan’s AI Product Playbook newsletter and is being republished here with the author’s permission. Let me start with some honesty. When people ask me “Should I become an AI PM?” I tell them they’re asking the wrong question. Here’s what I’ve learned: Becoming an AI PM...
This post first appeared on Nick Tune’s Weird Ideas and is being republished here with the author’s permission. A well-crafted system prompt will increase the quality of code produced by your coding assistant. It does make a difference. If you provide guidelines in your system prompt for writing cod...
Evals are having their moment. They’ve become one of the most talked-about concepts in AI product development. People argue about them for hours, write thread after thread, and treat them as the answer to every quality problem. This is a dramatic shift from 2024 or even early 2025, when the term was barely...
The following article originally appeared on Mike Amundsen’s Substack Signals from Our Futures Past and is being republished here with the author’s permission. There’s an old hotel on a windy corner in Chicago where the front doors shine like brass mirrors. Each morning, before guests even reach the...
My father spent his career as an accountant for a major public utility. He didn’t talk about work much; when he engaged in shop talk, it was generally with other public utility accountants, and incomprehensible to those who weren’t. But I remember one story from work, and that story is relevant to o...
Generative AI in the Real World: Aurimas Griciūnas on AI Teams and Reliable AI Systems
SwirlAI founder Aurimas Griciūnas helps tech professionals transition into AI roles and works with organizations to create AI strategy and develop AI systems. Aurimas joins Ben to discuss the changes he’s seen over the past couple years with the rise of generative AI and where we’re headed with agen...
The End of the Sync Script: Infrastructure as Intent
There’s an open secret in the world of DevOps: Nobody trusts the CMDB. The Configuration Management Database (CMDB) is supposed to be the “source of truth”—the central map of every server, service, and application in your enterprise. In theory, it’s the foundation for security audits, cost analysis,...
If You’ve Never Broken It, You Don’t Really Know It
The following article originally appeared on Medium and is being republished here with the author’s permission. There’s a fake confidence you can carry around when you’re learning a new technology. You watch a few videos, skim some docs, get a toy example working, and tell yourself, “Yeah, I’ve got ...
The following article originally appeared on Medium and is being republished here with the author’s permission. This post is a follow-up to a post from last week on the progress of logging. A colleague pushed back on the idea that we’d soon be running code we don’t fully understand. He was skeptical...
Quantum computing (QC) and AI have one thing in common: They make mistakes. There are two keys to handling mistakes in QC: First, we’ve made tremendous progress in error correction in the last year. Second, QC focuses on problems where generating a solution is extremely difficult but verifying it is easy. Thi...