The Signal Is Broken

When Performance Outruns Understanding in the Age of AI

Note: it is with great irony that I confess these are all my own thoughts, but very few of my own words.

Distorting the Competence Hierarchy

We tend to think of AI as a productivity tool—something that helps us work faster, write better, and make smarter decisions. But that framing misses what may be its most important effect.

AI is the first tool that allows people to perform beyond their own understanding. And that changes something fundamental.

For a long time, we’ve relied on a simple assumption: the quality of someone’s output reflects the depth of their thinking. Clear writing suggested clear thought. Strong strategy implied sound judgment. Polished communication signaled competence. AI breaks that link. It generates articulation without ownership, insight without struggle, and polish without depth. And because the output still looks like human excellence, we continue to interpret it through the old lens.

That creates a subtle but powerful distortion—one that doesn’t just affect how work gets done, but how we assign credit, build trust, and recognize real capability.

The Appearance of Competence

Kathy Sierra once observed that users don’t care about features—they care about what a product enables them to do. The best tools don’t just improve outcomes; they make the user feel more capable. At their best, they disappear.

AI is the most powerful version of that idea we’ve seen. It doesn’t just assist—it elevates. It helps leaders communicate more clearly, frame decisions more convincingly, and operate with a level of polish that once required years of experience.

We’ve all seen it: the executive who suddenly writes like a seasoned strategist, the meeting summary that reads like it came from a top-tier consultant, the off-the-cuff email that lands with perfect structure and clarity. No one questions the result—but it’s not always clear where that capability came from.

A critical element of this dynamic is attribution. The most effective tools make success feel like it belongs to the user, not the tool. With AI, that effect is amplified. There’s no ego to manage, no teammate to credit, no visible collaborator. Which means the human in the loop can quietly absorb the upside—and just as quietly, deflect the downside.

When Words Outrun Thought

This distortion becomes most visible in the written word. For people we know well, we can often tell when something has been heavily AI-assisted—not because it sounds non-human, but because it no longer sounds like them. The variation, the rough edges, the recognizable patterns of thought begin to disappear.

That reaction points to something deeper. We’ve long assumed that well-articulated ideas reflect well-formed thinking. Even when someone refined their phrasing, the effort required to produce clarity implied a certain level of understanding. AI weakens that assumption. It allows ideas to be expressed with a clarity that may exceed the author’s own grasp—the words can outrun the thinking.

And the boundary isn’t just unclear to others—it’s often unclear to us. You give a rough prompt, and the AI returns something that feels exactly like what you meant. It’s easy to believe the idea was yours all along—that the AI simply found better words. But if the thought were already complete, why didn’t you express it that way yourself?

More often, the AI is shaping the idea as much as it is expressing it. And once that happens, even in your own head, attribution begins to blur.

But that ambiguity has limits. When the output is far beyond someone's normal capabilities (cinematic video, high-end design), attribution to the tool becomes unavoidable. Even if no one says it explicitly, everyone understands that the tool did the heavy lifting. The signal snaps back into place.

Most knowledge work doesn’t live at those extremes. If Bob from accounting shows up Monday morning with a cinematic short film, everyone knows what happened. But if Bob writes a remarkably sharp memo—clear, structured, persuasive—it lands differently. It’s believable. In writing, strategy, and communication, improvement can sit just inside the range of plausibility—polished, but not unbelievable.

And that’s exactly where attribution becomes unstable.

Because now it isn’t decided by reality—it’s decided by incentives.

Credit Without Contribution

As soon as AI meaningfully contributes to an output, attribution becomes ambiguous. And it isn't just ambiguous; it's selectively resolved. When the role of AI is obvious, attribution tends to follow reality. But in the far more common case, where the output stays within the range of plausibility, attribution is shaped by incentives.

Are these the author’s ideas, refined by AI? Or AI-generated ideas, adopted by the author? Are these the author’s words, or an LLM reconstruction designed to sound like them? The answers are rarely clear, and unlike human collaboration, there’s little pressure to resolve the ambiguity. AI doesn’t ask for credit. It doesn’t push back. It doesn’t correct the narrative.

So attribution begins to drift. The presentation lands well, the thinking seems sharp, and the room is impressed. No one asks how much was drafted, refined, or even originated elsewhere, and there's little incentive to bring it up. When things go well, the human takes the credit. When they don't, the blame can be quietly shifted onto the tool.

Over time, credit becomes less tied to contribution, and more tied to what can plausibly be claimed.

The Incentives Don’t Care

Individually, these shifts may seem manageable. But together, they erode something larger.

For decades, our economic systems have relied on a simple assumption: strong output reflects real understanding—because better understanding tends to produce better outcomes. AI quietly breaks that link. It enables performance that no longer proves competence, and once that happens, the incentives meant to reward real understanding begin to wander.

And when incentives lose their anchor, the consequences aren’t evenly distributed.

At the lower levels, the impact is more direct. If AI can produce the output, the role itself becomes harder to justify. Work that once required a junior hire can now be generated, refined, and delivered with far fewer people in the loop. Here comes the RIF: the reduction in force.

At the top, the dynamic is different.

AI can augment decision-making, strategy, and communication in ways that overlap significantly with leadership responsibilities. But unlike junior roles, leadership positions aren’t evaluated purely on replaceability—and the people in those roles have a say in how disruption is applied.

They won’t vote themselves out of relevance.

So the pressure concentrates downward, even as capability expands upward.

It’s easier to embrace efficiency when it threatens someone else’s role—and easier to trust the system when it’s working in your favor. Leaders will adopt AI where it strengthens their output and perceived effectiveness, but they have little reason to challenge the assumptions that make those gains possible.

So instead of correcting itself, the system adapts around the distortion.

Capital flows to the wrong decisions. Weak reasoning gets scaled. Organizations drift from reality while appearing more aligned than ever. A strategy that sounds airtight gets funded. A plan that reads convincingly gets approved. Only later does the underlying fragility show up.

A New Signal?

It’s reasonable to ask whether a new signal will replace the old one. Maybe the real skill is no longer raw output, but the ability to use AI effectively—to prompt, refine, and steer the system toward high-quality results.

There’s truth in that. The ability to work with AI is a real capability, and in many cases, a valuable one.

But it doesn’t fully solve the problem.

Because that fluency is harder to see. The process is hidden. The iterations are invisible. The judgment behind the output is difficult to evaluate from the outside. And the result can still exceed the user's underlying understanding.

So even if AI fluency becomes a new signal, it’s a weaker one—less direct, less observable, and easier to misinterpret.

The original link hasn’t been restored… it’s been replaced with something harder to read.

Disruption Without Permission

At first, LLM empowerment looks like acceleration. Better output. Better communication. Better decisions.

But better output is no longer a reliable signal of who understands what.

And without that signal, it becomes harder to know whose judgment should carry weight.

In the near term, we still have residual signals to lean on. Past experience, prior work, and long-standing patterns of behavior provide context—ways to anchor trust when the output alone is no longer enough.

But those signals don’t always carry forward, especially in new domains or unfamiliar problems. In those cases, the system has less to rely on—and uncertainty lingers longer.

Which means we fall back on something more immediate: direct conversation, real-time reasoning, and the ability to explain and adapt under pressure.

Not because it’s perfect—but because it’s harder to outsource.

Our system assumes performance is tied to understanding, that the people producing the best work are the most capable decision-makers. As we've seen, AI severs that tie, and the incentives built on it lose their anchor.

When that happens, some people rise on signals inflated by AI while others are left behind even when their understanding is stronger. Capital gets misallocated, weak decisions get scaled, and organizations drift further from reality while appearing more aligned than ever.

That’s the danger.

We’ve built our systems on the belief that output reflects understanding.

For the first time, that’s no longer true.

And when performance can consistently exceed understanding, disruption isn’t just technological.

It’s structural. Buckle up.