Writing

Blog

I write to think in public about what organizations get wrong about AI and what the research actually supports.

Where Is the Evidence That Humans and AI Are Actually Co-Evolving?

Wilson and Daugherty (2018) proposed three mechanisms of collaborative intelligence. The enterprise AI literature can tell us who is using AI and how often. It cannot tell us whether the humans and the systems are getting better together. Co-evolution is the most consequential mechanism, and it is almost entirely unmeasured.

Research Notes

Is AI Fluency the Construct We Have Been Missing, or the One We Keep Reinventing?

Anthropic published its AI Fluency Index. I have been thinking about these constructs for two years. The study gets something important right: iteration predicts fluency, and polished outputs suppress evaluation. But fluency is necessary, not sufficient. The construct bridging individual fluency and organizational value is collaborative intelligence, and the mechanism producing compounding returns is co-evolution.

Research Notes

Can We Really Talk About AI Readiness as If It Is a Destination?

Every week I see another AI readiness assessment making the rounds. Score your organization. Get a maturity rating. But readiness toward what? The infrastructure question gets attention. The destination question does not. And without a destination, readiness optimizes for deployment speed rather than strategic value.

AI Strategy

What If the Cost Curve Breaks Before the Capability Curve?

We have been assuming that frontier AI requires frontier investment. But the most disruptive force may not be what AI can do; it may be how cheaply it can do it. DeepSeek trained a frontier-competitive reasoning model for $294,000. The compute moat is dissolving. The question is what replaces it.

AI Strategy

Are We in the Middle of a Reorg That Nobody Is Naming?

No one is announcing AI-driven restructuring. But reporting lines are shifting. Junior roles are being quietly absorbed or redefined. Competency models that mattered eighteen months ago are already outdated. The reorg is happening. We are just not calling it that.

Future of Work

What Can an AI-Generated Outlaw Country Band Teach Us About Collaboration?

I built an AI-generated outlaw country band called Southern Oracle. Three women, a debut album, production capacity I lacked, and a creative vision entirely my own. It taught me things about co-evolution, authorship, and the line between human and machine creativity that no strategy document could.

Personal

What If the Protocols End Up Mattering More Than the Models?

The entire conversation right now is about models. Who has the biggest one. Who has the fastest one. But a different question deserves attention: what about the connective tissue? The protocols that determine how AI systems talk to tools, to data, to each other? When Anthropic, OpenAI, and Google converge on shared protocol infrastructure, the implication for every other organization is clear. The protocol layer is not where you differentiate. It is where you participate or get designed around.

AI Strategy

Is Efficiency Actually the Wrong Story to Tell About AI?

Every AI pitch deck I see leads with efficiency. Save time. Cut costs. Do more with less. But efficiency is a ceiling, not a floor. The work redesign literature shows that the most consequential innovations do not optimize existing tasks. They create entirely new categories of work. So what happens when we stop asking what AI makes cheaper and start asking what AI makes newly possible?

Future of Work

Does Your AI Strategy Actually Need More Use Cases?

I keep hearing the same request: "We need more use cases." But the use-case mentality treats AI like a vending machine. Insert problem, receive solution. The organizations I see pulling ahead are not building use case libraries. They are building adaptive capacity. Job design research tells us that when work is dynamic, prescription fails. What if the goal is not to identify the right 50 applications, but to build a system that surfaces the next 500 on its own?

AI Strategy

Are We Measuring the Right Thing When We Measure AI Adoption?

Adoption is the metric everyone tracks. But what does it actually tell you? That someone logged in. That a button was clicked. IO research on training transfer has shown for years that exposure alone does not produce behavior change. You need reinforcement, feedback loops, environmental support. The organizations that scale AI successfully are not measuring adoption. They are measuring learning velocity: how fast teams integrate new capabilities into how they actually work.

AI Strategy

Is Resistance Actually the Most Useful Signal in Your AI Rollout?

There is a pattern I keep seeing. An organization deploys a new AI tool. Adoption is uneven. Leadership calls it a "change management problem" and brings in comms. But here is the thing. Decades of IO research on organizational change tell us that resistance is not noise. It is signal. When people push back on a system, they are surfacing a mismatch between how work actually happens and how someone assumed it happens. What if instead of overcoming resistance, we started listening to it as design feedback?

AI Strategy
Coming Soon

What Happens If the Top Breaks Away?

On the bifurcation risk in organizational learning capacity, and what happens when a small number of organizations figure out human-AI integration while the rest are still running pilots.

AI Strategy
Coming Soon

Is the Portfolio Career the Natural Structure for an AI-Shaped Economy?

On composite professional identities, the shift from single-role careers, and what career architecture looks like when the unit of work is the project, not the position.

Future of Work
Coming Soon

Why Don't Lab Benchmarks Hold Up in the Real World?

On the gap between controlled AI evaluations and real-world performance, and what complexity, context, and organizational friction do to benchmarks.

Research Notes
Coming Soon