
Can We Really Talk About AI Readiness as If It Is a Destination?


Every week I see another AI readiness assessment making the rounds. Score your organization. Get a maturity rating. But readiness toward what? Most assessments measure whether an organization can deploy AI: data quality, infrastructure, talent gaps, governance. They rarely ask what the organization is deploying AI to become. Cisco's 2025 AI Readiness Index found that only 15% of organizations have networks fully ready for AI, while 99% of top performers have a well-defined strategy. The infrastructure question gets attention. The destination question does not. Without a destination, readiness optimizes for deployment speed rather than strategic value.

The hypothesis: AI readiness assessments that measure deployment capability without defining an organization-specific north star for human-AI integration are producing misaligned investment, and the north star itself, not the readiness score, is the binding constraint on whether AI creates durable value.

Three Takeaways

First, readiness assessments import a linear stage-gate model into a nonlinear domain, and the deeper problem is that the gates lead to a generic destination rather than a bespoke one. Capability Maturity Models (Paulk et al., 1993) describe progression through defined stages toward known best practices. This works when the domain is stable and the target practices are known. AI meets neither condition: the technology changes faster than any maturity model can track, and the right configuration of human and AI capability differs by industry, organization, and workflow. A financial services firm whose north star is augmenting compliance judgment has a different readiness profile than a healthcare system expanding diagnostic capacity. Generic scores flatten this variation. The organization scoring highest on a standard assessment may be the most efficiently misaligned: deploying AI fastest toward a destination it never defined.

Second, readiness without a north star defaults to the efficiency frame, and the efficiency frame produces predictable ceilings on both value creation and workforce engagement. When organizations do not define what they are getting ready for, assessments default to deployment speed and coverage. This imports the efficiency frame: AI's value is doing existing work faster and cheaper. Deci and Ryan's (1985) Self-Determination Theory demonstrates that sustained human performance depends on autonomy, competence, and relatedness. The efficiency frame appeals to none of these. It signals that the goal is to need people less. Organizations that define their north star around collaborative intelligence, where humans and AI expand what each can do through interaction, create a different readiness target. The WEF Future of Jobs Report 2025 found that 77% of employers recognize the need for reskilling to foster collaboration with AI, not just to cope with displacement. The north star determines whether readiness builds a system that grows human capability or progressively substitutes for it.

Third, the construct organizations actually need is learning architecture, not readiness scoring, and the north star determines what the learning architecture optimizes for. Garvin, Edmondson, and Gino (2008) identified three building blocks of a learning organization: a supportive learning environment, concrete learning processes, and leadership that reinforces learning. An organization with a low readiness score but strong learning architecture will adapt faster than one with a high readiness score and weak learning capacity. Kauffman's (1993) work on fitness landscapes formalizes this: in complex environments, optimizing toward a fixed peak is less effective than maintaining capacity to explore as the landscape shifts. But exploration without direction is expensive. The north star constrains the landscape to what matters for that firm, and collaborative intelligence provides a north star that optimizes for combined human-AI performance rather than AI coverage alone.

The Longer View

Developmental psychology provides the better model. Vygotsky's (1978) Zone of Proximal Development describes the space between what a learner can do independently and what they can do with support. Applied to organizations, the ZPD for AI integration shifts as capability develops. It is not a score but a design space requiring scaffolding matched to the learner's current edge. An organization whose north star is collaborative intelligence scaffolds toward expanding human judgment alongside AI. One whose implicit north star is automation efficiency scaffolds toward making humans unnecessary.

Complex adaptive systems theory identifies the structural limitation. Kauffman's (1993) fitness landscapes demonstrate that locking in on a single peak in a shifting environment produces brittle optimization. AI readiness assessments define a fixed peak. Learning architectures maintain capacity to explore. The north star provides the constraint that makes exploration tractable rather than random.
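The brittleness argument can be made concrete with a toy simulation. This is a simplified sketch, not Kauffman's NK model: fitness values are assigned at random to every configuration, the "lock-in" strategy keeps the peak it found on the old landscape, and the "explorer" strategy re-runs hill climbing after the landscape shifts. All names and scales here are illustrative.

```python
import random

N = 12  # bits per configuration; deliberately toy-scale

def make_landscape(seed):
    # a random fitness value for every possible configuration
    rng = random.Random(seed)
    return {i: rng.random() for i in range(2 ** N)}

def hill_climb(landscape, x, steps):
    # greedy one-bit-flip hill climbing from configuration x
    for _ in range(steps):
        best, best_f = x, landscape[x]
        for bit in range(N):
            y = x ^ (1 << bit)
            if landscape[y] > best_f:
                best, best_f = y, landscape[y]
        if best == x:
            break  # local peak reached
        x = best
    return x

random.seed(0)
x0 = random.randrange(2 ** N)
before = make_landscape(seed=1)
peak = hill_climb(before, x0, steps=100)   # optimize against today's landscape

after = make_landscape(seed=2)             # the environment shifts
locked = peak                              # lock-in: defend the old peak
explorer = hill_climb(after, peak, steps=100)  # explorer: re-adapt from there

print(after[explorer] >= after[locked])    # re-adapting never does worse
```

Because hill climbing only ever moves to strictly better configurations, the explorer's post-shift fitness is always at least the locked-in agent's, which is the structural point: a fixed readiness peak is only as good as the landscape it was scored on.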

Brunsson's (1989) concept of organizational hypocrisy describes the gap between what organizations say, decide, and do. Readiness assessments can become a mechanism for this hypocrisy: the assessment becomes the deliverable rather than the means to capability. I have seen organizations where completing the assessment substituted for defining a north star. The score went up. The integration depth did not change.

My Two Cents

I have built readiness assessments. I have administered them. And I have watched organizations spend more energy optimizing their score than defining what they are getting ready for. The assessment tells you whether you can deploy. It does not tell you toward what end. That second question determines whether AI investment creates lasting value or accelerates toward a destination no one chose.

The north star I keep returning to is collaborative intelligence: a future state where humans and AI systems are both more capable because of how they work together. That destination is bespoke. It looks different in every organization because the judgment requirements, trust dynamics, and capability profiles are different. An organization that defines its north star this way asks different readiness questions. Not just can we deploy, but can we design human-AI workflows that expand what our people can do? Not just do we have the data infrastructure, but do we have the learning infrastructure to adapt as conditions change? Those are harder questions. They are also the right ones.

Try This

Before you run another readiness assessment, define your north star. What is your organization deploying AI to become? Then measure three things: how many teams are experimenting toward that north star (experimentation breadth), how fast experiments produce usable insights (learning velocity), and how effectively those insights spread (knowledge diffusion). These dynamic indicators tell you more about trajectory than any static maturity score.

Read to Learn More

Academic: Garvin, D. A., Edmondson, A. C., & Gino, F. (2008). Is yours a learning organization? Harvard Business Review, 86(3), 109–116.

Industry: Cisco. (2025). AI readiness index 2025.

References

Brunsson, N. (1989). The organization of hypocrisy. Wiley.

Cisco. (2025). AI readiness index 2025.

Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. Plenum Press.

Garvin, D. A., Edmondson, A. C., & Gino, F. (2008). Is yours a learning organization? Harvard Business Review, 86(3), 109–116.

Kauffman, S. A. (1993). The origins of order. Oxford University Press.

McKinsey & Company. (2025). AI in the workplace: A report for 2025.

Paulk, M. C., Curtis, B., Chrissis, M. B., & Weber, C. V. (1993). Capability maturity model for software, version 1.1. Software Engineering Institute.

Vygotsky, L. S. (1978). Mind in society. Harvard University Press.

World Economic Forum. (2025). The future of jobs report 2025.