When my colleagues and I were developing our extended UTAUT framework for AI adoption (Wolfe et al., 2025), we reviewed the enterprise AI literature looking for evidence of collaborative intelligence: the proposition that humans and AI systems can develop capabilities together that neither achieves alone. What we found instead was a field almost entirely organized around adoption and deployment. The literature could tell us who was using AI and how often. It could not tell us whether the humans and the systems were getting better together over time.
That gap led me to look more closely at the mechanisms of collaborative intelligence. Wilson and Daugherty (2018) propose three: complementarity, where humans and AI contribute different strengths to a shared task; boundary-setting, where organizations define where human authority ends and AI autonomy begins; and co-evolution, where human capability and AI capability improve together through repeated interaction. Complementarity is the most documented. Boundary-setting is the most discussed in governance conversations. Co-evolution is the least measured and, I believe, the most consequential, because it is the only mechanism that produces durable capability growth rather than static efficiency.
The hypothesis: co-evolution between humans and AI systems is occurring in organizations but is not being measured, named, or documented in the enterprise AI literature, because the field's dominant measurement frameworks privilege deployment metrics over interaction quality.
Three Takeaways
First, the most rigorous longitudinal study of AI in organizations gets closest to describing co-evolution without ever naming it. The MIT Sloan Management Review and BCG Artificial Intelligence and Business Strategy research program has tracked AI implementation annually since 2017 across thousands of organizations. Their 2020 report found that only 10% of organizations achieved significant financial benefits, and that the distinguishing factor was organizational learning: companies that learned with AI, not just from it, were six times more likely to succeed (Ransbotham et al., 2020). The 2022 report established a bidirectional relationship: when individuals derive value from AI, organizations benefit as well (Ransbotham et al., 2022). The 2024 report found that organizations combining organizational learning with AI-specific learning were up to 80% more effective at managing uncertainty (Ransbotham et al., 2024). The 2025 report found that agentic AI "does not fit traditional management frameworks" (Ransbotham et al., 2025). Across eight years, this program has documented the conditions under which humans and AI systems improve organizational performance together. That is co-evolution. They have never called it that.
Second, the absence is a measurement problem, not an empirical one. The enterprise AI literature overwhelmingly measures deployment: adoption rates, usage frequency, time saved, cost reduced. These are metrics of complementarity, which is valuable but static. Co-evolution requires longitudinal measurement of capability change in both the human and the system over repeated interaction cycles. Argote (2011) established that organizational learning is observable through changes in knowledge, routines, and performance over time. The MIT SMR-BCG program has shown that learning with AI predicts financial benefit. But no study I have found tracks whether human judgment improved because of working with AI, or whether AI outputs improved because of human feedback, at the level of specific workflows over time. We have instruments for complementarity. We do not yet have instruments for co-evolution.
Third, I suspect co-evolution is happening, and I suspect it is hiding in the places where people describe AI as "indispensable" without being able to explain why. BCG's 2025 AI at Work survey found that employees at organizations that have reshaped workflows around AI report sharper decision-making and more strategic work. They describe AI as a collaborative partner rather than a tool. This language is suggestive. It implies a relationship that has evolved, not a tool that was adopted. When workers describe AI as indispensable rather than helpful, they may be reporting the felt experience of co-evolution: a gradual shift in their own capability through sustained interaction with a system that was also changing. Current measurement captures the outcome (perceived indispensability) without the mechanism (bidirectional capability improvement over time).
The Longer View
Organizational learning theory provides the foundational lens. Argyris and Schön (1978) distinguished single-loop learning (correcting errors within existing frameworks) from double-loop learning (questioning the frameworks themselves). Complementarity is single-loop: the human does their part, the AI does its part, errors are corrected within the existing division of labor. Co-evolution is double-loop: the interaction changes what the human can do, what the AI can do, and how the division of labor itself is structured. The MIT SMR-BCG finding that learning with AI predicts success is, I believe, a proxy measure for double-loop learning. But the field has not yet built the instruments to distinguish it from single-loop adaptation.
Vygotsky's (1978) zone of proximal development offers a complementary lens. ZPD describes the space between what a learner can do independently and what they can do with guidance. In human-AI interaction, the AI may function as a scaffold that expands the human's zone of capability, and the human's feedback may function as a scaffold that expands the AI's operational effectiveness. If this bidirectional scaffolding is occurring, it would look exactly like what the enterprise surveys report: people getting better at their jobs in ways they cannot fully attribute, systems getting more useful in ways that are difficult to disentangle from the humans using them.
My Two Cents
I have spent enough time in AI transformation to know that absence of evidence is not evidence of absence. The MIT SMR-BCG program is the most rigorous longitudinal data we have on AI in organizations, and it keeps circling the same finding: the organizations that succeed are the ones where humans and AI systems are learning together. That is co-evolution. It is the mechanism I have been looking for. It is just not being called that, and not being measured with instruments designed to detect it.
What we need are longitudinal studies tracking capability change in both the human and the system at the workflow level over repeated interaction cycles. Until those exist, co-evolution will remain something I believe is happening based on pattern recognition and indirect evidence rather than something I can point to in a dataset. I am comfortable with that uncertainty, but I want to name it clearly: this is a hypothesis in search of its measurement instrument.
If you are measuring AI's impact in your organization, ask whether your metrics capture how human capability has changed through interaction with AI, not just how AI has augmented existing capability. The difference between those two questions is the difference between complementarity and co-evolution, and the answer may determine whether your AI investment compounds or plateaus.
Read to Learn More
Academic: Argyris, C., & Schön, D. A. (1978). Organizational learning: A theory of action perspective. Addison-Wesley.
Industry: Ransbotham, S., Kiron, D., Khodabandeh, S., Iyer, S., & Das, A. (2025). The emerging agentic enterprise. MIT Sloan Management Review and Boston Consulting Group.
References
Argote, L. (2011). Organizational learning research: Past, present and future. Management Learning, 42(4), 439–446.
Argyris, C., & Schön, D. A. (1978). Organizational learning: A theory of action perspective. Addison-Wesley.
Ransbotham, S., Khodabandeh, S., Kiron, D., Candelon, F., Chu, M., & LaFountain, B. (2020). Expanding AI's impact with organizational learning. MIT Sloan Management Review and Boston Consulting Group.
Ransbotham, S., Kiron, D., Candelon, F., Khodabandeh, S., & Chu, M. (2022). Achieving individual and organizational value with AI. MIT Sloan Management Review and Boston Consulting Group.
Ransbotham, S., Kiron, D., Khodabandeh, S., Chu, M., & Zhukov, L. (2024). Learning to manage uncertainty, with AI. MIT Sloan Management Review and Boston Consulting Group.
Ransbotham, S., Kiron, D., Khodabandeh, S., Iyer, S., & Das, A. (2025). The emerging agentic enterprise. MIT Sloan Management Review and Boston Consulting Group.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review. https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces
Wolfe, D., Price, M., Choe, A., Kidd, F., & Wagner, H. (2025). Revisiting UTAUT for the age of AI: Understanding employees' AI adoption and usage patterns through an extended UTAUT framework. arXiv preprint arXiv:2510.15142.