For decades, organizations have designed work around specialization. Roles were scoped to a single discipline, grouped into functions, and staffed through skill taxonomies that reward depth over breadth. This fixed architecture made sense when coordination across disciplines was expensive and the rate of change was slow enough that expertise could be tracked within stable boundaries.
In practice, roles have always been broader than the job descriptions that define them. Many individuals undertake quiet heroics to keep work moving across boundaries they were never formally asked to cross. Deloitte’s skills-based organization research found that 71% of surveyed workers already perform work outside the scope of their job descriptions, and only 24% report doing the same work as others with the same title and level (Deloitte, 2023). AI is not creating this problem. It is exposing it faster than role architectures can adapt.
The concept itself is not new. Bell Labs used the “Member of Technical Staff” designation from its earliest days, a deliberately generalist title that emphasized flat structure and cross-disciplinary contribution over hierarchical specialization. The MTS model produced some of the most consequential innovations of the 20th century, from the transistor to information theory, precisely because it refused to confine technical talent to narrow functional lanes (ScoutNow, 2025). Today, companies including OpenAI, NVIDIA, and Snowflake have revived the MTS title for similar reasons, though the context has shifted from research labs to AI-native product organizations.
The pattern extends beyond job titles. Dr. Sarabeth Berk Bickerton’s work on hybrid professional identity argues that individuals increasingly integrate multiple professional identities (researcher, designer, strategist, educator) into a single coherent sense of self, and that traditional single-discipline titles fail to capture how they actually work (Berk, 2021). Berk’s framework distinguishes between holding multiple roles sequentially and genuinely fusing them into an integrated identity. The composite role, as we define it here, is the organizational architecture that makes that fusion deliberate rather than accidental.
The hypothesis: AI-era work demands explicit recognition of composite roles, positions that integrate capabilities from traditionally separate disciplines and condense their activities into a single performer. Organizations that use job design research to deliberately design and support these roles will outperform those that allow them to emerge informally.
We define a composite role as a position that deliberately integrates capabilities from multiple traditionally separate disciplines into a single performer, designed to collapse coordination costs (Thompson, 1967) in environments where AI has made single-discipline work insufficient.
The evidence is arriving in real time. Job postings for forward deployed engineers (a composite role integrating software engineering, product thinking, and client consulting) increased more than 800% between January and September 2025 (Indeed/Financial Times, 2025). Palantir pioneered the title, and now OpenAI, Anthropic, Google DeepMind, Salesforce, and dozens of AI-native companies are building FDE teams. OpenAI has FDE operations across Europe and the Middle East, led from London. Salesforce committed to 1,000 FDEs in April 2025. An independent analysis of 1,000 FDE job postings found that 79% are vertical-agnostic, requiring engineers who can learn new domains rapidly rather than specialize in one (Bloomberry, 2025). This post examines the composite role phenomenon through the forward deployed engineer as a case study.
Three takeaways
1. Composite roles are a structural response to a coordination problem AI has fundamentally altered, not just a talent trend.
Hackman and Oldham’s (1976) Job Characteristics Model identified five core dimensions that predict motivation and performance: skill variety, task identity, task significance, autonomy, and feedback. The forward deployed engineer scores high on all five. Skill variety is inherent in integrating engineering, domain understanding, and client engagement. Task identity comes from owning end-to-end delivery rather than a functional slice. Task significance, autonomy, and feedback follow from working directly with the client: the performer sees the consequences of the work and owns the decisions that shape it.
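Hackman and Oldham combined these five dimensions into a single Motivating Potential Score (MPS). The formula, from the JCM literature, encodes a point that matters for composite roles: autonomy and feedback enter multiplicatively, so a deficit in either cannot be compensated by variety alone.

```latex
\text{MPS} \;=\; \frac{\text{Skill Variety} + \text{Task Identity} + \text{Task Significance}}{3}
\;\times\; \text{Autonomy} \;\times\; \text{Feedback}
```

A role that integrates disciplines but strips the performer of autonomy, as informally emerged composite roles often do, scores low no matter how varied the work.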
This is not accidental. Thompson (1967) theorized that organizations design coordination mechanisms to manage interdependence, and reciprocal interdependence, where outputs of each unit become inputs for others, requires the highest-bandwidth coordination. When AI makes technical implementation faster but organizational integration harder, the rational response is to collapse coordination costs by putting multiple capabilities in a single role.
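A back-of-envelope way to see why collapsing roles pays off: among n reciprocally interdependent specialists, pairwise coordination channels grow as n(n−1)/2, so each discipline folded into a single performer removes an entire row of channels. The sketch below is illustrative, not drawn from Thompson's text, and the team composition is a hypothetical example.

```python
def coordination_channels(n_roles: int) -> int:
    """Pairwise communication channels among n reciprocally
    interdependent roles: n * (n - 1) / 2."""
    return n_roles * (n_roles - 1) // 2

# A delivery handled by four specialists (engineer, product manager,
# consultant, domain expert) versus one composite performer plus a
# client counterpart:
specialist_team = coordination_channels(4)  # 6 channels to keep in sync
composite_pair = coordination_channels(2)   # 1 channel
print(specialist_team, composite_pair)
```

The quadratic growth is why the coordination tax dominates as interdependence rises, and why removing handoffs, rather than speeding them up, is the rational response.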
The Tavistock Institute’s sociotechnical systems research, originating in British coal mines in the 1950s, established the foundational principle here: when you change the technical system, you must jointly redesign the social system or performance degrades (Trist & Bamforth, 1951). The composite role is exactly this kind of joint redesign, adapting role boundaries to match a technical reality where AI has made single-discipline work insufficient.
2. Composite roles challenge the assumption that expertise requires narrow specialization, and research on expertise acquisition supports this challenge.
Ericsson et al.’s (1993) deliberate practice framework established that expertise develops through domain-specific practice with feedback. This is widely interpreted as an argument for specialization. But Epstein (2019) demonstrated that the most impactful performers in complex, unpredictable environments develop breadth across domains alongside depth.
The FDE validates this empirically: Palantir described the role as resembling a startup CTO, requiring rapid domain learning, technical problem-solving, and stakeholder navigation simultaneously. The value comes not from depth in any single area but from integrating across areas at speed.
Critically, AI tooling is what now makes this integration feasible at a level it previously was not. AI lowers the skill floor in adjacent disciplines enough that one person can credibly span them. A strong engineer can now use AI to accelerate domain learning, generate client-facing documentation, and prototype solutions in unfamiliar technical stacks, collapsing what previously required a team of specialists into a single capable performer augmented by AI. This is the mechanism the composite role depends on, and it explains why FDE demand has surged now rather than a decade ago.
Berk Bickerton’s research on hybrid professional identity reinforces this from the individual’s perspective. When someone integrates engineering, domain consulting, and product thinking into a fused identity rather than switching between them, they develop what Berk calls a “hybrid zone,” the intersection where their distinctive value lives (Berk, 2021). The composite role gives that hybrid zone organizational legitimacy. Without it, individuals performing this integration remain invisible to the systems that hire, promote, and develop them.
The Harvard Business Impact 2025 study found that static, role-based workforce planning is failing because AI transforms roles nonlinearly. Europe is seeing this play out in real time: IDC’s 2024 EMEA Employee Experience Survey found that 63% of European workers anticipate parts of their jobs will be automated within two years, and 70% of new positions are expected to be directly influenced by AI technology (IDC, 2024).
3. The critical gap is not whether composite roles emerge, but whether organizations design them deliberately or inherit them accidentally.
Morgeson and Humphrey’s (2006) Work Design Questionnaire established that job design predicts not only motivation but also role clarity and retention. When composite roles emerge informally, as they already have for 71% of workers performing outside their scope, they accumulate demands without structural support.
Karasek’s (1979) demand-control model predicts that high-demand, low-control work produces strain. Undesigned composite roles are high-demand by definition and low-control by default, because the performer lacks formal authority across the disciplines they integrate. The difference between a well-designed FDE and informal scope creep is not breadth of capability. It is whether the architecture supports the integration.
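Karasek's model partitions work along two axes, demands and decision latitude. The quadrant labels below follow his 1979 typology; the function itself is just an illustrative sketch of where designed versus undesigned composite roles land.

```python
def karasek_quadrant(high_demand: bool, high_control: bool) -> str:
    """Classify a job in Karasek's (1979) demand-control typology."""
    if high_demand and high_control:
        return "active"       # demanding but self-directed: learning and growth
    if high_demand and not high_control:
        return "high strain"  # demands without authority: the strain quadrant
    if not high_demand and high_control:
        return "low strain"
    return "passive"

# An informally emerged composite role: broad demands, no formal authority.
print(karasek_quadrant(high_demand=True, high_control=False))  # high strain

# A deliberately designed composite role grants the matching authority.
print(karasek_quadrant(high_demand=True, high_control=True))   # active
```

The design lever is the second argument: the same demands produce growth or burnout depending on whether the role architecture grants control.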
Even deliberately designed composite roles can fail. The PwC 2025 AI Jobs Barometer, covering nearly a billion job adverts across 15 EMEA countries, found that AI-exposed roles are disproportionately concentrated in white-collar occupations (PwC, 2025). The coordination burden therefore falls on a narrow segment of the workforce, intensifying the strain on exactly the people most likely to hold composite roles. If the role architecture doesn't account for this concentration, burnout follows regardless of how well the role was designed on paper.
The longer view
Cognitive task analysis, developed by Crandall, Klein, and Hoffman (2006), offers the methodology for designing composite roles well. CTA extracts the cognitive demands, decision requirements, and judgment patterns embedded in expert performance. Applied to composite roles, CTA reveals where the integration is happening: which decisions require multiple disciplinary perspectives simultaneously, where judgment cannot be decomposed into separate functional inputs. This is how you move from a job description that lists skills to a role architecture that specifies how those skills interact under operational conditions.
Enid Mumford’s ETHICS methodology, developed at the Manchester Business School in the 1980s, anticipated this challenge from a different angle. Mumford argued that effective system design required participative methods that gave workers direct input into how their roles were structured around new technology (Mumford, 1983). The parallel to composite role design is direct: the people who will perform these integrated roles are the best source of insight into where the integration actually happens, what cognitive load it creates, and where it breaks down. Designing composite roles top-down, without input from the performers, risks the same failures Mumford identified in technology implementations four decades ago.
Our two cents
We have been watching the FDE model with professional interest because it validates something we have both observed for years: the future of AI-era work is not narrower specialization but deliberate integration across disciplines. The FDE works because it collapses the distance between building and deploying, between technical capability and organizational context. That integration is not a nice-to-have. It is the mechanism through which AI implementations actually succeed in complex environments.
What concerns us is that most organizations are not designing these roles. They are letting them happen, then wondering why the people in them burn out or underperform. The EU AI Act, in force since February 2025, now requires providers and users of AI systems to maintain staff with adequate AI literacy (Article 4; European Parliament, 2024). That obligation will increasingly demand composite capabilities — people who can bridge technical AI knowledge with domain expertise, compliance requirements, and operational realities. Organizations that treat role design as an engineering problem, with the same rigor applied to system design, will build the workforce the AI era demands. The rest will keep writing job descriptions that list twelve competencies and hope for the best.
What comes next
The series continues with how to design composite roles using cognitive task analysis and job design research (Part 2), what happens to career ladders, performance evaluation, and compensation when the unit of work is no longer a single discipline (Part 3), and additional composite role archetypes emerging in AI transformation (Part 4).
Audit your organization for informal composite roles. Look for the people performing across disciplines without formal authority or structural support. Ask: were these roles designed, or did they happen? The answer tells you whether your workforce architecture is keeping pace with your technology architecture.
Further reading
Pragmatic Engineer: What Are Forward Deployed Engineers? (2025)
SVPG: Forward Deployed Engineers (2025)
Harvard Business Impact: Rethinking Roles in the Age of Intelligent Machines (2025)
References
Berk, S. (2021). More Than My Title: The Power of Hybrid Professionals in a Workforce of Experts and Generalists. Houndstooth Press.
Bloomberry. (2025). What I learned analyzing 1,000 forward deployed engineer jobs.
Crandall, B., Klein, G., & Hoffman, R. R. (2006). Working minds: A practitioner’s guide to cognitive task analysis. MIT Press.
Deloitte. (2023). Skills-based organization research: Navigating the end of jobs.
Epstein, D. (2019). Range: Why generalists triumph in a specialized world. Riverhead Books.
Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406.
European Parliament. (2024). Regulation (EU) 2024/1689 (AI Act). Article 4.
Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work. Organizational Behavior and Human Performance, 16(2), 250–279.
Harvard Business Publishing Corporate Learning. (2025). Rethinking roles in the age of intelligent machines.
IDC. (2024). EMEA Employee Experience Survey: AI and the future of work in Europe.
Indeed/Financial Times. (2025). Forward deployed engineer job postings data.
Karasek, R. A. (1979). Job demands, job decision latitude, and mental strain. Administrative Science Quarterly, 24(2), 285–308.
Morgeson, F. P., & Humphrey, S. E. (2006). The Work Design Questionnaire (WDQ): Developing and validating a comprehensive measure for assessing job design. Journal of Applied Psychology, 91(6), 1321–1339.
Mumford, E. (1983). Designing human systems: The ETHICS method. Manchester Business School.
PwC. (2025). AI Jobs Barometer: EMEA report.
ScoutNow. (2025). Rebirth of the title: Member of Technical Staff.
Thompson, J. D. (1967). Organizations in action. McGraw-Hill.
Trist, E. L., & Bamforth, K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3–38.