Organization Design for AI · Insight

Train the Monkey First: Why Your AI Operating Model Is the Hard Problem You’re Not Solving

Global AI spending will hit $2.52 trillion in 2026. Most of it is flowing into the pedestal. The monkey — the operating model — sits in the corner, ignored.


April 13, 2026

HandsOn Insights

Astro Teller runs X, Alphabet’s moonshot lab. He uses a thought experiment to explain how his teams decide where to invest their time: imagine your goal is to train a monkey to recite Shakespeare while standing on a pedestal. You have two tasks — train the monkey and build the pedestal. Where do you start?

His answer: spend zero time on the pedestal. The pedestal is trivially solvable. The monkey is the hard problem. If the monkey can’t be trained, the pedestal is a waste of materials and hours. And yet, as Teller points out, teams gravitate to the pedestal first — because building it creates visible, demonstrable progress. You get something to show leadership. Training the monkey produces nothing but a long list of reasons why the problem is hard.

Global AI spending will hit $2.52 trillion in 2026 (Gartner). That number is up 44% year-over-year. Most of it flows into infrastructure — servers, accelerators, data center capacity. The pedestal. Meanwhile, 80% of organizations that have adopted AI have not redesigned a single workflow around it (McKinsey State of AI, 2025). The organizational structures that are supposed to absorb AI at scale — the roles, the decision rights, the team designs, the workflows — remain untouched. The monkey sits in the corner, ignored.

This piece argues that the operating model is the monkey, and almost every enterprise AI program is building pedestals.

The budget tells the story

The numbers are unambiguous. Enterprise AI budgets allocate 30–40% to infrastructure (compute, platforms, data pipelines), another 30–40% to model development and API costs, and 5–10% to training and change management (EC-Council, 2025). Organizational redesign — the structural changes required to actually absorb AI into how a company works — barely registers as a line item.

“AI adoption is fundamentally shaped by the readiness of both human capital and organizational processes, not merely by financial investment.”

— John-David Lovelock, Gartner · January 2026

The readiness isn’t there. According to McKinsey’s State of AI survey, only 21% of organizations using generative AI have redesigned even some of their workflows. The rest — nearly 80% — are layering AI on top of existing processes. Same org chart, same reporting lines, same role definitions, same decision-making protocols. A new tool dropped into an old structure.

This is the textbook definition of building the pedestal.

What happens when you train the monkey

Freeport-McMoRan, the world’s largest publicly traded copper producer, ran into this wall and broke through it. The company deployed AI models to optimize copper processing at its aging Bagdad mine in Arizona. The technology worked — the models could predict and improve ore concentration yields. Scaling across operations failed.

The problem was organizational. AI-generated recommendations sat unused because the people who operated the mills didn’t trust outputs from a system they hadn’t been part of building. Data scientists and metallurgists spoke different languages. Optimization suggestions that crossed departmental lines had no owner.

The general manager at Bagdad assembled a cross-functional team that pulled from every division the AI initiative would touch: data scientists, metallurgists, mining engineers, and members of Freeport’s central data-science group. They adopted agile sprint cycles. They created a product manager role for AI-driven process changes. They moved to quarterly OKR planning across operations.

+5% copper · +10% throughput · $350–500M EBITDA
After organizational redesign at Bagdad: copper production rose 5%, throughput exceeded 85,000 metric tons per day (+10%), and recovery improved by a full percentage point. System-wide implementation is projected to add 200 million pounds of copper per year, worth an estimated $350–500 million in annual EBITDA. Source: McKinsey.

Freeport didn’t have better AI. They had a better operating model around the AI. They trained the monkey.

Why 94% of companies keep building pedestals

McKinsey’s 2025 data is clarifying. Only 6% of organizations generate 5% or more of their EBIT from AI. Those high performers are 3.6 times more likely to have redesigned their organization alongside AI deployment. 55% of them fundamentally reworked workflows when deploying AI. The other 94% did not.

BCG’s parallel finding lands in the same place: 60% of companies generate no material value from AI despite continued investment (BCG, “AI at Work 2025”).

Why the resistance? Teller’s insight applies directly. Organizational redesign is invisible, slow, uncomfortable, and politically charged. Buying servers is none of those things. A CTO can present a procurement decision to the board in a single slide. A CHRO explaining why twelve roles need to be restructured, three new team configurations need to be piloted, and decision-making authority needs to shift from function heads to cross-functional leads — that presentation doesn’t fit on a slide. It fits in a transformation program that takes quarters, not weeks.

The pedestal is fast, tangible, and easy to approve. The monkey is slow, ambiguous, and requires leadership to make uncomfortable structural calls.

93% of executives surveyed cite culture and organizational readiness as barriers to AI progress (HBR/PYMNTS, 2025). They know the monkey exists. They fund the pedestal anyway — because that’s the line item they know how to manage.

The six places the monkey hides

In organizational design for AI, there are six domains where the monkey-first principle applies. Each one requires structural decisions that most AI programs skip.

D01 · Foundation · Strategy & Value. Which use cases actually move EBIT? Most companies have 30–50 AI pilots. Fewer than 5 are tied to measurable business outcomes with owners and deadlines.

D02 · Foundation · Structure. Where does AI capability live: in the IT department, in a center of excellence, or distributed across business units? A design decision with massive implications for speed, governance, and talent retention.

D03 · Activation · Decision Architecture. Who is authorized to make decisions based on AI outputs? This is the domain that separates organizations that scale from organizations that pilot forever.

D04 · Activation · Process & Workflow. The McKinsey 80%. In practice: AI as an optional input — a dashboard nobody checks, a recommendation nobody acts on. Workflow redesign puts AI output in the critical path.

D05 · Activation · Capabilities & Culture. Data literacy, prompt engineering, human-AI collaboration protocols, algorithmic risk awareness. Article 4 of the EU AI Act has made this a legal requirement since February 2025.

D06 · Foundation · System Governance. Who classifies AI systems by risk? Who maintains the inventory? Who monitors for drift, bias, and compliance? Without governance, every AI deployment is a liability.

Each of these domains is a monkey. Each one is harder than buying compute. And each one is required for AI to generate sustained business impact.

What Monday morning looks like

If you recognize your organization in the 94%, here’s the sequencing that separates pedestal-builders from monkey-trainers.

01 · Week 1 · Map what you actually have. Inventory every AI initiative currently running. For each one, answer three questions: who owns the business outcome? Has the workflow been redesigned to make AI a critical input (not an optional dashboard)? Does someone have formal decision authority based on AI outputs? Most organizations discover that the answers are "nobody," "no," and "no."

02 · Weeks 2–4 · Pick one initiative and redesign the operating model around it. Choose the initiative closest to business impact. Assemble a cross-functional team — not a committee, a team with a product manager, shared OKRs, and a sprint cadence. Redesign the workflow so the AI output is in the critical path. Assign decision rights explicitly. This is what Freeport did at Bagdad, and it's what the 6% have in common.

03 · Months 2–3 · Build the governance spine. Classify your AI systems. Assign ownership for the inventory. Establish a reporting line to the board or executive committee. This isn't bureaucracy — it's the precondition for scaling. Without governance, every additional AI deployment increases risk faster than it increases value.

04 · Ongoing · Reallocate budget. If your AI budget is 90% technology and 10% organizational change, start inverting that ratio. The infrastructure will keep getting cheaper. The operating-model redesign will not get easier by waiting.

The pedestal is already built

The uncomfortable truth for most enterprise AI programs in 2026: the pedestal is done. The compute exists. The models work. The APIs are available. The infrastructure has never been more accessible or more affordable.

The monkey — the organizational redesign, the workflow restructuring, the decision rights, the role definitions, the governance spine — is the only thing standing between your AI investments and measurable EBIT impact. Freeport-McMoRan proved it. McKinsey’s data confirms it. Gartner’s $2.52 trillion price tag quantifies the scale of the misallocation.

Astro Teller’s question is the right diagnostic: what’s the monkey in your AI program? If nobody in the room can answer that question, you’re building a very expensive pedestal.

What’s the monkey in your AI program?

Map it in 4 minutes — across all six domains.

The HandsOn AI Operating Model Diagnostic — a structured three-week engagement built on the six-domain framework. Start with the free assessment or go directly to the conversation.
