Should we eliminate Human Intelligence in AI?
Most execs come to me with one strategy: eliminate humans.
“Frictionless. Futuristic. Fully automated.”
It sounds efficient (like the Terminator).
But the paradox is that removing humans doesn’t always reduce costs.
Sometimes, it costs you the whole company.
Take Builder.ai - the AI app-development platform that collapsed earlier this year.
It was designed to make app development “as easy as ordering a pizza.”
In a leaked internal memo, the directive was blunt:
“Positioning must focus on our proprietary AI - human labour isn’t part of the story.”
And for a while their story worked:
Customers bought it. Investors loved it - Microsoft backed it at a valuation of $1.5B.
Yet behind the scenes, 700+ engineers were at the core of the product - doing the heavy lifting, while AI took all the credit.
They insisted humans didn’t matter.
Ironically, humans could’ve been the main differentiator.
Scale AI bet on that:
They didn’t bury the human layer. They built their brand on it.
Using a “human-in-the-loop” model, they built high-quality training data for AI:
Machines handle labelling at scale, while 100,000 people review and curate the output.
And the market rewarded them: in June 2025, Scale secured $14.3B from Meta.
The second-largest private deal in history.
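To make that model concrete, here's a minimal sketch of confidence-based routing - machines label everything, and only the uncertain items get queued for people. The class names and the 0.90 threshold are illustrative assumptions, not Scale's actual pipeline:

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.90  # illustrative cut-off, not a real production value

@dataclass
class MachineLabel:
    item_id: str
    value: str
    confidence: float

@dataclass
class LabellingPipeline:
    accepted: list[MachineLabel] = field(default_factory=list)
    human_review_queue: list[MachineLabel] = field(default_factory=list)

    def route(self, label: MachineLabel) -> None:
        """Machines label at scale; only uncertain items reach a person."""
        if label.confidence >= CONFIDENCE_THRESHOLD:
            self.accepted.append(label)            # machine scale
        else:
            self.human_review_queue.append(label)  # human judgment

pipeline = LabellingPipeline()
pipeline.route(MachineLabel("img_001", "cat", 0.97))   # auto-accepted
pipeline.route(MachineLabel("img_002", "cat?", 0.41))  # queued for a reviewer
```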
Builder.ai could’ve played the same card.
Their 700 engineers could’ve been positioned as a premium tier:
AI generates the code, humans review and validate it.
Because that’s how value is maximised:
Machines provide the scale humans can’t.
Humans provide the judgment algorithms can’t.
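Here's a sketch of what that premium tier could look like - the gate is hypothetical, not Builder.ai's actual workflow, but it captures the split: automated checks supply the scale, a reviewer supplies the judgment.

```python
from dataclasses import dataclass

@dataclass
class GeneratedChange:
    description: str
    diff: str
    passes_ci: bool               # automated checks: the machine's share
    human_approved: bool = False  # reviewer sign-off: the human's share

def can_ship(change: GeneratedChange) -> bool:
    """AI drafts the code; nothing ships without checks AND a person."""
    return change.passes_ci and change.human_approved

change = GeneratedChange("Add checkout endpoint", diff="...", passes_ci=True)
assert not can_ship(change)   # blocked until a human validates it
change.human_approved = True
assert can_ship(change)       # now it clears the premium tier
```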
Eliminating humans is easy.
Knowing where to keep them is strategy.
Which makes the core question:
Where should we position Human Intelligence in AI products to maximise value?
This article explains human-in-the-loop better than most I’ve read - worth a look: Humans in the Loop: Why AI Still Needs Us.