Thousands of CEOs Admit AI Had No Impact on Employment or Productivity, and It’s Resurrecting a 40-Year-Old Paradox
The Numbers That Stopped Everyone in Their Tracks
If you’ve been anywhere near a business publication in the last two years, you’ve been bombarded with promises about artificial intelligence. It’s going to 10x your productivity. It’s going to replace half your workforce. It’s going to rewrite the rules of the global economy.
So when a new study landed showing that nearly 90% of CEOs say AI has had no measurable impact on employment or productivity over the past three years, you could almost hear the collective record scratch across boardrooms worldwide.
The research, published by the National Bureau of Economic Research in February 2026, surveyed nearly 6,000 CFOs, CEOs, and executives across the United States, United Kingdom, Germany, and Australia. It is the first representative international dataset on firm-level AI use.
Here’s the headline figure that’s making economists sit up straighter: more than 80% of firms reported zero impact on either employment or productivity from AI over the last three years. Some analyses put the figure even higher, around 90%.
And here’s the kicker: about two-thirds of these executives reported using AI regularly. But their average usage? A whopping 1.5 hours per week. A quarter of them don’t use it at all.
Think about that for a second. Billions of dollars in investment. Endless earnings-call mentions (374 S&P 500 firms touted AI positively in calls from late 2024 through 2025 alone). And the people running these companies are spending less time with AI each week than most of us spend scrolling social media during lunch.
I’ll admit, when I first read these numbers, I felt a weird mix of vindication and concern. Vindication because maybe we’re not all about to be replaced by chatbots tomorrow. Concern because, well, what exactly are we spending all this money on?
Wait, What’s the Solow Paradox, and Why Should You Care?
This is where things get genuinely fascinating. Because what’s happening with AI right now isn’t new. It’s not even particularly surprising to anyone who’s studied economic history. In fact, it’s so familiar that economists have started dusting off a term that hadn’t been in heavy rotation since the 1980s: the Solow productivity paradox.
The Original 1987 Observation
Robert Solow, a Nobel Prize-winning economist, made an observation in 1987 that would become one of the most quoted lines in technology economics: “You can see the computer age everywhere but in the productivity statistics.”
Here’s what he was reacting to. The 1960s had brought an explosion of new technologies: transistors, microprocessors, integrated circuits, memory chips. Businesses were buying computers. Offices were being “computerized.” Everyone expected these new machines to supercharge the economy.
Instead, productivity growth actually slowed — dropping from 2.9% annually between 1948 and 1973, to just 1.1% after 1973.
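The gap between those two rates looks small on paper, but compounding makes it enormous. A quick illustrative calculation (the 25-year horizon is my own choice, not from the study) shows what the two trajectories mean for output per hour:

```python
# Compound the two annual productivity growth rates cited above
# (2.9% before 1973 vs. 1.1% after) over an illustrative 25 years.
def compound(rate: float, years: int) -> float:
    """Total growth factor after `years` of annual growth at `rate`."""
    return (1 + rate) ** years

fast = compound(0.029, 25)  # pre-1973 trajectory  -> ~2.04x
slow = compound(0.011, 25)  # post-1973 trajectory -> ~1.31x

print(f"2.9% for 25 years: {fast:.2f}x output per hour")
print(f"1.1% for 25 years: {slow:.2f}x output per hour")
print(f"gap: {fast / slow - 1:.0%} more output on the faster path")
```

Over a generation, the faster path produces roughly half again as much output per hour. That is the scale of what went missing, and why economists cared so much.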
Computers Promised a Revolution. They Delivered Paperwork.
What went wrong? In short: newfangled computers were producing too much information, generating agonizingly detailed reports and printing them on reams of paper. Offices that were supposed to become sleek, efficient hubs of productivity turned into paper-choked bureaucracies where people spent more time managing the outputs of their new machines than actually getting work done.
Sound familiar?
It should. Because today, Apollo chief economist Torsten Sløk is saying almost exactly the same thing Solow said four decades ago: “AI is everywhere except in the incoming macroeconomic data.”
The technology is ubiquitous. The earnings-call mentions are endless. But when you look at jobs reports, productivity metrics, or profit margins outside the Magnificent Seven, the signal is, well, basically static.
How the Paradox Finally Resolved Itself (and What That Means for AI)
Here’s the hopeful part of this history lesson: the productivity paradox did eventually resolve. But it took time. A lot of time.
Computers spread widely in the 1970s and 1980s. The productivity surge that everyone expected didn’t arrive until the late 1990s and early 2000s — after companies had fundamentally redesigned their workflows around the technology.
In other words, you can’t just drop a new tool into an old system and expect magic. You have to rebuild the system around the tool. That takes years, sometimes decades, of trial, error, organizational restructuring, and cultural adaptation.
This pattern is so well-established that some economists argue technological revolutions almost always follow a J-curve: initial disruption and even productivity dips, followed by a steep climb once the technology is truly integrated.
Why Isn’t AI Moving the Needle Yet? (Hint: It’s Not the Tech)
So why exactly are we stuck in the flat part of the J-curve? The NBER study and related research point to three overlapping reasons.
Adoption ≠ Integration
Here’s a metaphor I keep coming back to: buying a gym membership is not the same as getting in shape. You can have the membership card in your wallet for years. You can even tell people you “go to the gym.” But if you’re only showing up for 1.5 hours a week and spending most of that time scrolling your phone between sets, the before-and-after photos aren’t going to change.
Most companies have adopted AI the way people adopt gym memberships. They’ve signed up. They’ve got the logo on their website. They might even have a “Head of AI” role. But they haven’t redesigned their jobs, their processes, or their organizational structures around the technology.
Stanford economist Nick Bloom, one of the study’s co-authors, puts it bluntly: “As of yet, there has not been a big effect.”
The “1.5 Hours a Week” Problem
This is the detail that stopped me cold. 1.5 hours per week. That’s the average time senior executives spend actually using AI. 69% of executives clock less than an hour weekly. 28% spend zero time with it.
Here’s the uncomfortable question this raises: How can you possibly assess whether a technology works if you’re barely using it? How can you lead an AI transformation if you haven’t personally experienced what the technology can (and can’t) do?
It’s like a restaurant critic reviewing a place they’ve only seen from the parking lot.
The study notes that employees spend a bit more time with AI, about 1.8 hours per week on average, but that’s still not exactly a full embrace. We’re in the very early days of the adoption curve, and most organizations are still treating AI as a curiosity rather than a core operational tool.
Measuring What Matters (and What We Can’t See)
There’s also a measurement problem. Productivity, in economic terms, is “output per hour worked.” But AI’s early benefits often show up in ways that don’t immediately register in that calculation.
Imagine a marketing team that uses AI to brainstorm 50 campaign ideas in an hour instead of 10. They’re working the same hours. The output (campaigns launched, revenue generated) hasn’t changed yet. But the quality of the work might be improving in ways that pay off months or years down the line.
Or consider a software engineer using AI coding assistants. She’s writing more lines of code per hour; that’s a clear productivity gain. But if the company hasn’t changed its pricing, its product roadmap, or its go-to-market strategy to capture that efficiency, the revenue numbers won’t budge.
A related study from Duke, the Richmond Fed, and the Atlanta Fed found something intriguing: CFOs report productivity gains from AI averaging 1.8% in 2025, but when researchers calculated implied gains using actual revenue and employment data, those gains were much smaller.
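The study doesn’t publish its calculation line by line, but the standard back-of-envelope version of an “implied” labor productivity gain divides output growth by employment growth, since labor productivity is output per worker. A minimal sketch with hypothetical firm numbers (the 2.0% and 1.5% figures are my own illustration, not from the study):

```python
def implied_productivity_gain(rev_growth: float, emp_growth: float) -> float:
    """Labor productivity is output per worker, so its growth rate is
    the ratio of output growth to employment growth, minus one."""
    return (1 + rev_growth) / (1 + emp_growth) - 1

# Hypothetical firm: revenue up 2.0%, headcount up 1.5% over the same year.
gain = implied_productivity_gain(0.020, 0.015)
print(f"implied productivity gain: {gain:.2%}")  # ~0.49%
```

A firm like this one would self-report feeling more productive, yet the number you can actually back out of its books is well under 1% — the same flavor of gap the researchers found between the 1.8% CFOs reported and the much smaller implied figure.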
John Graham, a finance professor at Duke and study co-author, explains: “It’s not really hitting the top line yet in full force. There is some level of delay in here for sure.”
The Future Forecast: What CEOs Actually Expect
Here’s where the story takes a turn from “maybe this is all hype” to “okay, they’re still betting big.”
The same CEOs who reported negligible past impact are forecasting substantial changes over the next three years. Specifically, they expect AI to:
- Increase productivity by 1.4%
- Boost output by 0.8%
- Cut employment by 0.7% (approximately 1.75 million jobs across the surveyed countries)
That employment projection is particularly interesting, and complicated. While executives expect net job reductions, individual employees surveyed see a 0.5% increase in employment from AI.
This gap in expectations isn’t just a statistical curiosity. It suggests that employees are more optimistic about AI creating new roles and opportunities, while executives are more focused on efficiency gains and headcount optimization. The truth? Both are probably right, just looking at different time horizons and different parts of the workforce.
The Federal Reserve research also found that companies anticipate a shuffling of the workforce, with a shift away from routine clerical jobs toward more skilled technical roles. This mirrors historical patterns: technology doesn’t eliminate work; it changes what kind of work we do.
What This Means for You (Yes, You, Reading This)
Alright, enough economics. Let’s talk about what actually matters: What do you do with this information?
If You’re a Business Leader
First: take a deep breath. You’re not behind. The data shows that almost everyone is in the same boat, even the companies with the biggest AI budgets. The 1.4% productivity lift that executives are forecasting? That’s modest, not revolutionary. You have time to get this right.
But here’s the catch: the gap between leaders and laggards is already widening. A PwC study from April 2026 found that 74% of AI value was captured by just 20% of firms. Those top performers are twice as likely to automate decisions without human intervention. They’re building data foundations and governance structures now, not later.
So here’s the pragmatic, non-hype advice:
- Start with one workflow, not everything. Pick a single, measurable process (customer support triage, meeting summary generation, first-draft email responses) and implement AI there. Measure the before and after. Learn from what works and what doesn’t.
- Lead by example. If your team sees you using AI regularly, they’ll follow. If they see you avoiding it, well… you know how that goes.
- Plan for a 3-5 year horizon. The ROI won’t show up in your next quarterly report. That’s okay. Build the infrastructure, train your people, and expect a J-curve.
If You’re an Employee
I’ll be direct: AI is not coming for your job tomorrow. The data says so. What it is doing, slowly but surely, is changing what “good” looks like in your role.
The best thing you can do right now is become AI-fluent. Not an expert. Not a prompt engineer. Just comfortable. Spend 30 minutes a week experimenting with whatever AI tools your company provides (or free ones like ChatGPT, Claude, or Perplexity). Learn what they do well and where they fail.
Here’s a small but meaningful example: A friend of mine in HR started using AI to draft job descriptions. It saved her maybe an hour a week, not life-changing. But over six months, that hour compounded into time she spent on higher-value work: actually talking to candidates, refining hiring processes, thinking strategically. That’s the real productivity story. It’s not dramatic. It’s cumulative.
The employees who thrive in the next five years won’t be the ones who resist AI. They’ll be the ones who figure out how to make it boring, just another tool in their kit, like email or spreadsheets, that they use without thinking.
Thousands of CEOs have now admitted what many of us suspected: AI hasn’t yet delivered the productivity revolution it promised. But that’s not a failure; it’s a pattern. The same thing happened with computers in the 1980s, with electricity in the early 1900s, and with steam power before that.
Major technologies take time to reshape how we work. The gap between invention and impact is measured in decades, not quarters. As Nick Bloom and his colleagues put it: the effects are coming, they’re just not here yet.
So maybe the smartest thing we can do right now is stop asking “when will AI transform everything?” and start asking “what’s one small thing I could do differently this week?”
Because if history is any guide, the revolution won’t arrive with a bang. It’ll arrive quietly, one workflow at a time, until one day we look up and realize we can’t imagine working any other way.
Source Links
NBER Working Paper: The original “Firm Data on AI” study (nber.org/papers/w34836), essential primary source citation
Robert Solow’s 1987 quote: The original New York Times Book Review piece where Solow made his famous observation (standupeconomist.com/pdf/misc/solow-computer-productivity.pdf)
Brookings productivity data: Historical context on 1948-1973 vs. post-1973 productivity slowdown (brookings.edu/wp-content/uploads/2016/06/199904.pdf)
Financial Times analysis: S&P 500 earnings call AI mentions (ft.com/content/e93e56df-dd9b-40c1-b77a-dba1ca01e473)
MIT Sloan research: The 40% productivity gain claim that set early expectations (mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-can-boost-highly-skilled-workers-productivity)
Stanford AI Index Report: Context on global AI investment levels (hai.stanford.edu/ai-index)