The Smart Position on AI is Between Panic and Propaganda
- Sheldon Dunn, MBA, PhD

- Mar 7
- 4 min read
Overclaiming the impact of AI is becoming a business model.
Don't get me wrong. I like large language models. I have been using them heavily for a long time, and I think they already create real value in research, writing, analysis, and certain kinds of operational work.
But using a tool is not the same as surrendering your judgment to the people selling it. Technology firms overclaim. Enthusiasts amplify. Markets reward bold stories. Meanwhile, actual adoption still has to survive the stubborn complexity of converting a "capability" into a customer service experience.
That is the tension behind the chart below. The real question is not whether AI matters.
It does.
The real question is whether the evidence being used to sell its labor-market impact is strong enough to justify the certainty being projected onto it.

The chart is not the problem. The sermon built on top of it is.
The chart itself is not what bothered me.
What bothered me was watching a theoretical exposure graphic get preached as though it had already measured the future of work.
Within hours, people were using it to announce that AI is “coming for all knowledge work,” that managers should prepare for sweeping white-collar disruption, and that the only real question is how fast the red will swallow the blue.
That is not analysis; it is hype laundering.
A vendor publishes a model. Influencers strip away the assumptions. Then the simplified version starts circulating as managerial truth. By the time it reaches LinkedIn, a speculative graphic has turned into economic scripture.
The blue shape is not destiny.
The blue area is not job loss. It is not even direct job coverage. It is a theoretical exposure estimate built by combining O*NET task data, Anthropic's own usage data, and a 2023 framework (Eloundou et al., 2023) that scores tasks partly by whether an LLM could cut the time to complete them by at least half. That estimate is then rolled up into very broad occupational categories.
Useful? Maybe. But it is still a model of possible task acceleration, not the labor market caught on camera (anthropic.com).
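To make concrete why this is a constructed ceiling rather than a measurement, here is a minimal sketch of that kind of rollup. Everything in it is a hypothetical stand-in, not Anthropic's or Eloundou et al.'s actual methodology or data: the threshold rule, the task weights, and the example occupation are all invented for illustration. The point is that the final "exposure" number is downstream of modeling choices like these.

```python
def task_exposure(speedup_factor: float, threshold: float = 2.0) -> int:
    """Score a task as exposed (1) if an LLM could hypothetically speed it
    up by at least the threshold factor, else not exposed (0).
    The 2x threshold is an assumption, chosen by the modeler."""
    return 1 if speedup_factor >= threshold else 0

def occupation_exposure(tasks: list[tuple[float, float]]) -> float:
    """Aggregate task-level scores into one occupational exposure share,
    weighting each task by its (assumed) share of time spent on the job."""
    total_weight = sum(weight for _, weight in tasks)
    exposed = sum(weight * task_exposure(speedup) for speedup, weight in tasks)
    return exposed / total_weight

# Hypothetical occupation: three tasks as (estimated LLM speed-up, time share).
analyst_tasks = [(3.0, 0.5), (1.2, 0.3), (2.5, 0.2)]
print(occupation_exposure(analyst_tasks))  # 0.7 -> "70% exposed" on paper
```

Nudge the threshold from 2.0 to 2.6 and the same occupation drops to 50% exposed. Nothing about the real labor market changed; only an assumption did. That is the sense in which the blue area is a model output, not a photograph.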
That distinction should matter a lot more than it apparently does.
People keep staring at the blue area as though it were a neutral measurement of what AI can “really do.” It is not. It is a constructed ceiling. And constructed ceilings have a funny habit of becoming destiny the moment evangelists repeat them often enough.
The red does not owe the blue anything.
The pious reading of this chart is that the red will eventually expand until it fills the blue.
Maybe it will. Maybe it won't. (Probably not.)
A capability estimate does not get to skip the hardest part of technology adoption: fitting into real work.
Real organizations are not clean task lists. They are handoffs, exceptions, trust problems, compliance demands, legacy systems, customer expectations, and local workarounds. Vendors are usually much better at imagining where their tools could fit than understanding where they actually will.
So the gap is not automatically evidence of delayed inevitability.
It may also be evidence that the ceiling was oversold.
The evidence does not support the swagger.
If people were reading the report carefully, the commentary would be much more restrained.
The report does not show broad white-collar displacement. Its main labor-market finding is that unemployment has not systematically increased for highly exposed workers since late 2022. It does find tentative evidence of slightly slower hiring for recent college graduates (22–25 years old) entering exposed occupations.
Those are signals to be aware of, not permission slips for CEOs, consultants, and AI evangelists to speak as though mass displacement is destiny.
What this chart actually revealed
The most revealing thing about Anthropic's report wasn't what it said about the future of work, but the discourse around it.
It showed how quickly people confuse “AI could speed up parts of a task” with “AI is coming for your white-collar jobs.” It showed how easily a (likely biased) vendor estimate becomes received wisdom once enough people repeat it confidently. And it showed, again, how much AI commentary is really just message amplification, instead of real insight.
That is the missing argument.
The manager’s job is not faith
Managers do not get paid to believe. They get paid to decide under uncertainty.
A serious manager does not need to choose between AI panic and AI worship. The useful stance is skepticism tied to operations.
Treat charts like this as prompts, not prophecies. Ask where the tool has already improved speed, quality, margin, or customer experience in a workflow like yours. Ask what extra verification, training, and redesign it requires. Ask which part of the claim is observed behavior and which part is vendor extrapolation.
That is the smart position on AI.
Not fear. Not faith. Judgment.
Footnote:
Anthropic’s report is a company-published research article, not a peer-reviewed journal publication. Eloundou et al. (2023) is an arXiv preprint, which is also not peer reviewed in the journal sense. That does not make either source useless, but it does mean both should be treated as provisional rather than as settled evidence.
References
Anthropic. (2026, March 5). Labor market impacts of AI: A new measure and early evidence. https://www.anthropic.com/research/labor-market-impacts
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models (arXiv preprint No. arXiv:2303.10130). arXiv. https://arxiv.org/abs/2303.10130
National Center for O*NET Development. (2026). O*NET database (Version 30.2) [Data set]. O*NET Resource Center. https://www.onetcenter.org/database.html

The article argues that managers should avoid both extreme reactions to AI, panic and blind hype, and instead approach it with skepticism and judgment. A point that stood out to me is that just because AI can theoretically speed up parts of a task does not mean it will automatically replace entire jobs. Real work environments include handoffs, exceptions, and trust issues that models often ignore. I've noticed something similar in my own classes. AI tools can generate outlines, explanations, or even rough analyses very quickly. That definitely speeds up parts of the work, but it does not replace the actual thinking needed to understand the material or apply it correctly. For example, when working on finance or accounting…