Scenario Planning for AI and the “Jobless Future”

We all read it in the daily news. The New York Times reports that economists who once dismissed the AI job threat are now taking it seriously. In February, Jack Dorsey cut 40% of Block’s workforce, telling shareholders that “intelligence tools have changed what it means to build and run a company.” Block’s stock rose 20%. Salesforce has shed thousands of customer support workers, saying AI was already doing half the work. And a Stanford study found that software developers aged 22 to 25 saw employment drop nearly 20% from its peak, while developers over 26 were doing fine.

But how are we to square this news with a Vanguard study that found that the 100 occupations most exposed to AI were actually outperforming the rest of the labor market in both job growth and wages, and a rigorous NBER study of 25,000 Danish workers that found zero measurable effect of AI on earnings or hours?

Other studies could contribute to either side of the argument. For example, PwC’s 2025 Global AI Jobs Barometer, analyzing close to a billion job ads across six continents, found that workers with AI skills earn a 56% wage premium, and that productivity growth has nearly quadrupled in the industries most exposed to AI.

This is exactly the kind of contradictory, uncertain landscape that scenario planning was designed for. Scenario planning doesn’t ask you to predict what the future will be. It asks you to imagine divergent possible futures and to develop a strategy that improves your odds of success across all of them. I’ve used it many times at O’Reilly and have written about it before with COVID and climate change as illustrative examples. The argument between those who say AI will cause mass unemployment and those who insist technology always creates more jobs than it destroys will only be resolved by time. Both sides have evidence. Both are probably right at some level. And neither framing is terribly helpful for anyone trying to figure out what to do next.

In a scenario planning exercise, you identify two key uncertainties and draw them as crossing vectors, dividing the possibility space into four quadrants. Each quadrant describes a different future. The power of the technique is that you don’t bet on one quadrant. You look for actions that make the most sense across all of them. And you’re not limited to doing this for only one uncertainty. You can repeat the exercise multiple times, each time expanding your sense of possible futures and clarifying your convictions about the most robust strategies for adapting to them.

For AI and jobs, the most obvious crossing vectors to model might seem to be how fast AI grows in its ability to replace human work and how quickly that capability is adopted. This is, in effect, scenario planning about whether the “AI is unprecedented” or “AI is normal technology” camp is correct. That might well be a useful pair of axes.

There’s no question that AI capability is accelerating. SWE-Bench scores for coding went from solving 4.4% of problems in 2023 to 71.7% in 2024, and we saw what was widely described as a “step change” beyond that in December of 2025. Anthropic’s new Mythos model seems to have upped AI capabilities even further. Even before Mythos, McKinsey estimated that today’s technology could in theory automate roughly 57% of current US work hours. But capability is not adoption. Goldman Sachs notes that AI appears to be suppressing hiring more than destroying existing jobs in the near term. Yale’s Budget Lab, analyzing US labor data from 2022 to 2025, found no massive shift in the share of workers across occupations. Deployment, not capability, seems to be the limiting factor.

As a result, it makes sense to me to synthesize these two factors (capability increase and rate of adoption) into a single vector that we can call the scale and speed of impact. The question on this axis, therefore, is not just “How good does AI get?” but also “How fast does the economy actually reorganize around it?”

What’s a good second vector to cross with this one? If you’ve read my book WTF? or other things I’ve written about the role of human choices in shaping the future, you probably won’t be surprised that the second vector I’ve chosen reflects my conviction that the future depends on whether AI capability is primarily used to achieve efficiencies in existing work or to do more, to solve new problems and serve more human needs.

When Dorsey says a smaller team can now do the same work, that’s efficiency. When Insilico Medicine uses AI to design a drug for idiopathic pulmonary fibrosis in a fraction of the time traditional discovery takes (with over 173 other AI-discovered drugs also now in clinical development and 15 to 20 entering pivotal Phase III trials this year), that’s not replacing a human job. That’s doing something that wasn’t being done before. But we shouldn’t content ourselves with the idea that the “do more” axis is just about technical breakthroughs. It might be in serving a vastly larger number of people far more effectively and efficiently. When Todd Park says that his company, Devoted Health, “is on a mission to dramatically improve the health and well-being of older Americans,” that is a call to do more. Given the size of the existing markets that need to be transformed, it is likely that even with 10x or 100x efficiency gains from AI, Devoted’s 1,000x mission might require more resources, including people.

What will be scarce?

I’ve always assumed that the “do more” orientation is chiefly a moral argument driven by human judgment about what kind of world we’d prefer to live in. As the IMF noted earlier this year, “Work brings dignity and purpose to people’s lives. That’s what makes the AI transformation so consequential.” A world of concentrated value capture leading to a split between those with capital to invest and a permanent unemployed underclass is the stuff of dystopian science fiction.

But it’s not just a matter of inequality and the importance of work to human self-esteem. I’ve also become convinced that companies that lean into new possibilities and expand markets do better than those that simply do the same things more cheaply. There are a number of fascinating economics arguments for why the jobless future is just not going to happen. These arguments provide useful insight into the structural changes to the economy that workers, business leaders, and politicians should be planning for.

Noah Smith pointed to a draft economics paper by Garicano, Li, and Wu that helps explain how the trade-offs between efficiency and expanding output might affect jobs. The authors note that “the effect of AI on an occupation depends not just on which tasks AI can perform but also on how costly it is to unbundle those tasks from the job.” They model jobs as bundles of tasks and distinguish between “strongly bundled” jobs (where the same person has to do multiple interdependent tasks) and “weakly bundled” ones (where tasks can easily be split between a human and an AI). AI replaces the weakly bundled jobs first. But even for weakly bundled jobs, automation only replaces human labor after demand becomes inelastic, after AI is so productive at the task that making more of the output hits diminishing returns.

Until that point, increased productivity from AI can be focused on expanding output rather than shrinking headcount. That is another way of saying that whether AI replaces workers or augments them depends in large part on whether there is unmet demand to absorb the increased productivity. If we use AI only to do the same things more cheaply, we hit that inelastic point fast, and jobs disappear. If we use it to do new things, demand keeps expanding and people keep working. University of Chicago economist Alex Imas believes that just how much demand elasticity there is on a job-by-job basis is one of the big questions of our time.
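The elasticity logic above can be made concrete with a toy constant-elasticity model. This is a sketch under simplifying assumptions (price falls in proportion to unit cost; elasticity is constant), not anything from the Garicano, Li, and Wu paper or Imas's work; the function name and the elasticity values are hypothetical, chosen only to illustrate the mechanism.

```python
def labor_demand_ratio(productivity_gain: float, demand_elasticity: float) -> float:
    """Toy constant-elasticity sketch (illustrative assumption only).

    If output per worker rises by `productivity_gain` and price falls in
    proportion to unit cost, quantity demanded scales as
    gain ** demand_elasticity, so the labor needed to meet that demand
    scales as gain ** (demand_elasticity - 1).
    """
    return productivity_gain ** (demand_elasticity - 1)

# Elastic demand (unmet demand absorbs the gains): 2x productivity
print(round(labor_demand_ratio(2.0, 1.5), 2))  # 1.41: more workers needed
# Inelastic demand (doing the same things, just cheaper)
print(round(labor_demand_ratio(2.0, 0.5), 2))  # 0.71: fewer workers needed
```

The crossover at an elasticity of 1 is the whole argument in miniature: the same productivity gain expands employment when there is unmet demand to absorb it, and shrinks employment when there isn't.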

But that’s not all. In a new essay called “What Will Be Scarce?” Imas points out that when a new technology makes one sector dramatically more productive, one part of the economy shrinks but another grows. When agriculture was mechanized, 40% of the American workforce moved off farms, but the economy actually grew, because people spent their rising real incomes on fundamentally different things. Imas argues, drawing on work by Comin, Lashkari, and Mestieri, that income effects account for over 75% of observed patterns of structural change. As people get richer, they want fundamentally different things.

What are those things? Imas calls it “the relational sector”: goods and services where the human element is itself part of the value: teachers, nurses, therapists, hospitality workers, artisans, performers, personal chefs, community curators, and more. He opens his piece with Starbucks. In pursuit of economic efficiency, the company tried to automate more and more of its operations. CEO Brian Niccol concluded that it was a mistake, that handwritten notes on cups, ceramic mugs, and good seats drove customer satisfaction. More baristas are being hired per store and automation is being rolled back.

But there’s far more to the relational sector than service jobs. Imas identifies a further dimension in what René Girard called mimetic desire, the idea that people don’t just want objects for their functional properties. They want things that others want, and they want them more when they’re scarce and exclusive. (Hobbes and Rousseau made this same point.) Imas’s experimental research shows that willingness to pay roughly doubles when people learn that others will be excluded from a product. And in new work with Graelin Mandel, he finds that AI involvement undermines the perceived exclusivity of a good. Human-made artwork gained 44% in value from exclusivity; AI-generated artwork gained only 21%. The mere involvement of AI made the work feel inherently reproducible.

This means the relational sector has naturally high income elasticity. If AI makes production cheaper and real incomes rise, spending shifts toward goods where the human element matters. This is Baumol’s cost disease working as a feature, not a bug: The sector that resists automation becomes relatively more expensive, and that’s precisely where spending and employment grow. This is an economic mechanism that could power the upper quadrants of the scenario grid that we will look at shortly, not just as a matter of moral choice but as a structural tendency of rich economies getting richer.

I’m going to include both Noah’s ideas and Alex’s in my scenario planning exercise, since they fit right in.

Four possible futures

Let’s look at how the two vectors cross each other and give us four futures.

[Figure: The four futures defined by the two crossing vectors]

Upper left: The Augmentation Economy. AI capability grows but adoption is gradual, and workers are augmented rather than replaced. A programmer who once wrote 100 lines of code a day now ships features that used to take a team. A nurse practitioner aided by AI diagnostic tools provides care that once required a specialist. A small business owner uses AI to access legal and financial services previously available only to large corporations. This is the quadrant where the PwC finding about the 56% wage premium makes the most sense. AI becomes a tool that makes individual workers more productive and more valuable, and the gains flow broadly. What makes this a positive, growing economy is, at least in part, the choices employers make. They use the increased efficiency to build better services, not just to make them cheaper. Doctors and nurses have more time with patients and less time with paperwork. As services become more efficient, they can be offered to more people at lower cost.

Lower left: The Slow Squeeze. AI grows, adoption is gradual, and the primary use is efficiency. This is in many ways the most insidious quadrant, because it doesn’t look like a crisis. It looks like a normal economy with slightly fewer entry-level jobs each year, slightly more pressure on wages, and slightly less bargaining power for workers. That Stanford study on young software developers is a signal from this quadrant. So is the HBR finding that companies are laying off workers because of AI’s potential, not its performance. The Slow Squeeze is the world where companies use AI to pad margins without passing the gains along or investing in new capabilities.

Lower right: The Displacement Crisis. AI advances fast and is adopted rapidly, almost entirely for efficiency. This is the future the doomsayers warn about, the Citrini Research scenario of unemployment topping 10% and the S&P 500 tanking. Block’s 40% cut is a signal from this quadrant, whether or not Dorsey’s prediction that most companies will follow suit within a year turns out to be right. Deutsche Bank analysts warn that “AI redundancy washing,” companies blaming layoffs on AI that are really driven by other cost-cutting, will be a significant feature of 2026. But the fact that Wall Street rewarded Block with a 20% stock price jump for firing 4,000 people tells you what the current incentive structure is optimizing for.

Upper right: The Great Transformation. AI capability advances rapidly and is adopted fast, but the primary use is to do more, not just the same with less. Whole new industries emerge. The WEF’s projection of 170 million new roles by 2030 comes true, far exceeding the 92 million displaced. AI-driven drug discovery actually delivers on its promise. New forms of education, personalized to every learner, actually reach people the old system never served. The transition is still brutal, because the people losing old jobs and the people getting new ones are not the same people, in the same places, with the same skills. Brookings has identified 6.1 million workers with high AI exposure and low adaptive capacity, 86% of them women in clerical and administrative roles. But the net direction is toward more human capability, not less.

Imas’s framework suggests that this quadrant will feature an explosion of durable jobs in the relational sector. Some of these will be high-touch service jobs: doctors, nurses, therapists, teachers, personal trainers, craft producers, experience designers, hospitality workers, and roles that haven’t been invented yet. The relational sector already employs nearly 50 million people in the US. But another big part of it will be creating exclusive products and services that become objects of desire. Art critic Dave Hickey calls this “the big beautiful art market” that happens when industrial products are “sold on the basis of what they mean rather than what they do.” The structural change model predicts that both of these areas will grow as a share of the economy, not because they resist automation as a technical matter but because not being automated is part of their value proposition.

Noah Smith’s taxonomy of future work also helps fill in what life may actually look like across these quadrants. He divides AI-affected jobs into three categories: specialists whose jobs are “strongly bundled” (for example, an experienced engineer whose judgment can’t be separated from the rest of what they do), salarymen (generalists whose value comes from knowing how to wrangle AI and plug its ever-shifting gaps, much like the Japanese corporate model where long-tenured employees rotate between divisions and accumulate firm-specific knowledge rather than portable technical skills), and small businesspeople (entrepreneurs who use AI as leverage to run what would previously have required a much larger team). This is the future that Steve Yegge envisions with its “millions of one-person startups.”

In the upper quadrants, all three categories thrive. Specialists do well because AI expands the scope of what their bundled expertise can accomplish. Salarymen thrive because companies that are doing more, not just doing the same with less, need people who can adapt to constantly changing tool capabilities within the context of their business. And small businesses proliferate because AI gives a one-person shop the productive capacity that used to require a department.

In the lower quadrants, specialists may survive, but salarymen face pressure as companies optimize for headcount reduction rather than capability expansion, and small businesses struggle because the efficiency-first economy compresses the margins they need to exist.

News from the future

In scenario planning, once you’ve chosen your vectors and imagined the resulting quadrants, you watch for “news from the future,” data points that signal which direction the world is actually heading. As with any scatter plot, the points are all over the map at first, but over time you start to see the trend lines emerge.

Right now, the signals are mixed.

News from the lower quadrants: Challenger, Gray & Christmas reports that AI was a significant contributing factor in nearly 55,000 US layoffs in 2025. Employee anxiety about AI-driven job loss has jumped from 28% in 2024 to 40% in 2026. Forty percent of employers globally told the WEF they plan to reduce their workforce where AI can automate tasks within five years. And the entry-level job market is tightening in ways that compound over time even if they don’t show up in headline unemployment numbers. Brookings found that the “gateway” occupations that serve as stepping stones from low-wage to middle-wage work are among the most exposed to AI, threatening career pathways, not just individual jobs.

News from the upper quadrants: The PwC wage premium data. The Vanguard finding that AI-exposed occupations are growing, not shrinking. The explosion of AI drug discovery programs. MIT’s David Autor has shown that 60% of today’s US employment is in job categories that didn’t exist in 1940. New task creation is how technology has always generated new work, and there’s no reason to believe AI is exempt from that pattern, unless we choose to use it only for efficiency.

There may also be some signal in reports that usage among developers is becoming more intensive and continuous, from multistep coding workflows to automated agents running in loops. Engineers at companies like Meta are “tokenmaxxing,” treating AI consumption as a productivity benchmark. This is driving rapid revenue growth for AI providers but squeezing their margins as infrastructure costs rise. That margin pressure may sound like bad news, but it’s actually a classic pattern by which a technology crosses from “tool” to “infrastructure.” Cloud computing margins were terrible until scale and hardware improvements drove unit costs down, at which point the providers who had built habit and lock-in harvested enormous returns. AI inference costs have been dropping roughly 10x per year, and price competition is accelerating that decline. The margin squeeze is the mechanism by which AI becomes cheap enough to be ubiquitous. And the tokenmaxxing engineers are doing dramatically more iterations, more exploration, with more ambitious scope. That’s “doing more” behavior, not an efficiency behavior.

It’s still unclear, though, whether all those tokens are producing real value or whether some of this is the AI equivalent of crypto mining. If most of those tokens are productive, we’re looking at a productivity boom. If many are wasted, the adoption curve may have a big dip in it before the industry matures. Either way, though, the direction is toward AI becoming economic and technology infrastructure. It’s important to remember that tokens spent trying out prototypes that are rejected are not necessarily wasted. They can be part of a new development process that’s expanding the space of possibilities.

News that doesn’t fit neatly into any quadrant: We appear to be in what Smith calls a “no-hire, no-fire” economy, where workers hunker down in their current jobs and refuse to switch, and companies keep them rather than hiring new workers. That’s consistent with a world where people sense that their portable technical skills are depreciating, so they cling to the firm-specific knowledge that still makes them valuable where they are. It’s also consistent with the NBER Denmark study finding task reorganization without job loss: AI is replacing tasks, not (yet) jobs. Nonetheless, it is clear that a dearth of entry-level positions will be a serious issue.

A University of Pittsburgh researcher has been calling state unemployment offices one by one to assemble the granular data that doesn’t yet exist in federal statistics, because our measurement tools are not yet fine-grained enough to see what’s happening. If you’re confused about whether AI is causing job losses, he puts the reason plainly: a lack of data. If AI is having an impact, we may just not be equipped to see it yet with the instruments we have. We’re getting new data points daily. Asking yourself which future they support can gradually increase your confidence in what is coming.

Robust strategy

The goal of a scenario planning exercise is to stretch your thinking so that you can make strategic choices that make sense regardless of which future unfolds. Scenario planners call this a “robust strategy.”
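The logic of a robust strategy can be sketched as a simple maximin rule: score each candidate strategy in every quadrant, then prefer the one whose worst case is best. The strategy names and payoff numbers below are hypothetical, invented purely to illustrate the decision rule, not data from any study cited here.

```python
# Hypothetical payoffs for two strategies across the four quadrants.
strategies = {
    "cut costs with AI": {
        "Augmentation Economy": 1, "Slow Squeeze": 2,
        "Displacement Crisis": 1, "Great Transformation": -1,
    },
    "use AI to expand what's possible": {
        "Augmentation Economy": 3, "Slow Squeeze": 1,
        "Displacement Crisis": 1, "Great Transformation": 4,
    },
}

def robust_choice(strategies: dict) -> str:
    # Maximin: pick the strategy whose worst-case payoff across all
    # quadrants is highest, rather than betting on one quadrant.
    return max(strategies, key=lambda s: min(strategies[s].values()))

print(robust_choice(strategies))  # use AI to expand what's possible
```

The point of the sketch is the decision rule, not the numbers: a robust strategy is evaluated against every quadrant at once, so a strategy that is merely adequate everywhere beats one that is excellent in a single imagined future and disastrous in another.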

If you’re a business leader, the robust strategy is not to ask “How many people can I replace with AI?” It’s to ask “What can we do now that we couldn’t do before?” The companies that will thrive across all four quadrants are the ones that use AI to expand what’s possible, not just to shrink how much they have to spend. Aim for the upper right quadrant, and you’ll do better even if the rest of the world chooses otherwise.

That’s not just scenario planning. It’s Clay Christensen on the lessons of disruptive technologies. A disruptive technology is not defined by the markets it destroys but by the new markets and new possibilities it creates. As Christensen observed, RCA didn’t ignore the transistor; its leaders just thought it wasn’t good enough for its current customers. Sony embraced the new technology and created a new market of portable devices where the quality difference between transistors and vacuum tubes just didn’t matter. And of course, as Clay observed, the disruptive technology continues to improve.

If you’re a worker, one element of robust strategy is to band together, as the Writers Guild did, and to make the case that the productivity gains from AI should be shared with workers and used to amplify their skills and efforts. Don’t resist AI, but instead use it to make yourself even more valuable. Use it to amplify your uniqueness. That is, lean into the augmentation economy. One of the things we’ve learned from the early advances in AI-enabled software engineering is that a great software engineer can get more out of AI than a vibe-coding beginner. This is true of other professions as well. Find ways that your human uniqueness makes the output of AI even more valuable.

Create professional associations that lean into mentorship and an AI-enriched career ladder, but aren’t afraid to take a political stance. The idea that providers of capital are entitled to all of the gains is a pernicious idea that has created an engine of inequality rather than of wide prosperity. It doesn’t have to be that way. Professional associations and other forms of solidarity are a possible source of countervailing power. (But don’t fall into the trap that many unions and professional associations do, of using that power to extract rents rather than increasing value for everyone.) Preferentially choose employers who are investing in training employees for a human + AI future, including at the beginning of the career ladder.

If you’re a specialist, deepen the parts of your expertise that are strongly bundled, the judgment and context and human relationships that can’t be separated from the technical work. If you’re a generalist inside a company, become the person who understands what AI can and can’t do and fills the gaps, whose value comes from adaptability and firm-specific knowledge rather than a fixed set of technical skills. And if you have entrepreneurial instincts, recognize that AI is creating leverage that may make it possible to run a viable business at a scale that previously couldn’t support one.

Imas’s work suggests that the most durable career paths may not be defined by which tasks AI can’t do (a moving target) but by whether the human element is part of what the customer is paying for. The restaurateur, the therapist, the teacher who knows your child, the guide who knows the trail: these jobs don’t survive because AI hasn’t gotten to them yet. They survive because human involvement is the product.

If you’re an entrepreneur, the robust strategy is the one it has always been: look at the world as it is, determine what work needs doing, and do it. Don’t build AI tools that replace humans doing things that are already being done adequately. Build AI tools that let humans do things that have never been done before.

If you’re a policymaker, the robust strategy is to invest in the transition regardless of how fast displacement turns out to be. Create policies that give workers more of a role in how AI is used. Support positions like those of the Writers Guild, which allow workers to get a share of the gains from using AI. And if capital runs wild with labor replacement, tax the gains so the efficiency can be redistributed. Shorten the working week.

Education and lifelong learning programs, portable benefits, support for geographic mobility, and investment in the industries of the future pay off in every quadrant. So does reducing the regulatory friction that keeps new entrants trapped in old cost structures, funding basic research that the market underinvests in, and building the kind of infrastructure (physical and institutional) that enables rapid adaptation.

The future is up to us

I’ll return to the theme that I sounded in my book WTF? What’s the Future and Why It’s Up To Us.

Every time a company uses AI to do what it was already doing with fewer people, it is making a choice for the lower half of the scenario grid. Every time a company uses AI to do something that wasn’t previously possible, to serve a customer who wasn’t previously served, to solve a problem that wasn’t previously solvable, it is making a choice for the upper half. These choices compound, for good or ill. An economy that uses AI primarily for efficiency will slowly hollow itself out.

Looking at the news from the future, both sets of signals are present. The question is which will dominate. AI will give us both the Augmentation Economy and the Displacement Crisis, in different measures in different places, depending on the choices we make.

Scenario planning teaches us that we don’t have to predict which future we’ll get. We do have to prepare for a very uncertain future. But the robust strategy, the one that works across every quadrant, is to focus on doing more, not just doing the same with less, and to find ways that human taste still matters in what is created. As long as there is unmet demand, as long as there are problems we haven’t solved and people we haven’t served, AI will augment human work rather than replacing it. It’s only when we stop looking for new things to do that the machines come for the jobs.
