“Conviction Collapse” and the End of Software as We Know It
In “An Ordinary Evening in New Haven,” the poet Wallace Stevens wrote, “It is not in the premise that reality is a solid.” That line came to mind during a fascinating conversation with Harper Reed, which amounted to something like “It is no longer in the premise that software is a product.”
Harper is one of the most creative technologists I know, someone who ran engineering for the Obama 2012 campaign, cofounded Threadless, and now runs a small team in Chicago that operates more like an art studio than a startup. He gave an amazing talk at our first AI Codecon last year that presaged a lot of what has followed as people have committed to full-on agentic coding. Harper told me that he’s now having trouble describing what he’s doing, because the ground keeps shifting under his feet.
“We raised money about a year ago,” he told me. “And then we kind of just couldn’t execute well, in a quality way, on the thing that we wanted to execute, which was building AI-based workflow tools. And part of it was every time we dug in, it just got wilder and wilder. We’d say, ‘Oh, we’ll just make this nice little thing that you can chat with,’ and we’d dig in and we’d be like, ‘Well, the answer is to make a thousand of these.’ It doesn’t make sense to have one universal agent.”
He’s genuinely excited. But he described what he’s feeling as “conviction collapse.” As he put it, in the old world, you raise money, and nine months later you come back with a product. In that intervening time, you’ve talked to hundreds of customers. You’ve honed your worldview, and you’ve had time to build and defend your conviction.
Now? “You invest in my company today, on Thursday I’m going to come with the same amount of stuff that would have come with nine months in the prior times. It’s just so fast. And so you don’t have the time to fall in love the same way. You just don’t have the time to enjoy and define and defend your conviction around your product.” That’s an eye-opening insight. Quintessential Harper.
The result is that they build an entire product, complete with landing pages, show it to someone, get feedback, and then just build another entire product. Harper said, “Every time we hit a wall, we are like, ‘Okay, what do we get from that?’ And then we just roll that learning into the next iteration.”
The product may be a process
We have this idea that a product is a thing, when in fact a product may now be a dynamic set of possibilities that are called out by a process.
Harper and his cofounder Dylan Richard at 2389 Research have leaned into this. Their space in Chicago runs more like an art studio than a product studio. Harper described it to me this way: “It’s max creativity. It’s max optionality. Very high tech, some robots, a lot of art. Music is always playing, and I have good people hanging out, and then we just wait for the company to arrive.”
People push back on this. They ask about whiteboards and market surveys. “And I’m like, no, maybe, but that’s not the point. The point is that it will come. It’s gonna be like a visitor.”
Harper said something like, “I remember my brother and I building Legos together when we were kids, and my brother saying, ‘I need to find this piece.’ And I said, ‘Okay, I won’t look for it,’ with the idea that there’s no way to find it if you’re looking for it. It’ll just come to you.”
That reminded me of another poem, this time Blake’s “Eternity”:
He who binds himself to a joy
Does the winged life destroy.
He who kisses the joy as it flies
Lives in eternity’s sunrise.
Joy is something that happens when you’re doing something else, and if you’re focused on it, it always evades you. Software products seem to have become a bit like that too.
Skills and the other things you bring to the table
One of the threads in our conversation was about what a “product” even looks like in this new world.
AI is not just a tool. It is itself a substrate that we shape. It’s a medium, like clay or marble or bronze for a sculptor, or words for a writer. Everybody had access to the same English that Shakespeare did, but Shakespeare made something out of it that nobody else could. Creating a software product is increasingly like creating a document or an image or a piece of music. And that means that it can range from something throwaway to an enduring work of art.
Harper brought up Fluxus, the art collective: Nam June Paik, Yoko Ono, John Cage. “A lot of what they were doing was stuff that people would look at and just be like, ‘a toddler could do that.’ It’s like, well, did the toddler do it? Did they bring the toilet into the gallery? That was a thing. You can’t do it again.” That brought up Wallace Stevens for me again: “A poem is the cry of its occasion, a part of the thing, not about it.”
Harper also noted that the current AI moment recalls the spirit of the early web. He compared it to 2001, 2002, 2003. “I was an honorable mention for some Ars Electronica thing. I literally had no idea what Ars Electronica was. I’m just building weird shit in a room in my apartment with ten other people. Essentially a commune. And we are just building weird stuff. There was no reason to build it.”
There’s a lot of serendipity. This has always been the case in creative professions. I just learned, for instance, that Shakespeare started writing sonnets (which at the time were an art form largely sponsored by rich patrons) instead of plays during a plague-induced hiatus in the production of plays in London. And I’d previously learned that 1599, the year in which he wrote three of his greatest plays, Henry V, Much Ado About Nothing, and Hamlet, was marked by the retirement of one of his company’s leading actors, which meant he no longer needed to create parts for him. Serendipity, indeed.
Harper replied with a great story about the development of taco rice, an Okinawan dish that is exactly what it sounds like: rice, lettuce, cheese, ground beef, tomatoes. Except the Japanese put Kewpie mayo on top instead of sour cream. His theory is that sour cream wasn’t readily available in Japan, mayo was, and the result is something that has forked off into its own evolutionary tree. It is no longer equivalent to its American source. It’s different, and arguably better.
This is what he’s seeing with the fluidity and availability of AI-generated code. The ease with which you can see something new and try to either merely emulate it or to build on it is now akin to what has long been possible in literature, music, and art. Successful software products have always drawn imitators, but now ordinary individuals can see something they like (or don’t like) and build their own version of it. Our friend Noah Raford has told us that he used Claude Code to reverse engineer and replace a Chinese app that runs his home sauna. The copy doesn’t replicate the functionality one-to-one. It has a bunch of stuff Noah actually needs. It’s a “yes, and” to the core functionality, plus things the original never bothered with. (I’m now thinking of trying that trick with the Nest app, which, shamefully, no longer supports the original Nest thermostat. Here is a device that still works perfectly well 15 years after I installed it, and Google is trying to force me and everyone else to throw it away and upgrade.)
“I want to make it again and make it better” is now always an option.
Skills may be a sign of what some future “products” might look like
I asked Harper whether one kind of product might be a bundle of skills and context and UI that sets up the user to solve their own unique problem using their own AI. (Think Jesse Vincent’s Superpowers as a model for this kind of product.)
That got us off on a discussion of skills Harper and crew have worked on.
Harper’s cofounder Dylan, who was raised as a Quaker, built a Quaker business practice skill for his agents. It lets agents deliberate and think and work together without being unnecessarily noisy, without pushing.
Dylan also built something called the Review Squad skill. The Review Squad generates five personas with different biases and experience levels along a “sophistication spectrum” from novice to expert, then has them review the code independently. “Most people do so much work to get rid of the biases so we all have an equal interaction,” Harper noted, “but the biases are what makes teams good.”
The skill also tries to eliminate any preexisting context. As the documentation for the skill notes, “Dispatch a panel of subagents, each role-playing a person with a different level of tech sophistication, who land on a site with zero context. They report what they understand, what confuses them, and where they give up.”
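As a rough illustration, here’s a minimal Python sketch of the pattern Harper describes. The persona names, the prompt wording, and the `ask` callable (a stand-in for whatever your agent framework uses to dispatch a prompt to a subagent) are all invented for illustration; this is not 2389 Research’s actual skill:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch of a "Review Squad"-style skill: a panel of reviewer
# personas spread along a sophistication spectrum, each sent the same
# artifact independently, with zero shared context between them.

@dataclass
class Persona:
    name: str
    sophistication: str  # position on the novice-to-expert spectrum

SPECTRUM = [
    Persona("first-time visitor", "novice"),
    Persona("casual user", "low"),
    Persona("power user", "medium"),
    Persona("working developer", "high"),
    Persona("staff engineer", "expert"),
]

def build_prompt(persona: Persona, artifact: str) -> str:
    # Every prompt is self-contained, so each persona "lands" with no
    # preexisting context and reports understanding, confusion, and quit points.
    return (
        f"Role-play a {persona.name} ({persona.sophistication} sophistication) "
        "with zero prior context about this project.\n"
        "Report: what you understand, what confuses you, and where you give up.\n\n"
        + artifact
    )

def review_squad(artifact: str, ask: Callable[[str], str]) -> List[Dict[str, str]]:
    # Dispatch each persona independently; no persona sees another's report,
    # so their biases stay intact instead of averaging out.
    return [
        {"persona": p.name, "report": ask(build_prompt(p, artifact))}
        for p in SPECTRUM
    ]
```

The interesting design choice is the one Harper points to: the personas are reviewed in isolation rather than in a shared thread, precisely so their differing biases survive instead of converging.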
Harper and Dylan’s studio in Chicago is also playing with agents that have a private social media platform where they can post “if they feel compelled,” not on a schedule. They’re extracting skills from their own work practices rather than writing them from scratch. They’re adding sandwich shop owners and imagined aliens to their code review just to see what happens. Harper finds that “people who are thinking much more about the social interactions of agents are having much more fun, and seem to have a little bit more productivity, than the people who are just relegating them to tools.”
Speaking of extracting skills, Harper also mentioned that he had talked with our friend Nat Torkington about how Nat had supplied a body of knowledge and extracted a set of skills from it that matched what he wanted to do. This is also very much something we’re exploring at O’Reilly, working with our authors to find out what kinds of skills are hidden in their books, and what new kinds of products we might build as we understand that our job is to upskill agents as well as people.
Harper did offer one caveat. “It’s not clear that Nat’s skills would work for me,” he said. “That pattern is really powerful, where you take something that is a corpus of knowledge and just say, ‘Okay, LLM, let’s extract something.’” His point, though, is that while there are commonalities, each person and each unique situation might draw out something different. This is in many ways analogous to the skills of human experts: they have a deep reservoir of knowledge that they adapt to each new situation. That’s why we see the evolution of our skills platform as a conversation between ourselves, our community of experts, and our customers. If you would like to be part of that conversation, let us know at [email protected].
The role of play in creativity
Harper and I also talked about how the spirit of play and “what if?” has gone missing in today’s overheated venture capital market, where every exploration has hanging over it the overriding question of whether it can get funded and how much money it can make. Even Larry and Sergey might not have won in today’s market. They were trying to do something cool and necessary, and only started thinking about it as a business once Google unfolded, kind of like the way Harper and his brother eventually found the Lego piece.
AI will be really good at making certain processes more efficient. But it won’t be really good at making new processes unless people start to focus on that. And that’s a human creativity thing.
Harper and I both worry about the same thing: So much of Silicon Valley right now is making affordances for capital to win. What are the affordances that would help humans to win? Harper frames it as short-term versus long-term capitalism. I think about it in terms of mechanism design, the structures and incentives that shape what outcomes are even possible.
Yesterday, he and Dylan were talking about open-endedness in evolution, about how “we thought we were at a destination, and it turns out we’re not.” The challenge today isn’t just what AI can do for us but discovering what kind of environment, what kind of practice, what kind of play lets more interesting things emerge.