Generative AI in the Real World: The Year in AI with Ksenia Se
As the founder, editor, and lead writer of Turing Post, Ksenia Se spends her days peering into the emerging future of artificial intelligence. She joined Ben to discuss the current state of adoption: what people are actually doing right now, the big topics that got the most traction this year, and the trends to look for in 2026. Find out why Ksenia thinks the real action next year will be in areas like robotics and embodied AI, spatial intelligence, AI for science, and education.
About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2025, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.
Check out other episodes of this podcast on the O’Reilly learning platform.
Transcript
This transcript was created with the help of AI and has been lightly edited for clarity.
00.00: All right, so today we have Ksenia Se. She is the founder and editor at Turing Post, which you can find at turingpost.com. Welcome to the podcast, Ksenia.
00.17: Thank you so much for having me, Ben.
00.20: Your publication obviously covers a lot of the most bleeding edge things in AI, but I guess let’s start with a heat check, which is around the state of adoption. So I talked to a lot of people in the enterprise about what they’re doing in AI. But I’m curious what you’re hearing in terms of what people are actually doing. So, for example, the big topics this year, at least in the startup world, are agents and multimodal reasoning. I think a lot of those are happening in the enterprise [to] various degrees. But what’s your sense in terms of the reality on the ground?
01.05: Yeah. I just recently came from [a] conference for software developers, and it was really interesting to see how AI is widely adopted by software developers and engineers. And it was not about vibe coding—it was people from Capital One, it was people from universities, from OpenAI, Anthropic, telling how they also implement AI in their daily work.
So, I think what we saw this year is that 2025 did not become the year of agents. You know, this conversation about “decade of agents.” But I think 2025 became the year where we got used to AI on many, many levels, including enterprise, business people, but also people who [are] building the infrastructure in the enterprises.
02.00: So, this conference you attended, as you mentioned, there were obviously the people building the tools, but there were also people who were using tools. Right? So, give us a sense of the perspective of the people using the tools.
02.14: So it was mostly a conference about coding. And there were people who are building these coding tools using different agentic workflows. But what was interesting is that there were people from OpenAI [and] Anthropic, and they were pushing the agenda for coders to start using their platforms more because it’s all connected inside. And then, it’s better for you to just use this platform. So it was an interesting talk.
And then there was a talk from MiniMax, which is a Chinese company. And it was super interesting that they have a completely different view on it and a different approach. They see coders and researchers and app developers together, everyone’s together, and that becomes a combination of using and building, and that’s very different. That’s very different from how Western companies presented [it] and how this Chinese company presented it. So I think that’s another thing that we see: just cross-pollination and building together inside different companies, different platforms.
03.34: I’m curious, did you get a chance to talk to people from nontool providers? You mentioned Capital One, for example, the kind of company one associates with the enterprise.
03.47: I haven’t talked to this person specifically, but he was talking a lot about trust. And I think that’s one of the biggest topics in enterprise. Right? How do we trust the systems? And then the topic of verification becomes one of the main ones for enterprises, specifically.
04.07: You mentioned that this year, obviously, we all chatted and talked and wrote and built with agents. But, it seems like the actual adoption in the enterprise is a bit slower than we expected. So what’s your sense of agents in the enterprise?
04.29: I was looking through the articles that I’ve written throughout this year because so many things happened, and it’s really hard to even remember what happened. But in the middle of the year was the “state of AI” [report] by Stanford University. And in this report they were saying that actually enterprises are adopting AI on many levels. And I think it’s a work in progress. It’s not agents, you know, [where you] take them and they work. It’s building these workflows and building the infrastructure for these agents to be able to perform work alongside humans. And the infrastructure level changes, on many different levels.
I just want to maybe go a little deeper on enterprise from your perspective because I think you know more about it. And I’m very curious what you see from an enterprise perspective.
05.26: I think that, actually, there’s a lot of piloting happening. A lot of people are definitely trying and building pilots, prototypes, but that large-scale automation is a bit slower than we thought it would be. So you mentioned coding—I think that’s one area where there’s a lot of actual usage, because that’s not necessarily customer-facing.
05.59: I think the distinction that people make is, you know, “Is this going to be internal or external?” It’s a big kind of fork in terms of how much are we going to push this? I think that one thing that people underestimated going into this, as you mentioned, is that there’s a certain level of foundation that you need to have in place.
A lot of that has to do with data, frankly, given that this current manifestation of AI really relies on you being able to provide it more context. So, it really is going to come down to your data foundation and all those integration points. Now when it comes to agents, obviously, there’s also the extra integration around tools. And so then that also requires some amount of preparation and foundation in the enterprise.
What’s interesting is that there’s actually three options for enterprises generally. The first is they take their existing machine learning platform that they were using for forecasting those kinds of things, structured data, and try to extend that to generative AI.
07.22: It’s a bit challenging, as you [can] imagine, because the models are different, [and] the workloads and the data pipelines are a little more challenging for generative AI. The second option is to go the endpoint route. So you rely mainly on external services: “I’m just going to use API endpoints. Hopefully these endpoints allow me to do some amount of model customization like fine-tuning, maybe some RAG.”
07.48: But the challenge there, of course, is you kind of lose the skill set. You don’t develop the skills to push this technology further because you’re completely reliant on someone else, right? So your internal tech team doesn’t really get better. And then finally, the most bleeding-edge companies, mostly in tech—a lot of them here in Silicon Valley, actually—almost all the Silicon Valley startups are building custom AI platforms.
On the compute side, it’s built on three open source projects: PyTorch, Ray, and Kubernetes, the so-called PARK stack. And then they have some AI models at their disposal, like the open weights models Kimi, DeepSeek, and Gemma.
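[Editor’s aside: the “PARK” pattern Ben mentions is typically wired together by running Ray on Kubernetes, with PyTorch code executing inside the Ray workers. A minimal, hypothetical sketch using the KubeRay operator’s `RayCluster` resource might look like the following; the image tags, replica counts, and resource figures are illustrative assumptions, not details from the episode.]

```yaml
# Hypothetical KubeRay manifest: Kubernetes schedules the pods,
# Ray distributes work across them, and PyTorch code runs in the containers.
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: park-demo
spec:
  headGroupSpec:
    rayStartParams:
      dashboard-host: "0.0.0.0"
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:2.9.0   # illustrative tag
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
  workerGroupSpecs:
    - groupName: gpu-workers
      replicas: 2
      minReplicas: 0
      maxReplicas: 4
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:2.9.0
              resources:
                limits:
                  nvidia.com/gpu: "1"   # PyTorch training runs on these GPUs
```

[In this sketch, Kubernetes handles pod scheduling and autoscaling, Ray handles task and actor scheduling across the cluster, and PyTorch training scripts would be submitted to the cluster as Ray jobs.]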
But anyway, I kind of hijacked your interview. So let me ask you a question. Last year, as I mentioned, people were abuzz about reasoning because of the release of DeepSeek, and then multimodality and agents. So next year, what’s your sense of what the buzzwords will be, given that the current buzzwords, Ksenia, have not been actually kind of fully deployed yet. What will people be kind of excited about?
09.13: Yeah, we will keep talking about agentic workflows, for sure, for years to come. I would drop in a word: robotics. But before that, I would like to return to what you said about enterprises because I think here’s an important distinction about infrastructure and the companies that you mentioned that are building custom platforms, and actual usage.
Because I think this year, as you mentioned, there were a lot of pilots and a lot of intention to use AI in enterprises. So it was [often] someone very excited about AI trying to bring it into the enterprise. An interesting thing happened recently with Microsoft, which deployed everything they built to every one of their clients.
If you imagine how many enterprises are their clients, that becomes a different level of adoption [by] people who didn’t even sign up for being interested in AI. But now through Microsoft, they will be adopting it very quickly in their business environments. I think that’s very important for next year.
10.26: And Google is doing something similar, right?
10.29: Yeah. It’s just that Microsoft is much more enterprise-related. This adoption will be much bigger next year in the enterprise as well.
10.39: So you were saying robotics, which, by the way, Ksenia, the new marketing term is “embodied AI.”
10.47: Embodied AI, physical AI, yeah, yeah, yeah. But you know, robotics is still struggling with the thing that you mentioned: data. There is not enough data. And I think that next year, with all this interest in spatial intelligence and world models in creating this new data, that [will be an] exciting year to observe. I don’t think we will be able to have domestic robots picking up and doing our laundry, but we will be getting there slowly—five, six years. I don’t think it will be next year.
11.25: Yeah, it seems in robotics, they have their own kind of tricks for generating data: learning in the virtual world, learning by watching humans, and then some sort of hybrid. And then also there’s these robotics researchers who are kind of promoting this notion of the robotics foundation model, where rather than having a raw robot just learn everything from scratch, you build the foundation model, which you can just then fine-tune. Hey, instead of folding a towel, you will now fold the T-shirt. But then there’s all these skeptics, right?
I don’t know if you follow the work of Rodney Brooks. He’s like one of the grandfathers of robotics. But he’s a bit skeptical about the whole robotics foundation models. Particularly, he says that one of the main problems of this type of physical robotics is grasping. So it’s basically the sense of touch and the fingers, something we as humans take for granted, which he doesn’t believe that deep learning can get to. Anyway, again, I derailed your [interview]. So robotics. . .
12.53: You know, I think there are interesting things happening here in terms of creating data. Not synthetic data but actual data from the real world, because open source robotics becomes much more popular. And I think what we will see is that the interest is high, especially from children’s perspectives.
And it’s not that expensive now to 3D-print a robot arm and get on NVIDIA and get, I don’t know, a Jetson Thor computer. And then connect it together and start building these robotics projects. Open source; everything is out there now; LeRobot from Hugging Face. So that’s very exciting. And I think that [these projects] will expand the data.
13.40: By the way, Rodney Brooks makes a couple of interesting points as well. One is when we say the word “robotics” or “embodied AI,” we focus too much on this humanoid metaphor, which actually is far from reality. But the point he makes is [that] there’s a lot of robotics already in warehouses. And [they] are not humanoids. They’re just carts moving around.
And then the second point he makes is that robots will have to exist with humans. So those robots that move things around in a warehouse, they are navigating the same space as humans do. There’s going to be a lot of implications of that in terms of safety and just the way the robot has to coexist with humans. So embodied AI. . . Anything else that you think will explode in the popular mindset next year?
14.47: Yeah, I don’t know about “explode.”
14.50: Let me throw a term that, actually, I’ve been thinking a lot about lately, which is this “world model.” But the reason I say I have been thinking about it lately is because I’ve literally started reading about this notion of a world model, and then it turns out I actually came up with seven different definitions of “world.” But I think “world model,” if you look at Google Trends, is a trendy term, right? What do you think is behind the interest in this term “world model”?
15.27: Well, I think it’s all connected to robotics as well. It’s this spatial intelligence that’s also on the rise now, thanks to Fei-Fei Li, who is so very precise and stubborn [about] pushing this new term and creating a whole new field around her.
I was just reading her book The Worlds I See. And it’s fascinating how throughout her career, for the last 25, 30 years, she’s been so precise about computer vision, and now she’s so articulate about spatial intelligence and the world models that they build, that it’s all for better understanding how computers, how robotics, how self-driving can be reliable.
So I don’t know if world models will captivate a majority of the population, but it for sure will be one of the biggest research areas. Now, I’ll throw in the term “AI for science.”
16.35: Okay. Yeah, yeah, yeah. Kevin Weil at OpenAI just moved over to doing AI for science. I mean, it’s super exciting. So what specific applications in science, do you think?
16.50: Well, there is a bunch, right? Google DeepMind is of course ahead of everyone. And, what they’re building to create new algorithms that can solve many different scientific problems is just mind-blowing. But what it started was all these new startups appeared: AI for chemistry, AI for math, and AI science from Sakana AI. So this is one of the biggest movements, I think, that we will see developing more in the next year, because the biggest minds from big labs are moving into the startup area just because they’re so passionate about creating these algorithms that can solve scientific problems for us.
17.38: AI for math, I think, is natural because basically that’s how they test their models. And then AI for drug discovery because of the success of AlphaFold, and things like that. Are there any other specific verticals that you’re paying attention to besides those two? Is there a big movement around AI for physics?
18.07: AI for physics?
18.10: I think there are some people, but not to the extent of math.
18.14: I would say it’s more around quantum computing, all the research that’s happening around physics and going into this quantum physics world. And, also not for the next year, but quantum computers are already here. We still do not fully know how to use them and for what, but NVIDIA is working hard to build this out, with NVQLink to connect GPUs to QPUs.
This is also a very exciting area that just started actively developing this year. And I think next year we will see some interesting breakthroughs.
18.59: So I have a phrase for you which I think is likely next year. But don’t hold my feet to the fire: “AI bubble bursts.”
19.12: Well, let’s discuss what is the AI bubble?
19.15: There definitely seems to be an overinvestment in AI ahead of usage and revenue, right? So definitely, if you look at the preannounced commitments to data center buildout (I don’t know how hard or soft those commitments are), we’re talking trillions of dollars. But as we mentioned, usage is lagging. You look at the biggest private companies in the space, OpenAI and Anthropic: the multiples are off the charts.
They have a lot of revenue, but their burn rates far exceed the revenue. And then obviously they have this announced commitment to build even more data centers. And then obviously there’s that weird circular financing dance that’s happening in AI, where NVIDIA invests in OpenAI and OpenAI invests in CoreWeave, and then OpenAI buys NVIDIA chips.
I mean, people are paying attention. But at the root of it is leverage. And the multiples just don’t make sense for a lot of people. So that’s what the bubble is. So, then, is next year going to be the year of reckoning? Is next year the day the music stops?
20.52: I don’t think so. I think there are a couple of bubbles that people discuss in the industry. Most [are] discussing the LLM bubble—that everyone is putting so much money into LLMs. But that’s actually not the main area, or it’s not the only one, it’s not how we get to superintelligence. There are also world models and spatial intelligence. There are also other sorts of intelligence, like causal, that we don’t even pay attention to much, though I think it’s super important.
So I think the attention will switch to other areas of research. It’s really needed. In terms of companies, well, OpenAI definitely needs to come up with some great business strategy because otherwise they will just burn through GPUs, and that’s not enough revenue. In terms of the loop—and you said the usage is lagging—the usage from users is lagging because not that many people are using AI.
21.58: But the revenue is lagging.
22.02: But if we think about what’s happening in research, what’s happening in science, in self-driving, this is a huge consumption of all this compute. So it’s actually working.
22.21: By the way, self-driving is also losing money.
22.26: But it’s something that’s happening. Now we can try a Tesla [that] drives around [on its own], which is exciting. That was not the case two years ago. So I think it’s more of a bubble around some companies, but it’s not a bubble about AI per se.
And some people, you know, compare it to the dot-com bubble. But I don’t think it’s the same because, back then, the internet was such a novelty. Nobody knew what it was. There was so much infrastructure to build. Everything was just new. And with AI, as you well know, and machine learning, it’s like the last 60 years of actual usage.
Like, you know, AI [has been] with our iPhones from the very beginning. So I don’t think it’s an AI bubble. I think it’s maybe some business strategy bubble, but…
23.25: Isn’t that just splitting hairs? By the way, I lived through the dot-com bubble as well. The point is the financial fundamentals are challenging and will remain challenging.
The assumption is that there’s always going to be someone else to fund your next round, at a higher valuation. Imagine raising money on the down round. What would be the implication for your workforce? The morale? So anyway, we’ll see. We’ll see what happens. Clearly there’s other approaches to AI. But the point is that none of them seem to be what people are investing in at the moment. There’s a bit of a herd mentality.
If you go back to “Why did deep learning blow up?” well, because they did well in ImageNet. Before then no one was paying attention. So for one of these techniques to draw attention, they really need to do something like that. In AI and machine learning, it’s like search in some ways. So you’re looking for a model in the search space and you’re looking for different models. But right now everyone seems to be looking in the same area. In order to convince all these people to move to a different area, you have to show them some signs of hope, right?
But even after that, you still have all this build-out and debt. By the way, one thing that’s changed now is the role of debt. Debt used to be an East Coast thing, but now West Coast companies are starting to play around with financing some of these data centers with debt. So we’ll see. Hopefully I’m wrong.
25.51: You think it will burst, and if it will, how…?
25.56: I think there will be some sort of reckoning next year. Because basically at some point you’re going to…you have to keep raising money, and then you’re going to run out of places to raise money from. The Middle East also has a finite amount of money. And unless they can show real—the revenues [are] so, so lagging right now. Anyway, in closing, what other things are on your radar for ’26?
26.29: On my radar is how AI is going to change education. I think that’s super important. I think that’s lagging significantly both in schools and universities because the opportunities that AI provides—and we can talk about bad sides, we can talk about good stuff—but having kids who are growing into this new era and talking with AI with them and seeing how it can accelerate the acquiring of knowledge, I’m very inspired by that. And I think this is a topic that not that many people talk about, but it should completely change the whole educational system.
27.16: Yeah, I’m curious actually, because, you know, I was a professor in a previous life, and I can’t imagine, now, teaching the same way I would back then. Because back then you’re this person in front of the room who has all of the knowledge and authority. Which is completely not the case anymore. In light of that, what’s your role and how do you manage a classroom? AI is the kind of thing you can try to take away from students, but no, they’re going to use it anyway. So in light of that, what is your role and what should be the tools and guardrails?
28.01: I think one of the most important roles is to teach [how to] ask questions and fact check, because I think we forgot [that] with social networks. That was one of the biggest disadvantages of social networks. You just believe everything you see. And I think with generative AI, it’s so easy to be fooled.
So the role of the teacher becomes to tell you how to talk with these models and how to ask questions. I’m a big believer in asking the right question. So I think this is what trains critical thinking the most. And I think that’s the role of the teacher, helping, going deeper and deeper and deeper, and asking the best questions.
28.47: I want to close with this question, which is on the open weights models. So obviously right now the top open weights models are from China: Kimi from Moonshot, [the] models from Alibaba. So are there any Western open weights models? I guess, Gemma. I’m not sure Mistral really counts, but Gemma might. I did talk to someone on Google’s Gemma team, and they said they could release even better models if they wanted to. The key is, if they want to, right? Obviously, the first mover here was Llama, which I don’t know if they’re going to continue. So, Ksenia, what’s going to be our source of Western open weights models?
29.37: Well, the Allen Institute for AI is pushing open source very heavily, and in November they released Olmo 3, which is fully open—not only weights—it’s all transparent. And this is just an amazing way to demonstrate to the closed labs how to do that. And one of the researchers at Ai2, Nathan Lambert, organized a sort of movement for Western open source. Hugging Face is doing this amazing job. And through their work, companies like NVIDIA really use a lot of open source models, some of them open weights, some of them [aren’t]. But even OpenAI, I think, started to open up a little bit. Meta is moving kind of in a different direction, though.
30.35: Yeah, it’s kind of a TBD. We don’t know. Hopefully, they do something. Like I said, the Gemma team could release even better models, but someone has to convince them to do that. I guess I’m waiting for the time when I go to the LMArena leaderboard and I start seeing more Western open weights models again.
31.01: Well, they have this restriction of [needing] more revenue that they cannot [get around].
31.07: And with that, thank you, Ksenia.
31.11: Thank you so much, Ben.