Anthropic is one of the world’s leading AI model providers, especially in areas like coding. But its AI assistant, Claude, is nowhere near as popular as OpenAI’s ChatGPT.
According to chief product officer Mike Krieger, Anthropic doesn’t plan to win the AI race by building a mainstream AI assistant. “I hope Claude reaches as many people as possible,” Krieger told me onstage at the HumanX AI conference earlier this week. “But I think, [for] our ambitions, the critical path isn’t through mass-market consumer adoption right now.”
Instead, Krieger says Anthropic is focused on two things: building the best models and what he calls “vertical experiences that unlock agents.” The first of these agent experiences is Claude Code, Anthropic’s AI coding tool that Krieger says amassed 100,000 users within its first week of availability. He says there are more of these so-called agents for specific use cases coming this year and that Anthropic is working on “smaller, cheaper models” for developers. (And, yes, there are future versions of its biggest and most capable model, Opus, coming at some point, too.)
Krieger made his name as the cofounder of Instagram and then the news aggregation app Artifact before joining Anthropic nearly a year ago. “One of the reasons I joined Anthropic is that I think we have a unique role that we can play in shaping what the future of human-AI interaction looks like,” he says. “I think we have a differentiated take on that. How can we empower rather than just be a pure replacement for people? How do we make people aware of both the potentials and the limitations of AI?”
Given its history, Anthropic is considered one of the more cautious labs. But now it seems set on making its models less sanitized. The company’s latest release, Claude 3.7 Sonnet, will refuse to answer a prompt 45 percent less often than before, according to Krieger. “There are going to be some models that are going to be super YOLO and then other models that may be even more cautious. I’ll be really happy if people feel like our models are striking that balance.”
Krieger and I covered a lot of ground during our chat at HumanX — a condensed version of which you can read below. I asked him about how Anthropic decides to compete with its API customers, such as the AI coding tool Cursor, how product development works inside a frontier AI lab, and even what he thinks sets Anthropic apart from OpenAI…
The following interview has been edited for length and clarity:
When you’re building and thinking about the next couple of years of Anthropic, is it an enterprise company? Is it a consumer company? Is it both?
We want to help people get work done – whether it’s coding, whether it’s knowledge work, etc. The parts we’re less focused on are what I would think of as the entertainment, consumer use cases. I actually think there’s a dramatic underbuilding still in consumer AI. But it’s less of what we’re focused on right now.
Having run a billion-user service, it’s really fun. It’s very cool to get to build at that scale. I hope Claude reaches as many people as possible, but I think, [for] our ambitions, the critical path isn’t through mass-market consumer adoption right now.
One is to continue to build and train the best models in the world. We have a fantastic research team. We’ll continue to invest in that and build on the things that we’re already good at and make those available via an API.
The other one is building vertical experiences that unlock agents. The way I think about it is AI doing more than just single-turn work for you, either for your personal life or in the workplace. Claude Code is our first take on a vertical agent with coding, and we’ll do others that play to our model’s advantages and help solve problems for people, including data integration. You’ll see us go beyond just Claude AI and Claude Code with some other agents over the coming year.
People really love Cursor, which is powered by your models. How do you decide where to compete with your customers? Because that’s ultimately what you’re doing with Claude Code.
I think this is a really delicate question for all of the labs and one that I’m trying to approach really thoughtfully. For example, I called Cursor’s CEO and basically all of our leading coding customers to give them a heads-up that we’re launching Claude Code because I see it as complementary. We’re hearing from people using both.
The same model that’s available in Claude Code is the same one that’s powering Cursor. It’s the same one that’s powering Windsurf, and it’s powering GitHub Copilot now. A year ago, none of those products even existed except for Copilot. Hopefully, we’ll all be able to navigate the occasionally closer adjacencies.
You’re helping power the new Alexa. Amazon is a big investor in Anthropic. How did that [product partnership] come about, and what does it mean for Anthropic?
It was my third week at Anthropic. They had a lot of energy to do something new. I was very excited about the opportunity because, when you think about what we can bring to the table, it’s frontier models and the know-how about how to make those models work really well for really complex use cases. What they have is an incredible number of devices and reach and integrations.
It’s actually one of the two things I’ve gotten to code at Anthropic. More recently, I got to build some stuff with Claude Code, which is great for managers because you can delegate work before a meeting and then catch up with it after a meeting and see what it did. Then, with Alexa, I coded a simple prototype of what it would mean to talk to an Alexa-type system with a Claude model.
I know you’re not going to explain the details of the Alexa deal, but what does it mean for your models?
We can’t go into the exact economics of it. It’s something that was really exciting for both of the companies. It really pushed us because, to do Alexa-type workflows really well, latency matters a ton. Part of the partnership was that we pulled forward probably a year’s worth of optimization work into three to six months. I love those customers that push us and set super ambitious deadlines. It benefits everybody because some of those improvements make it into the models that everybody gets to use now.
Would you like more distribution channels like Alexa? It seems like Apple needs some help with Siri. Is that something you guys would like to do?
I would love to power as many of those things as possible. When I think about what we can do, it’s really in that consultation and partnership place. Hardware is not an area that I’m looking at internally right now because, when we think about our current advantages, you have to pick and choose.
How do you, as a CPO, work at such a research-driven company like Anthropic? How can you even foresee what’s going to happen when there’s maybe a new research breakthrough just around the corner?
We think a lot about the vertical agents that we want to deliver by the end of this year. We want to help you do research and analysis. There are a bunch of interesting knowledge worker use cases we want to enable.
If it’s important for some of that data to be in the pretraining phase, that decision needs to happen now if we want to manifest that by midyear or even later. You both need to operate very, very quickly in delivering the product but also operate flexibly and have the vision of where you want to be in six months so that you can inform that research direction.
We had the idea for more agentic coding products when I started, but the models weren’t quite where we wanted to be to deliver the product. As we started approaching the 3.7 Sonnet launch, we were like, “This is feeling good.” So it’s a dance. If you wait until the model’s perfect, you’re too late because you should have been building that product ahead of time. But you have to be okay with sometimes the model not being where you needed it and be flexible around shipping a different manifestation of that product.
You guys are leading the model work on coding. Have you started reforecasting how you are going to hire engineers and headcount allocation?
I sat with one of our engineers who’s using Claude Code. He was like, “You know what the hard part is? It’s still aligning with design and PM and legal and security on actually shipping products.” Like any complex system, you solve one bottleneck, and you’re going to hit some other area where it is more constrained.
This year, we’re still hiring a bunch of software engineers. In the long run, though, hopefully your designers can get further along the stack by being able to take their Figmas and then have the first version running or three versions running. When product managers have an idea — it’s already happening inside Anthropic — they can prototype that first version using Claude Code.
In terms of the absolute number of engineers, it’s hard to predict, but hopefully it means we’re delivering more products and expanding our scope rather than just trying to ship the same things a little bit faster. Shipping things faster is still bound by more human factors than just coding.
What would you say to someone who is evaluating a job between OpenAI and Anthropic?
Spend time with both teams. I think that the products are different. The internal cultures are quite different. I think there’s definitely a heavier emphasis on alignment and AI safety [at Anthropic], even if on the product side that manifests itself a little bit less than on the pure research side.
A thing that we have done well, and I really hope we preserve, is that it’s a very integrated culture without a lot of fiefdoms and silos. A thing I think we’ve done uniquely well is that there are research folks talking to product [teams] all the time. They welcome our product feedback to the research models. It still feels like one team, one company, and the challenge as we scale is keeping that.
- An AI industry vibe check: After meeting with a ton of folks in the AI industry at HumanX, it’s clear that everyone is becoming far less focused on the models themselves and far more focused on the actual products they power. On the consumer side, it’s true these products have been fairly underwhelming to date. At the same time, I was struck by how many companies are already using AI to cut costs. In one case, an Amazon exec told me how an internal AI tool saved the company $250 million a year. Other takeaways: everyone is wondering what will happen to Mistral, there’s a growing consensus that DeepSeek is de facto controlled by China, and the way a lot of AI data center buildouts are being financed sounds straight out of The Big Short.
- Meta and the Streisand effect: If you hadn’t heard of the new Facebook insider book by Sarah Wynn-Williams before Meta started trying to kill it, you certainly have now. While the company may have successfully gotten an arbitrator to bar Wynn-Williams from promoting the book for now, its unusually aggressive pushback has ensured that a lot more people (including many Metamates) are now very eager to read it. I’m only a few chapters in, but I’d describe the text as Frances Haugen-esque with a heavy dose of Michael Wolff. It would certainly make the basis of an entertaining movie — a fact that I’m sure Meta’s leaders are quite worried about right now.
- More headlines: Meta’s Community Notes is going to be based on X’s technology and start rolling out next week… Waymo expanded to Silicon Valley… Sonos canceled its video streaming box… There are apparently at least four serious bidders for TikTok, and Oracle is probably in the lead.
Some noteworthy job changes in the tech world:
- Good luck: Intel’s new CEO is Lip-Bu Tan, a board member and former CEO of Cadence.
- Huh: ex-Google CEO Eric Schmidt was named CEO of rocketship startup Relativity Space, replacing Tim Ellis.
- John Hanke is set to become the CEO of Niantic Spatial, an AR mapping spinoff that will live on after Niantic sells Pokémon Go and its other games to Scopely for $3.5 billion. The mapping tech has been what Hanke is the most passionate about, so this makes sense.
- Asana’s CEO and cofounder, Dustin Moskovitz, is planning to retire after the company finds a replacement.
- More shake-ups in Netflix’s gaming division: Mike Verdu, who originally stood up the team and was most recently leading its AI strategy, has left.
- A new startup called CTGT claims to have invented a way to modify how an AI model censors information “without modifying its weights.” Its first research paper focuses on DeepSeek.
- Responses to the White House’s requests for recommendations on AI regulation: OpenAI, Anthropic, Google.
- You know Apple has lost the plot when it gets roasted like this by John Gruber.
- Bluesky’s sold-out “world without Caesars” graphic tee, which CEO Jay Graber wore onstage at SXSW.
- Global smartwatch shipments fell for the first time ever in 2024.
- New York Magazine’s profile of Polymarket CEO Shayne Coplan.
- Tesla may be cooked.
If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.
As always, I want to hear from you, especially if you have feedback on this issue or a story tip. Respond here or ping me securely on Signal.