Anysphere CEO Michael Truell on Cursor and the race to adopt AI coding

4 August 2025

Hello, and welcome to Decoder! This is Casey Newton, founder and editor of the Platformer newsletter and cohost of the Hard Fork podcast. I’ll be guest hosting the next few episodes of Decoder while Nilay is out on parental leave, and I’m very excited for what we have planned.

If you’ve followed my work at all, particularly when I was a reporter at The Verge, you’ll know that I’m a total productivity nerd. At their best, productivity apps are the way we turn technological advancement into human progress. And also: they’re fun! I like trying new software, and every new tool brings the hope that this will be the one that completes the setup of my dreams.


Over the years, I’ve used a lot of these programs, but I rarely get a chance to talk to the people who make them. So, for my Decoder episodes, I really wanted to talk to the people behind some of the biggest and most interesting companies in productivity about what they’re building and how they can help us get things done.

That brings me to my guest today: Michael Truell, the CEO of Anysphere. You may not have heard of Anysphere, but you’ve likely heard the name of its flagship product: Cursor. Cursor is an automated programming platform that integrates with generative AI models from Anthropic, OpenAI, and others to help you write code.

Cursor is built on a standard version of what programmers call an integrated development environment, or IDE, with technology like Cursor Tab, which autocompletes lines of code as you write. Cursor has quickly become one of the most popular and fastest-growing AI products in the world, and Anysphere, the company Michael cofounded just three years ago after graduating from the Massachusetts Institute of Technology, is now shaping up to be one of the biggest startup success stories of the post-ChatGPT era.

So I sat down with Michael to talk about Cursor, how it works, and why coding with AI has seen such incredible adoption. As you’ll hear Michael explain, this entire field has evolved very quickly over the past few years — and here in San Francisco, tech executives regularly tell me about how much their employees love using Cursor.

AI critics are worried that this technology could automate jobs, and rightly so — but you’ll hear Michael say that job losses won’t come from simple advances in tools like the one he’s making. And while a lot of people in the Bay Area believe superintelligent AI is going to remake the world overnight, making products like Cursor pointless, Michael actually believes change is going to come much more slowly.

I also wanted to ask Michael about the phenomenon of vibe coding, which lets amateurs use tools like Cursor to experiment in building software of their own. That’s not Cursor’s primary audience, Michael tells me. But it is part of this broader shift in programming, and he’s convinced that we’re only just scratching the surface of how much AI can really do here.

Okay: Anysphere CEO Michael Truell. Here we go.

This interview has been lightly edited for length and clarity.

Michael Truell, you are the cofounder and CEO of Anysphere, the parent company of Cursor. Welcome to Decoder.

So what is Cursor? What does it do, and who is it for?

Our intention with Cursor is to have it be the best way to build software and, specifically, the best way to code with AI. For people who are nontechnical, I think the best way to think about Cursor, as it exists today, is as a really souped-up word processor in which engineers build software by actually doing a lot of writing. They’re sitting in something that looks like a word processor, and they’re editing millions of lines of logic — things that don’t look like language. Cursor helps them do that work way more efficiently, especially with AI.

There are two different ways Cursor does this right now. One is that as Cursor watches you do your work, it tries to predict the next set of things you’re going to do within Cursor. So this is the autocomplete form factor, which can be really souped up in programming when compared with writing, because in programming, unlike in writing, oftentimes the next 20 minutes of your work are entirely predictable. Whereas in writing, it can be a little hard to get a sense of what a writer is going to put down on the page. There isn’t enough information in the computer to understand the next set of things the writer is going to do.
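For readers who want a concrete picture of that autocomplete form factor, here is a minimal sketch in Python of how an editor-style loop might work: gather the code around the caret plus the user's recent edits, ask a model for the next edit, and show the result as a suggestion to accept with Tab. This is an illustration of the general idea, not Cursor's actual architecture; the function names and the stubbed model call are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EditorState:
    buffer: str               # full contents of the current file
    cursor_offset: int        # caret position in the buffer
    recent_edits: list[str]   # short descriptions of the user's last few edits

def build_prompt(state: EditorState, window: int = 400) -> str:
    """Assemble the context the model sees: nearby code plus recent edits."""
    lo = max(0, state.cursor_offset - window)
    hi = min(len(state.buffer), state.cursor_offset + window)
    return (
        "Recent edits:\n" + "\n".join(state.recent_edits) +
        "\n\nCode around cursor:\n" + state.buffer[lo:hi] +
        "\n\nPredict the next edit the programmer will make."
    )

def predict_next_edit(prompt: str) -> str:
    # Placeholder for a call to a completion model (API or custom);
    # a real system would stream tokens and rank multiple candidates.
    return "    return total / len(items)\n"

def suggest(state: EditorState) -> str:
    """Return ghost text to display; the editor inserts it if the user hits Tab."""
    return predict_next_edit(build_prompt(state))

if __name__ == "__main__":
    state = EditorState(
        buffer="def mean(items):\n    total = sum(items)\n",
        cursor_offset=38,
        recent_edits=["added function mean", "computed total with sum()"],
    )
    print(suggest(state))
```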

The other way people work with Cursor is by increasingly delegating to it, as if they’re working with a pair programmer, another human. They’re handing off small tasks to Cursor and having Cursor tend to them.

Well, we’ll dig a little deeper into the product in a moment. But first let’s talk about how all of this started. When you founded Anysphere, you were working on computer-aided design (CAD) software. How did you get from there to Cursor?

My cofounders and I had been programming for a while, and we’d also been working on AI for almost as long as we’d been programming. One of my cofounders had worked on recommendation systems in Big Tech. Another had worked on computer-vision research for a long time, while another had worked on trying to make machine learning algorithms that could learn from very, very, very little data. One of us had even worked on a competitor to Google, using the machine-learning approaches that preceded LLM technology.

But we’d worked on AI for a long time and had also been engineers for a long time and loved programming. In 2021, there were two moments that really excited us. One was using some of the first really useful AI products. Another was this body of literature that showed that AI was going to get better, even if we ran out of ideas, by making the models bigger and training them on more data.

That got us really excited about a formula for creating a company, which was to pick an area of knowledge work and build the best product for that area of knowledge work — a place where you do your work as AI starts to change. And then, the hope is that you do that job well, and you get lots of people to use your product and you can see where AI is helping them and where AI is not helping them — and where the human just has to correct AI a bunch or do the work without any AI help. You can use that to then make the product better and push the underlying machine-learning technology forward. That can maybe get you onto a path where you can really start to build the future of knowledge work as this technology gets more mature, and be the one to push the underlying tech too.

So, we got kind of interested in that formula for making a company, but the craft that we really loved, the knowledge work that we really loved, was building things on computers, and we actually didn’t touch that at first. We went and we worked on a different area, which was, as you noted, computer-aided design. We were trying to help mechanical engineers, which was a very ill-fitted decision, because none of the four of us were mechanical engineers. We had friends who were interested in the area. We had worked on robotics in the past, but it wasn’t really our specialty. We did it because it seemed there were a bunch of other people working on trying to help programmers become more productive as AI got better.

But after six or so months of working on the mechanical engineering side of things, we got pulled back into working on programming, and part of that was just our love for the space. Part of it, too, was that it seemed as if the people who we thought had the space covered were building useful things, but they weren’t pointed in the same direction and they didn’t really seem to be approaching the space with the requisite ambition. So we decided to build the best way to code with AI, and that’s where Cursor started.

I have read that one of the AI tools that you used early on was GitHub Copilot, which came out about a year before ChatGPT. What was your initial reaction to Copilot, and how did it influence what you wanted to build?

Copilot was awesome. Copilot was a really, really big influence, and it was the first product that we used that had AI really at its core that we found useful. One of the sad things to us as people who had been working on AI and interested in AI for a while was that it was very much stuff that was just in the lab or in the toy stage. It felt like, for us, the only real way AI had touched our lives as consumers was mostly through recommendation systems, right? The news feeds of the world, YouTube algorithms, and things like that. GitHub Copilot was the first product where AI was really, really useful at its core and that wasn’t vaporware.

So, Copilot was a big inspiration, and at the time we were considering whether we should try to pursue careers in academia. Copilot was proof that no, it was time to work on these systems out in the real world. Even back then, in 2021, there were some rough edges. There were some places where the product was wrong in really obvious ways, and you couldn’t completely trust its code output, but it was nonetheless really, really exciting.

Another thing to note is that apart from being the first useful AI product, Copilot was the most useful new development tool that we had adopted in a really long time. We were people who had optimized our setups as programmers and modded out our text editors and other things like that. We were using this crazy kind of text editor called Vim at the time. So, it was not only the first useful AI product that we had used, but also the most useful dev flow we had used in a really long time.

That’s interesting. So you all like software, you like using software, you’re trying to find software that makes you more productive. I feel like that probably made you well suited to tackle the problem Cursor is trying to solve.

Yeah, I think caring about the tools we use was helpful, and I think that there were actually different degrees of that on the cofounding team. One of my cofounders is straight out of central casting, an early adopter who is the first one on these new browsers, first one on the new category of everything. A couple of us are a little bit more laggard, and so I think having that diversity of opinions has helped us in some of the product decisions we’ve made.

So you described Cursor as kind of like a souped-up word processor. Software engineers I think would call it an integrated development environment, or an IDE. Developers have been using IDEs since the ‘80s, but recently, AI labs have released tools, like OpenAI’s Codex or Anthropic’s Claude Code, that can run directly in a terminal. Why might someone use Cursor over those options?

I think that both of those are really useful tools. We start as this IDE, as this text editor, but what we really care about is getting to a world where programming has completely changed, in particular a world where you can develop professional-grade software, perhaps without even really looking at the code. It’s that future of programming we’re after: changing it from this weird state where you’re reading millions of lines of logic in these esoteric programming languages.

The world we want to get to is one where you just need to specify the minimal intent necessary to build the software you want. You can tell the computer the shortest amount of information it needs to really get you, and it can fill in all of the gaps. Programming today is this intensely labor-intensive, time-intensive thing, where to do things that are pretty simple to describe, to get them to actually work and show up on a computer, takes many thousands of hours and really large teams and lots of work, especially at professional scale. So that’s where we want to get to — inventing that new form of programming. I think that that starts as an editor and then that starts to evolve.

So we’re already in the midst of that. Right now, Cursor is where you can work one-on-one with an agent, and with our Tab system. And then, increasingly, we’re getting you to a world where more and more of programming is moving toward delegating your work to a bunch of helpers in parallel. And there’s a product experience to be built for making that great and productive, with an understanding of what all of these parallel helpers are doing for you — diving in, intervening in places where it’s helpful, understanding their work when they come back to you at a level of not having to read every single line of code.

I think that there’s a competitive environment with a bunch of tools that are interested in programming productivity. One of the things that’s limiting about just a terminal user interface is that you have only so much expressiveness in the terminal and control over the UI. From the very start, we’ve thought that the solution to automating code and replacing it with something better is this kind of two-pronged approach, where you need to build the pane of glass where programmers do their work, and you need to discover what the work looks like. You need to build the UI, and then you also need to build the underlying technology. So, one thing that would distinguish us from some terminal tools is just the degree of control you have over the UI.

We’ve also done a lot of work on the model layer, on improving it and going beyond just having things that show up well on a demo level. There’s a lot of work on AI products to dial in the speed and the robustness and the accuracy of them. For us, one important product lever has been building an ensemble of models that work with the API models to improve their abilities.

So, every time you call out to an agent in Cursor, it’s like this set of models — some of them are APIs, some of them are custom — and then for some form factor or for some of the features, it’s entirely custom, like for the super autocomplete. That’s also one thing that has kind of distinguished us from other solutions.
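As a rough illustration of that ensemble idea (some features hitting frontier API models, others hitting custom models, and a single agent request fanning out to several), here is a hedged sketch. The model names, routes, and the `run_model` stub are invented for the example and are not Cursor's real configuration.

```python
# Illustrative routing for an "ensemble of models": each feature maps to the
# models that serve it, and an agent request may combine API and custom models.
ROUTES: dict[str, list[str]] = {
    "tab_autocomplete": ["custom-next-edit-model"],  # fully custom path
    "agent_request": [
        "api-frontier-model",   # hosted API model does the heavy reasoning
        "custom-apply-model",   # smaller custom model applies the edit quickly
        "custom-reranker",      # custom model picks the best candidate
    ],
}

def run_model(name: str, payload: str) -> str:
    # Stand-in for calling either a hosted API model or an in-house model.
    return f"[{name}] handled: {payload[:48]}"

def handle(feature: str, payload: str) -> list[str]:
    """Fan the request out to every model registered for this feature."""
    return [run_model(name, payload) for name in ROUTES[feature]]

if __name__ == "__main__":
    for line in handle("agent_request", "rename the config loader and update call sites"):
        print(line)
```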

Let’s talk a bit about these proprietary models. They seem to be fueling a lot of your success. When ChatGPT and the OpenAI API first got released, we saw a lot of startups come out that were quickly dismissed as just wrappers for an API, companies just trying to build something on top of somebody else’s tech.

Cursor started in a similar way in that it was using other folks’ APIs in order to create its product. Since then, you’ve started to build on top. Say a bit more about what you’re building and how you’re hoping it sets you apart from those pure wrapper companies.

I think also one asterisk before getting into the model side of things is that the “wrapper” term came from the very start of when people were building AI products, when there was only so much time to make the products a bit deeper. Now, I think we’re at a point where there’s a ton of product overhang. So even if you’re just building with the API models, I think that in a lot of areas — our area of working on the software development lifecycle, but in other parallel areas too — there are very, very deep products to be built on top of those things. So it sounds like the wrapper term for at least some areas is a little bit dated.

But on the model level, I think that from the very start we wanted to build a product that got a lot of people using it. One of the benefits you get from that scale is you can see where AI is helping people, and you can see where AI is not helping people and where it gets corrected. That’s a really, really important input to making AI more useful for people. So at this point our Tab model, which handles over one billion model calls per day, is probably one of the language models writing the most production code in the world.

We’re also on our fourth or fifth generation of it. And it’s trained using product data, of seeing where AI is helping people and where it isn’t, trying to predict how it can help humans. It also requires a ton of infrastructure and specialty talent to be able to make those models really good.

For instance, one of the people who has worked on those models with us is Jacob Jackson, who actually built GitHub Copilot before GitHub Copilot: a product called TabNine, one of the first programming autocomplete products. He’s also one of the people who built one of the first million-token context window models, and so he has done a lot of work on making models understand more and more and more information. And, yeah, it takes specialty talent and specialty infrastructure, too, to do that work.

I think that in our ambling, kind of winding way to working on Cursor, one of the things that really did help us was when we were working on CAD and also in some of our explorations before, my cofounders had to dig very deep into the machine-learning infrastructure and modeling side of things. When we actually set out to work on Cursor, we thought it would be a long time before we started to do our own modeling as product lovers, but it happened much sooner than we expected.

Recently, I had dinner with the CTO of a Big Tech company, and I asked him about what coding tools were popular with his engineers, and he told me that he regularly surveys them on this question, and they had Cursor available as a trial. He said he was getting these panic messages from engineers saying, “Please tell us you’re not about to take away Cursor,” because they’d become so dependent on it.

Can you give us a sense of why, for programmers, this has kind of felt like a before-and-after moment in the history of the profession? What is it that tools like Cursor are making so different in the day-to-day lives of these engineers?

I think we’re still far, far, far from the ceiling of where things can go, and far, far, far from a world where much of coding has been replaced with something better. But even at this point, these products and these models can do a lot for programmers and are already taking on quite a bit of work.

I think the technology is especially good for programming for a few reasons. One is that programming is text-based and that is the modality that the field has figured out perhaps the most. There’s a lot of programming data on the internet too, so a lot of open-source code. Programming is also pretty verifiable. And so, one of the important engines of AI progress has been training models to predict the next word on the internet and making those models bigger. That engine of progress has largely run its course; there’s still more to do there.

But the next thing that’s kind of picked up the torch in making models better has been reinforcement learning. So it’s been basically teaching models to play games, kind of similar to how in the mid-2010s we, humanity, figured out how to make computers really good at playing Go and Dota and other video games. We’re kind of getting to a level of language models where they can do tasks, and you can set up games for them to get even better at those tasks. And programming is great for that, because you can write the code and then you can run it and see the output and decide if it’s actually what you want. And so I think there’s a lot about the technology that makes it especially good for programming, and, yeah, it’s just I think one of the use cases that’s the furthest ahead in deploying this tech out to the world and people finding real value from it.
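To make the "programming is verifiable" point concrete, here is a minimal, hypothetical sketch of how a reward signal could be computed for generated code: run the candidate against tests and reward it only if they pass. It assumes a Python toolchain and is not a description of any lab's actual training stack.

```python
import subprocess
import sys
import tempfile
import textwrap

def reward_for_candidate(candidate_code: str, test_code: str) -> float:
    """Write candidate + tests to a file, run it, and reward 1.0 if all tests pass."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n" + test_code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, timeout=10)
    return 1.0 if result.returncode == 0 else 0.0

if __name__ == "__main__":
    candidate = textwrap.dedent("""
        def add(a, b):
            return a + b
    """)
    tests = textwrap.dedent("""
        assert add(2, 3) == 5
        assert add(-1, 1) == 0
    """)
    # A reinforcement-learning loop would sample many candidates per task and
    # nudge the model toward the ones that earn higher reward.
    print("reward:", reward_for_candidate(candidate, tests))
```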

My sense is, maybe if I used to have to work eight hours a day, now it’s maybe closer to five or six. Is that part of it?

I think yes, in the sense that I think that the productivity gains of what would have taken you eight hours before in some companies now actually can take you five or six hours. I think that that is real, not across all companies, but it is really real in some companies. But what I would nitpick on there is I don’t think programmers are shortening the hours they’re working. I think a lot of that is because there is just a ton of elasticity with software, and I think it’s really easy for people who are nontechnical, or who just don’t program professionally, to underrate how inefficient programming is at a professional scale, and a lot of that is because programming is kind of invisible.

Consider what programmers are doing at a company like Salesforce, where there are just tens of millions of lines, many millions of files of existing logic that describe how its software works. Anytime they have to make a change to that, they have to take that ball of mud, that massive thing that is very unwieldy, and they need to edit it. That’s why I think that it’s just kind of shocking to many people that some software release cycles are so slow. So yes, I think that there are real productivity gains, but I think that it’s probably not reducing the number of hours that programmers are working right now.

All right. Well, you mentioned nontechnical people. Cursor is used by a lot of professional programmers, but this year saw the coining of the term “vibe coding” to describe what more amateur programmers can do, sometimes even complete novices, and often with tools like Cursor. How big is the vibe-coding use case at Cursor and what do you think is the future of vibe coding?

So our main goal is to help people who build software for a living, and for right now that means engineers, so that’s our main use case. It’s been interesting to see how, as you focus on that use case and use the understanding you get from it to push the tech forward and move programmers up to ever-higher levels of abstraction, it also makes things more accessible, and that’s something that we’re really excited about.

I think in the end state, building software is going to be way more accessible. You’re not going to have to have tons of experience in understanding programming languages and compilers. But I do think that we’re a decent bit away from a world where anyone can do this. I think there’s still a bunch more work to do before anyone can build professional-grade software.

That said, it’s been really cool seeing people spin up projects and prototypes from scratch, seeing designers in professional settings doing that. It’s been really interesting to see nontechnical people contribute small patches and bug fixes or small feature changes to professional software projects already. And that’s kind of the vibe-coding use case, not our main use case, not where the company makes most of its money, but one that I think will become bigger and bigger as you push the ceiling of focusing on professional developers.

I’m curious what you think the demand for it is, though. I understand it’s not the focus of the business. People like to talk about it, and I think it feels cool to have never built software before, and then all of a sudden you’ve actually created a little to-do list app for yourself or something.

Yes. I probably differ from some of my colleagues on this. In the world as it exists right now, there are two buckets of that vibe-coding use case. There’s an entertainment bucket, where you’re doing these things mostly for personal enjoyment or hobbies, and then there’s a bucket that’s more professional: designers doing prototypes, or people who work to serve customers contributing bug fixes back to a professional code base.

The way in which I probably differ from some of the people I work with is that there’s a group of people who are really, really, really interested in end-user programming and throwaway apps and personalized software, where everyone entirely builds their own tools. I think that’s really cool, and I think enabling that is really cool, and a lot of people who aren’t technical will be interested in doing it. But even if you get to a world where anyone can build things on computers, I think most of the use cases will still be served by a small minority, maybe 5 percent of the world, that cares a ton about the tools and building them, and everyone else will just use those tools more, because the interest in that stuff really differs across the population.

So yeah, right now commercially I think that a lot of the more vibe-coding stuff falls more into a Midjourney camp or an entertainment camp. It’s something that some people get interested in for a bit and then kind of put aside. And then some of it is in this professional camp of people who work on software for a living but don’t code right now.

I think you’re right, because when I worked at more traditional companies, whenever a new piece of software was introduced, everyone would get upset. So that’s my case for most people not becoming pro-vibe coders. I like software though, so I’m vibe-code curious. Maybe two or three generations from now in Cursor I’ll be able to make myself something useful.

You mentioned earlier that there are these two main ways that people use Cursor. There is the “I’m looking at code and you’re helping me autocomplete things,” and then there is the “I’m going to give you a task and walk away and come back and see what you’ve built.” You told Stratechery’s Ben Thompson recently that over the course of the next six to 12 months, you think you can get to a place where maybe 20 or 25 percent of a professional software engineer’s job might be the latter use case of just handing off work to the computer and having the computer do the work end to end.

Do you have any updates to that number in the past month or so? How high do you think that number can scale, ultimately?

I think these things are really hard to predict. Yeah, I think there are some things that are blocking you from getting to 100 percent. One is having the models learn new things: understanding an entire code base, understanding the context of an organization, learning from their mistakes. And I still think that the field doesn’t have an amazing solution for that.

There are two candidate solutions. One is you make the “context windows” longer. These large language models have a fixed window of text or images that they can see, and there’s a limit to it. Outside of that window, it’s just the model that came off the assembly line plus whatever new information is put into the model’s head. That’s very different from humans, because humans are going through the world and your brain is changing all the time; you’re getting new things that persist with you, and obviously some memories fade away, but they persist with you somewhat. So candidate solution number one to the continual learning problem is just to make the context windows really big.

Candidate solution number two is to train the models. So every time you want them to learn a new thing or a new capability, you go and collect some training data on that, and then you throw it into the model’s mix. Both of those have big issues, I think, but that’s one thing that’s stopping you. I think that the rate of really consequential ideas in machine learning that are new paradigm shifts is pretty low industrywide, even though the rate of progress has been really fast over the past five years.
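As a toy illustration of those two candidate solutions (long context versus training on new data), here is a hedged sketch. The token estimate, window size, and training-example format are all made up for the example; real systems obviously differ.

```python
CONTEXT_WINDOW = 200_000  # illustrative per-request token budget

def estimate_tokens(text: str) -> int:
    # Crude estimate (~4 characters per token), for illustration only.
    return max(1, len(text) // 4)

def candidate_one_long_context(codebase: dict[str, str], question: str) -> str:
    """Candidate 1: stuff as much of the code base as fits into the prompt."""
    prompt = question
    budget = CONTEXT_WINDOW - estimate_tokens(question)
    for path, text in codebase.items():
        cost = estimate_tokens(text)
        if cost > budget:
            break  # anything past the window is simply invisible to the model
        prompt += f"\n\n# {path}\n{text}"
        budget -= cost
    return prompt

def candidate_two_fine_tune(codebase: dict[str, str]) -> list[dict]:
    """Candidate 2: turn the repo into training examples and update the weights."""
    return [
        {"prompt": f"What does {path} do?", "completion": text}
        for path, text in codebase.items()
    ]

if __name__ == "__main__":
    repo = {
        "billing.py": "def charge(cents): ...",
        "auth.py": "def login(user): ...",
    }
    prompt = candidate_one_long_context(repo, "Where is charging implemented?")
    print("prompt tokens:", estimate_tokens(prompt))
    print("fine-tuning examples:", len(candidate_two_fine_tune(repo)))
```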

So, ideas in the form of replacing long context or in-context learning and fine-tuning with some other way of continual learning, I don’t think that the field actually has an amazing track record of generating lots of ideas like that. I think ideas like that come about at the rate of maybe one every three years. So I think that will take some time.

I think the multimodal stuff will take time too. The reason that’s important for programming is you want to play with the software, and you want to be able to click buttons and actually use the output. You want to be able to use tools also to help you make software, tools that have GUIs. So, for instance, observability solutions, like Datadog, are important for understanding how to improve a professional piece of software, so that feels like it’s needed.

These models can also work coherently for minutes at a time, now even hours in some cases, but it’s a different thing to work on a task for the equivalent of weeks in human time. So, just even architecturally, knowing if we’re going to be coherent over sequences that long will be interesting to see, and that I think will be tricky.

But there are all of these technical blockers to getting to something that’s 100 percent, and there are many more that you could list, along with many unknown unknowns. I think that in a year or so, even with just going from a high-level text instruction to changes throughout a code base, I think in the bull case you could probably do over half of programming as it exists today.

I see these studies that METR puts out where they look at the length of software tasks that an AI model can complete, and it does keep doubling at this really impressive rate. So, I think the hurdles that you identify are super important, but when you pull back, it does seem like the task length is really improving. Ultimately, humans don’t tend to work on discrete tasks that are all that long. So I do think it’s getting easier for people to imagine a full day’s work.

Definitely, definitely. I think that just forecasting these things is tricky, but one related field that can maybe foretell how things will evolve here is the history of self-driving, which has obviously advanced in leaps and bounds. In San Francisco, there are Waymos, which are commercial self-driving cars, and my understanding is that Tesla has also made big improvements.

But I remember back in 2017, when people thought self-driving was going to be done and deployed within a year. Obviously, there are still big barriers to getting it out into the world. As hard and varied as self-driving is, it does seem like a much lower-ceiling task than some of the other stuff that people in the field are talking about right now. So we will see.

I do want to ask you about the timeline, but I’m going to wait until a little bit later. All right, let me now ask you some of the famous Decoder questions, Michael. How big is Anysphere today? How many employees do you have?

We’re roughly 150 people right now.

Okay, and when you think about how big you want the company to be, are you somebody who envisions a very big workforce? Or do you see a smaller, nimbler team?

We do like a nimbler team, and I think the caveat here is while we want to keep the team nimbler for the scope of work that we’re tackling, it will still mean growing the team a lot over the next couple of years. But yeah, I wonder if it will be possible to build a thriving technology company that does really important work with a maximum team size of maybe 2,000 people, or something like that. Something of the size of The New York Times. We’re excited to see if that is possible, but we definitely need to grow a lot more from our current head count.

What is your organization chart like? You have a few cofounders. How do you all divvy up your responsibilities?

The two biggest areas of the org are engineering and the research side of things, like R&D generally, and then the go-to-market side of things, like serving customers. And this is a company that has really benefited from having a big set of cofounders and a big, very capable founding team. And so there’s a lot of dividing and conquering across that scope. In particular, we’ve had an important group of people on the founding team who’ve done phenomenal work in building out that early go-to-market side of things. A lot of that comes entirely from the founding team, and is entirely credited to a subset of it. And so there’s a lot of dividing and conquering across the business.

At the same time, I think once you zoom in to the technical side of things, there’s an intense focus from the four cofounders on that, and putting all the eggs in that one basket. I think we’re lucky enough to be at a time when there are really, really useful products to build in our space. And I believe that the highest-order bit, the thing you cannot mess up, is producing the best product in the space. And so we’ve been able to stay relatively lean in other parts of the business, especially relative to our scale, but also as a ratio to engineering and research, and still be able to grow.

What part of the business do you keep for yourself? Where are you getting your hands dirty, and where would you get mad if someone tried to take that away from you?

I spend a lot of time doing what I can to help grow the team. We think hiring is incredibly important, especially the hiring of ICs [individual contributors]. I think that one way technology companies die is that the best ICs start to feel disengaged, that they don’t have control over the company, and talent density lowers. If you’re working on technology, no matter how good the management layer is, if you have less than excellent people doing the real work, I think there’s only so much you can do. I think that the dynamic range of what management can do becomes kind of limited.

So I help by devoting a bunch of time to hiring. We actually got to maybe 75 people with just the cofounders hiring without engaging functional recruiters. Now I have fantastic people helping us with hiring. I have people on the recruiting side who work with us closely. But I spend a bunch of time on that and then try to help however I can on the engineering and product side. Those are the two biggest areas of focus, and then there’s a long list of long-tail things.

So you’re fairly young, I think you’re 25, and have had to make a lot of really big decisions about raising money, making acquisitions, all those hiring decisions that you just mentioned. How do you make decisions? Do you have a framework that you use or is everything ad hoc?

I’m not sure there’s one framework. Some pretty common strategies help us: we try our best to farm decisions out up and down the org. This is not just for me — we try to do this for all decisions in the company. We increasingly have a very clear DRI [directly responsible individual], and then lots of other people offer their input. Every decision is pretty unique.

Other devices that are well-known and have helped include understanding how high stakes and reversible the decision is. And I think that especially when you’re in a vertical like ours, given the speed that it’s moving, there’s just a limit on the amount of time and the amount of information you can gather on each thing. Yeah, and then other devices, like clearly communicating the decision and using that as a way to force clarity for how it was thought through.

Well, let’s talk a little bit more about hiring, since you brought it up. There has been talk that OpenAI had considered acquiring you. I have to ask, given his recent spending spree, has Mark Zuckerberg invited you to his house in Tahoe?

No? He’s not coming around with his $200 million signing bonuses saying, “Michael, why don’t you kind of come over here? We’re building superintelligence?”

No. This for us is kind of life’s work territory. So yeah, we feel really lucky: the technology lineup, the initial founding team lineup, the people who have decided to join us, the way things have gone with the product. We have the pieces in place to execute on this ambitious goal of automating programming. And time will tell if we’re going to be the ones to do that, but as people who have been programming for a long time and working on AI for almost as long, being able to reinvent programming and help people build whatever they want to on computers with AI kind of feels perfect for us. It feels like one of the best commercial applications of this technology too. So I think that if you can succeed in that, you can also push the field forward in big ways for other verticals and other industries. And so, no.

Yeah, it sounds like you really want to stay independent.

Has Meta’s recent hiring spree made it noticeably harder for you to recruit lately?

No, not really. We try to keep the research team fairly small. I mean, the whole company is kind of small relative to what it’s doing, but especially the research team. I think that people think through hiring decisions in different ways, and I think what we have to offer is most appealing to people who want to be a part of an especially small team working on something focused, that’s solving problems with AI out in the real world.

We’re kind of this weird company. You talked about some products that are being made by some of the great folks who work on the API models. But I think we’re this weird experiment of a company that’s smack dab in between the foundation model labs and normal software companies; we try to be really excellent at both the product side of things and the model side of things and have those feed into each other. And so we appeal to I think a certain type of machine-learning researcher or ML engineer. And for them, I think it’s about being part of this, and a little bit less about being part of some of the other things.

One last hiring question. It was reported this week that two folks who used to run Claude Code whom you’d recruited to come over to Cursor left after a couple of weeks. Can you speak at all to what happened there?

Cat [Wu] and Boris [Cherny] are awesome, and I think that they have a lot left to do on Claude Code, and they’re really, as I understand it, the people behind that and that is their creation. As someone who’s been working on something for three and a half years since inception, I understand the ownership that comes with that. I think that they have a lot left to do and they were excited about that, and so they’ve decided to stay [at Anthropic].

You were mentioning this interesting position Cursor sits in, between the big labs and the other startups that are using your software. How do you describe Cursor’s culture when you’re recruiting people?

I think that some of the things that describe the current group, perhaps unsurprisingly — we are process skeptical and hierarchy skeptical. So, as we take on more and more ambitious projects, more and more coordination is required. But at a certain level, given the scope of the company, we try to stay pretty light on each of those.

I think it’s a very intellectually honest group, where people feel comfortable. It feels very low stakes to criticize things and just be open when giving feedback on work. But I also think it’s a very intellectually curious group. I think people are interested in doing this work for the end goal of automating programming — separate from any work-life balance issues, because we want this to be a place where people at all levels of work-life balance can do great work.

It’s a place where so far no one really treats it as just a job. They’re really, really excited about what we’re doing, and I think it’s kind of a special time to be building technology. I think to the outside world, what we do seems very focused and understated, partially because of how little communication we have with the outside world. We need to get much better at that.

I think for the most part people think of Cursor as, “Oh, that thing that grew really fast.” They know about top-level metrics and things like that to gauge just how fast the adoption has been. Internally, we’ve thought that it’s really important to hire people who, while they might be very ambitious, are still very humble and understated and focused and level-headed, because there’s noise left and right. I think that just having a clear focus and putting your head down are actually really, really important not only for people to be happy in this space but also for the team’s execution.

You mentioned communicating with the outside world. I think Cursor’s history is mostly just a history of delighting its customers. But you did have this moment recently where you changed the way you price things, and folks got pretty mad. Basically, you moved from a set fee to more usage-based pricing, and some people ran over their limits without realizing it. What did you learn from that experience?

I think that there was a lot to learn from that, and a lot on our end that we need to improve. To set the stage, the way Cursor pricing has worked, even back when Cursor first started, is that by and large you sign up for a subscription, and then you get an allotment of a certain number of times you can use the AI over the course of your subscription term. The pricing has evolved: features have been added, features have been changed, the limit has moved up and down, and there have been different ways you could pay down that limit, or not, over time. What’s happened in parallel is that what it means to use the AI once has shifted: the value that gives people and the underlying costs have in some cases changed a lot. One big switch there for us is that increasingly, when “you use the AI,” the AI is working for longer and longer and longer.

So you called out that chart that you’ve seen where it shows the max time that AI can work, and it’s gone from seconds to minutes to hours at this point, and it’s gone up very fast. We’re on the front lines of that, where now when you ask the AI to go do something or answer a question, it can work for a very, very, very long time. That changes the value it can give to you. You can go from just asking a simple programming question to having it write 300 lines of code for you, and that also changes the underlying costs. In particular, it changes less the median and more the variance of those costs. So we bundled together a series of pricing changes, and the one that garnered the most attention was switching from a world where the monthly allotment is in requests to one where it’s in the underlying compute that you’re spending.

One thing to nitpick on what you said is that usage-based pricing had been a big component of Cursor before, because over the life of Cursor, people have used the AI more and more and more. And then they started running out of limits, and we wanted to give people a way to burst past that. What this did is change the structure of how that usage pricing worked, so it’s not on a request basis but on an underlying compute basis. That definitely could have been communicated leagues better. I think that there’s a lot we learned from that experience, and a lot we need to improve on in the future.
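A toy worked example of the difference between the two schemes: an allotment counted in requests charges a quick question and a long agent run the same way, while an allotment counted in underlying compute (tokens, here) scales with how long the AI actually works. All numbers below are invented for illustration and are not Cursor's actual plans.

```python
REQUEST_LIMIT = 500            # old-style allotment: 500 requests per month
TOKEN_LIMIT = 20_000_000       # new-style allotment: 20M tokens of compute per month

requests = [
    {"kind": "quick question", "tokens": 2_000},
    {"kind": "long agent task", "tokens": 400_000},  # the agent works for a long time
]

used_requests = len(requests)
used_tokens = sum(r["tokens"] for r in requests)

print(f"request-based usage: {used_requests}/{REQUEST_LIMIT} "
      f"({used_requests / REQUEST_LIMIT:.2%} of the allotment)")
print(f"compute-based usage: {used_tokens:,}/{TOKEN_LIMIT:,} "
      f"({used_tokens / TOKEN_LIMIT:.2%} of the allotment)")

# Under the request-based scheme both calls consume the same share of the limit;
# under the compute-based scheme the long agent task costs 200x more, which is
# why the variance of cost per request matters more than the median.
```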

I think it’s hard for consumers in particular to understand usage-based pricing, because they’re used to Spotify and Netflix, where they pay their 10 or 20 bucks a month and it’s sort of all you can eat. The economics of AI don’t really work that way.

Yeah, I think that it will be interesting to see how things play out in our space in particular. For the consumer chat-app market, so far at least, it would be interesting to see how the curve of compute per user has moved over time, but I wouldn’t be that surprised if it’s been pretty flat over the past 18 months or so. Since the original GPT-4 (and I’m not privy to any inside information), it seems like there have been big gains from a model-size perspective, where you can actually miniaturize models and get the same level of intelligence. And so I think that the model most professional users are using in something like ChatGPT has actually maybe gotten smaller over time; compute usage has gone down.

But in our space, I think that for one user, compute is probably going to go up. There’s a world in which the token costs don’t go down fast enough, and it starts to become a little bit more like AWS costs and a little bit less like per-seat productivity software, and that still remains to be seen. But one thing to note is that we do think it’s really, really, really important to offer users choice, and so we want to be the best way to code with AI if you want to turn on all the dials and get the best, most expensive experience.

We also want to be the best way to code with AI if you want to just pay for a predictable subscription and get the best of what that price can offer you. And even for the main individual plan, the $20 Pro plan, the vast majority of those users don’t hit their monthly limits, and so aren’t hit with a message saying they need to turn on usage pricing, or not.

That’s the kind of AI user I am. I never hit the limit, which makes me feel that I need to be using it more.

There is a really, really big difference between the top 5 percent and a median user. So some people are very, very, very AI forward.

Well, coming into my last couple of questions here, I want to try to get at how AGI-pilled you are, because when we were talking earlier, you were identifying all these very real technical problems in building more advanced systems, problems that are just truly unsolved in AI. The size of the context window, giving these systems longer memory, helping them learn the way a human might learn: we don’t know how to do that yet.

Yet there are lots of folks in the industry who believe that by 2027, 2028, the world will look very, very different. So, where do you plot yourself on the spectrum between the people who believe that everything is absolutely about to change and the people who think we’re at the start of a process that’s going to take decades?

I think we’re kind of on this bet in the messy middle, where we do think it’s going to take decades. We do think that nonetheless, AI is going to be this transformational technological shift for the world. Bigger than maybe… just a very, very, very big technological shift. And when we started working on Cursor, it was funny, we would get these dual responses, and I think one is now increasingly falling out of favor with the rise of the first AI products that have reached billions of people.

But early in 2022, we would get two reactions. One reaction was, “Why are you working on AI? I’m not sure that there’s really much to do there.” And then the other reaction we’d get, because we did have close friends and colleagues who were very interested in AI, was, “Why are you working on ‘insert X’ application” — whether it be CAD or whether it be programming specifically — “when AGI is going to wipe all of this stuff out in Y years,” maybe in 2024 or 2025.

We think it’s this middle road of this jagged peak, where if you actually peek under the hood at what’s driven AI progress so far, I think that, again, there’s been a few ideas that have really worked, there’s been lots of details to fill in between, but there have been a few really, really important ideas. I think that despite the number of people who have worked on deep learning over the past decade and a half, the rate of idea generation in the field — really, really consequential idea generation in the field — hasn’t budged that much. I think that there are lots of real technical problems that we need to grapple with. So, I think that there’s this urge to anthropomorphize these models and see them be amazing and human or even superhuman at some things, and then think that they will just kind of be great at everything. I really think it’s this very jagged peak.

So, I think it’s going to take decades. I think it’s going to be progressive. I think that one of our most ambitious hopes with Cursor is if we are to succeed in automating programming and building an amazing product that makes it so you can build things on computers with just the minimal intent necessary, maybe the success of that and the techniques that we need to figure out in doing that can also be helpful for pushing AI forward and pushing progress forward in general.

I think the experiment to play out here is: if you were in the year 2000 or 1999 and you wanted to push AI forward, one of the best things you could do is work on something that looks like Google, make that successful, and make that R&D available to the world. So, in some ways at least, I think about what we’re doing as trying to do just that.

So it sounds like you don’t think that there’s just going to be one big new training run with a lot more parameters and we’re going to wake up to a machine god.

Time will tell. I think it’s important to have healthy skepticism about how much you can know with these things. But my best guess is that it will take longer than that, yet also still be this big transformational thing.

Well, last question here. We’ve talked a couple of times today about how hard predictions are in general, so I’m not going to ask you to do something crazy like predict what Cursor is going to look like five years from now. But when you think about it maybe two years from now, what do you hope it’s doing that it isn’t quite doing yet?

I think a bunch of things. In the short term, we’re excited about a world where you can delegate more and more work to agents that act like very fast, helpful humans, and you can build a really amazing experience for making that work delightful while orchestrating work among these agents.

Another idea that we’ve been interested in for a long time, which is a bit risky, is if you can get to a world where you’re delegating more and more work to the AI, you’ll start to run into an issue, which is whether you look at the code. And are you reading everything line by line, or are you just kind of ignoring the code wholesale? I think that neither closing your eyes and ignoring the code entirely in a professional setting nor reading everything line by line will really work.

So, I think you’ll need this middle ground, and I think that that could look like the evolution of programming languages to become higher level and less formal. All that a programming language really is is a UI for you as a programmer to specify exactly what you want the computer to do. And it’s also a way for you to look at and read exactly how the software works right now.

I think that there’s a world where programming languages will evolve to be much higher level and more compressed. Instead of millions of lines, it’s hundreds of thousands of lines of code. I think that for a while, an important way you’ll build software is by reading, pointing at, and editing that kind of higher-level programming language.

That also gets at a bigger idea that’s behind the company: there’s all this work to do on the model side of things. The field’s going to do some of that, and we’re going to try to do some of that. But then the end state of what we want to do is also this UI problem of how we get the stuff that’s in your head onto the screen.

I think that the vision of you entirely building software by just typing into a chat box is powerful. I think that that’s a really simple UI. You can get very far with that, but I don’t think it can be the end state. You need more control when you’re building professional software. And so you need to be able to point at different elements on the screen and be able to dive into the tiniest detail and change a few pixels.

You also need to be able to point at parts of the logic and understand exactly how the software works and be able to edit something very, very fine-grained. That requires rethinking new UIs for these things, and the UI for that right now is programming languages. So I think that they’re going to evolve.

All right. Well, a lot of fascinating things that you’re working on. Michael, thank you for coming on Decoder.

Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!

