On Monday, Anthropic released a hotly anticipated memory function for its Claude chatbot.
In a YouTube video, the company demonstrated a user asking what they had been chatting about with Claude before their vacation. Claude searches past chats to read and summarize them for the user, then asks if they’d like to move on and keep working on the same project.
“Never lose track of your work again,” the company wrote. “Claude now remembers your past conversations, so you can seamlessly continue projects, reference previous discussions, and build on your ideas without starting from scratch every time.”
The feature works across web, desktop, and mobile, and it can keep different projects and workspaces separate. It started rolling out today to Claude’s Max, Team, and Enterprise subscription tiers (to enable it, go to “Settings,” then “Profile,” and toggle on “Search and reference chats”), and the company said other plans should receive access soon.
But there’s an important caveat: this isn’t yet a persistent memory feature like the one in OpenAI’s ChatGPT. Claude retrieves and references past chats only when you ask it to, and it isn’t building a user profile, according to Anthropic spokesperson Ryan Donegan.
Anthropic and OpenAI have been going head-to-head in the AI arms race for quite a while, racing to roll out competing features such as voice modes, larger context windows, and new subscription tiers, even as both raise ever-larger sums of funding. Last week, OpenAI launched GPT-5, and Anthropic is currently looking to close a round that could value it as high as $170 billion.
Memory functions are another way leading AI companies are looking to attract and keep users on one chatbot service, increasing “stickiness” and user engagement.
Chatbots’ memory functions have been the subject of online debate in recent weeks. ChatGPT has been both lauded and lambasted for referencing users’ past conversations: some users controversially treat it as a therapist, while others have experienced mental health struggles that some are calling “ChatGPT psychosis.”