ChatGPT wants to remember everything you’ve told it

OpenAI tries to fill a major blind spot in chatbots: their lack of long-term knowledge about the user.

BY Mark Sullivan

OpenAI announced on Monday that its popular ChatGPT chatbot will now remember user details including basic information, hobbies, and prompt history. The chatbot’s enhanced memory will be available only to subscribers of the $20/month ChatGPT Plus service.

A demonstration video supplied by OpenAI shows a user providing information about their pets, then asking the chatbot to create an image of said pets riding on a surfboard in the ocean. Or, ChatGPT might include information about pet accommodations when asked to research a vacation destination. 

The new feature, which is optional and can be turned off, is not yet available to users in Europe or Korea. 


To add to the chatbot’s memory, the user can type in a simple fact about themselves—for example, “I love surfing and I use a Channel Islands longboard and an O’Neill wetsuit.” A “memory updated” tag then appears, meaning the information has been stored by the chatbot. Another button shows all the data points the bot has remembered, any of which can be deleted by the user at any time.

To test out the new feature, I dropped a PDF of my résumé into ChatGPT’s dialog window and asked the chatbot to remember my credentials.

I then asked it to “Please help me generate a plan for finding a job.” But ChatGPT’s answer was entirely generic and contained no specialized tactics for finding work in my field. I then asked, “Based on what you know about my skills and experience, what types of positions should I pursue?” This time it generated a list of jobs specific or adjacent to my field. So the chatbot will call up relevant facts from its memory, but the wording of the prompt matters.


Since the AI chatbot craze began last year, the bots’ lack of personal or specialized knowledge, and their tendency to make up facts (i.e., hallucinate), have limited their usefulness (though researchers have made significant progress in finding ways to stop chatbots from hallucinating). OpenAI’s new memory feature marks an attempt to make ChatGPT a more personalized and useful personal assistant.

But actively training ChatGPT to be more knowledgeable about the user creates ongoing work for the user. Other companies such as Microsoft and Google might be in a better position to personalize chatbots than OpenAI, because they likely already host a lot of the user’s personal and/or work information, which they could easily access.

There are also privacy concerns with ChatGPT’s new memory. For one, it’s unclear whether OpenAI “owns” the data users ask the bot to remember, or whether the company can use the remembered data for its own purposes, such as to train its models. (OpenAI didn’t immediately respond to my questions about ownership and privacy.)

My own privacy concerns weren’t assuaged after I tried to delete my phone number (included in my résumé) from ChatGPT’s memory. Five minutes after doing so, the chatbot dutifully included the number in one of its generated answers.


Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.