OpenAI is rolling out what it calls a memory feature in ChatGPT. The popular chatbot will be able to store key details about its users to make answers more personalized and “more helpful,” according to OpenAI. These can be facts about your family or health, or preferences about how you want ChatGPT to talk to you so that, instead of starting on a blank page, it is armed with useful context.
As with so many tech innovations, what sounds cutting edge and useful has a dark flip side: It could blast another hole into our digital privacy and — maybe — push us further into the echo chambers that social media forged.
For years, artificial intelligence firms have been chasing new ways of increasing chatbots’ memory capacity to make them more useful. They are also following a roadmap that worked for Facebook: gleaning personal information to better target users with content that keeps them scrolling.
OpenAI’s new feature — which is being rolled out for both paying subscribers and free users — could also make its customers more engaged, benefiting the business. At the moment, ChatGPT’s users spend an average of seven and a half minutes on the service per visit, according to market research firm SimilarWeb. That makes it one of the stickiest AI services available, but the metric could go higher. Time spent on YouTube, for instance, is 20 minutes for each visit. By processing and retaining more private information, OpenAI could boost those numbers and stay ahead of competing chatbots from Microsoft, Anthropic and Perplexity.
But there are worrying side effects. OpenAI states that users will be “in control of ChatGPT’s memory,” but also that the bot can “pick up details itself.” In other words, the chatbot could choose to remember certain facts that it deems important. Customers can go into ChatGPT’s settings to delete whatever they want it to forget, or shut down the memory feature entirely. But the feature will be on by default, putting the onus on users to turn it off.
Collecting data by default has been the setup for years at Facebook, and the expansion of memory features could become a privacy minefield in AI if other companies follow OpenAI’s lead. The ChatGPT developer says it uses people’s data only to train its models, but other chatbot makers may be far looser. A recent survey of 11 “romance” chatbots found that nearly all of their operators said they might share personal data with advertisers and other third parties, including details about people’s sexual health and medication use, according to the Mozilla Foundation, a nonprofit that promotes online transparency.
Here is another unintended consequence that has echoes of Facebook: A memory-retentive ChatGPT that is more personalized could reinforce the filter bubbles people already find themselves in thanks to social feeds that for years have fed them a steady diet of content confirming their cultural and political biases.
Imagine ChatGPT logging in its memory bank that I support a certain political party. If I then asked the chatbot why that party’s policies were better for the economy, it might prioritize information that supports the party line and omit critical analysis of those policies, insulating me from valid counterarguments.
If I told ChatGPT to remember that I am a strong advocate for environmental sustainability, my future queries about renewable energy sources might get answers that neglect to mention that fossil fuels can sometimes be viable. That would leave me with a narrower view of the energy debate.
OpenAI could tackle this by making sure ChatGPT offers diverse perspectives on political or social issues, even when they challenge a user’s prejudices. It could add critical-thinking prompts that encourage users to consider viewpoints they have not yet expressed. And in the interests of transparency, it could also tell users when it is giving them tailored information. That might put a damper on its engagement metrics, but it would be a more responsible approach.
ChatGPT has experienced gangbusters growth, pushed for user engagement and is now storing personal information, which makes its path look a lot like the one Mark Zuckerberg once trod with similarly noble intentions. To avoid the same toxic side effects his apps have had on mental health and society, OpenAI must do everything it can to stop its software from putting people into ever-deeper silos. Otherwise, the very idea of critical thinking could become dangerously novel for humans.