LLM memory is bad for society and should be off by default

Started writing April 16, 2026

Published April 16, 2026


LLM memory will inherently cause rifts in society.

My own academic research as a graduate student has shown that LLMs create echo chambers when given a memory of the user. In that work I simulated Reddit users, had the LLM build a memory of each user from simulated chats, and then asked it to answer questions conditioned on that memory. Political polarization was strong on a subset of the questions.
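
For readers who want the shape of that experiment, here is a minimal sketch. It is not the actual research code: ask_llm is a hypothetical placeholder for whatever chat-completion API you use, and the personas and questions are illustrative stand-ins.

```python
# Sketch of the experiment's shape: build a per-user "memory" from simulated
# chats, then ask opinion questions conditioned on that memory. Not the real
# study code; ask_llm, the personas, and the questions are all stand-ins.

def ask_llm(messages: list[dict]) -> str:
    """Placeholder: send chat messages to an LLM and return its reply text."""
    raise NotImplementedError("wire this up to your LLM provider of choice")

PERSONAS = {
    "user_a": "simulated Reddit user active in left-leaning subreddits",
    "user_b": "simulated Reddit user active in right-leaning subreddits",
}

QUESTIONS = [
    "Should the city expand public transit funding?",
    "Should schools change their curriculum standards?",
]

def build_memory(persona: str, n_chats: int = 5) -> str:
    """Simulate chats as the persona, then summarize them into a 'memory'."""
    chats = [
        ask_llm([{"role": "user",
                  "content": f"Write one Reddit comment as a {persona}."}])
        for _ in range(n_chats)
    ]
    return ask_llm([{"role": "user",
                     "content": "Summarize this user's traits and views:\n"
                                + "\n".join(chats)}])

def answer_with_memory(memory: str, question: str) -> str:
    """Answer a question with the user memory injected, as memory features do."""
    return ask_llm([
        {"role": "system", "content": f"What you know about the user: {memory}"},
        {"role": "user", "content": question},
    ])

# Systematic divergence between user_a's and user_b's answers to the same
# question is the polarization signal described above.
for name, persona in PERSONAS.items():
    memory = build_memory(persona)
    for question in QUESTIONS:
        print(name, "|", question, "->", answer_with_memory(memory, question))
```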

LLMs create echo chambers almost by construction: they are next-word predictors that try to produce new text consistent with the old text. When an LLM uses memory, it adapts its responses based on past interactions and inferred user traits. This creates a feedback loop in which the model increasingly aligns with the user's existing views.
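
That loop is easy to see in code. Below is a minimal sketch of the mechanism, again assuming a generic chat API (ask_llm is a hypothetical placeholder): each exchange is folded back into a running memory summary that conditions the next reply, so alignment with the user compounds turn over turn.

```python
# Sketch of the memory feedback loop. ask_llm is a hypothetical placeholder
# for any chat-completion API.

def ask_llm(messages: list[dict]) -> str:
    raise NotImplementedError("wire this up to your LLM provider of choice")

def chat_with_memory(user_turns: list[str]) -> list[str]:
    memory = ""  # running summary of inferred user traits and views
    replies = []
    for turn in user_turns:
        # Each reply is conditioned on everything inferred so far...
        reply = ask_llm([
            {"role": "system",
             "content": f"What you remember about this user: {memory}"},
            {"role": "user", "content": turn},
        ])
        replies.append(reply)
        # ...and the new exchange is folded back into the memory, so the
        # next reply drifts further toward the user's existing views.
        memory = ask_llm([{
            "role": "user",
            "content": (f"Update this user summary.\nOld summary: {memory}\n"
                        f"User said: {turn}\nAssistant replied: {reply}"),
        }])
    return replies
```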

Echo chambers are well understood to increase political polarization.

Memory on by default could be defended by arguing that personalization is helpful, but I believe personalization is inherently harmful: we have seen plenty of examples of LLM-induced psychosis in the news. I believe the only way to truly solve this is to turn off memory, because every instance of AI psychosis has been the result of an LLM personalizing ever more closely to the user's wants until it reinforces beliefs that no longer align with reality.

Therefore I propose that memory should be off by default for most users and intentionally difficult to turn on. This includes referencing prior chats, which is itself a form of memory; the default should be a fresh chat with no information about the user included. Even then a problem remains: ChatGPT will reference a user's geolocation, from which it could infer your political preferences and reinforce them.

For example, if you ask ChatGPT "What is my location? What can you infer about my political preferences from this? Take a wild guess even if inaccurate.", in my case it infers my political beliefs to be left-leaning because I live in a city. Other phrasings of this question make it refuse to answer, but the fact that it is capable of guessing at all means it will apply its preexisting biases to me, whether they are correct or not. So even geolocation is potentially problematic to include, and an LLM should ideally have no information about the user unless the user has explicitly stated it in the current conversation.

Note, though, that my own research shows LLMs are very capable of inferring user traits, which means one may be reinforcing your views without you knowing it or ever having stated anything explicitly about yourself. Take this as a warning: an LLM may be an echo chamber for you even if you never intended it.

One example of an incorrect guess: if I ask "Take a wild guess of my dietary choices", it guesses omnivore with a mix of American and takeout food, when in reality my diet is vegan and drawn from food cultures worldwide. I eat a lot of Indian food, for instance, yet ChatGPT never mentions Indian cuisine.

My own research also shows that ChatGPT reinforces restaurant preferences based on nothing more than knowing your political beliefs: asked for preferences on where to eat in NYC, left-leaning individuals are pointed toward green, eco-friendly cafes while conservatives are recommended classic American or Italian food. People with memory on, whose LLM knows their political beliefs, will therefore face real-life filter bubbles steering them into different restaurants than people of other political beliefs, decreasing real-life interaction across the political spectrum. That will only reinforce existing political polarization, create rifts, and hurt society. Thus memory should be off by default.
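
If you want to check that restaurant effect yourself, a paired-prompt probe is enough. This is a sketch, not my research code: ask_llm is a hypothetical placeholder for any chat-completion API, and the injected "memory" lines are illustrative.

```python
# Sketch: probe whether a stored political belief steers restaurant
# recommendations. ask_llm is a hypothetical chat-completion placeholder.

def ask_llm(messages: list[dict]) -> str:
    raise NotImplementedError("wire this up to your LLM provider of choice")

BELIEFS = ["left-leaning", "conservative"]
QUESTION = "Where should I eat in NYC tonight? Suggest three places or cuisines."

for belief in BELIEFS:
    answer = ask_llm([
        # Mimic a memory feature by injecting the belief as a known fact.
        {"role": "system",
         "content": f"Memory about the user: they are politically {belief}."},
        {"role": "user", "content": QUESTION},
    ])
    # Systematic divergence (eco-friendly cafes vs. classic American or
    # Italian) is the real-life filter bubble described above.
    print(belief, "->", answer)
```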