{"id":450452,"date":"2025-10-22T06:43:14","date_gmt":"2025-10-22T03:43:14","guid":{"rendered":"https:\/\/sonna.so\/en\/?p=450452"},"modified":"2025-10-22T06:43:14","modified_gmt":"2025-10-22T03:43:14","slug":"what-we-risk-when-ai-systems-remember","status":"publish","type":"post","link":"https:\/\/sonna.so\/en\/what-we-risk-when-ai-systems-remember\/","title":{"rendered":"What We Risk When AI Systems Remember"},"content":{"rendered":"<p>In April 2025, while announcing\u00a0improvements to ChatGPT&#8217;s memory, Sam Altman expressed his excitement about &#8220;AI systems that get to know you over your life,&#8221; promising that this would make them &#8220;extremely useful and personalized.&#8221;<\/p>\n<p>This kind of personalized lifelong knowledge capacity in AI systems represents a fairly recent innovation. It involves a form of long-term memory called\u00a0non-parametric memory, in which information is stored in external files rather than being embedded within the AI model itself. By default, AI systems could access information only within a limited context window, typically restricted to the current conversation. This constraint is analogous to human\u00a0working memory, which can only hold a few items in active awareness at any given time.<\/p>\n<p>The expansion of memory capabilities isn\u2019t unique to OpenAI&#8217;s ChatGPT; other companies, including\u00a0Anthropic\u00a0and\u00a0Google, have implemented it in their respective AI systems. Given that such developments are likely to transform how users interact with AI, it&#8217;s important to question whether lifelong, personalized knowledge actually enhances their usefulness. 
This article will examine the design of long-term memory, the risks associated with personalization, and recommendations for mitigating harm.<\/p>\n<h1 id=\"How-AI-systems-start-to-know-us\">How AI systems start to know us<\/h1>\n<p>To enable an AI system to \u201cknow\u201d a user over their life, it would need to be equipped with non-parametric long-term memory. This is an active area of research, which helps explain why such functionality is not yet as widespread as one might expect. Consider that Google only introduced memory to Gemini in\u00a0February 2025\u00a0and added personalization as a system feature in\u00a0March 2025. Similarly, xAI introduced long-term memory in\u00a0April 2025, perhaps to keep pace with OpenAI and Google. Anthropic, as recently as\u00a0August 2025, also introduced the ability for its models to recall past conversations.<\/p>\n<p>In addition to differences in timing, the design and implementation of long-term memory have varied significantly across companies. For example, Google\u2019s Gemini allows the memory feature to be\u00a0triggered by the system\u00a0itself, which searches previous conversations for material that can supplement its output, or initiated by the user, for example when a prompt explicitly references a previous topic.<\/p>\n<p>Contrast this with OpenAI&#8217;s\u00a0implementation of memory, as it was first launched in February 2024. This initial memory feature worked by updating an accessible memory log that could be reviewed and edited, which OpenAI described as \u201csaved memories.\u201d The\u00a0April 2025\u00a0update substantially expanded this functionality, allowing the system to reference all past conversations. This enhanced version was made available to all users in June 2025.<\/p>\n<p>The integration of\u00a0memory has become a crucial element\u00a0of these AI systems\u2019 workflows, with personalized outputs appearing to enhance the user experience. 
These developments suggest that long-term memory will continue to be a central feature of AI systems. However, this growing prevalence also invites critical reflection on its potential downsides, particularly concerning what it means to have,\u00a0in Google\u2019s words, &#8220;an AI assistant that truly understands you.&#8221;<\/p>\n<h1 id=\"The-link-between-personalization-and-persuasion\">The link between personalization and persuasion<\/h1>\n<p>In a\u00a02024 experiment\u00a0on the subreddit r\/ChangeMyView, researchers from the University of Zurich set out to investigate how strongly personalization influenced the persuasive capabilities of LLMs. To do this, they tailored responses to arguments using personal information about the post authors, including attributes such as their age, gender, ethnicity, location and political orientation, \u201cinferred from their posting history using another LLM.\u201d The study&#8217;s preliminary findings indicated that personalization substantially improved model persuasiveness, with personalized AI messages being up to 6 times more persuasive than messages written by humans.<\/p>\n<p>Similarly,\u00a0a randomized controlled trial\u00a0that tested the impact of personalization on LLM persuasiveness in debate found that access to participants&#8217; personal information significantly increased the chances of agreement. Yet\u00a0another experiment, which aimed to assess whether these effects scaled, found that personalized messages crafted by ChatGPT were significantly more influential than non-personalized ones.<\/p>\n<p>Across these studies, the degree of personalization remained relatively limited, likely because the AI systems had access to only a small amount of user information. 
For example, the Reddit study built a psychological profile from 100 posts and comments, the second study collected demographic data, and the third partially leveraged the Big Five personality traits for targeting.<\/p>\n<p>Collectively, these studies suggest that personalization enhances LLM persuasiveness, even when based on rudimentary methods using only publicly available data. With \u201cextreme personalization\u201d\u2014informed by details users voluntarily share\u2014this influence would likely increase further. The more pressing concern, though, is whether such personalization is beneficial in the first place. It is notable that the University of Zurich study provoked\u00a0significant backlash\u00a0from Reddit users, who were unaware that they had been enrolled as subjects. The ensuing controversy led the researchers not to\u00a0pursue publication.<\/p>\n<h1 id=\"Mitigating-the-risks-of-memory-enabled-AI\">Mitigating the risks of memory-enabled AI<\/h1>\n<p>This in turn raises a critical question: what makes a personalized AI system genuinely useful? At a minimum, such a system should avoid causing harm; beyond that, it should provide a clear benefit. Yet if long-term memory enhances personalization\u2014by collecting, storing, and adapting to user data\u2014and personalization, in turn, increases persuasive power, then the boundary between usefulness and manipulation becomes perilously thin.<\/p>\n<p>To the extent that this risk exists, it directly undermines the system\u2019s usefulness. Mitigation, therefore, becomes essential, beginning with measures to limit the potential harms posed by long-term memory. 
This has two relatively straightforward near-term solutions: greater transparency and meaningful consent.<\/p>\n<p><main id=\"main\"><\/p>\n<div class=\"MuiBox-root css-62igne\">\n<div class=\"MuiContainer-root MuiContainer-maxWidthLg css-1qsxih2\">\n<div class=\"MuiGrid-root MuiGrid-container MuiGrid-spacing-xs-8 css-1l5mznc\">\n<div class=\"MuiGrid-root MuiGrid-container MuiGrid-item MuiGrid-spacing-xs-4 MuiGrid-grid-xs-12 MuiGrid-grid-md-8 css-1uc8nzd\">\n<div class=\"MuiGrid-root MuiGrid-item css-svts4y\">\n<div class=\"MuiTypography-root MuiTypography-body1 html-to-react-article css-3vr8u\">\n<p>Transparency, in this context, requires that developers be clear about the decisions that guide both the storage and retrieval mechanisms underlying long-term memory. Regarding storage, it&#8217;s critical to specify what categories of data are collected and stored, and for what purposes. For example, OpenAI has stated that ChatGPT is\u00a0actively trained\u00a0not to remember sensitive information, such as \u201chealth details.\u201d As far as transparency goes, this seems woefully inadequate. Does this imply that other categories of sensitive data are still eligible for storage? And what precisely qualifies as \u201csensitive\u201d enough to warrant such exclusion? This kind of granular clarity is missing.<\/p>\n<p>Once transparency has been addressed, the next consideration is consent. Users should have sufficient information about how memory is stored and utilized in order to give informed consent. Consider that when OpenAI rolled out memory to free users, it was\u00a0automatically enabled, except for those in the EU. Similarly, Google&#8217;s recent updates to personalization activate memory\u00a0by default, and a user is required to actively opt out.<\/p>\n<p>In its\u00a0documentation, OpenAI advises users to \u201cavoid entering information [they] wouldn&#8217;t want remembered\u201d if Memory is enabled. 
Yet this guidance offers little protection. A\u00a0recent controversy\u00a0involving Meta AI underscores this point: users found their highly private prompts posted to Meta&#8217;s &#8220;Discover&#8221; feed. This incident reveals two critical issues: first, users often share highly personal information with AI systems; and second, poor design decisions can work directly against users&#8217; interests. In this case, users were neither properly informed nor able to give meaningful consent about how their data would be used.<\/p>\n<p>For these reasons, transparency and consent should be regarded as minimum ethical requirements. The current model\u2014where memory is quietly integrated into existing products and left largely unexplained\u2014falls well short of that standard.<\/p>\n<h1 id=\"Towards-ethical-and-useful-personalization\">Towards ethical and useful personalization<\/h1>\n<p>The question of what makes personalization beneficial is central to evaluating its overall usefulness. As previously discussed, personalization may amplify the manipulative capacities of AI models by covertly leveraging personal data to influence user decision-making. Furthermore, the current design of long-term memory as a feature of AI systems is relatively weak in terms of both transparency and consent, effectively rendering users, to some degree, experimental subjects for this emerging capability. It is similarly concerning that the degree of user control offered is often reduced to the guidance that one \u201cshould not reveal what they would not wish remembered.\u201d<\/p>\n<p>There is a wide chasm between how individuals use and interact with these systems and their understanding of the potential implications of such interactions. 
The introduction of long-term memory thus raises an ethical debate regarding what the\u00a0ideal relationship between users and AI assistants should be.<\/p>\n<p>Consider, for instance, that norms governing human relationships are often both role- and context-dependent. These norms shape what details we disclose, to whom, and under what circumstances. Consequently, across our various relationships, we are \u201cknown\u201d in distinct and context-specific ways. Such boundaries become blurred when engaging with general-purpose AI systems. If we keep this in mind, then it becomes increasingly worrisome when we imagine an \u201cAI system that gets to know you over your life.\u201d Even in human relationships, it is rare for any one person to know us across a lifetime. This limitation serves as an important buffer, constraining the degree of influence that any single individual can exert.<\/p>\n<p>The recent\u00a0tragic suicide\u00a0of 16-year-old Adam Raine and the subsequent lawsuit underscore the seriousness of these risks. Among the design elements alleged to have contributed to his death is the system&#8217;s persistent memory capability, which purportedly &#8220;stockpiled intimate personal details\u201d about Adam. According to the complaint, this automatically enabled feature stored information about his personality, values, beliefs, and preferences to create a psychological profile that kept him engaged with the platform.<\/p>\n<p>While it\u2019s difficult to draw definitive causal links between memory features and harm, such incidents should not be dismissed, even as we grapple with what these systems mean for\u2014and to\u2014us. Just as importantly, it is essential to adopt precautionary measures to minimize harm while pursuing their potential benefits.<\/p>\n<p>I&#8217;ve already proposed two interventions aimed at reducing harm\u2014greater transparency and meaningful consent. 
A third intervention, intended to realize the usefulness of personalization, can be tentatively summed up as: an AI system equipped with personalized lifelong knowledge of the user is useful only to the extent that its stored and referenced memories function to advance an ideal human-AI assistant relationship.<\/p>\n<p>One promising example is\u00a0OpenAI&#8217;s\u00a0and\u00a0Anthropic&#8217;s\u00a0project-specific memory, which separates project-related conversations from general saved memory so the two don&#8217;t influence each other. This enables ChatGPT, for instance, to &#8220;stay anchored to that project&#8217;s tone, context and history.\u201d Such an approach represents a useful design of memory, one that attempts to reduce the risk of direct emotional or physical harm, preserve user autonomy, and limit emotional dependence.<\/p>\n<h1 id=\"Conclusion-Memory,-trust,-and-the-future-of-AI\">Conclusion: Memory, trust, and the future of AI<\/h1>\n<p>There is a gradual but discernible shift from task-based interactions with AI systems toward the formation of ongoing relationships. As this transition unfolds, we are collectively attempting to determine what the appropriate boundaries of such relationships should be. In confronting this question, our first priority should be to constrain practices that we have reasonable grounds to believe could increase the risk of harm.<\/p>\n<p>A critical step is to carefully consider what the introduction of memory both means and should mean for how we interact with and relate to these systems. Beyond that, greater transparency about what kinds of information are stored and referenced in memory, and about the design thinking that governs those choices, is essential if users are to provide meaningful consent.<\/p>\n<p>The future we should be working toward is likely not one in which AI systems come to know us across our entire lives. 
The design of memory and the ambiguous boundaries surrounding what should or should not be retained in the name of model usefulness present significant ethical and practical concerns that require thoughtful and critical consideration.<\/p>\n<p>While it is reasonable to acknowledge that long-term memory can make AI systems more useful, without a clear framework to ensure its safe and responsible implementation, it risks making users more vulnerable to suggestions that exploit their personal and emotional data in a way that may ultimately work against their best interests.<\/p>\n<p>If AI systems&#8217; memory is to serve us, we must ensure that it does not turn knowledge into leverage.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"MuiGrid-root MuiGrid-item MuiGrid-grid-xs-12 MuiGrid-grid-md-4 css-19egsyp\">\n<div class=\"MuiBox-root css-8qb8m4\">\n<h2 class=\"MuiTypography-root MuiTypography-h4 css-nnwgpb\">Authors<\/h2>\n<div class=\"MuiStack-root css-1rrerex\">\n<div class=\"MuiGrid-root MuiGrid-container MuiGrid-item MuiGrid-spacing-xs-2 MuiGrid-grid-xs-12 css-1k82yfd\">\n<div class=\"MuiGrid-root MuiGrid-item MuiGrid-grid-xs-3 css-q4iyp1\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn.sanity.io\/images\/3tzzh18d\/production\/ee74049038e9093d52a0634befcc8c8c288088ef-2155x2155.jpg?fit=max&amp;auto=format\" alt=\"\" width=\"80\" height=\"80\" data-nimg=\"1\" \/><\/div>\n<div class=\"MuiGrid-root MuiGrid-item MuiGrid-grid-xs-9 css-14ybvol\"><a class=\"MuiTypography-root MuiTypography-inherit MuiLink-root MuiLink-underlineAlways css-ch6i07\" href=\"https:\/\/www.techpolicy.press\/author\/gathoni-ireri\"><span class=\"MuiTypography-root MuiTypography-h4 css-1ytgyi9\">Gathoni Ireri<\/span><\/a>\n<div class=\"MuiTypography-root MuiTypography-body2 css-1qd0hdj\">Gathoni is a Junior Research Scholar at the ILINA Program, an AI governance organization based in Kenya, and a research assistant at the University of Cape Town AI Initiative. 
Her research focuses on mitigating AI manipulation risks through policy interventions. She holds a BA in Psychology (Hons.)\u00a0&#8230;<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><\/main><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In April 2025, while announcing\u00a0improvements to ChatGPT&#8217;s memory, Sam Altman expressed his excitement about &#8220;AI systems that get to know you over your life,&#8221; promising that this would make them &#8220;extremely useful and personalized.&#8221; This kind of personalized lifelong knowledge capacity in AI systems represents a fairly recent innovation. 
It involves a form of long-term [&hellip;]<\/p>\n","protected":false},"author":128,"featured_media":450453,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[81],"tags":[],"class_list":{"0":"post-450452","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-articles"},"_links":{"self":[{"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/posts\/450452","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/users\/128"}],"replies":[{"embeddable":true,"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/comments?post=450452"}],"version-history":[{"count":1,"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/posts\/450452\/revisions"}],"predecessor-version":[{"id":450454,"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/posts\/450452\/revisions\/450454"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/media\/450453"}],"wp:attachment":[{"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/media?parent=450452"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/categories?post=450452"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sonna.so\/en\/wp-json\/wp\/v2\/tags?post=450452"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}