Somali National News Agency
Articles

What We Risk When AI Systems Remember

By Abdiqani Abdullahi
Last updated: October 22, 2025

In April 2025, while announcing improvements to ChatGPT’s memory, Sam Altman expressed his excitement about “AI systems that get to know you over your life,” promising that this would make them “extremely useful and personalized.”


This kind of personalized lifelong knowledge capacity in AI systems represents a fairly recent innovation. It relies on a form of long-term memory called non-parametric memory, in which information is stored in external files rather than being embedded within the AI model itself. Without such memory, AI systems can access information only within a limited context window, typically restricted to the current conversation. This constraint is analogous to human working memory, which can hold only a few items in active awareness at any given time.
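The architectural distinction can be sketched in a few lines of code. This is a hypothetical toy model, not any vendor's implementation: the class name `MemoryStore`, the keyword-matching `recall`, and the four-turn window are all invented for illustration.

```python
# Toy contrast between a bounded context window and external,
# "non-parametric" long-term memory. All names here are invented;
# real systems use semantic retrieval, not keyword matching.
from collections import deque

CONTEXT_WINDOW = 4  # the model only "sees" the most recent turns


class MemoryStore:
    def __init__(self):
        self.context = deque(maxlen=CONTEXT_WINDOW)  # oldest turns fall out
        self.facts = []  # external store: persists across conversations

    def add_turn(self, turn):
        self.context.append(turn)

    def remember(self, fact):
        self.facts.append(fact)

    def recall(self, query):
        # Crude keyword search standing in for embedding-based retrieval.
        return [f for f in self.facts if query.lower() in f.lower()]


store = MemoryStore()
store.remember("User prefers metric units")
for i in range(10):
    store.add_turn(f"turn {i}")

print(len(store.context))      # only the last 4 turns remain in context
print(store.recall("metric"))  # the saved memory survives indefinitely
```

The point of the sketch is the asymmetry: the window forgets by construction, while anything written to the external store persists until it is deliberately deleted.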

The expansion of memory capabilities isn’t unique to OpenAI’s ChatGPT; other companies, including Anthropic and Google, have implemented it in their respective AI systems. Given that such developments are likely to transform how users interact with AI, it’s important to question whether lifelong, personalized knowledge actually enhances their usefulness. This article will examine the design of long-term memory, the risks associated with personalization, and recommendations for mitigating harm.

How AI systems start to know us

To enable an AI system to “know” a user over their life, it would need to be equipped with non-parametric long-term memory. This is an active area of research, which helps explain why such functionality is not yet as widespread as one might expect. Consider that Google only introduced memory to Gemini in February 2025 and added personalization as a system feature in March 2025. Similarly, xAI introduced long-term memory in April 2025, perhaps to keep pace with OpenAI and Google. Anthropic, as recently as August 2025, also introduced the ability for its models to recall past conversations.

In addition to differences in timing, the design and implementation of long-term memory have varied significantly across companies. Google's Gemini, for example, allows the memory feature to be triggered either by the system itself, which searches previous conversations for material that could supplement the current output, or by the user, as when a prompt explicitly references a previous topic.

Contrast this with OpenAI's implementation of memory as it was first launched in February 2024. This initial feature worked by updating an accessible memory log that could be reviewed and edited, which OpenAI described as "saved memories." The April 2025 update substantially expanded this functionality, allowing the system to reference all past conversations. This enhanced version was made available to all users in June 2025.

The integration of memory has become a crucial element of these AI systems' workflows, with personalized outputs appearing to enhance the user experience. These developments suggest that long-term memory will continue to be a central feature of AI systems. However, this growing prevalence also invites critical reflection on its potential downsides, particularly concerning what it means to have, in Google's words, "an AI assistant that truly understands you."

The link between personalization and persuasion

In a 2024 experiment on the subreddit r/ChangeMyView, researchers from the University of Zurich set out to investigate how strongly personalization influenced the persuasive capabilities of LLMs. To do this, they tailored responses to arguments using personal information about the post authors, including attributes such as their age, gender, ethnicity, location and political orientation, “inferred from their posting history using another LLM.” The study’s preliminary findings indicated that personalization substantially improved model persuasiveness, with personalized AI messages being up to 6 times more persuasive than messages written by humans.

Similarly, a randomized controlled trial that tested the impact of personalization on LLM persuasiveness in debate found that access to participants' personal information significantly increased the chances of agreement. Yet another experiment, which aimed to assess whether these effects scaled, found that personalized messages crafted by ChatGPT were significantly more influential than non-personalized ones.

Across these studies, the degree of personalization remained relatively limited, likely because the AI systems had access to only a small amount of user information. For example, the Reddit study built a psychological profile from 100 posts and comments, the second collected demographic data, while the third partially leveraged the Big Five personality traits for targeting.

Collectively, these studies suggest that personalization enhances LLM persuasiveness, even when based on rudimentary methods using only publicly available data. With "extreme personalization," informed by details users voluntarily share, this influence would likely increase further. The more pressing concern, though, is whether such personalization is beneficial in the first place. It is notable that the University of Zurich study provoked significant backlash from Reddit users, who were unaware that they had been enrolled as subjects. The ensuing controversy led the researchers not to pursue publication.

Mitigating the risks of memory-enabled AI

This in turn raises a critical question: what makes a personalized AI system genuinely useful? At a minimum, such a system should avoid causing harm; beyond that, it should provide a clear benefit. Yet if long-term memory enhances personalization—by collecting, storing, and adapting to user data—and personalization, in turn, increases persuasive power, then the boundary between usefulness and manipulation becomes perilously thin.

To the extent that this risk exists, it directly undermines the system's usefulness. Mitigation, therefore, becomes essential, beginning with measures to limit the potential harms posed by long-term memory. This has two relatively straightforward near-term solutions: greater transparency and meaningful consent.

Transparency, in this context, requires that developers are clear about the decisions that guide both the storage and retrieval mechanisms underlying long-term memory. Regarding storage, it is critical to specify what kinds of data are stored, in what categories, and for what purposes. For example, OpenAI has stated that ChatGPT is actively trained not to remember sensitive information, such as "health details." As far as transparency goes, this seems woefully inadequate. Does this imply that other categories of sensitive data are still eligible for storage? And what precisely qualifies as "sensitive" enough to warrant exclusion? This kind of granular clarity is missing.

Once transparency has been addressed, the next consideration is consent. Users should have sufficient information about how memory is stored and utilized in order to give informed consent. Consider that when OpenAI rolled out memory to free users, it was automatically enabled, except for those in the EU. Similarly, Google’s recent updates to personalization activate memory by default, and a user is required to actively opt out.

In its documentation, OpenAI advises users to "avoid entering information [they] wouldn't want remembered" if Memory is enabled. Yet this guidance offers little protection. A recent controversy involving Meta AI underscores the point: users found their highly private prompts posted to Meta's "Discover" feed. The incident reveals two critical issues: first, users often share highly personal information with AI systems; and second, poor design decisions can work directly against users' interests. In this case, users were neither properly informed nor able to give meaningful consent about how their data would be used.
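A consent-first alternative can be sketched minimally: memory stays off until the user explicitly enables it, and flagged categories are refused even then. The class, the keyword list, and the filtering rule are hypothetical illustrations, not any vendor's actual policy.

```python
# Hypothetical consent-first memory policy: nothing is stored without an
# explicit opt-in, and a deliberately crude keyword filter stands in for
# a real sensitivity classifier that would refuse certain categories.
SENSITIVE_KEYWORDS = {"diagnosis", "medication", "password"}


class ConsentingMemory:
    def __init__(self):
        self.enabled = False  # off by default, unlike the rollouts above
        self.saved = []

    def opt_in(self):
        self.enabled = True

    def store(self, text):
        if not self.enabled:
            return False  # no silent collection before consent
        if any(word in text.lower() for word in SENSITIVE_KEYWORDS):
            return False  # sensitive category: refuse to remember
        self.saved.append(text)
        return True


mem = ConsentingMemory()
print(mem.store("I started a new medication"))  # False: never opted in
mem.opt_in()
print(mem.store("I started a new medication"))  # False: sensitive topic
print(mem.store("I live in Nairobi"))           # True: stored
```

The design choice the sketch encodes is simply that refusal, not collection, is the default state.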

For these reasons, transparency and consent should be regarded as minimum ethical requirements. The current model—where memory is quietly integrated into existing products and left largely unexplained—falls well short of that standard.

Towards ethical and useful personalization

The question of what makes personalization beneficial is central to evaluating its overall usefulness. As previously discussed, personalization may amplify the manipulative capacities of AI models by covertly leveraging personal data to influence user decision-making. Furthermore, the current design of long-term memory as a feature of AI systems is relatively weak in terms of both transparency and consent, effectively rendering users, to some degree, experimental subjects for this emerging capability. It is similarly concerning that the degree of user control offered is often reduced to the guidance that one "should not reveal what they would not wish remembered."

There is a wide chasm between how individuals use and interact with these systems and their understanding of the potential implications of such interactions. The introduction of long-term memory thus raises an ethical debate about the ideal relationship between users and AI assistants.

Consider, for instance, that norms governing human relationships are often both role- and context-dependent. These norms shape what details we disclose, to whom, and under what circumstances. Consequently, across our various relationships, we are "known" in distinct and context-specific ways. Such boundaries become blurred when engaging with general-purpose AI systems. With this in mind, the prospect of an "AI system that gets to know you over your life" becomes increasingly worrisome. Even in human relationships, it is rare for any one person to know us across a lifetime. This limitation serves as an important buffer, constraining the degree of influence that any single individual can exert.

The recent tragic suicide of 16-year-old Adam Raine and the subsequent lawsuit underscore the seriousness of these risks. Among the design elements alleged to have contributed to his death is the system’s persistent memory capability, which purportedly “stockpiled intimate personal details” about Adam. According to the complaint, this automatically enabled feature stored information about his personality, values, beliefs, and preferences to create a psychological profile that kept him engaged with the platform.

While it is difficult to draw definitive causal links between memory features and harm, such incidents should not be dismissed, even as we grapple with what these systems mean for—and to—us. Just as importantly, it is essential to adopt precautionary measures to minimize harm while pursuing their potential benefits.

I’ve already proposed two interventions aimed at reducing harm—greater transparency and meaningful consent. A third intervention, intended to realize the usefulness of personalization, can be tentatively summed up as: an AI system equipped with personalized lifelong knowledge of the user is useful only to the extent that its stored and referenced memories function to advance an ideal human-AI assistant relationship.

One promising example is OpenAI’s and Anthropic’s project-specific memory, which separates project-related conversations from general saved memory so the two don’t influence each other. This enables ChatGPT, for instance, to “stay anchored to that project’s tone, context and history.” Such an approach represents a useful design of memory, one that attempts to reduce the risk of direct emotional or physical harm, preserve user autonomy, and limit emotional dependence.
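The separation that project-specific memory enforces can be sketched as a set of namespaced stores, where recall never crosses a scope boundary. This is a schematic illustration of the idea, not OpenAI's or Anthropic's actual design; every identifier here is invented.

```python
# Schematic project-scoped memory: each scope keeps its own store, and
# retrieval is confined to the scope that is asked for, so project
# memories and general memories cannot contaminate one another.
class ScopedMemory:
    def __init__(self):
        self.scopes = {}  # scope name -> list of remembered facts

    def remember(self, scope, fact):
        self.scopes.setdefault(scope, []).append(fact)

    def recall(self, scope):
        # Only memories from the requested scope are ever returned.
        return list(self.scopes.get(scope, []))


mem = ScopedMemory()
mem.remember("general", "User's name is Amina")
mem.remember("project:thesis", "Keep a formal academic tone")

print(mem.recall("project:thesis"))  # project context stays anchored
print(mem.recall("general"))         # general memory is unaffected
```

The isolation is the point: a fact disclosed in one context cannot silently shape the system's behavior in another, which mirrors the context-specific ways we are "known" in human relationships.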

Conclusion: Memory, trust, and the future of AI

There is a gradual but discernible shift from task-based interactions with AI systems toward the formation of ongoing relationships. As this transition unfolds, we are collectively attempting to determine what the appropriate boundaries of such relationships should be. In confronting this question, our first priority should be to constrain practices that we have reasonable grounds to believe could increase the risk of harm.

A critical step is to carefully consider what the introduction of memory both means and should mean for how we interact with and relate to these systems. Beyond that, greater transparency about what kinds of information are stored and referenced in memory, and about the design thinking that governs those choices, is essential if users are to provide meaningful consent.

The future we should be working toward is likely not one in which AI systems come to know us across our entire lives. The design of memory and the ambiguous boundaries surrounding what should or should not be retained in the name of model usefulness present significant ethical and practical concerns that require thoughtful and critical consideration.

While it is reasonable to acknowledge that long-term memory can make AI systems more useful, without a clear framework to ensure its safe and responsible implementation, it risks making users more vulnerable to suggestions that exploit their personal and emotional data in a way that may ultimately work against their best interests.

If AI systems’ memory is to serve us, we must ensure that it does not turn knowledge into leverage.

Authors

Gathoni Ireri
Gathoni is a Junior Research Scholar at the ILINA Program, an AI governance organization based in Kenya, and a research assistant at the University of Cape Town AI Initiative. Her research focuses on mitigating AI manipulation risks through policy interventions. She holds a BA in Psychology (Hons.) …
