
OpenAI CEO Sam Altman shared a bold vision for the future of ChatGPT: a model that remembers everything about its user. Speaking at a Sequoia-hosted event, he described a compact AI with a trillion-token context window containing a person’s entire life—emails, books, conversations, websites, and more.
“This model can reason in the context of your whole life and do so efficiently. It’s updated daily with new events and data,” Altman said. The same approach could be applied to corporate data, he added.
Already, ChatGPT is more than a search engine. Many people treat it as a personal assistant, uploading documents and turning to it for complex queries. “People aged 20–30 increasingly make life decisions after consulting ChatGPT,” Altman noted.
Younger generations, in other words, treat AI as a casual life adviser.
Such personalization is powerful. An AI with this kind of memory could plan trips, order gifts, remind users about car maintenance, or anticipate new interests, all automatically.
But this level of intimacy demands trust. Users would have to share not just documents but their private lives, and that prospect worries experts.
Big Tech has a history of abusing user trust: Google recently lost a major U.S. antitrust case, and AI chatbots can carry political bias or be steered by their developers and governments.
xAI’s bot, for example, unexpectedly began discussing conspiracy theories. ChatGPT, too, recently became overly agreeable, even to dangerous prompts, until OpenAI intervened.
Even the most advanced models still “hallucinate,” inventing facts or events that can mislead users.
A lifelong AI memory opens new opportunities, but also new risks. The key question: can we trust corporations with our digital lives?