Now that ChatGPT can remember all my prior conversations, I asked it to roast me.
You left Wall Street to find peace, but somehow ended up building a GPT-powered productivity empire while still low-key benchmarking your net worth against everyone you “don’t compare yourself to.”
Ouch.
It was pretty accurate and at the end it asked me if I wanted more.
Want it meaner? Punchier? Poetic?
I thought about meaner for a second.
And then chickened out.
Here’s how I learned to code in 4 weeks, with the help of AI. Nat Eliason’s Build Your Own Apps course completely leveled up my AI skills. In under 4 weeks (and without any coding experience) I was able to solve complex data problems, build interactive websites, and design lightweight apps. His next cohort (which includes live Office Hours) starts April 16th. Sign up below:
(Note: Affiliate and proud alum)
Reactions to ChatGPT’s new memory release
Last week, I wrote about OpenAI’s new feature and people had a lot of opinions.
In this post, I’ll summarize the responses:
1. The impact on human connection
2. Will this make you “average”?
3. How to handle contamination
4. Who gets to “own” our memories?
1. What happens to human connection?
It turns out that I wasn’t the only person who ran this personal experiment. A long-time reader (and AI-first professional) shared their misgivings:
I asked it "what's the most embarrassing thing about me? Make it funny." And boy did it come at me hard. I'll admit some of it was pretty funny, but it didn't leave me feeling good. I thought it was perhaps the ego stings I was recuperating from, but after a few hours of reflection, I realized it was something more. I analogized it to someone you just meet coming on too strong and familiar, and missing the context that makes a good joke at your expense land in that perfect way.
I suspect most of us don’t want our intimate conversations to be weaponized against us by a non-sentient being.
They continued:
Assuming this capability will only increase, and even more people will be using it for things like therapy, companionship, finding purpose, what does that say about the future of deep human connection? Love? What will those things be like for our kids?
Without realizing it, I’ve drawn a line on my own inputs in how I use AI. For me, AI is more like a super-charged co-worker (with EA capabilities), who’s mostly there to help with my business and my hobbies.
I’ll occasionally ask it for some marriage or parenting advice. But I’m not uploading my personal journal entries, nor am I turning to it for companionship.
I made a video further explaining my own approach:
2. Are you ok with just average?
“You are the average of the five people you spend the most time with,” is a common trope from the world of self-improvement.
But does AI bring out our unique and beautiful qualities?
Or is it the largest-scale example of reversion to the mean?
In Venkatesh Rao’s post Autoamputation Flow — How human nature is changing, he posits:
But this self is never purely ours. It is a composite, a chimeric entity made from our past behaviors and the averaged tendencies of millions of others. It is part personal shadow, part public echo. Even when an AI is trained only on our data, it is still processed through the logics of models trained on everyone else’s. Our digital double is always wearing borrowed skin.
Another reader concedes that the best assistant will be the one who knows him best. But at what cost? How much of ourselves do we actually lose in the process? Imagine how hollow and without serendipity a life would be if, for instance, the AI always knew what food I would prefer at a given moment based on my biometrics, psychology, history, and knowledge of my day. It’s a bit like giving over your music tastes to a single Algo.
Another example from my personal life: our family trip to Japan for Spring Break.
Did I use AI to help me plan our itinerary?
Not really.
As we looked for hotels, restaurants, tourist sites and shopping — the last thing we wanted was some aggregated Reddit recommendations.
We wanted curation. We wanted like-minded folks who understood our (somewhat bougie) tastes and quirks.
We weren’t about to delegate that to the “the averaged tendencies of millions of others.”
(However, in terms of planning the logistics — it was ChatGPT 4o all the way.)
3. How do you handle contamination?
The other day, a friend asked me if I could run him a Deep Research query on my ChatGPT pro account. He was looking to find a new job and wanted a detailed list of growth equity fintech firms in NYC.
Without hesitation, I accepted.
But as I was setting up the prompt, I got queasy.
I didn’t want this search to be a part of my “history.” I have no interest in or connection to growth equity fintech firms.
This question also showed up on numerous subreddits:
It got me thinking about investment analysts who cover dozens of companies. Or the marketing freelancer who serves multiple clients.
How could you ensure that your memories didn’t get crossed up in your queries?
Surely there are tools like Temporary Chats or Projects to ring-fence certain conversations.
But these solutions aren’t ideal as they still shift an additional prompting burden onto the user.
4. Who gets to own our memories?
In the post The Last Data Set, the author explains the expansiveness of our collective chat history:
this isn’t demographics. this isn’t even behavioral tracking.
this is psychographic x-ray vision.
your fears. your ambitions. your contradictions. your unfiltered cognition, timestamped, threaded, & vectorized.
we’re not talking about whether you clicked a sneaker ad. we’re talking about whether you’ve been thinking about quitting your job for 6 months but haven’t told your partner yet.
we’re talking about a longitudinal corpus of interiority, now with persistence.
The immediate consensus from OpenAI’s announcement was that memory would create an impregnable moat for ChatGPT. It would be vendor lock-in at its finest.
But some readers pushed back on this idea. Here’s an EdTech entrepreneur on the power of model specialization:
I think there’s going to be a lot of competition over who gets to store and use our “memories.” I think a lot of us will still want control. I bounce between Cursor, Windsurf, and Cline because they each do something different I like. They are competing and innovating fiercely between them, but they have different product roadmaps so they create different features. And since my code is just sitting in files, none of them really has a lock on me.
In The Law of Shitty Moats, Lex.page founder Nathan Baschez says not to underestimate the consumer, who is smart enough to understand the risk of getting trapped in a walled-off system:
Switching costs are literally a textbook source of competitive advantage for software businesses. SAP, Adobe, Oracle, and more have been milking it since the 80s. But I would hypothesize that fewer modern businesses have been able to build as strong a moat based on switching costs, because we are more sensitive to getting trapped in proprietary systems.
As for the privacy side of things, it’s too early to tell. But if we were nervous about Google owning our search data, Facebook owning our relationships, and Instagram owning our photos, then with memory we’ve fully entered a Black Mirror episode.
Let’s just say that it’s the first time I looked up how I could install an LLM locally.
You bring up some good points before diving into the systems. Similar to the PKM world, one of my first questions is: can you export your entire chat history (in case you want to leave)?
Anyway, ChatGPT's memory feature isn't a huge draw for me. I like intentionally deciding when to include more context in Claude via its Projects (requires Pro). Sure, it would be nice to reference something from another thread, but that friction is a price worth paying versus everything all at once.