Why Would You Need to Opt Out of Something You Never Opted Into?
  • Darnell Lynch
  • Jan. 5, 2026
Image by Marianna Ole: a man walking past an "out" sign

More and more in my daily use of tools — whether advanced or basic — I’m noticing the same pattern: the decision to use them in AI mode has already been made for me.

Optimistically, maybe there will come a day when I’m offered the option to opt out. But that immediately raises a question I can’t shake:
Why didn’t you ask me to opt in in the first place?

Because without that moment of choice, there’s something missing. We never captured intent. We never gathered real feedback about whether the change was needed, helpful, or even desired. We skipped the part where users get to say, “Yes, this improves things,” or “No, the old way worked better for me.”

This experience reminds me a lot of something far less technical: orange juice.

For most of our lives, buying orange juice followed a simple, unquestioned logic. Quart. Half-gallon. Gallon. These units weren't just measurements — they were mental models. So familiar that no one ever asked why milk didn't come in 2-liter bottles, or why soda didn't come in half-gallons.

Then one day, you go to the store and everything looks the same. Same shelf. Same containers. Same habits. You grab what you think is a half-gallon — only to realize later it's 52 ounces. Or 46. And that it costs more.

There was no announcement. No sign explaining the change. You still wanted orange juice — but the decision was made without you.

That’s what a lot of recent AI rollouts feel like.

I didn’t ask for AI orange juice. I just wanted orange juice. And now I’m left wondering when the decision changed, why it changed, and whether anyone was actually asked.

They opted all of us in already.

Once that happens, history suggests it rarely reverses. Shrinkflation doesn't roll back when costs come down. And forced opt-ins don't suddenly become optional once the adoption numbers look good.

One of the clearest examples of this kind of failure was Google+. Overnight, anyone with a Gmail account had a social network account too. It wasn't that the idea was bad — in many ways, it made sense. It failed because consent was skipped. You don't build belonging by defaulting people into it.

To be fair, today’s AI tools at least try to signal what’s happening. We get badges, labels, and banners telling us that “AI is involved.” But signaling isn’t the same as choice.

If AI truly makes something better, let me prove that to myself. Let me switch between versions. Let me explore the difference. Let me decide whether the new layout, workflow, or behavior actually improves my experience.

Because once the choice is removed, we’re back to the grocery shelf — staring at containers that look familiar, cost more, and quietly hold less than they used to.


Let’s keep having this conversation. Not because AI is bad, or good, or inevitable — but because tools are supposed to empower us. And empowerment starts with being asked.