It started as a joke. I was standing in front of the fridge at 7 AM, paralyzed by the choice between eggs and oatmeal, and I thought: you know what, I am going to let the machine decide. Not just breakfast. Everything. For a whole week.
The rules were simple. Every decision, from what to eat to what to wear to what to do with my evening, would be delegated to AI. I would provide the context (here are my options, here is the situation, here is my schedule) and follow whatever it said. No vetoing. No "well, but actually." Just compliance.
It seemed like a fun experiment. It was, for about forty-eight hours.
- Outsourcing every decision to AI starts out as a relief and ends as a slow existential crisis
- AI is genuinely good at choices you don't care about (lunch, routes, scheduling)
- It is genuinely bad at choices that define who you are
- Decision fatigue is real, but the cure is not handing over your entire will
Day 1: The Honeymoon
Monday morning. I woke up, opened the chat, and typed: "I have eggs, oatmeal, or leftover pad thai in the fridge. I have a busy morning with three calls. What should I eat for breakfast?"
The answer came back in two seconds. Oatmeal. High sustained energy, minimal prep time, good for a call-heavy morning. It even suggested adding a banana for potassium and a coffee timed twenty minutes before my first meeting for peak alertness.
This was... actually great? I had spent zero mental energy on the decision and ended up with a better breakfast than I would have chosen on my own (I would have eaten the pad thai standing over the sink like an animal).
The outfit decision was even better. I stood in front of my closet and typed: "Video call at 9, then working from home all day. 31 degrees outside." It told me to wear the navy polo. I had been reaching for the same gray t-shirt I have worn every Monday since 2019. The navy polo was objectively the right call. I looked like a person who has their life together, which, for the next forty-eight hours, I technically did.
By lunch, I was a convert. "Should I eat at my desk or go to the Thai place downstairs?" Go downstairs. The walk will help with afternoon focus. Order the green curry, it is high in protein and the turmeric has anti-inflammatory properties.
"The average adult makes about 35,000 decisions per day. About 34,990 of them don't actually matter."Dr. Joel Hoomans, Roberts Wesleyan University
I was freeing up 34,990 decision slots. This was going to change my life.
Day 2: Peak Optimization
By Tuesday I had a system. Wake up, dump the day's parameters into the chat (schedule, energy level, weather, what's in the fridge, what I need to get done) and let it plan everything.
It told me to wear the blue shirt (looks professional, matches well with video calls). It told me to take the longer walking route to the coffee shop (extra 1,200 steps, and the scenery is better for creative thinking). It told me to batch my emails from 2-3 PM instead of checking them all morning.
Every suggestion was reasonable. Most were better than what I would have done. The AI was like a very calm personal assistant who had read every productivity book ever written and was gently implementing all of them on my behalf. (This is basically the promise of proactive AI agents, but applied to life instead of work.)
I felt light. Free. Like someone had removed a background process from my brain that had been running since childhood.
At dinner, it suggested I call my mom instead of watching Netflix. "Based on your schedule this week, tonight offers the best window for a personal call. You have not mentioned contacting family recently."
Okay, that one was a little creepy. Like getting relationship advice from a very polite surveillance camera. But it was right. I called my mom. She was happy. She asked if everything was okay because I never call on a Tuesday. I did not tell her a chatbot guilted me into it.
The AI was three for three.
Day 3: The First Crack
Wednesday is when things started getting weird. The productivity paradox had arrived in personal form.
A friend texted asking if I wanted to grab drinks on Friday. Normally I would feel it out. Am I tired? Do I actually want to see people, or do I want a quiet night? What is the vibe? There is a whole invisible calculation that happens in about half a second, and it is based on factors I could not articulate if you asked me to.
I typed the situation into the chat instead. Friend wants drinks Friday. I have no other plans. I have already socialized twice this week.
The AI said yes. Go. Social connection is important for wellbeing, and Friday is optimal because you have no early Saturday commitments.
I said yes to my friend. And immediately felt strange about it. Not because it was the wrong decision. Because it was not my decision. The AI had evaluated my social calendar like it was optimizing a resource allocation problem, and it had determined that the marginal utility of human connection exceeded the opportunity cost of staying home.
Which is technically correct. And also a deeply weird way to decide whether to see your friend.
I also asked it whether to respond to the text with a thumbs up or an actual sentence. It recommended a sentence: "Sounds great, looking forward to it." It explained that emoji-only responses can be perceived as low-effort. I was being coached on texting etiquette by a language model. I stared at my phone for a long time after that.
Day 4: Breakfast Becomes a Crisis
Thursday morning. I asked the AI what to eat. It said oatmeal again.
I did not want oatmeal. I wanted the leftover pizza from last night. Cold. Standing up. Maybe with hot sauce.
But the rules said follow the AI. So I made oatmeal. I sat down with my optimized breakfast and my timed coffee and I felt, for the first time in the experiment, genuinely resentful. Not at the AI. At myself, for agreeing to this.
The pizza would have been a worse decision by every measurable metric. Nutritionally inferior. Less energy. Probably would have made me sluggish by 10 AM. But I wanted it. And wanting things, it turns out, is not a bug in human decision-making. It is the entire point.
To make it worse, I also asked it what to listen to while eating. It recommended a podcast about habit formation. I wanted to listen to the same three songs I have been rotating for two weeks. The AI does not understand that sometimes you need to hear the same guitar riff seventeen times in a row for reasons that cannot be explained to anyone, including yourself.
Day 5: The Robot Life
By Friday I had developed a routine that was, by any objective measure, extremely efficient. My meals were balanced. My schedule was optimized. My exercise was timed for peak benefit. My social engagements were calibrated to maintain connection without overextending.
I was also bored out of my mind.
Not because nothing was happening. Plenty was happening, all of it sensible. The problem was that nothing was happening by accident. There were no impulse decisions, no detours, no "you know what, forget the plan, let's do something stupid." Every hour of my day was allocated by a system that optimized for outcomes, and it turned out that a perfectly optimized life feels a lot like a very comfortable prison.
I went to drinks with my friend that night. The AI had approved it. But I ordered a cocktail I had never tried before (the AI would have recommended water, then wine, then a low-sugar option in that order) and told a story about the experiment.
My friend stared at me. "You are letting a chatbot decide what you eat for breakfast?"
"It also chose my shirt," I said. He looked at my shirt. "It chose wrong," he said. It had not chosen wrong. He was wearing cargo shorts with sandals. But that is not the point.
I also caught myself reaching for my phone mid-conversation to ask the AI whether I should order a second drink. I stopped. I had almost asked a machine for permission to have another beer with my friend. That was the moment I realized this experiment might be doing something to my brain.
When you say it out loud, it sounds insane. Inside the experiment, it felt logical.
Day 6: Rebellion
Saturday. The AI suggested a morning run (7 AM, before the heat), followed by a healthy brunch, followed by two hours of focused work on a personal project, followed by a museum visit (cultural enrichment, low cost, good for creative thinking).
I slept until 10. Ate cereal in my underwear watching cooking videos on YouTube. At one point I watched a twenty-minute video of a man making croissants from scratch and felt more alive than I had all week. The AI would never have recommended this. The AI does not understand the deep, primal satisfaction of watching laminated dough get folded by a stranger in Lyon.
Did absolutely nothing productive until 3 PM when I wandered to a bookshop with no particular goal and bought a novel I had never heard of based entirely on the cover art. The cover had a cat on it. That was the entire decision-making process. A cat.
It was the best day of the week.
Not because sleeping in and eating cereal is a superior lifestyle choice. It is objectively not. But because I chose it. I chose it badly, inefficiently, for no good reason at all, and that felt more like living than five days of perfect decisions made by a machine.
Day 7: What I Actually Learned
Sunday. I did not consult the AI at all. I made my own breakfast (eggs, messy, slightly burned on one side). I went where I felt like going. I texted people when I thought of them, not when an algorithm determined it was socially optimal.
And I thought about what the week had actually taught me.
AI Is Great at Decisions You Do Not Care About
What to wear when it does not matter, which route to take, when to batch emails. These are a genuine cognitive tax, and AI can eliminate them painlessly.
AI Is Terrible at Decisions That Make You You
What to do on a free Saturday, whether to see a friend, what to eat when you have a craving. These decisions are not problems to be solved. They are expressions of identity.
The Best Day Was the One Where Everything Went Wrong
Sleeping in, eating cereal, buying a random book. Zero optimization. Maximum humanity.
Decision Fatigue Is Real, But Full Automation Is Worse
The answer is somewhere in the middle. Automate the trivial. Protect the meaningful. Know the difference.
The Uncomfortable Truth
Here is what nobody selling AI productivity tools will tell you: the inefficiency is the point. The bad decisions, the impulse choices, the "I know this is not optimal but I want it anyway" moments. That is not waste. That is living.
A perfectly optimized life would be a life where every meal is nutritionally ideal, every social interaction is calibrated for maximum wellbeing, every hour is allocated to its highest-value use, and you feel absolutely nothing about any of it. Because you did not choose it. It was chosen for you, by a system that knows your patterns better than you do but understands your desires not at all.
I still use AI for decisions. Every day, actually. But I have a new rule: if I catch myself caring about the outcome, even a little, I close the chat and decide myself. Even if I decide wrong. Especially if I decide wrong.
Because the cold pizza at 8 AM was objectively a bad call. And it was the most human thing I did all week.