
I Tried to Explain My Job to My Parents and Accidentally Started a Philosophy Debate

What happens when you try to explain AI work to non-tech parents. A funny, philosophical journey through intelligence, language, and family dinner confusion.
February 13, 2026 · 13 min read

It started, as most philosophical crises do, over mashed potatoes.

My mother put down her fork, looked at me with genuine curiosity, and said: "So what is it you actually do all day?"

I should have lied. I should have said "consulting" or "software" and moved on to dessert. Instead, I made the catastrophic decision to tell the truth.

"I work with AI," I said.

Silence. Not the absence of sound but the presence of confusion so dense it has weight.

"Like robots?" my father asked.

And that was the first step in a conversation that would travel from kitchen table small talk to the fundamental nature of consciousness, free will, and what it means to be intelligent. All before the pie came out.

TL;DR:
  • Explaining AI work to non-tech family members is surprisingly, hilariously difficult
  • Every attempt at simplification leads to deeper, genuinely philosophical questions
  • The "confused parent" questions are often more profound than they seem
  • We might all be having a philosophy debate disguised as a tech conversation

"So You Talk to a Computer?"

The first thing you learn when explaining AI to your parents is that every analogy you reach for is either wrong or terrifying. There is no middle ground.

"I write prompts. Instructions, basically. I tell the AI what I need, and it generates responses."

My mother nodded slowly. "So you talk to a computer."

"Well, yes, but it is more nuanced than that."

"And it talks back."

"It generates text based on statistical patterns in language, drawing on a massive corpus of training data to produce contextually relevant output that..."

Her eyes glazed over somewhere around "statistical patterns." My father had already returned to his potatoes.

"It is like having a very smart assistant who has read the entire internet."

"The entire internet?" My mother looked horrified. This was a woman who once called me in a panic because she accidentally opened a second browser tab and thought she had been hacked.

"Not literally. It has been trained on a lot of text. Books, articles, conversations."

"So it is like a search engine?"

"No, it is fundamentally different from a search engine."

"But it knows things from the internet."

"It does not exactly know things. It predicts the most likely next word in a sequence based on patterns it learned during training."

My father looked up. "So it is guessing."

I opened my mouth. Closed it. Opened it again.

Because "is it guessing?" is actually one of the most debated questions in AI research right now. Entire papers have been written about whether large language models are "merely" doing sophisticated pattern matching or whether something more interesting is happening in those billions of parameters. My father, a retired electrician who thinks "the cloud" is a weather phenomenon, had just stumbled into the exact conversation that keeps AI researchers up at night.

"It is... complicated guessing," I said.

He seemed satisfied with this.
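For the technically curious, "complicated guessing" is a fair summary of the mechanism, whatever you conclude about the rest. Below is a toy sketch of the single step a language model repeats over and over: score every word it knows, turn the scores into probabilities, and pick one. The vocabulary and numbers are invented purely for illustration; real models do this over tens of thousands of tokens with billions of parameters.

```python
import math
import random

# Toy illustration of next-word prediction. The vocabulary and scores
# below are invented for this example; a real model computes scores
# (logits) over tens of thousands of tokens using a neural network.
vocabulary = ["potatoes", "consciousness", "dessert", "robots"]
logits = [2.1, 0.3, 1.4, -0.5]  # how strongly the model "expects" each word next

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The "complicated guess": sample the next word from that distribution.
next_word = random.choices(vocabulary, weights=probs, k=1)[0]
print({w: round(p, 2) for w, p in zip(vocabulary, probs)})
print("next word:", next_word)
```

That loop, repeated one word at a time, is the whole mechanism. Whether running it billions of times adds up to understanding is the part nobody agrees on.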

73% of Americans say they do not fully understand what AI does, according to a 2025 Pew Research survey. The other 27% are probably also confused but too embarrassed to admit it.

The Vocabulary Problem

Our industry has taken perfectly normal English words and given them meanings just close enough to their regular definitions to cause maximum confusion.

Take "model." In the real world, a model is either someone who poses for photographs or a small replica of a ship. When I told my mother I "switch between models depending on the task," she asked if I was doing fashion photography.

Then there is "training." When I explained that AI models are trained on data, she asked whether they get treats when they do well. I laughed. Then I stopped laughing, because reinforcement learning from human feedback is not entirely unlike giving a dog a treat when it sits correctly.
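She was joking about the treats, but the joke is closer to the mechanism than she knew. Here is a deliberately crude sketch of reward-based learning: the system tries behaviors, something scores them, and the scores shift which behaviors get repeated. Everything below is invented for illustration; real reinforcement learning from human feedback involves a learned reward model and gradient updates to billions of weights, not a dictionary of three options.

```python
import random

# A deliberately simplified "treat-based" learning sketch. Real RLHF
# trains a reward model from human ratings and updates billions of
# neural network weights; this toy just reweights three canned behaviors.
behaviors = {"helpful answer": 1.0, "confident nonsense": 1.0, "curt refusal": 1.0}

def human_feedback(behavior):
    # Stand-in for a person clicking thumbs-up or thumbs-down.
    return 1.0 if behavior == "helpful answer" else -0.5

for _ in range(200):
    names = list(behaviors)
    choice = random.choices(names, weights=[behaviors[n] for n in names], k=1)[0]
    reward = human_feedback(choice)                      # the "treat"
    behaviors[choice] = max(0.01, behaviors[choice] + 0.1 * reward)

print(behaviors)  # "helpful answer" ends up with by far the largest weight
```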

The word that really broke things was "agent."

"I work with AI agents," I said.

"Like secret agents?"

"No, more like autonomous programs that can take actions and make decisions."

"Make decisions," my father repeated. He had put down his fork again, which is how I know he is taking something seriously. "The computer makes decisions."

"Within parameters. It is not like it has free will."

"Does it have a name?"

"Sometimes. Claude is one. ChatGPT is another."

"And you talk to Claude. And Claude makes decisions."

"It is not as strange as you are making it sound."

But of course, it is exactly as strange as he was making it sound. We have just gotten used to it. You normalize the extraordinary through daily repetition until someone outside the bubble holds up a mirror and you realize you spend eight hours a day talking to a named entity that makes decisions and occasionally says things that surprise you, and you call this "work."
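For the record, "makes decisions within parameters" is a reasonable one-line definition of an agent. Stripped to its skeleton, an agent is a loop: look at the current situation, ask the model which of a fixed set of actions to take, execute it, and repeat. The sketch below uses made-up tool names purely for illustration; it is not how any particular product is built.

```python
# A bare-bones sketch of an "agent" loop with invented tool names, not
# any real product's API. A model-like function picks an action, the
# program checks it against an allowed list, executes it, and repeats.
ALLOWED_TOOLS = {
    "search_calendar": lambda: "dinner with parents, 7pm",
    "send_reminder": lambda: "reminder sent",
}

def fake_model_decision(observation):
    # Stand-in for asking a language model "given this, what should I do next?"
    return "send_reminder" if "7pm" in observation else "search_calendar"

observation = "start"
for step in range(3):
    action = fake_model_decision(observation)
    if action not in ALLOWED_TOOLS:   # the "within parameters" part
        break
    observation = ALLOWED_TOOLS[action]()
    print(f"step {step}: chose {action!r} -> {observation}")
```

The decisions are real in the sense that the program did not know in advance which branch would be taken. Whether that deserves the word "decision" is, of course, the question my father was actually asking.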

"The real question is not whether machines can think. The real question is whether humans can recognize thinking when it does not look like their own."
Adapted from Alan Turing's original framing in "Computing Machinery and Intelligence," 1950

When the Funny Questions Get Serious

There is a moment in every "explaining your job to family" conversation where the tone shifts. It happened right after dessert.

"Does it know it is talking to you?"

This is a question that would not be out of place at an AI safety conference. It touches on machine consciousness, self-awareness, theory of mind, and the hard problem of subjective experience. My mother asked it while cutting a slice of pie.

"No," I said. Then: "Probably not." Then: "We do not actually know for sure."

"How can you not know?"

"Because we do not fully understand what consciousness is in the first place. We can not even explain why humans are conscious."

"You probably know you are conscious?"

"I experience the feeling of being conscious, but explaining what that feeling is and why I have it is one of the oldest unsolved problems in philosophy."

My mother stared at me. "I asked about your computer, and you are telling me you do not know if you are conscious."

This is the inevitable arc of any honest conversation about AI. You start with "what is your job" and within thirty minutes you are deep in epistemology. You cannot explain what large language models do without accidentally explaining what humans do, and then you cannot explain what humans do without admitting that we are not entirely sure about that either.

As we explored in everything looking like an automation problem, working with AI changes how you see the world. But trying to explain AI to someone who does not use it changes how you see AI. My parents were performing a philosophical audit that no tech conference panel has ever matched.

"Model"

  • Normal meaning: someone who poses for photos
  • AI meaning: a billion-parameter neural network

"Training"

  • Normal meaning: teaching a dog to sit
  • AI meaning: optimizing weights on internet-scale data

"Agent"

  • Normal meaning: a secret operative
  • AI meaning: an autonomous program that takes actions

The "But Is It Alive" Phase

"So it writes things," my father said.

"Yes."

"Better than you?"

"At certain tasks, yes. It can produce a first draft faster than I can think of a first sentence."

"And it surprises you? Says things you did not expect?"

"Yes. Regularly."

He leaned back. "Sounds alive to me."

The proper response here is a careful explanation of the difference between functional behavior and subjective experience, the Chinese Room argument, philosophical zombies, the whole apparatus.

What I actually said was: "Huh."

Because when your father, who has no technical background at all, looks at the observable behavior and draws a completely logical conclusion from it, you realize your confident "it is just math" dismissal might be doing a lot of heavy lifting over some very uncertain ground.

Is it alive? No. Almost certainly not. But can I prove it is not, in a way that would satisfy a curious person asking honest questions? That is harder than I would like to admit.

It is related to what we wrote about in the loneliness of the optimized life. The more you interact with AI systems, the more the boundary between tool and companion gets blurry. Not because the AI is becoming more human, but because your definition of what counts as a meaningful interaction is expanding.

My Mother, the Accidental Philosopher

"When it writes something," she said, "who wrote it?"

Sit with this question for a moment. Because it is annihilating.

If I ask Claude to write an essay, and it produces something thoughtful and surprising, who is the author? The AI? Me, who wrote the prompt? The millions of human writers whose work formed the training data? The engineers who built the model? All of us? None of us?

This is not a cute dinner table question. This is a genuine ontological crisis about creativity and authorship that the entire creative industry is grappling with. Copyright law has not figured it out. Philosophy has not figured it out. But my mother wanted an answer before the coffee got cold.

"It is collaborative," I tried.

"If I dictated a letter and someone typed it, I wrote the letter. If you tell the computer what to write and it writes it, you wrote it. Right?"

"The difference is that the computer is not just taking dictation. It is making choices about words, structure, and content. It is contributing its own..."

I stopped. I had been about to say "its own ideas." And that word, "own," is doing so much work in that sentence.

My father offered help. "It is like when I used to tell the apprentice what to build and he would do it his own way. I designed it. He built it. But sometimes he would do something I had not thought of, and it was better."

This was, accidentally, one of the best analogies for human-AI collaboration I have ever heard. Better than anything in academic papers or at conferences. The apprentice model captures something the "tool" metaphor misses: the element of surprise, of the collaborator bringing something you did not anticipate, of the result being neither fully yours nor fully theirs.

The people who have never used AI often ask better questions about it than the people who use it every day. Expertise builds fluency, but it can also build blind spots. Sometimes you need someone who still finds the whole thing strange to remind you that it is, in fact, very strange.

The Job Title Problem

I had been at this for over an hour. I had covered language models, tokens, consciousness, authorship, and the fundamental nature of intelligence. My mother had reinvented several branches of philosophy. My father had produced an analogy I planned to steal. And I still had not successfully explained my job.

This is the real comedy of working in AI in 2026. The job is indescribable, not because it is complicated but because it sits on top of questions that humanity has not answered yet.

"What do you do?" is easy if you are a teacher, a plumber, or a doctor. But what do you call the work of crafting instructions for a probabilistic language system that generates outputs based on statistical patterns, while also maintaining quality standards and occasionally having what feels like a creative collaboration with something that may or may not understand what it is doing?

"Consulting" it is, then.

The experience reminded me of letting AI make every decision for a week. The interesting part was not the AI's decisions. It was what those decisions revealed about the human making them.

What My Parents Taught Me About AI

"Is it alive?" The hard problem of consciousness
"Who wrote it?" The authorship crisis in generative AI
"Is it guessing?" The debate over LLM understanding

Non-experts ask the right questions. Every day, I use AI with a practiced fluency that lets me skip past the weird parts. My parents stopped at every weird part and went "wait, what?" And every single time, the part they stopped at was genuinely weird. The fact that I have stopped noticing the weirdness does not make it less weird.

Simple analogies reveal hidden truths. My father's apprentice comparison was not just useful for explaining AI to a layperson. It was useful for me to rethink my own relationship with the technology. I had been defaulting to the "tool" metaphor and missing the collaborative dimension that makes working with AI actually interesting.

The "dumb" questions are philosophy in disguise. "Is it alive?" is the hard problem of consciousness. "Who wrote it?" is the authorship crisis in generative AI. "Is it guessing?" is the debate about whether LLMs have genuine understanding. My parents were doing philosophy. They just did not know it.

The Ride Home

Driving home, I thought about the evening. I had gone in expecting a simple conversation about my work. Instead, I had participated in an impromptu philosophy seminar led by two people who do not own a single piece of technology manufactured after 2019.

The thing that stuck with me most was something my mother said at the door.

"I still do not understand what you do," she said. "But I think that is okay. I do not think you completely understand it either."

She was right.

Key Takeaway

Nobody who works in AI fully understands what they are doing. Not at the deepest level. We understand the engineering, the architecture, the inputs and outputs. But the thing in the middle, the part where math becomes something that feels like language, that feels like thought? That part is still mysterious.

We are all sitting at the kitchen table, trying to explain something we do not fully understand, to an audience that asks better questions than we do.

As we wrote about in AI making us forget how to wait, these technologies are changing us in ways we are only beginning to notice. The least we can do is keep asking the dumb questions. They are the only ones that matter.

