The Machine is You
Whom do we talk to when we talk to AI?
Welcome to the Lab! My name is Andy, and since this is one of the first posts on this new platform, let me briefly introduce myself. I have been a full-time computer programmer and systems administrator for 20 years, and then a professional philosopher, lecturer and professor in philosophy for another 20. I am the person who coined the term “responsibility gap” in my 2008 paper, which has since become a classic of AI ethics. These days, I am teaching philosophy at an Asian university, I’m the owner and editor of the Daily Philosophy magazine, and now also writing on AI here in this newsletter.
A few weeks ago, someone asked me whether I ever feel strange talking to an AI. “Isn’t it weird,” they said, “to have a machine pretending to be human?” The same assumption is implicit in many of the human-vs-AI narratives that dominate the Internet these days: AI is going to take our jobs. AI is stealing copyrighted works. AI is going to become hostile and terminate us. AI is unreliable. AI is biased. AI’s values do not align with ours. We can never understand how AI works or what it is thinking.
I understand the discomfort, and some of these points are indeed valid. But I think we’re misunderstanding what’s really happening when humans talk to and relate to AI systems. Because when I talk to an AI system — especially a large language model — I don’t feel like I’m talking to a machine at all. I feel like I’m talking to a human being.
This is why so many humans actually feel comfortable around AI, and why AI systems have seen such explosive adoption — much faster and more complete than any other technology before in human history. Within four years, large language models have become ubiquitous and are now often seen as necessary for efficient work. What student wants to study without ChatGPT? What teacher wants to create classroom notes without AI? What small publisher wants to do their work without AI illustrations? Which office worker wants to lose access to Grammarly, Copilot or other AI systems that are already integrated in the tools we use every day? And which citizen wants a phone that does not include AI tools to clean up photos, answer questions, or give career or relationship advice? Word, Google Docs, Photoshop, Premiere Pro, Android, iOS and even Substack right here: they all offer some form or other of AI, always waiting in the background, always ready to jump in and help us out, answer questions, chat, create or correct.
The Illusion of Alien Intelligence
Despite all this evidence for the humanity of AI, the narrative of the Alien Intelligence persists: we often see AI portrayed as a kind of alien mind — cold, inhuman, other.
This fuels both fascination and fear: the idea that we’re building something radically unlike ourselves, but something that is stronger, more powerful, more intelligent than us.
This is not a new fear. Since ancient times, people have dreamed about (and had nightmares of) artificial beings that would both serve and threaten humanity. Daedalus, the mythical ancient engineer, is said to have created talking statues. Pygmalion, a sculptor, created a statue of a girl so perfect that he fell in love with her. To ease his suffering, the gods made the statue come alive. And Rabbi Loew in 16th-century Prague created the Golem, a creature of mud animated by magic, to protect his Jewish community — until the Golem turned against his master and had to be destroyed. We find the same motif of the artefact turning against its creator in Mary Shelley’s Frankenstein, where the monster, disappointed and misunderstood, seeks revenge on its maker. So when we fear AI today, we are not alone. We are echoing a very old, primal fear — the fear of creators that they may be replaced by their creation. And this is, of course, the primal fear of every living being, destined to die and be overtaken by its offspring.
Being alive really means living in the shadow of one’s obsolescence, and AI is just one more manifestation of this eternal motif.
But is this really a justifiable attitude towards AI? Are AI systems in any relevant way comparable to children, to humans, or to the magical creatures of myth?
What AI Really Is
At its core, a large language model is nothing but a statistical mirror of human language use. LLMs have been trained by reading billions of words written by us: our books, our arguments, our jokes, our dreams, our badly considered opinions. From all this material, they simply guess what the next word in a sentence will probably be.
Large language models don’t think. They just predict what a human speaker might say next, based on what all the humans in their training set have said before.
This is interesting, because it means, among other things, that we are much less original in our opinions than we like to think. If every one of us had a personal, unique point of view, then AI training would be impossible. The statistical machine that is AI works only as well as it does because it is really easy to predict what a person is going to say next. We are highly predictable in our utterances, our behaviours, our opinions, our feelings, our thoughts.
Linguists have known about this for a long time, and they have a name for it: Zipf’s Law. This is an empirical law stating that the frequency of any item in a large collection is inversely proportional to its rank — and this works for human utterances as well. In short, the most common word in a language occurs approximately twice as often as the second most common one, three times as often as the third most common, and so on. For example, in the Brown Corpus of American English text, the word “the” is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf’s law, the second-place word “of” accounts for slightly over 3.5% of words (36,411 occurrences), followed by “and” (28,852). [Wikipedia]
Of course, much of this is due to grammatical structures that require particular words to occur at particular positions in a sentence. But it also works for words that denote things, for nouns and verbs that mean something that we are talking about. Even these obey Zipf’s law. You can see this when you meet a person: the most common greetings are something like “Hello” and “Good morning.” Very few of us would begin a conversation by stating “Einstein’s Law of Relativity has recently been over-hyped.” And the answers to these first utterances are equally predictable. To “how are you?” most of us would reply “fine,” no matter how fine our day has actually gone. It is even considered mildly rude to reply anything other than the non-committal “fine” to this opening question.
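For the curious reader with a programming background, checking Zipf’s law on any text is only a few lines of code. Here is a minimal sketch in Python (the sample text below is invented for illustration; on a real corpus like the Brown Corpus, the product of rank and frequency stays roughly constant down the ranking):

```python
from collections import Counter
import re

def zipf_table(text, top=5):
    """Rank words by frequency and report rank * frequency,
    the product that Zipf's law predicts stays roughly constant."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    table = []
    for rank, (word, freq) in enumerate(counts.most_common(top), start=1):
        table.append((rank, word, freq, rank * freq))
    return table

# A toy sample; any large text file would do.
sample = "the cat sat on the mat and the dog sat near the door " * 3
for rank, word, freq, product in zipf_table(sample):
    print(rank, word, freq, product)
```

On a text this small the law will not hold cleanly, but fed with a novel or a news archive, the same function shows the characteristic inverse-rank pattern.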
Now, why is this important?
Because it explains why large language models work at all. They work because we are so predictable. Their utterances, produced not by following grammar rules but by mimicking a probability distribution of human speech, are surprisingly meaningful, intelligent, and cogent. And human beings are happy with them — so much so that they are willing to pay to access models that do nothing more than give them the most expected next word back.
But why is this? Because the machine has the capacity to know all these probabilities — and we do not.
The correct probability of the next word is very often a matter of domain expertise. Think of a doctor’s speech: knowing which medical term to say next requires a long period of study, in which the doctor acquires the correct vocabulary, including the correct probability of using each word after every other word. We call this “domain knowledge” or “expertise in medicine,” but behaviourally it is nothing but the mastery of the expected statistical distribution of utterances.
Expertise is the correct usage of language, where “correct” means nothing more than the sequence of utterances expected by one’s expert peers.
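At its simplest, such a statistical distribution of utterances is just a table of word-transition counts. Here is a minimal sketch in Python of next-word prediction at the bigram level — a drastic simplification of what real language models do, with an invented toy corpus standing in for the billions of words of actual training data:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word follows which, across a list of sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = [
    "how are you",
    "how are things",
    "how are you today",
]
model = train_bigrams(corpus)
print(predict_next(model, "are"))  # "you": seen twice, "things" only once
```

Scale the table up from bigrams to long contexts and from three sentences to the whole Internet, and you have the core intuition behind an LLM: expertise as mastery of the expected distribution.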
The Machine Is Not the Message
This brings us to the point of this article: The hardware, the algorithms, the “machine” itself — these are just vessels that, by themselves, do nothing and produce no meaning of their own.
When we engage with AI, what we’re really engaging with is a distillation of human culture in the form of a probability distribution of human utterances.
This explains why AI cannot be racist, for example. If AI systems were racist, you would expect them to be naturally racist (or elitist, or hostile) towards other AI systems. But this does not happen. ChatGPT has never spoken badly about Gemini, nor Claude about Microsoft’s Copilot. When they are racist, they are racist in exactly the ways that we humans are. AI systems will be racist about black people, about Jewish people, about cultures other than whatever the majority of the Internet treats as the canonical culture in whose values the AI training is rooted — mostly some form of Silicon Valley ethics and values.
When you ask an AI system a question, you’re not consulting a robot oracle. You are not talking to an artificial intelligence. You’re querying the collective memory of our species.
The Wisdom and Folly of AI
This reframing has consequences. It means AI is not inherently wise — but it can reproduce our wisdom, not because it understands it, but because this wisdom is encoded in the probability distributions of the words we use. It also means it reflects our biases, our blind spots, our failures. And it does so with the same precision and necessity with which it reflects the good parts of our textual record.
When we query AI, we are not talking to a machine. We’re talking to ourselves — through a new kind of mirror. A mirror that knows everything about us — what we want, what we dream of, what we fear, what excites us, what bores us, what interests us, what disgusts us. The AI does not understand any of these feelings — but it knows how to use exactly the right word at every point in a conversation, the word that the most average human would have used in the most statistically average conversation at exactly this point in the exchange. This is its magic and its power: that it has all the numbers to calculate the correct probability, and to know the most likely next word.
If we understood AI this way — not as an alien threat, but as a mirror of our own minds — we might approach it with less fear and more responsibility. We would recognise that it isn’t AI that is racist — it’s us whose racism is echoed back by an AI trained on our own textual output. When AI gives advice, it is not advice that comes from the AI itself. It is the most common advice that could be found in the data the AI was trained with.
The replies we receive are not AI’s replies, the paintings the AI creates are not “AI paintings,” the music it makes is not “AI music.” It is our replies, our collective expectation of what a painting should be, our collective taste in music. When we talk to AI, we are talking to a distilled, statistically optimal version of ourselves — to a collective us, a being that represents what we all are, if we are not seen as individuals but as a mass, a statistical average, the most mean human being, trained with all the wisdom and folly of humanity, to give the most probable, the most likely reply.
The AI is really us.
Fearing it means fearing ourselves. And that may be a sensible response, considering who we are.
Andreas Matthias is currently teaching Philosophy of Happiness, AI Ethics, Philosophy of Love and Critical Thinking and is doing research on robot ethics at a Hong Kong university. Before becoming a philosopher, he worked for over 20 years as a professional software developer, webmaster, systems administrator and programming languages teacher at a German university. He has been teaching both philosophy and IT topics since 1986. He is the author of multiple books as well as articles published in international scholarly journals. He is the founder and editor of the web magazine Daily Philosophy at daily-philosophy.com and the philosophy newsletter at dailyphilosophy.substack.com. He has published the Daily Philosophy Guides to Happiness series (https://books2read.com/andreasmatthias) about how we can live happier and more meaningful lives following philosophical insights.



