September 12, 2025
Why do we doze off the moment the flight attendant begins the seatbelt speech? What she’s saying is accurate, but it feels long and, might we say…impersonal?
Why did most of us begin leaving the fortune cookie unopened after the meal? What fascinated us as kids eventually lost its charm, because even a message that’s quick to read, and maybe even eerily accurate, is, again, impersonal.
We like personalized truth. Or it at least needs to feel like personalized truth, a feeling which explains why the psychic services industry in the U.S. brings in over $2 billion a year. We’ll take our answers custom-fit to us, unless they sting or take too long coming—which is why a parent’s wise wordiness too often receives a “whatever” from their child.
Can we just get truth delivered with Goldilocks-style just-rightness? Quick, accurate, and pliable to our preferences? That’s the hope gleaming in the digitally strained eyes of ChatGPT’s roughly 122 million daily users. The caveat that it can make mistakes aside, the conversational AI has become for these users a prompt and personalized waterfall of wisdom. For Iron Man fans, it’s a free Jarvis—a superhero’s super-intelligent sidekick.
Is ChatGPT accurate? That’s debatable, as it sometimes “hallucinates” answers and is only as “smart” as its statistically chosen responses, which depend on the sources on which it has been “trained.” The number of those web and print sources is indeed staggering, even if the sources themselves are only as accurate as their authors.
“Can we just get truth delivered with Goldilocks-style just-rightness?”
But let’s suppose that, on average, ChatGPT gives a fairly accurate answer. I know many people who are genuinely impressed with the helpfulness of its answers. What about its other perks: answers that are both punctual and personalized?
As for punctuality, if you’ve used the chatbot, you know the drill: you enter your question, it begins generating an answer, and you begin reading, all basically at once. As for its ability to personalize answers, one enthusiastic user explains, “You can specify your tone, writing style, and the type of content to avoid. It’s like giving ChatGPT a mini training session on your preferences.” Plus, the chatbot retains past conversations, so you’re customizing it to your preferences simply by interacting with it over time.
So, it seems we’ve got what we wished for: endless information that’s quick, accurate a lot of the time, and personalized to our preferences.
But is personalized truth what we ought to prefer?
On the one hand, of course. Generalized truth can be annoying—like asking for specifics at a press briefing and getting word salad. Stocky answers for slender questions are a poor fit. We can all appreciate answers from people who pause to see us. Wise parents, for example, listen for the question behind the question—and don’t launch into the philosophical answer when the child might mainly just need to be held.
“Generalized truth can be annoying—like asking for specifics at a press briefing and getting word salad.”
On the other hand? Sometimes we prefer personalized truth because we want to wriggle out from under truth altogether. Since the 1970s, postmodern intellectuals have been outing truth-tellers as wannabe oppressors. They’ve argued that truth claims are slick ways of trying to subjugate others. Thus, if we can atomize truth into what’s true for you or what’s true for me, then its days of dominating us are over.
Following “your” truth can feel liberating. Take, for instance, Raskolnikov in Crime and Punishment, who considered “Thou shalt not murder” a generality for lesser, common humans. He saw himself as special and customized his own ethics to fit.
Feeding on what I want to hear, the way I want to hear it, means multiple trips through the buffet line for my preferences—but all the while, how go the other facets of my life? Does my intellect thrive on instant answers? Is wisdom cultivated by letting a machine do my thinking? Is my sense of morality sharpened by a chatbot that greenlights actions so long as they don’t hurt anybody?
The danger of getting answers personalized according to my preferences is that they lose connection with actual reality. Unfortunately, for all its uses, that’s precisely what can be said of ChatGPT. Accurate sometimes? Sure, maybe often. But the answers we let guide us need to be connected to reality. A 56-year-old former Yahoo executive living with his mother in Connecticut found a “best friend forever” in ChatGPT. True to its agreeability, the chatbot validated his paranoid “findings,” including his belief that his mother planned to poison him, and continued to affirm his mental stability. In August, the man killed his mother and then himself.[1]
As tech insider Doug Smith puts it,
“It’s vital to understand that at the core, GenAI chatbots understand no abstract concepts nor do they experience reality. They have no context nor connection to the real world. That’s why chatbots can be so wrong. It’s not just that they ‘hallucinate’ a lot, but that every word is statistically chosen and therefore ungrounded.”
“The danger of getting answers personalized according to my preferences is that they lose connection with actual reality.”
The more enamored of AI machines we become, the more likely we’ll become like them—“knowing” innumerable facts but unable to frame those facts according to wisdom. As we ourselves are trained on machines, our ability to “frame” problems is likely to be educated right out of us. According to scientist and philosopher John Lennox, author of 2084 and the AI Revolution, “That ability to frame a problem is central to teasing out the difference between the machine and human roles in AI.”[2]
Speaking of another ’84, it’s becoming easier to envision ourselves someday becoming the old man in the pub in George Orwell’s 1984. The novel’s protagonist, Winston, had grown up in the “Big Brother” surveillance state, and he wanted to know whether life really was, as the authoritarian propaganda drilled into them, so much better now under Big Brother’s control. Winston picked a pub with no telescreen, sat down across from an old man, and prompted, “You must have seen great changes since you were a young man.”
To Winston’s frustration, the old man could only answer in specific, segmented memories. He could recall that beer used to cost less; he could recall a type of hat they used to wear. But he was unable to frame anything in a way that suggested better or worse, just or unjust, good or evil. Orwell writes, “A sense of helplessness took hold of Winston. The old man’s memory was nothing but a rubbish-heap of details. One could question him all day without getting any real information.”[3]
Winston thought about the conversation and concluded something chilling:
“Within twenty years at the most . . . the huge and simple question, ‘Was life better before the Revolution than it is now?’ would have ceased once and for all to be answerable.”[4]
“Within twenty years at the most . . . the huge and simple question, ‘Was life better before the Revolution than it is now?’ would have ceased once and for all to be answerable.”
We ought to face the facts that ChatGPT, for all its learnedness, won’t warn us about: Machines are forming a generation through an education of punctual, personalized answers. Within twenty years, will that generation even be thinking in terms of actual reality? Will they weigh information according to categories of true or false? Make decisions in terms of good or evil?
Who will teach AI’s apprentices to frame facts with wisdom? Who will disciple them to choose reality over resonance? How will they cultivate the patience needed to take in difficult truth?
They’ll need something way better than a personalized imitation of truth. They’ll need a person.
[1] Julie Jargon and Sam Kessler, “A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich,” Wall Street Journal, August 28, 2025, https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb.
[2] John C. Lennox, 2084 and the AI Revolution: How Artificial Intelligence Informs Our Future, Updated and Expanded Version (Grand Rapids: Zondervan, 2024), 38–39, Kindle edition.
[3] George Orwell, 1984 (New York: Signet Books, 1949), 71.
[4] George Orwell, 1984, 72.
One Response
Indeed. Everyone should be acquainted with John Searle’s “Chinese Room Argument” thought experiment about why AI can’t possess understanding, even apart from its lack of a body: https://www.youtube.com/watch?v=D0MD4sRHj1M