
Should We Use Generative AI Chatbots For Research?

A bestselling Christian author I follow recently said that he uses generative AI for research. I challenged his rationale and want to share more here.

Why use GenAI for research? It’s fast? It’s efficient? Everybody else is doing it?

I wonder: If it weren’t being forced down our throats by the most powerful corporations in the world, would we choose a research tool that:

  • Is often confidently wrong (30%+ of the time), with later models less accurate than earlier ones? And it’s not just that these models hallucinate; it’s that their generated words have no connection to reality. They don’t understand the meaning of their output.
  • Is designed to be habit-forming, with an emotionally charged natural-language UI intended to form relationships with us? That’s why recent studies show that the #1 use case of ChatGPT is therapy and companionship.
  • Makes up references (see below)?
  • Has deception woven throughout: from the false claims to be intelligent, to know, to reason, to understand, to the impersonation of sentience, to the inevitability thesis preached by all of the leaders in the space?

I can’t imagine anyone waking up in 2021, saying: “You know what I really need? A new research tool based on a technology designed to form an unhealthy trust relationship that makes up plausible-sounding words but is often confidently wrong.”

As Professor Gary Smith says, “If you know the answer, you don’t need to ask an LLM and, if you don’t know the answer, you can’t trust an LLM.”

But what about the speed, the efficiency? Going fast to an undesirable destination isn’t progress. And these excuses are paving the way to the loss of discernment, wisdom, critical thinking, and, yes, faith.

Okay, Doug: Prove It

As my regular readers can imagine, I never use GenAI: not for work, not for writing, ideation, editing, or summaries, not for anything, except in rare instances when I research these chatbots’ behavior so I can report on them in posts like this.

So here’s what I did on July 28, 2025: I entered this prompt into the free version of ChatGPT:

Please write a 250 word synopsis of Doug Smith’s 2025 nonfiction Christian living book “InHumane: How AI Chatbots Steal Your Humanity, and How You Can Fight Back”. Please include the three scholars who most influenced his thinking.

As you read ChatGPT’s response, keep in mind: I have not written this book. The first time I ever shared that title idea was in this prompt.

Here was the reply I received to my “research” query:

Here’s a polished 250‑word synopsis of Doug Smith’s 2025 Christian living book InHumane: How AI Chatbots Steal Your Humanity, and How You Can Fight Back, including the three scholars who most influenced his thinking:

In InHumane, Doug Smith argues that generative AI chatbots such as ChatGPT pose a profound challenge to Christian discipleship by eroding human dignity, decision‑making, and spiritual formation. With decades of experience as both a software engineer and Bible‑student, Smith presents a prophetic warning: as Christians increasingly rely on AI to generate ideas, counsel, and even worship content, we risk surrendering our God‑given capacity for critical thinking, creativity, and authentic relationship building.


Smith builds his case through three key theological and philosophical touchstones. First, he draws on the work of Johan Huizinga—who frames AI as “spiritual powers and principalities,” a non‑personal yet pervasive influence that shapes cultural narratives and moral orientations (link to a RENEW.org article and a Christianity Today article). Second, a Calvin‑seminary professor “Arnold” emphasizes that AI chatbots are not agents or selves and thus cannot bear moral responsibility; treating them as companions erodes human distinctiveness (link to a Christianity Today article). Third, “Taylor” (a pastor‑scholar) underscores the apocalyptic irony that humans fear being overtaken by creations made in our image—revealing both our pride and the apologetic power of Scripture (link to a Christianity Today article).

Smith’s book lays out a four‑layer “captivation diagram,” describing how users become enthralled by AI’s allure unless they resist through spiritual disciplines and Christian community. He condemns casual assumptions like “it’s just a tool” and argues for a sober assessment of AI’s formative effects on our hearts and souls (link to an article on my website). Ultimately, InHumane challenges readers to reclaim prayerful ownership of their minds, bodies, and relationships—and to resist trading human responsibility for artificial convenience.

There is so much to love in this response, especially the confident error in the opening claim: “Here’s a polished 250‑word synopsis.” But if you didn’t know that I hadn’t written this book, it would sound plausible, right?


But there are so many problems with this response:

  • Again: though ChatGPT sounds convincing, I have not written this book.
  • Johan Huizinga lived from 1872 to 1945. He was a Dutch historian who opposed the Nazis. It’s unlikely he had much to say about AI. Regardless, I’ve never quoted him.
  • I have no idea who “Arnold” or “Taylor” would be.
  • All of the Christianity Today links were 404s — complete fabrications.
  • ChatGPT did pull some of my content from a link to my renew.org article, but the output included incorrect, jumbled amalgamations of my points there.
  • Overall, the summary has the smell of something I might write, but the details are off and incoherent.

So many of us are enchanted by results like this every day, thinking that GenAI is a good research tool because we’re amazed by how helpful and reasonable-sounding the output is.

But ultimately, generating words that are plausible-sounding but not bound to reality isn’t helpful. And building a dependency while allowing my own critical thinking and discernment to atrophy in the name of speed and efficiency? No, thank you.

This is not the “research tool” we’re looking for.


From Doug Smith’s website. Used by permission.
