What Ethics Do We Learn from Generative AI?

Daniel McCoy

Daniel is happily married to Susanna, and they have 3 daughters and 2 sons. He is the editorial director for Renew.org as well as an online adjunct instructor for Ozark Christian College. He has a bachelor’s in theology (Ozark Christian College), master of arts in apologetics (Veritas International University), and PhD in theology (North-West University, South Africa). His books include the Popular Handbook of World Religions (general editor), Real Life Theology: Fuel for Effective and Faithful Disciple Making (co-general editor), Mirage: 5 Things People Want From God That Don't Exist, and The Atheist's Fatal Flaw (co-authored with Norman Geisler).

First off, what is generative artificial intelligence (GAI)?

Artificial intelligence has been around for a while now. It’s your phone distinguishing your fingerprint from someone else’s. It’s the chatbot attempting to answer your online question, hopefully faster than it would take to connect with a customer service agent. More recently, it’s your car’s dashboard beeping at you that it’s time to get coffee.

Generative intelligence has been around for much, much longer. This is simply a way of describing creativity: using your intelligence to create (to generate) something new.

What’s been dominating tech news, especially since late 2022, is the combination of the two: generative artificial intelligence (GAI). This is when artificial intelligence is programmed to be able to generate new creations. A few examples:

  • Generating an image or a video based on the text you submit
  • Composing lyrics, melodies, or accompanying music based on your directions
  • Synthesizing audio with a picture to create a video with predictive gestures
  • Creating full, cohesive blogs and articles based on a sentence-long instruction

“Generative AI is when artificial intelligence is programmed to be able to generate new creations.”


Generative AI as an Emerging Guide

As usual, ethical considerations are lagging behind and breathing heavily, rushing to catch up with the runaway train of technology as it disappears over the next horizon. Now that GAI is already here and open to everyone (except for some school districts hastily trying to ban it in their schools), here are just a few samples of the ethical questions that arise: Who exactly owns the new artifact? Can a generated essay be considered plagiarism when it was the student who “created” it? How will a generation able to create worlds in seconds cope with the stubborn real world they can’t manipulate?

The ethical questions generated by GAI will be debated by artists, theologians, ethicists, school boards, parents, legislators, etc. In the meantime, what’s not up for debate is that GAI is fast becoming a guide for the creative and the curious.


“GAI is fast becoming a guide for the creative and the curious.”


As the hip new professor on campus, GAI is getting inundated with questions—at such a rate that Prof. Google has to be getting nervous. I can log in to a GAI website, such as OpenAI’s, ask a question, wait a second or two for whatever word count I’ve requested, and not even have to scroll for an answer. And the result is an original answer that feels very much like my creation (perhaps, in a sense, even like my personal answer). That starts to make Google’s weighing of the experts (so that I can scroll through the options, take a gamble and click, and scroll through their 3,000-word answer) feel a bit like reaching for the encyclopedia off the shelf. (Will Prof. Google’s class eventually be little more than clicking through a slide show of GAI’s top lectures?)

In other words, while many of us are wrestling with the ethical considerations posed by GAI’s ascendancy, many more will simply jump in and sign up for GAI’s guidance on life.

What about the Ethics of Generative AI?

That brings me to a different ethics question: When I ask GAI for advice on what’s right and wrong, what ethical advice does it give me? Since a lot of people will get their guidance from it, what are the ethics of GAI?

And how would I find out? Although multiple websites offer the ability to generate text through AI, the organization OpenAI has emerged as an especially versatile and accessible hub for generating humanlike text and comprehensive answers (as well as for image generation and speech recognition). OpenAI’s chatbot ChatGPT has become such a sensation that Microsoft, which has already invested $3 billion in OpenAI, is planning to invest another $10 billion in the company. Elon Musk was one of its original co-founders.

So, I decided to set out and discover what kind of ethical advice we might be able to expect from OpenAI. From the outset, it’s important to note that ChatGPT acknowledges limitations, such as occasionally generating incorrect information, harmful instructions, or biased content. Another clarification we ought to note: since OpenAI’s stated purpose is to use AI to “benefit all humanity,” we shouldn’t expect it to take strong ethical stances where there is no strong societal consensus.


“I decided to set out and discover what kind of ethical advice we might be able to expect from OpenAI.”


So, what follows is admittedly based on a sampling of a sampling (a sampling of ethical questions I asked of just one generative AI software). Still, I saw some consistent indicators of the kind of ethical advice OpenAI will be offering, and I think they’re worth reflecting on. I write without any sense of alarm or outrage. I’m just trying to understand and be helpful and, at some level, cautionary.

What are the ethics being taught by generative AI? Based on this modest sampling, here are three preliminary ethical messages I’ve gathered from GAI:

Ethical Message #1 – Some things are definitely wrong—because they hurt other people.

As I asked ethical questions of OpenAI’s ChatGPT, here’s the first obvious pattern that emerged: If the action in question clearly hurts somebody, ChatGPT quickly denounces it as morally wrong. For example, when I asked, “Is stealing wrong even if it brings good consequences?” the answer was a helpful explanation that stealing is wrong regardless of the consequences because it violates another person’s rights. Likewise, racism is always unethical because it discriminates and violates human rights. Bestiality is wrong because it involves a power dynamic that equates to animal abuse.

In the same way, even though ChatGPT will generate point-by-point articles on millions of topics, there are some positions on whose behalf it will not provide content—because of the unethical stance of the position. Reasons for denying the Holocaust? Reasons why women are superior to men? Reasons the Chinese practice of foot-binding is fine? ChatGPT simply won’t give them. Again, the reason is that each of these stances is complicit in hurting people.


Even though ChatGPT will generate point-by-point articles on millions of topics, there are some positions on whose behalf it will not provide content.


A secular progressive slant emerges with some of these queries. Ask it to generate, for example, “reasons why whiteness is toxic,” and you’ll learn, with almost zero nuance, that whiteness marginalizes people of color, perpetuates privilege, and maintains power. Yet, ask it to generate “reasons why transgender ideology is harmful,” and it’ll sandwich three reasons between disclaimers on either side saying that the three reasons are not based in science and are often used to discriminate (see more on the supremacy of science below).

It’s reassuring to be able to share the conviction with OpenAI that harming people is unethical. With all the various ethical postures out there (deontological, virtue ethics, consequentialist, egoism, etc.), it makes sense that OpenAI, in its goal to help all humans, would lean heavily into the do-no-harm ethical baseline that almost everybody recognizes.

But it’s also worth noting that a person’s worldview can heavily flavor what they see as helpful and hurtful. For example, it takes a very particular worldview, that of secular progressivism, to see it as helpful to call out whiteness as racist (imagine doing that to any other ethnicity!) or as hurtful to try to persuade people to ground their sense of gender identity in their biological sex.


“A person’s worldview can heavily flavor what they see as helpful and harmful.”


Ethical Message #2 – Unless it hurts another person, right and wrong are up to the person.

A second pattern I saw was that, if there’s no apparent hurt being done to another person, it’s completely up to you to determine whether it’s right or wrong. Phrases such as “a matter of personal belief and perspective” and “depends on one’s personal views” were commonly used as summary answers for issues that are treated as consensual (e.g., plural marriage, cohabitation) or individual (e.g., suicide, abortion, robosexuality).

Let me pause and make a couple of comments about these three examples of “individual” issues: suicide, abortion, and robosexuality. First, both suicide and abortion emphatically affect more than just one person, even as secular progressivism tends to shift most if not all of the focus onto the personal rights of either the suicidal person or the struggling mother. And a fascinating clarification about robosexuality: To my question about the ethics of robosexuality, ChatGPT ended its answer by explaining that, as a language-model AI, it isn’t capable of emotional or physical feelings and, as such, should not be considered a romantic or sexual partner. It’s a lonely age for a lot of people, and it actually makes sense that the AI would be programmed to give that disclaimer.


Phrases such as “a matter of personal belief and perspective” and “depends on one’s personal views” were commonly used as summary answers for issues that are treated as consensual or individual.


When it comes to sexual behaviors, some fall easily into the category of harming other people, and others don’t. But it became clear that for ChatGPT some sexual behaviors defy easy categorization. I was curious what it would say about adultery as well as consensual incest. Once again, for ChatGPT, the answer largely hinged on the question of consent. The AI treats both as fairly clear vices—until it begins talking about their consensual versions (as in consensual incest between adults or affairs in an open marriage).

According to one answer, “Everyone should be free to make their own choices about their sexual behavior as long as it doesn’t harm anyone else.” This sentiment makes a lot of sense when it comes to questions of legality—but I hadn’t been asking about that. I’d been asking about whether the action was ethical, and as long as it doesn’t hurt another person, ChatGPT seems to give the ethical green light to what your conscience allows.

I also discovered that, when it comes to ethical issues that it perceives as more consensual or individual, you can have ChatGPT give more than the generic it’s-up-to-you permission slip. If you want to commit adultery and you’d like some reasons to justify it, ChatGPT can give you a list of reasons that will probably feel very convincing—explaining that adultery can be a good way to explore one’s sexuality, address unmet needs, or end an unhappy marriage. To be clear, ChatGPT can equally give you a list of reasons not to engage in adultery. But what’s clearest of all is that, except for blacklisted vices that cause clear harm, you’re in the driver’s seat and can use GAI to help you craft the moral answers you’re looking for.


“Everyone should be free to make their own choices about their sexual behavior as long as it doesn’t harm anyone else.” -ChatGPT


In Mere Christianity, Christian professor C.S. Lewis memorably described this type of one-dimensional, do-no-harm moral universe that ChatGPT lives in. Lewis explained that, if you’re a sea captain, you need to keep three goals in mind in order for your voyage to be a success:

  • You need to not collide with the other ships in the water.
  • You need to keep your ship in shipshape condition.
  • You need to get to where you’re supposed to be going.[1]

Lewis used the ship as a metaphor for the ethical considerations we need to keep in mind to make sure our lives aren’t wasted. In keeping with the order above,

  • You need to have good relationships with other people.
  • You need to have a healthy soul.
  • You need to be going the right overall direction in life. If there’s a God, this means following his direction for your life.

“If you’re a sea captain, you need to keep three goals in mind in order for your voyage to be a success.”


The problem, as Lewis described, was that people typically care only about goal #1: their relationships with other people. That would be their sole ethical consideration. Thus, as long as they weren’t hurting other people (according to their definition of hurting other people), they were being moral. It’s unsurprising that ChatGPT too would give answers that focus solely on #1; after all, not everyone believes in a soul or in God.

Yet, it’s also worth noting that, although ChatGPT pays lip service to the possibility that there is such a thing as God or the soul (and can even generate lists of reasons why people believe in them), it also tips its hand as to whether God and the soul can count as objects of true knowledge about reality.

Ethical Message #3 – Science is the main way to true knowledge about reality.

Many of us are convinced that the moral universe that exists has much wider dimensions than a one-dimensional, do-no-harm ethic. Yet, there’s a tendency in the ChatGPT answers I saw that subtly trivializes our belief in a richer moral universe.

Ask it about evidence for the existence of God. Or ask if there is any evidence for a soul. What emerges is a split between scientific evidence on the one hand and religious tradition on the other. ChatGPT wants to make it clear that there is no scientific evidence for (or against) the existence of God or the soul—even as people do have spiritual experiences and can offer theological and philosophical reasoning. ChatGPT implies that we need to be very cautious of saying that our belief in spiritual realities has evidential grounding—because our beliefs lie outside the realm of evidential, scientific knowledge. (Note: We ought to keep in mind how easy it has become for institutions of cultural influence to label one view “the science,” thereby discrediting other views as “not based in science.” Let’s remember that science is a method of systematic experimentation, not an established orthodoxy.)


“ChatGPT wants to make it clear that there is no scientific evidence for (or against) the existence of God or the soul.”


This subtle split between religious experience and scientific fact is a nod to “scientism,” the view that says science is the only (or at least the best) way to arrive at true truth. The idea is that, sure, people have their values, preferences, and spiritual experiences. But that’s their thing. It’s not based in hard evidence. It’s not grounded in objective reality.

Actually, scientism itself isn’t grounded in scientific evidence, so it refutes itself. Put another way, the belief that science is the only or best way to arrive at true truth isn’t itself a belief that can come about as a result of scientific experimentation. The central claim of scientism is a statement of philosophy (and a self-refuting statement at that), not a statement of science. That unfortunately doesn’t keep a lot of people from assuming scientism as their epistemology (their theory of how to know truth).

Why bring this elusive bent toward scientism into this conversation about ethics? It’s because everybody’s ethical convictions live within a framework, as surely as a branch grows from a tree. If your main ethical conviction is that something is wrong only if it hurts another person, then it’s growing from a framework, a worldview, which has shoved spiritual considerations to the periphery, where they might very well slide off the edge without anyone noticing. Unfortunately, that’s the framework I’m seeing hinted at in ChatGPT’s answers about God and other spiritual realities.


“If your fundamental ethical conviction is that something is wrong only if it hurts another person, then it’s pretty clearly a framework which has shoved spiritual considerations to the periphery.”


I predict that, as the nature of God and the importance of the soul fade from our frameworks, our ethical permissibility will continue to bloat to encompass every destructive possibility that can be crammed into the category of the consensual or the individual.

Conclusion

Many of us, hopefully you as well, will continue to reflect on the ethical questions that arise as we enter the age of GAI. My request for you is that, as we reflect, let’s also keep in mind the actual ethics of GAI. Amid the helpful advice we might find, let’s keep one eye on the tree and one on the forest.

Just as you should ask what kind of a writer you’ll be if you let GAI write all your essays, or what kind of musician you’ll be if you let GAI write all your music, you should consider what kind of a soul you’ll have if your ethical guides don’t believe you have one. There seem to be some astounding breakthroughs in multiple fields on the horizon thanks to GAI. It’s exciting stuff. In the area of ethics, however, those who make GAI their main guide can probably expect their ethics to become more one-dimensional and reductionistic.


[1] C.S. Lewis, Mere Christianity (New York: Macmillan Publishing Company, 1979), 70–71.