AI Gurus

20 Oct 2023

Multiple technological threads are coming together, bringing us closer to the point where a scenario I envisioned long ago could become reality: “Good morning, missionary candidate. You’ll be interviewed by C. S. Lewis.”

AI is getting close to the point where it can reliably generate videos. Some of these videos are “deepfakes” made for a range of nefarious purposes. Some, however, are “good”: one example is tech that can fix things like whether you appear to be looking at the screen. In other areas we aren’t sure yet whether the use is “good” or “bad”: video generation is reaching the point where not just the living but also the dead can be recreated (Axios).

Audio generation is even further along, as evidenced by amazing translation tech (Heygen, Lipdub), the ability to generate voices used to read articles (Matter), and the ability to edit a transcript of a podcast and have the audio changed from the transcript with a voice identical to your own (Descript). Of course, it’s also leading to a lot of fakery (NYT on “AI Obama and fake newscasters”).

AI large language models can be trained on a specific body of text. Google has recently released an AI trained on medical text, for example, and there are tools trained to provide summaries, bullet points, and the like from sermon texts. ChatGPT-4 is stunningly good, particularly for describing subjects, exploring them, and brainstorming ideas. And, of course, ChatGPT-4 can now take your voice as input and respond with generated voice.

So, imagine: an LLM trained on the entire corpus of C. S. Lewis’s work (plus, possibly, all the commentaries about him and his work), coupled with an audio generator trained on his voice. I’m not sure whether any video of Lewis exists at all, but there are still photographs, and eventually an AI might generate video based on those. At any rate, voice interaction with an AI model based on an individual should be possible very soon.

There are a variety of problems with this. Most fall into the “can we” category–questions of intellectual property rights, geopolitical issues far more significant than cartoons about Muhammad, and problems of marketing, money, power, and inequalities. Of greater interest to me is the “should we” category, and I see three principal challenges there.

First, when we express Christianity as something to be known rather than a person to be followed, we are very close to an automatable task. Already, a few churches have, as a lark, run experiments automating the service with AI. These weren’t very good. But what if the experience gets better (as tech inevitably does and is already doing), and it’s not about automating the service so much as providing an entertaining alternative with the veneer of theological soundness? The sermon can be automated, and it can come in lots of flavors.

An AI can vomit out what it has been trained on but can’t provide insights gained through the lived experience of faith. Computational prowess can mimic spiritual wisdom in the way it talks, but it can offer nothing the individual knew or learned beyond what they wrote down.

“Hacking the AI” can happen on the fringes of “what is written,” where what a person thought or believed in their soul is ambiguous. You could make Lewis say things he would never have said. Not just the profane or the blasphemous–an AI model may quote Lewis verbatim, but can it capture the nuances of his theological positions? An LLM “Lewis” could adopt theological stances–perhaps very subtle!–that he wouldn’t have actually agreed with. The risk of misrepresenting him (or any other theologian) in this form is non-trivial. It could lead people into substantial theological errors because they take the “wisdom” at face value without the critical thinking that often accompanies human-led spiritual direction. They think they’re quoting Lewis, but an LLM is not a search engine.

Second, this one-to-one process can feel like an interaction with a mentor—but it isn’t. Already, people are using AI as a kind of personal therapist. The allure of talking to a historical figure–even an AI model–could be irresistibly fascinating. As we bring a “digital resurrection” into our spiritual lives, we might find ourselves in a narrative where the line between reality and simulation becomes unsettlingly blurred. Such a personal experience has enough of a tinge of interaction that it might give us enough excuse to avoid the communal aspects of following Jesus. If you can hear a Lewis AI speak for half an hour to an hour on a theological subject you’re interested in, why go to Bible study or church?

A personal mentor available 24/7, answering all the questions you ask and none you don’t, circumvents the hard but necessary work of forming real, messy, human connections. We might ask what psychological and spiritual needs such interactions would fulfill–and which ones they would fail to. Such an AI could provide answers–but would it broach things we are avoiding? Would it challenge us? Or would it be a very comforting, unchallenging, knowledge-based spirituality that informs without ever bringing up difficult topics or asking us what we did about what we talked about last week?

We are on the cusp of an intersection between knowledge, entertainment, and spirituality, which fits right into the American/Western church–personalized, on-demand, entertaining, thought-provoking, yet barely challenging. Let’s add to this mix any change-of-life phase or wildcard event that disrupts your social circle and puts you in an entirely new place.

If you can’t go to church–if you’re in a place where a church is hard to find, or in a pandemic-style lockdown, or you’ve become disaffiliated from a regular religious structure–a “digital chapel for one” with an AI mentor (teacher? rabbi? guru? master?) could become a popular alternative. Already, people can lose themselves in short-form videos for hours. What if you could “talk” to a “spiritual giant of the past,” telling yourself that the responses are trained on what that person actually said–so it’s almost like legitimately talking to them? What if you could have Lewis, Tim Keller, Tozer, and Oswald Chambers as your “small group”? What if you were in VR (a la Apple Vision), with holograms of them, and they talked with you and amongst themselves?

What happens to the communal aspect if an AI provides individualized, personalized, tailored theological content? In terms of Christianity, how do we “one another”?

So far, AI is useful for a lot of things, but the interactions it generates can be very vanilla. It lacks “soul,” and certainly doesn’t capture what a real conversation might be like. But with some fine-tuning and work, it might be made to feel a lot more realistic. Someone who is not very familiar with these people–who perhaps knows them only through memes, soundbites, and famous quotes–might be entranced.

None of this is far-fetched. The tech to achieve it is practically here. In the long run, it may be a very niche case or become quite prevalent. What might such a future do to Christianity in the West, interaction with the lost, training, and mission mobilization?

Additional future scenarios: digital chapels with AI guests, theological degrees for AI figures, AI-generated spiritual proverbs and memes influencing cultural narratives, AI gods and prophets and spiritual leaders, AI cults, digital gnosticism (secret knowledge found in the virtual world–“out there is our destiny”), AI missionaries, AI priests for congregations, VR simulations of heaven and hell, lifehacking becoming spiritual optimization, digitally spread “heresy” viruses in religious wars between denominations, new strands of eschatology and conspiracy theory spread by popular AI figures, marketing to support the generation of new AI gurus (premium packages), gamification of spirituality to keep followers engaged, male- and female-centric religious platforms and leaders, first religious experiences through AI/VR settings, the Scriptures as memes (a meme-Bible?), AI leaders as evangelists for AI ethics/the “soul” of the machine…