Episode Transcript
[00:00:04] Speaker A: Hi everybody. Welcome along to another episode of the Dispatchers podcast. My name is Brendan Malone. It is great to be back with you again. And today we are carrying on with part five of our six-part series on artificial intelligence, digital narcissism and human dignity: our examination of AI from the perspective of authentic anthropology, from the Christian vision of reality. So let's just jump straight into part five. And remember, as I always keep reminding you, and I'm going to remind you again today, all glory goes to God for anything good you receive in this session; all the rubbish, that's my fault, and please God, you will forget it before you've even logged off today.
So where have we been, and what are we going to cover in these last two episodes? Well, previously we've already talked about human dignity and understanding an authentic human anthropology: what it is to be human, what it means to have human dignity. Obviously this is essential, because you have a technology in AI that is mimicking certain human traits, and that can create great confusion in people's minds as it is, let alone in a culture where there is massive confusion about what it is to be authentically human, where we're living in a society with a lack of authentic anthropology. In parts three and four we looked at morality and technology, examining and engaging with that issue that has plagued our culture from the very beginning, the desire to take control and dominate, what that has looked like in the modern era, and the implications of this for technology.
And then also, in particular, what a Christian approach, a moral approach, to technology might actually look like. What we're going to do in these final two episodes is we're really going to take a laser focus now, and we're going to apply a lot of what we've already talked about and really home in very specifically on artificial intelligence and AI-related topics. The first thing that I want to do, though, before we go any further in this episode, is to actually play you a brief reflection that was given by Pope Leo approximately a year or so ago.
And it's on this topic, and I think it's brief, but it's a profoundly beautiful place for us to start and to remind ourselves again of what we've already looked at in some of these previous episodes before we jump into taking a laser-focused examination of artificial intelligence. So let's have a listen to this reflection now.
[00:02:39] Speaker B: Human beings are called to be co workers in the work of creation, not merely passive consumers of content generated by artificial technology.
Our dignity lies in our ability to reflect, choose freely, love unconditionally, and enter into authentic relationships with others.
Artificial intelligence has certainly opened up new horizons for creativity, but it also raises serious concerns about its possible repercussions on humanity's openness to truth and beauty, the capacity for wonder and contemplation.
Recognizing and safeguarding what characterizes the human person and guarantees his or her balanced growth is essential for establishing an adequate framework for managing the consequences of artificial intelligence.
[00:03:43] Speaker A: It seems to me that that little reflection there from Pope Leo XIV is actually a really great place to start. It's a good summation of where we've already come from, and it also sort of lays a bit of a basic framework, if you like, for what I want to do in these next couple of episodes is to really get us recognizing the fact that if we're going to engage well with AI, we need some of those fundamentals in place first.
And what I want to do in this session is I want to examine just a little bit of the backstory of AI.
I don't want this to be like some sort of, you know, Debbie Downer situation. You often hear this, I think, with AI: generally, the two extremes you get are, on one side, people who are all in with no proper moral discernment, no proper prudent human discernment. They're just all in, boots and all: AI is all good, buy up, sell up, lots of it, we need more of it, etc., etc. And then there are people, often on the other side, who have just a hostility to AI.
There's a real presentation often of it as being the end of the world as we know it, as the most evil and disastrous thing that humanity's ever encountered, et cetera, et cetera. And they will brook no debate with anyone on the topic.
In actual fact, what we really need, and what I'm trying to do in these final two episodes of this podcast series, is to really say, okay, let's be discerning about this.
Are there goods here? There are actually morally good uses of AI that we can point to, and so we're going to point to some of them in this episode. And then what we'll do in the next and final episode is look at some of the evils and the risks associated with AI, and, more importantly, we'll conclude by talking about what we can actually do to take the power back and ensure that we navigate this well. The first thing I want to do, though, is give a little bit of AI's backstory so we're clear on this, and clarify a couple of important points that often get missed in this particular debate. It can be a confused debate, not necessarily confusing, but confused, just because I don't think we've really agreed on the terms, or, probably it's fair to say, necessarily understood them at the level of the cultural conversation. There's a little bit of confusion there, so I think it's important to clarify that.
Before I jump into any of that, though, one important point to recognize is that AI has absolutely been overhyped in some quarters. What we saw at the start of last year, 2025, and even earlier in 2024, were claims and promises being made about AI that were extremely grandiose. At the time, they just sounded like impressive, amazing developments in human technology.
And, you know, somehow we'd managed to come on in leaps and bounds.
Looking back at the end of 2025, though, it was apparent to anybody who was discerning that, in actual fact, AI had been overhyped in the previous 12 to 18 months. It had been completely oversold. Now, this is understandable, because the overhyping was coming primarily from two sources. First, those people who are actually invested in the business of AI: they wanted more investors to invest in their companies, or they wanted more people to become customers. So they're overhyping what they have as a product, trying to draw more investment and more sales. That's one place it was coming from. And the other side of it was people who, I guess, are just kind of tech-obsessed. They love tech, they are all in on AI, and they simply took the promises at face value and amplified them. I'm thinking in particular here of people who had social platforms and some sort of influencer-type presence, and they just went and ran with it. And in that group, I think, there were also a lot of people who are really quite enamored with the idea of AI and the power that it will give, and so they ran with it. The end result was a public who were absolutely misled about where AI truly was at and what was actually coming down the pipeline in the immediate future. As I said, by the end of the year it became apparent that AI wasn't quite delivering as some people had hyped it to be in the previous 12 to 18 months. Now, that doesn't mean, though, that AI is not powerful. That doesn't mean that there aren't risks here.
No, far from it. These things are still real. AI is still powerful, and there are still risks associated with it, and so we need to consider this well. So let's do that by first of all giving a very quick backgrounder, because I just want to clarify some points that I think are important to consider and understand here. And as I said, this is not a comprehensive history of AI by any stretch of the imagination. But it is generally accepted, as I understand things, that this guy here, John McCarthy, is like the grandpappy of AI.
Now, others, you know, attribute other people as sort of being the grandfather of this whole thing, but I think it is generally accepted that John McCarthy is kind of like the officially recognised first guy in all of this. The reason being that in 1956, as a cognitive and computer scientist, he hosted a seminar at Dartmouth College in the United States, where he was teaching at the time. And the seminar was on what he called artificial intelligence. What he meant by that phrase was the idea of making a machine behave in ways that would be called intelligent if a human were so behaving. Now, what I find interesting about this particular definition is that you can already see, effectively, what the issue, the conceit, with AI is. In fact, I've had some people comment back to me that they don't even like the term intelligence in artificial intelligence, because it creates a false impression. And I think there's real truth to that. I'm of the same ilk. I don't like the fact that it's called intelligence, because it actually isn't intelligence.
Remember, we talked about this in parts one and two: what actually constitutes authentic intelligence. And I referred specifically to Thomas Aquinas, giving a very brief whistle-stop summary of some of his key points about intelligence. And those things are obviously not present with AI. And so when you take this phrase, which has a clear ontological reality at stake, and you apply it in another scenario, to a machine, you're not talking about the same thing, and you create confusion. And I think that's the reality of this very definition: having a machine behave in ways that would be called intelligent if a human were so behaving.
But in actual fact, it is not a human that's doing these things, and so it's not intelligent. It's kind of like a company.
Let's say you have a mobile phone company and you pay your monthly bill, and they started describing everyone who paid their bill on time as their monthly winners, on the grounds that if you don't pay your bill on time, you end up losing and being sent to the debt collectors. Now, there's a certain truth in that, in that you avoid the loss of being referred to a debt-collection company, but on the flip side, you're not actually winning anything in that scenario just by paying your bills on time. And there's something similar, I think, going on with artificial intelligence. There's a very powerful piece of linguistic, verbal engineering happening here, and it has shaped people's perceptions in ways that aren't good and don't give them an accurate picture of what's really going on.
Now, on the back of this conference that he hosted, John McCarthy established a research initiative with a very specific aim: creating machines that could perform tasks normally associated with human intelligence and behaviour. And the rest, effectively, is history. Obviously this is a very brief backstory, and it is by no means even remotely close to being comprehensive. There are other players and other factors here, but it's generally understood that this is where we get the concept of AI from.
Now, there are some key points we need to understand about artificial intelligence, because, as I referred to earlier, there's been a muddying of the public conversation around them.
So when people think about AI, not only are you grappling with all of the previous sort of cultural memory and mythology around it, like we talked about, things like HAL from 2001: A Space Odyssey, or, you know, Skynet from the Terminator movies, all of that kind of stuff. The fiction, the dystopian stuff, the films and the entertainment media that sort of shaped perceptions. You're grappling with all of that, and people sort of lump it all in there.
But there's also just a general confusion, I think, about where we are at and about the different categories of things: in particular, where those who are, you know, the avowed enthusiasts of artificial intelligence, the architects, the designers, the people who are all in on this, would actually like things to go, versus where we are currently at. So, where we are at right now is we have what is rightly called narrow AI.
And narrow AI is something that will perform specific and limited functions. So think about translating, or answering questions like a chatbot does, or generating visual images for users, with 57 fingers on each hand, you know, that kind of a thing. That's what narrow AI is, and that's what we have.
And even before we get to the next category of artificial intelligence, even narrow AI can be deceptive. Because, particularly when you're online, if you're talking to a chatbot, it feels like you're actually talking to someone, or something, that's a person. But it's not a person.
And obviously it wouldn't be a human person, it would be a machine person. But it's not even that.
What it's doing, what narrow AI is doing, is using statistical inference. This is not logical deduction, which is what a person does.
So what the AI does is analyze massive amounts of data to identify patterns and then mimic the human cognitive processes normally seen in problem solving. And that is very compelling. If you're watching that from the outside, interacting with it as an observer, it's very, very deceptive in the illusion that it creates. And so you can see why people are very impressed by, and can be easily fooled by, AI.
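To make that "statistical inference, not deduction" point concrete, here's a deliberately tiny sketch, nothing like a real model in scale, but the same underlying idea: the toy program below "predicts" the next word purely by counting which word most often followed the current one in its training text. There is no reasoning step anywhere in it, only frequency counting over observed patterns.

```python
from collections import Counter, defaultdict

# Toy "language model": it learns nothing except word-pair frequencies.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text.
    Pure pattern matching over observed data, with no grasp of meaning."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on", because "on" always followed "sat"
print(predict_next("the"))  # "cat", the most common word after "the"
```

Real systems work with vastly larger data and far more sophisticated statistics, but the character of the operation is the same: output driven by what patterns appeared in the data, not by deduction from understood premises.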
But then you just have to stop for a second and think about something like, for example, generating visual images.
And still today, even though artificial intelligence has got better at generating images, generally speaking, the really impressive stuff that you see online is not something where someone just entered a quick, very vague prompt into an AI program, you know, "give me an image of Benjamin Netanyahu landing the Apollo space mission," and suddenly got this perfect image. Generally, that's not what's happening; it's a bit more focused than that. Now, the technology has improved, but you can still see what is known as the uncanny valley effect, where something just isn't quite right. You could often see that in those images, and most consistently it's fingers and things like that. AI is terrible at fingers.
So there is absolutely something very impressive happening here. But as I said, there's also been an overhyping and an overselling at the same time. We need to be aware, too, that the technology, even the visual image creation technology, is getting better and better and more and more powerful very, very quickly. But where we're at right now is narrow AI. That's not the only part of the conversation, though, and this is where I think there's often a bit of confusion amongst the general population about what you mean when you talk about AI and what the potentials are and all that kind of stuff. Because those who are the committed enthusiasts of AI are really committed to the idea of what is called artificial general intelligence, sometimes general AI; it will have a framing like that.
And basically what artificial general intelligence is, is a system that would have the ability to work across all cognitive domains to perform all of the tasks that human intelligence can. It would be the complete picture.
So, as you can imagine, this would be a whole different kettle of fish, and much more powerful. But that's not even the end of the story, because the really committed enthusiasts actually want to see us achieve superintelligence. What is superintelligence? Well, I'm glad you asked. It is artificial general intelligence that has surpassed the limits and abilities of even the greatest of human minds.
So in other words, this would be a machine person, is what is being proposed here. And you would have a machine that could actually think and would have an intelligence quote unquote, that is greater than even the greatest of human genius minds.
And so, this is something we don't have. But this is what the avowed and committed enthusiasts desire and are really pushing for. And I think it's key to understand that, because it's not unusual for people to be confused about this, or not even aware of some of these terms and categories.
So, like I said, I don't want to be someone who just presents this issue as if somehow it's all doom and gloom, because in actual fact there are some good and moral uses for AI, and that's what I want to do to round out this particular episode: look at some examples of moral uses of artificial intelligence that actually contribute something of benefit to the human experience and to the world in which we live. Now, remember, in a previous episode we grappled with the question of science and technology, and we looked at what a Christian definition might be for a good and moral use of them. That definition was as follows.
Science is a method that enables us to discern truths about the natural material world. And technology is a tool, or I guess a set of tools that arise from our increasing knowledge about the natural material world.
Both of these, the practice of science and its applications, and the type of technology we are creating and the way we are applying it, are good if they are governed by moral virtue and working for the flourishing of human persons and of the common good. The common good comprises those essential goods that are necessary for human flourishing for all persons, regardless of who you are or where you are. So, for example, peace.
Everyone recognizes that to live in a peaceful state enables you to experience a greater degree of human flourishing than to live in a state of chaos and violence and war, etc. So the common good is all of those types of goods that are common to every human person, common to humanity, and essential for our flourishing. And the common good is effectively a type of balancing act: it understands that you must balance both the legitimate good of the individual and the legitimate good and wellbeing of the community. It doesn't do what collectivism does, which tries to subsume the person and treat them as if they don't exist and have no worth or place as an individual. And it doesn't do what liberalism does, which elevates the individual and effectively treats the community and its wellbeing as secondary to the individual. It holds both in balance.
So when we're thinking about technology, it is good, it is moral, if it is governed by moral virtue and working for the flourishing of human persons and the common good. Keeping that definition in mind as a jumping-off point, let's look at some very clear examples of uses of artificial intelligence that absolutely are good and moral, because they meet that criteria. And you can probably think of others as well. When I presented this lecture series earlier this year, I actually opened up the floor and asked people if they could think of other moral uses, and there were some really good and interesting examples that came to the fore.
So the first one is AI and its ability to translate between different languages. And it's not just the ability to have a translation service in your pocket, for example.
Now, there is a caveat here. We'll get to that caveat in just a second. But basically, imagine you're in a situation where you are temporarily in a different country or a different culture, you're passing through. Or you're in a situation where you have an emergency and you desperately need to understand and get help. It might be a spiritual emergency. I need to get to mass.
It might be a health emergency, I need to get to a hospital. But you can't speak the language and it's not clear to you where you should be going.
And imagine a translation tool driven by AI that would actually allow you to speak, and to hear back, the language: translation on the spot, instantly. That would be a profoundly good use of artificial intelligence. But it's not just that. Imagine the potential. Let's say, for example, you're a doctor doing some sort of emergency relief work in a foreign country. You've been flown in on the hoof, you don't understand the culture, maybe you've been there once or twice, but you're not really steeped in it, you don't speak the language of the locals, and you're dealing with, I don't know, an outbreak of something like Ebola. So you're trying to respond and do the best you can to help and aid these people, but language is obviously going to be a barrier. And what's important here is not just that you can understand what they're saying to you, but that you can really understand the way they use different idioms and phrases. Because they might say a thing, and in your culture, let's say you've come from France or America or New Zealand, that particular idiom might have a specific meaning that is very different from the way it is being used by the local people you're trying to help. And so it can be very easy, even if you've understood what they've said, the actual words, translated for you, to miss what the key idioms mean. And so, even though you know exactly what they've told you, you've actually misunderstood the reality behind the words of what they're trying to communicate to you. AI would be a powerful tool for solving these kinds of problems and aiding people. Now, the caveat, of course, is this: if you were going to go into a culture permanently, you should not have an AI between you and that culture. There's something inhuman about that.
There's something dysfunctional, something lacking there. So we're talking here about emergency situations, or where you're passing through, that kind of a scenario. But if you are actually living in a culture, then it seems pretty obvious, I think, that the humane way to go about that is to actually embrace the culture, to live amongst the people, to learn the language, to understand the local customs, et cetera, et cetera, and not to have an AI, something digital, something artificial, between you and the people. And you can imagine how this could actually amplify an already existing problem in an even worse way. We've already got a problem in the West of certain cultures coming into Western nations and not integrating well; they effectively ghettoise. They don't enculturate. They maintain their own cultures, often ones which are antithetical to the Western Christian tradition, sometimes even hostile to it, and they maintain those customs and ghettoise. Now, imagine a situation in which you've got an AI that means you don't even have to learn the language of the country you are now living in, and so there's this total isolation going on, with a machine between us and people who are potentially our neighbors. Obviously, that would be the caveat; that would be an example of where this technology would not be a good thing. But it would absolutely be a good and moral usage of AI to use it in those other translation situations where you have that need.
Another moral use of AI is, of course, using it for research purposes. Again, there's a little caveat here. What we're talking about is not using it to write, because it's not real writing, is it? Not using it to produce research papers, but as an actual tool that allows you to go and find information so that you can then write a report or a presentation or an essay or whatever it might be.
But as a research tool it can be extremely powerful, and the ability to get access to a lot of information very quickly can, as you can easily understand, be extremely helpful. Now, the problem, and this is the caveat: I've used AI, I don't use it a lot, but I do use it as a research tool. If you've Googled lately, you'll probably be aware of the fact that there are different forms of search assistants now available on the different search engines, and they are AI search assistants. They go looking for the information.
The problem, though, and this is still an issue, is that AI is not discerning; it's just grabbing what it thinks is the best information from what's available to it. And not all AI research tools are created equal, shall we say. Some of the more expensive, high-end, really, I guess, boutique, maybe academic ones, by the sounds of things, are phenomenal in what they are doing and the way they weed out the rubbish. But a lot of the typical popular tools are not as good, and you've always got to double-check. I've seen this myself, actually, where an AI research tool has said something very authoritatively and provided citations, given what looks like very clear, legitimate references for what it is telling you, but in actual fact it's not correct. And when you go and do a bit more of the shoe-leather work yourself, and do a bit more digging, you discover that it's actually led you astray. So in that regard there is still a challenge with AI and research, but as it becomes more powerful and the tools become more refined, you can see how this absolutely would be a good and moral use of AI. There is one other risk here with research tools, though, and that is the ability to corrupt a culture and its understanding of issues.
Because if you can control the research tool and you have imposed filters upon it to actually ensure that the information that it is providing to people is politically leading in nature and it's hiding other information that is considered politically inconvenient, then obviously you're going to have a problem and the potential for a culture, a society to be led astray very quickly.
You know, you don't have to be a rocket scientist to see how that could happen with something like that.
The next area, another moral use of AI, and I think, well, certainly for me, the one I find most exciting, and the one that in a lot of ways shows the most promise and potential for human flourishing, is the area of medical diagnosis. The ability of AI to very quickly diagnose medical conditions is really something to behold, and it seems to be getting better and better. Now, obviously, the important thing to understand here, and this is something people often forget, is that it's not that the AI is sitting there like some supercomputer doctor, like Doogie Howser, M.D., for those who are old enough to get that reference. If you're not, go and Google Doogie Howser, M.D. It's not a digital version of that super-genius doctor that you're dealing with. Again, remember, it's narrow AI: it's grabbing already existing information. And where is that coming from? It's coming from human medical professionals, and it's coming from the practice of human medicine. We just have so many scans available to us, and we have a developed history over the last hundred years or so, so there are a lot of records it can draw from. And that's obviously one of the reasons why it's so good and impressive in this arena, and it's getting more powerful all the time.
There was a news story you might have heard about at the end of last year; this was a demonstration in China, where a group of doctors went head to head with two different AI models. The doctors were given a scenario, the AI models were given the same scenario, and then they had to come up with a diagnosis as quickly as they possibly could. And remember, you've got a team of doctors here, not just one doctor, but a team of them, and then you've got these two AIs. Now, one of the AI models was an international one. I think, if I remember correctly, it was a French AI model, but regardless, it was a non-Chinese, international model. And then there was a Chinese AI model doing the diagnosing as well. Unsurprisingly, the Chinese AI model came out the fastest of any of the three, whether that's against the team of doctors or the other AI. So make of that what you will; that could be a little bit of propaganda. But the difference between the AIs wasn't huge, and the important point here is not which AI model actually turned out to be the fastest. It was the speed at which both AI models came in, and how much of a gap there was between their speed of diagnosis and that of a group of medical experts. In a nutshell, and I think it really is right to describe this as jaw-dropping in its display of AI power, both of the AI models reportedly delivered their diagnoses in under two seconds.
Meanwhile, the team of doctors reportedly took about 13 minutes to come up with theirs. So you can imagine the potential, and not just in a GP's office; there's real potential here. And again, you've got to balance this, obviously, with the human component. It would be awful if a patient walked into a GP surgery and there were no human doctors there, and you're literally just speaking to a machine and having a machine spit information back at you. There's something dysfunctional about that. But imagine a human doctor who has a diagnostic tool like this at their disposal, and imagine, in a GP setting, the implications of what that could mean. Obviously it's saving time, which means it's not just efficiency; this is about being able to help more patients in a day and take the burden off a healthcare system. So there's a real positive benefit. This is about giving people clarity and peace of mind. It's about saving resources, where you can be a bit more precise about what's going on. So that's one setting, and that's impressive enough. But even more importantly, imagine this being applied in the practice of emergency medicine, and all of a sudden you can see the real life-saving potential of this particular technology and this use of AI. What you've got is cutting-edge, life-or-death scenarios, where every minute in a lot of those situations counts, and you have got an AI tool that can give you back valuable minutes that the patient needs to be saved, or to experience a better outcome at the other side of the medical treatment they're receiving.
To me, this is the most exciting one; there's huge potential here. Another moral use, related to this in the area of medicine, and this is another one I'm really quite excited about, is the ability to find increasing numbers of uses for off-label drugs. So a medicine is produced, it's produced for a particular condition or purpose, it's put onto the market, it's sold for that purpose.
But then often you can discover, oh, if we take that drug that was sold onto the market for fixing headaches and we use it with people who might experience a high risk of blood clots, it can act as a blood thinner. And so there's an off-label use for a drug that was produced and sold into the market for a completely different reason.
Now, as I understand it, the reason we don't see more off-label drug use is because it is so costly and time consuming to actually do the research to figure out all of these off-label uses. And with off-label use, I think generally speaking, the drug companies don't really stand to profit as much, so there's just no incentive to do it. But AI is a whole different kettle of fish. AI has the ability to scan that information very quickly, and it's not the same labor-intensive or cost-intensive process anymore. And so what this means is that, potentially, as AI continues to work away, we are going to discover more and more off-label uses. And potentially some of these will even be life-saving off-label uses; if not, they will be life-improving off-label uses for different drugs, and there will be a real benefit to human flourishing in that. And I think that's a really good and exciting thing.
Another example, and this is one that maybe you hadn't thought of before, is in the area of meteorology, particularly in storm tracking.
And so think about the risks of a storm or a hurricane or a tornado, that kind of thing. I also often wonder to myself whether in time, if we can discover a few more parameters about how the natural world works in this regard, AI might even become an amazing early warning tool for earthquakes. That would be something phenomenal if that could happen.
But certainly when it comes to earthquakes right now, the ability for AI to give us really good geotech information about where to build, how to build structures in certain areas, and how to make them stronger and safer, that's obviously one big advantage. Now in the area of meteorology, though, there's the ability of an AI to track a storm or a hurricane, something that's a risk, a weather event, and to give very precise information about the scope and the nature of exactly what is being dealt with and where it is going to track, and also the ability for it to very quickly update and tell you if there is a change. With human meteorologists alone, this is obviously going to take more time.
But with AI, its ability to crunch numbers and data very quickly is obviously going to provide a really big advantage in protecting the general population and lowering the risks that previously would have just been par for the course when dealing with this kind of stuff. Last but not least is an app that I actually use, something called Truthly. It's a Catholic app and it is a research tool, but to me this is a great example of AI being applied well. What it does is let you ask questions; this is effectively a research tool that I'll often use.
So if I need quotes from one of the Church Fathers, or I need to understand precisely the debate about a finer point of theology, I can actually ask Truthly to give me an answer. And generally speaking, it's a pretty good research tool. Like all research tools, you've got to sometimes double check and just make sure, but it is a pretty good research tool. And what I really like about Truthly as a tool in this regard is not just that it opens up the faith to people who potentially wouldn't necessarily know where to go to look for answers to certain questions, or who might not have access to a patristics library of the Church Fathers' writings. That sort of stuff now is easily accessible for people. But it's also the way Truthly works. It gives you a whole, very comprehensive series of replies to your questions, but what it also does at the end is ask you a reflective, meditative question about the issue. You know, how does knowing this about the Church Fathers and their views on transubstantiation of the Eucharist affect your participation in Holy Communion and coming to the communion table, the Lord's Supper? So it's not just feeding you information as a consumer, it's trying to get you to contemplate a bit more deeply some of these issues.
So that's, as I said, a very brief series of examples of where I think quite clearly we can see a good and moral use of artificial intelligence technology. So it's not all doom and gloom is the point I'm trying to make. However, there are great risks involved as well. And that's what we're going to do in the next and final episode: we're actually going to begin that episode by looking at some of those serious risks that must be navigated and that we need to be aware of. And then we're going to conclude our final episode by talking about how we can take the power back, how we can actually ensure that we are not slaves to AI, that we have a healthy human anthropology, an authentic anthropology, that is not just something we know about. Oh, yeah, I'm the Imago Dei.
But that awareness of being the Imago Dei must be lived if it is actually to be fruitful and to allow us to flourish and to have any deep meaning and benefit in the world. So it's one thing to know it, it's another thing to live it. How do we live the Imago Dei well in relation to AI and other forms of technology? We'll talk about that in the next episode. Thanks for tuning in. Don't forget, live by goodness, truth and beauty, not by lies. And I'll see you next time on the Dispatches. Hi there. If you're enjoying our content, then why not consider becoming a paid supporter of our work? You can do that at either Substack or Patreon, and the links for both are in the show notes for this episode. If you do become a supporter, then you'll get access to exclusive content, early release content, and you'll also be helping to fund all of the offline work that we do as well: all of the youth camps and the events that we speak at and all that other stuff that happens that you don't see online.
A huge thank you to all of our paid subscribers. It's thanks to you that this episode is made possible.