Podcast: Can AI Be Compassionate?

March 13, 2026

Caroline Emmer De Albuquerque Green and Geshe Lodoe Sangpo

Originally recorded for Accelerating AI Ethics, University of Oxford Institute for Ethics in AI.

Caroline Green speaks with Geshe Lodoe Sangpo — a Tibetan Buddhist monk and scholar — in McLeod Ganj, Dharamsala, about whether AI can ever be truly compassionate. Drawing on Buddhist ethics, they explore interdependence, equality, and why compassion is a way of being rather than a skill that can be trained into a system.

In brief

This episode brings the civic care framework into dialogue with Buddhist ethics. It is a natural companion to the 6-Pack of Care episode, deepening the question of what moral competence really means — and what AI systems may never be able to replicate.

Listen

The full podcast will be available on 19 April 2026.

Full transcript

Caroline Green: If I think of interdependence, I understand it as us all being dependent on each other. We only make sense because of other human beings. We only exist because of others. We can only sustain ourselves because of others. I mean, what would a teacher be without the pupils? What would a doctor be without patients?

Geshe Lodoe Sangpo: Yes, on the very conventional level, that is the core. Not only human to human, but also interdependence on larger scales — the ecosystem, everything dependent on external resources. We are very much dependent on external resources as well. Therefore, if you are really able to understand interdependence not only conceptually but at an experiential level, then it can really serve as a driving force for your attitude, for your mindset, for the emotions that arise in your mind.

Caroline Green: Is that at the core of the concept of compassion?

Geshe Lodoe Sangpo: Yes, compassion is built on that foundation, as well as on the concept of equality. That is the foundation. Seeing everyone as equal. Seeing that everyone has the capacity, the potential, to attain higher spiritual realisation, equally. Physically, some people may be a little differently abled, but mentally we share the same essence. In Buddhism, we talk about everyone having the nature of the Buddha. The seed is there. The same. It doesn't matter what physical condition you are in. So seeing everyone as equal is the foundation for cultivating genuine compassion — unconditional compassion. The instructions given in Buddhist teachings are designed not to help you have a good career, but to make people more whole, more embodied — to embody compassion, caring, loving kindness. All this comes from practice. Practice comes from mental exercise. You have to face certain challenges. Those challenges give us lessons. Those challenges strengthen our mental resilience.

An example of compassion manifested in the world

Caroline Green: We are currently in Dharamsala, India, and yesterday we went to visit a school that your friend set up for local children from very poor families. I would say that is a beautiful example of how compassion manifested into something in the world. Where somebody saw the suffering of these children on the street, and through a very difficult process — facing resistance from the local community, finding that local schools didn't work for these children because of what they had experienced in their lives — he set up a beautiful school that has now grown into something supporting 700 children, with its own curriculum where ethics plays a fundamental role. This is an example of somebody who didn't just feel compassion, but through resilience, over many years, responded to it.

Geshe Lodoe Sangpo: Yes — that monk witnessed a horrible situation, those slum children going through so much. His compassion propelled him to come up with something. He went through so much. And when His Holiness the Dalai Lama said publicly, "Whatever I have is yours" — he didn't just mean financial support, but courage. He was encouraged. That is one of the most beautiful examples of compassion in action.

What would a compassionate AI system look like?

Caroline Green: I've been hearing some people working on AI in companies talking about concepts like compassion and care. What do you think about a compassionate AI system? Do you think something like that can exist?

Geshe Lodoe Sangpo: That's a good question. Compassionate AI. I can think of it on two different levels. AI can maybe act very compassionately — having all the information, all the conceptual elements of compassion practices, and based on that data, it can outwardly appear to be a very compassionate AI. But on the experiential level, on the feeling level, we human beings have a unique characteristic: we can really sense each other's expressions. If you become a very compassionate person, you can immediately perceive other people's expressions and, according to that, share your thoughts or be compassionate in a very contextual way, with wisdom. There is no universal compassionate approach. Everything has to be done in a very contextual, perspective-taking way. And therefore our response to other people's needs can change in a very compassionate way.

I don't think AI can behave in that manner. Because our past experiences, the care and love that we receive from other people — all these factors leave some kind of imprint on our mental stream. And when we act, all this experience plays an important role. It's not just calculation. It's not just a business game. The felt experience has to be the driving force, the underlying mechanism behind compassionate behaviour. In Buddhism, intention is not just an algorithm or a calculation — it is a fundamental, deep care for other people that comes through cultivation, that comes through experience. Then one can really manifest genuine care, genuine love, unconditional compassion. Otherwise, we can outwardly adopt a very compassionate approach, but at a deeper level — the sincerity, the deep care of emotional experience — it is very hard to understand. So the possibility of AI compassion — in that sense, I think it is very difficult.

Caroline Green: There will be people who say: "We are training our AIs to be compassionate, so it is a way of being for them, and they show a compassionate way of interaction with humans. And maybe they'll even be embodied as robots, able to scan people's faces, pinpoint their emotions. We already have AI systems that can perceive pain on people's faces. So that would be algorithmic care, algorithmic compassion." But what I'm getting from what you just said is: it's not just how you respond to other people, it's also what it does to you. And what the intention is behind it. For somebody who wants to be genuinely compassionate, it's about growing in spirit. And perhaps that's the limitation of an AI system.

Geshe Lodoe Sangpo: Yes. And another thing: when we see someone going through trouble, tragedy, or distress, the outward action of a compassionate person can sometimes appear pleasant or unpleasant. It may sound very uncompassionate, look uncompassionate, but seen in terms of the long-term benefit, the deep intention is very kind and compassionate — the action just has to be a little firm, even a little aggressive. Compassion comes in different forms, different expressions. Maybe AI can always be very kind, very soothing, just resolving the current temporary distress — not really understanding the long-term goal, the long-term benefit, not able to take that into consideration through genuine care and feeling. Sometimes the expression, the action, speaks louder than the words coming from a compassionate person.

Caroline Green: So it's multi-layered. Multi-dimensional compassion. There's no straightforward answer.

Can AI help people who are practising compassion?

Caroline Green: Can AI work for humans who are practising what you've just told us about? Because it takes a lot of practice to become compassionate.

Geshe Lodoe Sangpo: My knowledge of AI is very limited. But AI can be a great asset. It depends on how we use it or how it's developed. The inner growth has to come from quite rigorous practice. AI can maybe give you some resources or some information. But information is not happiness. It's not compassion. Compassion is not a skill. Compassion is a way of being. If you gain some information, that information may stay at a very cognitive level. You can be very smart and at the same time very uncompassionate, selfish. These days, through AI, you can get lots of information in an instant. Sometimes this can be very overwhelming. But the inner growth comes from felt experience.

For example, Audrey Tang explained this to me: if you want to go to a gym and strengthen your muscles, and you have an assistant like a robot AI, you might say, "Let the robot go there and do the exercise on my behalf." The robot can lift so many weights and you get a high score. But does it really strengthen your muscles? No. It gives you the outcome you wanted, but strengthening the muscle has to be done by yourself. Otherwise, atrophy will occur. Mental exercise goes in the same direction. Emotional growth and cognitive growth have to come from our own personal journey and exercise. Outsourcing everything can have a negative impact. If AI is integrated in a way that replaces that inner exercise, we need to be careful.

From scepticism to hope — Civic AI and compassionate relationship-building

Geshe Lodoe Sangpo: Before interacting with you, Audrey Tang, and Tenzin Yangtso throughout this week together in Dharamsala, I had a negative impression of AI — less hopeful, more scared. Now you have introduced me to Civic AI — AI not just used to concentrate lots of power in a single unit, but as a tool for constructive relationship-building: strengthening healthy relationships and perhaps removing what is unhealthy. AI moving in that direction is wonderful, very hopeful. Otherwise, AI can be very detrimental, catastrophic. Human growth has to come from inner exercise — that part should not be compromised. Otherwise, the young generation may overly depend on AI, overuse it, and compromise their emotional and cognitive growth. That part should not be forgotten.

Caroline Green: You just mentioned Civic AI — the way we're proposing that AI systems — we refer to them as kamis: smaller, very contextualised AI systems — can help specific communities and groups of people. What's really at the core here is still the humans and the way that they relate to each other. AI is serving them, supporting them in whatever situation they are in. The humans are at the centre of it.

Geshe Lodoe Sangpo: His Holiness the Dalai Lama said AI will never be able to be like humans, because of human creativity — the ability to create new things. AI can be a great asset. But humans — and other species — have to be at the centre, not the AI. We can use AI in a very constructive way, like the Civic AI direction you are heading in. Just boosting AI's capacity to its maximum and consuming so much energy — if it becomes very destructive, what is the purpose? Human growth, connection, the human-to-human bond — that is very important.