
Can an AI system be empathic at all?

Empathy is the key to successful human relationships in the long run. But what does empathy actually mean, and how do you recognize and define it? Can empathy be learned, or only felt? Read on to find out what this means for AI, and how empathy can improve the customer experience in the long term.

First of all, the Cambridge Dictionary's definition of empathy is straightforward: empathy is the ability to share someone else's feelings or experiences by imagining what it would be like to be in that person's situation.

But it's not as simple as it sounds, because what people feel cannot be read exclusively from the written or spoken words that a state-of-the-art AI system can analyze today. Digitally mediated human-computer interaction lacks many of our usual interpersonal cues. An AI cannot see whether someone is making direct eye contact, and it misses much of the information carried in body language, gestures and facial expressions. It also lacks information about speech style and tone of voice, which means it misses many of the human cues that build trust.

So you can see the challenge here. Artificial intelligence is based on logic, not emotion. The advantage is that the machine is predictable at all times (at least in theory), because it knows no moods. It gives only the answers it has been taught or can synthesize from what it has learned. If you build such a system to be friendly, respectful, reliable and trustworthy, you build trust. That trust comes from the fact that the system remembers preferences: you quickly learn that you can rely on it, for example, not to make inappropriate recommendations that are completely unreasonable in terms of price or product quality.


Let us try to define intelligence, or at least what we think it might be.

Let's indulge in a little philosophical excursion here: The question of whether computers will ever become sentient is itself a rather anthropomorphic question; after all, there is no reason to believe that human intelligence – with consciousness, emotions and animalistic drives such as reproduction, aggression and self-preservation – is the only possible form of intelligence. The human brain is an organism assembled by natural selection to ensure the survival and reproductive success of a hairless ape. AI is not subject to Darwinian selection. It therefore seems risky to assume from the outset that computer intelligence should be similar to human intelligence, unless its human designers are actively trying to make it so.

To illustrate, here is a simple analogy: both a bird and an airplane can fly, but only one of them flaps its wings. Another lesson from biology seems to be that complex cognitive processes can occur without sentience. For example, the brain builds a visual image of the world from primitive concepts such as edges, motion, light and dark. All of this occurs beyond the reach of conscious awareness. Only the finished product – the final image of the world as we see it with our eyes – is presented to us for examination.

So maybe it's not so important that the machine has empathy with us. Maybe it's enough if it can fake empathy. Or do we need to empathize with AI because it can't feel empathy?

Stay tuned.

Yours, Jan