Empathetic VR

MCF Intersection

A regular snapshot of the trends, news and research in the world of philanthropy — and its impact on business.

Imagine looking down at your body and discovering you’re a different gender or race. You’re in someone’s virtual shoes, experiencing the world as he or she, or even they, sees it – joy, fear, prejudice, and how others perceive you.

Virtual reality (VR), as never before, can help us do this – by widening our horizons, helping us understand what it’s like to be someone else. But can VR, artificial intelligence (AI) and other emerging technologies help make us more empathetic and make the world a kinder place?

Technology means it’s never been easier to communicate. Despite this, it’s apparently never been harder to empathise. A growing chorus of voices says technology is at the root of our lack of empathy.

A brutal millennial cocktail of gaming, social media, reality television and the cult of the individual has left us less likely to empathise, say media reports. A recent UK poll by YouGov found that more than half (51 per cent) of respondents believe empathy levels are falling, while just 12 per cent believe we are growing more empathetic. This echoes US research from 2010 showing a 40 per cent decline in college students’ capacity for empathy over two decades.

However, as our ability declines, our interest is rising. Google now houses an ‘empathy lab’ and, in the US, searches for the term ‘empathy’ have risen steadily since 2004. Furthermore, academic studies around empathy are being published in ever-increasing numbers.

Everyone – apart from some psychopaths – can learn to empathise, say neuroscientists. In fact, some primates demonstrate empathy, and even young babies show basic levels of empathy.

To teach it, we first need to understand what it is, writes Dr Sara Konrath, a Canadian social scientist who has developed an empathy app for young people. One academic identifies at least eight different definitions of empathy.

At the emotional end of the scale, it’s about literally ‘feeling’ what another person experiences. At a cognitive level, it’s about perspective – understanding rather than imagining someone’s experience, says Elise Ogle, who studied the effects of immersive virtual reality on empathy at Stanford University’s Virtual Human Interaction Lab (VHIL) in the US. “You see or understand something (cognitive empathy), or you feel your own emotions (affective empathy) – or you might have a mix of the two,” Ogle says.

This ability can help us respond with appropriate compassion. Empathy is the social glue that holds society together, and the foundation of human relations. But if we ‘feel’ too much, it might also backfire, making us too upset to engage with another person or respond to hardship.

"Technology has always been about designing for the cognitive layer of experience. Now with paradigms of assistants (such as Google Assistant, Alexa, Siri etc), how do we design for the invisible, emotional layer?"

How do we learn? Traditionally we’ve learned to put ourselves in others’ shoes through face-to-face communication, good parenting and reading fiction – and, more recently, by watching television and films. But are immersive technologies such as virtual reality more powerful and effective?

“Think about it as a spectrum of experience,” says Ogle, who now works at Limbix, a US-based company that builds virtual reality to improve mental health treatment. “At one end, there’s real life, which is the most powerful experience. VR is a step down, but closer than print or television.”

At Stanford’s VHIL, she helped create the experimental first-person content ‘Becoming Homeless’. Put on a VR headset and this content transports a user to ‘live’ the experience of losing a job and a flat, ending up on the streets and feeling vulnerable – even experiencing gut fear as a threatening figure looms on a night bus.

Or imagine feeling what it’s like to meet racism face to face – this is another immersive-reality project from Columbia University developed with Stanford’s VHIL. ‘1,000 Cut Journey’ allows the viewer to become a black man, encountering the realities of prejudice as a child, adolescent and young adult – an emotional and disturbing experience.

“People have been touched,” says Professor Courtney Cogburn, who showcased her content at Tribeca Film Festival in 2018. “They’ve expressed learning something new about racism. We’ve tried to capture subtleties that not everyone picks up, as well as the more in-your-face stuff.”

Scientists look at ‘embodiment’ in VR, says Ogle – it dictates how immersed a person feels in an experience. “This is about how you take the first-person perspective of another. You look down at your body and see someone else. Body transfer is the psychological feeling that you can become someone else,” she says.

It’s not something you forget easily, the lab’s research shows. Stanford psychologists run rigorous studies to differentiate VR experiences and measure the impact of these on people’s attitudes and behaviour. Typically, says Ogle, this immersive experience proves more powerful than other media.

Furthermore, VR is an innovative and growing area, says Cogburn. “It will become an important tool to understand experiences that are different from our own in a way that other kinds of media are incapable of doing.”

While VR headsets are becoming more affordable and sophisticated, content and graphics have typically been relatively clunky. This also raises the question of whose responsibility it is to craft material that engages and educates.

Dozens of studies show that empathy can be taught and learned, confirms social scientist Konrath. “But many methods are costly and difficult to inject into everyday interactions,” she writes.

To overcome this challenge, she has developed an app, the Random App of Kindness (RAKi), which offers a series of nine mini games designed to sharpen a young person’s empathy skills – give the crying baby what it needs, help granny cross the road, recognise emotions in facial expressions, and so on.

Young people aged 10 to 17 who played with the app for two months were more likely to help a person in distress, Konrath’s preliminary research showed. “Technology isn’t going to go away, so we need to find ways to use it in a positive way,” she writes.

To be effective, technology needs to match its users, says former agony aunt Suzi Godson, founder of an award-winning safe social media app that helps teenagers cope with difficult issues. With some 7,000 young users, social enterprise MeeTwo runs on empathy. Teenagers post problems anonymously and – crucially – weigh in to support each other, sometimes with advice, but often just with commiseration or understanding. Trained moderators are ready to step in if required.

“It’s better to simplify and do a couple of things really well,” says Godson. Her solution gets around barriers to one-to-one counselling, she says – the cost, the availability and the risk that young people will just say what they think a counsellor wants to hear. Teenagers say they like the lack of hierarchy and the anonymity behind the app.

While MeeTwo currently hooks up real people, Godson’s team is looking to build in AI in order to scale up the app. Apparently, knowing that you are communicating with a machine rather than a human doesn’t blunt the benefits of that exchange.

When a tutor at Plymouth Arts College developed an emotive chatbot designed to detect the mood of students – many from disadvantaged backgrounds – they took to it like ducks to water. “Students have a more intuitive understanding of the system than the teachers,” says Angus Reith, who’s behind the project. “We’re allowing students to voice their feelings, giving them an opportunity for non-human, non-judgemental conversation.” They know they are talking to a machine, but still find the exchange comforting.

Research shows that the more ‘human’ new information technology appears and sounds, the more honest and dependable people find it. When we communicate with avatars, we like them to look and sound like us, says Professor Justine Cassell, a human-computer interaction expert at Carnegie Mellon University. AI trained to relate to humans can be a powerful tool.

Cassell spent years observing how children interact, and her research has helped computer scientists build ‘virtual children’ – life-size screen avatars underpinned with some hefty layers of artificial intelligence.

One such creation, ‘Alex’, can interact at a naturalistic level with children – and, thanks to Cassell’s real-life observations, is trained with a deep ability to collaborate and build rapport. Alex might chit-chat, grin, or respond with a “don’t worry, I suck at maths too”, for instance – all while it builds social bonds and gently helps the child learn.

“They know Alex is not a real child,” says Cassell. “They’ll say, ‘oh, Alex crashed’, but that doesn’t stop them from being able to relate to it,” she says. Alex isn’t trained in empathy – but in rapport-building.

Rapport is a reciprocal sense of ‘getting on’, understanding one another, and has powerful benefits. “Empathy is one-directional – you feel for someone else. Rapport is two-way. And there’s a ton of research that shows its positive effects on health, education, ageing, work. It’s a really potent phenomenon.”

Furthermore, we’re mistaken in pursuing autonomous, independent AI systems equipped to act on their own agency, says Cassell. “We are not autonomous, we’re interdependent,” she adds. “If our holy grail became interdependency, then we’d be less worried about AI taking our jobs because we’d be building systems that worked with us, creating bonds, rather than replacing us.”

Leaders at Davos in 2017 were able to experience Cassell’s creation first hand. She took ‘Sara’, the ‘Socially Aware Robot Assistant’, to the World Economic Forum to help world leaders meet people with similar interests and navigate the conference.

This kind of AI could have huge beneficial roles more widely, Cassell believes. “Data shows that after two years in a refugee camp, people turn to crime. In some camps, they simply don’t have enough officers to interview and process people,” she says. “What if you could create an AI that could build rapport with refugees and conduct that first interview – think how that would help. There’s data that rapport leads to honest answers. This has massive potential.”

The trouble with technology is that it allows users to escape the consequences of their actions – obvious in the immersive environments of first-person shooter games – which comes back to design. “Gaming leads you to believe there are no consequences to social interactions, but obviously in the real world there are,” says Jamie Callis, a self-confessed former games addict from Wales who sought counselling when his addiction spiralled out of control.

Social media also skews our empathetic responses, says Priya Lakhani, founder and CEO of Century Tech, which uses cognitive neuroscience to help teachers. “When you are mean to someone online, you don’t have to witness how he or she reacts to that nasty message. Face to face, you’d immediately learn how that other person was feeling – they’d cry or become upset. Technology helps you avoid that, so it could easily drive a lack of empathy. When building an application, companies need to be very aware of potential negatives.”

Furthermore, the growing presence of persuasive technology, designed to influence and sometimes reinforce consumer choices, can have a reductive effect. “When AI is programmed, it has a goal in mind,” says Lakhani, “and a lot of AI is designed to drive patterns. If you rely too much on AI to give you information, or social prompts, it risks reinforcing your beliefs. That’s dangerous. Empathy is about understanding the feelings and viewpoints of others. AI can remove that serendipity and just feed you with what you like.” In an ideal world, says Lakhani, AI would be designed with empathy and social and moral values in mind.  

Currently, technology is tone deaf, says Danielle Krettek, founder and principal of Google’s Empathy Lab, which is housed within the company’s design group. Her team’s research feeds into the design of Google’s AI Assistant, and how smart technology interacts with real people. While we might swipe our phones thousands of times a day, our devices don’t know how we feel, she says. This is a design flaw. “Technology has always been about designing for the cognitive layer of experience,” she told a San Francisco technology conference. “Now with paradigms of assistants (such as Google Assistant, Alexa, Siri etc), how do we design for the invisible, emotional layer?”

Humans are emotional, messy beings, Krettek says. An AI can’t feel emotions such as joy. “The most human things are really hard for machines.” Emotional intelligence is, she says, the ability to recognise an emotion in yourself, and recognise it in someone else. “So an empathetic moment requires that both beings are feeling things. That’s not possible for a machine.”

This shouldn’t stop technology being designed with humanity in mind, equipped to make what she calls an empathetic leap. “We’re entering a different paradigm, make no mistake. We’re learning from lessons of the past. The next ten years are going to be very different.” She looks to a future where humans have a “harmonious connection” with technology “that feels almost human in next-level intuitiveness, versus something that is trying to be like us and never will be”.

If technology is part of the cause of the decline in empathy, it can become part of the solution, say social scientists. However, it can also manipulate and exaggerate anti-social and polarised positions. Ultimately, it’s only as good as the human creators behind it. And they need to be empathetic.