Empathy isn’t a corporate slogan

"Inadvertent Algorithmic Cruelty" is an emotional piece by Eric Meyer, who lost his daughter earlier in the year and was reminded of his painful loss by a seemingly thoughtless Facebook product feature. The feature was an attempt by Facebook to be more human instead of merely a utility, but it inadvertently ended up upsetting Eric and others.

It was a rude and very real reminder that, no matter how well-intentioned, our software-enabled society is far from being empathetic and understanding of human reality. Facebook, a company that serves 1.25 billion people, wants to be empathetic. And so do others like it. The question is how, and how fast, we can make software take on empathetic qualities, especially as we continue to pray at the altar of "growth at any cost."
Psychology Today defines empathy as:

Empathy is the experience of understanding another person’s condition from their perspective. You place yourself in their shoes and feel what they are feeling. Empathy is known to increase prosocial (helping) behaviors.

My own interpretation of empathy isn't too different. It involves shedding the cloak of selfishness and donning the garb of patience. Empathy means taking the time. The grow-grow-grow culture of today's Internet companies, which live at network speed, doesn't have enough of that.

How does a company with more than a billion people develop an empathy gene? How indeed, when it has to meet aggressive growth targets to appease Wall Street, keep upstarts like Snapchat at bay, and keep punching Google in the kidneys? How indeed does it act human, when to succeed it needs to be a mean machine?

Speaking at a recent conference, Facebook's director of product design, Margaret Gould Stewart, said that the company no longer refers to its users as users; instead it calls them people. It has changed its internal dashboards from "Daily Active Users" to "Daily Active People." The company has also created an "Empathy Team" whose task is to help engineers and designers understand what it is like to be a user or an advertiser.

The superficiality of those comments, and of the underlying actions taken by the company (such as changing dashboard language), made me wonder if Facebook even knows the meaning of empathy, prompting me to tweet: "If you have to create an 'empathy team' then one thing is clear: you don't really know what empathy is."

A few years ago, I pointed out that Google's lack of social DNA is why it would constantly fail in its attempts to succeed at "social." Similarly, if there is a fault in Facebook, it is that it lacks an inherent humanness. Facebook is a tool whose core addictive value lies in allowing us to star in the movie of our lives. It is not a tool that encourages empathy; instead it is a tool for rewarding others with fractional attention, through likes and LOLs. It is a playground of words without meaning.


The company's culture is driven by pushing people to spend more and more of their time inside Facebook. It is a data-informed culture, where everything is optimized for bringing you back, again and again. Sure, you can relabel global warming as "climate change," but the consequences for our planet remain the same. Similarly, you can relabel users as "people," but the end results aren't going to change, because the company wants people to behave like addicts: give me more likes, LOLs and photos without meaning.

To be clear, I don't think the people who work for Facebook are evil. Instead, they are part of a corporate machine whose job is to control all of our attention, for as long as possible. Facebook having empathy, on the other hand, would mean a wholesale cultural shift toward a different way of thinking, developing and interacting with people. More social and humane software would be programmed not to bring up bad memories and, if anything, to bring up emotionally meaningful ones.

"Algorithms are essentially thoughtless," Meyer astutely noted in his piece. "They model certain decision flows, but once you run them, no more thought occurs." Yes, and that is why Facebook should start by rethinking its algorithms, by teaching them to think more like a person would.

At present, a typical product team at a company like Facebook is made up of:

  • Product designer. Responsible for visual, interaction, and product design.
  • Researcher. Conducts qualitative and quantitative research.
  • Engineer(s). Typically one to four engineers per team.
  • Product manager. Responsible not just for project management, but also for ensuring products ship on time and for product quality.

Just as designers are involved in crafting the final products, perhaps software teams that deal with people and society should start embedding anthropologists. Remember, a decade ago designers were part of a separate group, and now designer-engineers are a common occurrence. Facebook takes a lot of pride in being data-informed (versus the data-driven culture of Google), but it needs to be better informed and, more importantly, it needs to ask the right questions.

Facebook's "Year in Review" feature, for instance, seems like such a clever and obvious idea that everyone should want to create their own "greatest hits." (The sheer number of times it has shown up in my timeline is a testament to its success.) Except life isn't perfect. There is death, disease and tragedy, and they are all painful memories. Perhaps smarter, empathetic software would do what a reasonable person would and avoid bringing up particularly painful memories.

Looking in from the outside, to an amateur's eyes, the "Year in Review" appears to be a rather simplified visualization built on the most common denominators, the posts with the most likes and comments, which it uses to create moments. Instead, Facebook (or whoever) will need to look more closely at the ambient sensor data embedded in our interactions with the Z-Machine (my nickname for the ever-growing Facebook Cloud).
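
To make that distinction concrete, here is a minimal, hypothetical sketch (in Python) of the "most likes and comments" heuristic described above, next to a variant that filters out posts flagged as likely painful before ranking. The Post structure and the flagged_sensitive signal are illustrative assumptions made up for this example, not Facebook's actual data model or algorithm.

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    comments: int
    # Hypothetical signal an empathy-aware pipeline might attach upstream,
    # e.g. from condolence-style comment patterns or an explicit opt-out.
    flagged_sensitive: bool = False


def naive_year_in_review(posts, top_n=10):
    """Rank purely by engagement: the 'most likes and comments' heuristic."""
    return sorted(posts, key=lambda p: p.likes + p.comments, reverse=True)[:top_n]


def empathetic_year_in_review(posts, top_n=10):
    """Same ranking, but leave out posts that carry signals of painful events."""
    safe = [p for p in posts if not p.flagged_sensitive]
    return sorted(safe, key=lambda p: p.likes + p.comments, reverse=True)[:top_n]


if __name__ == "__main__":
    timeline = [
        Post("Vacation photos", likes=40, comments=5),
        Post("In memoriam", likes=120, comments=80, flagged_sensitive=True),
        Post("New job!", likes=60, comments=12),
    ]
    # The naive ranking surfaces the most-engaged post, which here is also the
    # most painful one; the empathetic variant leaves it out.
    print([p.text for p in naive_year_in_review(timeline, top_n=2)])
    print([p.text for p in empathetic_year_in_review(timeline, top_n=2)])
```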

For instance, if someone is repeatedly using their phone from a specific location where they don't work, say a hospital, that usage-cluster metadata should trigger the system to look for an event. A baby photo suggests a happy one; otherwise, the posts might indicate something else entirely, worthy of more careful handling.
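
Purely as an illustration, a heuristic along those lines might look something like the sketch below. The check-in structure, the "hospital" category and the visit threshold are all assumptions invented for this example; a real system would have far richer, messier signals.

```python
from collections import Counter


def classify_location_cluster(checkins, workplace, min_visits=3):
    """Label unusual location clusters in a person's recent activity.

    checkins: list of (place_name, place_category) tuples, e.g. ("St. Mary's", "hospital")
    workplace: the place_name the person has listed as their workplace, or None
    """
    visit_counts = Counter(place for place, _ in checkins)
    categories = dict(checkins)
    for place, visits in visit_counts.items():
        if visits >= min_visits and place != workplace and categories[place] == "hospital":
            # A hospital cluster could mean a birth (happy) or an illness or a
            # loss (painful); without more context, an empathetic system should
            # not guess, and should not auto-surface these posts.
            return "ambiguous_life_event"
    return "routine"


if __name__ == "__main__":
    activity = [("St. Mary's", "hospital")] * 4 + [("Office", "office")] * 20
    print(classify_location_cluster(activity, workplace="Office"))  # ambiguous_life_event
```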

Similarly, our photos (there are a lot of them on Facebook) carry a lot of emotional information, and perhaps it would make sense for Facebook's algorithms to try to infer more from them and thus build a richer, more meaningful and more human story. With over 1.25 billion people, it isn't an easy task to become human and develop empathy. At least, not yet!

A year ago, in a piece for Gigaom, I pointed out that, “Data is used like a blunt instrument, a scythe trying to cut and tailor a cashmere sweater. What will it take to build emotive-and-empathic data experiences? Less data science and more data art — which, in other words, means that data wranglers have to develop correlations between data much like the human brain finds context. It is actually not about building the fanciest machine, but instead about the ability to ask the human questions.”

On the surface this might seem to need some kind of artificial intelligence, but that isn't arriving any time soon. Facebook's AI chief Yann LeCun recently said, "Science fiction often depicts AI systems as devoid of emotions but I don't think real AI is possible without emotions." Clearly, Facebook is thinking about these things. Until that happens, the company has to remember that, for now, all the software for these systems is written by humans. So it is up to humans to define and create empathetic systems, ones that worry about how the photo of a recently lost daughter might make a dad feel.

Update: In a series of tweets, Evan Selinger, a professor at the Rochester Institute of Technology, shared his thoughts on what he feels are the challenges. I thought they were worth aggregating. He and I aren't too far apart in our thinking around these issues.

FWIW, I think the “algorithmic cruelty” case is less about empathy & more about inherent risk of outsourcing intimate decisions, including how to construct a personal narrative, to algos optimized to expedite a process that requires conscientious, self-initiated decisions. Otherwise norms of “care” will be baked into code that some will find “careless.” Unless & until AI can grasp nuanced matters of interpersonal context, this is a *moral* matter for coders.

I think we're just emphasizing slightly different things. Given the AI gap at issue, I think the right thing to do is avoid automating certain things, rather than try to mitigate against the harm by adding humane oversight.

But, yes, in general, I completely agree that re-thinking design by embedding anthropologists–or sociologists or philosophers–is a great idea. In the end, a key issue is how we humans make *meaning*. I don't believe there is or can be a science that can fully reduce meaning down 2 formal components. Consequently any endeavor that requires addressing meaning with sensitivity necessitates human input.

More of Evan’s writings are here.

Update, January 15, 2016: The empathy cartoon is by Dave Walker; it is quite fantastic and makes the whole empathy debate understandable in one glance. A full year went by without me linking to his website. I am mortified by the oversight and by not asking Dave's permission. You can view more of cartoonist Dave Walker's work on his website. Here is the empathy cartoon he drew. Thanks, Dave, for still letting me include this in the story, despite my tardiness.

[Cartoon: "Empathy" by Dave Walker]

 
