Is technology making us care less about each other?

Rhodri Davies
Head of Policy and Programme Leader, Giving Thought
Charities Aid Foundation

7 June 2016

One of the key themes of our work here at Giving Thought is looking at what impact new developments in technology might have on the ways in which people are able to support causes. In general our focus is positive, because there are clearly many ways in which new technologies can make it easier to give effectively and to address broader challenges in our society. However, it is important to remain mindful that technology may bring challenges as well as opportunities.

People like us

One of the challenges that I have become most aware of recently is social siloing, i.e. the way in which technology allows us to tailor our experience and interactions and thereby ensure that we only ever deal with people who are “like us”.

The recent experience of the EU Referendum here in the UK highlighted the dangers of this phenomenon: many of those who voted “remain” took to social media following the result not only to vent their anger but also to express incredulity at the result. I lost count of the number of times I heard people expressing bemusement or scepticism about the result because “no one they knew had voted leave”.

Social capital

What this highlighted is that social media is very good at strengthening the sense of connection between people who already share the same interests and views, but often terrible at connecting people with divergent opinions and views.

To paraphrase the work of the famous sociologist Robert Putnam, social media is an effective way of increasing bonding social capital (i.e. the links between those who are members of some defined community of interest) but not so effective at increasing bridging social capital (i.e. the links between people who are members of different communities of interest).

Opposing views

The danger here is that if you spend too much time in the echo chamber of social media, having your views confirmed and repeated back to you, you can not only get a misplaced sense of the degree to which your views represent those of the mainstream but also come to hold those views in a more extreme form, because you are not subject to the ameliorating effect of having to take other points of view into account.

We have seen the impact of this very clearly following the EU referendum, where those on either side of the debate are not content merely to express happiness or dismay at the result, but also feel the need to direct vitriol at those on the other side, because they simply cannot understand how anyone could hold such an opposing view on the issue.

This siloing effect is only likely to get worse in the future, as other technologies which seek to personalise goods and services in order to make them “more effective” narrow our range of interaction and experience even further. For example, there is a growing consensus in tech circles that conversation-based interfaces will eventually replace the visual interfaces that we currently rely on. (To understand what this means, think of using something like Apple’s voice-activated virtual assistant Siri or Amazon’s Echo, rather than a traditional web browser based on images and words on a page.)
 

Understanding the bots

These non-visual interfaces rely on an AI or “bot” to present you with answers to the questions you pose, by linking you with goods, services and information that meet your needs. Obviously in order to do this, the bot needs some way to choose between all the available options, and it is going to do this on the basis of information about your past behaviour and the behaviour of people like you. This is the same basic principle that underpins the “recommendations” function offered by online shopping platforms like Amazon or the tailored advertising one gets on social media sites such as Facebook.

However, one crucial difference is that it will be far less apparent to the user that this is what is happening: instead of being presented with suggestions that they can choose to follow up or ignore, users will simply be presented with tailored options as if they are the definitive answers to questions. Hence the process of personalisation becomes part of the interface itself, rather than a feature of any particular resource accessed via the interface.
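To make the distinction concrete, here is a deliberately simplified sketch in Python of the kind of “people like you” logic described above. All of the names, causes and scores are invented for illustration, and real recommendation systems are vastly more sophisticated; but the basic shape is similar: options are ranked according to how strongly users with a similar history engaged with them, and a conversational bot would typically surface only the top-ranked answer rather than the full ranked list.

```python
from collections import Counter

# Hypothetical interaction history: which causes each user has engaged with.
# All names and data here are invented purely for illustration.
HISTORY = {
    "alice": ["animal welfare", "local arts", "animal welfare"],
    "bob": ["animal welfare", "homelessness"],
    "carol": ["overseas aid", "homelessness", "overseas aid"],
}


def similarity(profile_a, profile_b):
    """Crude overlap score between two users' interest profiles."""
    counts_a, counts_b = Counter(profile_a), Counter(profile_b)
    shared = set(counts_a) & set(counts_b)
    return sum(min(counts_a[topic], counts_b[topic]) for topic in shared)


def recommend(user, history):
    """Rank causes by how strongly users similar to `user` engaged with them."""
    scores = Counter()
    for other, topics in history.items():
        if other == user:
            continue
        weight = similarity(history[user], topics)
        for topic in topics:
            scores[topic] += weight
    return scores.most_common()


# A visual interface might show the whole ranked list, which the user can ignore...
print(recommend("alice", HISTORY))

# ...whereas a conversational bot simply answers with the single top-scoring item,
# so the filtering by "people like you" is invisible to the user.
print(recommend("alice", HISTORY)[0][0])
```

Note that in this toy example the causes favoured by users unlike Alice score zero for her, which is precisely the narrowing effect described above: what she never sees, she never has the chance to care about.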
 

Narrowing views?

So what is the upshot of this for philanthropy? The main concern is that one of its key motivating factors may be eroded: namely awareness of need. An important starting point (and indeed ongoing driver) for many philanthropists is an awareness of the difficulties faced by people less fortunate than themselves.

But if our experience of the world is increasingly limited to interactions only with those of similar socio-economic status and views, then we are far less likely to come into contact with people from walks of life different to our own. This may well mask the existence or extent of certain social problems.

And when people are not aware of the challenges faced by those less fortunate than themselves, because they are insulated from coming into contact with them, they are less likely to think about trying to help them. Research from the US, for instance, shows that wealthy people living in areas made up of other wealthy people are less generous than those who live in more economically diverse neighbourhoods.
 

Visible philanthropy

Conversely, as I argue in my book Public Good by Private Means, one of the defining features of the great flowering of philanthropy in Victorian Britain was that poverty was not only very prevalent but also very visible; so it would have been almost impossible for the wealthy not to be aware of the huge problems facing the wider society in which they lived.

Granted, it is not always comfortable to interact with those whose views and values differ from one’s own, or to be faced with the sometimes harsh realities of how others are forced to lead their lives. However, if we do not, we face two major risks. The first, as highlighted above, is simply that we will not be aware of certain problems and thus will not devote any of our efforts towards addressing them.

The other risk, as we have seen during the EU referendum here in the UK, is that when our certainty in our own thoughts and opinions becomes reinforced by the echo-chamber effect of interacting constantly with like-minded people, it becomes far easier to see those who do not share our views as somehow “alien” or “other”. Eventually, if this process continues unchecked, it would be possible to dehumanise those outside one’s own social grouping and use this to justify not helping them when they are in need.
  

For the love of humanity

If technology makes us increasingly solipsistic - isolating us all within our own little bubbles of experience and interaction - then we may well come to care less about others, particularly when they are perceived to be “not like us”.

Philanthropy could well suffer as a result, as the “love for humanity” which is the etymological root of the word becomes “love for humanity, as long as they are like me”. And that is a worrying thought.