Rhodri Davies, Programme Leader, Giving Thought

Charities Aid Foundation

The role of giving

20 April 2017

More often than not, when the impact of new technologies on the world of charity is being considered, the focus is on the way in which these technologies could offer new ways of addressing social and environmental problems. You can see this in all the various examples of “tech for good” initiatives, which usually seek to apply a hacker mindset to social problems in the hope of finding innovative approaches that will deliver better outcomes.

Slightly less consideration has been given to the possible impact of technologies on the ways in which donors can engage with charitable organisations, or on the ways in which these organisations are run, although there is some interesting thinking out there. (We have been looking at both of these angles through our work here at Giving Thought).

There has been little focus so far, however, on the impact new technologies might have on charities in terms of creating entirely new social problems that will need to be addressed. It is important to be clear that the technologies themselves are not good or bad – they are merely tools. And like any tool, they can be put to good use or bad: a hammer can be used to build a shelter for the homeless, but it can also be used as a weapon. What makes many of the new technologies emerging in the world today different, however, is their scale, complexity and level of connectivity. This means that there is far more potential for unintended consequences to occur, and for those consequences to have effects at a systemic level.

We are already seeing examples of the ways in which technology can create new social problems. For instance, even twenty years ago, the idea that access to the internet should be seen as a basic human right, or that the inability to use it properly constitutes a social problem, would have seemed absurd. However, in 2016 the UN passed a resolution declaring that access to the internet is in fact a basic human right, and there are now a number of charities whose main social mission is to promote digital inclusion by teaching people how to use the internet and other technologies (e.g. the Good Things Foundation).

This is only the tip of the iceberg, though. The pace of change in technology is accelerating, and we appear to be on the cusp of large-scale adoption of a number of technologies that could have pretty fundamental and transformative effects on society (such as artificial intelligence (AI), blockchain technology, augmented/virtual reality and bio-enhancement of various kinds). So what new challenges and social issues might these bring, which the charities of the future will need to address?  Here are some ideas:



As we have already highlighted, access to the internet and the skills to use it are already seen as vital to ensuring that people can play a full part in society. This is because the internet has come to occupy such a central role in our lives as a way of connecting and as a means of accessing services; particularly since the advent of smartphones and other mobile devices, which make it possible to access the internet pretty much anywhere. At the same time, organisations have become so reliant on internet-based models of delivery that many services are now difficult or even impossible to access if you don’t have internet access. This includes key services like banking and even state-run public services.

However, we may only have seen the merest glimpse of the future takeover of our lives by technology. The next generation of interfaces that allow us to access the internet are likely to look very different from the screen-based ones we are used to today. There will be a huge growth in non-visual, conversation-based interfaces (along the lines of Apple’s Siri, Amazon’s Alexa, Google Home or Microsoft’s Cortana) which are driven by AI. There will also be virtual reality interfaces, which enable the user to interact with the internet from within a computer-generated world, and augmented reality interfaces, which overlay elements of these virtual worlds onto the real world.

Each of these will bring their own specific problems (which we shall consider further below), but in general terms they will also become so deeply entangled in the fabric of our everyday existence that to lose access to them or to lack the requisite skills to harness them will cause massive disenfranchisement. Civil society organisations will increasingly find themselves having to fight for the rights of people who are being marginalised by exclusion via technology, and trying to overcome the challenges they face.


Loss of access to technology that you have come to rely on is not just a rights issue: there is a growing body of evidence that it can cause actual physical symptoms. For example, there are many studies showing that some people’s reliance on their mobile phones takes on the quality of an addiction (which has been pun-tastically dubbed “nomophobia”), and that they therefore suffer withdrawal symptoms, much as a drug user would, if their phone is taken away. Furthermore, an increasing number of people’s entire sense of identity and self is tied up with the ways in which they present themselves online in various contexts, so losing access to the internet is tantamount to losing part of themselves. And again, the more we come to rely on other technology, the more of a problem this is going to be.

For health charities, particularly those dealing with mental health issues or with young people, this is likely to cause a whole slew of new challenges. They will be required to deal with the consequences of addiction in individual cases, but also to shape the overall policy debate so that we can find ways to ensure that our relationship with technology does not become malign.


In addition to exclusion from technology causing problems, technology can itself cause people to become excluded in other ways. There is growing awareness of the way in which algorithms can entrench and exacerbate existing biases, and thus cause people to become excluded or marginalised. For instance, algorithms which operate on data sets concerning areas in which there is inherent racial bias – such as criminal justice – merely amplify that bias and produce results that look racist, unless deliberate action is taken to prevent this happening.

Algorithms can also create group-level exclusion, as a result of the way in which they create “filter bubbles”. We can already see this happening in the form of social media echo chambers – where people interact only with those who share similar views, and as a result find their own views becoming hardened and their tolerance of those with different views diminished.

This problem will become significantly worse when the algorithms that filter our experience are not confined to particular social media platforms, but rather are embedded in the very fabric of the interfaces that we use for all our technological interactions. In this scenario, not only will the filter bubbles become all-pervasive but they will be insidious, because we won’t even be aware that we are operating within them. (Imagine, for instance, that you are relying on an AI operator to direct you when driving to a particular destination within a city, and it takes you on a longer route that avoids an area of poverty and deprivation, because local business owners have paid an AI-optimisation company to minimise negative associations with that area. You would have no idea that this was happening: you would merely have a particular view of the area that you wrongly assumed to be objectively determined.)

These are examples of exclusion that occur to some degree as the unintended consequences of applying new technologies. However, these technologies may also be applied with the deliberate intention of exclusion. For instance, blockchain technology – the decentralised public ledger technology best known as the basis for Bitcoin – could enable a new level of ‘radical transparency’, as it may be possible for an individual’s entire transaction history to be viewed by any user of the system. (Although those with sufficient tech savvy may also be able to use blockchain to ensure their anonymity, which creates an additional asymmetry).

This paradigm shift in terms of transparency isn’t just about blockchain, either. Once you factor in other developments such as the rapid growth of the Internet of Things, the adoption of wearable technology, or large-scale drone surveillance, it is easy to see how vast amounts of data about your past behaviour, preferences and so on will be obtainable. In this scenario, good old-fashioned human bias and prejudice could simply be applied on the basis of an unprecedented level of knowledge about other people’s background and history.

It is quite possible that some people – most likely those already at the margins of society – will find themselves caught in a pincer movement: the victims of both the blind biases of AI algorithms and the knowing biases of humans who want to use the information available in radically transparent systems to discriminate against particular groups or individuals. This will present significant new challenges for many charitable organisations.

To spell out how fundamental this problem could be, consider the example of citizenship. The Estonian government has, for a number of years, offered an e-residency service via which any individual anywhere in the world can apply for a digital identity, guaranteed by the Estonian government, that will enable them to register a business in the country and operate internationally. This has subsequently been integrated with blockchain technology and used to create an immutable record of identity that can be used by refugees and migrants, enabling them to overcome some of the hurdles they often face as a result of not having official ID documentation (which is a great idea).

It is likely that more countries will follow this model in future. And there is nothing to say that the role of guarantor for new forms of online ID would have to be limited to governments: what if large, multinational corporates decided it was something they were interested in? (Could you become a citizen of Google or Facebook one day?) There might be positive aspects to this scenario, but one very clear negative one is that it would make the fundamental right of citizenship into a fully-marketised commodity, and thus open to the sorts of exclusionary bias outlined above. And when the stakes are that high, the cost of being excluded is enormous.

This is not just about computer technology either. Recent developments in biotechnology (most notably the development of the CRISPR-Cas9 genome-editing technology) have made it possible for the first time to permanently modify stretches of DNA within living cells and organisms. This could have an enormous positive impact on many genetic diseases and conditions (and on the charities currently working to address them). However, it also raises the possibility of designer genetics, where those who have the money are able to pay to alter their own genome (and that of their offspring) in order to remove or minimise flaws and maximise desired characteristics. It is easy to see how this could be the thin end of a wedge that resulted in those who were unable to afford such improvements being discriminated against or excluded (for instance when it came to healthcare, or to getting insurance). Once again, civil society organisations would have to take on the role of defending these marginalised individuals and groups.



There is growing concern that technological developments are having a negative effect on the development of children. Of course, one always has to be careful not to fall into the trap of thinking that ‘everything was better when I was growing up’ and thus imposing rose-tinted nostalgia onto today’s children. However, the sheer pace of change of technology over the last two or three decades means that the world in which children are now growing up really does seem fundamentally different in many ways.

Our understanding of how children develop has struggled to keep pace with this. For example, we do not really know what effect use of touch screen technology from a very early age has on the development of motor skills, what impact the speed of information availability will have on attention spans, or what an increasingly sedentary, indoor existence might mean for the general physical development of children. In each of these cases there seem to be justifiable causes for concern, and it seems certain that future technological developments will only exacerbate these issues.

Charities have always taken a leading role with respect to child development and welfare issues, and it is almost certain that they will have to do so when it comes to understanding and combatting the potential developmental impacts of new technology.



One problem that applies equally to children and adults is that many of the social standards and rules that exist within our societies do not apply in the same way in online or virtual contexts, and this can have problematic consequences. There is already a widely-recognised problem with bullying, abuse and “trolling” behaviour on social media. Often this behaviour is demonstrated by people who wouldn’t dream of behaving in a similar fashion in the real world, but who find that the anonymity and sense of removal in the online world empowers them to act in ways that most people would consider reprehensible.

Similar, if more pronounced, problems are beginning to emerge in the world of virtual and augmented reality. There have already been cases of people committing crimes such as sexual assault within virtual worlds, which have raised concerns about the emotional and psychological impact on victims, and deep ethical and legal questions about how to deal with such behaviour. To what extent should we apply real-world rules and standards, particularly since many online and virtual environments exist, at least in part, specifically to allow people to engage in behaviour that would be considered taboo in the real world? (One has only to think of the huge success of the Grand Theft Auto franchise, which I think it is safe to say is not entirely due to a widespread love of driving.)

In addition to people deliberately engaging in negative behaviour in online and virtual contexts, there is the related problem that in some circumstances people may be made to behave in ways that they would not choose to, because someone else has gained partial or total control of their online avatar or identity. It is hard to find a real-world analogue of this problem, but perhaps the best comparison would be the many examples in fiction of people being subjected to mind control through hypnosis or something similar, and made to commit crimes by some evil master puppeteer (see here for an entertaining exploration of the legal implications of such a scenario).

In VR this scenario is not that fantastical, and it is relatively easy to see how it could be achieved. But what is perhaps more likely is that, rather than total identity takeover by malicious criminals, we will see much more widespread and subtle partial erosion of control as a way of guiding behaviour. There is a utopian version of this scenario in which the technology is harnessed by a benevolent government to implement subtle nudges that prompt and reinforce positive social behaviour. However, there is also a dystopian version in which our actions are manipulated for self-interested or commercial reasons (one can easily see how many opportunities would be opened up for new forms of advertising: e.g. not only could a clothing manufacturer make personalised suggestions about garments you might like, but it could temporarily take control of your avatar in order to give you the sense of actually wearing them).

It is hard to know what the impact of a gradual erosion of one’s sense of personal agency might be, but one feasible possibility is that it would feed into a sense of diminished responsibility and lack of consequences, and that this might amplify the decay of existing social and moral boundaries. This would have repercussions not only in the virtual world, but in the real one as well; and charities will be in the front line of dealing with them.



Spending large amounts of time within online or virtual environments may affect one’s behaviour and ability to function out here in the real world. This could take a number of forms (which are explored in more detail in a fascinating paper called “Real Virtuality: A Code of Ethical Conduct. Recommendations for Good Scientific Practice and the Consumers of VR-Technology” by Michael Madary and Thomas Metzinger):

  • Depersonalisation/derealisation disorder: These are dissociative disorders in which a person has recurrent feelings of being ‘outside themselves’ observing their own actions (depersonalisation) or inside themselves but detached from their surroundings (derealisation). These are conditions that predate the invention of technologies such as VR, but there is evidence that long-term immersion in virtual environments can trigger them. (There is some obvious intuitive sense to this: if you spend the majority of your time in a world that you know in some way to be ‘unreal’, it would not be surprising if these feelings continued even when you were back in the real world.)
  • Loss of empathy: VR technology can be used as a powerful tool to increase empathy, but it is also possible that it could have the opposite effect if the unreality of virtual situations was constantly reinforced, to the point where people became numbed to the feelings of others within the virtual environment and then carried this attitude into real-world situations.
  • Lack of social skills: Many of the social cues that we unconsciously rely on in our interactions (e.g. facial expression, body language etc.) might be missing or different in a virtual environment. Those who spend large periods of time in these environments, particularly young people who do so during important developmental phases, may fail to develop a proper understanding of these cues and thus find themselves less able to interact effectively with others in the real world.
  • Loss of sense modalities: as well as social cues, there are other contextual elements of our real-world interactions that play a hugely important role in memory and understanding, and which may be missing in virtual environments (e.g. smell, touch, external sounds etc.).
  • Active dislike/hatred of physical environment: One serious negative effect of spending large periods of time in virtual environments may be that people develop an active hatred of the physical world. If you spend the majority of your time in a virtual environment where you have been able to craft your identity carefully and where you are comfortable with the rules and social mores, it is easy to see how the prospect of spending time in a real world where you are tied to your physical body and different rules apply (and where you may lack certain key social skills, as outlined above) is going to be unappealing. This may produce feelings of anger and frustration, and even lead to violence. We can see early forms of this sort of problem in the growing number of tragic cases of “gamer rage”, in which domestic violence incidents resulting in injury or even death are sparked by someone being interrupted while playing a video game.



Spending long periods of time in a virtual environment could have negative effects on your personal wellbeing. Partly this is as a direct result of the reduction in real-world social interaction, which is almost universally recognised as a vital element of maintaining good mental and emotional health. There is already evidence which suggests that the growing reliance on technology for interactions between grandparents and their children/grandchildren who live elsewhere can lead to increased rates of depression among older people.

This problem will get worse if the number of our social interactions declines, and many new technologies make this a distinct possibility. In addition to VR, which makes it possible to exist in a world other than the real one for long periods of time, the advent of widespread affordable 3D home printing may mean that the number of situations in which one has to go to a shop or office to get a particular product or service will be drastically reduced, and hence the opportunity for everyday, small-scale social interactions will also be diminished.

Automation will also have a profound impact here. It is already being suggested that AI and machines could replace the majority of existing jobs in the foreseeable future. Given that the workplace plays such an incredibly important role in many people’s lives, the loss of so many jobs could result in a significant reduction in social interactions and thus harm many people’s personal wellbeing (even if you happen to hate your colleagues, you do at least have to interact with them!)

Furthermore, as it stands our concepts of self-worth and identity are closely bound up with notions of productivity and gainful employment, so it is far from clear what a future world without work might mean for our sense of self and wellbeing.

The volume and speed of information available via new technologies may also cause problems. We all already recognise the sense of anxiety that drives us to check our phones far more often than we really need to, for fear that we might be “missing out” in some way; or the sense of tension that can accompany efforts to choose between a vast range of similar options when looking for a product or service online (having recently renewed my car insurance, I can speak from experience about this one). Then imagine what it would be like in a world where the internet as we know it has become an “Internet of Everything”, encompassing not only webpages but virtual worlds and all the smart objects in the Internet of Things – all spewing out vast quantities of information every second. I get tired just thinking about it!

There is finally the more pragmatic question of what might happen to our physical health. If people spend more and more time in virtual environments, is there a real risk that they will ignore their real-world physical wellbeing? This will be a particular problem if the interfaces we use to access these worlds are ones that allow us to spend large periods of time being sedentary, as it will do little to reverse the global obesity epidemic that we already face. However, it is possible that augmented reality or conversational interfaces could avoid this problem, or even incentivise greater physical activity.

Once again, all of these potential negative impacts on physical, mental and emotional wellbeing will have all sorts of implications for charities that may have to deal with them in the future.


Overpopulation is already thought by many to be a major problem (including, famously, Sir David Attenborough). This is driven in part by the fact that people are living longer than ever before. New developments in medicine and biotechnology may make it possible to extend life even further: perhaps even indefinitely. This has become a big focus for a number of philanthropists and companies: Google famously founded the life-sciences company Calico and announced that it was part of an ambition to “cure death”, and it has subsequently been joined on that quest by many others. What this will mean in terms of environmental repercussions, when we are already putting a severe strain on the earth’s resources, only time will tell.

As well as the population-level impacts of extending lifespans, there may be individual-level challenges too. If people live well beyond a natural human lifespan (150 years or more), they may suffer physical and psychological problems that we can’t currently foresee. We also don’t know what it might mean for things like family dynamics and personal relationships: these are all predicated on normal human lifespans, and on the idea that people eventually die - but what if they don’t? Will people want to stay with one person for their whole lives, when that life might be 200 years or more? Similarly, if technology enables people to remain fertile for far longer, will they choose to have children in ‘clusters’ throughout their life? What will this mean for traditional notions of marriage, parenthood and family? Given that charities often play an important role when these structures break down, how will they have to adapt?


Even as many technologies make it increasingly easy to escape the real world by going online or into a virtual environment, we have to remember that the hardware they rely on continues to exist here, and as such has tangible environmental impacts. Many of these technologies are extremely energy-intensive, and thus add to the demand for power generation (which currently still includes fossil fuels). Some also require constant cooling in order to absorb the vast amount of heat they generate, and often the most effective way of doing this is to build gigantic facilities in very cold places, which also tend to be pristine natural environments; for example the Facebook data centre near the Arctic Circle at Luleå.

There is also the problem that many pieces of technological hardware contain highly toxic materials, which need to be handled in particular ways when it comes to disposal or recycling. Unfortunately, the high cost of doing so means that a huge trade has sprung up in illegal e-waste. Much of this makes its way to the developing world, where companies in countries with less strict regulation and enforcement can make vast sums by flouting safety rules.

It is possible that technological developments could mitigate some of the potential environmental damage caused by growing demand for technology. Revolutions in clean/renewable technology or battery technology could alleviate some of the concerns about power consumption and generation. Likewise, the advent of widespread, affordable 3D printing might do a lot to combat the problem of waste if it allowed people to print goods when they needed them and then easily recycle them into component parts for reprinting. It will all depend on whether these technologies keep pace with the technologies that are going to generate the increased demand.


Widening income inequality is one of the defining issues of our time, and a causal factor in many other social issues. So an important question about many of the new technological developments that are predicted to shape our future is: will they increase or decrease inequality within society?

Technologies such as the internet or mobile phones are often argued to have had a democratising effect, and this is surely true to some extent: a child in rural India can now get free access to courses from some of the world’s top educational institutions, while a farmer in Sub-Saharan Africa can use a mobile phone to access information that ensures he is able to get a fair price for his crops, and banking services that allow him to accept and make payments easily. However, whilst some people are definitely enfranchised by technology, there are also those who lose out. Older people, for instance, often lack the skills to use new technology, and as a result may find themselves left behind as more and more products and services come to rely on those technologies.

In the future, the divide between the technological haves and have-nots could become enormous. As automation replaces many traditional jobs, a divide might also open up between those who no longer work and those who are still able to work, or even own the technology. A country-level divide could also open up between those countries that are able to take advantage of this “fourth industrial revolution” and those that cannot, so there would be a widening of global inequality between the ‘technologically developed’ and ‘technologically developing’ world. This could then lead to further inequality as the environmental impacts of our increased reliance on technology are shared out unequally; with the developing world becoming a power station to meet the energy needs of the developed world or a dumping ground for waste materials.

Hopefully this doesn’t all sound too apocalyptic or dystopian! I’m fundamentally very optimistic about the potential for technology to drive huge improvements in the way we address social challenges and also to transform the ways in which people are able to support causes. That is why one of our key themes at Giving Thought is “the future of doing good”, and we will continue to explore new technologies in order to understand their impact on philanthropy and the work of charities.

However, I do believe that blind optimism is just as bad as cynicism. It is crucial that we think through the negative consequences of technological developments instead of just buying into the often over-inflated hype, as that way we stand a better chance of ensuring that if these things do come to pass, they will do far more good than harm and benefit the many rather than the few.